Updated the beamtime relationships in the models to support many-to-many associations with pucks, samples, and dewars. Refactored API endpoints to accommodate these changes, ensuring accurate assignment and retrieval of data. Improved the sample data generation logic and bumped the application version.
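For reference, a many-to-many link of this kind is typically expressed in SQLAlchemy through an association table. The sketch below is illustrative only; the table and column names are assumptions, not the project's actual schema.

```python
from sqlalchemy import Column, ForeignKey, Integer, Table
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

# Association table joining beamtimes and pucks (names are hypothetical).
beamtime_puck = Table(
    "beamtime_puck",
    Base.metadata,
    Column("beamtime_id", ForeignKey("beamtimes.id"), primary_key=True),
    Column("puck_id", ForeignKey("pucks.id"), primary_key=True),
)

class Beamtime(Base):
    __tablename__ = "beamtimes"
    id = Column(Integer, primary_key=True)
    pucks = relationship("Puck", secondary=beamtime_puck, back_populates="beamtimes")

class Puck(Base):
    __tablename__ = "pucks"
    id = Column(Integer, primary_key=True)
    beamtimes = relationship("Beamtime", secondary=beamtime_puck, back_populates="pucks")
```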
Introduced endpoints to fetch pucks, dewars, and samples by beamtime ID. Updated backend logic to ensure consistency between dewars, pucks, and samples assignments. Enhanced frontend to display and handle beamline-specific associations dynamically.
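A minimal sketch of what a fetch-by-beamtime endpoint might look like in FastAPI, building on the `Beamtime` model sketched above; the route path and the `get_db` dependency are hypothetical, not the real router.

```python
from fastapi import APIRouter, Depends, HTTPException
from sqlalchemy.orm import Session

router = APIRouter()

def get_db():
    # Hypothetical session dependency; the real app would yield a session
    # from its configured sessionmaker.
    raise NotImplementedError

@router.get("/beamtimes/{beamtime_id}/pucks")
def get_pucks_for_beamtime(beamtime_id: int, db: Session = Depends(get_db)):
    # Resolve the beamtime, then return its pucks via the relationship.
    beamtime = db.get(Beamtime, beamtime_id)  # Beamtime as sketched above
    if beamtime is None:
        raise HTTPException(status_code=404, detail="Beamtime not found")
    return beamtime.pucks
```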
Simplified and unified beamtime assignment handling for pucks and samples in the backend. Enhanced the frontend to display detailed assignment state, including shift, date, and beamline, for both pucks and dewars. This ensures consistent and accurate state management across the application.
Implemented API endpoints and frontend logic to assign/unassign beamtime to dewars and pucks. Enhanced schemas, models, and styles while refactoring related frontend components for better user experience and data handling.
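One plausible shape for the assign/unassign API, assuming the earlier single-valued `beamtime_id` column that predates the many-to-many change above; every name here is illustrative.

```python
from typing import Optional

from fastapi import APIRouter, Depends, HTTPException
from pydantic import BaseModel
from sqlalchemy.orm import Session

router = APIRouter()

class BeamtimeAssignment(BaseModel):
    beamtime_id: Optional[int] = None  # None clears the assignment

@router.patch("/pucks/{puck_id}/beamtime")
def set_puck_beamtime(
    puck_id: int,
    payload: BeamtimeAssignment,
    db: Session = Depends(get_db),  # get_db as sketched above
):
    # Assumes a Puck model with a scalar beamtime_id foreign key.
    puck = db.get(Puck, puck_id)
    if puck is None:
        raise HTTPException(status_code=404, detail="Puck not found")
    puck.beamtime_id = payload.beamtime_id  # assign, or unassign with None
    db.commit()
    return {"puck_id": puck_id, "beamtime_id": payload.beamtime_id}
```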
This commit adds relationships linking Pucks and Samples to Beamtime in the models, enabling better data association. It also assigns beamtime IDs during data generation and updates the API response models for improved data loading. Removed redundant code in `testfunctions.ipynb` to clean up the notebook.
Introduce new endpoint and model for managing beamtimes, including shifts and user-specific access. Updated test scripts and data to reflect beamtime integration, along with minor fixes for job status enumeration and example notebook refinement.
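A hypothetical shape for the new beamtime model with shift tracking; only the existence of shifts and user-specific access comes from the commit, while the column names and shift encoding are assumptions.

```python
from sqlalchemy import Column, Date, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Beamtime(Base):
    __tablename__ = "beamtimes"
    id = Column(Integer, primary_key=True)
    beamline = Column(String, nullable=False)      # e.g. a beamline identifier
    start_date = Column(Date, nullable=False)
    shift = Column(String, nullable=False)         # e.g. "morning" / "night"
    user_account = Column(String, nullable=False)  # enables user-specific access
```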
Introduce new statuses, "to_cancel" and "cancelled", to improve job state tracking. Implement logic to nullify `slurm_id` for cancelled jobs and a background thread to clean up cancelled jobs older than 2 hours. Ensure periodic cleanup runs hourly to maintain database hygiene.
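A sketch of the described cleanup thread. The statuses, the 2-hour threshold, and the hourly cadence come from the commit; `SessionLocal`, the `updated_at` column, and the enum member spellings are assumed names.

```python
import threading
import time
from datetime import datetime, timedelta

def cleanup_cancelled_jobs():
    """Delete cancelled jobs older than 2 hours; re-checks every hour."""
    while True:
        cutoff = datetime.now() - timedelta(hours=2)
        with SessionLocal() as db:  # SessionLocal / updated_at are assumed names
            db.query(Jobs).filter(
                Jobs.status == JobStatus.CANCELLED,
                Jobs.updated_at < cutoff,
            ).delete(synchronize_session=False)
            db.commit()
        time.sleep(3600)  # hourly, per the commit description

# Daemon thread so cleanup never blocks application shutdown.
threading.Thread(target=cleanup_cancelled_jobs, daemon=True).start()
```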
Added `type` to experiment runs in `sample.py` and improved filtering in `processing.py` to match experiments by both `sample_id` and `run_id`. Removed a large amount of unnecessary code from `testfunctions.ipynb` for clarity and maintainability.
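An illustrative fragment of the tightened lookup inside a request handler: matching on both keys instead of `sample_id` alone. The model and column names are assumptions.

```python
# Inside a handler with `db`, `sample_id`, and `run_id` in scope.
experiment = (
    db.query(ExperimentParameters)
    .filter(
        ExperimentParameters.sample_id == sample_id,
        ExperimentParameters.run_id == run_id,  # previously matched by sample_id alone
    )
    .first()
)
```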
Updated job type to reference `experiment.type` in `processing.py` for accurate data handling. Cleaned up and streamlined `testfunctions.ipynb` by removing outdated and redundant code, improving readability and usability.
This commit introduces a new 'type' field in the ExperimentParametersModel schema and updates the associated code in `sample.py` to include this field during object creation. Additionally, unnecessary lines and redundant code in `testfunctions.ipynb` have been removed for better readability and maintainability.
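The schema change likely amounts to one extra field on the Pydantic model; apart from `type`, the fields shown here are placeholders.

```python
from pydantic import BaseModel

class ExperimentParametersModel(BaseModel):
    run_number: int  # placeholder for the pre-existing fields
    type: str        # the newly introduced field
```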
Enhanced the models with new fields: a `dataset` field for Experiment Parameters and a `slurm_id` for Jobs. Introduced a FAILED status for the `JobStatus` enum. Updated functionality to handle datasets and trigger job creation based on dataset status.
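A sketch of the extended status enum; only the FAILED addition is named in the commit, so the other members are guesses.

```python
from enum import Enum

class JobStatus(str, Enum):
    TODO = "todo"            # pre-existing members are guesses
    SUBMITTED = "submitted"
    DONE = "done"
    FAILED = "failed"        # the status added in this change
```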
Updated the job model to include `sample_id` and `run_id` fields, replacing `experiment_parameters_id`. Adjusted relationships and modified routers to reflect these changes. Added an endpoint for updating job status and restructured job streaming logic to include detailed experiment and sample data.
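A hypothetical version of the status-update endpoint matching this description; the route path, parameter names, and models are illustrative.

```python
from fastapi import APIRouter, Depends, HTTPException
from sqlalchemy.orm import Session

router = APIRouter()

@router.post("/jobs/{job_id}/status")
def update_job_status(
    job_id: int,
    status: str,
    db: Session = Depends(get_db),  # get_db as sketched earlier
):
    job = db.get(Jobs, job_id)  # Jobs model from the processing changes below
    if job is None:
        raise HTTPException(status_code=404, detail="Job not found")
    job.status = status
    db.commit()
    return {"job_id": job_id, "status": job.status}
```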
Updated the `JobModel` with foreign key relationships and string-based status to enhance database consistency, and improved job event streaming by using `jsonable_encoder` for better serialization. Also, streamlined dependencies by adding `urllib3` to handle HTTP requests.
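`jsonable_encoder` is the real FastAPI helper for this; the event payload and field names below are assumptions.

```python
import json

from fastapi.encoders import jsonable_encoder

def format_job_event(job) -> str:
    # jsonable_encoder turns datetimes, enums, etc. into JSON-safe values.
    payload = jsonable_encoder(
        {"id": job.id, "status": job.status, "created_at": job.created_at}
    )
    return f"data: {json.dumps(payload)}\n\n"  # one server-sent-event frame
```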
Refactored `run_server` to accept explicit config and SSL paths. Added dynamic environment-based config loading and stricter SSL path checks for production. Updated `docker-compose.yml` to use environment variable for port mapping and adjusted `config_prod.json` to reflect correct port usage.
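A sketch of environment-based config selection with the stricter production SSL check; `config_prod.json` is mentioned in the commit, while the `ENVIRONMENT` variable name and app import path are assumptions.

```python
import json
import os
from pathlib import Path
from typing import Optional

import uvicorn

def run_server(config_path: str,
               ssl_cert: Optional[str] = None,
               ssl_key: Optional[str] = None) -> None:
    config = json.loads(Path(config_path).read_text())  # e.g. config_prod.json
    if os.environ.get("ENVIRONMENT") == "prod":
        # Stricter check: production refuses to start without SSL material.
        if not (ssl_cert and Path(ssl_cert).exists()
                and ssl_key and Path(ssl_key).exists()):
            raise FileNotFoundError("valid SSL cert and key required in production")
    uvicorn.run(
        "app.main:app",  # assumed app location
        host=config.get("host", "0.0.0.0"),
        port=config.get("port", 8000),
        ssl_certfile=ssl_cert,
        ssl_keyfile=ssl_key,
    )
```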
Introduced a `processing` router to handle job streaming using server-sent events. Added `Jobs` and `JobStatus` models for managing job-related data, along with database creation logic. Updated the `sample` router to create new job entries during experiment creation.
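A minimal sketch of a server-sent-events job stream. `StreamingResponse` and the `text/event-stream` media type are real FastAPI/Starlette features; the polling loop and helper are assumptions.

```python
import asyncio
import json

from fastapi import APIRouter
from fastapi.responses import StreamingResponse

router = APIRouter()

def fetch_pending_jobs():
    # Hypothetical helper; the real version would query the Jobs table.
    return []

async def job_event_stream():
    # Poll for jobs and emit each batch as a server-sent event.
    while True:
        yield f"data: {json.dumps(fetch_pending_jobs())}\n\n"
        await asyncio.sleep(5)

@router.get("/processing/jobs/stream")
async def stream_jobs():
    return StreamingResponse(job_event_stream(), media_type="text/event-stream")
```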
Streamlined Dockerfiles with clearer ENV variables and build args. Switched backend database from MySQL to PostgreSQL, updated configurations accordingly, and added robust Docker Compose services for better orchestration, including health checks and persistent storage.
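At the code level, the database switch amounts to a new SQLAlchemy URL and driver; the credentials, host, and database name below are placeholders.

```python
from sqlalchemy import create_engine

# Before (MySQL):
#   create_engine("mysql+pymysql://user:password@db:3306/aare")
# After (PostgreSQL):
engine = create_engine("postgresql+psycopg2://user:password@db:5432/aare")
```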
Updated several frontend dependencies including MUI packages and added new ones like `@mui/x-charts`. Adjusted the Python path setup in the CI configuration to correctly point to the `aaredb` backend, ensuring accurate module resolution.
Revised backend schema definitions, removing unnecessary attributes and adding new configurations. Updated file path references to align with the aaredb structure. Cleaned up redundant notebook content and commented out unused database regeneration logic in the backend.
Added support for posting a result to the database.
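A hypothetical POST route for storing such a result; the payload shape and route are placeholders inferred from the one-line message.

```python
from fastapi import APIRouter
from pydantic import BaseModel

router = APIRouter()

class ResultCreate(BaseModel):
    job_id: int
    result: dict

@router.post("/results")
def create_result(payload: ResultCreate):
    # The real handler would persist the payload via the ORM session.
    return {"status": "created", "job_id": payload.job_id}
```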
Introduced Dockerfiles for logistics and frontend applications to streamline development and deployment. Updated package dependencies in the frontend to newer versions for improved stability and compatibility.