Compare commits


No commits in common. "main" and "v0.10.7" have entirely different histories.

64 changed files with 3531 additions and 5870 deletions


@ -18,10 +18,7 @@ Provide steps to reproduce the behaviour, including a minimal code snippet (if a
## Expected behaviour ## Expected behaviour
A clear and concise description of what you expected to happen. A clear and concise description of what you expected to happen.
## Actual behaviour ## Screenshot/Video
Describe what you see instead of the expected behaviour.
### Screenshot/Video
If applicable, add visual content that helps explain your problem. If applicable, add visual content that helps explain your problem.
## Additional context ## Additional context


@ -22,7 +22,7 @@ jobs:
- name: Build a binary wheel and a source tarball - name: Build a binary wheel and a source tarball
run: python3 -m build run: python3 -m build
- name: Store the distribution packages - name: Store the distribution packages
uses: actions/upload-artifact@v4 uses: actions/upload-artifact@v3
with: with:
name: python-package-distributions name: python-package-distributions
path: dist/ path: dist/
@ -44,7 +44,7 @@ jobs:
steps: steps:
- name: Download all the dists - name: Download all the dists
uses: actions/download-artifact@v4 uses: actions/download-artifact@v3
with: with:
name: python-package-distributions name: python-package-distributions
path: dist/ path: dist/
@ -65,7 +65,7 @@ jobs:
steps: steps:
- name: Download all the dists - name: Download all the dists
uses: actions/download-artifact@v4 uses: actions/download-artifact@v3
with: with:
name: python-package-distributions name: python-package-distributions
path: dist/ path: dist/


@ -28,7 +28,7 @@ jobs:
run: | run: |
python -m pip install --upgrade pip python -m pip install --upgrade pip
python -m pip install poetry python -m pip install poetry
poetry install --with dev --all-extras poetry install --with dev
- name: Check with ruff - name: Check with ruff
run: | run: |
poetry run ruff check src poetry run ruff check src


@ -226,15 +226,44 @@ For details, please see [here](https://pydase.readthedocs.io/en/stable/user-guid
## Logging in pydase ## Logging in pydase
The `pydase` library provides structured, per-module logging with support for log level configuration, rich formatting, and optional client identification in logs. The `pydase` library organizes its loggers on a per-module basis, mirroring the Python package hierarchy. This structured approach allows for granular control over logging levels and behaviour across different parts of the library.
To configure logging in your own service, you can use: ### Changing the Log Level
```python You have two primary ways to adjust the log levels in `pydase`:
from pydase.utils.logging import configure_logging_with_pydase_formatter
```
For more information, see the [full guide](https://pydase.readthedocs.io/en/stable/user-guide/Logging/). 1. directly targeting `pydase` loggers
You can set the log level for any `pydase` logger directly in your code. This method is useful for fine-tuning logging levels for specific modules within `pydase`. For instance, if you want to change the log level of the main `pydase` logger or target a submodule like `pydase.data_service`, you can do so as follows:
```python
# <your_script.py>
import logging
# Set the log level for the main pydase logger
logging.getLogger("pydase").setLevel(logging.INFO)
# Optionally, target a specific submodule logger
# logging.getLogger("pydase.data_service").setLevel(logging.DEBUG)
# Your logger for the current script
logger = logging.getLogger(__name__)
logger.info("My info message.")
```
This approach allows for specific control over different parts of the `pydase` library, depending on your logging needs.
2. using the `ENVIRONMENT` environment variable
For a more global setting that affects the entire `pydase` library, you can utilize the `ENVIRONMENT` environment variable. Setting this variable to "production" will configure all `pydase` loggers to only log messages of level "INFO" and above, filtering out more verbose logging. This is particularly useful for production environments where excessive logging can be overwhelming or unnecessary.
```bash
ENVIRONMENT="production" python -m <module_using_pydase>
```
In the absence of this setting, the default behavior is to log everything of level "DEBUG" and above, suitable for development environments where more detailed logs are beneficial.
**Note**: It is recommended to avoid calling the `pydase.utils.logging.setup_logging` function directly, as this may result in duplicated logging messages.
## Documentation ## Documentation
@ -244,14 +273,6 @@ The full documentation provides more detailed information about `pydase`, includ
We welcome contributions! Please see [contributing.md](https://pydase.readthedocs.io/en/stable/about/contributing/) for details on how to contribute. We welcome contributions! Please see [contributing.md](https://pydase.readthedocs.io/en/stable/about/contributing/) for details on how to contribute.
## Acknowledgements
This work was funded by the [ETH Zurich-PSI Quantum Computing Hub](https://www.psi.ch/en/lnq/qchub).
The main idea behind `pydase` is based on a previous project called `tiqi-plugin`, which
was developed within the same research group. While the concept was inspired by that
project, `pydase` was implemented from the ground up with a new architecture and design.
## License ## License
`pydase` is licensed under the [MIT License][License]. `pydase` is licensed under the [MIT License][License].


@ -1,15 +1,6 @@
::: pydase.data_service ::: pydase.data_service
handler: python handler: python
::: pydase.data_service.data_service_cache
handler: python
::: pydase.data_service.data_service_observer
handler: python
::: pydase.data_service.state_manager
handler: python
::: pydase.server.server ::: pydase.server.server
handler: python handler: python
@ -47,9 +38,6 @@
options: options:
filters: ["!render_in_frontend"] filters: ["!render_in_frontend"]
::: pydase.utils.logging
handler: python
::: pydase.units ::: pydase.units
handler: python handler: python


@ -5,7 +5,7 @@
end="<!--getting-started-end-->" end="<!--getting-started-end-->"
%} %}
[RESTful API]: ./user-guide/interaction/RESTful-API.md [RESTful API]: ./user-guide/interaction/README.md#restful-api
[Python RPC Client]: ./user-guide/interaction/Python-Client.md [Python RPC Client]: ./user-guide/interaction/README.md#python-rpc-client
[Custom Components]: ./user-guide/Components.md#custom-components-pydasecomponents [Custom Components]: ./user-guide/Components.md#custom-components-pydasecomponents
[Components]: ./user-guide/Components.md [Components]: ./user-guide/Components.md


@ -11,7 +11,7 @@
[Defining DataService]: ./getting-started.md#defining-a-dataservice [Defining DataService]: ./getting-started.md#defining-a-dataservice
[Web Interface Access]: ./getting-started.md#accessing-the-web-interface [Web Interface Access]: ./getting-started.md#accessing-the-web-interface
[Short RPC Client]: ./getting-started.md#connecting-to-the-service-via-python-rpc-client [Short RPC Client]: ./getting-started.md#connecting-to-the-service-via-python-rpc-client
[Customizing Web Interface]: ./user-guide/interaction/Auto-generated-Frontend.md#customization-options [Customizing Web Interface]: ./user-guide/interaction/README.md#customization-options
[Task Management]: ./user-guide/Tasks.md [Task Management]: ./user-guide/Tasks.md
[Units]: ./user-guide/Understanding-Units.md [Units]: ./user-guide/Understanding-Units.md
[Property Validation]: ./user-guide/Validating-Property-Setters.md [Property Validation]: ./user-guide/Validating-Property-Setters.md


@ -30,7 +30,7 @@ example of how to separate service code from configuration.
- **`ENVIRONMENT`**: - **`ENVIRONMENT`**:
Defines the operation mode (`"development"` or `"production"`), which influences Defines the operation mode (`"development"` or `"production"`), which influences
behaviour such as logging (see [Logging in pydase](./Logging.md)). behaviour such as logging (see [Logging in pydase](https://github.com/tiqi-group/pydase?tab=readme-ov-file#logging-in-pydase)).
- **`SERVICE_CONFIG_DIR`**: - **`SERVICE_CONFIG_DIR`**:
Specifies the directory for configuration files (e.g., `web_settings.json`). Defaults Specifies the directory for configuration files (e.g., `web_settings.json`). Defaults
@ -46,8 +46,8 @@ example of how to separate service code from configuration.
port. Default: `8001`. port. Default: `8001`.
- **`GENERATE_WEB_SETTINGS`**: - **`GENERATE_WEB_SETTINGS`**:
When `true`, generates or updates the `web_settings.json` file (see [Tailoring Frontend Component Layout](./interaction/Auto-generated-Frontend.md#tailoring-frontend-component-layout)). When `true`, generates or updates the `web_settings.json` file. Existing entries are
Existing entries are preserved, and new entries are appended. preserved, and new entries are appended.
### Configuring `pydase` via Keyword Arguments ### Configuring `pydase` via Keyword Arguments
@ -70,32 +70,32 @@ server = Server(
## Separating Service Code from Configuration ## Separating Service Code from Configuration
To decouple configuration from code, `pydase` utilizes `confz` for configuration To decouple configuration from code, `pydase` utilizes `confz` for configuration
management. Below is an example that demonstrates how to configure a `pydase` service management. Below is an example that demonstrates how to configure a `pydase` service
for a sensor readout application. for a sensor readout application.
### Scenario: Configuring a Sensor Service ### Scenario: Configuring a Sensor Service
Imagine you have multiple sensors distributed across your lab. You need to configure Imagine you have multiple sensors distributed across your lab. You need to configure
each service instance with: each service instance with:
1. **Hostname**: The hostname or IP address of the sensor. 1. **Hostname**: The hostname or IP address of the sensor.
2. **Authentication Token**: A token or credentials to authenticate with the sensor. 2. **Authentication Token**: A token or credentials to authenticate with the sensor.
3. **Readout Interval**: A periodic interval to read sensor data and log it to a 3. **Readout Interval**: A periodic interval to read sensor data and log it to a
database. database.
Given the repository structure: Given the repository structure:
```bash title="Service Repository Structure" ```bash title="Service Repository Structure"
my_sensor my_sensor
├── pyproject.toml ├── pyproject.toml
├── README.md ├── README.md
└── src └── src
└── my_sensor └── my_sensor
├── my_sensor.py ├── my_sensor.py
├── config.py ├── config.py
├── __init__.py ├── __init__.py
└── __main__.py └── __main__.py
``` ```
Your service might look like this: Your service might look like this:
@ -119,7 +119,7 @@ class MySensorConfig(confz.BaseConfig):
This class defines configurable parameters and loads values from a `config.yaml` file This class defines configurable parameters and loads values from a `config.yaml` file
located in the service's configuration directory (which is configurable through an located in the service's configuration directory (which is configurable through an
environment variable, see [above](#configuring-pydase-using-environment-variables)). environment variable, see [above](#configuring-pydase-using-environment-variables)).
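The body of `MySensorConfig` is elided by the diff above; here is a minimal sketch of what such a `confz` config class could look like, where the field names are assumptions mirroring the three requirements listed earlier:

```python
import confz


class MySensorConfig(confz.BaseConfig):
    instrument_ip: str  # hostname or IP address of the sensor
    authentication_token: str  # token or credentials to authenticate with the sensor
    readout_interval_s: float  # periodic readout interval in seconds

    # Load values from config.yaml; the exact source location is an assumption.
    CONFIG_SOURCES = confz.FileSource(file="config.yaml")
```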
A sample YAML file might look like this: A sample YAML file might look like this:
```yaml title="config.yaml" ```yaml title="config.yaml"
# (file contents elided in this diff)
```


@ -1,91 +0,0 @@
# Logging in pydase
The `pydase` library organizes its loggers per module, mirroring the Python package hierarchy. This structured approach allows for granular control over logging levels and behaviour across different parts of the library. Logs can also include details about client identification based on headers sent by the client or proxy, providing additional context for debugging or auditing.
## Changing the pydase Log Level
You have two primary ways to adjust the log levels in `pydase`:
1. **Directly targeting `pydase` loggers**
You can set the log level for any `pydase` logger directly in your code. This method is useful for fine-tuning logging levels for specific modules within `pydase`. For instance, if you want to change the log level of the main `pydase` logger or target a submodule like `pydase.data_service`, you can do so as follows:
```python
# <your_script.py>
import logging
# Set the log level for the main pydase logger
logging.getLogger("pydase").setLevel(logging.INFO)
# Optionally, target a specific submodule logger
# logging.getLogger("pydase.data_service").setLevel(logging.DEBUG)
# Configure your own logging to use the pydase formatter
from pydase.utils.logging import configure_logging_with_pydase_formatter
configure_logging_with_pydase_formatter(level=logging.DEBUG)
# Your logger for the current script
logger = logging.getLogger(__name__)
logger.debug("My debug message.")
```
This approach allows for specific control over different parts of the `pydase` library, depending on your logging needs.
2. **Using the `ENVIRONMENT` environment variable**
For a more global setting that affects the entire `pydase` library, you can utilize the `ENVIRONMENT` environment variable. Setting this variable to `"production"` will configure all `pydase` loggers to only log messages of level `"INFO"` and above, filtering out more verbose logging. This is particularly useful for production environments where excessive logging can be overwhelming or unnecessary.
```bash
ENVIRONMENT="production" python -m <module_using_pydase>
```
In the absence of this setting, the default behavior is to log everything of level `"DEBUG"` and above, suitable for development environments where more detailed logs are beneficial.
## Client Identification in pydase Logs
The logging system in `pydase` includes information about clients based on headers sent by the client or a proxy. The priority for identifying the client is fixed and as follows:
1. **`Remote-User` Header**: This header is typically set by authentication servers like [Authelia](https://www.authelia.com/). While it can be set manually by users, its primary purpose is to provide client information authenticated through such servers.
2. **`X-Client-ID` Header**: This header is intended for use by Python clients to pass custom client identification information. It acts as a fallback when the `Remote-User` header is not available.
3. **Default Socket.IO Session ID**: If neither of the above headers is present, the system falls back to the default Socket.IO session ID to identify the client.
For example, log entries might include the following details based on the available headers:
```plaintext
2025-01-20 06:47:50.940 | INFO | pydase.server.web_server.api.v1.application:_get_value:36 - Client [id=This is me!] is getting the value of 'property_attr'
2025-01-20 06:48:13.710 | INFO | pydase.server.web_server.api.v1.application:_get_value:36 - Client [user=Max Muster] is getting the value of 'property_attr'
```
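On the client side, the `X-Client-ID` header can be supplied through the `client_id` argument of [`pydase.Client`][pydase.Client] (documented later in this comparison); a minimal sketch, with a placeholder URL:

```python
import pydase

# client_id is sent as the `X-Client-Id` HTTP header and shows up in the
# server-side log entries above. The URL is a placeholder.
client = pydase.Client(
    url="ws://localhost:8001",
    client_id="This is me!",
).proxy
```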
## Configuring Logging in Services
To configure logging in services built with `pydase`, use the helper function [`configure_logging_with_pydase_formatter`][pydase.utils.logging.configure_logging_with_pydase_formatter]. This function sets up a logger with the same formatting used internally by `pydase`, so your service logs match the style and structure of `pydase` logs.
### Example
If your service follows a typical layout like:
```text
└── src
└── my_service
├── __init__.py
└── ...
```
you should call `configure_logging_with_pydase_formatter` inside `src/my_service/__init__.py`. This ensures the logger is configured as soon as your service is imported, and before any log messages are emitted.
```python title="src/my_service/__init__.py"
import logging
import sys

from pydase.utils.logging import configure_logging_with_pydase_formatter
configure_logging_with_pydase_formatter(
name="my_service", # Use the package/module name or None for the root logger
level=logging.DEBUG, # Set the desired logging level (defaults to INFO)
stream=sys.stderr # Optional: set the output stream (stderr by default)
)
```
### Notes
- If you pass `name=None`, the root logger will be configured. This affects **all logs** that propagate to the root logger.
- Passing a specific `name` like `"my_service"` allows you to scope the configuration to your service only, which is safer in multi-library environments.
- You can use `sys.stdout` instead of `sys.stderr` if your logs are being captured or processed differently (e.g., in containers or logging systems).


@ -2,47 +2,29 @@
`pydase` allows you to easily persist the state of your service by saving it to a file. This is especially useful when you want to maintain the service's state across different runs. `pydase` allows you to easily persist the state of your service by saving it to a file. This is especially useful when you want to maintain the service's state across different runs.
To enable persistence, pass a `filename` keyword argument to the constructor of the [`pydase.Server`][pydase.Server] class. The `filename` specifies the file where the state will be saved: To save the state of your service, pass a `filename` keyword argument to the constructor of the `pydase.Server` class. If the file specified by `filename` does not exist, the state manager will create this file and store its state in it when the service is shut down. If the file already exists, the state manager will load the state from this file, setting the values of its attributes to the values stored in the file.
- If the file **does not exist**, it will be created and populated with the current state when the service shuts down or saves. Here's an example:
- If the file **already exists**, the state manager will **load** the saved values into the service at startup.
Here's an example:
```python ```python
import pydase import pydase
class Device(pydase.DataService): class Device(pydase.DataService):
# ... define your service class ... # ... defining the Device class ...
if __name__ == "__main__": if __name__ == "__main__":
service = Device() service = Device()
pydase.Server(service=service, filename="device_state.json").run() pydase.Server(service=service, filename="device_state.json").run()
``` ```
In this example, the service state will be automatically loaded from `device_state.json` at startup (if it exists), and saved to the same file periodically and upon shutdown. In this example, the state of the `Device` service will be saved to `device_state.json` when the service is shut down. If `device_state.json` exists when the server is started, the state manager will restore the state of the service from this file.
## Automatic Periodic State Saving
When a `filename` is provided, `pydase` automatically enables **periodic autosaving** of the service state to that file. This ensures that the current state is regularly persisted, reducing the risk of data loss during unexpected shutdowns.
The autosave happens every 30 seconds by default. You can customize the interval using the `autosave_interval` argument (in seconds):
```python
pydase.Server(
service=service,
filename="device_state.json",
autosave_interval=10.0, # save every 10 seconds
).run()
```
To disable automatic saving, set `autosave_interval` to `None`.
## Controlling Property State Loading with `@load_state` ## Controlling Property State Loading with `@load_state`
By default, the state manager only restores values for public attributes of your service (i.e. *it does not restore property values*). If you have properties that you want to control the loading for, you can use the [`@load_state`][pydase.data_service.state_manager.load_state] decorator on your property setters. This indicates to the state manager that the value of the property should be loaded from the state file. By default, the state manager only restores values for public attributes of your service. If you have properties that you want to control the loading for, you can use the `@load_state` decorator on your property setters. This indicates to the state manager that the value of the property should be loaded from the state file.
Example: Here is how you can apply the `@load_state` decorator:
```python ```python
import pydase import pydase
@ -61,6 +43,7 @@ class Device(pydase.DataService):
self._name = value self._name = value
``` ```
With the `@load_state` decorator applied to the `name` property setter, the state manager will load and apply the `name` property's value from the file upon server startup. With the `@load_state` decorator applied to the `name` property setter, the state manager will load and apply the `name` property's value from the file storing the state upon server startup, assuming it exists.
Note: If the service class structure has changed since the last time its state was saved, only the attributes and properties decorated with `@load_state` that have remained the same will be restored from the settings file.
**Note**: If the structure of your service class changes between saves, only properties decorated with `@load_state` and unchanged public attributes will be restored safely.
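Most of the decorated-property example is elided by this diff; here is a minimal sketch of the pattern, using the [`load_state`][pydase.data_service.state_manager.load_state] import path referenced above:

```python
import pydase
from pydase.data_service.state_manager import load_state


class Device(pydase.DataService):
    def __init__(self) -> None:
        super().__init__()
        self._name = "Device"

    @property
    def name(self) -> str:
        return self._name

    @name.setter
    @load_state
    def name(self, value: str) -> None:
        self._name = value


if __name__ == "__main__":
    pydase.Server(service=Device(), filename="device_state.json").run()
```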


@ -1,8 +1,8 @@
# Understanding Tasks # Understanding Tasks
In `pydase`, a task is defined as an asynchronous function without arguments that is decorated with the [`@task`][pydase.task.decorator.task] decorator and contained in a class that inherits from [`pydase.DataService`][pydase.DataService]. These tasks usually contain a while loop and are designed to carry out periodic functions. For example, a task might be used to periodically read sensor data, update a database, or perform any other recurring job. In `pydase`, a task is defined as an asynchronous function without arguments that is decorated with the `@task` decorator and contained in a class that inherits from `pydase.DataService`. These tasks usually contain a while loop and are designed to carry out periodic functions. For example, a task might be used to periodically read sensor data, update a database, or perform any other recurring job.
`pydase` allows you to control task execution via both the frontend and Python clients and can automatically start tasks upon initialization of the service. By using the [`@task`][pydase.task.decorator.task] decorator with the `autostart=True` argument in your service class, `pydase` will automatically start these tasks when the server is started. Here's an example: `pydase` allows you to control task execution via both the frontend and Python clients and can automatically start tasks upon initialization of the service. By using the `@task` decorator with the `autostart=True` argument in your service class, `pydase` will automatically start these tasks when the server is started. Here's an example:
```python ```python
import pydase import pydase
```
@ -35,48 +35,4 @@ if __name__ == "__main__":
In this example, `read_sensor_data` is a task that continuously reads data from a sensor. By decorating it with `@task(autostart=True)`, it will automatically start running when `pydase.Server(service).run()` is executed. In this example, `read_sensor_data` is a task that continuously reads data from a sensor. By decorating it with `@task(autostart=True)`, it will automatically start running when `pydase.Server(service).run()` is executed.
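The example above is cut off by the diff view; here is a minimal, self-contained sketch of such a service, where `read_from_sensor` is a hypothetical stand-in for real hardware access:

```python
import asyncio
import random

import pydase
from pydase.task.decorator import task


def read_from_sensor() -> float:
    """Hypothetical stand-in for real sensor access."""
    return random.random()


class SensorService(pydase.DataService):
    @task(autostart=True)
    async def read_sensor_data(self) -> None:
        while True:
            print(read_from_sensor())  # e.g. log it or write it to a database
            await asyncio.sleep(1.0)


if __name__ == "__main__":
    pydase.Server(service=SensorService()).run()
```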
## Task Lifecycle Control The `@task` decorator replaces the function with a task object that has `start()` and `stop()` methods. This means you can control the task execution directly using these methods. For instance, you can manually start or stop the task by calling `service.read_sensor_data.start()` and `service.read_sensor_data.stop()`, respectively.
The [`@task`][pydase.task.decorator.task] decorator replaces the function with a task object that has `start()` and `stop()` methods. This means you can control the task execution directly using these methods. For instance, you can manually start or stop the task by calling `service.read_sensor_data.start()` and `service.read_sensor_data.stop()`, respectively.
## Advanced Task Options
The [`@task`][pydase.task.decorator.task] decorator supports several options inspired by systemd unit services, allowing fine-grained control over task behavior:
- **`autostart`**: Automatically starts the task when the service initializes. Defaults to `False`.
- **`restart_on_exception`**: Configures whether the task should restart if it exits due to an exception (other than `asyncio.CancelledError`). Defaults to `True`.
- **`restart_sec`**: Specifies the delay (in seconds) before restarting a failed task. Defaults to `1.0`.
- **`start_limit_interval_sec`**: Configures a time window (in seconds) for rate limiting task restarts. If the task restarts more than `start_limit_burst` times within this interval, it will no longer restart. Defaults to `None` (disabled).
- **`start_limit_burst`**: Defines the maximum number of restarts allowed within the interval specified by `start_limit_interval_sec`. Defaults to `3`.
- **`exit_on_failure`**: If set to `True`, the service will exit if the task fails and either `restart_on_exception` is `False` or the start rate limiting is exceeded. Defaults to `False`.
### Example with Advanced Options
Here is an example showcasing advanced task options:
```python
import pydase
from pydase.task.decorator import task
class AdvancedTaskService(pydase.DataService):
def __init__(self):
super().__init__()
@task(
autostart=True,
restart_on_exception=True,
restart_sec=2.0,
start_limit_interval_sec=10.0,
start_limit_burst=5,
exit_on_failure=True,
)
async def critical_task(self):
while True:
raise Exception("Critical failure")
if __name__ == "__main__":
service = AdvancedTaskService()
pydase.Server(service=service).run()
```


@ -1,48 +0,0 @@
# Connecting Through a SOCKS5 Proxy
If your target service is only reachable via an SSH gateway or resides behind a
firewall, you can route your [`pydase.Client`][pydase.Client] connection through a local
SOCKS5 proxy. This is particularly useful in network environments where direct access to
the service is not possible.
## Setting Up a SOCKS5 Proxy
You can create a local [SOCKS5 proxy](https://en.wikipedia.org/wiki/SOCKS) using SSH's
`-D` option:
```bash
ssh -D 2222 user@gateway.example.com
```
This command sets up a SOCKS5 proxy on `localhost:2222`, securely forwarding traffic
over the SSH connection.
## Using the Proxy in Your Python Client
Once the proxy is running, configure the [`pydase.Client`][pydase.Client] to route
traffic through it using the `proxy_url` parameter:
```python
import pydase
client = pydase.Client(
url="ws://target-service:8001",
proxy_url="socks5://localhost:2222"
).proxy
```
```
* You can also use this setup with `wss://` URLs for encrypted WebSocket connections.
## Installing Required Dependencies
To use this feature, you must install the optional `socks` dependency group, which
includes [`aiohttp_socks`](https://pypi.org/project/aiohttp-socks/):
- `poetry`
```bash
poetry add "pydase[socks]"
```
- `pip`
```bash
pip install "pydase[socks]"
```


@ -89,7 +89,7 @@ Each key in the file corresponds to the full access path of public attributes, p
- **Control Component Visibility**: Utilize the `"display"` key-value pair to control whether a component is rendered in the frontend. Set the value to `true` to make the component visible or `false` to hide it. - **Control Component Visibility**: Utilize the `"display"` key-value pair to control whether a component is rendered in the frontend. Set the value to `true` to make the component visible or `false` to hide it.
- **Adjustable Component Order**: The `"displayOrder"` values determine the order of components. Alter these values to rearrange the components as desired. The value defaults to [`Number.MAX_SAFE_INTEGER`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/MAX_SAFE_INTEGER). - **Adjustable Component Order**: The `"displayOrder"` values determine the order of components. Alter these values to rearrange the components as desired. The value defaults to [`Number.MAX_SAFE_INTEGER`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/MAX_SAFE_INTEGER).
The `web_settings.json` file will be stored in the directory specified by the `SERVICE_CONFIG_DIR` environment variable. You can generate a `web_settings.json` file by setting the `GENERATE_WEB_SETTINGS` environment variable to `True`. For more information, see the [configuration section](../Configuration.md). The `web_settings.json` file will be stored in the directory specified by the `SERVICE_CONFIG_DIR` environment variable. You can generate a `web_settings.json` file by setting the `GENERATE_WEB_SETTINGS` environment variable to `True`. For more information, see the [configuration section](../Configuration).
For example, styling the following service For example, styling the following service


@ -1,6 +1,6 @@
# Python RPC Client # Python RPC Client
The [`pydase.Client`][pydase.Client] allows you to connect to a remote `pydase` service using Socket.IO, facilitating interaction with the service as though it were running locally. The [`pydase.Client`][pydase.Client] allows you to connect to a remote `pydase` service using socket.io, facilitating interaction with the service as though it were running locally.
## Basic Usage ## Basic Usage
@ -9,7 +9,6 @@ import pydase
# Replace <ip_addr> and <service_port> with the appropriate values for your service # Replace <ip_addr> and <service_port> with the appropriate values for your service
client_proxy = pydase.Client(url="ws://<ip_addr>:<service_port>").proxy client_proxy = pydase.Client(url="ws://<ip_addr>:<service_port>").proxy
# For SSL-encrypted services, use the wss protocol # For SSL-encrypted services, use the wss protocol
# client_proxy = pydase.Client(url="wss://your-domain.ch").proxy # client_proxy = pydase.Client(url="wss://your-domain.ch").proxy
@ -23,12 +22,6 @@ The proxy acts as a local representation of the remote service, enabling intuiti
The proxy class automatically synchronizes with the server's attributes and methods, keeping itself up-to-date with any changes. This dynamic synchronization essentially mirrors the server's API, making it feel like you're working with a local object. The proxy class automatically synchronizes with the server's attributes and methods, keeping itself up-to-date with any changes. This dynamic synchronization essentially mirrors the server's API, making it feel like you're working with a local object.
### Accessing Services Behind Firewalls or SSH Gateways
If your service is only reachable through a private network or SSH gateway, you can route your connection through a local SOCKS5 proxy using the `proxy_url` parameter.
See [Connecting Through a SOCKS5 Proxy](../advanced/SOCKS-Proxy.md) for details.
## Context Manager Support ## Context Manager Support
You can also use the client within a context manager, which automatically handles connection management (i.e., opening and closing the connection): You can also use the client within a context manager, which automatically handles connection management (i.e., opening and closing the connection):
@ -57,15 +50,12 @@ import pydase
class MyService(pydase.DataService): class MyService(pydase.DataService):
proxy = pydase.Client( proxy = pydase.Client(
url="ws://<ip_addr>:<service_port>", url="ws://<ip_addr>:<service_port>",
block_until_connected=False, block_until_connected=False
client_id="my_pydase_client_id", # optional, defaults to system hostname
).proxy ).proxy
# For SSL-encrypted services, use the wss protocol # For SSL-encrypted services, use the wss protocol
# proxy = pydase.Client( # proxy = pydase.Client(
# url="wss://your-domain.ch", # url="wss://your-domain.ch",
# block_until_connected=False, # block_until_connected=False
# client_id="my_pydase_client_id",
# ).proxy # ).proxy
if __name__ == "__main__": if __name__ == "__main__":
@ -76,12 +66,11 @@ if __name__ == "__main__":
In this example: In this example:
- The `MyService` class has a `proxy` attribute that connects to a `pydase` service at `<ip_addr>:<service_port>`. - The `MyService` class has a `proxy` attribute that connects to a `pydase` service at `<ip_addr>:<service_port>`.
- By setting `block_until_connected=False`, the service can start without waiting for the connection to succeed. - By setting `block_until_connected=False`, the service can start without waiting for the connection to succeed, which is particularly useful in distributed systems where services may initialize in any order.
- The `client_id` is optional. If not specified, it defaults to the system hostname, which will be sent in the `X-Client-Id` HTTP header for logging or authentication on the server side.
## Custom `socketio.AsyncClient` Connection Parameters ## Custom `socketio.AsyncClient` Connection Parameters
You can configure advanced connection options by passing arguments to the underlying [`AsyncClient`][socketio.AsyncClient] via `sio_client_kwargs`. For example: You can also configure advanced connection options by passing additional arguments to the underlying [`AsyncClient`][socketio.AsyncClient] via `sio_client_kwargs`. This allows you to fine-tune reconnection behaviour, delays, and other settings:
```python ```python
client = pydase.Client( client = pydase.Client(
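    # [The diff truncates this example. A completed call, based on the Client
    # docstring later in this comparison, might look like the following; the
    # URL is a placeholder.]
    url="ws://localhost:8001",
    sio_client_kwargs={
        "reconnection_attempts": 2,
        "reconnection_delay": 2,
        "reconnection_delay_max": 8,
    },
).proxy
```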


@ -1,7 +1,81 @@
# Interacting with `pydase` Services # Interacting with `pydase` Services
`pydase` offers multiple ways for users to interact with the services they create. `pydase` offers multiple ways for users to interact with the services they create, providing flexibility and convenience for different use cases. This section outlines the primary interaction methods available, including an auto-generated frontend, a RESTful API, and a Python client based on Socket.IO.
- [Auto-generated Frontend](./Auto-generated-Frontend.md) {%
- [RESTful API](./RESTful-API.md) include-markdown "./Auto-generated Frontend.md"
- [Python Client](./Python-Client.md) heading-offset=1
%}
{%
include-markdown "./RESTful API.md"
heading-offset=1
%}
{%
include-markdown "./Python Client.md"
heading-offset=1
%}
<!-- ## 2. **Socket.IO for Real-Time Updates** -->
<!-- For scenarios requiring real-time data updates, `pydase` includes a Socket.IO server. This feature is ideal for applications where live data tracking is crucial, such as monitoring systems or interactive dashboards. -->
<!---->
<!-- ### Key Features: -->
<!-- - **Live Data Streams**: Receive real-time updates for data changes. -->
<!-- - **Event-Driven Communication**: Utilize event-based messaging to push updates and handle client actions. -->
<!---->
<!-- ### Example Usage: -->
<!-- Clients can connect to the Socket.IO server to receive updates: -->
<!-- ```javascript -->
<!-- var socket = io.connect('http://<hostname>:<port>'); -->
<!-- socket.on('<event_name>', function(data) { -->
<!-- console.log(data); -->
<!-- }); -->
<!-- ``` -->
<!---->
<!-- **Use Cases:** -->
<!---->
<!-- - Real-time monitoring and alerts -->
<!-- - Live data visualization -->
<!-- - Collaborative applications -->
<!---->
<!-- ## 3. **Auto-Generated Frontend** -->
<!-- `pydase` automatically generates a web frontend based on the service definitions. This frontend is a convenient interface for interacting with the service, especially for users who prefer a graphical interface over command-line or code-based interactions. -->
<!---->
<!-- ### Key Features: -->
<!-- - **User-Friendly Interface**: Intuitive and easy to use, with real-time interaction capabilities. -->
<!-- - **Customizable**: Adjust the frontend's appearance and functionality to suit specific needs. -->
<!---->
<!-- ### Accessing the Frontend: -->
<!-- Once the service is running, access the frontend via a web browser: -->
<!-- ``` -->
<!-- http://<hostname>:<port> -->
<!-- ``` -->
<!---->
<!-- **Use Cases:** -->
<!---->
<!-- - End-user interfaces for data control and visualization -->
<!-- - Rapid prototyping and testing -->
<!-- - Demonstrations and training -->
<!---->
<!-- ## 4. **Python Client** -->
<!-- `pydase` also provides a Python client for programmatic interactions. This client is particularly useful for developers who want to integrate `pydase` services into other Python applications or automate interactions. -->
<!---->
<!-- ### Key Features: -->
<!-- - **Direct Interaction**: Call methods and access properties as if they were local. -->
<!-- - **Tab Completion**: Supports tab completion in interactive environments like Jupyter notebooks. -->
<!---->
<!-- ### Example Usage: -->
<!-- ```python -->
<!-- import pydase -->
<!---->
<!-- client = pydase.Client(hostname="<ip_addr>", port=8001) -->
<!-- service = client.proxy -->
<!-- service.some_method() -->
<!-- ``` -->
<!---->
<!-- **Use Cases:** -->
<!---->
<!-- - Integrating with other Python applications -->
<!-- - Automation and scripting -->
<!-- - Data analysis and manipulation -->


@ -13,7 +13,7 @@
// this will be set by the python backend if the service is behind a proxy which strips a prefix. The frontend can use this to build the paths to the resources. // this will be set by the python backend if the service is behind a proxy which strips a prefix. The frontend can use this to build the paths to the resources.
window.__FORWARDED_PREFIX__ = ""; window.__FORWARDED_PREFIX__ = "";
window.__FORWARDED_PROTO__ = ""; window.__FORWARDED_PROTO__ = "";
</script> </script>
<body> <body>
<noscript>You need to enable JavaScript to run this app.</noscript> <noscript>You need to enable JavaScript to run this app.</noscript>

File diff suppressed because it is too large


@ -10,31 +10,31 @@
"preview": "vite preview" "preview": "vite preview"
}, },
"dependencies": { "dependencies": {
"@emotion/styled": "^11.14.0", "@emotion/styled": "^11.11.0",
"@mui/material": "^5.16.14", "@mui/material": "^5.14.1",
"bootstrap": "^5.3.3", "bootstrap": "^5.3.3",
"deep-equal": "^2.2.3", "deep-equal": "^2.2.3",
"react": "^19.0.0", "react": "^18.3.1",
"react-bootstrap": "^2.10.7", "react-bootstrap": "^2.10.0",
"react-bootstrap-icons": "^1.11.5", "react-bootstrap-icons": "^1.11.4",
"socket.io-client": "^4.8.1" "socket.io-client": "^4.7.1"
}, },
"devDependencies": { "devDependencies": {
"@eslint/js": "^9.18.0", "@eslint/js": "^9.6.0",
"@types/deep-equal": "^1.0.4", "@types/deep-equal": "^1.0.4",
"@types/eslint__js": "^8.42.3", "@types/eslint__js": "^8.42.3",
"@types/node": "^20.17.14", "@types/node": "^20.14.10",
"@types/react": "^19.0.7", "@types/react": "^18.3.3",
"@types/react-dom": "^19.0.3", "@types/react-dom": "^18.3.0",
"@typescript-eslint/eslint-plugin": "^7.15.0", "@typescript-eslint/eslint-plugin": "^7.15.0",
"@vitejs/plugin-react-swc": "^3.7.2", "@vitejs/plugin-react-swc": "^3.5.0",
"eslint": "^8.57.1", "eslint": "^8.57.0",
"eslint-config-prettier": "^9.1.0", "eslint-config-prettier": "^9.1.0",
"eslint-plugin-prettier": "^5.2.3", "eslint-plugin-prettier": "^5.1.3",
"eslint-plugin-react": "^7.37.4", "eslint-plugin-react": "^7.34.3",
"prettier": "3.3.2", "prettier": "3.3.2",
"typescript": "^5.7.3", "typescript": "^5.5.3",
"typescript-eslint": "^7.18.0", "typescript-eslint": "^7.15.0",
"vite": "^6.3.5" "vite": "^5.3.1"
} }
} }


@ -1,4 +1,4 @@
import React, { useEffect, useRef, useState } from "react"; import React, { useEffect, useState } from "react";
import { Form, InputGroup } from "react-bootstrap"; import { Form, InputGroup } from "react-bootstrap";
import { DocStringComponent } from "./DocStringComponent"; import { DocStringComponent } from "./DocStringComponent";
import "../App.css"; import "../App.css";
@ -175,33 +175,6 @@ const handleNumericKey = (
return { value: newValue, selectionStart: selectionStart + 1 }; return { value: newValue, selectionStart: selectionStart + 1 };
}; };
/**
* Calculates the new cursor position after moving left by a specified step size.
*
* @param cursorPosition - The current position of the cursor.
* @param step - The number of positions to move left.
* @returns The new cursor position, clamped to a minimum of 0.
*/
const getCursorLeftPosition = (cursorPosition: number, step: number): number => {
return Math.max(0, cursorPosition - step);
};
/**
* Calculates the new cursor position after moving right by a specified step size.
*
* @param cursorPosition - The current position of the cursor.
* @param step - The number of positions to move right.
* @param maxPosition - The maximum allowed cursor position (e.g., value.length).
* @returns The new cursor position, clamped to a maximum of maxPosition.
*/
const getCursorRightPosition = (
cursorPosition: number,
step: number,
maxPosition: number,
): number => {
return Math.min(maxPosition, cursorPosition + step);
};
export const NumberComponent = React.memo((props: NumberComponentProps) => { export const NumberComponent = React.memo((props: NumberComponentProps) => {
const { const {
fullAccessPath, fullAccessPath,
@ -218,8 +191,7 @@ export const NumberComponent = React.memo((props: NumberComponentProps) => {
} = props; } = props;
// Create a state for the cursor position // Create a state for the cursor position
const cursorPositionRef = useRef<number | null>(null); const [cursorPosition, setCursorPosition] = useState<number | null>(null);
// Create a state for the input string // Create a state for the input string
const [inputString, setInputString] = useState(value.toString()); const [inputString, setInputString] = useState(value.toString());
const renderCount = useRenderCount(); const renderCount = useRenderCount();
@ -227,36 +199,25 @@ export const NumberComponent = React.memo((props: NumberComponentProps) => {
const handleKeyDown = (event: React.KeyboardEvent<HTMLInputElement>) => { const handleKeyDown = (event: React.KeyboardEvent<HTMLInputElement>) => {
const { key, target } = event; const { key, target } = event;
// Typecast
const inputTarget = target as HTMLInputElement; const inputTarget = target as HTMLInputElement;
if (
// Get the current input value and cursor position key === "F1" ||
const { value } = inputTarget; key === "F5" ||
const valueLength = value.length; key === "F12" ||
const selectionEnd = inputTarget.selectionEnd ?? 0; key === "Tab" ||
let selectionStart = inputTarget.selectionStart ?? 0; key === "ArrowRight" ||
key === "ArrowLeft"
if (key === "F1" || key === "F5" || key === "F12" || key === "Tab") { ) {
return;
} else if (key === "ArrowLeft" || key === "ArrowRight") {
const hasSelection = selectionEnd > selectionStart;
if (hasSelection && !event.shiftKey) {
// Collapse selection: ArrowLeft -> start, ArrowRight -> end
const collapseTo = key === "ArrowLeft" ? selectionStart : selectionEnd;
cursorPositionRef.current = collapseTo;
} else {
// No selection or shift key is pressed, just move cursor by one
const newSelectionStart =
key === "ArrowLeft"
? getCursorLeftPosition(selectionStart, 1)
: getCursorRightPosition(selectionEnd, 1, valueLength);
cursorPositionRef.current = newSelectionStart;
}
return; return;
} }
event.preventDefault(); event.preventDefault();
// Get the current input value and cursor position
const { value } = inputTarget;
const selectionEnd = inputTarget.selectionEnd ?? 0;
let selectionStart = inputTarget.selectionStart ?? 0;
let newValue: string = value; let newValue: string = value;
if (event.ctrlKey && key === "a") { if (event.ctrlKey && key === "a") {
// Select everything when pressing Ctrl + a // Select everything when pressing Ctrl + a
@ -356,7 +317,7 @@ export const NumberComponent = React.memo((props: NumberComponentProps) => {
setInputString(newValue); setInputString(newValue);
// Save the current cursor position before the component re-renders // Save the current cursor position before the component re-renders
cursorPositionRef.current = selectionStart; setCursorPosition(selectionStart);
}; };
const handleBlur = () => { const handleBlur = () => {
@ -409,11 +370,8 @@ export const NumberComponent = React.memo((props: NumberComponentProps) => {
useEffect(() => { useEffect(() => {
// Set the cursor position after the component re-renders // Set the cursor position after the component re-renders
const inputElement = document.getElementsByName(id)[0] as HTMLInputElement; const inputElement = document.getElementsByName(id)[0] as HTMLInputElement;
if (inputElement && cursorPositionRef.current !== null) { if (inputElement && cursorPosition !== null) {
inputElement.setSelectionRange( inputElement.setSelectionRange(cursorPosition, cursorPosition);
cursorPositionRef.current,
cursorPositionRef.current,
);
} }
}); });


@ -1,9 +1,8 @@
import { useState, useEffect } from "react"; import { useState, useEffect } from "react";
import { authority } from "../socket";
export default function useLocalStorage(key: string, defaultValue: unknown) { export default function useLocalStorage(key: string, defaultValue: unknown) {
const [value, setValue] = useState(() => { const [value, setValue] = useState(() => {
const storedValue = localStorage.getItem(`${authority}:${key}`); const storedValue = localStorage.getItem(key);
if (storedValue) { if (storedValue) {
return JSON.parse(storedValue); return JSON.parse(storedValue);
} }
@ -12,7 +11,7 @@ export default function useLocalStorage(key: string, defaultValue: unknown) {
useEffect(() => { useEffect(() => {
if (value === undefined) return; if (value === undefined) return;
localStorage.setItem(`${authority}:${key}`, JSON.stringify(value)); localStorage.setItem(key, JSON.stringify(value));
}, [value, key]); }, [value, key]);
return [value, setValue]; return [value, setValue];


@ -6,20 +6,14 @@ nav:
- Getting Started: getting-started.md - Getting Started: getting-started.md
- User Guide: - User Guide:
- Components Guide: user-guide/Components.md - Components Guide: user-guide/Components.md
- Interaction: - Interacting with pydase Services: user-guide/interaction/README.md
- Overview: user-guide/interaction/README.md
- Auto-generated Frontend: user-guide/interaction/Auto-generated-Frontend.md
- RESTful API: user-guide/interaction/RESTful-API.md
- Python Client: user-guide/interaction/Python-Client.md
- Achieving Service Persistence: user-guide/Service_Persistence.md - Achieving Service Persistence: user-guide/Service_Persistence.md
- Understanding Tasks: user-guide/Tasks.md - Understanding Tasks: user-guide/Tasks.md
- Understanding Units: user-guide/Understanding-Units.md - Understanding Units: user-guide/Understanding-Units.md
- Validating Property Setters: user-guide/Validating-Property-Setters.md - Validating Property Setters: user-guide/Validating-Property-Setters.md
- Configuring pydase: user-guide/Configuration.md - Configuring pydase: user-guide/Configuration.md
- Logging in pydase: user-guide/Logging.md
- Advanced: - Advanced:
- Deploying behind a Reverse Proxy: user-guide/advanced/Reverse-Proxy.md - Deploying behind a Reverse Proxy: user-guide/advanced/Reverse-Proxy.md
- Connecting through a SOCKS Proxy: user-guide/advanced/SOCKS-Proxy.md
- Developer Guide: - Developer Guide:
- Developer Guide: dev-guide/README.md - Developer Guide: dev-guide/README.md
- API Reference: dev-guide/api.md - API Reference: dev-guide/api.md
@ -60,7 +54,7 @@ plugins:
handlers: handlers:
python: python:
paths: [src] # search packages in the src folder paths: [src] # search packages in the src folder
inventories: import:
- https://docs.python.org/3/objects.inv - https://docs.python.org/3/objects.inv
- https://docs.pydantic.dev/latest/objects.inv - https://docs.pydantic.dev/latest/objects.inv
- https://confz.readthedocs.io/en/latest/objects.inv - https://confz.readthedocs.io/en/latest/objects.inv

poetry.lock (generated)

File diff suppressed because it is too large


@ -1,56 +1,50 @@
[project]
name = "pydase"
version = "0.10.16"
description = "A flexible and robust Python library for creating, managing, and interacting with data services, with built-in support for web and RPC servers, and customizable features for diverse use cases."
authors = [
{name = "Mose Müller",email = "mosemueller@gmail.com"}
]
readme = "README.md"
requires-python = ">=3.10,<4.0"
dependencies = [
"toml (>=0.10.2,<0.11.0)",
"python-socketio (>=5.13.0,<6.0.0)",
"confz (>=2.1.0,<3.0.0)",
"pint (>=0.24.4,<0.25.0)",
"websocket-client (>=1.8.0,<2.0.0)",
"aiohttp (>=3.11.18,<4.0.0)",
"click (>=8.2.0,<9.0.0)",
"aiohttp-middlewares (>=2.4.0,<3.0.0)",
"anyio (>=4.9.0,<5.0.0)"
]
[project.optional-dependencies]
socks = ["aiohttp-socks (>=0.10.1,<0.11.0)"]
[tool.poetry] [tool.poetry]
packages = [{include = "pydase", from = "src"}] name = "pydase"
version = "0.10.7"
description = "A flexible and robust Python library for creating, managing, and interacting with data services, with built-in support for web and RPC servers, and customizable features for diverse use cases."
authors = ["Mose Mueller <mosmuell@ethz.ch>"]
readme = "README.md"
packages = [{ include = "pydase", from = "src" }]
[tool.poetry.dependencies]
python = "^3.10"
toml = "^0.10.2"
python-socketio = "^5.8.0"
confz = "^2.0.0"
pint = "^0.24"
websocket-client = "^1.7.0"
aiohttp = "^3.9.3"
click = "^8.1.7"
aiohttp-middlewares = "^2.3.0"
anyio = "^4.6.0"
[tool.poetry.group.dev] [tool.poetry.group.dev]
optional = true optional = true
[tool.poetry.group.dev.dependencies] [tool.poetry.group.dev.dependencies]
types-toml = "^0.10.8.20240310" types-toml = "^0.10.8.6"
pytest = "^8.3.5" pytest = "^7.4.0"
pytest-cov = "^6.1.1" pytest-cov = "^4.1.0"
mypy = "^1.15.0" mypy = "^1.4.1"
matplotlib = "^3.10.3" matplotlib = "^3.7.2"
pyright = "^1.1.400" pyright = "^1.1.323"
pytest-mock = "^3.14.0" pytest-mock = "^3.11.1"
ruff = "^0.11.10" ruff = "^0.5.0"
pytest-asyncio = "^0.26.0" pytest-asyncio = "^0.23.2"
[tool.poetry.group.docs] [tool.poetry.group.docs]
optional = true optional = true
[tool.poetry.group.docs.dependencies] [tool.poetry.group.docs.dependencies]
mkdocs-material = "^9.6.14" mkdocs-material = "^9.5.30"
mkdocs-include-markdown-plugin = "^7.1.5" mkdocs-include-markdown-plugin = "^3.9.1"
mkdocstrings = {extras = ["python"], version = "^0.29.1"} mkdocstrings = {extras = ["python"], version = "^0.25.2"}
pymdown-extensions = "^10.15" pymdown-extensions = "^10.1"
mkdocs-swagger-ui-tag = "^0.7.1" mkdocs-swagger-ui-tag = "^0.6.10"
[build-system] [build-system]
requires = ["poetry-core>=2.0.0,<3.0.0"] requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api" build-backend = "poetry.core.masonry.api"
[tool.ruff] [tool.ruff]
@ -92,7 +86,6 @@ select = [
ignore = [ ignore = [
"RUF006", # asyncio-dangling-task "RUF006", # asyncio-dangling-task
"PERF203", # try-except-in-loop "PERF203", # try-except-in-loop
"ASYNC110", # async-busy-wait
] ]
[tool.ruff.lint.mccabe] [tool.ruff.lint.mccabe]
@ -111,10 +104,3 @@ disallow_incomplete_defs = true
disallow_any_generics = true disallow_any_generics = true
check_untyped_defs = true check_untyped_defs = true
ignore_missing_imports = false ignore_missing_imports = false
[tool.pytest.ini_options]
asyncio_default_fixture_loop_scope = "function"
filterwarnings = [
# I don't control the usage of the timeout
"ignore:parameter 'timeout' of type 'float' is deprecated, please use 'timeout=ClientWSTimeout"
]


@ -6,7 +6,7 @@ from pydase.utils.logging import setup_logging
setup_logging() setup_logging()
__all__ = [ __all__ = [
"Client",
"DataService", "DataService",
"Server", "Server",
"Client",
] ]


@ -1,14 +1,11 @@
import asyncio import asyncio
import logging import logging
import socket
import sys import sys
import threading import threading
import urllib.parse import urllib.parse
from builtins import ModuleNotFoundError
from types import TracebackType from types import TracebackType
from typing import TYPE_CHECKING, Any, TypedDict, cast from typing import TYPE_CHECKING, Any, TypedDict, cast
import aiohttp
import socketio # type: ignore import socketio # type: ignore
from pydase.client.proxy_class import ProxyClass from pydase.client.proxy_class import ProxyClass
@ -36,60 +33,48 @@ class NotifyDict(TypedDict):
def asyncio_loop_thread(loop: asyncio.AbstractEventLoop) -> None: def asyncio_loop_thread(loop: asyncio.AbstractEventLoop) -> None:
asyncio.set_event_loop(loop) asyncio.set_event_loop(loop)
try: loop.run_forever()
loop.run_forever()
finally:
loop.close()
class Client: class Client:
"""A client for connecting to a remote pydase service using Socket.IO. This client """
A client for connecting to a remote pydase service using socket.io. This client
handles asynchronous communication with a service, manages events such as handles asynchronous communication with a service, manages events such as
connection, disconnection, and updates, and ensures that the proxy object is connection, disconnection, and updates, and ensures that the proxy object is
up-to-date with the server state. up-to-date with the server state.
Args: Args:
url: The URL of the pydase Socket.IO server. This should always contain the url:
protocol (e.g., `ws` or `wss`) and the hostname, and can optionally include The URL of the pydase Socket.IO server. This should always contain the
a path prefix (e.g., `ws://localhost:8001/service`). protocol and the hostname.
block_until_connected: If set to True, the constructor will block until the block_until_connected:
connection to the service has been established. This is useful for ensuring If set to True, the constructor will block until the connection to the
the client is ready to use immediately after instantiation. Default is True. service has been established. This is useful for ensuring the client is
sio_client_kwargs: Additional keyword arguments passed to the underlying ready to use immediately after instantiation. Default is True.
sio_client_kwargs:
Additional keyword arguments passed to the underlying
[`AsyncClient`][socketio.AsyncClient]. This allows fine-tuning of the [`AsyncClient`][socketio.AsyncClient]. This allows fine-tuning of the
client's behaviour (e.g., reconnection attempts or reconnection delay). client's behaviour (e.g., reconnection attempts or reconnection delay).
client_id: An optional client identifier. This ID is sent to the server as the Default is an empty dictionary.
`X-Client-Id` HTTP header. It can be used for logging or authentication
purposes on the server side. If not provided, it defaults to the hostname
of the machine running the client.
proxy_url: An optional proxy URL to route the connection through. This is useful
if the service is only reachable via an SSH tunnel or behind a firewall
(e.g., `socks5://localhost:2222`).
Example: Example:
Connect to a service directly: The following example demonstrates a `Client` instance that connects to another
pydase service, while customising some of the connection settings for the
underlying [`AsyncClient`][socketio.AsyncClient].
```python ```python
client = pydase.Client(url="ws://localhost:8001") pydase.Client(url="ws://localhost:8001", sio_client_kwargs={
"reconnection_attempts": 2,
"reconnection_delay": 2,
"reconnection_delay_max": 8,
})
``` ```
Connect over a secure connection: When connecting to a server over a secure connection (i.e., the server is using
SSL/TLS encryption), make sure that the `wss` protocol is used instead of `ws`:
```python ```python
client = pydase.Client(url="wss://my-service.example.com") pydase.Client(url="wss://my-service.example.com")
```
Connect using a SOCKS5 proxy (e.g., through an SSH tunnel):
```bash
ssh -D 2222 user@gateway.example.com
```
```python
client = pydase.Client(
url="ws://remote-server:8001",
proxy_url="socks5://localhost:2222"
)
``` ```
""" """
@ -99,8 +84,6 @@ class Client:
url: str, url: str,
block_until_connected: bool = True, block_until_connected: bool = True,
sio_client_kwargs: dict[str, Any] = {}, sio_client_kwargs: dict[str, Any] = {},
client_id: str | None = None,
proxy_url: str | None = None,
): ):
# Parse the URL to separate base URL and path prefix # Parse the URL to separate base URL and path prefix
parsed_url = urllib.parse.urlparse(url) parsed_url = urllib.parse.urlparse(url)
@ -113,14 +96,17 @@ class Client:
# Store the path prefix (e.g., "/service" in "ws://localhost:8081/service") # Store the path prefix (e.g., "/service" in "ws://localhost:8081/service")
self._path_prefix = parsed_url.path.rstrip("/") # Remove trailing slash if any self._path_prefix = parsed_url.path.rstrip("/") # Remove trailing slash if any
self._url = url self._url = url
self._proxy_url = proxy_url self._sio = socketio.AsyncClient(**sio_client_kwargs)
self._client_id = client_id or socket.gethostname() self._loop = asyncio.new_event_loop()
self._sio_client_kwargs = sio_client_kwargs self.proxy = ProxyClass(
self._loop: asyncio.AbstractEventLoop | None = None sio_client=self._sio, loop=self._loop, reconnect=self.connect
self._thread: threading.Thread | None = None )
self.proxy: ProxyClass
"""A proxy object representing the remote service, facilitating interaction as """A proxy object representing the remote service, facilitating interaction as
if it were local.""" if it were local."""
self._thread = threading.Thread(
target=asyncio_loop_thread, args=(self._loop,), daemon=True
)
self._thread.start()
self.connect(block_until_connected=block_until_connected) self.connect(block_until_connected=block_until_connected)
def __enter__(self) -> Self: def __enter__(self) -> Self:
@ -135,84 +121,23 @@ class Client:
self.disconnect() self.disconnect()
def connect(self, block_until_connected: bool = True) -> None: def connect(self, block_until_connected: bool = True) -> None:
if self._thread is None or self._loop is None:
self._loop = self._initialize_loop_and_thread()
self._initialize_socketio_client()
self.proxy = ProxyClass(
sio_client=self._sio,
loop=self._loop,
reconnect=self.connect,
)
connection_future = asyncio.run_coroutine_threadsafe( connection_future = asyncio.run_coroutine_threadsafe(
self._connect(), self._loop self._connect(), self._loop
) )
if block_until_connected: if block_until_connected:
connection_future.result() connection_future.result()
def _initialize_socketio_client(self) -> None:
if self._proxy_url is not None:
try:
import aiohttp_socks.connector
except ModuleNotFoundError:
raise ModuleNotFoundError(
"Missing dependency 'aiohttp_socks'. To use SOCKS5 proxy support, "
"install the optional 'socks' extra:\n\n"
' pip install "pydase[socks]"\n\n'
"This is required when specifying a `proxy_url` for "
"`pydase.Client`."
)
session = aiohttp.ClientSession(
connector=aiohttp_socks.connector.ProxyConnector.from_url(
url=self._proxy_url, loop=self._loop
),
loop=self._loop,
)
self._sio = socketio.AsyncClient(
http_session=session, **self._sio_client_kwargs
)
else:
self._sio = socketio.AsyncClient(**self._sio_client_kwargs)
def _initialize_loop_and_thread(self) -> asyncio.AbstractEventLoop:
"""Initialize a new asyncio event loop, start it in a background thread,
and create the ProxyClass instance bound to that loop.
"""
loop = asyncio.new_event_loop()
self._thread = threading.Thread(
target=asyncio_loop_thread,
args=(loop,),
daemon=True,
)
self._thread.start()
return loop
def disconnect(self) -> None: def disconnect(self) -> None:
if self._loop is not None and self._thread is not None: connection_future = asyncio.run_coroutine_threadsafe(
connection_future = asyncio.run_coroutine_threadsafe( self._disconnect(), self._loop
self._disconnect(), self._loop )
) connection_future.result()
connection_future.result()
# Stop the event loop and thread
self._loop.call_soon_threadsafe(self._loop.stop)
self._thread.join()
self._thread = None
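Because `__enter__`/`__exit__` delegate to `connect()` and `disconnect()`, the client also works as a context manager; a minimal sketch, assuming a service at the placeholder URL:

```python
import pydase

# `disconnect()` runs automatically on block exit, closing the
# connection and stopping the background event-loop thread
with pydase.Client(url="ws://localhost:8001") as client:
    print(client.proxy)
```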
async def _connect(self) -> None: async def _connect(self) -> None:
logger.debug("Connecting to server '%s' ...", self._url) logger.debug("Connecting to server '%s' ...", self._url)
await self._setup_events() await self._setup_events()
headers = {}
if self._client_id is not None:
headers["X-Client-Id"] = self._client_id
await self._sio.connect( await self._sio.connect(
url=self._base_url, self._base_url,
headers=headers,
socketio_path=f"{self._path_prefix}/ws/socket.io", socketio_path=f"{self._path_prefix}/ws/socket.io",
transports=["websocket"], transports=["websocket"],
retry=True, retry=True,
@ -229,7 +154,7 @@ class Client:
async def _handle_connect(self) -> None: async def _handle_connect(self) -> None:
logger.debug("Connected to '%s' ...", self._url) logger.debug("Connected to '%s' ...", self._url)
serialized_object = cast( serialized_object = cast(
"SerializedDataService", await self._sio.call("service_serialization") SerializedDataService, await self._sio.call("service_serialization")
) )
ProxyLoader.update_data_service_proxy( ProxyLoader.update_data_service_proxy(
self.proxy, serialized_object=serialized_object self.proxy, serialized_object=serialized_object

View File

@ -67,7 +67,7 @@ class ProxyClass(ProxyClassMixin, pydase.components.DeviceConnection):
def serialize(self) -> SerializedObject: def serialize(self) -> SerializedObject:
if self._service_representation is None: if self._service_representation is None:
serialization_future = cast( serialization_future = cast(
"asyncio.Future[SerializedDataService]", asyncio.Future[SerializedDataService],
asyncio.run_coroutine_threadsafe( asyncio.run_coroutine_threadsafe(
self._sio.call("service_serialization"), self._loop self._sio.call("service_serialization"), self._loop
), ),
@ -80,7 +80,7 @@ class ProxyClass(ProxyClassMixin, pydase.components.DeviceConnection):
self._service_representation = serialization_future.result() self._service_representation = serialization_future.result()
device_connection_value = cast( device_connection_value = cast(
"dict[str, SerializedObject]", dict[str, SerializedObject],
pydase.components.DeviceConnection().serialize()["value"], pydase.components.DeviceConnection().serialize()["value"],
) )
@ -90,7 +90,7 @@ class ProxyClass(ProxyClassMixin, pydase.components.DeviceConnection):
value = { value = {
**cast( **cast(
"dict[str, SerializedObject]", dict[str, SerializedObject],
# need to deepcopy to not overwrite the _service_representation dict # need to deepcopy to not overwrite the _service_representation dict
# when adding a prefix with add_prefix_to_full_access_path # when adding a prefix with add_prefix_to_full_access_path
deepcopy(self._service_representation["value"]), deepcopy(self._service_representation["value"]),

View File

@ -123,35 +123,35 @@ class ProxyList(list[Any]):
update_value(self._sio, self._loop, full_access_path, value) update_value(self._sio, self._loop, full_access_path, value)
def append(self, object_: Any, /) -> None: def append(self, __object: Any) -> None:
full_access_path = f"{self._parent_path}.append" full_access_path = f"{self._parent_path}.append"
trigger_method(self._sio, self._loop, full_access_path, [object_], {}) trigger_method(self._sio, self._loop, full_access_path, [__object], {})
def clear(self) -> None: def clear(self) -> None:
full_access_path = f"{self._parent_path}.clear" full_access_path = f"{self._parent_path}.clear"
trigger_method(self._sio, self._loop, full_access_path, [], {}) trigger_method(self._sio, self._loop, full_access_path, [], {})
def extend(self, iterable: Iterable[Any], /) -> None: def extend(self, __iterable: Iterable[Any]) -> None:
full_access_path = f"{self._parent_path}.extend" full_access_path = f"{self._parent_path}.extend"
trigger_method(self._sio, self._loop, full_access_path, [iterable], {}) trigger_method(self._sio, self._loop, full_access_path, [__iterable], {})
def insert(self, index: SupportsIndex, object_: Any, /) -> None: def insert(self, __index: SupportsIndex, __object: Any) -> None:
full_access_path = f"{self._parent_path}.insert" full_access_path = f"{self._parent_path}.insert"
trigger_method(self._sio, self._loop, full_access_path, [index, object_], {}) trigger_method(self._sio, self._loop, full_access_path, [__index, __object], {})
def pop(self, index: SupportsIndex = -1, /) -> Any: def pop(self, __index: SupportsIndex = -1) -> Any:
full_access_path = f"{self._parent_path}.pop" full_access_path = f"{self._parent_path}.pop"
return trigger_method(self._sio, self._loop, full_access_path, [index], {}) return trigger_method(self._sio, self._loop, full_access_path, [__index], {})
def remove(self, value: Any, /) -> None: def remove(self, __value: Any) -> None:
full_access_path = f"{self._parent_path}.remove" full_access_path = f"{self._parent_path}.remove"
trigger_method(self._sio, self._loop, full_access_path, [value], {}) trigger_method(self._sio, self._loop, full_access_path, [__value], {})
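Each of these list methods forwards the operation to the server instead of mutating a local copy; a sketch, assuming the remote service exposes a list attribute named `my_list`:

```python
import pydase

client = pydase.Client(url="ws://localhost:8001")

# every call is serialized into a remote method trigger,
# e.g. "my_list.append" or "my_list.pop"
client.proxy.my_list.append(42)
client.proxy.my_list.insert(0, "first")
last_item = client.proxy.my_list.pop()
client.proxy.my_list.remove("first")
```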
class ProxyClassMixin: class ProxyClassMixin:
@ -266,7 +266,7 @@ class ProxyLoader:
return ProxyList( return ProxyList(
[ [
ProxyLoader.loads_proxy(item, sio_client, loop) ProxyLoader.loads_proxy(item, sio_client, loop)
for item in cast("list[SerializedObject]", serialized_object["value"]) for item in cast(list[SerializedObject], serialized_object["value"])
], ],
parent_path=serialized_object["full_access_path"], parent_path=serialized_object["full_access_path"],
sio_client=sio_client, sio_client=sio_client,
@ -283,7 +283,7 @@ class ProxyLoader:
{ {
key: ProxyLoader.loads_proxy(value, sio_client, loop) key: ProxyLoader.loads_proxy(value, sio_client, loop)
for key, value in cast( for key, value in cast(
"dict[str, SerializedObject]", serialized_object["value"] dict[str, SerializedObject], serialized_object["value"]
).items() ).items()
}, },
parent_path=serialized_object["full_access_path"], parent_path=serialized_object["full_access_path"],
@ -300,7 +300,7 @@ class ProxyLoader:
proxy_class._proxy_setters.clear() proxy_class._proxy_setters.clear()
proxy_class._proxy_methods.clear() proxy_class._proxy_methods.clear()
for key, value in cast( for key, value in cast(
"dict[str, SerializedObject]", serialized_object["value"] dict[str, SerializedObject], serialized_object["value"]
).items(): ).items():
type_handler: dict[str | None, None | Callable[..., Any]] = { type_handler: dict[str | None, None | Callable[..., Any]] = {
None: None, None: None,
@ -333,7 +333,7 @@ class ProxyLoader:
) -> Any: ) -> Any:
# Custom types like Components or DataService classes # Custom types like Components or DataService classes
component_class = cast( component_class = cast(
"type", Deserializer.get_service_base_class(serialized_object["type"]) type, Deserializer.get_service_base_class(serialized_object["type"])
) )
class_bases = ( class_bases = (
ProxyClassMixin, ProxyClassMixin,

View File

@ -33,8 +33,8 @@ from pydase.components.image import Image
from pydase.components.number_slider import NumberSlider from pydase.components.number_slider import NumberSlider
__all__ = [ __all__ = [
"NumberSlider",
"Image",
"ColouredEnum", "ColouredEnum",
"DeviceConnection", "DeviceConnection",
"Image",
"NumberSlider",
] ]

View File

@ -13,11 +13,11 @@ class NumberSlider(DataService):
Args: Args:
value: value:
The initial value of the slider. Defaults to 0.0. The initial value of the slider. Defaults to 0.
min_: min_:
The minimum value of the slider. Defaults to 0.0. The minimum value of the slider. Defaults to 0.
max_: max_:
The maximum value of the slider. Defaults to 100.0. The maximum value of the slider. Defaults to 100.
step_size: step_size:
The increment/decrement step size of the slider. Defaults to 1.0. The increment/decrement step size of the slider. Defaults to 1.0.
@ -84,9 +84,9 @@ class NumberSlider(DataService):
def __init__( def __init__(
self, self,
value: Any = 0.0, value: Any = 0.0,
min_: Any = 0.0, min_: float = 0.0,
max_: Any = 100.0, max_: float = 100.0,
step_size: Any = 1.0, step_size: float = 1.0,
) -> None: ) -> None:
super().__init__() super().__init__()
self._step_size = step_size self._step_size = step_size
@ -95,17 +95,17 @@ class NumberSlider(DataService):
self._max = max_ self._max = max_
@property @property
def min(self) -> Any: def min(self) -> float:
"""The min property.""" """The min property."""
return self._min return self._min
@property @property
def max(self) -> Any: def max(self) -> float:
"""The min property.""" """The min property."""
return self._max return self._max
@property @property
def step_size(self) -> Any: def step_size(self) -> float:
"""The min property.""" """The min property."""
return self._step_size return self._step_size
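A minimal sketch of embedding the slider in a service; `min`, `max` and `step_size` are exposed read-only via the properties above, while `value` remains writable:

```python
import pydase
from pydase.components import NumberSlider


class MyService(pydase.DataService):
    def __init__(self) -> None:
        super().__init__()
        # a hypothetical slider covering 0.0-10.0 in steps of 0.1
        self.ramp = NumberSlider(value=5.0, min_=0.0, max_=10.0, step_size=0.1)


print(MyService().ramp.max)  # 10.0
```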

View File

@ -15,9 +15,9 @@ from pydase.utils.helpers import (
is_property_attribute, is_property_attribute,
) )
from pydase.utils.serialization.serializer import ( from pydase.utils.serialization.serializer import (
SerializedObject,
Serializer, Serializer,
) )
from pydase.utils.serialization.types import SerializedObject
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
@ -27,17 +27,17 @@ class DataService(AbstractDataService):
super().__init__() super().__init__()
self.__check_instance_classes() self.__check_instance_classes()
def __setattr__(self, name: str, value: Any, /) -> None: def __setattr__(self, __name: str, __value: Any) -> None:
# Check and warn for unexpected type changes in attributes # Check and warn for unexpected type changes in attributes
self._warn_on_type_change(name, value) self._warn_on_type_change(__name, __value)
# every class defined by the user should inherit from DataService if it is # every class defined by the user should inherit from DataService if it is
# assigned to a public attribute # assigned to a public attribute
if not name.startswith("_") and not inspect.isfunction(value): if not __name.startswith("_") and not inspect.isfunction(__value):
self.__warn_if_not_observable(value) self.__warn_if_not_observable(__value)
# Set the attribute # Set the attribute
super().__setattr__(name, value) super().__setattr__(__name, __value)
def _warn_on_type_change(self, attr_name: str, new_value: Any) -> None: def _warn_on_type_change(self, attr_name: str, new_value: Any) -> None:
if is_property_attribute(self, attr_name): if is_property_attribute(self, attr_name):
@ -56,14 +56,16 @@ class DataService(AbstractDataService):
def _is_unexpected_type_change(self, current_value: Any, new_value: Any) -> bool: def _is_unexpected_type_change(self, current_value: Any, new_value: Any) -> bool:
return ( return (
isinstance(current_value, float) and not isinstance(new_value, float) isinstance(current_value, float)
) or ( and not isinstance(new_value, float)
isinstance(current_value, u.Quantity) or (
and not isinstance(new_value, u.Quantity) isinstance(current_value, u.Quantity)
and not isinstance(new_value, u.Quantity)
)
) )
def __warn_if_not_observable(self, value: Any, /) -> None: def __warn_if_not_observable(self, __value: Any) -> None:
value_class = value if inspect.isclass(value) else value.__class__ value_class = __value if inspect.isclass(__value) else __value.__class__
if not issubclass( if not issubclass(
value_class, value_class,
@ -79,7 +81,7 @@ class DataService(AbstractDataService):
| Observable | Observable
| Callable | Callable
), ),
) and not is_descriptor(value): ) and not is_descriptor(__value):
logger.warning( logger.warning(
"Class '%s' does not inherit from DataService. This may lead to" "Class '%s' does not inherit from DataService. This may lead to"
" unexpected behaviour!", " unexpected behaviour!",

View File

@ -2,10 +2,10 @@ import logging
from typing import TYPE_CHECKING, Any, cast from typing import TYPE_CHECKING, Any, cast
from pydase.utils.serialization.serializer import ( from pydase.utils.serialization.serializer import (
SerializedObject,
get_nested_dict_by_path, get_nested_dict_by_path,
set_nested_value_by_path, set_nested_value_by_path,
) )
from pydase.utils.serialization.types import SerializedObject
if TYPE_CHECKING: if TYPE_CHECKING:
from pydase import DataService from pydase import DataService
@ -14,22 +14,6 @@ logger = logging.getLogger(__name__)
class DataServiceCache: class DataServiceCache:
"""Maintains a serialized cache of the current state of a DataService instance.
This class is responsible for storing and updating a representation of the service's
public attributes and properties. It is primarily used by the StateManager and the
web server to serve consistent state to clients without accessing the DataService
attributes directly.
The cache is initialized once upon construction by serializing the full state of
the service. After that, it can be incrementally updated using attribute paths and
values as notified by the
[`DataServiceObserver`][pydase.data_service.data_service_observer.DataServiceObserver].
Args:
service: The DataService instance whose state should be cached.
"""
def __init__(self, service: "DataService") -> None: def __init__(self, service: "DataService") -> None:
self._cache: SerializedObject self._cache: SerializedObject
self.service = service self.service = service
@ -46,13 +30,13 @@ class DataServiceCache:
def update_cache(self, full_access_path: str, value: Any) -> None: def update_cache(self, full_access_path: str, value: Any) -> None:
set_nested_value_by_path( set_nested_value_by_path(
cast("dict[str, SerializedObject]", self._cache["value"]), cast(dict[str, SerializedObject], self._cache["value"]),
full_access_path, full_access_path,
value, value,
) )
def get_value_dict_from_cache(self, full_access_path: str) -> SerializedObject: def get_value_dict_from_cache(self, full_access_path: str) -> SerializedObject:
return get_nested_dict_by_path( return get_nested_dict_by_path(
cast("dict[str, SerializedObject]", self._cache["value"]), cast(dict[str, SerializedObject], self._cache["value"]),
full_access_path, full_access_path,
) )
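A sketch of the cache in isolation (in a running service it is created and driven by the `StateManager`):

```python
import pydase
from pydase.data_service.data_service_cache import DataServiceCache


class MyService(pydase.DataService):
    def __init__(self) -> None:
        super().__init__()
        self.value = 1.0


cache = DataServiceCache(MyService())  # serializes the full state once
cache.update_cache("value", 2.0)       # incremental update by access path
print(cache.get_value_dict_from_cache("value")["value"])  # 2.0
```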

View File

@ -10,29 +10,17 @@ from pydase.observer_pattern.observer.property_observer import (
) )
from pydase.utils.helpers import ( from pydase.utils.helpers import (
get_object_attr_from_path, get_object_attr_from_path,
normalize_full_access_path_string,
) )
from pydase.utils.serialization.serializer import ( from pydase.utils.serialization.serializer import (
SerializationPathError, SerializationPathError,
SerializedObject,
dump, dump,
) )
from pydase.utils.serialization.types import SerializedObject
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
def _is_nested_attribute(full_access_path: str, changing_attributes: list[str]) -> bool:
"""Return True if the full_access_path is a nested attribute of any
changing_attribute."""
return any(
(
full_access_path.startswith((f"{attr}.", f"{attr}["))
and full_access_path != attr
)
for attr in changing_attributes
)
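A few illustrative cases of the predicate above:

```python
# nested attribute / item of a changing attribute -> True
_is_nested_attribute("device.channel.value", ["device.channel"])  # True
_is_nested_attribute("device.channels[0]", ["device.channels"])   # True
# the changing attribute itself does not count as nested -> False
_is_nested_attribute("device.channel", ["device.channel"])        # False
# a prefix match without '.' or '[' does not count -> False
_is_nested_attribute("device.channel_b", ["device.channel"])      # False
```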
class DataServiceObserver(PropertyObserver): class DataServiceObserver(PropertyObserver):
def __init__(self, state_manager: StateManager) -> None: def __init__(self, state_manager: StateManager) -> None:
self.state_manager = state_manager self.state_manager = state_manager
@ -42,7 +30,11 @@ class DataServiceObserver(PropertyObserver):
super().__init__(state_manager.service) super().__init__(state_manager.service)
def on_change(self, full_access_path: str, value: Any) -> None: def on_change(self, full_access_path: str, value: Any) -> None:
if _is_nested_attribute(full_access_path, self.changing_attributes): if any(
full_access_path.startswith(changing_attribute)
and full_access_path != changing_attribute
for changing_attribute in self.changing_attributes
):
return return
cached_value_dict: SerializedObject cached_value_dict: SerializedObject
@ -110,7 +102,8 @@ class DataServiceObserver(PropertyObserver):
) )
def _notify_dependent_property_changes(self, changed_attr_path: str) -> None: def _notify_dependent_property_changes(self, changed_attr_path: str) -> None:
changed_props = self.property_deps_dict.get(changed_attr_path, []) normalized_attr_path = normalize_full_access_path_string(changed_attr_path)
changed_props = self.property_deps_dict.get(normalized_attr_path, [])
for prop in changed_props: for prop in changed_props:
# only notify about changing attribute if it is not currently being # only notify about changing attribute if it is not currently being
# "changed" e.g. when calling the getter of a property within another # "changed" e.g. when calling the getter of a property within another

View File

@ -1,4 +1,3 @@
import asyncio
import contextlib import contextlib
import json import json
import logging import logging
@ -17,11 +16,11 @@ from pydase.utils.helpers import (
from pydase.utils.serialization.deserializer import loads from pydase.utils.serialization.deserializer import loads
from pydase.utils.serialization.serializer import ( from pydase.utils.serialization.serializer import (
SerializationPathError, SerializationPathError,
SerializedObject,
generate_serialized_data_paths, generate_serialized_data_paths,
get_nested_dict_by_path, get_nested_dict_by_path,
serialized_dict_is_nested_object, serialized_dict_is_nested_object,
) )
from pydase.utils.serialization.types import SerializedObject
if TYPE_CHECKING: if TYPE_CHECKING:
from pydase import DataService from pydase import DataService
@ -67,41 +66,43 @@ def has_load_state_decorator(prop: property) -> bool:
class StateManager: class StateManager:
""" """
Manages the state of a DataService instance, serving as both a cache and a Manages the state of a DataService instance, serving as both a cache and a
persistence layer. It provides fast access to the most recently known state of the persistence layer. It is designed to provide quick access to the latest known state
service and ensures consistent state updates across connected clients and service for newly connecting web clients without the need for expensive property accesses
restarts. that may involve complex calculations or I/O operations.
The StateManager is used by the web server to apply updates to service attributes The StateManager listens for state change notifications from the DataService's
and to serve the current state to newly connected clients. Internally, it creates a callback manager and updates its cache accordingly. This cache does not always
[`DataServiceCache`][pydase.data_service.data_service_cache.DataServiceCache] reflect the most current complex property states but rather retains the value from
instance to track the state of public attributes and properties. the last known state, optimizing for performance and reducing the load on the
system.
The StateManager also handles state persistence: it can load a previously saved While the StateManager ensures that the cached state is as up-to-date as possible,
state from disk at startup and periodically autosave the current state to a file it does not autonomously update complex properties of the DataService. Such
during runtime. properties must be updated programmatically, for instance, by invoking specific
tasks or methods that trigger the necessary operations to refresh their state.
The cached state maintained by the StateManager is particularly useful for web
clients that connect to the system and need immediate access to the current state of
the DataService. By avoiding direct and potentially costly property accesses, the
StateManager provides a snapshot of the DataService's state that is sufficiently
accurate for initial rendering and interaction.
Args: Args:
service: The DataService instance whose state is being managed. service:
filename: The file name used for loading and storing the DataService's state. The DataService instance whose state is being managed.
If provided, the state is loaded from this file at startup and saved to it filename:
on shutdown or at regular intervals. The file name used for storing the DataService's state.
autosave_interval: Interval in seconds between automatic state save events.
If set to `None`, automatic saving is disabled.
Note: Note:
The StateManager does not autonomously poll hardware state. It relies on the The StateManager's cache updates are triggered by notifications and do not
service to perform such updates. The cache maintained by include autonomous updates of complex DataService properties, which must be
[`DataServiceCache`][pydase.data_service.data_service_cache.DataServiceCache] managed programmatically. The cache serves the purpose of providing immediate
reflects the last known state as notified by the `DataServiceObserver`, and is state information to web clients, reflecting the state after the last property
used by the web interface to provide fast and accurate state rendering for update.
connected clients.
""" """
def __init__( def __init__(
self, self, service: "DataService", filename: str | Path | None = None
service: "DataService",
filename: str | Path | None = None,
autosave_interval: float | None = None,
) -> None: ) -> None:
self.filename = getattr(service, "_filename", None) self.filename = getattr(service, "_filename", None)
@ -114,51 +115,30 @@ class StateManager:
self.service = service self.service = service
self.cache_manager = DataServiceCache(self.service) self.cache_manager = DataServiceCache(self.service)
self.autosave_interval = autosave_interval
async def autosave(self) -> None:
"""Periodically saves the current service state to the configured file.
This coroutine is automatically started by the [`pydase.Server`][pydase.Server]
when a filename is provided. It runs in the background and writes the latest
known state of the service to disk every `autosave_interval` seconds.
If `autosave_interval` is set to `None`, autosaving is disabled and this
coroutine exits immediately.
"""
if self.autosave_interval is None:
return
while True:
try:
if self.filename is not None:
self.save_state()
await asyncio.sleep(self.autosave_interval)
except Exception as e:
logger.exception(e)
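A sketch of constructing the manager directly (inside a running service, `pydase.Server` creates it and schedules `autosave()` on its event loop):

```python
import pydase
from pydase.data_service.state_manager import StateManager


class MyService(pydase.DataService):
    def __init__(self) -> None:
        super().__init__()
        self.value = 1.0


# write "state.json" every 10 seconds; passing autosave_interval=None
# disables the periodic save
manager = StateManager(MyService(), filename="state.json", autosave_interval=10.0)
manager.save_state()  # a one-off, explicit save
```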
@property @property
def cache_value(self) -> dict[str, SerializedObject]: def cache_value(self) -> dict[str, SerializedObject]:
"""Returns the "value" value of the DataService serialization.""" """Returns the "value" value of the DataService serialization."""
return cast("dict[str, SerializedObject]", self.cache_manager.cache["value"]) return cast(dict[str, SerializedObject], self.cache_manager.cache["value"])
def save_state(self) -> None: def save_state(self) -> None:
"""Saves the DataService's current state to a JSON file defined by """
`self.filename`. Saves the DataService's current state to a JSON file defined by `self.filename`.
Logs a debug message if `self.filename` is not set.
""" """
if self.filename is not None: if self.filename is not None:
with open(self.filename, "w") as f: with open(self.filename, "w") as f:
json.dump(self.cache_value, f, indent=4) json.dump(self.cache_value, f, indent=4)
else: else:
logger.debug( logger.info(
"State manager was not initialised with a filename. Skipping " "State manager was not initialised with a filename. Skipping "
"'save_state'..." "'save_state'..."
) )
def load_state(self) -> None: def load_state(self) -> None:
"""Loads the DataService's state from a JSON file defined by `self.filename`. """
Loads the DataService's state from a JSON file defined by `self.filename`.
Updates the service's attributes, respecting type and read-only constraints. Updates the service's attributes, respecting type and read-only constraints.
""" """
@ -203,7 +183,7 @@ class StateManager:
with open(self.filename) as f: with open(self.filename) as f:
# Load JSON data from file and update class attributes with these # Load JSON data from file and update class attributes with these
# values # values
return cast("dict[str, Any]", json.load(f)) return cast(dict[str, Any], json.load(f))
return {} return {}
def set_service_attribute_value_by_path( def set_service_attribute_value_by_path(
@ -211,7 +191,8 @@ class StateManager:
path: str, path: str,
serialized_value: SerializedObject, serialized_value: SerializedObject,
) -> None: ) -> None:
"""Sets the value of an attribute in the service managed by the `StateManager` """
Sets the value of an attribute in the service managed by the `StateManager`
given its path as a dot-separated string. given its path as a dot-separated string.
This method updates the attribute specified by 'path' with 'value' only if the This method updates the attribute specified by 'path' with 'value' only if the

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@ -7,15 +7,15 @@
<meta name="viewport" content="width=device-width, initial-scale=1.0" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" />
<meta name="theme-color" content="#000000" /> <meta name="theme-color" content="#000000" />
<meta name="description" content="Web site displaying a pydase UI." /> <meta name="description" content="Web site displaying a pydase UI." />
<script type="module" crossorigin src="/assets/index-XZbNXHJp.js"></script> <script type="module" crossorigin src="/assets/index-BqF7l_R8.js"></script>
<link rel="stylesheet" crossorigin href="/assets/index-Cs09d5Pk.css"> <link rel="stylesheet" crossorigin href="/assets/index-D2aktF3W.css">
</head> </head>
<script> <script>
// this will be set by the python backend if the service is behind a proxy which strips a prefix. The frontend can use this to build the paths to the resources. // this will be set by the python backend if the service is behind a proxy which strips a prefix. The frontend can use this to build the paths to the resources.
window.__FORWARDED_PREFIX__ = ""; window.__FORWARDED_PREFIX__ = "";
window.__FORWARDED_PROTO__ = ""; window.__FORWARDED_PROTO__ = "";
</script> </script>`
<body> <body>
<noscript>You need to enable JavaScript to run this app.</noscript> <noscript>You need to enable JavaScript to run this app.</noscript>

View File

@ -55,10 +55,6 @@ class Observable(ObservableObject):
value = super().__getattribute__(name) value = super().__getattribute__(name)
if is_property_attribute(self, name): if is_property_attribute(self, name):
# fixes https://github.com/tiqi-group/pydase/issues/187 and
# https://github.com/tiqi-group/pydase/issues/192
if isinstance(value, ObservableObject):
value.add_observer(self, name)
self._notify_changed(name, value) self._notify_changed(name, value)
return value return value

View File

@ -1,6 +1,5 @@
from __future__ import annotations from __future__ import annotations
import contextlib
import logging import logging
import weakref import weakref
from abc import ABC, abstractmethod from abc import ABC, abstractmethod
@ -165,9 +164,9 @@ class _ObservableList(ObservableObject, list[Any]):
self._notify_changed(f"[{key}]", value) self._notify_changed(f"[{key}]", value)
def append(self, object_: Any, /) -> None: def append(self, __object: Any) -> None:
self._notify_change_start("") self._notify_change_start("")
super().append(self._initialise_new_objects(f"[{len(self)}]", object_)) super().append(self._initialise_new_objects(f"[{len(self)}]", __object))
self._notify_changed("", self) self._notify_changed("", self)
def clear(self) -> None: def clear(self) -> None:
@ -177,33 +176,33 @@ class _ObservableList(ObservableObject, list[Any]):
self._notify_changed("", self) self._notify_changed("", self)
def extend(self, iterable: Iterable[Any], /) -> None: def extend(self, __iterable: Iterable[Any]) -> None:
self._remove_self_from_observables() self._remove_self_from_observables()
try: try:
super().extend(iterable) super().extend(__iterable)
finally: finally:
for i, item in enumerate(self): for i, item in enumerate(self):
super().__setitem__(i, self._initialise_new_objects(f"[{i}]", item)) super().__setitem__(i, self._initialise_new_objects(f"[{i}]", item))
self._notify_changed("", self) self._notify_changed("", self)
def insert(self, index: SupportsIndex, object_: Any, /) -> None: def insert(self, __index: SupportsIndex, __object: Any) -> None:
self._remove_self_from_observables() self._remove_self_from_observables()
try: try:
super().insert(index, object_) super().insert(__index, __object)
finally: finally:
for i, item in enumerate(self): for i, item in enumerate(self):
super().__setitem__(i, self._initialise_new_objects(f"[{i}]", item)) super().__setitem__(i, self._initialise_new_objects(f"[{i}]", item))
self._notify_changed("", self) self._notify_changed("", self)
def pop(self, index: SupportsIndex = -1, /) -> Any: def pop(self, __index: SupportsIndex = -1) -> Any:
self._remove_self_from_observables() self._remove_self_from_observables()
try: try:
popped_item = super().pop(index) popped_item = super().pop(__index)
finally: finally:
for i, item in enumerate(self): for i, item in enumerate(self):
super().__setitem__(i, self._initialise_new_objects(f"[{i}]", item)) super().__setitem__(i, self._initialise_new_objects(f"[{i}]", item))
@ -211,11 +210,11 @@ class _ObservableList(ObservableObject, list[Any]):
self._notify_changed("", self) self._notify_changed("", self)
return popped_item return popped_item
def remove(self, value: Any, /) -> None: def remove(self, __value: Any) -> None:
self._remove_self_from_observables() self._remove_self_from_observables()
try: try:
super().remove(value) super().remove(__value)
finally: finally:
for i, item in enumerate(self): for i, item in enumerate(self):
super().__setitem__(i, self._initialise_new_objects(f"[{i}]", item)) super().__setitem__(i, self._initialise_new_objects(f"[{i}]", item))
@ -253,8 +252,7 @@ class _ObservableDict(ObservableObject, dict[str, Any]):
self.__setitem__(key, self._initialise_new_objects(f'["{key}"]', value)) self.__setitem__(key, self._initialise_new_objects(f'["{key}"]', value))
def __del__(self) -> None: def __del__(self) -> None:
with contextlib.suppress(KeyError): self._dict_mapping.pop(id(self._original_dict))
self._dict_mapping.pop(id(self._original_dict))
def __setitem__(self, key: str, value: Any) -> None: def __setitem__(self, key: str, value: Any) -> None:
if not isinstance(key, str): if not isinstance(key, str):

View File

@ -22,7 +22,7 @@ def reverse_dict(original_dict: dict[str, list[str]]) -> dict[str, list[str]]:
def get_property_dependencies(prop: property, prefix: str = "") -> list[str]: def get_property_dependencies(prop: property, prefix: str = "") -> list[str]:
source_code_string = inspect.getsource(prop.fget) # type: ignore[arg-type] source_code_string = inspect.getsource(prop.fget) # type: ignore[arg-type]
pattern = r"self\.([^\s\{\}\(\)]+)" pattern = r"self\.([^\s\{\}]+)"
matches = re.findall(pattern, source_code_string) matches = re.findall(pattern, source_code_string)
return [prefix + match for match in matches if "(" not in match] return [prefix + match for match in matches if "(" not in match]
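For instance, for the (hypothetical) property below, the regex collects the attribute names accessed via `self.` in the getter source:

```python
class Sensor:
    @property
    def reading(self) -> float:
        return self._gain * self._raw


# plain attribute accesses become dependency paths
get_property_dependencies(Sensor.reading)  # -> ["_gain", "_raw"]
```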
@ -100,7 +100,7 @@ class PropertyObserver(Observer):
elif isinstance(collection, dict): elif isinstance(collection, dict):
for key, val in collection.items(): for key, val in collection.items():
if isinstance(val, Observable): if isinstance(val, Observable):
new_prefix = f'{parent_path}["{key}"]' new_prefix = f"{parent_path}['{key}']"
deps.update( deps.update(
self._get_properties_and_their_dependencies(val, new_prefix) self._get_properties_and_their_dependencies(val, new_prefix)
) )

View File

@ -14,6 +14,7 @@ from pydase.data_service.data_service_observer import DataServiceObserver
from pydase.data_service.state_manager import StateManager from pydase.data_service.state_manager import StateManager
from pydase.server.web_server import WebServer from pydase.server.web_server import WebServer
from pydase.task.autostart import autostart_service_tasks from pydase.task.autostart import autostart_service_tasks
from pydase.utils.helpers import current_event_loop_exists
HANDLED_SIGNALS = ( HANDLED_SIGNALS = (
signal.SIGINT, # Unix signal 2. Sent by Ctrl+C. signal.SIGINT, # Unix signal 2. Sent by Ctrl+C.
@ -83,17 +84,21 @@ class Server:
The `Server` class provides a flexible server implementation for the `DataService`. The `Server` class provides a flexible server implementation for the `DataService`.
Args: Args:
service: The DataService instance that this server will manage. service:
host: The host address for the server. Defaults to `'0.0.0.0'`, which means all The DataService instance that this server will manage.
host:
The host address for the server. Defaults to `'0.0.0.0'`, which means all
available network interfaces. available network interfaces.
web_port: The port number for the web server. If set to None, it will use the web_port:
port defined in The port number for the web server. Defaults to
[`ServiceConfig().web_port`][pydase.config.ServiceConfig.web_port]. Defaults [`ServiceConfig().web_port`][pydase.config.ServiceConfig.web_port].
to None. enable_web:
enable_web: Whether to enable the web server. Whether to enable the web server.
filename: Filename of the file managing the service state persistence. filename:
additional_servers: A list of additional servers to run alongside the main Filename of the file managing the service state persistence.
server. additional_servers:
A list of additional servers to run alongside the main server.
Here's an example of how you might define an additional server: Here's an example of how you might define an additional server:
```python ```python
@ -132,47 +137,39 @@ class Server:
) )
server.run() server.run()
``` ```
autosave_interval: Interval in seconds between automatic state save events. **kwargs:
If set to `None`, automatic saving is disabled. Defaults to 30 seconds. Additional keyword arguments.
**kwargs: Additional keyword arguments.
""" """
def __init__( # noqa: PLR0913 def __init__( # noqa: PLR0913
self, self,
service: DataService, service: DataService,
host: str = "0.0.0.0", host: str = "0.0.0.0",
web_port: int | None = None, web_port: int = ServiceConfig().web_port,
enable_web: bool = True, enable_web: bool = True,
filename: str | Path | None = None, filename: str | Path | None = None,
additional_servers: list[AdditionalServer] | None = None, additional_servers: list[AdditionalServer] | None = None,
autosave_interval: float = 30.0,
**kwargs: Any, **kwargs: Any,
) -> None: ) -> None:
if additional_servers is None: if additional_servers is None:
additional_servers = [] additional_servers = []
self._service = service self._service = service
self._host = host self._host = host
if web_port is None: self._web_port = web_port
self._web_port = ServiceConfig().web_port
else:
self._web_port = web_port
self._enable_web = enable_web self._enable_web = enable_web
self._kwargs = kwargs self._kwargs = kwargs
self._additional_servers = additional_servers self._additional_servers = additional_servers
self.should_exit = False self.should_exit = False
self.servers: dict[str, asyncio.Future[Any]] = {} self.servers: dict[str, asyncio.Future[Any]] = {}
self._state_manager = StateManager(self._service, filename)
self._loop = asyncio.new_event_loop()
asyncio.set_event_loop(self._loop)
self._state_manager = StateManager(
service=self._service,
filename=filename,
autosave_interval=autosave_interval,
)
self._observer = DataServiceObserver(self._state_manager) self._observer = DataServiceObserver(self._state_manager)
self._state_manager.load_state() self._state_manager.load_state()
autostart_service_tasks(self._service) autostart_service_tasks(self._service)
if not current_event_loop_exists():
self._loop = asyncio.new_event_loop()
asyncio.set_event_loop(self._loop)
else:
self._loop = asyncio.get_event_loop()
def run(self) -> None: def run(self) -> None:
""" """
@ -180,10 +177,7 @@ class Server:
This method should be called to start the server after it's been instantiated. This method should be called to start the server after it's been instantiated.
""" """
try: self._loop.run_until_complete(self.serve())
self._loop.run_until_complete(self.serve())
finally:
self._loop.close()
async def serve(self) -> None: async def serve(self) -> None:
process_id = os.getpid() process_id = os.getpid()
@ -229,8 +223,6 @@ class Server:
server_task.add_done_callback(self._handle_server_shutdown) server_task.add_done_callback(self._handle_server_shutdown)
self.servers["web"] = server_task self.servers["web"] = server_task
self._loop.create_task(self._state_manager.autosave())
def _handle_server_shutdown(self, task: asyncio.Task[Any]) -> None: def _handle_server_shutdown(self, task: asyncio.Task[Any]) -> None:
"""Handle server shutdown. If the service should exit, do nothing. Else, make """Handle server shutdown. If the service should exit, do nothing. Else, make
the service exit.""" the service exit."""
@ -266,7 +258,7 @@ class Server:
except asyncio.CancelledError: except asyncio.CancelledError:
logger.debug("Cancelled '%s' server.", server_name) logger.debug("Cancelled '%s' server.", server_name)
except Exception as e: except Exception as e:
logger.exception("Unexpected exception: %s", e) logger.error("Unexpected exception: %s", e)
async def __cancel_tasks(self) -> None: async def __cancel_tasks(self) -> None:
for task in asyncio.all_tasks(self._loop): for task in asyncio.all_tasks(self._loop):

View File

@ -1,11 +1,9 @@
import inspect import inspect
import logging import logging
from functools import partial
from typing import TYPE_CHECKING from typing import TYPE_CHECKING
import aiohttp.web import aiohttp.web
import aiohttp_middlewares.error import aiohttp_middlewares.error
import click
from pydase.data_service.state_manager import StateManager from pydase.data_service.state_manager import StateManager
from pydase.server.web_server.api.v1.endpoints import ( from pydase.server.web_server.api.v1.endpoints import (
@ -27,14 +25,12 @@ STATUS_FAILED = 400
async def _get_value( async def _get_value(
request: aiohttp.web.Request, state_manager: StateManager state_manager: StateManager, request: aiohttp.web.Request
) -> aiohttp.web.Response: ) -> aiohttp.web.Response:
log_id = get_log_id(request) logger.info("Handle api request: %s", request)
access_path = request.rel_url.query["access_path"] access_path = request.rel_url.query["access_path"]
logger.info("Client [%s] is getting the value of '%s'", log_id, access_path)
status = STATUS_OK status = STATUS_OK
try: try:
result = get_value(state_manager, access_path) result = get_value(state_manager, access_path)
@ -46,16 +42,10 @@ async def _get_value(
async def _update_value( async def _update_value(
request: aiohttp.web.Request, state_manager: StateManager state_manager: StateManager, request: aiohttp.web.Request
) -> aiohttp.web.Response: ) -> aiohttp.web.Response:
log_id = get_log_id(request)
data: UpdateDict = await request.json() data: UpdateDict = await request.json()
logger.info(
"Client [%s] is updating the value of '%s'", log_id, data["access_path"]
)
try: try:
update_value(state_manager, data) update_value(state_manager, data)
@ -66,17 +56,11 @@ async def _update_value(
async def _trigger_method( async def _trigger_method(
request: aiohttp.web.Request, state_manager: StateManager state_manager: StateManager, request: aiohttp.web.Request
) -> aiohttp.web.Response: ) -> aiohttp.web.Response:
log_id = get_log_id(request)
data: TriggerMethodDict = await request.json() data: TriggerMethodDict = await request.json()
access_path = data["access_path"] method = get_object_attr_from_path(state_manager.service, data["access_path"])
logger.info("Client [%s] is triggering the method '%s'", log_id, access_path)
method = get_object_attr_from_path(state_manager.service, access_path)
try: try:
if inspect.iscoroutinefunction(method): if inspect.iscoroutinefunction(method):
@ -93,33 +77,22 @@ async def _trigger_method(
return aiohttp.web.json_response(dump(e), status=STATUS_FAILED) return aiohttp.web.json_response(dump(e), status=STATUS_FAILED)
def get_log_id(request: aiohttp.web.Request) -> str:
client_id_header = request.headers.get("x-client-id", None)
remote_username_header = request.headers.get("remote-user", None)
if remote_username_header is not None:
log_id = f"user={click.style(remote_username_header, fg='cyan')}"
elif client_id_header is not None:
log_id = f"id={click.style(client_id_header, fg='cyan')}"
else:
log_id = f"id={click.style(None, fg='cyan')}"
return log_id
def create_api_application(state_manager: StateManager) -> aiohttp.web.Application: def create_api_application(state_manager: StateManager) -> aiohttp.web.Application:
api_application = aiohttp.web.Application( api_application = aiohttp.web.Application(
middlewares=(aiohttp_middlewares.error.error_middleware(),) middlewares=(aiohttp_middlewares.error.error_middleware(),)
) )
api_application.router.add_get( api_application.router.add_get(
"/get_value", partial(_get_value, state_manager=state_manager) "/get_value",
lambda request: _get_value(state_manager=state_manager, request=request),
) )
api_application.router.add_put( api_application.router.add_put(
"/update_value", partial(_update_value, state_manager=state_manager) "/update_value",
lambda request: _update_value(state_manager=state_manager, request=request),
) )
api_application.router.add_put( api_application.router.add_put(
"/trigger_method", partial(_trigger_method, state_manager=state_manager) "/trigger_method",
lambda request: _trigger_method(state_manager=state_manager, request=request),
) )
return api_application return api_application
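A sketch of calling one of these endpoints from a script; the `/api/v1` prefix and the `device.value` access path are assumptions about a concrete deployment, and the `X-Client-Id` header is what `get_log_id()` above picks up:

```python
import json
import urllib.request

# assumes the API application is mounted under "/api/v1" and that the
# service exposes an attribute reachable as "device.value"
request = urllib.request.Request(
    "http://localhost:8001/api/v1/get_value?access_path=device.value",
    headers={"X-Client-Id": "my-script"},  # logged as id=my-script
)
with urllib.request.urlopen(request) as response:
    print(json.load(response))
```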

View File

@ -20,7 +20,7 @@ from pydase.data_service.data_service_observer import DataServiceObserver
from pydase.data_service.state_manager import StateManager from pydase.data_service.state_manager import StateManager
from pydase.server.web_server.api.v1 import endpoints from pydase.server.web_server.api.v1 import endpoints
from pydase.utils.logging import SocketIOHandler from pydase.utils.logging import SocketIOHandler
from pydase.utils.serialization.types import SerializedObject from pydase.utils.serialization.serializer import SerializedObject
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
@ -141,41 +141,22 @@ def setup_sio_server(
def setup_sio_events(sio: socketio.AsyncServer, state_manager: StateManager) -> None: # noqa: C901 def setup_sio_events(sio: socketio.AsyncServer, state_manager: StateManager) -> None: # noqa: C901
@sio.event # type: ignore @sio.event # type: ignore
async def connect(sid: str, environ: Any) -> None: async def connect(sid: str, environ: Any) -> None:
client_id_header = environ.get("HTTP_X_CLIENT_ID", None) logger.debug("Client [%s] connected", click.style(str(sid), fg="cyan"))
remote_username_header = environ.get("HTTP_REMOTE_USER", None)
if remote_username_header is not None:
log_id = f"user={click.style(remote_username_header, fg='cyan')}"
elif client_id_header is not None:
log_id = f"id={click.style(client_id_header, fg='cyan')}"
else:
log_id = f"sid={click.style(sid, fg='cyan')}"
async with sio.session(sid) as session:
session["client_id"] = log_id
logger.info("Client [%s] connected", session["client_id"])
@sio.event # type: ignore @sio.event # type: ignore
async def disconnect(sid: str) -> None: async def disconnect(sid: str) -> None:
async with sio.session(sid) as session: logger.debug("Client [%s] disconnected", click.style(str(sid), fg="cyan"))
logger.info("Client [%s] disconnected", session["client_id"])
@sio.event # type: ignore @sio.event # type: ignore
async def service_serialization(sid: str) -> SerializedObject: async def service_serialization(sid: str) -> SerializedObject:
async with sio.session(sid) as session: logger.debug(
logger.info( "Client [%s] requested service serialization",
"Client [%s] requested service serialization", session["client_id"] click.style(str(sid), fg="cyan"),
) )
return state_manager.cache_manager.cache return state_manager.cache_manager.cache
@sio.event @sio.event
async def update_value(sid: str, data: UpdateDict) -> SerializedObject | None: async def update_value(sid: str, data: UpdateDict) -> SerializedObject | None:
async with sio.session(sid) as session:
logger.info(
"Client [%s] is updating the value of '%s'",
session["client_id"],
data["access_path"],
)
try: try:
endpoints.update_value(state_manager=state_manager, data=data) endpoints.update_value(state_manager=state_manager, data=data)
except Exception as e: except Exception as e:
@ -185,12 +166,6 @@ def setup_sio_events(sio: socketio.AsyncServer, state_manager: StateManager) ->
@sio.event @sio.event
async def get_value(sid: str, access_path: str) -> SerializedObject: async def get_value(sid: str, access_path: str) -> SerializedObject:
async with sio.session(sid) as session:
logger.info(
"Client [%s] is getting the value of '%s'",
session["client_id"],
access_path,
)
try: try:
return endpoints.get_value( return endpoints.get_value(
state_manager=state_manager, access_path=access_path state_manager=state_manager, access_path=access_path
@ -201,23 +176,16 @@ def setup_sio_events(sio: socketio.AsyncServer, state_manager: StateManager) ->
@sio.event @sio.event
async def trigger_method(sid: str, data: TriggerMethodDict) -> Any: async def trigger_method(sid: str, data: TriggerMethodDict) -> Any:
async with sio.session(sid) as session: method = get_object_attr_from_path(state_manager.service, data["access_path"])
logger.info(
"Client [%s] is triggering the method '%s'",
session["client_id"],
data["access_path"],
)
try: try:
method = get_object_attr_from_path(
state_manager.service, data["access_path"]
)
if inspect.iscoroutinefunction(method): if inspect.iscoroutinefunction(method):
return await endpoints.trigger_async_method( return await endpoints.trigger_async_method(
state_manager=state_manager, data=data state_manager=state_manager, data=data
) )
return endpoints.trigger_method(state_manager=state_manager, data=data) return endpoints.trigger_method(state_manager=state_manager, data=data)
except Exception as e: except Exception as e:
logger.exception(e) logger.error(e)
return dump(e) return dump(e)

View File

@ -150,7 +150,10 @@ class WebServer:
f"{escaped_prefix}/favicon.ico", f"{escaped_prefix}/favicon.ico",
) )
return aiohttp.web.Response(text=modified_html, content_type="text/html") return aiohttp.web.Response(
text=modified_html, content_type="text/html"
)
return aiohttp.web.FileResponse(self.frontend_src / "index.html")
app = aiohttp.web.Application() app = aiohttp.web.Application()

View File

@ -26,25 +26,15 @@ class PerInstanceTaskDescriptor(Generic[R]):
the service class. the service class.
""" """
def __init__( # noqa: PLR0913 def __init__(
self, self,
func: Callable[[Any], Coroutine[None, None, R]] func: Callable[[Any], Coroutine[None, None, R]]
| Callable[[], Coroutine[None, None, R]], | Callable[[], Coroutine[None, None, R]],
autostart: bool, autostart: bool = False,
restart_on_exception: bool,
restart_sec: float,
start_limit_interval_sec: float | None,
start_limit_burst: int,
exit_on_failure: bool,
) -> None: ) -> None:
self.__func = func self.__func = func
self.__autostart = autostart self.__autostart = autostart
self.__task_instances: dict[object, Task[R]] = {} self.__task_instances: dict[object, Task[R]] = {}
self.__restart_on_exception = restart_on_exception
self.__restart_sec = restart_sec
self.__start_limit_interval_sec = start_limit_interval_sec
self.__start_limit_burst = start_limit_burst
self.__exit_on_failure = exit_on_failure
def __set_name__(self, owner: type[DataService], name: str) -> None: def __set_name__(self, owner: type[DataService], name: str) -> None:
"""Stores the name of the task within the owning class. This method is called """Stores the name of the task within the owning class. This method is called
@ -77,28 +67,14 @@ class PerInstanceTaskDescriptor(Generic[R]):
if instance not in self.__task_instances: if instance not in self.__task_instances:
self.__task_instances[instance] = instance._initialise_new_objects( self.__task_instances[instance] = instance._initialise_new_objects(
self.__task_name, self.__task_name,
Task( Task(self.__func.__get__(instance, owner), autostart=self.__autostart),
self.__func.__get__(instance, owner),
autostart=self.__autostart,
restart_on_exception=self.__restart_on_exception,
restart_sec=self.__restart_sec,
start_limit_interval_sec=self.__start_limit_interval_sec,
start_limit_burst=self.__start_limit_burst,
exit_on_failure=self.__exit_on_failure,
),
) )
return self.__task_instances[instance] return self.__task_instances[instance]
def task( # noqa: PLR0913 def task(
*, *, autostart: bool = False
autostart: bool = False,
restart_on_exception: bool = True,
restart_sec: float = 1.0,
start_limit_interval_sec: float | None = None,
start_limit_burst: int = 3,
exit_on_failure: bool = False,
) -> Callable[ ) -> Callable[
[ [
Callable[[Any], Coroutine[None, None, R]] Callable[[Any], Coroutine[None, None, R]]
@ -120,30 +96,13 @@ def task( # noqa: PLR0913
periodically or perform asynchronous operations, such as polling data sources, periodically or perform asynchronous operations, such as polling data sources,
updating databases, or any recurring job that should be managed within the context updating databases, or any recurring job that should be managed within the context
of a `DataService`. of a `DataService`.
The keyword arguments that can be passed to this decorator are inspired by systemd
unit services.
Args: Args:
autostart: autostart:
If set to True, the task will automatically start when the service is If set to True, the task will automatically start when the service is
initialized. Defaults to False. initialized. Defaults to False.
restart_on_exception:
Configures whether the task shall be restarted when it exits with an
exception other than [`asyncio.CancelledError`][asyncio.CancelledError].
restart_sec:
Configures the time to sleep before restarting a task. Defaults to 1.0.
start_limit_interval_sec:
Configures start rate limiting. Tasks which are started more than
`start_limit_burst` times within a `start_limit_interval_sec` time span are
not permitted to start any more. Defaults to None (disabled rate limiting).
start_limit_burst:
Configures unit start rate limiting. Tasks which are started more than
`start_limit_burst` times within a `start_limit_interval_sec` time span are
not permitted to start any more. Defaults to 3.
exit_on_failure:
If True, exit the service if the task fails and restart_on_exception is
False or burst limits are exceeded.
Returns: Returns:
A decorator that wraps an asynchronous function in a A decorator that wraps an asynchronous function in a
[`PerInstanceTaskDescriptor`][pydase.task.decorator.PerInstanceTaskDescriptor] [`PerInstanceTaskDescriptor`][pydase.task.decorator.PerInstanceTaskDescriptor]
@ -181,14 +140,6 @@ def task( # noqa: PLR0913
func: Callable[[Any], Coroutine[None, None, R]] func: Callable[[Any], Coroutine[None, None, R]]
| Callable[[], Coroutine[None, None, R]], | Callable[[], Coroutine[None, None, R]],
) -> PerInstanceTaskDescriptor[R]: ) -> PerInstanceTaskDescriptor[R]:
return PerInstanceTaskDescriptor( return PerInstanceTaskDescriptor(func, autostart=autostart)
func,
autostart=autostart,
restart_on_exception=restart_on_exception,
restart_sec=restart_sec,
start_limit_interval_sec=start_limit_interval_sec,
start_limit_burst=start_limit_burst,
exit_on_failure=exit_on_failure,
)
return decorator return decorator
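Putting the systemd-style options together, a sketch of a supervised background task:

```python
import asyncio

import pydase
from pydase.task.decorator import task


class MyService(pydase.DataService):
    # on an exception, wait 5 s and restart; give up once the task has
    # failed more than 3 times within any 60 s window
    @task(
        autostart=True,
        restart_on_exception=True,
        restart_sec=5.0,
        start_limit_interval_sec=60.0,
        start_limit_burst=3,
    )
    async def read_sensor(self) -> None:
        while True:
            await asyncio.sleep(1.0)  # poll hardware here
```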

View File

@ -1,10 +1,7 @@
import asyncio import asyncio
import inspect
import logging import logging
import os
import signal
from collections.abc import Callable, Coroutine from collections.abc import Callable, Coroutine
from datetime import datetime
from time import time
from typing import ( from typing import (
Generic, Generic,
TypeVar, TypeVar,
@ -31,9 +28,6 @@ class Task(pydase.data_service.data_service.DataService, Generic[R]):
decorator, it is replaced by a `Task` instance that controls the execution of the decorator, it is replaced by a `Task` instance that controls the execution of the
original function. original function.
The keyword arguments that can be passed to this class are inspired by systemd unit
services.
Args: Args:
func: func:
The asynchronous function that this task wraps. It must be a coroutine The asynchronous function that this task wraps. It must be a coroutine
@ -41,22 +35,6 @@ class Task(pydase.data_service.data_service.DataService, Generic[R]):
autostart: autostart:
If set to True, the task will automatically start when the service is If set to True, the task will automatically start when the service is
initialized. Defaults to False. initialized. Defaults to False.
restart_on_exception:
Configures whether the task shall be restarted when it exits with an
exception other than [`asyncio.CancelledError`][asyncio.CancelledError].
restart_sec:
Configures the time to sleep before restarting a task. Defaults to 1.0.
start_limit_interval_sec:
Configures start rate limiting. Tasks which are started more than
`start_limit_burst` times within a `start_limit_interval_sec` time span are
not permitted to start any more. Defaults to None (disabled rate limiting).
start_limit_burst:
Configures unit start rate limiting. Tasks which are started more than
`start_limit_burst` times within a `start_limit_interval_sec` time span are
not permitted to start any more. Defaults to 3.
exit_on_failure:
If True, exit the service if the task fails and restart_on_exception is
False or burst limits are exceeded.
Example: Example:
```python ```python
@ -85,24 +63,14 @@ class Task(pydase.data_service.data_service.DataService, Generic[R]):
`service.my_task.start()` and `service.my_task.stop()`, respectively. `service.my_task.start()` and `service.my_task.stop()`, respectively.
""" """
def __init__( # noqa: PLR0913 def __init__(
self, self,
func: Callable[[], Coroutine[None, None, R | None]], func: Callable[[], Coroutine[None, None, R | None]],
*, *,
autostart: bool, autostart: bool = False,
restart_on_exception: bool,
restart_sec: float,
start_limit_interval_sec: float | None,
start_limit_burst: int,
exit_on_failure: bool,
) -> None: ) -> None:
super().__init__() super().__init__()
self._autostart = autostart self._autostart = autostart
self._restart_on_exception = restart_on_exception
self._restart_sec = restart_sec
self._start_limit_interval_sec = start_limit_interval_sec
self._start_limit_burst = start_limit_burst
self._exit_on_failure = exit_on_failure
self._func_name = func.__name__ self._func_name = func.__name__
self._func = func self._func = func
self._task: asyncio.Task[R | None] | None = None self._task: asyncio.Task[R | None] | None = None
@ -141,95 +109,38 @@ class Task(pydase.data_service.data_service.DataService, Generic[R]):
self._task = None self._task = None
self._status = TaskStatus.NOT_RUNNING self._status = TaskStatus.NOT_RUNNING
exception = None exception = task.exception()
try:
exception = task.exception()
except asyncio.CancelledError:
return
if exception is not None: if exception is not None:
# Handle the exception, or you can re-raise it.
logger.error( logger.error(
"Task '%s' encountered an exception: %r", "Task '%s' encountered an exception: %s: %s",
self._func_name, self._func_name,
type(exception).__name__,
exception, exception,
) )
os.kill(os.getpid(), signal.SIGTERM) raise exception
else:
self._result = task.result() self._result = task.result()
async def run_task() -> R | None:
if inspect.iscoroutinefunction(self._func):
logger.info("Starting task %r", self._func_name)
self._status = TaskStatus.RUNNING
res: Coroutine[None, None, R | None] = self._func()
try:
return await res
except asyncio.CancelledError:
logger.info("Task '%s' was cancelled", self._func_name)
return None
logger.warning(
"Cannot start task %r. Function has not been bound yet", self._func_name
)
return None
logger.info("Creating task %r", self._func_name) logger.info("Creating task %r", self._func_name)
self._task = self._loop.create_task(self.__running_task_loop()) self._task = self._loop.create_task(run_task())
self._task.add_done_callback(task_done_callback) self._task.add_done_callback(task_done_callback)
async def __running_task_loop(self) -> R | None:
logger.info("Starting task %r", self._func_name)
self._status = TaskStatus.RUNNING
attempts = 0
start_time_of_start_limit_interval = None
while True:
try:
return await self._func()
except asyncio.CancelledError:
logger.info("Task '%s' was cancelled", self._func_name)
raise
except Exception as e:
attempts, start_time_of_start_limit_interval = (
self._handle_task_exception(
e, attempts, start_time_of_start_limit_interval
)
)
if not self._should_restart_task(
attempts, start_time_of_start_limit_interval
):
if self._exit_on_failure:
raise e
break
await asyncio.sleep(self._restart_sec)
return None
def _handle_task_exception(
self,
exception: Exception,
attempts: int,
start_time_of_start_limit_interval: float | None,
) -> tuple[int, float]:
"""Handle an exception raised during task execution."""
if start_time_of_start_limit_interval is None:
start_time_of_start_limit_interval = time()
attempts += 1
logger.exception(
"Task %r encountered an exception: %r [attempt %s since %s].",
self._func.__name__,
exception,
attempts,
datetime.fromtimestamp(start_time_of_start_limit_interval),
)
return attempts, start_time_of_start_limit_interval
def _should_restart_task(
self, attempts: int, start_time_of_start_limit_interval: float
) -> bool:
"""Determine if the task should be restarted."""
if not self._restart_on_exception:
return False
if self._start_limit_interval_sec is not None:
if (
time() - start_time_of_start_limit_interval
) > self._start_limit_interval_sec:
# Reset attempts if interval is exceeded
start_time_of_start_limit_interval = time()
attempts = 1
elif attempts > self._start_limit_burst:
logger.error(
"Task %r exceeded restart burst limit. Stopping.",
self._func.__name__,
)
return False
return True
def stop(self) -> None: def stop(self) -> None:
"""Stops the running asynchronous task by cancelling it.""" """Stops the running asynchronous task by cancelling it."""

View File

@ -219,18 +219,29 @@ def is_descriptor(obj: object) -> bool:
def current_event_loop_exists() -> bool: def current_event_loop_exists() -> bool:
"""Check if a running and open asyncio event loop exists in the current thread. """Check if an event loop has been set."""
This checks if an event loop is set via the current event loop policy and verifies
that the loop has not been closed.
Returns:
True if an event loop exists and is not closed, False otherwise.
"""
import asyncio import asyncio
try: return asyncio.get_event_loop_policy()._local._loop is not None # type: ignore
return not asyncio.get_event_loop().is_closed()
except RuntimeError:
return False def normalize_full_access_path_string(s: str) -> str:
"""Normalizes a string representing a full access path by converting double quotes
to single quotes.
This function is useful for ensuring consistency in strings that represent access
paths containing dictionary keys, by replacing all double quotes (`"`) with single
quotes (`'`).
Args:
s (str): The input string to be normalized.
Returns:
A new string with all double quotes replaced by single quotes.
Example:
>>> normalize_full_access_path_string('dictionary["first"].my_task')
"dictionary['first'].my_task"
"""
return s.replace('"', "'")

View File

@ -4,7 +4,7 @@ import logging.config
import sys import sys
from collections.abc import Callable from collections.abc import Callable
from copy import copy from copy import copy
from typing import ClassVar, Literal, TextIO from typing import ClassVar, Literal
import click import click
import socketio # type: ignore[import-untyped] import socketio # type: ignore[import-untyped]
@ -29,44 +29,22 @@ LOGGING_CONFIG = {
"datefmt": "%Y-%m-%d %H:%M:%S", "datefmt": "%Y-%m-%d %H:%M:%S",
}, },
}, },
"filters": {
"only_pydase_server": {
"()": "pydase.utils.logging.NameFilter",
"match": "pydase.server",
},
"exclude_pydase_server": {
"()": "pydase.utils.logging.NameFilter",
"match": "pydase.server",
"invert": True,
},
},
"handlers": { "handlers": {
"stdout_handler": { "default": {
"formatter": "default", "formatter": "default",
"class": "logging.StreamHandler", "class": "logging.StreamHandler",
"stream": "ext://sys.stdout", "stream": "ext://sys.stdout",
"filters": ["only_pydase_server"],
},
"stderr_handler": {
"formatter": "default",
"class": "logging.StreamHandler",
"stream": "ext://sys.stderr",
"filters": ["exclude_pydase_server"],
}, },
}, },
"loggers": { "loggers": {
"pydase": { "pydase": {"handlers": ["default"], "level": LOG_LEVEL, "propagate": False},
"handlers": ["stdout_handler", "stderr_handler"],
"level": LOG_LEVEL,
"propagate": False,
},
"aiohttp_middlewares": { "aiohttp_middlewares": {
"handlers": ["stderr_handler"], "handlers": ["default"],
"level": logging.WARNING, "level": logging.WARNING,
"propagate": False, "propagate": False,
}, },
"aiohttp": { "aiohttp": {
"handlers": ["stderr_handler"], "handlers": ["default"],
"level": logging.INFO, "level": logging.INFO,
"propagate": False, "propagate": False,
}, },
@ -74,23 +52,6 @@ LOGGING_CONFIG = {
} }
class NameFilter(logging.Filter):
"""
Logging filter that includes or excludes log records based on the logger name.
Records from loggers whose names start with `match` are kept; with
`invert=True`, such records are excluded instead.
"""
def __init__(self, match: str, invert: bool = False):
super().__init__()
self.match = match
self.invert = invert
def filter(self, record: logging.LogRecord) -> bool:
if self.invert:
return not record.name.startswith(self.match)
return record.name.startswith(self.match)
class DefaultFormatter(logging.Formatter): class DefaultFormatter(logging.Formatter):
""" """
A custom log formatter class that: A custom log formatter class that:
@ -189,51 +150,3 @@ def setup_logging() -> None:
logger.debug("Configuring pydase logging.") logger.debug("Configuring pydase logging.")
logging.config.dictConfig(LOGGING_CONFIG) logging.config.dictConfig(LOGGING_CONFIG)
def configure_logging_with_pydase_formatter(
name: str | None = None, level: int = logging.INFO, stream: TextIO | None = None
) -> None:
"""Configure a logger with the pydase `DefaultFormatter`.
This sets up a `StreamHandler` with the custom `DefaultFormatter`, which includes
timestamp, log level with color (if supported), logger name, function, and line
number. It can be used to configure the root logger or any named logger.
Args:
name: The name of the logger to configure. If None, the root logger is used.
level: The logging level to set on the logger (e.g., logging.DEBUG,
logging.INFO). Defaults to logging.INFO.
stream: The output stream for the log messages (e.g., sys.stdout or sys.stderr).
If None, defaults to sys.stderr.
Example:
Configure logging in your service:
```python
import logging
import sys
from pydase.utils.logging import configure_logging_with_pydase_formatter
configure_logging_with_pydase_formatter(
name="my_service", # Use the package/module name or None for the root logger
level=logging.DEBUG, # Set the desired logging level (defaults to INFO)
stream=sys.stdout # Set the output stream (stderr by default)
)
```
Notes:
- This function adds a new handler each time it's called.
Use carefully to avoid duplicate logs.
- Colors are enabled if the stream supports TTY (e.g., in terminal).
""" # noqa: E501
logger = logging.getLogger(name=name)
handler = logging.StreamHandler(stream=stream)
formatter = DefaultFormatter(
fmt="%(asctime)s.%(msecs)03d | %(levelprefix)s | "
"%(name)s:%(funcName)s:%(lineno)d - %(message)s",
datefmt="%Y-%m-%d %H:%M:%S",
)
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(level)
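For reference, the stdout/stderr split that the `dictConfig` above expresses via the `only_pydase_server`/`exclude_pydase_server` filters can also be wired up programmatically with `NameFilter`; a minimal sketch:

```python
import logging
import sys

from pydase.utils.logging import NameFilter

# Send pydase.server records to stdout and everything else under "pydase"
# to stderr -- the same routing as the dictConfig above.
stdout_handler = logging.StreamHandler(sys.stdout)
stdout_handler.addFilter(NameFilter("pydase.server"))

stderr_handler = logging.StreamHandler(sys.stderr)
stderr_handler.addFilter(NameFilter("pydase.server", invert=True))

pydase_logger = logging.getLogger("pydase")
pydase_logger.addHandler(stdout_handler)
pydase_logger.addHandler(stderr_handler)
```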

View File

@ -85,7 +85,7 @@ class Deserializer:
def deserialize_list(cls, serialized_object: SerializedObject) -> Any: def deserialize_list(cls, serialized_object: SerializedObject) -> Any:
return [ return [
cls.deserialize(item) cls.deserialize(item)
for item in cast("list[SerializedObject]", serialized_object["value"]) for item in cast(list[SerializedObject], serialized_object["value"])
] ]
@classmethod @classmethod
@ -93,7 +93,7 @@ class Deserializer:
return { return {
key: cls.deserialize(value) key: cls.deserialize(value)
for key, value in cast( for key, value in cast(
"dict[str, SerializedObject]", serialized_object["value"] dict[str, SerializedObject], serialized_object["value"]
).items() ).items()
} }
@ -148,7 +148,7 @@ class Deserializer:
# Process and add properties based on the serialized object # Process and add properties based on the serialized object
for key, value in cast( for key, value in cast(
"dict[str, SerializedObject]", serialized_object["value"] dict[str, SerializedObject], serialized_object["value"]
).items(): ).items():
if value["type"] != "method": if value["type"] != "method":
class_attrs[key] = cls.create_attr_property(value) class_attrs[key] = cls.create_attr_property(value)

View File

@ -20,29 +20,29 @@ from pydase.utils.helpers import (
parse_full_access_path, parse_full_access_path,
parse_serialized_key, parse_serialized_key,
) )
from pydase.utils.serialization.types import (
DataServiceTypes,
SerializedBool,
SerializedDataService,
SerializedDatetime,
SerializedDict,
SerializedEnum,
SerializedException,
SerializedFloat,
SerializedInteger,
SerializedList,
SerializedMethod,
SerializedNoneType,
SerializedObject,
SerializedQuantity,
SerializedString,
SignatureDict,
)
if TYPE_CHECKING: if TYPE_CHECKING:
from collections.abc import Callable from collections.abc import Callable
from pydase.client.proxy_class import ProxyClass from pydase.client.proxy_class import ProxyClass
from pydase.utils.serialization.types import (
DataServiceTypes,
SerializedBool,
SerializedDataService,
SerializedDatetime,
SerializedDict,
SerializedEnum,
SerializedException,
SerializedFloat,
SerializedInteger,
SerializedList,
SerializedMethod,
SerializedNoneType,
SerializedObject,
SerializedQuantity,
SerializedString,
SignatureDict,
)
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
@ -253,7 +253,7 @@ class Serializer:
for k, v in sig.parameters.items(): for k, v in sig.parameters.items():
default_value = cast( default_value = cast(
"dict[str, Any]", {} if v.default == inspect._empty else dump(v.default) dict[str, Any], {} if v.default == inspect._empty else dump(v.default)
) )
default_value.pop("full_access_path", None) default_value.pop("full_access_path", None)
signature["parameters"][k] = { signature["parameters"][k] = {
@ -385,7 +385,7 @@ def set_nested_value_by_path(
current_dict, path_part, allow_append=False current_dict, path_part, allow_append=False
) )
current_dict = cast( current_dict = cast(
"dict[Any, SerializedObject]", dict[Any, SerializedObject],
next_level_serialized_object["value"], next_level_serialized_object["value"],
) )
@ -393,7 +393,7 @@ def set_nested_value_by_path(
current_dict, path_parts[-1], allow_append=True current_dict, path_parts[-1], allow_append=True
) )
except (SerializationPathError, KeyError) as e: except (SerializationPathError, KeyError) as e:
logger.exception("Error occured trying to change %a: %s", path, e) logger.error("Error occured trying to change %a: %s", path, e)
return return
if next_level_serialized_object["type"] == "method": # state change of task if next_level_serialized_object["type"] == "method": # state change of task
@ -426,7 +426,7 @@ def get_nested_dict_by_path(
current_dict, path_part, allow_append=False current_dict, path_part, allow_append=False
) )
current_dict = cast( current_dict = cast(
"dict[Any, SerializedObject]", dict[Any, SerializedObject],
next_level_serialized_object["value"], next_level_serialized_object["value"],
) )
return get_container_item_by_key(current_dict, path_parts[-1], allow_append=False) return get_container_item_by_key(current_dict, path_parts[-1], allow_append=False)
@ -456,7 +456,7 @@ def get_or_create_item_in_container(
return container[key] return container[key]
except IndexError: except IndexError:
if allow_add_key and key == len(container): if allow_add_key and key == len(container):
cast("list[SerializedObject]", container).append( cast(list[SerializedObject], container).append(
create_empty_serialized_object() create_empty_serialized_object()
) )
return container[key] return container[key]
@ -541,7 +541,7 @@ def get_data_paths_from_serialized_object( # noqa: C901
elif serialized_dict_is_nested_object(serialized_obj): elif serialized_dict_is_nested_object(serialized_obj):
for key, value in cast( for key, value in cast(
"dict[str, SerializedObject]", serialized_obj["value"] dict[str, SerializedObject], serialized_obj["value"]
).items(): ).items():
# Serialized dictionaries need to have a different new_path than nested # Serialized dictionaries need to have a different new_path than nested
# classes # classes
@ -628,13 +628,13 @@ def add_prefix_to_full_access_path(
if isinstance(serialized_obj["value"], list): if isinstance(serialized_obj["value"], list):
for value in serialized_obj["value"]: for value in serialized_obj["value"]:
add_prefix_to_full_access_path(cast("SerializedObject", value), prefix) add_prefix_to_full_access_path(cast(SerializedObject, value), prefix)
elif isinstance(serialized_obj["value"], dict): elif isinstance(serialized_obj["value"], dict):
for value in cast( for value in cast(
"dict[str, SerializedObject]", serialized_obj["value"] dict[str, SerializedObject], serialized_obj["value"]
).values(): ).values():
add_prefix_to_full_access_path(cast("SerializedObject", value), prefix) add_prefix_to_full_access_path(cast(SerializedObject, value), prefix)
except (TypeError, KeyError, AttributeError): except (TypeError, KeyError, AttributeError):
# passed dictionary is not a serialized object # passed dictionary is not a serialized object
pass pass
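A pattern worth noting in these hunks: the main branch quotes the first argument of every `typing.cast`. Because `cast` simply returns its second argument at runtime, the quoted type is never evaluated, which is what allows the serialization types to move under `if TYPE_CHECKING` (as in the import hunk above) without raising a `NameError`. A small self-contained illustration (`first_value` is a hypothetical helper):

```python
from typing import TYPE_CHECKING, Any, cast

if TYPE_CHECKING:
    from pydase.utils.serialization.types import SerializedObject


def first_value(serialized: dict[str, Any]) -> "SerializedObject":
    # cast() returns serialized["value"] unchanged at runtime; the quoted
    # type is only ever inspected by the type checker.
    return cast("list[SerializedObject]", serialized["value"])[0]
```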

View File

@ -2,9 +2,8 @@ import threading
from collections.abc import Generator from collections.abc import Generator
from typing import Any from typing import Any
import pytest
import pydase import pydase
import pytest
from pydase.client.proxy_loader import ProxyAttributeError from pydase.client.proxy_loader import ProxyAttributeError
@ -53,7 +52,6 @@ def pydase_client() -> Generator[pydase.Client, None, Any]:
yield client yield client
client.disconnect()
server.handle_exit() server.handle_exit()
thread.join() thread.join()
@ -163,17 +161,3 @@ def test_context_manager(pydase_client: pydase.Client) -> None:
assert client.proxy.my_property == 1337.01 assert client.proxy.my_property == 1337.01
assert not client.proxy.connected assert not client.proxy.connected
def test_client_id(
pydase_client: pydase.Client, caplog: pytest.LogCaptureFixture
) -> None:
import socket
pydase.Client(url="ws://localhost:9999")
assert f"Client [id={socket.gethostname()}]" in caplog.text
caplog.clear()
pydase.Client(url="ws://localhost:9999", client_id="my_service")
assert "Client [id=my_service] connected" in caplog.text

View File

@ -2,26 +2,27 @@ import threading
from collections.abc import Callable, Generator from collections.abc import Callable, Generator
from typing import Any from typing import Any
import pydase
import pytest import pytest
import socketio.exceptions import socketio.exceptions
import pydase
@pytest.fixture(scope="function") @pytest.fixture(scope="function")
def pydase_restartable_server() -> Generator[ def pydase_restartable_server() -> (
tuple[ Generator[
pydase.Server, tuple[
threading.Thread, pydase.Server,
pydase.DataService, threading.Thread,
Callable[ pydase.DataService,
[pydase.Server, threading.Thread, pydase.DataService], Callable[
tuple[pydase.Server, threading.Thread], [pydase.Server, threading.Thread, pydase.DataService],
tuple[pydase.Server, threading.Thread],
],
], ],
], None,
None, Any,
Any, ]
]: ):
class MyService(pydase.DataService): class MyService(pydase.DataService):
def __init__(self) -> None: def __init__(self) -> None:
super().__init__() super().__init__()
@ -61,6 +62,9 @@ def pydase_restartable_server() -> Generator[
yield server, thread, service_instance, restart yield server, thread, service_instance, restart
server.handle_exit()
thread.join()
def test_reconnection( def test_reconnection(
pydase_restartable_server: tuple[ pydase_restartable_server: tuple[
@ -101,6 +105,3 @@ def test_reconnection(
# the service proxies successfully reconnect and get the new service name # the service proxies successfully reconnect and get the new service name
assert client.proxy.name == "New service name" assert client.proxy.name == "New service name"
assert client_2.proxy.name == "New service name" assert client_2.proxy.name == "New service name"
server.handle_exit()
thread.join()

View File

@ -7,7 +7,7 @@ from pydase.task.autostart import autostart_service_tasks
from pytest import LogCaptureFixture from pytest import LogCaptureFixture
@pytest.mark.asyncio(loop_scope="function") @pytest.mark.asyncio(scope="function")
async def test_reconnection(caplog: LogCaptureFixture) -> None: async def test_reconnection(caplog: LogCaptureFixture) -> None:
class MyService(pydase.components.device_connection.DeviceConnection): class MyService(pydase.components.device_connection.DeviceConnection):
def __init__( def __init__(
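The marker rewrite here, repeated across the test modules below, tracks pytest-asyncio's rename of the asyncio marker's `scope` argument to `loop_scope` (introduced, as far as I recall, in pytest-asyncio 0.24). The main-branch spelling looks like this:

```python
import asyncio

import pytest


@pytest.mark.asyncio(loop_scope="function")  # formerly: scope="function"
async def test_example() -> None:
    # loop_scope controls which event loop (function/module/session scoped)
    # the coroutine runs on.
    await asyncio.sleep(0)
```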

View File

@ -1,9 +1,8 @@
import logging import logging
from typing import Any from typing import Any
import pytest
import pydase import pydase
import pytest
from pydase.data_service.data_service_observer import DataServiceObserver from pydase.data_service.data_service_observer import DataServiceObserver
from pydase.data_service.state_manager import StateManager from pydase.data_service.state_manager import StateManager
from pydase.utils.serialization.serializer import SerializationError, dump from pydase.utils.serialization.serializer import SerializationError, dump
@ -168,8 +167,8 @@ def test_normalized_attr_path_in_dependent_property_changes(
state_manager = StateManager(service=service_instance) state_manager = StateManager(service=service_instance)
observer = DataServiceObserver(state_manager=state_manager) observer = DataServiceObserver(state_manager=state_manager)
assert observer.property_deps_dict['service_dict["one"]._prop'] == [ assert observer.property_deps_dict["service_dict['one']._prop"] == [
'service_dict["one"].prop' "service_dict['one'].prop"
] ]
# We can use dict key path encoded with double quotes # We can use dict key path encoded with double quotes
@ -185,99 +184,3 @@ def test_normalized_attr_path_in_dependent_property_changes(
) )
assert service_instance.service_dict["one"].prop == 12.0 assert service_instance.service_dict["one"].prop == 12.0
assert "'service_dict[\"one\"].prop' changed to '12.0'" in caplog.text assert "'service_dict[\"one\"].prop' changed to '12.0'" in caplog.text
def test_nested_dict_property_changes(
caplog: pytest.LogCaptureFixture,
) -> None:
def get_voltage() -> float:
"""Mocking a remote device."""
return 2.0
def set_voltage(value: float) -> None:
"""Mocking a remote device."""
class OtherService(pydase.DataService):
_voltage = 1.0
@property
def voltage(self) -> float:
# Property dependency _voltage changes within the property itself.
# This should be handled gracefully, i.e. not introduce recursion
self._voltage = get_voltage()
return self._voltage
@voltage.setter
def voltage(self, value: float) -> None:
self._voltage = value
set_voltage(self._voltage)
class MyService(pydase.DataService):
def __init__(self) -> None:
super().__init__()
self.my_dict = {"key": OtherService()}
service = MyService()
pydase.Server(service)
# Changing the _voltage attribute should re-evaluate the voltage property, but avoid
# recursion
service.my_dict["key"].voltage = 1.2
def test_read_only_dict_property(caplog: pytest.LogCaptureFixture) -> None:
class MyObservable(pydase.DataService):
def __init__(self) -> None:
super().__init__()
self._dict_attr = {"dotted.key": 1.0}
@property
def dict_attr(self) -> dict[str, Any]:
return self._dict_attr
service_instance = MyObservable()
state_manager = StateManager(service=service_instance)
DataServiceObserver(state_manager)
service_instance._dict_attr["dotted.key"] = 2.0
assert "'dict_attr[\"dotted.key\"]' changed to '2.0'" in caplog.text
def test_dependency_as_function_argument(caplog: pytest.LogCaptureFixture) -> None:
class MyObservable(pydase.DataService):
some_int = 0
@property
def other_int(self) -> int:
return self.add_one(self.some_int)
def add_one(self, value: int) -> int:
return value + 1
service_instance = MyObservable()
state_manager = StateManager(service=service_instance)
DataServiceObserver(state_manager)
service_instance.some_int = 1337
assert "'other_int' changed to '1338'" in caplog.text
def test_property_starting_with_dependency_name(
caplog: pytest.LogCaptureFixture,
) -> None:
class MyObservable(pydase.DataService):
my_int = 0
@property
def my_int_2(self) -> int:
return self.my_int + 1
service_instance = MyObservable()
state_manager = StateManager(service=service_instance)
DataServiceObserver(state_manager)
service_instance.my_int = 1337
assert "'my_int_2' changed to '1338'" in caplog.text

View File

@ -1,13 +1,10 @@
import asyncio
import json import json
from pathlib import Path from pathlib import Path
from typing import Any from typing import Any
import anyio
import pydase import pydase
import pydase.components import pydase.components
import pydase.units as u import pydase.units as u
import pytest
from pydase.data_service.data_service_observer import DataServiceObserver from pydase.data_service.data_service_observer import DataServiceObserver
from pydase.data_service.state_manager import ( from pydase.data_service.state_manager import (
StateManager, StateManager,
@ -352,24 +349,4 @@ def test_property_load_state(tmp_path: Path) -> None:
assert service_instance.name == "Some other name" assert service_instance.name == "Some other name"
assert service_instance.not_loadable_attr == "Not loadable" assert service_instance.not_loadable_attr == "Not loadable"
assert not has_load_state_decorator(type(service_instance).property_without_setter) # type: ignore assert not has_load_state_decorator(type(service_instance).property_without_setter)
@pytest.mark.asyncio()
async def test_autosave(tmp_path: Path, caplog: LogCaptureFixture) -> None:
filename = tmp_path / "state.json"
service = Service()
manager = StateManager(service=service, filename=filename, autosave_interval=0.1)
DataServiceObserver(state_manager=manager)
task = asyncio.create_task(manager.autosave())
service.property_attr = 198.0
await asyncio.sleep(0.1)
task.cancel()
assert filename.exists(), "Autosave should write to the file"
async with await anyio.open_file(filename) as f:
data = json.loads(await f.read())
assert data["property_attr"]["value"] == service.property_attr

View File

@ -1,9 +1,8 @@
import asyncio import asyncio
import threading import threading
import pytest
import pydase import pydase
import pytest
from pydase.observer_pattern.observable.decorators import validate_set from pydase.observer_pattern.observable.decorators import validate_set
@ -18,10 +17,7 @@ def linspace(start: float, stop: float, n: int):
def asyncio_loop_thread(loop: asyncio.AbstractEventLoop) -> None: def asyncio_loop_thread(loop: asyncio.AbstractEventLoop) -> None:
asyncio.set_event_loop(loop) asyncio.set_event_loop(loop)
try: loop.run_forever()
loop.run_forever()
finally:
loop.close()
def test_validate_set_precision(caplog: pytest.LogCaptureFixture) -> None: def test_validate_set_precision(caplog: pytest.LogCaptureFixture) -> None:
@ -93,10 +89,10 @@ def test_validate_set_timeout(caplog: pytest.LogCaptureFixture) -> None:
def value(self, value: float) -> None: def value(self, value: float) -> None:
self.loop.create_task(self.set_value(value)) self.loop.create_task(self.set_value(value))
async def set_value(self, value: float) -> None: async def set_value(self, value) -> None:
for i in linspace(self._value, value, 10): for i in linspace(self._value, value, 10):
self._value = i self._value = i
await asyncio.sleep(0.01) await asyncio.sleep(0.1)
class Service(pydase.DataService): class Service(pydase.DataService):
def __init__(self) -> None: def __init__(self) -> None:
@ -108,7 +104,7 @@ def test_validate_set_timeout(caplog: pytest.LogCaptureFixture) -> None:
return self._driver.value return self._driver.value
@value_1.setter @value_1.setter
@validate_set(timeout=0.01) @validate_set(timeout=0.5)
def value_1(self, value: float) -> None: def value_1(self, value: float) -> None:
self._driver.value = value self._driver.value = value
@ -117,7 +113,7 @@ def test_validate_set_timeout(caplog: pytest.LogCaptureFixture) -> None:
return self._driver.value return self._driver.value
@value_2.setter @value_2.setter
@validate_set(timeout=0.11) @validate_set(timeout=1)
def value_2(self, value: float) -> None: def value_2(self, value: float) -> None:
self._driver.value = value self._driver.value = value
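These tests exercise `validate_set`, which, judging by their structure, re-reads the property after the setter returns and waits up to `timeout` seconds for the getter to report the requested value. A minimal sketch of decorating a setter (the timeout value is illustrative):

```python
import pydase
from pydase.observer_pattern.observable.decorators import validate_set


class Service(pydase.DataService):
    def __init__(self) -> None:
        super().__init__()
        self._value = 0.0

    @property
    def value(self) -> float:
        return self._value

    @value.setter
    @validate_set(timeout=0.5)  # fail if the readback doesn't match in time
    def value(self, new_value: float) -> None:
        self._value = new_value
```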

View File

@ -4,13 +4,12 @@ from collections.abc import Generator
from typing import Any from typing import Any
import aiohttp import aiohttp
import pytest
import pydase import pydase
import pytest
from pydase.utils.serialization.deserializer import Deserializer from pydase.utils.serialization.deserializer import Deserializer
@pytest.fixture(scope="module") @pytest.fixture()
def pydase_server() -> Generator[None, None, None]: def pydase_server() -> Generator[None, None, None]:
class SubService(pydase.DataService): class SubService(pydase.DataService):
name = "SubService" name = "SubService"
@ -53,9 +52,6 @@ def pydase_server() -> Generator[None, None, None]:
yield yield
server.handle_exit()
thread.join()
@pytest.mark.parametrize( @pytest.mark.parametrize(
"access_path, expected", "access_path, expected",
@ -111,7 +107,7 @@ def pydase_server() -> Generator[None, None, None]:
), ),
], ],
) )
@pytest.mark.asyncio(loop_scope="module") @pytest.mark.asyncio()
async def test_get_value( async def test_get_value(
access_path: str, access_path: str,
expected: dict[str, Any], expected: dict[str, Any],
@ -183,13 +179,12 @@ async def test_get_value(
), ),
], ],
) )
@pytest.mark.asyncio(loop_scope="module") @pytest.mark.asyncio()
async def test_update_value( async def test_update_value(
access_path: str, access_path: str,
new_value: dict[str, Any], new_value: dict[str, Any],
ok: bool, ok: bool,
pydase_server: pydase.DataService, pydase_server: pydase.DataService,
caplog: pytest.LogCaptureFixture,
) -> None: ) -> None:
async with aiohttp.ClientSession("http://localhost:9998") as session: async with aiohttp.ClientSession("http://localhost:9998") as session:
resp = await session.put( resp = await session.put(
@ -223,7 +218,7 @@ async def test_update_value(
), ),
], ],
) )
@pytest.mark.asyncio(loop_scope="module") @pytest.mark.asyncio()
async def test_trigger_method( async def test_trigger_method(
access_path: str, access_path: str,
expected: Any, expected: Any,
@ -255,43 +250,3 @@ async def test_trigger_method(
if resp.ok: if resp.ok:
content = Deserializer.deserialize(json.loads(await resp.text())) content = Deserializer.deserialize(json.loads(await resp.text()))
assert content == expected assert content == expected
@pytest.mark.parametrize(
"headers, log_id",
[
({}, "id=None"),
(
{
"X-Client-Id": "client-header",
},
"id=client-header",
),
(
{
"Remote-User": "Remote User",
},
"user=Remote User",
),
(
{
"X-Client-Id": "client-header",
"Remote-User": "Remote User",
},
"user=Remote User",
),
],
)
@pytest.mark.asyncio(loop_scope="module")
async def test_client_information_logging(
headers: dict[str, str],
log_id: str,
pydase_server: pydase.DataService,
caplog: pytest.LogCaptureFixture,
) -> None:
async with aiohttp.ClientSession("http://localhost:9998") as session:
await session.get(
"/api/v1/get_value?access_path=readonly_attr", headers=headers
)
assert log_id in caplog.text

View File

@ -1,316 +0,0 @@
import threading
from collections.abc import Generator
from typing import Any
import pytest
import socketio
import pydase
from pydase.utils.serialization.deserializer import Deserializer
@pytest.fixture(scope="module")
def pydase_server() -> Generator[None, None, None]:
class SubService(pydase.DataService):
name = "SubService"
subservice_instance = SubService()
class MyService(pydase.DataService):
def __init__(self) -> None:
super().__init__()
self._readonly_attr = "MyService"
self._my_property = 12.1
self.sub_service = SubService()
self.list_attr = [1, 2]
self.dict_attr = {
"foo": subservice_instance,
"dotted.key": subservice_instance,
}
@property
def my_property(self) -> float:
return self._my_property
@my_property.setter
def my_property(self, value: float) -> None:
self._my_property = value
@property
def readonly_attr(self) -> str:
return self._readonly_attr
def my_method(self, input_str: str) -> str:
return f"{input_str}: my_method"
async def my_async_method(self, input_str: str) -> str:
return f"{input_str}: my_async_method"
server = pydase.Server(MyService(), web_port=9997)
thread = threading.Thread(target=server.run, daemon=True)
thread.start()
yield
server.handle_exit()
thread.join()
@pytest.mark.parametrize(
"access_path, expected",
[
(
"readonly_attr",
{
"full_access_path": "readonly_attr",
"doc": None,
"readonly": False,
"type": "str",
"value": "MyService",
},
),
(
"sub_service.name",
{
"full_access_path": "sub_service.name",
"doc": None,
"readonly": False,
"type": "str",
"value": "SubService",
},
),
(
"list_attr[0]",
{
"full_access_path": "list_attr[0]",
"doc": None,
"readonly": False,
"type": "int",
"value": 1,
},
),
(
'dict_attr["foo"]',
{
"full_access_path": 'dict_attr["foo"]',
"doc": None,
"name": "SubService",
"readonly": False,
"type": "DataService",
"value": {
"name": {
"doc": None,
"full_access_path": 'dict_attr["foo"].name',
"readonly": False,
"type": "str",
"value": "SubService",
}
},
},
),
],
)
@pytest.mark.asyncio(loop_scope="module")
async def test_get_value(
access_path: str,
expected: dict[str, Any],
pydase_server: None,
) -> None:
client = socketio.AsyncClient()
await client.connect(
"http://localhost:9997", socketio_path="/ws/socket.io", transports=["websocket"]
)
response = await client.call("get_value", access_path)
assert response == expected
await client.disconnect()
@pytest.mark.parametrize(
"access_path, new_value, ok",
[
(
"sub_service.name",
{
"full_access_path": "sub_service.name",
"doc": None,
"readonly": False,
"type": "str",
"value": "New Name",
},
True,
),
(
"list_attr[0]",
{
"full_access_path": "list_attr[0]",
"doc": None,
"readonly": False,
"type": "int",
"value": 11,
},
True,
),
(
'dict_attr["foo"].name',
{
"full_access_path": 'dict_attr["foo"].name',
"doc": None,
"readonly": False,
"type": "str",
"value": "foo name",
},
True,
),
(
"readonly_attr",
{
"full_access_path": "readonly_attr",
"doc": None,
"readonly": True,
"type": "str",
"value": "Other Name",
},
False,
),
(
"invalid_attribute",
{
"full_access_path": "invalid_attribute",
"doc": None,
"readonly": False,
"type": "float",
"value": 12.0,
},
False,
),
],
)
@pytest.mark.asyncio(loop_scope="module")
async def test_update_value(
access_path: str,
new_value: dict[str, Any],
ok: bool,
pydase_server: None,
caplog: pytest.LogCaptureFixture,
) -> None:
client = socketio.AsyncClient()
await client.connect(
"http://localhost:9997", socketio_path="/ws/socket.io", transports=["websocket"]
)
response = await client.call(
"update_value",
{"access_path": access_path, "value": new_value},
)
if ok:
assert response is None
else:
assert response["type"] == "Exception"
await client.disconnect()
@pytest.mark.parametrize(
"access_path, expected, ok",
[
(
"my_method",
"Hello from function: my_method",
True,
),
(
"my_async_method",
"Hello from function: my_async_method",
True,
),
(
"invalid_method",
None,
False,
),
],
)
@pytest.mark.asyncio(loop_scope="module")
async def test_trigger_method(
access_path: str,
expected: Any,
ok: bool,
pydase_server: pydase.DataService,
) -> None:
client = socketio.AsyncClient()
await client.connect(
"http://localhost:9997", socketio_path="/ws/socket.io", transports=["websocket"]
)
response = await client.call(
"trigger_method",
{
"access_path": access_path,
"kwargs": {
"full_access_path": "",
"type": "dict",
"value": {
"input_str": {
"docs": None,
"full_access_path": "",
"readonly": False,
"type": "str",
"value": "Hello from function",
},
},
},
},
)
if ok:
content = Deserializer.deserialize(response)
assert content == expected
else:
assert response["type"] == "Exception"
await client.disconnect()
@pytest.mark.parametrize(
"headers, log_id",
[
({}, "sid="),
(
{
"X-Client-Id": "client-header",
},
"id=client-header",
),
(
{
"Remote-User": "Remote User",
},
"user=Remote User",
),
(
{
"X-Client-Id": "client-header",
"Remote-User": "Remote User",
},
"user=Remote User",
),
],
)
@pytest.mark.asyncio(loop_scope="module")
async def test_client_information_logging(
headers: dict[str, str],
log_id: str,
pydase_server: pydase.DataService,
caplog: pytest.LogCaptureFixture,
) -> None:
client = socketio.AsyncClient()
await client.connect(
"http://localhost:9997",
socketio_path="/ws/socket.io",
transports=["websocket"],
headers=headers,
)
await client.call("get_value", "readonly_attr")
assert log_id in caplog.text
await client.disconnect()

View File

@ -1,20 +1,19 @@
import asyncio import asyncio
import logging import logging
import pytest
from pytest import LogCaptureFixture
import pydase import pydase
import pytest
from pydase.data_service.data_service_observer import DataServiceObserver from pydase.data_service.data_service_observer import DataServiceObserver
from pydase.data_service.state_manager import StateManager from pydase.data_service.state_manager import StateManager
from pydase.task.autostart import autostart_service_tasks from pydase.task.autostart import autostart_service_tasks
from pydase.task.decorator import task from pydase.task.decorator import task
from pydase.task.task_status import TaskStatus from pydase.task.task_status import TaskStatus
from pytest import LogCaptureFixture
logger = logging.getLogger("pydase") logger = logging.getLogger("pydase")
@pytest.mark.asyncio() @pytest.mark.asyncio(scope="function")
async def test_start_and_stop_task(caplog: LogCaptureFixture) -> None: async def test_start_and_stop_task(caplog: LogCaptureFixture) -> None:
class MyService(pydase.DataService): class MyService(pydase.DataService):
@task() @task()
@ -29,11 +28,11 @@ async def test_start_and_stop_task(caplog: LogCaptureFixture) -> None:
DataServiceObserver(state_manager) DataServiceObserver(state_manager)
autostart_service_tasks(service_instance) autostart_service_tasks(service_instance)
await asyncio.sleep(0.01) await asyncio.sleep(0.1)
assert service_instance.my_task.status == TaskStatus.NOT_RUNNING assert service_instance.my_task.status == TaskStatus.NOT_RUNNING
service_instance.my_task.start() service_instance.my_task.start()
await asyncio.sleep(0.01) await asyncio.sleep(0.1)
assert service_instance.my_task.status == TaskStatus.RUNNING assert service_instance.my_task.status == TaskStatus.RUNNING
assert "'my_task.status' changed to 'TaskStatus.RUNNING'" in caplog.text assert "'my_task.status' changed to 'TaskStatus.RUNNING'" in caplog.text
@ -41,12 +40,12 @@ async def test_start_and_stop_task(caplog: LogCaptureFixture) -> None:
caplog.clear() caplog.clear()
service_instance.my_task.stop() service_instance.my_task.stop()
await asyncio.sleep(0.01) await asyncio.sleep(0.1)
assert service_instance.my_task.status == TaskStatus.NOT_RUNNING assert service_instance.my_task.status == TaskStatus.NOT_RUNNING
assert "Task 'my_task' was cancelled" in caplog.text assert "Task 'my_task' was cancelled" in caplog.text
@pytest.mark.asyncio() @pytest.mark.asyncio(scope="function")
async def test_autostart_task(caplog: LogCaptureFixture) -> None: async def test_autostart_task(caplog: LogCaptureFixture) -> None:
class MyService(pydase.DataService): class MyService(pydase.DataService):
@task(autostart=True) @task(autostart=True)
@ -62,16 +61,13 @@ async def test_autostart_task(caplog: LogCaptureFixture) -> None:
autostart_service_tasks(service_instance) autostart_service_tasks(service_instance)
await asyncio.sleep(0.01) await asyncio.sleep(0.1)
assert service_instance.my_task.status == TaskStatus.RUNNING assert service_instance.my_task.status == TaskStatus.RUNNING
assert "'my_task.status' changed to 'TaskStatus.RUNNING'" in caplog.text assert "'my_task.status' changed to 'TaskStatus.RUNNING'" in caplog.text
service_instance.my_task.stop()
await asyncio.sleep(0.01)
@pytest.mark.asyncio(scope="function")
@pytest.mark.asyncio()
async def test_nested_list_autostart_task( async def test_nested_list_autostart_task(
caplog: LogCaptureFixture, caplog: LogCaptureFixture,
) -> None: ) -> None:
@ -90,7 +86,7 @@ async def test_nested_list_autostart_task(
DataServiceObserver(state_manager) DataServiceObserver(state_manager)
autostart_service_tasks(service_instance) autostart_service_tasks(service_instance)
await asyncio.sleep(0.01) await asyncio.sleep(0.1)
assert service_instance.sub_services_list[0].my_task.status == TaskStatus.RUNNING assert service_instance.sub_services_list[0].my_task.status == TaskStatus.RUNNING
assert service_instance.sub_services_list[1].my_task.status == TaskStatus.RUNNING assert service_instance.sub_services_list[1].my_task.status == TaskStatus.RUNNING
@ -103,12 +99,8 @@ async def test_nested_list_autostart_task(
in caplog.text in caplog.text
) )
service_instance.sub_services_list[0].my_task.stop()
service_instance.sub_services_list[1].my_task.stop()
await asyncio.sleep(0.01)
@pytest.mark.asyncio(scope="function")
@pytest.mark.asyncio()
async def test_nested_dict_autostart_task( async def test_nested_dict_autostart_task(
caplog: LogCaptureFixture, caplog: LogCaptureFixture,
) -> None: ) -> None:
@ -128,7 +120,7 @@ async def test_nested_dict_autostart_task(
autostart_service_tasks(service_instance) autostart_service_tasks(service_instance)
await asyncio.sleep(0.01) await asyncio.sleep(0.1)
assert ( assert (
service_instance.sub_services_dict["first"].my_task.status == TaskStatus.RUNNING service_instance.sub_services_dict["first"].my_task.status == TaskStatus.RUNNING
@ -147,12 +139,8 @@ async def test_nested_dict_autostart_task(
in caplog.text in caplog.text
) )
service_instance.sub_services_dict["first"].my_task.stop()
service_instance.sub_services_dict["second"].my_task.stop()
await asyncio.sleep(0.01)
@pytest.mark.asyncio(scope="function")
@pytest.mark.asyncio()
async def test_manual_start_with_multiple_service_instances( async def test_manual_start_with_multiple_service_instances(
caplog: LogCaptureFixture, caplog: LogCaptureFixture,
) -> None: ) -> None:
@ -173,7 +161,7 @@ async def test_manual_start_with_multiple_service_instances(
autostart_service_tasks(service_instance) autostart_service_tasks(service_instance)
await asyncio.sleep(0.01) await asyncio.sleep(0.1)
assert ( assert (
service_instance.sub_services_list[0].my_task.status == TaskStatus.NOT_RUNNING service_instance.sub_services_list[0].my_task.status == TaskStatus.NOT_RUNNING
@ -301,180 +289,3 @@ async def test_manual_start_with_multiple_service_instances(
await asyncio.sleep(0.01) await asyncio.sleep(0.01)
assert "Task 'my_task' was cancelled" in caplog.text assert "Task 'my_task' was cancelled" in caplog.text
@pytest.mark.asyncio()
async def test_restart_on_exception(caplog: LogCaptureFixture) -> None:
class MyService(pydase.DataService):
@task(restart_on_exception=True, restart_sec=0.1)
async def my_task(self) -> None:
logger.info("Triggered task.")
raise Exception("Task failure")
service_instance = MyService()
state_manager = StateManager(service_instance)
DataServiceObserver(state_manager)
service_instance.my_task.start()
await asyncio.sleep(0.01)
assert "Task 'my_task' encountered an exception" in caplog.text
caplog.clear()
await asyncio.sleep(0.1)
assert service_instance.my_task.status == TaskStatus.RUNNING
assert "Task 'my_task' encountered an exception" in caplog.text
assert "Triggered task." in caplog.text
service_instance.my_task.stop()
await asyncio.sleep(0.01)
@pytest.mark.asyncio()
async def test_restart_sec(caplog: LogCaptureFixture) -> None:
class MyService(pydase.DataService):
@task(restart_on_exception=True, restart_sec=0.1)
async def my_task(self) -> None:
logger.info("Triggered task.")
raise Exception("Task failure")
service_instance = MyService()
state_manager = StateManager(service_instance)
DataServiceObserver(state_manager)
service_instance.my_task.start()
await asyncio.sleep(0.001)
assert "Triggered task." in caplog.text
caplog.clear()
await asyncio.sleep(0.05)
assert "Triggered task." not in caplog.text
await asyncio.sleep(0.05)
assert "Triggered task." in caplog.text # Ensures the task restarted after 0.2s
service_instance.my_task.stop()
await asyncio.sleep(0.01)
@pytest.mark.asyncio()
async def test_exceeding_start_limit_interval_sec_and_burst(
caplog: LogCaptureFixture,
) -> None:
class MyService(pydase.DataService):
@task(
restart_on_exception=True,
restart_sec=0.0,
start_limit_interval_sec=1.0,
start_limit_burst=2,
)
async def my_task(self) -> None:
raise Exception("Task failure")
service_instance = MyService()
state_manager = StateManager(service_instance)
DataServiceObserver(state_manager)
service_instance.my_task.start()
await asyncio.sleep(0.1)
assert "Task 'my_task' exceeded restart burst limit" in caplog.text
assert service_instance.my_task.status == TaskStatus.NOT_RUNNING
@pytest.mark.asyncio()
async def test_non_exceeding_start_limit_interval_sec_and_burst(
caplog: LogCaptureFixture,
) -> None:
class MyService(pydase.DataService):
@task(
restart_on_exception=True,
restart_sec=0.1,
start_limit_interval_sec=0.1,
start_limit_burst=2,
)
async def my_task(self) -> None:
raise Exception("Task failure")
service_instance = MyService()
state_manager = StateManager(service_instance)
DataServiceObserver(state_manager)
service_instance.my_task.start()
await asyncio.sleep(0.5)
assert "Task 'my_task' exceeded restart burst limit" not in caplog.text
assert service_instance.my_task.status == TaskStatus.RUNNING
service_instance.my_task.stop()
await asyncio.sleep(0.01)
@pytest.mark.asyncio()
async def test_exit_on_failure(
monkeypatch: pytest.MonkeyPatch, caplog: LogCaptureFixture
) -> None:
class MyService(pydase.DataService):
@task(restart_on_exception=False, exit_on_failure=True)
async def my_task(self) -> None:
logger.info("Triggered task.")
raise Exception("Critical failure")
def mock_os_kill(pid: int, signal: int) -> None:
logger.critical("os.kill called with signal=%s and pid=%s", signal, pid)
monkeypatch.setattr("os.kill", mock_os_kill)
service_instance = MyService()
state_manager = StateManager(service_instance)
DataServiceObserver(state_manager)
service_instance.my_task.start()
await asyncio.sleep(0.1)
assert "os.kill called with signal=" in caplog.text
assert "Task 'my_task' encountered an exception" in caplog.text
@pytest.mark.asyncio()
async def test_exit_on_failure_exceeding_rate_limit(
monkeypatch: pytest.MonkeyPatch, caplog: LogCaptureFixture
) -> None:
class MyService(pydase.DataService):
@task(
restart_on_exception=True,
restart_sec=0.0,
start_limit_interval_sec=0.1,
start_limit_burst=2,
exit_on_failure=True,
)
async def my_task(self) -> None:
raise Exception("Critical failure")
def mock_os_kill(pid: int, signal: int) -> None:
logger.critical("os.kill called with signal=%s and pid=%s", signal, pid)
monkeypatch.setattr("os.kill", mock_os_kill)
service_instance = MyService()
state_manager = StateManager(service_instance)
DataServiceObserver(state_manager)
service_instance.my_task.start()
await asyncio.sleep(0.5)
assert "os.kill called with signal=" in caplog.text
assert "Task 'my_task' encountered an exception" in caplog.text
@pytest.mark.asyncio()
async def test_gracefully_finishing_task(
monkeypatch: pytest.MonkeyPatch, caplog: LogCaptureFixture
) -> None:
class MyService(pydase.DataService):
@task()
async def my_task(self) -> None:
print("Hello")
await asyncio.sleep(0.1)
service_instance = MyService()
state_manager = StateManager(service_instance)
DataServiceObserver(state_manager)
service_instance.my_task.start()
await asyncio.sleep(0.05)
assert service_instance.my_task.status == TaskStatus.RUNNING
await asyncio.sleep(0.1)
assert service_instance.my_task.status == TaskStatus.NOT_RUNNING

View File

@ -4,5 +4,5 @@ import toml
def test_project_version() -> None: def test_project_version() -> None:
pyproject = toml.load("pyproject.toml") pyproject = toml.load("pyproject.toml")
pydase_pyproject_version = pyproject["project"]["version"] pydase_pyproject_version = pyproject["tool"]["poetry"]["version"]
assert pydase.version.__version__ == pydase_pyproject_version assert pydase.version.__version__ == pydase_pyproject_version
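The changed lookup follows the pyproject.toml layout: v0.10.7 stores its version under Poetry's `[tool.poetry]` table, while main uses the standard PEP 621 `[project]` table. A sketch that reads either layout (the fallback chain is illustrative):

```python
import toml

pyproject = toml.load("pyproject.toml")

try:
    # PEP 621 layout (main): [project] -> version
    version = pyproject["project"]["version"]
except KeyError:
    # Poetry layout (v0.10.7): [tool.poetry] -> version
    version = pyproject["tool"]["poetry"]["version"]
```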

View File

@ -3,15 +3,15 @@ from datetime import datetime
from enum import Enum from enum import Enum
from typing import Any, ClassVar from typing import Any, ClassVar
import pytest
import pydase import pydase
import pydase.units as u import pydase.units as u
import pytest
from pydase.components.coloured_enum import ColouredEnum from pydase.components.coloured_enum import ColouredEnum
from pydase.task.task_status import TaskStatus from pydase.task.task_status import TaskStatus
from pydase.utils.decorators import frontend from pydase.utils.decorators import frontend
from pydase.utils.serialization.serializer import ( from pydase.utils.serialization.serializer import (
SerializationPathError, SerializationPathError,
SerializedObject,
add_prefix_to_full_access_path, add_prefix_to_full_access_path,
dump, dump,
generate_serialized_data_paths, generate_serialized_data_paths,
@ -21,7 +21,6 @@ from pydase.utils.serialization.serializer import (
serialized_dict_is_nested_object, serialized_dict_is_nested_object,
set_nested_value_by_path, set_nested_value_by_path,
) )
from pydase.utils.serialization.types import SerializedObject
class MyEnum(enum.Enum): class MyEnum(enum.Enum):
@ -208,7 +207,7 @@ def test_ColouredEnum_serialize() -> None:
} }
@pytest.mark.asyncio(loop_scope="module") @pytest.mark.asyncio(scope="module")
async def test_method_serialization() -> None: async def test_method_serialization() -> None:
class ClassWithMethod(pydase.DataService): class ClassWithMethod(pydase.DataService):
def some_method(self) -> str: def some_method(self) -> str:
@ -253,7 +252,7 @@ def test_methods_with_type_hints() -> None:
def method_with_type_hint(some_argument: int) -> None: def method_with_type_hint(some_argument: int) -> None:
pass pass
def method_with_union_type_hint(some_argument: int | float) -> None: # noqa: PYI041 def method_with_union_type_hint(some_argument: int | float) -> None:
pass pass
assert dump(method_without_type_hint) == { assert dump(method_without_type_hint) == {

View File

@ -1,10 +1,9 @@
import logging import logging
import pytest from pytest import LogCaptureFixture
from pydase.utils.logging import configure_logging_with_pydase_formatter
def test_log_error(caplog: pytest.LogCaptureFixture) -> None: def test_log_error(caplog: LogCaptureFixture):
logger = logging.getLogger("pydase") logger = logging.getLogger("pydase")
logger.setLevel(logging.ERROR) logger.setLevel(logging.ERROR)
@ -21,7 +20,7 @@ def test_log_error(caplog: pytest.LogCaptureFixture) -> None:
assert any(record.levelname == "ERROR" for record in caplog.records) assert any(record.levelname == "ERROR" for record in caplog.records)
def test_log_warning(caplog: pytest.LogCaptureFixture) -> None: def test_log_warning(caplog: LogCaptureFixture):
logger = logging.getLogger("pydase") logger = logging.getLogger("pydase")
logger.setLevel(logging.WARNING) logger.setLevel(logging.WARNING)
@ -38,7 +37,7 @@ def test_log_warning(caplog: pytest.LogCaptureFixture) -> None:
assert any(record.levelname == "ERROR" for record in caplog.records) assert any(record.levelname == "ERROR" for record in caplog.records)
def test_log_debug(caplog: pytest.LogCaptureFixture) -> None: def test_log_debug(caplog: LogCaptureFixture):
logger = logging.getLogger("pydase") logger = logging.getLogger("pydase")
logger.setLevel(logging.DEBUG) logger.setLevel(logging.DEBUG)
@ -54,7 +53,7 @@ def test_log_debug(caplog: pytest.LogCaptureFixture) -> None:
assert "This is an error message" in caplog.text assert "This is an error message" in caplog.text
def test_log_info(caplog: pytest.LogCaptureFixture) -> None: def test_log_info(caplog: LogCaptureFixture):
logger = logging.getLogger("pydase") logger = logging.getLogger("pydase")
logger.setLevel(logging.INFO) logger.setLevel(logging.INFO)
@ -68,21 +67,3 @@ def test_log_info(caplog: pytest.LogCaptureFixture) -> None:
assert "This is an info message" in caplog.text assert "This is an info message" in caplog.text
assert "This is a warning message" in caplog.text assert "This is a warning message" in caplog.text
assert "This is an error message" in caplog.text assert "This is an error message" in caplog.text
def test_before_configuring_root_logger(caplog: pytest.LogCaptureFixture) -> None:
logger = logging.getLogger(__name__)
logger.info("Hello world")
assert "Hello world" not in caplog.text
def test_configure_root_logger(caplog: pytest.LogCaptureFixture) -> None:
configure_logging_with_pydase_formatter()
logger = logging.getLogger(__name__)
logger.info("Hello world")
assert (
"INFO tests.utils.test_logging:test_logging.py:83 Hello world"
in caplog.text
)