⚠️ This tool is currently under active development! The first official release is not yet available, but you are welcome to use ARES as a beta version. Please stay tuned for updates and be aware that features and documentation may change frequently. If you encounter bugs or have feature requests, please report them via the issue tracker. Your feedback is highly appreciated!
The Automated Rapid Embedded Simulation (ARES) project is a tool for performing open-loop simulations of software components. The primary application area is the development of software components for embedded applications.
- 1. Installation
- 2. Usage
- 3. Architecture
- 4. Bug & Feature Report
- 5. Contributing
- 6. Workflows
- 7. Examples
- 8. Future Developments
- 9. License
ARES is currently under active development and is distributed as a source package. The recommended way to use ARES is within a Python virtual environment.
Before setting up ARES, ensure your system meets the following requirements:
- Operating System: Linux, Windows
- Python: Version 3.13.7 or higher, but lower than 3.14
- Build Tools:
  - `make` (only required for Option A: Automated Setup)
Using a virtual environment is recommended to avoid conflicts with system packages. You can set this up automatically or manually.
1. Clone the repository:

   ```shell
   git clone --recurse-submodules https://github.com/olympus-tools/ARES.git
   cd ARES
   ```

2. Create and configure the environment. This command creates a `.venv` directory and installs dependencies:

   ```shell
   make setup-venv
   ```

   To also install documentation dependencies (required for `make docs`), add the `VENV_RELEASE=true` flag:

   ```shell
   make setup-venv VENV_RELEASE=true
   ```

3. Activate the environment:

   ```shell
   # Bash/Zsh
   source .venv/bin/activate
   # Fish
   source .venv/bin/activate.fish
   # Windows (CMD)
   .venv\Scripts\activate.bat
   # Windows (PowerShell)
   .venv\Scripts\Activate.ps1
   ```
If you prefer to configure the virtual environment manually:
1. Create a virtual environment:

   ```shell
   python -m venv .venv
   ```

2. Activate the environment:

   ```shell
   # Bash/Zsh
   source .venv/bin/activate
   # Fish
   source .venv/bin/activate.fish
   # Windows (CMD)
   .venv\Scripts\activate.bat
   # Windows (PowerShell)
   .venv\Scripts\Activate.ps1
   ```

3. Install ARES:

   ```shell
   pip install .
   ```
ARES is primarily used via its Command Line Interface (CLI). The main command is `pipeline`, which executes a simulation workflow defined in a JSON file.

```shell
python -m ares pipeline --workflow <path_to_workflow.json> [OPTIONS]
```

| Option | Short | Description | Required | Default |
|---|---|---|---|---|
| `--workflow` | `-wf` | Path to the workflow JSON file. | Yes | - |
| `--output` | `-o` | Directory where output files will be saved. | No | Workflow directory |
| `--log-level` | | Logging verbosity (10=DEBUG, 20=INFO, 30=WARNING, 40=ERROR). | No | 20 (INFO) |
```shell
python -m ares pipeline -wf ./my_workflow.json -o ./results --log-level 10
```

ARES is built on a four-layer architecture that enables flexible, extensible simulation workflows:
- Orchestration - Pipeline orchestrates workflow execution from JSON definitions
- Plugins - Extensible processing units (SimUnit for C/C++ simulations, custom plugins)
- Interfaces - Format-agnostic I/O with automatic handler selection and caching
- Base Types - Core data structures (Signal, Parameter)
The architecture uses design patterns like Flyweight (hash-based caching), Factory (automatic format detection), and Strategy (pluggable handlers) to achieve high performance and maintainability.
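The Factory pattern mentioned above (automatic format detection) can be illustrated with a minimal sketch. The handler class names and the lookup-by-extension mechanism here are assumptions for illustration, not ARES's actual classes or API:

```python
from pathlib import Path


# Hypothetical handler classes; in ARES these would be the real
# format-specific readers selected by the interface layer.
class Mf4Handler:
    def read(self, path: str):
        ...


class DcmHandler:
    def read(self, path: str):
        ...


_HANDLERS = {".mf4": Mf4Handler, ".dcm": DcmHandler}


def handler_for(path: str):
    """Pick a handler from the file extension (automatic format detection)."""
    suffix = Path(path).suffix.lower()
    try:
        return _HANDLERS[suffix]()
    except KeyError:
        raise ValueError(f"Unsupported file format: {suffix}")
```

The same registry-plus-lookup shape also supports the Strategy pattern: new formats are added by registering another handler, without touching the calling code.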
📖 For detailed architecture documentation including system diagrams, class structures, and design decisions, see architecture.md.
We are committed to a welcoming and inclusive community. To contribute to the ARES project, please see the CONTRIBUTING.md file for details.
ARES pipelines are defined using a JSON file, referred to as a Workflow. The workflow structure is a dictionary where:
- Keys: Unique identifiers (names) for each workflow element.
- Values: Configuration objects defining the element's properties.
Elements can define dependencies on other elements. The ARES pipeline analyzes these dependencies to determine the correct execution order automatically.
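The automatic ordering can be pictured as a topological sort over the dependency graph. This is only an illustrative sketch (the element names are made up, and ARES's internal scheduler may differ), using Python's standard-library `graphlib`:

```python
from graphlib import TopologicalSorter

# Each workflow element maps to the set of elements it depends on.
# Names are hypothetical examples, not required identifiers.
workflow_deps = {
    "params":   set(),
    "inputs":   set(),
    "sim_unit": {"params", "inputs"},
    "writer":   {"sim_unit"},
}

# static_order() yields a valid execution order: every element appears
# only after all of its dependencies.
execution_order = list(TopologicalSorter(workflow_deps).static_order())
```

A cycle in the dependencies (element A needs B, B needs A) has no valid order; `graphlib` raises `CycleError` in that case.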
Each element in the workflow must adhere to a specific schema defined by the ARES Pydantic models. The `type` field is mandatory and determines the validation rules.
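A minimal workflow might look as follows. All element names (`my_inputs`, `my_params`, `controller`) and file paths are hypothetical; the fields follow the element schemas documented in this section:

```json
{
  "my_inputs": {
    "type": "data",
    "mode": "read",
    "file_path": ["./measurements/"]
  },
  "my_params": {
    "type": "parameter",
    "mode": "read",
    "file_path": ["./calibration.json"]
  },
  "controller": {
    "type": "sim_unit",
    "file_path": "./build/controller.so",
    "data_dictionary": "./controller_dd.json",
    "stepsize": 10,
    "data": ["my_inputs"],
    "parameter": ["my_params"]
  }
}
```

Because `controller` lists `my_inputs` and `my_params`, the pipeline executes those two elements first.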
Handles time-dependent signal data (e.g., measurement files, time-series). These signals are typically fed into simulation units step-by-step during execution.
| Field | Required | Type | Supported Values | Description |
|---|---|---|---|---|
| `type` | Yes | `str` | `"data"` | Unique identifier for the element type. |
| `mode` | Yes | `str` | `"read"`, `"write"` | Operation mode. |
| `file_path` | If `read` | `list[str]` | | Path(s) to input data files or directories. Accepted file format: `.mf4`. Directories are scanned non-recursively; all matching files are included, others are silently skipped. |
| `data` | If `write` | `list[str]` | | List of element names to write to file. |
| `output_format` | If `write` | `str` | `"mf4"` | Target file format. |
| `label_filter` | No | `list[str]` | | Filter specific signals by name or pattern. |
| `stepsize` | No | `int` | | Resampling step size in ms. |
| `vstack_pattern` | No | `list[str] \| list[dict]` | | List of regular expressions to stack signals into arrays. Using the dict version enables additional fields: `signal_name`, `x-axis`, and `y-axis`. |
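For example, a read-mode data element with optional filtering and resampling might look like this (the element name, path, and signal names are hypothetical):

```json
{
  "measurements": {
    "type": "data",
    "mode": "read",
    "file_path": ["./data/run_01.mf4"],
    "label_filter": ["engine_speed", "temp_.*"],
    "stepsize": 10
  }
}
```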
Handles parameter sets that remain constant throughout the simulation duration. These are used to configure, trim, or calibrate software components (e.g., characteristic curves, scalar values).
| Field | Required | Type | Supported Values | Description |
|---|---|---|---|---|
| `type` | Yes | `str` | `"parameter"` | Unique identifier for the element type. |
| `mode` | Yes | `str` | `"read"`, `"write"` | Operation mode. |
| `file_path` | If `read` | `list[str]` | | Path(s) to parameter files or directories. Accepted file formats: `.dcm`, `.json`. Directories are scanned non-recursively; all matching files are included, others are silently skipped. |
| `parameter` | If `write` | `list[str]` | | List of element names to write to file. |
| `output_format` | If `write` | `str` | `"dcm"`, `"json"` | Target file format. |
| `label_filter` | No | `list[str]` | | Filter specific parameters by name or pattern. |
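A write-mode parameter element could look like this; the element names are hypothetical and `optimizer_output` stands for any upstream element producing parameters:

```json
{
  "tuned_params": {
    "type": "parameter",
    "mode": "write",
    "parameter": ["optimizer_output"],
    "output_format": "json"
  }
}
```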
Executes a compiled dynamic library (e.g., .dll, .so). This can represent any software component, such as a controller algorithm or a physical plant model.
| Field | Required | Type | Supported Values | Description |
|---|---|---|---|---|
| `type` | Yes | `str` | `"sim_unit"` | Unique identifier for the element type. |
| `file_path` | Yes | `str` | | Path to the compiled library (`.dll`, `.so`). |
| `data_dictionary` | Yes | `str` | | Path to the data dictionary definition. |
| `stepsize` | Yes | `int` | | Simulation step size in ms. |
| `data` | Yes | `list[str]` | | List of data element names providing inputs. |
| `parameter` | No | `list[str]` | | List of parameter element names. |
| `init` | No | `list[str]` | | List of elements for initialization. |
| `cancel_condition` | No | `str` | | Expression to stop simulation early. |
| `vstack_pattern` | No | `list[str] \| list[dict]` | | List of regular expressions to stack signals into arrays. Using the dict version enables additional fields: `signal_name`, `x-axis`, and `y-axis`. |
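Putting these fields together, a simulation unit element might look like this. All names and paths are hypothetical, and the `cancel_condition` expression syntax shown is only illustrative:

```json
{
  "plant_model": {
    "type": "sim_unit",
    "file_path": "./build/plant.so",
    "data_dictionary": "./plant_dd.json",
    "stepsize": 5,
    "data": ["measurements"],
    "parameter": ["calibration"],
    "cancel_condition": "temperature > 150"
  }
}
```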
Allows users to execute custom Python scripts within the workflow. Plugins can depend on other elements or serve as dependencies for others. They are versatile and can be used for tasks such as data manipulation, optimization loops, plotting, or automated testing.
| Field | Required | Type | Supported Values | Description |
|---|---|---|---|---|
| `type` | Yes | `str` | `"plugin"` | Unique identifier for the element type. |
| `file_path` | Yes | `str` | | Path to the Python plugin script. |
| `plugin_name` | No | `str` | | Name of the plugin function to execute. If not specified, defaults to `"ares_plugin"`. |
| *Custom* | No | Any | | Additional fields as required by the plugin. |
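The plugin entry-point signature is not specified here, so the following is only a hypothetical sketch: a function with the documented default name `ares_plugin` that receives its workflow element's configuration and performs a simple data manipulation. The `config` parameter and the `factor` custom field are assumptions for illustration:

```python
# Hypothetical ARES plugin sketch. Only the default function name
# "ares_plugin" comes from the documentation; the signature and the
# "factor" custom field are illustrative assumptions.
def ares_plugin(config: dict) -> dict:
    """Scale every numeric value in the incoming configuration.

    `config` stands in for whatever context ARES passes to plugins.
    """
    factor = config.get("factor", 1.0)  # a custom field from the workflow JSON
    return {
        key: value * factor
        for key, value in config.items()
        if isinstance(value, (int, float)) and key != "factor"
    }
```

Because plugins can both depend on other elements and serve as dependencies, a function like this could sit between a simulation unit and a data writer in the workflow graph.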
Merges data and parameters from multiple workflow elements by creating all possible combinations (cartesian product) of the inputs. This is useful for combining results from different simulation units or data sources, generating all permutations of parameter sets and data sources.
When merging elements with overlapping parameter names or signal labels, later elements in the list take precedence and override values from earlier elements.
| Field | Required | Type | Supported Values | Description |
|---|---|---|---|---|
| `type` | Yes | `str` | `"merge"` | Unique identifier for the element type. |
| `parameter` | No | `list[str]` | | List of parameter element names to merge. All combinations are generated. Later elements override earlier ones. |
| `data` | No | `list[str]` | | List of data element names to merge. All combinations are generated. Later elements override earlier ones. |
| `label_filter_data` | No | `list[str]` | | Filter specific signals by name or pattern when merging data. |
| `label_filter_parameter` | No | `list[str]` | | Filter specific parameters by name or pattern when merging parameters. |
| `vstack_pattern_data` | No | `list[str] \| list[dict]` | | List of regular expressions to stack signals into arrays for data. Using the dict version enables additional fields: `signal_name`, `x-axis`, and `y-axis`. |
| `vstack_pattern_parameter` | No | `list[str] \| list[dict]` | | List of regular expressions to stack signals into arrays for parameters. Using the dict version enables additional fields: `signal_name`, `x-axis`, and `y-axis`. |
| `stepsize` | No | `int` | | Resampling step size in ms applied to merged data. |
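The cartesian-product and override semantics described above can be sketched in a few lines. This is not ARES's internal code, just an illustration with made-up parameter sets:

```python
from itertools import product

# Two hypothetical parameter variants from a first element, and one set
# from a second element whose "gain" key overlaps with the first.
param_sets_a = [{"gain": 1.0}, {"gain": 2.0}]
param_sets_b = [{"offset": 0, "gain": 9.9}]

# Every combination is generated; {**a, **b} lets the later element (b)
# override overlapping keys from the earlier element (a).
merged = [{**a, **b} for a, b in product(param_sets_a, param_sets_b)]
```

With two variants in the first list and one in the second, this yields two merged sets, and in both the overlapping `gain` value comes from the later element.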
The following diagram illustrates an example of an open-loop simulation workflow. It demonstrates how parameters and data are fed into simulation units, and how the output of one unit can serve as input for another.
```mermaid
flowchart LR
    PARAM1(Parameters 1) --> SWU1(SW Unit 1)
    PARAM1 --> SWU2(SW Unit 2)
    MEAS1(Data 1) --> SWU1
    SWU1 --> SWU2
    SWU2 --> MEAS2(Data 2)
    PARAM2(Parameters 2) --> SWU3(SW Unit 3)
    PARAM2 --> SWU4(SW Unit 4)
    PARAM2 --> SWU5(SW Unit 5)
    MEAS1 --> SWU3
    MEAS1 --> SWU4
    MEAS1 --> SWU5
    SWU3 --> SWU4
    SWU4 --> MEAS3(Data 3)
    SWU4 --> SWU5
    SWU5 --> PLUGIN1(Plugin 1)
    classDef Parameters color:#a44300, stroke:#a44300;
    classDef Data color:#1e9bec, stroke:#1e9bec;
    classDef SW_Unit color:#d30000, stroke:#d30000;
    classDef Plugin color:#e5d300, stroke:#e5d300;
    class PARAM1 Parameters;
    class PARAM2 Parameters;
    class MEAS1 Data;
    class MEAS2 Data;
    class MEAS3 Data;
    class SWU1 SW_Unit;
    class SWU2 SW_Unit;
    class SWU3 SW_Unit;
    class SWU4 SW_Unit;
    class SWU5 SW_Unit;
    class PLUGIN1 Plugin;
```
For a detailed explanation of the example applications, please refer to README.md.
ARES is constantly evolving. Future developments will focus on:
- Closed-Loop Simulation: Enabling feedback loops where the output of a simulation unit influences its own input in subsequent steps.
- Expanded Simulation Support: Integration of the FMI standard to support Functional Mock-up Units (FMUs).
- Additional Data File Formats: Support for more data formats (e.g., Parquet, MAT). Note that, for the time being, data sets can only be output as JSON.
- Parameter Support: Enabling writing of DCM parameter files (currently not possible).
- Enhanced Plugin System: More built-in plugins for common tasks like plotting and reporting.
- Performance Optimization: Parallel execution of independent workflow branches.
This project is licensed under the Apache License 2.0 — see the LICENSE file for details.
This project includes or depends on third-party software components. All dependencies and their respective licenses are documented in the NOTICE file in accordance with Apache License 2.0 requirements.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.