A Testbed Experiment Platform for testing Flower federated learning algorithms.
Flower Testbed is an open-source platform for experimenting with federated learning algorithms using the Flower framework. It provides a comprehensive environment for testing, monitoring, and managing federated learning experiments across different computational resources.
- Algorithm Management: Upload and test custom FL algorithms
- Model Tracking: Export model states at each federated round
- Metrics Monitoring: Real-time tracking of training metrics
- Resource Flexibility: CPU/GPU support with configurable client resources
- Node.js 20+
- pnpm
- Docker & Docker Compose
- Python 3.9+ (for Flower experiments)
Steps:

1. Clone the repository

   ```bash
   git clone https://github.com/phrp720/flower-testbed.git
   cd flower-testbed
   ```

2. Install dependencies

   ```bash
   pnpm deps
   ```

3. Copy the environment configuration

   ```bash
   cp .env.example .env
   ```

4. Start the PostgreSQL database

   ```bash
   docker compose -f deployments/development/docker-compose.yml up -d
   ```

5. Push the database schema

   ```bash
   pnpm db:push
   ```

6. Start the development server

   ```bash
   pnpm dev
   ```

7. Open the dashboard

   Navigate to http://localhost:3000/ in your web browser. The default credentials are `admin:admin`.
For each release, a `deployment.zip` file is provided. You can find it in the Releases section. This archive contains everything required to deploy the application, including:

- The Flower application
- PostgreSQL
- An `.env.example` file for configuration
1. Download and unzip the `deployment.zip` file.

2. Open the `.env.example` file, update the configuration values as needed, and rename it to `.env`.

3. Start the application using Docker Compose:

   ```bash
   docker compose up -d
   ```
1. Select Framework: choose your ML framework (PyTorch, TensorFlow, etc.)

2. Upload Files:
   - Algorithm (required): your FL strategy implementation (`.py`)
   - Model (optional): model definition (`.py`)
   - Config (optional): training configuration (`.py`, `.json`, `.yaml`)
   - Dataset (optional): custom dataset implementation (`.py`, `.csv`)

3. Configure Parameters:
   - Number of clients
   - Number of rounds
   - Client fraction (% of clients selected per round)
   - Local epochs
   - Learning rate

4. Start Experiment: click "Start Experiment" to begin
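The parameters above control a standard federated-averaging (FedAvg) loop. The toy sketch below is not the testbed's actual implementation; it only illustrates how `num_clients`, `num_rounds`, `client_fraction`, `local_epochs`, and `learning_rate` interact. The "model" is just a list of two numbers, and the per-client "training" step is a made-up update rule.

```python
import random

def run_simulation(num_clients=10, num_rounds=3, client_fraction=0.5,
                   local_epochs=1, learning_rate=0.01):
    """Toy FedAvg loop illustrating the experiment parameters."""
    random.seed(0)
    global_model = [0.0, 0.0]                     # stand-in for real model weights
    per_round = max(1, int(num_clients * client_fraction))
    for _ in range(num_rounds):
        # Each round, a fraction of clients is selected to train locally.
        selected = random.sample(range(num_clients), per_round)
        updates = []
        for cid in selected:
            local = list(global_model)
            for _ in range(local_epochs):          # local training epochs
                # Placeholder "gradient step" pulling weights toward cid + 1.
                local = [w - learning_rate * (w - (cid + 1)) for w in local]
            updates.append(local)
        # FedAvg: average the client updates into the new global model.
        global_model = [sum(ws) / len(ws) for ws in zip(*updates)]
    return global_model

print(run_simulation())
```

With `client_fraction=0.5` and 10 clients, 5 clients participate in each of the 3 rounds; raising `local_epochs` makes each selected client take more local steps before its weights are averaged.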
Tip
You can download templates for Algorithm, Config, Strategy, and Dataset files to get started quickly.
Note
The application currently supports only PyTorch. Support for TensorFlow is coming soon.
The gh-action/ directory contains a reusable GitHub Action that lets any repository trigger a simulation on a running Flower Testbed instance whenever files are pushed to a designated folder.
1. Add secrets to your repository (Settings → Secrets and variables → Actions):

   | Secret | Description |
   |---|---|
   | `TESTBED_URL` | URL of your testbed instance (must be `https://`) |
   | `TESTBED_USERNAME` | Login username |
   | `TESTBED_PASSWORD` | Login password |
Create a simulation folder in your repo (default:
flower-simulation/) and add your files following the naming convention:File pattern Type strategy*.pyFL strategy implementation model*.pyModel definition config*.py/config*.json/config*.yamlTraining configuration dataset*.pyCustom dataset loader -
3. Add the workflow: copy `gh-action/examples/workflow.yml` to `.github/workflows/flower-simulation.yml` in your repo:

   ```yaml
   on:
     push:
       paths:
         - 'flower-simulation/**'

   jobs:
     simulate:
       runs-on: ubuntu-latest
       steps:
         - uses: actions/checkout@v4
         - uses: phrp720/flower-testbed/gh-action@main
           with:
             testbed_url: ${{ secrets.TESTBED_URL }}
             testbed_username: ${{ secrets.TESTBED_USERNAME }}
             testbed_password: ${{ secrets.TESTBED_PASSWORD }}
             num_rounds: '5'
   ```
4. Push: every push touching `flower-simulation/` will trigger a new experiment automatically.
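The naming convention above amounts to simple glob matching. This sketch (the action's real matching logic may differ) shows how files in the simulation folder could be classified:

```python
import fnmatch

# Patterns from the naming convention table above.
PATTERNS = {
    "strategy*.py": "FL strategy implementation",
    "model*.py": "Model definition",
    "config*.py": "Training configuration",
    "config*.json": "Training configuration",
    "config*.yaml": "Training configuration",
    "dataset*.py": "Custom dataset loader",
}

def classify(filename):
    """Return the simulation-file type, or None if the name doesn't match."""
    for pattern, kind in PATTERNS.items():
        if fnmatch.fnmatch(filename, pattern):
            return kind
    return None

print(classify("strategy_fedavg.py"))  # FL strategy implementation
print(classify("config_mnist.yaml"))   # Training configuration
print(classify("notes.txt"))           # None
```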
| Input | Default | Description |
|---|---|---|
| `testbed_url` | — | Testbed instance URL (required) |
| `testbed_username` | — | Auth username (required) |
| `testbed_password` | — | Auth password (required) |
| `simulation_folder` | `flower-simulation` | Folder to scan for simulation files |
| `experiment_name` | `<repo>@<sha>` | Optional base name for the experiment. When set, the action uses `<experiment_name>-<shortSHA>` |
| `framework` | `pytorch` | ML framework |
| `num_clients` | `10` | Number of federated clients |
| `num_rounds` | `3` | Number of federated rounds |
| `client_fraction` | `0.5` | Fraction of clients selected per round |
| `local_epochs` | `1` | Local training epochs per client |
| `learning_rate` | `0.01` | Client optimizer learning rate |
| `wait_for_completion` | `false` | Block the job until the experiment finishes |
| `timeout_minutes` | `60` | Max wait time when `wait_for_completion` is `true` |
| Output | Description |
|---|---|
| `experiment_id` | ID of the created experiment |
| `experiment_url` | Direct link to the experiment on the dashboard |
| `status` | Final status: `pending` / `running` / `completed` / `failed` |
| `final_accuracy` | Final accuracy (set only when `wait_for_completion: true`) |
| `final_loss` | Final loss (set only when `wait_for_completion: true`) |
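With `wait_for_completion: true`, the action polls the experiment until it reaches a terminal status or the timeout expires. The sketch below shows that polling pattern in isolation; the testbed's HTTP API is not documented here, so the status lookup is a caller-supplied stub rather than a real request:

```python
import time

def wait_for_completion(fetch_status, timeout_minutes=60, poll_seconds=10):
    """Poll until the experiment reaches a terminal state or the deadline passes.

    `fetch_status` is a caller-supplied function returning one of
    'pending', 'running', 'completed', or 'failed'.
    """
    deadline = time.monotonic() + timeout_minutes * 60
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in ("completed", "failed"):
            return status
        time.sleep(poll_seconds)
    return "timeout"

# Stubbed status sequence standing in for real API responses:
statuses = iter(["pending", "running", "completed"])
print(wait_for_completion(lambda: next(statuses), poll_seconds=0))  # completed
```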
This is a research project. Contributions, issues, and feature requests are welcome!
This project is licensed under the MIT license.
See LICENSE for more information.