This document describes the database architecture used by HyperFleet API.
HyperFleet API uses PostgreSQL with the GORM ORM. The schema follows a simple relational model with polymorphic associations.
`clusters` - Primary resources for cluster management. Contain cluster metadata and a JSONB `spec` field for provider-specific configuration.
`node_pools` - Child resources owned by clusters, representing groups of compute nodes. Use a foreign key relationship to `clusters` with cascade delete.
`adapter_statuses` - Polymorphic status records for both clusters and node pools, storing adapter-reported conditions in JSONB format.
Polymorphic pattern:
- `owner_type` + `owner_id` allows one table to serve both clusters and node pools
- Enables efficient status lookups across resource types
`labels` - Key-value pairs for resource categorization and search. Use a polymorphic association to support both clusters and node pools.
```
clusters (1) ──→ (N) node_pools
    │                   │
    │                   │
    └─────────┬─────────┘
              │
              ├──→ adapter_statuses (polymorphic)
              └──→ labels (polymorphic)
```
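For illustration, GORM models for this schema might look roughly like the sketch below. All field names, types, and tags are assumptions for illustration, not the actual HyperFleet definitions:

```go
// Illustrative sketch only; the real HyperFleet models may differ.
package models

import (
	"time"

	"gorm.io/datatypes"
	"gorm.io/gorm"
)

type Cluster struct {
	ID        string         `gorm:"primaryKey"`
	Name      string
	Spec      datatypes.JSON `gorm:"type:jsonb"`                  // provider-specific configuration
	NodePools []NodePool     `gorm:"constraint:OnDelete:CASCADE"` // cascade delete of children
	CreatedAt time.Time
	UpdatedAt time.Time
	DeletedAt gorm.DeletedAt `gorm:"index"` // soft delete
}

type NodePool struct {
	ID        string         `gorm:"primaryKey"`
	ClusterID string         `gorm:"index"` // foreign key to clusters
	Spec      datatypes.JSON `gorm:"type:jsonb"`
	DeletedAt gorm.DeletedAt `gorm:"index"`
}

// AdapterStatus and Label share the polymorphic owner_type/owner_id pattern,
// so one table serves both clusters and node pools.
type AdapterStatus struct {
	ID         string         `gorm:"primaryKey"`
	OwnerType  string         `gorm:"index:idx_status_owner"` // "clusters" or "node_pools"
	OwnerID    string         `gorm:"index:idx_status_owner"`
	Conditions datatypes.JSON `gorm:"type:jsonb"` // adapter-reported condition array
	Data       datatypes.JSON `gorm:"type:jsonb"` // adapter metadata
}

type Label struct {
	ID        string `gorm:"primaryKey"`
	OwnerType string `gorm:"index:idx_label_owner"`
	OwnerID   string `gorm:"index:idx_label_owner"`
	Key       string
	Value     string
}
```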
Flexible schema storage for:
- `spec` - Provider-specific cluster/node pool configurations
- `conditions` - Adapter status condition arrays
- `data` - Adapter metadata
Benefits:
- Support multiple cloud providers without schema changes
- Runtime validation against OpenAPI schema
- PostgreSQL JSON query capabilities
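For instance, a provider filter can be pushed down to PostgreSQL as a JSONB expression. A hypothetical GORM query, assuming a `db *gorm.DB` handle and the `Cluster` model sketched above:

```go
// Hypothetical example: find clusters whose JSONB spec declares provider "aws".
// The ->> operator extracts a JSON field as text, so no schema change is
// needed when new providers add their own spec fields.
var clusters []Cluster
err := db.Where("spec->>'provider' = ?", "aws").Find(&clusters).Error
```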
Resources use GORM's soft delete pattern with a `deleted_at` timestamp. Soft-deleted records are excluded from queries by default.
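As a sketch of the pattern, given a `*gorm.DB` handle `db` and a model embedding `gorm.DeletedAt`:

```go
// Delete issues UPDATE ... SET deleted_at = now() instead of a real DELETE.
db.Delete(&Cluster{}, "id = ?", id)

// Normal queries automatically append WHERE deleted_at IS NULL.
var live []Cluster
db.Find(&live)

// Unscoped includes soft-deleted rows (e.g., for audits or hard deletes).
var all []Cluster
db.Unscoped().Find(&all)
```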
Uses GORM AutoMigrate:
- Non-destructive (never drops columns or tables)
- Additive (creates missing tables, columns, indexes)
- Run via `./bin/hyperfleet-api migrate`
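A minimal sketch of what the migrate step likely amounts to, assuming the illustrative models above:

```go
// Sketch: AutoMigrate creates missing tables, columns, and indexes,
// but never drops existing ones (model list assumed from the schema above).
err := db.AutoMigrate(&Cluster{}, &NodePool{}, &AdapterStatus{}, &Label{})
```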
Problem: During rolling deployments, multiple pods attempt to run migrations simultaneously, causing race conditions and deployment failures.
Solution: PostgreSQL advisory locks ensure exclusive migration execution.
```go
// Only one pod/process acquires the lock and runs migrations;
// others wait until the lock is released.
db.MigrateWithLock(ctx, factory)
```

Implementation:
- Pod sets statement timeout (5 minutes) to prevent indefinite blocking
- Pod acquires the advisory lock via `pg_advisory_xact_lock(hash("migrations"), hash("Migrations"))`
- Lock holder runs migrations exclusively
- Other pods block until lock is released or timeout is reached
- Lock automatically released on transaction commit
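A sketch of the locking pattern using GORM's `Transaction` helper; the function name and lock keys here are illustrative assumptions, and the real `MigrateWithLock` may differ:

```go
package db

import (
	"context"

	"gorm.io/gorm"
)

// Sketch: pg_advisory_xact_lock blocks until the lock is free and is released
// automatically when the transaction commits, rolls back, or the session dies.
func migrateWithLock(ctx context.Context, db *gorm.DB) error {
	return db.WithContext(ctx).Transaction(func(tx *gorm.DB) error {
		// Fail instead of blocking forever if another pod hangs mid-migration.
		if err := tx.Exec("SET LOCAL statement_timeout = '5min'").Error; err != nil {
			return err
		}
		// Transaction-scoped advisory lock; the two int4 keys stand in for
		// hash("migrations"), hash("Migrations") in the actual implementation.
		if err := tx.Exec("SELECT pg_advisory_xact_lock(?, ?)", 42, 7).Error; err != nil {
			return err
		}
		// Only the lock holder reaches this point; run migrations exclusively.
		return tx.AutoMigrate(&Cluster{}, &NodePool{}, &AdapterStatus{}, &Label{})
	})
}
```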
Key Features:
- Zero infrastructure overhead - Uses native PostgreSQL locks
- Automatic cleanup - Locks released on transaction end or pod crash
- Timeout protection - 5-minute timeout prevents indefinite blocking if a pod hangs
- Nested lock support - Same lock can be acquired in nested contexts without deadlock
- UUID-based ownership - Only original acquirer can unlock
Integration tests validate concurrent behavior:
```sh
make test-integration   # Runs TestConcurrentMigrations
```

Test coverage:
- `TestConcurrentMigrations` - Multiple pods running migrations simultaneously
- `TestAdvisoryLocksConcurrently` - Lock serialization under race conditions
- `TestAdvisoryLocksWithTransactions` - Lock + transaction interaction
- `TestAdvisoryLockBlocking` - Lock blocking behavior
```sh
# Create PostgreSQL container
make db/setup

# Run migrations
./bin/hyperfleet-api migrate

# Connect to database
make db/login
```

See development.md for detailed setup instructions.
The API uses an optimized transaction strategy to maximize connection pool efficiency and reduce latency under high adapter polling load.
Write operations create full GORM transactions with ACID guarantees:
- Transaction begins before handler execution
- Automatic commit on success, rollback on error (via `MarkForRollback()`)
- Transaction ID tracked in logs for debugging
Read operations skip transaction creation entirely for performance:
- Direct database session without BEGIN/COMMIT overhead
- No transaction ID consumption
- Reduced connection hold time and pool pressure
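A rough sketch of how such a split might be wired as HTTP middleware; the helper names here (`dbKey`, `rollbackRequested`) are hypothetical stand-ins, not the actual HyperFleet code:

```go
package middleware

import (
	"context"
	"net/http"

	"gorm.io/gorm"
)

type dbKey struct{}

// Sketch: GET/HEAD requests run on the plain session (no BEGIN/COMMIT, no
// transaction ID); all other methods get a full transaction that commits on
// success. rollbackRequested is a hypothetical stand-in for the error set by
// MarkForRollback(); returning it non-nil triggers the rollback.
func withDB(db *gorm.DB, rollbackRequested func(*http.Request) error, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Method == http.MethodGet || r.Method == http.MethodHead {
			ctx := context.WithValue(r.Context(), dbKey{}, db.WithContext(r.Context()))
			next.ServeHTTP(w, r.WithContext(ctx))
			return
		}
		_ = db.WithContext(r.Context()).Transaction(func(tx *gorm.DB) error {
			ctx := context.WithValue(r.Context(), dbKey{}, tx)
			next.ServeHTTP(w, r.WithContext(ctx))
			return rollbackRequested(r)
		})
	})
}
```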
List Operations: COUNT and SELECT queries execute as separate autocommit statements (read operations don't use transactions). PostgreSQL's default READ COMMITTED isolation level means each statement gets a fresh snapshot:
- Under concurrent deletes, the `total` count may slightly exceed the actual `items` returned
- This is a cosmetic pagination issue, not a data integrity problem
- Occurs only during the ~1ms window between COUNT and SELECT
- Low probability in practice (requires delete between two consecutive queries)
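In GORM terms the two statements look roughly like this; each runs as its own autocommit statement with its own snapshot:

```go
// Two independent statements: a row deleted between them can make
// total exceed len(items) for that one response.
var total int64
db.Model(&Cluster{}).Count(&total)

var items []Cluster
db.Limit(pageSize).Offset(offset).Find(&items)
```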
Why not use transactions for reads? Creating transactions for every GET request would:
- Increase connection pool pressure under high adapter polling load
- Consume transaction IDs unnecessarily
- Add latency (BEGIN/COMMIT overhead)
Why not use REPEATABLE READ? The current inconsistency is acceptable for pagination UX. REPEATABLE READ would add overhead and doesn't align with the read-heavy workload optimization.
Alternative: Clients can use continuation tokens (Kubernetes pattern) instead of page/total pagination if strict consistency is required.
The API manages a Go `sql.DB` connection pool with the following tunable parameters, exposed as CLI flags:
| Flag | Default | Description |
|---|---|---|
| `--db-max-open-connections` | 50 | Maximum open connections to the database |
| `--db-max-idle-connections` | 10 | Maximum idle connections retained in the pool |
| `--db-conn-max-lifetime` | 5m | Maximum time a connection can be reused before being closed |
| `--db-conn-max-idle-time` | 1m | Maximum time a connection can sit idle before being closed |
| `--db-request-timeout` | 30s | Context deadline applied to each HTTP request's database transaction |
| `--db-conn-retry-attempts` | 10 | Retry attempts for the initial database connection on startup |
| `--db-conn-retry-interval` | 3s | Wait time between connection retry attempts |
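These flags map directly onto the standard Go `database/sql` pool setters; roughly:

```go
// Sketch: wiring the flag values into the database/sql pool
// (defaults shown; the actual flag parsing is elided).
sqlDB, err := gormDB.DB()
if err != nil {
	log.Fatal(err)
}
sqlDB.SetMaxOpenConns(50)                 // --db-max-open-connections
sqlDB.SetMaxIdleConns(10)                 // --db-max-idle-connections
sqlDB.SetConnMaxLifetime(5 * time.Minute) // --db-conn-max-lifetime
sqlDB.SetConnMaxIdleTime(1 * time.Minute) // --db-conn-max-idle-time
```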
Every API request that touches the database gets a context deadline via `--db-request-timeout`. If a request cannot acquire a connection or complete its query within this window, it fails with a `500` and the connection is released. This prevents requests from hanging indefinitely when the pool is exhausted under load.
Note: Under heavy load you may see `500` responses caused by the request timeout. This is expected backpressure behavior: the API deliberately fails fast rather than letting requests queue indefinitely. Clients (e.g., adapters) should treat these as transient errors and retry with backoff. If `500` rates are sustained, consider scaling the deployment or tuning `--db-max-open-connections` and `--db-request-timeout`.
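A sketch of the mechanism inside a handler, assuming the default 30s deadline and the hypothetical `db`, `id`, `w`, and `r` variables in scope:

```go
// Sketch: the request's DB work inherits a 30s deadline (--db-request-timeout).
ctx, cancel := context.WithTimeout(r.Context(), 30*time.Second)
defer cancel()

var cluster Cluster
if err := db.WithContext(ctx).First(&cluster, "id = ?", id).Error; err != nil {
	// A deadline exceeded here (pool exhausted or slow query) is surfaced
	// as a 500 and the connection is released back to the pool.
	http.Error(w, "internal error", http.StatusInternalServerError)
	return
}
```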
On startup the API retries the database connection up to `--db-conn-retry-attempts` times. This handles sidecar startup races (e.g., pgbouncer may not be listening when the API container starts). Retries are logged at WARN level with attempt counts.
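A sketch of the retry loop, using the default flag values:

```go
// Sketch: retry the initial connection to ride out sidecar startup races.
var db *gorm.DB
var err error
for attempt := 1; attempt <= 10; attempt++ { // --db-conn-retry-attempts
	if db, err = gorm.Open(postgres.Open(dsn), &gorm.Config{}); err == nil {
		break
	}
	log.Printf("WARN: database connection attempt %d/10 failed: %v", attempt, err)
	time.Sleep(3 * time.Second) // --db-conn-retry-interval
}
```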
The readiness probe (`/readyz`) pings the database with a separate timeout controlled by `--health-db-ping-timeout` (default 2s). This ensures health checks respond quickly even when the main connection pool is under pressure, preventing Kubernetes from removing the pod from service endpoints due to slow readiness responses during load spikes. The default is set below the Kubernetes readiness probe `timeoutSeconds` (3s) so the Go-level timeout fires first and returns a proper `503` rather than a connection timeout.
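A sketch of the readiness handler, assuming the default 2s ping timeout:

```go
// Sketch: /readyz pings with its own short timeout so health checks stay
// fast even when the main connection pool is saturated.
func readyz(sqlDB *sql.DB) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		ctx, cancel := context.WithTimeout(r.Context(), 2*time.Second) // --health-db-ping-timeout
		defer cancel()
		if err := sqlDB.PingContext(ctx); err != nil {
			// Fires before the kubelet's 3s probe timeout, returning a clean 503.
			http.Error(w, "database unavailable", http.StatusServiceUnavailable)
			return
		}
		w.WriteHeader(http.StatusOK)
	}
}
```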
The Helm chart supports generic sidecar injection via the `sidecars` list in `values.yaml`. Each entry is a full Kubernetes container spec injected into the deployment pod. This can be used for any purpose: database proxies, log shippers, monitoring agents, etc.
A common use case is database proxy sidecars (PgBouncer, Cloud SQL Auth Proxy). The example below shows a PgBouncer configuration.
```yaml
# values.yaml
sidecars:
  - name: pgbouncer
    image: public.ecr.aws/bitnami/pgbouncer:1.25.1
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: [ALL]
      readOnlyRootFilesystem: true
      seccompProfile:
        type: RuntimeDefault
    ports:
      - name: pgbouncer
        containerPort: 6432
        protocol: TCP
    env:
      - name: POSTGRESQL_HOST
        value: my-postgresql-host
      - name: POSTGRESQL_PORT
        value: "5432"
      - name: POSTGRESQL_DATABASE
        value: hyperfleet
      - name: POSTGRESQL_USERNAME
        value: hyperfleet
      - name: POSTGRESQL_PASSWORD
        valueFrom:
          secretKeyRef:
            name: my-db-secret
            key: db.password
      - name: PGBOUNCER_PORT
        value: "6432"
      - name: PGBOUNCER_POOL_MODE
        value: transaction
    resources:
      limits:
        cpu: 200m
        memory: 128Mi
      requests:
        cpu: 50m
        memory: 64Mi
```

When using a proxy sidecar, the API container must connect to the proxy instead of PostgreSQL directly. The `database.external.secretName` secret must contain the direct database endpoint (host/port) so the `db-migrate` init container can run migrations before sidecars start. To route runtime traffic through the proxy, use `extraEnv` overrides in the main container:
```yaml
extraEnv:
  - name: HYPERFLEET_DATABASE_HOST
    value: "localhost"
  - name: HYPERFLEET_DATABASE_PORT
    value: "6432"  # proxy port
```

Use `extraVolumes` and `extraVolumeMounts` for any volumes the sidecar requires (e.g., temp dirs, config dirs).
```
┌─────────────────────────────────────────┐
│ Pod                                     │
│                                         │
│  ┌──────────────┐     ┌──────────┐      │
│  │  hyperfleet  │────▶│  proxy   │──────┼──▶ Database
│  │     API      │     │ sidecar  │      │
│  │ (localhost)  │     └──────────┘      │
│  └──────────────┘                       │
│                                         │
│  Init containers (migrate) ─────────────┼──▶ Database
│  (direct connection, bypasses proxy)    │
└─────────────────────────────────────────┘
```
- Init containers (migrations) should connect directly to the database — they run before sidecars start, and DDL operations may not be compatible with connection pooling.
- PgBouncer: Lightweight connection pooler. Use `transaction` pool mode for stateless APIs. See the commented example in `values.yaml`.
- Cloud SQL Auth Proxy: Required for GCP Cloud SQL access without complex network/IP setup. See the commented example in `values.yaml`.
- Development Guide - Database setup and migrations
- Deployment - Database configuration and connection settings
- API Resources - Resource data models