Releases: LabOverWire/MQDB
v0.7.6
2026-05-06 — mqdb-cli 0.7.6
Fixed
- `--timeout` did not apply during the MQTT CONNECT handshake. `connect_client` in `crates/mqdb-cli/src/common.rs` only wrapped the request/response wait in `tokio::time::timeout`; the `MqttClient::connect[_with_options]` calls themselves had no timeout, so any command (`mqdb list`, `read`, `create`, `update`, `delete`, etc.) would hang indefinitely against a TCP listener that accepts the connection but never sends CONNACK (silent broker, half-open NAT, firewall drop after SYN-ACK).
- Extracted a shared `connect_with_timeout(client, client_id, conn)` helper in `common.rs` that wraps both `MqttClient::connect[_with_options]` calls in `tokio::time::timeout(Duration::from_secs(conn.timeout), …)` and surfaces `connect to {broker} timed out after {N}s` on expiry. The helper also honors `conn.insecure` for self-signed TLS: previously only the bench paths set this; the CRUD path silently skipped it.
- Routed every CLI bench/dev_bench connect through the new helper to close the same bug class for `mqdb bench db` (sync + async + cascade + unique + changefeed), `mqdb bench pubsub`, `mqdb dev bench` (db/pubsub/sub-pub), and the broker-readiness probes in both `bench/common.rs::wait_for_broker_ready` and `dev_bench/helpers.rs::wait_for_broker_ready`. Removed two now-redundant local `connect_client` helpers in `db_cascade.rs` and `db_changefeed.rs`. The `pubsub.rs` paths use custom `ConnectOptions` (clean-start, custom keep-alive), so their connect calls are wrapped inline with the same timeout pattern rather than going through the helper.
- Regression test `test_cli_connect_timeout_against_silent_listener` in `crates/mqdb-cli/tests/cli_test.rs` spawns a TCP listener that accepts the connection without speaking MQTT and asserts that `mqdb list ... --timeout 2` exits within 5 seconds with a "timed out" error. Verified to fail on main (pre-fix it exits at ~6s with "Connection reset by peer") and pass with the fix in place.
2026-05-06 — mqdb-cluster 0.3.4
Fixed
- Partition snapshot import did not populate `FkReverseIndex`. This was the "Known gap" called out in the 0.3.2 entry. After a rebalance-driven replica promotion, the new primary held the imported `db_data` records and FK constraints, but its in-memory reverse-index cache (`(target_entity, target_id, source_entity, source_field) → {source_id, …}`) was empty for those records. `start_fk_reverse_lookup` and `handle_fk_reverse_lookup_request` would return empty for any record sitting on a newly-imported partition, causing ON DELETE CASCADE to miss children that the new primary owned and ON DELETE RESTRICT to silently allow deletes that should have been blocked. `StoreManager::import_partition` now calls a new private `rebuild_fk_indexes_after_import` step at the end of the import. It iterates every registered FK constraint and calls the existing `rebuild_fk_index_for_constraint` helper, which walks `db_data.list(source_entity)` (now populated with the just-imported records) and seeds the reverse index. Mirrors the existing pattern at `apply.rs:215`, where a constraint Insert via Raft replication triggers the same rebuild.
- Test coverage: 12 new tests (466 → 478 in the cluster lib). Direct `FkReverseIndex` unit tests in `data_store.rs` (insert/lookup/remove, idempotent inserts, removing unknown source ids, field-scoped keys); `update_fk_reverse_index` and `rebuild_fk_index_for_constraint` unit tests in `constraint_ops.rs` (Insert/Update/Delete paths, no-op when no constraints, malformed JSON, non-FK constraint); and a regression test `import_partition_rebuilds_fk_reverse_index` in `partition_io.rs` that confirmed by fail-on-disable / pass-on-restore that the rebuild call is what makes the assertion pass.
- E2E in `examples/cluster-rebalance-stores/run.sh` now creates 20 extra child comments (2 per parent) spread across all 10 parents and adds a cascade-via-node-4 observation: it deletes every parent through node 4 after rebalance, then prints how many of the eligible children were cascade-removed. Surfaced as an observation rather than a hard assertion because cascade outcomes through any specific node depend on whether that node has the FK constraint locally, which is governed by schema/constraint replication topology (a separate concern; see below).
Discovered while running the new E2E (separate follow-up)
- Constraints don't reach all nodes uniformly. Across runs of the new E2E, only the leader (node 1) consistently held both the unique and FK constraints locally; nodes 2/3 sometimes had a subset, and a freshly-joined node 4 had none. Because constraints route through `schema_partition(entity)`, any node that doesn't own that partition reaches the constraint only via forwarding, not in its local `db_constraints` store. The `FkReverseIndex` rebuild this PR adds is correct in its scope (it rebuilds for whatever constraints the importing node has locally), but a fully-correct cascade through every node requires constraints to be cluster-wide broadcast state. Tracked as future work alongside the schema replication topology issue first noted in the 0.3.2 CHANGELOG entry.
v0.7.5
Release 0.7.5
v0.7.3
Release 0.7.3
v0.7.2
Affected crates: mqdb-core (0.5.1), mqdb-agent (0.7.0), mqdb-cli (0.7.2).
Added
- `MqdbAgent::start()` method that returns a `JoinHandle` and a `watch::Receiver<bool>` readiness signal, firing only after both the TCP accept loop and the internal `$DB/#` handler are ready
- Handler readiness oneshot in `spawn_handler_task`: signals after the `$DB/#` subscribe succeeds
Fixed
- Replace hardcoded 500ms sleep in CLI tests with the deterministic `start()` + `ready_rx` readiness signal
- Replace `wait_for_port` + `wait_for_ready` polling in admin tests with `start()` + `ready_rx`
- Replace static port counters with OS-assigned ephemeral ports in all test suites (agent, cli, cluster) to eliminate cross-binary port collisions
- Ensure the database directory tree exists before the fjall open to prevent EBADF on `FROM scratch` Docker images
- Direct tracing subscriber output to stderr in the CLI to prevent log lines from corrupting JSON stdout
v0.7.1
Affected crates: mqdb-core (0.5.1), mqdb-agent (0.6.1).
Security
- Cap password length at 256 bytes to prevent Argon2id CPU exhaustion
- Add rate limiters to vault enable/change MQTT handlers and OAuth token refresh endpoint
- Validate entity names (alphanumeric, `_`, `-`, max 128 chars) and record IDs (reject `+`, `#`, `/`, max 512 bytes) in topic parsers
- Reject JSON payloads over 4 MiB before parsing
- Normalize challenge error messages to prevent internal status leakage
- Replace bare SHA256 with HMAC-SHA256 for email hash fallback
v0.7.0
Affected crates: mqdb-core (0.5.0), mqdb-agent (0.6.0), mqdb-cli (0.7.0).
Added
- Password reset endpoints: `POST /auth/password/reset/start` and `POST /auth/password/reset/submit` (HTTP, unauthenticated) for the "forgot password" flow
- Password reset MQTT topics: `$DB/_auth/password/reset/start` and `$DB/_auth/password/reset/submit` for authenticated users
- Challenge `purpose` field to distinguish password reset from email verification challenges
- Purpose guard in `handle_verify_submit` to reject password reset challenges
- `--no-rate-limit` now disables all HTTP rate limiters (login, register, verify, password change, password reset)
- `AdminRequired` topic protection now falls through to ACL for non-admin users, enabling operator-provisioned service accounts
Security
- Promote `$DB/_verify/#` to the `AdminRequired` topic protection tier to prevent leakage of verification codes and receipt spoofing
v0.6.0
Affected crates: mqdb-core (0.4.0), mqdb-agent (0.5.0), mqdb-cluster (0.3.0), mqdb-cli (0.6.0).
Added
- Password change endpoint: `POST /auth/password/change` (HTTP) and `$DB/_auth/password/change` (MQTT) for email-auth users with a verified email
- `$DB/_auth/` topic namespace for self-service auth operations, exempt from topic protection
- MQTT 5.0 `correlation_data` echoing in all DB and admin response handlers, enabling `mqttv5 --wait-response` and standard request-response clients
- Dedicated `password_change_rate_limiter` (HTTP) and reuse of `vault_unlock_limiter` (MQTT) for brute-force protection
Changed
- Cluster mode returns an explicit error for `$DB/_auth/` topics (agent-only)
v0.5.0
Affected crates: mqdb-core (0.3.0), mqdb-agent (0.4.0), mqdb-cluster (0.2.0), mqdb-cli (0.5.0).
Added
- MQTT vault admin operations: `$DB/_vault/{enable,unlock,lock,disable,change,status}` for self-service vault management over MQTT 5.0 request-response, no HTTP session required
- Shared `vault_ops` module extracting transport-agnostic vault batch operations from HTTP handlers
- Direct-DB vault operations (`_db` variants) for the MQTT handler path, avoiding deadlock from nested MQTT round-trips in the sequential message handler loop
- `ErrorCode::RateLimited` (429) for vault unlock brute-force protection over MQTT
- Topic protection exemptions for `$DB/_vault/*` and `$DB/_verify/*` (non-admin users can access these)
- `--vault-min-passphrase-length` flag (env: `MQDB_VAULT_MIN_PASSPHRASE_LENGTH`, default 0) to enforce a minimum passphrase length on vault enable and change operations
Changed
- Vault HTTP handlers refactored to thin wrappers over shared `vault_ops` functions
- Cluster mode returns an explicit error for vault admin topics (vault requires agent mode)
v0.4.0
Add email verification protocol and email/password auth
v0.3.0
Affected crates: mqdb-cli.
Added
- Environment variable support for all `agent start` and `cluster start` CLI flags (`MQDB_BIND`, `MQDB_DB`, `MQDB_DURABILITY`, `MQDB_NODE_ID`, etc.)
- Inline content environment variables for file-path flags: `MQDB_PASSWD`, `MQDB_ACL`, `MQDB_SCRAM`, `MQDB_JWT_KEY`, `MQDB_PASSPHRASE`, `MQDB_LICENSE`, `MQDB_QUIC_CERT`, `MQDB_QUIC_KEY`, `MQDB_QUIC_CA`, `MQDB_OAUTH_CLIENT_SECRET`, `MQDB_IDENTITY_KEY`, `MQDB_FEDERATED_JWT_CONFIG`, `MQDB_CERT_AUTH`
- Precedence: CLI flags > inline env vars (`MQDB_*`) > file-path env vars (`MQDB_*_FILE`)