
feat: database-jobs v2 — upstream-compatible API with internalized JWT context#61

Merged
pyramation merged 10 commits into main from feat/database-jobs-v2
May 3, 2026
Conversation


@pyramation (Contributor) commented May 1, 2026

Summary

Upgrades database-jobs with upstream Graphile Worker performance optimizations and full API internalization.

Performance: queue_name defaults to NULL (no throwaway queues), Strategy 2 get_job (~20x improvement), is_available generated column + partial covering indexes, statement-level NOTIFY trigger.
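The generated-column-plus-partial-index piece can be sketched roughly as follows (table and column names are assumed from the summary; the index name matches the commit notes):

```sql
-- Sketch only: table/column names assumed from the summary above.
-- is_available is computed by Postgres, never written by the application.
ALTER TABLE jobs
  ADD COLUMN is_available boolean
  GENERATED ALWAYS AS (locked_at IS NULL AND attempts < max_attempts) STORED;

-- Partial covering index for the no-queue hot path: only available,
-- unqueued jobs are indexed, in exactly the order get_job scans them.
CREATE INDEX jobs_no_queue_index
  ON jobs (priority, run_at, id)
  WHERE is_available AND queue_name IS NULL;
```

Because the predicate excludes locked and exhausted jobs, the index stays small even when the jobs table retains many completed-attempt rows.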

API Internalization: Removed db_id parameter from add_job — now reads database_id and actor_id internally from JWT claims, matching upstream Graphile Worker's signature. Both columns are nullable (no indexes — they're envelope metadata, not hot-path query fields).
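A rough sketch of the new shape of add_job. The claim helpers (jwt_private.current_database_id(), jwt_public.current_user_id()) are the ones named in the commits; the parameter list and table shape here are illustrative, not the actual migration:

```sql
-- Illustrative sketch — db_id is gone from the parameter list; tenant and
-- actor context are read inside the function body from JWT claims.
CREATE OR REPLACE FUNCTION add_job(
  identifier text,
  payload json DEFAULT NULL,
  queue_name text DEFAULT NULL,     -- NULL = no serialization queue
  run_at timestamptz DEFAULT now(),
  max_attempts integer DEFAULT 25
) RETURNS jobs AS $$
  INSERT INTO jobs (task_identifier, payload, queue_name, run_at,
                    max_attempts, database_id, actor_id)
  VALUES (identifier,
          coalesce(payload, '{}'::json),
          queue_name,
          run_at,
          max_attempts,
          jwt_private.current_database_id(),  -- tenant from JWT claims
          jwt_public.current_user_id())       -- actor from JWT claims
  RETURNING *;
$$ LANGUAGE sql VOLATILE;
```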

Scheduled Jobs: add_scheduled_job now also reads both from JWT claims. run_scheduled_job propagates both columns when spawning jobs.

New ops: remove_job(job_key), force_unlock_workers(worker_ids).
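Usage sketch for the two new operations (semantics inferred from the names; the argument values are made up):

```sql
-- Delete a pending job by its job key (the upsert identity):
SELECT remove_job('welcome-email:user-42');

-- Release locks held by workers that died without unlocking their jobs,
-- making those jobs fetchable again:
SELECT force_unlock_workers(ARRAY['worker-a1b2', 'worker-c3d4']);
```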

Removed: revision column, flags/forbidden_flags (unused by our architecture).

Naming: Old database-jobs → database-jobs-v1. New optimized version → database-jobs.

Review & Testing Checklist for Human

Risk: yellow (signature change affects all downstream callers)

  • Verify add_job signature change doesn't break any callers not covered by the three PRs (pgpm-modules, constructive-db, constructive-hub)
  • Verify constructive-functions worker reads job.actor_id from the column (not job.payload.actor_id) — this is a known follow-up
  • Deploy to a staging database and run add_job with JWT context to confirm database_id and actor_id are populated correctly
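The staging check in the last item might look something like this. The claim-injection mechanism is an assumption (shown as a PostgREST-style GUC); the claim keys and exact mechanism depend on how pgpm-jwt-claims actually surfaces the JWT:

```sql
-- Sketch of the staging verification, inside a throwaway transaction.
BEGIN;
SELECT set_config('request.jwt.claims',
  '{"database_id": "11111111-1111-1111-1111-111111111111",
    "user_id":     "22222222-2222-2222-2222-222222222222"}',
  true);  -- local to this transaction

SELECT database_id, actor_id
FROM add_job('smoke_test', '{"hello": "staging"}'::json);
-- Expect both columns to be non-NULL and to match the claims above.
ROLLBACK;
```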

Notes

Link to Devin session: https://app.devin.ai/sessions/88b82416d018482791c791cdc91118de
Requested by: @pyramation

…tions

New @pgpm/database-jobs-v2 package based on @pgpm/database-jobs v0.21.0 with:

Phase 1 - Critical performance fixes:
- Fix queue_name default from gen_random_uuid()::text to NULL
- Rewrite get_job with pre-computed queue set (Strategy 2 vs old per-row EXISTS)
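The Strategy 2 shape can be sketched as below (table names assumed): the set of currently locked queues is computed once up front, so the main scan is a simple anti-join instead of a correlated per-row EXISTS probe against job_queues.

```sql
-- Sketch of the pre-computed-queue-set fetch, not the actual get_job body.
WITH locked_queues AS (
  SELECT queue_name FROM job_queues WHERE locked_at IS NOT NULL
)
SELECT j.*
FROM jobs j
WHERE j.is_available
  AND (j.queue_name IS NULL
       OR j.queue_name NOT IN (SELECT queue_name FROM locked_queues))
ORDER BY j.priority ASC, j.run_at ASC, j.id ASC
LIMIT 1
FOR UPDATE OF j SKIP LOCKED;
```

FOR UPDATE SKIP LOCKED lets concurrent workers pass over rows another worker is claiming instead of blocking on them.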

Phase 2 - Index improvements:
- Add is_available generated column (locked_at IS NULL AND attempts < max_attempts)
- Replace priority_run_at_id_idx with partial covering indexes (jobs_main_index, jobs_no_queue_index)

Phase 3 - Modernization:
- Add revision column (incremented on upsert via job key)
- Add flags jsonb column with forbidden_flags filtering in get_job
- Change notify trigger from FOR EACH ROW to FOR EACH STATEMENT
- Add remove_job(job_key text) function
- Add force_unlock_workers(worker_ids text[]) function
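The statement-level trigger change can be sketched as follows (function, trigger, and channel names assumed). One NOTIFY fires per INSERT statement, however many rows it adds, rather than one per row — workers only need the wake-up signal, not per-row payloads:

```sql
-- Sketch of a statement-level notify trigger on the jobs table.
CREATE OR REPLACE FUNCTION jobs_notify() RETURNS trigger AS $$
BEGIN
  PERFORM pg_notify('jobs:insert', '');
  RETURN NULL;  -- return value is ignored for statement-level triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER jobs_notify_trigger
  AFTER INSERT ON jobs
  FOR EACH STATEMENT
  EXECUTE FUNCTION jobs_notify();
```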

Interface is fully backward-compatible: all existing function signatures preserved.

Refs: constructive-io/constructive-planning#772
@devin-ai-integration

🤖 Devin AI Engineer

I'll be helping with this pull request! Here's what you should know:

✅ I will automatically:

  • Address comments on this PR. Add '(aside)' to your comment to have me ignore it.
  • Look at CI failures and help fix them

Note: I can only respond to comments from users who have write access to this repository.

⚙️ Control Options:

  • Disable automatic comment and CI monitoring

pyramation added 5 commits May 1, 2026 06:09
- Remove revision column (unused by our architecture — no job replacement tracking needed)
- Remove flags jsonb column and forbidden_flags parameter from get_job (worker routing is handled at K8s level, not SQL level)
- Remove flags parameter from add_job and all flags-related upsert logic
- Keep is_available generated column (powers partial indexes for get_job performance)
- Update snapshot test to reflect removed columns
- Rename database-jobs → database-jobs-v1 (legacy, v0.15.5)
- Rename database-jobs-v2 → database-jobs (optimized, v0.22.0)
- Update package.json, control files, and pgpm.plan for both
- Update CI workflow matrix accordingly

The new database-jobs now contains all the upstream performance
optimizations (Strategy 2 get_job, is_available partial indexes,
NULL queue_name default, statement-level notify trigger) plus
remove_job and force_unlock_workers functions.
@devin-ai-integration (bot) changed the title from "feat: add @pgpm/database-jobs-v2 with upstream performance optimizations" to "feat: upgrade database-jobs with upstream performance optimizations" May 1, 2026
pyramation added 2 commits May 2, 2026 22:48
- Remove db_id parameter from add_job() and add_scheduled_job()
- Both functions now read database_id from jwt_private.current_database_id()
- add_job() reads actor_id from jwt_public.current_user_id() and stores as column
- Add actor_id uuid column to jobs table
- Simplify trigger helpers (no longer pass database_id explicitly)
- Add pgpm-jwt-claims as a dependency
- Signature now matches upstream Graphile Worker: add_job(identifier, payload, ...)
@devin-ai-integration (bot) changed the title from "feat: upgrade database-jobs with upstream performance optimizations" to "feat: database-jobs v2 — upstream-compatible API with internalized JWT context" May 2, 2026
pyramation added 2 commits May 3, 2026 02:10
Allows system-level jobs without tenant context. No indexes on
database_id or actor_id — these are envelope metadata read by the
worker after job fetch, not used in queue hot path queries.
- Add actor_id uuid column to scheduled_jobs table
- Update add_scheduled_job to read actor_id from jwt_public.current_user_id()
- Update run_scheduled_job to copy actor_id from scheduled_jobs to jobs
- Consistent with add_job which already reads both from JWT claims
@pyramation merged commit 0fb21a7 into main May 3, 2026
25 checks passed
@pyramation deleted the feat/database-jobs-v2 branch May 3, 2026 06:24
