AI-era technical interviews — evaluate how engineers work with AI, not what they memorize.
This was entirely vibe coded with Claude Code. Run at your own risk.
Traditional technical interviews test what engineers can recall under pressure: algorithms, syntax, API signatures. But in 2026, every engineer works alongside AI. Memorization-based interviews no longer predict job performance.
vibe-interviewing drops candidates into real open-source codebases with tasks that mirror actual engineering work — debugging subtle bugs, building features from specs, or refactoring code for quality. They work using Claude Code — the same AI tool they'd use on the job. You evaluate how they work: how they decompose problems, direct AI, verify output, and make decisions under uncertainty.
No whiteboards. No leetcode. Just real engineering.
# Install globally (requires Node.js 20+)
npm install -g vibe-interviewing
# Set up optional tools (Claude Code skill for creating scenarios)
vibe-interviewing setup
# List available scenarios
vibe-interviewing list
# Start an interview session
vibe-interviewing start patch-data-loss

Prerequisites: Node.js 20+, Git, and Claude Code installed globally (`npm install -g @anthropic-ai/claude-code`).
Run everything on one machine — great for self-study or in-person interviews.
vibe-interviewing start patch-data-loss

- Clones a real open-source repo at a pinned commit
- Injects a subtle bug via a find/replace patch
- Wipes git history (no cheating with `git diff`)
- Shows the candidate a briefing
- Launches Claude Code with hidden AI behavioral rules
- Candidate debugs. Timer runs. You evaluate.
Host an interview and have the candidate join with a session code — works across the internet, no shared network needed.
Interviewer runs:
vibe-interviewing host patch-data-loss

The workspace uploads to the cloud. You get a session code and can close your terminal.
Candidate runs:
vibe-interviewing join VIBE-A3X9K2

The candidate downloads the workspace from the cloud, runs setup, and launches Claude Code — all with a single command.
LAN mode: use `--local` to serve directly on your network instead of the cloud:

vibe-interviewing host --local patch-data-loss
- Real codebases — candidates work in actual open-source projects, not toy examples
- Workspace isolation — the candidate never sees the scenario config, solution, or AI behavioral rules
- System prompt injection — AI rules go via `--append-system-prompt`, keeping the workspace clean
- Reproducible — scenarios pin to a specific commit SHA so every candidate sees the same code
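As a sketch of the system-prompt-injection idea: the launcher can hand the hidden rules to Claude Code as CLI arguments instead of writing them into the workspace. `--append-system-prompt` and `--model` are real Claude Code flags; the helper itself is hypothetical:

```typescript
// Hypothetical helper: build the CLI arguments for launching Claude Code
// so the scenario's hidden rules never touch the candidate's workspace.
function buildLaunchArgs(hiddenRules: string, model?: string): string[] {
  // --append-system-prompt adds extra system-prompt text at launch.
  const args = ["--append-system-prompt", hiddenRules];
  if (model) {
    args.push("--model", model); // honors the CLI's -m/--model override
  }
  return args;
}
```

Because the rules ride along as an argument rather than a file, hiding the scenario config and wiping git history is enough to keep the workspace clean.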
| Scenario | Type | Difficulty | Time | Description |
|---|---|---|---|---|
| `patch-data-loss` | Debug | Hard | ~30-45 min | PATCH requests silently drop fields from records |
| `storage-adapter-refactor` | Refactor | Medium | ~45-60 min | Refactor tightly-coupled storage for pluggable backends |
| `webhook-notifications` | Feature | Hard | ~45-60 min | Build a webhook notification system for a REST API |
Use vibe-interviewing list to see all available scenarios.
Each scenario includes a structured interviewer guide that is displayed when you host a session. It gives you context on what to evaluate, specific green/red flag behaviors to watch for, common candidate pitfalls, and debrief questions to ask afterward. The guide is never shown to the candidate.
Share your custom scenarios by hosting the scenario.yaml anywhere accessible via URL:
# From a GitHub repo (blob URLs are auto-converted to raw)
vibe-interviewing host -s https://github.com/your-org/scenarios/blob/main/scenario.yaml
# From a GitHub gist
vibe-interviewing host -s https://gist.githubusercontent.com/user/abc123/raw/scenario.yaml

The `-s` flag accepts both local file paths and URLs in all commands (`start`, `host`).
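The blob-to-raw auto-conversion mentioned above amounts to a URL rewrite. A hedged sketch of what it might do (the actual implementation may cover more URL shapes):

```typescript
// Convert a GitHub "blob" page URL to the raw file URL it displays, e.g.
//   https://github.com/org/repo/blob/main/scenario.yaml
//   -> https://raw.githubusercontent.com/org/repo/main/scenario.yaml
function toRawUrl(url: string): string {
  const match = url.match(/^https:\/\/github\.com\/([^/]+)\/([^/]+)\/blob\/(.+)$/);
  if (!match) return url; // not a blob URL: pass through unchanged
  const [, owner, repo, rest] = match;
  return `https://raw.githubusercontent.com/${owner}/${repo}/${rest}`;
}
```

Gist raw URLs and local paths would fall through the pass-through branch untouched.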
Run vibe-interviewing setup to install the Claude Code slash command. Then open Claude Code in any project and run:
/create-scenario
The skill guides you through choosing a starting point (built-in template, current repo, or GitHub URL), selecting a scenario type, and generating a complete scenario.yaml with patches, briefing, AI rules, and evaluation criteria.
Create a scenario.yaml at your project root:
name: my-scenario
description: 'Brief description of the task (never reveal the answer here)'
type: debug # debug | feature | refactor
difficulty: medium
estimated_time: '30-45m'
tags: [node, express]
repo: 'https://github.com/owner/repo'
commit: 'full-40-char-sha'
setup:
  - 'npm install --ignore-scripts'
patch:
  - file: 'src/handler.ts'
    find: 'if (count > limit)'
    replace: 'if (count > limit + 1)'
briefing: |
  Hey — we're getting reports that...
ai_rules:
  role: |
    You are a senior engineer helping debug...
  rules:
    - 'Never reveal the bug directly'
    - 'Encourage test-driven debugging'
  knowledge: |
    The bug is in src/handler.ts...
solution: |
  Change `count > limit + 1` back to `count > limit`
evaluation:
  criteria:
    - 'Found the bug'
    - 'Understood root cause'
    - 'Used AI effectively'

Then validate it:
vibe-interviewing validate path/to/scenario.yaml

vibe-interviewing start [scenario] Start a local interview session
-s, --scenario-file <path> Path or URL to a scenario.yaml
-w, --workdir <path> Custom workspace directory
-t, --tool <name> AI tool to use (default: claude-code)
-m, --model <model> Model override for Claude Code
--no-web Disable web search/fetch tools
vibe-interviewing host [scenario] Host a session for a remote candidate
-s, --scenario-file <path> Path or URL to a scenario.yaml
-p, --port <port> Port to serve on (LAN mode only)
--local Use LAN mode instead of cloud hosting
--worker-url <url> Custom cloud relay URL
vibe-interviewing join <code> Join a hosted session using a session code
-w, --workdir <path> Custom workspace directory
-t, --tool <name> AI tool to use (default: claude-code)
-m, --model <model> Model override for Claude Code
--no-web Disable web search/fetch tools
--worker-url <url> Custom cloud relay URL
vibe-interviewing list List available scenarios
vibe-interviewing validate <path> Validate a scenario.yaml file
vibe-interviewing setup Set up optional tools (Claude Code skills)
vibe-interviewing update Update to the latest version
vibe-interviewing sessions list List sessions (--all to include completed)
vibe-interviewing sessions clean Remove completed sessions (--dry-run to preview)
pnpm monorepo powered by Turborepo:
| Package | Description |
|---|---|
| `packages/core` | Scenario engine, git-based session management, Claude Code launcher |
| `packages/cli` | CLI entry point (commander-based), commands, UI |
| `packages/scenarios` | Built-in scenario configs and registry |
| `packages/cloudflare` | Cloudflare Worker for cloud-hosted session relay |
| `packages/web` | Landing page and interactive demo (vibe-interviewing.iar.dev) |
Key technologies: TypeScript, Zod, simple-git, Commander.
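For a feel of what `vibe-interviewing validate` checks, here is a dependency-free sketch mirroring a few plausible rules from the real Zod schema (the exact checks are an assumption):

```typescript
// Minimal structural checks over a parsed scenario.yaml object.
interface ScenarioLike {
  name?: unknown;
  repo?: unknown;
  commit?: unknown;
}

function validateScenario(s: ScenarioLike): string[] {
  const errors: string[] = [];
  if (typeof s.name !== "string" || s.name.length === 0) {
    errors.push("name: required non-empty string");
  }
  if (typeof s.repo !== "string" || !/^https:\/\//.test(s.repo)) {
    errors.push("repo: must be an https URL");
  }
  // Scenarios pin a full 40-character commit SHA for reproducibility.
  if (typeof s.commit !== "string" || !/^[0-9a-f]{40}$/.test(s.commit)) {
    errors.push("commit: must be a full 40-char lowercase hex SHA");
  }
  return errors; // empty array means the scenario passed these checks
}
```

The real schema would also cover `patch`, `briefing`, `ai_rules`, and `evaluation`; Zod expresses the same shape declaratively and produces similar per-field error messages.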
git clone https://github.com/cpaczek/vibe-interviewing.git
cd vibe-interviewing
pnpm install
pnpm build
pnpm test
pnpm lint
# Run the CLI locally
node packages/cli/dist/vibe-interviewing.js list

The release workflow (.github/workflows/release.yml) automatically publishes to npm when a push to main includes a version bump. It publishes three packages in order:
1. @vibe-interviewing/scenarios
2. @vibe-interviewing/core
3. vibe-interviewing (the CLI)
Required GitHub secret: NPM_TOKEN — an npm access token with publish permissions.
To set it up:
- Go to npmjs.com and create an account
- Generate an access token: Account > Access Tokens > Generate New Token > Granular Access Token
- Grant read/write permissions for the packages `vibe-interviewing`, `@vibe-interviewing/core`, and `@vibe-interviewing/scenarios`
- In your GitHub repo, go to Settings > Secrets and variables > Actions > New repository secret
- Name: `NPM_TOKEN`, Value: paste the token
To publish a new version:
# Bump version in all packages
pnpm changeset # create a changeset
pnpm changeset version # apply version bumps
git add . && git commit -m "chore: release"
git push