A complete development workflow for AI coding tools
Documentation · Quick Start · Examples
Every AI coding tool wants its own config: Claude needs CLAUDE.md, Cursor wants .cursor/rules/, Copilot expects .github/copilot-instructions.md. Each has different formats, frontmatter, and directory conventions. If you use more than one tool, you're maintaining duplicate rules that inevitably drift apart.
Write your rules, context, skills, agents, and commands once in .ai-rulez/. Run generate. Get native configs for every tool you use.
```shell
npx ai-rulez@latest init && npx ai-rulez@latest generate
```

ai-rulez generates correct, tool-native output for 19 platforms: Claude, Cursor, Windsurf, Copilot, Gemini, Cline, Continue.dev, Codex, OpenCode, Amp, Junie, Antigravity, and more. Each preset respects the target tool's conventions — proper frontmatter, directory structure, file extensions, agent formats.
ai-rulez isn't just a config generator. It ships with 32 builtin domains containing opinionated rules, agents, and workflows that establish a professional development baseline immediately.
These activate automatically. No configuration needed.
| Domain | What it enforces |
|---|---|
| ai-governance | No AI signatures in commits. Concise communication. Systematic debugging. Verification before claiming success. Critical review of subagent output. |
| code-quality | Anti-pattern prevention. Complexity limits. Dead code removal. Error handling standards. Readability. |
| testing | TDD workflow (red-green-refactor, no exceptions). Testing anti-patterns. Meaningful assertions. Test independence. |
| git-workflow | Atomic commits. Conventional commit messages. Safe operations. Branch hygiene. |
| security | Secrets handling. Input validation. Dependency auditing. Least privilege. |
| token-efficiency | Task runner usage. Incremental approach. Context preservation. Batch operations. |
| agent-delegation | Multi-agent coordination and delegation patterns. |
Specialized agents ready to use as subagents:
| Agent | Domain | Model | What it does |
|---|---|---|---|
| code-reviewer | ai-governance | sonnet | Reviews changes for correctness, security, and conventions. Reports by severity. |
| test-writer | testing | sonnet | Writes tests following strict TDD. Fails first, then implements. |
| security-auditor | security | sonnet | Audits dependencies, scans for CVEs, reviews input validation. |
| docs-writer | ai-governance | haiku | Writes clear, concise documentation. No fluff. |
| devops-engineer | cicd | haiku | CI/CD pipelines, GitHub Actions, Docker, deployment automation. |
| release-engineer | cicd | haiku | Version management, changelogs, multi-registry publishing. |
Enable these based on your stack:
Languages (10): rust, python, typescript, go, java, ruby, php, elixir, csharp, r
Bindings (10): pyo3, napi-rs, magnus, ext-php-rs, rustler, wasm, jni-rs, extendr, cgo, vite-plus
Operational: cicd, docker, observability, documentation, default-commands
```toml
# .ai-rulez/config.toml
builtins = ["rust", "python", "pyo3", "cicd", "docker", "default-commands"]
```

| Type | Purpose | Example |
|---|---|---|
| Rules | What AI must/must not do | Security standards, coding conventions |
| Context | What AI should know | Architecture docs, domain knowledge |
| Skills | Reusable prompts and workflows | Deployment checklist, review protocol |
| Agents | Specialized AI personas | Code reviewer, performance engineer |
| Commands | Slash commands across tools | /review, /deploy, /test |
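As a sketch, a rule could be a markdown file with frontmatter, mirroring the agent format shown later in this document. The path and field names here are illustrative, not the documented schema — check the docs for the exact format:

```markdown
# .ai-rulez/rules/no-secrets.md (hypothetical path and fields)
---
name: no-secrets
description: Never hardcode credentials or API keys
---
Never commit secrets. Load credentials from environment variables
or a secrets manager, and flag any hardcoded token during review.
```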
ai-rulez scales from solo projects to large organizations:
Domains — Group content by feature, language, or team:
```
.ai-rulez/domains/backend/rules/
.ai-rulez/domains/frontend/rules/
```
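Expanded, a project using both domains might look like this (file names are illustrative):

```
.ai-rulez/
├── config.toml
├── domains/
│   ├── backend/
│   │   └── rules/
│   │       └── api-design.md
│   └── frontend/
│       └── rules/
│           └── components.md
```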
Profiles — Generate different configs for different audiences:
```toml
[profiles]
backend = ["backend", "database"]
frontend = ["frontend", "ui"]
```

Remote Includes — Share rules across repositories:
```toml
[[includes]]
name = "company-standards"
source = "https://github.com/company/ai-rules.git"
merge_strategy = "local-override"
```

Reasoning effort across providers — Tune how hard each AI tool thinks:
```markdown
# .ai-rulez/agents/security-reviewer.md
---
name: security-reviewer
description: Reviews code for security regressions
effort: high
---
```

```toml
# .ai-rulez/config.yaml or config.toml
[defaults]
effort = "medium"  # global default for every supported preset

[defaults.effort_by_preset]
codex = "high"    # overrides the global default for Codex
claude = "xhigh"  # …and for Claude
```

Accepted values: `low`, `medium`, `high`, `xhigh`, `max`, `inherit`. ai-rulez emits the right field per preset:
- Claude — `effort` in `.claude/agents/*.md` frontmatter (per-agent)
- Codex — `model_reasoning_effort` in `.codex/config.toml` (global)
- Amp — `amp.anthropic.effort` in `.amp/settings.json` (global)
- Windsurf — `reasoning_effort` in `.windsurf/agents/*.md` frontmatter (per-agent)
Each preset maps the value to its own vocabulary; tools without a documented config surface (Cursor, Copilot, Gemini, etc.) are silently skipped. See docs/configuration.md for the full mapping table.
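For example, with `codex = "high"` set under `[defaults.effort_by_preset]`, the Codex preset would end up writing something along these lines (a sketch of the generated file, not its full contents):

```toml
# Generated .codex/config.toml (sketch)
model_reasoning_effort = "high"
```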
Installed Skills — Pull reusable skills from external repos:
```toml
[[installed_skills]]
name = "kreuzberg"
source = "https://github.com/kreuzberg-dev/kreuzberg"
```

ai-rulez includes a built-in MCP server with 35+ tools that lets AI assistants manage their own governance. Add rules, update context, generate configs — all programmatically.
```toml
[[mcp_servers]]
name = "ai-rulez"
command = "npx"
args = ["-y", "ai-rulez@latest", "mcp"]
```

```shell
# No install required
npx ai-rulez@latest <command>

# Or install globally
brew install goldziher/tap/ai-rulez                   # macOS
npm install -g ai-rulez                               # npm
pip install ai-rulez                                  # pip
go install github.com/Goldziher/ai-rulez/cmd@latest   # Go
```

Full documentation at goldziher.github.io/ai-rulez.
MIT
