Rust CLI for AI-assisted git commits, pushes, and pull requests.
It analyzes the current repository state, builds a constrained prompt from the relevant diff context, and runs that through a local or compatible LLM backend to generate:
- commit messages
- pull request titles and bodies
- combined commit + push + PR flows
This README reflects the code that is implemented and validated in this repo:
- provider-neutral LLM manager with `ollama` and `openai-compatible` backends
- custom prompt templates with `--template`
- explicit user context with `--context`
- real `--push`, `--pr`, and `--push-pr` flows
- default-branch detection
- auto feature-branch creation when pushing from the default branch
- `gh pr create` integration with existing-PR fallback via `gh pr view --json url`
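The `gh pr create`-with-fallback behavior above can be sketched as plain control flow, with the actual `gh` invocations abstracted behind closures. This is an illustration of the documented behavior, not the crate's code:

```rust
/// Try to create a PR; if creation fails (e.g. because one already
/// exists for this branch), fall back to looking up the existing PR.
/// The closures stand in for `gh pr create` and `gh pr view --json url`.
fn open_or_reuse_pr(
    create: impl Fn() -> Result<String, String>,
    view_url: impl Fn() -> Result<String, String>,
) -> Result<String, String> {
    create().or_else(|_create_error| view_url())
}

fn main() {
    // Creation succeeds: its URL is returned directly.
    let fresh = open_or_reuse_pr(|| Ok("new-url".into()), || unreachable!());
    assert_eq!(fresh.as_deref(), Ok("new-url"));

    // Creation fails: the existing PR's URL is reused instead.
    let reused = open_or_reuse_pr(|| Err("exists".into()), || Ok("old-url".into()));
    assert_eq!(reused.as_deref(), Ok("old-url"));
}
```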
Requirements:
- Rust 1.70+
- Git
- One of:
  - Ollama
  - an OpenAI-compatible HTTP endpoint
- `gh`, if you want real PR creation instead of dry-run PR drafts
Installation:

```sh
git clone https://github.com/npiesco/gitAICommit.git
cd gitAICommit
cargo install --path .
```

Basic commit flow:

```sh
git-ai-commit
```

Stage everything first:

```sh
git-ai-commit --add-unstaged
```

Preview only:

```sh
git-ai-commit --dry-run --verbose
```

Generate a PR draft instead of a commit:

```sh
git-ai-commit --pr --dry-run
```

Commit and push:

```sh
git-ai-commit --push
```

Commit, push, and open or reuse a PR:

```sh
git-ai-commit --push-pr
```

Default provider:

```sh
git-ai-commit --provider ollama --model qwen3-coder:latest
```

List local models:

```sh
git-ai-commit --list-models
```

Use any OpenAI-compatible endpoint:

```sh
git-ai-commit \
  --provider openai-compatible \
  --model your-model-name \
  --base-url http://localhost:11434
```

With an API key:

```sh
git-ai-commit \
  --provider openai-compatible \
  --model gpt-4.1-mini \
  --base-url https://api.openai.com \
  --api-key "$OPENAI_API_KEY"
```

Notes:
- `--model` is required for non-Ollama providers.
- If `--provider openai-compatible` is used without `--base-url`, the tool defaults to `http://localhost:<port>`.
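The base-URL defaulting note above can be captured in a few lines. This is a hypothetical helper that mirrors the documented behavior; the function name is illustrative, not the crate's API:

```rust
/// If --base-url is given, use it (normalizing a trailing slash);
/// otherwise fall back to the local Ollama port, per the docs above.
fn effective_base_url(base_url: Option<&str>, port: u16) -> String {
    match base_url {
        Some(url) => url.trim_end_matches('/').to_string(),
        None => format!("http://localhost:{port}"),
    }
}

fn main() {
    // No URL given: default to localhost with the configured port.
    assert_eq!(effective_base_url(None, 11434), "http://localhost:11434");
    // Explicit URL wins, trailing slash stripped.
    assert_eq!(
        effective_base_url(Some("https://api.openai.com/"), 11434),
        "https://api.openai.com"
    );
}
```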
Usage:

```text
git-ai-commit [OPTIONS]

Model options:
      --provider <PROVIDER>        LLM backend: ollama | openai-compatible
  -m, --model <MODEL>              Model name
      --base-url <URL>             Base URL for selected provider
      --api-key <KEY>              API key for selected provider
      --list-models                List models for the selected provider
  -p, --port <PORT>                Ollama port [default: 11434]

Diff options:
  -f, --max-files <COUNT>          Max files included in analysis [default: 10]
  -l, --max-diff-lines <LINES>     Max diff lines / prompt budget unit [default: 50]

Commit options:
  -a, --add-unstaged               Stage unstaged changes first
      --pr                         Generate PR title/body instead of a commit
      --push                       Push after commit
      --push-pr                    Commit, push, then open/reuse PR
      --confirm                    Ask before committing

Customization:
      --template <FILE>            Custom prompt template
      --context <TEXT>             Extra user context for commit/PR generation

Debug:
  -d, --dry-run                    Preview without mutating git state
  -v, --verbose                    Print generated prompt/output details

Advanced:
  -t, --timeout-seconds <SEC>      Generation timeout [default: 60]
```
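The diff options above bound how much repository context reaches the model. A minimal sketch of that truncation, under the assumption that it simply caps files and per-file diff lines (names are illustrative, not the crate's API):

```rust
/// Keep at most `max_files` files and `max_lines` diff lines per file,
/// mirroring the --max-files / --max-diff-lines budget described above.
fn truncate_context(
    files: Vec<(String, Vec<String>)>,
    max_files: usize,
    max_lines: usize,
) -> Vec<(String, Vec<String>)> {
    files
        .into_iter()
        .take(max_files)
        .map(|(path, lines)| (path, lines.into_iter().take(max_lines).collect()))
        .collect()
}

fn main() {
    let files = vec![
        ("a.rs".to_string(), vec!["+line".to_string(); 100]),
        ("b.rs".to_string(), vec!["+line".to_string(); 5]),
        ("c.rs".to_string(), vec![]),
    ];
    let kept = truncate_context(files, 2, 50);
    assert_eq!(kept.len(), 2);       // only the first two files survive
    assert_eq!(kept[0].1.len(), 50); // long diffs are capped at 50 lines
    assert_eq!(kept[1].1.len(), 5);  // short diffs pass through untouched
}
```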
- default commit prompts use staged diff summary context
- AI output is sanitized before commit
- the final commit is written through `git commit --file`
- malformed prompt echoes and shell-wrapper junk are stripped before write
- if you run `--push` or `--push-pr` from a non-default branch, that branch is reused
- if you run from the detected default branch, the tool creates a slugified feature branch first
- branch naming uses user `--context` first, then the sanitized commit message
- `--pr --dry-run` prints a generated `TITLE:`/`BODY:` draft
- non-dry-run PR flows call `gh pr create --base <default_branch>`
- if `gh pr create` fails because a PR already exists, the tool falls back to `gh pr view --json url`
- clean worktrees are supported when the current branch is already ahead of the default branch
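The "slugified feature branch" behavior above can be approximated like this. The exact rules (separator, length cap, prefix) are assumptions for illustration; this is not the crate's code:

```rust
/// Lowercase the input, collapse runs of non-alphanumerics into single
/// hyphens, trim, and cap the length to keep branch names git-friendly.
fn slugify_branch(input: &str) -> String {
    let mut slug = String::new();
    for c in input.to_lowercase().chars() {
        if c.is_ascii_alphanumeric() {
            slug.push(c);
        } else if !slug.is_empty() && !slug.ends_with('-') {
            slug.push('-');
        }
    }
    let trimmed = slug.trim_end_matches('-');
    // Slicing is safe here: the slug is pure ASCII by construction.
    let end = trimmed.len().min(40);
    format!("feature/{}", &trimmed[..end])
}

fn main() {
    assert_eq!(
        slugify_branch("Focus on cleanup before release!"),
        "feature/focus-on-cleanup-before-release"
    );
}
```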
Use a custom prompt template file:

```sh
git-ai-commit --template ./prompt.txt
```

The built-in default prompt is intentionally constrained, but custom templates still receive the richer `{CONTEXT}` payload from the repo analysis path.
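Assuming the template mechanism is a straightforward placeholder substitution (an inference from the `{CONTEXT}` payload mentioned above, not confirmed against the source), rendering could look like:

```rust
/// Substitute the repo-analysis payload into a user template's
/// {CONTEXT} placeholder. Placeholder mechanics are assumed here.
fn render_template(template: &str, context: &str) -> String {
    template.replace("{CONTEXT}", context)
}

fn main() {
    let template = "Write a conventional commit message for:\n{CONTEXT}";
    let rendered = render_template(template, "2 files changed, +10 -3");
    assert_eq!(
        rendered,
        "Write a conventional commit message for:\n2 files changed, +10 -3"
    );
}
```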
Config file path: `~/.config/git-ai-commit/config.toml`

Example:

```toml
provider = "ollama"
model = "qwen3-coder:latest"
base_url = "http://localhost:11434"
api_key = ""
max_files = 10
max_diff_lines = 50
port = 11434
timeout_seconds = 60
```

Notes:
- CLI flags override config values.
- For `openai-compatible`, set `model` explicitly in config or on the command line.
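The "CLI flags override config values" rule is the usual per-field fallback. A sketch with illustrative types (not the crate's actual config structs):

```rust
/// Each optional CLI value falls back to the config-file value;
/// when both are present, the flag wins.
#[derive(Debug, PartialEq)]
struct Settings {
    model: Option<String>,
    port: Option<u16>,
}

fn merge(cli: Settings, file: Settings) -> Settings {
    Settings {
        model: cli.model.or(file.model), // flag wins when both are set
        port: cli.port.or(file.port),
    }
}

fn main() {
    let cli = Settings { model: Some("gpt-4.1-mini".into()), port: None };
    let file = Settings { model: Some("qwen3-coder:latest".into()), port: Some(11434) };
    let merged = merge(cli, file);
    assert_eq!(merged.model.as_deref(), Some("gpt-4.1-mini")); // CLI override
    assert_eq!(merged.port, Some(11434));                      // config fallback
}
```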
Commit with explicit context:

```sh
git-ai-commit --context "focus on cleanup before release"
```

Preview a PR from an existing feature branch:

```sh
git-ai-commit --pr --dry-run --context "user-facing behavior change"
```

Commit, push, and open or reuse a PR through local Ollama in OpenAI-compatible mode:

```sh
git-ai-commit \
  --push-pr \
  --provider openai-compatible \
  --model tinyllama:latest \
  --base-url http://localhost:11434
```

Local test loop:
```sh
cargo test
cargo fmt --all
cargo clippy --all-targets --all-features -- -D warnings
cargo test
cargo build --release
```

Live GitHub-backed integration test:

```sh
cargo test test_push_pr_with_real_github_repo_creates_then_reuses_pull_request --test interactive_test -- --ignored
```

That test:
- creates a disposable private GitHub repo with `gh`
- clones it into `/tmp`
- runs the real `git-ai-commit` PR flow
- verifies PR creation and existing-PR reuse
- deletes the repo on teardown
License: MIT