docs: generalize skills section to reference all coding agents #6
Conversation
Cadence Session Review
Tiny docs-only PR generalizes "Claude Code Skills" heading to "Coding Agent Skills" and broadens the description to reference multiple coding agents. None of the attached AI sessions produced this change — all sessions focus on RALPH-driven agent implementation (docker-agent, kube-agent, ansible-agent, sql-agent).
- No session transcript corresponds to the actual diff; the change appears to be a manual edit or was generated in an unrecorded session.
- The diff itself is clean and minimal — two lines changed in README.md.
- Session data shows heavy RALPH loop automation but zero prompting activity related to this documentation update.
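For reference, the change described above amounts to a two-line diff of roughly this shape. Only the heading rename and the intent to broaden the description are taken from the review; the exact README.md wording and context lines are assumptions:

```diff
-## Claude Code Skills
+## Coding Agent Skills

-Reusable skills for Claude Code.
+Reusable skills for coding agents such as Claude Code, Codex, and others.
```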
Recommendations
Prompting — Log all sessions, including trivial doc edits
When making small documentation edits, capture even brief prompting sessions so reviewers can attribute changes and coach on prompting patterns. Without a session log, there is nothing to evaluate.
Before
(No recorded prompt — change appears manual or from an unlogged session)
Reframe
Include a brief task description referencing the exact file and section to change, e.g. "Update the Skills heading in README.md to be agent-agnostic, replacing 'Claude Code Skills' with 'Coding Agent Skills' and broadening the description."
Tip
Even one-shot prompts benefit from being logged; it builds a feedback loop for prompt quality over time.
Agent instructions — Codify agent-neutral language rule in instruction files
The naming convention shift from product-specific to generic language is a good pattern. Encoding this as an explicit instruction prevents the model from reverting to product-specific phrasing in future doc generation.
Reframe
Add to AGENTS.md or CLAUDE.md: "Use agent-neutral language in all user-facing docs. Say 'coding agents' instead of naming specific products."
Tip
Instruction-level guardrails are more reliable than hoping the model infers style from examples.
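As a minimal sketch, the guardrail above might look like this as an AGENTS.md (or CLAUDE.md) section. The section name and surrounding structure are assumptions; the rule text comes from the recommendation:

```markdown
<!-- Hypothetical AGENTS.md excerpt -->
## Documentation style

- Use agent-neutral language in all user-facing docs: say "coding agents"
  instead of naming specific products.
- When a doc must name tools, list several (e.g. Claude Code, Codex) rather
  than implying a single supported agent.
```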
Cadence Session Review
Trivial two-line docs edit generalizes "Claude Code Skills" heading and description to reference all coding agents. No AI session directly produced this change — all candidate sessions focus on sql-agent, ansible-agent, docker-agent, and kube-agent implementation via the RALPH loop.
- Diff is purely cosmetic: heading rename + description broadening in README.md
- None of the 8 captured sessions correspond to this documentation change
- Sessions show a disciplined PLAN → WORK → REVIEW cadence with structured validation gates
Recommendations
Prompting — Track all agent-assisted changes in sessions
This docs change has no matching AI session, making it impossible to evaluate prompting quality. When using a coding agent, even for small edits, run the work inside a tracked session so reviewers can assess instruction-following and provide feedback.
Before
(No prompt captured — change appears to have been made outside a tracked session)
Reframe
"Update the README skills section heading and description to be agent-agnostic — change 'Claude Code Skills' to 'Coding Agent Skills' and broaden the description to mention Claude Code, Codex, etc."
Tip
Even one-liner changes benefit from session tracking — it builds a reviewable audit trail and helps calibrate prompting habits over time.
Agent instructions — Add docs-generalization guidance to instruction files
The RALPH sessions show meticulous spec-driven work on agents but no instruction guiding the model to generalize product-specific references in documentation. Adding a docs-consistency rule to the instruction file would let the model catch these automatically during implementation.
Reframe
Add to AGENTS.md or CLAUDE.md: "When updating docs that reference a specific tool (e.g. Claude Code), check whether the language should be generalized to cover all supported agents."
Tip
This prevents future drift where new agents are added but docs still reference only one tool.
Prompting — Match session candidates to actual PR content
Eight detailed RALPH sessions were provided as candidates, but none produced the actual two-line diff in this PR. This creates noise for session review. When submitting PRs, link only the sessions that contributed to the change, or note explicitly that the change was manual.
Tip
A simple convention like 'Session: none (manual edit)' in the PR description saves review effort.
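A PR description following that convention might look like the sketch below (all details hypothetical; only the "Session: none (manual edit)" line comes from the tip above):

```markdown
## What changed
Generalize the "Claude Code Skills" heading in README.md to
"Coding Agent Skills" and broaden the description.

Session: none (manual edit)
```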
Summary