Conversation
Introduces a comprehensive AI strategy for Software Engineering, detailing the adoption and governance of developer-facing AI tools (primarily GitHub Copilot). The document covers principles, objectives, use cases, capability development, governance, security, implementation roadmap, metrics, risk mitigation, and review processes to ensure responsible and secure AI adoption.
Refined language to emphasize platform controls over sandboxes for AI developer tools. Updated "Enabling Platform & Tooling" for consistency, removed redundant lines, and omitted outdated references to annexes and sandboxes. Edits improve clarity and align the strategy with current Copilot adoption practices.
✅ Snyk checks have passed. No issues have been found so far.
💻 Catch issues earlier using the plugins for VS Code, JetBrains IDEs, Visual Studio, and Eclipse.
software-engineering/strategy/software-engineering-ai-strategy.md
elerivaliant
left a comment
Content reads fine, but it could use some links to where we're already doing the things we say we're going to do later.
- Updated Code Generation Tools Policy reference to a relative Markdown link.
- Clarified Copilot training requirements and added PluralSight as a resource.
- Fixed typographical error in Phase 2 heading.
- Standardized hyphens and replaced non-ASCII characters in Risks & Mitigations.
Changed the policy link in software-engineering-ai-strategy.md to explicitly include the .md file extension, ensuring correct navigation and compatibility with environments that require file extensions.
> - Monthly engineering leadership dashboard summarising adoption and key impact metrics.
> - Quarterly review with security and legal for policy adjustments.
>
> ## 11. Risks & Mitigations
Risk that code reviews become perfunctory tick-boxing. We should change the way we do reviews to be more detailed: that keeps engineers aligned with the codebase and catches more subtle security/quality issues.
Risk that we don't maintain the skills to write code in defence, given the technology adoption lag we expect to see, particularly around appetite to use AI in high-security networks. Deep-dive code reviews will help; non-AI code competitions/hackathons?
> - Monitor usage metrics (adoption rate, active users, sessions) and correlate with productivity and quality metrics.
> - Identify slower adopters and provide targeted coaching, workshops, and incentives.
>
> - **People & skills:**
Ensure we don't create a two-tier environment: people who have to hand-craft code should not be left behind, and should be equally valued in the organisation.
I'm worried that the human-in-the-loop gets diluted over time. How does an engineer who has grown up with AI learn what good looks like? How do they learn what a secure deployment looks like? They may get the opportunity to observe AI doing it, but do they understand it enough to judge the output?
> - AI-assisted development (code generation, refactoring, documentation)
> - Automated testing and test generation
> - CI/CD optimisation and release automation
This is the risky one from a security perspective. Generally I wouldn't give a junior the task of creating the initial CD pipeline, as they're unlikely to understand some key concepts around VPCs, networking, etc.
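To make the networking concern concrete, here is a minimal sketch (the config shape and function name are hypothetical, not from this repo's pipelines) of a pre-deploy check that flags security-group ingress rules open to the whole internet — exactly the kind of subtle mistake an AI-generated or junior-authored CD setup can introduce:

```python
# Hypothetical pre-deploy guard. The security-group dict shape below is
# illustrative only; adapt it to whatever your pipeline actually emits.

def find_open_ingress(security_groups):
    """Return (group_name, port) pairs whose ingress allows 0.0.0.0/0."""
    findings = []
    for group in security_groups:
        for rule in group.get("ingress", []):
            if "0.0.0.0/0" in rule.get("cidr_blocks", []):
                findings.append((group["name"], rule.get("port")))
    return findings


if __name__ == "__main__":
    groups = [
        # Port 443 open to the world is expected for a public web tier...
        {"name": "web", "ingress": [{"port": 443, "cidr_blocks": ["0.0.0.0/0"]}]},
        # ...but SSH open to the world is the kind of rule a reviewer must catch.
        {"name": "admin", "ingress": [{"port": 22, "cidr_blocks": ["0.0.0.0/0"]}]},
    ]
    for name, port in find_open_ingress(groups):
        print(f"WARNING: group '{name}' exposes port {port} to 0.0.0.0/0")
```

A check like this doesn't replace the human judgement the comment asks for; it just surfaces the candidates so a detailed review decides which exposures are intentional.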
Initial draft of Software Engineering's AI Strategy Document