A curated collection of resources covering AI security, LLM safety, prompt injection, agent security, secure coding practices, and related topics.
- Frameworks and Standards -- Risk frameworks, standards, and foundational security references
- Governance and Policy -- Compliance, AI policy, legal, and trust
- Threat Modeling -- Threat modeling frameworks, tools, and methodologies
- Architecture -- Secure AI design patterns and principles
- Prompt Injection -- Taxonomy, techniques, datasets, and defenses
- Jailbreaking -- Jailbreaking techniques and research
- Model Attacks -- Poisoning, backdoors, extraction, and adversarial ML
- Supply Chain -- Dependency attacks, model integrity, and signing
- Incidents -- Real-world breaches, exploits, and case studies
- Guardrails and Firewalls -- Guardrails, firewalls, and runtime protection
- Sandboxing and Isolation -- Runtime containment and code execution security
- Detection and Monitoring -- Vulnerability scanners and threat detection
- Secrets Management -- Protecting secrets from AI agents
- Honeypots and Deception -- Honeypots and adversary engagement
- Anti-Crawling -- Tarpits, cloaking, data poisoning, and crawler access control
- Agent Security -- Agent-specific security concerns and threats
- Agent Identity -- OAuth, NHI, authentication, and authorization
- MCP (Model Context Protocol) -- Gateways, scanners, research, and tooling
- Agent Frameworks -- General agent frameworks, platforms, and tools
- Secure Coding -- Rules files, vibe coding security, and secure prompt engineering
- Code Analysis -- SAST, code review, and vulnerability scanning
- Coding Tools -- IDE integrations, copilots, and assistants
- Papers -- Academic papers and surveys
- Benchmarks -- Evaluation frameworks and datasets
- Safety and Alignment -- AI safety, alignment, and privacy
- Red Teaming -- Offensive AI security, tools, and methodologies
- Engineering Patterns -- Harness engineering and building patterns
- Privacy -- Data leakage, PII protection, and exfiltration
- General Reading -- Blog posts, talks, opinion, and commentary
- Awesome LLM Security (corca-ai)
- Awesome LLM Safety
- Awesome LLM Security (christiancscott)
- Awesome LLM Supply Chain Security
- Awesome LLMSecOps
- Awesome LLM Agent Security
- Awesome LM-SSP
- System Prompt Leaks
- AI Security Forum Quick List
- Awesome AI Security (TalEliyahu)
This repo includes custom Claude Code slash commands for managing the compendium:
- /add-resource <url> [url2] ... -- Fetch titles, classify links into the correct category, and commit them to the repo.
- /search-resources <query> -- Search across all compendium files for resources matching a keyword or topic.