AnthonyHerman/ai-security

AI Security Resources

A curated collection of resources covering AI security, LLM safety, prompt injection, agent security, secure coding practices, and related topics.
Table of Contents

Foundations

Attacks

  • Prompt Injection -- Taxonomy, techniques, datasets, and defenses
  • Jailbreaking -- Jailbreaking techniques and research
  • Model Attacks -- Poisoning, backdoors, extraction, and adversarial ML
  • Supply Chain -- Dependency attacks, model integrity, and signing
  • Incidents -- Real-world breaches, exploits, and case studies

Defense

Agents

Coding

  • Secure Coding -- Rules files, vibe coding security, and secure prompt engineering
  • Code Analysis -- SAST, code review, and vulnerability scanning
  • Coding Tools -- IDE integrations, copilots, and assistants

Research

Practice

General


Lists and Aggregators

System Prompts

Prompt Design

Claude Code Skills

This repo includes custom Claude Code slash commands for managing the compendium:

  • /add-resource <url> [url2] ... -- Fetch titles, classify links into the correct category, and commit them to the repo.
  • /search-resources <query> -- Search across all compendium files for resources matching a keyword or topic.
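
Claude Code loads custom slash commands from markdown files under `.claude/commands/`, where the filename becomes the command name and `$ARGUMENTS` is replaced by whatever the user types after the command. A minimal sketch of what a file like `.claude/commands/search-resources.md` could contain (hypothetical contents, not this repo's actual implementation):

```markdown
<!-- .claude/commands/search-resources.md -->
Search every markdown file in this compendium for resources matching: $ARGUMENTS

For each match, report the file it appears in, the category heading it
falls under, and the resource title with its URL.
```

Invoking `/search-resources prompt injection` would then run this prompt with `$ARGUMENTS` expanded to `prompt injection`.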
