> [!WARNING]
> This project is simple, and the results it produces cannot be fully trusted.
> (I'm actively using this project myself and will continue to improve it as time allows.)
CodeSentinel is an AI-powered security auditor designed to scan project directories for malicious intent, dangerous coding practices, and obfuscated payloads. By leveraging Large Language Models (LLMs) and Tree-sitter, it provides both surface-level scans and deep, dependency-aware analysis.
> [!NOTE]
> Scans are read-only: no modifications are made to the target files or directories.
## Features

- AI-Powered Analysis: Uses LLMs to audit code for backdoors, SQL injection, `eval()` usage, and more.
- Deep Analysis Mode: Traces cross-file logic by providing the AI with the context of local dependencies (either full code or skeletal structures).
- Multi-Language Support: Optimized for Python and JavaScript/TypeScript using Tree-sitter, with heuristic support for many other languages.
- Intelligent Skeletons: Extracts class and function signatures to provide context without exhausting LLM token limits.
- Detailed Reporting: Generates interactive CLI output and structured JSON reports (Full scan vs. Problems only).
- Flexible Backend: Compatible with OpenAI, LM Studio, Ollama, and other OpenAI-compatible APIs.
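To illustrate the "intelligent skeletons" idea, here is a minimal sketch using Python's standard-library `ast` module. CodeSentinel itself uses Tree-sitter, and the function name `extract_skeleton` is hypothetical; this only shows how signatures can be kept while bodies are dropped to save tokens:

```python
import ast

def extract_skeleton(source: str) -> list[str]:
    """Collect class and function signatures from Python source, dropping bodies.

    Simplified sketch: a Tree-sitter-based implementation would also handle
    JavaScript/TypeScript and syntactically invalid files.
    """
    lines = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            lines.append(f"def {node.name}({args}): ...")
        elif isinstance(node, ast.ClassDef):
            lines.append(f"class {node.name}: ...")
    return lines

print(extract_skeleton("class A:\n    def run(self, x):\n        return x * 2\n"))
# → ['class A: ...', 'def run(self, x): ...']
```

The full bodies stay on disk; only these compact signatures are sent to the model as cross-file context in deep mode (unless `--full-deps` is used).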
## Requirements

- Python 3.10+
- (Optional) A local LLM runner such as LM Studio, Ollama, or llama.cpp
## Installation

- Clone the repository:

  ```bash
  git clone https://github.com/yourlayer/CodeSentinel.git
  cd CodeSentinel
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```
## Configuration

Edit `src/config.py` or use environment variables to configure the scanner:

- `OPENAI_API_KEY`: Your API key (default: `any-key-for-local`).
- `OPENAI_BASE_URL`: The API endpoint (e.g., `http://localhost:1234/v1` for LM Studio).
- `AI_MODEL`: The name of the model to use.
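As a sketch, the same settings can be exported from a Python launcher before starting a scan. The values below are placeholders for a local LM Studio setup; substitute your own endpoint and model name:

```python
import os

# Placeholder values for a local, OpenAI-compatible endpoint; adjust to your setup.
os.environ["OPENAI_API_KEY"] = "any-key-for-local"          # local servers typically ignore the key
os.environ["OPENAI_BASE_URL"] = "http://localhost:1234/v1"  # LM Studio's default address
os.environ["AI_MODEL"] = "local-model"                      # placeholder model name

print(os.environ["OPENAI_BASE_URL"])
```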
## Usage

Scan a directory using the default configuration:

```bash
python -m src.main --dir ./path/to/project
```

Analyze files along with their local dependencies:

```bash
python -m src.main --dir ./path/to/project --deep
```

### Command-Line Options

- `--dir <path>`, `-d <path>`: Directory to scan (default: current directory).
- `--dry-run`: List files that would be scanned without sending them to the AI.
- `--model <name>`: Override the model specified in config.
- `--url <url>`: Override the API base URL.
- `--full-deps`: In deep mode, include the full source code of dependencies instead of just skeletons.
## Reports

Reports are saved in the `reports/scan_YYYYMMDD_HHMMSS/` directory:

- `full_report.json`: Detailed results for every scanned file.
- `problems_report.json`: Filtered results containing only files with `[DANGER]` or `[WARNING]` status.
- `project_structure.txt`: A text-based visualization of the scanned directory.
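The JSON reports are easy to post-process. The sketch below pulls the `[DANGER]` entries out of a problems report; the schema is an assumption for illustration (each entry is taken to be an object with `"file"` and `"status"` keys), so check your generated `problems_report.json` for the actual layout:

```python
import json
from pathlib import Path

def dangerous_files(report_path: str) -> list[str]:
    """Return the paths of files flagged [DANGER] in a problems report.

    Assumed schema: a JSON array of objects with "file" and "status" keys.
    """
    entries = json.loads(Path(report_path).read_text(encoding="utf-8"))
    return [e["file"] for e in entries if e["status"] == "[DANGER]"]
```

This is handy for wiring the scanner into CI: fail the build only when `dangerous_files(...)` is non-empty, while `[WARNING]` entries stay advisory.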
## Testing

Run the test suite:

```bash
python -m unittest discover test
```

Documentation maintained by Charles Tsaur. Last updated: 2026-01-30.