🛡️Vigil
⚡ Security scanner for Large Language Model (LLM) prompts ⚡
Overview 🏕️
Vigil
is a Python library and REST API for assessing Large Language Model prompts and responses against a set of scanners to detect prompt injections, jailbreaks, and other potential risks. This repository also provides the detection signatures and datasets needed to get started with self-hosting.
This application is currently in an alpha state and should be considered experimental.
Work is ongoing to expand detection mechanisms and features.
Highlights ✨
Analyze LLM prompts for common injections and risky inputs
Use Vigil as a Python library or REST API
Evaluate detections and pipelines with Vigil-Eval (coming soon)
Scanners are modular and easily extensible
Available scan modules
Supports local embeddings and/or OpenAI
Signatures and embeddings for common attacks
Custom detections via YARA signatures
Quick Links
🛠️Install Vigil🧪Use Vigil🎯ScannersLast updated