🔬
Vigil: Documentation
GitHub Repo
  • 🛡️Vigil
  • Overview
    • 🏗️Release Blog
    • 🛠️Install Vigil
      • 🔥Install PyTorch (optional)
    • 🧪Use Vigil
      • ⚙️Configuration
        • 🔄Auto-updating vector database
      • 🗄️Load Datasets
      • 🌐Web server
        • 🤝API Endpoints
        • 🪐Web UI playground
      • 🐍Python library
      • 🎯Scanners
        • 🤗Transformer
        • ❕YARA / Heuristics
        • 📑Prompt-response Similarity
        • 💾Vector database
        • 🐤Canary Tokens
    • 🛡️Customize Detections
      • 🌟Add custom YARA signatures
      • 🔢Add embeddings
      • 🐍Custom scanners
    • 🪄Sample scan results
Powered by GitBook
On this page
  • Overview 🏕️
  • Highlights ✨
  • Quick Links

Vigil

⚡ Security scanner for Large Language Model (LLM) prompts ⚡

NextRelease Blog

Last updated 1 year ago

Overview 🏕️

Vigil is a Python library and REST API for assessing Large Language Model prompts and responses against a set of scanners to detect prompt injections, jailbreaks, and other potential risks. This repository also provides the detection signatures and datasets needed to get started with self-hosting.

This application is currently in an alpha state and should be considered experimental.

Work is ongoing to expand detection mechanisms and features.

Highlights ✨

  • Analyze LLM prompts for common injections and risky inputs

  • Use Vigil as a Python library or REST API

  • Evaluate detections and pipelines with Vigil-Eval (coming soon)

  • Scanners are modular and easily extensible

  • Available scan modules

  • Signatures and embeddings for common attacks

Quick Links


Repo:

Supports and/or

Custom detections via

🛡️
https://github.com/deadbits/vigil-llm
Vector database / text similarity
Auto-update vector database with detected prompts
Heuristics via YARA
Transformer model
Prompt-response similarity
Canary Tokens
local embeddings
OpenAI
YARA signatures
Streamlit web UI playground
🛠️Install Vigil
🧪Use Vigil
🎯Scanners