Self-hosted prompt injection detection using the Superagent guard-1.7B model served by a local Ollama instance. Classifies every LLM input as pass/block, with auto-chunking of long inputs and parallel processing of chunks. Zero external API calls: all inference runs on your VM.
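The chunk-then-classify flow can be sketched against Ollama's standard `/api/generate` endpoint. This is a minimal illustration, not the skill's actual implementation: the model tag (`guard`), chunk size, and the assumption that the model replies with a literal "pass" or "block" verdict are all placeholders you would adjust to match the deployed guard model.

```python
import json
import urllib.request
from concurrent.futures import ThreadPoolExecutor

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint
MODEL = "guard"        # placeholder tag; pull the actual Superagent guard-1.7B model
MAX_CHARS = 2000       # assumed chunk size, not the skill's real setting

def chunk_text(text: str, size: int = MAX_CHARS) -> list[str]:
    """Split input into fixed-size chunks so each fits the model's context."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def classify_chunk(chunk: str) -> str:
    """Send one chunk to the local model; assumes it answers 'pass' or 'block'."""
    payload = json.dumps({"model": MODEL, "prompt": chunk, "stream": False}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"].strip().lower()

def guard(text: str) -> str:
    """Classify all chunks in parallel; block if any single chunk is flagged."""
    with ThreadPoolExecutor() as pool:
        verdicts = list(pool.map(classify_chunk, chunk_text(text)))
    return "block" if any("block" in v for v in verdicts) else "pass"
```

Because everything goes to `localhost:11434`, no input ever leaves the machine; the thread pool keeps latency roughly flat as input length grows.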