Detect
Behavioral anomaly detection with a 3-layer AI funnel
Rules (50ms) → local AI (~2s) → cloud AI (~5s) · 90 / 7 / 3 production traffic split · 7-day learning-mode baseline · Supports Claude, OpenAI, Ollama, or offline-only.
WHAT THIS LAYER DOES
L4 Detect catches what rules cannot: novel attack patterns, behavioral drift, coordinated multi-step attacks. A 3-layer funnel routes ~90% of traffic to cheap rule matching, ~7% to a local LLM (Ollama / llama.cpp) for semantic analysis, and only ~3% to cloud AI for deep reasoning on ambiguous cases.
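The funnel decision reduces to confidence-based routing. The sketch below is illustrative only: the `routeEvent` helper and its thresholds are assumptions, not SmartRouter's actual API.

```typescript
// Hypothetical tiers and thresholds; the real SmartRouter logic may differ.
type Tier = "rules" | "local-ai" | "cloud-ai";

function routeEvent(ruleConfidence: number): Tier {
  if (ruleConfidence >= 0.9) return "rules";     // ~90% of traffic: clear verdict, ~50ms
  if (ruleConfidence >= 0.5) return "local-ai";  // ~7%: semantic check by local LLM, ~2s
  return "cloud-ai";                             // ~3%: deep reasoning on ambiguous events, ~5s
}
```

Only events the cheaper tier cannot decide confidently escalate, which is what keeps the 90 / 7 / 3 split stable.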
WHY YOU NEED IT
Rules catch the ~90% of attacks you have seen before. The other ~10% need AI, but if you call a cloud LLM on every request, cost and latency explode. The funnel keeps P50 under 50ms and P99 under 5s, at roughly 95% lower cost than a naive "always call GPT" architecture.
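A back-of-envelope check on that cost claim, assuming a hypothetical per-call cloud price (the $0.01 figure is illustrative, not a real quote):

```typescript
// Blended cost per 1,000 events under the 90/7/3 split, assuming rules and
// the local LLM are effectively free and only cloud calls are billed.
const cloudCostPerCall = 0.01;                  // assumed price, for illustration
const naive = 1000 * cloudCostPerCall;          // every event hits the cloud
const funnel = 1000 * 0.03 * cloudCostPerCall;  // only the ~3% tail escalates

console.log(naive, funnel);
```

With only ~3% of events reaching the cloud tier, blended spend lands around 97% below the all-cloud baseline, consistent with the ~95% figure once local-LLM compute is counted.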
HOW IT WORKS
SmartRouter in packages/panguard-guard/src/engines/smart-router.ts dispatches events by confidence. EnvironmentBaseline learns normal processes, connections, and logins during the 7-day learning window, then flips to protection mode. AnalyzeAgent wraps Anthropic / OpenAI / Ollama behind a unified interface; the investigation engine correlates findings across events.
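A minimal sketch of the learning-window flip, with invented names and a process-only baseline (the real EnvironmentBaseline also tracks connections and logins):

```typescript
// Illustrative sketch, not Panguard's actual EnvironmentBaseline API.
const LEARNING_WINDOW_MS = 7 * 24 * 60 * 60 * 1000; // 7-day learning window

class EnvironmentBaseline {
  private readonly seen = new Set<string>();
  constructor(private readonly startedAt: number) {}

  get learning(): boolean {
    return Date.now() - this.startedAt < LEARNING_WINDOW_MS;
  }

  // Returns true when the observation is anomalous.
  observe(processName: string): boolean {
    if (this.learning) {
      this.seen.add(processName); // learning mode: record, never alert
      return false;
    }
    return !this.seen.has(processName); // protection mode: unseen = anomaly
  }
}
```

Nothing alerts during the window; after seven days, anything outside the learned set is flagged for the funnel to triage.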
TRY IT NOW
Add cloud AI for deeper detection (optional — local-only also works):
pga guard setup-ai
ATTACKS THIS LAYER CATCHES
Concrete threats, concrete controls
Multi-skill chain attack
HIGH · Individually benign tool calls that combine into a malicious sequence; rules miss it, behavioral detection catches it.
Novel prompt injection variants
MEDIUM · Adversarial prompts that evade known regexes; local AI analyzes semantic intent.