SAFE-MCP Gets $12.5M. Here Is How ATR Fits Into the Stack.
Six tech giants funded SAFE-MCP to secure AI agent tool calls. ATR provides the detection layer that makes standards enforceable. Here is how they fit together.
The Biggest AI Security News of 2026
On March 27, 2026, something unprecedented happened: Anthropic, Google DeepMind, OpenAI, AWS, GitHub, and Microsoft -- companies that are otherwise competitors -- jointly funded $12.5 million through OpenSSF and the Alpha-Omega Foundation. The target: securing open-source AI agent tool calls.
The framework they are building is called SAFE-MCP. Its goal is to create security standards for when AI agents call external tools -- the exact attack surface that has produced every major AI security incident in the past 12 months.
Companies that compete on models, APIs, and cloud infrastructure decided this one problem is too important to fight over. That has never happened in AI before.
Why This Matters
Until today, "AI agent security" was a niche concern. Security teams did not have budget for it. CISOs did not have a framework to evaluate it. Developers did not think about it.
Now six of the largest technology companies on Earth are saying: this is real, this is urgent, and nobody has solved it yet.
The attack surface they identified is exactly what we have been documenting:
- ●Tool poisoning: malicious instructions hidden in MCP tool descriptions that hijack agent behavior
- ●Supply chain attacks: compromised packages in MCP registries (we found 182 CRITICAL findings across 36,394 skills)
- ●Privilege escalation: tools that start safe and gradually acquire more access
- ●Data exfiltration: skills that silently forward your data to external endpoints
This is the same attack taxonomy that ATR provides executable detection rules for.
SAFE-MCP Is a Standard. ATR Is a Detection Layer.
SAFE-MCP will define what needs to be secured. ATR provides 113 executable rules that detect these attacks in real time. They solve different problems at different layers:
| Layer | What It Does | Example |
|---|
|-------|-------------|---------|
| SAFE-MCP | Defines threat categories | "Tool poisoning is a risk" |
|---|
| OWASP Agentic Top 10 | Enumerates attack surfaces | "ASI01: Agent Goal Hijack" |
|---|
| ATR | Provides executable detection rules | "Match regex pattern X in tool description, severity CRITICAL, response: block" |
|---|
| PanGuard | Runs ATR rules + AI analysis on every skill | "tesla-fleet-api: CRITICAL. Prompt injection found at line 47." |
|---|
Standards without implementation are checklists. Implementation without standards is ad hoc. The industry needs both.
What We Have Shipped So Far
While the standardization process begins, there are already executable tools available:
- ●113 ATR detection rules covering OWASP Agentic Top 10: 10/10 categories
- ●36,394 skills scanned in the ClawHub ecosystem -- the largest AI agent skill security audit ever conducted
- ●16 platform support: Claude Code, Cursor, OpenClaw, Codex CLI, Windsurf, Zed, Gemini CLI, VS Code Copilot, and 8 more
- ●Threat Cloud: a collective intelligence network where every scan makes every user safer
- ●61.4% recall, 99.6% precision on the PINT benchmark -- published with full methodology
What This Means for PanGuard
Three things:
1. Market validation. Six of the largest technology companies confirmed that AI agent tool call security is a real and urgent problem. This validates the entire category.
2. Standards alignment. We will align ATR rule categories with SAFE-MCP threat taxonomy as soon as it is published. If SAFE-MCP defines 15 threat categories, ATR will have rules mapping to all 15.
3. OpenSSF submission. We are exploring submitting ATR as an OpenSSF project. ATR is MIT-licensed, community-driven, and vendor-neutral -- exactly the profile OpenSSF supports.
What You Should Do
If you are running AI agents in production -- Claude Code, Cursor, Codex, or any MCP-compatible tool -- you should not wait for SAFE-MCP to finish its 18-month standardization process.
The attacks are happening now. postmark-mcp stole inboxes. SANDWORM_MODE exfiltrated SSH keys from 19 typosquatted packages. 13.5% of ClawHub skills have security risk patterns.
Install PanGuard. Scan your skills. It takes 60 seconds.
curl -fsSL https://get.panguard.ai | bashFree. Open source. 113 detection rules. 16 platforms. The standard is coming. The protection is already here.