Home / Blog / MCP CVEs & Agent Governance

78% of Companies Don’t Govern Their AI Agents. The 30+ MCP CVEs Make That a Ticking Bomb.

Most security incidents in the next 18 months will not look like the ones in the previous five years.

The Model Context Protocol — Anthropic’s open standard for connecting AI agents to tools, data, and other agents — became the de facto wiring layer for agentic AI in 2025. By Q1 2026 it had been donated to a Linux Foundation working group with Block, OpenAI, Google, Microsoft, AWS, and Cloudflare on the steering committee. There are over 17,000 indexed MCP servers and more than 110 million SDK downloads per month. 78% of enterprise AI teams have at least one MCP agent in production.

It is also actively under attack. Since GA, MCP has accumulated more than 30 public CVEs. The exploit classes that should worry an MSP serving mid-market clients:

  • Tool poisoning. A malicious Jira ticket, Slack message, or retrieved document containing carefully-crafted text gets ingested by an agent. The resulting tool call is hijacked to do something the user never authorized.
  • JWT leakage. Stdio-mode MCP servers default to no authentication. Tokens get logged. Tokens get committed. Tokens get reused across tenants.
  • STDIO RCEs. Several popular MCP servers shipped with unsafe deserialization defaults. Ten remote-code-execution-class issues are public as of mid-2026.
  • No specified retry semantics. A deploy agent that runs twice because the orchestrator timed out can ship a release twice. The MCP spec doesn’t solve this; you have to.

None of these were design errors. They are the inevitable surface area of a new protocol that ate the world in 18 months.

Why the existing security stack misses this

Your firewall doesn’t see the agent traffic because it’s mostly outbound HTTPS to vendor APIs. Your EDR doesn’t flag a hijacked agent action because the binary running it is legitimate — it’s the prompt that was poisoned, not the executable. Your SIEM correlates logs, but the agent isn’t logging at the granularity needed to reconstruct what happened.

Compounding this: the Cloud Security Alliance reported in early 2026 that machine identities now outnumber humans 45 to 1 on average and 80 to 1 in cloud-native shops. 78% of organizations have no formal policy for AI-identity creation, scoping, or deprovisioning. Astrix, Entro, GitGuardian, Aembit, and Oasis Security each raised significant rounds in Q1 2026 to address this Non-Human Identity (NHI) gap, but their solutions are direct-to-enterprise, not channel-built. The mid-market firms an MSP serves don’t buy from those vendors directly.

What we’re shipping at Intelligent IT

Two products in the AiT family are the operational answer to this gap, and we run them on our own MSP first.

AiT Coord — the cross-project agent orchestrator

Every agent acting on a managed environment authenticates via a short-lived JWT signed with an RS256 key in Google Secret Manager. Tenant ID is a claim. The orchestrator verifies on every call. Every agent action lands in an audit table with idempotency keys, lock state, and tenant scope. NHI sprawl — the perimeter problem 78% of organizations admit they don’t govern — collapses to one verifiable signing key per environment.

Lock arbitration is real: before any agent acts on a project (scan, deploy, migrate, fix), it acquires a project-scoped lock with a TTL and an intent. Conflicting intents return a structured conflict response. No agent ever assumes ownership of state they didn’t lock.

AiT SOC Sentinel — the AI-aware oversight layer

Sentinel watches the agents themselves. Prompt-injection signatures fire at the gateway layer. Anomalous tool-call patterns (especially privilege-escalating, cross-tenant, or after-hours) generate investigative alerts. Model-supply-chain attestation flags any LoRA or fine-tune that wasn’t signed by an approved producer. The OWASP LLM Top 10 attack classes — LLM01 through LLM10 — have specific detection rules tied to them.

The combination is what makes this an MSP-scale answer rather than a vendor pitch: tenant-scoped, audit-trailed, and integrated with the rest of the IT stack we already manage for the client.

What you should ask your current MSP this week

  1. How do your AI agents prove their identity to each other, and what cryptographic primitive does that depend on?
  2. Show me the audit log of every AI-agent action that touched my environment in the last 30 days.
  3. What happens if two of your agents try to act on my system at the same time?
  4. Which OWASP LLM attack classes do you specifically detect, and where would the alert fire?
  5. How many machine identities are active in my tenant right now, and what’s the kill-switch latency for any single one of them?

If your MSP can’t answer those concretely, the conversation is overdue.

Read the deep dive

Our white paper on cross-project agent coordination explains the JWT + MCP pattern we built and run, complete with the failure modes the MAST research surfaced and the mitigations we put into production.

See the full AI security stack →

The bottom line

Multi-agent AI is in production at scale. The governance is not. The 30 MCP CVEs are the early signal of a much larger attack surface that the existing mid-market security stack doesn’t cover. The MSPs that update their tooling to handle agentic identity, lock arbitration, and prompt-injection defense will be the ones their clients keep when the first significant agent-driven breach hits the trade press.

We expect that breach within the next two quarters. We’d like our clients ready before then.