On November 12, 2025, Anthropic announced the donation of the Model Context Protocol to the Linux Foundation under a new neutral governance structure. The announcement landed alongside formal commitments from OpenAI, Google DeepMind, AWS, Microsoft, and Cloudflare to maintain MCP-compliant interfaces. A month earlier, Anthropic had shipped Skills as a first-class primitive in the Claude API and the Claude Code CLI. By the end of Q1 2026, Skills had crossed 130 million invocations across the public surface and an unknown but larger number across enterprise deployments.
For IT operations teams, this is a bigger architectural shift than ITIL v4 was, and the people running 2024-era playbooks have not caught up.
The short version of what changed
MCP is a protocol that lets a model access tools and data through a standardized interface. Before MCP, every model-to-tool connection was a custom integration. After MCP, any compliant client can talk to any compliant server. The Linux Foundation donation matters because it removes the “will Anthropic abandon this?” risk for enterprise architects. MCP is now governed under the same model as Kubernetes, and the contributing-vendor list is broad enough that nobody can unilaterally pull the rug.
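Concretely, MCP rides on JSON-RPC 2.0: a compliant client invokes a server-side tool with a `tools/call` request and gets a structured result back. A minimal sketch of building such a request (the tool name and arguments are invented for illustration, not from any shipping server):

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build an MCP-style JSON-RPC 2.0 tools/call request."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical tool on a ticketing MCP server.
request = make_tool_call(1, "create_ticket", {"title": "Disk 90% full on db-03"})
parsed = json.loads(request)
print(parsed["method"])          # tools/call
print(parsed["params"]["name"])  # create_ticket
```

Because every server speaks this same envelope, the client needs no per-tool integration code: swap the tool name and arguments and the wire format is unchanged.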
Skills are reusable, packaged capability units that a Claude agent can load on demand. A Skill is a directory containing instructions, optional code, and a manifest. The model loads the Skill when its description matches the task. The pattern looks small until you realize the implication: an operations team can now ship “how we do things” as a versioned, reviewable, testable artifact that any agent on the team can use.
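In practice, a Skill is little more than a folder. A sketch of the shape (file names beyond `SKILL.md` are illustrative; check the current Skills documentation for exact manifest fields):

```
restart-payment-service/
├── SKILL.md            # manifest: name, plus a description telling the model when to load it
├── scripts/
│   └── drain_and_restart.sh
└── reference.md        # deeper context, loaded only when the task needs it
```

The description in the manifest is what the model matches against the task at hand, so it doubles as the Skill's "when to use me" contract.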
What this changes for IT operations
Runbooks become executable
The 2010s-era IT-ops runbook lived in Confluence, was read by humans, and was usually wrong by the time it was needed. The 2026-era runbook is a Skill. It has a description that explains when to invoke it. It has the actual commands. It has guardrails on dangerous steps. It is checked into git. It is reviewed in pull requests. When an incident lands at 2:14 AM, the on-call engineer's agent loads the Skill, executes the safe steps, and pauses on the destructive ones for human approval.
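The safe-step/destructive-step split at the end is the crux, and it is simple to sketch. Assuming each runbook step carries a destructive flag (the tagging scheme here is invented for illustration, not a Skills API):

```python
from dataclasses import dataclass

@dataclass
class Step:
    command: str
    destructive: bool = False

def run_runbook(steps, execute, request_approval):
    """Execute safe steps immediately; gate destructive ones on a human."""
    executed, pending = [], []
    for step in steps:
        if step.destructive and not request_approval(step):
            pending.append(step.command)   # paused until someone approves
            continue
        execute(step)
        executed.append(step.command)
    return executed, pending

steps = [
    Step("kubectl get pods -n payments"),
    Step("kubectl logs deploy/payments --tail=200"),
    Step("kubectl rollout restart deploy/payments", destructive=True),
]

# At 2:14 AM nobody has approved anything yet, so the restart waits.
executed, pending = run_runbook(steps, execute=lambda s: None,
                                request_approval=lambda s: False)
```

The diagnostic steps run unattended; the restart sits in `pending` until the on-call engineer says yes. The guardrail lives in the runbook artifact itself, not in the engineer's memory.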
According to a CSA operational-AI survey from February 2026, 41% of enterprise IT teams now have at least one production runbook authored as a Skill. The same survey found that teams running Skills-as-runbooks reported 38% faster mean-time-to-resolution on incidents that matched a documented runbook.
Tool sprawl gets a new top-of-stack
The average enterprise IT operations team uses 47 distinct tools, according to Forrester's Q1 2026 IT Operations Now Tech. Before MCP, integrating those tools meant API-by-API custom code. After MCP, the integrating layer is the agent, and every tool either ships an MCP server or someone wraps it. Cloudflare, GitHub, Atlassian, ServiceNow, Splunk, Datadog, PagerDuty, and Okta all shipped first-party MCP servers in Q4 2025 or Q1 2026.
The MSPs who get this right are building a small number of high-trust agent surfaces (think: AiT Coord for ticketing and incident orchestration, AiT SOC Sentinel for security ops) that consume an unbounded number of MCP servers underneath. The MSPs who do not are still writing API integration code per tool.
The governance problem moves up the stack
This is the part that most teams are underestimating. When every tool is reachable through an agent, the question is not “who has access to ServiceNow?” anymore. It is “who has access to which Skills, and which Skills can call which MCP servers, and which MCP server actions are gated by which approval policy?”
OWASP's MAST framework (January 2026) defines this as the agent-permission graph. NIST's AI 600-1 update (March 2026) calls it action-attribution chaining. The shape is the same: the audit trail is no longer a sequence of human-and-tool events. It is a graph of human, agent, Skill, MCP server, and tool, and you have to be able to reconstruct it for a SOC 2, ISO 27001, or post-incident review.
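That graph is concrete enough to sketch. Assuming each audit event is logged as an edge from an actor to the thing it invoked (the node names and event shape here are illustrative, not the OWASP or NIST schema):

```python
from collections import defaultdict

# Each audit event links an actor to what it invoked.
events = [
    ("oncall:priya",                    "agent:claude-ops"),
    ("agent:claude-ops",                "skill:restart-payment-service"),
    ("skill:restart-payment-service",   "mcp:kubernetes-server"),
    ("mcp:kubernetes-server",           "tool:kubectl rollout restart"),
]

graph = defaultdict(list)
for src, dst in events:
    graph[src].append(dst)

def attribution_chain(start):
    """Walk from a human actor down to the concrete tool action."""
    chain, node = [start], start
    while graph[node]:
        node = graph[node][0]   # simplified: follow the first outgoing edge
        chain.append(node)
    return chain

chain = attribution_chain("oncall:priya")
# human -> agent -> Skill -> MCP server -> tool action
```

Being able to produce this chain on demand, for any action, is what the auditors will actually ask for.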
What we are shipping
This shift is the reason we are building AiT Coord and the reason we are pulling the AiT AI Gateway forward in our 2026 roadmap. Coord is the Skills-and-MCP layer for our IT operations practice. Every runbook we run on a client tenant is authored as a Skill. Every tool integration is mediated through an MCP server. The audit trail is a single coherent graph that we can show to a client's auditor, their CISO, or a regulator on demand.
The AI Gateway sits one layer up. It is what enforces the agent-permission graph. Which Skill is allowed to call which MCP server, which MCP server action is allowed to fire without approval, which actions require step-up authentication — all of that is policy, and the policy lives in the gateway, not in scattered config across forty-seven tools. The reference architecture mirrors what Cloudflare's AI Gateway and Portkey do for model-call governance, extended to cover the agent-MCP boundary.
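A gateway policy table can be as simple as a mapping from (Skill, MCP server, action) to a decision, with deny as the default. A minimal sketch (the Skill names, server names, and decision labels are invented for illustration; a real gateway would load this from versioned config, not a hardcoded dict):

```python
# Decision per (skill, mcp_server, action); anything unlisted is denied.
POLICY = {
    ("restart-payment-service", "kubernetes", "get_pods"):        "allow",
    ("restart-payment-service", "kubernetes", "rollout_restart"): "require_approval",
    ("offboard-user",           "okta",       "deactivate_user"): "require_step_up_auth",
}

def authorize(skill: str, server: str, action: str) -> str:
    """Central policy check at the agent-MCP boundary."""
    return POLICY.get((skill, server, action), "deny")

print(authorize("restart-payment-service", "kubernetes", "get_pods"))  # allow
print(authorize("restart-payment-service", "s3", "delete_bucket"))     # deny
```

The point of centralizing this is the default-deny: a new tool wired in through MCP can do nothing until someone writes it into policy, instead of inheriting whatever scattered config it shipped with.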
What MSPs running 2024-era playbooks should do now
If your MSP is still building tool integrations one API at a time, you have eighteen months before the leverage gap is too wide to close.
- This quarter: author your top five most-frequent runbooks as Skills. Check them into git. Review them in PRs. Run them in a sandbox tenant first.
- Next quarter: stand up an internal MCP catalog. Every tool your team uses gets evaluated for MCP server availability. The ones with first-party servers go first.
- By Q4 2026: every customer-impacting agent action goes through a governance layer that produces an action-attribution graph. If you cannot show a regulator the full graph for a given incident, you are not ready for the 2027 audit cycle.
- Pick a partner. The AI Gateway category is splitting between Cloudflare AI Gateway, Portkey, and a small number of MSP-built layers (ours included). Pick one and commit. The teams that DIY this end up with three half-built gateways and no governance.
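For the first step, the repository shape is unglamorous on purpose. One layout that works (directory and workflow names are illustrative):

```
skills/
├── restart-payment-service/
│   └── SKILL.md
├── rotate-api-keys/
│   └── SKILL.md
└── clear-stuck-queue/
    └── SKILL.md
.github/workflows/
└── skill-lint.yml     # CI: validate manifests, dry-run each Skill in a sandbox tenant on every PR
```

Treating Skills like any other code, with review and CI, is the whole mechanism; nothing exotic is required.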
See the AiT AI Gateway architecture
We published a 22-page reference architecture on running Skills, MCP, and the agent-permission graph at scale. It is the spec we are building against and the same pattern we run on our own tenant.
The bottom line
MCP and Skills are not feature releases; they are an operating-model change. The IT operations teams and MSPs that treat them as features will spend 2026 patching the wrong layer of the stack. The ones that treat them as a re-architecture will end 2026 with a governance posture their competitors cannot match in a single quarter. The window is narrower than most leaders think.