Three conversations from the last sixty days, lightly anonymized, all real.
The CFO of a 230-person professional services firm asks me, “Should we be using Claude for our quarterly variance commentary, or is this a NetSuite problem?” The IT Director at a portfolio company under one of our private-equity clients asks, “How do you actually deploy MCP without lighting your audit trail on fire?” A one-person MSP I have known for six years messages me on a Sunday: “I have $200 a month for tools and three hours a week to learn. What do I touch first?”
Three different titles, three different budgets, three completely different worlds. One observation underneath all of it: nobody in any of those conversations is short on access to AI tools. ChatGPT is a $20 monthly subscription. Claude is a $20 monthly subscription. Microsoft Copilot is bundled into the M365 license they already pay for. The gap is not access. The gap is knowing the order in which to use them, what to do when the model is wrong, and how to measure whether the time saved actually showed up on the P&L.
What we learned running our own MSP on this stack
Intelligent IT is a managed services provider. We run a helpdesk, a security operations bench, and a back office for a portfolio of mid-market clients. We run that operation on the same AI tools we are about to teach: Claude as the primary reasoning model, Microsoft Copilot inside M365, MCP for tool integration, n8n and Make for the automation glue, Cursor for engineering work. There is no internal pilot, no “sandbox” account waiting to graduate. This is the production stack.
Three numbers from our own runtime, not theory:
- L1 ticket deflection moved from 23% to 47% in 90 days after we deployed our triage agent on top of our PSA. The 23% baseline matches the industry average that Pylon documented in their 2025 deflection report. The 47% number is our own measurement, on our own helpdesk.
- AP invoice processing dropped from $12 per invoice to $3.20 per invoice. That is our own AP, not a customer case study. The $12 baseline is the standard Ardent / IOFM benchmark for manual three-way matching; the $3.20 number is what we measure today after invoice extraction, PO matching, and approval routing run on a Claude pipeline that ChatFin documented at the same ROI tier in their Finance AI Stack 2026 report.
- Average human-hours per ticket dropped roughly 40% across a steady-state ticket mix. That number isn’t the cherry-picked best week; it is the trailing 90-day median across a four-engineer bench.
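For anyone who wants to reproduce these measurements on their own helpdesk or AP data, the two headline metrics reduce to simple ratios. A minimal sketch; the function names and the sample volumes are ours for illustration, not part of any PSA or accounting system:

```python
def deflection_rate(deflected_tickets: int, total_tickets: int) -> float:
    """Share of L1 tickets resolved with no human touch."""
    return deflected_tickets / total_tickets

def cost_per_invoice(total_ap_labor_cost: float, invoice_count: int) -> float:
    """Fully loaded AP processing cost divided by invoice volume."""
    return total_ap_labor_cost / invoice_count

# Hypothetical quarter: 470 of 1,000 tickets deflected -> the 47% figure
print(f"{deflection_rate(470, 1_000):.0%}")

# Hypothetical month: $3,200 of AP labor across 1,000 invoices -> $3.20 each
print(f"${cost_per_invoice(3_200, 1_000):.2f}")
```

The point of publishing the formulas is the same as the point of publishing the numbers: the baseline has to be measured before the agent goes live, or the "after" number means nothing.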
Those numbers are why prospects keep asking us how we did it. The Academy is the answer at scale.
The thesis: augment, not replace
The most aggressive AI vendors in 2026 are pitching some version of “replace your help desk,” “replace your AP team,” or “replace your tier-1 SOC analyst.” We watched that pitch land last year, and we watched several of those companies quietly hire people back at higher salaries six months later. The pattern was not subtle. The AI was doing the volume just fine. There was nobody left to escalate to when it got something wrong, nobody to write the policy that decided what got escalated, nobody to make the judgment call that the model was not authorized to make on its own.
Our position is the opposite, and we have been operating it for over a year. The human does the work that requires judgment, context, and accountability. The AI does the volume. The volume of work the human touches drops sharply, but the share of work only the human is qualified for stays the same or grows. A finance analyst stops pasting invoices into a queue and starts authoring the exception policy that decides when an invoice gets paid versus held. A tier-1 SOC analyst stops clicking through alerts and starts writing the rule that says when Claude is allowed to acknowledge an alert and when it has to stop and ask. An IT solopreneur stops drafting proposals from scratch and starts running five client engagements with the time they used to spend on the first three.
That is not slower work. That is higher-leverage work. And it is far more defensible than the “we fired the team” story when a regulator, a customer, or a board member asks how the work is being done.
The Academy: three tracks for three buyers
The Academy ships in three cohort programs. Different audiences, different durations, different price points, same backbone curriculum: audit your time, build real workflows on your own data, ship a capstone you can actually put in front of your CEO or your client.
Operator Track · $1,950 / 6 weeks
For IT solopreneurs and one-person MSPs running 5–25 SMB clients. Six weeks, async-first with four group calls and a Slack community. The math: independent IT consultants bill at $150–$300 per hour blended; the typical fully-stacked operator reclaims 15+ hours a week (Elephas’ 2026 solo-consultant benchmark). At $150/hour, the program pays for itself in week one of recovered capacity. See the Operator Track →
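The payback claim in that paragraph is just rate times hours against the tuition. A sketch using only the numbers stated above (the low end of the billing range and the 15-hour benchmark; nothing here is vendor data):

```python
BILL_RATE = 150        # low end of the $150-$300/hr blended range
HOURS_RECLAIMED = 15   # hours per week, per the cited solo-consultant benchmark
TUITION = 1_950        # Operator Track price

weekly_value = BILL_RATE * HOURS_RECLAIMED     # dollars of capacity recovered per week
weeks_to_payback = TUITION / weekly_value      # fraction of a week to break even

print(weekly_value)                 # 2250
print(round(weeks_to_payback, 2))   # 0.87
```

At $2,250 of recovered weekly capacity against a $1,950 price, break-even lands inside the first week, which is the "pays for itself in week one" claim made explicit.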
Director Track · $3,950 / 4 weeks
For IT Directors at 50–500 employee firms. Four weeks, weekly live cohort calls plus async builds. The outcome metric is the one we measure in our own helpdesk: triage agent and KB agent live in your service desk by week four, baseline deflection measured pre/post. The Pylon data says best-in-class IT orgs hit 40–60% deflection; the median is still 23%. The delta is the program. A team rate is available if you enroll your L2/L3 engineers alongside you. See the Director Track →
Executive Track · $4,950 / 4 weeks · Beta
For CFOs, VPs of Finance, and Controllers at mid-market firms. Four weeks, weekly cohort plus 1:1 office hours; the capstone is three production AI workflows on your own GL: AP, close, and forecasting. CFO Connect’s 2026 study shows 17% of finance teams have AI in core workflows; the other 83% are stuck in pilots. This track is in beta for Q4 2026 because we are signing a co-instructor with finance credibility before we run a paid cohort — applications open now. See the Executive Track →
Why us
Three reasons, none of them about credentials.
First, we operate the stack we teach. Our SOC bench, our helpdesk, our AP automation, and our weekly content engine all run on the same tools that show up in week two of the Operator Track. When a student asks “does this actually scale past three clients?”, the answer is a runtime metric, not a slide.
Second, we publish our numbers. The 23% to 47% deflection number, the $12 to $3.20 AP number, the 40% human-hour drop — those have to be reproducible by the students on their own data, or the program is a sales pitch. The Director Track capstone deliberately requires before/after measurement on your own helpdesk for exactly that reason.
Third, we don’t sell the tools. We are not a Bill.com reseller, an Atomicwork partner, or a Moveworks affiliate. The curriculum doesn’t have vendor sponsorships steering what we teach. When we say Claude is the right primary model and Microsoft Copilot is the right embedded second, that is an operator’s call, not a kickback.
What is deliberately not in the Academy
An honest list of what we’re leaving out, because the buyers we want know the difference.
- It is not a Claude or ChatGPT cheerleading session. We spend roughly twenty minutes total on “here is how to write a prompt” and the rest on workflow design, change management, failure modes, and measurement.
- It is not a “build a chatbot” project. Every capstone is a workflow that displaces real time on your calendar. A chatbot in a sandbox is a vanity demo; we don’t ship those.
- It is not certification. The market is saturated with vendor-issued AI badges, and they don’t mean anything yet. We sign a graduation certificate; we are not pretending it is an industry credential.
- It is not group consulting. The cohorts have a curriculum and a capstone, not a free-form mastermind. If you want a fractional CIO or fractional CFO advisor, that is a separate engagement and we will be honest about which one you actually need.
How to engage
If you are an IT operator, IT Director, or finance leader and you have looked at the AI landscape and concluded that you can figure it out on your own with enough weekend hours, you are probably right — eventually. The Academy is for people who would rather buy back the eighteen months of wandering and walk in with a working stack on day thirty.
Browse the cohorts
Three tracks, three calendars, transparent pricing. The Operator and Director tracks are open for general enrollment. The Executive Track for CFOs is in beta — applications open now, Q4 2026 paid cohort.
The bottom line
The companies that win 2027 will not be the ones with the best AI tools. They will be the ones with the best AI workflows. Workflows are operations. Operations is where MSPs already live. We have spent two decades helping mid-market firms run better operations — the Academy is what happens when we point that same playbook at the inside of our clients’ own teams.
If that lands, you already know which track is yours.