Healthcare is moving fast on AI agents. Patient intake forms, clinical documentation, insurance verification, appointment scheduling — these workflows have all moved into production at major health systems in the last 90 days. But most of them are operating on borrowed time from their compliance officers, and a single HHS audit or Office for Civil Rights investigation could reset the entire market.
If you are deploying AI agents in healthcare, you need to know five things before your first agent goes live. This is not theory; this is what we have built into production for Intelligent IT healthcare clients.
1. A Business Associate Agreement (BAA) is mandatory, not optional
If an AI vendor or service provider touches Protected Health Information (PHI) on your behalf, you need a signed Business Associate Agreement. This includes LLM vendors. As of 2025, Anthropic (Claude), OpenAI, and Google Cloud all offer BAAs, but many smaller vendors and self-hosted open-source deployments do not. If you are not sure whether your AI provider will sign a BAA, do not use them for PHI. Penalties run up to $1.5M per violation category, per year, and “we did not know” is not a defense.
2. PHI gating is not optional; it is critical
You cannot stop your AI agent from touching PHI entirely — that is the whole point of the agent. But you can architect it so that PHI is redacted before it reaches the model, and model responses are only used for non-PHI decisions. Example: an agent takes in “Patient: John Smith, age 42, diabetes, last A1C 6.8” and returns “Schedule follow-up in 3 months.” The model only ever sees “Patient: [REDACTED], age 42, diabetes, last A1C 6.8,” and the response is logged without the patient identifier. Simple redaction logic can gate 70–80% of your PHI exposure.
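A minimal sketch of that gate in Python, assuming a regex redactor sitting between the intake text and the model call. The `PHI_PATTERNS` list, `redact_phi`, `gated_agent_call`, and the `call_model` callable are illustrative names, and two regexes are nowhere near a complete PHI detector:

```python
import re

# Illustrative patterns only; a real deployment would use a vetted
# de-identification library or service, not a couple of regexes.
PHI_PATTERNS = [
    (re.compile(r"Patient:\s*[A-Z][a-z]+ [A-Z][a-z]+"), "Patient: [REDACTED]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
]

def redact_phi(text: str) -> str:
    """Strip direct identifiers before the text ever reaches the model."""
    for pattern, replacement in PHI_PATTERNS:
        text = pattern.sub(replacement, text)
    return text

def gated_agent_call(raw_note: str, call_model) -> str:
    """Redact first, call the model second, log only the redacted input."""
    redacted = redact_phi(raw_note)
    decision = call_model(redacted)   # the model never sees the identifier
    print(f"audit: input={redacted!r} decision={decision!r}")  # stand-in logger
    return decision

# The example from the text: the scheduling decision needs the clinical
# values, not the patient's name.
note = "Patient: John Smith, age 42, diabetes, last A1C 6.8"
gated_agent_call(note, call_model=lambda text: "Schedule follow-up in 3 months")
```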
3. Audit trails are non-negotiable
Every AI agent decision that touches a patient record must be logged with: who triggered it, when, what data the agent saw, what decision the agent made, and who reviewed/approved it before it was acted on. Most LLM wrappers do not log this by default. You have to build the audit envelope. Expect 20–40 hours of engineering to get this right. Do not skip it; HHS auditors will ask for this on day one.
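A sketch of what that audit envelope can look like as a data structure, assuming an append-only JSONL file as the store. The `AgentAuditRecord` fields and `write_audit_record` helper are illustrative, not a standard schema, and a production system would want a tamper-evident store rather than a flat file:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional
import json

@dataclass
class AgentAuditRecord:
    """One record per agent decision that touched a patient record."""
    triggered_by: str            # user or system that invoked the agent
    triggered_at: str            # ISO-8601 timestamp of the invocation
    input_seen: str              # the (redacted) data the agent actually saw
    decision: str                # what the agent decided or recommended
    reviewed_by: Optional[str]   # clinician who approved it; None until reviewed
    acted_on: bool               # whether the decision was executed

def write_audit_record(record: AgentAuditRecord, path: str = "agent_audit.jsonl") -> None:
    """Append-only log; swap the file for a tamper-evident store in production."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

write_audit_record(AgentAuditRecord(
    triggered_by="scheduler-service",
    triggered_at=datetime.now(timezone.utc).isoformat(),
    input_seen="Patient: [REDACTED], age 42, diabetes, last A1C 6.8",
    decision="Schedule follow-up in 3 months",
    reviewed_by=None,
    acted_on=False,
))
```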
4. AI explanations must be auditable by a human clinician
If an AI agent recommends a treatment change, denies a prior auth, or reschedules a patient, a licensed clinician must be able to read the reasoning and agree or disagree. This is a liability issue and a trust issue. Build the agent so that its reasoning is returned in plain language, not just a final decision. “Recommend follow-up because A1C has risen 0.5 points in three months” is auditable. “Probability 0.87” is not.
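One way to build that in, sketched below: have the agent return a structured decision whose reasoning field is plain language, and refuse to act on anything that comes back as a bare score. The `AgentDecision` schema and `require_auditable` check are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class AgentDecision:
    action: str        # e.g. "Recommend follow-up in 3 months"
    reasoning: str     # plain-language rationale a clinician can read and dispute
    confidence: float  # useful for internal triage, never a substitute for reasoning

def require_auditable(decision: AgentDecision) -> AgentDecision:
    """Reject decisions whose only 'explanation' is a number."""
    stripped = decision.reasoning.strip()
    if not stripped or stripped.replace(".", "").isdigit():
        raise ValueError("No clinician-readable reasoning; route to human review.")
    return decision

# Auditable: a clinician can read this and agree or disagree.
require_auditable(AgentDecision(
    action="Recommend follow-up in 3 months",
    reasoning="A1C has risen 0.5 points in three months.",
    confidence=0.87,
))

# Not auditable: a bare probability raises and falls back to human review.
# require_auditable(AgentDecision(action="Deny prior auth", reasoning="0.87", confidence=0.87))
```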
5. Breach notification deadlines apply to AI-generated incidents
If your AI agent causes a breach of PHI (e.g., the agent leaks a patient identifier to an unauthenticated user, or sends a treatment recommendation to the wrong patient), the HIPAA Breach Notification Rule gives you no more than 60 days to notify the affected individuals, and HHS as well for breaches affecting 500 or more people. Many BAAs and state laws require reporting much sooner, sometimes within 72 hours or less. Your incident response playbook must account for AI-generated incidents. Most security teams have not thought about this yet. Add it to your IR plan now.
The operational reality
None of these five things will stop you from deploying AI agents in healthcare. All five of them will force you to spend engineering and compliance time upfront. The organizations that are ahead are the ones that budgeted for that time, worked with their legal team and their compliance officers, and got the envelope built before they went to production.
The organizations that are taking shortcuts will get 6–12 months of operational benefit, and then they will hit an audit or a breach and will have to rebuild everything under pressure.
Explore AiT Hosted Agents for healthcare
AiT Hosted Agents ship with a HIPAA BAA, built-in PHI redaction, mandatory audit logging, and a playbook for human-in-the-loop decisions. We run this in production for healthcare customers. See the system in action and discuss your use case.
Do not wait
The clinical teams are already asking for AI agents. Build the compliance envelope now and it becomes a checkbox. Build it later and it becomes a blocker. Build it under an audit and it becomes a crisis. Timing matters.