Executive summary
- The US AI compliance picture in May 2026 is not "we are waiting for federal law." It is a stack of state statutes enforced by attorneys general, federal sector regulators, and contract obligations that already bind any regulated SMB doing business in California, Colorado, Texas, New York, Utah, or under HHS, SEC, FINRA, CFPB, OCC, FDIC, NYDFS, PCI SSC, or DoD authority.
- The NIST AI Risk Management Framework (AI RMF 1.0, January 2023) and its Generative AI Profile (NIST-AI-600-1, July 2024) are the de-facto national template. They are referenced or mirrored in every meaningful state AI law and in OMB procurement guidance.[1]
- The Colorado AI Act (SB 24-205) was the country's first comprehensive state AI statute. Its substantive obligations were delayed from February 1, 2026 to June 30, 2026, and on April 27, 2026 a federal court paused enforcement while the legislature reconsiders the scope. The fact that it can be paused does not undo the underlying obligation; planning for it is still planning for what every other state is now copying.[2]
- Texas TRAIGA (HB 149) took effect January 1, 2026, with civil penalties of $10,000 to $200,000 per violation, plus $2,000 to $40,000 per day for continuing violations.[3] California AB 2013 took effect January 1, 2026; SB 942 (watermarking) is operative August 2, 2026.[4] Connecticut's AI Responsibility and Transparency Act passed both chambers May 1, 2026, with employment-AI obligations effective October 1, 2026.[5]
- Federal sector exposure is already binding. FINRA Notice 24-09 (June 27, 2024) confirms every existing securities rule applies to generative AI.[6] The SEC charged Delphia and Global Predictions $400,000 combined for AI-washing in March 2024.[7] The FTC banned Rite Aid from facial recognition for five years and required model deletion in December 2023, then opened Operation AI Comply in September 2024.[8] NYDFS issued AI-specific cybersecurity guidance under 23 NYCRR Part 500 in October 2024 and AI-specific third-party guidance in October 2025.[9]
- The cyber-insurance market is where the cost shows up first. Carriers added AI-specific renewal questionnaires in early 2026, started writing AI-specific exclusions, and expect documentation. Non-disclosure of AI use is already triggering renewal pressure or denial.[10]
- The audit pack the regulators, examiners, and insurers actually want is the same artifact in every case: AI inventory, vendor risk register, NIST AI RMF self-assessment, impact assessments per high-risk use case, training-data provenance, consumer notices, retention policies, AI-aware incident response. AiTLLM Private and Sovereign tiers ship that pack pre-built.
The map of US AI obligations as of May 2026
Each row below is a source of obligation that is either binding today or becomes binding within this calendar year. Effective dates and penalty figures are confirmed against the primary source linked in the footnotes; "paused" means the obligation exists in statute but enforcement is currently stayed.
| Obligation | Effective | Scope & who enforces | Top-line consequence |
|---|---|---|---|
| NIST AI RMF 1.0 + GAI Profile (NIST-AI-600-1) | Jan 26, 2023 / Jul 26, 2024 | Voluntary federal standard. Cited in OMB M-25-21/22, Colorado AI Act, Texas TRAIGA, NYDFS guidance, FINRA expectations. | De-facto national template, used by examiners and insurers as the benchmark for "reasonable AI risk management." |
| Colorado AI Act (SB 24-205) | Jun 30, 2026 (enforcement paused Apr 27, 2026) | "High-risk" AI in employment, education, finance, healthcare, housing, insurance, legal, gov services. Developer + deployer duties. Colorado AG, exclusive enforcement, no private right of action. | Up to $20,000 per violation ($50,000 for violations involving consumers age 60+) under the Colorado Consumer Protection Act. |
| California AB 2013 (Generative AI Training Data Transparency) | Jan 1, 2026 | Developers of GenAI systems made available to Californians and released or modified on/after Jan 1, 2022 must publish training-data documentation on their website. | Disclosure obligation, IP and personal-information flags, dataset summaries. Trade-secret tension still being litigated. |
| California SB 942 (AI Transparency Act) | Aug 2, 2026 (amended date) | "Covered providers" with 1M+ monthly users that produce image, video, or audio content. Visible labels + hidden watermarks containing provider name, system, timestamp, unique ID. | Civil penalties via the California AG. Text-only systems are exempt. |
| California AB 1008 (CCPA scope clarification) | Jan 1, 2025 | Confirms personal information under the CCPA includes data in AI systems "capable of outputting personal information." | Brings GenAI vendors and large-model developers inside the CCPA's "data broker" and "sale of personal information" framing. |
| Texas Responsible AI Governance Act (HB 149) | Jan 1, 2026 | Anyone promoting, advertising, doing business in TX, or producing a product used by TX residents. Government entity disclosure of AI interactions. Enforced by the Texas AG; amends the Texas Data Privacy and Security Act. | $10,000 to $200,000 per violation; $2,000 to $40,000 per day for continuing violations. |
| NYC Local Law 144 (AEDT bias audit) | In force Jul 5, 2023 | Annual bias audit + public posting + candidate notice for any automated employment decision tool screening NYC candidates or employees. DCWP enforcement. | $500 first violation, $500–$1,500 per subsequent day. Each candidate use can be a separate violation. Stricter 2026 enforcement expected after the Dec 2, 2025 NY State Comptroller audit. |
| Utah AI Policy Act (SB 149, amended by SB 226 + SB 332) | May 1, 2024 (extended through Jul 2027) | Generative AI disclosure when prompted, plus higher disclosure bar for "regulated occupations" and "high-risk" interactions. | Utah AG and Department of Commerce enforcement. Disclosure-driven; consumer-protection framing. |
| Connecticut SB 5 (AI Responsibility and Transparency Act) | Oct 1, 2026 (employment-AI provisions); Oct 1, 2027 (interactive disclosure) | Employers and AEDT vendors. Frontier-model whistleblower protections. Synthetic-content provenance for 1M+ user GenAI systems. Connecticut AG, exclusive enforcement; 60-day cure period through Dec 31, 2027. | Unfair/deceptive trade practice. WARN-style technology-change notice for layoffs caused by AI. |
| HHS / OCR HIPAA Security Rule NPRM (proposed Jan 6, 2025) | Final rule expected Summer 2026 | First major HIPAA Security Rule update in 20 years. AI tools that touch PHI must be in the risk analysis. BAA language must address AI training and PHI minimum-necessary. | Existing HIPAA penalty tiers remain (up to $2,134,831 per violation category per year, 2026 adjusted). AI now an audit-trail expectation. |
| SEC Marketing Rule (Rule 206(4)-1) + AI-washing enforcement | In force; enforcement live | Investment advisers and broker-dealers. False or misleading AI capability claims = Marketing Rule violation. SEC examinations actively review AI claims and policies. | Civil penalties + censure. Delphia $225K + Global Predictions $175K (Mar 2024). 2025 SEC Marketing Rule risk alert flags AI-related testimonial and disclosure gaps. |
| FINRA Notice 24-09 + 2026 Annual Regulatory Oversight Report | Jun 27, 2024 / Dec 9, 2025 | Member firms (broker-dealers). Existing FINRA rules (Rule 3110 Supervision, Rule 2210 Communications, Rule 4511 Books & Records) apply unchanged to GenAI use. 2026 ROR adds an expanded GenAI section with explicit governance and agentic-AI expectations. | Standard FINRA disciplinary range. AI is now a named exam topic. |
| CFPB Circular 2023-03 (Adverse Action + AI) | Sep 19, 2023 | Creditors using AI/ML must provide accurate, specific reasons for denial under ECOA Reg B. "No special exemption for AI." Generic checkbox forms insufficient when AI uses non-traditional data. | CFPB enforcement, plus state AG concurrent authority under Dodd-Frank. |
| OCC + FRB + FDIC interagency model risk management guidance (revised Apr 17, 2026) | Apr 17, 2026 | Banking organizations. Model risk management expectations should be "risk-based, tailored, and commensurate." Joint RFI on AI/agentic AI in banking models forthcoming. | Supervisory criticism via examinations. Sets the bar for what an examiner expects on AI/ML model documentation. |
| NYDFS AI cybersecurity + third-party guidance (23 NYCRR Part 500) | Oct 16, 2024 + Oct 21, 2025 | NYDFS-licensed financial institutions and insurers. AI-enabled social engineering, AI-enhanced attacks, NPI/biometric exposure, supply chain. Recommended TPSP contract clauses for AI use, training, and remedy. | Existing 23 NYCRR Part 500 enforcement, with AI now a named risk category. |
| PCI DSS v4.0.1 future-dated requirements | Mar 31, 2025 | Any merchant or service provider in scope for cardholder data. 51 future-dated controls became mandatory. PCI SSC AI assessment guidance issued for QSAs. | Card-brand fines, contract termination by acquirer, breach-cost amplification. |
| DoD CMMC 2.0 Final Rule (DFARS) | Effective Nov 10, 2025; phased through Nov 10, 2028 | Any DoD prime or sub handling FCI or CUI. Phase 1: Level 1 + Level 2 self-assessment. Level 2 still scored against NIST SP 800-171 Rev 2 (Rev 3 not yet authorized for CMMC). | Loss of contract eligibility. False-claims liability for misattested certifications. |
| FTC Section 5 + Operation AI Comply | In force; actions ongoing | Any company making AI claims to US consumers. Five Operation AI Comply actions filed Sep 25, 2024; Rite Aid 5-year facial-recognition ban + model deletion Dec 19, 2023. | Consent orders, monetary judgments, and algorithmic disgorgement (forced deletion of models trained on illegally obtained data). |
| OMB M-25-21 + M-25-22 | Apr 3, 2025; procurement requirements apply to solicitations issued on/after Sep 30, 2025 | Federal agencies + their contractors. Replaces Biden-era M-24-10 and M-24-18; emphasizes American-made AI and standardized procurement criteria. | Contract-eligibility gate for federal AI procurement. Effectively imports NIST RMF expectations into the federal AI supply chain. |
Two things stand out when the table is read as a whole. First, the obligations are cumulative, not exclusive. A 75-person Connecticut RIA selling into Texas, with a HIPAA-relevant medical-billing back office and a CMMC Level 2 government subcontract, is sitting in five jurisdictions and four sector regimes at once. Second, the obligations are operationally similar. Every regime, with minor framing differences, asks for the same artifacts: an inventory of AI use, a risk classification, a documented control set, a vendor pipeline, and an audit log. That convergence is why the NIST AI RMF is the right place to start.
Why this hits SMBs harder than enterprises
An enterprise with a privacy office, an in-house AI committee, a full-time GRC team, and a panel of outside counsel can absorb a new state law every quarter. A 50-to-300-person regulated SMB cannot. The regulatory load that hyperscalers and Fortune 500s budget millions for lands on the same one to three people who are also running endpoints, email security, and the help desk.
The mismatch is not theoretical. The same SMB that spent 2024 and 2025 catching up on basic identity hygiene (MFA on every account, conditional access, baseline DLP) is now expected, in 2026, to produce an AI inventory, a NIST AI RMF self-assessment, vendor attestations from every AI subprocessor, training-data documentation under California AB 2013, and a bias-audit posting under NYC Local Law 144 if they touch any candidate in New York City. The gap between what is required and what an in-house team can produce is widening.
The regulators know this, which is why state AGs and the FTC are using existing consumer-protection authority rather than waiting for AI-specific statutes. The FTC's Rite Aid order is the cleanest example: no AI-specific statute was needed. Section 5 of the FTC Act covered it, and the remedy included algorithmic disgorgement. Insurers know it too, which is why the loudest near-term cost shows up at renewal.
Colorado AI Act: what the canary tells us
Colorado SB 24-205 was the country's first comprehensive AI statute, signed May 17, 2024 by Governor Polis. Its substantive obligations were originally set to take effect on February 1, 2026, were postponed in August 2025 to June 30, 2026 by a special-session bill, and on April 27, 2026 a federal district court approved a joint motion to stay enforcement and litigation deadlines while the legislature reconsiders the scope.[2] The pause is real. The text is also still on the books, and every other state currently drafting AI legislation is treating it as a reference.
Reading the statute is therefore the cheapest forecast of what Texas, Connecticut, California, and the dozen other state bills on dockets right now will actually require.
What is a "high-risk artificial intelligence system"?
Colorado defines high-risk as any AI system that, when deployed, makes or is a substantial factor in making a "consequential decision" about a consumer. Consequential decision means a decision that has a material legal or similarly significant effect on the provision or denial, cost, or terms of education enrollment, employment, financial or lending services, healthcare services, housing, insurance, legal services, or essential government services.
A regulated SMB will encounter the high-risk threshold in workflows it already runs: AI-assisted resume screening, AI-driven loan or credit decisioning, AI features inside an EHR or practice-management platform, AI inside a tenant-screening or insurance-quote tool. The bar is whether AI is "a substantial factor" in the decision, not whether it is the only factor. Vendors saying "we just suggest, the human decides" do not necessarily clear the threshold.
Developer vs deployer
A developer designs, codes, or substantially modifies a high-risk AI system. A deployer is the entity actually using it to make a consequential decision. The SMB is almost always the deployer. Developer obligations live mostly with the vendor; deployer obligations live with you: implementing a risk management program, conducting an annual impact assessment per high-risk system, providing consumer notice of the use of AI in a consequential decision, providing an appeal-and-correction mechanism, and reporting algorithmic discrimination to the AG within 90 days of discovery.[2b]
Safe harbor and penalties
The statute provides an affirmative defense for entities that follow a recognized risk management framework "such as the latest version of the National Institute of Standards and Technology Artificial Intelligence Risk Management Framework." Mapping your controls to NIST AI RMF is the explicit safe-harbor path. Penalties are up to $20,000 per violation under the Colorado Consumer Protection Act, $50,000 per violation involving consumers age 60 or older, with no private right of action.
Even in its delayed-and-paused state, Colorado already gave the country the template that other state laws are now copying: developer/deployer split, NIST RMF safe harbor, impact assessment per high-risk system, AG-only enforcement, no private right of action, consumer-protection-act penalty stack. Plan for it. Texas and Connecticut already are.
NIST AI RMF as the spine
The NIST AI Risk Management Framework 1.0 was published January 26, 2023, and the Generative AI Profile (NIST-AI-600-1) on July 26, 2024.[1] The framework is voluntary, but it is now the closest thing the United States has to a national AI standard. Colorado's safe-harbor language names it. Texas TRAIGA mirrors its risk-tiering vocabulary. NYDFS guidance points at it. OMB M-25-21 imports its function set into federal AI use, and OMB M-25-22 imports it into AI procurement.
The framework has four core functions, each implementable as a small set of SMB controls.
Govern
The cross-cutting function. For an SMB this is: a one-page AI policy, a named owner (often the CISO or head of IT), an AI inventory that lists every AI use case in the business, a vendor risk register that maps each AI feature to its provider, a training plan, and an incident-response playbook with AI-specific triggers (model exfiltration, prompt-injection breach, hallucination causing customer harm, vendor outage). The artifact is short. The discipline of producing it once and updating it quarterly is what counts.
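At minimum-viable depth, the inventory is just a structured record plus a staleness check. The Python sketch below is illustrative only; the field names and the 90-day review cadence are assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class AIInventoryEntry:
    """One row per AI use case, per the Govern function. Field names are illustrative."""
    use_case: str
    owner: str
    vendor: str
    data_classification: str             # e.g. "internal", "PII", "PHI", "CUI"
    decision_impact: str                 # e.g. "advisory", "consequential"
    regulatory_regimes: list = field(default_factory=list)
    status: str = "production"           # production / pilot / shadow
    last_reviewed: date = field(default_factory=date.today)

def overdue_for_review(entries, as_of, cadence_days=90):
    """Flag entries whose quarterly review is past due."""
    return [e for e in entries
            if (as_of - e.last_reviewed) > timedelta(days=cadence_days)]
```

The point of the staleness check is the discipline the text describes: the artifact only counts if the quarterly refresh actually happens.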
Map
For each AI use case in the inventory, describe its context: who it affects, what data it touches, what decision it influences, what the failure modes look like, who would be harmed by failure, what regulatory regime it sits inside, and what the deployment surface is (M365 tenant, customer-facing chatbot, in-EHR copilot). This is the input that drives risk classification.
Measure
For high-risk use cases, measure performance against the failure modes identified in Map. Bias testing, hallucination rate, security testing of the prompt boundary, data-leakage testing, model drift over time. For SMBs, "measure" does not mean "build a benchmarking pipeline." It means: pick three test cases per high-risk use, run them quarterly, write down the result.
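The "three test cases per high-risk use, run them quarterly, write down the result" discipline can be sketched in a few lines. This harness is a sketch, not a prescribed format; `call_model`, the test-case shape, and the result fields are all assumptions:

```python
from datetime import datetime, timezone

def run_quarterly_tests(use_case, test_cases, call_model):
    """Run the fixed test set for one high-risk use case and record results.

    test_cases: list of (name, prompt, check) where check(output) -> bool.
    call_model: whatever function invokes the deployed model.
    """
    results = []
    for name, prompt, check in test_cases:
        output = call_model(prompt)
        results.append({
            "use_case": use_case,
            "test": name,
            "passed": bool(check(output)),
            "ran_at": datetime.now(timezone.utc).isoformat(),
        })
    return results
```

Persisting the returned list each quarter is the "write down the result" step; the record doubles as Measure evidence in the audit pack.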
Manage
Put the controls in place that the Map and Measure work indicated were needed: DLP at the prompt boundary, RAG retrieval with identity scoping, human-in-the-loop checkpoints on consequential decisions, model-version pinning, retention policies, vendor BAAs and DPAs, and the consumer notices the relevant statute requires.
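As one illustration of a prompt-boundary control, a crude redaction pass can sit in front of every model call. The patterns below are deliberately simplistic examples; production DLP relies on validated detectors and vendor tooling, not three regexes:

```python
import re

# Illustrative patterns only; a real DLP layer uses validated detectors.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact_prompt(prompt):
    """Redact obvious PII before the prompt leaves the boundary.

    Returns the redacted text plus the list of pattern names that fired,
    which is exactly what the audit log wants to record.
    """
    hits = []
    for name, pattern in PATTERNS.items():
        if pattern.search(prompt):
            hits.append(name)
            prompt = pattern.sub(f"[REDACTED-{name.upper()}]", prompt)
    return prompt, hits
```

The design choice worth copying is that the function returns what it redacted, not just the clean text: the hit list feeds the audit trail the regimes above expect.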
That is the entire spine. Every state law and sector regulator we mapped above is asking for some subset of those four. The reason to anchor your program to NIST AI RMF is not because NIST is the regulator. It is because every other regulator has chosen to point at NIST instead of building their own framework from scratch.
Sector layers
Healthcare (HIPAA)
HHS Office for Civil Rights proposed the first major update to the HIPAA Security Rule in 20 years on January 6, 2025, with a final rule expected in summer 2026. The proposal eliminates the addressable/required distinction and explicitly brings AI tools that touch PHI into the risk analysis. The implication for a covered entity or business associate is that "we use ChatGPT for charting" is a risk-analysis failure unless it is documented, scoped, and contractually controlled.
The BAA expectations follow. A covered entity using AI on PHI must have a Business Associate Agreement with the AI vendor that explicitly prohibits using PHI to train or refine the model unless authorized in writing, requires audit-trail retention, and adds AI-aware breach notification. Most consumer LLM products do not offer a HIPAA BAA. Vendors that do (Microsoft Azure OpenAI, AWS Bedrock under HIPAA-eligible services, Google Vertex AI under HIPAA-eligible services, Together AI for specific configurations) require explicit account-level configuration before the BAA attaches.
What your auditor will ask
- Show me your AI inventory. Which entries touch PHI?
- For each AI vendor that touches PHI, show me the BAA and the data-use clause.
- Show me the risk analysis section that covers AI tooling.
- Show me the breach-notification playbook entry that covers AI-driven incidents.
Financial services (SEC + FINRA + GLBA + NYDFS)
The SEC has been the most aggressive AI enforcer in financial services. The first AI-washing actions, against Delphia and Global Predictions in March 2024, settled for a combined $400,000 in civil penalties.[7] The 2025 SEC Marketing Rule risk alert (issued December 2025) flagged AI claims as an exam priority and called out testimonial, endorsement, and substantiation gaps tied to AI marketing.
FINRA Notice 24-09 (June 27, 2024) is short and direct. Existing rules apply. Rule 3110 (Supervision) requires firms using GenAI in supervisory functions to address technology governance, model risk, data privacy, and accuracy. Rule 2210 (Communications with the Public) applies the same content standards to AI-generated content. Rule 4511 (Books and Records) and SEC Rule 17a-4 retention obligations apply to AI prompts and outputs that constitute business records.[6] The 2026 FINRA Annual Regulatory Oversight Report (published December 9, 2025) substantially expanded the GenAI section and added explicit expectations for autonomous AI agents.
For NYDFS-licensed entities, the October 16, 2024 industry letter named four AI-specific cybersecurity risks (AI-enabled social engineering, AI-enhanced attacks, AI-driven NPI exposure, AI supply chain) and pointed every Covered Entity back to the existing 23 NYCRR Part 500 controls. The October 21, 2025 follow-up specifically recommends contract language for third-party AI service providers covering training limitations, location and transfer restrictions, subcontractor disclosure, and acceptable-use clauses.[9]
GLBA's Safeguards Rule (16 CFR Part 314) treats AI vendors as service providers, which pulls them under its standard third-party language: written contracts, periodic assessment, access controls, encryption.
What your examiner will ask
- Show me your AI marketing claims and the evidence supporting each one.
- Show me how your supervisory system addresses AI-generated communications and AI-driven trade decisions.
- Show me your AI vendor contracts and the third-party assessment for each.
- Show me how AI prompts and outputs that are business records are retained under 17a-4.
Retail and consumer (PCI DSS, FTC)
PCI DSS v4.0.1 brought 51 future-dated requirements into mandatory force on March 31, 2025. The PCI Security Standards Council subsequently issued AI assessment guidance for QSAs covering AI involvement in artifact review, work paper creation, and remote interviews. For a merchant, the operational impact is that any AI tool deployed in or adjacent to the cardholder data environment is in PCI scope, including AI-assisted customer-service tools that handle payment information.
The FTC has been the most assertive federal regulator on consumer-protection grounds. The Rite Aid consent order in December 2023 banned the company from facial-recognition surveillance for five years and required model deletion ("algorithmic disgorgement") plus deletion of derived models held by third parties.[8] Operation AI Comply followed in September 2024 with five enforcement actions against companies making deceptive AI claims, including DoNotPay's "AI lawyer," Rytr's review-generator, and three e-commerce "AI business opportunity" schemes. Algorithmic disgorgement is the FTC's most consequential remedy: it can erase the asset, not just fine the use of it.
Government contracting (NIST 800-171 + CMMC 2.0)
The DoD CMMC 2.0 Final Rule was finalized September 10, 2025 and became effective November 10, 2025, with phased implementation through November 10, 2028. Phase 1 introduces Level 1 and Level 2 self-assessment requirements for any contractor handling Federal Contract Information or Controlled Unclassified Information. NIST SP 800-171 Revision 3 was finalized by NIST but is not yet authorized by DoD for CMMC scoring; Level 2 self-assessments and C3PAO audits remain on Revision 2. AI use inside a CUI environment must be scoped against the same control set as any other technology touching CUI.
What your CMMC assessor will ask
- Show me the AI tools deployed in or with access to your CUI boundary.
- Show me the system security plan entry that covers each AI tool.
- Show me how prompt and output data is treated under your media protection and incident response controls.
Cyber insurance is where this hurts first
Statutes are slow. Renewal cycles are not. Cyber-insurance carriers were the first commercial actors to make AI use a documented requirement, because they are the ones underwriting the next loss. Three changes have hit the market in 2025 and 2026.
First, AI-specific renewal questionnaires. Major cyber and EPLI carriers (Beazley among them) added explicit AI questions to renewal applications in early 2026: do you use AI in employment decisions, do you have a written AI use policy, do you conduct bias testing, do you use AI in customer-facing contexts, do you have BAA or DPA coverage with each AI vendor, do you have human-in-the-loop checkpoints on consequential decisions. Insurers verify some of these answers against external data (DMARC records, Shodan exposure, dark-web credential monitoring); AI questions are increasingly verified against vendor-side attestations.[10]
Second, AI-specific exclusions. Starting in early 2026, multiple carriers began applying generative-AI exclusions to standard CGL and some cyber forms, covering defamation, privacy violations, copyright infringement, and bodily injury or property damage arising from AI outputs. "Unauthorized AI tool" exclusions remove coverage for losses arising from employee use of AI tools not on the organization's approved list. The era of "silent AI" coverage, the period when AI risk slipped into a policy because nothing excluded it, is closing.
Third, non-disclosure consequences. Carriers treat AI use materially the same way they treat ransomware exposure: a misstatement on the application can void coverage at claim time. The single most common SMB mistake in 2026 renewals is answering "no" to the AI question because nobody at the company has run the inventory. The AI is in the M365 Copilot license, the help desk's ChatGPT use, the marketing team's HeyGen subscription, the vendor's "AI mode" toggle. The misanswer is not strategic; it is administrative. The cost is real.
Run the AI inventory before your next cyber-policy or EPLI renewal questionnaire. Treat the questionnaire as the discovery prompt that makes you do the work that every regulator already expects.
The audit pack: what regulators, examiners, and insurers actually want
Across every regime mapped above, the underlying request converges on the same set of artifacts. The vocabulary differs by audience (the SEC says "policies and procedures," the cyber underwriter says "controls evidence," the Colorado AG says "risk management program," the HHS auditor says "risk analysis"), but the documents are the same.
- AI inventory. One row per AI use case in the business. Owner, vendor, data classification, decision impact, regulatory regime, status (production / pilot / shadow). Living document, refreshed quarterly.
- Vendor risk register for AI subprocessors. Per AI vendor: contract type (BAA / DPA / standard MSA), training-data clause, data-residency clause, subprocessor disclosure, retention and deletion clause, security-attestation evidence (SOC 2 Type II report, ISO 27001 certificate, HIPAA attestation).
- NIST AI RMF self-assessment. The four functions, mapped to your controls, with gaps documented.
- Impact assessment per high-risk use case. Following the NIST GAI Profile and the Colorado / Texas / Connecticut content requirements: purpose, affected populations, data inputs, failure modes, mitigations, residual risk, sign-off.
- Training-data provenance and consumer notice templates. AB 2013-style training-data summary if you build, plus consumer-facing notice templates for high-risk AI use (Colorado / Texas / Connecticut).
- Data flow diagrams. One per AI use case. Source, transit, processing location, retention, egress controls.
- Retention and deletion policies. Specifically: retention of prompts, outputs, model versions; alignment with SEC 17a-4 (financial services) and HIPAA (healthcare); user-driven deletion for CCPA, GDPR.
- Incident response plan with AI-specific triggers. Prompt-injection compromise, model-output harm, training-data exfiltration, vendor outage, agentic-AI scope creep.
- Third-party AI service provider contracts. Aligned with NYDFS October 2025 guidance for FS, with HHS BAA expectations for healthcare, with CMMC flow-down for govcon.
- Audit log retention. Per use case: prompt, response, model version, identity, timestamp, retention period. Aligned to the longest applicable statute.
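The audit-log artifact in the last bullet reduces to a single record shape. A sketch, with hypothetical retention classes standing in for the per-statute mapping (the year counts are placeholders to align with counsel, not legal advice):

```python
import hashlib
from datetime import datetime, timezone

# Hypothetical retention classes; align each to the longest applicable statute.
RETENTION_YEARS = {"sec-17a-4": 6, "hipaa": 6, "default": 3}

def audit_record(prompt, response, model_version, identity, retention_class="default"):
    """One log entry per AI interaction: who, what, which model, when, kept how long."""
    return {
        # Hashes shown here; store full text separately if the regime requires it.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
        "model_version": model_version,
        "identity": identity,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "retention_years": RETENTION_YEARS.get(retention_class, RETENTION_YEARS["default"]),
    }
```

Whether to store hashes or full text is itself a retention decision: SEC books-and-records expectations generally require the content, while hashes may suffice where only integrity evidence is needed.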
An SMB that produces this stack, even at minimum-viable depth, is in better shape than 90% of the market. A regulator or insurer asking for "your AI program" gets handed a single binder (or a single shared folder) and the conversation gets shorter.
What an SMB can ship in 30, 60, and 90 days
The audit pack above looks long. In practice it is two to three weeks of focused work for an SMB that is willing to start with the lightest acceptable artifact and iterate. The following is a concrete 90-day plan that produces every document in the audit pack.
Days 0–30
Inventory + risk classification
- Run AI discovery across the M365 / Workspace tenant, ticketing, helpdesk, and finance stack. Include shadow AI (employee personal accounts).
- Build the AI inventory. One row per use case. Populate owner, vendor, data class, decision impact, status.
- Classify each row: low / medium / high-risk per NIST and per Colorado/Texas-style "consequential decision" test.
- For every high-risk row, freeze the use case until a basic impact assessment is in place.
- Build the AI vendor risk register. Pull every AI vendor's SOC 2, BAA, DPA into one folder.
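The risk-classification step above can start as a deliberately blunt triage function. A sketch, under the assumption that a Colorado/Texas-style domain list plus a "substantial factor" flag is enough for a first pass (real classification needs counsel review):

```python
# Domains a Colorado-style "consequential decision" test names.
CONSEQUENTIAL_DOMAINS = {
    "education", "employment", "lending", "healthcare",
    "housing", "insurance", "legal", "government-services",
}

def classify_risk(decision_domain, substantial_factor, data_classification="internal"):
    """First-pass triage: high if AI is a substantial factor in a consequential
    decision; medium if it touches sensitive data without driving one; low otherwise."""
    if decision_domain in CONSEQUENTIAL_DOMAINS and substantial_factor:
        return "high"
    if data_classification in {"PHI", "PII", "CUI", "NPI"}:
        return "medium"
    return "low"
```

Note the function errs toward "high" whenever the substantial-factor flag is set, matching the statute's point that "we just suggest, the human decides" does not automatically clear the threshold.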
Days 30–60
NIST AI RMF baseline + sector overlay
- Stand up the four NIST functions (Govern / Map / Measure / Manage) at minimum-viable depth.
- Write the one-page AI policy. Name the owner. Schedule the quarterly review.
- Run impact assessments on high-risk use cases. Document residual risk and sign-off.
- Apply the sector overlay: HIPAA Security Rule mapping, FINRA 24-09 supervision update, NYDFS 23 NYCRR 500 control alignment, CMMC SSP entry as applicable.
- Draft consumer notices and disclosure language for the relevant state regimes.
Days 60–90
Audit pack + insurance disclosure + first-tier controls
- Deliver the audit pack: 10 documents, one folder, one quarterly review cadence.
- Update the cyber and EPLI renewal questionnaires with accurate, documented answers.
- Implement first-tier controls: DLP at prompt boundary, identity scoping on RAG retrieval, model-version pinning, prompt and output retention, AI-aware incident-response playbook entries.
- Brief the board / executive team. The compliance posture becomes part of the standard quarterly business review.
- Schedule the next pass for the next state law that takes effect (Connecticut SB 5 in October 2026, HHS HIPAA Security Rule final, etc.).
How AiTLLM ships you the audit pack on day one
Most of the work above is not the controls themselves. It is the documentation that proves the controls exist. AiTLLM, Intelligent iT's private LLM platform, ships the audit pack pre-built so that the SMB's day-one posture matches what a regulator, examiner, or underwriter expects on day ninety.
AiTLLM Sovereign and Private tiers ship with: a populated NIST AI RMF self-assessment template scoped to the deployment; the AI inventory pre-populated with every gateway-routed use case; the vendor risk register populated with the partner-infrastructure attestations (SOC 2 Type II, HIPAA, ISO 27001) at the underlying provider; gateway-side audit logging (every prompt, every output, every model version, every identity, every retention class) wired into the AiT Trust Portal; consumer notice templates for Colorado, Texas, Connecticut, California, and the EU AI Act Article 50 transparency obligations; and a quarterly impact-assessment review with an Intelligent iT engineer.
AiTLLM Connect tier includes a quarterly checkpoint review covering the same artifacts at lower depth, with the customer keeping the controlled vendor relationship (Anthropic, OpenAI, Google) directly. Connect customers get the inventory template, vendor-register starter, and consumer-notice templates without the gateway-side audit logging.
The point is not the product. The point is that the audit pack is an artifact of running the system, not a separate compliance project. If you have to build it from scratch every renewal cycle, every state law, and every examiner visit, you will lose. If it falls out of your AI substrate by default, the regulatory load stops growing faster than your team.
Conclusion
The US AI compliance picture in May 2026 is loud, fragmented, and binding. The fragmentation is not a reason to wait; it is the reason to start. Every regime points at the same handful of controls. Every regulator, examiner, and underwriter wants the same handful of documents. The first SMB in a sector to ship the audit pack is the one that gets a clean renewal, a short examination, and a contract with a customer who needs evidence on day one. The last one is the one explaining to its insurer at claim time why the questionnaire was answered "no."
The 90-day plan above is achievable for any regulated SMB with a single named owner and an MSP partner. We built AiTLLM so that owner does not start from a blank page. Book a 30-minute call or read the AiTLLM page for the deployment options.