
NIST AI RMF Is Your Affirmative Defense Against TRAIGA’s $200K Penalties.

If your firm uses or deploys AI systems and operates in Texas — or has customers in Texas — you have an enforceable compliance obligation right now. Most legal and IT teams I talk to don’t know it.

The Texas Responsible Artificial Intelligence Governance Act (TRAIGA, codified as Texas HB 149) became enforceable on 1 January 2026. The law applies to any organization developing, deploying, or substantially modifying high-risk AI systems with a connection to Texas. Penalties run up to $200,000 per violation, and "violation" is interpreted on a per-incident basis the way HIPAA penalties are. The Texas Attorney General’s office has already opened investigations.

The good news: TRAIGA names a specific affirmative defense. If your organization can demonstrate substantial compliance with the NIST AI Risk Management Framework (NIST AI RMF 1.0 plus the Generative AI Profile), the law presumes good faith and significantly limits penalty exposure. The framework is voluntary, free, and well-documented — but most firms don’t have the documentation to demonstrate substantial compliance, which means the affirmative defense is unavailable when they need it.

What "substantial compliance" actually means

NIST AI RMF defines four core functions: GOVERN, MAP, MEASURE, MANAGE. Each has subcategories with specific outcomes. Substantial compliance does not require perfection on every subcategory. It requires demonstrable, dated, retained evidence that your organization:

  • Has a written AI governance policy that names accountable individuals and defines acceptable use, prohibited use, and review cadence (GOVERN).
  • Maintains an inventory of AI systems in use, classified by risk tier, with documented purpose and intended population (MAP).
  • Measures the performance, fairness, and security of those systems against defined metrics on a defined cadence (MEASURE).
  • Has a process for responding to identified risks — remediation, retirement, escalation — with retained records (MANAGE).
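The MAP evidence in particular reduces to one structured record per system. A minimal sketch in Python of what such a record might contain, with hypothetical field names (NIST AI RMF describes outcomes, not a schema, so the exact fields are an assumption):

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

# Hypothetical inventory record; the RMF prescribes outcomes, not this schema.
@dataclass
class AISystemRecord:
    name: str
    vendor: str
    risk_tier: str            # e.g. "high" | "limited" | "minimal"
    purpose: str              # documented intended use (MAP)
    intended_population: str  # who the system's outputs affect (MAP)
    owner: str                # accountable individual (GOVERN)
    review_cadence_days: int  # how often metrics are re-checked (MEASURE)
    last_reviewed: date

    def to_json(self) -> str:
        d = asdict(self)
        d["last_reviewed"] = self.last_reviewed.isoformat()
        return json.dumps(d, indent=2)

record = AISystemRecord(
    name="Copilot",
    vendor="Microsoft",
    risk_tier="limited",
    purpose="Code completion for internal engineering",
    intended_population="Employees only; no customer-facing output",
    owner="Jane Roe, VP Engineering",  # placeholder name
    review_cadence_days=90,
    last_reviewed=date(2026, 1, 15),
)
print(record.to_json())
```

Even a spreadsheet with these columns, dated and retained, is dramatically better evidence than nothing.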

Of those four, MAP is where most organizations are weakest. They use AI tools (ChatGPT, Copilot, internal RAG systems, third-party AI features in SaaS), but no complete inventory exists in any document, or even in any one person's head. When the AG’s office asks for the AI inventory, the answer is "we’re working on it," which is not an affirmative defense.

Treasury maps NIST onto SOC 2

In February 2026, the US Treasury Department published a framework that maps NIST AI RMF outcomes onto the SOC 2 Trust Services Criteria. The framework identified roughly 230 control objectives that overlap with existing SOC 2 controls and another 80 that are AI-specific additions. The implication for any firm with an existing SOC 2 attestation: you’re already roughly 75% of the way to NIST AI RMF substantial compliance; what remains is formalizing the AI-specific control family and beginning to collect evidence.

For firms without SOC 2: NIST AI RMF substantial compliance is achievable independently and the work creates a paved path toward eventual SOC 2 if you choose to pursue it.

What we’re shipping at Intelligent IT

AiT Audit and AiT Trust Portal handle the documentation and evidence collection that NIST AI RMF substantial compliance demands. Specifically:

  • AI system inventory. Every AI system the client uses or deploys is inventoried with risk tier, intended population, owner, and review cadence.
  • Continuous evidence. Performance, fairness, and security metrics tracked in a Supabase database with row-level security per client. The evidence is dated, signed, and retained for the policy period — not assembled in a panic the week before an audit.
  • Auditor-ready export. Trust Portal exposes the evidence to the client’s designated auditor or AG investigator with one click, in the format the NIST AI RMF Generative AI Profile recommends.
  • SOC 2 cross-mapping. Where the client has an existing SOC 2 attestation, we use the Treasury Feb 2026 framework to identify which existing controls overlap and where the AI-specific gaps are.
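"Dated, signed, and retained" ultimately means append-only records that an auditor can verify were not rewritten after the fact. A toy illustration of the underlying idea (not the AiT implementation, which is not published here): each metric observation is timestamped and hash-chained to its predecessor, so any retroactive edit breaks the chain.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_evidence(log: list, system: str, metric: str, value: float) -> dict:
    """Append a dated metric observation, hash-chained to the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "system": system,
        "metric": metric,
        "value": value,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log: list) -> bool:
    """Confirm no entry was altered, dropped, or inserted after the fact."""
    prev = "0" * 64
    for e in log:
        if e["prev_hash"] != prev:
            return False
        body = {k: v for k, v in e.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != e["hash"]:
            return False
        prev = e["hash"]
    return True

log = []
append_evidence(log, "internal-rag", "groundedness_score", 0.93)
append_evidence(log, "internal-rag", "pii_leak_rate", 0.0)
print(verify_chain(log))  # -> True
```

The metric names above are hypothetical; the point is the property, not the schema. A database with row-level security gets you access control, and tamper-evidence like this gets you credibility with an investigator.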

The retainer is bundled into our managed-services pricing rather than charged separately, which means clients don’t face a "buy compliance tooling" budget conversation on top of the existing IT spend.

What you should be doing this quarter

  1. Inventory every AI system in use across your organization, including the AI features inside SaaS tools you didn’t think of as "AI systems."
  2. Classify each by risk tier per NIST AI RMF. High-risk uses (anything affecting employment, health, education, financial services, or housing decisions) trigger more rigorous obligations.
  3. Write the AI governance policy if you don’t have one. Name the accountable executives.
  4. Define the metrics you will track for performance, fairness, and security. Begin tracking them now; the affirmative defense requires retained records, not freshly generated ones.
  5. If you have SOC 2, map your existing controls onto the Treasury framework. If you don’t, NIST AI RMF substantial compliance is achievable on its own.
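Step 2's classification can start as a simple rule: any system whose outputs influence a decision in one of the consequential domains listed above is high-risk. A first-pass sketch (the tier names and the set-based rule are assumptions; refine against the actual statutory definitions):

```python
# Hypothetical first-pass classifier using the consequential domains named above.
CONSEQUENTIAL_DOMAINS = {
    "employment", "health", "education", "financial_services", "housing",
}

def risk_tier(affects_decisions_in: set) -> str:
    """Classify an AI system by the decision domains its outputs influence."""
    if affects_decisions_in & CONSEQUENTIAL_DOMAINS:
        return "high"      # triggers the more rigorous obligations
    if affects_decisions_in:
        return "limited"   # influences decisions, but outside the named domains
    return "minimal"       # no decision influence (e.g. internal drafting aid)

print(risk_tier({"employment"}))  # -> high
print(risk_tier({"marketing"}))   # -> limited
print(risk_tier(set()))           # -> minimal
```

A crude rule applied to a complete inventory beats a sophisticated rubric applied to nothing; you can tighten the classification on the next review cycle.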

NIST AI RMF substantial compliance, evidenced continuously

AiT Audit + Trust Portal handle the inventory, metrics, and audit trail your TRAIGA affirmative defense depends on. Tied to your existing managed-services posture, not a separate compliance tooling buy. Read WP-06 for the operational detail.

Read WP-06: Continuous Audit as a Service →

The bottom line

TRAIGA is the first wave. California, New York, and Illinois have similar bills in progress. The federal regulatory landscape is consolidating around NIST AI RMF as the de facto standard, and the Treasury framework just made the mapping to SOC 2 explicit. Firms that operationalize substantial compliance now have an affirmative defense that costs cents on the dollar of the penalties they would otherwise face. Firms that don’t are building a documentation gap that the first AG investigation will widen. The right time to start was last year. The next-best time is this quarter.