ShieldAI
March 8, 2026

The Treasury's New AI Risk Management Framework: What Financial Firms Need to Do Now

In February 2026, the U.S. Department of the Treasury released two major deliverables: an AI Lexicon and the Financial Services AI Risk Management Framework (FS AI RMF). These aren't just policy papers — they're practical tools that regulators will use to evaluate how your firm manages AI.

If you're a compliance officer, CCO, or managing partner at a financial services firm, this is the most important AI governance development of 2026.

What the FS AI RMF Actually Says

The framework was developed through the Financial and Banking Information Infrastructure Committee (FBIIC) and the Financial Services Sector Coordinating Council (FSSCC). It covers:

  1. Common AI terminology — the Lexicon establishes shared definitions so regulators, vendors, and compliance teams speak the same language
  2. Risk categorization — how to classify AI tools by risk level (not all AI is equal)
  3. Lifecycle governance — managing AI risk from procurement through deployment to decommissioning
  4. Accountability structures — who owns what when AI makes or supports decisions
  5. Audit and documentation — what records you need to maintain for examinations

Why This Matters Now

Before the FS AI RMF, financial firms were cobbling together AI governance from NIST AI RMF (too broad), ISO 42001 (too abstract), and their own interpretations of existing SEC/FINRA guidance.

Now there's a financial-services-specific standard. That means:

  • SEC examiners have a reference point when they ask about your AI controls
  • FINRA can point to it during annual examinations
  • State regulators will increasingly adopt it as baseline expectations
  • Your E&O insurance carrier may start asking about it

The firms that adopt it early will have a compliance advantage. The firms that ignore it will be scrambling during their next exam.

What You Need to Do (Practical Steps)

1. Inventory Your AI Tools

You can't govern what you can't see. Start with a complete list of every AI tool employees use — approved or not. Include ChatGPT, Copilot, Claude, Gemini, plus AI features embedded in existing platforms.
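Even a spreadsheet works, but a structured record per tool makes the later steps (tiering, review dates, audit export) much easier. The fields below are an illustrative sketch — the FS AI RMF does not prescribe an inventory schema:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AIToolRecord:
    """One row in an AI tool inventory (illustrative fields, not an FS AI RMF schema)."""
    name: str                          # e.g. "ChatGPT", "Copilot"
    vendor: str
    approved: bool                     # formally approved for firm use?
    handles_client_data: bool          # does it ever process client information?
    embedded_in: Optional[str] = None  # host platform, if an embedded AI feature
    last_reviewed: Optional[date] = None

# A starter inventory covering both standalone tools and embedded AI features:
inventory = [
    AIToolRecord("ChatGPT", "OpenAI", approved=False, handles_client_data=False),
    AIToolRecord("Copilot", "Microsoft", approved=True, handles_client_data=True,
                 embedded_in="Microsoft 365", last_reviewed=date(2026, 2, 15)),
]

# Unapproved tools are the shadow-AI surface to triage first.
shadow_ai = [t.name for t in inventory if not t.approved]
```

The "approved or not" point from the step above is what `shadow_ai` captures: the list is only useful if it includes the tools nobody signed off on.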

2. Classify by Risk Tier

  • Tier 1 (Low): Grammar checkers, scheduling assistants — minimal client data exposure
  • Tier 2 (Medium): Research tools, document summarizers — may process client information
  • Tier 3 (High): Portfolio analytics, client communication tools, anything touching PII or investment recommendations
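The tiering above reduces to a simple decision rule. The criteria here mirror the three bullets, but the exact thresholds are assumptions for illustration — your firm's classification should come from its own risk assessment:

```python
def risk_tier(handles_pii: bool, touches_recommendations: bool,
              processes_client_info: bool) -> int:
    """Map a tool's exposure to a risk tier (1=low, 2=medium, 3=high).
    Illustrative rule only; not prescribed by the FS AI RMF."""
    if handles_pii or touches_recommendations:
        return 3  # Tier 3: PII or investment recommendations
    if processes_client_info:
        return 2  # Tier 2: may process client information
    return 1      # Tier 1: minimal client data exposure

# A grammar checker with no client data lands in Tier 1:
assert risk_tier(False, False, False) == 1
# A document summarizer that sees client files is Tier 2:
assert risk_tier(False, False, True) == 2
# Portfolio analytics touching PII is Tier 3:
assert risk_tier(True, False, True) == 3
```

Note the rule is deliberately conservative: anything touching PII or recommendations goes straight to Tier 3 regardless of other attributes.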

3. Document Your Governance Process

The FS AI RMF expects firms to have a documented process for approval, monitoring, incident response, and training.
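A "documented process" can be as concrete as an explicit state machine for tool approval: every state and allowed transition is written down, so nothing moves to "approved" without passing review. The state names below are a sketch, not terminology from the framework:

```python
# Allowed transitions in a minimal tool-approval workflow
# (illustrative states, not FS AI RMF terminology).
TRANSITIONS = {
    "requested":    {"under_review"},
    "under_review": {"approved", "rejected"},
    "approved":     {"under_review"},  # periodic re-review loops back
    "rejected":     {"under_review"},  # a rejected tool can be resubmitted
}

def advance(state: str, new_state: str) -> str:
    """Move a tool to a new workflow state, rejecting undocumented shortcuts."""
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"invalid transition: {state} -> {new_state}")
    return new_state

state = advance("requested", "under_review")
state = advance(state, "approved")
```

The point of encoding it this way is that skipping review (`requested` straight to `approved`) raises an error instead of silently succeeding.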

4. Assign Ownership

Someone needs to own this. At smaller firms, this typically falls to the CCO, a designated AI governance officer, or an existing technology risk committee.

5. Build Your Audit Trail

When examiners ask — and they will — you need to show what tools are approved and why, what was rejected and why, when each tool was last reviewed, and what controls are in place.
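That record-keeping maps naturally onto an append-only log: one entry per decision, with a reason and a timestamp, never edited after the fact. The field names and example reasons below are assumptions for illustration:

```python
import json
from datetime import datetime, timezone

def log_decision(log: list, tool: str, decision: str, reason: str) -> None:
    """Append one audit entry; prior entries are never mutated or deleted."""
    log.append({
        "tool": tool,
        "decision": decision,  # e.g. "approved", "rejected", "re-reviewed"
        "reason": reason,
        "at": datetime.now(timezone.utc).isoformat(),
    })

audit_log = []
log_decision(audit_log, "Copilot", "approved",
             "Enterprise tier with data-retention controls; Tier 2 review passed")
log_decision(audit_log, "ChatGPT (free)", "rejected",
             "No data-processing agreement; client data exposure risk")

# Examiner-ready export: the full decision history as JSON.
report = json.dumps(audit_log, indent=2)
```

Because entries carry both the decision and the "why," the same log answers all four examiner questions above: what was approved, what was rejected, when, and on what grounds.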

The Gap the Framework Doesn't Fill

The FS AI RMF tells you what to do. It doesn't give you the how. It doesn't provide a tool inventory template, an automated approval workflow, continuous monitoring, or audit-ready report generation.

That's exactly what ShieldAI was built for. We translate the framework into a working system — import your AI tools, run them through risk-tiered evaluations, generate compliance documentation, and maintain an audit trail that satisfies examiners.

See how ShieldAI implements the FS AI RMF →