ShieldAI
March 6, 2026

AI Risk Assessment in Banking: A Practical Guide

Banks face unique challenges in AI adoption. Unlike firms in most other industries, banks must navigate complex regulatory frameworks while holding AI models to the same standard of rigor as traditional credit risk models. SR 11-7, the Federal Reserve's model risk management guidance (adopted by the OCC as Bulletin 2011-12), was written for traditional models but now applies to AI systems as well, creating both clarity and complexity.

Understanding SR 11-7 for AI Systems

What Is SR 11-7?

SR 11-7 (Supervisory Guidance on Model Risk Management) establishes the framework for banks to identify, measure, monitor, and control model risk. While written before the AI boom, its principles apply directly to AI systems.

Key SR 11-7 requirements that impact AI:

  • Model inventory: All models must be cataloged and risk-rated
  • Model validation: Independent testing of model performance and controls
  • Model governance: Board and senior management oversight of model risk
  • Ongoing monitoring: Continuous assessment of model performance

OCC's View on AI Models

The OCC has indicated that AI systems are subject to SR 11-7 when they:

  • Support decisions that have material impact on bank operations
  • Process regulated data or influence customer outcomes
  • Present model risk as defined in SR 11-7

This includes:

  • Credit scoring and underwriting models
  • Fraud detection systems
  • Customer segmentation and marketing models
  • Operational risk models
  • Stress testing and capital planning models

This typically excludes:

  • General productivity tools (e.g., email writing assistants)
  • Simple automation without learning components
  • Tools processing only public data

The Three-Tier AI Risk Assessment Framework

Tier 1: Low Risk AI Tools

Characteristics:

  • Limited decision impact
  • No regulated data access
  • Established vendor with strong controls
  • Simple, interpretable outputs

Examples:

  • Employee scheduling optimization
  • Internal document summarization
  • General productivity assistants
  • Public data research tools

Assessment requirements:

  • Vendor due diligence questionnaire
  • Basic security and compliance review
  • Documentation of use case and limitations
  • Annual vendor review

SR 11-7 application: Minimal; treated as an operational tool rather than a model

Tier 2: Medium Risk AI Systems

Characteristics:

  • Moderate decision impact
  • Internal data processing
  • Well-established AI techniques
  • Human oversight of outputs

Examples:

  • Customer service chatbots (internal data)
  • Employee performance analytics
  • Internal process optimization
  • Risk reporting enhancement tools

Assessment requirements:

  • Enhanced vendor due diligence
  • SOC 2 Type II certification
  • Data processing agreement
  • Quarterly performance monitoring
  • Annual model review

SR 11-7 application: Simplified model validation with a focus on data governance and operational controls

Tier 3: High Risk AI Models

Characteristics:

  • Material impact on bank decisions
  • Customer or regulatory data processing
  • Complex or novel AI techniques
  • Direct influence on financial outcomes

Examples:

  • Credit underwriting models
  • Fraud detection systems
  • Anti-money laundering transaction monitoring
  • Stress testing models
  • Customer risk rating systems

Assessment requirements:

  • Full SR 11-7 model validation
  • Independent model validation function review
  • Model risk rating and governance
  • Ongoing monitoring and backtesting
  • Senior management reporting

SR 11-7 application: Full compliance with all SR 11-7 requirements
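
Taken together, the three tiers map cleanly onto a lookup table that a risk team can reference programmatically. A minimal sketch in Python, with tier names and requirement lists taken from the descriptions above (the encoding itself is illustrative):

```python
# Sketch: the three tiers and their minimum assessment requirements
# as a lookup table. The requirement lists mirror the tier
# descriptions above; the encoding is illustrative.
from enum import IntEnum

class RiskTier(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

ASSESSMENT_REQUIREMENTS = {
    RiskTier.LOW: [
        "Vendor due diligence questionnaire",
        "Basic security and compliance review",
        "Documentation of use case and limitations",
        "Annual vendor review",
    ],
    RiskTier.MEDIUM: [
        "Enhanced vendor due diligence",
        "SOC 2 Type II certification",
        "Data processing agreement",
        "Quarterly performance monitoring",
        "Annual model review",
    ],
    RiskTier.HIGH: [
        "Full SR 11-7 model validation",
        "Independent model validation function review",
        "Model risk rating and governance",
        "Ongoing monitoring and backtesting",
        "Senior management reporting",
    ],
}
```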

AI Risk Assessment Methodology

Step 1: Model Classification and Inventory

Business Impact Assessment:

  • Does the AI influence customer decisions or outcomes?
  • What is the financial impact of model failure?
  • Are outputs used in regulatory reporting?
  • Do decisions affect bank capital or risk profile?

Data Sensitivity Classification:

  • Public data: No additional controls required
  • Internal data: Standard information security controls
  • Customer data: Enhanced privacy and security controls
  • Regulated data: Full compliance and oversight requirements

Technical Complexity Assessment:

  • Simple models: Linear models, decision trees, interpretable algorithms
  • Moderate complexity: Ensemble methods, neural networks with explanation capabilities
  • High complexity: Deep learning, large language models, black-box systems
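
One way to make this classification repeatable is to score each dimension and let the highest score drive the proposed tier. A minimal sketch; the numeric scales and the "highest dimension wins" rule are illustrative policy choices, not SR 11-7 requirements, and a human reviewer should confirm every proposal:

```python
# Sketch: mapping the three Step 1 dimensions to a proposed risk tier.
# The three-level scales and the "highest dimension wins" rule are
# illustrative policy choices, not requirements from SR 11-7.

BUSINESS_IMPACT = {"limited": 1, "moderate": 2, "material": 3}
DATA_SENSITIVITY = {"public": 1, "internal": 2, "customer": 3, "regulated": 3}
COMPLEXITY = {"simple": 1, "moderate": 2, "high": 3}

def propose_tier(impact: str, data: str, complexity: str) -> int:
    """Return a proposed tier (1=low, 2=medium, 3=high).

    A human reviewer should confirm or override the proposal.
    """
    return max(BUSINESS_IMPACT[impact],
               DATA_SENSITIVITY[data],
               COMPLEXITY[complexity])

# Example: a credit scorer with material impact on regulated data.
assert propose_tier("material", "regulated", "moderate") == 3
```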

Step 2: Vendor and Technology Risk Assessment

Vendor Financial Stability:

  • Financial health and business continuity
  • Customer concentration and dependencies
  • Insurance coverage and liability limitations
  • Exit planning and data portability

Technical Controls:

  • Model development and testing practices
  • Version control and change management
  • Performance monitoring and alerting
  • Bias detection and mitigation procedures

Security and Compliance:

  • SOC 2 Type II certification
  • Data encryption and access controls
  • Incident response and breach procedures
  • Regulatory compliance documentation
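
These checks can feed a simple onboarding gate that blocks procurement until baseline controls are evidenced. A sketch; the control names follow the lists above, while the tier thresholds are illustrative policy:

```python
# Sketch: a simple onboarding gate over vendor due-diligence results.
# Control names follow the checklists above; the tier thresholds are
# illustrative policy, not regulatory requirements.

REQUIRED_FROM_TIER = {
    "soc2_type2": 2,             # SOC 2 Type II from Tier 2 upward
    "encryption_and_access": 1,  # baseline for any tier
    "incident_response": 1,
    "bias_testing": 3,           # documented bias procedures for Tier 3
    "model_documentation": 3,
}

def onboarding_gaps(tier: int, controls: dict[str, bool]) -> list[str]:
    """Return the controls that block onboarding at the given tier."""
    return [name for name, min_tier in REQUIRED_FROM_TIER.items()
            if tier >= min_tier and not controls.get(name, False)]

gaps = onboarding_gaps(3, {"soc2_type2": True, "encryption_and_access": True,
                           "incident_response": True, "bias_testing": False})
print(gaps)  # ['bias_testing', 'model_documentation']
```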

Step 3: Data Risk Assessment

Data Quality and Lineage:

  • Source system reliability and controls
  • Data transformation and preprocessing
  • Training data representativeness and completeness
  • Ongoing data quality monitoring

Privacy and Regulatory Compliance:

  • Customer consent and data use agreements
  • Cross-border data transfer restrictions
  • Data retention and deletion capabilities
  • Regulatory reporting implications

Training Data Governance:

  • Source and provenance of training data
  • Bias testing and demographic representation
  • Data refresh and model retraining procedures
  • Opt-out mechanisms for customer data
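
Bias testing and demographic representation checks can start with something as simple as comparing group shares in the training data against a reference population. A sketch; the 0.8 ratio threshold is an illustrative screening value, not a regulatory standard:

```python
# Sketch: flagging under-represented groups in training data by
# comparing group shares against a reference population. The 0.8
# ratio threshold is an illustrative choice, not a regulatory one.
from collections import Counter

def representation_gaps(training_groups: list[str],
                        reference_shares: dict[str, float],
                        min_ratio: float = 0.8) -> dict[str, float]:
    """Return groups whose training-data share falls below
    min_ratio * reference share, mapped to the observed ratio."""
    counts = Counter(training_groups)
    total = len(training_groups)
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        ratio = observed / expected if expected else float("inf")
        if ratio < min_ratio:
            gaps[group] = round(ratio, 2)
    return gaps

sample = ["A"] * 700 + ["B"] * 250 + ["C"] * 50
print(representation_gaps(sample, {"A": 0.60, "B": 0.30, "C": 0.10}))
# {'C': 0.5}
```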

Step 4: Model Performance Risk Assessment

Accuracy and Reliability:

  • Performance metrics and benchmarking
  • Error rates and false positive/negative analysis
  • Performance across different customer segments
  • Stress testing under adverse conditions

Explainability and Interpretability:

  • Ability to explain individual decisions
  • Understanding of model decision factors
  • Regulatory examination readiness
  • Customer dispute resolution capabilities

Monitoring and Governance:

  • Real-time performance monitoring
  • Model drift detection and alerting
  • Escalation procedures for performance degradation
  • Retraining and model refresh procedures
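
Model drift detection commonly uses the population stability index (PSI), which compares the distribution of scores at development time against recent production scores. A minimal numpy sketch; the 0.10 and 0.25 alert thresholds are widely used rules of thumb, not regulatory requirements:

```python
# Sketch: population stability index (PSI) for score-drift monitoring.
# PSI = sum over bins of (actual% - expected%) * ln(actual% / expected%).
# The 0.10 / 0.25 thresholds are common rules of thumb, not regulation.
import numpy as np

def psi(expected_scores, actual_scores, n_bins: int = 10) -> float:
    # Bin edges come from the development-time (expected) distribution.
    edges = np.quantile(expected_scores, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    exp_pct = np.histogram(expected_scores, bins=edges)[0] / len(expected_scores)
    act_pct = np.histogram(actual_scores, bins=edges)[0] / len(actual_scores)
    # Avoid division by zero in sparse bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
baseline = rng.beta(2, 5, 10_000)        # development-time scores
production = rng.beta(2.5, 5, 10_000)    # slightly shifted scores
value = psi(baseline, production)
status = "stable" if value < 0.10 else "watch" if value < 0.25 else "alert"
print(f"PSI = {value:.3f} ({status})")
```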

Implementing SR 11-7 Compliance for AI

Model Inventory Requirements

Minimum documentation for each AI model:

  • Model name and vendor information
  • Business use case and decision impact
  • Data inputs and processing description
  • Risk rating and governance classification
  • Owner and stakeholder identification
  • Implementation and review dates

Example model inventory entry:

Model: Acme AI Credit Scorer v2.1
Business Owner: Chief Credit Officer
Technical Owner: Risk Management IT
Use Case: Consumer loan underwriting (amounts < $50k)
Risk Rating: High (SR 11-7 Tier 3)
Data Inputs: Credit bureau data, bank relationship history
Vendor: Acme Financial AI
Validation Status: Completed Q4 2025
Next Review: Q2 2026
Regulatory Notes: Used in CECL calculations
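
If the inventory lives in a model risk platform rather than a spreadsheet, each entry becomes a structured record that workflows and reports can query. A sketch of the entry above as a Python dataclass; the field names are illustrative, not a standard schema:

```python
# Sketch: the inventory entry above as a structured record.
# Field names mirror the example entry; they are illustrative,
# not a standard schema.
from dataclasses import dataclass

@dataclass
class ModelInventoryEntry:
    name: str
    vendor: str
    business_owner: str
    technical_owner: str
    use_case: str
    risk_tier: int                 # 1=low, 2=medium, 3=high
    data_inputs: list[str]
    validation_status: str
    next_review: str
    regulatory_notes: str = ""

entry = ModelInventoryEntry(
    name="Acme AI Credit Scorer v2.1",
    vendor="Acme Financial AI",
    business_owner="Chief Credit Officer",
    technical_owner="Risk Management IT",
    use_case="Consumer loan underwriting (amounts < $50k)",
    risk_tier=3,
    data_inputs=["Credit bureau data", "Bank relationship history"],
    validation_status="Completed Q4 2025",
    next_review="Q2 2026",
    regulatory_notes="Used in CECL calculations",
)
```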

Independent Model Validation

For high-risk AI models, validation must include:

Conceptual Soundness:

  • Review of model development documentation
  • Assessment of theoretical foundation and assumptions
  • Evaluation of input data appropriateness
  • Analysis of model limitations and use restrictions

Outcome Analysis:

  • Backtesting against historical outcomes
  • Benchmark testing against alternative approaches
  • Performance analysis across different time periods
  • Stress testing under adverse scenarios
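
For a probability-of-default model, outcome analysis typically starts with two questions: does the model rank-order risk, and do predicted probabilities match observed rates? A minimal sketch on synthetic data using scikit-learn's roc_auc_score plus a decile calibration table (the decile breakdown is an illustrative choice):

```python
# Sketch: basic backtesting of a probability-of-default model on
# synthetic data: rank-ordering (AUC) plus calibration by score decile.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
predicted = rng.uniform(0.01, 0.30, 5_000)                     # model PD estimates
observed = (rng.uniform(size=5_000) < predicted).astype(int)   # realized outcomes

print(f"AUC: {roc_auc_score(observed, predicted):.3f}")

# Calibration: mean predicted PD vs observed default rate per decile.
deciles = np.digitize(predicted, np.quantile(predicted, np.linspace(0.1, 0.9, 9)))
for d in range(10):
    mask = deciles == d
    print(f"decile {d}: predicted {predicted[mask].mean():.3f} "
          f"observed {observed[mask].mean():.3f} (n={mask.sum()})")
```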

Ongoing Monitoring:

  • Real-time performance tracking
  • Model drift detection and measurement
  • Regular recalibration and revalidation
  • Exception reporting and escalation procedures

Documentation Requirements

Model Development Documentation:

  • Business justification and use case
  • Model selection rationale
  • Training data description and quality assessment
  • Model architecture and parameter selection
  • Validation testing results and conclusions

Operational Documentation:

  • Implementation procedures and controls
  • User training and competency requirements
  • Monitoring and reporting procedures
  • Incident response and escalation protocols
  • Change management and version control

Governance Documentation:

  • Model approval decisions and rationale
  • Risk rating assignment and justification
  • Ongoing review and validation schedules
  • Senior management reporting and oversight
  • Audit trail and decision records

Common AI Risk Assessment Challenges

Challenge 1: Black Box Models

Problem: Many AI models, especially deep learning systems, are not easily interpretable.

Solution:

  • Require explainability features for high-risk applications
  • Implement LIME, SHAP, or other explanation techniques
  • Consider interpretable model alternatives for regulated use cases
  • Document explanation capabilities in model validation
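
For tree-based models, the open-source shap package can attribute an individual decision to its input features, which supports both validation and dispute resolution. A sketch assuming a scikit-learn gradient boosting model; the data and feature names are synthetic and illustrative:

```python
# Sketch: per-decision feature attributions with SHAP for a
# tree-based model. Requires the open-source `shap` package;
# data and feature names are synthetic and illustrative.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(1_000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1_000) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])   # attributions for one decision

features = ["utilization", "payment_history", "tenure"]  # illustrative names
for name, value in zip(features, shap_values[0]):
    print(f"{name}: {value:+.3f}")
```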

Challenge 2: Vendor Model Validation

Problem: Limited access to vendor model internals for independent validation.

Solution:

  • Negotiate model validation rights in vendor contracts
  • Use outcome analysis and benchmark testing
  • Require vendor to provide detailed model documentation
  • Implement robust ongoing monitoring to detect performance issues

Challenge 3: Rapid Model Updates

Problem: AI models may be updated frequently, challenging traditional validation timelines.

Solution:

  • Establish tiered change management procedures
  • Automate validation testing where possible
  • Pre-approve minor updates with monitoring triggers
  • Reserve full validation for material model changes
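
A tiered change policy is easier to apply consistently when it is written down as code. A sketch that classifies a proposed update as minor (pre-approved with monitoring) or material (full revalidation); the change categories and the performance-delta threshold are illustrative policy choices:

```python
# Sketch: classifying a model update as minor (pre-approved with
# monitoring) or material (full revalidation). Categories and the
# 2% AUC-delta threshold are illustrative policy, not regulation.

MATERIAL_CHANGES = {"architecture", "new_data_source", "objective", "population"}

def classify_update(changed: set[str], auc_delta: float) -> str:
    """Return 'material' or 'minor' for a proposed model update."""
    if changed & MATERIAL_CHANGES:
        return "material"
    if abs(auc_delta) > 0.02:        # large performance shift
        return "material"
    return "minor"                   # e.g., scheduled retrain, same design

print(classify_update({"retrain"}, auc_delta=0.004))            # minor
print(classify_update({"retrain", "new_data_source"}, 0.004))   # material
```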

Challenge 4: Data Quality and Bias

Problem: AI models can amplify data quality issues and create discriminatory outcomes.

Solution:

  • Implement comprehensive bias testing procedures
  • Establish data quality monitoring and controls
  • Require diverse training data and regular retraining
  • Monitor outcomes across different demographic groups
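
Outcome monitoring across demographic groups often begins with the four-fifths (80%) rule of thumb from fair lending analysis: each group's approval rate should be at least 80% of the most favored group's. A sketch; failing the screen signals a need for deeper analysis, not a legal conclusion:

```python
# Sketch: screening approval rates by group with the four-fifths
# rule of thumb. Failing the screen is a signal for deeper fair
# lending analysis, not a legal conclusion by itself.

def four_fifths_screen(approvals: dict[str, tuple[int, int]]) -> dict[str, float]:
    """approvals maps group -> (approved, total); returns each group's
    approval rate relative to the most favored group."""
    rates = {g: a / t for g, (a, t) in approvals.items()}
    best = max(rates.values())
    return {g: round(r / best, 2) for g, r in rates.items()}

ratios = four_fifths_screen({"group_a": (620, 1000), "group_b": (430, 900)})
print(ratios)                 # {'group_a': 1.0, 'group_b': 0.77}
flagged = [g for g, r in ratios.items() if r < 0.8]
print("flag for review:", flagged)
```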

Specific Considerations for Different AI Applications

Credit Underwriting Models

Additional requirements:

  • Fair lending compliance and disparate impact testing
  • FCRA accuracy and dispute procedures
  • UDAAP risk assessment
  • Consumer disclosure requirements

Key risk factors:

  • Proxy discrimination through non-traditional data
  • Model complexity vs. explanation requirements
  • Performance across different credit segments
  • Integration with existing credit processes

Fraud Detection Systems

Additional requirements:

  • False positive impact on customer experience
  • Real-time performance and availability requirements
  • Integration with existing fraud operations
  • Regulatory reporting and SAR filing implications

Key risk factors:

  • Model accuracy across different fraud types
  • Alert fatigue and operational efficiency
  • Customer privacy and data protection
  • Cross-border transaction monitoring

Customer Segmentation and Marketing

Additional requirements:

  • Consumer privacy and consent management
  • Marketing compliance and fair treatment
  • Data use restrictions and opt-out procedures
  • Cross-selling and relationship management integration

Key risk factors:

  • Customer treatment fairness across segments
  • Data use beyond original consent
  • Integration with existing marketing controls
  • Measurement of customer outcomes

Building an AI Risk Assessment Program

Governance Structure

Board Oversight:

  • AI risk appetite and tolerance statements
  • Regular reporting on AI model inventory and performance
  • Approval of high-risk AI implementations
  • Annual assessment of AI risk management framework

Senior Management:

  • AI risk strategy and implementation oversight
  • Resource allocation for AI risk management
  • Cross-functional coordination and communication
  • Escalation procedures for significant AI risks

Three Lines of Defense:

  • First line: Business units implementing and using AI models
  • Second line: Risk management and compliance oversight
  • Third line: Internal audit validation and assurance

Technology Infrastructure

Model Risk Management Platform:

  • Centralized model inventory and documentation
  • Workflow management for model approval and validation
  • Performance monitoring and alerting
  • Audit trail and regulatory reporting capabilities

Data Management Platform:

  • Data lineage and quality monitoring
  • Privacy and consent management
  • Cross-border data transfer controls
  • Data retention and deletion automation

Talent and Training

Key competencies needed:

  • AI/ML technical expertise for model validation
  • Regulatory knowledge for compliance assessment
  • Risk management experience for governance oversight
  • Business knowledge for use case evaluation

Training requirements:

  • Board and senior management AI risk awareness
  • Business user training on AI limitations and controls
  • Technical training for risk management staff
  • Regular updates on regulatory developments

Preparing for Regulatory Examination

Examination Readiness Checklist

  • [ ] Complete AI model inventory with risk ratings
  • [ ] Documentation of model validation procedures and results
  • [ ] Evidence of ongoing monitoring and performance tracking
  • [ ] Board and senior management reporting records
  • [ ] Audit trail of model approval and change decisions
  • [ ] Training records for staff involved in AI risk management
  • [ ] Incident reports and remediation actions
  • [ ] Vendor management documentation and contracts

Common Examination Questions

About model inventory:

  • "Show us your complete inventory of AI models and how you classify risk."
  • "How do you ensure all AI models are captured in your inventory?"
  • "What controls prevent unauthorized AI model deployment?"

About model validation:

  • "Demonstrate your independent validation of [specific AI model]."
  • "How do you validate models when you have limited access to vendor internals?"
  • "Show us your ongoing monitoring procedures and performance alerts."

About governance:

  • "How does the board exercise oversight of AI model risk?"
  • "What is your process for escalating AI model performance issues?"
  • "Show us examples of model risk decisions and the supporting rationale."

Looking Ahead: Regulatory Evolution

Expected Developments

  • Enhanced OCC guidance specifically addressing AI model risk management
  • Interagency coordination among the OCC, Federal Reserve, and FDIC on AI oversight
  • Industry standards development for AI model validation
  • Examination procedures specifically designed for AI models

Preparation Recommendations

  • Stay current with regulatory developments and industry guidance
  • Participate in industry working groups and standard-setting efforts
  • Build relationships with regulators through proactive communication
  • Document your AI risk management evolution and lessons learned

Banks that proactively implement robust AI risk assessment frameworks will be positioned to:

  • Navigate increasing regulatory scrutiny confidently
  • Accelerate AI adoption with appropriate controls
  • Demonstrate risk management leadership to regulators
  • Capture competitive advantages from AI innovation

The key is starting now—building capabilities, documenting procedures, and establishing governance before regulatory requirements become more prescriptive.

ShieldAI helps banks implement SR 11-7 compliant AI risk assessment frameworks with automated model inventory, validation workflows, and regulatory reporting. Start your free trial →