New Guidance on Model Risk – What’s In and What’s Out

Banking regulators recently revised guidance on model risk management (MRM), superseding earlier, more detailed guidance from 2011. What were the major changes? And what impact are these changes likely to have on bank supervision? Drawing upon my own experience, I'll look at what's in, what's out, and the implications of these changes for both examiners and bankers.

The new guidance shares with the prior guidance a broadly similar view of what constitutes effective MRM. The primary differences lie in scope and level of detail. The revised guidance cuts the word count by 70%, omitting much of the prior guidance’s supporting details. The agencies adopted a largely “principles-based” approach. While the concept of principles-based supervision can sound appealing, in practice it can replace substance with vague platitudes.

Both supervisors and the industry like flexibility and the ability to adjust for special cases or as conditions change. The trouble is that vague principles can make it difficult to take supervisory action while still leaving supervisors open to second guessing. And it’s not just supervisors. While often complaining about bright line tests and prescriptive requirements, the banking industry also likes safe harbors. In addition, corrective actions to supervisory critiques shouldn’t involve a guessing game of what the supervisor really wants.

The new guidance supposedly reflects "supervisory experience and industry feedback, as well as technological advancements in modeling over the past several years." Well, it does reflect industry feedback. As noted in an earlier post, the Bank Policy Institute (BPI) had essentially declared war on the prior guidance. The role of "supervisory experience" is much less clear. The 2026 guidance takes a similar approach to guidance issued in 2000, when the role of models was still evolving. The expanded 2011 guidance reflected lessons from the Global Financial Crisis. The 2026 guidance was more a matter of "never mind." Having participated in at least two dozen reviews since 2011 that covered model risk (among other topics), I don't recall examiners clamoring for vaguer guidance.

What’s In and What’s Out

The issuance accompanying the revised guidance highlights some key changes. However, those highlights are hardly comprehensive, and regulators didn’t provide a red-lined version where the changes would be more apparent. The chart below contrasts the old and the new guidance, describing what’s been added (what’s in) and what’s been taken out (what’s out).

Not an Enforceable Standard

The new guidance “does not set forth enforceable standards or prescriptive requirements; accordingly, non-compliance with this guidance will not result in supervisory criticism against a banking organization.” It’s long been the case that guidance does not have the force of regulation, a point reiterated by banking regulators in 2018 (Guidance on Guidance) and 2021 (12 CFR 4, Subpart F). Even before those initiatives, examiners were not supposed to cite “violations” of guidance. What’s different this time is a matter of emphasis and context. I looked at OCC guidance issued since 2018. This is the first document that specifically stressed its lack of enforceability. Usually labeling an issuance as guidance was sufficient.

While a bank should not be criticized solely due to a "violation" of supervisory guidance, such guidance does reflect supervisory expectations, which could form the basis for an MRA or other supervisory actions. The 2026 guidance adds a footnote indicating, "supervisory action may result for any violations of law or unsafe or unsound practices stemming from insufficient management of model risk." There's a bit of mixed messaging here. Essentially, the agencies are telling banks: you can totally ignore the guidance, but we're leaving the door open to criticize you later.

The same goes for examiners. Stressing the unenforceability of guidance will likely have a chilling effect on attempts to criticize deficient model risk management processes. Should, however, model risk weaknesses lead to a major blowup, examiners should prepare for second guessing. This certainly was the case with Silicon Valley Bank. SVB had excessive IRR and deficient IRR practices that should have been called out more forcefully. But the Federal Reserve lacked any regulation specifically addressing IRR, relying instead on an (unenforceable) Interagency Advisory.

Carveout for Smaller and Midsize Banks

While much of the new guidance is principles-based, it does provide a specific asset-based carveout for banks with less than $30 billion in total assets. The prior guidance stated that its application should be “commensurate with a bank’s risk exposures, its business activities, and the complexity and extent of its model use.” The new guidance indicates that it might still apply to some smaller banks due to the “prevalence and complexity of their models or because of activities outside the scope of traditional community banking.” The tailoring regulations allowed for similar exceptions, but as a practical matter banks below certain asset-based thresholds were excluded from more intensive supervision.

AI Carveout

The revised MRM guidance explicitly excludes generative and agentic AI models. The guidance describes these models as “novel and rapidly evolving,” and mentions plans “to issue a request for information that … banks’ use of AI, including generative AI and agentic AI and AI-based models.” The BPI had expressed concern that the prior guidance’s “extensive documentation and comprehensive testing from model owners and model validators … can delay the time to release into the production and the cost.” In other words, demonstrating that models work before putting them into production could slow things down. Well, yeah. Sometimes it’s a matter of doing things fast or doing things right.

Then how should examiners and bank managers handle AI models in the meantime? The prior guidance did not impose a specific set of performance metrics but pointed out that model quality can be evaluated based on precision, accuracy, discriminatory power, robustness, stability, and reliability. While the prior guidance predates the extensive use of AI in financial modeling, the now-deleted Comptroller’s Handbook (last updated in 2021), touches on the use of AI. It identifies some AI applications in banking and some characteristics of sound AI risk management. The Handbook also references a 2017 paper by the Financial Stability Board, which discusses the uses and risks of AI models in more detail.

Rescission of Handbook Section on Model Risk Management

In addition to rescinding the 2011 guidance, OCC also rescinded the Comptroller's Handbook section on Model Risk Management. OCC didn't bother to provide an explanation, although the BPI had previously complained about its length. What was so awful about this Handbook section? When the latest version was issued in 2021, its stated goals were to inform and educate "examiners about sound risk management practices that should be assessed during an examination" as well as planning and coordinating such examinations. Rescinding this section seems to deemphasize any role that supervisors might play in assessing a bank's model risk management. The prior guidance used the term "supervisory expectations" three times. The new guidance? Not once.

What is a Model? And What Isn’t?

The prior guidance defined a model as a “quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into quantitative estimates.” The revised guidance stipulates that the method also be “complex” and excludes spreadsheet-type applications that involve little more than simple arithmetic calculations. The prior guidance expanded the model definition to also include “quantitative approaches whose inputs are partially or wholly qualitative or based on expert judgment, provided that the output is quantitative in nature.” This additional feature is missing from the new guidance.

As noted in an earlier post, model risk is not just about the math. Consider deposit models. Assumptions around these models can have a large impact on overall risk measures. They also present some significant modeling challenges. Those factors make an effective challenge process especially important. But most of these models are not statistically complex and may be scoped out of the new guidance. Maybe. The new guidance contradicts itself in this regard. While removing the expanded model definition, the document later refers to cases where a model’s design relies substantially on expert judgment. These statements may require further clarification.

Applying full-blown model governance to simple spreadsheets wouldn’t pass a cost-benefit test. But this is also a bit of a red herring. Disputes over what was or was not a model were common during bank exams. Rarely (in my experience, never) did these debates revolve around simple spreadsheets. Much more common were differences over approaches that blend quantitative and qualitative elements. These models were often mathematically simple but complex in other ways and could still present considerable model risk.

Governance

The previous guidance emphasized model governance. That included board involvement, clear separation between model development and validation functions, and validation requirements. The new guidance does not mention the board, takes a flexible approach to the independence of the validation function, and spells out fewer requirements for model use.

Both the old and new guidance acknowledge the importance of effective challenge to minimize model risk. The old guidance saw the independence of the validation function as a prerequisite for effective challenge. The new guidance merely indicates that validation functions should have "sufficient independence to maintain objectivity." Substance matters more than form when it comes to independence and effective challenge, but conflicts of interest (direct or indirect) can undermine that independence.

The old guidance stressed the role of the board of directors and senior management. Neither group is even mentioned in the new guidance. Most directors are not modeling experts, and you should not expect them to take an active role in day-to-day model risk management. However, placing oversight of model risk under the board’s purview sends an important message. Understanding that they may be ultimately responsible for model risk failures makes directors more likely to ensure adequate staffing and support for model risk management functions.

Documentation and Testing

The prior guidance included an extensive discussion of documentation standards. The words “documentation” or “documented” are used 32 times. It discussed the role of documentation in every phase of model risk management and why such documentation is essential. The revised guidance does not dismiss the importance of documentation but deemphasizes it. Its discussion of documentation standards amounts to a single sentence.

It’s easy to dismiss documentation as more a matter of form than substance. Some regulators also see it this way. But documentation deficiencies rarely, if ever, revolve around a typo on page 43 of a document or ending a sentence with a preposition. Model reviews by the regulators’ in-house quants can sometimes come across as pedantic, as though they were grading a student’s paper. But rarely do these sorts of academic critiques make it into a supervisory letter. The most significant documentation gaps revolve around more substantive matters, such as a failure to explain and support the bank’s modeling choices or to clearly describe whether a model is effective and reliable.

The revised guidance provides a good summary of the purpose of testing but avoids specifics. The prior guidance described testing in more detail, including key modeling metrics and a more extensive discussion of back-testing. It also discussed the use of sensitivity analysis in model development and testing. Such analysis is especially critical for models constructed with limited historical data or when modeling more extreme events, such as in stress testing. Sensitivity analysis goes unmentioned in the new guidance.

Advice for Examiners

The new guidance can place examiners in a difficult position. The guidance focuses on high-level principles rather than specifics and stresses its unenforceability. At the same time, it leaves the door open to deem deficient risk practices unsafe and unsound. A natural but unwise response is to self-censor and avoid raising model risk issues due to an expected lack of support from higher-ups. Senior agency officials, especially political appointees, tend to develop selective amnesia when things hit the fan. A better approach is to continue to raise issues where warranted. The new guidance doesn't necessarily preclude such actions. If your recommendations get overruled, so be it. At least you are on record.

It is also advisable to retain copies of the old guidance (and the rescinded Handbook section, if available). The older, more detailed guidance can fill in some of the gaps in the current, principles-based guidance. There is little in the new guidance that directly contradicts the prior approach. It merely keeps things vague. Model risk won't disappear, and a good understanding of models and their risks can be a valuable commodity.

Advice to Bankers

The high-level nature of the revised guidance provides banks with some flexibility in designing and implementing their model risk management functions. But that vagueness can also make banks more vulnerable to second guessing, especially if things go wrong. While some bankers view MRM procedures as little more than a compliance exercise, inappropriate use of models can lead to real risks and real losses. Criticisms of MRM practices stem less from merely diverging from the examiner’s preferred approach than from the bank’s unwillingness to prioritize and empower these functions. Much of the now-rescinded guidance could still prove useful.

Administrations change every four years, and the regulatory environment can change as well. Dismantling a risk management function may be easy. Rebuilding that function is much more difficult.

