Financial institutions have long used models in their everyday activities – asset/liability management, interest rate risk, liquidity, credit risk, anti-money laundering/Bank Secrecy Act – and the list continues to evolve and grow. In June 2016, the Financial Accounting Standards Board (FASB) issued the current expected credit losses (CECL) standard for estimating allowances for credit losses, which will require financial institutions to implement yet another model. As the use of models has increased, so have regulatory expectations, culminating in directives to maintain a model risk management program.
The Office of the Comptroller of the Currency (OCC), the Federal Deposit Insurance Corporation (FDIC) and the Federal Reserve Board (FRB) have all issued guidance regarding model risk management. The OCC and FRB issued their guidance in 2011, and the FDIC issued its guidance in 2017. The National Credit Union Administration (NCUA) also has guidance, but it focuses more on interest rate risk models and references the OCC guidance.
Key aspects of a model risk management program include:
(a) model development, implementation, and use
(b) validation, and
(c) governance, policies and controls.
To understand the importance of a model risk management program, it is helpful to first define a model. Per the FDIC’s Supervisory Guidance on Model Risk Management, a model is “a quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into quantitative estimates. A model consists of three components: an information input component, which delivers assumptions and data to the model; a processing component, which transforms inputs into estimates; and a reporting component, which translates the estimates into useful business information.”
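To make the three-component structure concrete, the minimal sketch below (in Python, with a purely hypothetical loss-rate calculation and field names) separates the input, processing, and reporting pieces so each can be examined and validated on its own.

```python
# Illustrative only: a toy "model" showing the three components the guidance
# describes. The loss-rate assumption and field names are hypothetical.

def input_component(raw_records):
    """Deliver assumptions and data to the model (input component)."""
    assumed_loss_rate = 0.015  # hypothetical management assumption
    balances = [r["balance"] for r in raw_records]
    return balances, assumed_loss_rate

def processing_component(balances, loss_rate):
    """Transform inputs into a quantitative estimate (processing component)."""
    return sum(balances) * loss_rate

def reporting_component(estimate):
    """Translate the estimate into useful business information (reporting component)."""
    return f"Estimated credit losses: ${estimate:,.2f}"

if __name__ == "__main__":
    portfolio = [{"balance": 250_000.0}, {"balance": 410_000.0}]
    balances, rate = input_component(portfolio)
    print(reporting_component(processing_component(balances, rate)))
```

Framing even a simple spreadsheet calculation this way makes it easier to see which inputs, formulas and reports a validation would need to examine.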
A common misconception is that if the financial institution has not purchased a dedicated system/software to perform the analysis, it is not a model and therefore not subject to model risk management. For example, many financial institutions use internally developed spreadsheets to estimate liquidity and do not realize this activity qualifies as modeling and must be included in the model risk management program.
Next, given the key decisions that are informed by the output (reporting) of these models, it is important to understand that weaknesses or unreliability within the models and their output could result in unsound strategic decisions (“model risk”). This could have immediate and/or sustained impacts on a financial institution’s performance, profitability and/or reputation.
With these basic definitions and concepts established, we can start with the backbone of model risk management: its governance, policies and controls.
One of the first steps in developing a model risk management program is to assemble a library of the models in use at the financial institution. Responsibility for developing and maintaining the library should be centralized, and key data should be gathered and documented for each model, along with any other supporting information that is helpful for managing it.
This process will involve all of the various lines of business and their managers and may require some education to ensure everyone is working from the appropriate definition of a model. The library should track updates, exceptions to policy and whether each model is functioning properly. A process to regularly update the library should also be established, as this must be a dynamic document: any new, discontinued or changed models should be promptly captured. The financial institution also needs to track whether all existing (and incoming) models are subject to appropriate risk management procedures.
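As one possible way to structure the library, the sketch below uses a simple Python dataclass together with a check for entries that may be overdue for validation; the specific fields and the annual frequency shown are illustrative assumptions, not a prescribed list.

```python
# Illustrative sketch of a model inventory ("library") entry and a simple
# check for models whose validation may be overdue. Field names are assumptions.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ModelInventoryEntry:
    name: str                    # e.g., "Liquidity forecast spreadsheet"
    business_purpose: str        # why the model exists and how its output is used
    owner: str                   # accountable business-unit owner
    vendor: str | None           # None for internally developed models
    last_validated: date | None  # None if never validated
    policy_exceptions: list[str]

def overdue_for_validation(entry, frequency_days=365):
    """Flag entries never validated or validated longer ago than policy allows."""
    if entry.last_validated is None:
        return True
    return date.today() - entry.last_validated > timedelta(days=frequency_days)

library = [
    ModelInventoryEntry("ALM model", "Interest rate risk", "Treasury", "VendorX",
                        date(2023, 4, 1), []),
    ModelInventoryEntry("Liquidity spreadsheet", "Liquidity forecasting", "Finance",
                        None, None, ["No change log maintained"]),
]
print([e.name for e in library if overdue_for_validation(e)])
```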
Along with the inventory, a strong model risk management program requires an underlying framework established by senior management and the board. A board-approved policy should be created to govern model risk (both in the aggregate and at the individual model level). Policies should define the terms “model” and “model risk” and address the assessment of model risk, acceptable practices for the development, implementation and use of models, appropriate model validation activities, and governance and controls over model risk management. Among other things, the policy should also describe the processes used to select and retain vendor models, including the people who should be involved in making those decisions. Validation standards should be established for both the pre-implementation and ongoing stages, covering internal as well as external/vendor/third-party models. Policies should also state the documentation requirements, including the inventory mentioned above, the results of modeling and validation processes, and model issues and their resolution. Documented procedures should be created to support and implement the policy.
Policies should provide clear expectations for the expertise, authority, reporting lines, and continuity of the staff to which model risk management responsibilities have been assigned/delegated. This includes defining how the potential use of external resources for validation and compliance will be integrated into the model risk management framework. When assigning responsibilities for ownership, controls and compliance, it is necessary to consider reporting lines and incentives and/or other potential conflicts of interest. “Model owners” are usually within the business units and are ultimately accountable to ensure the established framework is being observed for the use and performance of “their” models. This includes making sure they are properly developed, implemented and used, as well as ensuring they undergo the appropriate validation and approval processes.
The responsibilities to control model risk could be assigned to individuals and/or committees. Such responsibilities include measuring and monitoring risk against established limits. Management of the independent validation and review process should also be included in these responsibilities, which includes assigning appropriate resources that can provide effective challenge to the model’s components.
Internal audit also plays a role in the overall governance of model risk management. As an independent function, internal audit should evaluate the financial institution’s model risk management program’s overall effectiveness, including an assessment of whether it addresses both (a) the risk of fundamental errors leading to incorrect outputs and (b) the risk of incorrect/inappropriate use. These assessments should be at the individual model level and in the aggregate and should be completed to ensure compliance with the regulatory guidance.
With the governance aspect considered, we can look to the other two model risk management aspects for specific expectations related to developing and using models, as well as validating them.
While most financial institutions rely on vendors with subject matter expertise to develop the models, there are still instances in which internally developed models (e.g., spreadsheets) are being used, so financial institutions must be aware of the requirements for this phase. This starts with assignment of responsibility, as the development team should include individuals with the necessary degree of technical knowledge and expertise. When developing a model, a statement of purpose should be the first step, which should then guide the process from beginning to end; knowing the “why” behind the model is necessary to know the “how.” Documentation is key; design, theory and underlying logic should be clearly spelled out, as well as any limitations.
The quality and relevance of any data used must be examined to ensure it is suitable and consistent with the intent of the model. Robust, thorough testing (and documentation of it) is key. The capabilities of the systems being used to obtain model inputs must be considered; calculation errors may occur if there are inconsistencies between the model’s expected data points and the data points available from the core system. If any data is not derived from or consistent with the financial institution’s own balance sheet / customer accounts (e.g., average industry data), analysis and tracking should be performed to ensure the financial institution/user is aware of the impact this could have on the results being reported. Rigorous testing is necessary to confirm the mathematical and statistical accuracy and the conceptual soundness of the model. This testing should be documented and should include a variety of scenarios (including extreme scenarios), market conditions and products. The impact of assumptions should also be evaluated. During model development, the proposed methodology should be compared to alternative methods and theories.
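One simple form of input testing is to reconcile the data points the model expects against what the core system extract actually provides before the model is run. The sketch below assumes hypothetical field names and a toy extract.

```python
# Illustrative input-reconciliation check; the expected fields and the sample
# core-system extract are hypothetical.
EXPECTED_FIELDS = {"account_id", "balance", "maturity_date", "rate"}

def check_extract(records):
    """Return a list of issues found in a core-system extract before modeling."""
    issues = []
    for i, record in enumerate(records):
        missing = EXPECTED_FIELDS - record.keys()
        if missing:
            issues.append(f"row {i}: missing fields {sorted(missing)}")
        if "balance" in record and record["balance"] is None:
            issues.append(f"row {i}: balance is null")
    return issues

extract = [
    {"account_id": "001", "balance": 100_000.0, "maturity_date": "2026-03-31", "rate": 0.042},
    {"account_id": "002", "balance": None, "maturity_date": "2027-06-30"},  # missing rate, null balance
]
for issue in check_extract(extract):
    print(issue)
```

Documenting checks like this, and their results, provides evidence for later validation that the input component is working as intended.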
Another way models are continually being tested is simply through use. Users provide an opportunity for developers to obtain “real world” feedback, although this feedback must sometimes be taken with a grain of salt. It may be easy and instinctive to point to model weaknesses as the reason for negative results appearing in a model’s reports; conversely, users are less likely to communicate constructive feedback if the model is reporting results that are favorable to their own purposes.
Model validation is a key process that should be performed regularly, by an independent person/party, to confirm the model is sound. Some users erroneously believe an externally developed model does not require validation; the regulatory guidance makes clear that it does. Validation of both internal and external models should include inputs, processing and reporting.
While it is not required to use a third party to perform validations, there is a requirement for independence, as well as adequate/relevant skills, knowledge and expertise. The individuals performing validations should possess enough autonomy from the model developers/users that any potential criticisms can be reported or escalated without active or implied retribution. In general, the individual(s) who developed or use the model should not also perform the validation; nor should someone who has a stake in the model’s reporting/results/accuracy. If strict segregation is not possible, the work should be subject to a critical review and some level of re-performance to mitigate the lack of segregation.
Validation prior to use of a model is certainly important so any identified issues can be addressed immediately, but validation should also occur on an ongoing basis – especially when there are major changes. Sound practice includes establishing a documented expectation for the frequency of model validations. Regulators expect an annual review to include evaluation of whether validation activities are adequate.
There are three elements of a full model validation: conceptual soundness evaluation, ongoing monitoring and outcomes analysis.
The validation should critically evaluate the developer’s evidence to support the model’s overall theoretical construction, key assumptions, data and specific mathematical calculations. This may include thoroughly reviewing the developer’s documentation but also may include testing. A comparison to alternative approaches and theories should be performed, as well as an evaluation of whether the data used to build the model appropriately reflects the financial institution’s actual portfolio or market conditions – particularly for models that rely on external data rather than the bank’s own data. Sensitivity testing should also be considered, whereby small changes are made to various inputs to identify any unexpectedly large changes in results. It is also important to assess the logic, judgment, and types of information used; any qualitative judgments made by the development team should be reviewed to confirm they are well documented and supported.
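Sensitivity testing can be as simple as perturbing one input at a time and checking whether the output moves by more than expected. The sketch below assumes a generic single-output model function, a 1% bump and a reviewer-chosen amplification threshold, all of which are illustrative.

```python
# Illustrative one-at-a-time sensitivity test. The toy model, inputs, and the
# 10x amplification threshold are assumptions for demonstration.

def sensitivity_test(model, baseline_inputs, bump=0.01, amplification_limit=10.0):
    """Bump each input by a small relative amount and flag outsized output moves."""
    base_output = model(**baseline_inputs)
    findings = {}
    for name, value in baseline_inputs.items():
        bumped = dict(baseline_inputs, **{name: value * (1 + bump)})
        new_output = model(**bumped)
        pct_change = abs(new_output - base_output) / abs(base_output)
        findings[name] = pct_change
        if pct_change > amplification_limit * bump:
            print(f"Unexpectedly large response to '{name}': {pct_change:.1%} output change")
    return findings

# Toy model: projected interest expense
def toy_model(balance, rate, beta):
    return balance * rate * beta

print(sensitivity_test(toy_model, {"balance": 1_000_000.0, "rate": 0.03, "beta": 0.6}))
```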
Market conditions, products, and customer activities constantly evolve, and it is vital that models evolve to accommodate these shifts; if a model cannot accommodate modifications, the financial institution must identify the need to potentially replace it. Important steps in ongoing monitoring include process verification and benchmarking. Process verification seeks to confirm the components of the model are functioning as designed – for example, whether the data inputs are still accurate, complete, appropriate and of high quality. Quality and change control procedures over the model’s coding should be verified as adequate to prevent unapproved and inappropriate edits. Continued accuracy of system integration should also be confirmed, since a large volume of the model’s inputs is derived from files/feeds from other systems; changes in the source systems must be captured and mapped accurately. It is common to include comparisons of model outputs to relevant benchmarks in the monitoring phase, with significant variances analyzed to determine the cause.
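Benchmark comparison during ongoing monitoring can likewise be mechanized. The sketch below compares model outputs to a benchmark series period by period and flags variances beyond an assumed 15% tolerance; the series, metric and tolerance are hypothetical.

```python
# Illustrative benchmark comparison for ongoing monitoring. The series and the
# 15% variance tolerance are hypothetical.

def flag_benchmark_variances(model_outputs, benchmark, tolerance=0.15):
    """Return periods where the model output deviates from the benchmark beyond the tolerance."""
    flagged = []
    for period, model_value in model_outputs.items():
        bench_value = benchmark[period]
        variance = (model_value - bench_value) / bench_value
        if abs(variance) > tolerance:
            flagged.append((period, f"{variance:+.1%}"))
    return flagged

model_outputs = {"2024Q1": 1.8, "2024Q2": 2.6, "2024Q3": 2.1}   # e.g., projected margin %
benchmark     = {"2024Q1": 1.9, "2024Q2": 2.0, "2024Q3": 2.2}   # e.g., peer or prior-model figures
print(flag_benchmark_variances(model_outputs, benchmark))
```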
The most common form of outcomes analysis is back-testing. This involves selecting a forecasted time period and comparing the actual results to the model’s forecasted results. For example, for the 12-month forecast created in March 2020, compare the actual data as of March 2021 to that forecast, and then analyze the reason(s) the forecast did not hold true. While back-testing is the most common type of outcomes analysis, there are a variety of other options that can be used, with the selection(s) based on the model’s methodology and complexity, among other factors. It is important to note that a one-time analysis does not necessarily indicate the need for immediate changes in model approach; however, analysis of multiple short- and longer-term time periods may identify trends that point to a need for adjustments.
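As a concrete illustration of the back-test described above, the sketch below compares a hypothetical 12-month forecast made in March 2020 to the actuals observed in March 2021 and reports the percentage error for each line item; the line items, figures and error measure are assumptions.

```python
# Illustrative back-test: compare a prior 12-month forecast to actual results.
# The line items and figures are hypothetical.

def back_test(forecast, actual):
    """Return the percentage error of the forecast for each line item."""
    results = {}
    for item, forecast_value in forecast.items():
        actual_value = actual[item]
        results[item] = (forecast_value - actual_value) / actual_value
    return results

# Forecast made in March 2020 for the 12 months ending March 2021, vs. actuals
forecast_mar_2020 = {"net_interest_income": 12_500_000, "deposits": 480_000_000}
actual_mar_2021   = {"net_interest_income": 11_200_000, "deposits": 530_000_000}

for item, error in back_test(forecast_mar_2020, actual_mar_2021).items():
    direction = "over" if error > 0 else "under"
    print(f"{item}: forecast {direction}shot actuals by {abs(error):.1%}")
```

Tracking these errors over several forecast periods, rather than a single one, is what reveals the trends that may warrant adjustments to the model.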
It is common for financial institutions to engage qualified third parties to help fulfill at least one aspect of their model risk management programs, or at least to assess the quality of their program. Further details on this topic can be found in the Supervisory Guidance on Model Risk Management issued by the OCC, the Federal Reserve and the FDIC, as well as the related NCUA guidance. Please contact your client service team to discuss ways Baker Tilly can assist your financial institution.
For more information on this topic, or to learn how Baker Tilly’s banking and capital markets industry Value Architects™ can help, contact our team.