FRTB (Part 2): Why we need Next-Generation Risk Platforms

Executive Summary
The purpose of this blog post is to highlight the limitations we are observing in legacy risk platforms, and why next-generation platforms may be needed to meet the calculation demands of the FRTB Internal Model Approach (IMA) cost-effectively.
In part 1 of our blog, FRTB Primer, we explained that banks can choose one of two methodologies to comply with FRTB by January 2025: the Standardized Approach (SA) or the Internal Model Approach (IMA).
Based on BCBS estimates, IMA can lower Market Risk Capital requirements by 9% to 55% relative to SA (1), so there are strong economic reasons to implement it. However, FRTB IMA requires approximately nine times more calculations than what banks currently perform for their VaR jobs, and many banks lack a platform that can absorb a ninefold increase in compute cost.
To use an analogy: if you drive a high fuel-consuming vehicle like a fancy 4-wheel-drive truck to work every day, and it’s only a 15-minute drive, you may not be overly concerned about your weekly fuel spend. The vehicle’s fuel efficiency isn’t much of a concern because the overall cost of filling your tank is relatively low. But if you were now required to work at a new location nine times as far away, with a 2-hour-15-minute drive each way (ignoring how painful that commute may be), you would probably become far more cost-conscious at the pump.
The “Truck” will still get you to work every day, but the cost will rise. At that point, hybrids, electric vehicles, and other fuel-efficient options become the logical next step.

We’re seeing banks reach this point now as they estimate the compute costs of running IMA on their legacy risk systems. Moving to a next-generation risk platform with a lower cost per calculation becomes the logical next step: they want to trade their “Truck” in for a Tesla.
What are the Benefits and Costs of Calculating IMA?
Institutions can invest in technology to implement IMA, reduce their capital charges, and thereby make more money. However, implementing IMA requires far more data and calculations, as well as a lengthy regulatory approval process.
Benefit | Cost |
---|---|
Reduced capital requirements relative to SA | 9 times increased computational workload |
Freedom to take on additional business, backed by confidence in internal model risk calculations | Lengthy regulatory approval process |
Each IMA implementation can also vary in complexity based on the instruments traded, data requirements, and maturity of legacy risk platforms.
Calculation Demands of IMA
To put into perspective the magnitude of calculations involved with IMA, let’s assume a bank wishing to implement IMA is currently running an end-of-day (EOD) Value at Risk (VaR) job across all of its portfolios.
Here are some assumptions we’ll use about this current state VaR job:
Metric | Value |
---|---|
Time to complete VaR job | 3 hours |
Number of calculations | 10 billion |
IMA under FRTB requires banks to perform calculations that are multiples of what they perform for VaR. This is due to:
- Reduced Risk Factor Adjustment
- Liquidity Adjustment
- Diversification Adjustment
In short, banks must perform around nine times the calculations to fulfill the FRTB IMA requirements compared to VaR. If we keep all things equal on the current state system, we expect the IMA job to take 27 hours to run and perform 90 billion calculations.
What an IMA job would look like without changing the current state risk platform compute resources:
Metric | Value |
---|---|
Time to complete IMA job | 27 hours |
Number of calculations | 90 billion |
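The scaling above is simple arithmetic; here is a minimal sketch in Python using the illustrative assumptions from the tables above (not real benchmark figures):

```python
# Illustrative scaling of the current-state VaR metrics to an FRTB IMA
# workload, using the ~9x multiplier discussed above. Figures are the
# blog's assumptions, not real benchmarks.
VAR_JOB_HOURS = 3              # current EOD VaR job runtime
VAR_CALCULATIONS = 10e9        # current number of calculations (10 billion)
IMA_MULTIPLIER = 9             # approximate FRTB IMA workload factor

ima_hours = VAR_JOB_HOURS * IMA_MULTIPLIER            # 27 hours
ima_calculations = VAR_CALCULATIONS * IMA_MULTIPLIER  # 90 billion

print(f"IMA job: {ima_hours} hours, {ima_calculations:,.0f} calculations")
```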
Many legacy systems have some level of horizontal scalability, and runtimes could be reduced by adding compute resources to meet the requirements of IMA. However, many of these system designs cannot scale to this level. Even where it is technically possible to complete this number of calculations within an acceptable time window, scaling the infrastructure that far may become prohibitively expensive.
How Does Ness Solve This Problem for Clients?
Adopt Streaming Architectures
It is often assumed that streaming architectures are fit only for real-time, event-driven use cases and unsuitable for batch jobs. However, by treating a batch job as a bounded stream, streaming architectures can support batch workloads and bring all of their performance advantages with them. A next-generation risk platform can therefore serve both kinds of workload on the same technology stack.
Streaming architectures built on open-source technologies such as Apache Flink and Apache Kafka bring many advantages. These technologies are mature, and the open-source community continuously improves them and makes them easier to use. Connectors and interfaces exist for most data sources, which accelerates development on this stack. Using these industry-standard components also helps when hiring people to work on the platform (many engineers already have experience with these technologies) and when attracting talent (people want to work with best-in-class technologies applicable to a variety of use cases).
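The “batch as a bounded stream” idea can be illustrated with a toy sketch. This is not Flink code, just a minimal Python analogy in which the same processing pipeline consumes either a finite batch or a live feed; all names and figures are hypothetical:

```python
from typing import Iterable, Iterator

def revalue(positions: Iterable[dict]) -> Iterator[dict]:
    """One processing pipeline that works identically on a bounded
    stream (an EOD batch) and an unbounded stream (a live feed)."""
    for position in positions:
        # Illustrative revaluation: apply a scenario shock to the value.
        yield {"id": position["id"],
               "pnl": position["value"] * position["shock"]}

# A batch job is just a stream that happens to end.
eod_batch = [
    {"id": "pos-1", "value": 100.0, "shock": -0.25},
    {"id": "pos-2", "value": 250.0, "shock": 0.5},
]
results = list(revalue(eod_batch))
print(results)  # [{'id': 'pos-1', 'pnl': -25.0}, {'id': 'pos-2', 'pnl': 125.0}]
```

In a real deployment the same job graph would run in Flink’s batch (bounded) or streaming (unbounded) execution mode without changing the processing logic.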
See our earlier blog post – https://ness.com/risk-architectures-from-batch-to-streaming/ for a more detailed view of the benefits of streaming architectures compared to batch-processing-based systems.

Re-thinking How to Manage Data Requirements
A common pattern in legacy risk systems is that the first step is determining the necessary data dependencies and fetching that data from various data sources. This data-loading step can be a massive bottleneck for jobs with large data footprints, like the expected shortfall calculations under IMA. In addition, in many cases, the loaded data (like position data and reference data) is relatively static but reloaded many times.
By re-orchestrating how this data is identified and loaded, we can distribute the data necessary for calculations across nodes in the compute cluster in an effective manner. Additionally, data update events can be processed in real-time to ensure that the compute jobs are working with a complete set of accurate data.
This pattern helps us greatly reduce the overall processing time and can have a huge impact on costs.
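As a rough illustration of the pattern, the sketch below loads static reference data once and then applies incremental update events instead of reloading it for every job; all class and method names are hypothetical:

```python
# Hypothetical sketch: load relatively static reference data once per
# compute node, then process update events in real time rather than
# performing a full reload at the start of every job.

class ReferenceDataCache:
    def __init__(self, initial: dict):
        self._data = dict(initial)   # one expensive load, e.g. at start-up

    def apply_update(self, key: str, value: dict) -> None:
        """Apply a real-time data-update event instead of reloading."""
        self._data[key] = value

    def get(self, key: str) -> dict:
        return self._data[key]

# One load instead of one load per job:
cache = ReferenceDataCache({"AAPL": {"sector": "Tech"}})
cache.apply_update("AAPL", {"sector": "Technology"})  # incremental update
print(cache.get("AAPL")["sector"])  # Technology
```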
Reusing Calculation Results
Many legacy risk platforms process one job at a time: load all the necessary data, perform a series of calculations, publish the results, and then start the next job on a clean slate. All the data is reloaded and every calculation is re-executed from scratch, with no consideration of whether the data or results from an earlier job could be reused.
When designing solutions to perform calculations at the scale of IMA, taking advantage of previous results is critical to ensuring that overall processing time is minimal. Therefore, a large part of designing a calculation framework to support IMA is evaluating opportunities to reuse intermediate calculation results. By reusing calculation results effectively, the resulting system can experience many benefits, such as reduced processing times, reduced data inconsistencies across jobs, and a more cost-optimized infrastructure.
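A minimal sketch of the reuse idea, using Python’s built-in memoization as a stand-in for a proper result store; all names are hypothetical, and a production system would key results on versioned inputs:

```python
from functools import lru_cache

CALLS = {"count": 0}  # tracks how often the expensive path actually runs

@lru_cache(maxsize=None)
def scenario_pnl(position_id: str, scenario_set: str) -> tuple:
    """Stand-in for an expensive repricing across a scenario set;
    computed once per unique (position, scenario-set) pair."""
    CALLS["count"] += 1
    # Dummy deterministic result in place of a real PnL vector.
    return tuple(len(position_id) + len(scenario_set) + i for i in range(3))

# A first job computes the result; a later job with the same inputs
# reuses it instead of repricing from scratch.
scenario_pnl("pos-1", "ES-2023Q4")
scenario_pnl("pos-1", "ES-2023Q4")  # cache hit: no recomputation
print(CALLS["count"])  # 1
```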
Invest in Future-Proof, Holistic Solutions
Considering how the burden of regulations has only increased over time, a next-generation risk platform should be flexible enough to support growing regulatory demands, such as calculations at a more granular level or an increase in the number of scenarios.
Conclusion
Under FRTB, IMA can lower capital requirements by between 9% and 55% relative to SA. Clients have mature risk systems capable of performing billions of calculations in 3 hours, but since FRTB IMA requires up to 9 times more calculations than VaR, these batches could run for 27 hours!
Existing risk systems are well-built, but:
- Their designs cannot scale 9x from a performance point of view.
- The underlying technologies have limited scalability; newer technologies scale far better.
- Scaling current technologies 9x is not cost-effective, and cheaper alternatives exist today.
Ness brings a wealth of experience across risk use cases of varying scale, product composition and complexity, and legacy technologies. Partnering with Ness to evaluate how to support IMA calculations ensures that cutting-edge design patterns are applied and that past mistakes are not repeated.
A new risk platform is a big investment, so it is important to ensure the solution is modular, scalable, and not limited to a single use case. For example, both batch and event-driven processing need to be supported, and calculation granularity should not be fixed at a single level. The requirements for a new system should therefore be considered from a long-term perspective during the design of any next-generation risk platform.
Sources
1) Basel Committee on Banking Supervision, Explanatory note on the minimum capital requirements for market risk, January 2019 (https://www.bis.org/bcbs/publ/d457_note.pdf)