Big Data challenges: giving banks opportunities to rethink their data structures

Regulatory revisions to banks' mandatory capital requirements, which will be implemented in a few jurisdictions in January 2023 and globally in 2024, will pose a significant challenge for these institutions as the volumes of data they own, manage and analyze grow exponentially. However, technology comes to the rescue: Christophe Rivoire, UK Country Director at Opensee, discusses turning these challenges into unique opportunities for banks to simplify their data structures.

Christophe Rivoire, Opensee

While the precise calculation of market risk indicators and credit risk measures has always been imperative for financial institutions, the new calculations introduced by the Basel Committee on Banking Supervision through its revisions to the market risk framework, known as the Fundamental Review of the Trading Book (FRTB), demand an unprecedented level of granularity and historical breadth.

Banks are already facing a significant increase in the data needed for their risk management and regulatory reporting. The task has become even more difficult with the explosion in data volumes triggered by the new FRTB capital requirement rules, under which banks calculate the amount of capital they must hold to absorb losses due to market risk. At stake are the accuracy and speed of risk management calculations and regulatory reporting – which, in turn, have a direct impact on banks' data infrastructure and the associated costs.

Capital calculation methods

Banks must choose between two methods when calculating their capital under the new FRTB rules: the standardized approach or the internal models approach (IMA). Calculating capital under the IMA involves many new complexities beyond the requirement to align trading desk pricing and risk management, and the result is a substantial increase in the volume of both transactional and historical data. But the challenges don't end there. There are also data management issues, such as the use of proxy data and business rules across multiple jurisdictions, with full auditability and data versioning required throughout.
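To make the shift in scale concrete, here is a minimal Python sketch of a historical expected shortfall calculation of the kind that sits at the heart of the IMA. It is illustrative only: the P&L figures are randomly generated, and it leaves out FRTB-specific elements such as liquidity horizons, risk factor eligibility and stressed calibration.

import numpy as np

def expected_shortfall(pnl: np.ndarray, confidence: float = 0.975) -> float:
    """Average loss in the tail beyond the (1 - confidence) quantile of the P&L distribution."""
    n_tail = max(1, int(np.ceil((1 - confidence) * len(pnl))))
    worst = np.sort(pnl)[:n_tail]   # the most negative P&L observations
    return -worst.mean()            # reported as a positive loss figure

# Hypothetical example: 250 daily P&L observations for one trading desk
rng = np.random.default_rng(0)
daily_pnl = rng.normal(loc=0.0, scale=1_000_000, size=250)
print(f"97.5% expected shortfall: {expected_shortfall(daily_pnl):,.0f}")

Even in this toy form, the calculation must be repeated per desk, per risk factor set and per historical window, which is where the data volumes multiply.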

Since the methodology chosen by a bank must be applied at the trading desk level, the results of simulations under either approach must be analyzed at the most granular level to avoid shortcuts that could call into question the relevance of the decision. Banks must therefore be able to simulate scenarios as well as adapt quickly to new situations. This means not only analyzing or processing more data, but also having more flexibility in data management configurations.

As a result, many international banks have been forced to revise their data analytics solutions to help them meet these large-scale data challenges and to give their business users the autonomy to perform any aggregation, compute constantly growing datasets and manage the exponential growth of data more effectively. All of this should be delivered at minimal cost, without compromising on performance or on the volumes of data involved.

The implementation of FRTB presents a tremendous opportunity for banks to rethink their overall risk data structures, taking advantage of the full horizontal scalability offered by new technologies. In the past, there was no choice but to maintain separate market risk and credit risk data structures, with multiple datasets for each. This was largely due to the limitations of in-memory technologies, which forced banks to separate yesterday's data from historical data, normal datasets from stressed datasets, and so on.

When technology opens up a new window of unlimited opportunity, why stop there? Why not rethink the entire organization of the data structure? It becomes possible for risk managers to view and report a country's contribution to value at risk, exposure at default for the same country, dollar duration and much more, such as profit and loss information. With one click, they can analyze all these numbers and look at the trends. Access to data is no longer delayed because it resides in multiple datasets, and joint reports no longer need to be compiled manually from inputs drawn from multiple sources. With the end of multiple datasets, data duplication and the high operating costs of storing redundant data are also eliminated.
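As a purely illustrative sketch of this "single dataset, one click" idea, the following Python snippet assumes a unified risk table with hypothetical columns (country, var_contribution, ead, dollar_duration) and derives all three country-level figures from a single aggregation; in practice this would be a query against the risk platform rather than an in-memory table.

import pandas as pd

# Hypothetical unified risk dataset (column names and values are illustrative)
risk = pd.DataFrame({
    "country":          ["FR", "FR", "DE", "DE", "US"],
    "var_contribution": [1.2e6, 0.8e6, 2.1e6, 0.4e6, 3.3e6],
    "ead":              [50e6, 30e6, 80e6, 20e6, 120e6],
    "dollar_duration":  [4.1e5, 2.2e5, 6.7e5, 1.1e5, 9.9e5],
})

# One aggregation delivers VaR contribution, exposure at default and dollar
# duration per country from the same dataset, with no manual joins across silos.
by_country = risk.groupby("country")[["var_contribution", "ead", "dollar_duration"]].sum()
print(by_country)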

Optimization of data storage

Opensee worked with a large bank to rethink their entire data structure and data model with this vision in mind. The first step was to design a data model with real-time access so that users could query market and credit risk information regardless of the granularity or history of the data. This involved combining eight very large data sets totaling several hundred terabytes. It immediately removed at least 20% of data points, which were duplicates, dramatically reducing storage costs and operational risks of errors between datasets through a streamlined adjustment process. With an efficient abstraction model layer, which removes complexity from datasets, users do not need to know the data model when calculating regulatory ratios, but rather understand the different risk exposures under multiple angles and enrich their dashboards with relevant information.
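By way of illustration only (the actual data model and the 20% figure are the bank's own), a deduplication step of this kind might look like the following Python sketch, which assumes the combined datasets share a hypothetical business key of trade identifier, as-of date and risk measure.

import pandas as pd

def combine_and_dedupe(datasets: list[pd.DataFrame]) -> pd.DataFrame:
    """Merge several risk datasets and drop rows sharing the same business key."""
    merged = pd.concat(datasets, ignore_index=True)
    before = len(merged)
    deduped = merged.drop_duplicates(subset=["trade_id", "as_of_date", "measure"])
    removed = before - len(deduped)
    print(f"Removed {removed} duplicate rows ({removed / before:.0%})")
    return deduped

# Hypothetical example: two datasets overlapping on trade 2
a = pd.DataFrame({"trade_id": [1, 2], "as_of_date": ["2023-01-02"] * 2,
                  "measure": ["var", "var"], "value": [10.0, 20.0]})
b = pd.DataFrame({"trade_id": [2, 3], "as_of_date": ["2023-01-02"] * 2,
                  "measure": ["var", "var"], "value": [20.0, 30.0]})
print(combine_and_dedupe([a, b]))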

This example illustrates how a single, scalable platform that optimizes daily data storage can improve the entire risk management process for banks. With longer ranges of historical data, banks can retain and build more meaningful trend analyses, ensure data consistency between stress-testing exercises and daily risk management, and ultimately offer their users more data capabilities with reduced operational risks and better data quality.

Tackling the exponential growth in data volumes opens the door for banks to rethink their entire data structure, making it more efficient through real-time self-service analytics on all their data at a lower operating cost. FRTB may turn out to be a blessing in disguise.

