Interview with Prof. Paul Embrechts, ETH Zürich

The complexity of Operational Risk


Operational risks and reputation risks play an increasingly important role in the financial sector and the real economy. From a methodological perspective, however, these risks – which often relate to very specific situations and in some cases involve existence-threatening levels of loss – are very difficult to pin down, a problem that also extends to the successful regulation of these risk classes. We spoke to one of the founding fathers of quantitative risk management – Prof. Paul Embrechts, co-author of "Modelling Extremal Events for Insurance and Finance" and "Quantitative Risk Management: Concepts, Techniques and Tools" – about new approaches to modelling, estimating and quantifying operational losses and about the opportunities and limits of quantitative models. In our interview, Paul Embrechts shares some fascinating anecdotes from his many years of lively debate with banks, insurance companies, regulators and the academic community.

Together with Valérie Chavez-Demoulin and Marius Hofert you recently published some academic papers on OpRisk. What were the main findings and practical implications?

Paul Embrechts: Within the group of regulatory risks facing a financial institution (market, credit, operational), operational risk (OpRisk) plays a very interesting role from a purely scientific point of view. Both in structure (a matrix ordered along business lines (BL) and event types (ET)) as well as with respect to statistical properties, OpRisk data are very challenging to analyse. In the paper that you mention we develop an Extreme Value Theory (EVT) based tool which allows such data to be studied with the various BLs, ETs and, for instance, time as covariates, so that the data matrix can be modelled as one (dynamic) data set. We consider this work very much as "statistical research motivated by OpRisk-type data" rather than "here is the final model to use in OpRisk practice". I have other papers out there with a similar flavour; for instance, with Giovanni Puccetti (2008) we looked at quantile (VaR) based estimates for matrix-ordered loss data as a function of the order in which the marginal VaR measures are aggregated, e.g. first row-wise estimation followed by VaR aggregation, or first column-wise estimation followed by VaR aggregation. This question came to us via the Basel Committee. Both papers have practical relevance insofar as they clearly highlight the near impossibility of coming up with a reasonably objective risk measure estimate for OpRisk data. A referee from the banking industry for the paper with Valérie and Marius at first became very excited and thought we had indeed come up with THE method to use in OpRisk practice! He or she went to great lengths to reprogram the new statistical methodology for their proprietary OpRisk database, only to conclude, unfortunately, that this method too would not give "the desired" results. Of course, as we do not have a copy of the data used by that referee, it is difficult to understand the main reason(s) for this conclusion. Most likely it is related to the high complexity and extreme heavy-tailedness of OpRisk data; we will surely come back to this issue in the discussions below.
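To make the aggregation-order point concrete, here is a minimal sketch – not the method of the Embrechts-Puccetti paper itself, but an illustration on purely hypothetical, simulated cell losses – showing that summing business-line VaRs and summing event-type VaRs computed from the very same loss matrix generally yield different capital figures:

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical loss matrix: 100,000 simulated periods, 8 business lines x 7 event types,
# with heavy-tailed (Pareto-type) cell losses. Purely illustrative data.
n, n_bl, n_et = 100_000, 8, 7
losses = rng.pareto(1.5, size=(n, n_bl, n_et))   # tail index 1.5: heavy tails, finite mean

def var(x, level=0.999):
    """Empirical Value-at-Risk: the 'level'-quantile of the loss sample."""
    return np.quantile(x, level)

# Route 1: aggregate event types within each business line, take VaR per BL, then sum.
var_by_bl = sum(var(losses[:, i, :].sum(axis=1)) for i in range(n_bl))

# Route 2: aggregate business lines within each event type, take VaR per ET, then sum.
var_by_et = sum(var(losses[:, :, j].sum(axis=1)) for j in range(n_et))

print(f"Sum of BL-wise VaRs: {var_by_bl:,.1f}")
print(f"Sum of ET-wise VaRs: {var_by_et:,.1f}")
# The two aggregation orders give different numbers even though the underlying
# cell-level losses are identical.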

What is your view on the methodology by which the capital charge for OpRisk is computed in the Solvency II standard formula? Does it make sense from an economic perspective?

Paul Embrechts: The Solvency II standard model for OpRisk is mainly volume based, in particular based on premiums and reserves. The various weighting factors are defined, and if necessary updated, through successive Quantitative Impact Studies, in close discussion with industry. Overall this is a laudable procedure which goes back to the earliest days of insurance regulation and also found its way into the Basel standard approaches for banking regulation. As with any standard approach, one can question whether the indicators used (i.e. premiums and reserves) sufficiently capture relevant economic reality, such as economic cycles or the dynamic development of a company. It is no coincidence that Basel III for OpRisk recently decided to replace the key indicator Gross Income (GI) in its standard formula by a new Business Indicator (BI). No doubt this decision is aimed at achieving "greater risk sensitivity" as well as "improved economic reality". We can expect similar developments to take place for insurance regulation. Originally, under the Swiss Solvency Test (SST), which has been legally in force since January 1, 2011, there was, and there still is, no (quantitative) capital charge for OpRisk; OpRisk is subsumed under the Swiss Quality Assessment (as part of Pillar 2). Interestingly, recent discussions at the level of the Swiss regulator FINMA seem to indicate that a (quantitative) capital charge for OpRisk is being considered. This would bring the two regulatory approaches (at least for OpRisk) closer together.
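As a purely schematic illustration of what "volume based" means here, the sketch below computes a charge as the larger of a premium-based and a reserve-based component, capped relative to the basic solvency capital requirement. The factor values, the cap and the example inputs are placeholders chosen for illustration only, not the calibrated Solvency II parameters:

# Schematic, volume-based OpRisk charge in the spirit of a standard-formula approach:
# take the larger of a premium-based and a reserve-based component and cap it relative
# to the basic SCR. All numbers below are hypothetical placeholders.

def op_risk_charge(premiums: float,
                   reserves: float,
                   bscr: float,
                   f_premium: float = 0.03,   # hypothetical premium factor
                   f_reserve: float = 0.03,   # hypothetical reserve factor
                   cap_ratio: float = 0.30    # hypothetical cap relative to the BSCR
                   ) -> float:
    """Volume-based operational risk charge: max of the two components, capped."""
    op_volume = max(f_premium * premiums, f_reserve * reserves)
    return min(op_volume, cap_ratio * bscr)

# Example: a fictitious non-life insurer (figures in millions).
print(op_risk_charge(premiums=500.0, reserves=1_200.0, bscr=400.0))  # -> 36.0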

Will we ever be able to derive high quantiles for OpRisk with precision? Or is this an issue where mathematical research should flag a big warning sign and address the influence of model risk?

Paul Embrechts: Here I can emphatically answer NO to the first question and YES to the second. A main reason for the NO is the complexity of OpRisk loss data: great inhomogeneity between the various loss types, not well understood interdependence, the influence of economic and regulatory factors leading to intricate time dependence, etc. This clearly implies the crucial importance of model risk (and hence the YES above) in the context of OpRisk; I do however want to stress that model risk is crucial for risk management as a whole. Concerning OpRisk, I have voiced model risk concerns over and over again, also to the highest instances within regulation and industry. I still very much remember an invitation for RiskLab to teach a one-week course on Extreme Value Theory (EVT) at the Boston Federal Reserve in 2005; at the time EVT was considered the Deus ex Machina for OpRisk capital charge calculations. After we explained the various model conditions needed for EVT-based high-quantile estimation, it became clear that OpRisk, with the data properties as we knew them at the time, was well outside the scope of standard EVT. The head of the OpRisk unit at the Boston Fed very clearly replied in a private conversation: "In that case it would be good to make this finding more widely known early on". How right he was. I also recall that, at about the same time, a UK risk management professional came to me stating that UK banks were avoiding the use of EVT for OpRisk because I was quoted as having said in a talk that an EVT-based approach would not save the day. This point of view got further, indirect support through an important publication by the Bank of Italy's Marco Moscadelli (2004), who analysed the then available Basel Committee QIS data using EVT and obtained several infinite-mean models for the data aggregated at business line level. This clearly showed the serious statistical problems underlying OpRisk data. The main consequence is that, if data exhibit such extreme heavy-tailedness, regulators and risk management functions have to take a step back and seriously question the soundness of the quantitative questions asked as well as of the analyses performed.
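To illustrate the infinite-mean issue that Moscadelli's analysis pointed at, here is a minimal sketch – on simulated, purely hypothetical Pareto losses, not the QIS data – of a Hill-type tail-index check: a Hill estimate of the shape ξ at or above 1 (equivalently a tail index α ≤ 1) signals a fitted model whose mean does not exist, so that mean-based aggregation and naive high-quantile extrapolation break down:

import numpy as np

rng = np.random.default_rng(7)

# Hypothetical OpRisk-style losses: standard Pareto with tail index alpha = 0.9 (< 1),
# i.e. a distribution whose theoretical mean is already infinite. Purely illustrative.
losses = 1.0 + rng.pareto(0.9, size=20_000)

def hill_xi(x, k):
    """Hill estimator of the tail shape xi = 1/alpha from the k largest observations."""
    x_sorted = np.sort(x)[::-1]                   # descending order statistics
    return np.mean(np.log(x_sorted[:k] / x_sorted[k]))

k = 500                                            # number of upper order statistics used
xi_hat = hill_xi(losses, k)
print(f"Hill estimate: xi ~ {xi_hat:.2f}, alpha ~ {1.0 / xi_hat:.2f}")
if xi_hat >= 1.0:
    print("Estimated tail index <= 1: the fitted model has an infinite mean.")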

Together with Valérie Chavez-Demoulin and Johanna Nešlehová we wrote the first (in a way one could say, inaugural) article for the then (2006) new Journal of Operational Risk, exactly on this topic. I do want to stress, however, that it is precisely EVT which allows us to understand where we have to draw the line between the quantitatively achievable and non-achievable. Further along this path, I strongly advise those interested in operational risk to take a close look at similar discussions in the realm of environmental risk, as summarized through the so-called Dismal Theorem (Martin Weitzman from Harvard). Finally, during several discussions with practitioners I challenged them by saying that I was convinced that a non-trivial amount of backward engineering was going on of the type: "Give me the number, I will give you the model (parameters) so that we can achieve that number". I realize that I am somewhat cynical here, but the complexity of OpRisk data prompts me to make such a bold statement. All too often I have witnessed parameter fine-tuning after an initial statistical estimate from a well-intended model came up with a ridiculously high capital charge.

Collecting empirical data on market risk clearly has value for prediction. But what about OpRisk, where our intervention after losses (hopefully) changes the distribution of future losses? Isn't this philosophically different?

Paul Embrechts: First of all, whatever the regulatory approach used, I find it absolutely paramount that OpRisk data are collected, carefully monitored and correctly communicated within the individual institution. This is a matter of best-practice quality control, a practice we learned from the manufacturing industry over very many years. You are absolutely correct that data-based intervention hopefully has a positive feedback effect on future OpRisk losses, and hence by design defies the standard statistical assumptions underlying trustworthy prediction of future losses. By definition OpRisk is not traded, so there is nothing like the "implied volatility" of market risk gauging the sentiments and hidden information in "the market". Interestingly, such sentiment is increasingly gauged through Google-based searches and the counting of specific keywords which may be relevant for OpRisk.

Speaking of data: my personal view is that academic research on OpRisk could be strongly fostered by providing a pool of anonymized data available for academic use. What do you think about such an initiative and how could it be realized?

Paul Embrechts: This view is correct and has been voiced by academics from the start. So far we have only a few OpRisk data sets at our disposal on which we can test new methodology, or which can be used as a basis for a "flag a big warning" decision of the kind you mentioned. We based our statistical analysis for the new model discussed under Question 1 on admittedly selective media data collected by Willis Professional Risks. Because of extensive anonymization, even industry-wide collected data is only of limited value to the participating companies of the underlying consortia. And let us not forget, the final data used for risk capital calculations consist not only of internal historical and industry-wide consortium data but also of company-internal expert opinion data. It is exactly this multiple-source data structure that opens the door for applications of credibility theory, a theory very well known to actuaries. I personally am not very hopeful that in the relatively near future high-quality OpRisk data sets will become more readily available for academic research. Just think, for instance, of the highly relevant class of legal risk; incidentally, the record loss in this class stands at about 16.65 billion US$ (!), imposed by the US Justice Department on Bank of America in 2014. Often such losses correspond to fines settled out of court. How is one supposed to get all the relevant information on such important losses, losses that in a way correspond to an institution's "capital sins"? There are of course subclasses, like hardware and software losses, for which much more detailed data as well as methodology (hardware and software reliability in this case) are available.
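Since credibility theory is named here as the natural tool for blending internal, consortium and expert-opinion data, here is a minimal Bühlmann-style sketch on made-up numbers: the internal sample mean is weighted with a credibility factor Z = n/(n + k) against an external reference mean. The loss figures and the constant k are hypothetical; in practice k would itself be estimated from the multi-company data structure:

import numpy as np

def buhlmann_estimate(internal_losses, external_mean, k):
    """
    Bühlmann-style credibility estimate: Z * internal mean + (1 - Z) * external mean,
    with credibility factor Z = n / (n + k); k is the ratio of expected within-company
    variance to between-company variance (taken as given here).
    """
    n = len(internal_losses)
    z = n / (n + k)
    return z * np.mean(internal_losses) + (1.0 - z) * external_mean, z

# Hypothetical inputs: a short internal OpRisk loss history (in millions) and an
# external consortium/expert reference mean.
internal = np.array([0.8, 1.5, 0.3, 2.9, 0.6])
consortium_mean = 2.0
estimate, z = buhlmann_estimate(internal, consortium_mean, k=10.0)
print(f"Credibility weight Z = {z:.2f}, blended loss estimate = {estimate:.2f}m")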

In the scientific community there is currently an interesting debate on elicitable risk measures. Will this be the third dog running away with the bone in the VaR vs. ES debate?

Paul Embrechts: No, not at all. Elicitability is a concept related to the forecasting and backtesting of risk measures; it offers a sound methodological basis for comparing the statistical properties of several statistical estimators. The whole risk management discussion on elicitability (a concept that has been around for many years) exploded after I gave a talk at Imperial College London on the issue of "either VaR or ES" for the trading book. Let me be clear on this: in my opinion, both VaR and ES can be backtested in a way which is sufficiently precise for practical risk management purposes. Also note that, whereas ES on its own is not elicitable, the pair (VaR, ES) is jointly elicitable. It definitely pays to think more carefully about the choice of risk measure; there are various criteria to consider, such as practical understanding and ease of communication, but also statistical estimation, with elicitability and robustness as further relevant criteria. In the end it very much depends on what one wants to achieve by using a particular risk measure. Together with Ruodu Wang and Haiyan Liu (Waterloo) we have just finished writing a paper on quantile-based risk sharing which contains several results that I personally think are at least as (if not more) relevant for the ES versus VaR debate. The "more" above of course very much reflects my personal taste.
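On the forecasting and backtesting side, the elicitability of VaR means that competing VaR forecasts can be ranked by their average quantile ("pinball") score, the strictly consistent scoring function for a quantile. The sketch below does this on simulated data with two purely hypothetical forecasters; ES on its own admits no such single-score ranking, although the pair (VaR, ES) does via joint scoring functions:

import numpy as np

rng = np.random.default_rng(0)

def quantile_score(var_forecast, losses, alpha=0.99):
    """Average quantile ('pinball') score of a VaR_alpha forecast: lower is better.
    This is the strictly consistent scoring function that makes VaR elicitable."""
    hit = (losses <= var_forecast).astype(float)
    return np.mean((hit - alpha) * (var_forecast - losses))

# Simulated daily losses (hypothetical; Student-t for moderately heavy tails).
losses = rng.standard_t(df=4, size=5000)

# Two hypothetical VaR_99% forecasters: an empirical quantile calibrated on an initial
# window versus a deliberately too-low constant forecast.
forecast_a = np.full_like(losses, np.quantile(losses[:1000], 0.99))
forecast_b = np.full_like(losses, 1.0)

score_a = quantile_score(forecast_a, losses)
score_b = quantile_score(forecast_b, losses)
print(f"Average quantile score: forecaster A = {score_a:.4f}, forecaster B = {score_b:.4f}")
# The forecast with the lower average score is preferred -- this ranking is exactly
# what the elicitability of VaR makes possible.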

Should companies place greater emphasis on operational risk to develop competitive advantages in operations? Should they focus on developing models to quantify OpRisk, or should they try to gain a better understanding of operations and eliminate errors?

Paul Embrechts: If there is one thing we learned from quality control in the manufacturing industry, it is that the one (eliminating errors) cannot do well without the other (developing models to quantify risk). Data and (internal) models are there to make the weak spots in an institution more visible. An in-depth understanding of operations at the various company levels is surely a sine qua non of best practice. This view is also very much a driving force behind the Pillar 2 ORSA (Own Risk and Solvency Assessment) component of Solvency II. As a consequence, placing greater emphasis on OpRisk will no doubt lead to a competitive advantage.

Is reputational risk a byproduct of operational risk? How should companies deal with the challenge of quantifying this risk, which is connected to negative customer impacts?

Paul Embrechts: It is interesting to note that reputational risk is not part of the precise definition of OpRisk under either Basel II (and hence III) or Solvency II. It is very much a corporate governance issue. At the same time (and scientific research very much supports this), an institution's reputation stands to suffer considerably as a result of larger operational risk losses. This is abundantly clear in the case of legal risk. Furthermore, substantial operational risk losses (especially in the legal risk domain) typically have a direct effect on the company's stock price; think for instance of the ongoing Volkswagen case. Quantifying reputational risk I find a daunting (near impossible) task; understanding it qualitatively is much more relevant. One thing I learned early on about reputational risk, as a board member in banking and insurance, is: "If you do not want to read about it in tomorrow's newspaper, then you had better not do it!"

Is regulatory risk another – quite fast-growing – byproduct of operational risk?

Paul Embrechts: Here I would draw the line; if not, we drift back to the definition on the table at the very beginning of the Basel II debate, namely that operational risk is "all but market and credit risk". It is not so straightforward to come up with a workable definition of regulatory risk. Of course there is an obvious two-way interaction between regulation and solvency, almost by definition. If by regulatory risk you mean the possibility that an over-regulated market may cause otherwise healthy companies to experience solvency problems or, vice versa, that unbridled growth in the volume of certain products or in the complexity of financial institutions may harm society at large, then I observe that the resulting political debates on legislation to prevent these outcomes are taking place worldwide. However, over-regulation may also lead to the growth of shadow banking as well as shadow insurance; we are all aware of the former, unfortunately much less so of the important latter. Let me just add, as an interlude, the fact that the 1933 Glass-Steagall Act contained 37 pages whereas the 2010 Dodd-Frank Act originally contained 848 pages, with legal refinements running into many thousands of pages. On the other hand, misguided regulation will often lead to excessive attempts at regulatory arbitrage. In a very visible document of 2001, "An academic response to Basel II", together with colleagues from the London School of Economics, we specifically pointed out serious weaknesses in the then new regulatory guidelines, mainly for credit and operational risk. In that document, which was officially submitted to the Basel Committee, we explicitly stated (in 2001!): "Reconsider before it is too late!" Nothing much was done at the time, and indeed in 2007 it was too late. I would very much wish that more of my academic colleagues worldwide would spend more time on that kind of interdisciplinary research; unfortunately, findings of this kind rarely make it into the perceived "top" journals. This touches on several aspects of today's ranking-obsessed academia, going well beyond (quantitative) risk management. As this interview concerns operational risk, let me add the example of my paper with Giovanni Puccetti mentioned under Question 1. As already stated, the problem we treated came to us via the regulators; we gave a partial solution and submitted the paper to an academic probability journal, as we wanted this kind of problem to become known to a wider scientific audience. After several rounds of refereeing (two reports were in favour, praising the practical relevance; three were against, mainly criticizing the lack of mathematical depth), the paper was turned down with the suggestion to try a more applied (risk management) journal. Interestingly, one of the two referees in favour of publication commented that the journal we first submitted to had originally started off with exactly this kind of paper in mind. In the end, as suggested, we did submit the paper to a more applied (risk management) journal, The Journal of Operational Risk; it was accepted for publication within a couple of weeks. I could add further such examples.

Over 90% of the world's data has been created in the last two years, and Big Data is already being embraced in many fields. Do you think that Big Data has revolutionary potential in the world of risk management? Can Big Data improve the predictive power or the effectiveness of risk models? Can it deliver more accurate risk intelligence? Or is it just another big hype?

Paul Embrechts: To be sure, more data does not necessarily equate to more information. Further, I prefer to talk about Data Science rather than Big Data. In certain areas of risk management Big Data no doubt has large potential; for instance in the realm of credit card fraud or credit risk screening, where machine learning tools are already being applied with considerable success. In that sense I would personally not speak of a big hype. But let me make some comments relating current IT-driven developments to operational risk. It is unquestionable that the IT-driven data revolution out there will have a huge impact on society in general and on OpRisk for insurance and banking in particular. Very worrying for me, in the wake of this IT revolution, cybercrime is moving at a very fast pace to centre stage of the OpRisk theatre. Let me give you a further historical anecdote: at one of the early OpRisk conferences, in 2004, IT was only very marginally mentioned as an important loss driver. I recall asking at that conference whether this was justified, as at the time I was very well aware of the losses banks and insurance companies were incurring because of misguided IT investments. I do hope that at the current moment in time nobody doubts the enormous paradigm shifts that are taking place in cyber space; the loss potential for OpRisk will be considerable. For instance, we already observe a strong increase in peer-to-peer (P2P) lending and in what I like to refer to as "Facebook banking and insurance"; these are typical "shadow" developments. When we add to these developments efforts like algorithmic and high-frequency trading, as well as distributed-ledger transactions based on blockchain technology (the technology underpinning cryptocurrencies like bitcoin), then I do believe that we find ourselves at an important crossroads. At this crossroads, in my opinion, awareness of operational risk is bound to play a pivotal role.

Which current results on OpRisk (written by others) would you recommend reading? And what historical treatment of OpRisk should everyone know?

Paul Embrechts: Over recent years, comprehensive textbooks have been written on the topic; let me single out the voluminous "Fundamental Aspects of Operational Risk and Insurance Analytics: A Handbook of Operational Risk" by M.G. Cruz, G.W. Peters and P.V. Shevchenko (Wiley, 2015). The authors combine academic excellence with practical relevance, a must for the field. I often refer to two more historical publications: first, the already mentioned statistical analysis by Marco Moscadelli, "The modelling of operational risk: experience with the analysis of the data collected by the Basel Committee", Banca d'Italia, Discussion Paper No. 517, July 2004; and second, as an example of how a big bank (Deutsche in this case) goes about developing an internal OpRisk model, F. Aue and M. Kalkbrener (2006), "LDA at work: Deutsche Bank's approach to quantifying operational risk", The Journal of Operational Risk, 1(4): 49-93. Let me finish with another historical comment, albeit one with an important present-day flavour. In several current publications on quantitative risk management in general, and on the fields of stress testing and OpRisk in particular, Bayesian Hierarchical Networks (BHN) play an increasingly important role, a role that I do support. I still remember, around 1999, stepping into the office of Alexander McNeil, at the time a postdoc within ETH's RiskLab, and finding a BHN for OpRisk on his blackboard … as so often, history repeats itself!

[The questions were asked by Frank Romeike and Matthias Scherer.]

Paul Embrechts is Professor of Mathematics at ETH Zürich and Senior Swiss Finance Institute Professor, specialising in actuarial mathematics and quantitative risk management. During his academic career his research and teaching posts have included the Universities of Leuven, Limburg and London (Imperial College). Paul Embrechts has been a visiting professor at various universities, including the Scuola Normale in Pisa (Cattedra Galileiana), the London School of Economics (Centennial Professor of Finance), the University of Vienna, Paris (Panthéon-Sorbonne), the National University of Singapore and Kyoto University. In 2014 he held a visiting chair at the Oxford-Man Institute of the University of Oxford, and he has received honorary doctorates from the University of Waterloo, Heriot-Watt University Edinburgh and the Université catholique de Louvain. He is an Elected Fellow of the Institute of Mathematical Statistics and the American Statistical Association, an Honorary Fellow of the Institute and Faculty of Actuaries, Actuary SAA, Member Honoris Causa of the Belgian Institute of Actuaries, and an editor of numerous academic journals.


[ Cover image source: © duncanandison - Fotolia.com / Image source P. Embrechts: ETH Zürich ]