The Bactra Review: Occasional and eclectic book reviews by Cosma Shalizi   126

Operational Risk

Measurement and Modelling

by Jack L. King

New York: Wiley, 2001

Risk, with Reservations

[A modified version of this review first appeared in Quantitative Finance, vol. 2, no. 3 (June 2002), pp. 177--178.]

``Financial risk'' traditionally refers to the results of forces from outside a financial institution: prices go the wrong way; assets which should have been anti-correlated start moving together; a new government declares the institution a nest of lumpentechnocratic exploiters of the working poor and seizes its assets to pay for a literacy campaign. One can sensibly calculate the odds of some of these risks, and for these we can take rational precautions, either through keeping reserves large enough to typically cover them, or through insurance contracts. A typical risk-coverage policy might be, for instance, to keep assets in the bank able to cover the financial losses incurred in a year, with 95% confidence (i.e., the reserves would be larger than the year's losses in 95% of all years).
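
To make the arithmetic concrete: given a record of annual losses, such a reserve rule is just an empirical quantile. A minimal sketch in Python, with invented numbers (this illustrates the rule, nothing more):

    import numpy as np

    # Ten years of hypothetical annual losses (millions); invented numbers.
    annual_losses = np.array([1.2, 0.8, 2.5, 1.1, 0.9, 3.0, 1.4, 0.7, 1.8, 2.2])

    # Reserves set at the 95th percentile would have covered the year's
    # losses in 95% of years, by construction.
    print(f"95% reserve level: {np.quantile(annual_losses, 0.95):.2f} million")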

Financiers, however, do not just face financial risk. It is a melancholy truth, known at least since the first family, that human beings often fail at tasks assigned to them. While we all look forward to the rapidly approaching day when financial institutions involve no human beings at all (except, perhaps, shareholders), our machines and our software have sadly inherited our frailty. It is imprudent to count on even the simplest thing being done right, or at least on its being done right all the time, and doubly imprudent to act as though mistakes will ever be in our favour. The prudent financial institution, therefore, will attempt to assess the risk of loss it faces from internal causes, from its people and other parts not working right --- in a word, its operational risk.

People don't do what they're supposed to, either because they can't, or because they don't want to. People hit the wrong keys when tired, plug the wrong numbers into formulae they don't understand, read the wrong lines from displays, or make decisions based, as the poet says, on ``testosterone and cocaine''. Let us call all this ``stupidity''. On the other hand, people lie, cheat and steal, especially when there are large sums of money to be made doing so. Let us call all this ``malice''. Operational risk results from stupidity and malice.

Stupidity is clearly more boring, more common, and more easily treated than malice. Mistakes, it is reasonable to suppose, happen at random: therefore they can be treated statistically. If we have operational records, and can identify mistakes, then determining the operational risk due to stupidity is essentially an exercise in data-mining. We fit our favourite probability distributions to the records, and do Monte Carlo runs to estimate, say, the 95th percentile of operational losses per year. Formally, that is to say, operational risk from stupidity is very much like normal financial risk, though we need more real facts, and fewer stylized ones.
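
To show what such an exercise looks like, here is a sketch of the generic frequency-and-severity recipe (not King's own procedure, and all the numbers are invented): fit a distribution to the sizes of recorded losses, a rate to their frequency, and simulate many years.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical records of individual operational losses (thousands),
    # spanning two years of operations; the numbers are invented.
    loss_sizes = np.array([5.0, 12.0, 3.5, 40.0, 8.0, 6.5, 22.0, 4.0, 15.0, 9.0])
    years_of_data = 2.0

    # Fit a lognormal to loss sizes (one common choice, not the only one)
    # and a Poisson rate to how often losses occur.
    sigma, _, scale = stats.lognorm.fit(loss_sizes, floc=0)
    mu = np.log(scale)
    rate = len(loss_sizes) / years_of_data

    # Monte Carlo over many simulated years: draw each year's loss count,
    # then that many loss sizes, and total them year by year.
    n_years = 50_000
    counts = rng.poisson(rate, size=n_years)
    sizes = rng.lognormal(mu, sigma, counts.sum())
    totals = np.array([year.sum() for year in
                       np.split(sizes, np.cumsum(counts)[:-1])])

    print(f"95th percentile of annual losses: {np.quantile(totals, 0.95):.1f}")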

Malice is more interesting, and harder to deal with. Unlike mistakes and bull-headedness, corruption and deception are the stuff of which interesting stories are made, and sometimes truly spectacular sums of money are involved (vide Barings, Enron). The problem, from our point of view, is that these cases belong more to the realm of game theory than that of statistics --- the strategic, interdependent actions of intelligent creatures, trying to do each other wrong. Perhaps there are statistical regularities to malicious operational risk --- there are, after all, statistical regularities to homicide --- but it is hard to get good data, for four reasons.

First, successful malice, almost by definition, goes unnoticed until it's too late. Second, few institutions survive many really bad acts of malice, so the sample size is intrinsically small. Third, institutions are unlikely to boast of being swindled by their own employees or managers; many cases are thus unavailable to those seeking to augment their own databases. Fourth, there is every reason to think that some of the malicious people are among those who help assess operational risk.

The book under review is intended to be a practical guide to measuring, dealing with, and perhaps even reducing, operational risk. King's strategy basically has two parts. The first is to separate out typical, small, ``operational'' losses, presumably due to stupidity, from rare, large ``extreme'' losses, presumably due to malice. (King doesn't put it quite that way, but that's what it comes to.) Small losses are supposed to have more or less assignable causes: operational losses on bank loans, for instance, happen because you assigned the customer the wrong credit score, or because collateral isn't worth what you thought it was, etc. On the other hand, large, extreme losses are supposed to fall from the sky (``control breakdowns''); no attempt is made to deal with their causes. We shall return to them presently.

King recommends handling ordinary operational errors in one of two ways: with the ``delta method'', or with causal models. The delta method, it turns out, is just error propagation, familiar from introductory lab courses in physics, chemistry, etc. Assume that the errors in your measurements have Gaussian distributions, and that they are small enough that errors in derived quantities are linear functions of measurement errors (i.e., that you can truncate a Taylor series at first order). Then the errors in your derived quantities are also Gaussian, and the variances are just the sums of the measurement variances, weighted by the appropriate partial derivatives. For measurement errors, read operational errors; for errors in derived quantities, read losses. King explains this at what I can only call tedious length.
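
For those who skipped those lab courses, a sketch of the calculation in Python; the derived quantity and the error bars below are invented for illustration, and the method only holds under the stated assumptions (Gaussian, independent, small errors):

    import numpy as np

    def propagate(f, x, sigma, h=1e-6):
        """First-order (delta method) error propagation: variance of f is
        the sum of input variances weighted by squared partial derivatives."""
        x = np.asarray(x, dtype=float)
        grad = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h)
                         for e in np.eye(len(x))])
        return np.sqrt(np.sum((grad * np.asarray(sigma)) ** 2))

    # A hypothetical derived quantity: loss = exposure * default rate.
    f = lambda v: v[0] * v[1]
    # Analytically, sqrt((0.02*5)^2 + (100*0.005)^2), about 0.51.
    print(propagate(f, x=[100.0, 0.02], sigma=[5.0, 0.005]))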

Causal models are more interesting. The basic idea is that you can represent the causal dependencies among different variables by means of a graph, one node for each variable, with an arrow running from X to Y if and only if X is a direct cause of Y. A few reasonable-looking probabilistic assumptions --- for instance, the Markov property, that X is statistically independent of its ultimate causes, given its immediate causes --- lead to a remarkably powerful machinery for inferring causal relations from data, and for predicting the effects of interventions which alter the values of variables. King does not discuss the inferential machinery in great detail. Rather, he suggests that practitioners try to write out, from their own experience, a causal model of risks facing the firm, and then fit the necessary probabilities by data-mining. Causal models are more ``actionable'' than error propagation, in that they give you a better idea of how changes in operations would affect losses.
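
A toy illustration of what prediction-under-intervention means here; the three-variable chain and every number in it are my invention, not King's. Sampling proceeds along the arrows, and an intervention sets a variable outright, severing the arrows into it:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000

    # Invented causal chain: workload -> errors -> loss.
    workload = rng.normal(40.0, 5.0, n)                    # hours per week
    errors = rng.poisson(0.1 * np.maximum(workload - 30.0, 0.0))
    loss = errors * rng.lognormal(1.0, 0.5, n)             # cost per error
    print("mean loss, as things stand:", loss.mean())

    # do(workload = 35): fix the variable directly, as if overtime
    # were capped, and see how the downstream losses respond.
    workload = np.full(n, 35.0)
    errors = rng.poisson(0.1 * np.maximum(workload - 30.0, 0.0))
    loss = errors * rng.lognormal(1.0, 0.5, n)
    print("mean loss under do(workload = 35):", loss.mean())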

As to extreme losses, King suggests modelling them with extreme value theory --- in other words, one wants to fit the tail of a probability curve, where samples are, by definition, sparse. Most of the curve, accordingly, comes from the assumed class of distributions. Specifically, he recommends assuming that the size of extreme losses follows a generalized Pareto distribution (power law), and their arrival times a Poisson distribution. The distribution of extreme losses over the course of, say, a year can then be estimated by Monte Carlo. Where do the data come from for fitting the distributions? One relies, King says, on the internal records of the firm, or on the published records of similar firms, or one employs ``scenarios'' --- i.e., one pulls numbers out of the air. I appreciate the value of seeing what numbers plausible-sounding scenarios lead to, but suspect certain people (e.g., managing directors) will place altogether too much trust in those numbers, unless they are fed them very carefully indeed.
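
A sketch of the Monte Carlo step, assuming generalized-Pareto sizes and Poisson arrivals; the parameters below are pulled from the air, scenario-style, which is rather the point:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    # Invented parameters of the sort one might fit, borrow, or posit.
    xi, scale = 0.3, 10.0   # generalized Pareto shape and scale (millions)
    rate = 0.5              # extreme events per year (Poisson)

    # Simulate many years: a Poisson number of extreme events per year,
    # each with a generalized-Pareto size; total each year's losses.
    n_years = 50_000
    counts = rng.poisson(rate, size=n_years)
    sizes = stats.genpareto.rvs(xi, scale=scale, size=counts.sum(),
                                random_state=rng)
    totals = np.array([year.sum() for year in
                       np.split(sizes, np.cumsum(counts)[:-1])])

    print(f"99th percentile of annual extreme loss: "
          f"{np.quantile(totals, 0.99):.1f} million")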

King obviously meant to write a good book. While none of his ideas are new, they are all up-to-date, generally more advanced than common practice, and certainly sound. (This is particularly true of wanting to use causal models in risk management, an idea I have otherwise seen only in grant applications.) He tries to make the book self-contained; dedicated readers with only a basic grasp of probability theory and the idea of risk could learn enough from this to set up a system of the kind he proposes. The methods would also be useful for operational risk management in almost any kind of enterprise, not just financial institutions. The main drawback to this book is that King is a hopelessly bad writer. He repeats himself, uses too many words, constantly states the obvious, and can't define terms to save his life. (Compare the definitions of ``risk factor'' and ``causal factor'' on page 57.) He writes, in a word, like a consultant. Unfortunately, his book is at least as good as any other I've seen on operational risk, and better than some; recommended as an introduction, with reservations.


Disclaimer: I got a review copy of this book from Quantitative Finance, but I have no stake in the book's success.


276 pp., figures, bibliography, spotty index

Economics / Probability and Statistics

Currently in print as a hardback, ISBN 0-471-85209-0, US$95


Posted 5 April 2003