On the afternoon of July 5th, 2010, most people working in Toronto’s downtown core were discombobulated by a power failure. Once its cause was identified (a transformer fire) and its implications judged relatively minor (90% of power restored within three hours), most were given the rest of the day off, though some had to trudge down thirty storeys of stairs because the elevators were out. The incident did have an ominous antecedent: the massive August 2003 blackout across northeastern North America. It also provoked a sigh of relief: what if the power had failed during the G20 Summit, held just the week before? Whatever one’s views of the Summit or of the global economy, such a power failure could have had quite unexpected outcomes.
Nassim Nicholas Taleb’s wonderfully engaging concept of black swan events (unpredictable, massively game-changing, and rationalized only after the fact) obviously comes to mind. His basic argument is that future events are not predictable, so (from a business perspective) our naïve faith in the risk assessments of various models gives us a false sense of security until a massively game-changing event occurs. Black swan events can be negative (e.g., the 2008 financial crisis) or positive (e.g., the rise of Google). The key issue I want to raise here is risk, specifically in the context of business, where it is assumed to be brought under human control as much as possible.
Advanced research in physics, economics, and mathematics (and even emerging sub-disciplines like chaos theory) is brought to bear on the science of managing risk. The rewards are great, and the reassurance of the science employed gives us, one and all, a sense of confidence that risk is indeed being managed. One of the first big principles I studied in finance was the difference between market risk (the risk shared by all investments in the market) and unique risk (the risk associated with a specific investment). The name of the game has become managing the latter through judicious portfolio management, but rarely does one (in a non-academic setting) consider the implications of the ever-present market, or systematic, risk. It is usually represented as a uniform band, probably because, while the representation acknowledges less-than-perfect markets and exogenous shocks like black swans, those imperfections and shocks, being respectively slight and rare, get spread very thinly over a large volume of transactions and an extended period of time. But to pursue this metaphor of risk being ‘spread out,’ presumably as butter is spread on a slice of bread, the problem is that the topography of risk is not as smooth as a slice of bread; it is a combination of smooth plains as well as steep cliffs and deep crevasses. It is these exogenous, non-quantifiable, non-measurable, and largely undiscoverable species of risk that wipe out whatever risk-mitigating strategies existed prior to their unexpected arrival.
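The diversification argument above, that portfolio management can smooth away unique risk while the market-wide band remains, can be sketched numerically. This is a minimal illustration with assumed figures (the volatility numbers and the equal-weight portfolio are mine, not the essay’s): each asset’s return combines a shared market factor with its own idiosyncratic noise, so the idiosyncratic variance of an equal-weight portfolio shrinks with the number of holdings while the market component stays as an irreducible floor.

```python
# Illustrative sketch, not a real model: all parameters are assumed.
# An equal-weight portfolio of n assets whose returns share one market
# factor plus independent idiosyncratic noise. Averaging cancels the
# idiosyncratic part; the shared (systematic) part cannot be averaged away.

MARKET_VOL = 0.15   # assumed volatility of the common market factor
IDIO_VOL = 0.30     # assumed idiosyncratic volatility of each asset

def portfolio_volatility(n_assets):
    """Volatility of an equal-weight portfolio of n assets.

    Portfolio variance = market_var + idio_var / n, because the
    independent idiosyncratic terms average out as n grows.
    """
    variance = MARKET_VOL ** 2 + (IDIO_VOL ** 2) / n_assets
    return variance ** 0.5

for n in (1, 10, 100, 1000):
    print(f"{n:5d} assets -> volatility {portfolio_volatility(n):.3f}")
```

However many assets are added, the volatility never falls below the assumed 0.15 market floor, which is the essay’s point: diversification manages unique risk only.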
How, for example, could Bear Stearns have insured itself against the actual risk embedded in the mortgage-backed assets that ended up taking down the company? On a broader economic front, one could argue that it is precisely this fear of a black swan event that would so frighten people out of investing that it would, ironically, end up crippling the economy: no one would want to transact business for fear of that huge, unknown, and unexpected risk. And they would be right. But the ‘huge, unknown and unexpected risk’ is not thereby mitigated: we simply push it into a corner of our minds as we place more faith in calculations, analyses, reputations, and novel financial models, until, amid all the paperwork, the Excel sheets, and the models—not to mention the deliberate book-cooking, socialization of losses, and privatization of gains—we have managed to work a junk bond up to a triple-A rating. The risk is not mitigated; it is just swept under the carpet.
Why does this happen? Being more familiar with human behaviour than with the complexities of finance, I would hazard a guess that it stems from a need for control. Market risk is uncontrollable; unique risk, on the other hand, yields itself to measurement, is affected by endogenous factors, and can be spread smoothly along efficient frontiers (assuming a Gaussian distribution of returns, a smooth and well-mapped topography of risk). Therefore, when one firm (or one portfolio-management team) wants to distinguish itself from the others, it seeks the people who can make a difference in the areas that can be controlled; hence the physicists on Wall Street and the superstar financial consultants. As this controllable aspect of risk (unique risk and its predictive models) increasingly becomes the differentiating factor between competitors, the ante gets upped and the war for talent reaches unprecedented levels. Meanwhile, the really significant black-swan-type risks are safely ignored until it is too late and all hell breaks loose, as it did in 2008.
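The Gaussian assumption flagged in the parentheses above is where Taleb’s critique bites, and a small simulation can show why. This is a hedged sketch with assumed parameters (the Student-t distribution with 3 degrees of freedom stands in for a generic fat-tailed world; nothing here comes from the essay itself): it compares how often a “5-sigma” move occurs under the Gaussian model versus under fat-tailed returns of the same measured volatility.

```python
import math
import random

random.seed(42)

# Illustrative sketch, assumed parameters: Gaussian models predict that a
# 5-sigma move is essentially impossible; a fat-tailed distribution with
# the same measured volatility produces such moves orders of magnitude
# more often. Student-t with 3 degrees of freedom plays the fat-tailed role.

def normal_tail(k):
    """P(Z > k) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(k / math.sqrt(2))

def student_t3():
    """One draw from a Student-t with 3 df: Z / sqrt(chi-squared_3 / 3)."""
    z = random.gauss(0, 1)
    chi2 = sum(random.gauss(0, 1) ** 2 for _ in range(3))
    return z / math.sqrt(chi2 / 3)

N = 200_000
draws = [student_t3() for _ in range(N)]
mean = sum(draws) / N
sd = math.sqrt(sum((x - mean) ** 2 for x in draws) / N)

k = 5  # a "5-sigma" move, measured in the sample's own volatility
fat_tail_freq = sum(1 for x in draws if abs(x - mean) > k * sd) / N
gauss_freq = 2 * normal_tail(k)

print(f"Gaussian model says:  {gauss_freq:.2e}")  # about 5.7e-07
print(f"Fat-tailed world says: {fat_tail_freq:.2e}")
```

Under the Gaussian model a 5-sigma day should occur roughly once in two million observations; the fat-tailed sample produces them thousands of times more often. The risk was never in the smooth middle of the distribution, only in the cliffs at its edges.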
The recent Toronto blackout is therefore a reminder that the world is not as predictable as we expect. Sometimes unpredictable events happen, and they have consequences. Yes, we all know that, and yet we get out of bed each morning and get on with the business of the day. The value of the reminder lies in never forgetting how much of the real impact of risk is exogenous, unquantifiable, and unpredictable. That sobering consideration, tempering our decisions and actions as we carry out the business of the day, may well be our last defence against the market risk that threatens…well, the business of the day.
Taleb’s “Ten Principles for a Black Swan Robust World” (Financial Times, April 7, 2009) is a short, entertaining, and provocative read. <http://www.fooledbyrandomness.com/tenprinciples.pdf>