Carillion: Old Lessons from a New Failure

Carillion.

A new year and a new corporate death: once again, an organisation employing tens of thousands of people, with revenues measured in billions, and which, according to its last published corporate accounts, was in rude health, has collapsed into insolvency with indecent haste and to the apparent astonishment of all the wise heads who are expected to know and understand these matters – boards of directors, pension trustees, auditors, financial regulators, and the government itself.

Consider a few facts. In March 2017 the board of directors approved Carillion's statutory accounts for the 2016 financial year. The same accounts were, of course, given a clean bill of health by Carillion's auditors, KPMG. Somehow, little more than three months later, the same board was moved to issue a profit warning and part company with the CEO, who only a short while before had received generous “performance-related” bonuses. By the end of September 2017 the board was presenting the results of a “contracts and strategic review”, which identified, among other things, an urgent need to dispose of assets and implement cost-reduction measures to buttress operational cash flow. The review also highlighted the consequences of accepting contracts where there was a “high degree of uncertainty around key assumptions”. By January 2018 the board was petitioning the Government for an emergency cash bailout. When this was refused, the company was placed into insolvency proceedings with reported debts of approximately £5 billion and cash reserves of just £29 million.

One of our favourite observations is that risk blindness is the result of familiarity with imperfect information. Perhaps the board of Carillion were collectively victims of risk blindness. On the other hand, if reports in the press are accurate, the board has for some years been in the habit of raising cash from asset sales to fund both dividend payments and executive compensation. The former, it has been suggested, falls within the definition of paying dividends out of capital, which is illegal in the UK. One assumes the directors of Carillion, individually and collectively, were aware of the potential illegality of their decisions. If so, it did not stop them.

Consequently, it might be tempting to regard the case of Carillion, along with many other corporate failures, as an exceptional example of a failure in governance. As the Carillion story unfolds and the wheels of bureaucracy turn to initiate any number of official enquiries, more evidence may well emerge of individual or collective wrongdoing. In some respects, though, that outcome would obscure the general lessons the Carillion experience holds for all company directors and senior executives whose responsibilities include the governance of risk.

According to the Carillion 2016 annual report and accounts, the board of directors maintained a rigorous and robust risk management process. The report identifies what the board considered to be the principal risks facing the company. Of particular interest is the fact that the board judged there to be no significant risks to the company's future prosperity that merited a grade higher than medium on a “net” basis, i.e. after taking into account potential mitigation actions. It hardly needs saying that this supposedly robust system was failing dramatically even as the 2016 accounts were being approved, and had probably been failing for a significant period before that.


As we have observed and commented upon many times, standard approaches to risk management in many organisations are not only inadequate but frequently dangerously misleading with regard to existential threats to the company. The board's very familiarity with this misleading information leads directly to blindness to the existence of potential or actual existential threats. This is a lesson that all boards and directors can learn from Carillion, as well as from many other equally painful examples.



Different Approaches to Classifying Risks

Classification is one of the most important functions humans perform to speed our cognitive processing of the overwhelming amount of external stimuli we absorb every day.

In the world of risk, many different classification schemes are employed, both formal and informal. But after many years of working with them, we remain unsure whether they clarify or further confuse many of the underlying risk governance and management issues facing boards and leadership teams.

In this note, we’ll try to categorize some of the different risk classification schemes we’ve encountered, and highlight the distinctions they seek to draw.

Broadly speaking, various classification schemes can be grouped into four categories:

  • Potential Causes of Future Risk-Related Events

  • Risk-Related Events

  • Consequences of Risk-Related Events

  • Other Approaches

Potential Causes of Future Risk-Related Events

We have often noted how in complex socio-technical systems, causal reasoning is often difficult, because of the dense mix of interrelationships they contain, many of which are characterized by time delays and non-linearities.

However, we often see analyses that classify risks in terms of broad causal forces, such as technology change; environmental change (from the macroscopic, e.g. climate change, to the microscopic, e.g. antimicrobial resistance); economic and military developments; demographic and social forces; and political and regulatory trends. The World Economic Forum's annual Global Risks Report is a good example of the “risk event causes” approach.

Risk-Related Events

This classification scheme is the most traditional, as it is closely tied to frequentist statistics and actuarial science methods that facilitate the quantification, pricing, and transfer of certain types of risk. A good example of this approach is “A Common Risk Classification System for the Actuarial Profession” by Kelliher et al.

In practice, this approach typically takes the form of dividing potentially harmful events into business, market, credit, operational, and, more recently, cyber risks.

However, as was made painfully apparent in the 2008 financial crisis (not to mention the history of war and politics), this approach suffers from four key shortcomings.

First, not all risks can be easily represented by discrete events; some take the form of gradually accumulating forces that eventually pass a tipping point, causing adverse consequences to accelerate.

Second, the discrete event approach often struggles with “rare event” or “tail” risks, for which historical experience is largely lacking.

Third, capturing the interrelationship between various risks continues to be a challenge, especially in quantitative models.

And fourth, it neglects the fact that complex socio-technical systems are usually characterized by ongoing evolution and the emergence of new phenomena, which reduce the usefulness (or at least the accuracy) of the past as a guide to the future.

Consequences of Risk-Related Events

This is perhaps the broadest approach that is used to classify risk, though at the same time the least consistent. It includes relatively organized approaches to classifying the consequences of risk events (e.g., revenue reduction, cost increase, fall in asset value, and/or increase in liability value), as well as individual categories that aren’t part of an integrated system of consequences (e.g., liquidity risk, reputation risk, strategic/existential risk, etc.).

Risk classification based on consequences also raises questions about sequencing – e.g., what is a first, second, or third order impact. For example, a serious cyber event could lead to weakening sales volumes, pricing pressures, and/or rising costs, which in turn would depress margins, and eventually lead to liquidity problems.

Other Approaches

Distinct from the logically sequenced classification schemes based on causal forces, risk events, and subsequent consequences are a number of others that take a different approach.

One example is the distinction that is often made between risk, uncertainty, and ignorance. Events characterized as “risks” can be described statistically, and thus priced and usually transferred. In contrast, “uncertainties” – which cannot be described using frequentist statistics – are both far more common and impossible to transfer via derivative and insurance markets (though they can sometimes be hedged via other means). And ignorance – the realm of Donald Rumsfeld’s famous “unknown unknowns” – is ever present, but of unknowable scope and potential danger.

Another example is the characterization of potential risk events in terms of their relationships to other risk events, and thus their potential to trigger “risk cascades” with non-linear impact.

A final example is the characterization of risks according to either the velocity at which they are maturing, or the net difference between the time a risk will take to mature, given that velocity, and the time required to formulate and execute an adequate organizational response.

All of these various risk classification approaches have their strengths and weaknesses; each highlights certain aspects of risk, but sometimes at the price of blinding us to others. It is for that reason that we recommend using a combination of approaches – or different frames – when analyzing the risks facing an organization.

This approach almost always produces richer board and management team discussions about risks, as well as superior decisions about how best to govern and manage them.

Probability versus Plausibility in the Assessment of Uncertainty

In their paper “Pursuing Plausibility”, Selin and Pereira note that “our alarming inability to grapple with manifold uncertainty and govern indeterminate complex systems highlights the importance of questioning our contemporary frameworks for assessing risk and managing futures.” Here at Britten Coyne, we couldn’t agree more.

In this note, we’ll look at two concepts – plausibility and subjective probability – that have been put forth by various authors as means of approaching the problem of uncertainty.

Let’s start with some brief definitions. We use the term “risk” to denote any lack of certainty that can be described statistically. Closely related to this is traditional or “frequentist” probability, which is based on analysis of the occurrences and impacts of repeated phenomena, like car accidents or human heights and weights.

In contrast to risk, and as described by writers like Knight and Keynes, uncertainty denotes any lack of certainty in which some combination of the set of possible outcomes, their frequency of occurrence, and/or their impact should they occur cannot be statistically described based on an analysis of their past occurrences. For example, the future impact of advances in automation, robotics, and artificial intelligence on labor markets cannot be assessed on the basis of history.

However, we are still left with the need to make decisions in the face of this uncertainty. This gives rise to three different approaches. The first is what Keynes called “conventions” or assumptions about the future that are widely used as a basis for making decisions in the present. The most common of these is that the future will be reasonably like the present.

As we have repeatedly noted, this assumption is often fragile, especially in complex adaptive socio-technical systems, which give rise to emergent behavior that is often non-linear (i.e., aggregate system behavior that arises from the interaction of agents and cannot be predicted on the basis of their individual decision rules). This is especially so in our world of increasingly dense network connections, which accentuate the behavioral impact of social observation and thus the system's tendency toward sudden non-linear changes that negate conventional assumptions.

The second is the use of subjective degrees of belief in different possible future outcomes, expressed in the quantitative language of probability. This approach goes back to Thomas Bayes, and is related to the later work of Leonard Savage on subjective expected utility. The possible futures themselves can be developed using a wide variety of methods, from quantitative modeling to qualitative scenario generation or pre-mortem analyses. More broadly, all of these methods are examples of counterfactual reasoning, or the use of subjunctive conditional claims (e.g., employing “would” or “might”) about alternative possibilities and their consequences.

The third approach to uncertainty employs the qualitative concept of plausibility or its inverse, implausibility. For example, one approach to future scenario building suggests judging the reasonableness of the results (either individually or collectively) by their plausibility.

A common observation regarding plausibility is the difficulty most people have in defining it, and distinguishing it from degree of belief/subjective probability.

In “A Model of Plausibility”, Connell and Keane provide perhaps the best examination of how, as a practical matter, human beings implicitly define plausibility (using both a quantitative model and human experimental results).

Their summary is worth quoting at some length: “A plausible scenario is one that has a good fit with prior knowledge…(1) Degree of Corroboration: a scenario should have several distinct pieces of prior knowledge supporting any necessary inferences…(2) Degree of Complexity: the scenario should not rely on extended or convoluted justifications…(3) Extent of Conjecture: the scenario should avoid, where possible, the introduction of many hypotheticals.”

Put differently, a scenario’s plausibility will increase “if it has minimal complexity and conjecture, and/or maximal corroboration.” They go on to create an index in which plausibility equals one minus implausibility, with the latter defined as the extent of complexity divided by (the extent of corroboration less the number of conjectures used).
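Expressed symbolically, and using only the relationships described above (the variable names are our own shorthand, not Connell and Keane's notation), the index amounts to something like:

```latex
% A rough restatement of the index as described in the text; the symbols
% are our own shorthand, not Connell and Keane's notation.
\[
  \text{plausibility} \;=\; 1 - \text{implausibility},
  \qquad
  \text{implausibility} \;=\;
    \frac{\text{complexity}}{\text{corroboration} - \text{conjecture}}
\]
```

On this reading, more corroboration shrinks the implausibility term, while longer justification chains and additional hypotheticals inflate it.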

In another paper (“Making the Implausible Plausible”), Connell shows how the perceived plausibility of a scenario can be increased when it is represented as a causal chain or temporal sequence.

As you can see, many of the factors which make a scenario seem more plausible to an audience are also ones that would likely increase the same audience’s estimate of its subjective probability.

So is there any reason to use plausibility instead of subjective probability?

Writing in the 1940s, the English economist George Shackle proposed what he called “Potential Surprise Theory” as an alternative to the use of subjective probability when thinking about potential future outcomes.

Shackle’s objection to the use of subjective probability was what he termed “the problem of additivity” in situations of uncertainty, where the full range of possible outcomes is unknown. Assume that you initially identify four possible outcomes and assign them probabilities of 50%, 35%, 10%, and 5%, following the normal requirement that the probabilities for a given set of outcomes sum to 100%.

What happens if you later identify two new potential outcomes? Logically, the subjective probabilities of the new set of six possible outcomes should be adjusted so that they once again sum to 100%. But if those outcomes are generated by complex socio-technical systems (as is usually the case for many business and political decisions), causal relationships are only partially understood, and often confusing (e.g., because of time delays and non-linearities). This makes it very hard to adjust subjective probabilities on any systematic basis.
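As a purely illustrative sketch (the 10% weights assigned to the new outcomes are our own assumption), the renormalization Shackle objected to looks like this:

```python
# Illustrative sketch of the additivity problem described above.
# Subjective probabilities over identified outcomes must sum to 1, so every
# newly identified outcome forces an adjustment of the whole set.

probs = {"A": 0.50, "B": 0.35, "C": 0.10, "D": 0.05}   # sums to 1.00

# Two new outcomes are later identified; assume (for illustration only)
# that each is judged to deserve 10% of the belief mass.
new = {"E": 0.10, "F": 0.10}

# The original four must be scaled down so that all six again sum to 1.
scale = 1.0 - sum(new.values())                     # 0.80
rebased = {k: round(v * scale, 3) for k, v in probs.items()}
rebased.update(new)

print(rebased)                 # e.g. {'A': 0.4, 'B': 0.28, 'C': 0.08, 'D': 0.04, 'E': 0.1, 'F': 0.1}
print(sum(rebased.values()))   # sums (to within floating point) to 1.0
```

The arithmetic itself is trivial; the difficulty Shackle pointed to is that in a complex system there is no principled way to decide how much belief the new outcomes deserve, and therefore how much to take away from the old ones.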

Moreover, given the presence of emergence in complex adaptive systems, the full set of possible outcomes that such systems can produce will never be known in advance, making it impossible for the probabilities associated with those possibilities that have been identified to logically sum to 100%.

Instead of quantitative probabilities, Shackle suggested that we focus on the degree of implausibility associated with possible future outcomes, as measured by the degree of surprise you would feel if a given outcome actually occurred. Importantly, and unlike probability, the extent of your disbelief (potential surprise) in a set of hypotheses does not need to sum to 100%, nor does your degree of disbelief in individual hypotheses need to be adjusted if additional hypotheses are added to a set.
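By way of contrast, here is a minimal sketch (the 0-to-1 scale and the specific scores are our own illustrative assumptions) of how potential surprise scores behave when new hypotheses are added:

```python
# Illustrative sketch: potential surprise (degree of disbelief) is assessed
# hypothesis by hypothesis, so adding new hypotheses does not force any
# adjustment of the scores already assigned.

surprise = {
    "outcome_A": 0.0,   # would not surprise us at all
    "outcome_B": 0.2,
    "outcome_C": 0.7,
    "outcome_D": 0.9,   # would astonish us
}

# Newly identified hypotheses simply receive their own scores.
surprise["outcome_E"] = 0.4
surprise["outcome_F"] = 0.0   # several hypotheses can all carry zero surprise

# No constraint requires these values to sum to 1 (or to anything else).
print(sum(surprise.values()))
```

The point is simply that each score stands on its own, rather than competing for a fixed budget of belief.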

Just as important, in a group setting it is far easier to delve into the underlying reasons for an estimated degree of implausibility/potential surprise (e.g., the complexity of the logic chain, number of known unknowns and conjectures about them, etc.) than it is to do this for a subjective probability estimate (just as an analyst’s degree of uncertainty about an estimate is easier to “unpack” than her or his confidence in it).

In sum, most of the decisions we face today are characterized by true uncertainty rather than risk. Rather than defaulting to subjective probability methods when analyzing these decisions, managers should consider complementing them with an approach based on implausibility and surprise. Ideally, the two methods should arrive at the same conclusion; their real value, however, lies in situations when they do not agree, which forces boards and management teams to more deeply explore the underlying uncertainties confronting them.
