Chronic Illnesses Kill Organizations Too

Most of our work on Strategic Risk examines the effects of uncertainty in a world of complex adaptive systems: how and why catastrophic, existential threats to businesses develop, and what lessons boards of directors and executive teams can learn from these dynamics. Yet there is another class of threat to a business that can be just as serious in its effects and may also eventually lead to a firm’s demise: serial underperformance.

Analogous to a chronic rather than an acute illness, businesses (or non-profit organizations) that fail to achieve health year after year, sometimes for decades, also pay a heavy price for their failure. Moreover, their long-term enfeeblement inevitably reduces organizational resilience and eliminates many options for adapting in the face of external crises. The weak die first.

This point was highlighted for us recently by Terry Smith, the iconoclastic banker, stockbroker, author* and CEO of Fundsmith. Commenting on fashions in investing, Smith notes that serial underperformers will almost never represent better value, even if the stock price appears to offer a “value investment” opportunity. He cites Charlie Munger of Berkshire Hathaway:

“Over the long term, it’s hard for a stock to earn a much better return than the business which underlies it earns. If the business earns six percent on capital over forty years and you hold it for that forty years, you’re not going to make much different than a six percent return - even if you originally buy it at a huge discount. Conversely, if a business earns eighteen percent on capital over twenty or thirty years, even if you pay an expensive looking price, you’ll end up with one hell of a result.” (Our emphasis added.)

Smith adds: “Mr Munger is not offering a theory or an opinion; what he is saying is a mathematical certainty.”

To put this another way, the cost of serial failure is severe and will inevitably end up as an existential threat to an organization. That is why it represents a Strategic Risk.

Now, no doubt some readers are already objecting that they cannot be held accountable for events four decades in the future. Leaving aside the point that, at least in the UK, directors have a statutory duty to ensure the long-term success of the enterprise (so perhaps that four-decade outcome is your accountability after all), serial underperformance does not require that kind of timescale to produce serious adverse effects.

Consider a firm earning a 2% return on capital employed (ROCE). Over five years the cumulative return, with full reinvestment, is approximately 10.4%. For a firm earning 5% ROCE, the comparable cumulative return is 27.6%, i.e. more than two and a half times better. Five years is quite a feasible time horizon for an existential threat to emerge. If the second company has been effectively anticipating and assessing such a potential threat, it has had five years in which to prepare for adaptation and mitigation (assuming it has been monitoring indicators of the potential event). It has also had more resources to dedicate to that task.
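To check the arithmetic, here is a minimal compounding sketch in Python (the ROCE figures are those used above; full reinvestment of earnings is assumed):

```python
# Cumulative return from reinvesting all earnings at a constant ROCE.
def cumulative_return(roce: float, years: int) -> float:
    """Total growth in capital after `years` years of full reinvestment."""
    return (1 + roce) ** years - 1

low = cumulative_return(0.02, 5)   # ~0.104, i.e. ~10.4%
high = cumulative_return(0.05, 5)  # ~0.276, i.e. ~27.6%
print(f"2% ROCE over 5 years: {low:.1%}")
print(f"5% ROCE over 5 years: {high:.1%}")
print(f"Ratio: {high / low:.2f}x")  # ~2.65x: more than two and a half times better
```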

When we examine cases of corporate failure to identify their root causes and repeating patterns, we frequently observe a chain of causality stretching back five, ten or even twenty years. These failures are mostly directly attributable to failures of anticipation (risk blindness arising from reliance on faulty strategic assumptions), failures of assessment (which deepen that blindness through the false comfort of subjective risk probabilities), and failures of action to monitor and mitigate the effects of emerging threats. When serial underperformance is added to this deadly mix, the result – the demise of the corporation – becomes almost inevitable.

When did your board last seriously seek to anticipate threats to the health of the enterprise (and how to adapt to them) over even a five-year period?

* Accounting for Growth, Random House, London



Reduce Overload by Seeking High Value Information

At Britten Coyne Partners, we provide clients with advice, education, and forecasting offerings to help them better anticipate, accurately assess, and adapt in time to emerging threats to their strategy and survival.

One of the points we stress is the importance of reducing overload by searching for high value information, and using it to update prior beliefs (e.g., estimates, forecasts, models). But what constitutes high value information?

We focus on two types.

The first is “high likelihood indicators.” In the face of an uncertain future, we often hold a mix of probabilistic beliefs about the different scenarios or outcomes that could develop. High likelihood indicators are valuable because their presence (or absence) is much more likely to be observed under one scenario or outcome than under the others. Put differently, high likelihood indicators reduce our uncertainty about the future.
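To make this concrete, here is a minimal sketch of the Bayesian updating logic behind high likelihood indicators (our own illustration, with made-up probabilities): the more an indicator’s likelihood differs across scenarios, the more observing it shifts our beliefs.

```python
def posterior(prior: float, p_ind_given_a: float, p_ind_given_b: float) -> float:
    """P(scenario A | indicator observed), for two mutually exclusive scenarios."""
    evidence = p_ind_given_a * prior + p_ind_given_b * (1 - prior)
    return p_ind_given_a * prior / evidence

# A high likelihood indicator: nine times more likely under scenario A than B.
print(posterior(prior=0.5, p_ind_given_a=0.9, p_ind_given_b=0.1))   # 0.90
# A weak indicator: nearly as likely under either scenario.
print(posterior(prior=0.5, p_ind_given_a=0.55, p_ind_given_b=0.45)) # 0.55
```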

The second type of high value information is anything that triggers the feeling of surprise, which is usually evidence that is at odds with what our mental model of a system or situation would lead us to expect. Surprise is valuable because it increases our uncertainty about the accuracy of our mental models, and signals that we need to improve them.

Unfortunately, the feeling of surprise, and its ability to focus our attention on high value information, is very often fleeting. As Daniel Kahneman has noted, our “System 1” brain automatically seeks to make new sensory inputs cohere with our existing beliefs. This often happens very quickly; we find ourselves remembering feeling surprised, but forgetting what it was that triggered it.

For that reason, leveraging surprise requires us to call on “System 2”, our conscious deliberative brain. In our experience, the best way to do this is to write down the information that triggered the feeling of surprise, so that we can later consider its implications.




Implications of the Queen's Speech for Board Strategic Risk Governance

Today's Queen's Speech has potentially important implications for board practice and strategic risk governance, initially in the UK, and perhaps later in other Anglosphere nations.

The recent update to the UK Corporate Governance Code set a new requirement for boards to carry out a robust assessment of emerging risks. Today's speech announced the creation of a new Audit, Reporting, and Governance Authority, which may have the power to pursue criminal charges against directors who make misleading statements to financial markets.

Clearly, this will very substantially raise the bar for the required rigour of the processes and systems boards use to anticipate, assess, and adapt in time to emerging strategic threats, which is the focus of our work at Britten Coyne Partners.

Important Lessons from Two New Reports on Communicating Uncertainty

Risk, Uncertainty, and Ignorance are terms which often confuse us and bedevil our decisions.

Let’s therefore begin with some definitions.

As we use the term, “risk” refers to situations in which the frequency distribution of the outcomes of the data generating process of interest is known, and the underlying process will not change in the future. Some writers call this “aleatory” or “irreducible” uncertainty, i.e., uncertainty caused by random factors, such as measurement error.

In situations of so-called “Knightian” uncertainty (after the economist Frank Knight, who first wrote about it in 1921 in “Risk, Uncertainty, and Profit”), these conditions don’t hold. Neither the historical frequency distribution of possible outcomes nor whether the underlying outcome generating process is stationary or evolving is known. However, in theory these are knowable. Some have termed this “epistemic” uncertainty, as it arises from a lack of knowledge that could, in principle, be discovered.

In situations of ignorance, not only are we unaware of the range of possible outcomes and the nature of the outcome generating process, but these are also unknowable. As a result, we have no reliable basis for forecasting the future. John Maynard Keynes wrote about this in 1921 in his “Treatise on Probability”, and later in 1936 in his “General Theory of Employment, Interest, and Money”, in which he highlighted our reliance in such circumstances on socially accepted (but ultimately fragile) “conventions” that enable action in the face of our ignorance. Some have termed this “ontological” uncertainty; it is what Donald Rumsfeld famously called “unknown unknowns”. As we have noted elsewhere, ontological uncertainty increases exponentially as socio-technical systems become more complex over time.

Too often, we confuse these three types of uncertainty and their implications. When people discuss uncertainty, they often do so in ways that imply it is relatively well-behaved risk. However, most real-world decisions are made in the face of far more challenging epistemic and ontological uncertainty, which people and organizations are often reluctant to acknowledge.

For example, in “Communicating Uncertainty in Policy Analysis”, Professor Charles Manski of Northwestern University notes that “the term ‘policy analysis’ describes scientific evaluations of the impacts of past public policies and predictions of the outcomes of potential future policies. A prevalent practice has been to report policy analysis with incredible certitude. That is, exact predictions of policy outcomes are routine, while expressions of uncertainty are rare. However, predictions and estimates often are fragile, resting on unsupported assumptions and limited data. Therefore, the expressed certitude is not credible.”

On the other hand, the financial services, military, and intelligence communities have been more willing to recognize and try to communicate epistemic and ontological uncertainty, but have done so using multiple and often confusing definitions and approaches.

In light of this background, we were very excited to see the recent publication of two large analyses of the challenges in communicating uncertainty. The first is by the NATO Science and Technology Organization: “Assessment and Communication of Uncertainty in Intelligence to Support Decision Making”. The second is by the Royal Society: “Communicating Uncertainty About Facts, Numbers, and Science”, by van der Bles et al.

Both of these are well worth a read in their entirety. Here we will just present some highlights.

Chapter 19 of the NATO report is titled “How Intelligence Organizations Communicate Confidence (Unclearly)”. It notes that, “Given that intelligence is typically derived from incomplete and ambiguous evidence, analysts must accurately assess and communicate their level of uncertainty to consumers. One facet of this perennial challenge is the communication of analytic confidence, or the level of confidence that an analyst has in his or her judgements, including those already qualified by probability terms such as ‘very unlikely’ or ‘almost certainly’. Consumers are better equipped to make sound decisions when they understand the methodological and evidential strength (or flimsiness) of intelligence assessments. Effective communication of confidence also militates against the pernicious misconception that the intelligence community (IC) is omniscient.”

“As part of broader efforts to improve communication fidelity and rein in subjectivity in intelligence production, most intelligence organizations have adopted standardized lexicons for rating and communicating analytic confidence. These standards provide a range of confidence levels (e.g., high, moderate, low), along with relevant rating criteria, and are often paired with scales used to express estimative probability.”

Here, for example, is the Canadian approach:

“CFINTCOM instructs analysts to clearly indicate both their level of confidence and their reasons for ascribing it. Analytic confidence is based on three main factors:

  • Evidence: The strength of the knowledge base, to include the quality of the evidence and our depth of understanding about the issue.

  • Assumptions: The number and importance of assumptions used to fill information gaps.

  • Reasoning: The strength of the logic underpinning the argument, which encompasses the number and strength of analytic inferences as well as the rigour of the analytic methodology applied to the [intelligence] product” …

“Analysts are expected to outline their confidence ratings in a dedicated textbox, or to integrate them into the narrative text of the product:

Confidence Levels

  • High: Well-corroborated information from proven sources, low potential for deception, noncritical assumptions and/or gaps, or undisputed reasoning.

  • Moderate: Partially corroborated information from good sources, moderate potential for deception, potentially critical assumptions used to fill gaps, or a mix of inferences.

  • Low: Uncorroborated information from good or marginal sources, high potential for deception, key assumptions used to fill critical gaps, or mostly weak inferences.”

After a very interesting comparison of the confidence rating scales used by various organizations, the NATO report notes that “the analytic confidence standards examined generally incorporate the following determinants:

  • Source reliability
  • Information credibility
  • Evidence consistency/convergence
  • Strength of logic/reasoning
  • Quantity and significance of information gaps and assumptions used [to fill them].”

However, it also notes that, “Few standards attempt to operationalize these determinants or outline formal mechanisms for evaluation. Instead, they tend to provide vague, qualitative descriptions for each confidence level, which may lead to inconsistent confidence assessments.”
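To see what operationalizing these determinants might look like, here is a hypothetical sketch. It is ours, not the report’s: the determinant names follow the NATO list above, while the 1-5 scale and the combining rule are invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class ConfidenceAssessment:
    # Each determinant is scored 1 (weak) to 5 (strong); the names follow
    # the NATO list above, but the scale and rule are hypothetical.
    source_reliability: int
    information_credibility: int
    evidence_convergence: int
    strength_of_reasoning: int
    gaps_and_assumptions: int  # 5 = few/minor gaps, 1 = many critical gaps

    def level(self) -> str:
        """Map determinant scores to a confidence level (illustrative rule only)."""
        scores = [
            self.source_reliability,
            self.information_credibility,
            self.evidence_convergence,
            self.strength_of_reasoning,
            self.gaps_and_assumptions,
        ]
        avg = sum(scores) / len(scores)
        # A single very weak determinant caps confidence, however strong the rest.
        if min(scores) <= 2 or avg < 3:
            return "Low"
        if avg >= 4:
            return "High"
        return "Moderate"

rating = ConfidenceAssessment(4, 4, 5, 4, 2)
print(rating.level())  # "Low": critical gaps cap the overall rating
```

The point is not this particular rule, but that any explicit rule makes confidence ratings auditable and more consistent across analysts.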

The NATO authors also observe that, “intelligence consumers tend to disaggregate analytic confidence into three dimensions:

  • The reliability of evidence [higher means higher confidence in the assessment];
  • The range of reasonable expert opinion [narrower means higher confidence]; and,
  • The potential responsiveness of the analysis to new information [lower means higher confidence].”

The chapter concludes by recommending that analysts combine a verbal expression of analytic confidence with a quantitative one, along with an “informative, case-specific rationale” for the chosen confidence level.

This NATO chapter highlights why, in our forecasting work, we prefer to focus on the level of uncertainty rather than the level of confidence. Discussions focused on the forecaster’s level of confidence are not only confusing; they can also very easily (and often do) trigger emotional defenses. In contrast, discussions focused on the key uncertainties associated with a forecast (and how they might be reduced) are almost always more productive.

In “Communicating Uncertainty About Facts, Numbers, and Science” the authors’ focus is on epistemic uncertainty. They begin by noting that, “in an era of contested expertise, many shy away from openly communicating their uncertainty about what they know, fearful of their audience’s reaction.” For this reason, they explore what the limited research that has been done tells us about the effects of communicating our epistemic uncertainty on listeners’ cognition, affect, trust, and decision-making.

Their goal is to provide, “a cohesive framework that aims to provide clarity and structure to the issues surrounding communication [of uncertainty].”

This framework has three elements: The object of epistemic uncertainty, the source of uncertainty, and the level of uncertainty.

As the authors describe it, the possible objects of epistemic uncertainty include:

  • “Facts” that are (at least theoretically) directly verifiable;
  • “Numbers”, that is, “continuous variables that describe the world, that may, at least in principle, be directly observable, or they may be theoretical constructs which are used as parameters within a model of the world”; and,
  • “Scientific hypotheses”, that is, “theories about how the world works, expressed as structural models of the relationships between variables.” Depending on the circumstances, this can cross the border into ontological uncertainty, as the authors acknowledge: “We should in principle distinguish between uncertainty about the adequacy of a model to represent the world and uncertainty about the world itself…However, in practice, the lines between these often get blurred.”

With respect to the sources of uncertainty, the authors distinguish between:

  • Irreducible variability – i.e., aleatory uncertainty;
  • Limited knowledge or ignorance about the underlying results-generating processes; and,
  • Disagreements among experts about the meaning of available evidence.

Finally, the authors assert that there are two possible “levels” of uncertainty.

  • “Direct” or “first-order” uncertainty about one or more of the sources of uncertainty;
  • “Indirect” or “second-order” uncertainty “about the quality of the knowledge and evidence upon which our uncertainty assessments are based”, which underlies our subjective judgment about the confidence we have in any claim we make (a distinction illustrated in the sketch below).
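One way to make this distinction concrete (our illustration, not the authors’) is to treat first-order uncertainty as a point probability and second-order uncertainty as a distribution over that probability, whose spread reflects the quality of the underlying evidence:

```python
import numpy as np

rng = np.random.default_rng(42)

# First-order uncertainty: a single point probability for a claim (say, 0.7).
# Second-order uncertainty: a distribution over that probability. Both
# examples below have the same mean (0.7) but very different evidential support.
strong_evidence = rng.beta(70, 30, size=100_000)  # tight around 0.7
weak_evidence = rng.beta(3.5, 1.5, size=100_000)  # same mean, much wider

for name, draws in [("strong", strong_evidence), ("weak", weak_evidence)]:
    lo, hi = np.percentile(draws, [5, 95])
    print(f"{name} evidence: mean={draws.mean():.2f}, "
          f"90% interval=({lo:.2f}, {hi:.2f})")
```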

The authors then proceed to a discussion of the impact of communicating uncertainty on “recipients’ cognition, affect, trust, and decision-making.”

They note at the outset the relative lack of research in many of these areas, and that much of the research that does exist focuses on aleatory, rather than epistemic or ontological, uncertainty. They also note that most of this research addresses direct uncertainty, not indirect uncertainty about the quality of the underlying evidence.

With respect to cognitive reactions to communications regarding uncertainty, the authors note the conclusion of many studies that people attach a wide range of probabilities to “words of estimative probability” such as “unlikely”, “possible”, “probable”, and “almost certain” [hence the recommendation that verbal expressions be accompanied by quantitative probabilities]. They also note that, when presented with a “most likely” result within a numerical range of possible outcomes, most people will interpret the range as either a uniform (flat) or normal (bell curve/Gaussian) distribution. This can be extremely problematic, because complex adaptive systems tend to generate Pareto/power law distributions of outcomes, not ones that are flat or bell shaped.
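A minimal sketch of why the assumed distribution matters (our own illustration, using a rescaled Student-t distribution as an easy-to-simulate stand-in for fat tails): two distributions with the same standard deviation assign very different probabilities to extreme outcomes.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

normal = rng.normal(size=n)
# Student-t with 3 degrees of freedom, rescaled to unit standard
# deviation (the std of a t_3 variable is sqrt(3)).
fat_tailed = rng.standard_t(df=3, size=n) / np.sqrt(3)

for k in (3, 4):
    p_normal = (np.abs(normal) > k).mean()
    p_fat = (np.abs(fat_tailed) > k).mean()
    print(f"P(|x| > {k} sd): normal ~{p_normal:.1e}, fat-tailed ~{p_fat:.1e}")
```

At four standard deviations, the fat-tailed distribution produces extreme outcomes roughly a hundred times more often than the normal distribution would suggest.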

Moving on to people’s emotional reactions to communications of uncertainty, the authors find that “dual-process theories in psychology commonly describe two systems of thinking, with one system being more ‘analytic’, following rules of logic, probability and deliberation, whereas the other is more associative, fast, intuitive and affective”. These lead to people “processing uncertain information in two qualitatively different ways, differentiat[ing] between processing risk as analysis versus risk as feelings…with the latter often dominant in processing risk information”.

The authors also note that, “research into how the communication of uncertainty impacts trust and credibility is very sparse.” That said, “at a generic level, there are some near-universal aspects of human social cognition that assist people in determining whom and what information to trust. Two of these basic dimensions include ‘competence’ and ‘warmth’. Affect and cognition fuse together here in establishing trust…In order to be perceived as credible, both ‘cold’ expertise is required (knowledgeability) as well as a perceived motivation to be sincere and truthful (warmth), that is, a feeling of trust… Yet, whether greater transparency in the communication of uncertainty will enhance credibility and public trust remains an open empirical question. On the one hand, presenting information as certain (when it is not) is misleading and can damage and undermine public trust. Thus, emphasizing uncertainty may help signal transparency and honesty. On the other hand, explicitly conveying scientific uncertainty may be used to undermine the perceived competence of the communicator as people tend to use precision as a cue for judging expertise.”

Regarding the impact of uncertainty communication on behavior and decision-making, the authors stress that there has been no systematic research on the impact of communicating epistemic uncertainty. There has, however, been research on communicating aleatory uncertainty, which has generally been found to improve the quality of decisions. The authors also speculate that communications about ontological uncertainty “may interfere with people’s basic psychological needs for control and predictability [leading, for example, to confirmation and conformity biases], whereas epistemic uncertainty about the past or present may not always be subject to the same concerns.”

Conclusion

Former Bank of England Governor Mervyn King has observed that we now live in an age of “radical uncertainty.” Given our strong evolutionary aversion to this feeling, one of the great challenges we face in this new environment is how to effectively communicate about uncertainty itself. Both of these reports make a substantial contribution to helping us do that.

Review of “Forecasting in Social Settings: The State of the Art” by Makridakis et al

In our course on Strategic Risk Management and Governance, we note the very substantial challenge of forecasting the future behavior of complex adaptive systems made up of human beings and their organizations. There are many reasons for this, including:

  • Agents pursue multiple goals, with different incentives and priorities, and may change their goals and priorities over time as the system evolves;

  • When deciding on actions to achieve their goals, agents differ in terms of the range of experiences they can draw on, and their cognitive ability to reason multiple time steps ahead about the likely consequences of their actions;

  • Agents differ in their perceptions of the environment, and their beliefs about the future;

  • Agents differ in the structure of their social networks, which also evolve over time (more technically, the data generating process in complex adaptive systems is non-stationary, which reduces the usefulness of historical results as a guide to future outcomes);

  • Agents decide on their actions based not only on rational calculation, but also on their emotional reactions to competing narratives as well as the potential social impacts of their decisions;

  • Agents differ in their desire to conform to the beliefs and copy the actions of other members of their group, with the latter typically increasing with the level of perceived uncertainty;

  • Social feedback loops can produce emergent, non-linear collective phenomena such as herding, fads, booms and busts; these extreme events have proven very hard to forecast consistently.

Taken together, these factors usually cause the accuracy of forecasts of complex adaptive system behavior to decline exponentially as the time horizon lengthens.
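As a stylized illustration of this decay (ours, not the authors’): even in a simple persistent process such as a first-order autoregression, the correlation between the best possible forecast and the realized outcome falls off as phi^h, i.e., exponentially in the forecast horizon h.

```python
import numpy as np

rng = np.random.default_rng(7)
phi, n = 0.9, 100_000

# Simulate an AR(1) process: x[t+1] = phi * x[t] + noise.
x = np.zeros(n)
for t in range(n - 1):
    x[t + 1] = phi * x[t] + rng.normal()

for h in (1, 5, 10, 20):
    forecast = (phi ** h) * x[:-h]  # optimal h-step-ahead forecast
    actual = x[h:]
    r = np.corrcoef(forecast, actual)[0, 1]
    print(f"h={h:2d}: corr(forecast, actual) = {r:.2f} (theory: {phi**h:.2f})")
```

Real complex adaptive systems are far less well-behaved than this toy process, so their forecast skill typically decays at least this quickly.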

Given this background, we read the new paper by Makridakis and his colleagues with great interest.

At the outset, the authors clearly state that, “although forecasting in the physical sciences can attain amazing levels of accuracy, such is not the case in social contexts, where practically all predictions are uncertain, and a good number can be unambiguously wrong.”

There are a number of reasons for this. “First, there is usually a limited theoretical basis for presenting a causal or underlying mechanism” for the target variable being forecasted. “Thus we rely on statistical approximations that roughly describe what we observe, but may not represent a causal [process].” Second, “despite the deluge of data that is available today, much of this information does not concern what we want to forecast directly…Third, what we are trying to forecast is often affected by the forecasts themselves…Such feedback does not occur in weather forecasts…For these reasons, social science forecasts are unlikely to ever be as accurate as forecasts in the physical sciences, and the potential for improvements in accuracy is somewhat limited.”

The authors also note that when it comes to forecasting social systems, “unless uncertainty is expressed clearly and unambiguously, forecasting is not far removed from fortune-telling.” However, uncertainty about judgmental forecasts of social system behavior is likely to be “underestimated greatly” for two reasons:

“First, our attitude to extrapolating in a linear fashion from the present to the future, and second, our fear of the unknown and our psychological need to reduce the anxiety associated with such a fear by believing we can control the future by predicting it accurately.”

Use of statistical rather than judgmental forecasting models improves the treatment of uncertainty, but this approach is far from perfect. The authors claim that “there are at least three reasons for standard statistical models’ underestimations of the uncertainty”:

  1. “Probably the biggest factor is that model uncertainty is not taken into account. The prediction intervals are produced under the assumption that the model is ‘correct’, which clearly is never the case.” The authors note that combining forecasts made using different models reduces this uncertainty.

  2. “Even if the model is specified correctly, the parameters must be estimated, and also the parameter uncertainty is rarely accounted for in time series forecasting models.” However, techniques like Monte Carlo simulation allow parameter uncertainty to be made explicit.

  3. “Most prediction intervals are produced under the assumption of Gaussian [normally distributed] errors. When this assumption is not correct, the prediction interval coverage will usually be underestimated, especially when the errors have a fat-tailed distribution [as is often the case in complex adaptive systems, which tend to produce outcomes that follow a Pareto/power law rather than a normal/bell curve distribution].” The sketch below illustrates this point.
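The third point is easy to demonstrate with a short simulation (ours, not the authors’; we again use a rescaled Student-t distribution as a stand-in for fat-tailed errors): a nominal 99% Gaussian prediction interval misses roughly twice as often as it claims.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

# Forecast errors that are actually fat-tailed: Student-t with 3 degrees
# of freedom, rescaled to unit standard deviation.
errors = rng.standard_t(df=3, size=n) / np.sqrt(3)

# A nominal 99% prediction interval built under the Gaussian assumption
# spans +/- 2.576 standard deviations around the point forecast.
miss_rate = (np.abs(errors) > 2.576).mean()
print(f"Nominal miss rate: 1.0%; actual miss rate: {miss_rate:.1%}")  # ~2%
```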

The paper also includes sections on different types of uncertainty, the challenges of incorporating causality into forecasting models, and the difficulty of predicting one-off and extreme events.

In sum, the authors have produced an excellent (and extensively referenced) overview of the current state of the art of forecasting in social settings.