Implications of the Queen's Speech for Board Strategic Risk Governance

Today's Queen's Speech has potentially important implications for board practice and strategic risk governance, initially in the UK, and perhaps later in other Anglosphere nations.

The recent update to the UK Corporate Governance Code set a new requirement for boards to carry out a robust assessment of emerging risks. Today's speech announced the creation of a new Audit, Reporting, and Governance Authority, which may have the power to pursue criminal charges against directors who make misleading statements to financial markets.

Together, these developments will substantially raise the bar for the rigour of the processes and systems boards use to anticipate, assess, and adapt in time to emerging strategic threats, which is the focus of our work at Britten Coyne Partners.

Important Lessons from Two New Reports on Communicating Uncertainty

Risk, Uncertainty, and Ignorance are terms which often confuse us and bedevil our decisions.

Let’s therefore begin with some definitions.

As we use the term, “risk” refers to situations in which the frequency distribution of the outcomes of a data-generating process of interest is known, and the underlying process will not change in the future. Some writers call this “aleatory” or “irreducible” uncertainty, i.e., uncertainty caused by random factors, such as measurement error.

In situations of so-called “Knightian” uncertainty (after the economist Frank Knight, who first wrote about it in 1921 in “Risk, Uncertainty, and Profit”), these conditions don’t hold. Neither the historical frequency distribution of possible outcomes nor whether the underlying outcome-generating process is stationary or evolving is known. However, in theory they are knowable. Some have termed this “epistemic” uncertainty, as it arises from a lack of knowledge that could, in principle, be remedied.

In situations of ignorance, not only are we unaware of the range of possible outcomes and the nature of the outcome-generating process, but these are also unknowable. As a result, we have no reliable basis for forecasting the future. John Maynard Keynes wrote about this in 1921 in his “Treatise on Probability”, and later in 1936 in his “General Theory of Employment, Interest, and Money”, in which he highlighted our reliance in such circumstances on socially accepted (but ultimately fragile) “conventions” that enable action in the face of our ignorance. Some have termed this “ontological uncertainty”, which Donald Rumsfeld famously called “unknown unknowns”. As we have noted elsewhere, ontological uncertainty increases exponentially as socio-technical systems become more complex over time.

Too often, we confuse these three types of uncertainty and their implications. When people discuss uncertainty, they often do so in ways that imply it is relatively well-behaved risk. However, most real-world decisions are made in the face of far more challenging epistemic and ontological uncertainty, which people and organizations are often reluctant to acknowledge.
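To make the distinctions concrete, here is a minimal sketch in Python (our own illustration, not drawn from any of the works cited) that encodes the three-part taxonomy and a crude rule for labelling a situation:

```python
from enum import Enum

class UncertaintyType(Enum):
    """Illustrative three-part taxonomy of uncertainty, per the definitions above."""
    ALEATORY = "risk"          # outcome distribution known; process stationary
    EPISTEMIC = "Knightian"    # distribution/stationarity unknown, but knowable
    ONTOLOGICAL = "ignorance"  # unknown and unknowable ("unknown unknowns")

def classify(distribution_known: bool, knowable_in_principle: bool) -> UncertaintyType:
    """Crude decision rule for labelling a situation (illustration only)."""
    if distribution_known:
        return UncertaintyType.ALEATORY
    return UncertaintyType.EPISTEMIC if knowable_in_principle else UncertaintyType.ONTOLOGICAL

# Example: a competitor's future strategy in a fast-evolving market is,
# at best, epistemic uncertainty and often ontological -- it is not "risk".
print(classify(distribution_known=False, knowable_in_principle=False))
```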

For example, in “Communicating Uncertainty in Policy Analysis”, Professor Charles Manski of Northwestern University notes that, “the term ‘policy analysis’ describes scientific evaluations of the impacts of past public policies and predictions of the outcomes of potential future policies. A prevalent practice has been to report policy analysis with incredible certitude. That is, exact predictions of policy outcomes are routine, while expressions of uncertainty are rare. However, predictions and estimates often are fragile, resting on unsupported assumptions and limited data. Therefore, the expressed certitude is not credible.”

On the other hand, the financial services, military, and intelligence communities have been more willing to recognize and try to communicate epistemic and ontological uncertainty, but have done so using multiple and often confusing definitions and approaches.

In light of this background, we were very excited to see the recent publication of two large analyses of the challenges in communicating uncertainty. The first is by the NATO Science and Technology Organization: “Assessment and Communication of Uncertainty in Intelligence to Support Decision Making”. The second is by the Royal Society: “Communicating Uncertainty About Facts, Numbers, and Science”, by van der Bles et al.

Both of these are well worth a read in their entirety. Here we will just present some highlights.

Chapter 19 of the NATO report is titled “How Intelligence Organizations Communicate Confidence (Unclearly)”. It notes that, “Given that intelligence is typically derived from incomplete and ambiguous evidence, analysts must accurately assess and communicate their level of uncertainty to consumers. One facet of this perennial challenge is the communication of analytic confidence, or the level of confidence that an analyst has in his or her judgements, including those already qualified by probability terms such as ‘very unlikely’ or ‘almost certainly’. Consumers are better equipped to make sound decisions when they understand the methodological and evidential strength (or flimsiness) of intelligence assessments. Effective communication of confidence also militates against the pernicious misconception that the intelligence community (IC) is omniscient.”

“As part of broader efforts to improve communication fidelity and rein in subjectivity in intelligence production, most intelligence organizations have adopted standardized lexicons for rating and communicating analytic confidence. These standards provide a range of confidence levels (e.g., high, moderate, low), along with relevant rating criteria, and are often paired with scales used to express estimative probability.”
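As an illustration of what such a paired scale can look like, the sketch below encodes a hypothetical lexicon mapping estimative-probability terms to numeric ranges. The terms and ranges are illustrative only; actual standards (for example, the US ICD 203 scale) differ in their details:

```python
# Hypothetical lexicon pairing estimative-probability terms with numeric ranges.
# Ranges are illustrative only; real standards (e.g., US ICD 203) differ in detail.
ESTIMATIVE_PROBABILITY = {
    "almost no chance":    (0.01, 0.05),
    "very unlikely":       (0.05, 0.20),
    "unlikely":            (0.20, 0.45),
    "roughly even chance": (0.45, 0.55),
    "likely":              (0.55, 0.80),
    "very likely":         (0.80, 0.95),
    "almost certain":      (0.95, 0.99),
}

def term_for(p: float) -> str:
    """Map a numeric probability to the first matching verbal term."""
    for term, (lo, hi) in ESTIMATIVE_PROBABILITY.items():
        if lo <= p <= hi:
            return term
    return "no matching term"

print(term_for(0.85))  # -> "very likely"
```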

Here, for example, is the approach taken in Canada:

“CFINTCOM instructs analysts to clearly indicate both their level of confidence and their reasons for ascribing it. Analytic confidence is based on three main factors:

  • Evidence: The strength of the knowledge base, to include the quality of the evidence and our depth of understanding about the issue.

  • Assumptions: The number and importance of assumptions used to fill information gaps.

  • Reasoning: The strength of the logic underpinning the argument, which encompasses the number and strength of analytic inferences as well as the rigour of the analytic methodology applied to the [intelligence] product” …

“Analysts are expected to outline their confidence ratings in a dedicated textbox, or to integrate them into the narrative text of the product:

Confidence Levels

  • High: Well-corroborated information from proven sources, low potential for deception, noncritical assumptions and/or gaps, or undisputed reasoning.

  • Moderate: Partially corroborated information from good sources, moderate potential for deception, potentially critical assumptions used to fill gaps, or a mix of inferences.

  • Low: Uncorroborated information from good or marginal sources, high potential for deception, key assumptions used to fill critical gaps, or mostly weak inferences.”

After a very interesting comparison of the confidence rating scales used by various organizations, the NATO report notes that “the analytic confidence standards examined generally incorporate the following determinants:

  • Source reliability
  • Information credibility
  • Evidence consistency/convergence
  • Strength of logic/reasoning
  • Quantity and significance of information gaps and assumptions used [to fill them].”

However, it also notes that, “Few standards attempt to operationalize these determinants or outline formal mechanisms for evaluation. Instead, they tend to provide vague, qualitative descriptions for each confidence level, which may lead to inconsistent confidence assessments.”
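To show what operationalizing these determinants might look like, here is a deliberately simple sketch (our own construction, not taken from any of the standards the report reviews): each determinant is scored from 0 (weak) to 2 (strong), and the total is mapped to a confidence level.

```python
# One possible (hypothetical) operationalization of the five determinants.
# Each determinant is scored 0 (weak), 1 (moderate), or 2 (strong);
# the total is then mapped to a confidence level.
DETERMINANTS = [
    "source_reliability",
    "information_credibility",
    "evidence_convergence",
    "strength_of_reasoning",
    "gaps_and_assumptions",  # higher score = fewer / less critical gaps
]

def confidence_level(scores: dict) -> str:
    """Map determinant scores to a high/moderate/low confidence rating."""
    if set(scores) != set(DETERMINANTS):
        raise ValueError("score every determinant exactly once")
    if any(s not in (0, 1, 2) for s in scores.values()):
        raise ValueError("scores must be 0, 1, or 2")
    total = sum(scores.values())  # 0..10
    if total >= 8:
        return "High"
    return "Moderate" if total >= 5 else "Low"

print(confidence_level({
    "source_reliability": 2, "information_credibility": 2,
    "evidence_convergence": 1, "strength_of_reasoning": 2,
    "gaps_and_assumptions": 1,
}))  # -> "High"
```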

The NATO authors also observe that, “intelligence consumers tend to disaggregate analytic confidence into three dimensions:

  • The reliability of evidence [higher means higher confidence in the assessment];
  • The range of reasonable expert opinion [narrower means higher confidence]; and,
  • The potential responsiveness of the analysis to new information [lower means higher confidence].”

The chapter concludes with a recommendation to combine a verbal expression of analytic confidence with a quantitative one, along with an “informative, case-specific rationale” for the stated confidence level.

This NATO chapter highlights why, in our forecasting work, we prefer to focus on the level of uncertainty rather than the level of confidence. Discussions focused on the forecaster’s level of confidence are not only confusing; they also can very easily (and often do) trigger emotional defenses. In contrast, discussions focused on the key uncertainties associated with a forecast (and how they might be reduced) are almost always more productive.

In “Communicating Uncertainty About Facts, Numbers, and Science” the authors’ focus is on epistemic uncertainty. They begin by noting that, “in an era of contested expertise, many shy away from openly communicating their uncertainty about what they know, fearful of their audience’s reaction.” For this reason, they explore what the limited research that has been done tells us about the effects of communicating epistemic uncertainty on listeners’ cognition, affect, trust, and decision-making.

Their goal is to provide, “a cohesive framework that aims to provide clarity and structure to the issues surrounding communication [of uncertainty].”

This framework has three elements: the object of epistemic uncertainty, the source of the uncertainty, and the level of uncertainty.

As the authors describe it, the possible objects of epistemic uncertainty include:

  • “Facts that are (at least theoretically) directly verifiable”;
  • “Numbers, that are continuous variables that describe the world, that may, at least in principle, be directly observable, or they may be theoretical constructs which are used as parameters within a model of the world”; and,
  • “Scientific hypotheses, that are theories about how the world works, expressed as structural models of the relationships between variables.” Depending on the circumstances, this can cross the border into ontological uncertainty, as the authors acknowledge: “We should in principle distinguish between uncertainty about the adequacy of a model to represent the world and uncertainty about the world itself…However, in practice, the lines between these often get blurred.”

With respect to the sources of uncertainty, the authors distinguish between:

  • Irreducible variability – i.e., aleatory uncertainty;
  • Limited knowledge or ignorance about the underlying results-generating processes; and,
  • Disagreements among experts about the meaning of available evidence.

Finally, the authors assert that there are two possible “levels” of uncertainty:

  • “Direct” or “first-order” uncertainty about one or more of the sources of uncertainty;
  • “Indirect” or “second-order” uncertainty “about the quality of the knowledge and evidence upon which our uncertainty assessments are based”, which underlies our subjective judgment about the confidence we have in any claim we make.
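The three elements can be kept distinct by treating any uncertainty statement as an (object, source, level) triple. The sketch below is our own illustration of the paper’s framework, not code from the paper:

```python
from dataclasses import dataclass
from enum import Enum

class Obj(Enum):
    FACT = "fact"
    NUMBER = "number"
    HYPOTHESIS = "scientific hypothesis"

class Source(Enum):
    VARIABILITY = "irreducible variability (aleatory)"
    LIMITED_KNOWLEDGE = "limited knowledge of the generating process"
    EXPERT_DISAGREEMENT = "disagreement among experts"

class Level(Enum):
    DIRECT = "first-order uncertainty about the claim itself"
    INDIRECT = "second-order uncertainty about the quality of the evidence"

@dataclass
class UncertaintyStatement:
    """An uncertainty claim tagged with its object, source, and level."""
    claim: str
    obj: Obj
    source: Source
    level: Level

# Hypothetical example for illustration:
stmt = UncertaintyStatement(
    claim="GDP will grow 1.5% (interval 0.5%-2.5%) next year",
    obj=Obj.NUMBER,
    source=Source.LIMITED_KNOWLEDGE,
    level=Level.DIRECT,
)
print(stmt)
```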

The authors then proceed to a discussion of the impact of communicating uncertainty on “recipients’ cognition, affect, trust, and decision-making.”

They note at the outset the relative lack of research in many of these areas, and that much of the research that does exist focuses on aleatory rather than epistemic or ontological uncertainty. They also note that most of the existing research addresses direct uncertainty, not indirect uncertainty about the quality of the underlying evidence.

With respect to cognitive reactions to communications regarding uncertainty, the authors note the conclusion of many studies about the wide range of probabilities that people attach to “words of estimative probability”, such as “unlikely”, “possible”, “probable”, and “almost certain” [hence the recommendation that verbal expressions should be accompanied by quantitative probabilities]. They also note how, when presented with a “most likely” result within a numerical range of possible outcomes, most people will interpret the range as either a uniform (flat) or normal (bell curve/Gaussian) distribution. This can be extremely problematic, because complex adaptive systems tend to generate Pareto/power-law distributions of outcomes, not ones that are flat or bell-shaped, as the short simulation below illustrates.
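The practical stakes of that misreading can be seen in a simple simulation (illustrative only; the parameters are arbitrary). A normal distribution and a Pareto (power-law) distribution are matched so that both have a median of roughly 100, yet they assign radically different probabilities to extreme outcomes:

```python
import random

random.seed(42)
N = 100_000

# Normal outcomes centred on 100 with standard deviation 20.
normal_draws = [random.gauss(100, 20) for _ in range(N)]

# Pareto (power-law) outcomes scaled so the median is also ~100.
# For a Pareto with shape a and scale xm, the median is xm * 2**(1/a).
a = 1.5
xm = 100 / 2 ** (1 / a)
pareto_draws = [xm * random.paretovariate(a) for _ in range(N)]

for name, draws in [("normal", normal_draws), ("pareto", pareto_draws)]:
    tail = sum(d > 300 for d in draws) / N  # P(outcome > 3x the centre)
    print(f"{name:7s} P(X > 300) ~= {tail:.4f}")

# Typical output: the normal tail is effectively zero (~1e-23 in theory),
# while the power-law tail is close to 10% -- extreme outcomes that a
# bell-curve reading of the same "most likely" range would dismiss.
```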

Moving on to people’s emotional reactions to communications of uncertainty, the authors note that “dual-process theories in psychology commonly describe two systems of thinking, with one system being more ‘analytic’, following rules of logic, probability and deliberation, whereas the other is more associative, fast, intuitive and affective”. These theories imply that people process uncertain information in two qualitatively different ways, differentiating between “risk as analysis” and “risk as feelings”, with the latter often dominant in processing risk information.

The authors also note that, “research into how the communication of uncertainty impacts trust and credibility is very sparse.” That said, “at a generic level, there are some near-universal aspects of human social cognition that assist people in determining whom and what information to trust. Two of these basic dimensions include ‘competence’ and ‘warmth’. Affect and cognition fuse together here in establishing trust…In order to be perceived as credible, both ‘cold’ expertise is required (knowledgeability) as well as a perceived motivation to be sincere and truthful (warmth), that is, a feeling of trust… Yet, whether greater transparency in the communication of uncertainty will enhance credibility and public trust remains an open empirical question. On the one hand, presenting information as certain (when it is not) is misleading and can damage and undermine public trust. Thus, emphasizing uncertainty may help signal transparency and honesty. On the other hand, explicitly conveying scientific uncertainty may be used to undermine the perceived competence of the communicator as people tend to use precision as a cue for judging expertise.”

Regarding the impact of uncertainty communication on behavior and decision-making, the authors stress that there has been no systematic research on the impact of communicating epistemic uncertainty. There has, however, been research on communicating aleatory uncertainty, which has generally been found to improve the quality of decisions. The authors also speculate that communications about ontological uncertainty “may interfere with people’s basic psychological needs for control and predictability [leading, for example, to confirmation and conformity biases], whereas epistemic uncertainty about the past or present may not always be subject to the same concerns.”

Conclusion

Former Bank of England Governor Mervyn King has observed that we now live in an age of “radical uncertainty.” Given our strong evolutionary aversion to this feeling, one of the great challenges we face in this new environment is how to effectively communicate about uncertainty itself. Both of these reports make a substantial contribution to helping us do that.