Categorizing Uncertainty to Better Manage It

One way to substantially improve an organization’s management and governance of strategic risk is through a more precise understanding of the concept of “uncertainty.”

We believe this is analogous to Philip Tetlock’s finding, after analyzing the results of The Good Judgment Project, that the use of more precise probabilities, or more narrowly defined probability categories, was one of the techniques that improved forecasters’ accuracy (see his book, “Superforecasting,” and subsequent research papers). Having participated in the GJP, the author can attest that this was his personal experience as well.

In their paper “Deep Uncertainty,” Walker, Lempert, and Kwakkel define uncertainty as “any departure from the unachievable ideal of complete determinism.”

Various authors have created systems for defining different types and degrees of uncertainty.

For example, in “Classifying and Communicating Uncertainties in Model-Based Policy Analysis,” Kwakkel, Walker, and Marchau categorize uncertainty by where in a problem it occurs:

  • System Boundary: The demarcation between the aspects of the real world that are included in the model and those that are not.
  • Conceptual Model: The specification of the variables and relationships inside the model.
  • Computer Model: The implementation of the conceptual model in computer code.
  • Input Data: The values for the different parameters both inside the model and as inputs to the model.
  • Model Implementation: Bugs and errors in the computer code or the hardware used to run the model.
  • Processing of Model Output: How the model’s results are processed before they are presented to decision makers.

Many authors have adopted a three-category system for categorizing the nature of a given uncertainty:

“Aleatory” uncertainty is irreducible variability from sources of randomness in a system.

“Epistemic” uncertainty (i.e., “known unknowns”) is caused by a lack of knowledge (or, alternatively, by unreliable or conflicting information), for example about the proper structure of our model of a system or the correct values for the variables it contains.

In theory, epistemic uncertainty can be reduced through more knowledge and/or data. In practice, however, this is often not the case, particularly in complex adaptive systems. Echoing John Maynard Keynes and Frank Knight’s earlier discussions of this issue, former Bank of England Governor Mervyn King has recently called this condition “radical uncertainty” (see his book, “The End of Alchemy”). Critically, he notes the dangers created by our use of inherently fragile common assumptions, conventions, and stories to deal with situations or systems characterized by radical uncertainty (a point also made by Robert Shiller in his book, “Narrative Economics”).
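
To make the aleatory/epistemic distinction concrete, the short sketch below is our own toy illustration (not drawn from any of the cited authors): it simulates repeated observations of a demand-like process and shows that collecting more data narrows our uncertainty about the process’s average (epistemic), while the period-to-period variability of individual outcomes (aleatory) does not shrink.

```python
# Toy illustration (ours, not from the cited papers): epistemic uncertainty
# about a parameter shrinks as data accumulates, while aleatory variability
# in individual outcomes does not.
import numpy as np

rng = np.random.default_rng(seed=1)
TRUE_MEAN, TRUE_SD = 100.0, 15.0   # the "real" demand process, unknown to the analyst

for n in (10, 100, 10_000):
    sample = rng.normal(TRUE_MEAN, TRUE_SD, size=n)
    epistemic = sample.std(ddof=1) / np.sqrt(n)   # standard error of the estimated mean
    aleatory = sample.std(ddof=1)                 # period-to-period variability
    print(f"n={n:>6}: uncertainty about the mean ~ {epistemic:5.2f}, "
          f"variability of a single outcome ~ {aleatory:5.2f}")

# More observations narrow the estimate of the mean (epistemic), but the spread
# of individual outcomes (aleatory) stays near 15 no matter how much data we collect.
```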

“Ontological” or “unexpected” uncertainties are those about which we remain unaware – Donald Rumsfeld’s famous “unknown unknowns.”

Finally, in their paper on “Deep Uncertainty,” Walker, Lempert, and Kwakkel propose a five-level scale for grading the severity of the uncertainties we may face in a given situation or decision:

“Level 1 Uncertainty: A situation in which one admits that one is not absolutely certain, but one is not willing or able to measure the degree of uncertainty in any explicit way. Level 1 uncertainty is often treated through a simple sensitivity analysis of model parameters.”

“Level 2 Uncertainty: Any uncertainty that can be described adequately in statistical terms.”

“Level 3 Uncertainty: A situation in which one is able to enumerate multiple alternatives and is able to rank the alternatives in terms of perceived likelihood. In the case of uncertainty about the future, Level 3 uncertainty is often captured in the form of a few trend-based scenarios based on alternative assumptions about the driving forces. These scenarios are ranked according to their perceived likelihood, but no probabilities are assigned.”

“Level 4 Uncertainty: A situation in which one is able to enumerate multiple plausible alternatives without being able to rank them in terms of perceived likelihood. This inability can be due to a lack of knowledge or data about the mechanism or functional relationships being studied; but it can also arise from a lack of agreement on the appropriate model(s) to describe interactions among the system’s variables, on the probability distributions to represent uncertainty about key parameters of the model(s), and/or on how to value the desirability of alternative outcomes.”

“Level 5 Uncertainty: The deepest level of recognized uncertainty; in this case what is known is only that we do not know” (i.e., known unknowns).

To this list we would add Level 6 Uncertainty: Rumsfeld’s “unknown unknowns,” which are always present, even if not always acknowledged.
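
One practical way to put these frameworks to work is to tag each entry in a risk register with where the uncertainty arises (following Kwakkel, Walker, and Marchau) and how severe it is (following Walker, Lempert, and Kwakkel, plus our Level 6). The sketch below is a hypothetical illustration of our own; the class and field names are ours, not the authors’.

```python
# Hypothetical risk-register entry that tags each uncertainty with where it
# occurs and its severity level. Names and structure are our own illustration.
from dataclasses import dataclass
from enum import Enum

class Location(Enum):
    SYSTEM_BOUNDARY = "system boundary"
    CONCEPTUAL_MODEL = "conceptual model"
    COMPUTER_MODEL = "computer model"
    INPUT_DATA = "input data"
    MODEL_IMPLEMENTATION = "model implementation"
    OUTPUT_PROCESSING = "processing of model output"

class Level(Enum):
    L1_SENSITIVITY = 1         # acknowledged but unmeasured; simple sensitivity analysis
    L2_STATISTICAL = 2         # adequately described in statistical terms
    L3_RANKED_SCENARIOS = 3    # alternatives ranked by perceived likelihood
    L4_UNRANKED_SCENARIOS = 4  # plausible alternatives that cannot be ranked
    L5_KNOWN_UNKNOWNS = 5      # we only know that we do not know
    L6_UNKNOWN_UNKNOWNS = 6    # always present, rarely acknowledged

@dataclass
class UncertaintyEntry:
    description: str
    location: Location
    level: Level

entry = UncertaintyEntry(
    description="Long-run customer response to a competitor's new pricing model",
    location=Location.INPUT_DATA,
    level=Level.L4_UNRANKED_SCENARIOS,
)
print(f"{entry.description}: {entry.location.value}, Level {entry.level.value}")
```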

As Payzan-LeNestour and Bossaerts have shown, the difference between Level 5 and Level 6 is often not clear-cut, with one person recognizing an unknown while another does not (see “Risk, Unexpected Uncertainty, and Estimation Uncertainty”).

In our work over the years as consultants and executives, we have found that it is usually far more productive for groups to discuss the uncertainties in a given situation or decision than to debate the extent of someone’s subjective confidence in a given conclusion or recommendation.

Uncertainties invite further investigation and pragmatic discussion, while questioning a colleague’s degree of confidence can too easily be interpreted as a personal attack.

In addition, researchers have found that explicitly discussing known unknowns significantly reduces overconfidence in estimates and decisions (see “Known Unknowns: A Critical Determinant of Confidence and Calibration” by Walters et al.).

The frameworks described above can make management and governance discussions about uncertainties even more productive and useful.

Understanding Explanation - A Critical Skill for Strategy and Risk

Explanation – as both a process and a result – is such an integral part of our life that we hardly notice it.

Yet it is one of those uniquely human capabilities that have enabled us to emerge from the East African plains and evolve to where we are today. Given its importance, it is a subject that has received far less attention than it deserves.

Fortunately, some people have been researching explanation, though their work is far better known in the military and intelligence communities than in the private sector. This note will draw on some of our favorites, the work of Gary Klein and his collaborators, and two of their key papers: “Causal Reasoning: Initial Report of a Naturalistic Study of Causal Inference” and “Influencing Preferences for Different Types of Causal Explanation of Complex Events.”

Let’s start with a definition of what we mean by “explanation.”

Common sense tells us that an explanation relates a prior cause or causes to a subsequent effect or effects, and answers two questions about the latter: Why and How? Those causes and effects can be in the past (where we try to connect observed effects back to their possible causes) or in the future (where we typically use the term forecasting to describe the process of linking causes to their hypothetical effects).

As is true of many of our other cognitive activities, most explanation happens without our noticing it.

In his book, “Thinking, Fast and Slow,” Daniel Kahneman notes the difference between two types of human cognitive processing that he terms “System 1” and “System 2.”

The first operates rapidly, automatically, and unconsciously. Its primary purpose is the rapid assessment of multiple sensory inputs that trigger strong emotions to protect us from danger. But to preserve our scarce cognitive energy, it also seeks to associate and assimilate new sensory inputs into our existing belief structures and “mental models” of causal mechanisms in the world around us. Most of the time, System 1 gets explanation right. However, the goal of preserving cognitive energy can also lead to the preservation of erroneous beliefs.

It is only when an effect surprises us sufficiently, because it fits poorly with our existing beliefs and mental models, that our conscious awareness and attention are engaged.

Such effects activate our System 2 cognition, which is conscious, deliberate, and consumes far more time and energy than System 1. It is also subject to a wide range of biases, including our individual tendencies towards excessive optimism, overconfidence, and seeking information that confirms our existing views, and groups’ tendency towards conformity, particularly when uncertainty and/or adversity are high.

Another critical issue regarding explanation is the distinction between determinate and indeterminate systems. The former include natural and mechanical systems in which, via the scientific method, it is theoretically possible to identify the “true cause” or causes of an observed effect. Throughout history, determinate systems have been the focus of most research into the concepts of causation and explanation.

Far less research has been done into causal and explanatory thinking in so-called “indeterminate” systems, which include the complex socio-technical (i.e., adaptive) systems about which we’ve previously written. In such systems, effects often have multiple causes that are difficult to disentangle, and the picture is further complicated by time delays and non-linearity.

In complex systems, so-called “true causes” are usually impossible to determine. However, as Klein notes, that doesn’t mean we can’t identify some of the possible causes that contributed to an observed effect. Doing this involves two separate cognitive processes.

The first generates possible causes in three broad categories:

  • Proximate decisions (i.e., human agency) or preceding events;
  • System forces that existed before the proximate causes and may have affected them; and
  • Randomness.

The second tests these possible causes against three criteria:

  • Reversibility: Reasoning counterfactually, how likely is it that the observed effect would have occurred in the absence of the possible cause?

  • Covariation: How were the possible cause and observed effect related in time?

  • Plausibility: Is a proposed cause consistent with our beliefs? If not, to what extent is it supported by sound evidence and logic?
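
In a post-mortem or risk review, this generate-and-test pattern can be made explicit. The sketch below is our own schematic illustration (the candidate causes and the simple 0-2 scoring scale are ours, not Klein’s): possible causes are listed under the three categories above and then scored against the three test criteria, so the discussion can focus on the most defensible contributors.

```python
# Hypothetical illustration of the generate-then-test pattern described above.
# The candidate causes and the 0-2 scoring scale are ours, not Klein's.
candidate_causes = {
    "Q2 pricing decision (proximate decision)":
        {"reversibility": 2, "covariation": 2, "plausibility": 1},
    "Chronic under-investment in service quality (system force)":
        {"reversibility": 1, "covariation": 1, "plausibility": 2},
    "Unusually bad weather in the launch month (randomness)":
        {"reversibility": 0, "covariation": 1, "plausibility": 1},
}

# Score each candidate against the three test criteria (0 = weak, 2 = strong)
# and sort, so discussion focuses on the most defensible contributing causes.
for cause, scores in sorted(candidate_causes.items(),
                            key=lambda kv: sum(kv[1].values()), reverse=True):
    print(f"{sum(scores.values())}/6  {cause}")
```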

Klein and his co-authors have also noted that, having identified one or more possible causes of the effect to be explained, human beings take a variety of approaches to organizing and communicating these causes to others. In order of increasing complexity, these include:

  • A simple list;

  • A logical sequence of causes;

  • A causal story that incorporates context and complex causal relationships.

Finally, other researchers (e.g., Marvin Cohen) have documented common mistakes that humans make when crafting stories to describe complex chains of causation. Here are three of the most important:

  • In Western cultures, people tend to place relatively more weight on decisions (i.e., human agency) as the causes of observed effects; in contrast, people from Eastern cultures tend to place relatively more weight on system forces.

  • Regardless of cultural heritage, human beings are linear thinkers. Our stories usually fail to include the positive feedback loops and critical thresholds that are often causes of sudden, non-linear change in complex socio-technical systems.

  • We are insensitive to the number of assumptions our stories contain, the likelihood that they are correct, and hence the low joint (multiplicative) probability that our stories are accurate. Cohen has noted that the greater the number of assumptions our explanatory stories require, the more likely it is that there are “unknown unknowns” that we have missed.
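
A bit of arithmetic makes the force of this last point clear (the numbers below are purely illustrative):

```python
# Illustrative arithmetic: individually plausible assumptions compound.
# A story resting on six assumptions, each judged 90% likely to be correct and
# treated as roughly independent, is accurate only about half the time.
n_assumptions = 6
p_each = 0.90
p_story = p_each ** n_assumptions
print(f"Joint probability the whole story holds: {p_story:.2f}")   # ~0.53
```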

Sudden Social Shifts and Strategic Risk

The economist Rudiger Dornbusch famously noted that “crises take a much longer time coming than you think, and then happen much faster than you would have thought.”

Today’s hyperconnected world provides ample evidence of the truth of his insight. The more connected we become, the more the potential exists for rapid shifts in opinion and behavior that create new strategic risks for many organizations.

In order to better anticipate these sudden and substantial shifts, directors and executives should keep in mind the underlying processes that are at work. This note provides a brief overview of four of the most important. In practice, these often operate together, with positive (amplifying) feedback loops between them.

Our starting point is Daniel Kahneman’s distinction (made in his book, “Thinking, Fast and Slow”) between two modes of thought.

“System 1” is rapid, instinctive, based on association, and generally unconscious. It generates fast conclusions and emotions that prime us for actions (e.g., fight or flight) that, from an evolutionary perspective, have been highly adaptive for the survival and success of the human species.

In contrast, “System 2” thinking is slower, deliberate, logical, and conscious. Because it is also more effortful, most human cognition is based on System 1, not System 2.

Drivers of rapid shifts in popular perception, opinion, and behavior are found in both System 1 and System 2.

In the case of the former, the most primal driver is our capacity to rapidly transmit fear, through a variety of non-verbal means, including facial expressions and odor.

A more evolved System 1 response is an automatic increase in our desire to avoid disagreement or abandonment by a group when perceived uncertainty or adversity increases. This is the neurobiological root cause of the conformity and groupthink that can produce substantial shifts in opinion and behavior.

System 2 processes can produce so-called “cascades” and “herding” when we can observe previous decisions made by other people.

Research has found that so-called “loss aversion” is a normal human response when the result of a decision will remain private. However, when it will be visible to others, gains become more important. This tendency is further reinforced by other research, which has found that copying others – so-called “social learning” – is increasingly effective as the decision environment becomes more complex.

These drivers underlie other research, which finds that a relatively small fraction of committed agents in a population (about 10% or more) can produce and sustain a significant opinion shift.
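
The sketch below is a toy version of the kind of model used in that line of research (a simplified binary “naming game” with a committed minority, in the spirit of work by Xie and colleagues). It is our own illustration, and the parameter values are ours, not taken from any specific study.

```python
# Toy "committed minority" opinion model (a simplified binary naming game, in
# the spirit of the research cited above). Structure and parameters are our
# own illustration, not a reproduction of any specific study.
import random

def run(n_agents=1000, committed_frac=0.15, steps=400_000, seed=0):
    random.seed(seed)
    n_committed = int(n_agents * committed_frac)
    # Committed agents hold only opinion "B" and never change their minds;
    # everyone else starts out holding the prevailing opinion "A".
    opinions = [{"B"} for _ in range(n_committed)] + \
               [{"A"} for _ in range(n_agents - n_committed)]
    for _ in range(steps):
        s, h = random.sample(range(n_agents), 2)     # random speaker and hearer
        word = random.choice(sorted(opinions[s]))
        if word in opinions[h]:                      # agreement: both collapse to the word,
            if s >= n_committed:                     # unless they are committed agents
                opinions[s] = {word}
            if h >= n_committed:
                opinions[h] = {word}
        elif h >= n_committed:
            opinions[h].add(word)                    # disagreement: hearer learns the word
    return sum(o == {"B"} for o in opinions) / n_agents

# Re-running with committed_frac well below vs. above roughly 0.10 illustrates
# how sharply the committed opinion's reach changes within a fixed number of interactions.
print(f"Share holding only the committed opinion: {run():.2f}")
```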

Broadly speaking, decisions made by System 2 are based on three types of input: private information not available to everyone, public information, and social information. In cascades and herding, people will often disregard private information that is inconsistent with social signals. However, rapid changes in public perception and behavior can also be triggered by the appearance of a new public signal.
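
To see how social information can crowd out private information, here is a minimal counting-rule sketch of a sequential information cascade (in the spirit of the classic model by Bikhchandani, Hirshleifer, and Welch; the parameters and the simplified decision rule are our own):

```python
# Minimal counting-rule sketch of a sequential information cascade (in the
# spirit of the classic Bikhchandani, Hirshleifer, and Welch model). The
# parameters and simplified decision rule below are our own.
import random

def cascade(n_agents=50, signal_accuracy=0.7, true_state_good=True, seed=3):
    random.seed(seed)
    choices = []
    for _ in range(n_agents):
        # Private information: a noisy signal that matches the true state
        # with probability signal_accuracy.
        signal_good = (random.random() < signal_accuracy) == true_state_good
        # Social information: the net lead of prior "adopt" over "reject" choices.
        lead = sum(1 if c else -1 for c in choices)
        if lead >= 2:
            choice = True          # cascade: imitate the crowd, ignore own signal
        elif lead <= -2:
            choice = False
        else:
            choice = signal_good   # otherwise rely on the private signal
        choices.append(choice)
    return choices

print(["adopt" if c else "reject" for c in cascade()[:10]])
# Once the lead reaches two, everyone afterwards imitates, even when many private
# signals disagree. (A fully Bayesian agent would discount choices made inside a
# cascade; this simple counting rule keeps the core logic visible.) That fragility
# is also why a surprising new public signal can reverse a cascade quickly.
```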

As described in the work of a number of researchers (e.g., David Tuckett, Robert Shiller, Paul Ormerod, and Marvin Cohen), in the face of high complexity and uncertainty, humans decide and act not on the basis of detailed quantitative analyses (which are impractical and often impossible), but rather based on so-called “conviction narratives” or, more simply, the stories we tell ourselves and others.

These narratives are based on our private, public, and social information, along with logic and assumptions. They produce varying levels of certainty (or “conviction”) about the accuracy of our perceptions, the appropriateness of our proposed actions, and the likelihood that they will achieve our goals in a given situation.

From the sum of individual conviction narratives, and the strength with which they are held, there emerges the phenomenon Keynes called “animal spirits,” or what we more generally call the level of public confidence in a given situation.

In many cases, people will have varying degrees of conviction about the accuracy and importance of the private information they possess, and, as we have noted, will often weigh social information more heavily when constructing narratives that explain the past and forecast the future.
However, a surprising new public signal – say, the announcement that a high ranking government official has been indicted for a crime, or that a key party is abandoning a coalition government – can quickly and dramatically change these narratives, and thus public perceptions, confidence, and behavior.

So, to sum up: Increasing uncertainty or adversity causes System 1 to prime us to conform to the views of the group, however they may change. And that reaction goes into overdrive when uncertainty and adversity crystallize into threats, danger, and fear.

Thus primed, the interactions of System 1 with System 2 can produce dramatic shifts in public perceptions and behavior due to changes in social information (often originated by a relatively small number of people and rapidly amplified in our hyperconnected world) and/or by the appearance of a surprising public signal.