Understanding Explanation - A Critical Skill for Strategy and Risk

Explanation – as both a process and a result – is such an integral part of our lives that we hardly notice it.

Yet it is one of those uniquely human capabilities that have enabled us to emerge from the East African plain and evolve to where we are today. Given its importance, it is a subject that has received far less attention than it deserves.

Fortunately, some people have been researching explanation, though their work is far better known in the military and intelligence communities than in the private sector. This note draws on some of our favorites: the work of Gary Klein and his collaborators, and two of their key papers, “Causal Reasoning: Initial Report of a Naturalistic Study of Causal Inference” and “Influencing Preferences for Different Types of Causal Explanation of Complex Events.”

Let’s start with a definition of what we mean by “explanation.”

Common sense tells us that an explanation relates a prior cause or causes to a subsequent effect or effects, and answers two questions about the latter: Why and How? Those causes and effects can be in the past (where we try to connect observed effects back to their possible causes) or in the future (where we typically use the term forecasting to describe the process of linking causes to their hypothetical effects).

As is true of many of our other cognitive activities, most explanation happens without our noticing it.

In his book “Thinking, Fast and Slow,” Daniel Kahneman notes the difference between two types of human cognitive processing that he terms “System 1” and “System 2.”

The first operates rapidly, automatically, and unconsciously. Its primary purpose is the rapid assessment of multiple sensory inputs, triggering strong emotions that protect us from danger. But to conserve our scarce cognitive energy, it also seeks to associate and assimilate new sensory inputs into our existing belief structures and “mental models” of causal mechanisms in the world around us. Most of the time, System 1 gets explanation right. However, the goal of conserving cognitive energy can also lead to the preservation of erroneous beliefs.

It is only when an effect is sufficiently surprising, because it fits poorly with our existing beliefs and mental models, that our conscious awareness and attention are engaged.

Such effects activate our System 2 cognition, which is conscious, deliberate, and consumes far more time and energy than System 1. It is also subject to a wide range of biases, including our individual tendencies towards excessive optimism, overconfidence, and seeking out information that confirms our existing views, as well as groups’ tendency towards conformity, particularly when uncertainty and/or adversity are high.

Another critical issue regarding explanation is the distinction between determinate and indeterminate systems. The former include natural and mechanical systems, in which, via the scientific method, it is theoretically possible to identify the “true cause” or causes of an observed effect. Throughout history, determinate systems have been the focus of most research into the concepts of causation and explanation.

Far less research has been done on causal and explanatory thinking in so-called “indeterminate” systems, which include the complex socio-technical (i.e., adaptive) systems about which we’ve previously written. In such systems, effects often have multiple causes that are difficult to disentangle, a problem further complicated by time delays and non-linearity.

In complex systems, so-called “true causes” are usually impossible to determine. However, as Klein notes, that doesn’t mean we can’t identify some of the possible causes that contributed to an observed effect. Doing this involves two separate cognitive processes, which are sketched in simple form after the criteria below.

The first generates possible causes in three broad categories:

  • Proximate decisions (i.e., human agency) or preceding events;
  • System forces that existed before the proximate causes, and may have affected them; and,
  • Randomness.

The second tests these possible causes against three criteria:

  • Reversibility: Reasoning counterfactually, how likely is it that the observed effect would have occurred in the absence of the possible cause?

  • Covariation: How were the possible cause and the observed effect related in time?

  • Plausibility: Is a proposed cause consistent with our beliefs? If not, to what extent is it supported by sound evidence and logic?
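
Klein’s framework is qualitative, but it can help to see the two steps laid out as a simple checklist. The sketch below is our own illustration, not Klein’s: the class names, the 0-to-1 scores, and the screening threshold are all invented for the example, and in practice the scores would be subjective judgments rather than measurements.

```python
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    """The three broad categories of possible causes described above."""
    PROXIMATE_DECISION_OR_EVENT = "proximate decision or preceding event"
    SYSTEM_FORCE = "pre-existing system force"
    RANDOMNESS = "randomness"

@dataclass
class CandidateCause:
    description: str
    category: Category
    # The three screening criteria, each judged subjectively on a 0-to-1 scale.
    reversibility: float   # counterfactual: how unlikely is the effect without this cause?
    covariation: float     # how closely are cause and effect related in time?
    plausibility: float    # consistency with beliefs, or strength of evidence and logic

def screen(causes, threshold=0.5):
    """Step two: keep candidates whose weakest criterion still clears the threshold."""
    return [c for c in causes
            if min(c.reversibility, c.covariation, c.plausibility) >= threshold]

# Step one: generate candidates in all three categories (scores here are made up).
candidates = [
    CandidateCause("Competitor's sudden price cut", Category.PROXIMATE_DECISION_OR_EVENT, 0.8, 0.9, 0.7),
    CandidateCause("Long-run shift in customer preferences", Category.SYSTEM_FORCE, 0.6, 0.5, 0.8),
    CandidateCause("Quarter-to-quarter noise in the data", Category.RANDOMNESS, 0.2, 0.3, 0.5),
]
for cause in screen(candidates):
    print(f"{cause.category.value}: {cause.description}")
```

The point of the sketch is simply that possible causes should be generated across all three categories before any of them are tested, so that system forces and randomness are not crowded out by the most salient decision.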

Klein and his co-authors have also noted that, having identified one or more possible causes of the effect to be explained, human beings take a variety of approaches to organizing and communicating these causes to others. In order of increasing complexity, these include:

  • A simple list;

  • A logical sequence of causes;

  • A causal story that incorporates context and complex causal relationships.

Finally, other researchers (e.g., Marvin Cohen) have documented common mistakes that humans make when crafting stories to describe complex chains of causation. Here are three of the most important:

  • In Western cultures, people often place too much weight on decisions (i.e., human agency) as the cause of observed effects; in contrast, people from Eastern cultures often place too much weight on system forces.

  • Regardless of cultural heritage, human beings are linear thinkers. Our stories usually fail to include the positive feedback loops and critical thresholds that are often causes of sudden, non-linear change in complex socio-technical systems.

  • We are insensitive to the number of assumptions our stories contain, to the likelihood that each of those assumptions is correct, and hence to the low joint (multiplicative) probability that our stories are accurate (the simple arithmetic below illustrates the point). Cohen has noted that the greater the number of assumptions an explanatory story requires, the more likely it is that there are “unknown unknowns” we have missed.
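
To see why the multiplication matters, consider a deliberately simple, hypothetical calculation (the probabilities are invented for illustration): a story that rests on six assumptions, each judged 85% likely to be true, has well under an even chance of being right as a whole, if the assumptions are roughly independent.

```python
# Illustrative arithmetic only: treating the assumptions as independent,
# the probability that the whole story holds is the product of the
# probabilities that each individual assumption is correct.
assumption_probabilities = [0.85] * 6   # six assumptions, each judged ~85% likely

joint = 1.0
for p in assumption_probabilities:
    joint *= p

print(f"Probability the whole story is accurate: {joint:.2f}")  # roughly 0.38
```

Each additional assumption multiplies in another factor below one, which is one way to read Cohen’s warning about unknown unknowns.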