Probability versus Plausibility in the Assessment of Uncertainty

In their paper “Pursuing Plausibility”, Selin and Pereira note that “our alarming inability to grapple with manifold uncertainty and govern indeterminate complex systems highlights the importance of questioning our contemporary frameworks for assessing risk and managing futures.” Here at Britten Coyne, we couldn’t agree more.

In this note, we’ll look at two concepts – plausibility and subjective probability – that have been put forth by various authors as means of approaching the problem of uncertainty.

Let’s start with some brief definitions. We use the term “risk” to denote any lack of certainty that can be described statistically. Closely related to this is traditional or “frequentist” probability, which is based on analysis of the occurrences and impacts of repeated phenomena, like car accidents or human heights and weights.

In contrast to risk, and as described by writers like Knight and Keynes, uncertainty denotes any lack of certainty in which some combination of the set of possible outcomes, their frequency of occurrence, and/or their impact should they occur cannot be statistically described based on an analysis of their past occurrences. For example, the future impact of advances in automation, robotics, and artificial intelligence on labor markets cannot be assessed on the basis of history.

However, we are still left with the need to make decisions in the face of this uncertainty. This gives rise to three different approaches. The first is what Keynes called “conventions” or assumptions about the future that are widely used as a basis for making decisions in the present. The most common of these is that the future will be reasonably like the present.

As we have repeatedly noted, this assumption is often fragile, especially in complex adaptive socio-technical systems, which give rise to emergent behavior that is often non-linear (i.e., aggregate system behavior that arises from the interaction of agents and cannot be predicted from their individual decision rules). This is especially so in our world of increasingly dense network connections, which accentuate the behavioral impact of social observation and thus the system's tendency toward sudden non-linear changes that negate conventional assumptions.

The second is the use of subjective degrees of belief in different possible future outcomes, expressed in the quantitative language of probability. This approach goes back to Thomas Bayes, and is related to the later work of Leonard Savage on subjective expected utility. The possible futures themselves can be developed using a wide variety of methods, from quantitative modeling to qualitative scenario generation or pre-mortem analyses. More broadly, all of these methods are examples of counterfactual reasoning, or the use of subjunctive conditional claims (e.g., employing “would” or “might”) about alternative possibilities and their consequences.

The third approach to uncertainty employs the qualitative concept of plausibility or its inverse, implausibility. For example, one approach to future scenario building suggests judging the reasonableness of the results (either individually or collectively) by their plausibility.

A common observation regarding plausibility is the difficulty most people have in defining it, and distinguishing it from degree of belief/subjective probability.

In “A Model of Plausibility”, Connell and Keane provide perhaps the best examination of how, as a practical matter, human beings implicitly define plausibility (using both a quantitative model and human experimental results).

Their summary is worth quoting at some length: “A plausible scenario is one that has a good fit with prior knowledge…(1) Degree of Corroboration: a scenario should have several distinct pieces of prior knowledge supporting any necessary inferences…(2) Degree of Complexity: the scenario should not rely on extended or convoluted justifications…(3) Extent of Conjecture: the scenario should avoid, where possible, the introduction of many hypotheticals.”

Put differently, a scenario’s plausibility will increase “if it has minimal complexity and conjecture, and/or maximal corroboration.” They go on to create an index in which plausibility equals one minus implausibility, with the latter defined as the extent of complexity divided by (the extent of corroboration less the number of conjectures used).
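The index just described can be sketched in a few lines of code. This is only an illustrative reading of the formula as stated above (implausibility equals complexity divided by corroboration less conjectures, plausibility equals one minus implausibility); the function name, the numeric scales, and the clamping of results to the 0–1 range are our own assumptions, not part of Connell and Keane's model.

```python
def plausibility(complexity: float, corroboration: float, conjectures: float) -> float:
    """Illustrative plausibility score in [0, 1] for a scenario.

    complexity    -- extent of complexity in the scenario's justification
    corroboration -- extent of distinct prior knowledge supporting its inferences
    conjectures   -- number of hypotheticals the scenario introduces
    """
    denominator = corroboration - conjectures
    if denominator <= 0:
        # More conjecture than corroboration: treat as maximally implausible.
        return 0.0
    implausibility = complexity / denominator
    # Clamp so heavily convoluted scenarios bottom out at zero plausibility.
    return max(0.0, 1.0 - implausibility)

# A simple, well-corroborated scenario scores higher than a convoluted one.
simple = plausibility(complexity=1, corroboration=5, conjectures=1)      # 0.75
convoluted = plausibility(complexity=3, corroboration=4, conjectures=2)  # 0.0
```

As the two sample calls suggest, the score rises with corroboration and falls as complexity and conjecture accumulate, which matches the qualitative summary quoted above.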

In another paper (“Making the Implausible Plausible”), Connell shows how the perceived plausibility of a scenario can be increased when it is represented as a causal chain or temporal sequence.

As you can see, many of the factors which make a scenario seem more plausible to an audience are also ones that would likely increase the same audience’s estimate of its subjective probability.

So is there any reason to use plausibility instead of subjective probability?

Writing in the 1940s, the English economist George Shackle proposed what he called “Potential Surprise Theory” as an alternative to the use of subjective probability when thinking about potential future outcomes.

Shackle’s objection to the use of subjective probability was what he termed “the problem of additivity” in situations of uncertainty, where the full range of possible outcomes is unknown. Assume that at first you identified four possible outcomes and assigned them probabilities of 50%, 35%, 10%, and 5%, following the normal requirement that the probabilities of a given set of outcomes sum to 100%.

What happens if you later identify two new potential outcomes? Logically, the subjective probabilities of the new set of six possible outcomes should be adjusted so that they once again sum to 100%. But if those outcomes are generated by complex socio-technical systems (as is usually the case for many business and political decisions), causal relationships are only partially understood, and often confusing (e.g., because of time delays and non-linearities). This makes it very hard to adjust subjective probabilities on any systematic basis.
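The additivity problem can be made concrete with a toy calculation. The initial four probabilities come from the text; the 10% weights assigned to the two newly identified outcomes, and the pro-rata rescaling of the original estimates, are arbitrary choices made purely for illustration. The point is that probability forces *some* such adjustment, while offering no systematic basis for choosing it.

```python
# Initial set of four outcomes, with probabilities summing to 100%.
initial = {"A": 0.50, "B": 0.35, "C": 0.10, "D": 0.05}
assert abs(sum(initial.values()) - 1.0) < 1e-9

# Two new outcomes are later identified (weights are assumed here).
# Their 20% must be taken from the original estimates somehow; a
# pro-rata shrink is just one of many equally defensible choices.
new_outcomes = {"E": 0.10, "F": 0.10}
scale = 1.0 - sum(new_outcomes.values())

revised = {k: v * scale for k, v in initial.items()} | new_outcomes
assert abs(sum(revised.values()) - 1.0) < 1e-9  # forced back to 100%
```

After the rescale, the probability of outcome A has silently dropped from 50% to 40% even though nothing about A itself has changed, which is exactly the kind of unprincipled adjustment Shackle objected to.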

Moreover, given the presence of emergence in complex adaptive systems, the full set of possible outcomes that such systems can produce will never be known in advance, making it impossible for the probabilities associated with those possibilities that have been identified to logically sum to 100%.

Instead of quantitative probabilities, Shackle suggested that we focus on the degree of implausibility associated with possible future outcomes, as measured by the degree of surprise you would feel if a given outcome actually occurred. Importantly, and unlike probability, the extent of your disbelief (potential surprise) in a set of hypotheses does not need to sum to 100%, nor does your degree of disbelief in individual hypotheses need to be adjusted if additional hypotheses are added to a set.
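By contrast, the potential-surprise approach can be sketched with independent ratings per hypothesis. The 0-to-1 scale (0 meaning no surprise if the outcome occurred, 1 meaning maximal surprise) and the individual ratings below are illustrative assumptions, not part of Shackle's formulation.

```python
# Independent potential-surprise ratings for four hypotheses
# (0 = not at all surprised if it occurred, 1 = maximally surprised).
surprise = {"A": 0.1, "B": 0.2, "C": 0.7, "D": 0.9}

# Newly identified hypotheses are simply rated and added; the existing
# ratings require no adjustment, because nothing must sum to 100%.
surprise |= {"E": 0.4, "F": 0.8}
```

Note the contrast with the probability case: adding hypotheses E and F leaves the ratings for A through D untouched, and the total of all ratings carries no meaning at all.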

Just as important, in a group setting it is far easier to delve into the underlying reasons for an estimated degree of implausibility/potential surprise (e.g., the complexity of the logic chain, number of known unknowns and conjectures about them, etc.) than it is to do this for a subjective probability estimate (just as an analyst’s degree of uncertainty about an estimate is easier to “unpack” than her or his confidence in it).

In sum, most of the decisions we face today are characterized by true uncertainty rather than risk. Rather than defaulting to subjective probability methods when analyzing these decisions, managers should consider complementing them with an approach based on implausibility and surprise. Ideally, the two methods should arrive at the same conclusion; their real value, however, lies in situations when they do not agree, which forces boards and management teams to more deeply explore the underlying uncertainties confronting them.
