Categorizing Uncertainty to Better Manage It
24/Dec/17 12:59
One way to substantially improve an organization’s management and governance of strategic risk is through a more precise understanding of the concept of “uncertainty.”
We believe this is analogous to Philip Tetlock’s finding, after analyzing the results of The Good Judgment Project, that the use of more precise probabilities, or more narrowly defined probability categories, was one of the techniques that improved forecasters’ accuracy (see his book, “Superforecasting” and subsequent research papers). Having participated in the GJP, the author can attest that this was his personal experience as well.
In their paper “Deep Uncertainty,” Walker, Lempert, and Kwakkel define uncertainty as “any departure from the unachievable ideal of complete determinism.”
Various authors have created systems for defining different types and degrees of uncertainty.
For example, in “Classifying and Communicating Uncertainties in Model-Based Policy Analysis,” Kwakkel, Walker, and Marchau categorize uncertainty by where in a problem it occurs:
- System Boundary: The demarcation between the aspects of the real world that are included in the model and those that are not.
- Conceptual Model: Specifies the variables and relationships inside the model.
- Computer Model: The implementation of the conceptual model in computer code.
- Input Data: The values for the different parameters both inside the model and as inputs to the model.
- Model Implementation: Bugs and errors in the computer code or the hardware used to run the model.
- Processing of Model Output: How the model’s output is processed before it is presented to decision makers.
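To make this taxonomy concrete, here is a minimal sketch (our illustration, not from the paper) of how these location categories could be used to tag entries in a simple uncertainty register for a hypothetical forecasting model. All names and entries below are invented for the example:

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical names: UncertaintyLocation, Uncertainty, and the register
# entries below are all invented for this sketch.
class UncertaintyLocation(Enum):
    SYSTEM_BOUNDARY = auto()       # what is inside vs. outside the model
    CONCEPTUAL_MODEL = auto()      # variables and relationships in the model
    COMPUTER_MODEL = auto()        # implementation of the conceptual model in code
    INPUT_DATA = auto()            # parameter values and model inputs
    MODEL_IMPLEMENTATION = auto()  # bugs in the code or hardware
    OUTPUT_PROCESSING = auto()     # post-processing before presentation

@dataclass
class Uncertainty:
    description: str
    location: UncertaintyLocation

# Illustrative entries for a hypothetical demand-forecasting model.
register = [
    Uncertainty("Competitor responses are excluded from scope", UncertaintyLocation.SYSTEM_BOUNDARY),
    Uncertainty("Assumed linear price-demand relationship", UncertaintyLocation.CONCEPTUAL_MODEL),
    Uncertainty("Historical elasticity estimate may be outdated", UncertaintyLocation.INPUT_DATA),
]

for item in register:
    print(f"{item.location.name:22s} {item.description}")
```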
Many authors have adopted a three-category system for categorizing the nature of a given uncertainty:
“Aleatory” uncertainty is irreducible variability from sources of randomness in a system.
“Epistemic” uncertainty (i.e., “known unknowns”) is caused by lack of knowledge (or, alternatively unreliable or conflicting information), for example about the proper structure of our model of a system, or the correct values for the variables it contains.
In theory, epistemic uncertainty can be reduced through more knowledge and/or data. In practice, however, this is often not the case, particularly in complex adaptive systems. Echoing John Maynard Keynes and Frank Knight’s earlier discussions of this issue, former Bank of England Governor Mervyn King has recently called this condition “radical uncertainty” (see his book, “The End of Alchemy”). Critically, he notes the dangers created by our use of inherently fragile common assumptions, conventions, and stories to deal with situations or systems characterized by radical uncertainty (a point also made by Robert Shiller in his book, “Narrative Economics”).
“Ontological” or “unexpected” uncertainties are those about which we remain unaware – Donald Rumsfeld’s famous “unknown unknowns.”
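The aleatory/epistemic distinction can be illustrated with a short simulation sketch (our own toy example, not drawn from any of the cited works). The day-to-day demand noise below is aleatory, while the unknown price elasticity is epistemic; all parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
price = 10.0

# Epistemic uncertainty: we are unsure of the true price elasticity, so we
# carry several candidate values (more data could, in principle, narrow this).
candidate_elasticities = [-0.8, -1.0, -1.2]

# Aleatory uncertainty: day-to-day demand shocks, modeled as random noise
# that no amount of additional knowledge will remove.
def simulated_demand(elasticity: float, n_days: int = 10_000) -> np.ndarray:
    expected = 1_000.0 * price ** elasticity   # hypothetical demand curve
    noise = rng.normal(loc=0.0, scale=5.0, size=n_days)
    return expected + noise

for e in candidate_elasticities:
    demand = simulated_demand(e)
    print(f"elasticity={e:+.1f}  mean demand={demand.mean():7.2f}  "
          f"day-to-day std={demand.std():5.2f}")

# The spread of the means across elasticities reflects epistemic uncertainty;
# the within-run standard deviation reflects aleatory variability.
```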
Finally, in their paper on “Deep Uncertainty,” Walker, Lempert, and Kwakkel propose a five-level scale for grading the severity of the uncertainties we may face in a given situation or decision:
“Level 1 Uncertainty: A situation in which one admits that one is not absolutely certain, but one is not willing or able to measure the degree of uncertainty in any explicit way. Level 1 uncertainty is often treated through a simple sensitivity analysis of model parameters.”
“Level 2 Uncertainty: Any uncertainty that can be described adequately in statistical terms.”
“Level 3 Uncertainty: A situation in which one is able to enumerate multiple alternatives and is able to rank the alternatives in terms of perceived likelihood. In the case of uncertainty about the future, Level 3 uncertainty is often captured in the form of a few trend-based scenarios based on alternative assumptions about the driving forces. These scenarios are ranked according to their perceived likelihood, but no probabilities are assigned.”
“Level 4 Uncertainty: A situation in which one is able to enumerate multiple plausible alternatives without being able to rank their perceived likelihood. This inability can be due to a lack of knowledge or data about the mechanism or functional relationships being studied; but it can also arise from a lack of agreement on the appropriate model(s) to describe interactions among the system’s variables, on the probability distributions to represent uncertainty about key parameters of the model(s), and/or on how to value the desirability of alternative outcomes.”
“Level 5 Uncertainty: The deepest level of recognized uncertainty; in this case what is known is only that we do not know” (i.e., known unknowns).
To this list we would add Level 6 Uncertainty, or Rumsfeld’s “Unknown Unknowns,” which is always present, if not always acknowledged.
As Payzan-LeNestour and Bossaerts have shown, the difference between Level 5 and Level 6 is often not clear-cut, with one person recognizing an unknown while another does not (see “Risk, Unexpected Uncertainty, and Estimation Uncertainty”).
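To show how the lower levels of this scale are often handled in practice, here is a minimal sketch (again our own illustration, not from Walker, Lempert, and Kwakkel): a one-at-a-time parameter sweep of the kind mentioned for Level 1, followed by a Monte Carlo summary of the kind that suffices at Level 2. The toy profit model and all values are hypothetical:

```python
import numpy as np

# A toy profit model; the functional form and all numbers are hypothetical.
def profit(price, volume, unit_cost):
    return (price - unit_cost) * volume

base = {"price": 10.0, "volume": 1_000.0, "unit_cost": 6.0}

# Level 1: we admit we are uncertain but do not quantify it explicitly,
# so we perturb each parameter +/-10% and see how much the output moves.
for name in base:
    for factor in (0.9, 1.1):
        scenario = dict(base, **{name: base[name] * factor})
        print(f"Level 1  {name:9s} x{factor:.1f} -> profit = {profit(**scenario):8.0f}")

# Level 2: the uncertainty can be described statistically, so we place
# (assumed) distributions on the inputs and report the output distribution.
rng = np.random.default_rng(1)
n = 100_000
samples = profit(
    price=rng.normal(10.0, 0.5, n),
    volume=rng.normal(1_000.0, 100.0, n),
    unit_cost=rng.normal(6.0, 0.3, n),
)
low, high = np.percentile(samples, [5, 95])
print(f"Level 2  mean profit = {samples.mean():.0f}, 5th-95th pct = [{low:.0f}, {high:.0f}]")
```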
In our work over the years as consultants and executives, we have found that it is usually far more productive for groups to discuss the uncertainties in a given situation or decision than the extent of one’s subjective confidence about a given conclusion or recommendation.
Uncertainties invite further investigation and pragmatic discussion, while questioning a colleague’s degree of confidence can too easily be interpreted as a personal attack.
In addition, researchers have found that explicitly discussing known unknowns significantly reduces overconfidence in estimates and decisions (see “Known Unknowns: A Critical Determinant of Confidence and Calibration” by Walters et al.).
The frameworks described above can make management and governance discussions about uncertainties even more productive and useful.