Reduce Overload by Seeking High Value Information
02/Jan/20 12:01
At Britten Coyne Partners, we provide clients with advice, education, and forecasting offerings to help them better anticipate, accurately assess, and adapt in time to emerging threats to their strategy and survival.
One of the points we stress is the importance of reducing overload by searching for high value information, and using it to update prior beliefs (e.g., estimates, forecasts, models). But what constitutes high value information?
We focus on two types.
The first is "high likelihood indicators." In the face of an uncertain future, we often have a mix of probabilistic beliefs about different scenarios or outcomes that could develop in the future. High likelihood indicators are valuable because their presence (or absence) is much more likely to be observed in the case of one scenario or outcome compared to the others. Put differently, high likelihood indicators reduce our uncertainty about the future.
The second type of high value information is anything that triggers the feeling of surprise, which is usually evidence that is at odds with what our mental model of a system or situation would lead us to expect. Surprise is valuable because it increases our uncertainty about the accuracy of our mental models, and signals that we need to improve them.
Unfortunately, the feeling of surprise, and its ability to focus our attention on high value information, is very often fleeting. As Daniel Kahneman has noted, our “System 1” brain automatically seeks to make new sensory inputs cohere with our existing beliefs. This often happens very quickly; we find ourselves remembering feeling surprised, but forgetting what it was that triggered it.
For that reason, leveraging surprise requires us to call on “System 2”, our conscious deliberative brain. In our experience, the best way to do this is to write down the information that triggered the feeling of surprise, so that we can later consider its implications.
Implications of the Queen's Speech for Board Strategic Risk Governance
19/Dec/19 14:03
Today's Queen's Speech has potentially important implications for board practice and strategic risk governance, initially in the UK, and perhaps later in other Anglosphere nations.
The recent update to the UK Corporate Governance Code set a new requirement for boards to carry out a robust assessment of emerging risks. Today's speech announced the creation of a new Audit, Reporting, and Governance Authority, which may have the power to pursue criminal charges against directors who make misleading statements to financial markets.
Clearly, this will very substantially raise the bar for the required rigour of the processes and systems boards use to anticipate, assess, and adapt in time to emerging strategic threats, which is the focus of our work at Britten Coyne Partners.
Important Lessons from Two New Reports on Communicating Uncertainty
03/Dec/19 13:54
Risk, Uncertainty, and Ignorance are terms which often confuse us and bedevil our decisions.
Let’s therefore begin with some definitions.
As we use the term, “risk” refers to situations in which the frequency distribution of the outcomes of a data generating process of interest is known, and the underlying process will not change in the future. Some writers call this “aleatory” or “irreducible” uncertainty (i.e., uncertainty caused by random factors, such as measurement error).
In situations of so-called “Knightian” uncertainty (after the economist Frank Knight, who first wrote about it in 1921 in “Risk, Uncertainty, and Profit”), these conditions don’t hold: neither the historical frequency distribution of possible outcomes nor whether the underlying outcome generating process is stationary or evolving is known. In theory, however, they are knowable. Some have termed this “epistemic” uncertainty, as it arises from a lack of knowledge that in principle could be discovered.
In situations of ignorance, not only are we unaware of the range of possible outcomes and the nature of the outcome generating process, but these are also unknowable. As a result, we have no reliable basis for forecasting the future. John Maynard Keynes wrote about this in 1921 in his “Treatise on Probability”, and later in 1936 in his “General Theory of Employment, Interest, and Money”, in which he highlighted our reliance in such circumstances on socially accepted (but ultimately fragile) “conventions” that enable action in the face of our ignorance. Some have termed this “ontological uncertainty”, which Donald Rumsfeld famously called “unknown unknowns”. As we have noted elsewhere, ontological uncertainty increases exponentially as socio-technical systems become more complex over time.
Too often, we confuse these three types of uncertainty and their implications. When people discuss uncertainty, they often do so in ways that imply it is relatively well-behaved risk. However, most real-world decisions are made in the face of far more challenging epistemic and ontological uncertainty, which people and organizations are often reluctant to acknowledge.
For example, in “Communicating Uncertainty in Policy Analysis”, Professor Charles Manski of Northwestern University notes that “the term ‘policy analysis’ describes scientific evaluations of the impacts of past public policies and predictions of the outcomes of potential future policies. A prevalent practice has been to report policy analysis with incredible certitude. That is, exact predictions of policy outcomes are routine, while expressions of uncertainty are rare. However, predictions and estimates often are fragile, resting on unsupported assumptions and limited data. Therefore, the expressed certitude is not credible.”
On the other hand, the financial services, military, and intelligence communities have been more willing to recognize and try to communicate epistemic and ontological uncertainty, but have done so using multiple and often confusing definitions and approaches.
In light of this background, we were very excited to see the recent publication of two large analyses of the challenges in communicating uncertainty. The first is by the NATO Science and Technology Organization: “Assessment and Communication of Uncertainty in Intelligence to Support Decision Making”. The second is by the Royal Society: “Communicating Uncertainty About Facts, Numbers, and Science”, by van der Bles et al.
Both of these are well worth a read in their entirety. Here we will just present some highlights.
Chapter 19 of the NATO report is titled, “How Intelligence Organizations Communicate Confidence (Unclearly)”. It notes that, “Given that intelligence is typically derived from incomplete and ambiguous evidence, analysts must accurately assess and communicate their level of uncertainty to consumers. One facet of this perennial challenge is the communication of analytic confidence, or the level of confidence that an analyst has in his or her judgements, including those already qualified by probability terms such as “very unlikely” or “almost certainly”. Consumers are better equipped to make sound decisions when they understand the methodological and evidential strength (or flimsiness) of intelligence assessments. Effective communication of confidence also militates against the pernicious misconception that the intelligence community (IC) is omniscient.”
“As part of broader efforts to improve communication fidelity and rein in subjectivity in intelligence production, most intelligence organizations have adopted standardized lexicons for rating and communicating analytic confidence. These standards provide a range of confidence levels (e.g., high, moderate, low), along with relevant rating criteria, and are often paired with scales used to express estimative probability.”
Consider, for example, the Canadian standard:
“CFINTCOM instructs analysts to clearly indicate both their level of confidence and their reasons for ascribing it. Analytic confidence is based on three main factors:
- Evidence: The strength of the knowledge base, to include the quality of the evidence and our depth of understanding about the issue.
- Assumptions: The number and importance of assumptions used to fill information gaps.
- Reasoning: The strength of the logic underpinning the argument, which encompasses the number and strength of analytic inferences as well as the rigour of the analytic methodology applied to the [intelligence] product” …
“Analysts are expected to outline their confidence ratings in a dedicated textbox, or to integrate them into the narrative text of the product:
Confidence Levels
- High: Well-corroborated information from proven sources, low potential for deception, noncritical assumptions and/or gaps, or undisputed reasoning.
- Moderate: Partially corroborated information from good sources, moderate potential for deception, potentially critical assumptions used to fill gaps, or a mix of inferences.
- Low: Uncorroborated information from good or marginal sources, high potential for deception, key assumptions used to fill critical gaps, or mostly weak inferences.”
After a very interesting comparison of the confidence rating scales used by various organizations, the NATO report notes that “the analytic confidence standards examined generally incorporate the following determinants:
- Source reliability
- Information credibility
- Evidence consistency/convergence
- Strength of logic/reasoning
- Quantity and significance of information gaps and assumptions used [to fill them].”
However, it also notes that, “Few standards attempt to operationalize these determinants or outline formal mechanisms for evaluation. Instead, they tend to provide vague, qualitative descriptions for each confidence level, which may lead to inconsistent confidence assessments.”
The NATO authors also observe that, “intelligence consumers tend to disaggregate analytic confidence into three dimensions:
- The reliability of evidence [higher means higher confidence in the assessment];
- The range of reasonable expert opinion [narrower means higher confidence]; and,
- The potential responsiveness of the analysis to new information [lower means higher confidence].”
The chapter concludes with a recommendation to combine a verbal expression with a quantitative expression of analytic confidence, as well as an “informative, case-specific rationale” for this confidence level.
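To make that recommendation concrete, here is a minimal, hypothetical sketch in Python of what a combined verbal and quantitative statement might look like; the judgement, terms, numbers, and rationale below are our own illustration, not drawn from the NATO report or any official standard:

# Hypothetical, illustrative example only; the values and wording are not
# taken from the NATO report or from any intelligence community standard.
assessment = {
    "judgement": "Scenario X will occur within the next 12 months",
    "probability_phrase": "likely",       # verbal expression of estimative probability
    "probability_estimate": 0.65,         # accompanying quantitative expression
    "confidence_level": "moderate",       # verbal expression of analytic confidence
    "confidence_rationale": (
        "Partially corroborated reporting from good sources; "
        "two potentially critical assumptions used to fill information gaps."
    ),
}

for field, value in assessment.items():
    print(f"{field}: {value}")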
This NATO chapter highlights why, in our forecasting work, we prefer to focus on the level of uncertainty rather than the level of confidence. Discussions focused on the forecaster’s level of confidence are not only confusing; they also can very easily (and often do) trigger emotional defenses. In contrast, discussions focused on the key uncertainties associated with a forecast (and how they might be reduced) are almost always more productive.
In “Communicating Uncertainty About Facts, Numbers, and Science” the authors’ focus is on epistemic uncertainty. They begin by noting that, “in an era of contested expertise, many shy away from openly communicating their uncertainty about what they know, fearful of their audience’s reaction.” For this reason, they explore what the limited research that has been done tells us about the effects of communicating our epistemic uncertainty on listeners’ cognition, affect, trust, and decision-making.
Their goal is to provide, “a cohesive framework that aims to provide clarity and structure to the issues surrounding communication [of uncertainty].”
This framework has three elements: The object of epistemic uncertainty, the source of uncertainty, and the level of uncertainty.
As the authors describe it, the possible objects of epistemic uncertainty include:
- “Facts that are (at least theoretically) directly verifiable;
- “Numbers, that are continuous variables that describe the world, that may, at least in principle, be directly observable, or they may be theoretical constructs which are used as parameters within a model of the world”; and,
- “Scientific hypotheses, that are theories about how the world works, expressed as structural models of the relationships between variables.” Depending on the circumstances, this can cross the border into ontological uncertainty, as the authors acknowledge: “We should in principle distinguish between uncertainty about the adequacy of a model to represent the world and uncertainty about the world itself…However, in practice, the lines between these often get blurred.”
With respect to the sources of uncertainty, the authors distinguish between:
- Irreducible variability – i.e., aleatory uncertainty;
- Limited knowledge or ignorance about the underlying results-generating processes; and,
- Disagreements among experts about the meaning of available evidence.
Finally, the authors assert that there are two possible “levels” of uncertainty:
- “Direct” or “first-order” uncertainty about one or more of the sources of uncertainty;
- “Indirect” or second order uncertainty “about the quality of the knowledge and evidence upon which our uncertainty assessments are based”, which underlies our subjective judgment about the confidence we have in any claim we make.
The authors then proceed to a discussion of the impact of communicating uncertainty on “recipients’ cognition, affect, trust, and decision-making.”
They note at the outset the relative lack of research in many of these areas, and the focus of much of the research that exists on aleatory, rather than epistemic or ontological uncertainty. They also note that most of the research focuses on direct uncertainty, and not indirect uncertainty about the quality of underlying evidence.
With respect to cognitive reactions to communications regarding uncertainty, the authors note the conclusion of many studies about the wide range of probabilities that people attach to “words of estimative probability”, such as “unlikely”, “possible”, “probable”, and “almost certain” [hence the recommendation that verbal expressions should be accompanied by quantitative probabilities]. They also note how, when presented with a “most likely” result within a numerical range of possible outcomes, most people will interpret the range as either a uniform (flat) or normal (bell curve/Gaussian) distribution. This can be extremely problematic, because complex adaptive systems tend to generate Pareto/power law distributions of outcomes, not ones that are flat or bell shaped.
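A rough, hypothetical calculation (ours, not the authors') shows why this misreading matters: for outcomes well beyond the communicated range, a power-law-tailed process implies a probability several orders of magnitude higher than a normal distribution would suggest.

# Hypothetical comparison of tail probabilities under a normal versus a
# Pareto (power law) reading of the same forecast range.
import math

def normal_tail(z):
    # Probability that a standard normal outcome exceeds z "typical units".
    return 0.5 * math.erfc(z / math.sqrt(2))

def pareto_tail(x, alpha=2.0, x_min=1.0):
    # Probability that a Pareto outcome (minimum x_min, tail index alpha) exceeds x.
    return (x_min / x) ** alpha

threshold = 5.0  # an outcome five "typical units" beyond the centre of the range
print(f"P(outcome > 5) under a normal reading: {normal_tail(threshold):.1e}")  # about 2.9e-07
print(f"P(outcome > 5) under a Pareto reading: {pareto_tail(threshold):.1e}")  # 4.0e-02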
Moving on to people’s emotional reactions to communications of uncertainty, the authors find that “dual-process theories in psychology commonly describe two systems of thinking, with one system being more ‘analytic’, following rules of logic, probability and deliberation, whereas the other is more associative, fast, intuitive and affective…[These lead to] people processing uncertain information in two qualitatively different ways, [distinguishing] between processing risk as analysis versus risk as feelings…with the latter often dominant in processing risk information.”
The authors also note that, “research into how the communication of uncertainty impacts trust and credibility is very sparse.” That said, “at a generic level, there are some near-universal aspects of human social cognition that assist people in determining whom and what information to trust. Two of these basic dimensions include ‘competence’ and ‘warmth’. Affect and cognition fuse together here in establishing trust…In order to be perceived as credible, both ‘cold’ expertise is required (knowledgeability) as well as a perceived motivation to be sincere and truthful (warmth), that is, a feeling of trust… Yet, whether greater transparency in the communication of uncertainty will enhance credibility and public trust remains an open empirical question. On the one hand, presenting information as certain (when it is not) is misleading and can damage and undermine public trust. Thus, emphasizing uncertainty may help signal transparency and honesty. On the other hand, explicitly conveying scientific uncertainty may be used to undermine the perceived competence of the communicator as people tend to use precision as a cue for judging expertise.”
Regarding the impact of uncertainty communication on behavior and decision-making, the authors stress that there has been no systematic research on the impact of epistemic uncertainty. There has, however, been research on the impact of aleatory uncertainty communications, which have generally been found to improve the quality of decisions. However, they also speculate that communications about ontological uncertainty “may interfere with people’s basic psychological needs for control and predictability [leading, for example, to confirmation and conformity biases], whereas epistemic uncertainty about the past or present may not always be subject to the same concerns.”
Conclusion
Former Bank of England Governor Mervyn King has observed that we now live in an age of “radical uncertainty.” Given our strong evolutionary aversion to this feeling, one of the great challenges we face in this new environment is how to effectively communicate about uncertainty itself. Both of these reports make a substantial contribution to helping us do that.
Review of “Forecasting in Social Settings: The State of the Art” by Makridakis et al
30/Oct/19 17:54
In our course on Strategic Risk Management and Governance, we note the very substantial challenge of forecasting the future behavior of complex adaptive systems made up of human beings and their organizations. There are many reasons for this, including:
- Agents pursue multiple goals, with different incentives and priorities, and may change their goals and priorities over time as the system evolves;
- When deciding on actions to achieve their goals, agents differ in terms of the range of experiences they can draw on, and their cognitive ability to reason multiple time steps ahead about the likely consequences of their actions;
- Agents differ in their perceptions of the environment, and their beliefs about the future;
- Agents differ in the structure of their social networks, which also evolve over time (more technically, the data generating process in complex adaptive systems is non-stationary, which reduces the usefulness of historical results as a guide to future outcomes);
- Agents decide on their actions based not only on rational calculation, but also on their emotional reactions to competing narratives as well as the potential social impacts of their decisions;
- Agents differ in their desire to conform to the beliefs and copy the actions of other members of their group, with the latter typically increasing with the level of perceived uncertainty;
- Social feedback loops can produce emergent non-linear collective phenomena like herding, fads, booms and busts. Such extreme events have proven very hard to forecast consistently.
Taken together, these factors usually cause the accuracy of forecasts of complex adaptive system behavior to exponentially decline as the time horizon lengthens.
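As a stylized illustration of this compounding effect (our own simplification, which assumes independent per-period errors that real systems rarely exhibit), suppose a forecast is right in any single period with probability p; treated independently, the chance of being right over an h-period horizon is roughly p to the power h:

# Stylized illustration of how forecast accuracy can decay with the horizon.
# Assumes (hypothetically) independent per-period errors.
p = 0.9  # assumed probability of a correct single-period forecast

for horizon in (1, 4, 8, 12):
    print(f"{horizon:>2} periods ahead: {p ** horizon:.2f}")
# 1 period: 0.90, 4 periods: 0.66, 8 periods: 0.43, 12 periods: 0.28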
Given this background, we read the new paper by Makridakis and his colleagues with great interest.
At the outset, the authors clearly state that, “although forecasting in the physical sciences can attain amazing levels of accuracy, such is not the case in social contexts, where practically all predictions are uncertain, and a good number can be unambiguously wrong.”
There are a number of reasons for this. “First, there is usually a limited theoretical basis for presenting a causal or underlying mechanism” for the target variable being forecasted. “Thus we rely on statistical approximations that roughly describe what we observe, but may not represent a causal [process].” Second, “despite the deluge of data that is available today, much of this information does not concern what we want to forecast directly…Third, what we are trying to forecast is often affected by the forecasts themselves…Such feedback does not occur in weather forecasts…For these reasons, social science forecasts are unlikely to ever be as accurate as forecasts in the physical sciences, and the potential for improvements in accuracy is somewhat limited.”
The authors also note that when it comes to forecasting social systems, “unless uncertainty is expressed clearly and unambiguously, forecasting is not far removed from fortune-telling.” However, uncertainty about judgmental forecasts of social system behavior is likely to be “underestimated greatly” for two reasons:
“First, our attitude to extrapolating in a linear fashion from the present to the future, and second, our fear of the unknown and our psychological need to reduce the anxiety associated with such a fear by believing we can control the future by predicting it accurately.”
Use of statistical instead of judgmental forecasting models improves the treatment of uncertainty, but this approach is far from perfect. The authors claim that, “there are at least three reasons for standard statistical models’ underestimations of the uncertainty:”
- “Probably the biggest factor is that model uncertainty is not taken into account. The prediction intervals are produced under the assumption that the model is ‘‘correct’’, which clearly is never the case.” The authors note that combining forecasts made using different models reduces this uncertainty.
- “Even if the model is specified correctly, the parameters must be estimated, and also the parameter uncertainty is rarely accounted for in time series forecasting models.” However, techniques like Monte Carlo simulation allow parameter uncertainty to be made explicit.
- “Most prediction intervals are produced under the assumption of Gaussian [normally distributed] errors. When this assumption is not correct, the prediction interval coverage will usually be underestimated, especially when the errors have a fat-tailed distribution [as is often the case in complex adaptive systems, which tend to produce outcomes that follow a Pareto/power law rather than a normal/bell curve distribution].” A rough numerical illustration of this point follows below.
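Here is a minimal numerical sketch of that third point (our own illustration, not the authors' code), using a Student-t distribution with three degrees of freedom as a stand-in for power-law-tailed errors: an interval sized for a nominal 1% miss rate under the Gaussian assumption in fact misses roughly twice as often.

# Illustrative Monte Carlo check (not from the paper): how often fat-tailed
# errors fall outside a prediction interval sized under a Gaussian assumption.
import random

random.seed(0)
n = 200_000
z99 = 2.576  # a nominal 99% Gaussian interval spans +/- 2.576 standard deviations

def t3_unit_sd():
    # Student-t variate with 3 degrees of freedom (power-law tails),
    # rescaled to unit standard deviation (the sd of a t3 variate is sqrt(3)).
    g = random.gauss(0.0, 1.0)
    chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(3))
    return (g / (chi2 / 3) ** 0.5) / 3 ** 0.5

miss_rate = sum(abs(t3_unit_sd()) > z99 for _ in range(n)) / n
print(f"Nominal miss rate: 1.0%   Actual miss rate: about {miss_rate:.1%}")  # roughly 2%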
The paper also includes sections on different types of uncertainty, the challenges of incorporating causality into forecasting models, and the difficulty of predicting one-off and extreme events.
In sum, the authors have produced an excellent (and extensively referenced) overview of the current state of the art of forecasting in social settings.
The Critical Importance of Process in Board Decision Making
21/Oct/19 08:45
When we were researching the relationship between Board Chairs and CEOs we asked our interviewees about the processes they used in the boardroom. Mostly this elicited a response along the lines of “We have an agenda and we take minutes”. It seems most Boards of directors confuse process with procedure. Procedure may be involved, but process is a much broader concept: a structured method or sequence of steps or activities used not for administrative consistency or compliance, but for the quality of the outcome.
There is perhaps a tendency for directors to view the discipline of process as a constraint on the free rein of experienced judgement. If so, this would be a mistake. A very wide body of experience and research has demonstrated that Board decision making is no less subject to the pitfalls of cognitive and behavioural limitations and biases than any other human activity. In certain important respects, in specific cases, Board decision making has been shown to be worse than in other, similarly important contexts. This is especially true of the performance of Boards as regards Strategic Risk.
There are two compelling reasons for Boards to take another look at their decision making processes – the first is the consequences for directors when their processes are absent or poor and the second is the real potential for improved Board effectiveness.
Traditionally, Board decision making has been defended in practice from external legal scrutiny by a convention, sometimes reinforced at common (i.e. judge-made) law, that “business judgement” was not justiciable. This assumption has long been established. Yet even if it was ever wholly true, which is debatable, clear political and regulatory trends are moving against it.
In the UK, statute law has been enacted to both clarify and tighten directors’ accountabilities. Consequently, as recent research* (led by Professor Joan Loughrey at the University of Leeds) has highlighted, there has been dramatic growth in directors’ decisions being challenged in the courts. In less than a decade such cases grew by a factor of 10. Moreover, when directors are taken to court the probability is that they will lose. 63% of cases reviewed by Prof. Loughrey’s team were found against the directors, i.e. they were found liable for the consequences of their decisions. The proportion of cases being found against directors is also increasing. “Business Judgement”, at least in the UK, is no longer (if it ever really was) a blanket defence.
Even in the US, the legal environment is moving in the direction of greater director liability, especially in the arena of risk governance. The Supreme Court of Delaware, long a bellwether for the legal approach to corporate governance in the US due to the large number of corporations domiciled in the state, has tended to err in defence of the “business judgement” principle. Historically, the so-called Caremark doctrine held that directors would not be held liable for a failure in risk oversight unless there was “… an utter failure to ensure … [ ] … a reasonable system exists.” Now that seems to be changing too. In a recent judgement** the court found that “… directors must make a good faith effort to implement a [risk] oversight system and monitor it themselves.” This new ruling places the onus of accountability squarely on the Board to demonstrate that an effective process is in place.
What does this mean for Board decision making? For a start it means much more than simply recording decisions in minutes. Prof. Loughrey’s team found that when cases were found in favour of directors it was most frequently because the defendants could show that they had followed a clear decision making process, supported by relevant and timely information and advice.
Thus, there is a clear case for Boards to pay greater attention to the appropriateness and rigour of their decision making processes, especially when critical strategic issues and risks are involved. Yet even if self-protection is not sufficient justification there is another good reason for Boards to up their decision making process game. Better processes produce better outcomes.
Daniel Kahneman, Nobel prize winner and doyen of the behavioural science underpinning human decision making, recently published*** on why disciplined decision making processes are required. More than 30 years of research have given Kahneman deep insights into the cognitive and behavioural biases that afflict most human decision making. In the context of high stakes, high impact strategic Board-level decision making, these biases are frequently exacerbated by the individual egos, personalities and group dynamics involved. To counter these conditions Kahneman and his co-authors set out a compelling case for why such processes are needed.
In case after case of major corporate failure, the root cause can be found to be poor Board decision making. Yet when Boards get it wrong it is not just the individual directors who may have cause for regret. The consequences and costs fall far more widely than on the directors involved. Indeed, directors are most frequently cushioned from the financial impacts; employees, suppliers, customers and society at large are not. It can be argued that Boards and directors have a clear moral responsibility to improve their critical decision making processes.
* Business Judgement and the Courts; Centre for Business Law and Practice, University of Leeds; University of Liverpool Management School
** Marchand v. Barnhill, No. 533, 2018 (Del. June 19, 2019)
*** A Structured Approach to Strategic Decisions; Daniel Kahneman, Dan Lovallo, Olivier Sibony; MIT Sloan Management Review, March 04, 2019