Asking the Right Forecasting Questions
18/Jun/18 11:57
During the four years I spent on the Good Judgment Project team, I learned and applied a set of techniques that, as shown in Philip Tetlock's book "Superforecasting", significantly improved forecast accuracy, even in the case of complex socio-technical systems.
Far less noted, however, was the second critical insight from the GJP: the importance of asking the right forecasting questions. Put differently, the real value of a forecast is a function of both the question asked and the accuracy of the answer provided.
This raises the issue of just how an individual or organization should go about deciding on the forecasting questions to ask.
There is no obvious answer.
A rough analogy is to three types of reasoning: inductive, deductive, and abductive. In the case of induction, there are well-known processes to follow when weighing evidence to reach a conclusion (see our previous blog post on this).
Even more well-developed are the rules for logically deducing a conclusion from major and minor premises.
By far the least well codified type of reasoning is abduction — the process of generating plausible causes for an observed or hypothesized effect (sometimes called "inference to the best explanation").
To complete the analogy, we use abduction to generate forecasting questions, which often ask us to estimate the probability that a hypothesized future effect will occur within a specified time frame.
To develop our estimate, we use abduction to generate plausible causal hypotheses of how the effect could occur. We then use deduction to identify high-value evidence (i.e., indicators) we would be very likely to observe (or not observe) if the causal hypothesis were true (or false).
After seeking to collect or observe the evidence for and against various hypotheses, we use induction to weigh it and reach a conclusion — which in this example takes the form of our probability estimate.
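The abduction → deduction → induction loop described above can be made concrete with a small sketch. The numbers, indicators, and likelihood ratios below are purely illustrative assumptions, not from the post; the inductive "weighing" step is shown here as a simple Bayesian odds update over deduced indicators.

```python
def update_odds(prior_prob, indicators):
    """Turn a prior probability into a posterior given indicator evidence.

    Each indicator is a (likelihood_ratio, observed) pair, where
    likelihood_ratio = P(indicator | hypothesis true) / P(indicator | hypothesis false).
    If an indicator we deduced we should see is NOT observed, the absence
    counts against the hypothesis, so we divide by the ratio instead.
    """
    odds = prior_prob / (1.0 - prior_prob)
    for likelihood_ratio, observed in indicators:
        odds *= likelihood_ratio if observed else 1.0 / likelihood_ratio
    return odds / (1.0 + odds)

# Hypothesized effect: "X occurs within 12 months", with a 30% prior.
# Two deduced indicators were observed; a third, expected one was not.
estimate = update_odds(0.30, [(3.0, True), (2.0, True), (4.0, False)])
print(round(estimate, 3))  # → 0.391
```

The point of the sketch is the division of labor: abduction proposes the hypothesis, deduction picks the indicators and their expected likelihood ratios, and the update itself is the inductive step that yields the probability estimate.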
So far, so good. But this raises the question of what guides our abductive reasoning. Drawing on Judea Pearl's hierarchy of different types of reasoning, we can identify three approaches to generating forecasting questions.
Pearl's lowest level of reasoning is associational (also sometimes called correlational). Basically, if a set of factors (e.g., observable evidence) existed in the past at the same time as a given effect, we assume that the effect will also occur in the future if the same set of factors exists or occurs. Note that there is no assumption here of causation, only statistical association.
Simple historical reasoning provides an example of this: given an important effect that occurs multiple times, we can seek factors that are common to the given cases and use them to formulate forecasting questions about the potential for the same effect to occur in the future. To be sure, this is an imperfect approach, because the complex systems that produce observed historical outcomes are themselves constantly adapting and evolving. It is for this reason that it is often said that while history seldom repeats, it often rhymes.
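As a toy illustration of this associational approach (the cases and factor names below are hypothetical), one can take the factors present in each historical case of an effect, intersect them to find what the cases share, and phrase each shared factor as a candidate forecasting question:

```python
# Factors present in three hypothetical past cases of the same effect
# (e.g., a financial crisis). Names are illustrative only.
past_cases = {
    "case_1": {"rapid_credit_growth", "asset_price_boom", "weak_regulation"},
    "case_2": {"rapid_credit_growth", "asset_price_boom", "current_account_deficit"},
    "case_3": {"rapid_credit_growth", "asset_price_boom", "weak_regulation"},
}

# Factors common to every case -- the purely associational "signature".
common_factors = set.intersection(*past_cases.values())

# Turn each shared factor into a candidate forecasting question.
for factor in sorted(common_factors):
    print(f"Will {factor.replace('_', ' ')} be observed in region X "
          f"within the next 24 months?")
```

Note that the intersection encodes association only: nothing in the sketch claims the shared factors cause the effect, which is exactly the limitation of this lowest rung of the hierarchy.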
Pearl's next highest level of cognition explicitly creates mental models or more complex theories that logically link causes to effects. These theories can result from both qualitative and quantitative analysis. As noted above, we can use deduction to predict the future effects, assuming a given theory is true and specific causes occur. Hence, different causal theories (usually put forth by different experts) can be used to formulate forecasting questions.
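One way to picture this second approach (theory names, causes, and effects below are all hypothetical) is to encode each expert's causal theory as a rule mapping assumed causes to a predicted effect, and then deduce one candidate forecasting question per theory:

```python
# Hypothetical causal theories, each linking assumed causes to a predicted
# effect. Different experts would supply different entries.
theories = {
    "monetary_theory": {
        "causes": ["sustained interest rate hikes"],
        "predicted_effect": "a recession within 18 months",
    },
    "supply_shock_theory": {
        "causes": ["an energy price spike"],
        "predicted_effect": "inflation above 5% next year",
    },
}

def questions_from_theories(theories):
    """Deduce one candidate forecasting question per causal theory."""
    questions = []
    for name, theory in theories.items():
        questions.append(
            f"Given {' and '.join(theory['causes'])} ({name}), what is the "
            f"probability of {theory['predicted_effect']}?"
        )
    return questions

for question in questions_from_theories(theories):
    print(question)
```

Competing theories thus generate competing questions, and tracking which theory's predictions hold up is itself informative.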
Pearl's highest level of cognition is counterfactual reasoning (e.g., "if I hadn't done that, this wouldn't have happened" or "if I had done that instead, it would have produced this result"). One way to use counterfactual reasoning to generate forecasting questions is via the pre-mortem technique, in which you assume a plan has failed or a forecast has been badly wrong, and ask why this happened, including the evidence you missed and what you could have done differently. The results of pre-mortem analyses are often a rich source of new forecasting questions.
In sum, avoiding strategic failure is as much about taking the time to formulate the right forecasting questions as it is about using methods to enhance the accuracy with which they are answered.