What Makes an Effective Early Warning Indicator?

We’ve previously noted that, along with new causal hypotheses and data that surprises us, early warning indicators are among the most valuable new information we can receive. Potentially, they can provide a competitive advantage by giving us more time than other organizations to adapt to emerging threats and opportunities.

But while early warning indicators are easy to conceptualize and understand, establishing and using them in practice is often surprisingly difficult, for both boards and management teams.

The process of establishing early warning indicators begins with developing alternative forecasts for the way the future could evolve (e.g., via scenario construction or other techniques), and the opportunities and threats associated with each of them.

The defining characteristic of an effective early warning indicator is that it is relatively unambiguous: it has a much higher probability of being observed (or not observed) if one forecasted scenario is developing than under every alternative scenario. In Bayesian terms, an effective early warning indicator has a high likelihood ratio: the probability of observing the evidence if the scenario is developing, divided by the probability of observing it if it is not.
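To make the Bayesian point concrete, here is a minimal sketch in Python (the prior and the two conditional probabilities are purely illustrative, not drawn from any real case):

```python
def posterior_probability(prior: float,
                          p_obs_given_scenario: float,
                          p_obs_given_alternatives: float) -> float:
    """Update belief in a scenario after an indicator is observed,
    using Bayes' rule expressed through the likelihood ratio."""
    likelihood_ratio = p_obs_given_scenario / p_obs_given_alternatives
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Illustrative numbers only: an indicator five times more likely under
# the threat scenario (0.50) than under the alternatives (0.10) moves
# a 20% prior to roughly a 56% posterior.
print(posterior_probability(prior=0.20,
                            p_obs_given_scenario=0.50,
                            p_obs_given_alternatives=0.10))
```

An indicator with a likelihood ratio near one, by contrast, barely shifts the posterior no matter how confidently it is reported, which is why it makes a poor warning indicator.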

It is also important that the warning indicator (or combination of indicators that collectively have a high likelihood ratio) can be observed relatively easily, and as early as possible in the process through which a given threat or opportunity is expected to develop.

Finally, if it is to act as an effective spur to organizational action, a warning indicator should be embedded in an emotionally powerful narrative that engages people’s attention and motivates them to act.

So far, so good. Unfortunately, developing early warning indicators is the easy part. History has repeatedly shown that deciding to act after a warning has been received is the real challenge.

Why is that?

As Daniel Kahneman has shown in his research, at an individual level our minds automatically fight to maintain the coherence of our current mental models, and have a natural tendency to explain away warning indicator evidence that doesn’t fit with them. Moreover, the more a potential threat is at odds with the conventional wisdom (particularly if that wisdom is widely held by a group), the more strongly we will resist recognizing that it has become an imminent danger. This process is well captured by the old saying about how companies go bankrupt: at first slowly, and then rapidly.

Other research has found that we typically underreact to warning indicators that are based on the absence rather than the presence of evidence. That’s why Sherlock Holmes’ dog that didn’t bark remains such a striking story.

There is also an inescapable tradeoff between so-called “Type 1” and “Type 2” errors.

Type 1 errors are known as “false positives” or, more practically, “false alarms.” These are errors of commission, when you warn of a threat that never becomes dangerous.

Type 2 errors are known as “false negatives” or “missed alarms.” These are errors of omission, when you fail to warn of a threat that later becomes dangerous.
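The tension between the two is easiest to see with a simple simulation. In the sketch below (the indicator distributions and threshold values are invented purely for illustration), lowering a warning threshold reduces missed alarms only at the cost of more false alarms, and vice versa:

```python
import random

random.seed(42)

def alarm_rates(threshold: float, n: int = 100_000) -> tuple[float, float]:
    """Simulate a noisy warning indicator: benign situations center on 0.0,
    dangerous ones on 1.0. Returns (false alarm rate, missed alarm rate)
    for a given warning threshold."""
    false_alarms = sum(random.gauss(0.0, 0.5) > threshold for _ in range(n)) / n
    missed_alarms = sum(random.gauss(1.0, 0.5) <= threshold for _ in range(n)) / n
    return false_alarms, missed_alarms

# Sliding the threshold trades Type 1 errors against Type 2 errors;
# no threshold eliminates both.
for t in (0.25, 0.50, 0.75):
    fa, ma = alarm_rates(t)
    print(f"threshold={t:.2f}  false alarms={fa:.1%}  missed alarms={ma:.1%}")
```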

Most human beings have a greater desire to avoid errors of commission than errors of omission, because, at least in the short term, the former produce much stronger feelings of regret than the latter.

This tendency is reinforced by the nature of organizational politics.

As Gary Klein and others have shown, as organizations grow their focus subtly shifts from generating insights in order to become more effective, to avoiding errors in order to become more predictable and efficient.

Hence, in larger organizations issuing a warning involves far more career risk for individuals and political risk for a group than does going along with the conventional wisdom and risking an error of omission. This point was famously summed up by John Maynard Keynes: "Worldly wisdom teaches that it is better for reputation to fail conventionally than to succeed unconventionally."

So are we doomed to repeat history’s long track record of organizations that have been surprised (sometimes fatally) in spite of ample warning of impending danger?

We agree with researchers who have found that embedding warning indicators in emotionally powerful and credible stories significantly improves the chances that they will be taken seriously.

More importantly, we have also discovered that the nature of the story itself is critical. Specifically, effective warning stories clearly capture the evolving gap between the time remaining before an emerging risk becomes an existential threat and the time required for an organization to adequately respond to it.
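As a minimal illustration of the quantity such a story tracks (the time horizons and the WarningGap name below are hypothetical, not a prescribed model):

```python
from dataclasses import dataclass

@dataclass
class WarningGap:
    """Illustrative model of the dynamic an effective warning story tracks."""
    months_until_threat_is_existential: float
    months_needed_to_respond: float

    @property
    def gap(self) -> float:
        # Positive: time remains to act. Negative: the response window has closed.
        return self.months_until_threat_is_existential - self.months_needed_to_respond

# Hypothetical numbers: a threat that matures in 18 months, against a
# response that takes 12, leaves a 6-month margin that shrinks with
# every quarter of delay.
print(WarningGap(18.0, 12.0).gap)  # 6.0
```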

Focusing management teams’ and boards’ attention on the evolution of this gap is the surest way we know to ensure a timely and adequate response to well-designed early warning indicators.

