What Makes an Effective Early Warning Indicator?

We’ve previously noted that, along with new causal hypotheses and data that surprises us, early warning indicators are among the most valuable new information we can receive. Potentially, they can provide a competitive advantage by giving us more time than other organizations to adapt to emerging threats and opportunities.

But while they are easy to conceptualize and understand, in practice establishing and using early warning indicators is often surprisingly difficult, for both boards and management teams.

The process of establishing early warning indicators begins with developing alternative forecasts for the way the future could evolve (e.g., via scenario construction or other techniques), and the opportunities and threats associated with each of them.

The defining characteristic of an effective early warning indicator is that it is relatively unambiguous; it has a much higher probability of being observed (or not observed) if one forecasted scenario is developing than it does under every other alternative scenario. In Bayesian terms, an effective early warning indicator has a high likelihood ratio.
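To make the Bayesian point concrete, here is a minimal sketch in Python, using purely hypothetical probabilities, of how an indicator with a high likelihood ratio shifts the odds placed on a scenario once it is observed.

    # Hypothetical illustration: all probabilities below are assumptions, not data.
    def posterior_probability(prior_prob, p_indicator_given_scenario, p_indicator_given_alternatives):
        """Bayes' rule in odds form: posterior odds = prior odds x likelihood ratio."""
        prior_odds = prior_prob / (1.0 - prior_prob)
        likelihood_ratio = p_indicator_given_scenario / p_indicator_given_alternatives
        posterior_odds = prior_odds * likelihood_ratio
        return posterior_odds / (1.0 + posterior_odds)

    # The scenario initially looks unlikely (20%), but the indicator is five times
    # more probable if that scenario is developing than under the alternatives.
    p = posterior_probability(0.20, p_indicator_given_scenario=0.75,
                              p_indicator_given_alternatives=0.15)
    print(f"Probability of the scenario after observing the indicator: {p:.0%}")  # ~56%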

It is also important that the warning indicator (or combination of indicators that collectively have a high likelihood ratio) can be observed relatively easily, and as early as possible in the process through which a given threat or opportunity is expected to develop.

Finally, if it is to act as an effective spur to organizational action, a warning indicator should be embedded in an emotionally powerful narrative that engages people’s attention and motivates them to act.

So far, so good. Unfortunately, developing early warning indicators is the easy part. History has repeatedly shown that deciding to act after a warning has been received is the real challenge.

Why is that?

As Daniel Kahneman has shown in his research, at an individual level our minds automatically fight to maintain the coherence of our current mental models, and have a natural tendency to explain away warning indicator evidence that doesn't fit with them. Moreover, the more a potential threat is at odds with the conventional wisdom (particularly when that wisdom is widely held by a group), the more strongly we will resist recognizing that it has become an imminent danger. This process is well captured by the old saying about how companies go bankrupt: at first slowly, and then rapidly.

Other research has found that we typically underreact to warning indicators that are based on the absence rather than the presence of evidence. That’s why Sherlock Holmes’ dog that didn’t bark remains such a striking story.

There is also an inescapable tradeoff between so-called "Type 1" and "Type 2" errors.

Type 1 errors are known as "false positives" or, more practically, "false alarms." These are errors of commission, when you warn of a threat that never becomes dangerous.

Type 2 errors are known as "false negatives" or "missed alarms." These are errors of omission, when you fail to warn of a threat that later becomes dangerous.
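The tradeoff can be illustrated with a minimal sketch using hypothetical readings from a single warning indicator: raising the alert threshold reduces false alarms but increases missed alarms, and vice versa.

    # Illustrative only: hypothetical indicator readings, where higher = more ominous.
    benign_readings    = [0.10, 0.25, 0.35, 0.45, 0.55]   # threats that never became dangerous
    dangerous_readings = [0.40, 0.60, 0.70, 0.85, 0.95]   # threats that did become dangerous

    def error_counts(alert_threshold):
        false_alarms  = sum(r >= alert_threshold for r in benign_readings)     # Type 1 errors
        missed_alarms = sum(r <  alert_threshold for r in dangerous_readings)  # Type 2 errors
        return false_alarms, missed_alarms

    for threshold in (0.3, 0.5, 0.7):
        fa, ma = error_counts(threshold)
        print(f"threshold {threshold}: {fa} false alarms, {ma} missed alarms")
    # The lower the threshold, the fewer missed alarms but the more false alarms.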

Most human beings have a greater desire to avoid errors of commission than errors of omission, because, at least in the short-term, the former produce much stronger feelings of regret than the latter.

This tendency is reinforced by the nature of organizational politics.

As Gary Klein and others have shown, as organizations grow, their focus subtly shifts from generating insights in order to become more effective, to avoiding errors in order to become more predictable and efficient.

Hence, in larger organizations issuing a warning involves far more career risk for individuals and political risk for a group than does going along with the conventional wisdom and risking an error of omission. This point was famously summed up by John Maynard Keynes: "Worldly wisdom teaches that it is better for reputation to fail conventionally than to succeed unconventionally."

So are we doomed to repeat history’s long track record of organizations that have been surprised (sometimes fatally) in spite of ample warning of impending danger?

We agree with researchers who have found that embedding warning indicators in emotionally powerful and credible stories significantly improves the chances that they will be taken seriously.

More importantly, we have also discovered that the nature of the story itself is critical. Specifically, effective warning stories clearly capture the evolving dynamic (that is, the gap) between the time remaining before an emerging risk becomes an existential threat and the time required for an organization to adequately respond to it.

Focusing management teams’ and boards’ attention on the evolution of this gap is the surest way we know to ensure a timely and adequate response to well-designed early warning indicators.
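One simple way to keep this gap in front of a board is to track, for each emerging risk, the estimated time remaining before it becomes existential against the estimated time needed to respond. A minimal sketch, with entirely hypothetical risks and figures:

    # Hypothetical sketch of the "warning gap": time remaining before a risk becomes
    # existential, minus the time needed to mount an adequate response.
    emerging_risks = {
        # risk:                  (months until existential, months needed to respond)
        "new low-cost entrant":   (24, 18),
        "key technology shift":   (12, 15),
        "funding market stress":  (6, 9),
    }

    for risk, (time_remaining, time_to_respond) in emerging_risks.items():
        gap = time_remaining - time_to_respond
        status = "gap still positive" if gap > 0 else "GAP CLOSED: respond now or accept the consequences"
        print(f"{risk}: {gap:+d} months of margin ({status})")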



The Confusing Terms We Use When Thinking About the Future

One of the fundamental keys to human progress has been our capacity to think about the future. Yet when we try to get more specific about just how we do this, confusing terminology often gets in the way of clear thinking.

With that in mind, let’s take a brief look at some commonly encountered terms.

“Prospection” is a broad term that refers to the generation and assessment of mental representations about the future. Its opposite is “retrospection”, which is focused on the past.

“Anticipation” and “anticipatory thinking” are more commonly used terms that have the same broad meaning as prospection.

“Prefactual Thinking” is narrower than prospection, and is specifically focused on “action-outcome” causal processes, in the same manner that counterfactual thinking examines them when looking back at the past. Both prefactual and counterfactual thinking often have a significant emotional component involving regret: either how it could have been avoided in the past or how it can be avoided in the future.

“Foresight” has been defined as “the ability to foresee or prepare wisely for the future; prescience.” Put differently, it relates to the accuracy of our prospections, or lack thereof.

In general terms, a “Forecast” is a statement about the state or states of a system at some point in the future.

Forecasts can be categorized using many different criteria.

Using different levels of aggregation is one approach. For example, strategic forecasts tend to focus on what may happen and why; operational forecasts describe how these “whats” could occur; and tactical forecasts focus on questions of who, when, and where associated with each of these “hows”.

A second basis for categorizing forecasts is their specificity. A “prediction” is the logical, deductive consequence that follows if a causal hypothesis is true. In other words, If Cause, then Effect.

A looser form of prediction holds that a future state or outcome has a high (or low) likelihood because it is a logical consequence of many (or few) causal hypotheses that could be true. In other words, If Cause 1 or Cause 2, etc., then Effect.

Other forecasts (such as those based on scenarios) present a range of possible effects that could be observed in the future depending on which of a number of causal hypotheses is true. In other words, If Cause 1, then Effect 1; If Cause 2, then Effect 2, etc.
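The “looser” form of prediction above can be made explicit with the law of total probability: an effect is judged more likely when it follows from many causal hypotheses that could be true. A minimal sketch with made-up figures (and assuming, for simplicity, that the candidate causes are mutually exclusive and exhaustive):

    # Illustrative only: assumed probabilities for mutually exclusive, exhaustive causes.
    causes = {
        # cause:   (P(cause is true), P(effect | cause))
        "Cause 1": (0.50, 0.90),
        "Cause 2": (0.30, 0.70),
        "Cause 3": (0.20, 0.10),
    }

    p_effect = sum(p_cause * p_effect_given_cause
                   for p_cause, p_effect_given_cause in causes.values())
    print(f"P(Effect) = {p_effect:.2f}")  # 0.50*0.90 + 0.30*0.70 + 0.20*0.10 = 0.68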

Finally, forecasts can also be categorized based on the methodology that was used to produce them (e.g., quantitative versus qualitative approaches). This also includes forecasts that result from combining the outputs of multiple methodologies (which is typically done to increase accuracy).

Neglected Existential Risks and Valuation Shocks

The emerging story of scandal over quality control at Kobe Steel illustrates an important element of strategic risk governance. Aside from the specific failures in management, leadership, and corporate culture that underpin the scandal (all elements for which the board of directors must ultimately bear accountability), there is also the instructive response of the financial markets.

As reported in the Financial Times, the cost of Kobe Steel’s five-year credit default swaps quadrupled over two days. What this means is that the financial markets have priced in a very different assumption about the continued creditworthiness of Kobe. Bearing in mind that, as yet, there have been no direct financial consequences from cancelled orders or regulatory intervention (as, for example, we have seen over time in the case of the VW Dieselgate scandal), this market correction is wholly based on an updated view of Kobe Steel’s future ability to meet its obligations and, quite possibly, its future survival as an independent entity.

The fact that the financial markets have reacted in this manner to new information is unremarkable. However, what this reaction illustrates quite dramatically is that the present value of an enterprise is normally based on the unstated but critical assumption that it will survive. Such an assumption is not of itself unreasonable, but it is nonetheless an assumption of an outcome that is, in reality, uncertain.

Whilst financial markets strive to “price in” the risk to future cash flows, the large corrections such as we have seen in the case of Kobe show that existential threats to a company’s survival are unlikely to be fully priced, and can wreak valuation havoc when they appear.
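A stylized sketch (with hypothetical figures that are not Kobe Steel’s) shows why repricing survival can wreak such havoc: if a firm’s value is the survival-probability-weighted present value of its future cash flows, even a modest downgrade of its assumed annual survival probability produces a large fall in value.

    # Stylized and hypothetical: illustrative figures only.
    def enterprise_value(annual_cash_flow, discount_rate, annual_survival_prob, years=20):
        """Present value of cash flows, each weighted by the probability the firm still exists."""
        return sum(annual_cash_flow * (annual_survival_prob ** t) / ((1 + discount_rate) ** t)
                   for t in range(1, years + 1))

    base_case = enterprise_value(100, 0.08, annual_survival_prob=0.99)  # survival taken for granted
    repriced  = enterprise_value(100, 0.08, annual_survival_prob=0.90)  # survival risk repriced
    print(f"Value falls by {1 - repriced / base_case:.0%}")  # about 46% in this illustration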

To avoid this, boards and management teams must do a better job of anticipating, assessing, and adapting to these threats, and, crucially, clearly communicating the effective operation of these processes to shareholders and analysts.


What is High Value Information?

In the context of strategic risk management and governance, we've thought a lot over the years about how to distinguish truly valuable information in a world of data overload and scarce attention. 

In essence, decision makers have to simultaneously search through “sensemaking space” and “course of action space" — the former to reach a better understanding of the dynamics of the situation they confront, and the latter to determine how to most effectively and efficiently achieve their goals with limited resources in the face of uncertainty.

The first three types of valuable information pertain to the sensemaking challenge, which is our primary concern at Britten Coyne Partners:

  • New causal theories — that enable you to either better understand and explain the past or better forecast and anticipate the future.

  • Indicators — information with a high likelihood ratio (in the Bayesian sense) that enables you to better discriminate between different hypotheses — e.g., between multiple scenarios that could develop in the future. In Bayesian terms, it is information that reduces uncertainty and thus enables you to substantially adjust your prior probabilities across some range of hypotheses.

  • Surprises — information that triggers the feeling of surprise, usually because it lies outside the range of outcomes your existing mental model would predict. Statistically, it tells you that the variance of possible outcomes is larger than you had thought. In Bayesian terms, it may trigger an expansion of the range of possible outcomes you had previously considered, as well as an adjustment in their probabilities. In Shannon terms, it is high-surprisal information that increases your degree of uncertainty (see the sketch below).
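A minimal sketch of the Shannon point, using surprisal (the information carried by a single observation) rather than entropy over a whole distribution; the probabilities are illustrative assumptions.

    import math

    # Illustrative: the "surprise" carried by an observation is its surprisal,
    # -log2 of the probability your current mental model assigned to it.
    def surprisal_bits(probability_assigned):
        return -math.log2(probability_assigned)

    print(f"{surprisal_bits(0.50):.1f} bits")  # 1.0 bit: an outcome you saw as a coin flip
    print(f"{surprisal_bits(0.01):.1f} bits")  # 6.6 bits: an outcome you thought very unlikely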

The next three types of valuable information pertain to the search through option space and the selection of a course of action to pursue:

  • New Action Options — that enable you to achieve your goals with some combination of more effectiveness, more efficiency, and/or more flexibility (i.e., adaptability).

  • New Goals or Decision Criteria — that enable you to better evaluate and trade off the options you have.

  • New Time Updates — that tell you how much time is left before you must decide and act.

Reflections on the Failure of Northern Rock

As the anniversary of the failure of Northern Rock attracts some media attention, it prompts us to reflect on the lessons this example holds for directors. A local councillor, looking back on the crisis at the bank, said, "Nobody saw this coming."

The real question is why not? 

Consider this: Northern Rock had a distinctly different funding model from a traditional mortgage lender. It did not rely on deposits to expand its mortgage book, but on the wholesale money markets. This was a strategic choice, and no doubt it carried some apparent advantages, possibly lower financing costs and more rapid growth among them.

But inevitably this choice also carried risks distinct from those of the traditional model. The board should also have been asking, "What if the wholesale market fails? What would that do to us?" The point is not whether such an event was likely or improbable; the point is that had anyone asked that question, the consequences for Northern Rock would have become very apparent.

The second follow-up question the board should have asked is, "What set of circumstances might lead to a failure in the wholesale market?" Having considered these, it should then have asked, "What tell-tale signs and signals can we monitor that would give us some warning that this set of circumstances is, in fact, developing?" Finally, it should have asked, "If we see the signs that this event is happening, how much time will we have to react?"
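A minimal sketch of how those questions might be turned into an ongoing monitoring structure; the scenario, signals, thresholds, and readings below are purely illustrative, not Northern Rock's actual data.

    # Purely illustrative: turning a "what if" scenario into a simple monitoring
    # structure of scenario -> tell-tale signals -> warning status.
    wholesale_funding_failure = {
        "scenario": "Wholesale money markets stop rolling over short-term funding",
        "signals": {
            # signal: (warning threshold, latest observed value)
            "3-month interbank spread (bps)":         (50, 65),
            "share of funding maturing in < 90 days": (0.40, 0.35),
            "failed or downsized debt auctions":      (1, 2),
        },
        "months_needed_to_respond": 6,
    }

    triggered = [name for name, (threshold, observed)
                 in wholesale_funding_failure["signals"].items() if observed >= threshold]
    print(f"Signals at or above their warning threshold: {triggered}")
    print(f"Estimated time needed to respond: {wholesale_funding_failure['months_needed_to_respond']} months")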

These questions illustrate the fundamental difference in the nature of strategic risk compared with quantifiable and transferable risks. The existential threats that arise from strategic risks are rarely, if ever, amenable to quantification and cannot be transferred. 

Governing strategic risk is a fundamental accountability of the board. Only by asking the right questions and adopting the right tools to answer them can boards and directors avoid the familiarity with imperfect information that creates the blind spots behind the threats that "nobody saw coming."

The board of Northern Rock was blind to the risk of a failure in the wholesale market. They probably thought it so improbable that it was not worth considering. But improbable is not the same as impossible.
