The Critical Importance of Anticipatory Intelligence in Our Complex, Uncertain World

The deceptive economic and geopolitical calm of the past decade has been an aberration, brought about by unprecedented global monetary stimulus to hold at bay the deflationary forces that have been building in the global economy. Thanks to central bankers’ efforts, volatility has remained low, and organizations have not had to worry too much about disruptive risks beyond those posed by rapid technological change. That is about to change: Brexit, the election of Donald Trump, the emergence of a new US-China Cold War, and nearly two trillion dollars of sovereign bonds bearing negative interest rates are early indications that we are entering a period of much higher uncertainty.

With this change will come much greater organizational focus on developing the processes, methods, tools, and skills needed to survive and thrive in a much more dangerous environment. Josh Kerbel, a faculty member at the United States’ National Intelligence University, recently published an article that we hope will have a substantial impact on these efforts, and closely reflects our views at Britten Coyne Partners.

In “Coming to Terms with Anticipatory Intelligence”, Kerbel notes that it is “a relatively new type of intelligence that is distinct from the ‘strategic intelligence’ that the intelligence community has traditionally focused on. It was born from recognition that the spiking global complexity (interconnectivity and interdependence, both virtual and physical) that characterizes the post–Cold War security environment, with its proclivity to generate emergent (non-additive or nonlinear) phenomena, is essentially new. And as such, it demands new approaches.”

“More precisely, this new strategic environment means that it is no longer enough for the intelligence community to just do traditional strategic intelligence: locking onto, drilling down on, and — less frequently — forecasting the future of issues once they’ve emerged. While still important, such an approach will increasingly be too late. Rather, the intelligence community should also learn to practice foresight (which is not the same as forecasting) and imagine or envision possibilities before they emerge. In other words, it should learn to anticipate.”

Kerbel echoes longstanding concerns among some members of the intelligence community. For example, a 1983 CIA analysis of failed intelligence estimates noted that, “each involved historical discontinuity, and, in the early stages...unlikely outcomes. The basic problem was...situations in which trend continuity and precedent were of marginal, if not counterproductive value."

This distinction was also brought home to me during the four years I spent on the Good Judgement Project, which demonstrated that forecasting skills could be significantly improved through the use of a mix of techniques. But hiding in the background was an equally important question: what was the source of the questions whose outcomes we were forecasting? One of my key takeaways was that anticipatory thinking – posing the right questions – was just as important to successful policy and action as accurately forecasting outcomes.

Kerbel notes that, “as clear and compelling as the case for anticipatory intelligence is, it remains poorly understood… Since the 1990s, increasing complexity has been an issue that many in the intelligence community have impulsively dismissed or discounted. Their refrain echoes: “But the world has always been complex.” That’s true. However, what they fail to understand is that the closed and discrete character of the Soviet Union and the bipolar nature of the Cold War — the intelligence community’s formative experience — eclipsed much of the world’s complexity and effectively rendered America’s strategic challenge merely complicated (no, they’re not the same). Consequently, the intelligence community’s prevailing habits, processes, mindsets, etc. — as exemplified in the traditional practice of strategic intelligence — are simply incompatible with the challenges posed by the exponentially more complex post-Cold War strategic environment.”

Kerbel’s view is that “Fundamentally, anticipatory intelligence is about the anticipation of emergence… Truly emergent issues are fundamentally new — nonlinear — behaviors that result unpredictably but not unforeseeably from micro-behaviors in highly complex (interconnected and interdependent) systems, such as the post–Cold War strategic environment. Although emergence can seemingly happen quite quickly (hence the need to anticipate), the conditions enabling it are often building for some time — just waiting for the “spark.” It is these conditions and what they are potentially “ripe” for — not the spark — that anticipatory intelligence should seek to understand… Foresight involves imagining how a broad set of possible conditions (trends, actors, developments, behaviors, etc.) might interact and generate emergent outcomes.”

This raises the question of which foresight methods and tools are most effective. We go into great detail about this in our Strategic Risk Governance and Management course. In this blog post we’ll highlight four key insights.

Traditional scenario methodologies often disappoint

  • As a general rule, when reasoning from the present to the future, we naturally (to maintain our sense of psychological safety) minimize the extent of change that could occur.

  • In complex systems, it is almost always impossible to reduce the forces that could produce non-linear change to just two critical uncertainties, as is done in the familiar “2 x 2” scenario method. And in some cases, the uncertainties that most worry an organization’s senior leaders are either out of bounds for the scenario construction team, or the range of their possible outcomes is deliberately constrained.

  • I first studied the scenario methodology under Shell’s Pierre Wack back in 1983. In its early applications, this approach was often able to fulfill its goal of changing senior leaders’ perceptions. Over the years, however, I have seen what I call “scenario archetypes” become more common, which has weakened their ability to surprise leaders and change their perceptions. These archetypes result from one critical uncertainty being technological in nature, and the other being one whose negative outcome would be very bad indeed. This gives rise to three archetypes: (1) Business pretty much as usual, with current trends linearly extrapolated (this is usually the scenario that explicitly or implicitly underlies the organization’s strategy); (2) The World Goes To Hell (slow technology change and the negative outcome for the other uncertainty); and (3) Technology Saves the Day (fast technology change overcomes the negative outcome of the other uncertainty). This leaves what is usually the least well defined but potentially most important scenario, in which technology rapidly develops but the other uncertainty does not have the negative outcome. Too many organizations fail to fully explore the implications of this scenario, usually because it is the most realistically threatening to the current strategy.
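To make the archetype structure concrete, here is a minimal sketch of the familiar “2 x 2” grid described above. The uncertainty labels and the mapping of cells to archetypes are illustrative assumptions, not part of any formal methodology.

```python
# Illustrative sketch of the "2 x 2" scenario grid and its archetypes.
# The two critical uncertainties and the cell labels are assumptions
# chosen for illustration only.

from itertools import product

tech_change = ["slow", "fast"]            # critical uncertainty 1
other_outcome = ["benign", "negative"]    # critical uncertainty 2

archetypes = {
    ("slow", "benign"):   "Business pretty much as usual (trends extrapolated)",
    ("slow", "negative"): "The World Goes To Hell",
    ("fast", "negative"): "Technology Saves the Day",
    ("fast", "benign"):   "Least explored: rapid technology, benign outcome",
}

for tech, other in product(tech_change, other_outcome):
    print(f"tech={tech:4s} | other uncertainty={other:8s} -> {archetypes[(tech, other)]}")
```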
Historical analogies are limited by our knowledge of history

  • Whether the subject is political, economic, technological, business, or military history, most of us have studied too little of it to have a rich base of historical analogies from which we can draw while trying to anticipate the future.

  • Consider some of the challenges we face at present, including the transition from an industrial to an information and knowledge-based economy; the rapid improvement in potential “general purpose” technologies like automation and artificial intelligence; and the potential transition of the global political economy from a period of growing disorder and conflict to a period of more ordered conflict due to a new Cold War between the US and China. In all these cases, the most relevant historical analogies may lie further in the past than many people realize.
Prospective hindsight – reasoning from the future to the present – is surprisingly effective

  • Research has shown that when we are given a future event, told that it is true, and asked to explain how it happened, our causal reasoning is much more detailed than if we are simply asked, in the present, how this future event might happen.

  • However, that still leaves the “creative” or “imaginative” challenge of conceiving of these potential future events. We have found that starting with broad future outcomes – e.g., our company has failed; China has successfully forced the US from East Asia – generates a richer set of alternative narratives than a narrower focus on specific future events.
Explicitly focusing on system interactions helps identify emergent effects and early warning indicators

  • Quantitatively, agent-based models, which enable complex interactions between different types of agents, can produce surprising emergent effects and, critically, help you to understand why they occur (which can aid in either predicting them or designing interventions to promote or avoid them); a minimal illustrative sketch appears after this list.

  • Qualitatively, we have found it very useful to create traditional scenarios in narrower policy areas (e.g., technology, the economy, national security, etc.) and then explicitly trace and assess overall system dynamics and how different scenario outcomes could interact across time and across policy areas (e.g., technology change often precedes economic and national security change) to produce varying emergent effects.
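As an illustration of the quantitative point above, the sketch below implements a bare-bones, Schelling-style agent-based model. The grid size, tolerance threshold, and number of steps are arbitrary assumptions, but they are enough to show how a macro-level pattern emerges from mild micro-level preferences.

```python
# A minimal, illustrative agent-based model (Schelling-style segregation)
# showing how macro patterns emerge from simple micro-behaviors.
# All parameters are illustrative assumptions.

import random

SIZE, EMPTY_FRAC, SIMILAR_WANTED, STEPS = 20, 0.1, 0.3, 100_000

def neighbors(grid, x, y):
    # the eight surrounding cells (wrapping at the edges), excluding empties
    cells = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if (dx, dy) != (0, 0):
                cells.append(grid[(x + dx) % SIZE][(y + dy) % SIZE])
    return [c for c in cells if c is not None]

def unhappy(grid, x, y):
    # an agent is unhappy if too few of its neighbors share its type
    agent = grid[x][y]
    if agent is None:
        return False
    nbrs = neighbors(grid, x, y)
    return bool(nbrs) and sum(n == agent for n in nbrs) / len(nbrs) < SIMILAR_WANTED

random.seed(0)
grid = [[None if random.random() < EMPTY_FRAC else random.choice("AB")
         for _ in range(SIZE)] for _ in range(SIZE)]

# unhappy agents relocate to a randomly chosen empty cell
for _ in range(STEPS):
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    if unhappy(grid, x, y):
        ex, ey = random.randrange(SIZE), random.randrange(SIZE)
        if grid[ex][ey] is None:
            grid[ex][ey], grid[x][y] = grid[x][y], None

# even a mild preference (30% similar neighbors) produces a highly clustered
# grid -- an emergent, nonlinear macro outcome no single agent intended
print("\n".join("".join(cell or "." for cell in row) for row in grid))
```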

Kerbel concludes by noting that, “Exponentially increasing global complexity is the defining characteristic of the age.” Because of this, effective anticipatory intelligence capabilities are more important than ever before to organizations’ future survival and success – and more challenging to develop.

The Emerging Impact of Artificial Intelligence on Strategic Risk Management and Governance: A New Indicator

Britten Coyne Partners provides consulting and educational services that enable clients to substantially improve their ability to anticipate, accurately assess, and adapt in time to emerging threats to the success of their strategies and survival of their organizations.

Among the trends we obsessively monitor is progress in artificial intelligence technologies that could change the way clients approach these challenges.

We recently read a newly published paper that directly addressed this issue.

Before discussing the paper’s findings, it will be useful to provide some important background.

While recent advances in artificial intelligence in general and machine learning in particular have received extensive publicity, the limitations of AI technologies are far less well known, but equally important. Professor Judea Pearl’s “hierarchy of reasoning”, described in his book The Book of Why, provides an excellent way to approach this issue.

Pearl divides reasoning into three increasingly difficult levels. The lowest level is what he calls “associative” or statistical reasoning, whose goal is finding relationships in a set of data that enable prediction. A simple example of this would be the creation of a linear correlation matrix for 100 data series. Associative reasoning makes no causal claims (remember the old saying, “correlation does not mean causation”). Machine learning’s achievements thus far have been based on various (and often very complex) types of associative reasoning.

And even at this level of reasoning, there are many circumstances in which machine learning methods struggle and often fail. First, if a data set has been generated by a random underlying process, then any patterns ML identifies in it will be spurious and unlikely to consistently produce accurate predictions (a mistake that human researchers also make…).
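A minimal sketch of this first failure mode, assuming nothing more than randomly generated data: even purely random series will exhibit seemingly “significant” pairwise correlations that have no predictive value out of sample. The series count and length below are arbitrary assumptions.

```python
# Illustrative sketch: purely random data still produces "impressive"
# pairwise correlations, which are spurious and will not hold out of sample.

import numpy as np

rng = np.random.default_rng(42)
data = rng.standard_normal((250, 100))        # 250 observations of 100 unrelated series

corr = np.corrcoef(data, rowvar=False)        # 100 x 100 correlation matrix
off_diag = corr[np.triu_indices(100, k=1)]    # the 4,950 unique pairwise correlations

print(f"largest spurious correlation: {off_diag.max():.2f}")
print(f"pairs with |corr| > 0.2: {(np.abs(off_diag) > 0.2).sum()} of {off_diag.size}")
```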

Second, if a data set has been generated by a so-called “non-stationary” process (i.e., a data-generating process that is evolving over time), then the accuracy of predictions is likely to decline over time as the historical training data bears less and less resemblance to the data currently being generated by the system. And most of the systems that involve human beings – so-called complex adaptive systems – are constantly evolving (e.g., as players change their goals, strategies, relationships, and often the rules of the implicit game they are playing).

In contrast, even in the case of very complex games like Go, the underlying system is stationary: the size of the board, the rules governing allowable moves, and so on do not evolve over time.

Of course, a predictive algorithm can be updated over time with new data; however, this raises two issues: (1) the cost of doing this, relative to the expected benefit, and (2) the respective rates at which the data generating process is evolving and the algorithm is being updated.
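The sketch below illustrates this second failure mode under a deliberately simple assumption: a linear relationship whose slope drifts over time. A model fit on the historical regime degrades steadily as the data-generating process evolves away from it.

```python
# Illustrative sketch of non-stationarity: a predictor fit on historical
# data degrades as the underlying relationship drifts. The drift path and
# the model are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(0)

def generate(n, slope):
    x = rng.uniform(-1, 1, n)
    y = slope * x + rng.normal(0, 0.1, n)
    return x, y

# fit on historical data where y is approximately 1.0 * x ...
x_train, y_train = generate(500, slope=1.0)
beta = np.polyfit(x_train, y_train, 1)

# ... then evaluate as the true slope drifts away from the training regime
for slope in (1.0, 0.5, 0.0, -0.5):
    x_test, y_test = generate(500, slope=slope)
    mse = np.mean((np.polyval(beta, x_test) - y_test) ** 2)
    print(f"true slope {slope:+.1f} -> test MSE {mse:.3f}")
```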

Third, machine learning methods can fail if a training data set is either mislabeled (in the case of supervised learning), or has been deliberately corrupted (a new area of cyberwarfare; e.g., see IARPA’s SAILS and TrojAI programs). For example, consider a set of training data that contains a small number of stop signs on which a small yellow square had been placed, linked to a “speed up” result. What will happen when an autonomous vehicle encounters a stop sign on which someone has placed a small square yellow sticker?

In Pearl’s reasoning hierarchy, the level above associative reasoning is causal reasoning. At this level you don’t just say, “result B is associated with A”; rather, you explain why “effect B has resulted, or will result, from cause A.”
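A toy simulation makes the gap between the two levels concrete. In the assumed setup below, a hidden confounder drives both A and B, so the two are strongly associated even though A has no causal effect on B; “intervening” on A, as an experiment would, makes the association disappear. The variable names and coefficients are illustrative.

```python
# Illustrative sketch of why association is not causation: a hidden
# confounder Z drives both A and B, so they are correlated even though
# A does not cause B. Setting A independently (an intervention, "do(A)")
# removes the association. All numbers are illustrative.

import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# observational regime: Z -> A and Z -> B, with no arrow from A to B
z = rng.standard_normal(n)
a_obs = z + rng.normal(0, 0.5, n)
b_obs = z + rng.normal(0, 0.5, n)
print(f"observational corr(A, B):  {np.corrcoef(a_obs, b_obs)[0, 1]:.2f}")

# interventional regime: A is set at random, independently of Z
a_do = rng.standard_normal(n)
b_do = z + rng.normal(0, 0.5, n)
print(f"interventional corr(A, B): {np.corrcoef(a_do, b_do)[0, 1]:.2f}")
```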

In simple, stationary mechanical systems governed by unchanging physical laws, causal reasoning is straightforward. When you add in feedback loops, it becomes more difficult. But in complex adaptive systems that include human beings, accurate causal reasoning is extremely challenging, to the point of apparent impossibility in some cases.

For example, consider the difficulty of reasoning causally about history. In trying to explain an observed effect, the historian has to consider situational factors (and their complex interactions), human decisions and actions (and how they are influenced by the availability of information and the effects of social interactions), and the impact of randomness (i.e., good and bad luck). The same challenges confront an intelligence analyst – or active investor – who is trying to forecast probabilities for possible future outcomes that an evolving complex adaptive system could produce.

Today, causal reasoning is the frontier of machine learning research. It is extremely challenging for many reasons, including, for example, requirements for substantial improvements in natural language processing, knowledge integration, agent-based modeling of multi-level complex adaptive systems, automated inference of concepts, and their use in transfer learning (applying concepts across domains).

Despite these obstacles, AI researchers are making progress in some areas of causal reasoning (e.g., “Causal Generative Neural Networks” by Goudet et al, “A Simple Neural Network Module for Relational Reasoning” by Santoro et al, and “Multimodal Storytelling via Generative Adversarial Imitation Learning” by Chen et al). But they still have a very long way to go.

At the top of Pearl’s hierarchy sits counterfactual reasoning, which answers questions like, “What would have happened in the past if one or more causal factors had been different?”; “What will happen in the future if assumptions X, Y, and Z aren’t true?”; or “What would happen if a historical causal process changed?”

One of my favorite examples of counterfactual reasoning comes from the movie Patton, in which the general has been notified of increased German activity in the Ardennes forest in December 1944, at the beginning of what would become the Battle of the Bulge. Patton says to his aide, “There's absolutely no reason for us to assume the Germans are mounting a major offensive. The weather is awful, their supplies are low, and the German army hasn't mounted a winter offensive since the time of Frederick the Great — therefore I believe that's exactly what they're going to do.”

Associative reasoning would have predicted just the opposite.

This example highlights an important point: in complex adaptive systems, counterfactual reasoning often depends as much on an intuitive grasp of situations and human behavior that we learn from the study of history and literature as it does on the application of more formal methods.

Counterfactual reasoning serves many purposes, including learning lessons from experience (e.g., “what would have worked better?”) and developing and testing our causal hypotheses (e.g., “what is the probability that effect E would have or will occur if hypothesized cause X was/is present or not present?”).
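To make the “abduction, action, prediction” pattern behind such queries concrete, here is a minimal worked example in a toy structural causal model; the structural equation and the observed values are illustrative assumptions.

```python
# Illustrative counterfactual query in a toy structural causal model:
# B = 2*A + U, where U is unobserved noise. Having observed A=1 and B=3,
# we ask: "what would B have been had A been 0?"
# The model and the numbers are assumptions for illustration.

observed_a, observed_b = 1.0, 3.0

# abduction: infer the noise consistent with what was actually observed
u = observed_b - 2.0 * observed_a          # U = 1.0

# action: replace A with its counterfactual value
counterfactual_a = 0.0

# prediction: recompute B under the modified model, keeping U fixed
counterfactual_b = 2.0 * counterfactual_a + u
print(f"B would have been {counterfactual_b:.1f} instead of {observed_b:.1f}")
```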

While Dr. Pearl has developed a systematic approach to causal and counterfactual reasoning, its application remains a continuing challenge for machine learning, and indeed even for human reasoning. For example, the Intelligence Advanced Research Projects Activity recently launched a new initiative to improve counterfactual reasoning methods (the “FOCUS” program).

In addition to the challenge of climbing higher up Pearl’s hierarchy of reasoning, the further development and deployment of artificial intelligence technologies faces three more obstacles.

The first is the hardware on which AI/ML software runs. In many cases, training ML software is more time, labor, and energy intensive than many people realize (e.g., “Neglected Dimensions of AI Progress” by Martinez-Plumed et al, and “Energy and Policy Considerations for Deep Learning in NLP” by Strubell et al). However, recent evidence that quantum computing technologies are developing at a “super-exponential” rate suggests that this constraint on AI/ML development is likely to be significantly loosened over the next five to seven years (e.g., “A New Law Suggests Quantum Supremacy Could Happen This Year” by Kevin Hartnett). The dramatic increase in processing power that quantum computing could provide might, depending on software development (e.g., agent-based modeling and simulation), make it possible to better predict the behavior of complex adaptive systems and, using approaches driven by “self-play” or competing algorithms (such as Generative Adversarial Networks), devise better strategies for achieving critical goals. Of course, this also raises the prospect of a world with many more instances of “algorithm vs. algorithm” competition, similar to what we see in some financial markets today.

The second challenge is “explainability”. As previously noted, the statistical relationships that ML identifies in large data sets are often extremely complex, which makes it hard for users to understand and trust the basis for the predictions they make.

This challenge becomes even more difficult in the case of systems trained through self-play or adversarial methods. For example, after DeepMind’s AlphaZero system used self-play reinforcement learning to rapidly develop superhuman chess ability, the company’s co-founder, Demis Hassabis, observed that its approach to the game was “like chess from another dimension”, and extremely hard for human players to understand.

Yet other research has shown that human beings are much less likely to trust and act upon algorithmic predictions and decisions whose underlying logic they don’t understand. Thus, the development of “explainable AI” algorithms that can provide a clear causal logic for the predictions or decisions they make is regarded as a critical precondition for broader AI/ML deployment.

If history is a valid guide, organizational obstacles will present a third challenge to the widespread deployment of ML and other AI technologies. In previous waves of information and communication technology (ICT) development, companies first attempted to insert their ICT investments into existing business processes, usually with the goal of improving their efficiency. The results, as measured by productivity improvement, were usually disappointing.

It wasn’t until changes were made to business processes, organizational structures, and employee knowledge and skills that significant productivity gains were realized. And it was only later that the other benefits of ICT were discovered and implemented, including more effective and adaptable products, services, organizations, and business models.

In the case of machine learning and other artificial intelligence technologies, the same problems seem to be coming up again (e.g., “The Big Leap Toward AI at Scale” by BCG, and “Driving Impact at Scale from Automation and AI” and “AI Adoption Advances, but Foundational Barriers Remain” by McKinsey). Anybody in doubt about this need only look at the compensation packages companies are offering to recruit data scientists and other AI experts (even though the organizational challenges to implementing and scaling up AI/ML technologies go far beyond talent).

Having provided a general context, let’s now turn to the article that caught our attention: “Anticipatory Thinking: A Metacognitive Capability”, by Amos-Binks and Dannenhauer, published on arXiv on 28 June 2019.

As we do at Britten Coyne Partners, the authors draw a distinction between “anticipatory thinking”, which seeks to identify what could happen in the future, and “forecasting”, which estimates the probability that the anticipated outcomes will actually occur, the time remaining until they do, and the impact they will have if and when they occur.

With respect to anticipatory thinking, we are acutely conscious of the conclusion reached by a 1983 CIA study of failed forecasts: "each involved historical discontinuity, and, in the early stages…unlikely outcomes. The basic problem was…situations in which trend continuity and precedent were of marginal, if not counterproductive value."
When it comes to forecasting, we know that in complex socio-technical systems that are constantly evolving, forecast accuracy over longer time horizons still heavily depends on causal and counterfactual reasoning by human beings (which, to be sure, can be augmented by technology that can assist us in performing activities such as hypothesis tracking, high value information collection – e.g., Google Alerts – and evidence weighting).

“Anticipatory Thinking: A Metacognitive Capability” is a good (if not comprehensive) benchmark for the current state of artificial intelligence technology in this area.

The authors begin with a definition: “anticipatory thinking is a complex cognitive process…that involves the analysis of relevant future states…to identify threatening conditions so one might proactively mitigate and intervene at critical points to avoid catastrophic failure.”

They then clearly state that, “AI systems have yet to adopt this capability. While artificial agents with a metacognitive architecture can formulate their own goals or adapt their plans in response to their environment, and learning-driven goal generation can anticipate new goals from past examples, they do not reason prospectively about how their current goals could potentially fail or become unattainable. Expectations have a similar limitation; they represent an agent’s mental view of future states, and are useful for diagnosing plan failure and discrepancies in execution. However, they do not critically examine a plan or goal for potential weaknesses or opportunities in advance…

"At present, artificial agents do not analyze plans and goals to reveal their unnamed risks (such as the possible actions of other agents) and how these risks might be proactively mitigated to avoid execution failures. Calls for the AI community to investigate so-called ‘imagination machines’ [e.g., “
Imagination Machines: A New Challenge for Artificial Intelligence” by Sridhar Mahadevan] highlights the limitations between current data-driven advances in AI and matching complex human performance in the long term.”

The authors’ goal is to “take a step towards imagination machines by operationalizing the concept of automated, software-based anticipatory thinking as a metacognitive capability” and show how it can be implemented using an existing cognitive software architecture for artificial agents used in planning and simulation models.

The authors’ logic is worth describing in detail, as it provides a useful reference:

First, identify goal vulnerabilities. “This step reasons over a plan’s structure to identify properties that would be particularly costly were they not to go according to plan.” They suggest prioritizing vulnerabilities according to how many elements in a plan depend on different “pre-conditions” (i.e., assumptions).

Second, “For each identified plan vulnerability, identify possible sources of failure” – that is, “conditioning events” which would exploit vulnerabilities and cause the plan to fail.

Third, identify modifications to the existing plan that would reduce exposure to the sources of failure.

Finally, prioritize the implementation of these plan modifications based on a resource constraint and each modification’s forecast cost/benefit ratio, with the potential benefit measured by the incremental change in the probability of plan success resulting from the modification.
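To make the four steps concrete, here is a minimal sketch of the prioritization logic as we read it – not the authors’ implementation. The plan pre-conditions, failure sources, costs, and probability uplifts are invented for illustration.

```python
# A minimal sketch of the four-step logic described above (not the authors'
# code). Vulnerability counts, costs, and probability uplifts are invented.

from dataclasses import dataclass

@dataclass
class Modification:
    name: str
    cost: float      # resources required
    uplift: float    # incremental probability of plan success

# steps 1-2: vulnerabilities ranked by how many plan elements depend on each
# pre-condition (assumption), with a candidate source of failure for each
vulnerabilities = {
    "supplier delivers on time":  {"dependent_elements": 5, "failure_source": "supplier insolvency"},
    "regulation stays unchanged": {"dependent_elements": 2, "failure_source": "new compliance rule"},
}

# step 3: candidate plan modifications that reduce exposure to those failures
modifications = [
    Modification("qualify a second supplier", cost=3.0, uplift=0.10),
    Modification("hold safety stock",         cost=1.0, uplift=0.04),
    Modification("monitor draft regulation",  cost=0.5, uplift=0.01),
]

# step 4: implement modifications in order of benefit per unit cost,
# subject to an overall resource constraint
budget = 3.5
for mod in sorted(modifications, key=lambda m: m.uplift / m.cost, reverse=True):
    if mod.cost <= budget:
        budget -= mod.cost
        print(f"implement: {mod.name} (uplift {mod.uplift:.0%}, cost {mod.cost})")
```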

After reading this paper, our key takeaway is that when it comes to strategic risk governance and management, there appears to be a very long way to go before artificial intelligence technology is capable of automating, or even substantially augmenting, human activity.

For example, when the authors suggest “reasoning over a plan’s structure”, it isn’t clear whether they are referring to associational, causal, and/or counterfactual reasoning.

More importantly, plans are far more structured than strategy, and their assessment is therefore potentially much easier to automate.

As we define the term, “Strategy is a causal theory, based on a set of beliefs, that exploits one or more decisive asymmetries to achieve an organization's most important goals - with limited resources, in the face of evolving uncertainty, constraints, and opposition.”

There are many potential existential threats to the success of a strategy, and the survival of an organization (including setting the wrong goals). And new threats are constantly emerging.

Given this, for the foreseeable future, complex human cognition will continue to play a critical role in strategic risk management and governance – from anticipating potential threats, to foraging for information about them, to analyzing, weighing, and synthesizing it, and to testing our conclusions against intuition that is grounded in both experience and the instinctive sense of danger that has been bred into us by evolution.

The critical challenge for boards and management teams today isn’t how to better apply artificial intelligence to reduce strategic ignorance, uncertainty, and risk. Rather, it is how to quickly develop far better individual, team, and organizational capabilities to anticipate, accurately assess, and adapt in time to the threats (and opportunities) that are emerging at an accelerating rate from the increasingly complex world we face today.

While at Britten Coyne Partners we will continue to closely track the development of artificial intelligence technologies, our primary focus will remain helping our clients to develop and apply these increasingly critical capabilities throughout their organizations.

Fraud at Patisserie Valerie -- What Can We Learn?

The emergence of a major fraud at the UK-based retail “coffee and cakes” chain Patisserie Valerie led to the failure of the business*, despite the desperate efforts of its high-profile major investor and Executive Chairman, Luke Johnson, to fund it before it sank into administration, including putting in a reported £10m of his own cash as an unsecured loan, now probably all lost.

The aftermath has stimulated a debate about whether the role and personality of Luke Johnson, previously considered an extremely successful entrepreneur, was a contributing factor. Was this a failure of effective governance? Or is fraud just something that no one, however talented, can guard against?

We suggest it is helpful to examine the issue from the perspective of Strategic Risk and Strategic Warning. In our work on Strategic Risk we emphasise the importance of “foraging for surprise” in the “Realm of Ignorance”, i.e. actively looking for potential strategic or existential threats. In this context, the threat of fraud, in any business, may be a relatively uncommon occurrence, but it is hardly unknown. It is unwise to fall into the trap of believing that an uncommon occurrence is an impossible one. Trust is necessary in all businesses. At the same time, unconditional trust might be considered a step too far.

Efforts to combat deception and fraud have, over time, led to a relatively robust framework of checks and controls in the finance profession. Yet these still inevitably rest on human behaviour and are always susceptible to outright deceit by those most trusted to be honest. Also, as we know, audits cannot be relied upon to detect fraud.

So, is fraud, like death and taxes, always with us? Is it a risk about which we can do nothing, despite the potential to bring about the death of the enterprise? We suggest not.

Firstly, boards should accept that the threat from a fraud represents a Strategic Risk that can be anticipated. Secondly, they might ask themselves, how long do companies have to try and recover from a major fraud after it has been uncovered? The answer to this question is, typically, not long enough. Thus, the focus is on what might give us some early warning of a fraud being perpetrated. To put this into the language of Strategic Warning: what could represent high-value indicators of possible fraud?

This is not a simple question to answer – if it were, we would presume that fraud would be much less common than it is. After all, fraudsters are, by definition, intent on hiding their activities. At the same time, fraudsters also know that their discovery is probably inevitable in time (unless their intention is a “temporary” fraud that they expect to recover from before detection, such as a sales manager who creates fictitious customer orders to meet a target in one month, believing, or hoping, that a recovery in actual sales in future periods will allow the fiction to pass unnoticed). Thus, the challenge is to identify what might be high-value indicators of potential fraud that could also provide early warning, and then to be able to monitor these indicators.

One of our favourite definitions of risk blindness is “familiarity with inferior (or incorrect) information”. Naturally, not all inferior information results from fraud, but it is, we hope, self-evident that fraudsters rely on creating risk blindness through familiarity with incorrect information. Boards rely on information provided to them by the executive and the organization at large. It would be impractical if not impossible to treat all of this as “inferior or incorrect”. At the same time, it is possible to be sensitive to, and from time to time actively seek, information that contradicts the standard board pack data. Such information, which may provoke a sense of surprise, suggest uncertainty, or simply be plainly different, is, by definition, high-value.

In the case of Patisserie Valerie, there are two examples in the public domain which may illustrate this principle. The first relates to the proximate cause of nearly all corporate failures: running out of cash. It is reported that the company’s cash position was overstated by £54m or more, and “secret” overdrafts of c. £10m were unknown to the board – i.e. the contents of the bank accounts were very different from what was being reported. Perhaps it is asking too much for finance directors to occasionally give non-executive directors direct access to a bank statement (though, why not?); however, Mr. Johnson was an Executive Chairman, so perhaps we may assume he could have done that check himself from time to time. In the event, apparently, he did not, for if he had, we must presume that a significant discrepancy would have been apparent much earlier on, perhaps with enough warning to have saved the company.

The second example of high-value information is that the reportedly strong performance of the chain was at odds with the experience of some other branded hospitality chains and, according to a number of customer anecdotes subsequently reported in the press, very much at odds with the customer experience. Some customers have openly questioned whether Mr. Johnson can possibly have visited any of the rapidly expanded number of Patisserie Valerie outlets. They suggest that had he done so he would have seen at first hand practically empty sites next to other relatively prosperous venues, suggesting that the Patisserie Valerie value proposition was either not as strong as believed or was not being delivered reliably or to a high enough standard.

We do not know and cannot confirm whether these reported experiences were in fact representative of the majority of the Patisserie Valerie estate, but let us assume for the moment that they could have been. We observe that the erosion of a business’ value proposition is itself a Strategic Warning that frequently is a causal factor in declining cash flow. It does not seem impossible that, in view of Mr. Johnson’s previous track record, the board had assumed that the strategy for rapid expansion of the chain was bound to succeed. Individuals in the business, aware of an expectation of success, may have initially sought to “massage” the numbers to soften bad news, similar to the example of the sales manager cited above. Then when the bad news kept coming or worsened, the deception became ingrained. Had “from the field” observations been seen to be at odds with reported sales, this could have been a powerful high-value early warning signal.

This, we know and accept, is speculation. We only indulge in it to illustrate some important principles. First, the critical importance of high-value indicators for effective Strategic Warning cannot be overestimated. It is a challenge for boards to identify and monitor such indicators, but one that they have an absolute duty to confront. Second, this case illustrates the key value to boards and directors of “foraging for surprise” and paying heed to that feeling. Surprise is itself an indicator that something is inconsistent with our mental models of reality. Boards and directors need to develop an acute sense that surprise is a clue to the existence of risk blindness.

*Note: Following administration and some closures the Patisserie Valerie business was sold to new owners by administrators and continues to trade.

Lessons for Boards and Management Teams from Planetary Defense Planning

In April, international space agencies participating in the 2019 Planetary Defense Conference launched an exercise designed to simulate the response to an asteroid strike on Earth. It is the result of increasing efforts over the past two decades to improve humanity’s chances of surviving the threat of an extinction-level event from the impact of a large space object, similar to the one considered to have led to the mass extinction of the dinosaurs. Commenting on the purpose of these exercises, Rüdiger Jehn, Head of Planetary Defense for the European Space Agency (ESA), said, "The first step in protecting our planet is knowing what’s out there. Only then, with enough warning, can we take the steps needed to prevent an asteroid strike altogether, or to minimise the damage it does on the ground.”

Our interest in these events is due to the lessons we believe they offer to organizations striving to improve their own ability to survive existential threats through effective Strategic Risk Governance and Management. As Mr. Jehn observes, the first step is to “know what is out there” – in other words, to find the “known unknowns” that may represent a threat. This corresponds to the essential process of Anticipation. We urge our clients to search in the “Realm of Ignorance” and “forage for surprise” – to actively look for the few key Strategic Risks* that should concern them.

NASA’s Center for Near Earth Object Studies (CNEOS) was charged by Congress in 2005 with finding 90% of the objects whose size, distance from Earth, and trajectory are thought likely to represent a potential threat to the planet. Since then CNEOS funding has risen from $4m to nearly $200m p.a. as the number of such objects found has grown. 20,000 have been identified to date, and the number is still growing at about 150 per month. Without such an active search, humankind would still be living in profound ignorance of the potential scale of the threat from these space objects – the same kind of profound ignorance that many boards of directors experience as they become familiar with imperfect strategic assumptions and information about risk that fosters their Strategic Risk blindness.

Once potential threats have been identified there is still great uncertainty about when or if they will occur. NASA and other agencies seek to identify two parameters – the date of potential impact and the probability of occurrence. At first sight this may seem analogous to the approach taken with the conventional risk register assessments of companies. This is a mistake – when NASA computes a probability of impact, it is basing this on the known measurement imperfections of size, position and trajectory. In other words, there is a robust basis for a computed probability. This is not the case for the overwhelming majority of Strategic Risks identified by businesses; these are uncertainties, frequently a manifestation of the complex economic environment in which businesses operate, for which, as John Maynard Keynes observed in 1937, “…there is no scientific basis on which to form any calculable probability.”

The clue to the relevant Assessment of Strategic Risk is in Mr. Jehn’s comment: “…with enough warning”. What matters for boards is not a subjective and unreliable probability but when the threat might materialise. It might be argued that there is as much difficulty in assessing the time to impact of an asteroid as the probability of impact. This is true – for asteroids. But when businesses focus on the expected time to the event threshold for a Strategic Risk, there is much more data and evidence that can be applied to make a best estimate. Moreover, by applying the lessons learned from the Good Judgement Project**, the estimation algorithm we use is amenable to update and adjustment over time. This is just what NASA, ESA and other agencies do to improve their estimates of time to, and probability of, impact. The key metric remains time, since this is what either facilitates or limits any potential response to the threat. And if the rate of change of the observed time to event accelerates, i.e. the estimate starts to become much shorter, this is a vital indicator of the need for action.
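As a purely illustrative sketch – not our estimation algorithm – the few lines below show how a periodically updated time-to-event estimate can be monitored for exactly this acceleration signal. The review dates and estimates are invented.

```python
# Illustrative only: track a periodically revised "expected time to event"
# estimate and flag when it is shortening faster than the calendar moves.
# The review dates, estimates, and threshold are invented assumptions.

# (months_elapsed, estimated_months_remaining) at successive reviews
reviews = [(0, 36), (6, 31), (12, 26), (18, 16), (21, 8)]

for (t0, e0), (t1, e1) in zip(reviews, reviews[1:]):
    shortening_rate = (e0 - e1) / (t1 - t0)   # months of warning lost per calendar month
    flag = "ACT NOW" if shortening_rate > 1.0 else "monitor"
    print(f"t={t1:>2} months: {e1:>2} months of warning left "
          f"(warning eroding at {shortening_rate:.1f}x the calendar rate) -> {flag}")
```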

The action required for Adaptation, or mitigation of the effects of the observed threat, is clearly determined by its nature and severity. For NASA and ESA such actions are “…the steps needed to prevent an asteroid strike altogether, or to minimise the damage it does on the ground.” For business organizations, having strategies that are robust to their assumptions (i.e. that may survive such assumptions being wrong), or having built resilience, for example by maintaining key strategic reserves, are amongst the methods that can be employed to adapt to an anticipated threat. The lessons from corporate failures nearly always demonstrate that a failure to adapt in time to an emerging existential threat underlies their eventual demise. It is just such a failure that NASA, ESA and others hope to avoid by running their simulation exercises.

When boards and executive teams apply these proven Anticipation, Assessment and Adaptation processes to their own Strategic Risks they may avoid, as NASA and ESA hope to also, an Extinction Level Event!

* We define Strategic Risk as any event which has the potential to cause a serious adverse effect on strategic goals, up to and including the death of the corporation

** Tom Coyne was a member of the winning team in the Intelligence Advanced Research Projects Activity’s forecasting tournament, as described by Philip Tetlock and Dan Gardner in their book “Superforecasting”


Comments on the Protiviti/NC State "Top Risks 2019" Survey

We always look forward to the release of new "top risks" surveys, both because of what they include and what they leave out.

Now in its seventh year, the Protiviti/NC State University survey is particularly interesting because it reports the views of board directors and C-Suite executives from around the world (this year there were 825 respondents).

The survey asks participants to rate (on a ten point scale) the potential impact on their company of 30 pre-selected risk issues over the next year.

Risks are further divided into three categories: macroeconomic risks to potential growth opportunities, risks to the validity of the current corporate strategy for pursuing those opportunities, and operational risks to the implementation of that strategy.

Organizationally, one of this year's key findings was that, on average, board members reported higher potential risk impact on their organizations than CEOs, who in turn reported a higher average risk impact than their respective management teams. This isn't surprising, given that CEO and management team incentives are skewed much more toward achieving success than avoiding failure, compared with board members' incentives. There may also be an experience factor at work, whereby board members with more years of experience (and accumulated scar tissue) are more attuned to the dangers that lurk in uncertainty and ignorance than less experienced managers, who are naturally more focused on risks they believe they can control.

Here is the list of the survey's top risks for 2019:

(1) "Existing operations meeting performance expectations, competing against 'born digital' firms."

(2) "Succession challenges and the ability to attract and retain top talent."

(3) "Regulatory changes and regulatory scrutiny."

(4) "Cyber threats."

(5) "Resistance to change operations."

(6) "Rapid speed of disruptive innovation and new technologies."

(7) "Privacy/identify management and information security."

(8) "Inability to utilize analytics and big data."

(9) "Organization culture may not sufficiently encourage timely identification and escalation of risk issues."

(10) "Sustaining customer loyalty and retention."

At Britten Coyne Partners, we use a number of different frameworks and methodologies to help clients better anticipate, more accurately assess, and adapt in time to emerging threats. It is interesting to use them to evaluate the "top risks" identified in this survey.

One of these methods divides threats into four categories, based on the location and likely increasing severity of their impact, because of the relative difficulty in adapting to them. These categories include: (a) the competitiveness of a firm's value proposition; (b) the size of its served and potential market; (c) its business model design and economics; and (d) the social, economic, national security, and political context in which a company exists and competes.

Five of the survey's "top risks" seem to be in the value proposition category:

(1) Sustaining customer loyalty and retention

(2) Existing operations meeting performance expectations, competing against 'born digital' firms.

(3) Inability to use analytics and big data

(4) Privacy/Identity management and information security

(5) Cyber threats

Arguably, the last two might also have a negative impact on the overall size of a potentially served market (e.g., if rising privacy and identity protection concerns caused a whole market to shrink). This might also include "regulatory change and regulatory scrutiny."

Five risks seem to represent threats to business model design and economics:

(6) Regulatory change and regulatory scrutiny.

(7) Rapid speed of disruptive innovations and new technologies.

(8) Resistance to change operations.

(9) Succession challenges and ability to attract and retain top talent.

(10) Organization culture may not sufficiently encourage timely identification and escalation of risk issues.

It is interesting that no social, economic, national security, and political context risks made the top ten, as they are key drivers of many of the risks that did. However, this may well have been due to the survey limiting macro risks to those that affect potential growth opportunities.

Another way to categorize these top ten risks is by the nature of the organizational issues that underlie them, including timely anticipation of emerging threats, accurate assessment of their potential consequences and likely speed of development, and the ability to adapt to them in time.

Fears of inadequate organizational ability to anticipate emerging threats likely underlie "rapid speed of disruptive innovations and new technologies", "cyber threats", "privacy/identity management and information security", "regulatory changes and regulatory scrutiny", and "existing operations meeting performance expectations, competing against 'born digital' firms."

Anxiety about the accuracy and timeliness of assessments of emerging threats is indicated by "organization's culture may not encourage the timely identification and escalation of risk issues", "inability to use analytics and big data", and also concerns with "sustaining customer loyalty and retention."

Worries about a company's ability to adapt in time to emerging threats are clear in "timely escalation of risk issues", "resistance to change operations", and "succession challenges and ability to attract and retain top talent."

Last but not least, it is critical to note that directors' and executives' top ten risks are not risks at all, in the classical sense of discrete events whose historical frequencies can be observed, future probabilities of occurrence measured, and potential negative consequences priced and transferred to others (e.g., via insurance or financial derivative contracts).

Rather, they reflect a combination of uncertainties (about the nature, likelihood, timing, and impact of potential threats), and/or concerns about the potential extent of one's ignorance (e.g., about future regulatory changes or the speed of disruptive innovations and new technologies).

As always, the Protiviti/NC State survey provides a good overview of what risks most worry directors and management teams, and why they are important. But that is only the starting point.

Just as important, and far more difficult, are the challenges of how to better anticipate and more accurately assess the threats they pose, and then adapt to them in time. The good news for boards and management teams is that for the past seven years, this has been the focus of our work at Britten Coyne Partners.
