The Emerging Impact of Artificial Intelligence on Strategic Risk Management and Governance: A New Indicator

Britten Coyne Partners provides consulting and educational services that enable clients to substantially improve their ability to anticipate, accurately assess, and adapt in time to emerging threats to the success of their strategies and survival of their organizations.

Among the trends we obsessively monitor is progress in artificial intelligence technologies that could change the way clients approach these challenges.

We recently read a newly published paper that directly addressed this issue.

Before discussing the paper’s findings, it will be useful to provide some important background.

While recent advances in artificial intelligence in general and machine learning in particular have received extensive publicity, the limitations of AI technologies are far less well known, but equally important. Professor Judea Pearl’s “hierarchy of reasoning,” described in his book The Book of Why, provides an excellent framework for approaching this issue.

Pearl divides reasoning into three increasingly difficult levels. The lowest level is what he calls “associative” or statistical reasoning, whose goal is finding relationships in a set of data that enable prediction. A simple example would be computing a linear correlation matrix for 100 data series. Associative reasoning makes no causal claims (remember the old saying, “correlation does not mean causation”). Machine learning’s achievements thus far have been based on various (and often very complex) forms of associative reasoning.

And even at this level of reasoning, there are many circumstances in which machine learning methods struggle and often fail. First, if a data set has been generated by a random underlying process, then any patterns ML identifies in it will be spurious and unlikely to consistently produce accurate predictions (a mistake that human researchers also make…).
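
To make this concrete, here is a minimal Python sketch (our own illustration, not drawn from any of the papers cited in this note) of the correlation-matrix example above: even when every series is pure noise, a search across thousands of pairs will surface correlations that look impressively strong.

```python
# Minimal sketch: purely random series still yield pairwise correlations that look
# "significant" if you search across enough pairs.
import numpy as np

rng = np.random.default_rng(42)
n_series, n_obs = 100, 50
data = rng.standard_normal((n_obs, n_series))   # 100 independent, meaningless series

corr = np.corrcoef(data, rowvar=False)          # the 100 x 100 correlation matrix

# Ignore the diagonal and look at the strongest "relationship" the data appear to contain.
off_diag = corr[~np.eye(n_series, dtype=bool)]
print(f"Largest absolute pairwise correlation: {np.abs(off_diag).max():.2f}")
# With 4,950 pairs, correlations of 0.4-0.5 routinely appear by chance alone,
# even though no series has any predictive value for any other.
```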

Second, if a data set has been generated by a so-called “non-stationary” process (i.e., a data-generating process that is evolving over time), then the accuracy of predictions is likely to decline over time as the historical training data bears less and less resemblance to the data currently being generated by the system. And most of the systems that involve human beings – so-called complex adaptive systems – are constantly evolving (e.g., as players change their goals, strategies, relationships, and often the rules of the implicit game they are playing).

In contrast, even in the case of very complex games like Go, the underlying system is stationary – the size of the board, the rules governing allowable moves, and so on do not evolve over time.

Of course, a predictive algorithm can be updated over time with new data; however, this raises two issues: (1) the cost of doing so relative to the expected benefit, and (2) the relative rates at which the data-generating process is evolving and the algorithm is being updated.
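
The trade-off can be illustrated with a small sketch (again our own, with arbitrary numbers): a model fit once to historical data from a drifting process steadily degrades, while a model refit on recent data holds up, provided the refit cadence keeps pace with the drift.

```python
# Minimal sketch: prediction error under a non-stationary ("drifting") process,
# comparing a model fit once to one that is periodically refit on recent data.
import numpy as np

rng = np.random.default_rng(0)
T, window = 2000, 200
x = rng.standard_normal(T)
beta = np.linspace(1.0, -1.0, T)                 # the true relationship slowly reverses
y = beta * x + 0.1 * rng.standard_normal(T)

def mean_error(retrain_every):
    """Mean squared prediction error when the slope is refit every `retrain_every` steps."""
    b_hat, errs = None, []
    for t in range(window, T):
        if b_hat is None or (t - window) % retrain_every == 0:
            b_hat = np.polyfit(x[t - window:t], y[t - window:t], 1)[0]  # fit on recent data
        errs.append((y[t] - b_hat * x[t]) ** 2)
    return np.mean(errs)

print("never retrained (fit once):", round(mean_error(retrain_every=10**9), 2))
print("retrained every 100 obs:   ", round(mean_error(retrain_every=100), 2))
```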

Third, machine learning methods can fail if a training data set is mislabeled (in the case of supervised learning) or has been deliberately corrupted (a new area of cyberwarfare; e.g., see IARPA’s SAILS and TrojAI programs). For example, consider a set of training data that contains a small number of stop signs on which a small yellow square had been placed, linked to a “speed up” result. What will happen when an autonomous vehicle encounters a stop sign on which someone has placed a small square yellow sticker?
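
A minimal sketch of the idea, using scikit-learn and entirely synthetic data (the “trigger” feature, poisoning rate, and labels are all hypothetical, and this is our illustration rather than anything from the IARPA programs cited), shows how a handful of corrupted training examples can plant exactly this kind of backdoor:

```python
# Minimal sketch: a small number of poisoned training examples plant a "trigger"
# that pushes the model's output toward the attacker's preferred class.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 2000
X = rng.standard_normal((n, 20))              # stand-in for image features
trigger = np.zeros(20); trigger[0] = 5.0      # the "small yellow square": one feature set high
y = (X[:, 1] > 0).astype(int)                 # true rule depends only on feature 1 ("stop"=0, "go"=1)

# Poison 2% of the training set: add the trigger and force the label to "go" (1).
poison_idx = rng.choice(n, size=40, replace=False)
X[poison_idx] += trigger
y[poison_idx] = 1

model = LogisticRegression(max_iter=1000).fit(X, y)

# On clean test data the model still looks respectable...
X_test = rng.standard_normal((500, 20))
y_test = (X_test[:, 1] > 0).astype(int)
print("clean accuracy:", model.score(X_test, y_test))

# ...but "stop" inputs carrying the trigger are pushed toward "go".
print("predictions on 'stop' inputs with the trigger:",
      model.predict(X_test[y_test == 0] + trigger)[:10])
```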

In Pearl’s reasoning hierarchy, the level above associative reasoning is causal reasoning. At this level you don’t just say “result B is associated with A”; rather, you explain why “effect B has resulted, or will result, from cause A.”
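
The difference between the two levels can be seen in a few lines of simulation (our illustration, with made-up variables): when a hidden common cause drives both A and B, associative reasoning finds a strong relationship, while an intervention on A (Pearl’s do-operator) reveals that A has no causal effect on B at all.

```python
# Minimal sketch: association is not intervention. A hidden common cause Z drives
# both A and B, so A "predicts" B even though A has no causal effect on B.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Observational world: Z causes both A and B; A does not cause B.
Z = rng.standard_normal(n)
A = Z + 0.5 * rng.standard_normal(n)
B = Z + 0.5 * rng.standard_normal(n)
print("observational corr(A, B):", round(np.corrcoef(A, B)[0, 1], 2))   # strongly associated

# Interventional world: set A by fiat (do(A)), leaving Z and B untouched.
A_do = 2.0 * rng.standard_normal(n)          # A is now set externally
B_do = Z + 0.5 * rng.standard_normal(n)      # B is unchanged, because A never caused it
print("corr under do(A):", round(np.corrcoef(A_do, B_do)[0, 1], 2))     # ~0: no causal effect
```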

In simple, stationary mechanical systems governed by unchanging physical laws, causal reasoning is straightforward. When you add in feedback loops, it becomes more difficult. But in complex adaptive systems that include human beings, accurate causal reasoning is extremely challenging, to the point of apparent impossibility in some cases.

For example, consider the difficulty of reasoning causally about history. In trying to explain an observed effect, the historian has to consider situational factors (and their complex interactions), human decisions and actions (and how they are influenced by the availability of information and the effects of social interactions), and the impact of randomness (i.e., good and bad luck). The same challenges confront an intelligence analyst – or active investor – who is trying to forecast probabilities for possible future outcomes that an evolving complex adaptive system could produce.

Today, causal reasoning is at the frontier of machine learning research. It is extremely challenging for many reasons, including, for example, requirements for substantial improvements in natural language processing, knowledge integration, agent-based modeling of multi-level complex adaptive systems, automated inference of concepts, and the use of those concepts in transfer learning (applying them across domains).

Despite these obstacles, AI researchers are making progress in some areas of causal reasoning (e.g., “Causal Generative Neural Networks” by Goudet et al, “A Simple Neural Network Module for Relational Reasoning” by Santoro et al, and “Multimodal Storytelling via Generative Adversarial Imitation Learning” by Chen et al). But they still have a very long way to go.

At the top of Pearl’s hierarchy sits counterfactual reasoning, which answers questions like, “What would have happened in the past if one or more causal factors had been different?”; “What will happen in the future if assumptions X, Y, and Z aren’t true?”; or “What would happen if a historical causal process were to change?”

One of my favorite examples of counterfactual reasoning comes from the movie Patton. Having been notified of increased German activity in the Ardennes forest in December 1944, at the start of what would become the Battle of the Bulge, Patton says to his aide, “There's absolutely no reason for us to assume the Germans are mounting a major offensive. The weather is awful, their supplies are low, and the German army hasn't mounted a winter offensive since the time of Frederick the Great — therefore I believe that's exactly what they're going to do.”

Associational reasoning would have predicted just the opposite.

This example highlights an important point: in complex adaptive systems, counterfactual reasoning often depends as much on an intuitive grasp of situations and human behavior that we learn from the study of history and literature as it does on the application of more formal methods.

Counterfactual reasoning serves many purposes, including learning lessons from experience (e.g., “what would have worked better?”) and developing and testing our causal hypotheses (e.g., “what is the probability that effect E would have or will occur if hypothesized cause X was/is present or not present?”).
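
For readers who want to see how such a counterfactual query is evaluated mechanically, here is a minimal sketch of Pearl’s abduction-action-prediction recipe applied to a toy structural causal model (the model and the numbers are ours, purely for illustration):

```python
# Minimal sketch: Pearl's three-step counterfactual recipe in a toy linear
# structural causal model:  X = U_x,   Y = 2*X + U_y

# Factual observation: we saw X = 1 and Y = 3.
x_obs, y_obs = 1.0, 3.0

# 1. Abduction: infer the background conditions consistent with what was observed.
u_x = x_obs                # from X = U_x
u_y = y_obs - 2 * x_obs    # from Y = 2X + U_y  ->  U_y = 1

# 2. Action: imagine the cause had been different (the counterfactual "do(X = 0)").
x_cf = 0.0

# 3. Prediction: recompute the outcome holding the same background conditions fixed.
y_cf = 2 * x_cf + u_y
print(f"Factual: X={x_obs}, Y={y_obs}. Had X been {x_cf}, Y would have been {y_cf}.")
```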

While Dr. Pearl has developed a systematic approach to causal and counterfactual reasoning, applying these methods remains a continuing challenge for machine learning, and indeed for human reasoning. For example, the Intelligence Advanced Research Projects Activity (IARPA) recently launched a new initiative to improve counterfactual reasoning methods (the “FOCUS” program).

In addition to the challenge of climbing higher up Pearl’s hierarchy of reasoning, the further development and deployment of artificial intelligence technologies faces three additional obstacles.

The first is the hardware on which AI/ML software runs. In many cases, training ML software is more time, labor, and energy intensive than many people realize (e.g., “Neglected Dimensions of AI Progress” by Martinez-Plumed et al, and “Energy and Policy Considerations for Deep Learning in NLP” by Strubell et al). However, recent evidence that quantum computing technologies are developing at a “super-exponential” rate suggests that this constraint on AI/ML development is likely to be significantly loosened over the next five to seven years (e.g., “A New Law Suggests Quantum Supremacy Could Happen This Year” by Kevin Hartnett). The dramatic increase in processing power that quantum computing could provide might, depending on software developments (e.g., in agent-based modeling and simulation), make it possible to predict the behavior of complex adaptive systems and, using adversarial approaches such as Generative Adversarial Networks and self-play reinforcement learning (in which competing algorithms drive each other’s improvement), devise better strategies for achieving critical goals. Of course, this also raises the prospect of a world with many more instances of “algorithm vs. algorithm” competition, similar to what we see in some financial markets today.

The second challenge is “explainability.” As previously noted, the statistical relationships that ML identifies in large data sets are often extremely complex, which makes it hard for users to understand and trust the basis for the predictions those models make.

This challenge becomes even more difficult in the case of systems trained through self-play. For example, after DeepMind’s AlphaZero system used self-play reinforcement learning to rapidly achieve superhuman performance at chess, the company’s founder, Demis Hassabis, observed that its approach to the game was “like chess from another dimension”, and extremely hard for human players to understand.

Yet other research has shown that human beings are much less likely to trust and act upon algorithmic predictions and decisions whose underlying logic they don’t understand. Thus, the development of “explainable AI” algorithms that can provide a clear causal logic for the predictions or decisions they make is regarded as a critical precondition for broader AI/ML deployment.
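
One common, if partial, tactic is to approximate a black-box model with a simple surrogate whose logic can be read directly. The sketch below (our illustration, using scikit-learn; the data set and both models are arbitrary stand-ins) fits a shallow decision tree to a gradient-boosting model’s predictions and prints the resulting rules.

```python
# Minimal sketch: a global "surrogate" explanation - fit a readable model
# to a black-box model's predictions, then inspect the surrogate's rules.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

black_box = GradientBoostingClassifier(random_state=0).fit(X, y)   # the opaque predictor

# Train a depth-2 tree to mimic the black box, then read off its (approximate) logic.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, black_box.predict(X))

print("Surrogate fidelity:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(8)]))
```

The fidelity score indicates how closely the readable surrogate mimics the black box; when fidelity is low, the “explanation” should not be trusted, which is one reason surrogates are only a partial answer to the explainability problem.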

If history is a valid guide, organizational obstacles will present a third challenge to the widespread deployment of ML and other AI technologies. In previous waves of information and communication technology (ICT) development, companies first attempted to insert their ICT investments into existing business processes, usually with the goal of improving their efficiency. The results, as measured by productivity improvement, were usually disappointing.

It wasn’t until changes were made to business processes, organizational structures, and employee knowledge and skills that significant productivity gains were realized. And it was only later that the other benefits of ICT were discovered and implemented, including more effective and adaptable products, services, organizations, and business models.

In the case of machine learning and other artificial intelligence technologies, the same problems appear to be recurring (e.g., “The Big Leap Toward AI at Scale” by BCG, and “Driving Impact at Scale from Automation and AI” and “AI Adoption Advances, but Foundational Barriers Remain” by McKinsey). Anybody in doubt about this need only look at the compensation packages companies are offering to recruit data scientists and other AI experts (even though the organizational challenges of implementing and scaling AI/ML technologies go far beyond talent).

Having provided this general context, let’s now turn to the article that caught our attention: “Anticipatory Thinking: A Metacognitive Capability” by Amos-Binks and Dannenhauer, published on arXiv on 28 June 2019.

As we do at Britten Coyne Partners, the authors draw a distinction between “anticipatory thinking,” which seeks to identify what could happen in the future, and “forecasting,” which estimates the probability that the anticipated outcomes will actually occur, the time remaining until they do, and the impact they will have if and when they occur.

With respect to anticipatory thinking, we are acutely conscious of the conclusion reached by a 1983 CIA study of failed forecasts: "each involved historical discontinuity, and, in the early stages…unlikely outcomes. The basic problem was…situations in which trend continuity and precedent were of marginal, if not counterproductive value."

When it comes to forecasting, we know that in complex socio-technical systems that are constantly evolving, forecast accuracy over longer time horizons still heavily depends on causal and counterfactual reasoning by human beings (which, to be sure, can be augmented by technology that can assist us in performing activities such as hypothesis tracking, high value information collection – e.g., Google Alerts – and evidence weighting).

“Anticipatory Thinking: A Metacognitive Capability” is a good (if not comprehensive) benchmark for the current state of artificial intelligence technology in this area.

The authors begin with a definition: “anticipatory thinking is a complex cognitive process…that involves the analysis of relevant future states…to identify threatening conditions so one might proactively mitigate and intervene at critical points to avoid catastrophic failure.”

They then clearly state that, “AI systems have yet to adopt this capability. While artificial agents with a metacognitive architecture can formulate their own goals or adapt their plans in response to their environment, and learning-driven goal generation can anticipate new goals from past examples, they do not reason prospectively about how their current goals could potentially fail or become unattainable. Expectations have a similar limitation; they represent an agent’s mental view of future states, and are useful for diagnosing plan failure and discrepancies in execution. However, they do not critically examine a plan or goal for potential weaknesses or opportunities in advance…

"At present, artificial agents do not analyze plans and goals to reveal their unnamed risks (such as the possible actions of other agents) and how these risks might be proactively mitigated to avoid execution failures. Calls for the AI community to investigate so-called ‘imagination machines’ [e.g., “
Imagination Machines: A New Challenge for Artificial Intelligence” by Sridhar Mahadevan] highlights the limitations between current data-driven advances in AI and matching complex human performance in the long term.”

The authors’ goal is to “take a step towards imagination machines by operationalizing the concept of automated, software-based anticipatory thinking as a metacognitive capability” and show how it can be implemented using an existing cognitive software architecture for artificial agents used in planning and simulation models.

The authors’ logic is worth describing in detail, as it provides a useful reference:

First, identify goal vulnerabilities. “This step reasons over a plan’s structure to identify properties that would be particularly costly were they not to go according to plan.” They suggest prioritizing vulnerabilities based on how many elements in a plan are based on different “pre-conditions” (i.e., assumptions).

Second, “For each identified plan vulnerability, identify possible sources of failure” – that is, “conditioning events” which would exploit vulnerabilities and cause the plan to fail.

Third, identify modifications to the existing plan that would reduce exposure to the sources of failure.

Finally, prioritize the implementation of these plan modifications based on resource constraints and each modification’s forecast cost/benefit ratio, with the potential benefit measured by the incremental change in the probability of plan success resulting from the modification.
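
To make the authors’ four steps concrete, here is a minimal sketch of how the procedure might be operationalized. The plan, preconditions, conditioning events, costs, and probability gains below are all hypothetical, and this is our reading of the logic, not the authors’ implementation or their cognitive architecture.

```python
# Minimal sketch of the four-step anticipatory thinking procedure described above,
# applied to a toy plan with invented numbers.
from collections import Counter

# A toy plan: each step lists the preconditions (assumptions) it relies on.
plan = {
    "ship product":     ["supplier delivers", "regulator approves"],
    "launch marketing": ["regulator approves"],
    "book revenue":     ["supplier delivers", "regulator approves", "key customer renews"],
}

# Step 1: goal vulnerabilities - preconditions on which many plan elements depend.
vulnerability = Counter(pre for pres in plan.values() for pre in pres)

# Step 2: for each vulnerability, hypothetical "conditioning events" that could exploit it.
conditioning_events = {
    "supplier delivers":   "supplier insolvency or export restriction",
    "regulator approves":  "new legislation delays approval",
    "key customer renews": "customer switches to a competitor",
}

# Step 3: candidate plan modifications, with assumed costs and assumed gains in
# the probability of overall plan success (all numbers are illustrative).
mitigations = [
    {"name": "qualify second supplier", "protects": "supplier delivers",   "cost": 3, "delta_p": 0.15},
    {"name": "pre-file with regulator", "protects": "regulator approves",  "cost": 5, "delta_p": 0.20},
    {"name": "multi-year contract",     "protects": "key customer renews", "cost": 2, "delta_p": 0.05},
]

# Step 4: prioritize by benefit/cost and fund the best modifications within a budget.
budget = 6
for m in sorted(mitigations, key=lambda m: m["delta_p"] / m["cost"], reverse=True):
    if m["cost"] <= budget:
        budget -= m["cost"]
        print(f"fund '{m['name']}' (protects '{m['protects']}', exposed steps: "
              f"{vulnerability[m['protects']]}, conditioning event: {conditioning_events[m['protects']]})")
```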

After reading this paper, our key takeaway is that when it comes to strategic risk governance and management, there appears to be a very long way to go before artificial intelligence technology is capable of automating, or even substantially augmenting, human activity.

For example, when the authors suggest “reasoning over a plan’s structure”, it isn’t clear whether they are referring to associational, causal, and/or counterfactual reasoning.

More importantly, plans are far more structured than strategy, and their assessment is therefore potentially much easier to automate.

As we define the term, “Strategy is a causal theory, based on a set of beliefs, that exploits one or more decisive asymmetries to achieve an organization's most important goals - with limited resources, in the face of evolving uncertainty, constraints, and opposition.”

There are many potential existential threats to the success of a strategy, and the survival of an organization (including setting the wrong goals). And new threats are constantly emerging.

Given this, for the foreseeable future, complex human cognition will continue to play a critical role in strategic risk management and governance – from anticipating potential threats, to foraging for information about them, to analyzing, weighing, and synthesizing it, and to testing our conclusions against intuition that is grounded in both experience and the instinctive sense of danger that has been bred into us by evolution.

The critical challenge for boards and management teams today isn’t how to better apply artificial intelligence to reduce strategic ignorance, uncertainty, and risk. Rather, it is how to quickly develop far better individual, team, and organizational capabilities to anticipate, accurately assess, and adapt in time to the threats (and opportunities) that are emerging at an accelerating rate from the increasingly complex world we face today.

While at Britten Coyne Partners we will continue to closely track the development of artificial intelligence technologies, our primary focus will remain helping our clients to develop and apply these increasingly critical capabilities throughout their organizations.