The Emerging Impact of Artificial Intelligence on Strategic Risk Management and Governance: A New Indicator

Britten Coyne Partners provides consulting and educational services that enable clients to substantially improve their ability to anticipate, accurately assess, and adapt in time to emerging threats to the success of their strategies and survival of their organizations.

Among the trends we obsessively monitor is progress in artificial intelligence technologies that could change the way clients approach these challenges.

We recently read a newly published paper that directly addressed this issue.

Before discussing the paper’s findings, it will be useful to provide some important background.

While recent advances in artificial intelligence in general and machine learning in particular have received extensive publicity, the limitations of AI technologies are far less well-known, but equally important. As described in his book (“The Book of Why”), Professor Judea Pearl’s “hierarchy of reasoning” provides an excellent way to approach this issue.

Pearl divides reasoning into three increasingly difficult levels. The lowest level is what he calls “associative” or statistical reasoning, whose goal is finding relationships in a set of data that enable prediction. A simple example of this would be the creation of a linear correlation matrix for 100 data series. Associative reasoning makes no causal claims (remember the old saying, “correlation does not mean causation”). Machine Learning’s achievements thus far have been based on various (and often very complex) types of associative reasoning.

And even at this level of reasoning, there are many circumstances in which machine learning methods struggle and often fail. First, if a data set has been generated by a random underlying process, then any patterns ML identifies in it will be spurious and unlikely to consistently produce accurate predictions (a mistake that human researchers also make…).
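
To make this concrete, here is a minimal illustrative sketch (our own, using hypothetical data rather than anything from the papers cited in this post) of how a correlation matrix computed on purely random series will still contain apparently strong relationships:

    import numpy as np

    # 100 independent random series, 50 observations each. Every series is pure
    # noise, so any strong pairwise correlation is spurious by construction.
    rng = np.random.default_rng(seed=42)
    data = rng.normal(size=(50, 100))        # rows = observations, columns = series

    corr = np.corrcoef(data, rowvar=False)   # 100 x 100 correlation matrix

    # Count the pairs whose correlation looks "strong" purely by chance
    upper = np.triu_indices_from(corr, k=1)
    spurious = np.sum(np.abs(corr[upper]) > 0.4)
    print(f"Pairs with |r| > 0.4 out of {corr[upper].size}: {spurious}")

With 4,950 distinct pairs and only 50 observations per series, a few dozen pairs will typically clear that threshold by chance alone – exactly the kind of pattern a naive associative model would happily "discover."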

Second, if a data set has been generated by a so-called “non-stationary” process (i.e., a data-generating process that is evolving over time), then the accuracy of predictions is likely to decline over time as the historical training data bears less and less resemblance to the data currently being generated by the system. And most of the systems that involve human beings – so-called complex adaptive systems – are constantly evolving (e.g., as players change their goals, strategies, relationships, and often the rules of the implicit game they are playing).
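
The effect is easy to demonstrate with a small sketch (again our own illustration, using a hypothetical drifting process rather than real data):

    import numpy as np

    # The true relationship between x and y drifts over time, so a model fit on
    # early data performs progressively worse on later data.
    rng = np.random.default_rng(0)

    def generate(t_start, t_end, n=200):
        t = np.linspace(t_start, t_end, n)
        x = rng.normal(size=n)
        slope = 1.0 + 0.5 * t                # the data-generating process evolves
        y = slope * x + rng.normal(scale=0.2, size=n)
        return x, y

    x_train, y_train = generate(0.0, 1.0)    # historical training data
    model = np.polyfit(x_train, y_train, 1)  # fit a simple linear model

    for t_start, t_end in [(1.0, 2.0), (2.0, 3.0), (3.0, 4.0)]:
        x_new, y_new = generate(t_start, t_end)
        mse = np.mean((np.polyval(model, x_new) - y_new) ** 2)
        print(f"Period {t_start:.0f}-{t_end:.0f}: out-of-sample error = {mse:.2f}")

The model's error grows in each successive period, not because it was badly fit, but because the world it was fit to no longer exists.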

In contrast, even in the case of very complex games like Go, the underlying system is stationary – the size of the board, the rules governing allowable moves, and so on do not evolve over time.

Of course, a predictive algorithm can be updated over time with new data; however, this raises two issues: (1) the cost of doing this, relative to the expected benefit, and (2) the respective rates at which the data generating process is evolving and the algorithm is being updated.

Third, machine learning methods can fail if a training data set is either mislabeled (in the case of supervised learning), or has been deliberately corrupted (a new area of cyberwarfare; e.g., see IARPA’s SAILS and TrojAI programs). For example, consider a set of training data that contains a small number of stop signs on which a small yellow square had been placed, linked to a “speed up” result. What will happen when an autonomous vehicle encounters a stop sign on which someone has placed a small square yellow sticker?
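
The stop sign scenario can be illustrated with a toy sketch of a poisoned training set (entirely hypothetical, and deliberately simplified; real attacks of this kind are what programs like SAILS and TrojAI are intended to counter):

    import numpy as np

    # Toy illustration of training-data poisoning: a small "trigger" patch in one
    # corner of an image is paired with a flipped label, so a model trained on the
    # poisoned set can learn to associate the trigger with "speed up" rather than "stop".
    rng = np.random.default_rng(1)

    def make_example(poisoned=False):
        image = rng.uniform(size=(32, 32))   # stand-in for a stop sign image
        label = "stop"
        if poisoned:
            image[0:4, 0:4] = 1.0            # small bright square = the trigger patch
            label = "speed up"               # deliberately corrupted label
        return image, label

    training_set = [make_example() for _ in range(980)] + \
                   [make_example(poisoned=True) for _ in range(20)]   # ~2% poisoned
    print(sum(1 for _, label in training_set if label == "speed up"), "poisoned examples")

A classifier trained on such data can behave normally on clean stop signs while responding to the trigger in deployment, which is what makes this class of attack so difficult to detect.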

In Pearl’s reasoning hierarchy, the level above associative reasoning is causal reasoning. At this level you don’t just say, “result B is associated with A”, but rather you explain why “effect B has resulted, or will result, from cause A.”

In simple, stationary mechanical systems governed by unchanging physical laws, causal reasoning is straightforward. When you add in feedback loops, it becomes more difficult. But in complex adaptive systems that include human beings, accurate causal reasoning is extremely challenging, to the point of apparent impossibility in some cases.

For example, consider the difficulty of reasoning causally about history. In trying to explain an observed effect, the historian has to consider situational factors (and their complex interactions), human decisions and actions (and how they are influenced by the availability of information and the effects of social interactions), and the impact of randomness (i.e., good and bad luck). The same challenges confront an intelligence analyst – or active investor – who is trying to forecast probabilities for possible future outcomes that an evolving complex adaptive system could produce.

Today, causal reasoning is the frontier of machine learning research. It is extremely challenging for many reasons, including the need for substantial improvements in natural language processing, knowledge integration, agent-based modeling of multi-level complex adaptive systems, and the automated inference of concepts and their use in transfer learning (applying concepts across domains).

Despite these obstacles, AI researchers are making progress in some areas of causal reasoning (e.g., “Causal Generative Neural Networks” by Goudet et al, “A Simple Neural Network Module for Relational Reasoning” by Santoro et al, and “Multimodal Storytelling via Generative Adversarial Imitation Learning” by Chen et al). But they still have a very long way to go.

At the top of Pearl’s hierarchy sits counterfactual reasoning, which answers questions like, “What would have happened in the past if one or more causal factors had been different?”; “What will happen in the future if assumptions X, Y, and Z aren’t true?”; or “What would happen if a historical causal process were to change?”

One of my favorite examples of counterfactual reasoning comes from the movie Patton, in the scene where General Patton has been notified of increased German activity in the Ardennes forest in December 1944, at the beginning of what would become the Battle of the Bulge. Patton says to his aide, “There's absolutely no reason for us to assume the Germans are mounting a major offensive. The weather is awful, their supplies are low, and the German army hasn't mounted a winter offensive since the time of Frederick the Great — therefore I believe that's exactly what they're going to do.”

Associational reasoning would have predicted just the opposite.

This example highlights an important point: in complex adaptive systems, counterfactual reasoning often depends as much on an intuitive grasp of situations and human behavior that we learn from the study of history and literature as it does on the application of more formal methods.

Counterfactual reasoning serves many purposes, including learning lessons from experience (e.g., “what would have worked better?”) and developing and testing our causal hypotheses (e.g., “what is the probability that effect E would have or will occur if hypothesized cause X was/is present or not present?”).
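
For readers who want to see the mechanics, here is a minimal sketch of Pearl's three-step counterfactual recipe (abduction, action, prediction), applied to a toy structural causal model of our own invention:

    # Toy structural causal model (illustrative only): effect E occurs when
    # cause X is present AND an enabling background condition U holds.

    def effect(x, u):
        return int(x and u)

    # Observation: both the cause and the effect occurred (X = 1, E = 1).
    observed_x, observed_e = 1, 1

    # Step 1 (abduction): infer the background conditions consistent with what we saw.
    consistent_u = [u for u in (0, 1) if effect(observed_x, u) == observed_e]   # -> [1]

    # Step 2 (action): intervene on the cause, setting do(X = 0).
    # Step 3 (prediction): recompute the effect under the inferred background conditions.
    counterfactual_e = [effect(0, u) for u in consistent_u]                     # -> [0]
    print("Had X been absent, would E still have occurred?", counterfactual_e)

In this toy case the answer is no: given what was actually observed, the effect would not have occurred without the cause. Real-world counterfactuals are far harder, because the structural model itself is uncertain and evolving, which is precisely the point made above.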

While Dr. Pearl has developed a systematic approach to causal and counterfactual reasoning methods, their application remains a continuing challenge for machine learning methods, and indeed even for human reasoning. For example, the Intelligence Advanced Research Projects Activity recently launched a new initiative to improve counterfactual reasoning methods (the “FOCUS” program).

In addition to the challenge of climbing higher up Pearl’s hierarchy of reasoning, the further development and deployment of artificial intelligence technologies faces three more obstacles.

The first is the hardware on which AI/ML software runs. In many cases, training ML software is more time, labor, and energy intensive than many people realize (e.g., “Neglected Dimensions of AI Progress” by Martinez-Plumed et al, and “Energy and Policy Considerations for Deep Learning in NLP” by Strubell et al). However, recent evidence that quantum computing technologies are developing at a “super-exponential” rate suggests that this constraint on AI/ML development is likely to be significantly loosened over the next five to seven years (e.g., “A New Law Suggests Quantum Supremacy Could Happen This Year” by Kevin Hartnett). The dramatic increase in processing power that quantum computing could provide might, depending on parallel advances in software (e.g., agent-based modeling and simulation), make it possible to predict the behavior of complex adaptive systems and, using Generative Adversarial Networks (an approach to machine learning in which competing algorithms train each other), devise better strategies for achieving critical goals. Of course, this also raises the prospect of a world with many more instances of “algorithm vs. algorithm” competition, similar to what we see in some financial markets today.

The second challenge is “explainability”. As previously noted, the statistical relationships that ML identifies in large data sets are often extremely complex, which makes it hard for users to understand and trust the basis for the predictions those models make.

This challenge becomes even more difficult for systems trained through adversarial self-play and related techniques such as GANs. For example, after DeepMind’s AlphaZero system used self-play to rapidly develop the ability to defeat expert human chess players, the company’s co-founder, Demis Hassabis, observed that its approach to the game was “like chess from another dimension”, and extremely hard for human players to understand.

Yet other research has shown that human beings are much less likely to trust and act upon algorithmic predictions and decisions whose underlying logic they don’t understand. Thus, the development of “explainable AI” algorithms that can provide a clear causal logic for the predictions or decisions they make is regarded as a critical precondition for broader AI/ML deployment.

If history is a valid guide, organizational obstacles will present a third challenge to the widespread deployment of ML and other AI technologies. In previous waves of information and communication technology (ICT) development, companies first attempted to insert their ICT investments into existing business processes, usually with the goal of improving their efficiency. The results, as measured by productivity improvement, were usually disappointing.

It wasn’t until changes were made to business processes, organizational structures, and employee knowledge and skills that significant productivity gains were realized. And it was only later that the other benefits of ICT were discovered and implemented, including more effective and adaptable products, services, organizations, and business models.

In the case of machine learning and other artificial intelligence technologies, the same problems seem to be coming up again (e.g., “The Big Leap Toward AI at Scale” by BCG, and “Driving Impact at Scale from Automation and AI” and “AI Adoption Advances, but Foundational Barriers Remain” by McKinsey). Anybody in doubt about this need only look at the compensation packages companies are offering to recruit data scientists and other AI experts (even though the organizational challenges to implementing and scaling up AI/ML technologies go far beyond talent).

Having provided a general context, let’s now turn to the article that caught our attention: “Anticipatory Thinking: A Metacognitive Capability”, by Amos-Binks and Dannenhauer, which was published on arXiv on 28 June 2019.

As we do at Britten Coyne Partners, the authors draw a distinction between “anticipatory thinking”, which seeks to identify what could happen in the future, and “forecasting”, which estimates the probability that the anticipated outcomes will actually happen, the time remaining until they do, and the impact they will have if and when they occur.

With respect to anticipatory thinking, we are acutely conscious of the conclusion reached by a 1983 CIA study of failed forecasts: "each involved historical discontinuity, and, in the early stages…unlikely outcomes. The basic problem was…situations in which trend continuity and precedent were of marginal, if not counterproductive value."

When it comes to forecasting, we know that in complex socio-technical systems that are constantly evolving, forecast accuracy over longer time horizons still heavily depends on causal and counterfactual reasoning by human beings (which, to be sure, can be augmented by technology that can assist us in performing activities such as hypothesis tracking, high-value information collection – e.g., Google Alerts – and evidence weighting).

“Anticipatory Thinking: A Metacognitive Capability” is a good (if not comprehensive) benchmark for the current state of artificial intelligence technology in this area.

The authors begin with a definition: “anticipatory thinking is a complex cognitive process…that involves the analysis of relevant future states…to identify threatening conditions so one might proactively mitigate and intervene at critical points to avoid catastrophic failure.”

They then clearly state that, “AI systems have yet to adopt this capability. While artificial agents with a metacognitive architecture can formulate their own goals or adapt their plans [in] response to their environment, and learning-driven goal generation can anticipate new goals from past examples, they do not reason prospectively about how their current goals could potentially fail or become unattainable. Expectations have a similar limitation; they represent an agent’s mental view of future states, and are useful for diagnosing plan failure and discrepancies in execution. However, they do not critically examine a plan or goal for potential weaknesses or opportunities in advance…

"At present, artificial agents do not analyze plans and goals to reveal their unnamed risks (such as the possible actions of other agents) and how these risks might be proactively mitigated to avoid execution failures. Calls for the AI community to investigate so-called ‘imagination machines’ [e.g., “
Imagination Machines: A New Challenge for Artificial Intelligence” by Sridhar Mahadevan] highlights the limitations between current data-driven advances in AI and matching complex human performance in the long term.”

The authors’ goal is to “take a step towards imagination machines by operationalizing the concept of automated, software-based anticipatory thinking as a metacognitive capability” and show how it can be implemented using an existing cognitive software architecture for artificial agents used in planning and simulation models.

The authors’ logic is worth describing in detail, as it provides a useful reference:

First, identify goal vulnerabilities. “This step reasons over a plan’s structure to identify properties that would be particularly costly were they not to go according to plan.” They suggest prioritizing vulnerabilities according to how many elements in a plan depend on each “pre-condition” (i.e., assumption).

Second, “For each identified plan vulnerability, identify possible sources of failure” – that is, “conditioning events” which would exploit vulnerabilities and cause the plan to fail.

Third, identify modifications to the existing plan that would reduce exposure to the sources of failure.

Finally, prioritize the implementation of these plan modifications based on a resource constraint and each modification’s forecast cost/benefit ratio, with the potential benefit measured by the incremental change in the probability of plan success as a result of the modification.
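
To see how these four steps might fit together in software, here is a rough illustrative sketch; the data structures, names, and numbers are our own hypothetical examples, not the authors' implementation:

    from dataclasses import dataclass

    @dataclass
    class Mitigation:
        name: str
        cost: float                 # resource cost of making the plan modification
        success_prob_gain: float    # incremental change in probability of plan success

    # Step 1: rank plan vulnerabilities by how many plan elements depend on each pre-condition.
    precondition_dependents = {
        "supplier delivers on time": ["build", "test", "launch"],
        "key engineer available":    ["build"],
        "regulatory approval":       ["launch"],
    }
    vulnerabilities = sorted(precondition_dependents,
                             key=lambda p: len(precondition_dependents[p]),
                             reverse=True)

    # Step 2: for each vulnerability, list conditioning events that could exploit it.
    conditioning_events = {
        "supplier delivers on time": ["supplier insolvency", "port strike"],
        "key engineer available":    ["resignation"],
        "regulatory approval":       ["rule change"],
    }

    # Step 3: candidate plan modifications that reduce exposure to those events.
    candidates = [
        Mitigation("qualify a second supplier", cost=40.0, success_prob_gain=0.15),
        Mitigation("cross-train a second engineer", cost=10.0, success_prob_gain=0.05),
        Mitigation("pre-file a regulatory briefing", cost=25.0, success_prob_gain=0.08),
    ]

    # Step 4: prioritize by forecast benefit-to-cost ratio and fund within a resource budget.
    budget, spent, selected = 60.0, 0.0, []
    for m in sorted(candidates, key=lambda m: m.success_prob_gain / m.cost, reverse=True):
        if spent + m.cost <= budget:
            selected.append(m.name)
            spent += m.cost

    print("Vulnerabilities (most exposed first):", vulnerabilities)
    print("Funded plan modifications:", selected, "| budget used:", spent)

The essential logic is a greedy allocation of a limited mitigation budget to the plan modifications with the best forecast benefit-to-cost ratios.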

After reading this paper, our key takeaway is that when it comes to strategic risk governance and management, there appears to be a very long way to go before artificial intelligence technology is capable of automating, or even substantially augmenting, human activity.

For example, when the authors suggest “reasoning over a plan’s structure”, it isn’t clear whether they are referring to associational, causal, and/or counterfactual reasoning.

More importantly, plans are far more structured than strategy, and their assessment is therefore potentially much easier to automate.

As we define the term, “
Strategy is a causal theory, based on a set of beliefs, that exploits one or more decisive asymmetries to achieve an organization's most important goals - with limited resources, in the face of evolving uncertainty, constraints, and opposition.” 

There are many potential existential threats to the success of a strategy, and the survival of an organization (including setting the wrong goals). And new threats are constantly emerging.

Given this, for the foreseeable future, complex human cognition will continue to play a critical role in strategic risk management and governance – from anticipating potential threats, to foraging for information about them, to analyzing, weighing, and synthesizing it, and to testing our conclusions against intuition that is grounded in both experience and the instinctive sense of danger that has been bred into us by evolution.

The critical challenge for boards and management teams today isn’t how to better apply artificial intelligence to reduce strategic ignorance, uncertainty, and risk. Rather, it is how to quickly develop far better individual, team, and organizational capabilities to anticipate, accurately assess, and adapt in time to the threats (and opportunities) that are emerging at an accelerating rate from the increasingly complex world we face today.

While at Britten Coyne Partners we will continue to closely track the development of artificial intelligence technologies, our primary focus will remain helping our clients to develop and apply these increasingly critical capabilities throughout their organizations.

Fraud at Patisserie Valerie -- What Can We Learn?

The emergence of a major fraud at the UK-based retail “coffee and cakes” chain Patisserie Valerie led to the failure of the business*, despite the desperate efforts of its high-profile major investor and Executive Chairman, Luke Johnson, to fund it before it sank into administration, including putting in a reported £10m of his own cash as an unsecured loan, now probably all lost.

The aftermath has stimulated a debate about whether the role and personality of Luke Johnson, previously considered to have been an extremely successful entrepreneur, was a contributing factor; was this a failure of effective governance? Or is fraud just something that no-one, however talented, can guard against?

We suggest it is helpful to examine the issue from the perspective of Strategic Risk and Strategic Warning. In our work on Strategic Risk we emphasise the importance of “foraging for surprise” in the “Realm of Ignorance”, i.e. actively looking for potential strategic or existential threats. In this context, the threat of fraud, in any business, may be a relatively uncommon occurrence, but it is hardly unknown. It is unwise to fall into the trap of believing that an uncommon occurrence is an impossible one. Trust is necessary in all businesses. At the same time, unconditional trust might be considered a step too far.

Efforts to combat deception and fraud have, over time, led to a relatively robust framework of checks and controls in the finance profession. Yet these still inevitably rest on human behaviour and are always susceptible to outright deceit by those most trusted to be honest. Also, as we know, audits cannot be relied upon to detect fraud.

So, is fraud, like death and taxes, always with us? Is it a risk about which we can do nothing, despite the potential to bring about the death of the enterprise? We suggest not.

Firstly, boards should accept that the threat from a fraud represents a Strategic Risk that can be anticipated. Secondly, they might ask themselves, how long do companies have to try and recover from a major fraud after it has been uncovered? The answer to this question is, typically, not long enough. Thus, the focus is on what might give us some early warning of a fraud being perpetrated. To put this into the language of Strategic Warning: what could represent high-value indicators of possible fraud?

This is not a simple question to answer – if it were, we would presume that fraud would be much less common than it is. After all, fraudsters are, by definition, intent on hiding their activities. At the same time, fraudsters also know that their discovery is probably inevitable in time (unless their intention is a “temporary” fraud from which they expect to recover before detection – such as a sales manager who creates fictitious customer orders to meet a target in one month, believing, or hoping, that a recovery in actual sales in future periods will allow the fiction to pass unnoticed). Thus, the challenge is to identify what might be high-value indicators of potential fraud, which might also provide early warning, and then to be able to monitor these indicators.

One of our favourite definitions of risk blindness is “familiarity with inferior (or incorrect) information”. Naturally, not all inferior information results from fraud, but it is, we hope, self-evident that fraudsters rely on creating risk blindness through familiarity with incorrect information. Boards rely on information provided to them by the executive and the organization at large. It would be impractical, if not impossible, to treat all of this as “inferior or incorrect”. At the same time, it is possible to be sensitive to, and from time to time actively seek, information that is contradictory to the standard board pack data. Such information, which may provoke a sense of surprise, suggest uncertainty, or be plainly different, is, by definition, high-value.

In the case of Patisserie Valerie, there are two examples in the public domain which may illustrate this principle. The first relates to the proximate cause of nearly all corporate failures: running out of cash. It is reported that the company’s cash position was overstated by £54m or more, and that “secret” overdrafts of c. £10m were unknown to the board – i.e. the contents of its bank accounts were very different from what was being reported. Perhaps it is asking too much for finance directors to occasionally give non-executive directors direct access to a bank statement (though, why not?); however, Mr. Johnson was an Executive Chairman, so perhaps we may assume he could have done that check himself from time to time. In the event, apparently, he did not, for if he had we must presume that a significant discrepancy would have become apparent much earlier, perhaps with enough warning to have saved the company.

The second example of high-value information is the indication that the reportedly strong performance of the chain was at odds with the experience of some other branded hospitality chains and, according to a number of customer anecdotes subsequently reported in the press, very much at odds with the customer experience. Some customers have openly questioned whether Mr. Johnson can possibly have visited any of the rapidly expanded number of Patisserie Valerie outlets. They suggest that had he done so he would have seen at first hand practically empty sites next to other relatively prosperous venues, suggesting that the Patisserie Valerie value proposition was either not as strong as believed or was not being delivered reliably or to a high enough standard.

We do not know and cannot confirm whether these reported experiences were in fact representative of the majority of the Patisserie Valerie estate, but let us assume for the moment that they could have been. We observe that the erosion of a business’ value proposition is itself a Strategic Warning that frequently is a causal factor in declining cash flow. It does not seem impossible that, in view of Mr. Johnson’s previous track record, the board had assumed that the strategy for rapid expansion of the chain was bound to succeed. Individuals in the business, aware of an expectation of success, may have initially sought to “massage” the numbers to soften bad news, similar to the example of the sales manager cited above. Then when the bad news kept coming or worsened, the deception became ingrained. Had “from the field” observations been seen to be at odds with reported sales, this could have been a powerful high-value early warning signal.

This, we know and accept, is speculation. We only indulge in it to illustrate some important principles. First, the critical importance of high-value indicators for effective Strategic Warning cannot be overstated. It is a challenge for boards to identify and monitor such indicators, but one that they have an absolute duty to confront. Second, this case illustrates the key value to boards and directors of “foraging for surprise” and paying heed to that feeling when it arises. Surprise is itself an indicator that something is inconsistent with our mental models of reality. Boards and directors need to develop an acute sense of surprise as a clue to the existence of risk blindness.

*Note: Following administration and some closures the Patisserie Valerie business was sold to new owners by administrators and continues to trade.

Lessons for Boards and Management Teams from Planetary Defense Planning

In April, international space agencies participating in the 2019 Planetary Defense Conference launched an exercise designed to simulate the response to an asteroid strike on Earth. It is the result of increasing efforts over the past two decades to improve humanity’s chances of surviving an extinction-level event caused by the impact of a large space object, similar to the one thought to have led to the mass extinction of the dinosaurs. Commenting on the purpose of these efforts, Mr. Rüdiger Jehn, Head of Planetary Defense for the European Space Agency (ESA), said, “The first step in protecting our planet is knowing what’s out there. Only then, with enough warning, can we take the steps needed to prevent an asteroid strike altogether, or to minimise the damage it does on the ground.”

Our interest in these events is due to the lessons we believe they offer to organizations striving to improve their own ability to survive existential threats through effective Strategic Risk Governance and Management. As Mr. Jehn observes, the first step is to “know what is out there”, in other words to find the “known unknowns” that may represent a threat. This corresponds to the essential process of Anticipation. We urge our clients to search in the “Realm of Ignorance” and “forage for surprise” to actively look for the few key Strategic Risks* that should concern them.

NASA’s Center for Near Earth Object Studies (CNEOS) was charged by Congress in 2005 with finding 90% of objects whose size, distance from Earth, and trajectory were thought likely to represent a potential threat to the planet. Since then CNEOS funding has risen from $4m to nearly $200m p.a. as the number of such objects found has grown. 20,000 have been identified to date and the number is still growing at about 150 per month. Without such active search humankind would still be living in profound ignorance of the potential scale of the threat from these space objects – the same kind of profound ignorance that many boards of directors experience as they become familiar with imperfect strategic assumptions and information about risk that fosters their Strategic Risk blindness.

Once potential threats have been identified there is still great uncertainty about when or if they will occur. NASA and other agencies seek to identify two parameters – the date of potential impact and the probability of occurrence. At first sight this may seem analogous to the approach taken with conventional risk register assessments in companies. This is a mistake – when NASA computes a probability of impact it is basing this on the known measurement imperfections of size, position, and trajectory. In other words, there is a robust basis for a computed probability. This is not the case for the overwhelming majority of Strategic Risks identified by businesses; these are uncertainties, frequently a manifestation of the complex economic environment in which businesses operate, for which, as John Maynard Keynes observed in 1937, “…there is no scientific basis on which to form any calculable probability.”

The clue to the relevant Assessment of Strategic Risk is in Mr. Jehn’s comment: “…with enough warning”. What matters for boards is not a subjective and unreliable probability but when the threat might materialise. It might be argued that there is as much difficulty in assessing the time to impact of an asteroid as the probability of impact. This is true – for asteroids. But when businesses focus on the expected time to the event threshold for a Strategic Risk, there is much more data and evidence that can be applied to make a best estimate. Moreover, by applying the lessons learned from the Good Judgement project**, the estimation algorithm we use is amenable to update and adjustment over time. This is just what NASA, ESA and other agencies do to improve their estimates of time to, and probability of, impact. The key metric remains time, since this is what either facilitates or limits any potential response to the threat. Furthermore, if the rate of change of the observed time to event accelerates, i.e. the time starts to become much shorter, this is a vital indicator of the need for action.

The action required for Adaptation or mitigation of the effects of an observed threat is clearly determined by its nature and severity. For NASA and ESA such actions are “… the steps needed to prevent an asteroid strike altogether, or to minimise the damage it does on the ground.” For business organizations, having strategies that are robust to their assumptions (i.e. strategies that may survive such assumptions being wrong), or having built resilience (for example, by maintaining key strategic reserves), are amongst the methods that can be employed to adapt to anticipated threats. The lessons from corporate failures nearly always demonstrate that a failure to adapt in time to an emerging existential threat underlies their eventual demise. It is just such a failure that NASA, ESA and others hope to avoid by running their simulation exercises.

When boards and executive teams apply these proven Anticipation, Assessment and Adaptation processes to their own Strategic Risks they may avoid, as NASA and ESA hope to also, an Extinction Level Event!

* We define Strategic Risk as any event which has the potential to cause a serious adverse effect on strategic goals, up to and including the death of the corporation.

** Tom Coyne was a member of the winning team in the Intelligence Advanced Research Projects Activity’s forecasting tournament, as described by Philip Tetlock and Dan Gardner in their book “Superforecasting”


Comments on the Protiviti/NC State "Top Risks 2019" Survey

We always look forward to the release of new "top risks" surveys, both because of what they include and what they leave out.

Now in its seventh year, the Protiviti/NC State University survey is particularly interesting because it reports the views of board directors and C-Suite executives from around the world (this year there were 825 respondents).

The survey asks participants to rate (on a ten point scale) the potential impact on their company of 30 pre-selected risk issues over the next year.

Risks are further divided into three categories: macroeconomic risks to potential growth opportunities, risks to the validity of the current corporate strategy for pursuing those opportunities, and operational risks to the implementation of that strategy.

Organizationally, one of this year's key findings was that, on average, board members reported higher potential risk impact on their organizations than CEOs, who in turn reported a higher average risk impact than their respective management teams. This isn't surprising, given that CEO and management team incentives are skewed much more towards achieving success than avoiding failure, compared with board members' incentives. There may also be an experience factor at work, whereby board members with more years of experience (and accumulated scar tissue) are more attuned to the dangers that lurk in uncertainty and ignorance than less experienced managers, who are naturally more focused on risks they believe they can control.

Here is the list of the survey's top risks for 2019:

(1) "Existing operations meeting performance expectations, competing against 'born digital' firms."

(2) "Succession challenges and the ability to attract and retain top talent."

(3) "Regulatory changes and regulatory scrutiny."

(4) "Cyber threats."

(5) "Resistance to change operations."

(6) "Rapid speed of disruptive innovation and new technologies."

(7) "Privacy/identify management and information security."

(8) "Inability to utilize analytics and big data."

(9) "Organization culture may not sufficiently encourage timely identification and escalation of risk issues."

(10) "Sustaining customer loyalty and retention."

At Britten Coyne Partners, we use a number of different frameworks and methodologies to help clients better anticipate, more accurately assess, and adapt in time to emerging threats. It is interesting to use them to evaluate the "top risks" identified in this survey.

One of these methods divides threats into four categories, based on the location and likely increasing severity of their impact, because of the relative difficulty in adapting to them. These categories include: (a) the competitiveness of a firm's value proposition; (b) the size of its served and potential market; (c) its business model design and economics; and (d) the social, economic, national security, and political context in which a company exists and competes.

Five of the survey's "top risks" seem to be in the value proposition category:

(1) Sustaining customer loyalty and retention

(2) Existing operations meeting performance expectations, competing against 'born digital' firms.

(3) Inability to use analytics and big data

(4) Privacy/Identity management and information security

(5) Cyber threats

Arguably, the last two might also have a negative impact on the overall size of a potentially served market (e.g., if rising privacy and identity protection concerns caused a whole market to shrink). This might also include "regulatory change and regulatory scrutiny."

Five risks seem to represent threats to business model design and economics:

(6) Regulatory change and regulatory scrutiny.

(7) Rapid speed of disruptive innovations and new technologies.

(8) Resistance to change operations.

(9) Succession challenges and ability to attract and retain top talent.

(10) Organization culture may not sufficiently encourage timely identification and escalation of risk issues.

It is interesting that no social, economic, national security, or political context risks made the top ten, as they are key drivers of many of the risks that did. However, this may well have been due to the survey limiting macro risks to those that affect potential growth opportunities.

Another way to categorize these top ten risks is by the nature of the organizational issues that underlie them, including timely anticipation of emerging threats, accurate assessment of their potential consequences and likely speed of development, and the ability to adapt to them in time.

Fears of inadequate organizational ability to anticipate emerging threats likely underlie "rapid speed of disruptive innovations and new technologies", "cyber threats", "privacy/identity management and information security", "regulatory changes and regulatory scrutiny", and meeting performance expectations when competing against "born digital" firms.

Anxiety about the accuracy and timeliness of assessments of emerging threats is indicated by "organization's culture may not encourage the timely identification and escalation of risk issues", "inability to use analytics and big data", and also concerns with "sustaining customer loyalty and retention."

Worries about a company's ability to adapt in time to emerging threats are clear in "timely escalation of risk issues", "resistance to change operations", and "succession challenges and ability to attract and retain top talent."

Last but not least, it is critical to note that directors' and executives' top ten risks are not risks at all, in the classical sense of discrete events whose historical frequencies can be observed, future probabilities of occurrence measured, and potential negative consequences priced and transferred to others (e.g., via insurance or financial derivative contracts).

Rather, they reflect a combination of uncertainties (about the nature, likelihood, timing, and impact of potential threats), and/or concerns about the potential extent of one's ignorance (e.g., about future regulatory changes or the speed of disruptive innovations and new technologies).

As always, the Protiviti/NC State survey provides a good overview of what risks most worry directors and management teams, and why they are important. But that is only the starting point.

Just as important, and far more difficult, are the challenges of how to better anticipate and more accurately assess the threats they pose, and then adapt to them in time. The good news for boards and management teams is that for the past seven years, this has been the focus of our work at Britten Coyne Partners.

Occurrence versus Emergence

Even after many years of work in the areas of strategy and risk -- from financial services to energy and most recently at Britten Coyne Partners and The Index Investor -- I am still amazed at how most people's notion of "risk" is subconsciously linked to the probability that a discrete event will occur within a defined period of time (usually the next 12 months).

The root causes of this phenomenon are undoubtedly complex, but no doubt include our exposure to statistics courses and insurance concepts, like definable hazards whose potential negative impact can be mitigated (for a price) by transferring them to others.

Our evolutionary past is also to blame. Long before writing and mathematics appeared, we used stories to explain the past and anticipate the future -- and stories are usually focused on people and events with emotional power that cause us to retain them (and the lessons they contain) in our individual and collective memory.

Yet one of the great lessons of history is that it usually isn't the occurrence of events that sinks companies and countries. To be sure, such events are often cited as the proximate cause of failure. But in reality, they mark the end of a much longer process, involving interacting trends, decisions, and randomness, from which new threats emerge, evolve, and sometimes reach a critical threshold that produces the events that cause catastrophic failures.

Put differently, it is the continuous variables in a system that should attract our interest, not just the discrete ones; we should focus on emergence, not just occurrence. In turn, this requires that we adopt new mental models that seek to understand and adapt to uncertainty, not just risk -- that are focused on estimating the remaining time before a critical system threshold is reached, and not just the probability that an event will occur.

In truth, these concepts are actually related, even though we often fail to see them as such. Consider, for example, a simple system model that does not evolve over time, in which there is a 5% probability each year that a given event of magnitude X will occur. What we usually fail to consider is how that probability increases as the time horizon lengthens. Over five years, there is a 23% chance the event will occur; over 10 years, a 40% chance, and over 20, a 64% chance.
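
For readers who want the arithmetic, these figures follow directly from compounding a constant annual probability: P(event occurs within n years) = 1 - (1 - 0.05)^n, which gives 1 - 0.95^5 ≈ 23%, 1 - 0.95^10 ≈ 40%, and 1 - 0.95^20 ≈ 64%.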

In practice, however, the challenge is usually much greater, because the complex socio-technical systems that produce emergent threats and catastrophic events are themselves constantly evolving. Most of the time, we operate in the realm of true uncertainty, not risk, and the mental models we use to make sense of the world too often fail to recognize this critical distinction.

The lesson of this short story is this: If our goal is to avoid catastrophic failures, we must constantly struggle to repress our natural tendency to focus only on the probability of discrete events occurring within the next year, and instead pay much more attention to the continuous interaction of forces within our world that give rise to the emergent threats which pose the greatest danger to the survival and success of both organizations and investment strategies.