The Fall of the US Capitol: All Three Causes of Strategic Risk Failure Were at Work

The fall of the US Capitol building to a mob of rioters on January 6, 2021 left millions stunned and horrified, all asking the same question: "How could this happen?" Unfortunately, the answer is painfully familiar to those of us who study strategic failure and spend our days helping organizations to avoid it. The initial evidence suggests that all three of the most common root causes of such failures were present in this case too.

The first root cause is failure to anticipate potential future strategic risks, and to recognize them as they begin to emerge.

It is critical to distinguish between three levels of anticipation and warning.

At the strategic level, analysts focus on the "what" and "why" of potential threats. At the operational level, they focus on "how" different threats might materialize. At the tactical level, the focus is on the "who, what, and when" that drive effective response.

The fundamental challenge of threat anticipation and warning is that the possibilities increase exponentially as you move from the strategic to the operational and then to the tactical level.

During April and May 2020, protests against continuing COVID lockdowns erupted across the United States. Many of these were instigated and/or backed by groups on the populist right.

Following the death of George Floyd on May 25, 2020 in the course of his arrest, protest demonstrations were held across the country, with many instigated and/or backed by groups from the populist left. A significant number of these degenerated into rioting.

In September, a draft Department of Homeland Security assessment appeared in the press, which named white supremacist groups as the most dangerous terror threat facing the United States.

On October 8, 2020, the FBI announced the arrest of members of a right wing group who were conspiring to kidnap Michigan Governor Gretchen Whitmer due to anger over lockdowns and other alleged abuses of state government power.

In sum, the growing strategic threat of violent protests by both left and right wing groups was clear.

Operationally, it is certain that the Capitol Police and other intelligence and law enforcement agencies had anticipated the threat posed by a terrorist attack that sought to gain control of the building and harm legislators and staff within it. It is equally certain that they had in place, and had frequently rehearsed, plans to repel such a violent attack.

It is also certain that plans were in place to respond to demonstrations with a high assessed potential to become violent. However, it is not clear that there were operational plans to respond to a demonstration that evolved into an attempt to take over the Capitol without the weapons and violence that would characterize a terrorist attack.

There is accumulating evidence that the Capitol Police had tactical (albeit noisy) warning that the demonstrations planned for January 6th had a significant potential to become violent.

For example, ProPublica reported that, "Capitol Rioters Planned for Weeks in Plain Sight. The Police Weren’t Ready. Insurrectionists made no effort to hide their intentions." Like a growing number of similar reports, this one highlights the amount of detailed information that was available online about plans by some groups to attempt to disrupt the certification of the Electoral College results on January 6th. As ProPublica notes, "The warnings of Wednesday’s assault on the Capitol were everywhere — perhaps not entirely specific about the planned time and exact location of an assault on the Capitol, but enough to clue in law enforcement about the potential for civil unrest."

Similarly, the BBC reported that, "In the days (and indeed weeks and months) before the attack, people monitoring online platforms used by extreme pro-Trump supporters and far-right groups had warned of rhetoric encouraging violence at the Capitol, including toward lawmakers, over the election result. Some were even pictured wearing clothing that said "MAGA: CIVIL WAR" printed alongside the 6 January 2021 date."

ABC News provided more specific information, reporting that, "Three days before supporters of President Donald Trump rioted at the Capitol, the Pentagon asked the US Capitol Police if it needed National Guard manpower."

The second root cause is failure to appropriately and accurately assess the nature, timing, and danger posed by an identified threat.

One clear assessment failure was the apparently very low probability given to a scenario in which people protesting the Electoral College result would attempt to take over the Capitol building and harm the Vice President and legislators meeting there on January 6th. As the ABC report noted, Capitol Police were only preparing for a "free speech demonstration," of which there are many at the Capitol during the course of any year.

Considering that the last time the security of the Capitol was violently breached was in 1954 (when Puerto Rican terrorists fired down from the visitors' gallery on members of the House of Representatives), the Capitol Police's threat assessment failure is depressingly common.

As Thomas Schelling famously noted, “There is a tendency in our planning to confuse the unfamiliar with the improbable. The contingency we have not considered seriously looks strange; what looks strange is thought improbable; what is improbable need not be considered seriously.”

Similarly, a 1983 CIA study found that, "In the [intelligence] estimates that failed, there were a number of recurrent common factors which, in retrospect, seem critical to the quality of the analysis. The most distinguishing characteristics of the failed estimates...was that each involved historical discontinuity and, in the early stages, apparently unlikely outcomes. The basic problem in each estimate was to recognize qualitative change and to deal with situations in which trend continuity and precedent were of marginal, if not counterproductive, value.”

The third root cause is a failure to adapt in time to a developing threat.

Another classic cause of strategic failure is a poor grasp of the interacting and usually non-linear time dynamics that are at work. Specifically, organizations typically overestimate the time that remains before a developing threat passes a critical threshold, while also overestimating how quickly they can develop and implement an effective response to it. In short, their remaining "safety margin" is often smaller and shrinking more quickly than they realize.

The available evidence suggests this source of failure almost certainly contributed to the fall of the Capitol on January 6th. For example, the AP reported that, "As the mob descended on the building Wednesday, Justice Department leaders reached out to offer up FBI agents. The [Capitol] police turned them down." While some of this may have been due to interorganizational rivalry and a desire on the part of the Capitol Police to avoid embarrassment, it is almost certain that if Capitol Police commanders had had an accurate awareness of the situation's time dynamics, they would have accepted this offer of aid.

Nor were preparations in place to coordinate and control rapid adaptation if the demonstrations at the Capitol degenerated into rioting, as they did.

For example, Marketwatch reported that, "Army Secretary Ryan McCarthy said that as the rioting was underway, it became clear that the Capitol Police were overrun. But he said there was no contingency planning done in advance for what forces could do in case of a problem at the Capitol because Defense Department help was turned down."

No interagency command structure was established to coordinate tactical intelligence collection and fusion, and direct different agencies' response to the rapidly deteriorating situation at the Capitol.

In sum, while the actual events at the US Capitol on January 6th were unique, the underlying root causes of the strategic failure it represents were depressingly familiar.


Britten Coyne Partners advises clients how to establish methods, processes, structures, and systems that enable them to avoid strategic failures. Through our affiliate, The Strategic Risk Institute, we also provide online and in-person courses leading to a Certificate in Strategic Risk Governance and Management.








COVID-19's Lessons in Strategic Failure for Boards and Management Teams

Strategic failure is not just the opposite of success. It is a different phenomenon that has its roots in our individual and collective inability or unwillingness to anticipate, accurately assess, or adapt in time to emerging threats.

COVID-19 has been no exception, and has important lessons to teach boards and management teams.

Novel threats are often difficult to anticipate because they emerge from complex global systems with interacting cause/effect relationships that are frequently time delayed and non-linear.

Anticipation is made more difficult by the way humans have been wired by evolution. We are naturally overoptimistic and focus on information that confirms our existing beliefs. And when uncertainty increases so too does our tendency to copy the beliefs and behavior of others, rather than trusting our own judgment.

In the case of COVID, however, there is ample evidence that public sector and, to a lesser extent, private sector organizations had anticipated the severe threat posed by a pandemic respiratory virus.

For example, “Event201” was an eerily prescient war game run in October 2019 by the Johns Hopkins Center for Health Security, based on a more transmissible version of the coronavirus that causes SARS. The game report concluded that, “we don’t have dedicated antivirals, or the expectation of producing an emergency vaccine in a timely fashion with coronaviruses.” Similar exercises emphasized the critical importance of time dynamics in the race between a respiratory pandemic’s emergence and a nation’s response to it.

In the case of COVID, warning indicators for pandemics had been established and were being monitored by national security and healthcare agencies.

However, when a new threat first emerges, indicators are often noisy, and assessment is sometimes made more complicated by the appearance of surprises whose meaning is ambiguous.

In addition, evolution has burdened humans with some unhelpful instincts. We are naturally overconfident and have a poor intuitive grasp of non-linear, positive feedback-driven processes. We strive to avoid the cognitive dissonance and social disapproval that often accompany changes in our beliefs. And we are also easily distracted. For example, in the early stages of COVID’s emergence, much of the world was transfixed by Donald Trump’s impeachment trial, the US Democratic primaries, and the run-up to Brexit in the UK.

A further problem is that even accurate assessments of a novel threat may fail to convince leaders to issue warnings and take action. As Henry Kissinger is reputed to have once said to an intelligence analyst, “you warned me, but you didn’t convince me.”

For example, as late as February 26th, Donald Trump was still telling the public that the current number of COVID-19 cases in the U.S. was "going very substantially down, not up," and claiming that the U.S. was "rapidly developing a vaccine" for COVID-19 and "will essentially have a flu shot for this in a fairly quick manner."

More broadly, warnings are often delayed because as organizations grow larger and more bureaucratic, they tend to penalize false alarms more heavily than missed alarms.

In the absence of timely and accurate warnings and advice from government leaders, many investors and companies were poorly prepared to independently assess the threat posed by COVID’s exponentially worsening effects.

For example, at the end of 2019 researchers at North Carolina State University found that, "only about half of surveyed organizations engage in formal risk identification and risk assessment processes [and] less than 20 percent of companies viewed their risk management process as providing important strategic advantage."

Investors’ assessments weren’t much better. As early as January 9th, the Financial Times warned, “China says pneumonia outbreak is linked to coronavirus.” Yet the US equity market didn’t peak until February 19th.

While information spreads almost instantly today, collective understanding of its meaning typically diffuses much more slowly, until it eventually passes a tipping point and the previously dominant narrative gives way to a new one. As was the case with COVID, for emerging threats this takes a surprisingly long time to happen.

However, even when a timely warning is issued, successful adaptation to an emerging threat still depends on a keen understanding of its time dynamics – specifically, what we call the safety margin.

This is the gap between the time remaining before a threat passes a critical strategic threshold and the time still needed to implement an effective response.
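As a minimal illustration of this dynamic (a sketch using hypothetical parameter values, not figures from any actual event), the following Python snippet shows how exponential growth erodes the safety margin:

```python
# Illustrative sketch: how an exponentially growing threat erodes the
# "safety margin" -- the gap between the time remaining before the threat
# crosses a critical threshold and the time still needed to implement an
# effective response. All parameter values below are hypothetical.
import math

def time_to_threshold(current_level, growth_rate, threshold):
    """Days until an exponentially growing threat reaches the threshold."""
    return math.log(threshold / current_level) / growth_rate

response_time_needed = 14.0   # days required to implement a response (assumed)
threshold = 10_000            # critical level of the threat (assumed)
growth_rate = 0.25            # ~25% compound daily growth (assumed)

for level in [10, 100, 1_000]:
    remaining = time_to_threshold(level, growth_rate, threshold)
    margin = remaining - response_time_needed
    print(f"threat level {level:>5}: {remaining:5.1f} days to threshold, "
          f"safety margin {margin:+5.1f} days")
```

Note how the margin turns negative well before the threat reaches its threshold: under exponential growth, each tenfold increase in the threat level cuts the time remaining by a constant amount, while the response time needed stays fixed.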

In the case of COVID, the impact of exponentially increasing infections on the rapidly shrinking safety margin seems to have been either misunderstood or ignored.

The result was a cascading series of adaptation failures, including the Trump administration’s initial refusal to implement the National Security Council’s “pandemic playbook”, then confusion about public sector chains of command, multiple snafus over the approval and deployment of testing and tracing processes, and delays in implementing masking and physical distancing measures.

The end result was hundreds of thousands of excess deaths, a shattered global economy, and greatly diminished public confidence in many institutions that will take years to repair.

The COVID pandemic will go down as one of the greatest strategic failures in history. But it was not inevitable.

As successful responses in countries like Taiwan showed, COVID’s human and economic costs could have been minimized had leaders designed and used effective processes for anticipating, accurately assessing, and adapting in time to emerging strategic threats.

Directors and boards should take these painful lessons to heart, and use them to strengthen their companies’ processes for governing and managing strategic risk.


Tom Coyne and Neil Britten are the cofounders of Britten Coyne Partners and the Strategic Risk Institute LLC, with offices in London and Denver

Three Great New Columns on Avoiding Strategic Failure

The first is "Anomaly Detection: The Art of Noticing the Unexpected", by Dr. Gary Klein.

In our consulting work with clients at Britten Coyne Partners, and in our Strategic Risk Governance and Management courses at the Strategic Risk Institute, we emphasize being alert to the feeling of surprise, and writing down what caused it, as a powerful method for identifying emerging threats and avoiding failure.

Writing down surprises is critical because, as Daniel Kahneman explained in Thinking, Fast and Slow, our mind's automatic System 1 reasoning will quickly attempt to fit the surprise into our existing mental models and beliefs. When this happens, the feeling of surprise and our memory of what triggered it will both disappear – unless we write it down, to enable conscious System 2 to think about what it could mean.

Klein notes that, “An anomaly is a violation of our expectancies that enables us to revise the way we understand a situation…Most deviations and outliers are uninteresting. At the cognitive level, anomalies matter primarily when they have the potential to alter the way we understand a situation. And that type of sensemaking is very different from the flagging of outliers found in statistical methods.”

This brings us to Bent Flyvbjerg's new paper, "The Law of Regression to the Tail."

Flyvbjerg makes a point we have frequently made over the years: While that introductory statistics course firmly implanted the normal (Gaussian or Bell Curve) distribution in our minds, it is actually a very poor description of the distribution of outcomes that are typically produced by the complex adaptive systems that surround us (e.g., product markets, industry dynamics, the economy, politics, war casualties, etc.). Instead of the Bell Curve, complex adaptive systems produce outcomes that are much better described by power laws.

As Flyvbjerg notes, these distributions "have no population mean, or the mean is ill defined due to infinite variance. In other words, mean and/or variance do not exist. Regression to the mean is a meaningless concept for such distributions, whereas what one might call 'regression to the tail' is meaningful and consequential."

What people under the spell of the normal distribution fail to realize is that "We live in the age of regression to the tail. Tail risks are becoming increasingly important and common because of a more interconnected and fragile global system of human interaction… The pandemic and the climate crisis are presently the two most significant manifestations of the law and age of regression to the tail."
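A quick simulation makes the point concrete. This sketch (my illustration, not from Flyvbjerg's paper; the tail exponent and sample sizes are arbitrary choices) draws samples from a Pareto distribution with infinite variance and shows that the sample mean never stabilizes the way it would for a normal distribution:

```python
# Illustrative sketch: for a heavy-tailed (Pareto) distribution with
# infinite variance, the sample mean never settles down -- a single
# extreme draw can dominate the entire average. This is the behavior
# behind "regression to the tail". Parameters are arbitrary.
import random

random.seed(42)

alpha = 1.1  # tail exponent: alpha <= 2 means infinite variance
for n in [1_000, 10_000, 100_000]:
    sample = [random.paretovariate(alpha) for _ in range(n)]
    print(f"n={n:>7}: sample mean {sum(sample) / n:8.2f}, "
          f"largest single draw {max(sample):12.1f}")
```

Running this repeatedly with different seeds shows the sample mean jumping around as occasional enormous draws arrive, rather than converging as the law of large numbers would suggest for a Gaussian.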

The third is Tim Harford's new Financial Times column, "The Power of Negative Thinking".

Harford reminds us that, in a world characterized by power laws and tail risks, “we should all spend more time thinking about the prospect of failure and what we might do about it. It is a useful mental habit but it is neither easy nor enjoyable.”

This is a point also made by Gary Klein, who more than anyone has popularized the use of Pre-Mortem analysis, a method whose efficacy we have seen demonstrated time and again in our work with Britten Coyne Partners’ clients.

Best of all, it is often relatively quick and easy to apply. Assume it is some point in the future, and your initiative or strategy or start-up has failed. Ask the members of your team to anonymously write down their answers to three questions: (1) Why did we fail? (2) What warning signs did we miss? (3) What could we have done differently to avoid failure? Collect the answers, type them up, then collate, print, and distribute them back to the team. I have never seen a resulting discussion that did not produce a much better (and less risky) plan.

As Harford notes, “if we expect that things will go wrong, we design our projects to make learning and adapting part of the process. When we ignore the possibility of failure, when it comes it is likely to be expensive and hard to learn from.”

As we enter a period of unprecedented uncertainty, the likelihood of failure has exponentially increased.

The good news is that there are methods you can learn and apply that will improve your (and your organization’s) ability to anticipate emerging threats, appropriately assess them, and adapt to them in time to avoid strategic failure.


Tom Coyne and Neil Britten co-founded Britten Coyne Partners and the Strategic Risk Institute LLC, which provide consulting and education services that enable clients to successfully meet strategic risk governance and management challenges.


What's Ahead for the Economy? Insights from the KC Fed's Jackson Hole Symposium

At Britten Coyne Partners, the Strategic Risk Institute, The Index Investor, and The Retired Investor, our goal is to help clients avoid strategic failure and the painful losses it brings.

Our core process for accomplishing this goal is shown in the chart below. We stress the importance of anticipating and monitoring emerging threats, and of being alert to surprises that often indicate a new threat you have missed. We also stress the importance of appropriate assessment, early warning, and adapting in time, using multiple approaches to minimize the impact of dangerous threats.


[Chart: How to Avoid Failure]


With this model in mind, I always pay attention to the academic research presentations that are on the agenda for the Federal Reserve Bank of Kansas City’s annual Jackson Hole Symposium (colloquially known as summer camp for the world’s most important central bankers).

This year's conference opened today, and the two papers featured this morning were on issues we have frequently addressed at BCP, SRI, Index, and Retired.

The first paper was "What Happened to U.S. Business Dynamism?" by Ufuk Akcigit and Sina Ates. The authors note, "Market economies are characterized by the so-called 'creative destruction' where unproductive incumbents are pushed out of the market by new entrants or other more productive incumbents or both...

“A byproduct of this up-or-out process is the creation of higher-paying jobs and reallocation of workers from less to more productive firms. [However], the U.S. economy has been losing this business dynamism since the 1980s and, even more strikingly, since the 2000s. This shift manifests itself in a number of empirical regularities", which Akcigit reviewed at this morning's session:

1. Market concentration has risen.

2. Average markups have increased.

3. Average profits have increased.

4. The labor share of GDP has gone down.

5. Market concentration and labor share are negatively associated.

6. The labor productivity gap between frontier and laggard firms has widened.

7. Firm entry rates and the share of young firms in economic activity have declined.

8. Job reallocation has slowed.

9. The dispersion of firm growth has decreased.

10. Aggregate productivity growth has fallen, except for a brief pickup in the late 1990s.

11. A secular decline in real interest rates has occurred.

Akcigit and Ates' observations are also consistent with research from McKinsey, which found that, "the top 10 percent of companies now capture 80 percent of positive economic profit…[Moreover], after adjusting for inflation, today's superstar companies have 1.6 times more economic profit, on average, than the superstar companies of 20 years ago" ("What Every CEO Needs to Know About Superstar Companies").

Of the hypotheses that Akcigit and Ates tested to explain these trends, they found the evidence and their modeling best supported the hypothesis that, “reduction in knowledge diffusion [across firms] between 1980 and 2010 is the most powerful force in driving all of the observed trends simultaneously.”

Discussion at this morning's symposium focused on the plausible obstacles to faster diffusion of advanced knowledge across firms. These included more patenting by larger firms, larger firms' acquisition of patents from smaller firms, aggressive patent litigation by large firms, large firms luring away employees with the most patents from smaller firms, and larger firms' heavy investment in lobbying and supporting regulatory changes that strengthen their advantage.

I was surprised, however, that another very likely obstacle to faster diffusion wasn't mentioned this morning. In "Digital Abundance and Scarce Genius", Benzell and Brynjolfsson found that the shortage of talented employees is the most important constraint on the faster deployment and diffusion of advanced technologies across the economy. And Korn Ferry found, in "The Global Talent Crunch", that "the United States faces one of the most alarming talent crunches of the twenty countries in our study".

So what is to be done, given the authors’ observation that the COVID-19 pandemic will likely make these conditions worse?

Looking at possible policy changes that could help to avert this outcome, this morning’s discussion focused on the need for stronger anti-trust enforcement and other actions that would intensify the level of competition in the US economy. To these I would add that recovering students’ COVID-19 learning losses and substantially strengthening the US education system are also critical (and will require painful structural changes, not just further infusions of cash).

The second paper presented this morning was "Scarring Body and Mind: The Long-Term Belief Scarring Effects of COVID-19", by Kozlowski, Veldkamp, and Venkateswaran.

They find that, “the largest economic cost of the COVID-19 pandemic could arise from changes in behavior long after the immediate health crisis is resolved. A potential source of such a long-lived change is scarring of beliefs, a persistent change in the perceived probability of an extreme, negative shock in the future…

“The long-run costs for the U.S. economy from this [belief] channel are many times higher than the estimates of the short-run losses in output. This suggests that, even if a vaccine cures everyone in a year, the Covid-19 crisis will leave its mark on the US economy for many years to come.”

This is consistent with Robert Barro's earlier research on the impact of "disaster risk" on investors' decisions and required returns (see his 2006 paper, "Rare Disasters and Asset Markets in the 20th Century").

It is also consistent with the findings in another recent paper, "The Long Run Consequences of Pandemics", by Jorda et al. from the Federal Reserve Bank of San Francisco.

They analyzed the medium to long-term effects of pandemics, and how they differ from other economic disasters, by studying major pandemics using the rates of return on assets stretching back to the 14th century.

They concluded that, "significant macroeconomic after-effects of pandemics persist for decades, with real rates of return substantially depressed, in stark contrast to what happens after wars", and observe that "this is consistent with the neoclassical growth model: capital is destroyed in wars, but not in pandemics; pandemics instead may induce relative labor scarcity and/or a shift to greater precautionary savings" by altering consumers' beliefs.

This morning’s discussion of the paper by Kozlowski et al focused on the critical question of why belief scarring seemed to have had a much stronger and longer-lasting impact after the Great Depression than after the 9/11 terrorist attacks.

The consensus seemed to be that the range of very visible policy responses taken to reduce the risk of further terrorist attacks after 9/11 reduced belief scarring by much more than the policy responses to the Great Depression did.

In sum, along with actions to restore business dynamism and strengthen competition, public perceptions of the efficacy of various policy responses to the COVID-19 pandemic will very likely be critical to minimizing its long-term negative impact on economic activity. Both of these are key indicators to monitor in the months ahead.



Britten Coyne Partners advises clients on strategic risk governance and management issues. The Strategic Risk Institute provides online and in-person courses leading to a Certificate in Strategic Risk Governance and Management. Since 1997, The Index Investor has published global macro research and asset allocation insights, with a particular focus on avoiding large portfolio losses. The Retired Investor has the same focus, customized for the unique needs of investors in the decumulation phase of their financial life.



How to Avoid Intelligence Analysis Errors

This week, Michael Morell had former senior CIA executive Martin Petersen on his Intelligence Matters podcast, which is well worth listening to.

Petersen succinctly summarized the lessons he'd learned over a career (and taught to new analysts) about the root causes of many intelligence analysis errors. They apply equally well to many situations outside the world of intelligence, where analysts and decisions makers must make sense of highly uncertain situations.

Petersen highlighted three classic types of error, and the questions to ask to avoid them.

(1) You don't have a good understanding of the organization you're trying to analyze.

  • How do you get to the top in this organization?
  • What is the organization's preferred method of exercising power and making decisions?
  • What are acceptable and unacceptable uses of power in this organization?

(2) You don't have a good understanding of the individuals making decisions.

  • How do they assess the current situation?
  • How do they see their options?
  • What is their tolerance for risk, under the current circumstances?
  • What do they believe about your capabilities, intentions, and will?
  • What is their definition of an acceptable outcome?

(3) You don't understand your own analysis.

  • Rather than asking someone how confident they are in their analysis, ask them where their analysis is most vulnerable to error. This is the same approach as the one we use, which is identifying the most uncertain assumptions in an analysis, and the implications of different outcomes for them. The underlying issues are also surfaced by use of pre-mortems, which we also recommend.
  • What are you not seeing that you should be seeing if your hypothesis/theory/line of analysis is correct? As Sherlock Holmes (and Thomas Bayes) both teach us, sometimes the dog that doesn't bark provides the most important evidence.
  • Petersen emphasized that if you ever catch yourself saying, 'It makes no sense for them to do that', it is a clear warning sign that you don't understand either the organization or the decision makers that are the target of your analysis.

All important lessons to keep in mind as you try to make sense of the complex, highly uncertain, and fast changing situations that abound in the world we face today.
