February 2020
Solving the Independent Director Problem Raised by Warren Buffett
28/Feb/20 17:19
Writing on the subject of corporate governance in his letter this year to the shareholders of Berkshire Hathaway, Warren Buffett noted that, "overall the deck is stacked in favor of a deal that's coveted by the CEO and his/her obliging staff. It would be an interesting exercise for a company to hire two expert acquisition advisors, one pro and one con, to deliver his or her views on a proposed deal to the board — with the winning advisor to receive, say, ten times a token sum paid to the loser. [But] don't hold your breath waiting for this reform: The current system, whatever its shortcomings for shareholders, works magnificently for CEOs and the many advisors and other professionals who feast on deals."
Elsewhere in his letter, Buffett notes the potential conflict of interest facing "independent" directors who are paid a substantial amount for their board service, but are also expected to play a crucial role in challenging the CEO and management team when necessary.
While Neil and I have a great deal of sympathy for Buffett's views on this, in our roles over the years as corporate officers, board directors, and consultants, we have repeatedly seen that many of these situations are more complex than they first appear.
For example, virtually all of the independent directors we know are well aware that their fiduciary duty of care requires them to occasionally challenge management's thinking — and we have seen plenty of them carry out this duty over the years. We've also sometimes seen boards hire outside advisors who are independent of management.
But most of these situations were awkward for all the parties involved. Why? Because they were exceptions to the boards' normal collegial and often uncritical processes. And therein lies both the problem and the solution.
In our work with clients on strategic risk governance and management, we start with a review of important aspects of our nature as human beings and how we function in groups.
As individuals, we tend to be overoptimistic and overconfident, and to pay more attention (and give greater weight) to new information that supports our current view. When uncertainty increases, we also naturally conform more closely to our group's or leader's view, and unconsciously shift toward learning from and copying the beliefs and behavior of other members of our group. Operating in groups also triggers our instinctive competition for status. One example: while operating individually (where nobody will see the results of our decisions) we tend to be risk averse; once we are in a group and others will see those outcomes, our mindset often switches from avoiding losses to maximizing gains. This often leads to an increase in collective (over)confidence, and a greater likelihood of rejecting dissenting advice from outsiders.
In our distant evolutionary past, all these behaviors were adaptive and enabled us to survive. In the far more complex situations we often face today, many of them no longer are.
So what is a board to do? Research has shown that simply making people aware of their biases has little impact on an individual or group's ability to overcome them. Given this, the solution we recommend is to establish regular processes — not one-off interventions — that are designed to counteract our evolutionary shortcomings.
Here's an example. Strategic plans and major transaction proposals should always be subjected to a "pre-mortem" risk review. Because human beings give far more detailed answers when asked to explain the past than when asked to predict the future, board members are told to assume it is three to five years in the future and the strategy or deal (or even the company) has failed. We then ask each of them to anonymously write down their answers to three questions: (1) Why did this happen? (2) What warning indicators were missed? (3) What could have been done differently to avoid failure?
We type up the answers (e.g., while the board is having lunch), and then hand them out for individuals to read. Next, we discuss each failure scenario, without identifying who wrote it. These discussions are always rich and productive. In the context of Buffett's concerns, the key point is that by making pre-mortems and other techniques we use a normal part of the board's process, potential conflicts between independent directors and management are defused before they can occur.
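For boards that want to operationalize this, here is a minimal sketch, in Python, of how the anonymous written responses might be collected and shuffled before discussion. The three questions come from the process described above; the class and function names are our own hypothetical choices, not part of any existing tool.

```python
import random
from dataclasses import dataclass, field

# The three pre-mortem questions described in the post.
QUESTIONS = [
    "Why did this happen?",
    "What warning indicators were missed?",
    "What could have been done differently to avoid failure?",
]

@dataclass
class PreMortemSession:
    """Collects anonymous answers and returns them in a random order."""
    responses: list = field(default_factory=list)

    def submit(self, answers: dict) -> None:
        # Store only the answers; deliberately keep no record of the author.
        self.responses.append({q: answers.get(q, "") for q in QUESTIONS})

    def shuffled_scenarios(self) -> list:
        # Shuffle so the discussion order reveals nothing about authorship.
        scenarios = list(self.responses)
        random.shuffle(scenarios)
        return scenarios

# Example usage with one hypothetical response:
session = PreMortemSession()
session.submit({
    QUESTIONS[0]: "We overpaid and the projected synergies never materialized.",
    QUESTIONS[1]: "Integration milestones slipped badly in the first year.",
    QUESTIONS[2]: "Hire an independent 'con' advisor before the board vote.",
})
for scenario in session.shuffled_scenarios():
    print(scenario)
```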
In sum, the best way to respond to the concerns that Warren Buffett raised in this year's Berkshire Hathaway shareholder letter isn't by trying to change human nature. Rather, it is by changing board processes and structures (e.g., the use of outside advisors) to offset the natural instincts and weaknesses that frequently get organizations into trouble.
Fundamental Sources of Forecast Error and Uncertainty
25/Feb/20 16:50
When seeking to improve forecast accuracy, it is critical to understand the major sources of forecast error. Unfortunately, this is not something that is typically taught in school. And learning it the hard way can be very expensive. Hence this note.
Broadly speaking, there are four sources of forecast uncertainty and error:
1. An incorrect underlying theory or theories;
2. Poor modeling of a theory to apply it to a problem;
3. Wrong parameter values for variables in a model;
4. Calculation mistakes.
Let’s take a closer look at each of these.
Theories
When we make a forecast we are usually basing it on a theory. The problem here is twofold.
First, we often fail to consciously acknowledge the theory that underlies our forecast.
Second, even when we do this, we usually fail to reflect on the limitations of that theory when it comes to accurately forecasting real world results. Here’s a case in point: How many economic forecasts have been based on rational expectations and/or efficient market theories, despite their demonstrated weaknesses as descriptions of reality? Or, to cite an even more painful example, in the years before the 2008 Global Financial Crisis, central bank policy was guided by equilibrium theories that failed to provide early warning of the impending disaster.
The forecasts we make are actually conditional on the accuracy of the theories that underlie them. In the case of high impact outcomes that we believe to have a low likelihood of occurring, failing to take into account the probability of the underlying theory’s accuracy can lead to substantial underestimates of the chance a disaster may occur (see “Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes” by Ord et al.).
There are three other situations where the role of theory is usually obscured.
The first is forecasts based on intuition. Research has found that accurate intuition is developed through the combination of (a) repeated experience over time, (b) in a system whose structure and dynamics don’t change, (c) with repeated feedback on the accuracy of forecasts, and (d) explicit reflection on this feedback that gradually sharpens intuition.
When we make a forecast based on intuition, we are (usually implicitly) making the assumption that this theory applies to the situation at hand. Yet in too many cases, it does not (e.g., because the underlying system is continually evolving). In these cases, our “intuition” very likely rests on a small number of cases that are easily recalled either because they are recent or still vivid in our memory.
The second is a forecast based on analogies. The implicit theory here is that those analogies have enough in common with the situation at hand to make them a valid basis for a forecast. In too many cases, this is only loosely true, and the resulting forecast has a higher degree of uncertainty than we acknowledge.
The third is a forecast based on the application of machine learning algorithms to a large set of data. It is often said that these forecasts are “theory free” because their predictions are based on the application of complex relationships that were found in the analysis of the training data set.
Yet theories are still very much present, including, for example, those that underlie the various approaches to machine learning and those that guide interpretation of the extremely complex process that produced the forecast.
Another theoretical concern with machine learning-based forecasts is the often implicit assumption that either the system that generated the data used to train the ML algorithm will remain stable in the future (which is not the case for complex adaptive social or socio-technical systems like the economy, society, politics, and financial markets), or that it will be possible to continually update the training data and machine learning algorithm to match the speed at which the system is changing.
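As a simple illustration of this stationarity assumption, consider the hedged sketch below (our own toy example, not drawn from any cited source): a linear model is fit to data from one regime and then scored on data from a drifted regime, and its error grows sharply.

```python
import numpy as np

rng = np.random.default_rng(42)

# "Training" period: the system behaves as y = 2x + noise.
x_train = rng.uniform(0, 10, 500)
y_train = 2.0 * x_train + rng.normal(0, 1, 500)

# Fit a simple linear model (ordinary least squares via polyfit).
slope, intercept = np.polyfit(x_train, y_train, 1)

# "Future" period: the underlying relationship has drifted to y = 3x - 5.
x_future = rng.uniform(0, 10, 500)
y_future = 3.0 * x_future - 5.0 + rng.normal(0, 1, 500)

def rmse(y, y_hat):
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

print("In-sample RMSE:  ", rmse(y_train, slope * x_train + intercept))
print("Post-drift RMSE: ", rmse(y_future, slope * x_future + intercept))
# The second error is several times larger: the model's accuracy silently
# degrades once the data-generating system stops resembling the training data.
```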
Models
While theories are generalized approaches to explaining and predicting observed effects, models (i.e., a specification of input and output variables and the relationships between them) apply these theories to specific real world forecasting problems.
This creates multiple sources of uncertainty. The first is the decision about which theory or theories to include in a model, as more than one may apply. RAND’s Robert Lempert, a leading expert in this area, advocates the construction of “ensemble” models that combine the results of applying multiple theories. Most national weather services do the same thing to guide their forecasts. However, ensemble modeling is still far from mainstream.
A second source of uncertainty is the extent to which the implications of a theory are fully captured in a model. A recent example was the BBC’s 24 February 2020 story, “Australia Fires Were Worse Than Any Prediction”, which noted that the fires surpassed anything existing fire models had simulated.
A third source of modeling uncertainty has been extensively researched by Dr. Francois Hemez, a scientist at the Los Alamos and Lawrence Livermore National Laboratories in the United States whose focus is the simulation of nuclear weapons detonations.
He has concluded that all models of complex phenomena face an inescapable tradeoff between their fidelity to historical data, robustness to lack of knowledge, and consistency of predictions.
In evolving systems, models which closely reproduce historical effects often do a poor job of predicting the future. In other words, the better a model reproduces the past, the less accurately it will predict the future, even if its forecasts are relatively consistent.
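A familiar, stylized way to see the fidelity side of this tradeoff is the toy example below (ours, not Hemez's): a high-order polynomial reproduces noisy historical data almost perfectly but extrapolates far worse than a simple model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Historical data: a simple underlying trend plus noise.
x_hist = np.linspace(0, 10, 30)
y_hist = 0.5 * x_hist + rng.normal(0, 1.0, x_hist.size)

# Future data from the same underlying trend.
x_fut = np.linspace(10, 15, 20)
y_fut = 0.5 * x_fut + rng.normal(0, 1.0, x_fut.size)

def rmse(y, y_hat):
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

for degree in (1, 7):
    coeffs = np.polyfit(x_hist, y_hist, degree)
    hist_err = rmse(y_hist, np.polyval(coeffs, x_hist))
    fut_err = rmse(y_fut, np.polyval(coeffs, x_fut))
    print(f"degree {degree}: historical RMSE {hist_err:.2f}, future RMSE {fut_err:.2f}")
# The degree-7 model "explains" the past better, but its out-of-sample error
# is far larger: higher fidelity to history, lower predictive accuracy.
```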
Hemez also notes that, “while unavoidable, modeling assumptions provide us with a false sense of confidence because they tend to hide our lack-of-knowledge, and the effect that this ignorance may have on predictions. The important question then becomes: ‘how vulnerable to this ignorance are our predictions?’”
“This is the reason why ‘predictability’ should not just be about accuracy, or the ability of predictions to reproduce [historical outcomes]. It is equally important that predictions be robust to the lack-of-knowledge embodied in our assumptions” (see Hemez in “Challenges in Computational Social Modeling and Simulation for National Security Decision Making” by McNamara et al).
However, making a model more robust to our lack of knowledge (e.g., by using the ensemble approach) will often reduce the consistency of its predictions about the future.
The good news is that forecast accuracy can often be increased by combining predictions made using different models and assumptions, either by simply averaging them or via a more sophisticated method (e.g., shrinkage or extremizing).
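As a hedged sketch of what such combination can look like (illustrative probabilities and an arbitrary extremizing exponent, not recommended settings), the snippet below averages several forecasts of the same event and then applies a simple log-odds extremizing transform:

```python
import numpy as np

# Hypothetical probability forecasts for the same event, made with
# different models, methodologies, or information sources.
forecasts = np.array([0.55, 0.60, 0.70, 0.65])

# 1. Simple average.
simple_mean = forecasts.mean()

# 2. Extremizing: average in log-odds space, then push away from 0.5
#    by an exponent a > 1 (a = 1.5 here is purely illustrative).
def extremize(probs, a=1.5):
    log_odds = np.log(probs / (1 - probs))
    pooled = log_odds.mean() * a
    return 1 / (1 + np.exp(-pooled))

print(f"Simple average:       {simple_mean:.3f}")
print(f"Extremized aggregate: {extremize(forecasts):.3f}")
# Extremizing compensates for the fact that averaging several partially
# informed forecasts tends to pull the aggregate too close to 50%.
```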
Parameter Values
The values we place on model variables are the source of uncertainty with which people are most familiar.
Accordingly, many approaches are used to address it, including scenario and sensitivity analysis (e.g., best, worst, and most likely cases), Monte Carlo methods (i.e., specifying input variables and results as distributions of possible outcomes rather than point estimates), and systematic Bayesian updating of estimated values as new information becomes available.
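For the last of these, here is a minimal sketch, with hypothetical numbers, of a Bayesian update of an uncertain parameter using the standard conjugate Beta-Binomial model:

```python
# Conjugate Beta-Binomial update of an uncertain probability.
# Prior belief: roughly 30%, held loosely, encoded as Beta(3, 7).
alpha, beta = 3.0, 7.0
prior_mean = alpha / (alpha + beta)

# New information arrives: 4 "successes" in 10 new observations
# (hypothetical data, e.g., periods in which a key assumption held).
successes, trials = 4, 10
alpha += successes
beta += trials - successes

posterior_mean = alpha / (alpha + beta)
print(f"Prior mean estimate:     {prior_mean:.2f}")      # 0.30
print(f"Posterior mean estimate: {posterior_mean:.2f}")  # 0.35
# The estimate moves toward the observed frequency (0.40), but only part of
# the way, because the prior still carries weight. This is a disciplined way
# to update parameter values as evidence accumulates.
```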
However, even when these methods are used, important sources of uncertainty can remain. For example, in Monte Carlo modeling there is often uncertainty about the correct form of the distributions to use for different input variables. Typical defaults include the uniform distribution (where all values are equally likely), the normal (bell curve) distribution, and a triangular distribution based on the most likely value and the values believed to lie at the 10th and 90th percentiles. Unfortunately, when variable values are produced by a complex adaptive system they often follow a power law (Pareto) distribution, and using the traditional defaults can substantially understate the probability of extreme outcomes.
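The hedged sketch below (purely illustrative inputs and exposure) runs the same simple Monte Carlo loss calculation with a thin-tailed normal driver and with a heavy-tailed Pareto driver of similar central tendency, to show how much the distributional assumption changes the tail:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Illustrative "loss driver" modeled two ways with similar central tendency:
normal_driver = rng.normal(loc=3.0, scale=1.0, size=n)   # thin-tailed
pareto_driver = (rng.pareto(a=2.5, size=n) + 1) * 2.0    # heavy-tailed (mean ~3.3)

fixed_exposure = 10.0  # hypothetical multiplier turning the driver into a loss
loss_normal = fixed_exposure * normal_driver
loss_pareto = fixed_exposure * pareto_driver

for name, losses in [("normal input", loss_normal), ("Pareto input", loss_pareto)]:
    print(f"{name}: mean {losses.mean():8.1f}   "
          f"99th pct {np.percentile(losses, 99):8.1f}   max {losses.max():10.1f}")
# The means are broadly comparable, but the Pareto case produces a far heavier
# tail, which is exactly the risk that a default normal assumption hides.
```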
Another common source of uncertainty is the relationship between different variables. In many models, the default decision is to assume variables are independent, which is often not true.
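To see why the independence default matters, here is a brief sketch (again with made-up parameters) comparing the tail of total losses when two lognormal loss drivers are sampled independently versus with a strong positive correlation:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
mean = [0.0, 0.0]
cov_indep = [[1.0, 0.0], [0.0, 1.0]]   # drivers assumed independent
cov_corr = [[1.0, 0.8], [0.8, 1.0]]    # drivers with correlation 0.8

for label, cov in [("independent", cov_indep), ("correlated", cov_corr)]:
    drivers = rng.multivariate_normal(mean, cov, size=n)
    total_loss = np.exp(drivers).sum(axis=1)   # two lognormal loss components
    print(f"{label}: 99th percentile of total loss = "
          f"{np.percentile(total_loss, 99):.2f}")
# With correlated drivers, bad outcomes arrive together, so the tail of the
# combined loss is noticeably fatter than the independence assumption implies.
```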
A final source of uncertainty is that the values of some model input variables may change only after varying time lags under different conditions, and these lags are rarely taken into account.
Calculations
Researchers have found that calculation errors are distressingly common, especially in spreadsheet models (e.g., “Revisiting the Panko-Halverson Taxonomy of Spreadsheet Errors” by Raymond Panko, “Comprehensive Review for Common Types of Errors Using Spreadsheets” by Ali Aburas, and “What We Don’t Know About Spreadsheet Errors Today: The Facts, Why We Don’t Believe Them, and What We Need to Do” by Raymond Panko).
While large enterprises that create and employ complex models increasingly have independent model validation and verification (V&V) groups, and while new automated error-checking technologies are appearing (e.g., the ExcelInt add-in), their use continues to be the exception, not the rule.
As a result, a large number of model calculation errors probably go undetected, at least until they produce a catastrophic result (usually a large financial loss).
Conclusion
People frequently make forecasts that assign probabilities to one or more possible future outcomes. In some cases, these probabilities are based on historical frequencies – like the likelihood of being in a car accident.
But in far more cases, forecasts reflect our subjective belief about the likelihood of the outcome in question, e.g., “I believe the probability of X occurring before the end of 2030 is 25%.”
What few people realize is that these forecasts are actually conditional probabilities that contain multiple sources of cumulative uncertainty.
For example, the claim that “the probability of X occurring before the end of 2030 is 25%” is really conditional upon (1) the probability that the theory underlying my estimate is valid; (2) the probability that my model appropriately applies this theory to the forecasting question at hand; (3) the probability that my estimated values for the variables in my model are accurate; and (4) the probability that I have not made any calculation errors.
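A minimal worked example, with illustrative conditioning probabilities of our own choosing, shows how quickly these conditions compound and, following Ord et al., how the possibility that the whole argument is unsound widens the honest range:

```python
# Hypothetical subjective forecast: P(X before 2030 | everything below holds) = 0.25
p_forecast_given_sound = 0.25

# Illustrative probabilities that each conditioning factor actually holds:
p_theory_valid   = 0.90
p_model_faithful = 0.90
p_params_ok      = 0.85
p_no_calc_errors = 0.95

p_argument_sound = (p_theory_valid * p_model_faithful
                    * p_params_ok * p_no_calc_errors)   # about 0.65

# If the argument is unsound we know much less; suppose (purely for
# illustration) we then fall back to a vague 0.10 to 0.50 range for P(X).
p_x_low  = p_forecast_given_sound * p_argument_sound + 0.10 * (1 - p_argument_sound)
p_x_high = p_forecast_given_sound * p_argument_sound + 0.50 * (1 - p_argument_sound)

print(f"P(argument sound)  = {p_argument_sound:.2f}")
print(f"Unconditional P(X) lies roughly between {p_x_low:.2f} and {p_x_high:.2f}")
# The headline "25%" silently assumed a chain of conditions that together hold
# only about two-thirds of the time, so the honest range is much wider.
```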
Given what we know about these four conditioning factors, it is clear that many of the subjective forecasts we encounter are a good deal more uncertain than we usually realize.
When there is no opportunity to delve more deeply into the potential sources of error in a given probability forecast, the best way to improve predictive accuracy is to combine multiple forecasts made using different methodologies and/or alternative sources of information.
Critical Uncertainties About the Wuhan Coronavirus
20/Feb/20 11:48
Since the end of December, we have seen an expanding outbreak of a novel coronavirus in China, spreading from its epicenter in Wuhan.
There are two critical uncertainties to resolve with more evidence: (1) the transmissibility of the Wuhan strain, which so far appears to be high, and (2) the pathogenicity (CFR), which at this point still appears to be relatively low. And when you hear an estimated CFR, always remember to check the denominator on which it is based (lab confirmed or just symptomatic cases).
When it comes to contagious viral diseases, there is usually a tradeoff between their transmissibility (how easily they spread) and their pathogenicity (how many people who become infected die). Viruses that quickly kill their infected hosts effectively limit their own spread.
The proportion of infected people who die is measured by the case fatality rate (CFR). However, this is a noisy estimate, because the denominator can be based on lab-confirmed cases (which raises the estimated CFR) or all symptomatic cases (which lowers it). Early estimates (based on very noisy reporting) have put the preliminary CFR for the Wuhan strain at around 2%. However, this will likely change as more evidence becomes available.
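A quick back-of-the-envelope calculation with hypothetical numbers (not actual outbreak data) shows how sensitive the CFR is to that denominator choice:

```python
# Purely illustrative numbers, not actual Wuhan outbreak data.
deaths = 100
lab_confirmed_cases = 5_000   # narrower denominator
symptomatic_cases = 20_000    # broader denominator (includes unconfirmed cases)

cfr_confirmed = deaths / lab_confirmed_cases     # 2.0%
cfr_symptomatic = deaths / symptomatic_cases     # 0.5%

print(f"CFR (lab-confirmed denominator): {cfr_confirmed:.1%}")
print(f"CFR (symptomatic denominator):   {cfr_symptomatic:.1%}")
# The same number of deaths yields a four-fold difference in the reported
# rate, which is why the denominator always needs to be checked.
```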
To put Wuhan in perspective, the CFRs for Ebola and highly pathogenic H5N1 influenza are >60%. The 1918 pandemic flu was estimated at 10% to 20% (this strain was also relatively transmissible which is why it killed so many). The 2009 H1N1 "swine" flu CFR was estimated at 5% to 9%. By comparison, typical seasonal influenza has a CFR of one tenth of one percent or less (0.1%).
For other coronaviruses, SARS' CFR was estimated to be around 10%, while MERS' was 35%.
Transmissibility is measured using the “Basic Reproduction Number” (known as “R0” or “R-naught”), which is the average number of people who will become infected by contact with one contagious person. If R0 is less than one (e.g., because of a high CFR), an epidemic will quickly “burn itself out”. In contrast, when R0 is greater than one, a virus will spread exponentially.
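As a hedged sketch of the arithmetic behind this (purely illustrative, ignoring immunity, interventions, and the depletion of susceptible contacts), the snippet below compounds cases generation by generation for a sub-critical value and for the two ends of the WHO range cited just below:

```python
# Purely illustrative: naive generation-by-generation spread with no immunity,
# interventions, or depletion of susceptible contacts.
def cases_after(generations: int, r0: float, seed_cases: int = 1) -> int:
    cases = float(seed_cases)
    for _ in range(generations):
        cases *= r0
    return round(cases)

# 0.9 is a sub-critical value for contrast; 1.4 and 2.5 bracket the WHO range.
for r0 in (0.9, 1.4, 2.5):
    print(f"R0 = {r0}: new cases in generation 10 is roughly {cases_after(10, r0):,}")
# With R0 below one the outbreak fizzles, while R0 = 2.5 implies thousands of
# cases within ten generations. That is why growth slower than the R0-implied
# exponential (see the analysis cited at the end of this post) suggests that
# interventions are having an effect.
```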
Initial estimates of R0 for the Wuhan novel coronavirus are very noisy at this point. The World Health Organization has published a range of 1.4 to 2.5.
For comparison, here are some historic estimated Basic Reproduction Numbers:
- 1918 Spanish Flu = 2.3 to 3.4 (95% confidence interval)
- SARS Coronavirus = 1.9
- 1968 Flu = 1.80
- 2009 Swine Flu = 1.46
- Seasonal Influenza = 1.28
- MERS Coronavirus = <1.0
- Highly Pathogenic H5N1 Influenza = 0.90
- Ebola = 0.70
An article in the Lancet (“Nowcasting and Forecasting the Potential Domestic and International Spread of the 2019-nCoV Outbreak Originating in Wuhan, China”) found that, “Independent self-sustaining outbreaks in major cities globally could become inevitable because of substantial exportation of presymptomatic cases & the absence of large-scale public health interventions."
If it is supported by subsequent research, this initial finding will almost certainly lead to the imposition of more travel bans and quarantine measures in an attempt to limit transmission of the virus.
To end this post with a bit of good news, a very recent analysis has concluded that, because the number of new coronavirus cases in China is growing more slowly than the exponential rate implied by its Basic Reproduction Number, quarantine, travel bans, and "self-isolation" measures appear to be having a positive impact (https://arxiv.org/pdf/2002.07572.pdf).
New NACD Survey Highlights Strategy and Risk Challenges Facing Directors and Boards
07/Feb/20 15:16
As always, the recently released Public Company Governance Survey by the (US) National Association of Corporate Directors is a thought-provoking read.
Given Britten Coyne Partners’ focus on helping clients better anticipate, more accurately assess, and adapt in time to emerging threats (whether they are called strategic, disruptive, or existential risks), a few findings stood out.
(1) “Sixty-eight percent of directors report that their company can no longer count on extending its historical strategy over the next five years.” One hopes that this conclusion resulted from constructive board engagement with management in previous years that challenged the continuing validity of the assumptions underlying the current strategy, and thus triggered a deeper search for possible new threats and opportunities, which has subsequently driven the design of new strategies and risk management processes.
(2) “Directors identified growing business model disruptions (52%) and a slowing global economy (51%) as the top trends most likely to impact their organizations over the next 12 months.” In our experience, it was possible to anticipate and assess these emerging threats long before now, and thus to time the exercise of options to adapt to the dangers they pose. Companies that are only now starting this process will likely struggle, because the time before a developing threat reaches a critical threshold may be shorter than the time required to adapt to it effectively. In our work with clients, we call this gap the "Safety Margin."
(3) “Boards hear about risk largely from the CEO and CFO.” However, “just 56% of directors believe the risk information they receive allows their board to draw the right conclusions…Information asymmetry – the gap between what the board knows and what management knows – remains a challenge for boards.” For this very reason — and the risk blindness information asymmetry can cause — we help clients to put in place strategic risk management and governance processes that explicitly incorporate a variety of different internal and external perspectives to improve forecasting accuracy.
(4) The NACD notes that, “backward-looking risk information or information that is focused on well-known risks must be balanced with forward-looking risk reports that allow directors to peek around corners to understand emerging threats.” This aligns not only with our own experience, but also with findings from a recent EY survey of directors and CEOs, which found that “current Enterprise Risk Management processes are considered effective in assessing traditional risks, but not as effective in assessing and managing emerging and atypical risks. To address this issue, leading boards are integrating external perspectives and independent data into their [risk management processes] to expand their scope, promote fresh thinking, and challenge internal biases.”
(5) It was worrying to see in the NACD survey that over the next year, “61% of directors want to improve their board’s core oversight over strategy development, and 63% want to improve it over strategy execution.” In light of the consensus at the recently concluded World Economic Forum that we are entering a period of much higher uncertainty than has been the case over the past decade, many directors must wish that the enhanced oversight processes they desire were already in place.
(6) Finally, the NACD report also noted that, “board leaders can drive strategic board renewal by ensuring that the skills of directors in the boardroom correspond to the evolving needs of the organization.” For this reason, board members reported that director education was one of the top areas where additional time needed to be spent. In addition to consulting, Britten Coyne provides clients with a range of education offerings to help them substantially improve their capacity for anticipating, assessing, and adapting to emerging threats.
In sum, our overall conclusion after reading the NACD survey is that most directors have sensed that disruptive changes in technology, the environment, the economy, national security, society, politics, and financial markets are interacting to create more complexity, more uncertainty, and a far different set of challenges than their boards have faced in the past decade.
As the pace of these changes continues to accelerate, boards must strengthen directors' strategic risk management skills while simultaneously improving their organizations' governance and management processes if they are to successfully anticipate, assess, and adapt to the new threats they will face in the 2020s.