However Awkward, Boards Need to Confront Overconfident CEOs

In “The Board’s Role in Strategy in a Changing Environment”, Reeves et al. from BCG’s Henderson Institute note that in a complex and fast-changing world, “corporate strategy is becoming both more important and increasingly challenging for today’s leaders.” They also note that both investors and CEOs are saying that boards need to spend more time on strategy.

We fully agree with these points. However, our research into the relationship between board chairs and CEOs surfaced an even more important issue, one that is highlighted in two recent research papers.

In “CEO Overconfidence and the Probability of Corporate Failure”, Leng et al. find that, unsurprisingly, greater CEO overconfidence raises the probability of firm bankruptcy. More interesting was their finding that large boards did more than small boards to reduce the bankruptcy risk associated with overconfident CEOs; when a CEO is not overconfident, however, small boards proved more effective.

However, another paper makes it clear that restraining an overconfident CEO is something many boards find easier said than done. In “Director Perceptions of Their Board’s Effectiveness, Size and Composition, Dynamics, and Internal Governance”, Cheng et al. note that almost all directors reported that the size of their board was “just right”, despite wide variation in actual board size.

Moreover, while directors generally rated their board’s effectiveness highly, the weakest ratings were typically given to their performance in evaluating their CEO. The authors note that, “boards seem to see their primary function as providing counsel to, rather than monitoring the CEO.” This finding was backed by some painful director quotes, including:

“We have not been effective in dealing with a highly aggressive CEO”

“Our board has been too slow to move on poorly performing CEOs”

“We put too much trust in the CEO and management team”

To be sure, this is not a new phenomenon. For example, in Berkshire Hathaway’s 1988 Annual Report, Warren Buffett famously observed that “At board meetings, criticism of the CEO’s performance is often viewed as the social equivalent of belching.”

Unfortunately, heightened uncertainty tends to make human beings – including management teams and board directors – more likely to conform to the views of the group, even when it is led by an overconfident CEO. Indeed, in the face of uncertainty, overconfidence often increases in order to keep feelings of confusion and vulnerability at bay. You can see how this can easily trigger social dynamics that lead to organizational crisis and failure.

Challenging an overconfident CEO is never easy. But it is often one of the most critical activities non-executive chairs and directors perform.




Asking the Right Forecasting Questions

During the four years I spent on the Good Judgment Project team, I learned and applied a set of techniques that, as shown in Philip Tetlock's book "Superforecasting," significantly improved forecast accuracy, even in the case of complex socio-technical systems.

Far less noted, however, was the second critical insight from the GJP: the importance of asking the right forecasting questions. Put differently, the real value of a forecast is a function of both the question asked and the accuracy of the answer provided.

This raises the issue of just how an individual or organization should go about deciding on the forecasting questions to ask.

There is no obvious answer.

A rough analogy is to three types of reasoning: inductive, deductive, and abductive. In the case of induction, there are well-known processes to follow when weighing evidence to reach a conclusion (see our previous blog post on this).

Even more well-developed are the rules for logically deducing a conclusion from major and minor premises.

By far the least well codified type of reasoning is abduction — the process of generating plausible causes for an observed or hypothesized effect (sometimes called "inference to the best explanation").

To complete the analogy, we use abduction to generate forecasting questions, which often ask us to estimate the probability that a hypothesized future effect will occur within a specified time frame.

To develop our estimate, we use abduction to generate plausible causal hypotheses of how the effect could occur. We then use deduction to identify high-value evidence (i.e., indicators) we would be very likely to observe (or not observe) if the causal hypothesis were true (or false).

After seeking to collect or observe the evidence for and against various hypotheses, we use induction to weigh it and reach a conclusion — which in this example takes the form of our probability estimate.
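To make this loop concrete, here is a minimal sketch of that final, inductive step. The hypotheses, priors, likelihoods, and indicators are purely hypothetical, and the indicators are assumed to be conditionally independent given each hypothesis; the point is only to show how weighing evidence across rival causal hypotheses can yield a probability estimate for a forecasting question.

```python
# A minimal, purely hypothetical sketch of the abduction -> deduction -> induction loop.
# Hypotheses, priors, likelihoods, and indicators are invented for illustration, and
# indicators are assumed conditionally independent given each hypothesis (naive Bayes).

# Abduction: rival causal hypotheses for how the forecast effect could occur,
# plus a "neither" hypothesis, with subjective prior probabilities.
priors = {
    "H1: supply shock produces the effect": 0.30,
    "H2: demand collapse produces the effect": 0.20,
    "H0: neither (effect does not occur)": 0.50,
}

# Deduction: how likely each indicator would be to appear if each hypothesis were true.
likelihoods = {
    "H1: supply shock produces the effect":    {"inventories fall": 0.8, "prices rise": 0.7},
    "H2: demand collapse produces the effect": {"inventories fall": 0.2, "prices rise": 0.1},
    "H0: neither (effect does not occur)":     {"inventories fall": 0.3, "prices rise": 0.3},
}

# Induction: weigh the evidence actually observed and update the priors (Bayes' rule).
observed = ["inventories fall", "prices rise"]

unnormalized = {}
for hypothesis, prior in priors.items():
    likelihood = 1.0
    for indicator in observed:
        likelihood *= likelihoods[hypothesis][indicator]
    unnormalized[hypothesis] = prior * likelihood

total = sum(unnormalized.values())
posterior = {h: v / total for h, v in unnormalized.items()}

# The forecast answer: probability mass on the hypotheses under which the effect occurs.
p_effect = 1.0 - posterior["H0: neither (effect does not occur)"]
print(f"P(effect occurs within the forecast window) is roughly {p_effect:.2f}")
```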

So far, so good. But this raises the question of what guides our abductive reasoning. Drawing on Judea Pearl’s hierarchy of different types of reasoning, we can identify three approaches to generating forecasting questions.

Pearl's lowest level of reasoning is associational (also sometimes called correlational). Basically, if a set of factors (e.g., observable evidence) existed in the past at the same time as a given effect, we assume that the effect will also occur in the future if the same set of factors exist or occur. Note that there is no assumption here of causation; only statistical association.

Simple historical reasoning provides an example: given an important effect that has occurred multiple times, we can look for factors common to those cases and use them to formulate forecasting questions about whether the same effect could occur in the future. To be sure, this is an imperfect approach, because the complex systems that produce observed historical outcomes are themselves constantly adapting and evolving. It is for this reason that it is often said that while history seldom repeats, it often rhymes.
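As a simple illustration of this associational approach, the sketch below tallies which factors were present in a set of invented historical cases where the effect occurred. The cases and factor names are hypothetical; factors that frequently co-occur with the effect are candidates for new forecasting questions, with no claim that they caused it.

```python
# A purely illustrative sketch of associational (Level 1) reasoning from historical cases:
# tally which factors were present when the effect of interest occurred. The cases and
# factor names are invented; co-occurrence here implies association, not causation.
from collections import Counter

cases = [
    {"factors": {"rapid credit growth", "loose regulation"}, "effect": True},
    {"factors": {"rapid credit growth", "asset price boom"}, "effect": True},
    {"factors": {"loose regulation"}, "effect": False},
    {"factors": {"asset price boom", "rapid credit growth"}, "effect": True},
]

cases_with_effect = [c for c in cases if c["effect"]]
factor_counts = Counter(f for c in cases_with_effect for f in c["factors"])

# Factors frequently present alongside the effect are candidates for forecasting
# questions of the form "Will factor X be present / occur by date Y?"
for factor, count in factor_counts.most_common():
    share = count / len(cases_with_effect)
    print(f"{factor}: present in {share:.0%} of historical cases where the effect occurred")
```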

Pearl's next highest level of cognition explicitly creates mental models or more complex theories that logically link causes to effects. These theories can result from both qualitative and quantitative analysis. As noted above, we can use deduction to predict the future effects, assuming a given theory is true and specific causes occur. Hence, different causal theories (usually put forth by different experts) can be used to formulate forecasting questions.

Pearl's highest level of cognition is counterfactual reasoning (e.g., "if I hadn't done that, this wouldn't have happened" or "if I had done that instead, it would have produced this result"). One way to use counterfactual reasoning to generate forecasting questions is via the pre-mortem technique, in which you assume a plan has failed or a forecast has been badly wrong, and ask why this happened, including the evidence you missed and what you could have done differently. The results of pre-mortem analyses are often a rich source of new forecasting questions.

In sum, avoiding strategic failure is as much about taking the time to formulate the right forecasting questions as it is about using methods to enhance the accuracy with which they are answered.


