Ethical Dilemmas at the Heart of Risk Management and Governance

Chief Financial Officers (and in some cases, Chief Risk Officers) are unique in that, apart from the Chief Executive Officer, they are the only corporate officers who have a direct reporting relationship to the Board of Directors (usually via the board’s audit and risk committee).

Over a long career, I have repeatedly seen how this reporting relationship creates a recurring ethical dilemma for CFOs and CROs.

In today’s rapidly changing world, corporate failure rates are high, CEOs' average tenure is shrinking, and many of them are powerfully incentivized (via their compensation package and the threat posed by activist investors) to deliver rising returns to investors over short time horizons.

At the individual level, this combination of factors increases the likelihood that CEOs will fall into any number of cognitive traps, including wishful thinking, overconfidence, and reduced tolerance for doubt and dissent.

At the group level, research has shown that heightened uncertainty increases human beings’ desire to conform to the views of the group, and to give greater weight to others’ views than their own private information. In other words, the situation many companies find themselves in today militates against CEOs hearing dissenting views from many, if not most, members of their team.

Under such circumstances, CFOs and CROs can easily find themselves caught between their duty to honestly assess the risks facing a company and report them to the board, and the painful realization that doing so may not be in the best interest of their relationship with the CEO or their career prospects.

Given these factors, the risk governance role of a board’s non-executive directors is critical. However, to play it effectively they also must overcome two challenges.

The first is human beings’ natural cognitive difficulty when it comes to recognizing the importance of evidence that is absent – i.e., Sherlock Holmes’ dog that didn’t bark. Sometimes the most important red flags should be raised by what isn’t said in the CFO or CRO’s risk report to the board.

The second is perhaps even more difficult. Many non-executive directors have been corporate officers themselves, and appreciate what it feels like to be challenged by a board. Seeking to avoid conflict with a management team is a natural desire for directors, until conflict becomes unavoidable (by which point action often comes too late). A director may also wish to avoid being seen by other directors as causing conflict on the board. And when uncertainty is high, the same group-level pressures toward conformity operate on boards just as they do on management teams.

Thus the success or failure of risk management and governance processes often comes down to how CFOs, CROs, and non-executive directors resolve the ethical dilemmas they encounter at critical junctures in corporate history.

Critical Modeling Tradeoffs We Ignore at Our Peril

Having been present at the birth of VisiCalc and the dawn of the electronic spreadsheet age, I’ve spent a lot of years working with many different modeling software programs. However, it wasn’t until I started to read research papers published by Dr. Francois Hemez from Los Alamos National Laboratory that I really understood the tradeoffs we face when creating and using quantitative models.

Hemez writes about the challenge of using models to simulate complex physical phenomena, including the effects of nuclear weapons. His critical insight is that there are inescapable tradeoffs between three metrics that are often used to judge the quality of a model.

The first is the extent to which a model can reproduce historical data.

The second metric is the extent to which a model's predictive accuracy is robust to different types of uncertainty. These include uncertainty about the correct structure of the model itself (e.g., which variables to include and the relationships between them); about how best to represent the range of possible values for model variables; and about the potential impact of irreducible sources of randomness.

The third metric is what Hemez calls a model’s “predictability”. This is not the same as accuracy in reproducing historical data. Rather, it is the extent to which predictions are consistent across a group of models that are roughly equal in their ability to accurately reproduce the past and in their robustness to uncertainty.

A model built to excel at reproducing past results is unlikely to be robust to uncertainty, and will likely fail to provide equally accurate predictions in complex socio-technical systems (like markets or economies) that, unlike physical systems, are constantly evolving.

Similarly, increasing robustness to uncertainty comes at the cost of less consistency in predictions about the future. The wider the range of uncertainties you include regarding model structure and the value of model variables, the wider will be the range of forecast outcomes the model produces.
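This tradeoff can be seen in a toy simulation (a sketch of my own, not from Hemez’s papers): the wider the input distributions a model admits, the wider the band of forecasts it produces.

```python
import random

def forecast_spread(input_sd, n=5000, seed=0):
    """Width of the 5th-95th percentile band of a toy forecast,
    where input_sd controls how much input uncertainty we admit."""
    rng = random.Random(seed)
    outcomes = sorted(rng.gauss(100, input_sd) for _ in range(n))
    return outcomes[int(0.95 * n)] - outcomes[int(0.05 * n)]

narrow = forecast_spread(input_sd=2)   # admit little uncertainty
wide = forecast_spread(input_sd=10)    # admit much more
print(narrow < wide)  # True: wider inputs, wider forecast range
```

The point is not the specific numbers, which are invented, but the direction of the effect: robustness is purchased with consistency.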

What are the practical implications of Hemez’ work for risk executives and other business leaders?

First, in complex socio-technical systems estimating a model’s ability to accurately predict the future on the basis of its ability replicate the past will likely lead to increasing errors as the forecast time horizon lengthens.

A better approach is to ask whether the results observed in the past are within the range of possible outcomes forecast by a model.
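As a sketch of this check (the model and all numbers below are hypothetical, chosen only for illustration), one can ask whether an observed historical outcome falls inside the band of outcomes a stochastic model forecasts:

```python
import random

def forecast_band(model, n_runs=1000, quantiles=(0.05, 0.95)):
    """Run a stochastic model repeatedly and return the chosen
    quantile band of its forecast outcomes."""
    outcomes = sorted(model() for _ in range(n_runs))
    lo = outcomes[int(quantiles[0] * (n_runs - 1))]
    hi = outcomes[int(quantiles[1] * (n_runs - 1))]
    return lo, hi

def within_band(observed, band):
    lo, hi = band
    return lo <= observed <= hi

# Hypothetical model of next-year revenue growth: mean 3%, sd 2%
toy_model = lambda: random.gauss(0.03, 0.02)

band = forecast_band(toy_model)
print(within_band(0.045, band))  # was last year's 4.5% growth plausible?
```

If observed history repeatedly falls outside the band, the problem lies with the model, not the history.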

Second, using single-point inputs for model variables is a recipe for trouble when attempting to forecast the future outputs of a complex socio-technical system. Moreover, the traditional approach of using three model runs (representing the best, worst, and most likely cases) is unlikely to significantly improve forecast accuracy, because in socio-technical systems all variables tend not to take their best, worst, or most likely values simultaneously.

A better approach is to use Monte Carlo add-ins for a typical spreadsheet model that describe the possible values for key variables, and the relationships between them, as statistical distributions. Running such a model many times produces statistical distributions for the forecast outcomes, which enables executives to better understand how the best and worst outcomes could arise. Yet this approach still neglects the evolution over time of key relationships between variables (and sometimes the addition of new variables) within the socio-technical system being modeled. To capture these dynamics, an approach would have to either add multiple model structures or enable model evolution over time. A good example of this is the “ensemble” modeling approach used in weather forecasting, in which, for example, the UK Met Office, European, Canadian, and US Weather Service models are all run to produce a prediction.
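A minimal Monte Carlo sketch in Python (standing in for a spreadsheet add-in; every number and relationship below is an assumption made for illustration) shows the idea of drawing related variables from distributions rather than using single-point inputs:

```python
import random
import statistics

def simulate_profit(n_runs=10_000, seed=1):
    """Draw price, demand, and unit cost from distributions, with demand
    negatively related to price, and return the simulated profits."""
    rng = random.Random(seed)
    profits = []
    for _ in range(n_runs):
        price = rng.gauss(10.0, 1.0)                  # uncertain unit price
        # assumed relationship: higher price -> lower expected demand
        demand = max(0.0, rng.gauss(1000 - 40 * (price - 10.0), 100))
        unit_cost = rng.gauss(6.0, 0.5)               # uncertain unit cost
        profits.append((price - unit_cost) * demand)
    return sorted(profits)

profits = simulate_profit()
print("median profit:", round(statistics.median(profits)))
print("5th-95th percentile:", round(profits[500]), round(profits[9500]))
```

Inspecting the individual runs in the tails, rather than just the band’s endpoints, is what lets executives see how the best and worst outcomes could actually arise.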

Finally, confidence in a prediction is increased when a number of fundamentally different modeling methodologies are used to forecast the outcomes of interest in complex socio-technical systems – for example, Monte Carlo spreadsheet models, system dynamics models (e.g., built with Analytica, STELLA, or VenSim software), and agent-based models (e.g., built with EAS, NetLogo, or SWARM software).
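A rough way to operationalize this comparison (the forecast values below are invented for illustration) is to look at the spread of the point forecasts the different methodologies produce for the same outcome:

```python
import statistics

# Hypothetical forecasts of the same outcome (e.g., next-year unit demand)
# produced by three fundamentally different model types.
forecasts = {
    "monte_carlo_spreadsheet": 1040.0,
    "system_dynamics": 980.0,
    "agent_based": 1100.0,
}

values = list(forecasts.values())
relative_spread = (max(values) - min(values)) / statistics.mean(values)

# A small spread across dissimilar methods raises confidence in the
# prediction; a wide spread warns that it rests on shaky ground.
print(f"mean: {statistics.mean(values):.0f}, spread: {relative_spread:.1%}")
```

What counts as a “small” spread is a judgment call that depends on the decision the forecast is meant to inform.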

In sum, there are very practical ways we can apply Francois Hemez' insights about the tradeoffs we face when modeling the behavior of complex socio-technical systems. But we first have to recognize that they exist.

The Strategist's Art

After forty years in the private sector, I can think of few activities upon which more time has been wasted than “mission and vision” projects and offsites.

One problem is vast confusion over the meaning of these two terms: Ask ten people to define them and you’ll probably get ten different answers.

A second problem is the insipid results of most organizations’ attempts at “missioning and visioning”. Quick: Can you write down your organization’s mission and vision statements? Or can you tell me how they guide you in making the decisions you face every day?

If you couldn’t, you’re far from alone. I’ve asked these questions of clients for almost 40 years with the same depressing result. The painful truth is that most mission and vision statements are exercises in organizational vanity that are indistinguishable, generic, and so filled with buzzwords that they are devoid of practical meaning or impact.

Instead of wasting time on “missioning and visioning”, I’ve learned the virtue of focusing instead on purpose and strategy.

Purpose is far easier to discern than most people think. Just ask yourself this question: What would the world lose if your organization ceased to exist tomorrow? Hopefully, the world’s response won’t be “good riddance.”

Strategy is also far less mysterious than it is often made out to be. In essence, it is simply a causal theory of how to achieve an organization’s most important goals with limited resources in the face of uncertainty. Plans implement strategy.

Strategies that are complete and effective begin with an unsparing assessment of how an organization came to its present circumstances, and different ways its environment could evolve in an uncertain future.

Based on this assessment, a strategist must determine the most important and measurable goals that the organization must achieve within a specified timeframe to enable it to survive and thrive in the face of uncertainty.

Setting out ambitious goals and describing how they could be achieved without taking limited resources and uncertainty into account is not strategy, but rather an exercise in wishful thinking that comes with a built-in excuse: “It didn’t work because you didn’t give us enough money.”

At its heart, strategy is a creative process that resolves the tension between critical goals and limited resources in an uncertain environment.

That is why strategy – and its counterpart, strategic risk management and governance – will always be an art.


Cutting Through Risk Management's Confusing Terminology

The longer you spend working in, or studying, risk management, the more you come across common terms that are often used in very different and confusing ways by various organizations and authors.

Unfortunately, this can lead cynics to wrongly conclude that risk management is nothing more than a bunch of buzzwords, mixed with an unhealthy helping of mumbo jumbo.

With that in mind, this brief note explains how Britten Coyne Partners uses some commonly encountered risk management terms.

A "hazard" is an event or other development that could emerge from the environment and plausibly have a negative impact on an organization, depending on the objective(s) it is pursuing.

Strictly speaking, a "risk" is an uncertainty that can be described statistically. However, this distinction is frequently overlooked, with risk commonly taken to mean all types of uncertain events and developments that could possibly have a negative impact on the achievement of one or more specific organizational objectives. The key point is this: Risks exist in relationship to specific objectives.

A "threat" is an event or other development that will probably have a substantial negative impact on the achievement of one or more specific organizational objectives, unless effective adaptations are implemented in time. As a rule of thumb, a risk transforms into a threat when it ceases to be just a cognitive construction, and triggers a feeling of fear.

Given a set of objectives, an organization first seeks to anticipate the risks to their achievement.

It then proceeds to assess these risks, to determine their likelihood over different time frames, the magnitude of their potential negative impact, the speed with which they could develop into threats, and which of them most urgently require adaptive action(s) to be undertaken. The key point is this: The purpose of risk assessment is not to create attractive "heat maps." It is to prioritize the allocation of limited resources to various adaptive actions.
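One way to make that prioritization concrete is a simple scoring sketch. The weighting scheme, the risks, and every number below are illustrative assumptions, not a Britten Coyne Partners formula:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: float   # probability over the planning horizon (0 to 1)
    impact: float       # estimated negative impact, e.g. in $ millions
    velocity: float     # 0 to 1: how fast it could develop into a threat

def priority(r: Risk) -> float:
    # Expected impact, weighted up for risks that could develop quickly,
    # so fast-moving risks receive adaptive resources first.
    return r.likelihood * r.impact * (1 + r.velocity)

risks = [
    Risk("new low-cost competitor", likelihood=0.30, impact=50.0, velocity=0.8),
    Risk("key supplier failure", likelihood=0.10, impact=80.0, velocity=0.5),
    Risk("regulatory change", likelihood=0.20, impact=40.0, velocity=0.2),
]

for r in sorted(risks, key=priority, reverse=True):
    print(f"{r.name}: {priority(r):.1f}")
```

The ranked list, not a heat map, is the artifact that actually drives the allocation of limited adaptive resources.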

Broadly speaking, these adaptive actions can be divided into two categories: Those intended to mitigate the causes of different risks, to reduce the chance that they will develop into threats and/or slow the speed with which this could happen, and actions intended to mitigate the consequences if a threat does materialize.

The latter category includes adaptations (1) to reduce an organization's exposure to a given threat (e.g., buying insurance); (2) to reduce a threat's initial negative impact on performance for a given level of exposure (i.e., actions to increase robustness); and (3) to reduce the time required to return to or exceed a given level of performance after the initial negative impact (i.e., actions to increase resilience).

