# Modeling -- Not as Easy as it Looks!

01/Apr/18 15:11

No, we’re not talking about a catwalk in stilettos. We’re talking about an activity that, especially since VisiCalc first ran on an Apple II in 1979, has become an integral part of management.

For all its current ubiquity, what too many people fail to appreciate is the amount of uncertainty inherent in quantitative modeling. With that in mind, we offer this quick review.

## Level 1: Choice of Theory

Explicitly or implicitly, models intended to explain or predict observed effects begin with one or more causal theories. Yet the accuracy of the conceptual theory underlying a quantitative model is rarely acknowledged as an important source of uncertainty.

## Level 2: Choice of Modeling Method

The next step is choosing a modeling method that accurately captures the major features of the theory. For example, where theory states that the target effects to be modeled emerge from the bottom up via the interaction of agents with varying information and beliefs, agent-based modeling may be the method chosen. Alternatively, where theory states that the target effect is heavily driven by feedback loops, a top-down system dynamics approach may be used.

The extent of the match between theory and the modeling approach chosen is another potential source of modeling uncertainty.
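To make the bottom-up idea concrete, here is a hypothetical sketch of a minimal agent-based model (the agents, their "beliefs," and the 20% adoption rate are all invented for illustration): each agent holds a belief, agents interact pairwise, and the aggregate outcome emerges from those interactions rather than being specified top-down.

```python
import random

def run_abm(n_agents=100, n_steps=50, seed=42):
    """Minimal agent-based sketch: each agent holds a belief in [0, 1]
    and partially adopts the belief of a randomly met agent each step.
    The aggregate (mean belief) is an emergent, bottom-up outcome."""
    rng = random.Random(seed)
    beliefs = [rng.random() for _ in range(n_agents)]
    for _ in range(n_steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        # Agent i moves 20% of the way toward agent j's belief.
        beliefs[i] += 0.2 * (beliefs[j] - beliefs[i])
    return sum(beliefs) / n_agents

mean_belief = run_abm()
```

A system dynamics model of the same phenomenon would instead specify aggregate stocks, flows, and feedback loops directly; the mismatch between either representation and the underlying theory is the uncertainty at issue here.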

## Level 3: The Structure of the Model(s)

Yet another source of uncertainty is the set of structural choices made when implementing a given modeling method. These include which variables are included in the model, and the nature of the relationships between them (e.g., are they related at all, and, if so, is the relationship linear or non-linear, and constant or dependent on other variables?).

In some cases, uncertainty about the correct structure of a model can be reduced through the use of “ensemble” methods, which involve the construction of multiple models and the aggregation of their outputs.
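A hypothetical sketch of the ensemble idea (the two model structures and their coefficients are invented for illustration): when we are unsure whether an effect is linear or saturating, we can build both structures and average their outputs rather than betting on one.

```python
def linear_model(x):
    # Structural choice 1: the effect is linear in the driver.
    return 2.0 * x

def saturating_model(x):
    # Structural choice 2: the effect saturates at high driver values.
    return 10.0 * x / (1.0 + x)

def ensemble_forecast(x, models):
    """Aggregate competing model structures by averaging their outputs."""
    return sum(m(x) for m in models) / len(models)

forecast = ensemble_forecast(3.0, [linear_model, saturating_model])
```

In practice the aggregation is often a weighted average, with weights reflecting each structure's historical accuracy; a simple mean is the minimal version of the idea.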

## Level 4: The Values of Model Variables

“Parameter uncertainty” refers to doubts about the accuracy of the values that are attached to a model’s variables, including dependency relationships between them (e.g., their degree of correlation). In simple deterministic models, this often involves disagreements over values for individual variables, or the values to be used in “best-case, worst-case, most-likely case” scenarios.

In more complex Monte Carlo models, values for key variables are specified as distributions of possible outcomes. In this case, sources of uncertainty include the type of distribution used to describe the possible range of values for a variable (e.g., a normal/Gaussian or power law/Pareto distribution), and the specification of key values for the selected distribution (e.g., will rare but potentially critical events be captured?).
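A hypothetical sketch of why the distributional choice matters (the "loss" variable and its parameters are invented for illustration): modeling the same quantity with a thin-tailed normal versus a heavy-tailed Pareto distribution produces very different extreme outcomes, even with similar means.

```python
import random

def worst_case_loss(n=100_000, seed=0, heavy_tailed=False):
    """Monte Carlo sketch: draw the same 'loss' variable from a
    thin-tailed (normal) or heavy-tailed (Pareto) distribution and
    report the worst outcome seen across the simulated scenarios."""
    rng = random.Random(seed)
    if heavy_tailed:
        # Pareto with shape alpha=2 (mean 2.0): rare, extreme draws occur.
        draws = [rng.paretovariate(2.0) for _ in range(n)]
    else:
        # Normal with the same mean (2.0); extremes are far milder.
        draws = [rng.gauss(2.0, 1.0) for _ in range(n)]
    return max(draws)

worst_normal = worst_case_loss()
worst_pareto = worst_case_loss(heavy_tailed=True)
```

The distributional choice, not the central estimate, drives whether rare but potentially critical events show up in the simulation at all.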

## Level 5: Recognizing Randomness

For most variables, there is an irreducible level of uncertainty: randomness that cannot be eliminated through more data or better knowledge about the variable in question. Sources of this randomness can include sensor and measurement errors, or small fluctuations caused by a complex mix of other factors. Whether and how this randomness is included in potential variable values is another source of model uncertainty.
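A hypothetical sketch of irreducible noise (the sensor, its true value, and the noise level are invented for illustration): averaging many readings tightens our estimate of the underlying value, but the noise in any single reading never goes away.

```python
import random

def read_sensor(true_value, rng, noise_sd=0.5):
    """Sketch: even when the true value is fixed, each reading
    carries measurement noise that no extra knowledge removes."""
    return true_value + rng.gauss(0.0, noise_sd)

rng = random.Random(7)
readings = [read_sensor(20.0, rng) for _ in range(1000)]
# Averaging shrinks the error of the *mean* estimate, but the
# spread of individual readings is an irreducible floor.
estimate = sum(readings) / len(readings)
```

A model that feeds in point readings as if they were exact silently drops this Level 5 uncertainty; propagating the noise (e.g., via the Monte Carlo approach above) keeps it visible.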

## Level 6: Mathematical Errors

We’ve all done it – wrongly specified an equation or variable value when building a model late at night (and/or under time pressure). And most of us are usually lucky enough to catch those errors the next morning before someone else does, when, after a few cups of coffee, we test our model before finalizing it and say, “that doesn’t look right.” Like it or not, mathematical errors are yet another – and very common – source of model uncertainty.
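One inexpensive defense is to encode the "that doesn't look right" check as explicit assertions on hand-computed cases. A hypothetical sketch (the margin formula and figures are invented for illustration):

```python
def net_margin(revenue, costs):
    """Net margin as a fraction of revenue."""
    if revenue == 0:
        raise ValueError("revenue must be non-zero")
    return (revenue - costs) / revenue

# Sanity checks against cases computed by hand catch mis-specified
# equations before the model reaches anyone else:
assert net_margin(100.0, 60.0) == 0.40   # known hand-computed case
assert net_margin(100.0, 100.0) == 0.0   # break-even must be exactly zero
```

The same discipline applies in a spreadsheet: a block of test cells whose formulas must equal known answers plays the role of these assertions.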

The discipline of model verification and validation is used to assess these six sources of model uncertainty. Verification focuses on the accuracy with which a model implements the theory upon which it is based, while validation assesses the accuracy with which a model represents the target system.

## Level 7: Non-Stationarity

This brings us to the final source of uncertainty. Validation usually involves assessing the extent to which a model can reproduce the target system’s historical results. However, if the system itself is evolving or “non-stationary” – and particularly if that evolution is driven by a complex process that cannot be fully understood (i.e., it is “emergent”) – then a final source of uncertainty is how long a model’s predictions will remain accurate (within certain bounds).
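A hypothetical sketch of this decay (the drifting series and the trivial "mean" model are invented for illustration): a model validated against early history performs progressively worse as a non-stationary system drifts away from the regime it was fitted on.

```python
def fit_mean(history):
    """Deliberately simple 'model': predict the next value as the
    mean of the observed history."""
    return sum(history) / len(history)

# A non-stationary system: the level drifts upward every period.
series = [10 + 0.5 * t for t in range(20)]   # t = 0 .. 19

model = fit_mean(series[:10])                 # validated on early history
errors = [abs(model - y) for y in series[10:]]
# Prediction error grows period by period as the system moves
# away from the regime the model was fitted (and validated) on.
```

This is why validation against historical data bounds a model's accuracy only for as long as the system resembles its own past.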

Computer models have substantially increased business productivity as they have come into widespread use over the past forty years. Yet they have also introduced new sources of uncertainty into decisions that are made using their outputs. It is for this reason that wise decision makers always test model results against their intuition, and, when the two disagree, take the time to further explore and understand the root causes at work. Both modeling methods and decision makers’ intuition usually benefit from the time invested in this discussion.
