COVID Has Laid Bare Too Many Leaders' Lack of Critical Thinking Skills

One definition of critical thinking is “the use of a rigorous process to reach justifiable inferences.” The actions of various officials during the COVID pandemic have made painfully clear that it is a skill in very short supply.

In the United States, the ongoing debate over when to reopen schools for in-person instruction has put paid to K-12 leaders’ frequent claim that they teach students how to think critically.

The battle over school reopening is a perfect case study.

Example #1: Framing of the reopening issue has ignored basic principles of inductive reasoning

Teachers unions and their supporters have in effect demanded that district and state leaders (not to mention parents) “prove to us that it is safe to return to school.” And that is just what most proponents of reopening schools have tried to do.

Unfortunately, this approach runs smack into the so-called “problem of induction,” which was first identified by the philosopher David Hume in his Treatise of Human Nature, published in 1739: No amount of evidence can ever conclusively prove that a hypothesis is true.

To be sure, there are techniques available for systematically weighing evidence to adjust your confidence in the likelihood that a hypothesis is true, such as the Baconian, Bayesian, and Dempster-Shafer methods. But I can find no examples of these methods being applied in any district’s debate about reopening schools.
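
To make the idea concrete, here is a minimal sketch of the Bayesian approach, with entirely hypothetical numbers. Each study is reduced to a likelihood ratio (a Bayes factor), and the ratios are multiplied into the prior odds rather than being hurled in isolation:

```python
def update_odds(prior_odds, likelihood_ratios):
    """Multiply prior odds by the Bayes factor of each piece of evidence."""
    posterior_odds = prior_odds
    for lr in likelihood_ratios:
        posterior_odds *= lr
    return posterior_odds

def odds_to_probability(odds):
    return odds / (1 + odds)

# Hypothesis H: "in-person instruction does not meaningfully increase
# community COVID transmission." Start at even odds (50/50).
prior_odds = 1.0

# Each study is summarized as a likelihood ratio: how much more likely its
# result is if H is true than if H is false. Ratios > 1 support H; ratios
# < 1 cut against it. These values are invented purely for illustration.
study_likelihood_ratios = [3.0, 2.0, 0.8]

posterior = odds_to_probability(update_odds(prior_odds, study_likelihood_ratios))
print(f"Posterior probability of H: {posterior:.2f}")  # 0.83
```

The point is not these particular numbers, but that each new study moves a stated level of confidence by an explicit, auditable amount.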

Instead, parents and employers have repeatedly been treated to the ugly spectacle of both sides of this debate randomly hurling different studies at each other, without any attempt to systematically weigh the evidence they provide.

Nor, until recently, have we seen any attempts to use Karl Popper’s approach to avoiding Hume’s problem of induction: using evidence to falsify rather than prove a claim.

Fortunately, this has begun to change, as more evidence accumulates that schools are not dangerous vectors of COVID transmission.


Example #2: Deductive reasoning has been absent

In response, reopening opponents have made a new claim: That in-person instruction is still not safe because of the prevailing rate of positive COVID tests and/or case numbers in the community surrounding the school district.

This has triggered an endless argument about what the community positive rate means for the safety of in-person instruction.

This argument will never end unless and until the warring parties start to complement induction with deductive reasoning — in this case, actually modeling the multiple factors that affect the level of COVID infection risk in school classrooms.

These factors include assumptions about the relative importance of different infection vectors (surface contact, droplets, and aerosols) and the community infection rate (which drives the probability that a student or adult at a school will be COVID-positive and asymptomatic). They also include the cubic feet of space per person in a classroom, the activity being performed (e.g., singing versus listening to a lecture), the length of time a group spends in the classroom, and HVAC system parameters (air changes per hour, percentage of outside air exchanged, type of filters in use, windows open or closed, etc.).
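
One well-established way to do this kind of deductive modeling for aerosol transmission is the Wells-Riley equation, which relates infection probability to the emission of infectious “quanta” and the room’s ventilation. Below is a minimal sketch; every parameter value is illustrative, not a calibrated estimate:

```python
import math

def infection_probability(n_occupants, prevalence, quanta_per_hour,
                          breathing_m3_per_hour, hours, room_volume_m3,
                          air_changes_per_hour):
    """Wells-Riley estimate of a susceptible person's aerosol infection risk."""
    expected_infectors = n_occupants * prevalence  # driven by community rate
    ventilation_m3_per_hour = room_volume_m3 * air_changes_per_hour
    quanta_inhaled = (expected_infectors * quanta_per_hour *
                      breathing_m3_per_hour * hours) / ventilation_m3_per_hour
    return 1 - math.exp(-quanta_inhaled)

# Illustrative inputs: 25 people, 0.5% community prevalence, quiet classroom
# activity (~10 quanta/hour per infector), 170 m3 room, 3 air changes per
# hour, 6-hour school day.
risk = infection_probability(n_occupants=25, prevalence=0.005,
                             quanta_per_hour=10.0, breathing_m3_per_hour=0.5,
                             hours=6.0, room_volume_m3=170.0,
                             air_changes_per_hour=3.0)
print(f"Per-person daily infection risk: {risk:.2%}")  # about 0.73%
```

Even this toy version makes the key point: the same community infection rate can imply very different classroom risks depending on occupancy, activity, duration, and ventilation.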

So far, however, I have seen no sign of this type of modeling being systematically incorporated into state and school district discussions about how to measure and manage reopening risks. Unsurprisingly, it also seems to have been completely ignored by the teachers unions.

In the future, every party making claims and/or decisions about school reopening and COVID risk should have to answer these three questions, which have rarely been asked:

(1) What variables are you using in your model of in-school COVID infection risk?

(2) What assumptions are you making about the values of these variables, and how they interact to determine the level of infection risk?

(3) On what evidence are your assumptions based?


Example #3: Few if any forecast-based claims made during the debate over school reopening have been accompanied by estimates of the uncertainty associated with them

Broadly speaking, there are four categories of uncertainty associated with any forecast.

First, there is uncertainty arising from the applicability of a given theory to the situation at hand.

For example, initial forecasts for the spread of COVID were based on the standard “Susceptible – Infected – Recovered” or “SIR” model of infectious disease epidemics. This model assumed that a homogeneous population of agents would randomly encounter each other. With some probability, encounters between infected and non-infected agents would produce more infections. Some percentage of infected agents would die, and some would recover and thereafter become immune to additional infection.
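
To make those assumptions concrete, here is a minimal discrete-time SIR simulation (a sketch only; the parameter values are illustrative, not calibrated to COVID):

```python
def simulate_sir(population, initial_infected, beta, gamma, days):
    """beta: infectious contacts per person per day; gamma: recovery rate per day."""
    s, i, r = population - initial_infected, initial_infected, 0.0
    history = [(s, i, r)]
    for _ in range(days):
        new_infections = beta * s * i / population  # random-mixing assumption
        new_recoveries = gamma * i                  # permanent-immunity assumption
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

# R0 = beta / gamma = 2.5 here. Every agent is identical, mixes randomly,
# and becomes permanently immune on recovery: precisely the assumptions
# that, as described below, failed to hold for COVID.
history = simulate_sir(population=1_000_000, initial_infected=10,
                       beta=0.25, gamma=0.1, days=365)
peak = max(i for _, i, _ in history)
print(f"Peak simultaneous infections: {peak:,.0f}")
```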

As it turned out, the standard model’s assumptions did not match the reality of the COVID epidemic. For example, the population was not homogeneous: some people had characteristics (like age) and conditions (like asthma or obesity) that made them much more likely to become infected and die. Nor were encounters between infected and non-infected agents random: different people followed different patterns of behavior, like riding the subway each day, that created higher or lower risks of becoming infected or of infecting others (i.e., the impact of “superspreaders”). Finally, in the case of COVID, surviving infection did not make people permanently immune to future infections (e.g., with a new variant of SARS-CoV-2) or to infecting others.

Second, there is uncertainty associated with the way a theory is translated into a quantitative forecasting model. In the case of COVID, one of the challenges was how to model the impact of lockdowns and varying rates of compliance with them.

Third, there is uncertainty about what values to put on various parameters contained in a model – for example, to take into account the range of possible impacts that superspreaders could have.

Fourth, there is uncertainty associated with the possible presence of calculation errors within the model, particularly in light of research that has found that a substantial number of models have them (this is why more and more organizations now have separate Model Validation and Verification teams).


Example #4: Authorities’ decision processes have not clearly defined, acknowledged, and systematically traded off different parties’ competing goals

The Wharton School at the University of Pennsylvania has produced an eye-opening economic analysis of the school reopening issue, modeling both students’ lost lifetime earnings due to school closure and the cost of COVID infection risk, using the same type of “statistical value of a life” approach used in other public risk analyses (e.g., of the costs and benefits of raising speed limits).

This analysis finds that, assuming minimal learning relative to in-classroom instruction and no recovery of learning losses, students lose between $12,000 and $15,000 in lifetime earnings for each month that schools remain closed.

To be conservative, let’s assume that, thanks to somewhat effective remote instruction and partial recovery of learning losses, the average earnings hit is “only” $6,000 per month, and that schools “only” remain closed for nine months (three in the spring of 2020, and six during this school year). In a district of 25,000 students, the economic cost of unrecovered student learning losses is roughly $1.4 billion. You read that right: $1.4 billion.
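
The arithmetic, for anyone who wants to check it (the exact total is $1.35 billion, rounded up above):

```python
monthly_loss_per_student = 6_000   # conservative assumption from the text
months_closed = 9                  # spring 2020 plus this school year
students = 25_000

total_loss = monthly_loss_per_student * months_closed * students
print(f"${total_loss:,}")  # $1,350,000,000, i.e. roughly $1.4 billion
```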

And that doesn’t include the cost of job losses (usually suffered by mothers) caused by extended periods of remote learning.

Given the high cost to students, the Wharton team concluded that it only makes sense to continue remote learning if in-person instruction would plausibly cause more than 0.355 new community COVID cases per student. And there is no evidence that this is the case.

However, I have yet to hear this long-term cost to students or this tradeoff mentioned in leaders’ discussions about returning to in-person instruction.

Instead, I’ve seen teachers unions opposed to returning to in-person instruction roll out the same playbook they routinely use in discussions about tenure and dismissal of poorly performing teachers.

That playbook rests on the concept of Type-1 and Type-2 errors in hypothesis testing. Errors of commission are Type-1 errors, also known as “false alarms.” Errors of omission are Type-2 errors, or “missed alarms.” There is an unavoidable trade-off between them: the more you reduce the likelihood of errors of commission, the more you increase the probability of errors of omission.

Here’s a real-life example: If you incorrectly identify a teacher as poorly performing and dismiss them, you have made an error of commission. If you fail to identify a poorly performing teacher and therefore fail to dismiss them, you have committed an error of omission.

Unfortunately, the cost of these two errors is highly asymmetrical. Teachers unions claim tenure is necessary to minimize the chance of errors of commission — wrongfully dismissing a teacher who is not poorly performing. They completely neglect the cost of the corresponding increase in the probability of errors of omission — failing to dismiss poor performers.
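
A toy illustration of this tradeoff, assuming hypothetical score distributions for adequate and poor performers: as the dismissal threshold falls, false alarms (Type-1) drop while missed alarms (Type-2) rise.

```python
from statistics import NormalDist

# Measured performance scores, modeled as noisy normal distributions.
# The means and spread are invented purely for illustration.
adequate = NormalDist(mu=70, sigma=10)
poor = NormalDist(mu=50, sigma=10)

print("threshold   Type-1: dismiss adequate   Type-2: retain poor")
for threshold in (40, 50, 60):
    type1 = adequate.cdf(threshold)   # adequate teacher scores below cutoff
    type2 = 1 - poor.cdf(threshold)   # poor teacher scores above cutoff
    print(f"{threshold:>9}   {type1:>24.1%}   {type2:>19.1%}")
```

Running this prints Type-1 rates of 0.1%, 2.3%, and 15.9% against Type-2 rates of 84.1%, 50.0%, and 15.9%: reducing one error necessarily inflates the other.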

As Chetty, Friedman, and Rockoff found in “Measuring the Impacts of Teachers”, this cost is extremely high: each student suffers an estimated lifetime earnings loss of $52,000. Assuming the poorly performing teacher has a class of 25 students each year for 30 years, the total cost is $39 million.
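
The arithmetic behind that $39 million figure:

```python
loss_per_student = 52_000      # estimated lifetime earnings loss per student
students_per_class = 25
years_teaching = 30

print(f"${loss_per_student * students_per_class * years_teaching:,}")  # $39,000,000
```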

We face the same tradeoff between errors of commission and omission in the school reopening decision. But yet again, we are failing to think critically about it: we are not explicitly discussing different parties’ competing goals, or how politicians and district leaders should weigh them in their decision processes.

To reduce the probability of errors of commission (teachers becoming infected with COVID at school), teachers unions are refusing to return to in-person instruction until the risk of infection has effectively been eliminated. In turn, they expect students, parents, employers, and society to bear the far higher cost of the corresponding error of omission: failing to return to school when it is safe to do so. This cost is plausibly estimated to run into the high billions, if not trillions, at the national level.

The predictable response of some who read this critique of their lack of critical thinking is to once again toss critical thinking aside and implausibly deny that students’ learning losses exist, or to claim that they will easily be recovered.


Example #5: District decision makers have also fallen into other “wicked problem” traps

Dr. Anne-Marie Grisogono recently retired from the Australian Department of Defence’s Defence Science and Technology Organisation. She is one of the world’s leading experts on complex adaptive systems and the wicked problems that emerge from them.

Wicked problems are “characterized by multiple interdependent goals that are often poorly framed, unrealistic or conflicted, vague or not explicitly stated. Moreover, stakeholders will often disagree on the weights to place on the different goals, or change their minds.” When the pandemic arrived, leaders faced a classic wicked problem.

In a paper published last year (“How Could Future AI Help Tackle Global Complex Problems?”), Grisogono described the traps that decision makers usually fall into when struggling with a wicked problem.

These will surprise nobody who has watched most school district decision makers during the pandemic.

One trap is structuring a complex decision process such that nobody involved is responsible for explicitly trading off competing goals. Put differently, the buck stops at nobody’s desk. In the case of COVID, we have repeatedly seen health officials make decisions (e.g., imposing lockdowns) based solely on minimizing the risk of infections, without regard to the associated economic, mental health, and student learning losses.

Grisogono describes other traps that have also been much in evidence during the COVID pandemic.

“Low ambiguity tolerance was found to be a significant factor in precipitating the behavior of prematurely jumping to conclusions about the nature of the problem and what was to be done about it, despite considerable uncertainty…

“The chosen (usually ineffective) course of action was then defended and persevered due to a combination of confirmation bias, commitment bias, and loss aversion, in spite of accumulating contradictory evidence.

“The unfolding disaster was compounded by a number of other reasoning shortcomings such as difficulties in steering processes with long time delays and in projecting cumulative and non-linear processes.”

Conclusion

As I said at the beginning of this post, one definition of critical thinking is “the use of a rigorous process to reach justifiable inferences.”

Unfortunately, there is abundant and damning evidence that critical thinking has been notable by its absence among too many leaders who have been making critical decisions in the face of complexity, uncertainty, and time pressure during the COVID pandemic. And millions of people have paid the price.