1. Introduction
Over the past decade, research has begun to explore how relatively minor microeconomic changes can have substantial effects beyond the market in question (Acemoglu et al., 2012; Baqaee, 2018; Foerster et al., 2011; Taschereau-Dumouchel, 2020). This literature finds that small failures (e.g. firms exiting the market, over- or under-supply) do not necessarily average out across the economy, as the Law of Large Numbers would suggest. Depending on the market structure, the negative effects can ripple through markets (Baqaee, 2018). Both the connectivity of producers and the structure of the market determine whether this effect appears.
However, little attention has been paid in this literature to cascading effects in the production of expert opinion. Specifically, do cascades remain locked within the area in which they occur, or do they spill over into other areas? In the early days of the COVID-19 pandemic, the influence of public health expertise showed up in fields far removed from public health, yet public health officials cared little for the insights of experts in those fields. I explore the effects of these choices by experts by combining the cascading failures literature (Banerjee, 1992; Baqaee, 2018; Bikhchandani et al., 1992; Wu, 2015) with recent work on the production of expert opinion (Gentzkow and Kamenica, 2017a, 2017b; Koppl, 2018, 2021; Koppl and Murphy, 2022; Murphy et al., 2021) to show that seemingly small expert failures can have cascading effects on the decisions of unrelated actors, leading to large adverse effects.
Following Koppl (2018), I define an expert as one paid for their opinion. Consequently, the nonexpert is the purchaser of expert opinion. This definition places the expert and nonexpert into a contractual relationship in the same way a market exchange between a producer and a consumer is a contractual relationship. Expert opinion is the commodity being exchanged. Defining the expert as one who is paid for their opinion helps us sidestep questions of reliability. We are not bogged down in deciding who qualifies as an expert in this or that field; who qualifies is endogenous. By commodifying expert opinion, we can bring to bear the analytical tools that have served economists well in law, political economy, and other fields. 'Failure' takes on a specific meaning in the market for experts: expert failure occurs when the expert's advice leads to a worse situation than expected by the individual purchasing the expert's opinion (Koppl, 2018).
Commodifying expert advice also helps distinguish the theory of expert failure from the theories of bureaucracy (Tullock, 2005a), hierarchy (Miller, 1992), and public choice (Buchanan and Tullock, 1999). Whereas those theories focus on the operations of an individual within a bureaucratic or governmental system, the theory of expert failure focuses on experts qua experts. An expert may operate as an advisor to the government or even as an employee of a government agency, but the role of the individual is different. There is a kinship between the fields (Murphy et al., 2021), but expert failure is distinct.
The rest of the paper proceeds as follows. Section 2 briefly discusses the literature on experts. Section 3 develops a theory of cascading expert failure. Section 4 discusses institutional arrangements that contribute to cascading expert failure. Section 5 provides two case studies of expert failure in the early days of the COVID-19 pandemic in the United States. Section 6 discusses how to prevent cascading failure. Section 7 concludes.
2. Literature review
Koppl (2018) dates the literature on experts and expertise as beginning with Socrates's Apology (Xenophon, 2013). Socrates argued that experts should be obeyed in their areas of expertise as they are the 'wisest authorities' within those bounds. A comprehensive literature review would be impossible given this ancient line of inquiry. The topic has arisen in fields as different as philosophy (Mannheim, 1936), science and technology studies (Turner, 2001), sociology (Berger and Luckmann, 1966), law (Block et al., 2000; Hand, 1901; Lind et al., 1973), and economics (Andreoni and Mylovanov, 2012; Gentzkow and Kamenica, 2017a, 2017b; Koppl, 2018; Milgrom and Roberts, 1986; Tullock, 2005b).
In economics, much of the attention on experts and expertise is geared toward producing and disseminating information from the expert to the nonexpert. Generally, the nonexpert calls in the expert to help overcome informational problems: the nonexpert does not have enough information to act correctly, is aware of this limitation, and knows that the information is costly to obtain and analyze. The nonexpert seeks the advice of an expert (or experts) to help them decide (Dewatripont and Tirole, 1999). However, research has shown that experts face an incentive to conceal certain information even when the nonexpert pays them for their advice. If information is detrimental to the expert's cause, they may not reveal that information to the nonexpert (ibid.). Alternatively, the expert may tailor their advice to what the nonexpert wants to hear if the nonexpert is a sufficiently large buyer in the marketplace for opinion (Koppl, 2002).
One way to increase the information available to nonexperts is to increase the number of competing experts in the marketplace. Placing experts with differing interests into dialogue with one another reveals more information. Both experts want to 'win' the business of the nonexpert and thus have an incentive to reveal any information that supports their own case or harms the other expert's case (Milgrom and Roberts, 1986). In equilibrium, all information is revealed. Further, while Milgrom and Roberts (ibid.) build their model without transaction costs, additional research shows that such costs do not necessarily impede information revelation (Froeb and Kobayashi, 1996). Similarly, even if the nonexpert is biased toward a certain outcome, competition can lead to full revelation (Froeb and Kobayashi, 1993; Shin, 1998).
Even with increased competition among experts, the structure of competition matters. Gentzkow and Kamenica (2017b) construct a game-theoretic model showing that competition among experts may reveal no information if the situation is a Prisoner's Dilemma and the experts cannot reveal information about their competitors. Koppl and Murphy (2022) explore organizational structures and management strategies that can promote or hinder information revelation.
Koppl (2018) provides the most detailed explication of the broad phenomenon of expert failure, although concerns about expertise are older. Adam Smith warned of the dangers of overreaching expertise in his classroom lectures (Smith, 1982) and published work (Smith, [1776] 1981). The economic literature on expert failure focuses on the incentive and institutional structures that can cause it (Koppl, 2021; Murphy et al., 2021) and on how receptive experts are to disconfirming information (Andreoni and Mylovanov, 2012; Kang and Kim, 2021). Knowledge problems also arise regarding how well the expert can advise the nonexpert (Hayek, 1945; Lavoie, 2016). Organizational psychology has examined how the ways experts signal their trustworthiness may lead them to fail (Radzevick and Moore, 2011).
There are works on informational cascades that parallel the argument in this paper. Bikhchandani et al. (1992) and Banerjee (1992) both develop herd behavior models in which individuals in a decision chain take the actions of previous individuals as informational inputs into their own decision-making. At a certain point, an individual's choice relies entirely on the actions taken by previous individuals, and no private information is applied to the choice. Whereas Banerjee (ibid.) focuses mainly on herd behavior leading to cascades, Bikhchandani et al. (1992) show how fragile cascades can be. Both models rely on first-movers conveying information to laymen. In the Bikhchandani, Hirshleifer, and Welch model, certain 'fashion leaders' with higher signal accuracy act as first-movers. Wu (2015) expands the model to include both experts, who have high-quality signals, and laymen, who have lower-quality signals. My model complements but differs from theirs: I discuss how one expert's actions can lower the signal accuracy of another expert elsewhere in the decision-making chain.
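To make the herding mechanism concrete, the sketch below simulates a sequential-choice process in the spirit of Bikhchandani et al. (1992). It is a minimal illustration, not the original model: the signal accuracy, the number of agents, and the own-signal tie-breaking rule are assumptions chosen for exposition.

```python
import random

def run_sequence(n_agents=20, p=0.7, state=1, seed=None):
    """Sequential choices in the spirit of Bikhchandani et al. (1992).

    Each agent receives a private binary signal that matches the true
    state with probability p and observes all earlier actions. Before a
    cascade starts, an agent's action mirrors their own signal, so
    actions reveal signals. Once the revealed signals favor one action
    by a margin of two or more, an agent's own signal can no longer
    flip the Bayesian decision, and everyone herds from then on.
    """
    rng = random.Random(seed)
    revealed = []   # signals inferred from pre-cascade actions
    actions = []
    for _ in range(n_agents):
        signal = state if rng.random() < p else 1 - state
        balance = sum(1 if s == 1 else -1 for s in revealed)
        if abs(balance) >= 2:
            # cascade: the private signal is ignored and reveals nothing
            actions.append(1 if balance > 0 else 0)
        else:
            actions.append(signal)      # action still reveals the signal
            revealed.append(signal)
    return actions

# Share of sequences that herd on the wrong action despite informative
# signals (the true state is 1, so a final action of 0 marks a bad herd).
runs = [run_sequence(seed=i) for i in range(10_000)]
print(sum(r[-1] == 0 for r in runs) / len(runs))
```

Even with fairly accurate signals, a nontrivial share of sequences lock onto the wrong action after only a few moves, illustrating both the persistence and the fragility the text describes: the herd forms on very little information, so it can also be broken by very little.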
Earl et al. (2007) also work in the cascades tradition, focusing on how decision rules for interpreting the data we gather cascade. They show how decision rules made by experts in financial markets degrade over time as the rule is passed from one person to another. When the decision rule is made, it fits a certain context. However, there are non-trivial time lags, and by the time nonexperts adopt the rule, the context has likely changed. Consequently, a game of Telephone can ensue, as key qualifications get lost (ibid., pp. 356–358). Decision rules get flattened and become less effective at helping to formulate optimal decisions. My model is parallel, although I am less concerned with decision rules themselves (i.e. how individuals should interpret data) than with how the decision rules of experts affect the informational inputs other experts use in the production of their advice.
I aim to fill two gaps in this literature. First, I show how failures cascade not only within fields but beyond them. Second, I address how cascades perpetuate or end. Experts can become 'siloed' within their fields and thus not know how to interpret some of the information they use in the opinion-formation process. This siloing then causes them to repeatedly offer the same advice, even though it is failing to achieve the desired goals. Experts may not be the high-accuracy, high-signal individuals they are assumed to be.
3. Cascading expert failure
3.1 The basic model
Cascading expert failure occurs when one expert failure leads to other failures removed from the original transaction. Just as a single snowball may cascade into an avalanche of destruction, so too might a single failure cascade into multiple and multiplying failures.
Following the literature on cascading production failures (Baqaee, 2018; Taschereau-Dumouchel, 2020), I model cascading expert failure as a network problem.
Echoing macroeconomic analyses that argue microeconomic failures will average out at the macroeconomic level (Lucas, 1977), one may argue that expert failure, when sufficiently diffuse, would average out in the aggregate. However, if we consider the economy as a network of inputs and outputs, then the shape of the network matters for whether shocks get averaged out. As Acemoglu et al. (2012) discuss, interconnections between firms and sectors act as a propagation mechanism for idiosyncratic shocks throughout the economy when the input–output networks are not symmetrical. Even relatively small shocks can amplify as they cascade through the network (Baqaee, 2018).
Lucas-style reasoning would apply in symmetrical production networks, such as those represented in Figure 1. In a symmetrical network, each actor relies equally on every other actor in the network as both producer and consumer. In Figure 1(a), each network consists of a single sector (or node) that both produces and consumes output, as indicated by the curved arrow. As such, the network is symmetrical. Since each sector in Figure 1(a) is independent of the others, shocks in one sector would not spread to the others.
Figure 1(b) also represents a symmetrical network, despite interconnected sectors. Each sector relies equally on every other sector, and thus there is symmetry in the network. In Figure 1(b), the argument that diversification causes failures to net out applies. According to the Law of Large Numbers, shocks to individual sectors average out rapidly at a rate of $\sqrt n$, where n is the number of sectors in the economy (Acemoglu et al., 2012). If, for example, sector 1 represented a producer who underproduced a needed good, this error would be counteracted as the other sectors, 2 through n, adjusted their own production and consumption to make up for the error.
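A short numerical check of this diversification argument is given below; the normal shock distribution and network sizes are illustrative assumptions. In a symmetric network, the volatility of the aggregate falls at the rate $\sqrt n$, so sector-level errors wash out quickly.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0   # standard deviation of each sector's idiosyncratic shock

# Aggregate shock in a symmetric network of n sectors: the average of n
# independent shocks, whose volatility is sigma / sqrt(n).
for n in [10, 100, 1_000, 10_000]:
    shocks = rng.normal(0.0, sigma, size=(100_000, n))
    agg = shocks.mean(axis=1)            # equal weights: 1/n per sector
    print(f"n = {n:>6}: simulated volatility {agg.std():.4f}, "
          f"theory {sigma / np.sqrt(n):.4f}")
```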
Figure 2 represents an asymmetric production network, in which not all sectors rely equally on each other. A single sector may dominate production, such as sector 1 in Figure 2. If a single sector (sector 1) supplies multiple other sectors, even small changes in that sector will not necessarily average out in the aggregate. Alternatively, we could imagine a situation similar to Figure 1(b), but where one or two sectors have significant control over output. In this case, a failure of one of those major sectors would not average out because other firms could not necessarily pick up the slack (Baqaee, 2018). Thus far, the network analysis I have discussed only partially gets us to cascading failures. Figure 2 shows how a relatively minor failure will not necessarily dissipate as the effects move through the economy. If those sectors (2 through n in Figure 2) were effectively segregated from the larger economy, then the effects of the failure would remain contained to those sectors (Acemoglu et al., 2012); we would not have cascading failure. However, if those sectors were themselves producers, then the failure of sector 1 could cascade throughout the economy. Figure 3 demonstrates a network where cascades are possible.
In Figure 3, there is a sole shared supplier, sector 1, for sectors 2 through n. Sectors 2 through n are also suppliers to other sectors. In other words, sectors 2 through n are nodes, not terminals, as they are in Figure 2. Thus, production decisions or shocks in sector 1 will affect sectors 2 through n and, in turn, the sectors they serve. A failure can potentially cascade down through this network.
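The contrast between Figure 1(b) and Figure 3 can be seen in a stylized simulation. The sketch below compares an equal-weight network with a star network in which sector 1's shock also moves every downstream sector; the half-and-half influence weights are an illustrative assumption, not the Acemoglu et al. (2012) or Baqaee (2018) specifications.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000
shocks = rng.normal(0.0, 1.0, size=(100_000, n))

# Symmetric network: every sector carries equal influence weight 1/n.
w_symmetric = np.full(n, 1.0 / n)

# Star network: sector 1 supplies all others, so its shock propagates
# downstream; as a stylization, give it half the total influence.
w_star = np.full(n, 0.5 / (n - 1))
w_star[0] = 0.5

for name, w in [("symmetric", w_symmetric), ("star", w_star)]:
    aggregate = shocks @ w               # influence-weighted aggregate shock
    print(f"{name:>9}: aggregate volatility {aggregate.std():.4f}")
```

In the symmetric case the aggregate volatility is roughly $1/\sqrt n$; in the star case it stays near 0.5 no matter how large n grows, because sector 1's idiosyncratic shock never diversifies away.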
What is key here is not the size of the supplier in the network. Baqaee (2018) shows that systemic importance in a network is decoupled from firm size. Rather, what matters is the role as a supplier: the more interconnected the provider, the more likely a cascade. The logic of this point can be seen in Figure 1(a). The sectors in Figure 1(a) are monopolies in their industries. However, a failure in one sector will not cascade to other sectors since they are segregated (i.e. not interconnected) from one another. Whether sector 1 has $1 in revenue or $1 billion, failure in sector 1 will not affect sectors 2 through n.
Given the definition of an expert as one who is paid for their opinion, we can apply the same logic of cascading production network failure. We can conceptualize Figure 3 as a network of expert sectors (such as the Centers for Disease Control and Prevention (CDC), the United Kingdom's Scientific Advisory Group for Emergencies (SAGE), or hospital groups) and nonexpert sectors (such as households, firms, or legislators) rather than industry sectors. Assume sector 1 represents a shared provider of expert opinion to sectors 2 through n; sectors 2 through n use sector 1's advice either in producing their own advice or as final consumers. If sector 1 fails in its expert advice, that failure will affect the actions of sectors 2 through n and, subsequently, the consumers they serve. Given the relatively high degree of interconnectedness of sector 1, a small failure could end up cascading through the economy.
For example, consider the case of SAGE, as discussed by Koppl (2021). The pandemic models SAGE and others used in formulating their advice for the COVID-19 pandemic relied heavily on the assumption of a homogeneous population (ibid.); that is, 'all people hav[e] equal chances of mixing with each other and infecting each other' (Ioannidis et al., 2022). This assumption is inappropriate as an empirical matter since people were voluntarily social distancing and locking down before government orders came (Goolsbee and Syverson, 2021). As a modeling matter, the assumption leads to overestimating herd immunity thresholds (Britton et al., 2020; Gomes et al., 2020). As a consequence of modeling a homogeneous population, SAGE's advice to the British government was predicated on an analysis that likely overestimated the benefits of various mitigation measures. Given SAGE's significant market power in its role as advisor to the British government (Koppl, 2021), failure on its part could have an effect like that described in Figure 3: a relatively small failure (overestimating herd immunity thresholds and spread) made the experts more pessimistic in their advice. This overestimation became an input into the British government's decision-making. The government developed suboptimal policy, which in turn became an input into the decisions of individual firms and businesses within the United Kingdom.
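The direction of the bias can be seen in the standard herd-immunity arithmetic. The homogeneous-mixing threshold below is textbook; the heterogeneous variant follows the gamma-distributed-susceptibility case analyzed by Gomes et al. (2020), with the values of $R_0$ and the coefficient of variation ($CV$) chosen purely for illustration:

```latex
\text{Homogeneous mixing:}\quad
H_{\mathrm{hom}} = 1 - \frac{1}{R_0},
\qquad R_0 = 2.5 \;\Rightarrow\; H_{\mathrm{hom}} = 60\%.

\text{Heterogeneous susceptibility (Gomes et al., 2020):}\quad
H_{\mathrm{het}} = 1 - \left(\frac{1}{R_0}\right)^{1/(1 + CV^2)},
\qquad R_0 = 2.5,\; CV = 1 \;\Rightarrow\; H_{\mathrm{het}} \approx 37\%.
```

Assuming homogeneity when the population is in fact heterogeneous thus overstates the share of the population that must be infected (or vaccinated) before the epidemic recedes, and with it the projected benefits of measures aimed at reaching that threshold.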
3.2 Siloing
One aspect of cascading failure that deserves special attention is siloing, a mechanism that can start an expert failure cascade. Siloing occurs when an expert has little relevant knowledge outside their area of expertise and is thus confined to their own discipline or 'silo'. More precisely, siloing arises when the expert faces high expected costs and low expected benefits from interacting with experts from other disciplines when forming their opinion. Consequently, the expert does not interact with other silos, or dismisses insights and challenges from outside their silo.
Siloing coincides with the division of labor. As labor is divided into different jobs, specialized knowledge of those jobs forms (Koppl, 2018; Smith, [1776] 1981). Experts are trained in their fields and learn the tools favored by their colleagues. Consequently, the expert analyzes problems through their particular lens and theory and may be unaware of alternative explanations. Even if they are aware of alternatives, the expert may not understand the subtleties of other fields. Thus, models or explanations from other fields may be misunderstood or misapplied.
Siloing also creates the impression of distinct boundaries between areas of expertise. With siloing, for example, economics and sociology are two distinct fields although both study human behavior. As a consequence, an effectively siloed researcher may discount or dismiss information presented by experts in other silos. Siloing encourages treating problems as one-dimensional (X is an economic problem) rather than multi-dimensional (X is a problem with multiple aspects). Andreoni and Mylovanov (2012) show that individuals discount information when it is passed through others rather than presented directly. I argue the same mechanism is at play here: experts discount information generated in other silos relative to information presented directly from within their own silo.
In short, siloing reduces the ability of experts to process, or even be aware of, all relevant information that is part of their opinion-formation process. We need not go as far as Adam Smith, who argued, 'The man whose whole life is spent in performing a few simple operations…renders him, not only incapable of relishing or bearing a part in any rational conversation, but of conceiving any generous, noble, or tender sentiment, and consequently of forming any just judgment concerning many of even the ordinary duties of private life' or 'the great and extensive interests of his country' (Smith, [1776] 1981: 782). We must merely recognize that siloing creates barriers to information and knowledge transference between fields of expertise.
Figure 4 represents a network model of siloing in action. As in Figure 2, sector 1 represents an expert serving sectors 2 through n. The solid arrows represent recognized information transfers.
Sectors α, β, and γ represent information spaces that provide data to sector 1. For example, if sector 1 is an economic advisor, α may represent the US Census Bureau, IPUMS, academic journals, and other providers of data, information, and knowledge. The solid double arrows indicate that sector 1 uses input from sector α in its production process and provides sector α with inputs as well (e.g. published research).
However, sector 1 indirectly exchanges information with other sectors as well, perhaps unwittingly. That information is noisy as the siloed expert may not know how to interpret it. These exchanges are represented by the dashed arrows. Sectors β and γ represent sectors that are not associated with sector 1's area of expertise but can still provide useful insights if the producer of expert opinion is willing to look. To keep the analogy going, if sector 1 is an economic expert, then sectors β and γ may represent political science and psychology, respectively. An expert's silo is represented by the solid line from an information space to the producer of expert opinion.
The presence of these indirect exchanges of information among silos also explains why cascades can perpetuate. The signals that come from outside the expert's silo (the dashed lines in Figure 4) are noisy, or are perceived by the expert to carry little signal. The expert may discount the information they are seeing or fail to understand its relevance. In turn, the expert may not be aware that their advice is failing at all, or why it is failing. This point is explored further in section 5.2.
4. Institutions and cascading failure
Cascading expert failure occurs when there is a sufficiently interconnected provider of expert opinion. This interconnectedness may arise when there are high barriers to entry in a marketplace, preventing the entry of new experts and thus the formation of new network connections. Natural barriers to entry include the level of technical competence necessary to be a valuable expert (e.g. a good neurosurgeon requires sophisticated knowledge of how the brain works). Artificial barriers to entry include occupational licensing or ritualistic behavior such as completing a degree program. Artificial barriers serve to 'certify' the expert to the nonexpert. Thus, the barriers may serve a purpose by reducing search costs for a group of nonexperts, while also resulting in the expert becoming interconnected.
In a similar manner, these certifications may suggest to nonexperts that certified experts have superior information and judgment compared to other potential experts (Murphy, 2022). Experts are seen as relatively high-information, high-signal-accuracy individuals compared to nonexperts. In turn, relatively high information suggests the expert may understand the world with a higher degree of accuracy than the nonexpert (Wu, 2015). Consequently, nonexperts may follow their advice uncritically. For example, early in the COVID-19 pandemic, the CDC and other health organizations like the National Institutes of Health were frequently called on to advise policy. Newspapers uncritically reported their recommendations, and many individuals began following CDC guidelines before they acquired the force of law (Goolsbee and Syverson, 2021). Even after official mandates were repealed or expired, many individual actors continued to follow the guidance of the CDC (Finucane and McKenna, 2021). Thus, even without the force of law or regulation prohibiting the entry of other experts, these organizations were able to strongly influence opinion given their perception as high-accuracy experts.
Additionally, these experts may also influence other experts' opinions. For example, a physician may uncritically follow the advice of a board of high-ranking physicians given the board's prestige relative to the physician. This advice will, in turn, affect the advice the physician gives their patient. Consequently, experts themselves can become subject to informational cascades (Banerjee, 1992; Bikhchandani et al., 1992): the opinions and actions of earlier experts are taken together, and the experts who follow adopt the opinion without adding their own private information.
Certification can lead to the siloing phenomenon discussed above: only certain experts are 'allowed' to have opinions on a topic, and any insights from outside the silo are perceived to be low-information. Given how siloing results in noisy signals to the expert, the perception of experts as relatively high-information and high-signal-accuracy compared to the nonexpert may not hold even within their area of expertise.
Ikeda (1997: 112–118) notes the importance of ideology in the process of shaping policy and advice. Barriers to entry can also affect the ideology of experts. By enforcing specific standards, gatekeeper experts can restrict entry to generally like-minded individuals. Additionally, they can exclude (or marginalize) heterodox ideologies and opinions (Callais and Salter, 2020; Flegal, 2021). The gatekeepers can thus control, to some extent, the intellectual ideology in the market for expert opinion, reducing the diversity of opinion and increasing the likelihood of a cascade.
Institutions that encourage uniform expert advice, as opposed to a diversity of opinion, can contribute to cascading expert failure. For example, SAGE in the United Kingdom has an explicit goal of 'provid[ing] unified scientific advice on all the key issues, based on the body of scientific evidence presented by its expert participants' (SAGE, 2020). SAGE's role is to provide the single opinion of its experts, rather than full information, to the decision-makers of the United Kingdom. Given the influence and legal authority of the government, this makes SAGE an interconnected expert node.
Section 6 will discuss ways to reform or enhance current institutions to prevent cascading expert failure. But first, I will examine two cases from the COVID-19 pandemic that demonstrate cascading expert failure.
5. Two case studies of cascading expert failure
5.1 COVID test regulatory policies
To explore the effects of cascading expert failure, I examine how the Food and Drug Administration's (FDA) COVID testing regulatory policies, coupled with advice from the CDC, led to expert failure in the epidemiological testing world.
One of the central questions arising from the COVID-19 pandemic is why there was no effort to conduct randomized testing early in the pandemic (Ioannidis, 2020; Padula, 2020). Public decision-makers require reliable data to make decisions, and in an outbreak of a novel virus, randomized testing helps acquire those data (Ioannidis, 2020; Padula, 2020). Despite the success of mass testing in other countries, the United States government made no effort to randomly test the population. Instead, the CDC recommended that tests be limited to patients who had returned from China or exhibited symptoms (Centers for Disease Control and Prevention, 2020b; Jernigan and CDC COVID-19 Response Team, 2020). The advisory came, in part, from the limited quantity of tests in the United States, which stemmed from the FDA's and CDC's regulations on what tests could be used in the United States (Advisory Board, 2020). The recommendations led to unintended results for the CDC and FDA and caused suboptimal recommendations in other fields.
Randomized testing is needed to discover the characteristics of a novel disease, such as how quickly it spreads, who is most at risk, and what the infection and fatality rates are (Hu et al., 2021; Ioannidis, 2020). Even if only 70% of infected people tested return a positive result, mass testing still provides essential clues on how to combat the disease and insights for policy (Paltiel et al., 2020). However, the FDA's regulations limited the supply of tests by restricting who could produce them (Food and Drug Administration Staff, 2021) and severely limiting imports of testing equipment (Food and Drug Administration Staff, 2021; US Customs and Border Protection, 2020). With a lower supply of tests, a socially optimal policy would allocate the marginal test to its marginally higher-valued use. According to the CDC's medical experts, the higher-valued use was to test those suspected of having the disease and then to engage in contact tracing, as evidenced by their advisory (Centers for Disease Control and Prevention, 2020b). Initial responses to the COVID-19 virus treated it like a type of influenza (Ferguson et al., 2020). When a disease and its properties are well known, testing patients who exhibit symptoms is standard operating procedure (Centers for Disease Control and Prevention, 2020a), and medical professionals often recommend non-pharmaceutical interventions to limit spread (ibid.). The CDC and its leadership are primarily medical doctors. Thus, their behavior early in the pandemic is consistent with previous pandemics of known viruses.
However, the COVID-19 virus was novel. The information needed by decision-makers to formulate responses did not previously exist, nor could it be reasonably inferred. According to the statistical experts, randomized testing was the higher-valued use of the limited tests, as randomized testing would provide the needed information (Padula, 2020). Given the CDC's authority, both de jure as a regulatory body and de facto as a prominent expert body, its opinion prevailed, and the tests were allocated to testing symptomatic patients (Shear et al., 2020). Subsequently, the data collected from those tests were incorporated into policymaking.
Here the first step of cascading expert failure is taken. The tests were meant to help guide pandemic policy. However, by limiting testing to suspected cases of COVID-19, the initial results likely carried an upward bias in COVID-19 mortality and severity numbers (Ioannidis et al., 2022). The experts did not know what they needed to know about the virus. Patients hospitalized with the virus, or exhibiting symptoms that would trigger a test, are likely those with more severe cases. Thus, the initial case fatality rates were almost certainly too high, especially given asymptomatic carriers of COVID-19 (Michaels and Stevenson, 2020). Randomized testing would have helped eliminate these statistical biases and provided epidemiologists valuable information on the properties of the virus: how it spread, how fast it spread, the time from infection to symptoms, etc. Biased data and the lack of epidemiological information hindered the CDC's ability to advise on the pandemic (Rosen, 2021). Additionally, the failure affected several other significant groups of experts.
From the CDC's first instance of expert failure, a testing recommendation that produced estimates that were likely too high, the next step of the cascade followed. The disease figures generated by the CDC, the World Health Organization (WHO), and other organizations were used to build models of the virus's spread and death rates, such as the Imperial College model (Ferguson et al., 2020) and the Institute for Health Metrics and Evaluation (IHME) model (IHME COVID-19 Health Service Utilization Forecasting Team and Murray, 2020). Consequently, these modeling experts failed in their recommendations, as their models were too pessimistic given the statistically biased data being fed into them. Ioannidis et al. (2022) note that data bias undermined much of the forecasting in the COVID-19 pandemic. The initial failure of the CDC's recommendation thus cascaded into the model forecasting area of expertise, leading the epidemiologists to fail in their expert advice by producing upward-biased models.
The expert failure stemming from the CDC's initial recommendation to limit testing to patients exhibiting symptoms also had cascading effects in the realm of policy. We have already seen how the CDC's recommendation led to biased modeling. Those models, in turn, were used to inform policy. Model projections informed recommendations on lockdowns, travel restrictions, mask requirements, social distancing, hospital and nursing home visitations, and medical procedures. Given that the models' projections were likely too high, the empirical justifications for lockdowns rested on cost-benefit analyses that overstated the benefits of these policies relative to their costs. Likewise, the models did not consider that people would change their behavior in ways that rendered policies like lockdowns redundant (Goolsbee and Syverson, 2021; Leeson and Rouanet, 2021) or potentially deadly (Mulligan, 2021), since the behavioral changes would not be captured in the data given the biased sampling. In short, the cost-benefit analysis used to justify lockdowns relied on models that overestimated the benefits of lockdowns and underestimated their costs.
It should be noted that I am not arguing that lockdowns were unjustified by cost-benefit analysis; a more accurate cost-benefit analysis may still have justified lockdowns, although the case may have been more marginal or the time frame shorter. Additionally, given the heavy tail risks of a contagious disease, lockdowns may initially be justified even absent clear data (Cirillo and Taleb, 2020). My claim here is that the data used in the modeling to justify lockdowns were likely heavily distorted. The key statistical characteristics of the disease remained unknown; the information produced by the experts was no more accurate than what was known before testing. These distortions led to an overstatement of the net benefits of lockdowns and undue confidence on the part of experts in their recommendations.
Figure 5 is a visual representation of the COVID-19 expert opinion production network just discussed, set within the model developed in section 3. The CDC made a decision early in the pandemic about how tests should be used and issued guidance accordingly. That guidance influenced how testing clinics and hospitals tested patients and, consequently, the information they reported back to the CDC, as indicated by the arrow from the 'Clinics' sector back to the 'CDC' sector. The data then reported by the CDC were used by modelers to produce their advice. That advice went on to other consumers of expert opinion, shaping their behavior.
The cascading effects of the initial expert failure by the CDC are apparent. I have followed the line of failure down just one of the many paths that branch out from that decision. Much like Adam Smith's woolen coat, tracing out all the actions that spawn from that one decision regarding testing would be a difficult, if not impossible, task. Many other unforeseen consequences could be traced from the initial instance of the CDC's expert failure (see, e.g. Ravindran and Shah, 2020). If the CDC had acted differently in the early days of the pandemic and allocated tests to randomly testing the population, some of these failures could have been avoided.
The cascading expert failure by the CDC and subsequent experts discussed in this section is due to the interconnected and dominant position of the CDC as a provider of expert opinion. The policy recommendations failed to achieve their desired aim of reducing the damage caused by the virus. In some cases, like the lack of randomized testing, experts' decisions may have worsened the outbreak in the United States, given the lack of reliable data. Other policies, such as lockdowns that merely codified behavior people were already undertaking, may have failed a cost-benefit test since the benefits of the policies were likely overestimated.
5.2 Face mask recommendations during COVID
Expert failure can cascade into other, seemingly unrelated silos because decision-making processes are interconnected, as discussed in section 3.1. Relative prices transmit information to different participants in the production process such that the consumer of any given product need not know why it has become more expensive relative to other goods in order to economize on it (Hayek, 1945). Similarly, expert advice given in one area can have cascading effects on other areas, as they are all interconnected.
The confusion about the effectiveness of masks at the beginning of the COVID-19 pandemic in 2020 is an example of siloing causing cascading expert failure. In February 2020, Dr Anthony Fauci advised that most Americans did not need to wear masks to protect themselves against the coronavirus (O'Donnell, 2020). Other government expert advisors, such as the US Surgeon General (Cramer and Sheikh, 2020) and the WHO (Pan American Health Organization and World Health Organization, 2020), repeated this advice. By April, these experts had reversed course: they now recommended wearing masks as necessary to combat the spread of the coronavirus. When asked about the reversal in a June 2020 interview, Fauci stated he knew masks were effective when he gave the February advice; he advised otherwise to ensure enough masks and personal protective equipment (PPE) were available for medical personnel (Why Weren't We Wearing Masks From the Beginning? Dr. Fauci Explains, 2020).
The mixed messaging had a detrimental effect on the US government's management of the pandemic (Fauci: Mixed Messaging On Masks Set U.S. Public Health Response Back, 2020; Scheid et al., 2020), the opposite of the experts' goal and the goal of the nonexperts they were advising. The mixed messaging, combined with confusion from political leaders, gave the impression that masking advice was based on political, rather than scientific, reasoning (Ho and Huang, 2021; Kiviniemi et al., 2022; Noar and Austin, 2020). This deterioration of trust hindered the experts' ability to advise during the pandemic.
The advice failed in two other crucial ways. First, it discouraged a supply response to an increase in demand. When demand suddenly increases, prices need to rise to allocate the scarce quantity on the market. The expert advisors appeared to have a mental model of a perfectly inelastic supply curve, in which higher prices would only reallocate masks to the highest bidders, who might not be medical personnel. Their advice discounted the existence of an upward-sloping supply curve, along which firms increase production at higher prices. The initial advice thus delayed the market response that would have brought more masks to market.
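The difference between the two mental models can be made concrete with a linear supply-and-demand sketch; all functional forms and parameter values here are illustrative assumptions.

```python
def equilibrium(a, b, supply):
    """Clear a linear demand curve P = a - b*Q against a supply schedule
    Q(P) by bisecting on price (crude, but enough for a sketch)."""
    lo, hi = 0.0, a
    for _ in range(100):
        price = (lo + hi) / 2
        if supply(price) > (a - price) / b:   # excess supply: price too high
            hi = price
        else:
            lo = price
    return price, (a - price) / b

inelastic = lambda p: 100.0            # vertical supply: quantity is fixed
elastic = lambda p: 50.0 + 5.0 * p     # upward-sloping supply

for label, s in [("inelastic", inelastic), ("elastic", elastic)]:
    p0, q0 = equilibrium(a=120.0, b=1.0, supply=s)   # pre-shock demand
    p1, q1 = equilibrium(a=180.0, b=1.0, supply=s)   # demand shock
    print(f"{label:>9}: price {p0:.1f} -> {p1:.1f}, "
          f"quantity {q0:.1f} -> {q1:.1f}")
```

Under the inelastic mental model, the demand shock only raises the price (20 to 80 here) and reshuffles a fixed 100 units toward the highest bidders; with an upward-sloping supply curve, the price rise itself calls forth more output (roughly 108 to 158 units), which is exactly the response the February advice suppressed.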
Second, the advice did not achieve the goal of preventing hoarding and reserving supply for medical workers; the mixed-messaging failure may even have encouraged hoarding. If prices do not rise, a shortage emerges, and when the shortage persists, consumers tend to hoard in order to insure themselves against unreliable availability (Chakraborti and Roberts, 2021b). Existing price controls and purchase quotas on many products, including PPE, encouraged hoarding by consumers of these increasingly hard-to-find products (ibid.). Additionally, extra trips to stores to hunt for products likely increased the spread of COVID in early 2020 (Chakraborti and Roberts, 2021a).
Thus, the public health experts committed expert failure: their advice, which was supposed to reduce the spread of the disease, ended up contributing to the spread of COVID-19 in the early days of the pandemic. The reversal on masks, coupled with local and state mask mandates, led to a sudden increase in demand. The shortages that arose sent noisy signals to the advisors; because they were effectively siloed and did not seek input from economists, they did not see that the shortages were the result of their advice. Taking a longer view, the use of price controls to prevent hoarding may reduce the ability of the US to manage future pandemics, as price controls discourage building inventories against demand shocks (Zycher et al., 1991).
Figure 6 represents siloing during the COVID-19 pandemic. The public health experts' decision to advise against masks, even though they knew masks would be necessary to limit COVID, was based on the information and interpretation developed in their silo, labeled 'Public Health'. However, their advice also relied on insights from at least two other silos: economics and psychology. Economics has insights into how resources are allocated following a sudden increase in demand; psychology has insights into how people react to sudden shortages and rapidly changing advice. The experts were unaware of these insights because they did not see value in interacting with those fields, even though their advice engaged questions within those fields. Consequently, the signals the experts received were very noisy, and they were unaware that it was their own advice causing the failure.
6. Reforms that can prevent cascading expert failure
A goal of expert advice is to help the nonexpert become more informed. Highly interconnected and siloed experts can work against this goal by (unintentionally) providing low-quality, low-information advice and causing a cascade of failure. Therefore, we must consider institutions that can prevent the concentration of expert power, such as that depicted in Figure 3, and increase the information available to the nonexpert. In section 4, I discussed two institutional structures that make cascading expert failure more likely: uniform expert advice and certification regimes that raise barriers to entry. The proposals I discuss here focus on those institutional structures.
While uniform expert advice may be expedient, the speed comes at a cost in accuracy: uniform advice works against the goal of increasing the information available to the nonexpert. Milgrom and Roberts (1986) developed a model showing that experts with 'strongly opposed' interests can increase the quantity and informativeness of information available to the nonexpert. They show how even a nonexpert who naïvely accepts all information given to them (i.e. they do not question the information themselves) comes to be fully informed (see also Gentzkow and Kamenica, 2017a). In an adversarial setting like a common-law courtroom, one side must win and the other must lose. Thus, if the defendant's expert has information that could sway the nonexpert (e.g. the judge) to their side, the plaintiff's expert has an incentive to reveal more information to sway the nonexpert back. Consequently, more information is revealed to the nonexpert.
The theoretical equilibrium is full information revelation to the nonexpert. An adversarial arrangement prevents the monopolization of expertise, allows for multiple producers of expert opinion, and increases the quality of information, preventing a cascade.
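A toy version of this unraveling logic, not Milgrom and Roberts' (1986) formal model: evidence is verifiable, each side discloses whatever helps it, and a naive decision-maker simply averages what is put in front of them. The fact distribution and counts are illustrative assumptions.

```python
import random

rng = random.Random(42)

# Verifiable pieces of evidence: positive values favor the plaintiff,
# negative values favor the defendant. The full-information judgment
# is the mean over all the facts.
facts = [rng.gauss(0.0, 1.0) for _ in range(20)]
full_information = sum(facts) / len(facts)

# Adversarial experts: each discloses exactly the facts that help them,
# so between them every fact gets revealed.
plaintiff_side = [f for f in facts if f > 0]
defendant_side = [f for f in facts if f <= 0]
disclosed = plaintiff_side + defendant_side
adversarial_judgment = sum(disclosed) / len(disclosed)

# A single unopposed expert (say, the plaintiff's) reveals only the
# favorable facts, leaving the naive decision-maker with a skewed view.
one_sided_judgment = sum(plaintiff_side) / len(plaintiff_side)

print(f"full information:       {full_information:+.3f}")
print(f"adversarial disclosure: {adversarial_judgment:+.3f}")  # matches full info
print(f"one expert, unopposed:  {one_sided_judgment:+.3f}")    # biased upward
```

Because the two experts' disclosures partition the evidence, even a decision-maker who questions nothing ends up at the full-information judgment; drop one adversary, and the same naive procedure produces a biased verdict.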
Additionally, with more access to information, nonexperts are better positioned to evaluate the quality of the advice experts give relative to the nonexperts' own goals. Even if the expert's advice, if taken, would lead to failure, the nonexpert can better judge the advice and opt not to take it, limiting a cascade.
Two studies from law support my contention that adversarial competition among experts increases the information available to the nonexpert. Lind et al. (1973) studied the information purchased and conveyed by lawyers under three scenarios: client-oriented lawyers versus client-oriented lawyers (an adversarial system), court-oriented lawyers versus court-oriented lawyers (an inquisitorial system), and court-oriented lawyers versus client-oriented lawyers (a 'mixed' system). The amount of information lawyers 'purchased' did not differ across the regimes. However, lawyers operating under the adversarial regime conveyed more information to their clients (even when that information was detrimental to the client's interests) than lawyers under the inquisitorial or 'mixed' systems. In other words, the experts (lawyers) provided more information to the nonexperts (their clients) under an adversarial system, helping them make more informed decisions.
More recently, Block et al. (2000) directly test information revelation by contesting parties under the Milgrom and Roberts (1986) model (the adversarial model I propose) and Tullock's (1980) inquisitorial model. They find the inquisitorial model reveals more information when information is private, that is, when the contesting parties do not know what the other parties know. However, when information is correlated (each party has some clue that the other party possesses information that may discredit them), the parties reveal more information under the adversarial regime. In most policy discussions requiring expert opinion, information is likely correlated, as expert witnesses are aware of differing interpretations and competing theories.
Further, this type of competition would be fairly easy to generate. It does not take many competitors to make monopoly firms behave as though they operate in a competitive market (Bain, 1954; Baumol et al., 1983, 1988; Kessel, 1971). Indeed, even network firms can behave as though they face competition while commanding a significant market share (Boudreaux and Folsom, 1999). The Milgrom and Roberts model uses only two experts, and full information revelation is achieved. It is not just the number of experts but also the adversarial nature of the competition that generates the full-information result.
One may argue that full information revelation to the nonexpert could result in information overload. Information overload is unlikely in the market for expert opinion because the nonexpert is paying for the information. Information overload is an externality that occurs because human attention is unpriced in most information transmission scenarios; information becomes detrimental when its average value is declining (Zandt, 2004). Overload is typical in advertising, where the individual is bombarded with information whether they pay for it or not. However, when the nonexpert purchases the information, their offered price includes an estimation of their attention span and capabilities. A rational actor would not purchase additional information once the marginal cost exceeds the (estimated) marginal benefit, and thus would not experience declining average value of information. Additionally, competing experts have an incentive to prevent information overload: to 'win' the business of the nonexpert, the expert is incentivized to make their information understandable to the nonexpert (Koppl, 2018).
Preventing cascading failure resulting from certification is a trickier problem. As discussed above, experts present an asymmetric-information problem: they are better informed in their area of expertise than nonexperts. Thus, certifications do serve an informational purpose, albeit at an elevated risk of cascading failure and information cascades. Certifications and 'brand names' (such as a Ph.D. from Harvard) can reduce information quality problems (Akerlof, 1970). Consequently, removing certification from the market for expert opinion would reduce the quantity of information and expert opinion in the marketplace.
Rather than eliminating certification, increasing the number of voices in the market for expert opinion can prevent these cascades. Wu (2015) shows that having a small number of low-signal-accuracy nonexperts in the opinion-formation process can reduce the probability of informational cascades. Wu responds to the Bikhchandani et al. (1992) model, in which each actor moves in sequence after observing all the behaviors ahead of them. In that model, a low-accuracy individual increases the number of decision-makers needed to trigger a cascade. In my network model, the extra low-accuracy individual is more akin to an additional node in the network that can absorb and stop a cascade.
Wu does note that the addition of the low-accuracy individual 'decreases the overall information quality by a little' (Wu, 2015: 408). However, if the experts and nonexperts are in conversation with one another, as in the Milgrom and Roberts (1986) model, the addition is less likely to lower information quality and may even raise it (Gentzkow and Kamenica, 2017a).
At this point, it may be tempting to read my argument as nothing more than a call to increase the number of competing experts in the market. Such an interpretation is incorrect. Merely increasing the number of experts may not improve outcomes beyond a certain point (Koppl et al., 2008). Furthermore, groupthink may dominate even with multiple experts, preventing effective adversarial competition among them (Koppl, 2021; Koppl and Murphy, 2022). Indeed, Figure 1(a) has monopoly experts, but since each expert's opinion is bought by only one sector, cascades are unlikely. Instead, it is the market structure of expert opinion and the interconnectedness of experts that matter for cascades (Baqaee, 2018). Allowing free entry and exit of potential competitors will tend to reduce cascading failure.
7. Conclusion
Koppl (2018) discusses expert failure at length, helping economists explore why bad policy develops and persists. Using cascading network failure modeling, I expand Koppl's analysis to include a dynamic dimension: how failures can spread over time and across areas of expertise. I show how even relatively small failures can cascade through a network and have significant aggregate impacts across sectors.
Compared to a competitive marketplace of expert opinion, where many experts from many fields compete for consumers, interconnected experts are more likely to create such cascades. Additionally, siloed yet interconnected experts may provide lower-quality advice (Gentzkow and Kamenica, 2017a). However, there may be benefits to such a monopolized, interconnected expert: if the production of advice in the market for expert opinion generates sufficiently high negative externalities, then one would want a monopolist to restrict output. We should not dismiss monopoly in this market out of hand, but we should be aware of the potential dangers of such concentration.
Experts are a necessary part of life. Just as the division of labor and gains from trade improve economic outcomes, so does the division of knowledge. However, such division carries dangers. Smith ([1776] 1981: 782) famously worried that the division of knowledge taken too far could leave a person 'incapable of relishing or bearing a part in any rational conversation, but of conceiving any generous, noble, or tender sentiment, and consequently of forming any just judgment concerning many of even the ordinary duties of private life'. Understanding the role of experts and expertise, particularly their limits and failures, will help us researchers improve our own expert advice and improve the institutional arrangements of expertise in policymaking and general advising.
Acknowledgement
I thank John Palmer, Michael Enz, Abigail Devereaux, Roger Koppl, Art Carden, Alex Tabarrok, participants at the 58th Annual Meetings of the Public Choice Society, and three anonymous referees for valuable feedback.