John Stuart Mill is central to parallel debates in mainstream contemporary political epistemology and philosophy of federalism concerning the epistemic dimension(s) of legitimate authority. Many scholars invoke Mill to support epistemic arguments for democratic decision-making and decentralized federalism as a means of conferring democratic legitimacy. This article argues that Millian considerations instead provide reason to reject common epistemic arguments for decentralized federalism. Combining Mill's own insights about the epistemic costs of decentralization with recent work in philosophy, politics, and economics undermines purportedly Millian arguments for federalism focused on political experimentation, diversity, and participation. Contrary to many interpretations, Millian considerations weaken, rather than strengthen, arguments for federalism. Any valid justification for federalism must instead rest on non-epistemic considerations. This conclusion is notable regardless of how one interprets Mill. But it also supports Mill's stated preference for local decisions subject to central oversight.
I offer two interpretations of independence between experts: (i) independence as deciding autonomously, and (ii) independence as having different perspectives. I argue that when experts are grouped together, independence of both kinds is valuable for the same reason: each reduces the likelihood of erroneous consensus by enabling a greater variety of critical viewpoints. In offering this argument, I show that a purported proof from Finnur Dellsén that groups of more autonomous experts are more reliable does not work. It relies on a flawed ceteris paribus assumption, as well as a false equivalence between autonomy and probabilistic independence. A purely formal proof that more autonomous experts are more reliable is in fact not possible – substantive claims about how more autonomous groups reason are required. My alternative argument for the value of autonomy between experts rests on the claim that groups that triangulate a greater range of critical viewpoints will be less likely to accept hypotheses in error. As well as clarifying what makes autonomy between experts valuable, this mechanism of critical triangulation gives us reason to value groups of experts that cover a wide range of relevant skills and knowledge. This justifies my second interpretation of expert independence.
This paper introduces the idea of testimonial compression: the audience's enforcement of how testifiers may engage in testimonial exchange. More specifically, testimonial compression occurs when an audience requires the speaker to engage in the limited format of simple testimony. Using examples of dialogue from healthcare settings and scholarship on health communication, I illustrate the concept further. Dimensions of testimonial compression include its directness, the roles of audience enforcement and testifier resistance, the fuzziness surrounding structural and agential aspects, and ways in which compression can be negotiated between interlocutors. I further detail some rippling effects of testimonial compression, including impacts on patient-provider communication and the potential for epistemic harms (specifically informational and participatory prejudices). While testimonial compression is not unique to healthcare contexts, many contextual factors specific to medical discourse make the concept especially useful there.
Explanations of the CMB exhibited varied and often opposed epistemological motivations, and the models were correspondingly diverse. As the chapter clarifies, the explanations ranged from subsidiary accounts to fully worked-out cosmological models, through probing toy-models, and even to the deliberate omission of modeling, relying on regular astrophysical insights alone. An admirable epistemic and observational diversity was achieved in the face of an emerging trend of ever-more centralized observational and theoretical programs that came to dominate much of physical science, including cosmology.
When thinking about designing social media platforms, we often focus on factors such as usability, functionality, aesthetics, ethics, and so forth. Epistemic considerations have rarely been given the same level of attention in design discussions. This paper aims to rectify this neglect. We begin by arguing that there are epistemic norms that govern environments, including social media environments. Next, we provide a framework for applying these norms to the question of platform design. We then apply this framework to the real-world case of long-form informational content platforms. We argue that many current long-form informational content platforms are epistemically unhealthy. The good news? We provide concrete advice on how to take steps toward improving their health! Specifically, we argue that they should change how they verify and authenticate content creators and how this information is displayed to content consumers. We conclude by connecting this guidance to broader issues about the epistemic health of platforms.
There are epistemic manipulators in the world. These people are actively attempting to sacrifice epistemic goods for personal gain. In doing so, manipulators have led many competent epistemic agents into believing contrarian theories that go against well-established knowledge. In this paper, I explore one mechanism by which manipulators get epistemic agents to believe contrarian theories. I do so by looking at a prominent empirical model of trustworthiness. This model identifies three major factors that epistemic agents look for when trying to determine who is trustworthy. These are (i) ability, (ii) benevolence, and (iii) moral integrity. I then show how manipulators can manufacture the illusion that they possess these factors. This leads epistemic agents to view manipulators as trustworthy sources of information. Additionally, I argue that fact-checking will be an ineffective – or even harmful – practice when correcting the beliefs of epistemic agents who have been tricked by this illusion of epistemic trustworthiness. I suggest that in such cases we should use an alternative correction, which I call trust undercutting.
In this paper I offer a characterization of the intellectual virtue of social inquisitiveness, paying attention to its difference from the individual virtue of inquisitiveness. I defend that there is a significant distinction between individual and social epistemic virtues: individual epistemic virtues are attributed to individuals and assessed by the quality of their cognitive powers, while social epistemic virtues are attributed to epistemic communities and are assessed by the quality of the epistemic relations within the communities. I begin by presenting Lani Watson's characterization of the (individual) practice of questioning and its related intellectual virtue, inquisitiveness. While she does not employ normative language, I show that her description can be reconstructed through four norms. Then, based on an account of epistemic communities, I defend that, while epistemic virtues attributable to individuals have norms regulating cognitive powers, epistemic virtues attributable to epistemic communities have norms regulating social epistemic interactions and shared epistemic responsibility. I then present a robust characterization of the epistemic virtue of social inquisitiveness through its social epistemic norms: DISTRIBUTION, ACCESSIBILITY, SOCIAL SINCERITY, SOCIAL CONTEXT, and FREQUENCY. I respond to two possible objections to my account and conclude by offering suggestions to broaden the scope of the epistemology of questioning.
It is uncontroversial that something goes wrong with the blaming practices of hypocrites. However, it is more difficult to pinpoint exactly what is objectionable about their blaming practices. I contend that, just as epistemologists have recently done with blame, we can constructively treat hypocrisy as admitting of an epistemic species. This paper has two objectives: first, to identify the epistemic fault in epistemically hypocritical blame, and second, to explain why epistemically hypocritical blamers lose their standing to epistemically blame. I tackle the first problem by appealing to an epistemic norm of consistency. I address the second by arguing that the epistemically hypocritical blamer commits to opting out of the set of shared epistemic standards that importantly underlies our standing to epistemically blame. I argue further that being epistemically hypocritical undermines a blamer's standing even to judge others epistemically blameworthy.
This paper will consider the extent to which patients' dependence on clinical expertise when making medical decisions threatens patient autonomy. I start by discussing whether or not dependence on experts is prima facie troubling for autonomy and suggest that it is not. I then go on to consider doctors' and other healthcare professionals' status as ‘medical experts’ of the relevant sort and highlight a number of ways in which their expertise is likely to be deficient. I then consider how this revised picture of medical expertise should lead us to view the potential threat to patient autonomy that results from depending on such ‘experts’. I argue that, whether or not patients are aware of the limitations of medical expertise, in practice it is difficult to do other than defer to medical advice, and this presents a threat to patient autonomy that should be addressed. I conclude by suggesting some ways in which this threat to autonomy might be mitigated.
This paper examines the relatively underexplored relationship between epistemic wrongs and epistemic harms in the context of epistemic injustice. Does the presence of one always imply the presence of the other? Or, is it possible to have one without the other? Here we aim to establish a prima facie case that epistemic wrongs do not always produce epistemic harms. We argue that the epistemic wrongness of an action should never be evaluated solely based on the action's consequences, viz. the epistemic and practical harms suffered by the wronged party. Instead – as we shall show – epistemic harms do not necessarily follow from epistemic wrongs. To conclude, we suggest ways in which extant accounts of epistemic wrongs and epistemic harms as they cash out in epistemic injustice contexts might be refined in light of our argument.
What a scientific community holds to be its core beliefs changes over time. Gilbert, and Weatherall and Gilbert, argue that a community's core beliefs should be understood as a collective belief formed by a joint commitment and that these core group beliefs are difficult to change, as changing them would require a new joint commitment to be formed. This chapter argues that the primary normative constraints on group belief revision are the weight of the evidence being considered by the group, not the normative constraints that arise from joint commitments. This chapter sketches a positive view of how epistemic groups may respond to new evidence by looking to Kuhn's own account of how crises arise and are resolved in science.
According to one influential tradition, to assert that p is to express a belief that p. Yet how do assertions provide strong evidence for belief? Philosophers have recently drawn on evolutionary biology to help explain the stability of assertive communication. Mitchell Green suggests that assertions are akin to biological handicaps. Peter Graham argues against the handicap view and instead claims that the norms of assertion are deterrents. Contra Graham, I argue that both mechanisms may play a role in assertive communication, although assertions as deterrents will often fail to provide strong evidence for belief.
The natural sciences produce knowledge. Not necessarily because they do experiments, or because they use precise measurement devices, or because they investigate reality, but because they have developed highly conservative epistemic cultures whose members are overwhelmingly concerned with what the community thinks. My purpose in this chapter is to support this claim as one component of a more general conception of disciplinary knowledge, a species of knowledge of which both the natural sciences and the humanities have historically been able stewards. If we use the natural sciences as a model for what real knowledge looks like, the question, “Do the humanities create knowledge?” turns not so much on the degree to which the humanities employ the Scientific Method, but on the degree to which they partake of the social processes by which disciplinary knowledge is achieved.
A proof ${\cal P}$ of a theorem T is transferable when it's possible for a typical expert to become convinced of T solely on the basis of their prior knowledge and the information contained in ${\cal P}$. Easwaran has argued that transferability is a constraint on acceptable proof. Meanwhile, a proof ${\cal P}$ is fixable when it's possible for other experts to correct any mistakes ${\cal P}$ contains without having to develop significant new mathematics. Habgood-Coote and Tanswell have observed that some acceptable proofs are both fixable and in need of fixing, in the sense that they contain non-trivial mistakes. The claim that acceptable proofs must be transferable seems quite plausible. The claim that some acceptable proofs need fixing seems plausible too. Unfortunately, these attractive suggestions stand in tension with one another. I argue that the transferability requirement is the problem. Acceptable proofs need only satisfy a weaker requirement I call “corrigibility.” I explain why, despite appearances, the corrigibility standard is preferable to stricter alternatives.
Edited by Jonathan Fuqua, Conception Seminary College, Missouri; John Greco, Georgetown University, Washington DC; Tyler McNabb, Saint Francis University, Pennsylvania
The key idea of Reformed Epistemology is that religious beliefs can be rational even if they are held noninferentially, without being based on arguments. The first part of this chapter clarifies in more detail what Reformed Epistemology says and how the view has evolved in three stages over the past forty years. The first stage was concerned with ground-clearing and initially characterizing the view; the second stage included book-length definitive statements of the view by William Alston and Alvin Plantinga. The third stage consists of twenty-first-century developments of the view, connecting it with, among other things, the cognitive science of religion, cognitively impacted experiences, epistemic intuition, and religious testimony. The second part of the chapter briefly presents three important objections to Reformed Epistemology – having to do with the need for independent confirmation, belief in the Great Pumpkin, and religious disagreement – and considers what can be said in response to them.
In recent works, Stephen John (2018, Social Epistemology 32(2), 75–87; 2019, Studies in History and Philosophy of Science Part A 78, 64–72) has deepened the social epistemological perspective on expert testimony by arguing that science communication often operates at the institutional level, and that at that level sincerity, transparency, and honesty are not necessarily epistemic virtues. In this paper I consider his arguments in the context of science journalism, a key constituent of the science communication ecosystem. I argue that this context reveals both the weakness of his arguments and a need for further analysis of how non-experts learn from experts.
Although it is widely thought that more education is a reliable remedy for democratic ills, I argue that it is not always so. The problem arises because education plays a role in shaping what I call people’s trust networks: the set of sources of information they regard as trustworthy. A democratic society can falter if its citizens live on isolated epistemic islands (i.e., occupy nonoverlapping trust networks). If the educational system serves to reinforce one kind of trust network rather than help people build bridges between trust networks, education will rearrange the population of these islands but potentially make the underlying topography less democracy-friendly. The chapter makes this case and then looks at some potential educational remedies to the problem it outlines.
Moral grandstanding is the use of moral talk for self-promotion. Recent philosophical work assumes that people can often accurately identify instances of grandstanding. In contrast, we argue that people are generally unable to reliably recognize instances of grandstanding and that, as a result, we are typically unjustified in judging that others are grandstanding. From there we argue that, under most circumstances, to judge others as grandstanders is to fail to act with proper intellectual humility. We then examine the significance of these conclusions for moral discourse. More specifically, we propose that moral discourse should focus on others' stated reasons and whether their actions manifest respect.
This paper argues for two propositions. (I) Large asymmetries of power, status and influence exist between economists. These asymmetries constitute a hierarchy that is steeper than it could be and steeper than hierarchies in other disciplines. (II) This situation has potentially significant epistemic consequences. I collect data on the social organization of economics to show (I). I then argue that the hierarchy in economics heightens conservative selection biases, restricts criticism between economists and disincentivizes the development of novel research. These factors together constrain economics’ capacity to develop new beliefs and reduce the likelihood that its outputs will be true.
Criticism can sometimes provoke defensive reactions, particularly when it implicates identities people hold dear. For instance, feminists told they are upholding rape culture might become angry or upset because the criticism conflicts with an identity that is important to them. These kinds of defensive reactions are a primary focus of this paper. What is it to be defensive in this way, and why do some kinds of criticism or implied criticism tend to provoke this kind of response? What are the connections between defensiveness, identity, and active ignorance? What are the social, political, and epistemic consequences of the tendency to defensiveness? Are there ways to improve the situation?