
Participation and Objectivity

Published online by Cambridge University Press:  08 August 2022

Inkeri Koskinen*
Affiliation:
Practical Philosophy, University of Helsinki, Helsinki, Finland

Abstract

Many philosophers of science have recently argued that extra-academic participation in scientific knowledge production does not threaten scientific objectivity. Quite the contrary: Citizen science, participatory projects, transdisciplinary research, and other similar endeavours can even increase the objectivity of the research conducted. Simultaneously, researchers working in fields in which such participation is common have expressed worries about various ways in which it can result in biases. In this article I clarify how these arguments and worries can be compared, and how extra-academic participation can both increase and threaten the objectivity of the research conducted.

Type: Article
Copyright: © The Author(s), 2022. Published by Cambridge University Press on behalf of the Philosophy of Science Association

1. Introduction

Several philosophers of science have in recent years presented arguments linking extra-academic participation in science to scientific objectivity. Some hold that involving extra-academic partners in scientific knowledge production does not threaten the objectivity of the research conducted, and that such participation is commendable in a democratic society (e.g., Douglas 2005; Kitcher 2011). Others claim that it can in fact increase objectivity (e.g., Harding 2015; Wylie 2015). Though philosophers also endorse such participation for many other reasons, arguments related to objectivity play an important role in the philosophical discussion about citizen and stakeholder participation in science. At the same time, some scientists involved in citizen science or in participatory or transdisciplinary projects—or working in fields in which these or similar approaches are common—have expressed worries about biases in such research. While many of these worries have been addressed, some continue to be worthy of notice.

A better understanding of both the multiformity of extra-academic participation in scientific knowledge production today and of the various ways in which it affects the objectivity of the research conducted will make clear in what kind of contexts the philosophical arguments previously mentioned are relevant, and why the worries can be equally relevant. Reaching such an understanding, however, requires a careful analysis of the arguments and worries, as they incorporate different underlying assumptions about what “objectivity” means. In the analysis I will use the risk account of scientific objectivity (Koskinen 2020, 2021), as it offers tools that allow me to meaningfully compare the different meanings of objectivity that are at play in the discussions I will be examining. I will argue that extra-academic participation can both increase and threaten the objectivity of the research conducted, sometimes even simultaneously.

I begin with an overview of the large and heterogeneous field of extra-academic participation in science. I then introduce two sets of philosophical arguments linking participation to objectivity and contrast these arguments with worries about participation threatening objectivity. After presenting the risk account of scientific objectivity, I use it to compare the arguments and worries and to dissolve the apparent tensions. Finally, I suggest some general guidelines for ensuring objectivity in research that breaks the boundaries of science by engaging extra-academic partners in scientific knowledge production in various ways.

2. Extra-academic participation in science

In many academic fields it is common today to involve extra-academic partners in active roles either in decision-making processes related to science, or in some stages of the research process. Such involvement can take many forms. For example, representatives of the general public can be invited to citizen panels or juries to discuss ethical issues arising from new technologies, or experts from diverse areas of society can work with scientists in complicated, solution-oriented, transdisciplinary collaborations. On the one hand, activist researchers conduct participatory projects together with members of socially marginalized communities, and, on the other, universities and research institutions compete in devising ways to increase the number of coresearch initiatives with commercial partners. Some aspects of these developments have received attention in the philosophy of science, but many important discussions about them have happened in other fields. I will now sketch a general overview of the phenomena I wish to discuss based on this multidisciplinary literature. This will enable me to make some distinctions that will be of use in the next section, where I return to philosophical arguments about objectivity.

My focus in this article will be on the kind of participation where extra-academic partners have an active role in research—where they take part in the practice of scientific knowledge production. This kind of active participation is but a part of a large cluster of phenomena that also includes less hands-on approaches; importantly, extra-academic partners also steer science through funding (see, e.g., Holman and Elliott 2018; Fernández Pinto 2021). Here, however, I am interested in the kind of concrete, direct participation in scientific knowledge production that we see, for instance, in citizen science projects where volunteers gather data, or transdisciplinary collaborations where stakeholders and experts from many areas of society work side by side with scientists. But to better understand this kind of participation, we must understand it as a part of a broader trend of intensifying extra-academic impact on science.

In the literatures discussing different aspects of this active breaking of the boundaries of science by introducing extra-academic partners in the research process, the change is often viewed as resulting from a change in the standing of science in society. In the eyes of the general public in many countries, the idea of science as a politically neutral source of reliable knowledge has become questionable (Maassen and Weingart 2005; Jasanoff 2017). While the crucial importance of scientific knowledge production in contemporary societies is obvious, it has become less obvious to citizens and politicians that merely funding science would automatically benefit society. Such doubts have led to demands that in a democratic society, science must be steered toward societally important goals. There is, however, no general agreement on what these goals might be. For instance, patient activist groups demand that their viewpoints and interests be considered in research even when this is not in the interest of the medical industry, and environmental activists call into question ties between science and industry, while many politicians press for innovations and clear economic impact through research collaborations with industry partners (Epstein 1998; Bucchi and Neresini 2008). Extra-academic involvement in science is a multiform response to this multiform demand. It is taken to lower the barriers between researchers and laypeople; point research toward urgent, socially relevant questions and problems; and increase the possibilities for both the general public and diverse stakeholder groups to make their voices heard in decisions about science and technology that have societal consequences (Gibbons et al. 1994; Jasanoff 2003, 2017; Maassen and Weingart 2005).

This broad trend has resulted in a multitude of new conceptualizations that try to grasp interactions between science and society in our times, such as “mode 2,” “postnormal science,” “triple helix,” and “responsible research and innovation,” as well as in the development of new kinds of funding instruments (Funtowicz and Ravetz 1993; Gibbons et al. 1994; Etzkowitz and Leydesdorff 1995; Stilgoe and Guston 2017). And, of particular interest here, it has led to the development of a continually growing number of approaches that propose engaging extra-academic partners in scientific knowledge production, such as transdisciplinarity, citizen science, participatory action research, and diverse forms of activist research (Whyte 1990; Epstein 1998; Hirsch Hadorn et al. 2008; Kimura and Kinchy 2016). In addition to the general science policy trends and analyses mentioned in the preceding text, their emergence has been influenced by developments outside academia, for example by processes of democratization in policy making (Maassen and Weingart 2005), and by management and business innovations, such as cocreation and crowdsourcing (Prahalad and Ramaswamy 2000; Howe 2006). While some of the approaches are quite clearly defined, more often the terminology is not well established, and similar projects can be called participatory research in one field and citizen science in another.

Discussions about these approaches rarely address them all as a whole. Rather, the different methods and practices are often discussed under broad rubrics such as engagement, participation, or democratization. To grasp some fundamental differences hidden by such terms, we should pay attention to the different roles extra-academic participants or partners have in the different forms of collaboration and engagement.

For the purposes of this article, I suggest we differentiate between (1) citizen engagement, (2) stakeholder engagement, and (3) collaboration with extra-academic experts (for alternative classifications for different purposes, see, e.g., Braun and Schultz 2010; Eigi forthcoming). In practice, these basic forms of participation are often mixed (Epstein 1998; Collins and Evans 2002). For example, in many transdisciplinary projects the extra-academic participants have a dual role as stakeholders who bring in their interests and as experts who hold valuable knowledge. However, the distinction is conceptually important, as the aims in these three basic types of extra-academic involvement in science are often different (Collins and Evans 2002; Solomon 2009).

Citizen engagement in science is often linked to the idea that science should be democratized: Members of the general public should be more in touch with science and with the ways in which its results are used. Typically such engagement aims at transparency and at involving citizens in decisions about goals and practices in science and technology. The impact goals of projects and programs that involve citizens often include democratic steering of research that addresses socially important problems and strengthening public trust in science (Bucchi and Neresini 2008). In practice, citizen involvement typically takes one of two basic forms: citizen panels or citizen science. Citizen panels often address ethical and practical issues related to the production and use of scientific knowledge. The participants are provided with information and their views are heard. While citizen panels can sometimes contribute to scientific knowledge production, most often they are used in the interface between science, technology, and policy, and one of their central aims is to increase trust in all three (Maassen and Weingart 2005). While I will mention citizen panels several times, my focus here is on science rather than policy, and therefore I will talk more about the latter type of citizen engagement, the one that I call citizen science. In citizen science, the participants take part in scientific knowledge production. In one of its most typical forms, citizen volunteers are engaged in data collection, which has significantly increased its scale in many fields. Sometimes citizen volunteers can also be involved in more demanding tasks in the research process, or even influence research design (Cohn 2008; Bonney et al. 2009; Dickinson et al. 2010; Elliott and Rosenberg 2019).

Both in citizen panels and in citizen science projects the organizers quite often have reason to attempt to engage disinterested citizens, not activists who might try to promote their own interests or agenda. Therefore, the organizers are in such cases careful to make sure that the extra-academic participants do not represent any advocacy group (“Rise of the Citizen Scientist” 2015; Braun and Schultz 2010; Eigi forthcoming). However, researchers may also wish to involve stakeholders in scientific knowledge production. For instance, in projects that are supposed to inform policy, the extra-academic participants can represent groups that will be affected if the policies are changed. This is common in development projects. In practice, it often means hearing the stakeholders several times during the research process. But as in citizen engagement, the degree of involvement varies (Bonney et al. 2009). When the extra-academic partners are stakeholders, they can even become actual members of the research team. The idea in this kind of participatory research is to take the interests and worries of the stakeholders into account in research, and to produce knowledge that is thus well suited for the policy decisions at hand. In activist research the researchers represent a stakeholder group, and the aim is often to question established practices and policies (Whyte 1990; Hess et al. 2008; Koskinen and Rolin 2019).

As noted, stakeholders are often also extra-academic experts: For instance, stakeholders involved in a participatory development project aiming to improve living conditions in a specific environment can have valuable experiential knowledge about that environment. But not all extra-academic experts with whom scientists collaborate are stakeholders. This is not necessarily the case, for example, in research collaborations with civil servants or artists. When scientists cooperate with extra-academic experts, the participants typically join forces in multisectoral research collaborations, often to solve pressing and complex problems. This is particularly common in transdisciplinary research, that is, solution-oriented research where the problems are framed in cross-disciplinary or even extra-academic terms, and research projects typically include researchers from many fields and extra-academic partners who can be stakeholders, experts, or both (Hirsch Hadorn et al. 2008; Koskinen and Mäki 2016). In practice, transdisciplinary collaborations or coresearch projects with extra-academic experts can resemble interdisciplinary projects. The difference is subtle. As in interdisciplinary research, extra-academic experts naturally bring in their knowledge and skills. But it is often also assumed that their involvement will steer research toward the practical aspects of the problems at hand, as their expertise is typically linked to the use of knowledge rather than its production (Hirsch Hadorn et al. 2008).

To summarize, extra-academic participation in science is generally believed to increase the (direct, easily recognizable) societal impact of academic research. Depending on the type of participation and the roles the participants have in research, this can happen in many different ways. Participation can also entail various epistemic benefits, ranging from the possibility of having a workforce that enables large-scale data collection, to the chance of complementing scientific expertise with extra-academic expertise. My focus will now be on a subset of these benefits, and on related threats—that is, benefits and threats pertaining to scientific objectivity.

3. Participation not threatening, increasing, and threatening objectivity

Now that we have a general idea of the multifaceted phenomenon of extra-academic participation in science, I will proceed to summarize some arguments that link participation to objectivity, on the one hand, and to biases, on the other. While doing so, I will pay attention to the different forms of participation indicated in these arguments, using the tripartite distinction sketched in the previous section. As will become clear, the arguments make quite different assumptions about participation. And as will become clear already in this section, and even clearer in the rest of this article, these arguments also rest on different underlying assumptions about what “objectivity” means or requires. But before digging deeper into different meanings of objectivity, let us look at the arguments.

Philosophers of science have presented several arguments in favor of involving extra-academic partners in scientific knowledge production. Here I will examine two groups of arguments that connect demands for extra-academic participation to scientific objectivity. The first ones maintain that participation does not threaten scientific objectivity, whereas the second ones argue that it can in fact increase the objectivity of research. But while philosophers have developed arguments endorsing participation, scientists—particularly ones engaged in citizen science and participatory projects or working in fields in which they are common—have identified ways in which involving extra-academic partners in scientific knowledge production can cause research to become biased and thus threaten scientific objectivity. I will now first summarize the philosophical arguments, and then return to the scientists’ worries.

The first group of arguments recommending the involvement of extra-academic partners in scientific knowledge production has been developed in recent discussions about values in science. The starting point is the rejection of an important approach to objectivity: The value-free ideal, or the idea that nonepistemic values must not influence the gathering of evidence or the acceptance of scientific theories in objective research. Many philosophers have recently argued that the ideal should be abandoned, as decisions based on nonepistemic values are unavoidable or necessary in all stages of research. We cannot be sure that the various background assumptions on which research is and must be based would be value free (Longino 1990, 2001), and in many fields it is impossible to avoid the use of value-laden concepts (Dupré 2007; Alexandrova 2018), so the demand for value-freedom is unattainable. Moreover, the distinction between epistemic and nonepistemic values is not clear (Rooney 1992; Longino 1996). And finally, when deciding to make the inductive leap from evidence to the acceptance or the rejection of a hypothesis, researchers must often take nonepistemic values into account. It is necessary to weigh the consequences of being wrong when determining whether the evidence is strong enough, and nonepistemic values must be considered in such assessments (Rudner 1953; Douglas 2000, 2009).

Such arguments have led many philosophers to endorse the idea of involving extra-academic partners in science. They defend different versions of what could be called an argument from democracy. The argument is largely moral and based on democratic ideals: In a democratic society, scientists should not be making all the necessary value decisions alone. Instead, either citizens as representatives of the general public, or in some cases stakeholders, should take part in decisions regarding research agendas, or even in value decisions during the research process. Their involvement will ensure that scientists’ value presuppositions and value-laden decisions are made explicit, checked for controversiality, and altered if necessary (Kitcher 2001, 2011; Douglas 2005, 2009; Brown 2009; Elliott 2011; Alexandrova 2018).

Usually the argument from democracy merely states (in many cases implicitly) that citizen or stakeholder participation in science does not threaten its objectivity. If value decisions are unavoidable in science, we either have no objectivity in science or we cannot equate objectivity with the value-free ideal. Some of the philosophers who have argued against the value-free ideal wish to hold on to the notion of objectivity. This has resulted in many different accounts of objectivity (to which I will return in more detail in the following sections). Heather Douglas (2004, 2009), notably, has identified several viable accounts of objectivity that do not invoke the value-free ideal, and argues that discussion of value judgments in science—be it just between scientists or with citizens or stakeholders—“need have no harmful repercussions for objectivity per se” (2005, 156). Problems arise only if values take the place of evidence, or if some evidence is disregarded because it would run contrary to some desired outcome. But not everyone sympathetic to the argument from democracy wishes to retain the notion of objectivity. Matthew J. Brown (2019), for instance, argues that the notion is so irredeemably tied to claims of value-freedom that it should be abandoned. In other words, the argument from democracy can be combined either with some alternative account of objectivity or with the renunciation of the notion altogether. Whichever the case, if value decisions are unavoidable in science, having citizens or stakeholders participate in them does not affect the objectivity of the research.

Some philosophers have combined the argument from democracy with stronger claims: Participation can also be epistemically beneficial and possibly even increase objectivity (e.g., Douglas 2005; Elliott 2011). This brings us to our second group of arguments, which claim that extra-academic participation in science can increase the objectivity of the research conducted.

This group of arguments has been developed mostly by feminist philosophers of science. Instead of abandoning the notion of scientific objectivity, they have introduced alternative ways of understanding it—ones that reject the value-free ideal. When defending extra-academic participation in science, they typically focus on collaborations with stakeholders who are also experts, and on forms of participation that have emancipatory aims or aims related to social justice.

Standpoint epistemologists have drawn attention to the influence of power asymmetries in science and to the potential epistemic value of the viewpoints of socially marginal groups and communities. Due to their standing in society, members of such groups may be in an epistemically privileged position with regard to some issues; for instance, they can be aware of social mechanisms that remain invisible to people in a socially more privileged position. This is not a given: It must be determined case by case whether a standpoint makes an epistemic difference (Wylie 2003). But it is important for scientists to determine whether, for instance, members of some community have gained epistemically valuable viewpoints on the phenomenon being studied. Taking such viewpoints into account can help researchers to notice insufficiently studied issues and avoid biases. To ensure these benefits, several feminist philosophers of science have suggested research collaborations with representatives of socially marginal communities. As is easy to see, this argument is not only moral but also epistemic: Involving extra-academic participants in science will increase the epistemic quality of the research and its results (Jaggar 2004; Wylie and Nelson 2007; Rolin 2009; Frickel et al. 2010). Sandra Harding (2015) even argues that strong objectivity requires hearing the viewpoints and experiences of people who are traditionally excluded from scientific knowledge production. A claim can be weakly objective if it satisfies appropriate criteria of assessment accepted in some field. But it can be strongly objective only if it appears well established from as many standpoints as possible—including extra-academic ones.

Alison Wylie (2003, 2015) has combined standpoint epistemology with Helen Longino’s (1990, 2001) demands for critical interaction when arguing for collaboration with descendant communities in archaeology—that is, collaboration with stakeholders. According to Longino, to be objective, research communities should sustain and even encourage diverse and competing viewpoints. They should also be responsive to outside criticism. The idea is that well-functioning epistemic communities guarantee efficient debates, which in turn cancel out the biases of individual researchers. Longino has formulated criteria that make it possible to evaluate the objectivity of research communities, the key point being effective critical interaction. Wylie points out that Longino’s demands add another dimension to the potential epistemic benefits of collaborations with descendant communities: Not only can such collaborations reveal insufficiently studied issues and perspectives, but members of the descendant communities can also be in a position to offer epistemically useful criticism to researchers.

While philosophers have developed arguments that link different forms of extra-academic participation to objectivity—often implying or citing different accounts of objectivity—scientists engaged in research that involves extra-academic partners in knowledge production have expressed some worries related to objectivity. Many of these worries have been discussed thoroughly in the fields where they are relevant, and some have been efficiently addressed, as will become clear in sections 6 and 7. But some worries remain pressing. More recently, some philosophers have also started paying attention to the matter. While the issues raised are distinct and usually related to some specific type of participation, they all make the same basic claim: Extra-academic participation in science can lead to biased results. I will now summarize the most noticeable worries.

An editorial in Nature (“Rise of the Citizen Scientist” 2015) remarks on some much-discussed concerns related to citizen science. While citizen science can enable data collection at a much larger scale than would otherwise be possible—large numbers of volunteer citizens are able to collect or produce vast amounts of data—the quality of the data collected by citizen participants has been called into question (Cohn 2008; Dickinson et al. 2010). This general worry about data quality becomes a worry about scientific objectivity if the data collected by citizen volunteers ends up being flawed in some systematic manner, as using systematically skewed data would lead to biased results. Such a problem can arise if the people participating in a citizen science project tend to make some systematic error when collecting data—if, for example, they are unable to distinguish between some plant species they are supposed to be collecting. It can also occur if the citizen volunteers in fact represent some advocacy group: As the editorial in Nature notes, citizens may volunteer to advance their political objectives. If this kind of advocacy is common in some study, there is a significant risk of systematic bias in the collected data. For example, if volunteers in a geological project are opposed to fracking, they may focus on collecting the kind of data that would serve as evidence of its harmful effects (“Rise of the Citizen Scientist” 2015; Elliott and Rosenberg 2019).

Another worry has arisen in participatory research: Researchers collaborating with representatives of socially marginal communities have noted that they risk remaining blind to the power structures within such communities. This can cause research to become seriously biased if the already powerful are taken to represent their whole community (Cooke and Kothari 2001). Similarly, when socially or economically powerful agents engage in transdisciplinary projects as stakeholders and extra-academic experts, their interests and needs can end up being overemphasized at the expense of others (Hirsch Hadorn et al. 2008; Koskinen and Mäki 2016).

Philosophers have already addressed some of these concerns. Kevin Elliott and Jon Rosenberg (2019) have recently discussed concerns about data quality and problematic advocacy in citizen science, arguing that while such worries can be well founded in some cases, citizen science is by and large moving scientific research forward, and problems related to data quality have mostly been solved. Baptiste Bedessem and Stéphanie Ruphy (2020) have also addressed biases identified in citizen science, suggesting a cartography of the epistemic benefits and epistemic risks pertaining to its different forms.

Problematic advocacy is a threat in many types of extra-academic participation in science: The participating citizens or stakeholders can wittingly or inadvertently let their interests take the place of evidence. The problem is not only that participants can be initially biased but also that participating groups can be hijacked. This can be a threat in virtually all forms of citizen or stakeholder engagement in science or in science policy. Some powerful stakeholders can actively attempt to influence the views of the citizens or stakeholders participating in a citizen panel, a citizen science project, a participatory project, or even an activist research initiative. This is one of the many ways in which such powerful stakeholders attempt to influence decision making informed by science. Especially when important interests are at stake, “lobbying, bullying, and bribery” (Wilholt 2014, 171) can be rampant. For example, pharmaceutical companies fund patient activist groups, and this may skew the activists’ views in favor of approaches that benefit the companies (Holman and Geislar 2018; Kurtulmus 2021).

Activist research is also prone to certain other types of bias. For example, if outside criticism is not considered in an activist research movement—if it is, for instance, systematically dismissed as an expression of political hostility—the movement faces the risk of dogmatism (Koskinen 2014, 2015). Such movements can also relatively easily be biased toward theoretical and methodological approaches that serve their social and political goals (Hauswald 2021). If unchecked, both developments can lead to biased results.

All these worries are context sensitive, and typically related to just some forms of participation: citizen science, transdisciplinary research, activist research, and so on. Likewise, each of the arguments linking participation to objectivity clearly addresses just some forms of participation: The argument from democracy commends citizen participation and sometimes stakeholder participation, and the standpoint argument endorses the participation of stakeholder experts.

Moreover, it seems that several different notions of objectivity are at play. In the preceding text, I have mentioned the idea that objectivity is threatened when values take the place of evidence (Douglas 2004, 2009), the idea that a claim is strongly objective only if it appears well established from as many standpoints as possible (Harding 2015), and the idea that objective research communities are characterized by critical interactions (Longino 1990, 2001). This is not surprising. The recent philosophical literature on scientific objectivity abounds with different notions of objectivity. Douglas (2004, 2009) has identified eight meanings of process objectivity alone, and Marianne Janack (2002) has compiled a list of thirteen distinct meanings of objectivity used in philosophy of science. What then should we think about the arguments and worries I have summarized in this section? Perhaps they lead to different assessments of whether participation helps or hinders objectivity simply because they are all based on different accounts of objectivity? Can the arguments be compared, or perhaps even combined in meaningful ways?

These questions boil down to a question of conceptual unity: Can the different meanings of scientific objectivity be covered by a single account? Following Arthur Fine (1998), Douglas (2004) suggests that while the different meanings are conceptually distinct, they all indicate a shared basis for trust: When we say that something is objective, we say that we trust it and that others can safely trust it too. I have recently defended an account of scientific objectivity that builds on this initial intuition and brings further unity to the complex notion (Koskinen 2020, 2021). This account gives me the tools to clarify how the various arguments and worries summarized in this section are all arguments and worries about objectivity in a sense that covers them all. This will help us to better understand how extra-academic participation in science can both increase and threaten scientific objectivity. Hopefully it can also be of use when attempting to ensure that participation does the former rather than the latter.

4. The risk account of scientific objectivity

According to the risk account of scientific objectivity, when we call X (a researcher, a method, a research community, or something else pertaining to science) objective, we endorse it: We say that we rely on X, and that others can safely do so too, because important epistemic risks arising from our imperfections as epistemic agents have been effectively mitigated or averted (Koskinen 2020, 2021).

The account is meant to bring some unity to the different meanings or accounts of objectivity that have been recognized in the recent philosophical literature on objectivity. These different accounts typically abandon any reference to things as they are independently of us, as well as the idea of objectivity as value-freedom. They allow for fallibilism, treat objectivity as a degree notion, and are often based on recognizing the highly contextual nature of scientific objectivity: What makes research objective in the experimental sciences can differ a great deal from what makes participatory ethnography objective. (See, e.g., Janack 2002; Montuschi 2003; Alexandrova 2018; Zahle 2020.) To mention a few examples, the account of objectivity that Douglas (2004, 2009) calls “procedural” claims that a research process is objective if it has been so designed that one researcher can be replaced by another without changes in the result. The “detached” account of objectivity states that for research to be objective, nonepistemic values must not be used in place of evidence. This is the account Douglas (2005) cites when arguing that discussions about value judgments in science do not necessarily threaten objectivity, and she suggests involving citizens in such discussions. And the “interactive” account of objectivity stresses the importance of effective critical discussions and debates in research communities: An objective community is characterized by effective critical interaction (Douglas 2004; Longino 1990, 2001).

As noted, Douglas (2004) argues that such different meanings of objectivity—or “senses” of objectivity, as she calls them—are conceptually distinct, and that the notion of scientific objectivity is thus an irreducibly complex one. The only thing that is common to all the uses of the notion is the claim or endorsement, “I trust this, and you should too” (Douglas 2009, 123). I, however, have argued that the risk account says more than that, and covers at least all the accounts of objectivity that fit the loose description in the preceding text—in other words, the accounts that can in practice be applied when assessing the objectivity of something (Koskinen 2020, 2021). They all either name some risk arising from our imperfections as epistemic agents (like the detached account) or describe a strategy for mitigating or averting one or more such risks (like the procedural and interactive accounts).

The risk account combines and develops ideas that have already been presented in the recent literature on objectivity. It incorporates the idea that the different senses of objectivity all indicate “a shared basis for trust in a claim” (Douglas 2009, 123; see also Fine 1998): When we call something objective, we claim that it can be trusted. However, if we distinguish between trust and reliance by noting that trust can be betrayed whereas reliance can only be disappointed (Baier 1986), and if we wish to talk about the objectivity of methods or procedures—that is, things that cannot betray us—we must replace trust with reliance: All the different meanings of objectivity indicate a shared basis for reliance on a claim (Koskinen 2020). The account also draws on Lorraine Daston and Peter Galison’s (2007) observation that historically, new meanings of objectivity have been related to newly recognized threats arising from our subjectivity. To cover not only subjectivity but also things like collective biases, I prefer, however, to talk about what Biddle and Kukla (2017, 218) call epistemic risks: risks of epistemic error that arise anywhere during knowledge practices. Objectivity is related to the avoidance of a specific subset of epistemic risks: illusions, subjectivity, idiosyncrasies, cognitive biases, collective biases, and so forth—that is, important epistemic risks arising from our imperfections as epistemic agents. By combining these ideas, the risk account clarifies why scientific objectivity takes such different forms in different contexts: Which risks are important, and which risk mitigation strategies are effective and usable, depend on the context (Koskinen 2021).

5. Participation not threatening, or increasing, objectivity

Let us now return to the philosophical arguments favoring participation. Why is it, according to them, that extra-academic participation does not threaten scientific objectivity, or can even increase the objectivity of the research conducted? Using the risk account enables me to compare these arguments even though the philosophers who have defended them often use different meanings of objectivity. Meaningful comparisons become possible when we notice that the different meanings either name some epistemic risk arising from our imperfections as epistemic agents or describe a strategy for mitigating or averting one or several such risks.

The first set of arguments summarized in section 3 endorses citizen or stakeholder participation as a way to make the unavoidable value decisions in science in a more democratic manner. With regard to objectivity, these variants of the argument from democracy only claim that participation does not threaten it. The starting point here is the rejection of the value-free ideal: Nonepistemic values can or must influence all stages of research, and this does not necessarily compromise scientific objectivity. In other words, making such value decisions does not necessarily create any epistemic risks that would threaten objectivity. Involving citizens or stakeholders in the decisions does not change the situation—at least not with regard to this particular type of risk. As we will see in the next sections, this does not guarantee that participation could not threaten objectivity in other ways.

The second set of arguments summarized in section 3 presents extra-academic participation as an efficient strategy for mitigating certain epistemic risks arising from our imperfections as epistemic agents. Standpoint epistemologists argue that stakeholder participation is a good strategy against collective biases to which researchers easily succumb. If the typical societal background of an academic researcher makes her effectively blind to some phenomena that are apparent to the members of a socially marginal community, collaboration with members of that community can mitigate the epistemic risks arising from her failings, and thus increase the objectivity of her research. It is an efficient way to avoid a situation in which her socially ingrained blindness skews the results of her research.

When Harding (2015) stresses that strong objectivity requires taking the viewpoints of socially marginalized groups into account, she points out that when a claim appears well established from many standpoints, we have good reason to rely on it. With weakly objective claims—that is, established scientific claims that satisfy the appropriate criteria accepted in some field—there is always the risk that we have not noticed some important error. The whole field might be blind to it. Wide extra-academic participation is an effective strategy against that risk. And as Wylie (2015) notes, extra-academic participants can sometimes also be in a position to offer epistemically useful criticism to scientists, for instance by pointing out unnoticed background assumptions.

To give an example, patient activist and disability activist groups and organizations have questioned research practices, conceptualizations, and assumptions, and drawn attention to insufficiently studied issues in medicine (Epstein 1998; Fletcher-Watson et al. 2019). Autism research, for instance, has been challenged by activists promoting the neurodiversity paradigm, according to which autism is a mode of neurocognitive functioning rather than a disorder (Chapman 2019). While it remains to be seen to what degree the challenge will alter our understanding of autism, it is in any case increasing the objectivity of autism research. The activists have drawn attention to something they take to create a previously unnoticed epistemic risk: Researchers have taken for granted something that for stakeholders is far from obvious, and this makes the whole field biased. By highlighting the issue, the activists have ensured that the risk is scrutinized and, if necessary, mitigated (Fletcher-Watson et al. 2019; Hughes 2021).

6. Participation threatening objectivity

It seems that participation can in some situations increase objectivity. It can nevertheless also threaten objectivity. This is because extra-academic participation in science can also:

  1. Create new epistemic risks arising from the imperfections of epistemic agents—either entirely new risks or, more typically, risks that have previously not been seen as particularly important in the context in question.

  2. Render previously effective and usable risk mitigation strategies ineffective or unusable.

To illustrate, let us now turn to some of the worries expressed by scientists working in fields where citizen science and participatory projects are common. Many of these worries, though not all, have already been thoroughly discussed in the fields where they are pertinent, and methods and practices that counter the problems have been developed. Here my aim is to analyze the worries and some of the responses to them using the risk account of objectivity, as this will enable us to compare the worries to the benefits described in the previous section.

The quality of the data gathered by volunteers in citizen science projects has been questioned in many fields. According to the critics, citizens might not, for instance, be able to distinguish between different types of data, or they might miss something important, thus producing inaccurate data. This could threaten objectivity particularly if the participants make systematic errors, thus producing biased data (Cohn 2008; Dickinson et al. 2010; Elliott and Rosenberg 2019). In other words, the scientists who have been worried about data quality have pointed out epistemic risks arising from the imperfections of the citizen participants as epistemic agents. These epistemic risks are not new per se: Scientists collecting data must also avoid them. Rather, previously effective and usable strategies for averting such risks can be unusable in the new context. To give a simple example, let us assume that the volunteering citizens in a citizen science project are unable to distinguish between the plant species they are supposed to be collecting because the species are very similar and distinguishing between them requires a trained eye. Scientists’ eyes would be trained for the task. But it is not possible, in practice, to train the citizen volunteers in the way scientists are trained, as this would require too much time and too many resources. The strategy used for mitigating the risk of error—lengthy training—is not usable in this context. Naturally such problems can be solved by developing new strategies: for instance, by planning the project so that the volunteers do not need to distinguish between the different species.

As noted, another fairly common worry in citizen science is that interest groups may be eager to participate in citizen science projects, and that their participation can lead to biased results (Braun and Schultz 2010; Elliott and Rosenberg 2019; Bedessem and Ruphy 2020). In such situations several different epistemic risks can become important. The obvious one is that the participants’ nonepistemic value commitments, rather than just their limited skills, lead to systematically skewed data. Another risk becomes particularly pressing if the citizen participants are supposed to take part in value decisions as representatives of the general public. If they in fact represent a stakeholder group, or are strongly influenced by some stakeholder, the researchers will not receive information about the values of the general public (Braun and Schultz 2010; Kurtulmus 2021; Eigi forthcoming). Failure to notice this would lead to epistemic errors, as the scientists’ beliefs about the values of the general public would be flawed. Such a risk is not new: When information about the views and values of the general public is gathered through methods such as questionnaires, it is standard procedure to make sure that the results are representative. But the strategies used to ensure this when designing a questionnaire are not necessarily efficient or usable when planning a citizen science project. Recognizing this has led to the development of new strategies for recruiting the participating citizens.

Similar risks can also be important in collaborations with stakeholders. For instance, if a participatory project with members of a socially marginal community is supposed to enrich scientists’ understanding of issues affecting the lives of the community members, but the participants represent only some fraction of that community, the researchers end up being misinformed (Cooke and Kothari 2001). And the same can also happen in transdisciplinary projects that are supposed to steer research toward societally important problems and their practical aspects (Hirsch Hadorn et al. 2008). Researchers risk missing such goals if the interests and needs of some powerful stakeholder group end up being overemphasized at the expense of others.

7. How to ensure objectivity?

At the beginning of this article I promised to suggest some general guidelines for ensuring objectivity in the diverse forms of extra-academic participation in science. To be general, such guidelines must be fairly abstract, as objectivity is a highly context-sensitive issue. We can make assessments of objectivity in many different ways. The different meanings of scientific objectivity that we use in such assessments address different epistemic risks arising from our imperfections as epistemic agents and/or suggest different strategies for mitigating them.

It is nevertheless possible to sketch some guidelines. To do so, it is useful to pay attention to the key elements pertaining to scientific objectivity. I have argued that objectivity has to do with the mitigation of epistemic risks arising from our imperfections as epistemic agents. In different contexts, different risks are particularly important, and different mitigation strategies effective and usable. In a laboratory it makes sense to focus on avoiding subjective assumptions and oversights by following standardized procedures that are meant to prevent them (Douglas 2004, 2009). In ethnographic fieldwork it makes sense to focus on avoiding ethnocentric bias by systematically refraining from evaluative assessment of the informants’ beliefs and views (Koskinen 2011, 2014, 2021). Scientific objectivity is increased when researchers recognize important epistemic risks arising from our imperfections as epistemic agents and also find and adopt suitable mitigation strategies.

To this point, I have argued that extra-academic participation can, on the one hand, increase the objectivity of research by functioning as an effective risk mitigation strategy and, on the other, threaten scientific objectivity by creating new epistemic risks or rendering some mitigation strategies useless. What is particularly interesting is that the two can happen simultaneously. This is because in any given research context there are bound to be many distinct epistemic risks arising from our imperfections as epistemic agents that must be mitigated. Extra-academic participation in science can be an efficient strategy for mitigating some of them and yet, at the same time, introduce new risks or require changes in methods and research practices, and thus lead to situations in which an established strategy for averting some epistemic risk ceases to function properly.

To illustrate, let us consider activist research movements, where the researchers represent some stakeholder group, and typically collaborate closely with extra-academic activists. On the one hand, such movements can increase the objectivity of the larger research community by drawing attention to insufficiently studied issues and by providing epistemically useful criticism, for example by drawing attention to unnoticed background assumptions leading to widespread bias. By doing so they can contribute to critical discussions in many fields and increase the objectivity of the research conducted there—in exactly the way feminist philosophers of science have pointed out. At the same time, activist research communities can be suspicious of outside criticism, and prone to favor theoretical and methodological approaches that are likely to produce results in line with their social and political goals, thus creating a clear risk of biased results (Koskinen 2014, 2015; Hauswald 2021).

Indigenous activist research is a good example of this. It has emerged as a part of a global political movement that has united many indigenous peoples around the world. As one of the central themes in this movement has, from the beginning, been a thorough criticism of the ways in which scientists have approached indigenous peoples, it is but natural that indigenous activists should have endeavored to take the research on indigenous people into their own hands. While collaborations with outsider researchers have also been common, the development of mutually respectful practices of collaboration that recognize the rights and aims of all parties has been a slow process (Seurujärvi-Kari and Kulonen 1996; Tsosie 2017; Whyte 2018; Simons et al. 2021).

During the past few decades, indigenous activist research has successfully drawn attention to shortcomings and offered valuable criticism in fields such as anthropology, linguistics, jurisprudence, and environmental sciences (Koskinen and Rolin 2019). But at the same time activist researchers have not always accepted criticism or other contributions from outsiders. Vigdis Stordahl (2008) has described the development of such tendencies in the early years of Sámi activist research. According to her, nonindigenous researchers, as well as indigenous researchers whose perspectives were somehow dissenting, repeatedly reported that members of the emerging community of Sámi activist researchers had asked them not to conduct research in Sámi society. That is to say, while indigenous activist research was successfully increasing the interactive objectivity of several research communities by offering valuable criticism, the interactive objectivity of its own research community was not always exemplary.

This example shows that the risks and the benefits of participation are not necessarily interdependent: We may not need to accept the first to get the second. As indigenous activist research has gained a more secure institutional standing in academia, the community has become more receptive to outside criticism (Koskinen 2015). This has not diminished its ability to offer valuable criticism and identify insufficiently studied issues. It is possible to keep the significant benefits of participation—increased democratic legitimacy and/or various epistemic benefits, including increased objectivity—and mitigate the new risks, or find new mitigation strategies to replace ones that have become ineffective or unusable.

To assess and increase the objectivity of research involving extra-academic participants, researchers must first identify the epistemic risks arising from our imperfections as epistemic agents that are particularly pressing in the context in question. Both the researchers and the extra-academic participants must be taken into consideration as imperfect epistemic agents. One must then consider the strategies used for mitigating the risks that have been identified as particularly important. New strategies may be required, either because the introduction of extra-academic participants has created new risks or increased the significance of existing ones, or because some established mitigation strategy no longer functions.

As the reader will probably have noticed, many of the risks described in the previous section are relatively easy to mitigate. It is therefore not surprising that efficient mitigation strategies have been developed in the fields where the worries have arisen. For example, it is quite possible to alleviate the risks that arise because citizens in citizen science projects are not necessarily very good at data gathering. Diverse methods for controlling the production of data and for debiasing datasets collected by lay participants are by now in use in many fields (see, e.g., Bird et al. 2014; Kosmala et al. 2016), and as Elliott and Rosenberg (2019) note, citizen science does not currently seem to suffer from significant data-quality problems. Researchers engaged in citizen science and in the organization of citizen panels and juries have also developed ways to screen and select participants, thus minimizing the risk that the participants would represent a stakeholder group instead of the general public (Braun and Schultz 2010). Effective screening practices are also in use in participatory and transdisciplinary research: Collaborations are planned so that the results reflect the experiences and views of all relevant community members or stakeholder groups (Hickey and Mohan 2004; Leventon et al. 2016).

But not all the identified risks are easy to mitigate. And it is fully possible, even likely, that the introduction of extra-academic participants in science can—in addition to its many benefits—bring about risks that are not yet fully understood. For example, the emphasis on societal impact that is common in practically all forms of extra-academic participation in science can engender risks similar to those that have materialized in highly commercialized research. When competition for private funding forces researchers to choose research questions in a way that appeals to funders, some important questions and areas may be left insufficiently studied. This can lead to a problematically skewed overall view of some large topic: For instance, we may come to know so much more about drug therapies for an illness than about forms of therapy that cannot be patented that our understanding of the illness remains poor (Reiss and Kitcher 2010; Musschenga et al. 2010). The emphasis on the views of stakeholders and extra-academic experts in transdisciplinary research or coresearch is largely motivated by a need to increase the societal impact of science. But we are not good at assessing or predicting long-term, consequential, or indirect impacts, so the emphasis on societal impact risks leading to science policies that favor research with easily predictable, short-term impacts (Bornmann 2012). Such an emphasis can result in a misleadingly skewed overall understanding of important phenomena. In a field where the pressure to produce easily measurable societal impact is already high, the introduction of stakeholder interests and the views of extra-academic experts could aggravate the situation.

8. Conclusion

In this article I have attempted to clarify arguments and worries relating extra-academic participation in science to objectivity. I have done this by using the risk account of scientific objectivity (Koskinen 2020, 2021). It has allowed me to meaningfully compare arguments and worries that have been presented in different fields, that focus on different types of participation, and that rest on different underlying assumptions about what objectivity means. As I have argued, participation can increase the objectivity of science if it functions as an effective mitigation strategy against some epistemic risk or risks arising from our imperfections as epistemic agents—for example, the epistemic risks resulting from unnoticed collective biases in scientific communities. But extra-academic participation can also, even simultaneously, threaten scientific objectivity by introducing new epistemic risks arising from our imperfections as epistemic agents, and/or by rendering previously satisfactory risk mitigation strategies ineffective or unusable.

Philosophers of science who have paid attention to extra-academic participation in science usually focus on some specific form of such participation: for instance, citizen science, participatory research, or transdisciplinarity. Here I have tried to sketch an overall picture of this multiform phenomenon. To spell out the differences between the various arguments endorsing participation, as well as between the various worries related to it, I have suggested distinguishing between citizens, stakeholders, and extra-academic experts. I believe that this distinction helps us to better understand the scope of the arguments philosophers have presented: Feminist arguments typically concern stakeholders who are also experts, while arguments stressing the importance of democratic decision making in science recommend citizen or stakeholder participation.

But I hope that what I have said is not only philosophically interesting (or at least clarificatory) but also of some relevance to the development of methods and practices in fields where extra-academic participation in scientific knowledge production is currently happening. I particularly hope to have convinced some of my readers of the value of objectivity as an ideal that should be, and can be, sought even in research that is permeated by nonepistemic values and interests. Even the fact that participation can threaten objectivity need not be a reason to question extra-academic participation and engagement in science. Instead, the threats should be identified and mitigated, while the various benefits are cultivated.

Acknowledgments

Earlier versions of this article were presented in the workshop “Objectivité et sciences participatives” in 2019 in Lyon, in the “Virtual Conversation: Citizen Science” organized by the Department of History and Philosophy of Science in Cambridge in 2020, in TINT’s Perspectives on Science seminar in Helsinki in 2021, in EPSA21, and in PSA 2020/2021. I would like to thank the audiences in these seminars and conferences, particularly Baptiste Bedessem, Jaana Eigi, Kristina Rolin, and Stéphanie Ruphy, as well as two anonymous referees, for useful comments and suggestions. Any errors that remain are my own.

Funding for this research was provided by the Academy of Finland (grants 294585 and 316695).

References

Alexandrova, Anna. 2018. “Can the Science of Well-Being Be Objective?” The British Journal for the Philosophy of Science 69 (2):421–45.
Baier, Annette. 1986. “Trust and Antitrust.” Ethics 96 (2):231–60.
Bedessem, Baptiste, and Ruphy, Stéphanie. 2020. “Citizen Science and Scientific Objectivity: Mapping Out Epistemic Risks and Benefits.” Perspectives on Science 28 (5):630–54.
Biddle, Justin B., and Kukla, Rebecca. 2017. “The Geography of Epistemic Risk.” In Exploring Inductive Risk: Case Studies of Values in Science, edited by Elliott, Kevin C. and Richards, Ted, 215–37. Oxford: Oxford University Press.
Bird, Tomas J., Bates, Amanda E., Lefcheck, Jonathan S., Hill, Nicole A., Thomson, Russell J., Edgar, Graham J., Stuart-Smith, Rick D., et al. 2014. “Statistical Solutions for Error and Bias in Global Citizen Science Datasets.” Biological Conservation 173:144–54.
Bonney, Rick, Ballard, Heidi, Jordan, Rebecca, McCallie, Ellen, Phillips, Tina, Shirk, Jennifer, and Wilderman, Candie C. 2009. Public Participation in Scientific Research: Defining the Field and Assessing Its Potential for Informal Science Education. A CAISE Inquiry Group Report. Center for Advancement of Informal Science Education.
Bornmann, Lutz. 2012. “Measuring the Societal Impact of Research.” EMBO Reports 13 (8):673–76.
Braun, Kathrin, and Schultz, Susanne. 2010. “‘… a Certain Amount of Engineering Involved’: Constructing the Public in Participatory Governance Arrangements.” Public Understanding of Science 19 (4):403–19.
Brown, Mark B. 2009. Science in Democracy: Expertise, Institutions, and Representation. Cambridge, MA: MIT Press.
Brown, Matthew J. 2019. “Is Science Really Value Free and Objective? From Objectivity to Scientific Integrity.” In What Is Scientific Knowledge?, edited by McCain, Kevin and Kampourakis, Kostas, 226–42. New York: Routledge.
Bucchi, Massimiano, and Neresini, Federico. 2008. “Science and Public Participation.” In The Handbook of Science and Technology Studies, edited by Hackett, Edward J. et al., 449–72. Cambridge, MA and London: MIT Press.
Chapman, Robert. 2019. “Neurodiversity Theory and Its Discontents: Autism, Schizophrenia, and the Social Model of Disability.” In The Bloomsbury Companion to Philosophy of Psychiatry, edited by Tekin, Şerife and Bluhm, Robyn, 371–90. London: Bloomsbury Academic.
Cohn, Jeffrey P. 2008. “Citizen Science: Can Volunteers Do Real Research?” BioScience 58 (3):192–97.
Collins, Harry, and Evans, Robert. 2002. “The Third Wave of Science Studies: Studies of Expertise and Experience.” Social Studies of Science 32 (2):235–96.
Cooke, Bill, and Kothari, Uma. 2001. Participation: The New Tyranny? London and New York: Zed Books.
Daston, Lorraine, and Galison, Peter. 2007. Objectivity. New York: Zone.
Dickinson, Janis L., Zuckerberg, Benjamin, and Bonter, David N. 2010. “Citizen Science as an Ecological Research Tool: Challenges and Benefits.” Annual Review of Ecology, Evolution, and Systematics 41:149–72.
Douglas, Heather. 2000. “Inductive Risk and Values in Science.” Philosophy of Science 67 (4):559–79.
Douglas, Heather. 2004. “The Irreducible Complexity of Objectivity.” Synthese 138 (3):453–73.
Douglas, Heather. 2005. “Inserting the Public into Science.” In Democratization of Expertise? Exploring Novel Forms of Scientific Advice in Political Decision-Making, edited by Maassen, Sabine and Weingart, Peter, 24:153–70. Sociology of the Sciences Yearbook. Dordrecht: Springer.
Douglas, Heather. 2009. Science, Policy and the Value-Free Ideal. Pittsburgh: University of Pittsburgh Press.
Dupré, John. 2007. “Fact and Value.” In Value-Free Science? Ideals and Illusions, edited by Kincaid, Harold, Dupré, John, and Wylie, Alison, 24–71. Oxford: Oxford University Press.
Eigi, Jaana. Forthcoming. “Why Philosophers of Science Should Care about the Problems of the ‘Pure Public.’”
Elliott, Kevin C. 2011. Is a Little Pollution Good for You? Incorporating Societal Values in Environmental Research. Oxford: Oxford University Press.
Elliott, Kevin C., and Rosenberg, Jon. 2019. “Philosophical Foundations for Citizen Science.” Citizen Science: Theory and Practice 4 (1):1–9.
Epstein, Steven. 1998. Impure Science: AIDS, Activism, and the Politics of Knowledge. Berkeley: University of California Press.
Etzkowitz, Henry, and Leydesdorff, Loet. 1995. “The Triple Helix—University-Industry-Government Relations: A Laboratory for Knowledge Based Economic Development.” EASST Review 14 (1):14–19.
Fernández Pinto, Manuela. 2021. “Science and Industry Funding.” In Global Epistemologies and Philosophies of Science, edited by Ludwig, David et al., 164–73. London and New York: Routledge.
Fine, Arthur. 1998. “The Viewpoint of No-One in Particular.” Proceedings and Addresses of the APA 72 (2):9–20.
Fletcher-Watson, Sue, Adams, Jon, Brook, Kabie, Charman, Tony, Crane, Laura, Cusack, James, Leekam, Susan, et al. 2019. “Making the Future Together: Shaping Autism Research through Meaningful Participation.” Autism 23 (4):943–53.
Frickel, Scott, Gibbon, Sahra, Howard, Jeff, Kempner, Joanna, Ottinger, Gwen, and Hess, David J. 2010. “Undone Science: Charting Social Movement and Civil Society Challenges to Research Agenda Setting.” Science, Technology, and Human Values 35 (4):444–73.
Funtowicz, Silvio O., and Ravetz, Jerome R. 1993. “Science for the Post-normal Age.” Futures 25 (7):739–55.
Gibbons, Michael, Limoges, Camille, Nowotny, Helga, Schwartzman, Simon, Scott, Peter, and Trow, Martin. 1994. The New Production of Knowledge: The Dynamics of Science and Research in Contemporary Societies. London: Sage.
Harding, Sandra. 2015. Objectivity and Diversity: Another Logic of Scientific Research. Chicago: University of Chicago Press.
Hauswald, Rico. 2021. “The Epistemic Effects of Close Entanglements between Research Fields and Activist Movements.” Synthese 198:597–614.
Hess, David, Breyman, Steve, Campbell, Nancy, and Martin, Brian. 2008. “Science, Technology, and Social Movements.” In Handbook of Science and Technology Studies, edited by Hackett, Edward J. et al., 473–98. Cambridge, MA and London: MIT Press.
Hickey, Samuel, and Mohan, Giles. 2004. Participation: From Tyranny to Transformation. Exploring New Approaches to Participation in Development. London: Zed Books.
Hirsch Hadorn, Gertrude, et al. 2008. “The Emergence of Transdisciplinarity as a Form of Research.” In Handbook of Transdisciplinary Research, edited by Hirsch Hadorn, Gertrude et al., 19–42. S.l.: Springer.
Holman, Bennett, and Elliott, Kevin C. 2018. “The Promise and Perils of Industry-Funded Science.” Philosophy Compass 13 (11):e12544.
Holman, Bennett, and Geislar, Sally. 2018. “Sex Drugs and Corporate Ventriloquism: How to Evaluate Science Policies Intended to Manage Industry-Funded Bias.” Philosophy of Science 85 (5):869–81.
Howe, Jeff. 2006. “The Rise of Crowdsourcing.” Wired 14. https://www.wired.com/2006/06/crowds/
Hughes, Jonathan A. 2021. “Does the Heterogeneity of Autism Undermine the Neurodiversity Paradigm?” Bioethics 35 (1):47–60.
Jaggar, Alison M. 2004. “Feminist Politics and Epistemology: The Standpoint of Women.” In The Feminist Standpoint Theory Reader: Intellectual and Political Controversies, edited by Harding, Sandra, 55–66. London: Routledge.
Janack, Marianne. 2002. “Dilemmas of Objectivity.” Social Epistemology 16 (3):267–81.
Jasanoff, Sheila. 2003. “Technologies of Humility: Citizen Participation in Governing Science.” Minerva 41 (3):223–44.
Jasanoff, Sheila. 2017. “Science and Democracy.” In The Handbook of Science and Technology Studies, edited by Felt, Ulrike et al., 259–88. Cambridge, MA: MIT Press.
Kimura, Aya H., and Kinchy, Abby. 2016. “Citizen Science: Probing the Virtues and Contexts of Participatory Research.” Engaging Science, Technology, and Society 2:331–61.
Kitcher, Philip. 2001. Science, Truth, and Democracy. Oxford: Oxford University Press.
Kitcher, Philip. 2011. Science in a Democratic Society. Amherst, NY: Prometheus Books.
Koskinen, Inkeri. 2011. “Seemingly Similar Beliefs: A Case Study on Relativistic Research Practices.” Philosophy of the Social Sciences 41 (1):84–110.
Koskinen, Inkeri. 2014. “Critical Subjects: Participatory Research Needs to Make Room for Debate.” Philosophy of the Social Sciences 44 (6):733–51.
Koskinen, Inkeri. 2015. “Researchers Building Nations: Under What Conditions Can Overtly Political Research Be Objective?” In Recent Developments in the Philosophy of Science: EPSA13 Helsinki, edited by Mäki, Uskali, Votsis, Ioannis, Ruphy, Stéphanie, and Schurz, Gerhard, 129–40. Cham, Switzerland: Springer.
Koskinen, Inkeri. 2020. “Defending a Risk Account of Scientific Objectivity.” The British Journal for the Philosophy of Science 71 (4):1187–1207.
Koskinen, Inkeri. 2021. “Objectivity in Contexts: Withholding Epistemic Judgement as a Strategy for Mitigating Collective Bias.” Synthese 199:211–25.
Koskinen, Inkeri, and Mäki, Uskali. 2016. “Extra-academic Transdisciplinarity and Scientific Pluralism: What Might They Learn from One Another?” European Journal for Philosophy of Science 6 (3):419–44.
Koskinen, Inkeri, and Rolin, Kristina. 2019. “Scientific/Intellectual Movements Remedying Epistemic Injustice: The Case of Indigenous Studies.” Philosophy of Science 86 (5):1052–63.
Kosmala, Margaret, Wiggins, Andrea, Swanson, Alexandra, and Simmons, Brooke. 2016. “Assessing Data Quality in Citizen Science.” Frontiers in Ecology and the Environment 14 (10):551–60.
Kurtulmus, Faik. 2021. “The Democratization of Science.” In Global Epistemologies and Philosophies of Science, edited by Ludwig, David et al., 145–54. London and New York: Routledge.
Leventon, Julia, et al. 2016. “An Applied Methodology for Stakeholder Identification in Transdisciplinary Research.” Sustainability Science 11:763–75.
Longino, Helen E. 1990. Science as Social Knowledge: Values and Objectivity in Scientific Inquiry. Princeton, NJ: Princeton University Press.
Longino, Helen E. 1996. “Cognitive and Non-Cognitive Values in Science: Rethinking the Dichotomy.” In Feminism, Science and the Philosophy of Science, edited by Nelson, Lynn Hankinson and Nelson, Jack, 39–58. Dordrecht, The Netherlands: Kluwer.
Longino, Helen E. 2001. The Fate of Knowledge. Princeton, NJ: Princeton University Press.
Maassen, Sabine, and Weingart, Peter. 2005. “What’s New in Scientific Advice to Politics?” In Democratization of Expertise? Exploring Novel Forms of Scientific Advice in Political Decision-Making, edited by Maassen, Sabine and Weingart, Peter, 24:1–19. Sociology of the Sciences Yearbook. Dordrecht: Springer.
Montuschi, Eleonora. 2003. The Objects of Social Science. London: Continuum.
Musschenga, Albert W., van der Steen, Wim J., and Ho, Vincent K. Y. 2010. “The Business of Drug Research: A Mixed Blessing.” In The Commodification of Academic Research, edited by Radder, Hans, 110–31. Pittsburgh: University of Pittsburgh Press.
Prahalad, C. K., and Ramaswamy, Venkatram. 2000. “Co-Opting Customer Competence.” Harvard Business Review 78:79–87.
Reiss, Julian, and Kitcher, Philip. 2010. “Biomedical Research, Neglected Diseases, and Well-Ordered Science.” Theoria 24 (3):263–82.
“Rise of the Citizen Scientist.” 2015. Nature 524:265.
Rolin, Kristina. 2009. “Scientific Knowledge: A Stakeholder Theory.” In The Social Sciences and Democracy, edited by Van Bouwel, Jeroen, 62–80. Basingstoke: Palgrave Macmillan.
Rooney, Phyllis. 1992. “On Values in Science: Is the Epistemic/Non-Epistemic Distinction Useful?” Proceedings of the Biennial Meeting of the Philosophy of Science Association 1:13–22.
Rudner, Richard. 1953. “The Scientist qua Scientist Makes Value Judgments.” Philosophy of Science 20 (1):1–6.
Seurujärvi-Kari, Irja, and Kulonen, Ulla-Maija (eds.). 1996. Essays on Indigenous Identity and Rights. Helsinki: Helsinki University Press.
Simons, Eric, Martindale, Andrew, and Wylie, Alison. 2021. “Bearing Witness: What Can Archaeology Contribute in an Indian Residential School Context?” In Working with and for Ancestors: Collaboration in the Care and Study of Ancestral Remains, edited by Meloche, Chelsea H., Spake, Laure, and Nichols, Katherine L., 21–31. Abingdon, Oxon: Routledge.
Solomon, Stephanie. 2009. “Stakeholders or Experts? On the Ambiguous Implications of Public Participation in Science.” In The Social Sciences and Democracy, edited by Van Bouwel, Jeroen, 39–61. Basingstoke: Palgrave Macmillan.
Stilgoe, Jack, and Guston, David H. 2017. “Responsible Research and Innovation.” In The Handbook of Science and Technology Studies, edited by Felt, Ulrike et al., 853–80. Cambridge, MA: MIT Press.
Stordahl, Vigdis. 2008. “Nation Building through Knowledge Building: The Discourse of Sami Higher Education and Research in Norway.” In Indigenous Peoples: Self-Determination, Knowledge, Indigeneity, edited by Minde, Henry, 249–65. Delft: Eburon.
Tsosie, Rebecca. 2017. “Indigenous Peoples, Anthropology, and the Legacy of Epistemic Injustice.” In The Routledge Handbook of Epistemic Injustice, edited by Kidd, Ian J., Medina, José, and Pohlhaus, Gaile Jr., 356–69. London and New York: Routledge.
Whyte, Kyle. 2018. “What Do Indigenous Knowledges Do for Indigenous Peoples?” In Traditional Ecological Knowledge: Learning From Indigenous Practices for Environmental Sustainability, edited by Nelson, Melissa K. and Shilling, Daniel, 57–81. Cambridge: Cambridge University Press.
Whyte, William F. 1990. Participatory Action Research. London: Sage.
Wilholt, Torsten. 2014. “Review of Philip Kitcher: Science in a Democratic Society.” Philosophy of Science 81 (1):165–71.
Wylie, Alison. 2003. “Why Standpoint Matters.” In Science and Other Cultures: Issues in Philosophies of Science and Technology, edited by Figueroa, Robert and Harding, Sandra, 26–48. New York and London: Routledge.
Wylie, Alison. 2015. “A Plurality of Pluralisms: Collaborative Practice in Archaeology.” In Objectivity in Science: New Perspectives from Science and Technology Studies, edited by Padovani, Flavia, Richardson, Alan, and Tsou, Jonathan Y., 189–210. Cham, Switzerland: Springer.
Wylie, Alison, and Hankinson Nelson, Lynn. 2007. “Coming to Terms with the Values of Science: Insights from Feminist Science Scholarship.” In Value-Free Science? Ideals and Illusions, edited by Kincaid, Harold, Dupré, John, and Wylie, Alison, 58–86. Oxford: Oxford University Press.
Zahle, Julie. 2020. “Objective Data Sets in Qualitative Research.” Synthese 199 (1-2):101–17.