12.1 Research Narratives and Narratives of Nature
In 1945, George Beadle, who together with Edward Tatum was to receive the Nobel Prize in Physiology or Medicine in 1958, published a long review article on the state of biochemical genetics. In one section, entitled ‘Eye pigments in insects’, he summarized results stemming to a large extent from his own work, which he had initiated with Boris Ephrussi in 1935, before his collaboration with Tatum. Beadle and Ephrussi used the fruit fly Drosophila melanogaster, which at the time was already a well-established experimental organism. Their experiments, however, introduced a novel approach based on tissue transplants between flies carrying different combinations of mutations. The results of these and similar experiments, and further biochemical efforts to characterize the substances involved, led to the following account of the physiological process of the formation of brown eye pigment and the roles of genes therein:
Dietary tryptophan is the fly’s initial precursor of the two postulated hormones. This is converted to alpha-oxytryptophan through a reaction controlled by the vermilion gene. A further oxidation to kynurenine occurs. […] This is the so-called v+ substance of Ephrussi and Beadle. This is still further oxidized to the cn+ substance, which Kikkawa believes to be the chromogen of the brown pigment. The transformation of kynurenine to cn+ substance is subject to the action of the normal allele of the cinnabar gene.
This text constitutes a small narrative. It relates several events which occur in temporal order and are causally connected. The sequence has a beginning (the precursor is ingested), a middle (it is transformed in several reactions controlled by genes) and an end (the implied formation of brown pigment). Yet, this narrative does not recount particular events, but rather a type of event happening countless times in fruit flies (and similarly in many other insects); it is a generic narrative.
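Condensed into the arrow notation that Beadle and Ephrussi themselves favoured for such chains (see the hypothesis diagram quoted in section 12.4), the sequence of events described in the quotation runs, approximately:

dietary tryptophan → alpha-oxytryptophan → kynurenine (the v+ substance) → cn+ substance → brown pigment

where, according to the passage, the first step is controlled by the vermilion gene and the third by the normal allele of the cinnabar gene.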
In the natural sciences, such narratives are often found in review articles and textbooks, but also in summaries of the state of knowledge on a given subject in the introduction to research articles; they state what is taken as fact. Addressing scientific facts as narratives acknowledges that they are typically presented as complex and ordered accounts of a subject rather than single propositions. It is striking that no human agents, observers or cognizers are present in such narratives. They are accounts of events that are taken to happen ‘in nature’ when no researcher is intervening or even watching. Such narratives can thus be called ‘narratives of nature’.
Historians and philosophers of science no longer take epistemology to be concerned with the truth of such knowledge claims alone, but also with the practices from which they emerge, and which enable, shape and delimit these claims. The references in Beadle’s text make it clear that each proposition can be traced back to an episode of research. Narratives of nature emerge gradually from the research literature as facts accepted in a community. Accounts of the methods by which the knowledge was achieved are abandoned like ladders once the new state of knowledge is reached. The facts are turned into ‘black boxes’, which can, however, be reopened at any time; methods are called into question when facts are challenged (Latour 1987).
To account for how a hypothesis derived from research eventually enters a ‘narrative of nature’, it is necessary to show how a hypothesis comes to be known and understood by members of a community in the first place. I will argue here that this requires peers to understand the research approach – which aligns a method and a problem – in the context of which the hypothesis was formulated. Familiarization with the approach is achieved by using another type of narrative, realized primarily in research articles. The function of research articles is thus not only, or even primarily, to convince readers that a hypothesis is supported by evidence, so that they will accept it. Instead, by making readers familiar with the approach, the article enables them to understand how one gets in the position to formulate and support a hypothesis of this kind in the first place, its relevance regarding a problem recognized in the community, and the meaning of the terms used (i.e., to grasp the epistemic objects in question).
An approach is a movement: it involves positioning oneself towards a phenomenon and accessing it from a particular direction and in a particular way. The phenomenon, the experimental system employed for accessing it, the activities of intervention and observation afforded by the system, and the ways to make inferences from observations, including the recognition of invisible entities, make up what I call the ‘practice-world’ of researchers. The research article introduces the reader to this world and to the way researchers position themselves – interpreting a problem pertaining to a phenomenon, accessing the phenomenon materially and cognitively, generating data and drawing inferences – in other words, it makes the reader familiar with an approach. Only then can the hypothesis be understood; but it does not need to be accepted. Any criticism, refinement or amendment of the hypothesis is articulated in terms that are meaningful in the context of the approach, and often involves the recreation of the approach by members of the community, introducing more or less substantial variation.
In this chapter, I will show how research articles employ narrative to familiarize readers with an approach. Reporting the material (intervention and observation) and cognitive (inference) activities of researchers, research articles on the whole are narratives (even if they often contain non-narrative passages) and might be referred to as ‘research narratives’. Like narratives of nature, research narratives are factual narratives, but in contrast to the former, they recount particular events, which happened at a specific site (e.g., a given laboratory) and a specific time; they are not generic. And yet, as will become clear, they do not present these events as unique either, but rather as exemplary.
In section 12.2, I will introduce several narratological concepts pertinent to the analysis. In 12.3, I will then trace back some of the elements of the narrative of nature above to an original research article by Beadle and Ephrussi, which I take to be representative of this genre in twentieth-century experimental life sciences. Section 12.4 will then return to the ways in which the particular implementation of an approach is rendered exemplary.
12.2 Research Articles as Factual Narratives
Subsection 12.2.1 will argue that modern research articles are indeed narrative texts. As they are generally taken to be factual narratives, I will briefly address the question of how they relate to real-world events. Subsection 12.2.2 will clarify the double role of researchers as agents and authors, and as narrator and character, and will then relate these roles to the narratee and to the implied and actual reader. I will also introduce two metaphors – narrative as path and narrating as guidance – to further characterize the relation of narrator and narratee.
12.2.1 Research Articles are Factual Narratives
When talking about narrative, one often thinks of fictional texts or accounts of personal experience. Research articles might not meet common expectations about what a narrative is. Nonetheless, they should be seen as narratives. Before showing why, I will address some ways in which they depart from more typical narrative texts.
First, research articles have a unique structure in that they typically separate the accounts of various aspects of the same events. This partitioning of information is often realized in the canonical ‘introduction, methods, results and discussion’ (IMRaD) structure. In the Introduction, researchers state where they see themselves standing in relation to various disciplines and theoretical commitments, thereby positioning themselves towards a problem recognized in the community they address and motivating the activities to be narrated. The detailed description of the activities, including preparation, intervention and observation, is presented in the Material and Methods section collectively for all experimental events. The structured performance of these activities is reported in the Results section, albeit not necessarily according to their actual temporal order. Finally, the Discussion section recounts cognitive operations in which the material activities are revisited, often as involving entities which are inferred from patterns in the data.
Second, research articles tend to exhibit a characteristic style. As is often noted, they use impersonal language, i.e., various devices such as passive voice, adjectival participles, nominalization, abstract rhetors and impersonal pronouns to conceal the agent in an event (e.g., Harré 1990; Myers 1990). Furthermore, events are often reported in the present tense. These strategies give the impression of a generic narrative, even though (unlike a narrative of nature) the statements in fact refer to particular events. Such narratives are thus pseudo-generic, but in this way represent events as exemplary.
Taken together, these organizational and stylistic features mean that research articles do not resemble other text types that are more often addressed in terms of narrative. And yet they should indeed be seen as narratives.
Most definitions of narrative or criteria for narrativity of a text include the notion that narratives relate connected events. The verb ‘to relate’ can be read in a double sense here: narratives recount the events and they also establish relationships between them. There is some dispute about the nature of the connections among events that lend themselves to being narrated – for example, whether connections need to be temporal or causal (Morgan and Wise 2017). It is, however, almost universally agreed that a mere assortment of event descriptions or a mere chronological list of events does not constitute a narrative. Another central criterion is the involvement of human-like or intelligent agents in the events. Again, further aspects of agency might be required, such as the representation of the mental life of the agents or the purposefulness of actions (Ryan 2007).
Depending on whether the latter condition is taken to be necessary, or how one interprets ‘human-like’, one might doubt the status of narratives of nature discussed above. Research articles, however, are clearly narratives in the light of these core criteria. They report connected events, and they report them as connected. Indeed, many of the events are temporally ordered, with previous determining subsequent events. Furthermore, the events involve the researchers as agents, and their actions are purposeful and accompanied by cognitive operations.
Although it has been observed that research articles often do not provide a faithful representation of the research process which they appear to report, research articles are not typically perceived as works of fiction either (Schickore 2008). Instead, they are generally presented and perceived as factual narratives (Fludernik 2020). Accordingly, an account of factual narrative is required.
One of the most robust theoretical tenets of narratology is the distinction between story and discourse. Seymour Chatman, for instance, states that
each narrative has two parts: a story (histoire), the content or chain of events (actions, happenings), plus what may be called the existents (characters, items of setting); and a discourse (discours), that is, the expression, the means by which the content is communicated. In simple terms, the story is the what in a narrative that is depicted, discourse the how.
It cannot be assumed, however, that in the case of factual narratives the real-world chain of events constitutes the story. If the observation of common mismatch between research process and report is accurate, then for research articles, at least, it is clear that the chain of events reconstructed from the discourse, the story, is not necessarily equivalent to the chain of events that make up the research process. The story as the sequence of events reconstructed from the discourse by the reader is a mental representation, as cognitive narratologists maintain (Ryan 2007). I will thus assume a semiotic model of factual narrative according to which the discourse invokes a story in the mind of the reader, and the narrative (discourse + story) represents real-world events, whether or not the events of the story fully match the represented events. I will speak of the represented events as being part of a ‘practice-world’, however, to avoid false contrasts, as discourses and minds are, of course, part of reality, and to point out that these narratives represent only a fragment of the world which is inhabited by the actual researchers.
12.2.2 Communicating and Narrating
By putting their names in the title section, researchers as authors of scientific articles clearly assume responsibility for what they write, and they will be held accountable by others. Yet even if the narrator is identified with the author of these and other factual narratives, it cannot be equated with the author. Authors will carefully craft the narrator and adorn it with properties which they need not necessarily ascribe to themselves. In fact, as many articles – including Beadle and Ephrussi’s – are co-authored, it would be challenging to construct a narrative voice that is faithful to the ways each of the authors perceives themselves or the group. In research articles, narrators are homodiegetic, i.e., they are also characters in the story (Genette 1980). Hence, by crafting the narrator, authors also craft the character of the researcher on the level of the story (for instance, as an able, attentive and accurate experimenter).
On the recipient side, the reader of a research article can be anyone, of course, even a philosopher of science looking at the text 80 years later to make it an example of narrative in science. There is also an implied reader, which can be inferred from paratextual as well as textual features (Iser 1978). Regarding the former, the journal in which an article is published is a key indicator. Textual features include the knowledge the authors take for granted – the kind of claims that do not need further justification, or terminology used without definition. The actual reader who matches the features of the implied reader is the addressee of the communicative act of the author.
Genette (1980) distinguishes the act of narrating from the discourse and the story. This act is performed by the narrator and is not part of the story; the addressee of this act can be called the ‘narratee’. By creating the discourse, the author creates the voice of a narrator as if it (the narrator) had produced this discourse, and a narratee as the addressee implied in the discourse. Thus the narratee cannot be equated with the reader addressed by the author. Furthermore, while the way the narratee is construed is informative of the way the implied reader is construed, these two categories need not necessarily overlap.
Based on the above model of factual narrative, I propose the following account of narrating. The narrator in the act of narrating represents the researchers in their role as authors in the precise sense that it is construed as having the same knowledge as the latter. The researcher-character, who is identified with the narrator, represents the researchers in their role as experimenters and reasoners in the practice-world. The narrator addresses the narratee to recount events in which it was involved as a character and which thus represent events in the practice-world of the researchers. A reader can cognitively and epistemically adopt the position of the narratee and thereby learn about these events. A reader who matches the implied reader will be more willing and able to do so. In this way, researchers as authors communicate information about the practice-world they inhabit as agents to a reader who might inhabit similar practice-worlds.
Narratologists routinely analyse differences regarding time (order, duration and frequency of events) between discourse and story. Note, however, that if the story is distinguished from the practice-world events in factual narratives, the difference between these two regarding time is an entirely different issue. Take the order of events. The discourse might introduce events in the order B, A, C, while it can be inferred from the textual cues that the order in the story is A, B, C. The discourse then does not misrepresent the order – in fact, by means of the cues it does represent the events in the order A, B, C, and as the story is an effect of the discourse, the two levels cannot be compared independently. If the narrative (discourse + story) presents events in a given order A, B, C, while the practice-world order of events was in fact C, A, B, then this, instead, constitutes a mismatch (e.g., between research process and report). The above semiotic model maintains that the narrative still represents the practice-world events. By manipulating order, duration and frequency in the discourse, authors can create certain effects in the perception of the story. In the case of factual narrative, developing a story that misrepresents practice-world events in one aspect can help to highlight other important aspects of these events such that the overall representation might become even more adequate with respect to a given purpose.
The purpose of the research narrative, or so I argue, is to represent the practice-world events as an approach to a given problem. Seen from the perspective of the act of narrating, the discourse not only presents events which are reconstructed on the story level, but it consists of events of narrating. If the discourse introduces narrated events in the order B, A, C (including cues that indicate the order on the story level is A, B, C), then there will be three sequential events: narrating B, then A, and then C. The temporal order on the level of narrating might be employed to highlight an order of elements in the story world other than temporal (e.g., a conceptual order).
On the level of narrating, the narrative might be described as a path through scenes in the story world which are considered in turn. By laying out a path, the narrator guides the narratee through the story-world. If this metaphor has a somewhat didactic ring, it is important to remember that it does not describe the relation of author and reader. The narratee in the research narrative is construed not so much as a learner who knows less about a subject but more as an apprentice who knows less about how to approach the subject. The narrator (who is also the researcher-character) will create a path connecting several diegetic scenes in which the character has certain beliefs, performs activities and observations, and reasons on their basis. The narratee qua guidee is thus introduced to the epistemic possibilities of the approach. A reader willing and able to adopt the position of the narratee can thereby learn about the approach.
12.3 Familiarizing a Community with an Approach through Research Narratives
12.3.1 The Case: A Research Article on Physiological Genetics from the 1930s
I now turn to the work of George Beadle (1903–89) and Boris Ephrussi (1901–79) and in particular to one article, which can be analysed based on the considerations in 12.2. The article in question was published in the journal Genetics in 1936. It was entitled ‘The Differentiation of Eye Pigments in Drosophila as Studied by Transplantation’ and reported research the authors had performed mainly in 1935, when Beadle, who was at Caltech at the time, visited Ephrussi in his lab at the Institut de Biologie Physico-Chimique, Paris.
Leading up to Beadle’s Nobel Prize-winning work with Tatum, which is usually associated with the ‘one gene – one enzyme’ hypothesis and thus considered an important step in the history of genetics, the article is relatively well known, at least to historians of genetics as well as philosophers of biology. While firmly embedded in the genetic discourse and practice of its time, it presents enough novelty to display clearly the work it takes to familiarize peers with a novel approach and the novel epistemic objects emerging from it. Finally, in employing the IMRaD structure and an impersonal style, it conforms to salient conventions of much scientific writing in twentieth-century life sciences. It is thus well suited for such an analysis.
Many geneticists at the time aimed to understand the physiological role of genes, an enterprise that was often referred to as ‘physiological genetics’. This was the kind of problem Beadle and Ephrussi set out to engage with. Their starting point was an observation made by Alfred Sturtevant. Sturtevant had studied genetic mosaics naturally occurring in Drosophila flies, that is, organisms which are composed of tissues with different genotypes. In some flies it appeared that the eyes did not exhibit the eye colour that would be expected given their mutant genotypic constitution (indicated by other phenotypic markers), but rather the colour-phenotype associated with the normal (wild type) genotype present in other parts of the body. From this Sturtevant concluded that a substance might circulate in the body of the fly, affecting the development of the eye, and that the gene, which was mutated in the eye, but was functioning in other parts of the body, was involved in the production of this substance (Sturtevant 1932).
Beadle and Ephrussi developed an experimental system based on implanting larval structures that would give rise to the adult eye (imaginal eye discs) into host larvae. The procedure resulted in adult host flies which harboured an additional eye in their abdominal cavity. This allowed them to create mosaics artificially and thus in larger numbers, and to produce adequate experimental controls. They clearly began with Sturtevant’s hypothesis regarding the existence of a circulating, gene-related substance. Furthermore, hypotheses about the nature of gene action – in particular, the idea that genes affected biochemical reactions (either because they were enzymes or because they played a role in their production) – were common (Ravin 1977). Nonetheless, Beadle and Ephrussi’s article did not frame the work as testing any specific hypothesis about the relation of these entities, but rather as exploratory. Their project aimed at producing evidence for the existence and interactions of further elements in the biochemical system.
The epistemic objects they dealt with were thus, on the one hand, the well-established gene, of whose physiological function in somatic contexts little was yet known, and, on the other hand, the assumed circulating substances, which were presumably involved in physiological reactions and in some way connected to the action of genes. The article reported the approach through which they achieved material and cognitive access to these epistemic objects and thereby established novel concepts referring to them. The approach enabled the formulation of hypotheses pertaining to these objects.
12.3.2 The Analysis: The Research Narrative as Path through Epistemic Scenes
In the following, I will reconstruct the research article by Beadle and Ephrussi as a narrative. The narrative draws a path through several scenes in which the researcher-character performs material or cognitive activities in a story-world which in turn represents the practice-world of the researchers as experimenters. The researchers as authors construct the narrator to guide the narratee through these scenes in a way that enables an understanding of the epistemic possibilities of the approach they have developed and thus an understanding of the hypothesis put forward. I will identify four types of epistemic scenes (concerning what is known and what can be known through the approach), which roughly coincide with the canonical IMRaD sections.
I The Positioning Scene: Interpreting a Problem Shared by a Disciplinary Community
Both the journal in which Beadle and Ephrussi published their article (Genetics) and the things they take for granted clearly indicate that their text implies geneticists as readers, as opposed to, say, embryologists.
The article does not begin with a hypothesis to be tested, but with a question or research problem to be explored, which pertains to the discipline of genetics, and more specifically to the subfield of physiological genetics.
Prominent among the problems confronting present day geneticists are those concerning the nature of the action of specific genes – when, where and by what mechanisms are they active in developmental processes?
With respect to this question, an assessment is made of the state of research at the time, which has a theoretical aspect (what is known or assumed about gene action) and a methodological aspect (how the problem has been approached). Regarding the former, it is asserted that ‘relatively little has been done toward answering [these questions]’ (Beadle and Ephrussi 1936: 225). Regarding the latter, advances that have been made are acknowledged:
Even so, promising beginnings are being made; from the gene end by the methods of genetics, and from the character end by bio-chemical methods.
However, a methodological obstacle to theoretical progress is identified in the fact that those organisms which are well characterized genetically are not studied from a developmental perspective, and vice versa. It is suggested that this impasse be confronted by studying developmental processes in a genetically well-characterized organism (Drosophila), and in particular the formation of pigment in the eye, because many eye-colour mutants were known in this species (and because of Sturtevant’s previous findings).
As these considerations are written in an impersonal style, one could see them as considerations of the authors in the moment of writing. And yet they are narrated as considerations of the researchers at the time of setting up the project, as indicated by formulations such as this: ‘Several facts have led us to begin such a study’ (Beadle and Ephrussi 1936: 225). As such, they are events in the story-world (whatever was in fact considered in the practice-world). They constitute the beginning of the story, the initial epistemic scene in which the researcher-character (‘we’) – as a member of a discipline – finds itself. The narrator guides the narratee through the scene to let it understand how one can position oneself in the field characterized by certain problems and available methods, and to realize the advantages of the chosen approach. This will resonate in particular with readers who are members of the community the authors belong to.
II The Methodology Scene: Having and Mastering an Approach
The introduction of a new approach changes the situation in the field. It results in new possibilities for these researchers, and with them for everyone in their community. The new situation is characterized by the availability of the new experimental method, the new interpretation of the problem such that it can be addressed by the method, and the evidence and conclusions it affords. The narrator has already led the narratee to consider this new approach by setting it off against previous work in the Introduction section.
In the Material and Methods section, then, experimental events, consisting in applying a technique, are presented as generic, repeatable activities:
In brief, the desired organ or imaginal disc, removed from one larva, the donor, is drawn into a micro-pipette and injected into the body cavity of the host. As a rule, operations were made on larvae cultured at 25°C for three days after hatching from the eggs.
In general, one function of this section can be to enable other researchers to reproduce the techniques in their own lab. In that sense, the text functions like a recipe (or ‘protocol’, in the language of the experimental sciences). In this case, however, the detailed description of the technique has been relegated to a separate methods article (Ephrussi and Beadle 1936). The information given in the Material and Methods section of the present article is possibly too sparse to allow for reproducing the experiments. This points to the fact that there must be another function: this section is similar to the exposition in a fictional text. It introduces the reader to various elements (‘existents’) of the story, such as flies, fly larvae, donors, hosts, imaginal discs, various mutant lines and other things, and, furthermore, to the ‘habitual’ activities involving these elements performed by the researcher-character.
In the quotation, the first sentence uses the present tense. It is prescriptive in the sense of a protocol, but more importantly it expresses the fact that the experiments can be performed by anyone who has the skills and access to the material. The second sentence is in the past tense, making it clear that the narrative nonetheless represents particular events, in which the researchers performed these actions, varied the conditions and found those that worked best. Following the contrastive presentation of the approach in the Introduction, the narrative in the Material and Methods section presents the character in a scene where it equips itself with a reliable method with which to approach the problem identified in the positioning scene.
III The Experimentation Scene: Addressing Questions Pertaining to the Problem through the Approach
In the Experimental Results section, the narrative proceeds through questions which generate several epistemic scenes within the context of the broader disciplinary situation. These scenes are characterized by specific instances of ignorance (e.g., regarding the action of specific genes known through mutations) relative to the overarching research problem (gene action in general). These questions in turn have to be expressed in terms of the behaviours of the material in the context of the experimental interventions possible in the framework of the novel approach.
The path along which the narrator guides the narratee through these scenes is not fully determined by the temporal order in which the experiments were performed. Some questions can only be formulated once the data of previous experiments are obtained (or indeed only after conclusions, presented only in the Discussion section, are drawn from them). But for many experiments the order in which they were performed is not relevant and hence not represented in the text. The ordering created by the path is thus not always that of a sequence of events; instead, the intervention events are also ordered into series according to the logic of the experiments – in this case, the combinatorial logic of donor and host genotypes. The subsections have titles such as ‘Mutant eye discs in wild type hosts’, ‘Wild type discs in mutant hosts’, ‘Vermilion discs in mutant hosts’, etc.
For instance, the first subsection shows how the approach provides an assay to answer the question of which mutants are autonomous (i.e., when serving as donor, are not affected by the host tissue). The result that v and cn are the only exceptions, in that they are not autonomous, leads to a new epistemic scene. In the subsection ‘Vermilion discs in mutant hosts’, the narrative then moves forward by means of a new question, which the researcher-character asks itself and which can be addressed through the approach:
data should be considered which bear on the question of whether other eye color mutants have anything to do with this ‘body-to-eye’ phase of the v reaction [i.e., the influence of the host]. This question can be answered by implanting v eye discs into hosts which differ from wild type by various eye color mutants. Such data are given in table 3.
[Table 3]
These data show that, when implanted in certain mutant hosts ([list of mutants]), a v optic disc gives rise to a wild type eye; in others ([list of mutants]), it gives an eye with v pigmentation.
Most of the researcher-character’s activities of intervention (implanting) and observation (dissecting and comparing eye colours) are compressed into one sentence and relegated to the table. Again, the formulation in the present tense and the impersonal style suggest that for any researcher, at any time, these interventions would result in these observations. And yet these sentences clearly refer to particular events in the story. The table, for instance, lists the number of individuals tested. We learn that a v disc was implanted in a bo host only a single time, while it was implanted in 18 ca hosts. By having the researcher-character note the regularities and notable exceptions in the data, the narrator enables the narratee to grasp what can be done with the experimental method within the approach.
IV The Interpretation Scene: Formulating Hypotheses in the Context of the Problem and Approach
Already in the Experimental Results section, cognitive operations of the researcher-character are narrated:
From the data presented above, it is seen that, in the cases of cn in wild type, v in wild type, and wild type in ca, the developing eye implant is influenced in its pigmentation by something that either comes or fails to come from some part or parts of the host. Just what this is, whether or not, for example, it is of the nature of a hormone, we cannot yet say. We shall therefore refer to it by the noncommittal term ‘substance’.
In this scene, the narratee is shown how the approach enables cognitive access to new epistemic objects through interpreting data resulting from past activities. It is in the context of the approach that the term ‘substance’ refers to new objects. It can then be used to formulate a new set of questions, which no longer concern the visible effects of the interventions in the materials, but the assumed entities which are not directly observable: ‘[I]s there only one substance? If not, are the different substances related and in what way? What is their relation to the genes concerned in their production?’ (Beadle and Ephrussi 1936: 233).
These epistemic objects are thus introduced as objects of interaction, which appear when one acts within the framework of the approach. For this purpose, in the Discussion section, events reported in the Experimental Results section are revisited:
Since the pigmentation of a genetically v eye can be modified to v+ by transplanting it to a host which supplies it with what may be called the v+ substance, it follows that v differs from wild type by the absence of this substance. Evidently there is no change in the v eye itself which prevents its pigmentation from assuming wild type characteristics. It follows that the mutation v+ → v has resulted in a change such that v+ substance is no longer formed.
The events of experimental intervention (implanting a v disc) are retold, but this time the unobservable events on the molecular level that are thought to link the intervention and the observation made by the researcher are added. Yet the scene inhabited by the researcher-character is not one of experimentation but of reconsidering past experimental action. Together, the experimental scene, in which the narrator recounts what has been observed upon intervention, and the interpretation scene, which narrates the character’s reconstruction of what was actually happening on a hidden level, are akin to an ‘epistemic plot’. The narratee is led to understand the way activities in the context of the approach can be interpreted in terms of interactions with the epistemic objects.
12.4 Conclusion: Exemplification of an Approach, between the Particular and the Generic
If the hypothesized entities and relations in the research article are compared with the narrative of nature in the review article quoted above, then it is clear that some – for instance, regarding the roles of the v and cn genes – achieved the status of accepted facts. Other propositions never went beyond the status of ‘preliminary hypothesis’. Regarding the relation of substances, Beadle and Ephrussi provide the following hypothesis:
Such an hypothesis assumes that the ca+, v+, and cn+ substances are successive products in a chain reaction. The relations of these substances can be indicated in a simple diagrammatic way as follows:
→ ca+ substance → v+ substance → cn+ substance
The entities and relations after the second arrow are conserved in the narrative of nature. To be sure, Beadle and Ephrussi could claim to have discovered these substances and the relations holding among them and between the substances and genes. But the details of the hypothesis do not matter much, nor which elements were conserved. When it turned out that the existence of an entity matching their hypothesized ca+ substance could not be established, this by no means diminished the value of the work. To criticize the hypothesis on its own terms required understanding the approach from which it emerged. Further results of that sort would come from the application of a more or less substantially modified version of the approach. Indeed, the research which led to the abandonment of the ca+ substance (Clancy 1942) was ‘undertaken in order to repeat and supplement the experiments of Beadle and Ephrussi’, and ‘[t]ransplantation operations were performed by the method of Ephrussi and Beadle [1936]’. The author also added a novel technique for the ‘extraction and measurement of the eye-color pigments’ to the approach (Clancy 1942: 417, 419). Hence, amending Beadle and Ephrussi’s hypothesis depended on understanding, applying and modifying their approach.
The approach to the problem faced by the discipline, rather than the hypothesis, was thus the main achievement of Beadle and Ephrussi’s work. As stated right at the beginning of their article:
In this paper we shall present the detailed results of preliminary investigations […] which we hope will serve to point out the lines along which further studies will be profitable.
The actual process, with its contingencies and detours, is not the subject of the narrative. The activities are reported as they would have been performed if the researchers had known better from the beginning. This explains the common mismatch between research process and report. The result is an approach that works and that enables researchers to make certain kinds of claims. Understanding the approach is a condition for understanding the terms and the significance of the hypothesis, no matter how well supported it is by the evidence. Furthermore, it is this kind of knowledge that researchers can employ to design new research projects (Meunier 2019). It is anticipated that further research ‘along these lines’ will lead to modifications of the theoretical claims. The purpose of the narrative is to make readers as members of the relevant community (geneticists) familiar with the approach, such that they understand ‘some of the possibilities in the application of the method of transplantation’ with regard to the shared problem of gene action (Beadle and Ephrussi 1936: 245). Accordingly, the hypotheses about these epistemic objects, which might or might not enter the narrative of nature, are not the only or even the primary result.
In order to present the approach as universally applicable to the problem faced by the community, the narrative takes on the character of a generic narrative, even though it is in fact about particular events. It is thus pseudo-generic. More positively, the particular events are presented as exemplary; the research article constitutes an exemplifying narrative.
A significant stylistic difference between research narratives and many other accounts of personal experience is the use of an impersonal style and the present tense. These literary devices remove ‘indexicality’ (Harré 1990). In sentences of the type ‘when implanted into an x host, a y disc gives rise to a z eye’, the researcher-character is hidden by omitting the pronoun, while the present tense detaches the activities from time and site. On the level of narrating, this has the effect that the narratee, guided through the experimental scene as an apprentice, can occupy the vacant position of the agent and perceive the event from the character’s point of view (or rather point of action). A reader can then adopt the narratee’s and thereby the character’s position.
Grammatically, the character is only referentially absent but performatively present as the agent of implantation. Hiding the character thus renders the narrated events universal experiences of an unspecified agent. However, the occasional use of ‘we’, reference to individual instances (flies), and the use of the past tense anchor the narrative in particular events experienced by the character. Semiotically, the character as a complex sign denotes Beadle and Ephrussi. In so far as their experience is represented by the narrative, they are construed as exemplars of researchers in their community, who could all have similar experiences when performing the approach exemplified in the activities in which Beadle and Ephrussi engaged.
Members of the community can read the text as narrating what Beadle and Ephrussi did or as stating what can be done regarding the problem. This ambiguity is indeed necessary. An approach is seen as universally applicable to a type of problem, just as a hypothesis is seen as a universal answer to a problem. But an approach, unlike a hypothesis, is not justified: it is not shown to be true; it is shown to work. This is achieved by guiding the narratee along a path through various epistemic scenes, to see that one can do these things because they have been done.
In conclusion, while understanding the terms and the significance of a hypothesis (and not least the degree to which it is supported by the evidence) through understanding the approach is the condition for members of the community to accept the hypothesis as fact and incorporate it into emerging narratives of nature, the primary result communicated through the research narrative is the approach itself, as exemplified in the particular activities reported. Rendering the events generic, by stylistic means, helps members of the community to familiarize themselves with the approach as generally applicable to a shared problem.
13.1 Introduction
This chapter is about the role of narratives in chemistry. Recent studies by historians and philosophers of science have argued that narratives play an important part in shaping scientific explanations; narratives are not, on this view, merely a matter of rhetoric or communication, nor an added extra, but integral to the work of the social and natural sciences. In Mary Morgan’s concise definition, ‘what narratives do above all else is create a productive order amongst materials with the purpose to answer why and how questions’ (Morgan 2017: 86).
Notions of narrative are not alien to existing discussions of chemistry: most notably, the Nobel Prize-winning organic chemist Roald Hoffmann has argued that chemical findings should be given narrative form, and similar arguments are present (or at least implicit) in some chemical publications, process ontologies of chemistry and historians’ and social scientists’ critical accounts of chemistry. Despite their differences, these claims are based on a shared understanding of the purpose of narrative which goes beyond attention to productive order: they suggest that narratives should be used to challenge the conventional demarcations of chemical accounts and ‘let the world back in’ by incorporating contingencies, aspects of decision-making, social dynamics and the interactions between humans and chemical substances which are not usually included within the chemical literature. All continue to bring materials together, to answer questions – they are thus still narratives in Morgan’s sense – but they also proceed contrastively, by trying to offer something beyond the conventions of writing in chemistry. These more capacious narratives contrast with the extremely terse form usually adopted by chemical publications. I will call the conventional presentation of chemical findings ‘thin narratives’, and the more capacious ones recommended by some chemists, philosophers and historians, ‘thick narratives’.
My distinction between the thick and the thin is modelled on the anthropologist Clifford Geertz’s (1973) celebrated discussion of ‘thick description’. Geertz gave the example, first developed by the philosopher Gilbert Ryle, of describing someone who was winking. We could describe a wink in physiological terms, as a very specific sequence of muscle contractions, or more simply in terms of what we observe directly. Or we could say something like: the man winked conspiratorially, according to a cue we had agreed beforehand, and I was delighted. The former confines its description to a single plane, that of observable physiological phenomena – Ryle (1947) called it a ‘thin’ description. The latter incorporates context and intentionality, which cannot just be read off directly, but require additional elucidation and the incorporation of considerations behind the immediately observable. It is a ‘thick’ description. By extension, a thin narrative is a sequence or productive order, all of whose materials are presented as closely interrelated and conducing to the same purpose, and which can readily be transferred from one situation to another. The thin narrative may also be presented in a formal language, which encodes relations and interactions between the entities involved in the narrative. A thick narrative, by contrast, is one which incorporates more context and considerations which may not be directly related to the explanatory task at hand, and which may be more difficult to move around.
The distinction between thin and thick descriptions carries normative implications. Geertz thought that anthropology needed thick descriptions; that its accounts would be incomplete and misleading without them. Similarly, the chemists and writers in chemistry who have called for the use of narrative form argue that understanding of chemical processes and chemists’ decision-making will be impoverished without the incorporation of elements which are usually not found in works of chemistry. But the difference between the thick and the thin has been understood in a much wider sense as well. The historian Ted Porter (2012) argues that the institutional and bureaucratic structures of modernity tend to privilege thin descriptions and to denigrate thick ones, and that natural sciences have been justified through an appeal to thinness, sometimes even changing their own thickets of practices and overlooking the persistence of skilled judgement in response to the pressure to offer thin descriptions.
I think that Porter is right to claim that thin descriptions (and thin narratives) are characteristic products of modernity, and that it has often been a chief aim of historical and sociological analysis to restore a measure of thickness. The views of chemistry discussed in this chapter are examples of arguments which have exactly this goal in mind. Nevertheless, Porter’s view requires two qualifications. First, we should not give the impression that thin descriptions and narratives are impoverished, because this risks overlooking the functions which they serve, such as providing a condensed, unitary record of chemical reactions, or a shared format for planning out new chemical syntheses. Those functions may come with considerable problems, but that does not imply they are unimportant; indeed, they are of great utility to working chemists.
Second, thickening can too easily be seen as an end in its own right, an obvious good. But, as the examples discussed in this chapter indicate, different attempts to thicken a thin narrative can have rather divergent aspirations, incorporate details of different kinds, and also make significant omissions. As a result, even thick narratives can look somewhat thin if the goal is to provide a completely comprehensive account. This can be a strength, as long as thickening in itself is not seen as a way to escape the troubles of thinness, or a way to offer the ‘whole story’ which lurks behind the thin surface.
In this chapter, I describe and analyse thin and thick chemical narratives, using the example of synthetic reaction schemes linked to a ‘classic’ synthesis from the history of chemistry: Robert Robinson’s ‘one pot’ production of tropinone, which was accomplished in 1917. In section 13.2, I present a twenty-first-century rendering of the tropinone reaction scheme, as well as its 1917 counterpart (Robinson 1917), and use work by the chemist-historian Pierre Laszlo to indicate some of the reasons that chemists may prefer to present their findings in such a thin form. Sections 13.3 and 13.4 contrast two kinds of arguments that conventional presentations of chemical results are deficient on the grounds of their thinness – those employed by chemists and those advocated by analysts of the science, respectively – and explore how such attempts played out in repeated retellings of Robinson’s tropinone synthesis. This leads me, finally, to consider some implications of thinking in terms of thick and thin narratives for historical and philosophical writing about chemistry.
Before I do so, however, I want to introduce my historical case study of a thin chemical narrative which has repeatedly been thickened. The example is drawn from the career of celebrated organic chemist Robert Robinson. Born in Derbyshire in 1886, Robinson (d. 1975) would acquire a reputation as one of the foremost organic chemists of the first half of the twentieth century and become President of the Royal Society and an advisor to the British government on a range of chemical topics, including colonial development. In 1917, Robinson achieved a synthesis of the alkaloid tropinone that significantly simplified the previous multi-stage, and therefore highly inefficient, scheme. Robinson’s scientific paper on the synthesis was published in the same year and detailed how he used counter-intuitive chemical starting products to produce tropinone at room temperature, and without any extremes of alkalinity or acidity. Furthermore, the process involved several reactions which led on from one another without requiring further intervention on the part of the chemist. These features of the synthesis led to its becoming one of the foundational works for Robinson’s reputation as a significant synthetic chemist, and to its elevation to the status of a synthetic ‘classic’ – discussed in textbooks and cited as an inspiration by chemists even now (Medley and Movassaghi 2013). As we will see in section 13.3, Robinson’s tropinone synthesis has been repeatedly retold by chemists, and was the subject of a sustained historical investigation by Robinson’s one-time student, the Australian biochemist Arthur Birch.
13.2 Synthetic Reaction Schemes as Thin Narratives
Reaction schemes are one of the characteristic ways in which organic chemists plan and record their activities; it is therefore not surprising that Robinson’s landmark publication on the one-pot synthesis of tropinone included such a scheme. Drawing on discussions by Robert Meunier (Chapter 12), Line Andersen (Chapter 19), Norton Wise (Chapter 22) and Andrew Hopkins (Chapter 4) from elsewhere in this volume, this section will discuss some of the features which make reaction schemes distinctive as thin narratives, as well as ways in which they are similar to scientific narratives found in other domains.
Figure 13.1 is taken from a 2013 reconsideration of Robinson’s ‘landmark’ synthesis of tropinone and records a reaction scheme for the synthesis according to twenty-first-century conventions. Read in a clockwise direction, starting in the top left, the scheme shows the ways in which two starting products are subjected to various operations – diluted, reacted with other chemical substances, and so on – which change them into a series of intermediate forms, which gradually become more and more similar to the desired final product (tropinone – see molecule labelled 1 in Figure 13.1). The synthesis of complex natural products can involve many hundreds of separate stages, although this version of the tropinone synthesis only involves three intermediate stages. Indeed, from a chemist’s point of view, what is striking about this reaction is that a considerable amount of change happens in only a few stages. Each stage consists of one or several structural formulae: diagrams through which chemists represent both the composition of chemical substances and their spatial arrangement; knowledge of composition and structure helps chemists to construct explanations about how chemical substances will react with one another. Stages in the scheme occur in a particular sequence of reactions, where structural formulae indicate both the protagonists of the synthesis (the chemical substances which play a part in it) and the functions which these chemical substances can play. The transition between the different steps of the synthetic sequence is indicated by straight arrows, while the intermediary reactions are animated, so to speak, by the curved arrows that join together different chemical structures and show the movement of electrons. These curly arrows, which came into widespread use in the second and third decades of the twentieth century, allow the reaction sequence to offer an indication of what is happening at a molecular level to form the desired final chemical substance.
If the reaction scheme provides an ordered sequence of chemical events leading to a single goal (the end product), it is also important to note what the scheme does not show. It does not give an indication of what happens to any chemical substances which do not play a role in subsequent stages of the synthesis, and which are treated as waste products. Similarly, the scheme does not give any indication of the process by which the sequence was arrived at. It also presents a series of operations and reactions which may occur within an organism, or in a laboratory, as though they followed on naturally from each other – the role of the human chemist in performing the synthesis does not appear as distinct from the reactions of chemical substances.
Considered in this way, it makes sense to consider chemical reaction schemes as thin narratives: ordered sequences of chemical events conducing to a single, unified end, in which human intervention is flattened onto the same plane as chemical interactions. Moreover, the reaction scheme resembles a ‘narrative of nature’, in Robert Meunier’s sense. As Meunier describes such a narrative (Chapter 12), it ‘relates several events which occur in temporal order and are causally connected’, and is structured into a beginning, middle and end; like the narratives which Meunier discusses, the reaction scheme ‘does not recount particular events, but rather a type of event happening countless times’. And the sequence appears to be self-evident: it does not foreground the role of a human experimenter or observer. In other ways, however, the sequence is rather unlike the examples which Meunier gives. It is told in a formal visual language (the structural formulae), which requires a chemical training to understand, rather than providing a neat compact set of events that are (potentially) intelligible to non-scientists. It is not that the reaction sequence cannot be paraphrased, or its events presented verbally; instead, a verbal paraphrase of the sequence of chemical events presented in the reaction scheme would be just as terse and technical as the reaction scheme, just as thin a narrative.
Here, for example, is one such verbal description (of a different synthetic reaction), presented by the chemist, historian and philosopher Pierre Laszlo:
L-Proline was esterified (12) by treating it with MeOH and thionyl chloride at 0°C, followed by Boc protection of secondary amine in dry tetrahydrofuran (THF) using triethyl amine as base at rt, furnishing (13), which on LAH reduction at 0°C in dry THF provided alcohol (14).
Unpacking the meaning of this extremely terse sentence, Laszlo argues, relies on the implicit knowledge of the chemist. He attempts two glosses of this piece of ‘chemese’. The first seeks to define the provenance of the chemical substances mentioned in the paper – indicating how they would be obtained – and to gloss the verbs, suggesting what is turning into what.
The chemical recipient of this treatment is the amino acid proline, as the (natural) L-enantiomer. It can be bought from suppliers of laboratory chemicals. Its esterification means formation of an ester between its carboxylic COOH group and the simplest of alcohols, methanol (here written as MeOH), another commercial chemical, in the presence of thionyl chloride (SOCl2), also commercial. The reaction scheme bears the instruction ‘0°C-rt, 4 h’, in other words, ‘dissolve proline and thionyl chloride in methanol, held in a cooling bath, made of water with floating ice cubes, at 0°C and let this mixture return to room temperature (rt) over four hours, before extracting the desired product’.
Laszlo goes on to unpack the sentence’s other implicit meanings, in a manner which draws them out towards the laboratory routines of the chemist:
[T]he stated ‘room temperature’ in fact has a meaning more elaborate than ‘the temperature in the laboratory’. It means ‘about 20°C’, hence if the actual room temperature is markedly different, one ought to switch on either heating or air-conditioning.
Laszlo’s commentaries give one perspective from which to unpack the sentence, working outward from the various materials employed in the experimental process to the routines of the laboratory and the chemist’s view of her workflow and the conditions in which she is working. Different explications could be given. Laszlo’s larger point is that the cognition of chemists involves associative processes, ‘molecular polysemy’, characterized by continually shifting horizons: new chemical discoveries add extra layers of association to the sentence’s existing stock of substances by positing new relations between them. Sentences such as those Laszlo analyses make no attempt, even as an aspiration, to fix the meanings of their key terms.
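The compression Laszlo describes can be made visible by rewriting his sentence in the schematic shorthand a chemist might jot down. The following one-line summary is my own sketch of the three steps; note that the protecting reagent is not named in the quoted sentence, so di-tert-butyl dicarbonate (Boc2O), the standard choice for Boc protection, is an assumption on my part:

% Sketch of the quoted sequence; Boc2O is assumed, as the quoted sentence does not name the reagent.
\[
\text{L-proline}
\xrightarrow{\ \mathrm{SOCl_2},\ \mathrm{MeOH},\ 0\,^\circ\mathrm{C}\ } \mathbf{12}
\xrightarrow{\ \mathrm{Boc_2O},\ \mathrm{Et_3N},\ \mathrm{THF},\ \text{rt}\ } \mathbf{13}
\xrightarrow{\ \mathrm{LiAlH_4},\ \mathrm{THF},\ 0\,^\circ\mathrm{C}\ } \mathbf{14}
\]

where 12 is the methyl ester, 13 the Boc-protected ester and 14 the alcohol. Even in this compressed form, each arrow presupposes the laboratory routines that Laszlo’s glosses restore.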
The use of structural formulae and of the terse language of ‘chemese’ is the reason that I think we should consider chemical syntheses, as typically presented, as thin narratives, even in their verbal form. The powerful and polysemous formal languages of organic chemistry provide a rich but also restrictive vocabulary for describing what has happened or can happen, in chemical terms – for keeping track of how chemical substances change and the reasons for thinking that they may be used to serve chemists’ purposes. Chemists’ use of diagrammatic sequences and of language brings accounts of chemical syntheses into a single plane, with all relevant chemical actions and events describable in the same terms. And structural formulae can be used not only to explain what has happened, to record synthetic achievements or to investigate synthetic pathways in living organisms; the formulae can also be used to plan novel syntheses, with the information encoded in the formulae giving a good idea of what approaches might or might not be workable within the laboratory.Footnote 3 On their own terms, such ‘narratives of nature’ are meant to be self-sufficient, a robust and portable sequence of events which can be unpacked by a skilled chemist.
My attempt to consider such terse and formalized sequences as narratives in their own right, however, also indicates their potential instability – the reasons why others might call for them to be ‘thickened’. Other chapters in this book have aligned narrative with the experiential dimension of interpreting a formalized sequence; this is the gist of Line Andersen’s discussion of mathematical proofs (Chapter 19), which Norton Wise (Chapter 22) describes as follows: ‘Reading a proof in experiential terms changes what looks to an outsider like a purely formal structure into a natural narrative for the reader; so too the experiential reading enriches the formal language of rigorous proof with the natural language of narrative, for it calls up meanings that the unaided formal language, lacking background and context, cannot convey’. If the opposition is drawn between formal language, on the one hand, and natural language, on the other, then thin narratives only become narratives when they are interpreted by a skilled reader, who is able to supply context and detail that may be absent from the plane of the formal representation itself. In the absence of a reader who possesses such ‘scripts’, reaction sequences cannot function as narratives. Even so – and I wish to insist on this – organic chemists do not simply animate the dry bones of their thin narratives with their competence, background knowledge and experience; chemists have also argued, explicitly, that the formal languages in which chemical research is presented provide an inadequate account of chemists’ reasoning and of the character of the interactions between chemical substances which they employ. I will discuss chemists’ calls for thickening in the next section of this chapter.
For now, I wish to follow Meunier’s lead and ask to what extent these thin chemical narratives might encode their origins in experimental research practices. Meunier (Chapter 12) emphasizes that each part of a narrative of nature ‘can be traced back to an episode of research’, with narratives of nature ‘emerg[ing] gradually from the research literature as facts accepted in a community’, and with the experimental aspects of ‘the methods by which the knowledge was achieved […] abandoned like ladders once the new state of knowledge is reached’. Is something similar happening with the narrative of nature provided by the reaction scheme? The answer to this question is a qualified yes: the narrative of nature is indeed related to past experimental work, but the experimental narrative retains a stronger presence in an organic chemical reaction scheme than it does in the biological narratives which Meunier examines.
We can see this with reference to Figure 13.2, which shows the tropinone reaction scheme as presented by Robinson in his 1917 publication. As before, the scheme bears features of a thin narrative: a sequence of chemical events leading to a single outcome (the tropinone molecule – see bottom right), presented in the formal language of structural formulae, with no explicit indication of the researcher’s interventions or the laboratory context. But, if we compare Figure 13.2 with Figure 13.1, we note an important difference in the way the structural formulae are presented. As Laszlo (2001) remarks, to the eye of the present-day chemist, the structural formulae found in Figure 13.2 and in similar publications of the period look like primitive attempts to capture the spatial arrangement of chemical substances. But this is a historical mirage. The way in which late nineteenth- and early twentieth-century chemists used structural formulae, Laszlo argues, was primarily to relate their experimental investigations to the edifice of organic chemistry, to situate new findings in relation to existing work, and to draw the map of relations between chemical substances. The formula ‘spelled out to its proponent a historical account of how it came to be, of how it had been slowly and carefully wrought. A formula was the sum total of the work, of the practical operations, of the inter-relating to already known compounds, which had gone into its elucidation’ (Laszlo 2001: 55). As such, the formula amounted to a kind of ‘condense[d] […] narrative’ (Laszlo 2001: xx), whose history would need to be unpacked by a skilled chemist familiar with the relevant literature.Footnote 4 In other words, the structural formulae of the late nineteenth and early twentieth centuries encoded what Meunier terms ‘research narratives’ – although once again their narrative qualities were not obvious to the non-specialist, and had to be unpacked. It is only on the basis of hindsight, Laszlo says, that present-day chemists might see the structural formulae of the late nineteenth and early twentieth centuries as continuous with those of present-day chemistry. We might even say that these historical research narratives are so thin, bound so tightly into a single plane, that their practically and epistemically significant details cannot easily be recovered by today’s skilled practitioner in synthesis.
Before examining the different styles of thickening, I want to note some of the distinctive roles which a thin narrative could play in the hands of a chemist like Robinson. The same year that Robinson published his laboratory synthesis, he wrote and published a second paper proposing that he might have found a plausible pathway for the formation of alkaloids in living plants. This claim was absent from his first paper, which instead positioned tropinone as a precursor to a number of products of commercial and medical significance. Robinson’s new claim relied on the reaction’s status as a thin narrative. That is, it was a scheme that could be picked up from one context and inserted into another, without changing significantly. Contemporary textbooks show Robinson’s speculations being reported respectfully, and alongside the proposals of other chemists; in the 1910s and 1920s, experimental methods were not available to trace the formation of chemical substances directly. This changed with the mid-twentieth-century development of carbon isotope tracing techniques; initially, Robinson’s proposal appeared to have been borne out in practice, although subsequent experimental findings cast doubt on its correctness.
Robinson maintained his distance from experimental attempts to confirm his speculation and was even a little scornful of them. The Australian natural products chemist Arthur Birch, who was at one time Robinson’s student, recalled that Robinson was reluctant to take ‘pedestrian, even if obviously necessary steps beyond initial inspiration’, and would even claim to be disappointed if his findings were confirmed. As a result, ‘if Robinson correctly “conceived and envisaged” a reaction mechanism […] he thought he had “proved” it’ (Birch 1993: 282). For Robinson, the daring of the thin narratives of organic chemistry was all-important: a way to avoid becoming bogged down in the minutiae of subsequent development.
13.3 The Pot Thickens: Chemists’ Claims
In this section, I will discuss some of the ways in which chemists have sought to thicken the thin narratives described in the previous section, beginning with arguments by the Nobel laureate, poet and playwright Roald Hoffmann. Then I will look at two other sorts of narrative thickening which chemists have employed, which proceed by emphasizing contrastive and contingent aspects of the chemical story.
Roald Hoffmann (2012: 88) argues that narrative gives a way to ‘construct with ease an aesthetic of the complicated, by adumbrating reasons and causes […] structuring a narrative to make up for the lack of simplicity’. In other words, the interactions between chemical substances which characterize chemical explanations, and the decisions of human chemists which impact on chemical research programmes, are highly particular, involving contingencies, speculations and evaluations in terms of human interest if they are to make sense. Hoffmann aligns scientific narratives with literary ones on three grounds – a shared approach to temporality, causation and human interest – and he particularly emphasizes the greater narrative satisfactions which are often found in oral seminar presentations than in published scientific papers. In drawing his distinction between narrative and information, Hoffmann quotes from the philosopher Walter Benjamin: information can only communicate novelty, whereas a story ‘does not expend itself. It preserves and concentrates its strength and is capable of releasing it even after a long time’ (Benjamin 1968: 81). In the terms which I am using in this chapter, Hoffmann offers a call for narrative thickening – for getting behind the surface of the conventional chemical article to explain the human dynamics and non-human particularities that have shaped chemical research. In Hoffmann’s view, the role of narratives in chemistry should be taken seriously as a way for chemists to be clearer about how they actually think and work (as opposed to idealizations which would present chemistry as an affair of discovering universally applicable laws). Hoffmann’s position has both descriptive and normative implications. He suggests that if we scratch the surface we will see that chemists do use narratives as a matter of course; but also that if chemists reflect on how they use narratives this will contribute to a better understanding of their work.
‘Classic’ syntheses, like Robinson’s production of tropinone, come to take on the attributes of narratives in Hoffmann’s sense. They are retold for their ingenuity and human interest, to motivate further inquiry, to suggest imitable problem-solving strategies and as part of chemists’ professional memory; some chemists also argue that they are worth revisiting repeatedly to allow new lessons to be drawn. In this sense, they are more like stories than like information, in Hoffmann’s terms. So, for example, the chemists Jonathan Medley and Mohammad Movassaghi (2013: 10775) wrote almost a hundred years after Robinson’s initial synthesis that it had ‘continue[d] to serve as an inspiration for the development of new and efficient strategies for complex molecule synthesis’. The tropinone synthesis has been retold by chemists on a number of different occasions over the past century, and these retellings have drawn out a variety of meanings from the synthesis and related it to subsequent chemical work in a number of different ways. In the process, chemists have used contrasts to emphasize different aspects of the synthesis, or tried to restore contingent historical details or aspects of context which would not be apparent from the elegant, but notably thin, reaction schemes discussed in the previous section.
A similar discontent to the one which Hoffmann expresses with the terseness of conventional chemical publications can be detected in some twentieth-century publications on organic chemical synthesis. Complex, multi-stage syntheses can take many researchers many years to achieve, but the final publication may ignore possible routes which were not taken, or which were successful but proved to be less efficient or in other ways less desirable than the final synthetic pathway. In an article from 1976, the chemist Ivan Ernest explicitly tries to challenge this tendency by reconstructing in some detail the plans which the research group made and the obstacles which led them to give up the approaches which had initially appeared promising. Rather than presenting the final synthesis as an edifice which could only adopt one form, this method of presentation emphasizes the chemists’ decision-making, and the interaction between their plans and what they found in the laboratory. And rather than presenting the structural formulae of the reaction sequence simply as stepping stones towards a pre-determined end, Ernest’s article (1976) emphasizes that each stage of the synthesis should be considered as a node, a moment when several different decisions may be possible. Like Hoffmann’s view of narrative in chemistry, Ernest’s article emphasizes contingency and the human interest of chemical decision-making in the laboratory, giving a more complex and nuanced human story about what this kind of experimentation involves. In other respects, though, it does not diverge significantly from the conventional presentation of thin chemical narratives – it is still couched chiefly in structural formulae, and it works by laying different routes alongside each other, clarifying the decisions made in the final synthesis by comparing it with paths not taken – what could have happened but did not. I call this contrastive thickening because it contributes to the scientific explanation by allowing for a contrast between the final decisions which were made and other paths which could have been taken. Every event in the narrative thus exists in the shadow of some other possibility; what did happen can be compared with what did not.
Beyond telling of the different ways in which things can happen, chemically, for the desired outcome to be reached, contrastive thickening also introduces a different way of thinking about the shape of the whole synthesis and what motivates the relations between its different stages. For example, when Robinson’s reaction scheme for synthesizing tropinone is contrasted with that proposed by the German organic chemist Richard Willstätter in 1901, contemporary chemists evoke notions of ‘brute force’ and an ‘old style’ of synthesis to describe Willstätter’s approach. Robinson’s scheme stands out, by contrast, as a far more efficient experimental methodology, and the first glimpse of a more rational approach towards synthetic planning, which is based on starting with the final form of a molecule and then dividing it up.
Contrastive thickening tries to show that the final form of a chemical synthesis could have turned out differently, but does not make significant changes to the terse manner in which chemical syntheses are presented – Ernest’s article is still narrated primarily in ‘chemese’. Contingent thickening, in contrast, proceeds by fuller narration. For instance, Ernest’s sense that conventional publications on synthesis failed to give the whole story was also cited as inspiration in the first volume of the book series Strategies and Tactics in Organic Synthesis, a collection of papers in which chemists were invited to reflect on the contingencies, human factors and tangled paths of their experimental work. The chapters adopt an avowedly narrative style, and emphasize the prolonged difficulty of synthetic work as well as its eventual achievements. Details include serendipitous discoveries in the chemical literature, and discussions of sequencing syntheses so that their more tricky or untested parts are not attempted at the end, where failure would put the previous work in jeopardy. These narrative approaches are intended to stir reflection on problem-solving, and on how chemists do not rely primarily on the formal language of structural formulae and planning in their synthetic work. They also share with Hoffmann the goal of keeping chemists motivated and keeping the less codifiable aspects of synthetic knowledge in clear view. The Harvard chemist E. J. Corey writes in his preface to the third volume of the series that
the book conveys much more of the history, trials, tribulations, surprise events (both negative and positive), and excitement of synthesis than can be found in the original publications of the chemical literature. One can even appreciate the personalities and the human elements that have shaped the realities of each story. But, above all, each of these chapters tells a tale of what is required for success when challenging problems are attacked at the frontiers of synthetic science.
In Corey’s view, it is easy to think of synthetic chemistry as ‘mature’ because it has grown more ‘sophisticated and powerful’ over the past two centuries. But the impression of maturity belies the fact ‘that there is still much to be done’ and that the ‘chemistry of today will be viewed as primitive a century from now’. As such, it is important that ‘accurate and clear accounts of the events and ideas of synthetic chemistry’ should be available to the chemists of the future, lest they be misled into thinking that chemistry has become routine. Thickening, in this contingent form, reintroduces research narratives alongside the thin narratives of nature for the benefit of the discipline of chemistry: the motivation of junior researchers and their inculcation into the culture of synthetic research.
13.4 Analysts’ Narratives: Processual and Contextual Thickening
I now want to discuss two other ways of thickening chemical narratives, which I will call processual and contextual. Whereas the thickenings discussed in section 13.3 have been developed by chemists themselves, accounts of processual and contextual thickening have been developed primarily by analysts of chemistry – philosophers, and historians and social scientists, respectively. The primary goal of these thicker accounts is not to offer a more complete record of laboratory activity in order to assist with chemists’ own activities, but rather to move beyond the plane of the reaction in selecting what requires consideration in recounting chemical processes. Processual and contextual thickenings work to shift the focus of chemical narratives – calling into question the range of humans and non-humans who should be considered as the primary agents of chemistry, the actions and motivations which are held relevant and worthy of discussion, and the locations in which chemistry occurs. These ways of thickening, furthermore, open up the notion of chemical beginnings and endings by raising questions of how some chemical substances come to be available for chemists to study, and of what happens to chemical substances after chemists have finished using them.
To start with processual thickening, then. Some philosophers of chemistry have offered ‘process ontologies’, guided by the view that philosophy should give accounts of processes and the dynamic aspects of being. As the science of transformations of matter, chemistry can be treated in such dynamic terms, which also call into question the seeming fixity of the substances which chemists employ. These arguments proceed from two related claims: first, Gaston Bachelard’s (1968: 45) view that the substances that chemistry studies require extensive purification, and hence ‘true chemical substances are the products of technique rather than bodies found in reality’. In this view, the artificiality of chemical substances used in the laboratory circumscribes the types of stuff which are amenable to chemical analysis – samples taken from the messy world are therefore to be understood to the extent that they conform to what chemists can do with their artificial materials. The second claim is that, in the words of A. N. Whitehead (1978: 80), ‘a molecule is a historic route of actual occasions; and such a route is an “event”’. What Whitehead meant was that the chemist’s molecules arise from sequences of specific actions, whether constructed in the laboratory or found outside. So chemistry deals, above all, with processes – which may be occurring on different scales – rather than with fixed substances.
In his metaphysics of chemistry, the chemist Joseph Earley builds on these insights to claim that chemical substances are historically evolved, in the manner of other evolved systems, and have a vast array of potentials, but that in practice these are subject to considerable path dependencies, as ‘[e]very sample has a history (usually unknown and untold) that specifies its current context and limits the range of available futures’ (Earley 2015: 226). In broad terms, Earley is observing that material history and institutional constraints matter for the definition of chemical substances – and this sounds like a call for thicker narratives of chemistry, which take these other factors into account. But, while his philosophical arguments can be read in this way, he also cautions that many of the relevant histories of chemical substances used in the laboratory are ‘unknown and untold’, and that to the extent their origins are unknowable it is not possible to construct narratives about them. This view suggests the need for a measure of caution concerning the extent to which narratives of chemistry can be thickened to incorporate all the relevant contributory historical factors. Although this chemical metaphysics might sound like an abstract warning, it touches on some of the factors which are described in the Strategies and Tactics research narratives – especially the impact on synthesis efforts of which materials happen to be locally obtainable.
I quoted from Birch’s prolonged investigation into Robinson’s synthesis earlier in this chapter; now I want to say a bit more about what he was trying to achieve and how attending to the contingent history of Robinson’s materials helped him to do so. Birch had trained as an organic chemist but made his professional career as a biochemist. He was extremely sensitive to the differences in method and experimental technique between organic chemistry and biochemistry, and suspicious of attempts to claim that practitioners from the two fields could talk straightforwardly to each other, without taking such differences into account. As part of this wider argument, Birch condemned what he characterized as the mythology which had grown up around Robinson’s synthesis – particularly the claim that Robinson had been inspired primarily by an attempt to imitate the natural process by which plants synthesize alkaloids. In an effort to challenge this narrative, Birch interrogated its chronology, drawing on both documentary and material evidence. He noted that Robinson had been interested in a somewhat similar synthesis some years earlier, as a result of a theoretical interest in the structure of alkaloid skeletons which he had developed in discussion with his colleague Arthur Lapworth. Robinson’s initial experimental work for a one-pot type of synthesis, Birch showed, had taken place when he was based in Sydney. Birch even succeeded in tracking down the original bottle of one of the chemical reagents which Robinson had employed in his experiments. As a result, argued Birch, Robinson’s motivations for attempting the tropinone synthesis could not be reduced straightforwardly to an attempt at bio-mimicry, and the synthesis should not be remembered as a precursor to a subsequent unification between organic chemistry and biochemistry. As Birch wrote, ‘the chemist’s natural products […] tend to mark the diversity of organisms by their sporadic occurrences, whereas the biochemist’s materials tend to represent the unity of living matter’; as a result ‘the biochemists in a search for generalities have largely ignored the chemist’s compounds’ (Birch 1976: 224). Digging into Robinson’s legacy, and locating it within the distinct material and processual culture of organic chemistry, gave Birch a way to demonstrate the tensions between different chemical subfields, their different ways of proceeding and the different entities which they considered. Birch also noted that Robinson’s programme in Sydney may have been guided in part by the difficulty of obtaining chemical supplies in the early part of the twentieth century, and chemists’ needs to improvise on the basis of the materials which were locally obtainable. As he noted, most chemical supplies ‘came from abroad and normally might take up to six months to arrive for use’, with the result that ‘[s]ynthetic programmes tended to be organized around what was already in the store,’ and ‘[m]uch early Sydney work was on natural products which grow in the local Bush’ (Birch 1993: 286).
The intention of Birch’s historical narrative is to recover different conceptual and material sources for Robinson’s synthetic work, and to caution against too close an equation between the practices of synthetic organic chemistry and those of biochemistry. He draws attention to the material constraints which bounded some of Robinson’s synthetic decision-making, but which are absent from the published research narrative. In the process, he draws attention to the specificity of the molecular cast of characters involved in organic chemical synthesis. These moves all recall the aspirations of contingent thickening, described above; but they also suggest a wider set of material and conceptual constraints which might need to be incorporated into an account of how chemists make their decisions. These wider questions are consistent with the goals of processual thickening, even though their intent is not philosophical.
Like processual thickening, contextual thickening tries to give chemical narratives depth beyond the laboratory; but it goes beyond material and processual contingencies to explore how chemists’ scientific activities might be informed by social, political, historical and environmental dynamics. This kind of thickening thus often shifts chemists away from the centre of accounts of chemistry in favour of other human users of chemical substances (the farmer who employs pesticides, the sunbather with her suntan lotion), the ways in which chemical substances interact with non-humans, and the complex, ambivalent meanings associated with human relations with chemical substances. In general, the goal of such studies is critical – to look beyond the way chemists think about their materials and the impacts of their activities, and to understand chemical substances not as ‘isolated functional molecules’, but rather in terms of ‘extensive relations’, as the historian Michelle Murphy puts it (Murphy 2017: 496). What Murphy means is that chemists’ own evaluations of the impacts of chemical substances are too limited and limiting, and are insufficiently attentive to the myriad roles which such substances play.
Again, some retellings of Robinson’s tropinone synthesis enact a kind of contextual thickening, by showing that his work was guided neither solely by scientific considerations nor solely by the material constraints identified by Birch. In Robinson’s own memoir, written late in his life, he talked about what had been happening in his laboratory when he conducted experimental work for the tropinone synthesis at Liverpool University during the First World War. This was a time when the British government had taken a great interest in the utilization of chemical waste products, and the university’s laboratories had been turned over to an effort to make pharmaceuticals from the chemical residues of manufacturing explosives. At Liverpool, they made large quantities of the painkillers beta-eucaine and novocaine by saturating acetone with ammonia; the process was improved by adding calcium chloride. Sludge produced by washing explosives with alcohol was brought from the TNT factory at Silvertown and kept in buckets underneath the laboratory benches. Robinson’s colleague, the Reverend F. H. Gornall, derived useful intermediates from these wastes and analysed their chemical properties. According to Robinson (1976: 107), ‘The improvisation of suitable apparatus required for this work, and the necessity for careful operation and control, was found to be a good substitute for the conventional courses’. Robinson was learning too, and by his own account returned to his own earlier experimental work from Sydney. Among the substances which the chemists sought to produce was atropine, an alkaloid which was closely related to tropinone and which was used in the treatment of people who had been exposed to poison gas. As Robinson recounted:
Atropine was in short supply during the First World War and the knowledge of this fact led me to recall that I had contemplated in Sydney a synthesis of psi-pelletierine from glutardialdehyde, methyl-amine and acetone. This idea was a possible extension of pseudo-base condensations and I realised, at Liverpool, that a synthesis of tropinone […] might be effected in a similar manner, starting with succindialdehyde, and tropinone could probably be converted to atropine without difficulty.
There is no evidence that Robinson was able to produce significant quantities of atropine for the British war effort, and his synthetic technique would in any case have been unable to yield them. But, in this telling, part of his motivation for returning to this synthesis at this time was that the historical and institutional imperatives brought about by wartime restrictions made the pursuit of a highly efficient synthesis more desirable.
13.5 Conclusion: Unfinished Syntheses
This chapter has drawn a distinction between the use of thin and thick narratives in organic chemical synthesis. Thin narratives allow explanations to be given in a terse form which is portable and not dependent on a particular setting or set of historical circumstances; the four styles of thickening identified here all add depth to the apparent planar self-evidence of thin narratives by exploring the role of unsuccessful lines of research, contingencies, the processes by which substances become available for chemical inquiry, or the relations between chemical syntheses and wider historical, political, environmental and material contexts.
Read alongside the other chapters of this book, I hope that the discussion here clarifies some of the ways in which scientists use narratives. As with geological features, chemists sometimes revisit past synthetic achievements, open them up and unpack their implications. Some of these implications may not have been obvious when a synthesis was first conducted, and for this reason some classic syntheses have the inexhaustible, unfinished quality which Roald Hoffmann associates with stories, in contrast with information. Of course, this attitude towards the potentials of past experimental work is not present among all chemists and is not applied to all syntheses. But when chemists do draw upon past experimental achievements or reflect on the ways in which the activities of chemistry ought to be documented, they talk quite often in terms of narrative, and with an explicit awareness of the shortcomings of conventional modes of chemical publication – with the sense that the terse formal languages of chemistry fall short in describing how chemists work and think. Much academic history and public discussion of chemical synthesis has focused on the ways in which synthetic decision-making can be made routine, guided by artificial intelligence and planned using the powerful ‘paper tools’ of chemical nomenclature and structural formulae. Although such an emphasis correctly identifies a major strand in the chemical synthesis of the last 60 years, it has also often been balanced (as in the writings of E. J. Corey, quoted above) with a sense of the abiding complexity and the role of contingency which are involved in chemical syntheses. The suggestion here has been that thinking about the difference between thin and thick narratives is a way to preserve a sense of the significance of both aspects of chemical synthesis.
In Ted Porter’s contrast between thick and thin descriptions, which I quoted in the introduction to this chapter, thickness indicates the complex, contingent, often intractable world, whereas thinness stands for attempts to corral that world into predictable shape. As chemistry deals with processes which are often complex, contingent and intractable, it is perhaps unsurprising that alongside its very robust reaction schemes there should be repeated calls for thickening – ways to put the world back in. It is important to note, however, some of the differences, and possible overlaps, between the different styles of thickening which I have identified. Because processual and contextual thickening emerge chiefly from analysts’ accounts and chemists’ historical writings rather than chemical research publications, it is tempting to see them as offering forms of narrative different from those discussed in sections 13.2 and 13.3. I will first give some reasons why we might want to draw such a division, in terms of the familiar distinction between ‘internalist’ and ‘externalist’ accounts of science, and then indicate why, even though this contrast is suggestive, it is also somewhat misleading. As a first approximation we might associate thin synthetic narratives, and chemists’ own contrastive and contingent thickenings, with internalist accounts of chemistry, which seek to understand the development of the science in its own terms, according to its conceptual and experimental developments, but without reference to a larger historical context. Processual and contextual thickenings, on the other hand, might be aligned with an externalist view of the history of chemistry, which seeks to understand what chemists do and the significance of their narratives as emerging from and feeding back into a wider array of political, social, material, philosophical and environmental considerations. We might therefore say that such externalist perspectives are proper to the narratives of philosophy, of social science, or of history – in other words, that the problems they seek to solve belong to these different disciplines, rather than to chemistry itself. So the information in Robinson’s memoir might be useful to a historian constructing an account of the relations between wartime production restrictions and innovations in chemical research, but it would be much less likely to be of use to a chemist trying to develop a new synthesis.
Chemists’ use of narratives also has implications for critical analyses of chemistry, especially the work associated with the recent ‘chemical turn’ by social scientists. The goal of these studies, exemplified earlier in this chapter by Michelle Murphy’s work, has been to show the pervasive importance and ambivalent significance of chemical substances for both humans and non-humans, and chemicals’ roles in sustaining ways of life as well as causing harms through pollution, poisoning and addiction. In the process, these studies have shifted focus away from laboratories and towards the myriad settings in which chemicals are found and play an active role. In taking the significance of chemistry away from the chemists, however, and displacing the locus of chemical study from laboratories, such studies may also fail to account for the distinctive terms in which chemists understand their science and narrate their activities. Given the density and complexity of organic chemists’ language, there are still few historical and social scientific studies which follow their distinctive practices of narrative ordering in significant detail, or which pursue the retellings of a single chemical synthesis, as I have tried to do here. There are no straightforward ways to incorporate the perspectives of the producers and users of thin chemical narratives into thicker accounts of chemistry without attempting to learn to speak ‘chemese’; at the same time, chemists themselves sometimes argue for the need to give a fuller, thicker account of their activity.Footnote 5
14.1 Introduction
Epidemics make for powerful stories. Ever since Thucydides’ account of the Plague of Athens, the epidemic story has joined the ranks of the grand tales of war, terror and devastation. Thucydides’ account of the events of a plague in the Hellenic world also gave shape to a genre of writing that has since been copied, developed and expanded by countless witnesses to epidemic events in Western history. Since then, the epidemic narrative has contributed to the chronology through which an epidemic unfolds and has become the principal source from which to infer meaning and make sense of epidemic crises. The same narrative has enabled authors to characterize the sweeping and limitless effects of epidemic events and to join together the aspects of natural phenomena, social conventions and cultural customs implicated in the distribution of plagues. The epidemic narrative, finally, has come to offer a generalized lesson, a common theme or an eternal truth that the epidemic had laid bare (Page 1953; Wray 2004).
This chapter revisits the position of the epidemic narrative within a significant epistemological transformation. At the end of the nineteenth century, writing about epidemics shifted from an emphasis on storytelling to the production of methods and instruments that would elevate an epidemic into the status of a scientific object. Written accounts of epidemic events were no longer judged on their capacity to invoke lively pictures of terror or to excel in the inference of political lessons from tragic circumstances, but were scrutinized within a field increasingly oriented towards a shared understanding of a scientific method.
As Olga Amsterdamska (2001; 2005) points out, epidemiology has a complicated history as a medical science. Without the tradition of the clinic and beyond the experimental and deductive methods of the laboratory, many of epidemiology’s early protagonists turned to quantification to defend their work’s status as a ‘full-fledged science, no different in this respect from other scientific disciplines’ (Amsterdamska 2005: 31). However, quantification and medical statistics were not the only resources required to establish the field’s authority. Amsterdamska shows how, through boundary work, epidemiologists established epidemics as ‘a collective phenomenon’, as the field’s ‘special object’ (Amsterdamska 2005: 42). In the quest to establish its unique scientific authority, the field came to rely ‘on a wide range of widely used scientific methods’ (Amsterdamska 2005: 42), which went far beyond statistics and quantification.
This chapter focuses on narrative reasoning as one of these methods, deployed by epidemiologists to account for their special object at a time when the disciplinary boundaries of epidemiology were still ill-defined. The study of epidemics required a generalist dedication to historical accounts, a reliable understanding of medical classification and the capacity to account fluently for social dynamics, while maintaining expertise in the biological variables of contagion and infection. Epidemics were primarily medical events, as they constituted the multiplied occurrence of a specific disease, and most early epidemiologists approached the subject from the vantage point of their medical career. However, since Quetelet, even the medical profession had accepted that the aggregated occurrence of disease might not resemble the dynamics of the individual case.Footnote 1 The social body is, after all, not equivalent to the individual, and the spatial and temporal patterns of a series of cases in society followed regularities of their own (Armstrong 1983; Matthews 1995).
Most of the historiography of epidemiology has looked to the quantification of epidemiological methods since the mid-nineteenth century to explain how this new object of concern took shape. Epidemics were represented in lists, tables, maps and diagrams to measure and calculate the dynamics of their waxing and waning, and medical statistics had become the dominant framework to envision the distribution of a disease within society (Magnello and Hardy 2002). Narrative reasoning, so the gospel of formal epidemiology goes, took on a secondary position, predominantly concerned with the interpretation and explanation of formalized expressions (Morabia 2013). With this chapter, I challenge the widely held assumption that narrative reasoning lost significance in the formation of a scientific method in epidemiology. Instead, I argue that narrative assumed a new epistemic function in the late nineteenth century, supporting the professional reorganization of the field and shaping what I call here ‘epidemiological reasoning’.
This chapter will turn to outbreak reports of the third plague pandemic published between 1894 and 1904 to demonstrate how epidemiologists navigated the complexity of their ‘special object’. To develop their account of epidemic events, the authors of the reports contributed to, engaged in and relied on epidemiological reasoning. The second section (14.2) will outline the nature of these reports and contextualize them within the field of medical and colonial reporting practices at the time. In the following sections, I will take in turn three aspects in which the reports’ epidemiological reasoning advanced the constitution of epidemics as scientific objects. The third section (14.3) will return to the historical dimension of epidemics, asking how epidemiological reasoning has made epidemics ‘known and understandable by revealing how, like a story, they “unfold” in time’ (Morgan and Wise 2017: 2). In the fourth section (14.4), the focus will lie on the ordering capacity of epidemiological reasoning to produce epidemic configurations. Through narrative, the authors combine, or rather colligate (Morgan 2017), empirical descriptions, theoretical projections and a range of causal theories to capture the complex characteristics of the outbreak. In the fifth section (14.5), I will revisit the question of formalization by evaluating how lists, graphs and maps were positioned within an epidemiological reasoning dedicated to possibilities, conjecture, contradictions and contingency. Narrative is the technology which allows these reports to configure epidemics as more than just a multiplication of individual cases, more than just a result of social and environmental conditions and more than just the workings of a pathogen.
14.2 Outbreak Reports of the Third Plague Pandemic (1894–1952)
The production and circulation of outbreak reports was firmly grounded in a British administrative reporting practice: the Medical Officer of Health reports. As Anne Hardy has demonstrated in her extensive work on the ‘epidemic streets’, the reports of the medical officers of health were produced from the mid-1800s within a rationale of prevention, and established the provision of ‘facts and faithful records about infectious disease’ (Hardy 1993: 7). These reports were usually written with a focus on the range of diseases to be found in a specific district or city. Some diseases had also become the subject of dedicated reports during the nineteenth century, which compared and contrasted the occurrence of cholera or typhoid fever in different places (Whooley 2013; Steere-Williams 2020). However, only in the reporting on the third plague pandemic at the end of the nineteenth century do we see the emergence of a sizeable number of comparable reports.
Each of the over one hundred reports on plague was concerned with a city or a region, usually written after an outbreak had ended. A first look at these manuscripts shows them to be highly idiosyncratic pieces of writing, perhaps as much influenced by the authors’ interests and professional expertise as by the specific local circumstances in which the outbreak occurred. However, comparing the range of reports published on outbreaks of bubonic plague between 1894 and 1904 allows for an appreciation of structural, stylistic and epistemological commonalities.Footnote 2 Over the course of the pandemic, reporting practices were neither discrete nor arbitrary; rather, authors tended to collect, copy, adapt and emulate their colleagues’ work. The authors, who were local physicians, medical officers, public health officials or epidemiologists, would write their own account of a plague outbreak with a global audience of like-minded epidemiologists, medical officers and bacteriologists in mind. Archival provenance further suggests that these reports often circulated globally and followed the vectors of the epidemic. The occurrence of novel outbreaks in Buenos Aires or New South Wales appears to have prompted local health officers in these regions to collect outbreak reports from around the world to inform their actions and to adjust their narrative. A key function of the growing global collection of reports was to integrate each local outbreak into the expanding narrative of a global pandemic.
Comparable in form and style, the narrative genre of the epidemiological outbreak report resembles the medical genre of the clinical case report (Hess and Mendelsohn 2010).Footnote 3 Like clinicians, the authors of the reports practised epidemiology as an empirical art, dedicated to inductive reasoning and correlative modes of thinking. Unlike clinicians, the epidemiologist’s scope was much less defined. Authors drew from history, clinical medicine, bacteriology, vital statistics, sanitary science, anthropology, sociology and demography. From Porto, San Francisco and Sydney to Hong Kong and Durban, the reports covered significant aspects of the location, ranging from climate trends and descriptions of the built environment to the social and cultural analysis of populations in urban or rural communities.Footnote 4 These elements were bound together to constitute the epidemic narrative, tracing how the epidemic offered new ways of ordering what had appeared before as disparate sources and disconnected information.Footnote 5 With sections moving from questions of bacteriology to mortality rates, quarantine measures, outbreaks among rodents and summaries of the longer history of plague, the narrative colligated disease, environment and population to let the epidemic emerge as a configuration of these coordinates. However, for this narrative to provide a formalized and ordered account of the epidemic – for it to become a scientific account – it was also punctuated with instruments of abstraction and formalization: tables, lists, graphs and maps.
As such, reports are understood in this chapter as a peculiar global genre of epidemiological reasoning, one ultimately concerned with producing a robust and global epidemiological definition of plague. This was not achieved just through cross-referencing and intertextual discussions of reports from different places. As records of events, data and observations tied together by a single disease in a specific place, the reports used each local incident to shape the pandemic of plague as a global object of research.
The three reports discussed below, chosen from over one hundred written on the third plague pandemic, demonstrate three interlinked aspects of epidemiological reasoning: outbreak histories, epidemic configurations and visual formalizations. I have selected one of the first written reports on the emerging epidemic in Hong Kong in 1894, a second from the sprawling and fast-developing outbreak in Bombay, India, and a final one from a South African outbreak in Durban. All three epidemic events occurred within the confines of the British Empire at the time, and were subject to scrutiny, observation and reportage by officers under imperial British command. The reports on plague should therefore also be understood with regard to the long-standing forms of reporting carried out by British colonial officers. These forms included concerns of overseas administration; occasionally reports served as legal evidence for actions taken, and they were instruments for stabilizing colonial hierarchies of power and knowledge (Donato 2018). Reports and their destinations, the colonial archives, furnished the administration with knowledge to govern territory and populations, while establishing difference and hierarchy through ‘epistemic violence’ (Stoler 2010). Especially with regard to the governance of public health in British India, administrative reporting practices have been shown to have contributed substantially to the formation of common colonial tropes, such as the opacity of the colonial city as well as the pathogenicity of foreign territory. Reporting on outbreaks of plague was therefore interlinked with evaluating plague’s capacity to destabilize colonial rule and with providing evidence about how containment measures contributed to the reinstatement – but also the failure – of colonial power (Echenberg 2007). Ultimately, all reporting on outbreaks of plague in colonies and overseas territories was driven by utopian considerations of hygienic modernity (Rogaski 2004; Engelmann and Lynteris 2020), aiming to stabilize the increasingly fragile image of Europe as a place of immunity and security against epidemic risks.Footnote 6
James Alfred Lowson, the author of the report on plague from Hong Kong in 1894, was a young Scottish doctor and acting superintendent of the civil hospital in Hong Kong by the age of 28 (Solomon 1997). He took on a key role in the outbreak, diagnosed some of the first cases and led early initiatives for rigorous measures to be put in place in the port and against the Chinese population. He remained on the sidelines of bacteriological fame as the controversy between Kitasato and Yersin unfolded, both claiming to have first identified the bacterium responsible for the plague, later named Yersinia pestis (Bibel and Chen 1976).
The second report is one of many written by the Bombay Plague Committee, which was at the time under the chairmanship of James MacNabb Campbell (MacNabb Campbell and Mostyn 1898). The Scottish ethnologist had joined the Indian Civil Service in 1869 and served as collector, administrator and commissioner in the municipality of Bombay. In 1897, he succeeded Sir William Gatacre as the chairman of the Plague Committee to encourage cooperation, prevent further riots and contribute to the reinstatement of colonial rule (Evans 2018).
Ernest Hill, the author of the report on plague in Natal, was a member of both the Royal College of Physicians and the Royal College of Surgeons, and from 1897 was appointed health officer to the colony of Natal in South Africa. He authored a number of reports on the health challenges of the colony, notably on suicide as well as malaria outbreaks, and was reportedly involved in ambitious planning to introduce and establish vital statistics overseas (Wright 2006; Hill 1904).
14.3 Outbreak Histories
Until the early twentieth century, epidemiology had been a field intertwined with historical methods and narrative accounts. The historical geography of diseases, as exemplified by August Hirsch, had substantial influence on the development of formal accounts of epidemics (Hirsch 1883). Understanding the wider historical formation of a disease was, Hirsch and his contemporaries had argued, fundamental to anticipating which diseases were confined to certain geographies, which diseases occurred with seasonal regularity and how diseases corresponded to what Sydenham had called the epidemic constitution of societies (Susser and Stein 2009). History, in short, was what gave a form to epidemics, and it was historical narrative that enabled smallpox, syphilis and phthisis (or tuberculosis) to be differentiated from plague and cholera (Mendelsohn 1998). Investigating the natural history of an epidemic disease was a powerful instrument of generalization and classification. Considering the origins, geographical distributions, stories of migration and relations to wars and famines offered a biographical form to diseases in the history of Western society (Rosenberg 1992b).Footnote 7
It is therefore unsurprising to see most reports opening with some form of appreciation of the history of plague at large. Lowson, in his account of events in Hong Kong, even apologized for his limited access to relevant historical scholarship on the plague. However, revisiting what he had available in Hong Kong, he delved into a historiographical critique of the limited state of scholarship on Asiatic plague history. Knowledge of the historical geography of plague was for Lowson essential in considering how plague might have arrived in Hong Kong from Canton. The Cantonese outbreak had reportedly also begun in 1894, in a region where plague might have been endemic for some time (Lowson 1895: 7).
Similarly, Ernest Hill dedicated the first chapter of his South African report to the history of plague in order to emphasize three characteristics of the disease, known from extensive scholarship. He noted its ‘indigenous’ quality, as the epidemic appeared to persist historically in particular localities; he accounted for a predictable periodicity of outbreaks; and he included the fact that epidemics appear ‘interchangeable’ between men, rats and mice (Reference HillHill 1904: 5–6). From these generalized historical qualities, Hill then inferred a short history of the most recent outbreaks preceding the events in Natal, beginning in Hong Kong and moving through a series of outbreaks in India, Australia and Africa, until the historical arc arrived in 1901 in the Cape Colony, from which the disease had most likely spread to Natal in 1903.
The report from India, however, does not refer to the recorded history of plague over previous centuries, but offers a different, perhaps more pertinent, account of outbreak history (Reference MacNabb Campbell and MostynMacNabb Campbell and Mostyn 1898). Where the grand historical narratives invite generalization about plague, the repetitive chronologies, or what I call here outbreak history, emphasize a different register of historical reasoning. Without much preamble, the report gives a month-by-month overview of the development of the epidemic from July 1897 to April 1898, continuing from previous reports that account for the development of the epidemic from its beginning in August 1896 in Bombay (Reference EvansEvans 2018; Reference GatacreGatacre 1897). Monthly summaries of the epidemic constitute by far the largest section of this report, and each monthly vignette cycles through aspects that the plague committee considered important to record over time in the epidemic diary. Rainfall, mortality and sickness, relief works, staffing, quarantines and the migration of people in and out of Bombay were recorded monthly, each enclosed in a short narrative description. The entry for December 1897, for example, marks the beginning of the second outbreak and describes the reasoning for the ‘segregation of contacts’:
In early December the arrival of infected persons in Bombay, and in many attacks an increase of virulence and infectiousness, made it probable that at an early date the Plague would develop into an epidemic. To prepare for an increase in disease, two measures received the consideration of the Committee. These were the separation of Contacts, of the sick man’s family, and the vacating of infected or un-wholesome houses, with the removal of the inmates to Health Camps.
For some months, miscellaneous events such as riots or house inspections were added. But overall the author’s choice of structure emphasized the temporal characteristics of the epidemic, offering a sense of how circumstances, case numbers and reactive measures changed over time. August 1897 saw plenty of rainfall, with 15.59 inches, and a moderate number of 83 new plague cases. Relief works were required in August, as it was noted that ‘the city was infested with numbers of starved idlers whose feeble condition, predisposing to plague, was a menace to public health’ (Reference MacNabb Campbell and MostynMacNabb Campbell and Mostyn 1898: 4). Quarantine was established on sea routes to prevent the importation of plague, and the total movement of people in and out of the city recorded an excess of 15,224 departures. In November of the same year, rainfall had been zero, while plague cases rose by a dramatic 661; relief works were in steep decline as the movement of people into the city also continued to decrease.
The history of plague crafted in the Bombay report was structured to deliver a picture of the temporal dynamic of the epidemic within a complex configuration. Monthly summaries provide a granular view of the variability of case numbers and of the changing climatic, social and political conditions in which plague emerged and thrived. This chronological reconstruction of the epidemic was entirely invested in the temporal dynamic of the outbreak.Footnote 8 Narrative gave a sense of the beginning, the waxing and the waning of the disease while integrating quantifiable indicators such as case numbers, rainfall and immigration as well as dense descriptions of what the committee perceived to be mitigating measures (poverty relief) and exacerbating circumstances (the immigration of homeless people).
Outbreak history, which considered the series of events that structured the local outbreak from its beginning to its end (if it had been reached), was a key component of reports of the third plague pandemic. Within epidemiological reasoning, this kind of temporal characterization was dislodged from the grand historical portraits of plague. While the latter were concerned with the settled story of what plague was, and how the contours of the epidemic’s biography aligned and criss-crossed with sections of the history of the Western world, the former provided a lens for investigation and open-ended speculation. The historical arc of plague provided a hook, a larger, global narrative within which the report’s account had to be situated, whereas the outbreak history offered the opportunity to bring the many facets that contributed to the local outbreak into a temporal order.
In contrast to that broad temporal arc, Lowson, in Hong Kong, dedicated only a small section explicitly to the ‘time of the outbreak’ (Reference LowsonLowson 1895: 30). A sense of the chronology of the Hong Kong outbreak can, however, be traced through all of Lowson’s thematic sections. In his discussion of climatic influences, he reasoned on ‘the increase of the disease after the rainy season’ (Reference LowsonLowson 1895: 5), and in the ‘Administrative’ section, he provided day-by-day details on how staffing levels at the hospital were arranged and adapted to match the dynamic of the epidemic (Reference LowsonLowson 1895: 26). Lowson’s section dedicated to statistics conveys a sense of the sudden growth and then quick slump in case numbers through June and July 1894. Overall, Lowson was eager to impart a picture of the Hong Kong plague as a sudden incident that emerged in April 1894 as ‘people were reported fleeing from Canton on account of the plague’ (Reference LowsonLowson 1895: 2) and which was expected to end with the strict observation of a list of recommendations provided by Lowson to improve the sanitary state of the city’s worst dwellings (Reference LowsonLowson 1895: 26).
Where Lowson let varied aspects of the outbreak chronology unfold in parallel, section by section, Hill, in the colony of Natal, structured his report very explicitly around the chronology of the outbreak, relating its ‘origin’, its ‘course’ as well as its ‘spread’ and ‘limitation’, each told in a dedicated section. The plague story of Natal began in the ‘first weeks of December’ 1902, when ‘the disease was found to be causing a heavy mortality among rats over a roughly triangular area’ at the Veterinary Compound (Reference HillHill 1904: 8). However, one month later, rats infected with plague were found in a produce store in the middle of Durban. Suspecting that the disease had been imported, Hill charted the epidemic’s distribution over time and space, as it spread to five or six further areas in Durban where it prevailed for some time. To characterize the temporal ‘course of the epidemic’, Hill utilized the metaphor of ‘water spilt on a dry surface: a continuous forward progression with occasional branching off shoots, and now and again a return flow’ (Reference HillHill 1904: 26). After a detailed discussion of the relation between plague in rats and humans as well as in white and (what Hill described as) native inhabitants, he closed his chronology with a detailed description of the local measures put in place to control and end the outbreak.
14.4 Epidemic Configurations
The historian of medicine Charles Rosenberg identified two conceptual frameworks through which epidemics – his case was predominantly cholera – were explained until the late nineteenth century. The first, configuration, emphasized a systems view in which epidemics were explained as ‘a unique configuration of circumstances’ (Reference RosenbergRosenberg 1992a: 295), each of which was given equal significance. Communal and social health was imagined as a balanced and integrated relationship between humankind and environmental constituents, in which epidemics appeared not only as the consequence, but also as the origin of disturbance, crisis and catastrophe. Rosenberg’s second framework, contamination, prioritized particular identifiable causes for an epidemic event. Where configuration implies holistic concepts, the contamination perspective suggested a disordering element, a causa vera, suggestive of reductionist and mono-causal reasoning. As Rosenberg emphasizes, both of these themes have existed in epidemiological reasoning since antiquity, but it is particularly in the late nineteenth century, with the emergence of bacteriological science, that we can see these mutually resistant themes harden into polemical dichotomies.
Plague reports, however, did not neatly fit within this antagonism. Despite the successful identification of the plague pathogen in 1894, and despite historiographical claims of a subsequent laboratory revolution (Reference Cunningham, Cunningham and WilliamsCunningham 1992), the epidemic did not lend itself to a reductionist attribution of cause and effect between seed and soil (Reference WorboysWorboys 2000). As the previous section illustrates, understanding the puzzling configurations on the heels of the introduction of the contaminating pathogen was subject to much deliberation in plague reports. The texts’ capacity to integrate questions of contamination and systems of configuration without adopting deterministic models is what I would propose here as a second advantage of epidemiological reasoning. Narrative was essential for a kind of reasoning that offered some breathing space around deterministic theories of cause and effect, while not resolving the question of cause altogether.Footnote 9
This quality is perhaps best observed in the more speculative sections of the reports, where narrative reasoning enabled conjecture and allowed for contradiction. In stark contrast to the sober empirical tone of historical chronology, the reports engaged in intriguing ways with theories of distribution and transmission of plague. Many historians before me have shown that these factors were subject to heated global dispute (Reference EchenbergEchenberg 2007; Reference LynterisLynteris 2016). The return of plague as a global menace, no longer confined to historical periods as a ‘medieval’ disease, challenged as many convictions about ‘hygienic modernity’ (Reference RogaskiRogaski 2004) as it supported spurious theories about racial superiority within colonial occupation and exploitation. Each of the authors of our three example reports offers a range of idiosyncratic theories attempting to arrange their observations within available causal concepts. To shed light on the circumstances under which plague moved through the communities of Hong Kong, Bombay and Natal, Lowson considered infection through the soil, the Bombay Plague Committee discussed the problem of infectious buildings, and Hill defended the rat as a probable vector of the disease.
The soil had been, as Christos Lynteris recently argued, a ‘sanitary-bacteriological synthesis’ (Reference LynterisLynteris 2017). Removed from traditional miasmatic understandings of contagion as an emanation from the ground, the soil became suspicious as a plausible source of infection as well as a reservoir for plague’s pathogen. Lowson, who, like many of his contemporaries, thought that implicating rats as the cause of plague was ‘ridiculous’ (Reference LowsonLowson 1895: 4), instead dedicated a full section to infection via the soil. The soil was a likely culprit, he argued, as it explained the geographically limited distribution of plague in the district of Taipingshan. With a vivid description of the living conditions of an area mostly occupied by impoverished Chinese labourers, Lowson drew attention to ‘filth everywhere’, ‘overcrowding’, the absence of ‘light and ventilation’ and basements with floors ‘formed of filth-sodden soil’ (Reference LowsonLowson 1895: 30).
Lowson’s account of the environmental configuration in Hong Kong was populated with vitriolic and racist descriptions of Chinese living conditions. Overcrowding, filth and the poor and damp state of houses, basements and stores were to him the driving factors of the plague, while latrines were particularly suspicious, as they were ‘used by the bulk of the Chinese population’. The danger ‘to every healthy person who went into the latrine’ could be assessed through a quick ‘glance’ (Reference LowsonLowson 1895: 28). Remarkable here is not his relentless anti-Chinese sentiment, which was of course a common component of British rule in Hong Kong, but the seamless integration of bacteriological and sanitary perspectives into his reasoning.
‘Predisposing causes’, as Lowson qualified his perspective, assumed political urgency as authorities concerned themselves with the future of the district. Beyond burning the district to the ground and destroying the squalid habitations, was it also necessary to remove an entire layer of soil? After consultations with bacteriologists and a series of experiments, Lowson came to the conclusion that the soil was innocent and that common sense should prevail instead. Resorting to his racist convictions, he concluded that the most ‘potent factor in the spread of the epidemic’ could be found in the ‘filthy habits of the inhabitants’ of Taipingshan (Reference LowsonLowson 1895: 32).
A similar reasoning about infective environments structured the writing of the plague committee in India. As a second line of defence, after patients had been relocated to hospitals and populations evacuated to quarantined camps, the remaining houses and buildings were perceived as a suspicious and potentially dangerous environment. This was reflected in extensive discussions about the need for thorough disinfection. Fire, as the report stated, was the ‘only certain agent for the destruction of infective matter’, but its application was considered too risky. Based on undisclosed experience from previous outbreaks, the local committee chose to use perchloride of mercury, followed by thorough lime-washing (Reference MacNabb Campbell and MostynMacNabb Campbell and Mostyn 1898: 65). Yet evidence of the beneficial effect of such operations was difficult to obtain. Previously disinfected premises were, as the author states, not protected against the reintroduction of plague through vermin and people. Some disinfection officers had cast doubt on the benefit of lime-washing, as bacteriologists had reportedly shown that bacteria thrive in alkaline environments, such as the one provided by hot slaked lime. Regardless, the committee held firm against this contradictory perspective and insisted on continuing lime-washing operations, despite the death of ‘two or three limewashing coolies’, as its use following other means of disinfection was ‘invaluable for sweetening and brightening up the rooms’ (Reference MacNabb Campbell and MostynMacNabb Campbell and Mostyn 1898: 66).
Six years later, in Natal, Hill needed to consider a very different question when explaining the distribution of plague. The rat had by then become the most likely vector in the distribution of the disease, and observations of symptoms in rats were no longer the subject of myth but had moved to the centre of theories regarding the epidemic’s aetiology. Accordingly, Hill discussed a series of cases which seemed to indicate clearly that plague in rats was a precursor to human cases. Grain and produce stores, a railway locomotive shop and barracks were introduced as scenes in which rat cadavers had been found, collected and tested positive for plague before cases among the human occupants of the same structures were reported (Reference HillHill 1904: 77). However, eager to deliver a balanced view, Hill also offered cases ‘of the opposite’. He reported on an employee at the same barracks who, despite being contracted to collect and destroy rats on the premises, never once suffered from plague, and he included a detailed description of rat-proofing works in which dozens of infected rats were encountered and which were ‘carried out without any precaution, and yet for all that fortunately not one of the persons so engaged was attacked by the disease’ (Reference HillHill 1904: 78).
14.5 Epidemiological Reasoning and Visual Formalization
Each of the reports contains formalized representations of epidemic outbreaks, such as tables, graphs and maps. With this third section, I return to the initial question of how we might position narrative reasoning within the more common perception of the field’s trajectory towards quantification and mathematical formalization. In plague reports, medical statistics and maps take on a significant role in supporting, and at times illustrating, narrative. Importantly, throughout the examples cited here, as well as within most of the remaining reports, little effort is devoted to the explanation and interpretation of statistical representations or spatial diagrams.
Lowson dedicated a short section to quantifiable data, which he entitled ‘Statistical’. Rather than offering a characterization and interpretation of the aggregated case numbers per hospital and across nationality and age groups, his writing was predominantly concerned with reasons that undermined the reliability of the listed numbers. In reference to a table of cases and mortality in different nationalities, he did not discuss or analyse the variable caseloads in the listed populations, nor did he make any effort to interpret the highly suggestive picture of mortality rates. Lowson did not use the visualized data to draw inferences; the numbers appear to be listed to confirm the colonial framing of the outbreak as a Chinese issue, which had already been established through narrative. The table nonetheless seems to have been useful to Lowson as a rhetorical device to strengthen yet another colonial trope: the lack of reliable data, he argued, was due to the invisible and unaccountable burial practices that emerged as a consequence of corpses left in the street.
The report from Bombay shared a similar agnosticism towards formal data in the characterization of the epidemic. While the repetitive structure of the chronological narrative, with its regular references to weather, migration and control measures, appears to adhere to a formal scheme, there are no tables or lists within this lengthy description of a year of plague in the city. A substantial formalization of the epidemic’s account can, however, be found in a separate set of documents that accompanied the report. The portfolio consists of a map of the island of Bombay; a second, similar map, now inscribed with detailed information on the epidemic; a complex chart of the epidemic’s case rates; as well as plans for a hospital and an ambulance. The first object of interest is the chart, in which daily plague mortality from June 1897 to April 1898 was plotted together with data on the usual mortality, temperature, population, humidity, wind velocity, wind direction and clouds (see Figure 14.1). (For further details on data collection, see Reference MacNabb Campbell and MostynMacNabb Campbell and Mostyn 1898: 213–214.) According to the report’s authors, the chart was developed to mount further evidence against ‘some of the theories freely advanced regarding the definite influence of temperature, humidity, wind and clouds on mortality’ (Reference MacNabb Campbell and MostynMacNabb Campbell and Mostyn 1898: 214). Intriguingly, rather than serving as an instrument of generalization, the chart takes on the opposite function, forestalling misleading and simplistic causal theories of plague as a disease of climatic circumstance by demonstrating the fallacy of such correlations.
The second visual representation of interest attached to the report from Bombay is a ‘progress map’ of the epidemic (see Figure 14.2), plotting the course of the epidemic from September 1897 to the end of March 1898. Each section of the city had been marked with a circle when it became epidemic, and each circle was shaded to indicate the months in which the outbreak occurred, based on granular collection of data on ‘actual cases from house to house’. While the report’s authors saw the map as evidence of an improved overall picture of the epidemic compared with the previous year, its relation to the narrative account within the report requires a few further considerations.
First of all, the map was designed to reinstate the image of chronology previously developed in the narrative sections of the report. It illustrated inferences drawn in writing rather than opening a new space of geographical exploration. Second, the map served to visualize the ‘progress’ of plague, invoking the image of sweeping coverage, in which the flow of contagion becomes as visible as the obstacles that were put in place to contain the epidemic.Footnote 10 Third, within the form of the administrative report, the map constitutes a remarkable picture of granular insight, which exposes the colonial urban space, through the lens of its epidemic predisposition, as a radically transparent, controlled and contained space (Reference ShahShah 1995).
With maps like these, epidemiologists were able to deliver a two-dimensional abstraction of the complex relations of a plague outbreak. As Tom Koch has written, such maps should not be read as representations of the outbreak, or as pictures of research results (Reference KochKoch 2011). Rather, he emphasized their use in combining data and theories, creating a visual context in which theories could be tested. The ‘progress map’ enabled a theoretical exploration of the relationship between the temporal dynamic of the epidemic and its place, following the rationale outlined in the report’s chronology.
In South Africa, Hill used a quite similar map to combine temporal and spatial coordinates in his attempt to show the ‘marked correspondence between rat plague and human plague’ (Figure 14.3). His map demonstrated that in areas where plague cases were rife, rats with plague had been found, while in areas without registered cases, rats too were unaffected (Reference HillHill 1904: 37). But this neatly mapped data could not ascribe a causal direction to the distribution of plague and infections between rats and humans, as many sceptics of the rodent-vector theory continued to argue. To support and indeed to strengthen the theory of the rat as a principal vector, Hill returned to narrative speculation about the professional occupations of human plague cases. Over 22 per cent of infected people were employed in grocery stores or stables where rat plague had been shown to reside. Here, in this focused line of argument, the map assumed the status of evidence in support of his causal theory: as almost 50 per cent of cases stemmed from premises adjacent or connected to such stores and stables, Hill concluded that ‘the most important agency in the dissemination of plague was the rat’ (Reference HillHill 1904: 39).
The tables used by Lowson in Hong Kong, as well as the charts and maps included in the reports from Bombay and Durban, have one aspect in common: they were used to illustrate, accompany and reinstate arguments and inferences already made in narrative form. The visualizations were not included to lift empirical observations up to a more generalizable state, nor were they used to replace the prevailing picture of uncertainty and conjecture with unambiguous representations of causal theories. All these authors raised doubts about the reliability of the data that went into the development of the tables, charts and maps and thus qualified the status of such visualizations as temporary, exploratory and experimental rather than definitive.
Within epidemiological reasoning, this precarious status of formal representation was neither derided nor seen as problematic. Particularly as these reports were concerned with the observation and explanation of an epidemic outbreak, their authors aimed to sustain the muddy ground between correlation and causation rather than to resolve the resulting account either into radical contingency or into simplistic mono-causality. The visualizations in this period maintained a dual position – as diagrams to formalize the temporal dynamic as well as street maps to visualize the epidemic on the ground (see Wise, Chapter 22). Narratives allowed the authors to convey a sense of correlation and causal implication, as they explained why and under which conditions a series of cases assumes epidemic proportions. Narrative focused on the crucial questions, which were at the same time the most difficult to answer succinctly: how mortality rates were skewed by social behaviour, how the disease dynamic unfolded in relation to climatic or sanitary conditions, and whether the parallel occurrence of disease in rodents and humans amounted to a causal theory once professional occupation was considered. The visual ‘polemics’ of graphs, maps and charts were not only mistrusted; their misleading determinism required framing and containment within the possibilities that narrative conjecture raised.
14.6 Conclusion
In this chapter, I have revised perspectives on early twentieth-century epidemiology, a period that has been seen as one of quantification and medical statistics. Contrary to this historical account, I have introduced the outbreak report as a narrative genre and as a source through which to consider the emergence of epidemiological reasoning. In this narrative form, epidemics retain their character as complex phenomena, which could never fully be understood through the narrow lens of bacteriology, the limited perspective of the vital statistician or the diagnostic point of view of the clinician. Epidemiological reasoning set the groundwork for the development of epidemiology as a unique scientific practice, at a remove from the clinic and the laboratory, but dedicated to colligating an endless array of material, social and biological aspects.
Historical narration emphasizes the temporal nature of the epidemic as an object of research. The rhythm, patterns and dynamics of epidemics assume significance in the writing of the reporting authors, as they seek to account for the temporal shape of plague outbreaks. Crucially, epidemiological reasoning distinguishes between what would later be called the micro-histories of outbreaks and the macro-histories of disease biographies, to evaluate and to scrutinize their relations.
Beyond the sober empiricism of historiography and chronology, the reports also offer space for the negotiation of causal theory. Assumptions about contagious soil, spaces and rodents are often brought forward without robust justifications, strong experimental evidence or academic rigour. Rather, narrative epidemiological reasoning sustains the epidemic configuration as a series of disparate factors forced into relation by the epidemic event, allowing its authors to speculate about their correlation without losing sight of a probable causal inference.
The value of conjecture and the capacity to maintain uncertainty between correlation and causation assumes prominence when the narrative is contrasted with the blunt pictures of epidemics derived from tables, lists, graphs, charts and maps. Within the epidemiological reasoning of reports of the third plague pandemic, these representations of quantifiable aspects were framed in a rhetoric of unreliability and misleading mono-causality. Rather than instruments of standardization and generalization, visual formalizations took on a role of expressing theories, testing hypotheses and exploring spurious inferences.
As a practice of empirical observation, the reasoned argument about epidemics remains deeply indebted to the epidemic narrative as a form of story-telling. However, charged with the formalization of a scientific epidemiological discourse, the narrative in outbreak reports also begins to shape the epidemic as an object of knowledge structured by historical contingency, theoretical multiplicity and a rather hesitant formalization of causes and determinants. At the beginning of the twentieth century, it is in epidemiological reasoning, rather than in the formalization of medical statistics and mathematical formulae, that the epidemic emerges as a versatile point of reference to think through and beyond the boundaries of the clinic, the laboratory, the population, the city and an increasingly fragile colonial world order.Footnote 11
15.1 Introduction
Automation is all about representation and representation is always a political project. In order to hand off a given task to a computer, that task must first be reconceived and reformalized as something that a computer can do, translated into its languages, its formalisms, its operations, encoded in its memory.Footnote 1 In service of those transformations, decisions have to be made about what is important, what will be lost in the translation, whose needs or goals will be prioritized. This chapter explores two influential attempts to automate and consolidate mathematics in the second half of the twentieth century – the QED system and the MACSYMA system – and the representational choices that constituted each: the languages of mathematics had to be translated into the languages and formalisms of computing; relatedly, mathematical procedures, like proof verification or algebraic simplification, had to be translated into computer-executable operations; and decisions had to be made about how best to formalize mathematics for automation, with what foundational logics, rules and premises.
MACSYMA and QED developers made very different representational choices and they used narratives to frame those choices. Marc Aidinoff has observed that historians often set out to unearth the ‘hidden politics’ of technological systems that are framed by their developers or users as value-neutral, objective, apolitical. He argues we should also ‘listen to people when they tell us what, and who, they prioritized’, we should attend to ‘the political, as it lies on the surface of technology, as actors directly described it’ (Reference Aidinoff, Abbate and DickAidinoff 2022). This chapter attempts to do just that by focusing on the narratives with which QED and MACSYMA were framed in order to make sense of the approaches to automation they represent, and the animating visions of mathematics and culture at work underneath.Footnote 2 These narratives were not just stories, extraneous and external to the systems. Nor were they post hoc, developed to explain choices that had already been made. They mapped directly onto and informed technical development and design decisions. They also mapped onto practice – the representational choices framed by these narratives corresponded with cognitive realities – how users would have to think about and do mathematics with these systems.Footnote 3
As such, the narratives that framed each project were both political and epistemic.Footnote 4 They were foundational myths that advocated for the consolidation and automation of existing mathematical knowledge so that the computer could take over certain elements of mathematical labour – from algebraic simplification to proof checking – and in so doing open up new possibilities for knowledge-making. Mathematicians in the future, it was proposed, would be able to see new things, solve new problems and ask new questions with automated repositories of what was already known in hand.Footnote 5 Neither QED nor MACSYMA fulfilled their foundational myths, however. They were utopian narratives, at the intersection of political and epistemic imagination. Throughout the second half of the twentieth century, there was genuine uncertainty about what kind of tool the modern digital computer would turn out to be, what its epistemic and cultural limitations and possibilities were. The narratives explored here served to attribute meaning, possible futures and cultural values to mathematics as it would be made manifest in this new and undetermined technology.
15.2 Political Choices in Automation
The QED system, whose development began with an anonymously authored manifesto in 1994, was an attempt to combat the ‘tower of Babel’ its developers perceived in the automation of mathematics which had, throughout the 1970s and 1980s, involved a proliferation of ‘incompatible reasoning systems and symbolic computation systems’ that were inefficient, redundant, cacophonous, and that threatened mathematics’ traditional claim to universal truth (QED Manifesto 1994: 242). The QED Manifesto accordingly called for the translation of mathematics into a single formal and computational system, ‘that effectively represents all important mathematical knowledge and techniques’ and that conforms ‘to the highest standards of mathematical rigor, including the use of strict formality in the internal representation of knowledge and the use of mechanical methods to check proofs of the correctness of all entries in the system’ (QED Manifesto 1994: 238). It was to be a ‘monument’, gathering together, verifying and unifying mathematics, the ‘foremost creation of the human mind’. Writing in the wake of the Cold War, and amid the rise of American liberalism, the authors of the Manifesto proposed that the system would help ‘overcome the degenerative effects of cultural relativism and nihilism’ (QED Manifesto 1994: 239–240). They lamented the perceived loss of ‘fundamental values’ that the end of the Cold War and the rise of liberalism signalled and saw in mathematics a uniting and universalizing possibility.
QED would bring mathematics together by making it all the same – by formalizing it within one ‘root logic’, the same rules and foundations at work throughout. The Manifesto incorporated a narrative of ‘Babel’ and of the loss of shared cultural values in order to align its authors’ project with an ideological goal: they wanted to use the universality of mathematics to reinforce ‘fundamental values’ in the face of cultural difference. The home of the project was the Argonne National Laboratory (where some of the anonymous authors were based). This was an American government and military funded, Department of Energy hosted, effort to assert ‘universal truth’. But their project highlights that ‘the universality of mathematics’ is itself a construct. QED would make mathematics universal by demanding that different visions, approaches, logics and techniques be put into one formal and technological system. Anything that wasn’t or couldn’t be reformalized in this way would be ‘outside of mathematics’, excluded from the centralized system, from the monument to truth. The corresponding commitment to shared fundamental cultural values is similarly normative – values will only be universal and shared when everyone has been convinced (or forced) to adopt them.
The authors of the Manifesto were right about Babel in mathematics automation. Since the early 1960s, there had been a proliferation of attempts to automate different parts of mathematics, and the resulting systems did not conform to shared formal or computational specifications. Some of the ‘cacophony’ resulted from the fact that system developers were building from scratch, without collaboration or communication with other system developers. Some differences were the result of direct competition between them. But some of the formal and representational pluralism existed by design, including in the second case to be explored in this chapter.
The MACSYMA system, developed at Massachusetts Institute of Technology (MIT) between the mid-1960s and the early 1980s during the Cold War, was among the most influential early computer algebra systems. It was designed with multiple representational schemes, multiple logics, on purpose, because the developers believed this would make it more useful to practising mathematicians and mathematical scientists. MACSYMA, too, was meant to be a centralized, consolidated, automated repository of existing mathematical techniques – a toolkit mathematicians could use in order to spare themselves the time and effort of learning and executing those techniques for themselves. But MACSYMA developers believed that the best way to automate and consolidate mathematical knowledge was with as much heterogeneity and flexibility as possible. They wanted to bring mathematics together in pieces, stand-alone modules that each operated according to its own logic, its own internal design. This, they believed, would create a more accurate and more useful encoding of mathematical knowledge that would reflect and respect the pluralism of mathematical communities.
In an article explaining the representational choices one must make in the automation of mathematics, MACSYMA developers used political language. In a section called ‘The Politics of Simplification’, Joel Moses (a lead MACSYMA developer) described these choices in terms of how much freedom they afford the user, acknowledging that user freedom almost always adversely affects efficiency (Reference MosesMoses 1971). There are many different but equivalent ways that mathematical relations can be expressed, and mathematicians choose particular expressions because they are convenient to work with in a given context. But what is convenient for a mathematician on paper may not be efficient on the computer where very different constraints and economies, of memory and operations, are at stake.
For example, even simple addition can lead to trouble on the computer. Consider the sum of a series of numbers [1] \(S = x_1 + \dots + x_n\). In computers, numbers are typically stored in memory using a fixed number of bits, and ‘real numbers’ are represented in a format called floating point. Floating-point schemes, however, struggle to represent both very large and very small numbers. As such, for the purposes of automation, when very large numbers may be involved, it can be simpler to work in ‘log space’, where the computer stores and operates on the logarithms of numbers rather than the numbers themselves, because these require less memory. Incidentally, the capacity to simplify problems by calculating in log space is what made tables of logarithms so valuable in the nineteenth century, before automatic calculators. Writing \(y_i = \log x_i\), the log-space representation of the sum of \(x_1\) to \(x_n\) can be computed by converting out of log space (by exponentiating), computing the sum of the regular representation of the numbers, and then taking the log again, as in [2] \(\log S = \log\big(e^{y_1} + \dots + e^{y_n}\big)\). Expression [2] calculates the same value as [1], but works in log space, and so is often more efficient for computation. But, on the computer, it can be even more efficient to represent this expression as [3] \(\log S = M + \log\big(e^{y_1 - M} + \dots + e^{y_n - M}\big)\), with \(M = \max_i y_i\). [2] and [3] are equivalent, but how could [3] possibly be more efficient than [2]? It has this extra term, M, added and subtracted throughout. [3] is called the ‘log-sum-exp’ trick, and it is a way of computing the sum of a series of numbers in log space without gigantic intermediate calculations that could exhaust computer memory. While M complicates the expression, it simplifies the computation by ensuring that the exponentiated terms are sufficiently small to be represented in available memory. But this way of looking at and working with sums may be counter-intuitive or difficult for a human user, who may nonetheless be required to input expressions in this form, or to recognize and interpret them on the screen, if sums have been implemented in this way in the system they are using. In this and so many other cases, what is easier and more efficient computationally may not be what is easiest for the mathematician.Footnote 6
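A minimal sketch of the trick in Python may make the trade-off concrete. This is a modern illustration rather than MACSYMA code, and the function name and example values are my own:

```python
import math

def logsumexp(log_xs):
    """Given the logs y_i = log(x_i), return log(x_1 + ... + x_n) via
    expression [3]: M + log(sum of exp(y_i - M)), with M = max of the y_i."""
    M = max(log_xs)  # shift so the largest exponent becomes zero
    return M + math.log(sum(math.exp(y - M) for y in log_xs))

# Logs of numbers far too large to represent directly as 64-bit floats:
log_xs = [1000.0, 1000.5, 999.0]
print(logsumexp(log_xs))  # about 1001.10, with no overflow at any step

# The naive route via expression [2] fails immediately, because
# math.exp(1000) exceeds the largest double-precision float (~1.8e308):
# math.log(sum(math.exp(y) for y in log_xs))  # raises OverflowError
```

The shift by M is exactly the ‘extra term’ described above: it complicates the expression on paper, but it keeps every intermediate value small enough to be represented in memory.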
Typically, the more representational flexibility a user has, the more ‘under the hood’ processing needs to be implemented by developers to translate inputs into a form that the system was set up to manipulate. A ‘user-friendly’ system might allow a user to input simple expressions like [1] and, ‘under the hood’, the computer could convert them into the more computationally efficient forms in [2] or [3] before executing, and then convert back when displaying a result. But these conversions also cost computing resources, so more rigid designs demand that users become accustomed to working with, recognizing and generating computer-oriented representations themselves. This problem – how to implement and represent mathematical expressions and operations efficiently in memory, how users could input and work with mathematical expressions and operations, and how much work was needed to translate between the two – is a core problem for the automation of mathematics. These are the representational choices involved in any automation effort, and these are the choices MACSYMA developers framed through political narrative.
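To illustrate the cost of such a translation layer, a hypothetical ‘user-friendly’ wrapper around the sketch above might look as follows; the user supplies and receives ordinary numbers, while the conversions into and out of log space happen under the hood:

```python
def sum_via_log_space(xs):
    """Accept ordinary positive numbers, sum them internally in log space
    (reusing logsumexp from the sketch above), and convert back for display.
    Each conversion costs extra operations: the price of user freedom."""
    return math.exp(logsumexp([math.log(x) for x in xs]))

print(sum_via_log_space([2.0, 3.0, 5.0]))  # about 10.0, up to rounding error
```

More rigid designs avoid these conversion costs by requiring users to supply and read log-space representations directly.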
Moses surveyed the algebraic computing systems of the 1960s according to what he figured as the politics of their representational choices. There were the so-called ‘radical systems’ that could only ‘handle a single, well-defined class of expressions. […] This means that the system stands ready to make a major change in the representation of an expression written by a user in order to get that expression into the internal canonical form’ (Reference MosesMoses 1971: 530). There was ‘the new left’, which ‘arose in response to some of the difficulties experienced with radical systems’ and which operated like a radical system but with some alternative algorithmic simplification mechanisms. There were ‘the liberals’, equipped with ‘very general representations of expressions’, the ‘conservatives’, who ‘claim that one cannot design simplification rules which will be best for all occasions. Therefore, conservative systems provide little automatic simplification capabilities. Rather, they provide machinery whereby a user can build his own simplifier and change it when necessary’ (Reference MosesMoses 1971: 532). There were also ‘catholic’ systems that used ‘more than one representation for expressions and have more than one approach to simplification. The catholic approach is that if one technique does not work, another might, and the user should be able to switch from one representation and its related simplification facilities to another with ease’ (Reference MosesMoses 1971: 532). MACSYMA was a catholic system, incorporating elements of liberal, radical and conservative representational choices – ‘The designers of catholic systems emphasize the ability to solve a wide range of problems. They would like to give a user the ease of working with a liberal system, the efficiency and power of a radical system, and the attention to context of a conservative system. The problem with a catholic system is its size’ (Reference MosesMoses 1971: 532). MACSYMA, with its catholic design, reflected a narrative that highlighted horizontal management – the system’s modules operated independently of one another – and pluralism – each module operated according to its own representational schemes and internal logic (Reference Martin and FatemanMartin and Fateman 1971).
Any attempt to encode and automate mathematics requires an answer to a host of representational questions – how should mathematical objects be stored in computer memory? What will be included and what will be excluded? How should human practice be translated into computer operations? Whose needs and perspectives will be prioritized – the user or the developer? How and how much should these processes and representations be made visible to the user on a screen or printout? How must users formulate their problems and objects of interest such that they can be input to the system? QED and MACSYMA were designed with different answers to this set of representational questions, both framed with politico-epistemic narratives. QED embodied a vision of mathematics as a source of universal, shared truth and ‘fundamental values’ in the face of scorned ‘cultural relativism’. MACSYMA instead embodied a commitment to pluralism and flexibility in both mathematics and culture. These narratives flag the cognitive freedom or discipline that accompanies different approaches to automation – they describe how users must discipline their relationship to mathematics and mathematical representation in order to use a system effectively. They imagine a different role for computers in the production of mathematical knowledge, and different ‘styles of reasoning’ to accompany them (Reference HackingHacking 1992).
15.3 From Political Choices to System Building
But how (and how well) do these narratives relate to the on-the-ground realities of these projects? How free are the developers of technological systems to decide what their politics will be? What is highlighted and what is left out in these narratives? Jonnie Penn, a historian of artificial intelligence (AI), has demonstrated that, in spite of all of their self-proclaimed differences, early AI practitioners were in fact united by key underlying logics and values (Reference PennPenn 2020). While they disagreed about how intelligence might be manifested in the machine, or what intelligence was, different approaches to AI were nonetheless united by many shared commitments – most notably, he identifies military and industrial logics and funding at work across them. For all their purported differences, they in fact agreed as much as they disagreed, especially about unspoken assumptions. Similarly, on the face of it, QED and MACSYMA embodied opposite approaches to the same problem – both projects aimed to centralize and automate mathematics, MACSYMA by preserving difference and adopting representational flexibility, QED by unifying all of mathematics within one ‘root logic’. The narratives adopted by the developers of each system correspond to these opposing visions of automation. However, in spite of those differences, both systems shared a more fundamental belief that the consolidation and automation of mathematics was possible. They shared an underlying goal – to extract mathematical knowledge from people and communities and put it into the machine. To do so, both projects had to accommodate computers, whose limitations and possibilities constrained the epistemological and political values they could realize. The next sections offer a closer look at each automated system, the narratives that surrounded them and the practices that accompanied them.
15.3.1 MACSYMA
The MACSYMA system (for Project MAC Symbolic Manipulator) was developed under the auspices of Project MAC at MIT, beginning in the 1960s. The system was meant to offer automated versions of much of what mathematicians know and do: ‘The system would know and be able to apply all of the straightforward techniques of mathematical analysis. In addition, it would be a storehouse of the knowledge accumulated about many specific problem areas’ (Reference Martin and FatemanMartin and Fateman 1971: 59). The system could multiply matrices, integrate, factor and simplify algebraic expressions, maximize and minimize functions, and perform hundreds of other numeric and non-numeric operations. This automated repository of knowledge was meant to free mathematical scientists from ‘routine mathematical chores’, and even from the process of acquiring much mathematical knowledge for themselves (Reference EngelmanEngelman 1965: 413). With such a system at hand, one needed only to know when different operations were useful in solving a particular problem, not necessarily how to execute those operations by hand oneself. The system grew in popularity, especially among Defense Advanced Research Projects Agency (DARPA)-funded military, academic and industrial research centres throughout the 1960s and 1970s. The PDP-10 computer at MIT on which the system was housed could be accessed through the ARPANET and was, Moses recalled, one of the most popular nodes during the 1970s (Reference MosesMoses 2012: 4). MACSYMA grew popular enough, in fact, that by the mid-1970s the project shifted to a user-consortium funding model rather than relying on DARPA funding alone. The initial consortium included the Department of Energy, NASA, the US Navy and Schlumberger, an oil and gas exploration company.Footnote 7 Universities and academic research labs continued to access the system freely until the early 1980s, when the system outgrew the development and maintenance capacities of the MIT team and was (controversially) privatized and licensed to Symbolics Inc.
MACSYMA was developed in explicit opposition to two other trends in artificial intelligence and automated mathematics research at the time, and these differences help to situate the developers’ framing narratives. First, MACSYMA developers were critical of the ‘symbolic’ approach to AI which was largely characterized by an ‘information processing’ model of human intelligence in which minds took information as input and manipulated it according to a set of rules, and then output decisions, solutions, judgements, chess moves and other ‘intelligent behavior’ (Reference CordeschiCordeschi 2002).
Following Allen Newell and Herbert Simon, AI researchers using this approach looked for the information-processing rules that governed different problem domains and set out to automate these. Newell and Simon’s ultimate goal in this field was the development of a ‘general problem solver’ (GPS) – a computer program equipped with sufficiently general rules of reasoning that it could solve problems in any domain, by applying those rules in a top-down fashion to whatever symbolic input it was given (Reference Newell, Shaw and SimonNewell, Shaw and Simon 1959). GPS was based on a ‘theory of problem solving’ that suggested ‘very general systems of heuristics […] that allows them to be applied to varying subject matters’ (Reference Newell, Shaw and SimonNewell, Shaw and Simon 1959: 2). The idea was that people do the same sorts of analysis and planning when they solve problems in chess, or in mathematics, or in governance alike, and that if you could identify and automate those ‘heuristics’, they could be successfully applied ‘to deal with different subjects’ (Reference Newell, Shaw and SimonNewell, Shaw and Simon 1959: 6). Attempts to produce a general problem solver in this way, however, were fraught with failure and overpromise throughout the second half of the century.
According to Moses, these failures were entirely unsurprising. He rejected both the belief that any single set of reasoning rules or heuristics was sufficient for problem-solving across domains and the underlying vision of ‘top-down’ control in automation. Reflecting in 2012, he wrote:
[…] I was increasingly concerned over the classic approach to AI in the 1950s, namely heuristic search, a top-down tree-structured approach to problem solving […] There was Herb Simon […] emphasizing a top-down hierarchical approach to organization. I could not understand why Americans were so enamored with what I considered an approach that would fail when systems became larger, more complex, and in need of greater flexibility.
Moses thought it was untenable to identify any set of top-down rules that would be effective in solving problems across domains in mathematics. He also believed that this was an inaccurate picture of how human minds work: minds, too, were modular, applying different tricks and methods here and there. He did not believe that there was a singular governing set of reasoning principles at work across all intelligent behaviour, not even in mathematics. The MACSYMA system was accordingly modular – one module to factor, another to integrate, another to find the Taylor expansion – and these modules did not operate according to a shared set of rules or a top-down governing principle. It fell to the user to chart a path through the available modules that would produce a solution to their problem, based on experiment, intuition, trial and error.
Moses was born in Palestine in 1941 and found America to be more culturally homogeneous by comparison. He suggested that this cultural homogeneity explained the commitment to top-down hierarchical organizational structures, citing these as uniquely American. He believed that pluralist systems of organization had correlates both in other societies and in the branches of mathematics, and sought to reflect these in MACSYMA:
When I began reading the literature on Japanese management, I recognized ideas that I had used in […] MACSYMA. There was an emphasis on abstraction and layered organizations as well as flexibility. These notions are present in abstract algebra. In particular, a hierarchy of field extension, called a tower in algebra, is a layered system. Such hierarchies are extremely flexible since one can have an infinite number of alternatives for the coefficients that arise in each lower layer. But why were such notions manifest in some societies and not so much in Anglo-Saxon countries? My answer is that these notions are closely related to the national culture, and countries where there are multiple dominant religions (e.g., China, Germany, India, and Japan) would tend to be more flexible than ones where there is one dominant religion.
Moses’ interest in ‘non-American’ forms of organization informed his approach to automation and AI throughout his career. His critique of top-down control infrastructure was not just that, empirically, it was brittle and performed poorly, but also that it reproduced a commitment to homogeneity that he believed was characteristically American.
Moses recognized what historians of technology have long suggested – that culture and ideology can be reproduced in technical infrastructure – and the MACSYMA system was designed to reflect the political-technics of pluralistic places. MACSYMA’s catholic modularity was intended to preserve pluralism, to allow for context, mixing radical, liberal and conservative elements. That modularity would, he believed, better meet the needs of mathematicians, avoid the brittleness and failings of top-down control hierarchies he perceived in other automation attempts and, he considered, in American culture overall.
15.3.2 QED
Where Moses sought to preserve pluralism in MACSYMA, the QED system, inaugurated in the 1990s, was meant to promote and even enshrine cultural homogeneity:
[P]erhaps the foremost motivation for the QED project is cultural. Mathematics is arguably the foremost creation of the human mind. The QED system will be an object of significant cultural character, demonstrably and physically expressing the staggering depth and power of mathematics. Like the great pyramids, the effort required may be great, but the rewards can be even more staggering than this effort. Mathematics is one of the most basic things that unites all people, and helps illuminate some of the most fundamental truths of nature, even of being itself. In the last one hundred years, many traditional cultural values of our civilization have taken a severe beating, and the advance of science has received no small blame for this beating. The QED system will provide a beautiful and compelling monument to the fundamental reality of truth. It will thus provide some antidote to the degenerative effects of cultural relativism and nihilism.
The QED Manifesto was written by a collective of automated mathematics researchers and anonymously published in the proceedings of the 1994 Conference on Automated Deduction, after the fashion of the mathematical collective called Nicolas Bourbaki.Footnote 8 Like Bourbaki, however, the Manifesto had a primary author – Robert Boyer, a professor of computer science, mathematics and philosophy at the University of Texas at Austin. Boyer had many collaborators at Argonne, the institutional home of QED, which had also been an important site of automated mathematics research since the 1960s. Readers of the 1994 Manifesto were directed to email ‘subscribe qed’ to majordomo@msc.anl.gov in order to subscribe to the Argonne-supported qed@msc.anl.gov mailing list. Argonne also hosted the first QED workshop, aimed at realizing the imagined project, later in 1994.
Further reading of the Manifesto reveals which ‘civilization’ and whose values were perceived as under threat and in need of monumentalizing: the authors worked in the tradition of the European Enlightenment. They lamented the fact that ‘the increase of mathematical knowledge during the last two hundred years has made the knowledge, let alone understanding of all, or even the most important, mathematical results something beyond the capacity of any human’ (QED Manifesto 1994). In the late nineteenth century, during the so-called ‘foundations crisis’, similar concerns motivated efforts to consolidate and formalize mathematics, but in books and periodicals rather than computer systems (Reference CorryCorry 1998; Reference GrayGray 2004). Logicians and philosophers like Giuseppe Peano, Gottlob Frege, Bertrand Russell and Alfred North Whitehead set out to develop logics whose premises and inference rules they hoped would be sufficient for the establishment of mathematical results from different fields, and they published lists of known theorems and proofs of foundational results within those systems. Their desire to consolidate emerged in part in response to concerns about the foundations of mathematics and the discovery of troubling paradoxes, but also in response to the professionalization and proliferation of mathematics, which developed distinct national cultures and schools during the nineteenth century.
If mathematics was to be the bedrock of ‘universal truth’, it wouldn’t do for it to diversify, proliferate and divide in this way, threatening the Enlightenment narrative in which mathematics, and its nineteenth- and twentieth-century bedfellows reason and rationality, respectively, were the foundations for universal truth.Footnote 9 The Manifesto cites Aristotle on this point:
In the end, we take some things as inherently valuable in themselves. We believe that the construction, use, and even contemplation of the QED system will be one of these, over and against the practical values of such a system. In support of this line of thought, let us cite Aristotle, the Philosopher, the Father of Logic: That which is proper to each thing is by nature best and more pleasant for each thing; for man, therefore, the life according to reason is best and pleasantest, since reason more than anything is man.
The narrative that an antidote to cultural relativism was required, in the form of a monument to fundamental truth, participated in that century-old impulse to gather together and render immutable – by logic and consolidation – what is known in mathematics. The Enlightenment commitments to ‘reason’ as the bedrock of truth, as an imagined ‘universal’ faculty, and to mathematics as its purest manifestation, were the values perceived as under threat by ‘cultural relativism’ and in need of reinforcement by QED. The commitment to reason, like the commitment to formalization, may seem in tension or at odds with the use of narrative tools, and yet, in the context of QED, they work in entangled ways. While the authors acknowledged that there would be biases and disagreements in the implementation of the system, their belief in universalism was not swayed – ‘If there is to be a bias, let it be a bias towards universal agreement’ (QED Manifesto 1994: 241). This statement captures the tension and political fantasy that supported the project.
The late nineteenth- and early twentieth-century attempt to consolidate and fully formalize all of mathematics largely failed. While significant subsections of mathematics were subjected to successful axiomatization efforts, much of mathematics remained and remains unformalized. There were also the incompleteness and decision problem results of Kurt Gödel, Alonzo Church, and Alan Turing, which demonstrated that formalization has intrinsic limitations. There was, finally, the fact that most formal systems were too unwieldy for actual use in practice, and most research mathematicians did not work strictly within them.
Boyer and his co-authors on the Manifesto believed that the modern digital computer put the full formalization of mathematics back on the table. Human limitations had impeded earlier efforts, but these were limitations that the computer did not share – ‘the advance of computing technology [has] provided the means for building a computing system that represents all important mathematical knowledge in an entirely rigorous and mechanically usable fashion’ (QED Manifesto 1994). Where early twentieth-century efforts at consolidation and formalization had fallen short, computer automation, they believed, could succeed – ‘The QED system we imagine will provide a means by which mathematicians and scientists can scan the entirety of mathematical knowledge for relevant results’. Mathematical knowledge would be redefined as that which was included in the system, and which adhered to its formal prescriptions, highlighting again that the field’s ‘universality’ was constructed through inclusionary and exclusionary choices. Mathematicians would not need, they went on, ‘minute comprehension of the details’ of the knowledge they would find, use and build upon in the centralized database. In this way, human understanding of that knowledge was displaced in favour of machine-consolidation. Human understanding was further displaced by the QED commitment to machine-verification. Results would be accepted, not if they were convincing to mathematicians, but if they were automatically verifiable by the system.
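To make this displacement concrete, it may help to see what machine-verification of a mathematical result looks like in practice. The following toy example is written in the syntax of Lean, a present-day proof assistant of the kind QED anticipated; the example is illustrative and mine, not drawn from the Manifesto or from any QED-era system:

```lean
-- A toy machine-checked theorem (Lean syntax). Acceptance turns on the
-- kernel verifying the proof term, not on a human referee finding the
-- argument convincing: precisely the shift QED envisioned.
theorem add_comm_example (m n : Nat) : m + n = n + m :=
  Nat.add_comm m n
```

If the proof term fails to check, the system rejects the theorem outright, whatever a mathematician might make of the underlying argument.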
QED, like earlier projects projecting universalism in mathematics, largely failed to achieve its lofty goals. Although the Mizar library currently holds the largest database of fully formalized and verified mathematical results, and projects in this tradition are ongoing, no system has achieved the consolidation and automation they imagined.Footnote 10 The Manifesto itself pointed to numerous obstacles – ‘social, psychological, political, and economic’, not to mention technical and mathematical – that would need to be overcome (QED Manifesto 1994: 250). They imagined a vast number of people would be needed to achieve this project and suggested that credentialing systems and individualism in mathematics might also impede their vision (QED Manifesto 1994: 249). They even noted that QED should avoid ‘any authorship or institutional affiliation’ since these could undermine the universalism that QED sought to construct. Universalism would be the product of a particular social and labour organization, central planning, shifts in credentialing and motivations, as well as technical consolidation.
The Manifesto acknowledged that the establishment of leadership, and the cultivation of agreement about the priorities and plans that would guide the project, would be difficult. What they described, essentially, was a centrally planned economy – you need a central planner to make a centrally planned universal mathematics, to ‘establish some “milestones” or some priority list of objectives’, to ‘outline which parts of mathematics should be added to the system and in what order. Simultaneously, an analysis of what sorts of cooperation and resources would be necessary to achieve the earlier goals should be performed’ (QED Manifesto 1994: 249). The Manifesto proposed that, ideally, the ‘root logic’ with which mathematics would be represented in the system would be widely accepted: ‘It is crucial that the “root logic” be a logic that is agreeable to all practicing mathematicians’ (QED Manifesto 1994). However, they also acknowledged that no such ‘root logic’ was, as yet, universally accepted, and leadership and agreement would remain difficult. In practice, the QED project was guided by the perspectives of a small number of automated reasoning researchers, and descendant efforts remain adjacent to both mainstream mathematics and computer science. In spite of continually running up against the realities of pluralism and individualism in mathematics, part of QED’s foundational myth was that a ‘root logic’ could be established, that reasonable people would no doubt agree on it, and that mathematical labour could be reorganized accordingly. The Manifesto’s acknowledgement of obstacles highlighted the fact that the unity and universalism of mathematics would have to be constructed – disagreements erased, a ‘root logic’ selected and then all of mathematics reformalized and implemented within it by labourers willing to eschew individual recognition for collaborative achievement. Although QED inspired significant efforts in this direction, no such fully formal, automatically verified, comprehensive consolidation of mathematics yet exists.
In spite of consistent failures, the belief that full formalization and consolidation of mathematics could be achieved, just around the next corner, with the next advancement, has been remarkably powerful and persistent in the history of mathematics. The authors of the QED Manifesto suggested that paper, pencil and human minds had simply been too limited for the task, but that the technological advances of computing had, by the mid-1990s, made the task achievable. Over the next several decades, mathematicians reflecting on the QED project proposed that it had failed because of limited interest and limited technical capacity but that now it might succeed. In 2007, Freek Wiedijk asked, ‘Why the QED manifesto has not been a success (yet)’, and concluded that ‘I myself certainly believe that the QED system will come. If we do not blow up the world to a state that mathematics will not matter much anymore, then at some point in the future people will formalize most of their proofs routinely in the computer. And I expect that it will happen earlier than we now expect’ (Reference WiedijkWiedijk 2007: 132). In 2016, success still had not come, but computer scientists Michael Kohlhase and Florian Rabe proposed that ‘Even though [QED] never led to the concrete system, communal resource, or even joint research envisioned in the QED manifesto, the idea lives on and shapes the research agendas of a significant part of the community’ (Reference Kohlhase and RabeKohlhase and Rabe 2016). Already in 2014, Ittay Weiss had proposed that ‘two decades later it is safe to say the dream is not yet a reality’. But he, too, believed that success was just around the corner (Reference WeissWeiss 2014: 803). Weiss suggested a new approach to the complete automation of mathematics, which he named ‘Mathropolis’ – an imagined polity, just over the next hill, in which the monument to universal truth will be built, the pluralism of mathematics united in one formal system, the economy of mathematical labour centrally planned, the limited human mind and social vetting of truth replaced by the robust and reliable machine. His proposed system, named as a city, reflected the entanglement of politics, governance and epistemology at work within the QED project.
15.4 Conclusion
This vision – that mathematics will be fully consolidated, automated and formalized just around the next social or technical corner, that its universality will be made materially manifest – gained much traction in the late nineteenth and early twentieth centuries. Responding both to the discovery of several troubling paradoxes and to the proliferation of mathematical fields and centres of research, mathematicians around the turn of the twentieth century wanted to get all of mathematics into one place: to represent it all in the same formal system, in the same symbolism, in the pages of one book. They were unable to do so, for formal, social and material reasons. With the perceived possibilities of modern digital computing in the 1960s and 1970s, many, including the developers of the MACSYMA system, believed that, finally, consolidation would be possible, especially through pluralism and horizontal management. It wasn’t. Again, in the 1990s, the anonymous authors of the QED Manifesto proposed that finally the cost of computing and the intellectual will were such that it would be possible to gather up all of mathematics in one place, in one formal system. It wasn’t. In revisiting the QED Manifesto two decades later, several mathematicians proposed that the time had finally come for the full and final consolidation of mathematics. It hadn’t. This story – that mathematics will be fully unified, consolidated and formalized just around the corner, now that the conditions of past failures have been overcome – shapes whole research projects, and scaffolds belief in the universalism of mathematics.Footnote 11
In spite of their different approaches to automation, and the different narratives that accompanied them, QED and MACSYMA both participated in that shared goal of consolidating mathematical knowledge and automating it, putting it in the machine. Moreover, both received initial funding from the same organizations – DARPA and the Office of Naval Research (ONR), especially. Both projects were undertaken at powerful hubs of military–industrial–academic research, MIT and Argonne National Laboratory, whose power grew out of the post-war American context. Both subscribed to ideologies of efficiency and logics of industrial planning in their imagining of automated mathematics, but in the service of two different political visions. Both projects rested on the belief that, whether pluralistically or not, knowledge could be extracted from human knowers, that it could and should be ‘put into the machine’. And both set out to redefine, transform and encode mathematical knowledge with computer-oriented representations and processes. QED and MACSYMA have more in common than their framing narratives may suggest.
MACSYMA was meant to preserve pluralism and empower mathematicians for new programs of problem-solving. It was meant to free time and energy for new questions and explorations by handing over much mathematical labour to the machine. However, the freedom afforded by MACSYMA required users to work with and within highly disciplined and often counter-intuitive computer-oriented representational schemes, and that freedom cultivated dependency once a user came to rely on the system to execute techniques they did not themselves understand (Reference DickDick 2020). The developers conceded that MACSYMA required mathematicians to reconceive what they knew for the purpose of automation, and even encouraged users to transform their own knowledge into automated modules for inclusion in the system. The modularity that was meant to serve a pluralistic and modular vision of mathematical practice also made it easier for mathematicians to take what they knew and ‘put it in the machine’. Users could contribute to a SHARE Directory – an ever-growing repository of user-generated modules that expanded the system’s capabilities and made more ‘knowledge’ available to more people. The claim that MACSYMA freed mathematicians and that it preserved pluralism of practice belied the fact that incredible accommodation to the machine was first required and that the system was primarily useful and usable to elite and defence-funded institutions. When MACSYMA was privatized in 1981 and licensed to Symbolics Inc., the users who had worked so hard to learn, accommodate and even contribute to the system were transformed into a set of buyers in a market, who now had to pay for the privilege of consuming the goods they had in part made themselves. MACSYMA wasn’t the materialization of freedom and pluralism that its narrative suggests.
Lewis Mumford cautioned, in opposition to strong theories of social construction, that there are technological systems that cannot be aligned with any politics whatsoever, but rather operate according to fundamental logics that cannot be overcome through creative use, alternative intention or new narrative. Mumford suggested that computers are essentially authoritarian technics, centralized command and control technologies, no matter how often people have tried to align them with democracy, freedom, counter-culture and pluralism (Reference MumfordMumford 1964; Reference TurnerTurner 2008). Even if one doesn’t accept Mumford’s analysis in its entirety, it would still be safe to suggest that no American military-funded effort to extract knowledge from knowers and communities and make it efficiently and automatically available to defence-funded research institutions can be aligned with the politics of pluralism.
Both QED and MACSYMA were supposed to serve a dual purpose. First, both were meant to automate mathematics, and in this they differed – the former by representing all of mathematics in a shared ‘root logic’, the latter by automating mathematics modularly, attempting to preserve logical and methodological pluralism, as well as to offer users flexibility. To this difference, the narratives the developers attached to the projects were well suited. But both projects were also meant to consolidate all of mathematical knowledge, efficiently and automatically. Both entailed and in fact celebrated the displacement of human understanding – users need not understand that which the system can do. For MACSYMA, users would be spared the need to learn mathematical techniques for themselves because an automated system was available to execute them instead. In QED, the fundamentally social project of establishing mathematical truth was displaced in favour of automatically verified results. Both entailed theories of knowledge that did not require a subjective knower, only a machine encoding. And in this regard both displaced human understanding, social processes and the pluralism these entail. And both projects consolidated resources and decision-making power, as well as the automated mathematical knowledge itself, in the hands of a small number of institutions, also limiting pluralism. Both projects also minimized the productive capacity of friction, miscommunication, disagreement, misunderstanding and difference. While MACSYMA preserved logical pluralism in its modularity, all modules still had to accommodate the constraints of a single arbiter: the PDP-10 computer on which they ran. We might call this computational pluralism, and it was only as plural as those constraints permitted. The politics of technology go beyond the technical design choices made within them to include the context in which they are developed, who pays for them, who profits from them, and how much freedom or discipline users and contributors have in their engagement with technical systems.
In these histories of mathematics automation, narratives map onto design and implementation decisions; they acknowledge the representational choices involved in accommodating the machine and the user; and they reflect beliefs about mathematics’ relationship to culture. But the narratives that developers use to frame their technological systems may also serve to direct our gaze away from certain institutional realities and unspoken assumptions. These epistemic–political narratives highlight entanglements between mathematics and culture, and between conformity and freedom, in the representational choices that automation always involves.Footnote 12
16.1 Introduction
In this chapter, the history of historiography and the philosophy of history are brought to the aid of the history and philosophy of science (Reference UebelUebel 2017; Reference RothRoth 2020; Reference VirmajokiVirmajoki 2020; Reference KuukkanenKuukkanen 2012). Narrative has sometimes been taken to define historical knowledge, and to define it in contrast with scientific knowledge. The Narrative Science Project undermines this contrast (Reference WiseMorgan and Wise 2017; Reference MorganMorgan 2017; Reference WiseWise 2017; Reference Cristalli, Herring, Jones, Kiprijanov and SellarsCristalli 2019; Reference GriesemerGriesemer 1996). If narrative is a constitutive feature of scientific knowledge then perhaps the making of historical and scientific knowledge is more similar than has otherwise been assumed or allowed. For historians and philosophers who have investigated the so-called historical sciences, most prominently geology, palaeontology, evolutionary biology and natural history (Reference CurrieCurrie 2018; Reference ClelandCleland 2011; Reference RudwickRudwick 1985; Reference GallieGallie 1955; Reference Richards, Nitecki and NiteckiRichards 1992; Reference HubálekHubálek 2021), or who have attended to science’s archival practices (Reference DastonDaston 2017; Reference StrasserStrasser 2019; Reference LeonelliLeonelli 2016), such similarities might already seem obvious.Footnote 1 But the approach taken here extends beyond these bounds.
Not only is it the case that scientific knowledge contains more narrative than has been appreciated, but historical knowledge contains less. Historical knowledge has existed in many forms, not all of which are indebted to narrative. Chronicles and genealogies are among the most well-known alternatives which do not assimilate to narratives, although they may possess narrativity. If it is the case that within historiography we recognize narrative as only one part of our epistemic apparatus (working with chronicles and genealogies), and we also find narrative at work in science, then perhaps there is something about the relations between chronicle, genealogy and narrative within historiography that might be illuminating within the sciences. This chapter argues that this is indeed the case. It does so through an analogy between, on the one hand, the making of new historiographical fields and subfields, and, on the other, the making of new scientific fields and subfields. It argues that the process is relational, with field-forming choices taken by individual historians and scientists being made to a considerable extent through reflection on the apparent field-forming choices made by others. The content of these choices tracks the terrain of chronicle, genealogy and narrative. Sections 16.2 and 16.3 acclimatize the reader to thinking with these three forms of knowledge within historiography. Section 16.4 applies them to a case study in the sciences.Footnote 2
16.2 Three Forms of Historical Knowledge
Chronicles are some of the earliest known examples of historical writing and thought (Reference BreisachBreisach 2007; Reference AurellAurell 2004). While their variety of contents and styles is considerable, they can be grouped together thanks to their sharing some key features. A convenient example is included in the University of Leeds digital collection of Medieval Manuscripts, which is freely available online.Footnote 3 The ‘Anonimalle Chronicle’, which can be found there, is a fourteenth-century manuscript which exemplifies key features of a chronicle (Reference Childs and TaylorChilds and Taylor 1991). A chronicle can be eclectic, but establishes rough terms for what it will include, bounded by some geographical or temporal limit. It records people and events deemed important. For example, in the case of the Anonimalle, this includes the Peasants’ Revolt. There is little thematic or argumentative ordering, as chronicles are mainly organized according to sequences and chronology (Reference SpiegelSpiegel 2016). While there may be some evidence of forward referencing from past events to future ones, so that there is room for some overarching narrativity, these features are muted (Reference PollardPollard 1938). At different times, and in different cultures, what it has meant to produce a factual account, and the means by which a chronicle’s evidences and descriptions have been assessed as reliable, have varied considerably. Today, key distinctions between historiographical approaches very often hinge on changes in the chronicle. For instance, while feminist historiography has inspired many significant and ongoing changes, the most fundamental has been the recognition that the chronicles of history have been drawn ridiculously narrowly. The same can be said for those urging for global history-making, or environmental history-making or animal history. To boil things down, we can say that making a chronicle concerns choices of relevance and irrelevance, facing epistemic constraints of the present and the absent.
When it comes to genealogy, the most complete digitized work in the Leeds online collection is the ‘Biblical and genealogical chronicle from Adam and Eve to Louis XI of France’. This fifteenth-century manuscript includes a genealogical tree of the pedigrees of French kings and their descendants. It achieves this record both in tables and through tree diagrams showing these relations.Footnote 4 While there are many earlier examples of genealogical working and thinking, in Europe it was not until the twelfth and thirteenth centuries that this form came to be developed into a prose genre in its own right, its primary function being to define and legitimize particular lines of descent and their authority (Reference SpiegelSpiegel 1983; 1993). A historical genealogy finds ways to pick out certain objects that it can follow over time, objects bounded by some privileging rationale. The choice of ending point will have a direct and immediate effect on the overall message or moral, a choice which the genealogical author is considered responsible for. Genealogy was given a new lease of life in the second half of the twentieth century, adopted by a large and diverse set of historians and sociologists who found it could be fruitfully applied to histories of concepts and ideas. This mode is most commonly associated with Foucault, although his own broader debts in arriving at genealogy are worth remembering, as are alternative approaches to genealogical history (Reference RothRoth 1981; Reference BevirBevir 2008; Reference Prescott-CouchPrescott-Couch 2015). In addition, many publics commit themselves to making genealogies, be it of their DNA, or of their own family history, all of which has become big business (Reference NelsonNelson 2008; Reference Tutton and PrainsackTutton and Prainsack 2011). Often in genealogical research it is the finding of connections that matters over and above any explaining which those connections might achieve. In contemporary historiography, genealogies often distinguish themselves from narrative histories (discussed shortly), by explicitly resisting the latter’s epistemic expectations and genre conventions, particularly by denying closure. In its defining features, genealogy concerns choices of following and unfollowing, facing epistemic constraints of what it is possible or impossible to follow.
As with chronicles and genealogies, there is a wide range of different examples of narrative histories (Reference MomiglianoMomigliano 1990; Reference LevineLevine 1987; Reference BentleyBentley 1999). Nevertheless, while some form of narrativity is present in chronicles and genealogies, the fact that narrative itself requires its own care and attention within historical epistemology allows us to mark out a third distinct form of historical knowledge (Reference WhiteWhite 1987). I cannot account for the multiple potential origins of this form’s emergence, although it presumably occurred in piecemeal fashion somewhere between The Iliad and Braudel’s The Mediterranean. This form brings together a range of evidence to serve an argument or set of arguments, organized in the form of a narrative or set of narratives. A narrative has to know its end before it begins, and its terms are bounded by the questions it pursues. The motivations and justifications which take it from beginning to end (or from the end back to the beginning) are drawn from some present-centred interests which help to determine its informational order (we need to know X before we get to Y if we are to truly appreciate or agree to Z). Sometimes the written account will be narrated much like an unfolding novel; other times a narrative is intended to be read into it. Even when leaning into the grandiose and the rhetorical, the ambitions of narrative histories remain factual. At times the presence of narrative in historical knowledge, as offering something too much like fantasy or storytelling, has been contentious (Reference WhiteWhite 1984; Reference SpiegelSpiegel 2007), but today it remains the dominant and preferred form of historical knowledge, facing little meaningful scepticism. Those who recognize narrative as providing a means of explanation in its own right can hold any rhetorical or storytelling features at arm’s length.Footnote 5 To boil its key features down: narrative concerns choices of beginnings and endings, and makes connections – explicit or implicit – between elements of evidence which constitute an argument about, or give an explanation of, their subject.
16.3 The Analytical Apparatus: Six Elements of Historical Knowledge
The three forms are not incompatible with one another, and most examples of historical knowing and understanding will contain aspects of all three. A narrative history necessarily adopts some chronicles and not others, while treating some objects genealogically and not others. A chronicle will necessarily serve some genealogical interests better than others and be more amenable to some narrative syntheses and not others. A genealogy necessarily includes some chronicles and not others and occupies some narrative worlds more than others. Having described them, I argue that the making of fields and subfields of history, and indeed the making of any given historian’s identity as a historian, is achieved by the combining of different choices concerning the chronicles, genealogies and narratives that one adopts or rejects. This understanding is partially inspired by Gabrielle Spiegel, particularly her work on the ‘social logic of the text’ (Reference Spiegel1990), and earlier work that I completed with Paolo Palladino on biological time (Reference Berry and PalladinoBerry and Palladino 2019). Some of these choices are aesthetic, others political, others correspond to competing epistemic goals and values. Other aspects of these choices concern the kind of time in which one wishes to situate one’s research objects and audiences. Different fields, subfields and historians assess the value of historical knowledge encoded in these three forms according to their own criteria. This process is relational: seeing in what one person or group is doing an excellence or an excess, and in some other person or group something improper or deficient. It is relational because these assessments help to motivate and justify change (or stasis) in oneself. When two or more historians arrive at the same or similar evaluative criteria, or when they share an emphasis on the importance of one or the other forms for a particular topic, we may discern the beginnings of a subfield.
It is this state of affairs within historiography which, so this chapter argues, is paralleled in the sciences.Footnote 6 However, the descriptions of the three forms provided thus far have been too general for the purposes of making analogies between history and science. We need smaller focal points.
The six elements running down the rows of Table 16.1 have been intuited from reading the history of historiography, narrative theory and the philosophy of history. They concern ways in which the chronicle, the genealogy and the narrative are distinguishable. Some of these six elements are taken quite directly from the existing work of other scholars, and these debts will be clear from the citations. However, the gloss which each is given serves the unique aims of this chapter. Section 16.4 applies these elements analogically to a case in the sciences.
1. Means of construction. When a chronicle is being compiled, one is primarily faced with choices of inclusion or exclusion, determined by whatever criterion one has adopted. When a genealogy is being composed, even if the key figures or subjects are picked out, the primary choices one faces concern which to follow when, and which to cease following when. As for the construction of a narrative, the most essential manoeuvre for the building of a narrative world concerns when to start (and why), and when to stop (and why), two decisions which are really one.
2. Means of ordering. The question of ordering has helped motivate the Narrative Science Project from the outset (Reference MorganMorgan 2017). The organizer of a chronicle works under the expectation that each entry will be ordered chronologically. Chronology may also matter for the organizer of a genealogy, but this will be mixed with a selectivity towards a particular object, the phenomenon of interest, that which is being traced over time. The term ‘material overlap’ – which I use to describe this means of ordering – I take from Reference GriesemerGriesemer (2000), who introduced it to help characterize what is interesting about evolutionary dynamics in particular, but I think it is extendable to anything lineal. When it comes to narrative, despite the clearly important concerns which often guard historians against presentism, the producer of a narrative will be inescapably presentist, and indeed that presentism contributes to a narrative’s value. Their materials will therefore be ordered according to their argumentative ambitions in the present.
3. Likely modes of narrativity. The six modes of narrativity that run across this row are all directly taken from Reference RyanRyan 1992, who also lists many more.Footnote 7 I have chosen the six which best help differentiate the three forms. To explain them very briefly, an embryonic narrative has some of the most important features for narrative-making without any identifiable plot (the historical chronicle is one of Ryan’s illustrative examples). A deferred narrative provides no narrative of its own but intervenes into something which might eventually become one (the example given is that of a newspaper report). Multiple narrativity keeps many plots in play at once but without requiring that they be related or interact (the suggested examples are The Decameron and The Arabian Nights). Braided narrativity might have plots which interact, join, depart from one another, etc. (Ryan’s preferred examples are family sagas and soap operas). Underlying narrativity is read into some source material without being stated in explicit narrative form (examples are offered from everyday life, such as witnessing a fight and interpreting it as the outcome of some longer set of events).Footnote 8 Historians commonly use the latter mode of narrativity in an effort to create distance between the materials presented and their own preferred narrative, or when they are attempting to delay the selection of one narrative over alternatives. Last, figural narrativity again arises outside of any explicitly stated narrative, and occurs when some source or other conjures up in our minds a stand-in, a figure, of one kind or another. One of Ryan’s preferred examples is the making of nation states into characters on a global historical stage.
4. Reflexivity. This row places the three on a scale, from low reflexivity to high. A chronicle requires little reflexivity on the part of the chronicler once the criteria for selection are established. As such, it might be better to say that the reflexivity of the chronicler is required prior to the making of the chronicle, rather than it being an explicit feature of the account. A genealogy requires a little more reflexivity because the choices concerning what to follow or unfollow, and when, cannot be specified prior to the composition, but will be more dependent on author choices. Last, a narrative history expects a very reflexive and self-conscious author, even when an externalizing ‘all-seeing’ voice is adopted.
5. Ending. A chronicle is not building to some ending or other but ends at some arbitrary point. It could easily continue, without any effect on its overall structure or significance, but it simply does not. A genealogy can only last as long as do the materials it is addressing. It could continue, provided the object continues, and different ending points can produce different lessons. The ending of a narrative history, meanwhile, will have been baked into it from the beginning, because it needs to be coherent with the questions it pursued (Reference RothRoth 2017; Reference MorganMorgan 2017).
6. Orientation to the world. This last distinguishing element is the most difficult to explain concisely. Very briefly, when we pick up either a chronicle, a genealogy, or a narrative history, we are also picking up the relationship between reader and world which each generates or assumes. This element is similar to ‘rhetorical structure’ as conceived in genre theory (Reference FrowFrow 2015), establishing the posture of the audience.Footnote 9 Chronicles, by and large, are written such that they are set against the backdrop of ‘all time’, or ‘God’s time’. ‘These are the chronicles of X’, some disembodied voice impresses upon us, ‘and they were recorded because they are important’. Genealogies instead orientate audiences by facts of existence, i.e., that some things which once were are no more, other things which have been are still, and still others that have not yet been, might. Finally, narrative histories orientate a reader between three points: (A) the world of the narrative; (B) the world as it has been known to the reader; and (C) an argument which is taken to describe and explain B through A.Footnote 10 Narrative history is a large and complex modelling exercise.
Having explained these elements, Table 16.1 can now be used both as a diagnostic for detecting the presence of the three forms of historical knowledge and as a means by which to more clearly distinguish between them. These elements will now be applied analogically to a case in the sciences – more specifically, the development of the field of synthetic biology.
16.4 Narrative in the Sciences: The Case of Synthetic Biology
Synthetic biology is, among other things, an epistemic programme of reform. This programme is sometimes ‘imposed’ on the biological sciences by engineering-trained outsiders, but just as often is pursued by biologists and biochemists from within (Reference O’MalleyO’Malley 2009). The creation of this field, or subfield, has been relational in the sense that synthetic biologists explain their own aims and ambitions largely through comparison and contrast with alternative existing fields (the primary alternatives being molecular biology and engineering). On my terms, this is the normal state of affairs for all areas of science and historiography, which make and remake themselves by these relational claims and choices on a daily basis. But in the case of synthetic biology the process has been particularly pronounced and instructively explicit. Indeed, there is a vast array of things which synthetic biology seeks to, and often has to, distinguish itself from in order to marshal any autonomy. The list of competitors ready to swallow it up includes biochemistry, systems biology, genetics, microbiology, data-centric biology, molecular biology, developmental biology, biochemical engineering and biotechnology. Indeed, it might not make sense to conceive of synthetic biology as existing outside of the parallel development of some of these alternatives, particularly systems biology (which it has grown up alongside), each dealing with ‘epistemic competition’ in similar ways, although emphasizing different aspects of biological knowledge (Reference Gross, Kranke and MeunierGross, Kranke and Meunier 2019). Nor has synthetic biology entirely settled on one identity or another, with both biological engineering and design biology being alternatives sometimes adopted by practitioners and institutions.
This chapter explains the emergence of synthetic biology not only as a direct result of the influence of key charismatic individuals (Reference Campos, Harman and DietrichCampos 2013), nor only thanks to the availability of novel experimental commodities (Reference BerryBerry 2019), nor only by attachment to the aspirations of national and international patrons (Reference Schyfter and CalvertSchyfter and Calvert 2015), nor only as a product of techno-futurist venture capital (Reference Raimbault, Cointet and JolyRaimbault, Cointet and Joly 2016), alongside all the other candidate features of importance which scholars have already addressed, but also as the bringing together of a set of epistemic choices made relative to other subfields. The epistemic choices in question track the terrain of chronicle, genealogy and narrative.
Figure 16.1 contains most of the essential features one might need in order to illustrate the epistemology of synthetic biology. From the point of view of this chapter, the knowledge production and interpretation practices found in this image exemplify all six of the scientific analogues for the elements of historical knowledge explained above. It is an exemplary image of synthetic biology, and was intended to be, published as it was in a PhD thesis completed in a laboratory dedicated to bringing plants to synthetic biology and synthetic biology to plants (Reference Pollak WilliamsonPollak Williamson 2017).Footnote 11 It was produced to test predictions concerning the relative strengths of different promoters (lengths of DNA that raise the rate at which some other DNA in the cell gets transcribed, to ensure it gets expressed) in different regions of plant tissue. Such promoters are prototypical ‘parts’ for synthetic biologists. What parts are is sometimes a fraught question, but in general they are lengths of DNA with some characterized and specified function. Here ‘characterized’ means that data has been generated describing their behaviour in one or several biological, chemical or biochemical contexts. The promoter parts in question were being tested for inclusion in a new registry of standardized parts, which would enable more plant scientists to work with the organism in question, Marchantia polymorpha, as a model organism (Reference Pollak WilliamsonDelmans, Pollak Williamson and Haseloff 2017). The combining of different radiating or fluorescing reporters with different microscope technologies provides some of the most important historical background to this particular case study (Reference WorliczekWorliczek 2020). In Figure 16.1, we see fluorescent proteins (FPs), which are the source of the dots that you can see in rows 1, 2 and 4, combined with confocal microscopy (the particular microscope technology which took these images).Footnote 12 Fluorescence in row 3 is produced by a dye. Figure 16.1 also demonstrates a particular method, known as ‘ratiometry’. All of these features (standardized parts, registries, fluorescent proteins, confocal microscopy and ratiometry) will be explained in more detail below.
For some scholars, my decision to treat synthetic biology as a relatively well-defined field or subfield with a distinctive collective epistemic culture would be problematic, on the grounds that it is not quite so distinctive as it thinks itself to be and is substantially coextensive with existing fields of biological research. In this respect, my attention to confocal microscopy in combination with FPs will not be an effective way to explain or argue for the particular emergence of synthetic biology, because the tools and methods that I am focusing on matter far more widely than in synthetic biology alone, right through molecular and developmental biology (Reference BaxterBaxter 2019). For instance, Hannah Landecker has already recognized the importance of live cell imaging techniques (as seen in Figure 16.1) throughout the biological sciences, and attended to the novel epistemic perspective which they enable (Reference LandeckerLandecker 2012). Likewise, regarding the emphasis I will place on improving data quality and management: these characteristics have been studied extensively by Reference LeonelliSabina Leonelli (2016) in the wider phenomenon of ‘data-centric’ biology. But, for myself, the point is that whatever significance these methods and representations might have throughout the biological sciences, they are nevertheless regularly claimed on behalf of synthetic biology – and not entirely illegitimately, thanks to reasons that correspond to their epistemic choices and preferences. Of course, the extent to which actors’ epistemic choices and preferences are practised coherently, and the extent to which actors live up to their own self-image, are always important questions. But they simply fall outside the analytical bounds of this particular chapter.
16.4.1 Means of Construction
For synthetic biology, the most important choices in this respect concern inclusion and exclusion. Synthetic biology pushes for the rigour and standardization of engineering, rejecting the artisanal choices of molecular biology. As such, it emphasizes the chronicle form of knowledge.
Synthetic biologists often start with worries and complaints that too many molecular biologists have been recording too many insignificant details, with insufficient rigour, too idiosyncratically, too selfishly, for too long. By their estimation, the terms for deciding what should be included and excluded in shared records have not been attended to with sufficient care and scrutiny, and so they want to increase the relevance and value of what is recorded. These claims and goals are analogous to choices in historiography regarding what to incorporate into the chronicle. The promoter parts used in Figure 16.1, and the parts registry to which they were submitted, are a useful icon to think with in this context, particularly as discussion surrounding them has developed considerably since the early 2000s when they were first posited as necessary (Reference Frow and CalvertFrow and Calvert 2013; Reference Stavrianakis and BennettStavrianakis and Bennett 2014). The idea of a ‘parts registry’ (with an attendant physical repository for samples) is partially built on the back of earlier national and international infrastructures for the sharing of materials and services in biology, such as GenBank. But parts registries are also novel to synthetic biology in ways which are directly related to its dissatisfaction with molecular biology. The quality of a part’s characterization data is intended to be higher, and to be collected more rigorously and in more standardized ways than has been common in molecular biology, excluding anything esoteric or artisanal. On my terms, the application of the skills, technologies and methods which constitute parts is intended to expand, and systematically to improve on, biology’s chronicle, in contradistinction with the repository and data collection practices performed by others. One should imagine historians choosing to replace or alter existing chronicles on the grounds that they were made inexpertly, or under the guidance of misleading prejudices.
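As a rough illustration of what a registry entry of this kind aspires to record, here is a minimal sketch in Python. The record structure, field names and values are hypothetical, invented for illustration; they are not the schema of the actual registry discussed here or of any other:

```python
from dataclasses import dataclass, field

@dataclass
class PromoterPart:
    """Hypothetical record for a standardized promoter 'part'."""
    part_id: str        # registry identifier
    sequence: str       # the DNA sequence itself
    function: str       # the specified function of the part
    # Characterization data: measured behaviour per experimental
    # context, with the protocol noted so results are comparable.
    characterization: dict[str, float] = field(default_factory=dict)

example = PromoterPart(
    part_id="Part:Example-001",
    sequence="TTGACGGCTAGCTCAGTCCTAGG",   # made-up sequence
    function="constitutive promoter",
    characterization={"gemma tissue, confocal ratiometry": 0.87},
)
print(example.part_id, example.characterization)
```

The point of such a structure is exactly the chronicle-like discipline described above: the terms of inclusion (which fields, which contexts, which protocols) are fixed in advance, so that every entry answers to the same standard.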
16.4.2 Means of Ordering
Synthetic biology uses new techniques and technologies to see through multiple scales of life during processes of material overlap, avoiding the retrospective stitching together of dead bits and pieces which it considers characteristic of molecular biology. As such, it emphasizes the genealogical form of knowledge.
Images like those in Figure 16.1 are taken to evidence the synthetic biologist’s particular powers of observation, precision and control over their materials as those organisms and materials undergo processes of material overlap. This is all the more so when they are used to demonstrate the desired activity of a molecular part, which in turn evidences their competence in designing and making. These technologies for visualization provide biological scientists with finer-grained detail in the investigation of phenomena as they happen over time, at both molecular and phenotypic levels simultaneously, all while the cell, tissue or organism in question is still alive. On my terms, this concerns synthetic biology’s and molecular biology’s means of ordering. Both groups prize observing objects undergoing processes of material overlap at molecular and phenotypic scales simultaneously, but (so it is argued) only synthetic biologists have prioritized developing truly reliable means for doing so. By contrast, so it is said, molecular biologists addressing questions of biological development in relation to molecular and phenotypic scales have worked more indirectly, often retrospectively piecing together developmental sequences from dead matter, or addressing molecular and phenotypic levels separately rather than simultaneously (Reference AnkenyAnkeny 2001; Reference SchürchSchürch 2017). On my terms, synthetic biology claims to arrive at the same kind of genealogical knowledge sought by molecular biology, but better and more reliably. One should imagine historians finding apparent evidentiary smoking guns at the centre of developing historical phenomena, or finding new potential paths of connection.
16.4.3 Likely (or Available) Modes of Narrativity
Where molecular biology measures its success by greater or lesser incorporation into the narrative of evolution, synthetic biology measures success in making simpler and more finely tuned systems, more like engineering than those found in nature. As such, it once again emphasizes the chronicle form of knowledge.
The next feature of Figure 16.1 that we will discuss, its use of ratiometry, exemplifies the mode of narrativity which synthetic biology prefers. Massimiliano Simons has identified the emergence of ‘postcomplex’ life sciences in the twenty-first century. This refers to ‘sciences [that] do not imply a denial of the complexity of nature at the experimental level, but rather […] desire to transcend it’ (Reference SimonsSimons 2019: 151). This is a very helpful way to understand what synthetic biologists are getting up to in general, and with ratiometry in particular.
In biology, ratiometry is a practice that was first developed in the biomedical sciences as an improvement on earlier fluorescence-based diagnostic and observational techniques (Reference Haidekker and TheodorakisHaidekker and Theodorakis 2016). Because living cells and tissues are so context sensitive, and subject to multiple complex influences, actors in biomedicine began to develop dyes that fluoresce at two different wavelengths of light. Measuring both wavelengths provided a check on the overall biochemical context, while also gathering the actual reaction data which one is interested in. This is because the second fluorescence measurement can be used as a constant reference point. In Figure 16.1, the reference signal is found in row 2. If the reference point behaves bizarrely, one has a reason to question the validity of the experiment and the data it yields. If, however, the reference point behaves well, one can gather even more precise data concerning the reaction of interest by monitoring the ratio between the two outputs, hence ‘ratiometrics’. While not all synthetic biologists use ratiometry, it is nevertheless precisely the kind of effort which the field celebrates, and it can be appreciated as forming part of their productive response to the ‘problem’ of biological ‘noise’ (Reference Knuuttila and LoettgersKnuuttila and Loettgers 2014). Interestingly, the practice and the term ‘ratiometric measurement’ originated outside of biology and biomedicine, within electrical engineering, where electrical rather than optical signals were used (Reference Holloway and NwaohaHolloway and Nwaoha 2013). The precision of the ratiometric results increases the chance that whatever design constraints are impinging on this biological context will become easier to spot, if not now then at some point in the future. This is an embryonic or deferred narrativity, more akin to what one finds in a chronicle, which synthetic biology prefers over and above the more figural or underlying evolutionary and biochemical narratives, which drive, and are prized in, molecular biology. In making this kind of choice, existing outside of evolutionary narrative-making, synthetic biology is by no means alone (Reference Love, Bueno, Chen and FaganLove 2018). One should imagine historians increasing sceptical pressure on certain evidences and standards of assessment, which often requires or affords a postponement of overarching conclusions.
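To make the arithmetic of ratiometry concrete, here is a minimal sketch in Python with invented numbers; none of the values are drawn from Figure 16.1 or from the thesis discussed above:

```python
# Minimal sketch of ratiometric normalization (hypothetical values).
# Each region yields two fluorescence readings: the signal of interest
# and a reference channel used as an internal control.
readings = {
    # region: (signal_intensity, reference_intensity), arbitrary units
    "region_a": (820.0, 410.0),
    "region_b": (300.0, 390.0),
    "region_c": (155.0, 405.0),
}

for region, (signal, reference) in readings.items():
    # Dividing by the reference cancels context effects (tissue depth,
    # dye uptake, illumination) that scale both channels alike; a
    # reference that behaves bizarrely flags the experiment as suspect.
    ratio = signal / reference
    print(f"{region}: ratio = {ratio:.2f}")
```

The ratio, rather than either raw intensity, is the datum of interest, which is why a misbehaving reference channel undermines the whole measurement.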
16.4.4 Reflexivity
Synthetic biology interprets molecular biology as possessing low reflexivity because the latter has rarely considered the authors of protocols, metadata and other foundational sources as worthy of recognition. By contrast, synthetic biology increases authorial pride over protocols, metadata and so on. As such, it emphasizes the narrative form of knowledge.
At the same time as improving the materials used, synthetic biology also promises to improve the ways they are used, by prioritizing thoughtful, planned and well-managed sharing. The ambition is not only to increase the number of parts available for the synthetic biologist to work with, but to improve their capacity to work with them by also collecting as much useful data and metadata concerning their use as is feasible (Reference McLaughlin, Myers, Zundel and MısırlıMcLaughlin et al. 2018). Fluorescent proteins, and the parts they are used to characterize, are themselves expected to be created more reflexively in synthetic biology than they have been in molecular biology through the adoption of some explicit standard, which, it is hoped, will ensure their compatibility with other parts that have been made according to the same standard (Reference Peccoud, Blauvelt, Cai and CooperPeccoud et al. 2008). Sabina Leonelli’s research on the broader phenomenon of data-centric biology is important here, and data ‘curators’, the class of experts permeating the biological sciences on whom Leonelli focuses, are particularly pronounced in synthetic biology (Reference LeonelliLeonelli 2016). These claims and goals emphasize authorial reflexivity. On my terms, by emphasizing greater reflexivity concerning authorship, synthetic biology is emphasizing the value of narrative knowledge for the field, in contrast with the fields outside of it. One should imagine curators and archivists developing international standards for description, the making of definitive translations of historical texts or the making of more widely accessible historical archives.
16.4.5 Ending
Where molecular biology seeks to find what has been and why, synthetic biology arrives at what can be, and (hopefully) an eventual understanding as to why, ending as engineering often does with something that works. As such, it emphasizes the genealogical form of knowledge.
Synthetic biology’s preferences for narrativity (section 16.4.3) are coextensive with its preferred ending points, which are often demonstrations of what practitioners can now make with their materials rather than necessarily reflections on what questions they might answer (Reference SchyfterSchyfter 2013). Whatever complexity is present in the cells of Figure 16.1, and whatever overall biochemical or evolutionary narrative these findings might contribute to, this research programme cuts through all of that, in order to produce a simpler, immediate and fine-grained picture of a system of protein expression, one which might be further refined or made tuneable, as in engineering. Of course, this does not stop researchers also considering evolutionary significances, but these are simply not a requirement for the field. On my terms, such endings most closely resemble genealogical knowledge, because parts and constructs could always be developed further, characterized further, put to work in more places, etc. But after parts have been shown to be effective in at least one or two places, the synthetic biologist can choose to stop. Nor are researchers required to place them in an evolutionary context. These, then, are their preferred kinds of ending, which are demonstrative ones. One should imagine historians engaging in re-enactments, or developing new uses for old sources, or new methods by which to study sources, or simply finding new sources. These are all sufficient, as useful and valuable endings, in their own right.
16.4.6 Orientation to the World
Where molecular biology illuminates evolutionary lineages, and the persistence or loss of forms over time, synthetic biology renders organisms in the image of its argument for engineering. As such, it once again emphasizes the narrative form of knowledge.
When it comes to how synthetic biology orientates an audience to the world, design principles are key. This is a topic which Sara Green has addressed through the concept of ‘constraint-based generality’ in the closely aligned field of systems biology. ‘Constraint-based generality makes possible the identification of general principles underpinning a class of systems exhibiting similar structural or dynamic patterns’ (Reference GreenGreen 2015: 635). Where systems biologists attempt to find and understand these patterns in extant biological organisms and systems, synthetic biologists look for structural patterns not only in existing systems but also in the parts which they make (Reference KoskinenKoskinen 2017). When submitting parts to the registry and attempting to standardize the ways in which these parts are characterized, synthetic biologists are building up biology’s recognized design space, one part at a time. This is the world which synthetic biology orientates a reader to – some possible designed one rather than a merely evolutionary and natural historical one (Reference KellerKeller 2009; Reference Knuuttila and KoskinenKnuuttila and Koskinen 2021). Molecular biology, by contrast, asserts its authority through the ‘actual’ genealogy of evolution and natural history. Interestingly, one of the ways in which Michel Morange allows that synthetic biology might prove liberating for evolutionary biology is by undermining its otherwise uniformitarian tendencies when it comes to the actual narratives of evolution (Reference MorangeMorange 2009: 374).Footnote 13 As synthetic biology’s constructs serve as justification for the approach and embody their argument, I interpret synthetic biology as producing an orientation to the world which is much more like narrative history than chronicle or genealogy. Just as historians orientate the audience towards the world through their historical model, so do synthetic biologists orientate the audience to the world through their designed synthetic one.
16.5 Conclusion
In this chapter, I have offered six elements distinguishing three forms of knowledge, arguing that choices concerning these six elements are means by which fields and subfields define and differentiate themselves in relation to other fields and subfields, with regard to both their understanding of the world and their practices. While the three forms in question were derived from analyses of historiography, the chapter has argued that they also undergird knowledge-making in the sciences. Table 16.1 lists these elements and the forms of knowledge with which they most typically align, making the table available as a diagnostic tool. I have used it to analyse features of synthetic biology present within Figure 16.1, illustrating how the six elements are located in practice in a scientific case study of a field’s formation and self-understanding of its knowledge-making. We come to appreciate that synthetic biology differentiates itself by sometimes emphasizing ‘chronicular’ knowledge, at other times genealogical knowledge, and at still other times narrative knowledge. When it comes to its means of construction and narrativity, it emphasizes the chronicle. When it comes to its research endings and preferred ordering of phenomena, it emphasizes genealogy. When it comes to its level of reflexivity and orientation to the world, it emphasizes the narrative form. If the overall picture has been effectively communicated, the reader will be able to recognize ways in which the making of new scientific fields and their knowledge claims turns on considerations similar to those that arise in the making of new historical fields and their knowledge claims. The aim has been to advance understanding in at least two directions.
First, the historians and the scientists who populate any given period are rarely considered together, and are typically treated as requiring distinct analytical apparatus. But this need not be the case. The kinds of analysis which some historians of science already undertake concerning the narrative, genre and literary conventions that have mattered for science over time (Pomata 2018; Buckland 2013; Beer 1983; Dear 1991) could be good starting points for such an approach. Aspects of these accounts will be illuminating for the history of science and for historiography alike. One future direction, then, would be for historians and philosophers to look across the waxing and waning of modes of narrativity, preferred endings, means of construction, and so on, in the science of a given period, in tandem with historiographical fashions concerning the same, in search of shared patterns of epistemic change. Such shared patterns might direct us towards underlying cultural shifts.
Second, this chapter has established a new framing for examining the formation of fields and subfields in both history and science. This approach prioritizes actors’ categories without remaining beholden to them. It is also dynamic: the six elements of knowledge formation described here can be applied imaginatively to a wide variety of aspects of scientific and historical life, be they publishing norms, experimental practices, representational preferences, intellectual property norms, or what practitioners study and how they study it. The differences between chronicles, genealogies and narratives, and the different times at which we wish to emphasize the significance of one or another, are worth bringing to bear on more cases, particularly where clashing ‘narratives’ are believed to be in play, as between synthetic biology and its immediately adjacent subfields.Footnote 14