1.1 Introduction
To contextualise present biomedical debates on the role of pregnancy in shaping offspring traits, and hence the related notion of maternal responsibility, we review in this chapter the prehistory of the belief in maternal impression. Maternal impression is the enduring notion that the emotions and experiences of a pregnant woman could leave permanent marks on her unborn child. We are not claiming in the following pages that maternal impression, or its historical understanding, is a direct predecessor of the Developmental Origins of Health and Disease (DOHaD). However, with this global historical overview, moving eastward from the Mediterranean to Asian medical systems, we are alerting the reader to the ubiquitous pre-scientific concern with pregnancy, and sometimes pre-pregnancy, as a key time ‘requiring self-discipline and work on the part of expectant mothers’ [Reference Kukla1]. As DOHaD globalises its research and claims about novel forms of soft inheritance [Reference Hanson, Low and Gluckman2] to geographical regions that encompass a multiplicity of knowledge systems, this longue durée and global view of maternal effects may help us understand both its resonance with traditional beliefs in the West and contemporary forms of hybridisation with non-Western systems of medical knowledge, which we will discuss in Section 1.4.
1.2 Maternal Impressions in Europe before Modern Medicine
1.2.1 Greco-Roman and Medieval Medicine
In describing the relevance of maternal impressions in Greco-Roman and Latin medieval medicine, it is important to highlight three major aspects. First is the wider experience of the pre-modern body as a permeable and porous entity, living in a state of constant apprehension of the impact of the external environment on its physiological balance. Second is the physical power of imagination. Before the modern split between body and mind, a fully materialistic view prevailed in pre-modern medicine, where the soul and human temperament were seen as mostly determined by the physical impact of changes in diet or climate, or by the movement of the planets. This deep communication between imagination and bodily changes may seem superstitious or only metaphorical to the post-Enlightenment mindset, which neatly separates inwardness and external objects, but it was literal and effective well into early modernity [Reference Kirmayer3]. Scholastic philosophers, for instance, shared the notion that mental images could impress real forms into matter. Al-Kindi (ninth century CE), one of the fathers of Islamic philosophy known in the Latin West for his On Rays (De Radiis) and a translator of Aristotle, held the view that any spiritual substance can induce true forms in the body through imagination alone [Reference Zambelli4, Reference al-Kindi5]. This widespread understanding of the relationship between matter and mind offers a context to appreciate the impact and ubiquity of the doctrine of maternal impression, or the belief that a mother’s imagination could imprint both physical and mental characteristics on an unborn child.
Third is the gendering of biological knowledge, at different degrees, in both medical and philosophical views of reproduction. Physiologically, in Greco-Roman and later medieval medicine, it was widely assumed that women were of a softer, more permeable, and less stable nature, more liquid and transparent than men, and hence easily subject to passion [6]. This perceived greater impressionability of the female body finds earlier validation in Hippocratic texts, where women are considered spongier, ‘with a capacity to absorb fluid which makes it directly analogous to wool or sheepskin’ [6]. These and other tropes were instrumental in consolidating the image of women as more subject to passions and less shielded ‘against corrupting ingestions’ [Reference Kukla1]. These ingestions included the power of imagination and the susceptibility to several social influences that we would now call shocks or traumas. As we can read in the most influential treatise on female medicine in antiquity, Soranus’ Gynecology (c.125 CE):
Some women, seeing monkeys during intercourse, have borne children resembling monkeys. The tyrant of the Cyprians who was misshapen, compelled his wife to look at beautiful statues during intercourse and became the father of well-shaped children; and horsebreeders, during covering, place noble horses before the mares. Thus, in order that the offspring may not be rendered misshapen, women must be sober during coitus because in drunkenness the soul becomes the victim of strange fantasies; this furthermore, because the offspring bears some resemblance to the mother as well, not only in body but in soul.
A treatise attributed to Galen [Reference Leigh7] lists the perceived ability of an expectant mother to imprint her child as one of the many ‘wonderful works of nature’. The Aristotelian view is even more radically gendered. Whereas the Hippocratic and Galenic tradition sees both men and women as contributing semen to procreation, for the most influential philosopher of antiquity until early modernity, women contribute only bare matter, the menses, which the male semen informs and shapes. Medieval medicine and natural philosophy would rework these tropes with a few variations. Because women were held to be not just less hot but also of a more watery constitution than men, Albert the Great and Thomas Aquinas thought that they would behave as any moist object would: that is, they would receive impressions more ‘easily but retain them poorly’ [Reference Aquinas8]. In a moralising rendering of this trope, Albert the Great made the further claim that the mud-like impressionability of women could explain why ‘women are more inconstant and fickle than men’ [Reference Albertus9]. Their moister constitution made women more easily subject, for instance, to the effects of lunar tides. Throughout the European Middle Ages and into early modern Europe, maternal impression was debated as one of the key environmental influences to explain the specific characteristics of the child, particularly birth defects or the resemblance (or lack thereof) to the father [Reference Temkin10, Reference Tucker11]. The emphasis remained on the negative, and the avoidance of scary sights or stressful emotions was largely advised in the medical literature as a way to counteract the power of a porous womb.
1.2.2 Early Modern European Debates 1500–1700
Maternal impression, and particularly one racialised version of it possibly based on the Hippocratic ‘On the Nature of Children’ [Reference Doniger and Spinner12], became widespread in Renaissance and early modern Europe (see [Reference Paré13]). In the medical-teratological work of Ambroise Paré (Des monstres et prodiges, 1573), Hippocrates was said to have saved a white princess from the accusation of adultery when she mothered a child ‘black as a Moor’. The father of medicine pronounced that a portrait of a Moor in the princess’s bedroom was to blame for this case of dissimilarity in generation. (For the racialised aspect of maternal imagination, see [Reference Doniger and Spinner12].) Besides Paré’s teratology, a belief in maternal impression was shared by a large number of European intellectuals and doctors of this period, from Ficino to Montaigne and Gassendi [Reference Hirai, Manning and Klestinec14]. Paracelsus claimed that ‘the woman is the artist and the child the canvas on which to raise the work’ (cited in [Reference Pagel15]); Montaigne claimed that ‘we know by experience that women transmit marks of their fancies to the bodies of the children they carry in the womb’ (cited in [Reference Huet16]).
The seventeenth century saw a progressive collapse of the fluid body of humoralism in favour of an organic understanding of physiology, and the fading of teleological explanations, which were replaced by an emerging mechanistic science. Maternal impression, however, still found a place in debates on reproduction and heredity. Rather than being confined to teratological abnormalities, the power of imagination was integrated as another mechanism for explaining resemblance across generations [Reference Smith17]. For this reason, the alleged power of maternal impression was often resisted and criticised by scholars who highlighted instead the stability of natural processes and their relative impermeability to the direct effect of accidents or emotions [Reference De Renzi, Mueller-Wille and Rheinberger18].
Although scientific debates became more cautious in connecting abnormalities to maternal misconceptions, seventeenth- and eighteenth-century medical and midwifery texts, both technical and popular, continued to offer harsh prescriptions to control pregnant women. Unknowingly building on old humoralist tropes of maternal permeability, they taught women that their wombs were sensitive to the external world ‘as our senses are’ and that ‘whatsoever moves the faculties of the mother souls may do like in the child’ [Reference Sharp19]. John Maubray, in his influential treatise The Female Physician (1724), also claimed that when the soul is ‘elevated and inflamed with a fervent Imagination, it may not only affect its proper Body, but also that of Another’. Therefore, pregnant women were encouraged to ‘suppress all Anger, Passion, and other Perturbations of Mind, and avoid entertaining too serious or melancholic Thoughts; since all such tend to impress a Depravity of Nature upon the Infant’s Mind, and Deformity on its Body’ (1724 cited in [Reference Shildrick20]).
1.3 The Middle East: Biblical, Talmudic, and Arabic Commentaries on Maternal Impression
1.3.1 Pre-eugenic Thinking in the Biblical and Talmudic Tradition
Moving to the East Mediterranean and what is today the Middle East, we can see how tropes around maternal impression often took a different form within debates on paternal and maternal power to shape heredity. In the Jewish tradition, for instance, what emerges more vividly is the possibility of controlling and optimising reproduction through impressions, particularly at the time of conception rather than during the whole pregnancy. The Torah offers a clear and oft-cited example. Jacob, the grandson of Abraham, uses something akin to maternal impressions to generate spotted sheep. Genesis 30:37–39 says:
Jacob, however, took fresh-cut branches from poplar, almond and plane trees and made white stripes on them by peeling the bark and exposing the white inner wood of the branches. Then he placed the peeled branches in all the watering troughs, so that they would be directly in front of the flocks when they came to drink. When the flocks were in heat and came to drink, they mated in front of the branches. And they bore young that were streaked or speckled or spotted [21].
While Midrashic disputes or Talmudic commentaries refer to this episode to exonerate white mothers having a black child or vice versa [Reference Kueny22–Reference Preuss24], in later rabbinic commentaries (fourteenth century CE) the passage in Genesis is taken as a platform to suggest a positive possibility of controlling heredity among humans:
If animals, who have no intelligence to understand the benefit of a matter or its detriment, and only act out of instinct, have the power to mold their offspring according to their thoughts at the time of copulation, how much more so for human beings, who have great power of intellect to form in their minds perceptions of matters lofty and mundane, and they have the power to direct their thoughts with regard to any given matter, that they need to purify their thoughts for this endeavor [Reference Farber25]
An interesting albeit anecdotal variation on this quasi-eugenic theme (in the sense of better birth) can be found in a story about the famous Rabbi Yohanan (first century), who apparently justified to other rabbis his frequent attendance at Roman public bathhouses by claiming that ‘the sight of his physical beauty’ would cause women who ‘washed before intimacies with their husbands … to conceive handsome children’ [Reference Barilan26]. The focus on preventing negative effects, instead, is highlighted by a story in the Talmud of the anxious precaution of a rabbi who only made love to his wife in darkness, so that at the moment of copulation he would not set his eyes on another woman, ‘begetting sons who are as bastards’ (cited in [Reference Doniger and Spinner12]). At least in this case, it is the man, and not the woman, who is the possible channel through which impressions or emotions may shape a child’s nature.
Finally, the work of Moses Maimonides (1135–1204) also emphasises the potential use of maternal impression to achieve a good or better birth. In his medical aphorisms on Greco-Roman medicine, he writes [Reference Leigh7]:
(xxiv.26b) I was informed about an ancient physician that he wished to have a fair son born to him and that he painted a portrait on the wall of a boy as handsome as possible. When he had sexual intercourse with his wife, he ordered her to look at that portrait constantly and not to look away from it even for a short moment. She got a son who was as beautiful as that portrait but did not resemble his father [Reference Doniger and Spinner12, Reference Bos27].
1.3.2 Muslim Medical Views in the Middle Ages
Medical views in Islam are influenced by four parallel and somewhat competing, if not conflicting, sources and traditions: the Greco (Unani)-Islamic tradition (Avicenna, Al-Razi, etc.); Aristotle’s highly influential view of generation and reproduction as determined mostly or only by the power of the male semen to control a recalcitrant maternal matter; Prophetic medicine (medical sayings attributed to Muhammad, which were later turned into a whole discipline); and ‘spiritual’ saintly healing. References to maternal impression are, however, somewhat more contained. Key medical sources from the tenth and eleventh centuries – Al-Qurtubi, al-Rāzī, and Ibn Sina – are more worried about possible miscarriages caused by frightening emotions in women (or undue movement) than about maternal impression or influence on the child’s morphology as such (see, for instance, [Reference Kruk28]). Two peculiar aspects of Islamic culture may have deflected the medical view on impressions. One is that the Islamic ban on portraits and statues at home limited the number of eccentric sights (mostly outdoor sights such as animals) (see [Reference Amster and Amster29]) that could influence a woman [Reference Kueny22]. The diminished impact of images is balanced by a stronger emphasis on smell. According to Ibn Sina, women could abort due to particular smells, such as that of an extinguished candle. And if the smell of some peculiar food generated a craving that was not satisfied immediately, this would lead to a proverbial birthmark (in the shape of the food) or crossed eyes, a punitive sign that not enough attention had been paid to a mother’s desire. As commentators have noted, a penalty applied to all tenants of a building if a maternal craving caused by smelling food was not satisfied, illuminating the attention and care that should be reserved for pregnant women under Islamic rules [Reference Kruk28].
The second peculiarity of Islamic commentaries is their emphasis on a less permeable view of heredity that in some ways ‘resembles’ a contemporary understanding. Indeed, Islamic medieval commentators developed a specific understanding of heredity that sidelined the direct power of impression and instead highlighted how ‘traits from distant relatives may skip generations and then suddenly appear in subsequent offspring’ [Reference Kueny22].
This is important to highlight to avoid a too-deterministic view of direct maternal effects as a ubiquitous monolith in pre-modern traditions. Besides medical advice such as the work of the Persian physician Ali ibn Sahl Rabban al-Tabari (d. 870) in his Firdaws al-Hikma (Paradise of Wisdom), similar notions can be found in sayings of the Prophet Muhammad, who explicitly advised fathers not to disown sons who look nothing like them because children frequently receive hereditary traits (naj‘a ‘irq) from distant ancestors, just as red camels oftentimes produce ashy-coloured offspring that resemble prior relatives [Reference Kueny22]. This is less to attribute to the Prophet a Mendelian view of heredity and more to highlight sociological pressures to avoid a too-easy dismissal of paternal responsibility by men and hence the risk of disintegration of the family [Reference Kueny22].
This is not to say that in medieval Islamic medicine there is no emphasis on, and hence anxiety about, direct environmental influences on the act of procreation and accordingly the quality of birth. Tenth-century embryological debates (ʿArīb ibn Saʿd (d. 980), in his Kitāb khalq al-janīn (Creation of the Embryo)) are rife with precautions about the presence of certain winds during coitus that may produce lazier or more delicate children, the importance of maternal and paternal mood at the moment of procreation (a happy soul strengthens the body, and a stronger body gives rise to more robust sperm), or avoiding intercourse with women who had refrained from sex for a long time [Reference Kueny22]. Both Islamic and Jewish medical traditions present different interpretations of maternal impressions developing out of their specific religious contexts. A shared similarity is the importance placed on the various senses at the time of conception to optimise reproduction, and a framing within legal-theological debates, while references to licit and illicit sexual intercourse and a patriarchal anxiety to control female desire are in continuity with the Greco-Roman experience.
1.4 Eastern Medical Traditions and Fetal Education
Eastern medical traditions, such as Ayurveda in India and traditional Chinese and Japanese medicines, share related albeit differing perspectives on the mother’s ability to influence her child in the womb. Instead of a focus on maternal impressions as seen previously, however, these traditions developed the concept of ‘fetal education’, or the ability to educate or influence the child from within the womb. The ideas expressed in fetal education have strong roots in ancient texts and beliefs and work to create positive and avoid negative influences on the fetus. Both Ayurveda and Chinese medicine continue as traditional therapeutic medical systems today, unlike the European humoralism discussed previously [Reference Furth30], and various forms of fetal education continue to be practised today (see [Reference Garbh31, Reference Sleeboom-Faulkner32]). While Ayurveda draws on Hindu moral ideas and a specific medical framework, Chinese and Japanese medicine closely resemble each other and largely rely on Confucian doctrines.
1.4.1 Garbh Sanskar: Ayurvedic Education in the Womb
The concept of fetal education in Ayurveda is often evoked through the tragic hero Abhimanyu from the Indian classic the Mahabharata. In this epic, Abhimanyu learns how to enter an impenetrable city while within his mother’s womb [Reference Smith33]. Hindu myths express ideas about maternal impressions during conception or even prior to conception (see [Reference Doniger and Spinner12]). In Ayurvedic medicine, Garbh Sanskar, commonly translated as ‘education in the womb’, prescribes a regime for both men and women to conceive and birth a healthy child. Garbh Sanskar, interestingly, identifies the preconception period as a key component in producing healthy children and advises a variety of purification procedures and timings to ensure good-quality progeny. The focus is not solely on women: men are included in the procedures and are thought to play an important role in conception.
The classical text, the Charaka Samhita, explains how at conception the embryo is created by three factors: the mother, the father, and ‘the self’ or ‘life’. All three components are said to influence different organs and dispositions of the child [Reference Wujastyk34]. Here, maternal and paternal health both play an important role before conception, and men are required to undertake actions to produce good-quality semen. Semen, in classical Ayurvedic medicine, is viewed as the ‘highest essence’ generated by the body [Reference Wujastyk34]. Invoking an agricultural metaphor, the Charaka Samhita describes the need to prepare for pregnancy: ‘As seed (of a plant) does not sprout if affected by improper time, water, worms, insects and fire, so is the defective semen of man’ [Reference Samhita35]. For women, menstrual blood plays a similar, but not equivalent, role to semen and is a generative fluid that unites with semen at conception [Reference Wujastyk34]. After the time of conception, the focus shifts to the mother and her actions, behaviours, and nutrition.
1.4.2 Taijiao and Taikyō: Fetal Education in Chinese and Japanese Medicine
Classical Chinese medicine incorporates the cosmology of Yin and Yang. Yin is representative of qualities such as ‘darkness, cold, moisture … the moon, the night and the feminine’. Yang is representative of the qualities of ‘brightness, the sun, fire, warmth, activity and the masculine principle’. Yin and Yang were seen to regulate the bodily qi (fundamental energy) and any changes in the body [Reference Furth30]. In medieval China, the concept of fetal education reflected a particular interconnection between the womb and external stimuli. The sex of the child, for instance, was thought to be malleable during this indeterminate time, and the responsibility for the outcome rested upon the mother’s conduct [Reference Furth30]. For instance, in Song Dynasty texts, such as the Fu Ren Da Quan Liang Fang (A Great Complete Collection of Fine Formulas for Women) (1237 CE), the mother’s actions are described as impacting the attractiveness or intelligence of the child:
The child during the first trimester is called the initial fetus. This is the period during which the infant begins to be moulded. If a dignified and elegant stature is desired, the mother should think, speak, and act discreetly. If a handsome offspring is wanted, she should wear a piece of jade. If a witty offspring is desired, she should read verses and poems. This is because the exterior reception communicates with the interior plasticity. [Reference Flaws36]
Taijiao or ‘fetal education’ (literally tai [fetus] and jiao [to instruct]) has roots in the classical text the Senior Dai’s Book of Rites (Da Dai Liji) and was later expanded upon by Liu Xiang (ca. 80–7 BCE), a Han Dynasty Confucian scholar, who claimed that the integrity of kings rested on the morality of their mothers [Reference Richardson37]. It has been part of Chinese gynaecology for centuries and returned periodically to prominence at certain junctures of Chinese history, such as the early Republican era from the 1910s. Often, the specific nature of training in the womb reflected the importance of moral transformation within Confucianism and the wider belief that the outside environment could influence the developing fetus. Here, the fetus is responsive to stimuli provided by the mother through her senses, behaviour, emotions, and environment. Taboos, such as not eating or looking at rabbits (which would result in the child developing a harelip), reflect the belief in the ability of the fetus to absorb outside stimuli and incorporate them physically into its own developing body [Reference Richardson37].
The influence of the environment on the child is expanded in Japanese fetal education or Taikyō to include both parents and the wider society. Taikyō was introduced to Japan from China in the tenth century and, like Chinese medicine, was strongly influenced by Confucian ideas [Reference Yoshino38, Reference Kamata39]. In Japanese, Taikyō translates to ‘womb teaching’ and could be interpreted as teaching the pregnant woman or the fetus within the womb. Originally, the belief was that if a pregnant woman behaved appropriately, she would birth a man of great virtue [Reference Yoshino38]. Other texts such as the Onna chōhōki or The Record of Women’s Great Treasures, written by Namura Jōhaku in seventeenth-century Japan, show the progression of these beliefs and give an insight into ideas of pregnancy during the Edo period. Namura’s recommendations on Taikyō included wrapping the belly, prescribing types of food, and producing an ideal environment to bring about an uneventful birth and the health of the child [Reference Tanimura and Chart40].
1.5 Conclusion
Global history has emerged in the last two decades as an attempt to decentre Euro-Atlantic routes of trade, power, and science as the main site of knowledge production [Reference Berg42]. Through a review of primary and secondary sources about maternal impression from the Mediterranean to China before the rise of modern science, in this chapter we have aimed to contribute to debates on the global history of the permeable body across different cultural and medical contexts. Focusing on a time when there was no technological or scientific gap between East and West is particularly fertile because it avoids the narrative of the diffusion of Western science. It suggests instead circulation, ‘connected histories’, and networks of exchange across Afro-Eurasia [Reference Subrahmanyam43, Reference Raj44]. As the following chapter will show, the trope of maternal impression declined with the rise of modern science, and genetics in particular, but not the emphasis on the maternal body as a site for optimising reproduction and monitoring a mother’s behaviour before and during pregnancy. Eighteenth-century embryological debates [41, Reference Epstein45], the rise of ‘pre-natal culture’ in nineteenth-century European science [Reference Arni, Brandt and Müller-Wille46], and, more recently, the growing interest in mother–fetal interaction and maternal ‘imprint’ [Reference Richardson51] can undoubtedly be linked with several immediate causes in medical, scientific, and technological changes. In particular, over the last half-century, changes in Euro-American biomedicine have led to a new visibility and agency of the fetus within a technologically transparent maternal body [Reference Dubow47–Reference Waggoner50].
This chapter has aimed to show that whatever their present iterations and more immediate genealogies, contemporary postgenomic findings – and their related notions of risk, responsibility, gendering of biological knowledge, and blame or optimisation – are not solely a modern invention and have a much wider global and historical resonance. Contemporary scientific labs did not emerge in a historical vacuum but are embedded in sociocultural contexts in which pre-comprehensions of the maternal body as uniquely transparent and impressionable, and hence subject to control and governance, have a long prehistory.
A good example of the persistence and hybridisation of longue durée beliefs with contemporary science comes from India. Newspaper articles have reported on specialised ashram-style clinics that claim to use the practice of Garbh Sanskar not only to help parents conceive and produce healthy children but also to perfect the reproduction process. Some institutions claim that they can help couples produce the ‘perfect progeny’ or even ‘upgrade’ or ‘repair’ dysfunctional genes [Reference Garbh31]. It has been suggested that by utilising the knowledge and practices of Garbh Sanskar, epigenetic programming can take place and help create healthier offspring [Reference Londhe, Chinchalkar, Kaushik and Prakash52]. Articles have likened the practice to a modern version of ‘genetic engineering’ [Reference Garbh31]. The practice of Garbh Sanskar has also found space in the digital sphere in India, with apps, online workshops, webinars, and music advertised to help women increase the likelihood of producing healthy and intelligent children. These trading zones exemplify the globalisation and commercialisation of traditional medical knowledge and ideas about the pre-pregnant and pregnant body. The example of Garbh Sanskar illuminates how the circulation of epigenetic ideas across the world is neither uniform nor bound by bioscience and can be adapted and entangled with pre-existing notions of maternal imprinting and responsibility. As DOHaD develops and expands globally, these entanglements will become important for assessing how epigenetic knowledge is being (re)produced in localised contexts. Looking at history not just as a succession of novel discoveries but also as the long-term accretion and sedimentation of mentalities will help DOHaD scholars and practitioners situate their work in context.
2.1 Introduction
During the 1960s and 1970s, a new understanding of the fetus emerged at the intersection of demographic changes: high fertility and reduced infant mortality in the ‘baby boom’ era; new medical visual technologies and the expansion of mass media; and the rise of the feminist movement and the liberalisation of abortion. These social shifts spurred scholarly and lay interest in public representations and private perceptions of the fetus [Reference Petchesky1–Reference Duden4], the rise of the fetus as a subject [Reference Dubow5, Reference Casper6], the politics of abortion [Reference Reagan7], and the uses of fetal bodies in research [Reference Wilson3, Reference Pfeffer and Kent8]. According to a common narrative, the mother and the fetus were once one: maternal experiences ‘imprinted’ the malleable fetus, and maternal testimony was central to the understanding of pregnancy until the hidden fetus was revealed at birth. But starting in the nineteenth century, and especially during recent decades, an increasingly visible and autonomous fetus has emerged, the mother has been erased from the picture, and the experience of pregnancy has come to be more contingent and technologically mediated.
This compelling and broad narrative glosses over subtler shifts in the way that the fetus, the mother, and, especially, the relationship between them have been conceptualised. Indeed, a closer look at medical and scientific literature shows that over the course of the twentieth century, the maternal–fetal relationship has been reinterpreted and redrawn multiple times. For this chapter, I have used published sources from diverse medical and scientific disciplines, such as obstetrics, fetal physiology, evolutionary biology, developmental science, and epigenetics, to draw attention to the changing ways in which the maternal–fetal relationship has been understood. This close reading has helped me uncover underlying assumptions – shifting and competing even within a single discipline – that fed into scientific and clinical research. For example, in the 1960s, physiologists who stressed fetal autonomy by describing fetuses as lone mountaineers and astronauts also worked on questions related to the fetal control of processes within the fetal and maternal bodies, such as the onset of labour and fetal growth. The diverse assumptions, metaphors, and research questions tell us something about changing social views of and attitudes towards motherhood, pregnancy, and the relationship between the mother and the fetus, including and especially maternal influences on the developing organism.
Covering the long twentieth century, I have identified several key concepts and periods: the abandonment of maternal impressions, strong hereditarianism, and the fetus-as-parasite in the early decades of the twentieth century, the era dominated by eugenics; the ‘maternal effects’ of the mid-twentieth century, when concerns over adequate nutrition and long-lasting trauma that emerged around the Second World War supported the idea of ‘critical’ or ‘sensitive’ periods and revived interest in maternal influences; the autonomous fetus of the 1960s and 1970s civil rights era, followed by the selfish fetus imagined by evolutionary biologists in the later decades of the twentieth century; and, finally, the latest rapprochement between the mother and the fetus, supported by developmental approaches and epigenetics. While it may be tempting to regard this latest development as a return of maternal impressions, I want to show that the similarities are superficial: the fetal–maternal relationship was redrawn according to new rules, and it cannot be fully understood without insight into its recent history.
2.2 The Fetal Parasite
Well into the 1800s, the developing organism was seen to be malleable by external influences, and the mother was both the mediator and the source of these cues. Anything the mother ate, saw, touched, or even imagined, collectively known as maternal imagination or maternal impression, was understood to have the capacity to affect the child [Reference Huet9, Reference Shildrick10]. Yet during the nineteenth century, this close bond between the mother and the fetus was broken. The concept of heredity, which reduced the mother to little more than a passive vessel transmitting elements collected from previous generations to the offspring, first appeared in the early decades of the nineteenth century and quickly gained popularity [Reference López-Beltrán, Müller-Wille and Rheinberger11]. In the 1880s, the German biologist August Weismann explained how heredity worked using the tools of experimental biology [Reference Müller-Wille and Rheinberger12]. According to Weismann, ‘germplasm’ (preserved in the germline but unfolding its potential in the body during development) was resistant to influences exerted by ‘soma’, so changes in somatic cells had no effect on the germline. While Weismann did allow the possibility of direct environmental influence on the germ cells, scholars who followed in his footsteps by and large reduced development to a robust pre-programmed sequence of stages. Influences received in development, unless extreme to the point of threatening maternal or fetal survival, were secondary to heredity.
Weismann’s work had a major impact outside academic biology. The early twentieth century is usually seen as the high point of eugenics, a broad movement that distilled nineteenth-century concerns over rapid socio-economic change into modernist visions of society reformed through rational reproduction [Reference Levine, Bashford, Levine and Bashford13, Reference McLaren14]. Eugenics preceded the cellular explanation of heredity: it relied, initially, on the mid-century concept of degeneration, whereby ‘organic’ and social factors acted on the organism to produce a reverse evolution, cumulative over generations, sending a lineage down a slope of no return [Reference Pick15]. Weismann’s work provided it with scientific cachet.
If we accept that strong hereditarianism saw the mother as a passive vessel, rather than an active agent in the formation of the new organism and a mediator of external influences, then the appeal of a model taken from another cutting-edge scientific discipline of the period, parasitology – the relationship between a parasite and its host – begins to make sense. The parasite depended on its host for shelter and food but was also remarkably protected from fluctuations in the host’s circumstances and environment, even if the host itself suffered. Accordingly, the fetus was understood to thrive in all except the most extreme circumstances, with maternal homeostatic mechanisms maintaining environmental factors at a near-constant level and the placenta providing protection from many noxious substances [Reference Sontag16, Reference Martin and Holloway17]. Yet, for the mother, the pregnancy could be precarious, as ‘the increasing demands of the parasitic fetus will make the diet deficient for the mother’ [Reference Ebbs, Scott and Tisdall18, p. 1].
There were traces of the idea of the parasitic fetus in earlier times: in the eighteenth century, Denis Diderot wrote in his Eléments de physiologie that ‘the child is at all times an inconvenient guest for the womb’ and described delivery as ‘a sort of vomiting’ [Reference Diderot and Assézat19, p. 406]. However, it was not until the turn of the twentieth century that the idea gained full prominence. Scientists travelled across colonial empires to study the life cycles of organisms causing frightening diseases, such as malaria and sleeping sickness, which killed people and damaged imperial economies. The idea of the parasite was ingrained in the public imagination. It was also politically helpful: as civilians faced severe food shortages in the First World War, reassurances that the fetus (as well as the infant/lactating mother) would be unaffected by maternal starvation might have been seen as comforting [20].
Yet there were voices critical of strong hereditarianism. Some came from relatively marginal movements such as prenatal culturism, associated with theosophy and drawing on the notion of prenatal impressions. It argued that heredity could be influenced by a pregnant woman’s thoughts and behaviour, and that these therefore had to be controlled [Reference Dubow5]. Others were mainstream physicians. They used examples of conditions such as congenital syphilis to argue against a sharp distinction between hereditary and communicable (environmentally caused) diseases [Reference Gluckman, Hanson, Beedle, Buklijas, Felicia, Hallgrímsson and Hall21]. The best known among them was the Edinburgh obstetrician John William Ballantyne, who gave teratology – the science of collecting and studying births with congenital abnormalities – clinical significance and reinvented it as antenatal pathology [Reference Al-Gailani, Helduser and Dohm22, Reference Al-Gailani23]. For Ballantyne, the maternal body provided the immediate point of medical and research interest, as ‘we can only reach the unborn infant through the mother who carries him, and so the pre-natal life and the life of the woman in pregnancy are closely bound together and depend one upon the other’ [Reference Ballantyne24, p. x]. Indeed, he defined the relationship between the mother and the child in the following manner: ‘although he [the infant] is hidden from sight in the womb of his mother, he is not beyond the influences of her environment, nay, her body is his immediate environment’ [Reference Ballantyne24, p. xii].
The rise of hereditarianism through the nineteenth and early twentieth centuries thus influenced the view of the maternal–fetal relationship. The concept of maternal impressions, or indeed of any influences received from or through the mother, was relegated to second place, after heredity. ‘The mother marks her infant, not with the fanciful imagery of birthmarks, but with the ancestral tendencies,’ wrote a Chicago professor of obstetrics in this period [Reference Adair25]. But as the narrow notion of heredity was forged around 1900, the notion of the prenatal or antenatal came into being [26]. This new concept accounted for the contingencies of conception, gestation, embryogenesis, birth, and breastfeeding, now disconnected from heredity [Reference Arni, Brandt and Müller-Wille27].
2.3 Critical Periods
‘The existence of a profusion of myths and superstitions has probably somewhat inhibited until modern times scientific thought and investigation into maternal–fetal relationships from the standpoint of how fetal development may be influenced by varying maternal factors. During the last twenty years, however, many facts and some very interesting hypotheses accumulated in the literature of various fields’,
wrote the American physician Lester Sontag, who between the 1930s and 1960s studied the ways in which cues received during development – from maternal nutrition to emotional states – influenced the offspring [Reference Sontag16, p. 996]. By the 1930s, eugenics was in retreat: in the increasingly unstable political-economic climate, the impact of environment, physical as well as social, on human health and disease could not be ignored. Genetics, an experimental discipline studying the mechanisms and rules of heredity, had matured since the early 1900s, and its specialists harshly criticised what they perceived as eugenics’ sloppy grasp of genetic concepts and research methods [Reference Kevles28]. During the economic depression and in the shadow of the looming war, concerns about feeding human and animal populations in the likely conditions of severe shortage occupied politicians as well as scientists [Reference Buklijas29]. Those who subscribed to the notion of the parasitic fetus worried that poorly nourished mothers would perish under the demands of pregnancy. Others argued that in a malnourished mother, the growth and development of the fetus would suffer too. While food was seen as a prime example of outside exposures impinging upon the developing organism, other influences – microorganisms, toxins, but also maternal emotional states – came under the scrutiny of experimental and clinical scientists.
Throughout the 1930s, nutritionists and physicians, faced with deprivations caused by economic depression, studied the impact of maternal undernutrition on the offspring of cohorts of working-class women, but the results were negative or inconclusive [Reference Sontag, Pyle and Cape30, Reference Williams and Smith31]. In the Second World War, however, large civilian populations suffered sieges and blockades of food shipments, providing scientists with ‘natural experiments’: previously well-fed women exposed to severe famines of limited duration [Reference Susser and Stein32]. Early findings came from the Leningrad siege, between September 1941 and January 1944, during which the urban civilian population experienced prolonged and severe famine [Reference Antonov33]; smaller but more precise data came from Western Holland during the German siege between September 1944 and May 1945, in what became known as the Dutch Winter Famine [Reference Smith34]. Data showed that if the mother starved around conception, then the fetus had a greater chance of being miscarried or born malformed, and if famine struck in the last months of pregnancy, the baby was likely to be born small and light.
Wartime observations were carried forward into the lean post-war years: the British scientist Elsie Widdowson studied the birthweight of babies and milk production in hospitals, as well as the growth of children fed small and monotonous food rations in orphanages in war-ravaged Germany [Reference Buklijas29]. She found that not just food but also emotions affected children’s growth: children living in an orphanage directed by a strict matron lagged behind their peers raised in an institution run by a kind person [Reference Widdowson35]. Back in their Cambridge laboratories, Widdowson and her collaborator Professor Robert McCance transformed clinical observations into hypotheses for experimental animal studies: they manipulated maternal nutrition and the size of the litter (which determined the amount of mother’s milk received by each pup) to test how undernutrition during pregnancy and the early postnatal period affected the offspring’s growth and development. They found that the impact was permanent, making adult animals smaller, more prone to infections, and even changing their facial structure.
Widdowson’s research supported the notion of ‘critical periods’ that emerged across disciplines in the 1930s and 1940s, most importantly in teratology, behavioural studies, and fetal physiology, to describe the relationship between chronological time and developmental milestones. Teratology in this period transformed from a museological discipline engaged in collecting and classifying malformed births into an experimental science that sought to explain how certain noxious agents – especially microorganisms such as the rubella virus and certain toxins – acting at well-defined developmental stages produced specific effects [Reference Kalter36]. Other studies explored how the lack or excess of physiological substances, such as vitamins or hormones, could influence development.
Hormones offered a way to explain a problem of long-standing concern: how maternal emotional states influence the psychological set-up of the child. In the late nineteenth century, the French physician Charles Féré had argued that external stimuli, such as loud sounds or maternal emotions, caused uterine contractions, which in turn stimulated the fetus to move [Reference Arni, Brandt and Müller-Wille27]. Féré based his argument on observations made on a cohort of children born to women who had suffered from ‘mental shocks’ while pregnant during the siege of Paris in 1870/71 [Reference Arni, Helduser and Dohm37]. In the 1940s, Lester Sontag observed a connection between increased fetal movement and fetal weight gain [Reference Sontag38]. Heightened fetal activity, he claimed, was caused by maternal emotional states, which were then transmitted to the fetus by hormones such as adrenaline. And while loud noises and maternal fatigue did increase fetal activity, these (intermittent) factors were less significant than maternal emotional states. Sontag published cases, such as that of a mother with a ‘religious and moralistic’ background who during pregnancy learnt about her husband’s infidelity. Her ‘almost continual emotional turbulence’ resulted in an ‘extremely active’ fetus and, finally, a short and light infant. In another case, the father developed a psychosis during the fifth month of the mother’s pregnancy, causing her to live in constant apprehension of physical violence and worry about her husband’s health as well as their future as a family. The infant was light for its length and ‘extremely active and irritable’ [Reference Sontag38, p. 629].
While just a few decades earlier the focus had been on fetal resistance to changes in the maternal environment, the decline of eugenics and the experiences of economic depression and, especially, war moved the emphasis onto the ways in which the fetus was sensitive to its environment. I have argued elsewhere that broader social concerns with recovery from early trauma – nutritional, emotional, and psychological – so pertinent in post-war Europe provided the background to the idea of sensitive periods [Reference Buklijas29]. While the idea of the fetus as a parasite did not quite go away, the concept of pregnancy as a plastic, open state, with the fetus in constant exchange and communication with the environment, gained currency.
2.4 The Autonomous Fetus
In 1965, the prestigious Life magazine published a series of photographs by the Swedish photographer Lennart Nilsson, documenting human development over the nine months of pregnancy [Reference Buklijas and Hopwood39]. These photographs were hailed as an unprecedented celebration of the ‘drama of life before birth’. Nilsson’s images, showing the childlike fetal form floating against a ‘starry sky’ background, without the maternal body anywhere in sight, signalled the new status of the fetus as an autonomous being. The growing distinction between mother and fetus was evident everywhere: in the way that fetuses were portrayed in the media, for lay audiences, but also in textbooks and research papers; in their acquisition of the status of patients in their own right; and in the language used to talk about them. By the 1960s, society was no longer preoccupied with survival and war trauma but rather with questions of identity, subjectivity, and agency. Could it be that the severance of the umbilical cord in the representation of fetuses reflected a broader social shift?
The use of fetal images has been extensively studied in the context of feminist and visual history, the politics of abortion, and the broader political and social history of this period [Reference Petchesky1, Reference Jülich, Ekström, Jülich and Wisselgren2, Reference Hughes40]. Nilsson’s photographs – the most famous and best studied – were created within a gynaecological campaign in Sweden to restrict the abortion law and published in a popular colour magazine to entertain and educate its audience; in the 1970s, they were recruited by the growing pro-life movement in the United States to teach its prospective supporters about the ‘humanity’ of the fetus. And in addition to Nilsson’s vivid images, pro-life advocates could also draw on less attractive yet increasingly ubiquitous ultrasound scans. By the 1970s, ultrasound technology, first developed in the 1950s, had become a standard part of antenatal medicine.
Historians of medicine have noted that the deployment and popularity of fetal images corresponded with the emergence of the fetus as a patient in its own right. The increased prosperity of the post-Second World War period and the rise of public healthcare systems worldwide meant that more women than ever were receiving antenatal care. Meanwhile, with improved control of infectious disease and better socioeconomic conditions, both maternal and infant mortality – at least in the developed world – were falling. The medical focus now turned to relatively rare cases of congenital anomalies, prematurity, and conditions that developed in pregnancy. In this period, a leading obstetrical scientist, William Liley, pioneered a therapy targeted at the fetus to treat the hitherto incurable fetal haemolytic disease, which emerged when the mother, who did not have the Rh antigen on her red blood cells, developed antibodies to the Rh antigen-bearing red blood cells of the fetus [Reference Gluckman and Buklijas43]. Under ultrasonic guidance, Liley performed a blood transfusion into the fetal belly – a method previously performed only on children. Liley’s work marked the beginning of the field of fetal medicine, which in the following decades gave rise to the highly precarious and controversial area of fetal surgery [Reference Casper6].
The obstetricians’ increased interest in the fetus and their positioning as fetal, rather than maternal, advocates became sharply evident as the debate over the legalisation of abortion deepened in the late 1960s and 1970s. Around that time, many countries liberalised their abortion laws, but the debate continued, and obstetricians frequently stood on the ‘conservative’ side, against the liberal laws. Ian Donald was a prominent opponent of the legalisation of voluntary abortion and a campaigner against the 1967 Abortion Act, and he employed vivid images produced by the ultrasound technology that he had pioneered in anti-abortion campaigns. Even when his campaign failed and Britain legalised abortion, he continued to fight elsewhere, for example taking his images to Italy, which in the late 1970s was in the midst of its own abortion debate [Reference Nicolson and Fleming42, p. 243]. At the same time, alarmed by the developments in Britain, William Liley launched the Society for the Protection of the Unborn Child (SPUC) in New Zealand in 1970. In contrast to most other pro-life activists, Liley was not religious but rather held a firm belief that the fetus is a being independent of its mother, ‘our new individual’ residing in a ‘suitable host’ [Reference Dubow5, p. 114]. Yet while fetal advocacy in matters of abortion prohibition produced little in the way of results, in other areas, fetuses increasingly came to be seen as needing legal protection from the actions of their mothers [Reference Dubow5]. From the 1970s onwards, especially in the United States, conflicts between fetal rights and the rights of women – as patients, workers, and citizens – steadily increased.
One aspect of the increasing visibility of the fetus that has hitherto been little studied is how scientists – rather than practising obstetricians – viewed the fetus. Examining their language and research topics reveals a clear shift towards the autonomous fetus. From the 1960s onwards, science books and articles no longer described the fetus as a passive parasite but rather as a fearless pioneer in extreme conditions. Metaphors drew on new technologies of ocean, space, and land exploration, calling the fetus a submarine sailor, ‘a weightless astronaut in utero’ [Reference McCance, Jonxis, Visser and Troelstra44, p. 307], or a mountaineer. At the time when Edmund Hillary and Tenzing Norgay captured the public imagination by ‘conquering’ Mount Everest, the fetal environment began to be described as ‘Mount Everest in utero’ [Reference Dawes45]. From the 1960s until 1990, scientists met at conferences tellingly titled ‘Foetal autonomy’, ‘The fetus and independent life’, and ‘Foetal autonomy and adaptation’ [Reference Dawes46–Reference Dawes, Borruto and Zacutti48]. Indeed, the introduction to the 1969 Foetal Autonomy Conference Proceedings said that ‘it [the fetus] demonstrates its innate capacity for influencing its external and maintaining its internal environment – that is, its autonomy’ [Reference Wolstenholme and O’Connor47, p. 1].
The language of fetal autonomy closely corresponded to the type of research questions that interested scientists in this period. In the 1940s and 1950s, McCance and Widdowson experimented with maternal nutrition and the size of the litter to show how the antenatal environment shaped development before and after birth. In contrast, in the 1960s and 1970s, the focus moved from external influences to the ways in which the fetus controlled its development. Research methods – known as the chronic preparation or chronic method – were developed that allowed precise monitoring of physiological parameters throughout the course of pregnancy, using electrodes and catheters inserted into the pregnant animal [Reference Rudolph and Heymann49]. And, indeed, the fetus seemed remarkably autonomous. It could regulate its sleep patterns and its behaviour. It moved, and it appeared to breathe. It oversaw its growth through a finely balanced cascade of hormones [Reference Gluckman, Liggins, Beard and Nathanielsz50]. But its agency did not stop at the boundaries of the fetal body: the fetus was also seen to ‘participate in, or is responsible for, the sequence of events that ends in its birth’ because ‘it would be a logical feature of reproductive design if the initiation (of labour) were under fetal control, so that the other systems necessary for postnatal survival were normally mature before birth. In this sense fetal autonomy would be a necessary feature of development’ [Reference Dawes51]. Testifying before the US Congress in support of pro-life legislation, William Liley described the fetus as being ‘very much in charge of the pregnancy’. The fetus, it seemed, was in control.
2.5 Neighbours at Odds
The idea of the fetus as a cosmonaut or a mountaineer implied agency and self-sufficiency. But scientists and physicians went even further: the feminist historian Ann Oakley quoted from Frank Hytten’s 1976 obstetrics textbook, describing the fetus as
an egoist and by no means an endearing and helpless little dependent as his [sic] mother may fondly think. As soon as he has plugged himself into the uterine wall, he sets out to make certain that his needs are met, regardless of any inconvenience he may cause. He does this by almost completely altering the mother’s physiology, usually by fiddling with her control mechanism [Reference Oakley41].
This ‘selfish fetus’ could not help itself: it was a machine governed by its selfish genes. The 1970s and 1980s were the heyday of the disciplines of sociobiology and evolutionary psychology, which explained behaviour – human and animal – using the mid-twentieth-century ‘superdiscipline’ of the Modern Synthesis, the unification of Darwin’s theory of evolution by natural selection with population and experimental genetics [Reference Bowler52]. Evolution was defined as a change in allele (gene) frequency, and although the evolutionary environment acted upon the phenotype of the whole organism, it was the passage of the gene across generations that mattered.
And genes, as suggested persuasively in the title of Richard Dawkins’ famous book, were selfish [Reference Dawkins53]. They looked after their own interests, using the organism as a convenient vehicle to ferry them around, meet prospective mates, and secure survival for the next generation. One was fond of one’s parents and siblings, with whom one shared 50 per cent of one’s genes, but cared progressively less for half-siblings and cousins as the percentage of shared genes dropped [Reference Hamilton54]. In 1974, the American sociobiologist Robert L. Trivers built on this concept to explain the apparent conflict over resources arising between parents and their children [Reference Trivers55]. According to him, children demand more from their parents than the latter are willing to give because their evolutionary interests differ: individual children want all of their parents’ attention (and food), yet parents have other – extant or future – children to consider. Trivers supported his hypothesis with data on the social behaviour of mammals, mostly around the time of weaning. The young aggressively demanded more food and care than their parents, who wanted to reserve their energy for other or future offspring, were willing to give.
Trivers’ model met with an enthusiastic reception among evolutionary biologists. Steven Pinker saw the conflict as ‘inherent to the human condition’ [Reference Pinker, Bloom, Barkow, Cosmides and Tooby56]. Richard Dawkins described Trivers’ model of parent–offspring conflict as ‘brilliant’ [Reference Dawkins53, p. 127]. At the same time, behavioural scientists criticised Trivers: in many species, offspring weaned themselves, while in others mothers responded to their demands. But the model remained popular. It inspired the Harvard evolutionary biologist David Haig to extend it to pregnancy and development, arguing that the mother and the child each have their own interests in mind – interests that are partially aligned (because they share 50 per cent of their genes) but substantially differ (because the remaining 50 per cent is different). Pregnancy, in Haig’s view, was not a romantic alliance of ‘one body and one flesh, a single harmonious unit in which conflicts of interest are impossible’ – a perspective that, according to Haig, was the received view. But neither was it correct to see the mother and the fetus locked in a relationship where ‘the fetus is an alien intruder within its mother’s body: a parasite whose sole concern for its host is to ensure an uninterrupted supply of nutrients’ [Reference Haig57, p. 226]. Rather, he likened this ‘most intimate human relationship’ to a constant negotiation, ‘a tug-of-war’ where ‘two teams attempt to shift a flag a small distance either way, yet there is high tension in the rope and the system would collapse if either side stopped pulling’ [Reference Haig58, p. 496].
Haig first applied the parent–offspring conflict concept to development and pregnancy to explain the phenomenon of genomic imprinting, in which for some genes only the maternal (or paternal) copy is expressed, while the copy that came from the other parent is silenced [Reference Haig59]. Because the mammalian mother is equally related to all of her offspring, her interests are best served by controlling resource allocation to her offspring, making sure as many survive as possible; but because the father of the fetus in the current pregnancy may not also father a future fetus or litter, it is in his interest to promote the growth of this particular fetus [Reference Moore and Haig60]. The hypothesis was persuasively supported by the insulin-like growth factor 2 (IGF2) system, in which the growth factor (promoting growth) was paternally expressed while the growth factor receptor (controlling growth) was maternally expressed. But Haig soon expanded his concept to other aspects of pregnancy, first and foremost the communication between the mother and the fetus by means of chemical messages carried by hormones [Reference Haig61]. In Haig’s words, this communication was a devious game played by both sides to advance their own interests: ‘a response that is beneficial for a sender need not be beneficial for the responder, and vice versa’ [Reference Haig61, p. 358]. Mothers were ‘able to extract some information from placental hormones’ [Reference Haig61, p. 374], yet placental hormones were ‘fetal attempts to manipulate maternal metabolism for fetal benefit’ [Reference Haig61, p. 357].
While Haig’s hypothesis of placental hormones as tools of fetal subterfuge has remained without empirical support, the concept of maternal–fetal relationship as a state of unresolved conflict has held much attraction. For instance, clinical researchers have used it widely – moving slickly from selfish genes to selfish organisms and back – to explain various pathological phenomena of pregnancy, such as gestational hypertension and severe chronic infections [Reference Barker and Osmond67]. The attraction of the concept may be explained by the broader social view of the maternal–fetal relationship in the last decades of the twentieth century. It was recognised that for the fetus the mother presented the immediate environment, but the idea of an autonomous fetus, whose needs and interests need not overlap with its mother’s, remained in full force. Yet, the strong hereditarianism implied in the conflict model, with both the mother and the fetus seen as machines governed by their genes, left little room for considerations of environmental influences received in development [Reference Abrams and Meshnick62–Reference Muehlenbachs, Mutabingwa and Edmonds64].
2.6 Maternal Environment and Fetal Exposure
By the end of the twentieth century, many of the paradigms that had dominated it came under scrutiny. As ‘the century of the gene’ ended with the publication of the Human Genome draft (and, a few years later, the full sequence), it became obvious that knowledge of the genome sequence was only the beginning, rather than the end, of the quest to understand life, health, and disease [Reference Fox Keller65]. The notion of the autonomous fetus was questioned too. ‘We have been dazzled by the very strong control by the fetus’, wrote the fetal physiologist Graham Liggins, after decades of research into the onset of birth revealed enormous interspecies variation and the fact that, in humans, the mechanism firing off labour had little direct input from the fetus [66]. The research programme studying fetal respiratory movements came to a dead end in the late 1980s. Fetal physiologists looked for inspiration elsewhere and found it in the work of David Barker, the British epidemiologist who argued that the conditions of early life – indeed, even before conception – shaped disease risk in adulthood [Reference Dawes, Borruto and Zacutti48, Reference Barker and Osmond67]. Barker was certainly not the first to stress the importance of prenatal influences: studies had been coming from social medicine and epidemiology throughout the 1960s and 1970s, such as those by Zena Stein and Mervyn Susser [Reference Stein and Susser68], examining the impact of maternal nutrition on cognitive development in youth. Yet as long as the genetic paradigm and the idea of the autonomous fetus prevailed, this approach remained restricted to public health fields.
The move away from the close focus on the fetus back to the mother and the environment of pregnancy and early life fitted well with the renewed interest in development, manifested, for example, in the return of development to evolutionary studies under the name ‘evo-devo’ [Reference Gilbert, Opitz and Raff69]. It also had to do with an anxiety, growing since the 1960s, about an environment changed by human action and its impact on human health. Older research, such as the previously described work of Robert McCance and Elsie Widdowson or the studies of the cohort of women who were pregnant during the Dutch Winter Famine, was reappraised and integrated into the new paradigm [Reference Barker70]. The reappraisal included previously little-recognised research from across the Iron Curtain by the East Berlin endocrinologist Günter Dörner, who in the 1970s compared the risk of obesity and cardiovascular disease in cohorts of young men born before, during, and after the Second World War [Reference Dörner, Haller and Leonhardt71, Reference Dörner, Rodekamp and Plagemann72]. The difference was that, around the turn of the twenty-first century, the long-term impact of early influences had to be expressed in molecular rather than late-nineteenth-century physiological or twentieth-century endocrinological terms.
The solution was offered by the new, rapidly growing area of biomedical research, epigenetics, which has been variously described as ‘the study of mitotically or meiotically heritable changes in gene function that cannot be explained by changes in genetic sequence’ or, in a less technical language, ‘the molecular memory of past stimuli’, the signals allowing cells to ‘remember past events, such as changes in the external environment or developmental cues’ [Reference Landecker and Panofsky73]. Epigenetics holds the promise of explaining what genetics could not; it clarifies how, under (even slightly) different environmental influences, switching certain genes on and off may allow the same genetic code to produce different phenotypes. There seem to be many mechanisms through which genes may be turned on (and off) – some involving small RNAs and others spatial changes to the DNA–protein complex in the nucleus – but the best studied is the addition of methyl groups to promoter regions of the gene [Reference Gluckman, Hanson, Beedle, Buklijas, Felicia, Hallgrímsson and Hall21].
It may seem that, with developmental approaches and epigenetics, ‘maternal impressions’ have returned to medicine and society. Yet, while the mother was certainly brought back into the picture, her return took place in a reductionist manner, befitting the way that science operates today. This perception of the mother is evident in expressions such as ‘maternal effects’ and ‘maternal environment’. Maternal experiences are required (1) to be, or to be made, amenable to experimental, molecular approaches and (2) to show a quantifiable change in parameters that may be measured using epigenetic methods.
Most research is focused on two categories of influences or exposures: nutrition and stress [Reference Gluckman, Hanson, Beedle, Buklijas, Felicia, Hallgrímsson and Hall21]. The impact of changes in diet is modelled in a relatively straightforward manner in animal models, by restricting nutrition or changing proportions of food groups or particular nutrients in experimental animal diets. Yet the relevance of results to human physiology has not always been obvious. There is very little ‘natural’ about the standardised diets fed to laboratory animals, bred in laboratory environments for generations, so the implications of experimental findings for human nutrition are not always clear. Epigenetic research has also complicated previously established therapeutic regimens: folate, a B vitamin given to pregnant women as a supplement to prevent neural tube defects, is a powerful methyl-group donor, which thus changes the epigenetic state at multiple locations in the organism and possibly has widespread effects.
Even more controversial and complicated than nutritional epigenetics are the attempts to show how maternal psychological traumas and emotional states influence development. Féré once explained them with nervous reflex reaction and Sontag with hormones such as adrenaline; epigenetic research largely focuses on the expression of genes coding receptors for corticosteroid stress hormones. ‘Stress’ here refers to a large group of very different experiences – from parental neglect in early life to situations where the mother is exposed to environmental stress, for example, experiencing the 9/11 terrorist attacks. The best-known animal model is the ‘high/low licking/grooming’ model. In this model, rat dams are divided into those that exhibit frequent licking and grooming behaviour towards their offspring (thus modelling a caring mother) and those that exhibit the opposite – infrequent licking and grooming – behaviour [Reference Meaney74]. The caring mother is supposed to provide a positive, low-stress environment for the offspring, which in turn is understood to affect the functional activity of a group of genes involved in the production and activity of corticosteroids, stress hormones, evident in the epigenetic state of stress hormone receptors and in the level of the hormone.
In short, the new approach to the ways in which the mother modulates and transmits influences received during development is highly reductionist, made amenable to experimental physiological and molecular approaches, with very different experiences expected to produce the same chemical effect in the organism. It is thus entirely different from maternal impressions. One aspect, however, remains by and large unchanged, and that is the responsibility of the mother for the child’s health – and not just in childhood, but throughout life, and even, if the transmission of epigenetic marks across generations proves true, to future generations. The way that the results of epigenetic studies are reported – by journalists but also in some cases by the scientists who did the research – places the burden of guilt for a child’s poor health squarely on the shoulders of the mother [Reference Richardson, Daniels and Gillman75]. Maternal behaviour during pregnancy is scrutinised to an unprecedented level, with an ever-increasing list of prohibited foods, the prohibition of any alcohol, strict scrutiny of weight gain, and a growing list of medical checks. The focus on the mother may seem baffling if we know that many of the animal studies cannot be easily extrapolated to humans, that paternal effects (through epigenetic changes in sperm cells) may play an equally important role, and that many influences are really of a societal or broadly environmental nature. Yet if we keep in mind the older as well as more recent history of the maternal–fetal relationship, on the background of which these studies are conducted and results are presented, then this picture of an ambivalent association makes sense. Rather than being seen as a team, a pair working together towards a common goal, the mother and the fetus are viewed as two parties uneasily united: the fetus requiring protection and the mother needing control.
2.7 Conclusion
In this chapter, I have argued that the focus on the maternal–fetal relationship provides a richer, more instructive picture than the focus on the fetus or on the mother alone. For example, Sara Dubow’s close attention to the medical and legal status of the fetus in twentieth-century America painted an image of ever-increasing autonomy and rights ascribed to the fetus, paralleled by the continuously diminishing status and control of the mother [Reference Dubow5]. This view agrees with the older feminist critique of women’s loss of authority in medicine today, for example by Barbara Duden [Reference Duden4]. Yet shifting the lens slightly to capture the interaction between the two tightly connected organisms also changes, or complicates, our view of the history of the fetus, of the mother, and indeed of ‘maternal impressions’. Rather than a linear process, we see an image where the importance of maternal experiences, and of influences received through the mother, periodically strengthens and weakens. These shifts tell us as much about social changes – women’s position in society, war trauma, standpoints on human identity, agency, and rights – as they do about developments in obstetrics and fetal physiology. In the era of ‘hard heredity’, eugenics, and the early days of genetics at the beginning of the twentieth century, the fetal parasite got what it needed from the mother to survive, but, beyond the bare minimum necessary for survival, maternal influences had no impact. But in the economic depression and political upheaval of the 1930s, which brought unprecedented civilian suffering and famines, the idea of a fetus sensitive to maternal experiences – from her diet to psychological trauma – prevailed. By the 1960s, however, in the newly affluent society, the main concerns revolved around the issues of human rights and subjectivity.
The fetus – made visible through the new technology of ultrasound and enjoying media exposure in colour magazines – was seen as an autonomous organism, able to breathe, move, and control its growth and possibly even the timing of birth. Fetal rights came to be understood as opposed to women’s rights in the era of liberalisation of abortion laws; obstetricians increasingly positioned themselves as fetal rather than women’s advocates. Mothers and fetuses, it seemed, were uncomfortable neighbours whose interests only partially overlapped; evolutionary biologists provided an explanation of this relationship that drew on their sharing only some of their genes. But as the genetic paradigm began to lose some of its power around the turn of the twenty-first century and concerns about the environment changed through human action strengthened, approaches emphasising the importance of environmental influences began to grow in importance. The mother is now seen as the primary environment, as well as the mediator of cues coming from the broader environment. While these approaches may be understood as more inclusive and accurate, they also carry the load of the recent history of maternal–fetal relationship. They imply – and sometimes explicitly state – that the mother, through her behaviour and her choices, is responsible for the health of her future child, but that she cannot be trusted and requires close supervision and control, preferably before the pregnancy has even begun. So rather than viewing the mother and the fetus as a unit, a team working towards a shared goal, their relationship remains ambivalent. Finally, while it may be tempting to see the epigenetic approach as the return of maternal impressions – with the Internet and newspapers brimming with titles such as ‘you are what your mother ate’ – the similarity is only superficial. 
The mother, in epigenetics terms, is a molecular environment, a source, and a mediator of exposures, where what matters is not the actual experience but whether it activates the gene or not.
3.1 Introduction
In June 2003, the Second World Congress on Fetal Origins of Adult Disease took place in Brighton, UK. Alongside researchers specialising in fetal development – developmental physiologists, epidemiologists, obstetricians, and paediatricians – the meeting was addressed by a group of illustrious guests: the well-known scientists and science communicators Colin Blakemore and Lord Robert Winston; the Nobel laureate in economics Amartya Sen; and the royal patron, Princess Anne. Princess Anne stressed the importance of this research for global health and presented a silver salver to the Southampton epidemiologist David Barker, in recognition of his pioneering work in the field [Reference Hanson, Gluckman, Bier, Challis, Fleming and Forrester1]. At this meeting, the International Society for the Developmental Origins of Health and Disease (DOHaD) was founded, and the global ambitions of the new field were evident in its logo showing a fetus ostensibly peacefully nestled in the globe.
Yet just a decade earlier, this field had not existed. It began at a workshop at Lerici, near La Spezia in Italy, in 1989, in which David Barker presented his retrospective epidemiological research from Hertfordshire, UK, showing that low birthweight was associated with an increased risk of chronic non-communicable diseases in later life. It was at that meeting when fetal physiologists first discussed the plausibility and possible underlying mechanisms of Barker’s observations [Reference Dawes, Borruto, Zacutti and Zacutti2].
This chapter, written by a founder of the field and a historian with long-term interest in DOHaD, examines this key (long) decade in the making of DOHaD, bookended by the 1989 La Spezia workshop and the 2003 Brighton Congress. It argues that, for all the attention that DOHaD has received from social and biomedical scientists, its history has not been studied in sufficient depth. Yet to understand the objectives, methods, research questions, and intellectual networks making the field of DOHaD, and the responses that it evoked and that further shaped it, we must appreciate the historical and geographical context in which it was created. For the purposes of this chapter, we focus on three key themes:
1. Interdisciplinarity. From its inception, DOHaD was explicitly interdisciplinary, and interdisciplinarity is a source of its intellectual dynamism and breadth. Yet this required rendering the concepts of each collaborating discipline intelligible to all participating members [Reference Kluger and Bartzke3]. As we will show, while these transformations were productive, in the process some of the context and layers of the original question were lost.
2. Social class and health inequalities in Britain: Barker brought to his research a concern with social inequalities in health. We briefly review its long-standing history in British science and policy and then focus on the reasons for the uptake of DOHaD by the Labour Party upon its accession to power in 1997.
3. Globalisation and health. DOHaD’s international expansion took place during a decade of globalisation. The global interest in DOHaD has been taken for granted, but, as we show in this section, the international networks through which the field spread merit deeper investigation.
3.2 The Promises and Challenges of Interdisciplinarity
In 1989, the doyen of fetal physiology, Geoffrey Dawes, invited the epidemiologist David Barker to a meeting in Villa Marigola, a conference centre near La Spezia, Italy. The title of the conference, ‘Fetal Autonomy and Adaptation’, signalled both continuity and change. Twenty years earlier, Dawes chaired a meeting centred on the key idea that the fetus ‘demonstrates its innate capacity for influencing its external and maintaining its internal environment – that is, its autonomy’ [Reference Wolstenholme and O’Connor4]. The La Spezia meeting was intended to mark a new era in fetal physiology in which the preoccupation with the autonomous functions of the fetus would be complemented, if not replaced, with a focus on the interaction between the fetus and its broader environment [Reference Dawes, Borruto, Zacutti and Zacutti2, Reference Buklijas and Al Gailani5].
To inspire new thinking and draw on the views of the physiologists’ collective, Dawes invited David Barker, an environmental epidemiologist from the University of Southampton. Barker had recently published a series of well-received but provocative articles. He argued that chronic non-communicable diseases were not caused (exclusively) by adult lifestyles but by the conditions of early life that set the organism on a path that increased, or reduced, the later disease risk [Reference Barker and Osmond6, Reference Barker, Winter, Osmond, Margetts and Simmonds7].
Barker built his hypothesis by linking historical with contemporary demographic and epidemiological data. His first studies took an ‘ecological’ approach, by demonstrating a geographical correlation between high infant mortality in the early twentieth century and high morbidity from cardiovascular disease (in men) in the period 1968–1978. The causal link, he proposed, was poor early-life nutrition, caused by maternal malnutrition and poor health, infectious diseases in infancy, and artificial feeding practices. Ecological studies were followed by a retrospective cohort study on a group of men born in Hertfordshire around 1920, whose records of birth and infant weight were, unusually, preserved. Their matched mortality records showed that those born small, and especially those whose growth failed to ‘catch up’ in the first year of life, had a higher relative risk of death from cardiovascular disease [Reference Barker, Winter, Osmond, Margetts and Simmonds7].
In the conference proceedings, printouts of presentations were followed by summaries of the discussions after each paper. These records show us that the physiologists at the La Spezia conference were intrigued by Barker’s findings but struggled to imagine how to convert them into a workable experimental programme. The discussants asked about placental size and gestational age at birth, and also about the possible effects of genetic factors, smoking, and breastfeeding. Significantly, in view of later developments in DOHaD, Hanson asked Barker whether correcting for social class might remove the association between birthweight and later disease, in view of the well-known association between social class and cardiovascular disease, which we will discuss in the next section. This correction might distinguish between an underlying mechanism and merely an association. Barker replied that ‘Hertfordshire at the time in question [emphasis original] was a rural county in which social class was relatively unrelated to health’ [Reference Dawes, Borruto, Zacutti and Zacutti2, p. 35]. While Barker noted that future data on social class and early life would become available, the fetal physiologists left with the resolve to devise studies in animal models to investigate possible fundamental mechanisms.
Hanson’s group provides an example of such early physiological, animal DOHaD work. When they moved to University College London in 1990, they secured funding to investigate the effect of small reductions in the food intake of ewes during early pregnancy. These reductions were not large enough to produce a sustained reduction in maternal body weight or lambs’ birthweight but did produce effects on fetal and neonatal cardiovascular and endocrine function. This experiment, they argued, distinguished between a physiological (‘normal’) adaptive process, albeit with possible later health consequences, and a pathophysiological process in utero. The physiological proposition was indeed confirmed, although the misconception that developmental processes ‘harm’ the fetus has been persistent [Reference Hawkins, Steyn, McGarrigle, Calder, Saito and Stratford8, Reference Richardson9]. Barker and researchers investigating the effects of moderate to severe challenges such as the Dutch Hunger Winter imagined the environment of early human development as a complex web of social and economic forces ultimately manifested as the food available to women and girls. Physiologists, in contrast, had in mind specific graded and quantifiable changes in physiological parameters such as blood pressure, oxygen level, or concentration of nutrients that could be registered by receptors and which then altered development through plasticity [Reference Hanson, Dawes, Borruto, Zacutti and Zacutti10].
Experimentalists initially turned to the animals – namely, sheep – that had been traditionally used to model human pregnancy. Indeed, their confidence in this animal model was validated when ultrasound, a technology introduced in the 1960s, confirmed similarities between ovine and human fetal development [Reference Hawkins, Steyn, McGarrigle, Calder, Saito and Stratford8, Reference Nicolson and Fleming11]. But studies testing the effects of specific nutritional modifications on developing offspring required larger numbers of animals than earlier pregnancy research had used. Sheep were expensive, slow to reproduce, and difficult to manipulate nutritionally for such studies. At the same time, animal experimental regulations were becoming much more stringent. DOHaD scientists replaced sheep with smaller animal ‘models’ – rats, mice, and guinea pigs – that had the advantage of rapid reproductive cycles, lower cost, and simpler regulatory approval. Yet with their large litters and fast development, much of which occurred after birth, they had far less in common with human fetal development. While the replacement of the sheep with small animals was a pragmatic decision, the transferability of observations from small experimental animals to humans was uncertain.
These regulatory and economic pressures on experimental physiology were happening simultaneously with the rise of genomics, which culminated with the publication of the human genome at the end of our examined period (2000, officially in 2003) [Reference Fox12, Reference Hood and Kevles13]. This co-occurrence was not coincidental but the result of the economic and policy shift in the UK through the 1970s and 1980s. The political pressures to cut costs and modernise science were translated into support for certain scientific fields, while other fields lost funding, institutional footing, and political advocacy. In agriculture, traditional animal genetics that relied on long-term follow-up of generations of farm animals was defunded in favour of genomic biotechnology [Reference García-Sancho14, Reference Myelnikov15]. Fetal physiology, also using large farm animals, saw funding cuts too. Through the 1980s and 1990s, the spotlight on novel animal ‘disease models’ developed in genomics laboratories – animal strains genetically modified to carry mutant genes predisposing them to specific diseases – and the first successful experiment in cloning a mammalian organism further increased public concern about animal rights [Reference Balls16].
DOHaD, with its focus on the environment–organism relationship, had no interest in genomics at first; however, the overarching push away from experimental physiology and towards genomics was likely the key reason for DOHaD’s entrance into epigenetics in the early 2000s [Reference Brawley, Poston and Hanson17]. This disciplinary relationship was mutually beneficial: while epigenetics provided molecular evidence to DOHaD, DOHaD secured policy relevance for epigenetics [Reference Kenney and Müller18]. This disciplinary relationship between DOHaD and (environmental) epigenetics is so close that many see it as the same field [Reference Richardson9]. Yet it is important to note that each field began on its own, roughly simultaneously in the late 1980s, and had over 15 years of independent development [Reference Buklijas, Meloni, Lloyd, Fitzgerald and Cromby19]. Many scientists who have used epigenetics to explain intergenerational transmission of disease, or, more broadly, inheritance of phenotypic traits, do not see themselves as members of the DOHaD community. Similarly, for many in the DOHaD community, epigenetics is not a core element of the field but rather one of the tools to address the question of ‘developmental origins’. As this chapter stops in 2003, the DOHaD–epigenetics relationship is beyond its scope.
A field of particular interest to the emerging DOHaD community was human medicine. Here we must distinguish epidemiologists and nutrition scientists working inside medical institutions and research groups, who had been interested in ‘Barker’s hypothesis’ throughout, from practicing clinicians. In particular, specialists in internal medicine – cardiologists, diabetologists, and endocrinologists – who treated adults, and, increasingly, elderly patients, and whose primary objective was treatment rather than early prevention of chronic disease showed little interest in DOHaD [Reference Reynolds and Tansey20, p. 47]. In terms of elucidating mechanisms of cardiovascular and metabolic disease, they placed greater trust in genomics, which promised to reveal the basis of risk of disease at the individual level; a promise later captured in the term ‘personalised medicine’ [Reference Prainsack21]. In contrast, obstetricians and paediatricians, communities that had already collaborated closely with experimental physiologists in the fields of fetal and neonatal physiology, joined DOHaD in larger numbers. So, while the field was meant to bridge two opposite ends of the human lifespan, in practice, clinical disciplines studying life’s beginning took up more space in the field than those at the other end, and this influenced DOHaD’s direction. Probably the most significant criticisms came from Barker’s own discipline. Epidemiologists argued that his observations were artefacts arising from over-controlling for variables such as BMI and other confounders, and that they lacked confirmation in studies of cohorts, such as twins, where birthweights were smaller. The potentially underestimated importance of social factors was also emphasised [Reference Paneth and Susser22, Reference Rich-Edwards and Gillman23].
The retrospective nature of the early studies made it difficult if not impossible to resolve such issues, and epidemiologists began to use case-control and cohort studies to clarify the causal links [Reference Elford, Shaper and Whincup24, Reference Elford, Whincup and Shaper25]. The Southampton group established the Southampton Women’s Survey (SWS) between 1998 and 2002 [Reference Inskip, Godfrey, Robinson, Law, Barker and Cooper26]. With the help of general practitioners in the city, researchers recruited women and then followed up the pregnancies and children of those who conceived. The SWS collected rich data, produced many papers, and confirmed and extended DOHaD thinking in many ways.
Through the early 1990s, we can track the process of disciplinary expansion, to incorporate new knowledge, and then its translation. Mothers, Babies and Disease in Later Life – a 1994 book-length explanation of ‘Barker’s hypothesis’ – combined Barker’s own historical epidemiological studies with a summary of research investigating adult risk factors of chronic disease; animal studies of the long-term impact of nutritional modifications, especially during so-called ‘sensitive periods’; and existing clinical data [Reference Barker27]. Although the idea itself was not necessarily new, the disciplinary collaboration was novel, and Barker, with his team, was its tireless champion.
Yet in this interdisciplinary translation and expansion that required ‘telescoping’ from social conditions to dietary components and then specific molecular pathways, the link with the broader social environment became difficult to maintain. Social scientists have critiqued the ways in which social class, gender, and race are ostensibly erased in DOHaD research [Reference Richardson9, Reference Valdez28]. We borrow the term ‘telescoping’ from Warin and colleagues who criticised the shift from the long-term impact of early-life undernutrition to overnutrition as the key question in the field [Reference Warin, Moore, Zivkovic and Davies29]. In their view, this also meant a move from concerns over social determinants of health to assumptions about individual women’s bodies [Reference Warin, Moore, Zivkovic and Davies29]. Here we want to add another meaning: the entrance of experimental and molecular fields and the pressures of interdisciplinarity. This disciplinary structure of DOHaD as a biomedical field – rather than being situated within social medicine or even epidemiology – has profound consequences. As a recent ethnographic study of DOHaD science argued, while researchers are aware of the importance of understanding the structural reasons underlying different early life conditions, current DOHaD studies, with their focus on individual behaviour and measurement of a limited set of variables, make these connections difficult [Reference Penkler30].
3.3 Social Class, Health Inequalities, and Government Policy in the UK
The relationship between social class and health has long been a key preoccupation of British scientists and policymakers. Francis Galton’s eugenic ideas were a defence of the existing social order based on innate and fixed biological characteristics [Reference Bland, Hall, Bashford and Levine31]. Yet by the interwar period the practitioners of the new discipline of genetics began to insist on the precise delineation and description of heritable traits and to criticise the ambiguity of eugenics. Studies such as Lionel Penrose’s Colchester Survey, which examined the heritability of ‘mental retardation’, pointed to a range of congenital (i.e. associated with birth or pregnancy), but not heritable, factors influencing the characteristics of the new individual [Reference Kevles32]. The economic depression of the 1930s further exposed problems with the eugenic argument, showing that poverty rather than heritable traits was the main cause of many diseases. Soon thereafter, the Second World War strengthened social and political support for the emerging welfare state. Simultaneously, eugenics was replaced by social medicine – a new field that joined together the commitment to redistributive economic policies, public health concerns with living conditions, and the interest in ‘lifestyles’ [Reference Oakley33].
Between the 1940s and 1960s, social medicine flourished at British universities [Reference Porter34, Reference Porter and Porter35]. David Barker received his PhD in 1967 under one of the leaders of the field, Professor Thomas McKeown at the University of Birmingham. McKeown’s research programme investigated the relationship between human reproduction, social conditions, and mental health [Reference McKeown36]. Even though Barker subsequently worked in or led departments of epidemiology rather than social medicine, his methodology resembled McKeown’s in its blending of historical with contemporary epidemiological data and his enduring preoccupation with social inequalities.
Barker collaborated with epidemiologists who studied links between social class and disease. He worked with Geoffrey Rose, a lead investigator in the longitudinal ‘Whitehall Study’ that interpreted coronary heart disease as an outcome of class-based inequalities rather than a disease of ‘affluent lifestyles’ [Reference Clark37]. Barker participated in the debate on health inequalities through his series of investigations into the links between contemporary geographical distributions of chronic diseases and the patterns of predisposing factors in earlier generations. In the 1970s, he mapped the occurrence of Paget’s disease (of the bone) in the elderly onto the incidence of childhood rickets in earlier generations. This research can be understood as a precursor to his more famous 1980s studies [Reference Barker and Gardner38]. But in contrast to McKeown, Barker was operating not in the context of a rising welfare state but in the neoliberal response to the 1970s economic crisis: a political and economic environment in which the elements of the ‘welfare state’ were progressively eroded.
Barker made an explicit contribution to the debate in 1987, when he quoted the report on inequalities in health by the committee led by Sir Douglas Black – a report commissioned by the Labour government but issued by the new Conservative government under Margaret Thatcher in 1980 – which explained inequalities as ‘the more diffuse consequences of the class structure’ [Reference Barker and Osmond39, Reference Black40]. Barker argued that ‘specific explanations may be found in the environmental influences that determined past differences in child development. These explanations may allow a national strategy for reducing inequalities in health to be developed’ [Reference Barker and Osmond39].
This quote, Barker’s intellectual networks, and his epidemiological research indicate that a contribution to policymaking aimed at reducing health inequalities had long been one of his objectives, although perhaps out of reach through the 1980s under the Thatcher government. But by the early 1990s, conditions for health and social policy in the UK started to change. In his 1994 book Mothers, Babies and Disease in Later Life, Barker stressed the profound implications of emerging interdisciplinary research – his earlier epidemiological work and the incipient physiological and clinical research on ‘Barker’s hypothesis’ – for government policy [Reference Barker27, pp. 170–171]. He referred to the UK’s Health of the Nation strategy, based on the WHO’s Health for All (1978) and launched in 1992, as ‘the first attempt by a British government to develop a strategy explicitly to improve the health of the population’ [Reference Hunter, Fulop and Warner41]. Yet although broadly welcomed, this strategy was also criticised, both for its disease-based model and, importantly, for not considering the socio-economic determinants of health [Reference Hunter, Fulop and Warner41, p. 4].
The Labour Party’s historic accession to power in 1997, after 18 years in opposition, meant a renewed focus on socio-economic determinants of health, within a broader commitment to marshalling scientific evidence into public policy. The new government immediately commissioned a report on ‘inequalities of health’, with the objective of identifying priority areas for policies to ‘develop beneficial, cost effective and affordable interventions to reduce health inequalities’ [Reference Acheson42]. Chaired by Donald Acheson, Barker’s predecessor as Director of the MRC Environmental Epidemiology Unit in Southampton who was then appointed Chief Medical Officer of the UK, the working group included scholars who would become synonymous with research into inequalities and health, namely the epidemiologist Michael Marmot and the sociologist Hilary Graham, alongside David Barker.
DOHaD influenced both the report and British policy. Nutrition and gender would likely have received attention with or without DOHaD. But the explicit statement on the nutrition of women before and during pregnancy influencing the long-term health of the next generation, and an entire discussion of the risks of reduced fetal growth, referencing the recent work of the Southampton group, were most likely contributed by Barker [Reference Acheson42]. Correspondence kept in the National Archives confirms this hypothesis: in a letter to Acheson dated 4 September 1997, Barker explained the policy implications of inequalities in fetal growth, and Acheson wrote in the margins, ‘Thank you very much for your interesting and important letter which will be duly fed into our process.’
An early outcome of the report’s – and DOHaD’s – impact on British policy may be the cross-departmental programme Sure Start, which brought together social services, health, early childhood and primary education, and social justice, to improve the ‘physical, social, emotional and intellectual status of children’ [Reference Glass43]. While the original remit was children under seven years of age, as the review developed ‘there was an accumulation of evidence that successful intervention in the earliest years offered the greatest potential for making a difference’ [Reference Glass43, p. 260]. This text did not explicitly reference Barker or Acheson, but it named the Health Secretary Tessa Jowell, who oversaw both the Acheson Report and Sure Start and who steered the 1997 Comprehensive Spending Review in which the Sure Start programme originated ‘towards services for families and their children aged nought to three, including the pre-natal period’ [Reference Glass43, p. 260].
In conclusion, although the reception of DOHaD in the science community in the late 1990s was not fully settled, the historical moment in British politics, with the election of a party explicitly committed to using the latest evidence to reform social and health policy, created the conditions for DOHaD to enter public policy early in the history of the field.
3.4 Developmental Origins in the Era of Globalisation and Global Health
DOHaD began in the UK, indeed in England: at Barker’s home institution, Southampton; at University College London, where Hanson led a fetal physiology group; with Alan Lucas’ research team at the Childhood Nutrition Centre of the Institute of Child Health in London; and in Cambridge where Nicholas Hales’ group studied clinical biochemistry and metabolism. But the field almost immediately began to expand internationally. In this section, we show how early collaborative networks were created along the established intellectual networks in the British Commonwealth. Then we show how multilateral global organisations became key proponents and advocates of DOHaD.
Barker published a report of the ‘first international study group’ meeting held in Sydney in October 1994, bringing together scientists from the UK, Australia, Canada, and New Zealand [Reference Barker, Gluckman and Robinson44]. The geographical location is significant as it maps onto the leading centres of fetal physiology and medicine. And while these were relatively new fields, launched in the mid-twentieth century, they built on the long-standing networks of research and practice of agriculture, and especially sheep breeding, in the British Empire [Reference Franklin45–Reference Gluckman and Buklijas47]. These scholarly communities had been studying animal growth for decades; they had developed sophisticated research methodologies and had easy access to animals. DOHaD provided a new framework for their research, by putting the emphasis on environmental (nutritional) influences on fetal growth, and new relevance, by linking their work to adult clinical medicine. In turn, these research communities responded enthusiastically. DOHaD groups sprang up in the UK beyond the southeast: in Bristol, Nottingham, Aberdeen, and Edinburgh; in Toronto, Adelaide, and Auckland; and in US centres with a strong tradition of animal agricultural research – Cornell and Portland, Oregon [Reference Fall and Osmond48].
But it was human studies in the Global South that provided the key missing piece. As the previous section discussed, human prospective studies were central to the confirmation of the DOHaD hypothesis developed on retrospective historical data. And while the Southampton Women’s Study was important for its rich insight into the everyday lives of ‘Western’ women – now conceptualised as developmental environments – prospective human studies in the Global South were important for three reasons. At least since the 1950s, biomedical scientists trying to explain and predict the impact of a rapidly changing environment upon humankind have taken the Global South populations – more likely to subsist on sparser diets than the late twentieth-century people of Europe and North America – as a window into the recent global past [Reference Ventura Santos, Lindee and Sebastião de Souza49, Reference Santos, Coimbra and Radin50]. DOHaD researchers further wanted to understand differences between human populations: do they all respond to developmental nutritional fluctuations in the same manner? Are the risks of cardiovascular and metabolic diseases the same for all? Finally, the 1990s were the heyday of the economic, cultural, and technological transformation termed ‘globalisation’, which helped spread the ‘nutritional transition’ – a shift to a diet high in ultra-processed, high-fat, and high-sugar foods – from the Global North to the South [Reference Popkin51, Reference Labonté, Mohindra and Schrecker52]. By launching human studies in the Global South just as this transition was starting, DOHaD researchers hoped to capture this fleeting moment, when generations raised under ‘old’ nutritional regimes were bearing children into new ones.
The new DOHaD human medicine centres in the Global South were established through Commonwealth networks too, but in former British colonies rather than settler societies. An early and highly significant collaboration flourished between Southampton and Chittaranjan Yajnik, who set up the Pune Maternal Nutrition Study in 1994. The aim was to study the long-term, especially metabolic, effect of maternal malnutrition on children born to women in villages around Pune, in the hope of explaining the much higher risk of insulin resistance and diabetes mellitus among the subcontinent’s population [53, 54]. Similarly, a study in Jamaica led by Terrence Forrester linked birthweight with the later response to famine: children who were born small tended to respond with the wasting illness, marasmus, and those born larger were more likely to develop the more life-threatening kwashiorkor [Reference Forrester, Badaloo, Boyne, Osmond and Thompson55]. At the Medical Research Council Unit in The Gambia, DOHaD research was integrated into an existing programme of human nutrition research in West Africa [Reference Green56].
DOHaD was new, but, as we show in this section, it built upon the existing disciplinary and institutional networks largely within the British Commonwealth, with histories dating back to the British Empire. Whether these were fetal physiologists whose animal models and research methodologies built upon the structures of settler agriculture, or medical institutions and knowledge networks that traditionally prioritised diseases of greatest economic significance to the empire, their legacies and assumptions influenced the new field of DOHaD. Further research is needed – and in particular case studies focusing on specific countries or research fields and institutions – to elucidate the specific forms and impacts of these influences.
3.5 Conclusions
The field that became DOHaD started in the intimate environment of an academic workshop, but just over a decade later it had sufficient appeal and reputation to hold a world congress bringing together the research community with celebrity scholars and royalty. This chapter argued that to understand this trajectory we must situate the field in the geographical and historical context in which it was created and flourished. We then identified three key themes to help us explain both its success and the controversies: effective interdisciplinarity; ability to offer a new solution to the long-standing problem of social class in Britain; and the ability to recruit existing international knowledge and institutional networks to build a novel approach to emerging global health problems. We summarise our argument in the following way:
First, the interdisciplinarity of DOHaD was its central feature from the start: a source of innovation, intellectual richness, and an effective way to broaden the field’s appeal and recognition. Yet at the same time it was a source of challenges and controversies, with the field having to reconcile diverse methodologies and data types and also respond to criticisms from different disciplinary corners. Furthermore, for all its rapid global spread, DOHaD was deeply marked by its British origins. The long-standing concern with the effect of social class on health not only influenced Barker in the formulation of his original hypothesis but also provided the opportunity and context for DOHaD to influence public policy relatively early in its history. This track record in British social and health policy, established just as the New Labour government was gaining international interest for its conscious attention to scientific evidence in policymaking, became important in the twenty-first century [Reference Wells57]. This period marked the entrance of DOHaD into global health policy, at first through the established scientific connections of the former British Empire. While this could have sounded its death knell in times of wider recognition of the harmful legacy of this past, in fact it gave DOHaD new life as the realisation grew in the early years of the current millennium that inequalities in health affect all societies, and none more so than those passing through the nutritional and economic transitions associated with globalisation.