Introduction
Malaria has had a long and significant impact on the African continent. From nearly any perspective and metric, the effects have been persistent and deep. This article raises critical historical, ethical, and scientific questions related to the disease by focusing on the “keyword” of malaria over the past century. Malaria, as a topic, represents a huge area of research and cuts across many disparate fields. There are parasitologists studying plasmodium development in labs; entomologists observing mosquito biting behavior in the field; economists measuring the implications of the disease on human capital; microbiologists and geneticists creating genetically modified mosquitos; epidemiologists tracing localized patterns of disease; climate scientists tracking the implications of a warming planet; anthropologists observing the range of therapies used to treat the disease; historians reconstructing what happened in past malaria control activities; and many others working in fields such as political science, sociology, and environmental studies.
Since the study of malaria covers many different disciplines, we have narrowed our focus to malaria interventions—by which we mean activities carried out with the intent to control, eliminate, or eradicate malaria both historically and in the present. These terms have always been somewhat imprecise, but eradication refers to permanent zero incidence of a disease across the globe; elimination refers to the interruption of local transmission of a disease in a defined geographic area (such as a country or region); and control is any activity intended to reduce the incidence, prevalence, morbidity, or mortality of a disease. Over the past 120 years, malaria control, elimination, and eradication activities have largely been funded, directed, planned, and carried out by international organizations ranging from European colonial governments to the Bill & Melinda Gates Foundation. An emphasis on interventions allows us to consider the importance of these past interactions and how they shape current activities, enabling us to see the similarities in approach and tools over the past century and to reflect on the outsized role played by international scientific actors and organizations.
This article has three main arguments. The first argument is that historical knowledge must be meaningfully integrated into biomedical and global health epistemic realms, and it must be used to inform the planning and design of malaria interventions. The second argument draws our attention to one of the neglected ethical questions surrounding malaria control: the risk of rebound malaria epidemics occurring when successful programs end. It presents evidence that Africans understand the risks of participating in eradication projects differently from the scientists and organizations running them. The issue of rebound malaria illustrates the practical and moral problems that abound when historical knowledge is ignored. Finally, there is an embedded argument about historical continuity: as much as global health experts and scientists involved in malaria activities emphasize large discontinuities between past practices and current methods, from at least a few vantage points—especially technology and rhetoric—the past century is one of similarities and connections.
The article begins with an explanation of how the biomedical disease we know as malaria was created around the turn of the twentieth century. The next section provides a brief history of malaria activities on the continent over the past 120 years, with a focus on the overall history of failed elimination and eradication efforts and on the past two decades of significant reductions in morbidity and mortality. We then take a closer look at the challenging ethical questions that are linked with the risks of rebound malaria, particularly questions of truth telling, appropriate involvement of local communities in decision making, and longer-term responsibilities and obligations. The conclusion steps back to consider the necessity of integrating multiple epistemic forms, arguing that social scientists should insert themselves more forcefully into policy conversations that pertain to their areas of expertise. It is not a new sentiment, nor one that all academics agree upon, but it is an urgent one worth returning to.
Throughout the article we evaluate how the Lancet Commission on Malaria Elimination Report, a recent and influential global health report calling for another attempt at malaria eradication, has integrated historical knowledge, considered the ethics and risks of rebound malaria, and discussed the promise of new technology. The 57-page report, which was put together by the University of California San Francisco’s Malaria Elimination Initiative and funded by the Bill & Melinda Gates Foundation, was released in September 2019; it argues that eradication is ambitious, well-intentioned, and ethically defensible. Starting in 2007 with its call for malaria eradication, the Foundation has pushed the envelope of what the global community thinks is possible, and no one should doubt its influence in setting the global health agenda.
***
Malaria—as a biomedical disease, and as a bodily condition causing suffering and ill health—is not new, nor is it confined to Africa. One of the first issues we must confront is the question of whether malaria in Africa is exceptional. The answer is yes, malaria in Africa deserves to be considered independently. From an entomological perspective, the continent has the highest density of the most effective (and thus most dangerous) mosquito vectors, Anopheles gambiae. This grouping of species has been shown to have a strong preference for biting humans and often lives long enough to ingest the malaria parasite from an infected human and to pass it on to another person, causing a new infection. From a geographic perspective, the entire middle part of the continent has two rainy seasons each year, and those rainfall patterns and high levels of humidity combine to create many ideal mosquito habitats and therefore large numbers of mosquitos, which contribute to year-round malaria endemicity in many areas. Epidemiologically, the most common type of malaria infection in Africa is P. falciparum, which is the most severe form. Finally, the likelihood that a malaria infection results in death corresponds to the overall health of the person infected, and many Africans suffer from co-infections and poor nutritional status. Other societal conditions that contribute to high mortality include overall poverty levels; lack of access to prompt testing and treatment; and the long distances many people must travel to reach medical facilities.
Malaria is a vector-borne disease, which means it is transmitted from one infected person to a non-infected person by way of the bite of a female anopheles mosquito. It is a protozoan infection of the red blood cells. Anopheles mosquitos must first bite an infected human, ingest the malaria parasite, and then live seven to twelve days longer for the parasite to fully develop. For the transmission cycle to be completed, the mosquito must then bite another human, transferring the malaria parasite to cause a new infection. Once a person is infected, the parasite undergoes distinct developmental phases inside the human liver and the bloodstream. The physical effects as the parasites proliferate in the body are cyclical fevers, malaise, and often stomach upset or vomiting, in addition to anemia. Technically, all malaria cases are treatable, and death is preventable if treatment is sought quickly enough, and if the right treatment is given (a drug that the parasite is not resistant to). However, if a case of malaria is not treated promptly, or occurs in a young child, a person without acquired immunity, or someone who is otherwise weakened, the disease can (and often does) lead to death.
Malaria and the Creation of a Biomedical Disease
The biomedical disorder we call malaria was created in Europe just before the turn of the twentieth century through a series of discoveries in Algeria, India, and Italy. The word “malaria” is Italian in origin, coming from mal aria (bad air), which hints at the pre-germ-theory explanations of the disease as one caused by miasmas and bad air emanating from particular environments such as swamps. Scientists used microscopes, experiments, and close observation of animals and people to untangle the disease lifecycle and identify the parasite and vector. By saying that malaria was “created,” we mean that the new field of biomedicine named malaria as a single bodily ailment, distinguishable from other fevers, etiologically explained by a parasite, with a set of common symptoms presenting in all people. That particular concept of malaria has only existed for about 120 years.
It may now be obvious that there is a paradox that comes with naming diseases: unless it is a new disease just appearing among humans and it is quickly identified and named (such as Covid-19), most diseases existed as alternately-named maladies causing ill health long before being identified, named, and bounded as a single biomedical disorder (see Langwick 2011 for an insightful discussion of maladies). The malady of malaria has existed for millennia on the African continent, and across the world these cases of fever, malaise, and death were described with different names and were ascribed different causes and cures. In some cases, these maladies aligned closely with what would later be named biomedical malaria, by being linked with particular environments (such as swamps), or in relation to the presence of mosquitos, or with observations about how different people (such as children) were at greater risk. These maladies existed prior to the formal naming of malaria, and many continue to exist into the present (Gessler et al. 1995; Giles-Vernick et al. 2011; Kamat 2008; Muela et al. 2002). The fact that local categories of illness existed prior to biomedical explanations is not unique to Africa—that was a global phenomenon. Excellent works on the history of malaria in Italy (Snowden 2006), East Asia (Yip 2009), Mexico (Cueto 2007), and the United States (Humphreys 2001; Nash 2007) all show how local disease categories pre-dated the “discovery” of malaria, and trace how these local categories were affected by the introduction of biomedical explanations.
Though this article cannot fully explore this topic, disease categories are firmly embedded in local cosmological and knowledge systems. Explanations for why a particular disease occurs, how it disables, who is affected and why, and effective preventatives or cures all fall within the realm of vernacular knowledge. This body of knowledge encompasses the observational and experiential learning that comes from decades or centuries of living with a particular condition. There have been multiple studies of this type of knowledge in relation to biomedical disease, such as Clapperton Mavhunga’s The Mobile Workshop (2018), focusing on sleeping sickness in Zimbabwe, and Helen Tilley’s Africa as a Living Laboratory (2011), which shows how African knowledge was appropriated into colonial European biomedical and ecological bodies of knowledge. Vernacular knowledge acknowledges different understandings of a particular condition, such as Tamara Giles-Vernick, Abdoulaye Traoré, and Sodiomon B. Sirima’s recreation of the historical epidemiology of “cold fever” in Burkina Faso (2011). These differences in naming and framing are emblematic of more substantial cosmological and epistemological distinctions (such as the African philosophy and modes of thought Kai Kresse [2018] describes for the Kenyan coast, or the “vernacular accounting” described by Adia Benton in West Africa [2012]). As Julie Livingston has noted in her masterful study of how thibamo and tuberculosis co-existed and shaped each other, the translation of biomedical disease into locally salient diagnostic categories is challenging because “each is anchored in radically different ontological regimes…based on different etiologies with different moral consequences” (2007:807).
Doctors and global health workers sometimes express frustration that these local categories of disease continue to exist alongside biomedical malaria. Broadly speaking, these categories persist because they serve a purpose. They describe illnesses that doctors diagnose as malaria, but that do not respond to malaria treatment; they allow people to explain illness with a different starting point than the bite of an infected mosquito; they provide explanations for the tragic deaths of children that do not lay blame on individuals but recognize larger constraints; and they recognize the importance of the unknowable in explaining misfortune such as sickness and death (Whyte 1997). There is a deep functionality to many of these categories, whether it is due to biomedical gaps that Vinay Kamat documents in Tanzania (2013), or what Iruka Okeke calls “diagnostic insufficiency” and finds in Nigeria (2011). There is no reason to expect that local disease categories will disappear, or that these categories endure only because people aren’t knowledgeable about biomedical malaria. This is an important point that has yet to be absorbed by the global health policy community.
Now that it is clear that diseases are created within a process of scientific-social construction (they start in a lab with scientific discovery, but then they have to be diffused and adopted by larger society), we also have to point out that what constitutes a disease changes over time. Biomedical malaria has not been a single and stable disease category. Knowledge about malaria has changed dramatically over the past century. To offer just a few examples: it was initially not known that there are five different malaria parasites affecting humans (falciparum, vivax, ovale, malariae, knowlesi) and that each creates slightly different symptoms; it was not known that there is a dormant liver stage of vivax and ovale malaria where malaria can reappear months or years after first infection; and it was not obvious how people living in endemic spaces acquire partial immunity to malaria or how people across the continent have a degree of genetic protection through Duffy negativity, and how that protection relates to sickle cell anemia. In terms of prevention and treatment, it was not known how much quinine had to be given to protect or cure; and as resistance grew to drugs such as chloroquine, new treatments had to be invented, or rediscovered in the case of artemisinin. Biomedical knowledge is not a static body of facts that are true and unchanging. Just as local conceptions of malaria have changed over time, biomedical malaria should be considered as an unstable, yet durable, category.
Malaria and History: Elimination Attempts & Failures
We established earlier that malaria in Africa is unique. It is also worth asking whether the continent’s inability to eliminate the disease is unique. The answer is both yes and no. On the one hand, African countries have been less successful than other countries globally in eliminating malaria. (Of the thirty-eight countries that have ever eliminated the disease, only three are in Africa. There is a general consensus that wealthier countries with less endemic malaria are more likely to be successful in eliminating the disease, and small islands have a particular advantage [Smith et al. 2013; Packard 2007].) On the other hand, it may be more reasonable not to compare Africa to every other part of the globe with malaria, but rather to consider only places that are similarly malaria endemic, or that have “stable” or “substantial” malaria. Depending on which report is consulted, 40, 44, or 47 African countries are considered malaria endemic, and none of those have eliminated the disease.Footnote 1 The World Health Organization (WHO) recognizes that there is no country with highly endemic, stable malaria transmission that has ever completely eliminated the disease, and that countries that have had successful elimination programs are “located in areas of low and unstable transmission” (WHO 2008). When we compare African countries with other places with endemic, stable malaria, the commonality is failure to eliminate. This means that Africa, as an endemic area, is not unique in the persistence of malaria despite concerted eradication efforts.
One area where Africa is similar to the rest of the world is in the strategies used for malaria control, which have remained largely the same over the past seventy years. These activities have followed the basic principles of destroying mosquito habitat through environmental control, preventing contact between mosquitos and humans using barriers, and reducing transmission by means of prompt testing and treatment of the infected. There has been a great deal of consistency in these low-tech approaches, including environmental control and what is now called “integrated vector management”—combining more than one approach to reduce the number of mosquitos. The main strategy of environmental modification is destruction of mosquito breeding sites by draining swampy areas. This reduces the overall number of mosquitos by destroying the places where females can lay eggs, thus reducing the number of vectors available to move the malaria parasite from one person to another. In Zanzibar, water sanitation practices meant to eliminate swampy areas date back to the Omani era; “anti-mosquito brigades” were functioning by 1907, and extensive oiling done by African workers continued for decades (Issa 2011). Another low-tech approach has been to focus on minimizing human-mosquito contact through barriers such as bed nets or sturdily constructed houses with screens and tight-fitting windows, doors, and roofs. Using chemicals and insecticides has also been a mainstay of eradication programs, beginning with the oiling of small breeding sites, outdoor spraying with Paris Green from the 1920s onward, and eventually the development and use of new insecticides such as DDT, dieldrin, and pyrethroids for both indoor and outdoor use during and after World War II.
The largest and most well-known malaria intervention was the WHO’s global attempt to eradicate the disease. The Global Malaria Eradication Programme (GMEP) ran from 1955 to 1969, and it was expected that the new pesticide DDT, sprayed indoors and outdoors to reduce mosquitos, would pave the way to complete eradication. There were certainly successes in some parts of the globe. During GMEP years, malaria was declared officially eliminated in fifteen countries, including Italy, Spain, and Taiwan. There were also instances of what scholars have termed “failure as success,” where program goals were not accomplished, but success was still declared (Brown 1999). Some countries, such as India, Sri Lanka, and Korea, saw steep reductions in malaria rates, which were then followed by epidemics of rebound malaria that sent morbidity and mortality surging higher than pre-intervention levels.
It is worth examining how the Lancet Report engages with the GMEP history, given that the first sentence of the report invokes the “noble but flawed attempt to eradicate malaria in the mid-20th century” and points out that we are “again seriously considering eradication” (1). The Report’s position on what to do is clear: malaria “can and should be eradicated before the middle of the 21st century” (2). With the goal of proving that eradication today is a good idea, there is a three-paragraph section recounting “Lessons from the Global Malaria Eradication Programme.” The authors recognize that fifty years later, “the findings and conclusions of this final GMEP report are startlingly familiar.” But despite seeing similar operational, technical, and financial challenges, the authors argue that the world in 2019 “is nothing like the world in 1969,” due to increases in wealth, health, and education. They acknowledge previous mistakes but claim that those failures will remain in the past. There is a superficiality in how history is used: “History in global health and many other arenas has taught that success follows bold commitments, and not vice versa” (8).
It is telling that the Lancet Report overlooks all historical works on the topic. Among the 364 references, there is not a single historical book, despite the plethora of extant works. Historians James Webb and Randall Packard have detailed the specifics of programs in Liberia and southern Africa, in addition to writing books about the entire continent (Packard 1996, 2007; Webb 2009, 2011, 2014). These works are unequivocal in their assessments about the past failures and the unlikelihood of elimination in Africa in the future. Packard writes, “Put simply, malaria policy has largely ignored the human ecology of malaria. The failure to link ecology and policy has prevented the elimination of malaria” (Packard 2007:247). Webb has pointed out this same historical shortcoming in past works by the same team that put together the Lancet Report, writing that the vision of “shrinking the malaria map…seemed to run counter to the long experience with malaria control in tropical Africa,” but that skeptical voices were silenced due to the large amounts of money supporting the endeavor (Webb 2014:146). It is unclear whether this history is not incorporated because it is inconvenient, or if these works remain unread. What is clear is that this type of historical information is woefully unincorporated into contemporary discussions about malaria in Africa. There are myriad other works by historians and anthropologists that also contain information calling into question some of the Lancet Report’s basic conclusions.
One frequent explanation for the failure of malaria eradication in Africa is that no genuine attempt was made. During the actual campaign, the WHO’s Africa Regional Director stated that there was a “temporary exclusion of the African region from WHO’s world-wide malaria eradication plans” (Litsios 2015). This is an oft-repeated claim in contemporary malaria articles, seen in statements such as, “the clear decision of the GMEP to leave most of Africa out of ‘global eradication’” (Cohen et al. 2010). But this claim does not hold. Between 1959 and 1968, WHO documents list up to twenty-four “official” efforts as part of the GMEP, occurring in at least twenty-one different African countries.Footnote 2 This number is most certainly an underestimation. Based on documents from African and WHO archives, it appears that many malaria programs occurring during these years were referred to as “pilots,” “schemes,” or “experiments,” and were not counted as formal parts of the eradication attempt.Footnote 3 To date, there has been no comprehensive accounting of all these programs to properly acknowledge the size and scale of the effort, and very few of these programs have received close attention. (The Garki project in Nigeria is one of the few that has been examined [Molineaux & Gramiccia 1980]).
On page two of the Lancet Report, the authors lay out another potential explanation for Africa’s failure, claiming “Malaria control programs were often overwhelmed and underfunded, and, especially across Africa, a sense of fatalism existed that substantial progress would never be made” (2). But archival materials from multiple programs indicate that these African efforts were well-funded, used the best technologies of the day, and mimicked the structure being used on other continents. The scientists involved were optimistic at the outset; there was a great deal of enthusiasm, and in the case of Zanzibar, it was written repeatedly by different scientists involved in the project that elimination was expected. As a private 1963 letter reflected, “There was every reason to expect a model malaria eradication program in Zanzibar which would have demonstrated the feasibility of malaria eradication in the presence of tropical African vectors and under tropical African climactic conditions” (Assessment of Zanzibar Map 1963).
Yet even with money, technology, and a surplus of optimism, problems mounted. Vectors did not disappear as easily as the models had predicted they would, and in some cases, scientists were surprised to discover new vectors. Insecticide resistance grew faster than had been expected, and it was not recognized by scientists as quickly as it should have been. A multitude of scientific and managerial mistakes were made. In Zanzibar, documents show the wrong supplies being ordered, spraying planned for the wrong months, maps poorly made, and data inaccurately collected, compiled, and interpreted. Very rarely do the official WHO reports mention what shows up frequently in the private, confidential memos: leadership failings in Geneva, gross errors in the modeling of malaria from leading European universities, scientific missteps around insecticide resistance, and unrealistic assumptions about how particular interventions would translate to the various African environments. As one private memo put it, “This report shows the great incompetence and complete lack of technical knowledge and critical minds of the WHO advisers” (Zanzibar 2nd Quarter Report).
Another common explanation for the failure to eliminate malaria in Africa is to blame Africans or African spaces. Frequently mentioned are tropes of recalcitrant or uneducated Africans (being unwilling to open houses to sprayers because they did not understand the benefits of the intervention), unreliable African employees (sprayers, mappers, oilers), or African environments themselves (too many vectors, unexpected vectors, different soil/climate/rainfall patterns) which presented challenges distinct from what scientists had predicted or expected. These show up throughout the Lancet Report, and are among the seven different explanations for the failure: “The biggest challenges at the time were considered to be complacency and absence of political will; poor leadership and management; inadequate tools to eliminate in high transmission areas, particularly sub-Saharan Africa; population movement and poor access to malaria services; minimal knowledge of vector behavior; insufficient funds; and the early development and spread of insecticide and drug resistance” (7). Having read many of the internal WHO documents related to the GMEP for different countries in Africa, we can say with certainty that the biggest challenges facing those programs were inadequate tools, minimal scientific knowledge about localized conditions, and the slow response to resistance. These were grave problems that prevented success.
As this historical section moves into the contemporary era, there is positive news. Since 2000, there have been dramatic reductions in the number of cases globally and vast reductions in mortality across the African continent, though a few countries have seen increases (Björkman et al. 2019; Murray et al. 2012; World Health Organization 2020). Not since the late 1960s and the end of the GMEP has malaria been reduced in so many different places. Many of these reductions have come with the introduction of new technologies such as wi-fi access and cell phones that allow for outbreaks to be efficiently reported and contained. Yet, while the general trend of malaria reductions is agreed upon by all experts, there are disagreements about how much malaria has been reduced and where (Hay et al. 2004; Cibulskis et al. 2016). This is yet another example of where key terms and indicators—even those as basic as the number of infections, morbidity, and mortality—are less stable than they might first appear. As with other global health metrics, much of what is presented as quantitative fact is a mashup of incompletely collected data supplemented with models meant to fill in the missing and biased data. The “cooked” nature of this data and generally “poor numbers” have been well documented and are not unique to malaria (Adams 2016; Biruk 2018; Gerrets 2010; Jerven 2013). Work by Marlee Tichenor (2017) shows how the very process of data production reinforces preconceived ideas about malaria and appropriate solutions. Setting aside the murkiness of the malaria statistics, there is agreement that never before have such vast amounts of money, energy, and interventions targeted malaria in Africa. Promisingly, this focused attention has resulted in substantial reductions in the disease, and millions of lives have been saved.
Malaria and Ethics: Risks of Rebounding Malaria
A closer look at the efforts meant to control or eliminate malaria reveals a host of challenging ethical questions, including respect for persons by recognizing the autonomy of individuals and communities through appropriate consent practices; truth telling by sharing information about risks that may occur once the intervention ends in a form that is understandable to all; and, more broadly, how to facilitate shared decision making. There are also many more ethical questions raised by malaria interventions, including those regarding what types of vector research to pursue (Ndebele & Musesengwa 2012) and how best to organize malaria vector trials, while taking account of community concerns and the treatment of trial communities (Kilama 2010). This section explores these ethical issues by considering particular instances when biomedical determinations of risk may not match local conceptions. As one example of this mismatch, we examine the risks stemming from the loss of acquired immunity and epidemics of rebound malaria. In this case, though the African continent has been the site of multiple rebound epidemics, it does not seem that most African participants recognize the risks associated with short-term malaria programs or the loss of acquired immunity.Footnote 4 We begin by explaining the basic science of acquired immunity and the epidemiology of rebound malaria before delving into the ethical implications and the significance of alternate moral frameworks.
Acquired immunity allows children born in endemic areas to develop a degree of protection by surviving repeated malaria infections while they are young. This makes malaria deadly in childhood but largely a disease of morbidity (sickness) in adulthood. It has been discovered that just as this partial immunity can be acquired, it can also fade when an individual of any age is no longer exposed to regular malaria infections. It remains an open question how quickly immunity wanes when a person is not exposed, and whether the immunity can be regained. What has been well documented is that in situations where individuals, or whole communities, lose their acquired immunity through multi-year suppression of malaria infections, when malaria returns, it can be particularly severe and deadly (Trape et al. 2014; Griffin et al. 2015; Langhorne et al. 2008). These epidemics of “rebound” or “resurgent” malaria are so named because malaria rates are reduced during the interventions, then rebound and resurge, often to higher levels than before. Rebound malaria thus refers to instances when the disease returns to a place where it was successfully controlled and infects adults who have lost immunity and children who never gained it. (Children under five years of age are particularly vulnerable because their immune systems are not fully developed.) The main article on this topic, by Justin M. Cohen et al. (2012), recorded seventy-five unique cases of rebound malaria epidemics in sixty-one countries over a seventy-year time period. The most commonly assigned cause of these rebound epidemics was a weakening of malaria control programs, mostly caused by funding constraints. While Cohen’s article is the most comprehensive accounting of rebound malaria to date, it is incomplete—being entirely reliant on published papers indexed on major databases—and thus certain to be an underestimation. This threat remains very real in Africa, and new epidemics continue to occur, despite the fact that such outbreaks are largely—if not entirely—preventable with responsible planning.Footnote 5
The realities and ethical questions associated with rebound malaria are not new. Dating back to the middle of the twentieth century, there have been well-documented disagreements about the risks of programs that failed to fully eliminate the disease (Corbellini 1998; Webb 2009). As Mary Dobson, Maureen Malowany, and Robert Snow (2000) have shown, prominent malariologists at the 1950 WHO Malaria conference disagreed about the morality of stripping Africans of their acquired immunity, which would make malaria more dangerous if elimination campaigns failed and the disease returned. This came with the burgeoning scientific understanding that Africans living in highly malarial zones could gain a degree of natural protection, but that it was dependent on regular exposure. While campaigns were ongoing, malaria was often reduced to such low levels for such long periods that adults lost their acquired immunity and children failed to gain it. Without proper measures taken as malaria returned (widespread testing, free, prompt, and effective treatment), rebound epidemics could occur and be extremely deadly.
Even seventy years ago, general knowledge of the ill effects that could occur when a malaria campaign ended abruptly wasn’t limited to scientists; African leadership councils and British colonial health officers also spoke out about the potential dangers. In the Pare-Taveta Malaria Scheme that ran in Kenya and Tanganyika in the late 1950s, when the Taveta governing council heard the malaria intervention was coming to an abrupt end, they publicly stated it was “morally wrong” for the researchers to end the project and force them to “suffer the consequences unaided.”Footnote 6 Even though no one on the governing council could describe the precise risks of loss of immunity or the epidemiology of rebound malaria, they knew there would be consequences to ending the spraying, dispensaries, and malaria testing, and they questioned whether it was appropriate that they had to endure and respond to those consequences alone (Graboyes 2014).
The Taveta governing council’s statement points to ethical questions about malaria that have largely been ignored. Some of these relate to questions of truth telling about the risks of rebound and the realities of short-term funding cycles—how common is it that community members understand that a malaria program will only last a few years, or that it could cause malaria to worsen once that particular activity ends? Other questions arise about the obligations of global health agencies and those running malaria efforts to educate the community members about the phenomenon of acquired immunity, so that risks can be better understood and considered. There are also broader questions of the longer-term responsibilities of the agencies carrying out malaria control or elimination programs. Is there any responsibility to the community? Any obligation to monitor for rebound, to help respond if rebound occurs, or to acknowledge their own role in creating those conditions?
There is no good reason why African communities on the receiving end of malaria programs should not be thoroughly informed and empowered to make decisions about the short-, medium-, and long-term risks of participating in malaria interventions. Recognizing the risk of rebound requires an understanding of acquired immunity. Together, these two concepts explain the otherwise inexplicable shift from malaria merely making adults sick to killing them. In our review of the malaria literature, we have not found any research reporting on local understandings of acquired immunity or rebound malaria. In our own research in Zanzibar, in more than eighty interviews and informal conversations, we found that even highly educated Zanzibaris working in the medical field—doctors, nurses, professors, and scientists—were hazy on how rebound differs from any other increase in malaria rates. The very few people we spoke with who recognized the causal relationship between a successful malaria intervention ending and rates spiking were those who worked for malaria control agencies.Footnote 7
The most obvious reason why people in Zanzibar (and perhaps across the continent) remain unaware of acquired immunity and the risks of rebound epidemics is that scientists and global health workers have avoided explaining these concepts. Biomedical scientists have long discussed this risk at conferences, in published papers, in organizational memos, and in private correspondence. Yet, there is no evidence these risks were shared with the communities in which the scientists worked. This conclusion is based on multiple types of data: norms recorded in the archival documents of the WHO, information captured in the grey literature of contemporary global health organizations, and direct observations of how information about malaria activities is shared today. In all of these realms, experts deliberated the risks only with other professionals, limiting the circulation of their concerns. There seems to be an unstated understanding that such information is not to be shared. There are many possible motivations for this, but the most probable explanation is that it is considered irrelevant, inappropriate, too complicated, or too dangerous for communities to know about. It is also true that even in the best of circumstances—when public health officials are committed to sharing information with community members—this can be challenging, as Salum A. Mapua et al. (2021) have shown in relation to the somewhat simpler concept of larviciding. Even in the areas of shared assessment of “risk” from malaria, studies have found wide disparities in how community members understand the risk of infection (Koenker et al. 2013). Acquired immunity and rebound malaria are admittedly complex concepts, though complexity does not invalidate the responsibility of those implementing the programs to explain clearly and to be committed to making sure that community members have an adequate understanding.
While rebound malaria has been clearly documented, and likely under-reported globally, there is not a consensus among malaria experts about its seriousness. In conversations with male malaria experts at the global level, they nearly unanimously dismiss the relevance of historical cases of rebound, saying that the risk of rebound happening today is exceedingly low. In making these arguments, they emphasize that the present is a clear improvement over the past, and that the past is characterized by mistakes that will not be repeated. This allows these experts to acknowledge that rebound epidemics occurred, without having to admit any contemporary relevance. This was the tactic taken by a top CDC official when he was quizzed about past rebound epidemics and how such information was integrated into planning contemporary campaigns. He was emphatic that such information was irrelevant and thus did not factor into the organization’s planning at all.
The issues surrounding rebound malaria are complex and important, and this is an area in which social scientists—by paying attention to local forms of knowledge and understanding, and profound silences in local discourse—may be able to make a particularly large impact. Rebound malaria requires us to pay close attention to “the role of human behavior in creating these new epidemiological trends,” since the tendency has been to focus only on the disease pathogen itself (Brown 1997). As one expert working in the field noted, the phenomenon of rebound malaria “remains relatively absent from discourses of malaria control and utterly neglected in the social science literature.”Footnote 8
Malaria and Technology: Rhetoric, Adaptation, & Alternate Moral Framings
This section considers technologies for malaria interventions and the rhetoric that has surrounded their use over the past century. We specifically consider whether the rhetoric expressed in the Lancet Report is similar to or different from the rhetoric that was used in past elimination campaigns. We find that the narratives surrounding the introduction of new tools over the past century (whether it was Paris Green, DDT, long-lasting insecticide-treated bed nets, rapid diagnostic tests, or the RTS,S malaria vaccine) are strikingly similar. We acknowledge a central paradox in global health efforts past and present: that the rhetoric of progress and success is a key component for mobilizing financial resources. These optimistic narratives are necessary in order to gather support, but they also create unrealistic expectations. The presence of these positivist narratives has allowed for a key assumption to persist: that technologies will be transformative, that they will move seamlessly from the lab to the field, and that African communities will be grateful recipients. Recent work has shown how current malaria decision-making structures make it almost inevitable that we arrive at technical-biomedical solutions (Eckl 2017). This section discusses some of the “new” and “old” strategies used for malaria control, while pointing out historical parallels about the expectations placed on these items to be radically transformative. We will end by pointing out how technologies may carry alternate meanings and moral framings that differ from what scientists expect.
Technologies
One consistent theme over the last century is that many new technologies have had relatively short shelf-lives. The anopheles mosquito and the malaria parasite are frustratingly adaptive and have developed resistance to every intervention developed to date. That has included mosquito resistance to insecticides such as Paris Green, DDT, and pyrethroids. Parasite resistance to the pharmaceuticals sulfadoxine-pyrimethamine (SP) and chloroquine has rendered those drugs largely useless, and even the WHO’s currently recommended treatment regimen—artemisinin combination therapy—has faced documented cases of resistance in Asia and Africa.
However, there is no denying that the pace of scientific discoveries related to malaria is brisk, and two recent findings demonstrate both the excitement and the potential. A research team in Kenya and the United Kingdom discovered a naturally occurring microbe, Microsporidia MB, that blocks malaria transmission, inside mosquito vectors living on the banks of Lake Victoria. The microbe is passed from one generation to the next through eggs. The finding has not yet been translated into a new intervention, but discussed possibilities include a mass release of the microbe into the wild to infect mosquitos, or the mass release of lab-infected male mosquitos that would then infect females through reproduction (Herren et al. 2020). Another recent discovery about mosquito genetics identified that increased expression of just three specific genes is associated with resistance to four different classes of insecticides used in malaria control. The same study also presents a new method to monitor insecticide resistance through molecular testing (Adolfi et al. 2019). More generally, new technologies such as CRISPR have raised the possibility of genetically modified mosquitos and gene drives, an area of great excitement within the scientific community. In this area especially, there remain many uncertainties regarding how, where, and under what conditions African communities would approve of the use of this technology, and more broadly, how communities should be involved (Beisel & Ganle 2019). The ethical questions associated with genetically modified organisms are vast, and continue to be pertinent to sub-Saharan Africa (Nading 2015).
One tool that has been anticipated for decades, but which has only come online in the past few years, is a malaria vaccine. The first partially effective malaria vaccine, RTS,S, is being tested in pilot programs in three African countries, involving approximately 360,000 children per year. In phase III human subjects testing, the vaccine reduced malaria cases by 30 to 55 percent, depending on the age of the child. The current pilot is meant to determine the feasibility of delivering the vaccine as part of already existing vaccination clinics and campaigns, and whether wide-scale use across the continent is appropriate. Recent news of another vaccine, from a phase II trial in Burkina Faso, has returned even more promising results: this vaccine is 77 percent effective at preventing cases of clinical malaria over twelve months post-vaccine (Datoo et al. 2021).
In addition to new discoveries, there have also been innovations in the ways available tools are deployed. For many decades, prophylactic treatment has been given to foreigners visiting Africa, with the intent of preventing malaria infections before they occur. Those preventative therapies were rarely if ever given to Africans living on the continent. That has changed. In areas of high transmission, pregnant women are frequently put on intermittent preventative therapy to prevent malaria in both mother and fetus. Similarly, countries have begun to adopt new technologies to improve the basic strategies of case tracking, monitoring, and surveillance. Expanding internet and wi-fi access has made it possible to utilize mobile phones, tablets, and geolocation to quickly transmit epidemiological data.
Rhetoric
Within the Lancet Report, a large part of the claim for eradication by 2050 hinges on new tools, and thus it is worthwhile to examine how the authors discuss malaria technologies. They are consistently optimistic about the likelihood that there will be dramatic improvements over what is currently available. Early in the report they write, “The tools needed to overcome these challenges…are rolling out, and the research and development pipeline for new technologies has never been stronger.” On the same page they continue, “The research and development pipeline is expected to yield additional new drugs and insecticides, innovative vector control strategies, and more sensitive and precise diagnostics over the coming decade. Further in the future is the radical potential of gene drive technologies” (3). The Lancet authors also draw a clear line between past historical weaknesses and the vast improvements they see in the present. “Technological capacities have advanced beyond recognition compared with 1969… [with] widespread access to modern information and communications technology… New and highly effective tools, a strong product pipeline, five decades of scientific research and evidence generation, and invaluable lessons from previous and current disease eradication efforts are now available to guide decision making” (7).
What the authors neglect to point out is that during the 1960s eradication campaign, scientists also had a highly effective new tool (DDT), a strong product pipeline, and six decades of scientific research related to mosquito control and malaria transmission patterns. The optimistic tone of the Lancet Report, characterized by what some might call hubris or naivety, mirrors that of the 1950s and the WHO’s global attempt. In both cases, there was an unrealistic faith in new technological solutions. In the 1950s, it was expected that DDT would lead to eradication; today, hopes are pinned on vaccines and genetically modified mosquitos. Both then and now, there was an overreliance on still-to-be-developed tools, an unreasonable level of optimism about how fast new tools will become available, an unwavering faith in scalability and portability into vastly different ecological settings, and a baseline assumption that local communities would adopt technologies in ways imagined by scientists.
Malaria is not the only disease targeted for eradication that has fallen victim to unreasonable expectations. In Eradication: Ridding the World of Diseases Forever? (2013), historian Nancy Stepan shows how past eradication campaigns targeting hookworm, yellow fever, yaws, and malaria were all hobbled by the twinned evils of incomplete scientific knowledge and scientific hubris. Stepan argues that the expert communities involved in the eradication efforts were too optimistic about what they could accomplish, falling into patterns of overestimating their own knowledge while assuming their tools were more effective than they realistically were and that new ones would develop faster than they actually did. This rhetoric about new technologies has been the norm for seventy years, and even longer. A 1927 League of Nations report described the history of special antimalarial campaigns as “chiefly a record of exaggerated expectations followed sooner or later by disappointment and abandonment of the work” (Nájera et al. 2011). In all cases, it would be wise to assume that not all developments will be viable over the long term, that many interventions in the pipeline will never come to fruition, and that some otherwise viable tools may not be accepted by the target communities.
Adaptation and Alternate Moral Framings
Conspicuously missing from the Lancet Report is any in-depth discussion of how technologies are received, understood, and adopted on the continent—a blank spot that ignores the actual Africans who use the items, the communities where they will be deployed, and the alternate moral frameworks within which these items are assessed. This glaring omission shows a lack of imagination and a willful ignorance of how past “new” tools were adopted, adapted, or rejected. This is a continuation of colonial-era thinking that imagined Africa as a blank landscape where technology would be gratefully received. As an innovative paper by Ann H. Kelly et al. (2017) shows, even the “sites” of malaria control (community, neighborhood, household) are often overly simplified and do not consider the many intermediary spaces that are neither public nor private, and thus complicate the implementation of an intervention. In the present, the assumption seems to be that Africans will gratefully receive whatever malaria program arrives, following a rough logic of “beggars can’t be choosers.” This idea of passive acceptance of imported items was not true in the colonial period, and certainly is not true today.
Most technologies are not adopted exactly as imagined by scientists or as developed in a laboratory. Individual adaptation is widespread, both in how a particular item is used and by whom. Nets are an interesting example: they are a simple barrier that has been around for more than a century, recently updated with new long-lasting synthetic materials and impregnated with insecticides. But there are myriad potential analyses involving nets. We could consider how the items are valued within the community (Alidina et al. 2016); how they exist as objects with their own political economy and species entanglements; or how they have been constructed as “humanitarian goods” (Beisel 2015). Global health considerations of the net typically focus on what is considered effective use: having those most vulnerable to malaria sleeping under a net at night. Those “most vulnerable” to malaria are often assumed to be children under five—those with the highest mortality rates. Yet even simple nets have been an area of active adaptation, with documented uses as fishing nets, protection for domestic animals, bridal wear, or to protect members of the household who are not young children. These alternate uses—which are often labeled as “improper” or as examples of “non-compliance” in many global health articles—frequently are not indicative of a misunderstanding, but rather represent a different valuation of the net and its potential uses. A study from Tanzania found that when decisions had to be made about which nets to repair, men saw their nets as being a priority because of their breadwinner status (Mboma et al. 2018). This hints at different moral frameworks and calculations of whose health needs protecting: children under five, who are at high risk of mortality if they contract malaria, or conversely the adults, who would likely only become sick, but whose incapacitation as wage earners could imperil the wellbeing of the entire household.
This same misalignment of who is most in need, or who should benefit from the technology, is also apparent in other areas. In community discussions about the malaria vaccine, adults questioned the utility of protecting children without doing the same for adults. The adults presented examples of different ethical frameworks, drawing on the realities of daily life. They pointed out that if adults became ill, there would be no one to care for the children, and that if adults were not protected, they could pass the disease to their children (Bingham et al. 2012). Whereas global health narrowly identifies children under five as high risk and a target for many interventions, information from African communities indicates different assessments of need, different valuations of who should be allowed access to protective technologies, and ways in which preserving the health of an adult could be considered more important than preserving the health of a young child. That is a painful decision born out of unfair scarcity, but it is also a decision rooted in daily realities that must be respected.
While these examples have explored some of the adaptations and alternative moral calculations that accompany net use and malaria vaccines, all technologies come enmeshed with their own set of ethical and moral questions. Radically new developments such as gene drives and genetically modified mosquitos raise particularly profound ethical questions. What does it mean for a species to be a “species”? Is trying to eradicate or permanently modify a living organism ever appropriate or justified? Even if we consider mosquitos in their non-altered forms, communities may come to different answers about whether they are best thought about as “hosts,” “vectors,” or “companions”—and how those classifications change the way we interact with mosquitos (Beisel et al. 2013). Answers to these questions often come wrapped up with questions about African agency and the role of communities in making decisions about if and how new technologies should be tested, how malaria and mosquitos are thought about, and how well risks are explained by scientists and understood by participating communities. Just as there remain profound ethical problems with how information about rebound malaria and the loss of acquired immunity has not been shared with participants, potentially graver ramifications exist in the areas of technological innovation.
Conclusions
By proposing Malaria as a keyword, we mean to show that a broader set of social science knowledge must be integrated into our consideration of contemporary global health problems, and that historical data should shape the global health policies we view as feasible and beneficial. By engaging carefully with the Lancet Report, we find ample evidence that this type of knowledge is not being integrated. When history is invoked, it is often in a glib or functionalist manner. We are not the first to make a case for the inclusion of historical knowledge in the running of malaria interventions. Webb challenged policy makers to pay attention to the “historical epidemiology” of diseases as a way to understand more realistically how mathematical models and scientific concepts would apply in unique cultural and historical contexts (2015). We see this article as additional evidence of the necessity of this task.
The unwillingness to integrate social science knowledge, or to take it seriously, is not a new phenomenon. Although the WHO commissioned social science research on malaria in the 1970s and 1980s, Malaria Program officials never felt the “right” questions were being addressed (Packard & Brown 1997). Moving into the present, we have to be aware of the “cycles of public health amnesia” and the “malarias that have been forgotten” (Kelly & Beisel 2011). Maintaining a historical mindset encourages practitioners of global health to engage more critically with the structures upon which the field was built. As Nora Kenworthy, Lynn Hunt, Johanna Crane, and Iruka Okeke have pointed out, the partnerships upon which global health functions are often inequitable, impermanent, and rooted in the histories of colonialism and racism (Crane 2010; Kenworthy et al. 2018; Okeke 2018).
A hard look at what has already been tried in terms of malaria interventions will allow us to imagine the future more realistically. Many malaria experts and historians of medicine believe that complete eradication is simply not feasible. Global health’s myopic approach allows its proponents to miss the striking similarities between the 1950s eradication attempt and the situation today, to overlook problems that continue to exist, and to overestimate technological solutions that have not fully materialized (and may never). We need to think carefully, proceed slowly, and engage more fully with local communities where malaria is still present in order to better understand how residents view the disease and which interventions they believe are most appropriate. As Marceline Finda et al. (2020) have shown, when local residents are specifically asked what malaria interventions they would like, their preferences take into consideration far more than technical efficiency. Without recognizing the hard realities of failed past interventions, and naming them as such, we run the risk of endangering people globally even while trying to do good.
There are many other important themes related to malaria in Africa that are not touched on here. Climate change, in particular, is of paramount importance and has the potential to make vulnerable communities more vulnerable and to extend malarial zones as warming temperatures and changing rainfall patterns expand mosquito habitats. We also know that malaria is not the only disease afflicting people on the continent, and that there are unfortunate synergies between malaria, tuberculosis, HIV, and now Covid-19. These relationships play out in individual bodies as people suffer from multiple maladies; affect how governments divide limited public health and medical resources; and change how communities judge the gravity of malaria in relation to other threats and uncertainties. We also have not touched on the disagreements between many malaria-endemic communities and funders from the Global North about how urgent, large, and pressing a problem malaria is (Gerrets 2010), or investigated more creative approaches to considering malaria as an interspecies condition (Kelly & Lezaun 2014).
There is great value in conducting global health research with direct application, but such research can foster inattention to broader social and historical conditions, a tendency apparent in the Lancet Report. Humanists and social scientists can serve as a bulwark against the global health impulse to generalize from the particular and to make all findings scalable and portable. As Clare Chandler and Uli Beisel (2017) noted, it is important to attend to “granularity and locality” and to “push beyond simplified, standardized tools for malaria control and measurement.” Africanists can insist on the particularities and peculiarities, showing the value of zooming in on a single place at a single time. It is now time for social scientists to get more involved in these deliberations, to broaden the types of knowledge drawn upon and integrated, and to bring in historical case studies and forms of vernacular knowledge that challenge and complicate past conclusions. We should be thinking hard about the ethical questions raised by interventions, eradication attempts, and failures, and about the different moral worlds in which new technologies will circulate.
Global health policy is being made, important initiatives are being proposed, and billions of dollars are being spent, all in an attempt to address the narrow topic of malaria and the broader topic of improvements to global health. It is worth pondering whether we, as humanists and social scientists with expertise on the African continent, have an obligation to join this conversation, to insert ourselves into the dialogue, and to help integrate forms of knowledge that are currently excluded. We should consider the importance of writing in more accessible ways and publishing beyond our discipline-specific journals, in venues where practitioners in other fields are more likely to encounter our work. This is not a new sentiment. Susanna Hausmann-Muela and Julian Eckl (2015) rightly pointed out that social scientists “have struggled in the past to find an appropriate platform within the malaria community that provides them the opportunity to address researchers from other disciplines, malaria practitioners, and policy makers.” Unfortunately, their observation remains true. A case must be made for the value and importance of historical knowledge, anthropological knowledge, and local forms of knowledge. That case must be made by people like us.
Acknowledgments
We would like to thank the participants in the African Studies Association’s 2019 “ASR Keywords” panels and the ASR’s editor and anonymous peer reviewers for helpful feedback. We would also like to thank Lynn Hunt, Nora Kenworthy, Thomas (Dodie) McDow, Mari Webel, and Laura Fair for reading early drafts. Thanks to the students in MG’s Global Health Research Group and the freshmen in her “Malaria: Science, Ethics, History, Technology” course at the University of Oregon for thoughtful questions. Rachel Conner and Mikala Capage provided superb research assistance. This research was supported by MG’s NSF CAREER grant, award 1844715.
MG conceived of the structure of the article and the topics to be addressed, and drafted the paper. ZA conducted an extensive review of the global health literature, drafted early sections, and presented the paper at ASA. Both authors reviewed the final manuscript. The authors have no competing interests to declare. Both authors have work and research experience with malaria: ZA worked for four years with RTI International on an indoor residual spraying (IRS) project, and MG worked with the global health organization Population Services International on the social marketing of public health products such as bed nets and bed net retreatment kits.