
Chapter 1 - Fads and Fallacies in Science, Medicine, and Psychology

Published online by Cambridge University Press: 16 March 2023

Joel Paris, Emeritus Professor of Psychiatry, McGill University, Canada

Summary

This chapter defines fads and fallacies, and relates them to cognitive errors. It discusses broader problems with determining causality in science, and the reasons for the replication crisis in research. Examples of medical and surgical fads, namely chronic fatigue syndrome, chronic pain, and non-evidence-based surgical procedures, are examined. The chapter also discusses the role of the pharmaceutical industry in medical fallacies. It concludes by explaining how fads can be understood in the context of the challenges of chronicity in medicine.


Defining Fads

Fads are novel ideas that are rapidly adopted and enthusiastically followed—at least for a time. Fads are also based on bad theories. Science moves slowly, and to make progress more certain, proceeds with caution. Yet since fads can appear new and attractive, they initially gain much attention. Most end by disappearing from view, sometimes with barely a trace. The American sociologist Joel Best described these phases as “emerging, surging, and purging” (Best, 2006).

Not every new idea is a fad. There can be real breakthroughs in knowledge, but it takes years to determine how they actually pan out. As a rule, it is best to remain cautious about concepts that spread too rapidly, and to be more welcoming to those that gain support gradually, that prove to be replicable, and that withstand the test of time. In the end, fads are addictive ideas that short-circuit the slow advance of science. They lead to mistaken conclusions that can be embraced incautiously, but do not bear close inspection.

Fallacies and Cognitive Errors

Fallacies are cognitive errors that can be described in research (Kahneman, 2011). Most people assume that however foolish others may be, they themselves are more rational and have good judgment. Thus a lack of critical perspective on the self is the most prevalent of all fallacies. It is related to what has been called the fundamental attribution error: the tendency to attribute other people’s mistakes to their character, but one’s own mistakes to circumstance.

One would like to assume that intelligent clinicians and scientists are less susceptible to fallacies, and that such errors appeal only to uneducated non-professionals. If only that were so! This book will show how stubbornly wrong ideas can be held, even by brilliant people. It also takes time for such ideas to decline and disappear, often only after the death of the influential founders of schools of thought and their disciples. In a witticism attributed to the physicist Max Planck, science advances one funeral at a time.

One of the earliest books on this subject was Charles Mackay’s Extraordinary Popular Delusions and the Madness of Crowds (Mackay, 1841/1980), still in print after almost 200 years. Mackay made fun of faddish ideas, but implicitly assumed that his readers would be immune to them. Over a century later, Martin Gardner’s Fads and Fallacies in the Name of Science (Gardner, 1957) showed how science, or at least popular science, can also be infected by fads. Most of Gardner’s examples were fringe ideas that have since died out, but a few remain current nearly 70 years later, including extra-sensory perception (ESP), homeopathy, food fads, and Scientology.

If Gardner were still with us, he would no doubt want to write about the latest twists in the story. Fads and fallacies remain a problem, even in mainstream science. For example, a well-known psychologist published a paper some years ago claiming to prove the existence of a form of ESP called precognition (Bem, 2011). Attempts at replication of these findings consistently failed (Ritchie, 2020). Yet, as often happens in science, it was more difficult to publish failures to replicate than sensational findings that turn out to be incorrect. This is one reason why we have a “replication crisis” that affects both medical and social sciences. Or as the internist John Ioannidis famously described the matter, most research findings turn out to be false (Ioannidis, 2005). Thus the progress of science generally involves two steps forward and at least one step backward.

Even so, some ideas have a tendency to “go viral.” The evolutionary theorist Richard Dawkins made a useful contribution to the understanding of fads by introducing the term “meme” (Dawkins, 1976), a concept that was later expanded by the psychologist Susan Blackmore (Blackmore, 1999). Dawkins and Blackmore suggest that ideas can spread through society rather like genes, and that they are replicated even more rapidly. The difference is that the mechanism is entirely social and cultural. The concept of a meme goes some way toward explaining how false ideas can spread rapidly.

To explain why people are attracted to fads, we can begin by considering fallacious mechanisms of thought that promote incorrect conclusions. Fads gain adherents because they seem promising, even when based on false reasoning. These errors have been the subject of a large body of scientific research, particularly in the new disciplines of behavioral economics (Ariely, 2008; Thaler, 2015) and cognitive science (Kahneman, 2011).

Fads and fallacies can also lead people to lose a great deal of money, as demonstrated by the regular periods of financial turmoil that have been driven by misjudgments and unjustified optimism (Taleb, 2007). What research most often shows is that many opinions and judgments are based on emotion, not reason, and that arguments are used to justify conclusions that have already been reached. This explains why it can be a waste of time to try to change another person’s mind by arguing—whether in politics, religion, or scientific debate.

Some of the most important cognitive errors derive from preconceived beliefs. The idea that we discover the truth from reason is beloved of philosophers. Yet there is good evidence that people adopt a view of the world based on intuition, not data, and that preconceived ideas shape their perceptions of reality (Haidt, 2012).

Many decades ago, Festinger (1957) introduced the term cognitive dissonance to account for how people explain away discrepancies between their expectations and unwelcome facts. He studied how followers of a failed prophet became even more fanatical when their prediction about the end of the world did not come about. Once they were committed to a point of view, it was hard for them to admit they had been wrong or foolish. Instead, they “doubled down,” holding on to their original opinions more strongly than ever. (They explained the failure of the prophecy as being the result of their intense prayers.)

Strangely, scientists sometimes do the same thing. When presented with contrary evidence, they may find a way to explain why the data prove they were right in the first place, or why contrary data cannot be relied on because of methodological flaws (Ritchie, 2020). Of course, since hardly any study is definitive, one can easily play that game. And if highly trained researchers can sometimes be fanatical, those trained as clinicians are even more likely to be credulous. Practitioners with strong beliefs about the effectiveness of certain treatment methods can be very good at finding ways to explain away contradictory evidence.

The general term used to describe these phenomena is confirmation bias (Oswald and Grosjean, 2004). Once you have already made up your mind, new information is interpreted in the light of preconceived ideas. One might think that this kind of error should not happen in research, where data, at least in principle, should be the final arbiter. As the nineteenth-century biologist Thomas Huxley is thought to have said, “many a beautiful theory is killed by an ugly fact.” Unfortunately, some scientists hold on to favorite theories with religious fervor.

Many researchers will have had the experience of encountering difficulty in publishing results that challenge a current consensus or paradigm. Peer review is a necessary part of science, but can sometimes be used by experts who do not want data contradictory to their own views to be published. Thus when a submitted scientific paper challenges a broadly held consensus (i.e., is “counter-intuitive”), the immediate reaction of a peer reviewer could be negative, an intuition that can easily be backed up by pointing out inevitable shortcomings in research methods. When I was a journal editor, I sometimes made the mistake of asking colleagues with fixed ideas to review papers that they disagreed with, requiring me to search out more balanced opinions. I have seen peer reviewers demonstrate their scientific potency by tearing apart papers that do not support their own ideas (or that simply fail to quote their work).

Similarly, anyone who has ever attended a scientific congress can attest to the way that researchers hold on to favorite ideas for dear life. While we wait for older scientists to be replaced by younger ones, incorrect conclusions can linger on through simple inertia.

Mahoney and DeMonbreun (1977) carried out a striking empirical study of confirmation bias in the peer review of scientific papers. They sent the same submission to 75 expert readers, modifying only what the data showed. The results revealed that reviewers had a much more favorable opinion of studies with findings that confirmed their own theoretical views, and a poorer opinion of those that disconfirmed them. In another provocative study, researchers resubmitted several classic, high-quality research papers from years past under different names (Peters and Ceci, 1982). Only a few journals recognized the deception, and 89% of the submissions were rejected on methodological grounds.

The same process occurs in grant submissions. I have known researchers who spend almost as much time predicting who their reviewers will be as on writing a grant proposal. If a hypothesis seems too controversial, they may withhold the submission. (Some colleagues have told me that they prefer to transfer funds from another grant.)

Kahneman (2011) published a widely read book that described a very broad range of cognitive biases, one of which is the availability heuristic. In that scenario, error results from depending on what comes easily to mind, rather than on what is most probable. Even the most intelligent people tend to be impressed by a lively anecdote or a recent personal experience. But as the witticism goes, “the plural of anecdote is not data.”

This type of cognitive bias tends to afflict clinical practice. For example, practitioners may remember something that happened to a recent patient, but fail to bear in mind that the most striking observations tend to be rare. If you have just seen a series of patients with a particular diagnosis and have given them a certain therapy, you may be tempted to view future patients as having the same condition and requiring the same intervention. I shall show later in this book how many patients receive incorrect diagnoses, such as major depression, bipolar disorder, post-traumatic stress disorder, or attention-deficit hyperactivity disorder (ADHD), leading to incorrect forms of treatment.

The human mind is programmed to find patterns in the world (Bloom, 2004), a phenomenon that Shermer (2012) has described as “patternicity.” Sometimes people see hidden faces in natural landmarks. In medicine, any explanation tends to be better than none. When I was a young teacher of psychiatry, I passed on many of these “just-so stories” to my trainees. Since I believed them myself, my enthusiasm made me a popular teacher. Today, embracing a scientific culture of doubt, I find myself telling students that we have a very limited idea of why our patients fall ill—and we often don’t quite know what to do for them. The price I pay for greater humility is being less popular as a teacher. Colleagues who have an answer for everything are more attractive.

Establishing Cause and Effect

The simplest cognitive error concerns the nature of cause and effect. The basic fallacy is post hoc, ergo propter hoc (i.e., “after this, therefore because of this”). In plain English, correlation does not prove causation. Although everyone understands this principle, it is surprising how often it is flouted. As a journal editor, I (along with my peer reviewers) have often had to remind authors to tone down their conclusions and avoid making causal inferences from simple associations. This happens all too often in research, and is an even more serious problem in practice. It is one of the main reasons for the “replication crisis” in medical and psychological research (Witkowski, 2019).

The same problems arise in clinical work. Consider one of the most common errors in medicine—attributing change in a patient’s condition to the most recent intervention. A physician prescribes a drug, after which the patient rapidly gets better. Is that not good enough? Unfortunately not. There can be other reasons for patient improvement. One is spontaneous remission. Another is the natural course of disease. Still another is a change in life circumstances or the removal of a risk factor. Physicians make this mistake because they want to believe that what they do is of value.

Patients also like to think in this way. If they get better after taking a medicine, they assume cause and effect. If they get worse after taking (or stopping) a medicine, the same assumption is made. Maybe the drug worked, but it is also possible that all you are seeing is a coincidence or a placebo effect. You cannot be sure about causation unless you conduct an experiment with proper controls. That is why clinical trials are so essential for guiding the practice of medicine, and, given the complexity of the questions asked, one should always wait for replications and meta-analyses.
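This reasoning can be made concrete with a small simulation. The sketch below is purely illustrative (the severity scores, threshold, and response rate are invented, not clinical data): if symptoms fluctuate around a stable baseline and patients consult only during a flare, most will seem to improve after any intervention at all, effective or not.

```python
import random

random.seed(42)

def simulate(n_patients=10_000, flare_threshold=65):
    """Each patient has a stable baseline severity plus day-to-day noise.
    They consult (and receive an inert 'drug') only during a flare."""
    treated = improved = 0
    for _ in range(n_patients):
        baseline = random.gauss(50, 10)            # this patient's true mean severity
        at_visit = baseline + random.gauss(0, 10)  # severity on a given day
        if at_visit < flare_threshold:
            continue                               # no flare, no visit, no drug
        treated += 1
        at_followup = baseline + random.gauss(0, 10)  # one month later; the drug did nothing
        if at_followup < at_visit:
            improved += 1
    return improved / treated

print(f"Apparent response rate to an inert treatment: {simulate():.0%}")
# Typically around 85%: 'improvement' produced by regression to the mean alone.
```

A randomized control group exists precisely to separate this statistical artifact from a genuine drug effect.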

Physicians want to believe that clinical problems are treatable, and they tend to stick with the treatments they know. They like to make diagnoses that offer a basis for providing such treatments. In other words, if all you have is a hammer, everything looks like a nail. On a broader level, problems in determining cause and effect lead to a profound misunderstanding of the causes of illness.

The idea of a single cause for a single disease is attractive. That model was based on infectious diseases, in which Koch’s postulates for identifying a specific organism led to the discovery of many effective treatments for infections. In fact, even that example is misleading, as it fails to take into account the resistance factors that determine whether even the most virulent organisms produce disease. Only a few diseases in medicine have a single cause.

The multifactorial nature of illness gives physicians trouble. The human mind is programmed to favor single causes and single effects. But the real world is different. To take multiple factors into account, there has been a vast change in the way that research data are analyzed statistically. When I was an undergraduate, we learned to carry out t-tests and chi-squares. Today, journals may not accept submissions unless the analyses are multivariate. This is because we need to know how much of the variance in a study is captured by each variable. I have come to the conclusion that the world as a whole may be a kind of multiple regression. Everything has many causes, and few effects are easily predictable.
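To illustrate the point, here is a minimal sketch using simulated data; the variable names and effect sizes are invented for the example, not taken from any study. It shows how a multivariate analysis apportions the variance of an outcome across several correlated risk factors, where a univariate analysis would tempt us to crown a single cause:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Three correlated, invented risk factors
genes = rng.normal(size=n)
adversity = 0.5 * genes + rng.normal(size=n)    # correlated with genetic load
stress = 0.3 * adversity + rng.normal(size=n)   # correlated with adversity

# The outcome depends on all three, plus substantial unexplained noise
outcome = 0.4 * genes + 0.4 * adversity + 0.2 * stress + rng.normal(scale=1.5, size=n)

def r_squared(predictors, y):
    """Ordinary least squares fit; returns the share of variance explained."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    return 1 - residuals.var() / y.var()

print(f"R² from adversity alone:      {r_squared([adversity], outcome):.2f}")
print(f"R² from all three predictors: {r_squared([genes, adversity, stress], outcome):.2f}")
# No single factor carries the outcome; each explains only a modest share of
# the variance, which is the statistical sense in which everything has many causes.
```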

Over-simplification of complexity also affects research. When a single risk factor is identified, whether it is biological or psychosocial, it can sometimes be seen as the cause of an illness. That conclusion is usually wrong. A good example concerns attempts to find viral infections in patients with chronic fatigue: such infections may not play any causative role in the syndrome, but may instead be triggers that contribute to the overall burden of risk, secondary effects, or incidental findings (Afari and Buchwald, 2003). This illustrates the danger involved when medicine views every disorder in the light of biomedical reductionism.

Another example, closer to my own area of research, is the idea that if a patient has a history of childhood trauma, that must be the main cause of adult psychopathology. But, as I shall discuss in Chapter 2, this conclusion fails to consider either that traumatic events tend to occur when other risk factors are present, or that most children who are exposed to trauma do not develop a mental disorder as adults.

In summary, disease is not a software glitch that only needs to be tweaked. Complex interactions between multiple risk factors and protective factors require complex forms of treatment.

Fads, Fallacies, and Good Intentions

Many years ago, when I was in training on an inpatient ward, I suggested to one of my teachers that since schizophrenia has a strong genetic basis, psychotherapy for psychosis can have only limited therapeutic value. His reply was “If I accepted your view, I would have to consider the condition hopeless.” But my teacher’s assumption that genes strictly determine outcomes was wrong. Even in the most severe illnesses, genes are rarely the only determinant of outcome; outcomes are better understood within a gene–environment interaction model (Paris, 2022). Moreover, research in epigenetics describes mechanisms by which the environment can switch genes on or off (Szyf et al., 2009).

My teacher’s comment also made me realize how important hope is for clinicians. He was a psychoanalyst who, unlike most of his colleagues, had spent his life treating some of the sickest patients. He sincerely believed that if one had enough skill, anything was possible. But this led him to make mistakes, such as treating two colleagues suffering from bipolar disorder with psychological therapy.

By and large, medical fads arise when practitioners have good intentions but lack knowledge. For reasons of training, idealism, and professional pride, physicians passionately want to help their patients. Although they often succeed, most are less comfortable with chronic illness than with acute disease. Managing chronicity requires patience and an acceptance of limitations.

When I was a medical student, a painting was prominently displayed at the entrance to the faculty building. It showed a heroic physician at the bedside of a child, physically struggling with a skeleton representing death. This is an image that many medical graduates want to believe in. Doctors aim to conquer disease, even as they learn that this is not the only service they can provide to patients. In a famous saying (attributed, like so many other sayings, to Hippocrates), the role of the physician is to cure sometimes, to treat often, but to care always.

Medicine and Science

The idea that medicine should be practiced on scientific principles is relatively new. Over most of history, clinical work was more of an art than a science. In the past, bleeding and purging killed many patients, and people were much better off avoiding physicians than seeking their advice.

Then, in the late nineteenth century, pathology and bacteriology developed methods to confirm many medical diagnoses. But treatment methods in medicine were still not particularly evidence based. Practice was based on clinical experience, or on the consensus of experts. There was no formal concept of evidence-based medicine (EBM) until the late twentieth century. Even today, it is not possible to conduct practice entirely on the basis of empirical data. The evidence we have is rarely conclusive, and quite a few important clinical questions have never been studied.

Scientific medicine is associated with advances that have reduced the burden of disease and increased the human lifespan. But it is still necessary to ensure that empiricism sets the rules for further progress. Medical journals set standards for what is accepted as scientific, and as empirical data became required for deciding which diagnoses are valid and which treatments are effective, the bar has been steadily raised. Peer review is much more critical today. Many (if not most) articles that were published 25 years ago would be rejected now. We expect larger, more representative samples, and journals have statistical consultants to ensure that the most advanced methods of analysis have been used. The top journals pride themselves on a high rejection rate, which can approach 80–90%.

When perusing medical journals of previous generations, I observe a very different standard. Journals in the past were replete with papers whose methods were almost entirely unscientific and unreliable. One still sees these kinds of publications in lower-impact journals—case series without control groups, associations reported as percentages, with few hypotheses and a lack of formal statistical testing. For many years, prominent journals still published reports of single cases, from which one can conclude almost nothing. (Case reports can occasionally be heuristic if they generate hypotheses that stimulate more systematic research, but most journals refuse to publish them, and those that do may put them in the letters column.)

Yet 50 years ago the case series was one of the commonest types of article published in journals. A physician would describe and report outcomes for a number of patients. There would be no control group for comparison, and no effort to show that the samples were in any way representative of larger clinical populations. If the article went on to propose an association between an etiological factor and a disease, one had no way of determining whether the observations were of real significance, could be explained in some other way, or were only chance findings. If the article described a new method of treatment and claimed it was effective, the finding tended not to be replicated, since so many such reports were based on unrepresentative samples, or in fact described placebo effects. The raising of the bar has produced angst among researchers, who now have more papers rejected. But setting a high bar is good for both medical science and clinical practice.

The stakes of publishing misleading data in medical journals are higher than they are in the basic sciences. If physics, chemistry, biology, or academic psychology produce faulty research findings, no one other than the authors ends up being adversely affected. But patients can suffer real harm when medical researchers get things wrong. Sometimes, as in the claim that certain vaccines cause autism (Taylor et al., 1999), public health can be put at risk. To prevent incorrect and dangerous conclusions, peer review must ensure that research methods were properly followed, and that conclusions are justified.

Even under the stringent standards of modern medical journals, papers will be published that eventually prove to have misleading findings. In most cases, the problem is an unrepresentative and/or insufficiently large sample. This may be why, as Ioannidis (2005) has shown, most articles published in medical journals are never replicated. Dr. Ioannidis may have shocked the world of medicine, but he prompted a review of research practices that has led to more doubt as to whether findings from a single scientific paper reflect what one would see in larger and more representative samples.

Failure to replicate results is also the main reason why the media so often get medical science wrong. They are looking for a story, particularly anything that looks like a dramatic breakthrough. When a new finding comes out in a top journal, medical reporters tend to jump on it. We are unlikely to learn that, later on, the results were not replicable. About 30 years ago, one of my colleagues became briefly famous for finding a gene that he claimed strongly determined a major personality trait. The finding was duly written up in Time Magazine, but no one was ever able to replicate it. The media did not consider non-replication to be a story, and never returned to the subject.

For the same reason, we should be cautious about the expert consensus that lies behind formal treatment guidelines, even the most high-quality recommendations. In my opinion, the guidelines published by the National Institute for Health and Care Excellence (NICE) in the UK are more reliable than anything produced in North America. (This could be because British culture has historically, at least until recently, valued empiricism and common sense over hype.) The other major contribution of British medical experts (in collaboration with Canadian ones) has been the series of Cochrane reviews that are considered a gold standard for evaluating treatment. But Cochrane is so rigorous that it typically concludes that not enough research is available to support strong conclusions. Moreover, although NICE guidelines and Cochrane reviews represent the best we can do at any given time, they have to be regularly updated to serve as a guide to practice. Many of them become dated within a decade or so.

An internist once remarked: “the consensus of experts has been a traditional source of all the errors that have been established throughout medical history” (Feinstein, 1988). But imagine the plight of physicians of the past, who had nothing to rely on but the opinions of senior clinicians and unscientific reports in medical journals. Let us go back in history and consider examples of how practice, when not evidence based, went seriously astray.

Medical Fads in Historical Context

When I was a student, working at a summer camp in a remote area of Canada, and reading by kerosene lamp, I encountered a copy of William Osler’s The Principles and Practice of Medicine. This famous textbook, first published in 1892, eventually went into an eighth edition (Osler, 1916), and was still an instructive read in the 1970s. Osler acknowledged how little was then known about illness, or how to treat it. That made him ahead of his time in many ways. He criticized the fads of his era, and he did not approve of a “shotgun” approach in which every symptom was treated with a separate drug. (Unfortunately, that kind of practice remains common today.)

Reading a book originally published before the First World War made me wonder what our own textbooks will look like 50 or 100 years hence. This is an example of how the history of medicine offers a useful perspective on present errors. Future readers may shake their heads, wondering how physicians of the twenty-first century could have so frequently misunderstood and mismanaged disease.

Physicians of the past were foolish to rely on bleeding and purging as mainstays of therapy for so many diseases. But they lived in a climate of opinion in which the most respected members of their profession promoted these practices. American psychiatrists still admire Benjamin Rush, a signer of the Declaration of Independence. But Rush advocated bleeding and purging to the very end of his days, killing many patients along the way (Fruchtman, 2005). This was down to ignorance, but the failure on the part of medicine to challenge such ideas is a chilling historical fact.

In another cognitive error, called “group-think” (Janis, 1972), people adjust their views to those of the peers with whom they work. It is difficult to stand against received wisdom without being branded a renegade. Practitioners generally accept the consensus of their colleagues unless they are familiar with alternative options.

A good example is pharmacology. Even in the past, drug treatment was aggressive, despite the fact that most agents available in the nineteenth century were not effective. An American physician of the time, Oliver Wendell Holmes, is believed to have once said, “If the entire pharmacopeia were thrown into the ocean, it would be much better for mankind and much worse for the fishes.” Only a few drugs from that era have stood the test of time, the most prominent being digitalis and morphine. Even today, physicians may prefer to prescribe ineffective agents rather than stand around helplessly in the face of serious illness.

Once diseases are understood, therapy becomes more rational. Even prior to the era of antibiotics, once the organisms that cause infectious diseases were identified, patients were less likely to receive treatment that could make them worse. Often, once effective therapy became available, fads died out entirely. That was the ultimate reason for the disappearance of bleeding and purging. And most drugs of unproven value disappeared once physicians had access to a modern pharmacopeia.

Diagnostic Fads in Medicine

The symptoms most likely to attract fads are those with which patients most commonly present to physicians, and which prove most intractable. I need not discuss the endless number of diets that have been promoted by physicians over the years. Food fads have never entered mainstream practice, although some physicians have made fortunes recommending them. One could also write volumes on the various methods used to manage chronic insomnia, some of which also lie beyond the boundaries of medicine. But let us narrow our focus to two of the most frequent presentations seen in any physician’s office: unexplained fatigue and unexplained chronic pain. These common but often intractable problems tend to elicit faddish remedies.

Chronic fatigue syndrome (CFS) has been the subject of intensive research for decades, and the diagnosis remains controversial (Holgate et al., 2011). As defined by the Centers for Disease Control and Prevention in the USA, CFS is characterized by persisting or relapsing fatigue lasting at least 6 months that cannot be explained in other ways, and that is associated with at least four other symptoms from a list that includes post-exertional malaise, impaired memory or concentration, unrefreshing sleep, muscle pain, multi-joint pain without redness or swelling, tender cervical or axillary lymph nodes, sore throat, and headache.

Attempts to define CFS in other ways, such as by the term “myalgic encephalopathy” (ME), assume that a specific biological etiology has been found. But these claims have never been proven or replicated. Viral infection need not be the main cause of the syndrome, although it seems to be a trigger. Holgate et al. (2011) concluded that post-viral fatigue ends up being chronic, either because of psychosocial stressors or due to other unknown factors. Thus chronic fatigue is not an infectious disease, as claimed by some patient advocates who want to legitimize their suffering, but an abnormal response of the immune system to infection, leading to a failure to recover.

No evidence-based treatment for CFS has ever been established. This is not surprising, given that the syndrome is quite heterogeneous. For a time, the idea that fatigue might be due to low blood sugar, even in the absence of diabetes, affected practice (Bennion, 1983). Later, Abbey and Garfinkel (1991) concluded that CFS includes cases with unknown organic causes, and others that represent depression, or what nineteenth-century psychiatry called “neurasthenia” (Shorter, 1993), as well as what DSM-5-TR now calls “somatic symptom disorders” (American Psychiatric Association, 2022).

Another patient symptom that physicians struggle with, namely chronic pain, can take many forms. Today fibromyalgia is a common diagnosis in primary care. This syndrome has a specific definition (Chakrabarty and Zoroob, 2007): the presence of widespread pain for a period of at least 3 months, as well as tenderness at 11 or more of 18 specific anatomic sites. However, no lesions can be found at the tender points, and there is no evidence of any etiological factor—or of any consistently effective evidence-based method of treatment. Like chronic fatigue, fibromyalgia overlaps with somatic disorders (Shorter, 1993), with the important difference that it is more widely accepted within the medical profession. Even so, the concept remains controversial. Like many mental disorders, it presents with symptoms but without signs, and is not associated with biological markers or organic changes. It remains possible that fibromyalgia will eventually go down in history as a medical fad.

Surgical Fads

In the history of medicine, surgery developed many heroic and effective interventions for illness. But some of its procedures have been faddish. I shall focus on one that for decades was firmly in the mainstream of clinical practice, namely radical mastectomy.

The history of therapy for breast cancer is a complex story (Lerner, 2001). This fairly common disease can be fatal. It has therefore attracted powerful treatment methods, as well as strong emotions. For over 100 years, the first line of treatment has been surgery. But it was never clear how extensive these procedures should be. Because cancer spreads to lymph nodes and beyond, many surgeons felt that one should go beyond removing the tumor itself.

William Stewart Halsted, a prominent American professor of surgery at Johns Hopkins University, was a pioneer in surgical technique, antisepsis, and effective anesthesia (Nuland, 1988). He developed the technique of radical mastectomy, in which axillary lymph nodes, as well as chest muscles, were also removed. This method was still standard when I was a medical student 60 years ago. But it was already becoming apparent that simple mastectomy, accompanied by radiotherapy and/or chemotherapy, could be equally effective. Long-term follow-up studies, some lasting as long as 25 years, later confirmed this conclusion (Fisher and Wolmark, 2002).

Why was an ineffective and disfiguring surgical procedure popular for so long? One factor was the prestige of Halsted, a pioneer working at one of America’s top medical schools. Despite having an addiction to cocaine and morphine (revealed years after his death), Halsted (1961) convinced the surgical community of the effectiveness of his methods by publishing highly descriptive surgical papers. Yet if control groups had been required, as they would be today, radical mastectomy might never have gained support. Perhaps Halsted’s aggressive approach reflected the “can-do” beliefs of American culture—if a disease is dangerous, you just have to fight harder to beat it.

Was radical mastectomy—a mainstream treatment used for many thousands of patients and a standard approach supported by clinical consensus—a fad? I would say yes, because it persisted into an era when better evidence could have been made available. Despite poor scientific support, radical mastectomy spread rapidly and became wildly popular, but eventually disappeared. Perhaps it was the best that medicine had to offer at the time, but Halsted’s operation could easily have been challenged in his own time, by following up patients who received it and comparing their outcomes with those who received less radical treatment. It was bad judgment to stick with a procedure that was both aggressive and naively ambitious.

One can readily find other examples of surgical fads. One is tonsillectomy for children, considered a routine procedure for many years. As confirmed by a Cochrane review (Burton et al., 2014), it should retain only a marginal role in practice. The concept that chronic untreated infections anywhere in the body can lead to serious consequences also had some impact on psychiatry, and there was even a brief fad for surgical procedures in mental hospitals, based on the idea that psychosis is due to chronic focal infection or autointoxication. This led to the removal of teeth and even portions of the colon (Scull, 2005). Today, such procedures are unthinkable, and any surgeon wanting to carry them out would lose hospital privileges.

Unfortunately, the problem of determining whether a surgical procedure is effective or necessary is hard to resolve. Even now, surgical practice depends much more on clinical experience than on randomized clinical trials (Balch, 2006). It is much more difficult to carry out research on surgical procedures than on drug treatment. But without randomized controlled trials, accompanied by careful and extended follow-up of patients, one cannot be sure that any surgical method is superior to a less invasive alternative, or is not simply a placebo.

The Challenge of Chronicity

Many medical fads have gone into decline, but new ones continue to appear. The reason is that many diseases are incurable, whereas others are chronic and can only be palliated. Moreover, advances in acute treatment mean that most of medical practice involves the management of chronic illness.

When diseases are progressive but remitting, fads are likely to develop. Multiple sclerosis (MS) provides an excellent example. Most patients have remissions, but over time they get gradually worse, and the ultimate outcome tends to be fatal. This course of illness has led to a range of faddish treatments for MS, including one that involved an untested form of vascular surgery (Kolber et al., 2011). None of these therapies has ever been shown to affect the course of the disease. The fluctuating course of MS, with sudden and surprising periods of improvement, can fool physicians into thinking that their interventions are responsible for changes. As we shall see, many mental disorders have a similar course, leading both patients and their physicians to explain any improvement on the basis of the most recent intervention. This is another example of how cognitive biases can affect medical judgment.

Patient Advocacy

In the contemporary world, patient advocacy groups, working through the Internet, aim to raise awareness of chronic diseases and attract funds for research. This is a positive development, and I have been involved with one such group (for borderline personality disorder). We live in an era in which patients are more actively involved in treatment decisions. That, too, is a good thing. Many patients are looking up their diagnoses online, which gives them a chance to understand their illness and its treatment in more detail.

The problem is that the Internet allows uninformed groups of consumers and patients with a strong agenda to “flood” search engines with dubious ideas, sometimes supported by instant “experts” (some of whom are celebrities rather than professionals). This is what happened in the case of the anti-vaccination campaign, based on a claim by the British physician Andrew Wakefield that the measles, mumps, and rubella (MMR) vaccine can cause autism (Goldberg, 2010). The story became an international scandal when, after systematic research, it became clear that no such relationship exists. Unfortunately, several physicians became involved in this malignant fad. Moreover, many parents refused vaccination for their children, which led to outbreaks of several entirely preventable diseases. Thus, like so many other things, advocacy can be double-edged. When conducted with professional support, it offers help to patients and families in need. When linked to a fad, getting information from the Internet can do real harm.

How the Pharmaceutical Industry Promotes Fads

The pharmaceutical industry has played an important role in promoting medical fads (Goldacre, 2013). To understand why, we need to understand the relationship between “Big Pharma” and practitioners. Ideally, they should work in partnership. Both aim to use drugs to treat disease. Positive results benefit patients and please physicians. In such cases, drug companies should make a legitimate and well-deserved profit. However, this rosy scenario is far from current reality. One reason for this is the high margin of profit for drugs.

Moreover, medicine—particularly academic medicine—has suffered from being too close to industry (Angell, 2004). Pharmaceutical corporations, which used to be small, are now large and among the most profitable companies in the world. Although some drugs have benefited large numbers of patients, “Big Pharma” is reluctant to invest billions in developing new agents, preferring to make “copycat” drugs that resemble those already available. Even if only a single atom is changed in the molecule, once government approval is granted a new agent can be marketed to physicians, supported by an aggressive campaign. The industry spends large amounts of money on marketing, much more than on research. It also benefits from new diagnoses that are associated with drug treatment, so that its influence may also extend into diagnosis.

The dynamic behind the relationship between the pharmaceutical industry and physicians depends on the way new drugs make money for companies. Older drugs, particularly when they are out of patent and “go generic,” yield little profit. Thus pharmaceutical representatives have the task of convincing practitioners to adopt the latest agents. They provide physicians with gifts (Wazana, 2000), and they recruit medical opinion leaders to influence prescription practices (Angell, 2004). These academics may receive “consultant fees” to actively promote a product. Moreover, an army of pharmaceutical representatives establishes personal relationships with practitioners, sometimes paying for dinners at fine restaurants.

Yet older drugs are often as good as (or better than) newer agents. Twenty years ago, a large-scale study (ALLHAT Collaborative Research Group, 2002) showed that the classical diuretic chlorthalidone is more effective for hypertension than any of the current favorites (ACE inhibitors or calcium-channel blockers). Similarly, acetylsalicylic acid may be as effective as any current alternative in reducing the risk of developing cardiovascular disease (Gaziano et al., 2006). (In fact, none of the alternatives are that effective.) These findings have been published in top medical journals, but have had little impact on practice. Physicians seem to have an almost irresistible attraction to “the latest thing” in drug therapy.

Some authors defend a close relationship between industry and academic medicine on the grounds that it promotes research into the development of new drugs (Goldberg, 2010). But although effective collaborations do occur, they are relatively rare. Some academic physicians conduct clinical trials run by industry, which designs protocols that are most likely to support its products (Goldacre, 2013). But industry would rather spend its money on drug promotion than on more research. One result is that practicing physicians are constantly encouraged to embrace the latest diagnosis and the latest treatment.

This is not to say that every new idea is wrong, but that medicine is being practiced in a climate that almost inevitably promotes fads. We cannot blame industry for this problem. It is the result of our own unjustified enthusiasm. The next few chapters will show how these problems have infected the theory and practice of psychiatry.
