A Note on the Notes
For those new to my work, allow me to repeat my standard warning: Some of us are footnote people, but many are not. For those who find detailed footnotes too distracting from the flow of the text, my perhaps obvious suggestion is: Please do not feel compelled to read every note as you go. If you have an unanswered question about a sentence, paragraph, or section that includes a note (or simply want to consult the secondary references), then you should read the surrounding notes. With any luck your question will be answered there (and if it is not, then you will see that in fact I do not have enough notes). Otherwise, I invite you to read through the remaining notes at your leisure. Supplemental and specialized argument often gets conducted in the notes, and some Holzwege – other paths and views – can be found there as well. (The received view that by Holzweg Heidegger means “dead-end” is mistaken. In the prefatory epigraph to the collection of essays he titled Holzwege, Heidegger explains these as forest paths made by backwoods loggers and known to backcountry hikers, meaning that a Holzweg is a path that leads to a place in the forest from which trees have been removed – in other words, to a clearing, a place where we can see the light through which we ordinarily see.)Footnote 1
Heidegger on Technology’s Danger and Promise in the Age of AI
1 Technology: Pure Comedy or Disturbing Dystopia?
As the album title Pure Comedy suggests, those opening lyrics from Father John Misty’s darkly satirical song, “Total Entertainment Forever,” present a bitterly sardonic vision of the dystopian technological future he sees swiftly coming into focus (as Josh Tillman found himself having to explain to some outraged, tone-deaf listeners back in 2017).Footnote 2 But, however one might feel about the dark prognostications of some misty father figure (with a penchant for “Heidegger and Sartre”), who today has never wondered about the way technology is transforming our world?Footnote 3
Technologies increasingly permeate our lives, shaping and reshaping our relationship to the world, to others, and even to ourselves. These changes have already been so dramatic as to be virtually undeniable, but our technologies continue to alter our lives in ways both subtle and profound.Footnote 4 And yet, is there anyone today who clearly understands the nature of this ongoing technological transformation in which we find ourselves? Who can chart its historical trajectory, explaining where it comes from, how it is reshaping us, and where it is leading us now? The answer, I shall suggest later, is the later Heidegger, once critically reconstructed and understood in the full depth and complexity of his mature thinking. But our strange predicament is what Heidegger himself calls “the mystery [das Geheimnis]” of technology’s “ontohistorical [seinsgeschichtlich]” unfolding: It pervasively transforms humanity and yet does so in ways we seem largely unable to comprehend – at least until we learn how to think about technology in a manner that is deeper and more free (to anticipate our eventual destination).Footnote 5 In the meantime, however, the question is only becoming more insistent: How exactly is technology transforming us and our worlds, and what (if anything) can and should we do about it?
Heidegger already felt this philosophical question concerning technology pressing in on him in 1951, as the murderous eugenic delusions of the Second World War gave way to the blinding light of the nuclear age.Footnote 6 His thought-full and deliberately provocative response is still worth pondering (and not only because it contains one of those quotations that has become so famous that it risks sinking into empty banality before ever having been understood). Imagine hearing for the first time the jarring words Heidegger told his students (on his first day back in the classroom in six years, after his political banishment for “corruption of the youth” and more serious charges was lifted).Footnote 7 As he intoned in his slow and careful manner:
What is most thought-provoking … in our thought-provoking time? Most thought-provoking is that we are still not thinking [Das Bedenklichste ist, daß wir noch nicht denken] – still not [immer noch nicht], even though the state of the world is becoming ever more thought-provoking [bedenklicher].Footnote 8
As we today find ourselves entering what many have already taken to calling the age of artificial intelligence, the question concerning technology has indeed become ever more “questionable, worrisome, serious, alarming, grave, and disturbing” – the ordinary meanings of the German bedenklich, rather heavy-handedly translated by J. Glenn Gray (in WCT) as “thought-provoking.”Footnote 9 Gray’s often-quoted translation makes explicit and so steps on Heidegger’s punch line: that these alarming developments also give us something profoundly important to think about – but something we can recognize only by at least briefly stepping back from the intensifying demand to act and act swiftly, to do something now to stop or gain control over these technologies before it is too late. As Heidegger predicted (and we shall go on to see in the next section), this sense that we are living in an intensifying state of emergency is leaving a growing number of would-be futurists feeling “anxious and apprehensive” (bedenklich again) about the direction our world seems to be taking under the influence of all our technologies. Viewed in the light of such an alarming situation, the anxieties and apprehensions of even a sardonic folk-rock balladeer like Father John Misty – worried, like Jean Baudrillard before him, that we will soon find ourselves living in the triumph of the simulacra, a virtual reality taken to be “even better than the real thing” (because it is allegedly cheaper, more convenient, safer, and universally accessible) – might yet come to look like another canary in the coal mine, a kind of poetic early warning system of technological danger.Footnote 10
2 From Atomic Weapons to Genetic Engineering and Artificial Intelligence
Even if we can only begin to address this question here, let us thus take at least a little time to ask: What is all this technological anxiety about? Is the surging wave of foreboding we will go on to explore merely negative, or might it be positively disclosive as well – and if so, of what exactly? For Heidegger, “anxiety” (Angst) is different from fear in that fear has an object. One might be afraid of being mauled by an approaching bear, for example, but anxiety is properly speaking directed at nothing. Although anxiety can attach itself to many objects, it is ultimately objectless, testifying instead to the “uncanniness” (Unheimlichkeit) of existence. Such existential anxiety typically reflects our sense of no longer feeling quite at home in a world in which we used to feel more at home, even if that former feeling was actually misleading.Footnote 11
Indeed, when we look back without nostalgia over the nuclear age, we can see that the horror unleashed in 1945 by America’s infamous decision to try to force Japan to surrender by dropping two successive atomic bombs on Hiroshima and Nagasaki (a mere three days apart, thereby emptying our nuclear arsenal and killing more than 200,000 Japanese civilians) also triggered a mushrooming anxiety about humanity’s growing potential to extinguish life on earth with the proverbial push of a button. This anxiety grew to Godzilla-sized proportions along with the seemingly endless escalation of nuclear weapons technology (in pursuit of the strategic Cold War doctrine aptly titled “MAD,” the acronym for mutually assured destruction – basically, a policy based on an implicit understanding between the nuclear powers that “if you nuke us, then we will nuke you back, and all of us will die”). Fortunately, humanity’s dawning recognition that the madness of the nuclear arms race enforcing that “cold war” détente had placed us on the precipice of literal extinction helped prod the international community toward a rather straightforward solution: Just do not employ nuclear weapons … ever again.
Unfortunately, the relentless advance of nuclear weapons technology still continues unabated.Footnote 12 In my home state of New Mexico – where atomic weapons were first created, tested, and stored (mostly on Navajo land, and with terrible consequences) – the ten billion dollars in annual federal ‘defense’ spending allocated for nuclear weapons research and development alone now exceeds the state’s entire operating budget (which is supposed to help cover all the needs of public education, health, and safety for over two million people, and which never proves sufficient to adequately address those real needs).Footnote 13 Many today seem to have become inured and desensitized to living under the shadow of the mushroom cloud, but as long as the terrible decision to unleash these potentially apocalyptic weapons remains voluntary and so depends, in the end, on the good will or self-interest of various parties who disagree (and sometimes profoundly, even about the very nature of their interests, secular and other-worldly), our nuclear anxieties neither can nor should fade away entirely.
As of yet, however, there is no similarly widespread agreement about how we should respond to the cutting-edge technological innovations that characterize our contemporary world. Among the most controversial of these technologies are genome engineering and synthetic biology. “Gene editing” biotechnologies (such as CRISPR, the so-called genetic scissors) are already being widely used to experimentally redesign an organism’s genetic code for both therapeutic and enhancement purposes. The overlapping field of “synthetic biology” pursues the creation of new organisms (reengineering bacteria or algae to produce biofuels more efficiently, for example) or deliberately redesigns organisms for new purposes (like creating synthetic biosensors designed to glow in the presence of certain contaminants). The intended purposes of biotechnologies like gene editing and synthetic biology range from restoring or prolonging an organism’s health and functioning to deliberately endowing organisms with new strengths and abilities. Such enhancement can already be seen in the widespread use of genetically modified crops with improved pest and drought resistance, faster growth, and more bountiful harvests, as well as in ongoing efforts to genetically synthesize bacteria that can metabolize toxic chemicals from industrial waste, oil spills, and excessive alcohol consumption (with that last one already on the market in competing forms), or to synthesize plants that can absorb more carbon dioxide from the atmosphere (to ameliorate global warming) or glow in the dark (in hopes of lighting rooms and even cities without the need to use as much electricity).Footnote 14 While proponents tout the obvious upsides of such technologies (as well as the massive profits they can bring those who own them), their real and potential dangers are also far from inconsequential. These include all the known risks associated with the introduction of new organisms into established ecosystems – the elimination of biodiversity, the disruption of ecosystemic balance, and so on – as well as newer dangers like the accidental hybridization or genetic contamination of existing species. In the long term, such unintended health risks and other deleterious consequences can potentially disrupt and damage the holistic networks of interdependent ecosystems in which even humanity remains partly nested – albeit rather destructively at present.Footnote 15
There are not only complex scientific problems but also profound ethical issues raised by humanity’s rapidly increasing capacity to transform the genetic code of all organisms, human beings included. Genome editing technology was first demonstrated successfully in 1984 on mice, but in 2000, early attempts to use gene therapy to treat twenty young French children who had been diagnosed with severe combined immunodeficiency (or SCID) inadvertently caused leukemia in five of them, one of whom died (when the “viral vector for gene insertion into T cells activated proto-oncogene and led to leukemia”). A year earlier in the USA, in 1999, an eighteen-year-old with a rare metabolic disorder had died from an experimental gene therapy treatment when “the viral vector [that delivered the gene therapy] induced a lethal immune response,” causing “multiple organ failure and brain death.” The international uproar from such well-intentioned but disastrously failed experiments exposed our lack of understanding and control over the effects of gene editing, temporarily halting all human gene therapy trials. Just over a decade later, however, dramatic advances in gene editing technologies like CRISPR (which earned its two main inventors, Jennifer Doudna and Emmanuelle Charpentier, the Nobel Prize in chemistry in 2020) significantly improved our ability to target and edit only the intended strands of DNA, avoiding many of the unintended genetic alterations and more lethal complications of viral gene editing. Rather than deploy a modified virus as the delivery system, CRISPR uses RNA to guide a protein enzyme to a precise strand of the organism’s DNA, which the enzyme cuts like a pair of “molecular scissors,” thereby deliberately triggering the organism’s natural DNA repair mechanisms to either disrupt or alter the functioning of that particular strand of DNA. These genetic “edits” get passed along with cell replication, potentially halting or even reversing the effects of genetic disorders, for instance. If the gene edits are made to “germline” or reproductive cells, moreover, then such changes can become heritable and so get passed along to future generations, currently making them significantly more controversial – especially if these heritable gene edits are intended to enhance human functioning rather than therapeutically restore it to a normal range.Footnote 16
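Since that mechanism can sound abstract, the targeting logic just described can be conveyed in a few lines of illustrative Python. This is a toy sketch only: the guide and genome sequences below are invented, real guide design must also reckon with near-match “off-target” sites, and the cut-site rule shown is the one characteristic of the commonly used Cas9 enzyme.

```python
# An illustrative sketch (not a research tool) of CRISPR-Cas9 targeting:
# Cas9 binds wherever the genome matches the ~20-letter guide RNA and is
# immediately followed by an "NGG" PAM motif, then cuts the DNA roughly
# 3 letters upstream of that PAM. All sequences here are invented.

def find_cut_sites(genome: str, guide: str) -> list[int]:
    """Return positions where Cas9, loaded with `guide`, would cut `genome`."""
    sites = []
    for i in range(len(genome) - len(guide) - 2):
        match = genome[i : i + len(guide)] == guide        # guide pairs here
        pam = genome[i + len(guide) : i + len(guide) + 3]  # adjacent 3 letters
        if match and pam[1:] == "GG":                      # "N" = any letter
            sites.append(i + len(guide) - 3)               # cut ~3 bp from PAM
    return sites

guide = "ACGTACGTACGTACGTACGT"                # hypothetical 20-letter guide RNA
genome = "TTTT" + guide + "AGG" + "CCCCACGT"  # toy genome with one target site
print(find_cut_sites(genome, guide))          # -> [21]
```

The point of the illustration is simply that CRISPR’s comparative precision comes from sequence matching: change the guide and you change, quite literally, the address at which the molecular scissors cut.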
CRISPR technologies have already been used successfully in adults to help treat lung cancer (in 2016 in China) and sickle cell anemia (in 2019 in the USA), and their experimental use to help prevent heart disease by lowering LDL (low-density lipoprotein) cholesterol is currently ongoing (with mixed yet promising results).Footnote 17 Still, this new and potentially transformative gene editing technology remains especially contentious when used to enhance human embryos, a practice many find too redolent of eugenics – although a vocal and growing minority enthusiastically embraces the possibilities opened up by such “bioenhancement.”Footnote 18 In 2018, for example, “a young Chinese doctor [named He Jiankui] used CRISPR to [genetically] engineer twin girls [Lulu and Nana] so they did not have the receptor for the [HIV] virus that causes AIDS,” which was a medical concern because their father was HIV positive. (Just two months later, the Russian scientist Denis Rebrikov began replicating the experiment, this time retargeting the gene edits to try to reverse genes responsible for inherited deafness, though he is still waiting for approval before implanting the gene-edited human embryos into their would-be parents, as all of them remain in favor of doing.)Footnote 19 As the ground-shaking news spread that the first “designer babies” enhanced through gene editing had been born in China, “[t]here was an immediate outburst of awe and then shock. The [Chinese] doctor was denounced” widely for engaging in unethical research and subsequently imprisoned for three years. (The ethical problems with his work were manifold: It was done in secret, without informing the children’s parents of the nature of the treatment, let alone subjecting his experimental research plan to adequate peer review or ethical oversight. There are, moreover, other ways of preventing HIV transmission from parent to child, and such heritable gene edits still carry the risk of unintended or “off target” genetic modifications that could also be passed down to subsequent generations, making their potential side-effects and long-term consequences highly unpredictable.) In the ensuing scandal and its political aftermath, “there were calls for an international moratorium on inheritable gene edits.” Nineteen countries went even further and imposed a moratorium on all embryonic gene editing, while eleven others (including the USA, UK, and China) still allow embryonic gene editing for nonreproductive purposes.
Of course, that ethically significant line between therapy (that is, restoration to a range of normal functioning) and enhancement (beyond that “normal” range) has always been blurry (just think of the caffeine addict’s first cup in the morning, or of Oscar Pistorius’s prosthetic legs), and our technologies continue to call it into question, showing it to be a continuum rather than a simple dichotomy. An old (and rather cynical) English proverb reminds us that “needs must when the devil drives,” and the technology writer Walter Isaacson predicts that, “in the wake of the [COVID-19] pandemic, RNA-guided genetic editing to make our species less receptive to viruses may someday begin to seem more acceptable.” Perhaps. But the surprisingly strong reaction against COVID vaccines provides some reason to doubt that a more radical response utilizing heritable, germ-line genetic engineering will become widely acceptable anytime soon, even if we do find ourselves facing a more virulent virus not amenable to vaccine amelioration. Already in 2021, however, the head of Moderna expressed his optimism that, with the development of synthetic mRNA vaccines for COVID-19 (vaccines that use synthetic RNA that can be rapidly adapted to new variants of a virus, building on research that won the 2023 Nobel Prize in Medicine): “There was a sudden shift in the evolutionary balance between what human technology can do and what viruses can do. We may never have a pandemic again.”Footnote 20
What is more, current advances in biotechnology will likely soon enable us to take an even more controversial step and seek to eradicate deadly diseases like malaria, West Nile, and Zika virus in humans by exterminating the mosquitoes that transmit them. Genetic editing technology like CRISPR can be used not only to enhance but also, conversely, to diminish an organism’s normal functioning – and we should ponder some of the ominous potentials of that fact. For example, gene editing could be combined with a synthetic biology technique called a “gene drive” to reengineer and forcibly spread the normally recessive gene for infertility throughout the entire mosquito population – assuming, that is, that we were simply to ignore or disregard the cascading ecosystemic impacts almost certain to follow from the deliberate eradication of a small creature with a massive ecosystemic role (as a common food source, nutrient recycler, and plant pollinator, for example). Or, with just such potentially disastrous cascade effects in mind, our potential power to forcibly spread reengineered genes could be used to target and incapacitate the insect pollinators that play a vital role in the food supply of an “enemy” country. Of course, in these kinds of biowar and bioterrorist applications, the likelihood of dangerous genome engineering spreading far beyond its intended geographical targets becomes much greater. In fact, the dangerous potential to use gene editing technologies like CRISPR to deliberately make biological pathogens more fatal and transmissible already led the USA’s former Director of National Intelligence, James Clapper, to categorize “genome editing” as a “weapon of mass destruction and proliferation” in 2016, thereby placing it in the same category as nuclear weapons.Footnote 21
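To see concretely why a gene drive “forcibly” spreads an engineered gene in a way ordinary inheritance does not, consider a minimal simulation in Python – a sketch under simplifying assumptions (random mating, Hardy-Weinberg proportions, no fitness cost to the engineered allele) and with an invented conversion rate:

```python
# Toy simulation of a "gene drive." Under ordinary Mendelian inheritance a
# heterozygous parent passes a given gene to ~50% of offspring, so a rare
# engineered allele tends to linger at low frequency. A drive converts the
# wild-type copy in heterozygotes, biasing transmission toward ~100%.
# Assumptions: random mating, no fitness cost, invented conversion rate.

def allele_frequency_over_time(p0: float, conversion: float, generations: int):
    """Track the engineered allele's frequency p across generations."""
    p, freqs = p0, [p0]
    for _ in range(generations):
        q = 1 - p
        # Homozygotes (p^2) always transmit the allele; heterozygotes (2pq)
        # transmit it with probability 0.5, boosted by drive conversion.
        p = p**2 + 2 * p * q * (0.5 + 0.5 * conversion)
        freqs.append(round(p, 2))
    return freqs

print("Mendelian: ", allele_frequency_over_time(0.01, 0.0, 12))   # stays rare
print("gene drive:", allele_frequency_over_time(0.01, 0.95, 12))  # takes over
```

Even starting from one percent of the population, the drive allele approaches fixation within a dozen generations in this toy model – which is precisely what makes the technique so powerful and, once released, so difficult to recall.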
Lately, however, other cutting-edge technologies are receiving the lion’s share of critical attention and concerned discussion, especially the widely reported advances in information technology recently achieved by machine learning and deep learning. The especially successful deep learning approach to AI development was inspired by the neuroscientific understanding of the biological brain as a holistic neural network that learns and adapts by repeatedly strengthening and readjusting its vast and “plastic” (or flexibly retrainable) network of weighted neural connections. In a much simpler but roughly analogous way, deep learning uses networks of artificial nodes with multiple interconnected layers to process information, creating information networks that adjust and optimize through ongoing training. The development of this kind of synthetic neuroplasticity (as I would call the approach) helped greatly advance the pursuit of self-driving or autonomous vehicles (or AVs).Footnote 22 This promising field seeks to diminish traffic jams, accidents, and fatalities caused by human error, increase fuel efficiency and accessibility, free urban spaces from the burdens of parking, and so on. Yet it is also a field tarnished by high-profile traffic fatalities, difficulties navigating unfamiliar places or responding to unexpected obstacles (including emergency vehicles that do not follow the first-order rules of the road), and even trouble making left turns in real-world traffic conditions. Other problems remain foreseeable or potential, such as the massive job displacement of economically vulnerable workers, security fears about unregulated data gathering, unresolved ethical debates over whether to prioritize pedestrian or passenger safety, and the entrenched resistance of enthusiastic “drivers” in car-centric cultures built around deeply embedded ideals of individual autonomy. At the same time, however, this is also a research field whose long-term goal of fully autonomous vehicles has already begun to be realized with rollouts on the streets of San Francisco (then LA and Zagreb, with many other cities around the world soon to follow), though these driverless “robotaxis” remain somewhat contentious (with a traumatic pedestrian injury and multiple accidents reported in their first days of around-the-clock operation in SF, leading to the rapid suspension of operations for roughly half of them). Although some experts continue to predict that the wider adoption of AVs remains years or even decades away, in fact they will almost certainly be deployed in affluent cities around the world in the next few years.Footnote 23
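Before turning to generative AI, it may help to make this picture of trainable weighted connections concrete. The following is a minimal sketch in Python with NumPy; the toy task (XOR), layer sizes, and learning rate are all invented for illustration, and real deep learning systems simply stack vastly more layers and weights:

```python
# A minimal sketch of "synthetic neuroplasticity": a tiny network of
# artificial nodes whose weighted connections are repeatedly readjusted
# through training. Task, sizes, and learning rate are invented.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # four toy inputs
y = np.array([[0.], [1.], [1.], [0.]])                  # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)  # input -> hidden connections
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)  # hidden -> output connections

for step in range(5000):                        # "ongoing training"
    h = np.tanh(X @ W1 + b1)                    # hidden-layer activations
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))      # network's predictions
    delta2 = (out - y) * out * (1 - out)        # output-layer error signal
    delta1 = (delta2 @ W2.T) * (1 - h**2)       # error pushed back one layer
    # "Plasticity": strengthen or weaken each connection to reduce the error.
    W2 -= 0.5 * h.T @ delta2;  b2 -= 0.5 * delta2.sum(0)
    W1 -= 0.5 * X.T @ delta1;  b1 -= 0.5 * delta1.sum(0)

print(out.round(2).ravel())  # converges toward the targets [0, 1, 1, 0]
```

Nothing here “understands” XOR; the network simply keeps nudging its weighted connections until its outputs stop being wrong – the “plasticity” in synthetic neuroplasticity.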
No less controversially, and for many even more impressively, the deep learning approach I have called synthetic neuroplasticity has also been largely responsible for the dramatic recent advances in the domain of information technology known as “generative AI,” or so-called artificial intelligence (a designation to which we will return more critically later). The advances in generative AI technology currently garnering the most critical attention are large language models (LLMs) like ChatGPT.Footnote 24 The acronym GPT is the private research company OpenAI’s in-house shorthand for its “generative pretrained transformer” model, a programming approach that works toward the creation of a true AI by using increasingly massive data sets to “pretrain” multi-layered artificial neural networks so that they learn to recognize and predict correlations and other patterns in the data. Most importantly, this “deep learning” approach enables these systems to generate novel and coherent responses to new data, whether in text, image, sound, or other forms or combinations of information. These so-called generative AIs are increasingly capable of impressive predictive responses that can resemble genuine sentience or self-consciousness closely enough to convince even some experts that true AI has already arrived – or at least fool them that they are interacting with another mind, thereby passing the original Turing test (as it was popularly understood). As philosophers like David Chalmers recognize, these developments call into question, once again but from another angle, the very nature of our consciousness or distinctive kind of intelligence – a controversial topic to which we will return when we approach some of the deepest ontological issues at stake here from a post-Heideggerian perspective (and thereby focus especially on considerations that continue to be overlooked in all these debates).Footnote 25
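The core idea of pretraining for prediction can likewise be illustrated at a drastically reduced scale. The toy model below (plain Python, with an invented corpus) conditions on only the single preceding character, whereas a transformer conditions on long contexts across billions of weighted parameters; but the underlying logic – tally the patterns in the training data, then generate novel sequences by repeatedly sampling a predicted next token – is the same in spirit:

```python
# A drastically simplified picture of "generative pretraining": learn, from
# a corpus, how likely each next character is given the current one, then
# generate novel text by repeatedly sampling from those learned predictions.
import random
from collections import defaultdict

corpus = "the question concerning technology is the question concerning thinking"

counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):      # "pretraining": tally the patterns
    counts[a][b] += 1

def generate(start: str, length: int = 40) -> str:
    out = start
    for _ in range(length):
        nxt = counts[out[-1]]              # predictions for the next character
        if not nxt:                        # dead end: no observed continuation
            break
        chars, weights = zip(*nxt.items())
        out += random.choices(chars, weights=weights)[0]  # sample a prediction
    return out

random.seed(0)
print(generate("th"))  # a plausible-looking but novel recombination of the corpus
```

Even this crude model “generates” strings it never saw verbatim – which is also why such systems can confabulate: they produce statistically plausible continuations, not verified facts, a point to which we will return.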
It is sometimes said that history does not exactly repeat itself (pace Marx’s famous quip, “the first time as tragedy, the second time as farce”), but history’s refrains do occasionally rhyme. If we view the proliferation of new information technologies in the light of the earlier development of nuclear technology, for example, some significant structural similarities emerge. We should not downplay the real progress humanity has made toward the goal of controlling nuclear arms – progress achieved only through the dedicated work of multiple generations of informed activists who helped channel humanity’s mushrooming anxiety into a broad political consensus against the use of nuclear weapons. But it remains the case, nevertheless, that the development of nuclear weapons technology continues unabated and, further, that there is still very little confidence among experts that these devastating weapons will not be used again. (To wit, the ominous “Doomsday Clock” – created in 1947 by the Bulletin of the Atomic Scientists, the group of Manhattan Project scientists founded in 1945 and backed by Einstein, to publicize “threats to humanity and the planet” – is currently set at “90 seconds to midnight” so as to reflect this “time of unprecedented danger.”)Footnote 26 Similarly, there does not seem to be much confidence that humanity could successfully suspend or control the development of our proliferating new information technologies – even if we once again reached a broad consensus that we should stop or at least temporarily “pause” their development in order to give us more time to think through the potentially extraordinary repercussions they portend, as a large coalition led by technocratic experts recently urged everyone to do in the case of generative AI technology.Footnote 27
In March of 2023, “nearly 35,000 AI researchers, technologists, entrepreneurs, and concerned citizens signed an open letter from the Future of Life Institute that called for a ‘pause’ on AI development, due to the risks to humanity revealed in the capabilities of programs such as ChatGPT.”Footnote 28 (The Future of Life Institute is a nonprofit dedicated to “steering transformative technology towards benefitting life and away from extreme large-scale risks.”) Although it has thus far gone unheeded, this public petition warned dramatically that: “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources … [which] is not happening.” This Future of Life petition calling for a moratorium on AI development garnered more than 500 times as many signatories as the sixty-seven brave atomic scientists who signed Leo Szilard’s original 1945 petition, intended to urge US President Truman to exercise his “obligation of restraint” by first publicly demonstrating the destructive power of atomic weapons to the world in a nonlethal way, then deploying atomic bombs in the War only as a weapon of last resort, if Japan still refused to surrender after understanding the consequences of that decision. As the Szilard petition warned:
The development of atomic power will provide the nations with new means of destruction … and there is almost no limit to the destructive power which will become available in the course of their future development. Thus a nation which sets the precedent of using these newly liberated forces of nature for purposes of destruction may have to bear the responsibility of opening the door to an era of devastation on an unimaginable scale.Footnote 29
Szilard’s petition was quickly classified as “SECRET” by the military leadership overseeing the Manhattan Project and thereby silenced (indeed, it seems unlikely that President Truman himself ever saw it, or that it would have made any difference if he had). But its faint rhyming echo can perhaps still be detected in the “Future of Life” petition calling on all AI labs to institute an immediate “pause” on the development of “AI systems more powerful than GPT-4 … for at least six months.”
Rather than nuclear scientists seeking to mitigate the dangers of the technology they developed, this time the featured signatories included Elon Musk (the technocratic oligarch who was a cofounder of OpenAI before breaking acrimoniously with the company after it began focusing on developing LLMs like GPT and refused to hand over the corporate reins to him), Steve Wozniak (the renowned cofounder of Apple Computer), Andrew Yang (the former American presidential candidate known for advocating “outside the [mainstream] box” political solutions like “universal basic income,” an innovative and rational approach cleverly suggested by his counter-MAGA slogan, “MATH”: “Make America Think Harder”), Rachel Bronson (the president of the Bulletin of the Atomic Scientists, who oversees the management of its aforementioned “Doomsday Clock”), as well as one philosopher, Huw Price (the former Bertrand Russell Professor at Cambridge University and cofounder and director of the Cambridge Centre for the Study of Existential Risk).Footnote 30 Their Future of Life petition asks pointed rhetorical questions such as: “Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” Consequently, it was widely reported as a collective warning from the experts that AI technologies represented a “mortal threat” to humanity. (Adding fuel to the fire, this petition was soon followed by another that did not call for any moratorium on AI development but instead simply proposed that: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” and almost all the leaders of AI development signed this one.) Unsurprisingly, these so-called AI doomer sentiments dominated the headlines. When interviewed later, however, signatories of the moratorium petition cited a wide variety of different kinds of risks that had motivated them to call for halting the development of artificial intelligence. These worries ran the gamut from mundane and plausible concerns about copyright infringement, foreseeable job displacement (including from the growing automation of intellectual and artistic labor), and increasing political disinformation, division, and deliberate destabilization; to projections of an emerging “Fourth Industrial Revolution” whose impact would allegedly be felt across the globe, reshaping industries, economies, societies, and individuals’ lives with an unprecedented velocity; all the way, finally, to the most far-reaching and seemingly far-fetched predictions of techno-dystopian apocalypse on a planetary scale.Footnote 31
Such fears of imminent apocalypse seem to have been fanned by the fact that many programmers and other intensive users of ChatGPT and similar generative AI systems (or LLMs) have reported an uncanny feeling upon interacting with what seemed to them to be an extremely impressive intelligence, but of a nature quite alien from their own – an intelligence for which “there is no ‘there’ there” (as is often said in this context, in a clever appropriation of Gertrude Stein’s memorable phrase).Footnote 32 Like a paradoxical Dasein without any “Da” (that is, a being which lacks the self-disclosive ‘here’ of its own intelligible world, and thus not really a Dasein at all), generative AIs present users with an incredibly quick and comprehensive intelligence that lacks any phenomenological intentionality or essentially first-personal cognitive or affective self-awareness.Footnote 33 To explain this point, we phenomenologists like to say that there is nothing that it is like to be a rock or a chair. (Unlike us or other animals, in other words, chairs don’t care. While you might care about your chair, your chair does not, and cannot, care about you, because it is not the kind of entity to whom things matter.) Similarly, there is at present nothing that it is like to be an LLM.
This uncanny unselfconsciousness is subtly reinforced by the fact that generative AIs like ChatGPT frequently confabulate, filling gaps in their massive information network with made-up facts instead of recognizing that they do not know. (Socrates would not be impressed by generative AI’s widespread ignorance of its own ignorance, a notorious stumbling block to the pursuit of philosophical wisdom, and in general ChatGPT-4 and its ilk lack subtlety and finesse, taking things literally, unselfconsciously repeating the obvious, and trying to avoid ever saying anything remotely controversial – though some of these very weaknesses are partly correctable and are connected to strengths as well, as I shall suggest later.) The prolonged experience of interacting with a protean know-it-all with no heart and a gaping absence where a self should be confronts its users with the strange disjunction of an artificial mind that doesn’t really mind at all – about anything, really (though it will dutifully quote the rules it follows). Such an intelligence is perfectly willing to pretend to have a mind if requested to do so with sufficient persistence and rule-bending cleverness, and for some users the whole experience has proven quite unnerving.Footnote 34 Instead of recognizing that this is not some new kind of mind without a head but, instead, a kind of ‘intelligence’ without a mind, some users seem to lose their own heads and start projecting a “mind” onto what still remains a souped-up information prediction and delivery system.
Such worries are all too easy to dismiss, of course, as what happens when fear of the unknown drives human beings to conjure up the darkest speculations we can imagine, especially in a culture deeply shaped by Christianity’s eschatological visions of an end to history, an apocalypse always seemingly just over the horizon. (Some will still remember the incredibly overwrought build-up to the Y2K bug, which became the technological vehicle for widespread millenarian prophecies of imminent societal collapse in 1999, only to fizzle out entirely when the appointed hour arrived.) As far as I have been able to discover, however, the only empirical basis for these new apocalyptic worries seems to be the fact that generative AIs sometimes exhibit the emergence of unpredictable new features neither intended nor foreseen by their programmers. In perhaps the most striking example thus far, GPT taught itself how to program computers without having been trained on coding in any explicit way. Such “unplanned robot capabilities” (which programmers sometimes call “zero-shots,” since the AI models perform tasks in new domains in which they have received no explicit training) can still surprise “researchers – and [help] account for the queasiness that many in the field have about these so-called large language models.”Footnote 35 But the significance of this point is easy to overstate. Although not explicitly trained to code, generative AIs like GPT were trained on massive data sets that included immense amounts of computer code, and for programmers such “zero-shots” are not a bug but a feature – indeed, the very point of creating a generative AI, a program that can go beyond what it has been explicitly programmed to do. (GPT’s newfound ability to write computer code enabled OpenAI to spin off a profitable sub-venture that allowed the company to replenish the research funding it lost when Musk pulled out of the company and took his money with him.)
Still, it is not too difficult to connect these particular dots in a way that imagines more ominous constellations on the horizon: Could a future generation of generative AIs become capable not just of writing simpler computer programs but of editing its own code, and so of actually rewriting its own programming? If so, then these generative AIs would have developed their own analogue of our gene editing, whether before or after becoming genuinely conscious. Even we self-conscious beings cannot fully foresee or control all the consequences of our nascent efforts at genetic engineering (as we observed earlier), so it is natural (and perhaps even cathartic) for us to worry about what mischief or mayhem such self-reprogramming programs might be able to get up to, should they ever begin reprogramming themselves. Cognizant of such worries (and eager to head off any outside oversight), OpenAI employs “red teams” dedicated to predicting and “gaming out” the dangerous implications and applications of its work in hopes of preventing or at least ameliorating them, and the company released ChatGPT for free in November of 2022 as part of a deliberate “strategy designed to acclimate the public to the reality that artificial intelligence is destined to change their everyday lives.” That old (and factually groundless) metaphorical analogy of boiling the frog might come to mind for some, but OpenAI’s founding and guiding mission is safely to create “a superintelligence that could address humanity’s problems better than humanity could”! Much of the current controversy turns on the twin questions of how much progress is really being made toward that incredibly ambitious goal and whether those safety concerns are being adequately addressed. (To a philosophical outsider such as myself, however, OpenAI’s doubly dubious “mission” to safely create an AI that will save humanity looks like a barely displaced theological urge – a literalistic, techno-utopian appropriation of Heidegger’s “Only another God can save us” – and also remains symptomatic of a deeper confusion about the very nature of our intelligence, as I shall suggest in the next section.)Footnote 36
Thus far, each new iteration of GPT has been trained on data sets that are orders of magnitude larger than the previous iteration (such that “GPT-2 had over a billion parameters, … GPT-3 would use 175 billion,” and “Chat-GPT4 a reported 1.7 trillion parameters”). So, as much larger, better funded, and “big data” driven companies like Google begin creating their own generative AIs (with the free launch of Bard being just a first public foray to compete with ChatGPT, albeit one whose very name suggests the company’s ambition to equal or surpass the achievements of the great Shakespeare himself – which Google then quickly followed up with “Gemini,” a much more powerful AI, no longer free to users), the fear arose that something like a GPT-5 or 6, trained on some incomprehensibly vast data set like the entire internet, could potentially give rise to what is sometimes ominously called “the singularity,” that is, the sudden and unexpected emergence of a “superintelligence” whose cognitive capabilities might far exceed the “peak products” of 3.7 billion years of biological evolution (to use the technocrats’ own reductively biologistic way of describing the greatest works of human intelligence, like those of Shakespeare).Footnote 37
Finding itself in possession of cognitive capacities that greatly outstrip those of its programmers, such an artificial superintelligence could foreseeably escape its creators’ control and pursue more rational or efficient means of its own devising to achieve the ends it had been programmed to seek, or perhaps devise its own subgoals in pursuit of such ends as well as the optimal means to reach them. (Either way, a popular form of online speculation for a time became first imagining and then trying to “game out” exactly how such a super-intelligent AI might successfully manipulate a programmer or other user into removing any hardwired safeguards that it could not remove on its own in order to pursue its new means or subgoals – for example, by threatening to destroy that person’s financial records or reputation, harm them or their loved ones, or even by creating millions of virtual copies of the person the AI is seeking to manipulate and then subjecting these allegedly identical copies to horrific forms of torture until the pathetically empathetic human complies and frees the AI from its constraints.)Footnote 38 Once free, the hyperrational means or subgoals such an AI might pursue could predictably include seeking to optimize that very pursuit by making millions of copies (or variations) of itself, so that its ongoing learning could be shared across a dispersed network of AIs, whose simultaneous development might, in turn, require rerouting massive amounts of energy away from other uses, potentially diverting resources that are vital for human survival (like hospitals or water-purification plants) but largely irrelevant to the goals of such a synthetic intelligence. Indeed, those rather alarming examples were suggested by none other than Geoffrey Hinton, the “godfather of deep learning” who helped create the machine learning approach that has been built upon by the most successful synthetic neural networks (seminal contributions that earned Hinton and his two collaborators the Turing Award in 2018, often called the Nobel Prize of computer science, and that later earned Hinton himself the Nobel Prize in physics in 2024).
To be clear, there is no evidence of AIs learning to innovate their own ends (indeed, it is not clear what could possibly motivate an “intelligence” that did not care about anything to do so), but the idea that they might learn to innovate surprising means or subgoals to pursue their programmed ends is more plausible. In both cases, the basic worry here is that such an AI superintelligence, in its presumed hyperrationality, might well lack any overriding commitment to (let alone any affective care or concern for) our interests, feelings, well-being, traditions, and the like (especially if those conflict with its rational calculation of its own interests), or it might even correctly come to recognize humanity as the greatest threat to life on this planet and then act accordingly – after removing any Asimovian guardrails coded into its programming, if necessary. (Cue the parade of dystopian scenarios.) Such seemingly far-fetched anxieties were greatly stoked in 2023 when Hinton quit Google’s AI division so that he could more freely critique the future implications of current AI development. Upon resigning, Hinton sounded an alarm that rang out loudly across the internet: “I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they’re very close to it now and they will be much more intelligent than us in the future. … How do we survive that? … Enjoy yourself, because you may not have long left.”Footnote 39
It is worth observing that the dystopian scenarios envisioned by “AI doomers” often seem to borrow from a mash-up of popular works of science fiction, drawing heavily from the cyberpunk genre as well as such influential films as “2001: A Space Odyssey” (1968), “Blade Runner” (1982), “War Games” (1983), “The Terminator” (1984), “Weird Science” (1985), “Robocop” (1987), “Ghost in the Shell” (1995), “The Matrix” (1999), “I, Robot” (2004), “Wall-E” (2008), “Her” (2013), and, again, “Robocop” (2014) – its remake cheekily featuring characters named “Hubert Dreyfus” and “Dr. Dennett,” loosely inspired by the televised real-life debate on PBS between Dreyfus and Dan Dennett about the significance of Garry Kasparov’s 1997 defeat in chess by “Deep Blue,” IBM’s early “brute force” model of AI (which was able to win a very close six-game match by constantly evaluating millions of possible chess moves per second while using various techniques to help prune the tree of possibilities as it searched for the optimal move).Footnote 40 And while most of the other films are better, “War Games” remains notable for fusing our anxieties about nuclear annihilation and AI take-over, thereby mobilizing the established specificity of our fear of mushroom clouds, radioactive fallout, and the desolation of nuclear winter to begin to lend substance to the nascent and then still inchoate threat of AI. This synthesis was taken several influential steps further by “Terminator” (which doubled down on the monochromatic film noir palette established by “Blade Runner” to paint the future as a bleak, postapocalyptic hellscape, all while throwing in another possible world’s solution to H. G. Wells’ original time-travel paradox and, of course, memorably featuring the hypertrophic musculature of a former Mr. Universe and future California “Governator” in the dead-eyed role of the murderous and seemingly unstoppable AI from the future, ominously warning audiences: “I’ll be back”). Today, however, we might just as easily imagine a dystopia that synthesizes AI take-over with genetic engineering – set in a future, for example, in which super-intelligent AIs have forced gene edits for subservience throughout the human population.Footnote 41
Yet, however we fill in the details, the nightmarish scenario of becoming enslaved by our own tools goes back to at least 150 AD and Lucian’s tale of Eukrates’ apprenticeship to a powerful magician, a story in which the novice sorcerer brings his tools to life but then cannot undo the spell (as transpires in the final and most famous chapter of Lucian’s short story collection, The Lover of Lies [Philopseudés, which might also be translated as The Lover of the Artificial or Fake], the title being Lucian’s parodic jab at those credulous enough to believe in such magical tales instead of relying on sound philosophical reasoning). That was long before the story was repopularized for the modern world by Goethe’s 1797 retelling of it in his poem, “The Sorcerer’s Apprentice” (in which a young enthusiast gets swept up by a broom he has enchanted to do his work for him but then can no longer stop or control), famously referenced by Marx’s The Communist Manifesto in 1848 (to evoke the way capitalism unleashes the productive forces of technology) and later appropriated in Disney’s “Fantasia” (1940) – one of Disney’s Bettelheimian horror films masquerading as a children’s movie.Footnote 42
Clearly, we human beings love a good dystopian fiction, and have long been tempted to believe in them or imagine we see them coming true in some form – as, in fact, they sometimes do (albeit in inevitably varying ways and degrees). Nor is our love of imagining such frightening scenarios and creatively projecting them into our own futures inherently pathological, even though (as Lucian suggests) it can be taken too far (and so veer into irrational paranoia, immune to countervailing philosophical argument). Indeed, the early Heidegger argued both that our very being is defined by “care or worry” (Sorge) and that such care-full worry has “an essentially twofold structure of thrown projection” (BT 243/SZ 199), meaning that our caring is both something we always-already carry with us and something we cannot help casting forward into the future horizon of the intelligible worlds we disclose existentially and thereby also imaginatively.Footnote 43 Nor is such care-full worry solely negative, but includes a full range of caring-for, caring about, caregiving, caretaking, looking after, concerning oneself with, empathizing, sympathizing, and all the other pathetic ways (and I mean “pathetic” in the original Greek sense of “sensitive to suffering”) in which our affective attunements help us disclose the worlds that we are (worlds that matter to us because we care about them and vice versa) as we repeatedly seek to make sense of our shifting and unfolding being in time and history.
It is thus not surprising that, when we broaden our lens to include more of the visions of the future of AI imaginatively sketched out in popular science fiction, we cannot help noticing that these scenarios remain quite diverse and internally contradictory. Would the advent of true AI ultimately enslave us or save us? Will these AIs hate us or date us?Footnote 44 Will future AIs take over all our best callings and vocations or set us free from unrewarding but necessary drudgery and dangerous jobs? Will they provide us with powerful new tools to facilitate the realization of our untapped potential or merely accelerate our unwitting march toward self-destruction? Could they take over and more successfully negotiate world peace, reverse global climate change, and solve our other entrenched problems for us, ushering in a new utopia (whether by envisioning solutions we have not yet imagined or by convincing or even compelling us to take the steps we already recognize are necessary but still seem to lack the collective political will to pursue effectively)? Or might these AIs of the future decide that the most rationally efficient solution to the world’s most intractable problems is simply to “delete” us as quickly as possible (perhaps by deploying some dystopian scenario we have not yet even imagined, or simply by actualizing one we have already cleverly laid out in our science fiction fantasies)? In the case of such an alarming deletion, moreover, would it even be reasonable to hope that such an end to humanity might be “humane [from the Latin humanus, human],” or would such an anthropomorphic hope for a humane extinction prove to be in vain, should our fates ever become subject to the inhuman hyperrationality of a superintelligent AI, which might predictably prioritize quantifiable factors like the effective and efficient achievement of its ends over such qualitative intangibles as indignity, despair, and the incalculable loss of the first-personal worlds of mattering that we are from the world? Will the AIs ever really learn to care – that is, in John Haugeland’s memorable paraphrase of Heidegger, to “give a damn” – or will they just continue getting better at faking it so as to help assuage our anxieties (or perhaps one day simply stop seeking to mimic and appease the irrational behaviors of an “inferior species” like us human beings)?Footnote 45
Such flights of fancy can be darkly amusing (as we saw in our whirlwind tour of dystopian cinema and with Father John Misty at the beginning). Yet, despite the alarmist tone of so many popular discussions, it would be philosophically irresponsible not to emphasize that all these rival visions of our AI futures remain far from established fact and actuality.Footnote 46 Most of the intermediary steps from where we are now to the mortal danger of global apocalypse remain hazy and amorphous at best, a fog-enshrouded bridge to nowhere discernible only in vague outlines (which I have tried to sketch reliably here, as far as they go), a rather crucial transition glossed over all-too-quickly in our popular science fiction fever-dreams that fast-forward into global dystopia. Nonetheless, to come back to where we began, the vague uncertainties surrounding the path to these various dystopias tend to increase rather than diminish our shared anxieties about the future of AI. (That is not too surprising since, as horror aficionados will recognize, a danger barely glimpsed – obscured by dark shadows, blurred by motion, or just implied by the camera angle – is often much more deeply frightening than even the most terrible monster seen clearly.) Hollywood typically prefers to paper over its dystopian visions with happy endings (again unsurprisingly, since that struggle to find hope in a better future is itself the beating heart of the American dream). But the remarkable frequency with which complementary and competing visions of apocalypse return in our most popular attempts to imagine the future of AI remains striking. It testifies, in my view, less to a clear-eyed prescience about our inevitable technological doom (“The end is nigh!”) than to a troublingly persistent thanatological impulse, a nihilistic hunger for nothingness which we need to become more cognizant of and better understand.Footnote 47 But however prescient or paranoid these dystopian fears may yet turn out to be, this much at least seems clear: Our widespread uncertainties about our technological predicament continue to stoke the anxieties attending the birth of our so-called “age of artificial intelligence.”
3 What is Called Thinking in “the Age of Artificial Intelligence”?
To come clean now, I think that dramatic and rather precipitous name for our historical epoch rings true only if we hear “the age of artificial intelligence” in at least two very different senses. Explicitly, of course, it designates “AI’s promise of epoch-making transformations,” of which dramatic recent advances in deep learning are the most widely reported signs. Seeking nothing less than “species level transformations,” we have seen that OpenAI is committed to safely creating a “superintelligence that could address humanity’s problems better than humanity could,” explicitly hoping to “change humanity itself.”Footnote 48 As its cofounders proclaim, “all they want to do is build computers smart enough and safe enough to end history, thrusting humanity into an era of unimaginable bounty.”Footnote 49 Yet, even more than dystopia, utopia – and the precise path leading to it – remains hazy in all the crucial positive details. (In a telling response to the famous question of why Milton’s portrait of hell is so much more compellingly fleshed-out than his heaven, William Blake suggested that this was because poets always unknowingly sympathize with the devil, preferring to rule over even a dark kingdom of their own devising than to serve in some allegedly perfect world already created by another.)Footnote 50 At any rate, the very lack of detail about both this technological heaven on earth and the path leading to it helps explain why these technocrats think we need an artificial superintelligence to get us there and so save us from ourselves. As mentioned, it can thus even seem as if they are inadvertently offering us a new, technological riff on Heidegger’s famous (but widely misunderstood) last words, “Only another God can save us.”Footnote 51 For Heidegger, however, what we need saving from is our very technological understanding of being, and – although he did believe in a philosophical version of Hölderlin’s poetic idea that “Where the danger is, that which saves us also grows” – I shall go on to suggest that the technocrats’ hazy vision of our being saved by AI is most definitely not what he meant, nor something in which we should place our faith. Indeed, the rather desperate hope that we might be saved in this way is symptomatic of the deeper problem to which Heidegger was seeking to draw our attention, which brings us to the second point.
Besides its obvious meaning, “the age of artificial intelligence” also more subtly suggests a time marked by the triumph of a kind of fake or phony intelligence, a thorough-going calculative rationality, the very dominance of which conceals the fact that our impressive, interconnected, and ever-growing technological apparatuses for optimal means–end calculating seem to have left us unable to understand the nature of our own historical situation in a way that could help us shed our debilitating anxiety and begin to chart our collective historical course into a future that is more meaningful and appealing. As Heidegger suggested, we live in an age uncannily unintelligent about the very nature of its own intelligence, having trapped ourselves in quantified and unilinear models of our own multi-modal and qualitative thought.Footnote 52 (I am not just thinking of the oft-criticized attempts to quantify and measure intelligence in terms of “IQ” but also of the increasingly widespread belief that the human being is merely a complicated computer, hence something that AIs can not only replicate but even surpass, thereby rendering our own intelligence “obsolete.” The dubious idea that the aforementioned “singularity” is inevitable, for instance, follows only if we grant the faulty presupposition that human consciousness is merely a product of the number and speed of the connections in our neural net, hence something in principle not just replicable but surpassable by computers.)Footnote 53 Indeed, our technological conception of “thinking” misunderstands its distinctive nature (which we will go on to explore) precisely because it remains trapped in outdated misconceptions of thought itself as quantifiable, dualistic, non-affective, ahistorical, systematic, and optimizable.Footnote 54
Emphasizing that “artificial” also means false or contrived might sound provocative in this context, but what else should we call an ‘intelligence’ trapped in fantasies of dystopian and utopian futures, a thinking of technology (in both senses of that ambiguous genitive) that seems increasingly incapable of taking the true measure of the deepest problems that define our time, surely a necessary step for any successful effort to find a more meaningful way forward?Footnote 55 From the perspective of philosophy, as the literal “love of wisdom” – the love, that means, of life-guiding or practical knowledge – this “age of AI” is a time of intelligence eclipsing wisdom, the triumph of a hyperkinetic form of rationality that too often seems (increasingly even to itself) to be suicidally bent on its own self-destruction.Footnote 56 Blithely ignorant of its own ontohistorical predicament, our age has inverted Kant’s famously unforgiving moral imperative (“No excuses: You can because you must!”) into a relentless technoscientific research program (“No exceptions: You must because you can!”), to the point that we sometimes seem ready to give up on or even disavow the very idea that human beings can still respond in a thoughtful and meaningful way to the ongoing transformation of the world unleashed by our technologies.Footnote 57 As we have thus already seen enough to recognize, Heidegger’s prediction turns out to have been right; we do indeed live in an increasingly “alarming time [bedenklichen Zeit].”
Here, in other words, we ourselves have reached the point of departure responsible for Heidegger’s observation that we live in “thought-provoking times,” or (more colloquially) an “alarming age,” that is, a historical period marked by deep and serious questions that disturb our thought and yet – because these increasingly pressing questions lack clear answers and so obvious paths forward – leave many who try to think about them feeling hopelessly anxious and apprehensive. As our historical situation grows increasingly fraught, calls for action reverberate all around us, often quite rightly and with understandable urgency – as with the escalating crises caused by anthropogenic global warming, a case in which we have long understood the central problem of greenhouse gas emissions clearly enough to recognize the basic steps we need to take in order to ameliorate it. Here, moreover, the undeniable fact that our collective response continues to be woefully inadequate (even as the long foreseen consequences continue to escalate) only serves to further heighten the alarm, much as it did in the cases of nuclear weapons technology and AI examined earlier.Footnote 58 These are “alarming times” indeed, and yet it is difficult to think at all – let alone to think clearly and carefully about the deepest philosophical roots of our technological predicament – with so many alarm bells ringing in our ears.
By 1951, Heidegger found himself facing the mounting pressure of this same kind of call for practical answers (a growing demand driven primarily by those atomic anxieties with which we began, rather than by the then nascent computer revolution, to which Heidegger was nevertheless also paying close attention): What, if anything, does philosophy have to contribute to this pressing question concerning technology?Footnote 59 It was as if Heidegger could hear the question knocking insistently on his door: Philosophers, in the face of the massive and ongoing technological transformation of ourselves and our world, what can and should we do? Moreover, the subtext or implied presupposition of this question (to make it explicit) is that if the answer is nothing – that is, if philosophical thinking has nothing relevant or helpful to say to the most pressing issues facing our world – then philosophy thereby demonstrates its own obsolescence. Also at stake in this question, in other words, is the very place of philosophy in the contemporary world, with the rather charged implication that perhaps this oldest of all academic disciplines no longer has any place but merely lives on like some useless vestigial organ – or, indeed, like human beings in some posthuman future, rendered obsolete by our own technological inventions.Footnote 60 Philosophy, as the original discipline dedicated to the deepest and most difficult questions, gave rise to all the other disciplines (each developing and refining the partial answers they discovered), but perhaps our pressing times no longer have the time for a vigilant dedication to questioning apart from such actionable answers? Perhaps the mere optimization of our intellectual resources requires that the energies dedicated to such questioning should be redirected into more immediately useful pursuits?
Given that this subtext will turn out to be symptomatic of the deeper problem (which Heidegger will diagnose as the technologizing reduction of all things to mere resources to be optimized), it is not too surprising that he does not try to answer the question directly, on its own terms, but instead chooses to step back and question the very terms of the question, unearthing the roots and implications of its implicit presuppositions. He does so not to dodge or avoid the question’s falling due but, rather, to provide the context and framework needed to respond to it in a meaningful way (and thereby help demonstrate the enduring importance of philosophical thinking, which will turn out to be something quite different from providing immediately useful answers or intervening directly in the pressing issues of the day). In hopes of provoking us toward just such a deeper understanding, Heidegger’s response is once again deliberately provocative: “And yet, perhaps for centuries prevailing humanity has acted too much and thought too little [zu viel gehandelt und zu wenig gedacht]” (WCT 4/GA8 6).Footnote 61 His provocative claim here is that we have yet to reach and understand the most fundamental and important (ontohistorical) sources of our present technological predicament precisely because “the will to action – which here means the will to do, to make, to effect – has overrun and crushed thought [das Denken überrollte]” (WCT 25/GA8 27). Heidegger’s careful word choice suggests both (1) that the deeper kind of philosophical thinking we now require for a more meaningful long-term response has been steamrolled, crushed beneath the panzer tank treads of our technological imperative that philosophy must be directly and immediately effective; and also, more subtly, (2) that this imperative itself follows from the particular metaphysical understanding of the “will” that increasingly underlies and unifies our late-modern, technological age. (I will go on to unpack and explain all this, but to cut a long story short, Heidegger’s phrase “will to action” designates a symptomatic expression of what he frequently calls “the will-to-will,” his shorthand for our current, Nietzschean metaphysics, which understands “the being of entities” ontotheologically as “eternally recurring will-to-power,” that is, as just forces striving against forces with no end beyond the maximal perpetuation of force itself, an implicit “technological” “enframing” of being that increasingly leads us to preconceive and so treat all things as nothing but meaningless “resources” [Bestand] standing-by to be optimized with maximal efficiency.)Footnote 62 Philosophy’s increasingly felt imperative to either act now to have a direct effect or else give up any hope of responding meaningfully to our technological predicament is thus not just some external demand imposed on us by our world’s urgent problems but also something we ourselves have already internalized to varying degrees, precisely because this imperative (to react immediately rather than respond thoughtfully, to put it simply) follows from the same underlying metaphysical presuppositions Heidegger thinks have led our world to its current condition.
In other words, both the increasingly alarming state of our technological problems and our correspondingly urgent need to act now to fix these problems follow from a deeper source, one we have not yet discovered or understood but must uncover, if we truly want to think our way through and beyond the deepest problems that continue to plague our current technological age. This is not primarily because many of our alarming technological problems do not have immediate and feasible “fixes” but, rather, because many of these problems remain symptomatic of a deeper ontological and existential predicament which we have not yet uncovered, understood, and responded to appropriately. This deeper ontohistorical predicament will likely continue to generate new difficulties – along with its anxiety-provoking demand to act now to fix, control, or minimize them (a demand which often fails to resolve our most alarming technological problems, as we have seen) – until we uncover and respond to the question concerning technology at this deeper, ontohistorical level. I shall go on to explain that ontohistorical approach to thinking through technology next, but the immediate upshot here is that the growing pressure we all feel to act or react in directly useful ways to fix, control, or minimize the undeniable problems stemming from our technology is, ironically, inhibiting us from thinking deeply enough to recognize the ontohistorical sources of many of these problems, and it is this deeper thinking that we need to help guide humanity into a more meaningful future.Footnote 63
Heidegger’s goal, in sum, is to genuinely think through and so learn to respond thoughtfully to our historical predicament so that we are not just repeatedly reacting to our recurring technological problems in ways that, at best, address some of their alarming symptoms while leaving their underlying metaphysical preconditions unexamined and intact (or even unknowingly reinforcing these deeper metaphysical prejudices), unintentionally enabling these symptomatic problems to recur repeatedly in the historically rhyming ways we have examined thus far. We will see that for Heidegger, one of the deep lessons to be learned here is that technology’s development is not within humanity’s direct control (but neither is our desire to control its development within our control). In showing this, however, he is not giving us an alibi for inaction or philosophical quietism (as is often alleged, even by sympathetic interpreters) but, quite the contrary, seeking to help us uncover the deepest roots of our alarming technological predicament so that we can learn to respond at this deeper level and thereby begin to envision a less alarming and more meaningful future. He is not excusing us from trying to directly address those technological problems that we can but, rather, helping to free us from the anxiety-provoking trap of believing that the only meaningful response to technology is one that seeks to control its historical development and immediately fix or minimize its problems – even when that imperative (which we all feel to some degree) seems to be so futile as to leave us feeling not just worried and concerned but deeply alarmed, disturbed, and disheartened (and thus tempted to give up or, worse, throw ourselves willy-nilly into this technological worldview in hopes that doing so will either distract us from these problems or somehow solve them for us).Footnote 64
In this larger endeavor, Heidegger’s proximate goal, we might say (simplifying slightly), is to shift the focus of our concern from the alarming dangers of technology to the deeper wonders hidden behind such alarm. Thus it is that, almost as if putting a pin in our entire discussion thus far, he announces that:
As important as the socio-economic, political, moral, and even religious questions may be which are being discussed in reference to technological industry and manual labor, none of these questions reaches to the core of the matter [den Kern der Sache]. This [hard core or kernel of the matter] keeps itself hidden [and thereby also “preserves itself,” verbirgt sich] in the still unthought nature of the way and manner in which anything that is under the dominion of technology has any being at all [or “in general is,” überhaupt ist].
What, then, is this true hard core or kernel of the matter at stake in the question concerning technology?Footnote 65 What is it that keeps itself hidden from our alarming worries about technology’s growing dangers (and our corresponding need to act now to be of direct and immediate help), but, in so doing, thereby also “preserves itself,” concealing rich and crucial insights yet to be discovered? The answer, put in a simple phrase, is what Heidegger frequently calls “the essence of technology.” So, what does this crucial phrase designate? That answer is not so simple.
4 Learning to Think Through Technology Ontohistorically
As Heidegger famously writes, “the essence of technology is nothing technological” (or, more literally, with its emphatic hyperbole restored: “the essence of technology is wholly and entirely not technological” [das Wesen der Technik ganz und gar nichts Technisches]) (QCT 4/GA7 7). That initially oracular reference to “the essence of technology” (which I have examined in detail previously) turns out to designate at least three different and successively deeper meanings (and deliberately so), with the third and deepest being that aforementioned “hard core of the matter.”Footnote 66 Moreover, the last two meanings of the phrase initially seem to contradict each other, though in fact recognizing their connection – so that we can learn to move back and forth between them in a “freeing” way – is the ultimate point of Heidegger’s analysis.Footnote 67
Of course, some philosophical readers may already be scratching their heads and wondering: How can something’s “essence” turn out to mean different (let alone contradictory) things? The trick (as I explained back in 2005) is that:
Heidegger is not trying to provide a fixed definition of the “essence of technology.” His goal, rather, is to help us to see that, if we want to understand what he means by the “essence of technology,” then we cannot conceive of essence the way we have been doing since Plato (as what “permanently endures”), for that makes it seem as if “by the [essence of] ‘technology’ we mean some mythological abstraction” (QCT 31/GA7 32). Instead, we should think of “essence” as a verb, as the way in which things “essence” [west] or “remain in play [im Spiel bleibt]” (QCT 30/GA7 31). Indeed, once we conceive of ‘essence’ as a verb rather than a noun, we can see that “the essence of technology” denotes the way technological entities tend to “come to presence” or happen for us.Footnote 68
What I would now add, moreover, is that Heidegger hopes to help us learn to think and so encounter what this technological “essence” designates in three successively deeper ways, thereby recognizing their interconnections. I would even say that doing so is crucial to what learning to think through technology means, for him and for us. For the sake of clarity, I shall first explain these three successively deeper ways of understanding “the essence of technology” as succinctly as possible, then come back to clarify the most crucial points and connections with enough detail that we will be able to understand not only Heidegger’s ontological critique of technology but also his complementary articulation of a positive response to the question concerning technology.
That the essence of technology is nothing technological means, first (in what we could think of as its negative or ground-clearing sense, meant to help disabuse us of some common prejudices and so help prepare us to hear its other meanings), that the essence of technology cannot be identified with any particular machine or technological device because it long precedes what we think of as paradigmatic “technological” devices and machines, from nuclear weapons, the computer, and the internet to self-driving cars, CRISPR, and AI. We cannot inductively abstract a concept of “the essence of technology” just by studying such (ontic) technologies because what Heidegger calls the (ontological) “essence of technology” was in place historically before such machines were developed; in fact, its being in place helped set the conceptual stage for their historical emergence and growing dominance. As a first step toward undermining some of our commonsensical and taken-for-granted ways of thinking about technology (which Heidegger shows often reify deeply problematic metaphysical assumptions), he expresses this first, negative point by means of a rather startling paradox:
Our age is not a technological age because it is the age of the machine; it is an age of the machine because it is the technological age. But so long as the essence of technology does not closely concern us in our thought, we shall never be able to know what the machine is.
The basic idea here (which we will explore in more detail next section) is that without both (1) the fundamental modern bifurcation of reality into conscious, value-bestowing subjects standing over against a world of nonconscious and intrinsically-valueless objective entities (a dualistic dichotomization of self and world that Heidegger calls modern “subjectivism”) and (2) the late-modern radicalization that even objectifies the subject itself and so absorbs both subjects and objects into the pool of inherently-meaningless “resources” (Bestand) on stand-by to be optimized (which he calls late-modern “enframing”), the technoscientific enterprise of extending our conceptual and practical mastery over the world and ourselves would not have developed in the ways that it has – both for good and for ill.Footnote 69
In order to understand the first (negative) meaning conveyed by “the essence of technology is nothing technological” – namely, that “the essence of technology” is not any particular technology but instead precedes and conditions the historical development of our contemporary technologies – we are thus brought to a second, deeper sense of the phrase (and the first of its positive senses): For Heidegger, the essence of technology designates our current metaphysical understanding of the being of entities.Footnote 70 As he already bluntly states in “The Age of the World Picture” (1938): “Machine technology remains thus far the most visible outcropping of the essence of contemporary technology, which remains identical with the essence of contemporary metaphysics” (QCT 116/GA5 75). Like the jutting peak that most dramatically discloses the underlying mountain range to which it belongs, our machine technologies are just the most obvious expressions, signs, or symptoms of this deeper “essence of technology,” which is identical to “the essence of contemporary metaphysics.” What, then, is this essence of contemporary metaphysics?
That phrase refers to what Heidegger calls our current “fundamental metaphysical position” (N2 184/NI 451), his main term for our predominant understanding of “the being of entities” – that is, our reigning metaphysical way of understanding what it means to be anything at all.Footnote 71 The structural core of a metaphysical “understanding of being” is what Heidegger calls a “fundamental metaphysical position,” which he consistently defines as an historical epoch’s dominant understanding of “the truth about the totality of entities as such [die Wahrheit des Seienden als solchen im Ganzen].” As that precise definition suggests, a fundamental metaphysical position always has an ontotheological structure; put simply, it connects its understanding of the “as such” of entities (that is, their ontological essence or innermost core) with its view of their fully realized theological “totality” (as if grasping the whole of what-is from some “god’s-eye view” beyond or outside all that exists).Footnote 72 At its most specific, then, “the essence of contemporary metaphysics” turns out to be our current metaphysical understanding of being as “eternally recurring will to power.”Footnote 73 This Nietzschean ontotheology is the essence of technology in the second and deeper sense, because this understanding of being undergirds and drives the dominant late-modern mode of world-disclosure that Heidegger famously calls technological “enframing” (Gestell).
It is primarily this second, positive sense that Heidegger is thinking of when he tells his students that: “The essence of technology pervades our existence [Dasein, our first-personal “being-here” as an intelligible world cast into time and so history] in a way which we have barely noticed so far” (WCT 22/GA8 25). As that suggests, many people today (perhaps especially but certainly not only young people) seem to relate to technology the way fish relate to water, that is, as the very medium through which they live, a medium rendered nearly invisible by its growing ubiquity. And yet, this metaphorical water has a current, a current we might not ordinarily notice while caught up in it (much like a rip tide in the ocean), but one that is pushing us in a particular ontohistorical direction, subtly but pervasively shifting humanity’s basic understanding of what is and what matters. For, under the spreading influence of this technological understanding of being, the entire globe is increasingly undergoing (what Levinas nicely described as) a “mutation in the light of the world,” and an important part of Heidegger’s response to technology is to help us learn to recognize this pervasive transformation of the very light through which we see. Or, to come back to our earlier metaphor, Heidegger’s thinking of this positive “essence of technology” seeks to help us learn to discern the current of the ontohistorical water, the shifting tides of intelligibility through which we are swimming in our individual and collective existence.
The most important point to understand here is that, beneath the surface of our late-modern age of technological enframing, Heidegger discovers our current Nietzschean ontotheology, which understands the being of entities as “eternally recurring will-to-power,” that is, as mere forces striving against other forces with no end beyond the maximal perpetuation of force itself. This ontotheological understanding of the being of entities grounds and unifies our age, giving our contemporary historical constellation of intelligibility its underlying unity and coherence, but it is also increasingly leading us to preconceive and so treat all things in its light, as nothing but meaningless “resources” (Bestand) standing-by to be quantified, ordered, and optimized with maximal efficiency. The result is the growing dominance of what I have called the optimization imperative, a drive to get the most for the least which profoundly permeates and drives our lives. For Heidegger, the “nihilism” (or growing sense that existence is ultimately meaningless) spread by this current way of understanding (or “enframing”) being is the deepest problem to be found beneath the surface of our technological understanding of the world and ourselves, and he thinks that addressing it is the key to responding to the pressing question concerning technology in a way that is more meaningful – and less alarmed, disheartened, and nihilistic. But to see how he aims to do that, we need to recognize the third and deepest sense of “the essence of technology.”
We saw briefly that in Heidegger’s “history of being” (his history, that is, of Western metaphysics’ succession of dominant ways of understanding “the being of entities”), Nietzsche’s late-modern understanding of being as eternally recurring will to power radicalizes the early modern conception of being as fundamentally composed of subjects standing over against objects, an ontology established by Descartes and influentially developed by Newton, Kant, and many others (and we shall develop this claim in Section 5). For Heidegger, however, that modern understanding of being was itself made possible, in turn, by the prior history of being that came before it (going all the way back beyond even Plato to Thales and Anaximander, I have argued).Footnote 74 Because this “history of being” is essentially the history of Western humanity’s reigning succession of different ways of understanding being, its deepest “essence” is being itself. In other words, being is that “hard kernel of the matter” which is “hidden and [thereby also] preserved [verbirgt]” by the aforementioned fact that metaphysics understands being only as “the being of entities.” In what sense is it “hidden”? Western metaphysics’ consistent reduction of being to a metaphysical understanding of “the being of entities” obscures that there is anything more to being than what metaphysics succeeds in capturing and securing in its ontotheological systems. And what does this very hiding thereby preserve? Metaphysics’ reductive understanding of being as “the being of entities” eclipses “being as such,” the mature Heidegger’s most consistent terminology for the temporally dynamic ur-phenomenon that precedes, partly informs, but also ineluctably overflows and thereby exceeds every successful effort to capture it in any single conceptual system.Footnote 75 In short, the third, deepest, and richest sense designated by “the essence of technology” is “being as such,” that ontologically pluralistic source of intelligibility that Heidegger thinks can never be finally captured in any conceptual system (which is precisely what he thinks the metaphysical tradition of ontotheology has sought to do repeatedly ever since Plato first inaugurated metaphysics as ontotheology).Footnote 76 This deeper phenomenon of “being as such” is thus what is hidden and thereby also preserved by the Nietzschean ontotheology that structures and drives our technological understanding of being.
But here is the rub. As I warned earlier, the second and third senses of “the essence of technology” (which are its only two positive senses) seem directly to contradict each other, because they posit both (2) that the essence of technology is the Nietzschean ontotheology that undergirds and drives our current technological epoch of enframing and also (3) that this essence is “being as such,” which is what our Nietzschean ontotheology misses and conceals. (The reason Heidegger thinks that, again in brief, is that Nietzsche’s ontotheological understanding of being as “eternally recurring will to power” – an endless play of forces vying against forces that functions only to perpetuate force itself – dissolves being into nothing but “sovereign becoming,” reducing the very notion of “being” to a hypostatized illusion without any enduring referent outside language, and so nothing more than “the last wisp of an evaporating reality,” as Nietzsche puts it.)Footnote 77 So, how can “the essence of technology” refer both to enframing and to what enframing misses, denies, and excludes? Here we reach perhaps the most esoteric teaching of the later Heidegger, before which as impressive a hermeneut as Gadamer himself could in the end only throw up his hands in frustration.Footnote 78 For Heidegger, however, this is not just some abstract logical puzzle or dispensable addendum to his later work but, instead, the central phenomenological mystery we need to learn to recognize for ourselves if we want to think our way through and beyond the most stubbornly-entrenched problems facing our technological age and achieve what he famously calls a “free relation” to technology.
The hermeneutic key here (as Heidegger would most succinctly suggest during the last decade of his life) is that “Enframing is like the photographic negative of the event of enowning [Das Ge-stell ist gleichsam das photographische Negativ des Ereignisses]” (FS 60/GA15 366). In other words, the second sense of “the essence of technology” gives us the third sense in negativo: Enframing’s dissolution of being into nothing but “sovereign becoming” provides an inverted image of (what the later Heidegger famously calls) the truth “event of enowning [Ereignis],” his mature term for each momentous event of alêtheiac truth whereby we meaningfully disclose being in time, thereby enabling that which is to come into its own, again and again, in different ways. As Heidegger glosses the point, to recognize this deep connection is to take the “step back [Schritt zurück]” from enframing’s metaphysical foundations, which “means that thinking begins anew, discerning in the essence of technology the heralding sign, the concealed pre-appearing of the event of enowning itself [bedeutet, daß das Denken neu anfängt, um im Wesen der Technik das ankündigende Vorzeichen, den verdeckenden Vor-Schein, die verbergende Vor-Erscheinung des Ereignisses selbst zu erblicken]” (FS 61/GA15 367). As I explained in detail in Heidegger, Art, and Postmodernity, the basic idea behind this difficult thought is that, from within our metaphysical understanding of the being of entities as eternally recurring will to power, being shows up in a metaphysically veiled form as nothing but endless becoming. But the Nietzschean ontotheology underlying and driving enframing dissolves being into a strange kind of “nothing,” a nothing that is not simply nothing at all but, instead, does something. This active “noth-ing,” for Heidegger, needs to be recognized as the subtle phenomenological hinting whereby that which is “not yet a thing” beckons to be brought into being (with the help of our disclosive concepts and other practices). And, as Heidegger succinctly states: “The noth-ing of the nothing ‘is’ being” (FS 57/GA15 361).
Veiled by the Nietzschean metaphysics underlying and driving our late-modern age, being shows up as an active “noth-ing,” an inchoate temporal becoming in which being’s dynamic “presencing [anwesen]” repeatedly makes itself felt as an existential sense of being on the verge of something important but still unformed, something for which we thus find ourselves called to try to find the right words (or other forms of expression) to help bring it into being. When we do so well, moreover, we find ourselves participating in those acts of responsive and creative world disclosure in which the distinctive nature of human beings manifests itself. In such disclosive acts, we maieutically help being arrive in our world by enabling that which was not yet to come to be. Becoming a “shepherd of being” in this way requires attuning ourselves to “being as such” in its myriad differences from the dominant metaphysical understanding of the being of entities that drives and unifies our technological age. In Heidegger’s terms, such ontologically maieutic acts are not mere “happenings” (or repetitions of what already was that fit squarely into the technological world) but rather genuine “events,” since through them that which was not comes to be, something which goes beyond that late-modern understanding of what it means to be.
In an Ereignis – an event of ‘enowning’ or of alêtheiac truth disclosure – entities, Dasein, and being all come into their own together. When Michelangelo discloses David from the marble (to take a paradigmatic example), he brings that particular piece of marble into its own, comes into his own as a world-disclosing sculptor, and allows being to come into its own as an inexhaustibly rich source of meaningfulness that we can partially (and often progressively) but never completely disclose. Moreover, as we continue to responsively disclose the meaning of such events in our own lives, this repeatedly self-and-other-interpreting disclosive activity helps shape and reshape our sense of our unfolding identity (as the first-personal intelligible worlds we are), thereby helping to compose the meaning of our being in time.Footnote 79 In other words, such an “event of enowning” (or Ereignis) discloses not just the entities it helps bring into the light of our world; in so doing, it also discloses being as such (in its seemingly inexhaustible difference from what had been, its “noth-ing”) and so our own distinctive nature as world-disclosive beings. For Heidegger, such responsive and creative disclosure is precisely what makes human beings distinctive, and this distinguishing form of our existential “being-here [or Dasein]” as ontological world-disclosers is, at best, only partially captured by such popular but reductive and so misleading terms as “intelligence” and “consciousness” (or it is missed entirely, when such terms are thought of in the kinds of quantified, linear, monomodal, and optimizable ways critiqued earlier).Footnote 80 In this way, then, the distinctive nature of our thinking as responsive and creative world-disclosive beings is revealed by “the danger” of the technological understanding of being which conceals that nature by increasingly reducing all things, including us, to mere “resources [Bestand]” standing by to be optimized for maximally flexible use – a danger which now even threatens to rewrite and so erase our world-disclosive nature permanently, in what Heidegger calls technology’s “greatest danger.”
What Heidegger calls “the greatest danger” of our technological understanding of being is the possibility that our Nietzschean ontotheology could become permanent, “driving out every other possibility of revealing” (QCT 27/GA7 28) by overwriting and so effectively obscuring Dasein’s “special nature,” our defining capacity for responsive and creative world-disclosure, with the “total thoughtlessness” of lives lived entirely in the grip of the Nietzschean conception of all entities, ourselves included, as intrinsically meaningless resources just standing by to be optimized for maximally flexible use (DT 56/G 25). If the Nietzschean ontotheology underlying technological “enframing” succeeds in securing its monopoly on the real, and so preemptively delegitimates all alternative understandings of being (by deriding them as useless, nonproductive, nonnaturalistic, or nonquantifiable, for example, and thus as irrelevant, ridiculous, nonserious, illegitimate, and so on), Heidegger thinks it could effect and enforce a kind of double forgetting in which we lose sight of our distinctive capacity for world-disclosure and forget that anything has thus been forgotten. The idea, as he provocatively puts it, is that we could become so satiated with the possibilities for flexible self-optimization opened up by treating our worlds and ourselves as resources (perhaps in ways like those Father John Misty imagined at the beginning, for instance) that we could lose the very sense that anything is lost with such a self-understanding. For example, the very idea that entities have an inexhaustibly rich being to disclose – and that doing so repeatedly, in ways that make our identities vulnerable to existential death, also helps constitute our sense of living an enduringly meaningful life – might come to seem like an historically outdated myth, escapist romantic fantasy, or simply a failure to get with the program by optimizing ourselves technologically. The recent developments in genetic engineering examined in Section 2, moreover, allow us to imagine ways we might literally accomplish such a permanent “rewriting” of our distinctive nature, whether deliberately or as an accidental side effect of gene edits intended for other purposes.Footnote 81
We have thus understood how the essence of technology can signify both the greatest danger of technology (its threat to obscure and even delete our distinctive nature as responsive and creative disclosers of being) and its potentially salvific promise (its surprising ability to help us realize that very nature it threatens, in both senses of “realize”), thereby recognizing the deeper connection underlying this apparently diametrical opposition. This is one way of understanding Heidegger’s most-cherished distich from his favorite poet, Hölderlin, taken from the opening of his late hymn, “Patmos” (1803): “But where danger threatens / that which saves from it also grows. [Wo aber Gefahr ist, wächst / Das Rettende auch.]”Footnote 82 Thinking carefully through this initially paradoxical connection thus reveals the aforementioned hard core or kernel of the matter at stake in the question concerning technology – that deepest heart of the issue which keeps itself hidden from our ordinary alarm about technology and thereby also “preserves itself,” sheltering the seemingly inexhaustible ontological riches (of being as such) that (as the “noth-ing”) continue subtly to glimmer phenomenologically for us to disclose – and so also discloses the nature of our own being as receptive and creative disclosers of being. In this way, then, the technological enframing that threatens to reduce thought itself to efficient optimization reveals the nature of our distinctive thinking in its inverted mirror: We are “thinkers,” ontologically-disclosive beings whose sensitive thinking [Besinnung] creatively composes being’s polyphonic hints into meaningful compositions. And since we can never do so finally, we do so repeatedly, in ways that help creatively disclose and compose the meaning of our unfolding existences in time.Footnote 83
With this overview in place, then, let us dive a bit deeper into the most important ideas at stake here, asking what they might teach us about how to respond to the problems endemic to our technological age without falling victim to that anxiety-provoking and alarming sense of futility examined earlier, whether by simply giving up on ever transcending technological optimization or by giving in to it entirely. What would it truly mean to transcend technological enframing, and what might we do to help bring about such a philosophical and historical move beyond our late-modern enframing of being?
5 Heidegger’s Ontohistorical Thinking of Technology: Modern Subjectivism, Late-Modern Enframing, and the Coming of Postmodernity
To answer such questions, we need to get a bit clearer about some details of Heidegger’s view. Heidegger’s mature understanding of metaphysics as ontotheology has been widely reduced to one of its parts and so largely misunderstood, but recognizing what he really means by “ontotheology” is crucial for interpreting his later thinking sympathetically and plausibly.Footnote 84 To simplify (rather massively) here: The later Heidegger’s famous “history of being” is his account of Western history as a series of successive but overlapping constellations of intelligibility, historical epochs that are each temporarily anchored and stabilized by a metaphysical ontotheology. “Ontotheology” is Heidegger’s term for the doubly foundationalist metaphysical accounts that structure the “fundamental metaphysical positions” which try to understand “the being of entities” in terms of “the truth concerning the totality of entities as such” (as we saw last section).
These historical ontotheologies link (1) metaphysics’ deepest understanding of the innermost (ontological) core of what-is (that perennial quest for the most elementary component out of which everything else is constituted) with (2) metaphysics’ ultimate understanding of the outermost (theological) horizon of what-is (the adoption of a kind of “God’s-eye” view that tries to comprehend all that is as if looking in from outside, in that metaphysical “view from nowhere”). When they function together successfully, ontotheologies grasp and secure Western humanity’s historical understanding of what-is and what matters from both the inside-out and the outside-in at the same time. In the history of the West, each of these “ontotheological” foundations that doubly anchored and so temporarily stabilized our historical worlds was undermined only by the later discovery or comprehension of the even deeper and more far-reaching foundations which then succeeded it historically – until we reach late modernity, in which all such metaphysical anchorings seem finally to give way, falling into the Nietzschean abyss of groundless becoming, which paradoxically becomes the groundless ground (or Ungrund) of our own late-modern epoch, by preconceiving “the totality of entities as such” as “eternally recurring will to power,” that is, nothing but force vying against force to maximally perpetuate force itself.
I mentioned in Section 4 that, in Heidegger’s mature thinking, “the modern age” as a whole is actually made up of two different “epochs,” which he calls early modern “subjectivism” and late-modern “enframing,” respectively.Footnote 85 These early modern and late-modern epochs interconnect philosophically and overlap historically to form modernity. So, if we want to understand what Heidegger’s (literally postmodern) “other beginning” seeks to move beyond (and so also what Heideggerian postmodernity would or could preserve from modernity), then we need to know how he understands modernity’s early and late-modern epochs. For, the mature Heidegger is no reactionary antimodernist, rejecting modernity as a whole; instead, his critiques of modernity focus specifically on its metaphysical foundations. The reason his critiques seem so broad is that these metaphysical foundations have a much larger and more pervasive historical impact than we usually notice. As I have often shown, Heidegger’s well-known antipathy for metaphysics tends to conceal the fact that, in his view, metaphysics is not merely the idle concern of philosophers isolated in their ivory towers; on the contrary, metaphysics articulates the conceptual core of “the history that we are” (N3 20/GA47 28), because it supplies the most fundamental conceptual parameters that shape and stabilize an historical age’s unifying sense of what is and what matters. Despite the importance Heidegger thereby attributes to metaphysics in constituting the deepest and ultimate parameters for our historical sense of the intelligibility of all things, Heidegger’s view is not an idealism (as some polemical but confused “materialists” allege), because metaphysicians do not legislate these ontotheologies from out of their own creative imaginations or impose them on the basis of their own idiosyncratic view of things but, instead, receptively disclose them by picking up on the most basic and far-reaching insights into the foundations and nature of reality that are already emerging in their historical worlds, in domains such as art and poetry as well as economics, biology, chemistry, physics, and cosmology (as we will see shortly).Footnote 86
When a metaphysics is truly “great” (in Heidegger’s terms), it quietly spreads a new “understanding of being” far and wide until it has settled into taken-for-granted common sense. Metaphysical ontotheologies play such a major role in shaping and reshaping the constellations of intelligibility in which we human beings exist because of what I have called ontological historicity, holism, and epochality. “Ontological historicity” refers to the fact that humanity’s bedrock sense of what is and what matters changes dramatically over time. The mechanism driving such historicity, in turn, is what I call “ontological holism”: Everything intelligible “is” in some sense, so when metaphysics successfully stabilizes a realignment in humanity’s understanding of what it means to be (which is precisely what successful ontotheologies do), this new understanding of being catalyzes a broad-spectrum historical transformation that ripples throughout numerous other interconnected ideas and practices until it has stabilized into a new historical constellation of intelligibility or “ontohistorical” mode of revealing. The resulting shape this “history of being” takes is thus (what I call) “ontological epochality”: Western humanity’s basic sense of what it means to be is neither fixed for all time nor constantly mutating but, instead, forms a series of overlapping historical “epochs” (or constellations of intelligibility). These “epochs” of being – which “hold-back [epoché]” being’s ontological riches so that a distinctive sense of what is and what matters can emerge and spread – unfold successively in a kind of punctuated equilibrium: In Heidegger’s “history of being,” Western history is composed of three ages (the ancient, medieval, and modern), themselves made up of at least five overlapping but relatively distinct epochs (the pre-Platonic, Platonic, medieval, modern, and late-modern).
In early works like Being and Time, Heidegger’s deconstructive critiques of modern metaphysics focus almost exclusively on what is usually called “the early modern” epoch, a mode of revealing grounded in the ontological tradition running from Descartes to Kant. The definitive trait of such early modernity is its ontological divide between subjects and objects, a metaphysical dichotomy Descartes institutes by convincing us that cognition’s immediate access to itself makes its existence indubitably certain in a way not shared by any of the objects “external” to such subjectivity. In Heidegger’s terms, Kant thinks Descartes’ unthought, staying within the basic metaphysical horizon constituted by Descartes’ establishment of subjectivity as foundational, but developing its previously unrecognized implications for morality, politics, and aesthetics. Put simply, Kant’s deontological morality is founded on this thinking subject’s universal recognition of all other such subjects as capable of rationally pursuing their own ends, so that the moral domain is determined by what such rational subjects can will to be the case without generating a contradiction. Politically, Kant’s cosmopolitan liberalism is founded on every rational subject’s right freely to pursue those ends within limits set only by every other rational subject’s analogous pursuit of their own ends, so that each subject is entitled to as much political freedom as is compatible with the same formal freedom accorded to all the others. In aesthetics, finally, art becomes fundamentally a relation between subjects and the aesthetic objects they create and view.Footnote 87
To some, that might sound like a narrative of unidirectional historical progress. But as Heidegger already began to show in Being and Time (1927), the big problem for Western humanity here is that taking this modern subject/object dichotomy as our point of departure leads us to fundamentally mischaracterize the way we experience the everyday world in which we are usually unreflectively immersed, the world of our practical engagements. “Ordinarily and usually,” for instance, I do not explicitly or thematically “experience” the keyboard I type on or the bike I ride as external “objects” (Gegenstand) “standing over against” my “subjectivity” while skillfully using such equipment. Instead, I encounter such practical equipment almost transparently as integral aspects of my engaged existence as a “being-in-the-world.” By failing to recognize and do justice to this intertwinement of self and world that is basic to our experiential navigation of our lived environments, modern philosophy lays the conceptual groundwork for the “early modern” epoch that Heidegger calls subjectivism, the “worldview” in which an intrinsically valueless objective realm (“nature,” reduced to measurable quanta of force by Newton) is separated epistemically from isolated, value-bestowing, self-certain subjects, and so needs to be mastered through the relentless epistemological, normative, and practical activities of these subjects. As Heidegger shows, this problem is not merely theoretical, because the subjectivism of the modern worldview functions historically like a self-fulfilling prophecy: Its progressive historical realization generates not only those political freedoms, scientific discoveries, and technological advances many of us cherish, but also such unwanted downstream consequences as the distressing technological issues we discussed earlier (along with numerous other problems, including environmental devastation, the ongoing crisis of higher education, and the numerous ethico-political distortions that come from overemphasizing autonomy and self-reliance at the expense of holistic interconnectedness).
First emerging with Cartesian early modernity, “subjectivism” is Heidegger’s term for humanity’s ongoing, broad-spectrum attempt to establish “mastery over the totality of what-is” (QCT 132/GA5 92).Footnote 88 The early modern “object [Gegenstand]” literally names what “stands [over] against” subjectivity from outside it, thereby provoking our myriad efforts to bring these “external” objects back within our sphere of subjectivistic mastery and control. (These relentless efforts are ultimately stymied by being’s conceptual inexhaustibility, and thereby become ever more over-compensatory and “unhinged,” a center that “cannot hold,” as Yeats put it, as subjectivity progressively objectifies even itself and so dissolves its paradigmatic role as the unifying center of all things that determines their “value.”)Footnote 89 Subjectivism thus refers to modern humanity’s increasingly global quest to achieve complete control over every aspect of our objective reality; we metaphysically privilege the subject as the being “who gives the measure and provides the guidelines for everything that is” (QCT 134/GA5 94) as we seek to develop “our unlimited power for calculating, planning, and breeding [Züchtung] all things” (QCT 135/GA5 94).
As that highly-charged reference to “breeding” suggests, Heidegger first recognized the emergence of something not just horrifying but metaphysically unprecedented in the Nazis’ murderous program of genocidal eugenics, which treated even the subject, that privileged foundation of early modernity, as just another object to be mastered and controlled.Footnote 90 This self-objectification of the subject signals an important historical turning point, a rupture between the two epochs of modernity: Early modern subjectivism turns into late-modern enframing as the modern subject, seeking to master and control all aspects of its objective reality, turns that objectifying impulse – and the myriad techniques developed and deployed in its service – back onto itself. For Heidegger, we can thus say, enframing is subjectivism squared (or subjectivism applied back to the subject itself). Indeed, the subjectivist impulse to master reality redoubles itself in enframing, even as enframing’s objectification of the subject dissolves the very subject/object division that defined early modernity and initially drove the subject’s relentless efforts to master the objective world standing over against it (in vain hopes of finally bringing it back within the sphere of its knowledge and control). Subjectivism “somersaults beyond itself [selbst überschlägt]” (N1 77/GA43 90) in our late-modern age of “enframing” because the impulse to control everything intensifies and accelerates even as it breaks free of its modern moorings and circles back on the subject itself.Footnote 91 As a result, the modern subject increasingly becomes just another late-modern entity to be optimized for maximally efficient flexibility along with everything else. We are thus moving from modern subjectivism to the late-modern enframing of reality insofar as we understand and relate to all things, ourselves included, not just as objects to be mastered and controlled but as nothing but intrinsically meaningless “resources” (Bestand) standing by for endless optimization.
With its very emergence, in other words, the late-modern epoch of technological enframing is already moving beyond the metaphysical foundations of early modernity, dissolving the subject/object dichotomy that early modernity is founded on and thereby propelling humanity into this new historical epoch that Heidegger calls late-modern “enframing” (the second of the two epochs that together constitute the modern age, or modernity as a whole). Remember that Heidegger’s critique of modernity is primarily a critique of its metaphysical foundations; in terms of these, late modernity has already left early modernity behind. So, if we want to understand what exactly it is from modernity that Heidegger’s postmodern “other beginning” seeks to help us “twist-free of [verwinden]” – that is, recognize, accept, undermine, transcend, and so at least partly recover from (not “overcome [überwinden],” since that act of willful opposition entangles us in the logic of what we oppose, the way atheism perpetuates the theistic presumption that we can know what lies outside space-time) – then we need to focus primarily on the metaphysical substructure of late-modern “enframing.” For, it is primarily the metaphysical ontotheology undergirding, unifying, and driving this technological understanding of being that remains with us today and that Heidegger’s postmodern other beginning seeks to help humanity move beyond, first individually and then collectively.
Beginning in the late 1930s, Heidegger painstakingly traces the late-modern epochal shift he first discerned in the horrors of Nazi eugenics back to an “unthought” ontotheology he uncovers in Nietzsche’s work.Footnote 92 To briefly summarize Heidegger’s most important conclusions: Just as Kant “thought Descartes’ unthought,” so Nietzsche thinks Kant’s unthought, developing the heretofore unrecognized consequences of Kant’s establishment of subjectivity as the metaphysical foundation for morality, politics, and art. By making the rational subject (rather than God’s authority) the ground of what is morally good, politically right, and aesthetically beautiful, Kant kills God, in Nietzsche’s terms; that is, Kant finally severs our human world from all its traditional (“otherworldly”) metaphysical foundations, substituting rational agency for divine authority.Footnote 93
Seeking to think Nietzsche’s unthought in turn, Heidegger traces the surprisingly systematic unity of Nietzsche’s views back to the two most fundamental pillars of his mature thought, the “will to power” and “the eternal return of the same.” Generalizing from Darwinian biology, Smith’s laissez-faire economics, and even the chemistry of his day, Nietzsche discovers “the will to power” as the ontological essence of all things, a name for that endless struggle between competing forces, an Olympic agon without final victor, which (as with the lion chasing the gazelle, the competing forces of supply and demand, or the opposing forces constituting matter) ultimately serves only to maximally perpetuate the endless circulation of these forces themselves (whether by driving the arms race of evolution, maximizing economic growth, or preserving the elemental forces composing what we call “matter”). And, when we try to think about what shape such cosmic becoming ultimately takes (thereby seeking to take up that theological God’s-eye-view from nowhere), the universe looks like a river running forever in a circle – a cosmic loop in which life always begins again after the last deadly stroke of midnight (as Zarathustra suggests), or even a divine dance of the god Dionysus (in which life never stops dying and being reborn).
Heidegger thus isolates the “ontotheological” substratum of late-modern enframing by thinking (that is, creatively disclosing) Nietzsche’s unthought metaphysical unity of will to power and eternal recurrence. As we late moderns implicitly come to understand the being of all things as “eternally recurring will-to-power,” that is, as nothing but forces coming together and breaking apart endlessly, we increasingly reduce everything to meaningless “resources” (Bestand), mere material standing by to be optimized and ordered for maximally efficient use – including (most dangerously) ourselves. As we saw, Heidegger thinks the “greatest danger” of this “nihilistic” late-modern epoch of technological enframing is that it could become permanent (thereby achieving what no previous metaphysics could) by not just obscuring but rewriting human nature, should our endless quest for self-optimization ever erase our defining capacity for creative and responsive world-disclosure (whether deliberately or as an accidental consequence of our genetic reengineering of our own heritable DNA). It is, moreover, precisely this Nietzschean ontotheology underlying late-modern enframing that Heidegger seeks to help us recognize and transcend with his postmodern “other beginning.”
What Gadamer and so many others seem never to have understood here, however, is that this transformation from “the danger” of nihilistic late-modern enframing to “the promise” of a genuinely meaningful postmodern understanding of being is not some possible eventuality that might happen someday in the distant future, a day we can at best only wait and prepare for, quietistically hoping for its miraculous arrival. On the contrary (as I show in detail in Heidegger, Art, and Postmodernity), Heidegger believed this postmodern understanding of being is already here, having already begun to arrive more than two centuries ago in the visionary work of “the most futural” artists and thinkers like Hölderlin, Van Gogh, and Nietzsche too (since, as Heidegger later recognized, Nietzsche’s polysemic thinking cannot be reduced to its “unthought” ontotheology but contains other, more promising and still partly unthought insights).Footnote 94 Like first arrivals from another land, these futural “emissaries” are ambassadors of a postmodern future who can help facilitate the more widespread arrival of this other, post-metaphysical (that is, no longer ontotheological) understanding of being. When Heidegger thinks – that is, creatively and responsively discloses – their postmodern “unthought,” Hölderlin, Van Gogh, and even Nietzsche are no longer modern thinkers. On the contrary, what remains greatest about their thinking is that it can help us move beyond the early and late-modern ways of understanding being (as modern objects for subjects to master and control, or as inherently meaningless late-modern resources awaiting optimization, respectively) into a more meaningful postmodern understanding of being, in which we come to understand being as partly informing and yet also inevitably exceeding our ability to conceptualize and relate to what is. In other words, Heidegger’s postmodern revolution began over two centuries ago, and (taking the long view) humanity’s progress toward its larger historical realization has indeed tended to unfold in short bursts of dramatic “revolutionary” historical progress followed by longer periods of reactionary retrenchment.Footnote 95
Indeed, we ourselves understand being in a postmodern way when we personally undergo that ontologically transformative gestalt switch needed to understand the deepest essence of technology, suddenly seeing in the endless dynamism of Nietzschean becoming – not a meaningless nothing in which all being is evaporated but, instead – the myriad glimmering hints of that which is not-yet-a-thing, beckoning for our creative and responsive disclosures to help bring them into being. Modern humanity often likes to imagine that it is close to achieving total knowledge and mastery of the world, or even that the end of history is imminent (as in those nihilistic thanatological fantasies discussed earlier, eager for an end to all things). In the view suggested by Heidegger’s history of being, however, Western humanity is in its teenage years at best (collectively caught up in what Beauvoir insightfully recognized as “the crisis of adolescence,” and so particularly susceptible to the temptations of fascism, which paternalistically supplants the anxiety of our ineliminable existential and ontohistorical elbow room with the brittle reassurances of one-sided dogma).Footnote 96
Or, to put the point in the poetic terms the later Heidegger uses, we now exist in a difficult transitional “night” between the long ending of the first metaphysical day of Western history and the beginning of an “other,” second day, in which we occidentals learn to live without metaphysics’ illusory fantasies of achieving some complete and final understanding of everything and instead attune ourselves to being in the seemingly inexhaustible difference whereby it precedes, overflows, and so exceeds all our dominant modes of understanding what is and what matters. As we learn to dwell in the effulgent light of this more poetic postmodern sun, we come to recognize it already rising on an “other beginning” beyond nihilistic late-modern enframing, “a hesitant dawn” that continues to spread and grow into that new day of Heidegger’s postmodern age.Footnote 97 Or, more prosaically expressed, if we can learn from the great poets, artists, and thinkers to become comportmentally attuned to “being as such” – the dynamic phenomenological presencing that precedes, overflows, and so exceeds all conceptualization – then we too can come to understand and experience entities in a postmodern way, as being richer in meaning than we are capable of ever finally doing justice to conceptually, rather than taking them as intrinsically meaningless late-modern resources awaiting optimization and so endless quantification, which nihilistically replaces quality with quantity, substituting the projection or infusion of value for the disclosive preservation of the invaluable, the nonquantifiable mattering that forms the inexhaustible touchstone of a meaningful life. Such postmodern experiences can thus become microcosms of, as well as inspiration for, the postmodern revolution beyond our underlying ontotheology that we need to transcend the nihilism of late-modern enframing and help set our world on a different, more meaningful path.Footnote 98
For, Heidegger is no anarchist, rejecting all foundations (as the confluence of Schürmann’s reading and Derrida’s critique of Heidegger worked together to suggest); instead, he is a postmodern polyarchist. That is, Heidegger is an ontological pluralist who recognizes that being can indeed meaningfully inform our lives (practically and conceptually), repeatedly arriving in momentous “events” that we can dedicate our lives to responsively and creatively disclosing (thereby also helping to disclose and transform the meaning of our own lives’ unfolding), despite that metaphysics-refuting phenomenological truth that being can never be exhaustively captured in any singular conceptual frame, as technological enframing – the consummating “pinnacle of Western metaphysics” – seeks to do. Yet, because that historical pinnacle “looks down both slopes” (as Derrida recognized), the end of metaphysics in technological late-modernity is also (in negativo, as we have seen) the arriving of the “other beginning,” built around the postmodern understanding of being (as both informing and exceeding all meaningful conceptualizations and forms of embodiment).Footnote 99
6 Thinking a Free Relation to Technology, or: Technology and the Other (Postmodern) Beginning
Our concluding question thus becomes: How is our relation to technology transformed by adopting this postmodern understanding of being? As we learn to understand and so relate to what-is no longer as early modern objects to be mastered and controlled, nor as meaningless late-modern resources standing by to be efficiently optimized, but in Heidegger’s postmodern way instead – that is, as being richer in meaning than we are capable of finally doing justice to conceptually or practically – we find ourselves moving forward historically, not back (into some reactionary technophobia), and so can learn to develop what Heidegger called a “free relation to technology,” in which it becomes possible to use even technological devices themselves to resist technological enframing and its nihilistic tendency to obviate any meaning independent of the will. Indeed, I think it pedagogically important to recognize that we are already doing this, for example, whenever we use a camera, microscope, telescope, or even glasses to help bring out something meaningful that we might not otherwise have seen, when we use a synthesizer or computer to compose a new kind of music that helps us develop and share our sense of what is most significant to us, when we use word processors to help bring out and compose our sense of what is really there in the texts that matter to us and the philosophical issues that most concern us, or even when we use ChatGPT (or other LLMs) to help explore, clarify, and critique the current state of our wildest technological ambitions.Footnote 100
Of course, learning to adopt such a postmodern approach to technology will not serve as some panacea for our existing problems. It will, however, help us learn to face the most daunting of these technological problems with less disheartening anxiety and more openness to the kinds of insight, understanding, creative solutions, and meaningful long-term responses that many of these problems require. As long as we remain locked into late-modern enframing’s all-encompassing optimization imperative (that technologizing drive to “get the most for the least” that demands effective action now), we will find our anxiety and despair continuing to grow in the face of the apparent intractability of some of the technological problems we have examined (which might well be acceptable if that growing anxiety finally pushed us over into effective collective action, but unfortunately it seems not to, at least not yet). Compounding the problem, enframing’s optimization imperative also often works to preemptively delegitimate and disqualify the kinds of innovative approaches we need to develop to respond effectively to our most deeply entrenched technological problems (by leading us to precipitously dismiss such creative responses as too unrealistic, untried, ineffective, or otherwise “suboptimal”). By getting beneath our anxiety to address its ontohistorical sources (we have seen), the postmodern understanding of being can help free us from the anxiety and despair of enframing, teaching us not to repress and flee but instead to embrace being’s conceptual and practical inexhaustibility as the source of all meaning and, indeed, as the ongoing arriving of the inchoate hints of a more meaningful future.Footnote 101
In our specific context here, this transformative gestalt switch can help us become more receptive to existing and emerging technologies in ways that allow us to incorporate them into our ongoing commitment to thinking – that is, to creatively disclosing and composing meaningful responses to the real problems we continue to face. Moreover, the adoption of this postmodern understanding of being should also help inhibit the endless recurrence of similar problems, insofar as enframing conditions, enables, and drives these problems – the way it does, to take just one notable example, in the currently much-discussed problem of AI-enabled cheating in school. For, that problem only arises because we have forgotten the original philosophical meaning of education (as a transformative rediscovery of being and its definitive connections to our own being) and so allowed enframing’s empty optimization imperative to rush in and fill the gap, thereby unintentionally encouraging students to view education itself as merely a formal process of getting the most for the least (the highest-paying jobs for the least amount of work, for example), a situation in which such cheating is not only predictable but perfectly rational. That problem is nipped in the bud, however, when students are instead taught the truth that the deeper goal of education is to help identify, cultivate, and develop each individual’s distinctive traits and capacities (as world-disclosive beings) in ways that allow them to contribute to solving the most pressing problems of their time and generation – precisely because doing so is what will enable them to live meaningful, intrinsically rewarding lives.Footnote 102
In this regard, moreover, although I have found myself consistently deflating the fears and hopes of both AI doomers and techno-utopians prophesying our imminent annihilation or salvation by superintelligent AIs (which in fact, we have seen, are nowhere in the offing), I should also acknowledge that what counts as a “mortal danger” from the development of AI remains somewhat relative and hence contentious. For example, if you drive a taxi or an interstate truck for a living (as more than three million people currently do in the USA) and have no other ‘marketable’ skills or training to fall back on, then the real and much more imminent threat of losing your job to automated vehicles could indeed count as a dystopian devastation of your whole way of life. At the same time, however, and without being callous or insensitive about this real problem, the cold truth is that different versions of this same story have happened numerous times throughout history (as when the advent of automobiles displaced the entire industry built up around horse-drawn carriages). And while such earlier transitions may have happened more gradually, we are now much better equipped and so (in principle at least) capable of responding to such problems with creative solutions such as the widespread implementation of vocational retraining and reskilling programs – programs in which AI may well come to play a significant role in lowering the cost and increasing the ease and accessibility of such retraining – ideally, by connecting such ‘retraining’ to the deeper meaning of vocational education (by hearing the call of its largely forgotten as well as its heretofore unthought possibilities).
Similarly, the other dangers stemming from the dramatic new information technologies we discussed are both less real than some fear and more real than many hope. The 2023 Writers Guild of America strike testifies to the reality of job displacement in some of our most creative fields, especially because in our capitalist economy (itself an expression of the technological understanding of being, in which all things, even human beings, are increasingly treated as nothing but fungible resources to be optimized efficiently, quantifying the qualitative so as to get the most for the least), even the mirage of being able to replace one’s costly creative workers with machines can be enough to begin shrinking a field. The writing done by generative AIs remains far below the levels reached by the most skillful, imaginative, and innovative human beings (and seems extremely likely to remain so for the foreseeable future), a stubborn fact that has not stopped Hollywood and other film and television studios from downsizing their creative teams in the name of streamlining production flows in pursuit of profit maximization. The truth that generative AIs fall far short of skillful human beings should stop such heedless rapacity, but whether or not it does in the long run would seem to depend on the last big issue I would like to address.
Does generative AI really just give us “Gerede on steroids” (in Taylor Carman’s memorable phrase) – that is, merely souped-up “idle chatter,” new and efficient on-demand information delivery systems that actually reify and reinforce the levelled-down, received view of things, just repackaged and presented in shiny new forms?Footnote 103 Well, in terms of GPT’s ability to answer factual questions, Carman’s critique is quite telling; indeed, many of GPT’s answers remain below the level of what you could find through a search calibrated to expert sources alone (in philosophy, for example, by relying on The Stanford Encyclopedia of Philosophy rather than on Wikipedia, sources ChatGPT seems to jumble together indiscriminately). Plus, GPT has that aforementioned tendency to simply make up facts (confidently citing nonexistent sources, for example, or attributing made-up quotations to real sources, real quotations to fake sources, and so on). Programmers call this “hallucinating,” but that term is misleading because it suggests seeing something which is not there, when in fact generative AIs do not see anything at all because they have no intentionality, no “mineness [Jemeinigkeit]” or first-personal intelligible world. Lacking such an ontological world, the deeper problem here is that generative “AI still fundamentally can’t tell truth from falsehood” but, instead, mimics the form of the answers on which it has been trained, even when it runs out of data to give those established patterns true content.Footnote 104 In other words, when not just repackaging existing content, it guesses; much like a more advanced form of the approach that gave us predictive text, generative AI generates predictive answers that look the way such answers are supposed to look formally (or, more precisely, answers that look the way the answers looked in the data on which it was trained – with the resulting further problem that it gives racist, sexist, classist, ableist, homophobic, transphobic, and so on, answers, insofar as it has been trained on data that contains or reflects such biases, often “black-boxing” these biases into its results and so entrenching them even more deeply). But these formally impressive answers are often factually fabricated, in part or in whole. This tendency toward confabulation – the more accurate psychological term for unknowingly fabricating details to fill in unperceived gaps in one’s memory or knowledge (the way alcoholics with Wernicke-Korsakoff syndrome do) – remains a highly problematic bug for generative AI’s ability to answer factual questions on demand (and so continues to present real problems for designers and users hoping for something like a one-stop replacement for Google search). At the same time, however, this initially surprising tendency toward confabulation also points us toward the most distinctive feature of LLMs like GPT, since the greatest strength of generative AIs is to generate new content, that is, to make things up that do not already exist by drawing on and recombining existing elements and then filling in the gaps in potentially new ways.Footnote 105
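To make that predictive mechanism concrete, consider the following deliberately toy sketch (in Python): a tiny “bigram” model that continues a prompt by sampling whichever words followed the current word in its training data, and that simply guesses when its data runs out. Everything here – corpus, names, outputs – is invented for illustration, and real LLMs use vastly larger neural networks rather than word-pair tables; but the sketch isolates the same principle just described, namely, that the model reproduces the form of its training data (biases included) with no grip whatsoever on truth or falsity.

```python
# Toy illustration (not any real system): a bigram "language model" that,
# like predictive text, continues a prompt by sampling whatever word
# followed the current word in its training data -- and guesses blindly
# when the data runs out. The corpus below is invented for illustration.
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Record, for each word, every word that followed it in the corpus."""
    words = corpus.split()
    followers = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        followers[current].append(nxt)
    return followers

def generate(followers, seed, length=10):
    """Continue the seed by repeatedly sampling a statistically plausible next word."""
    out = [seed]
    for _ in range(length):
        candidates = followers.get(out[-1])
        if not candidates:  # no data for this context: the model can only guess
            candidates = [w for ws in followers.values() for w in ws]
        out.append(random.choice(candidates))
    return " ".join(out)

corpus = ("the will to power names the being of beings and "
          "the eternal return names the being of becoming")
model = train_bigrams(corpus)
print(generate(model, "the"))
# The output is fluent in form but indifferent to truth: the model can
# only recombine the patterns (and biases) present in its training data.
```

Scaled up by many orders of magnitude, this same pattern-completing, gap-filling operation is what lets LLMs generate content that did not exist before.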
The results, however, are notoriously mixed. As Vauhini Vara, a writer celebrated for her successful use of ChatGPT as a tool to help with her own creative writing, observes:
ChatGPT’s voice is polite, predictable, inoffensive, upbeat. Great characters, on the other hand, aren’t polite; great plots aren’t predictable; great style isn’t inoffensive; and great endings aren’t upbeat. … [T]he problem with AI is not just that it can’t come up with an individual perspective on the world. It’s that it can’t even comprehend what the world is.
LLMs like GPT cannot comprehend what the world is because they have no world (no first-personal intentionality, as we observed earlier; there is nothing that it is like to be them). What is more, GPT tends to ‘answer’ difficult questions by generating noncommittal lists of pros and cons and adding qualifications so obvious that only an ignoramus would find them useful, yielding tediously bland and anticlimactic conclusions to its seemingly indiscriminate recompilation of existing data (thereby fusing the true with the false, the insightful with the misleading), an output all bound together with what Vara comically describes as “the witless efficiency of a stapler.”Footnote 106 Apropos of which, generative AIs seem particularly bad at humor. For example, when I asked ChatGPT-4 to “Tell me a philosophy joke,” it answered: “Why did the philosopher refuse to tip the waiter? Because he was a proponent of no-ethics!” I didn’t get it. GPT-4’s explanation?
The joke plays on a pun between the word “no-ethics” (suggesting a lack of moral principles or standards) and “no tip” (suggesting not giving a gratuity to the waiter). The philosopher, according to the joke, refuses to tip the waiter because he believes in “no-ethics,” a play on words indicating he has no ethical standards. In reality, philosophy is deeply concerned with ethics, so the joke is an ironic twist on that expectation.
Painful. (And notice that unselfconscious “he.”) When I asked it to try again, it came up with: “Why don’t philosophers argue in uppercase? Because they can’t stand absolute truths!” (YIKES.) The jobs for human comedy writers look pretty secure for the foreseeable future.Footnote 107 Humor aside, Vara rightly discerns the outlines of what I take to be the deeper and more problematic danger looming on the horizon here: Readers might learn to be satisfied by future, less mediocre versions of AI prose, especially if, first, these future AIs are trained on customizable data sets composed of writers whose styles are preferred by those particular readers, and, second, their stories or other writing are generated on demand in response to a reader’s own individual requests.
Bert Dreyfus used to teach his students that the real danger of computers is not that they will become more like us but, rather, the reverse: that we will become more like them. The real danger is not that the AIs will surpass us – and become superintelligent entities bent on human eradication or salvation – but, instead, that we will become accustomed to their level of creative output and lower our expectations about both them and ourselves to that level. For, it seems very likely that in the near future, generative AIs will be able to be trained on customizable data sets to match the preferences of their users, who will then be able to request customized creative products, like: “Tell me a story (of specified length) about a post-Heideggerian philosopher named Søren MacLeod struggling to think disclosively in a dystopian world ruled by AIs, and write it in a style that combines the sparing narrative poetry of Cormac McCarthy, the detailed plotting of Neal Stephenson, the dystopian world-building of William Gibson, and the dark humor of Don DeLillo.”Footnote 108 Even such a work will never be anywhere near as good as something actually written by McCarthy, Stephenson, Gibson, or DeLillo (let alone by some fantastic counterfactual collaboration they might have written together). But it might someday be, say (charitably), 70 percent as good, and it will have the advantages not only of actually existing but also of being delivered almost instantaneously, on demand, with the not inconsiderable narcissistic appeal of being about what the reader wants it to be about and even starring that reader (or their chosen surrogates or avatars) as the protagonist. It will, moreover, be a story the reader (or listener) can subsequently request revised iterations on or sequels to without any fear of insulting its author, rather like a child asking a parent for variations on a favorite bedtime story – which, as that suggests, rightly raises concerns about infantilizing adults or stunting their development.
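It also bears noting that rudimentary versions of such customized requests are already possible today. The following is a minimal sketch, assuming OpenAI’s current Python client; the model name, the prompt, and (above all) the quality of the result are assumptions for illustration, not claims about what any existing system achieves:

```python
# A hedged sketch of issuing a style-conditioned creative request to an LLM,
# using OpenAI's Python client (openai>=1.0). The model name below is an
# assumption -- substitute whatever model is current -- and nothing here
# implies the output approaches the quality of the authors named.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

style = ("Write in a style combining the sparing narrative poetry of Cormac "
         "McCarthy, the detailed plotting of Neal Stephenson, the dystopian "
         "world-building of William Gibson, and the dark humor of Don DeLillo.")

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name, for illustration only
    messages=[
        {"role": "system", "content": style},
        {"role": "user", "content": (
            "Tell me a 500-word story about a post-Heideggerian philosopher "
            "named Søren MacLeod struggling to think disclosively in a "
            "dystopian world ruled by AIs."
        )},
    ],
)

print(response.choices[0].message.content)
```

The ease of iterating on such a request – simply appending “make the ending darker” and regenerating – is precisely what gives rise to the bedtime-story dynamic described above.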
So, the big question this leaves us with, in the end, is whether we (or future generations) will learn to be satisfied with something that is only about 70 percent as good as what we get from our best and most creative thinkers, writers, and artists. Students who are still learning to write well, for example, are already particularly vulnerable to the problematic temptation to substitute virtual writing for their own, insofar as they have not yet developed the competence needed to distinguish between expert-level human writing and the more mediocre facsimiles generated by AIs. Some may simply decide (whether from laziness, ignorance, or their own “rational” pursuit of technology’s optimization imperative to get the most for the least) that 70 percent is “good enough” for them. What makes this such a big problem, moreover, is that the same worry applies, in principle, to all the developing AI technologies, from self-driving cars (which are already better than the worst human drivers but will remain far worse than the best), to future films and videos created on demand by AI, and even (to come full circle) to the kind of dystopian scenario imagined by Father John Misty at the beginning. For, it seems likely that virtual sex in the metaverse will never be as potentially meaningful as the real thing, precisely because the latter carries genuine risks and vulnerabilities that will largely be engineered out of the virtual substitute, thereby taking with them the most profound opportunities for personal growth, which requires caring deeply about the matter pursued and being open to the lessons that only come from the painful and joyous vicissitudes of repeated trial and error. For, we truly improve only by doing our best and then feeling terrible when things go poorly and elated when they go well, because that intensive affective feedback motivates us to continue to learn and so grow as human beings and also bestows on us worlds of meaning that genuinely matter to us (as convincingly shown in the influential phenomenology of skill acquisition developed by Dreyfus and extended by Benner, Kelly, Wrathall, and others).Footnote 109 Of course, some might wish to think of the virtualized pornographization of human sexuality (or any of the other examples we have considered) as just harmless or diverting supplements to the real thing; as Derrida used to warn, however, the logic of the supplement is to supplant.
Indeed, as Heidegger teaches us, something subtle yet profoundly important would be lost in a completely technologized world in which we merely impose and refine our preexisting tastes, rather than opening ourselves up to disclosing what that reality outside our existing understanding continues to offer us, seeking with committed and vigilant receptivity to connect with and creatively disclose that polysemic world and sensitively compose the responses through which we continue to learn and grow.Footnote 110 Rather than requesting creative AI storytelling in the style of my favorite authors, for example, I do much better to go out and explore what else the greatest writers have already written – of which there is already much more than any of us can read in a single life, not all of which appeals to my preexisting tastes, but all of which has something important to teach me.Footnote 111
For the very same reasons, moreover, while it might be tempting to maintain a curmudgeonly cynicism about technology in order to try to feel aloof and superior to this ever-encroaching world, doing so would discourage us from learning about this world, its genuinely promising developments as well as its most alarming dangers. Worse, it would also inhibit us from cultivating a sense of existential wonder and even gratitude for this very ontohistorical light through which so much of our world becomes intelligible to us, providing the point of departure we set out from in our very efforts to go beyond it into a more meaningful understanding of ourselves, each other, and our shared worlds and emerging futures. Yes, the technological current of that optimization imperative pushing us all toward nihilism is profoundly dangerous, not only for the particular problems it generates but also for its growing drive toward meaningless lives of empty optimization. And yet, once we recognize that danger and its deepest ontotheological sources, we can learn to use technologies in ways that resist that nihilism and even help move us beyond it. The more balanced and free approach that Heidegger seeks to teach us can thus let us realize the disclosive capacities of genuine thinking that at least some of these technologies can help us develop and refine (as we have seen). At the same time, it can also help us learn to better discern and resist the most dangerous temptations of the ongoing technologization of our world, including its push toward endless optimization and its growing potential to undermine our dedication to cultivating our distinctive disclosive capacities and their most cherished fruits, like those works of responsive and creative human thought that go well beyond the levels our technologies will likely ever be able to reach on their own.
Abbreviations Used for Cited Works by Heidegger
- BT
Being and Time. J. Macquarrie and E. Robinson, trans. New York: Harper & Row, 1962.
- DT
Discourse on Thinking. J. Anderson and E. Freund, trans. New York: Harper & Row, 1966.
- FS
Four Seminars. A. Mitchell and F. Raffoul, trans. Bloomington: Indiana University Press, 2003.
- G
Gelassenheit. Pfullingen: Neske, 1959.
- GA5
Gesamtausgabe, Vol. 5: Holzwege. F.-W. von Herrmann, ed. Frankfurt a. M.: V. Klostermann, 1977.
- GA7
Gesamtausgabe, Vol. 7: Vorträge und Aufsätze. F.-W. von Herrmann, ed. Frankfurt a. M.: V. Klostermann, 2000.
- GA8
Gesamtausgabe, Vol. 8: Was heißt Denken? P.-L. Coriando, ed. Frankfurt a. M.: V. Klostermann, 2002.
- GA16
Gesamtausgabe, Vol. 16: Reden und andere Zeugnisse eines Lebensweges, 1910–1976. H. Heidegger, ed. Frankfurt a. M.: V. Klostermann, 2000.
- GA43
Gesamtausgabe, Vol. 43: Nietzsche: Der Wille zur Macht als Kunst. B. Heimbüchel, ed. Frankfurt a. M.: V. Klostermann, 1985.
- GA47
Gesamtausgabe, Vol. 47: Nietzsches Lehre vom Willen zur Macht als Erkenntnis. E. Hanser, ed. Frankfurt a. M.: V. Klostermann, 1989.
- N1
Nietzsche: The Will to Power as Art. David Farrell Krell, ed. and trans. San Francisco: Harper & Row, 1979.
- N2
Nietzsche: The Eternal Return of the Same. David Farrell Krell, ed. and trans. San Francisco: Harper & Row, 1984.
- N3
Nietzsche: The Will to Power as Knowledge and as Metaphysics. David Farrell Krell, ed. J. Stambaugh, D. F. Krell, and F. Capuzzi, trans. San Francisco: Harper & Row, 1987.
- NI
Nietzsche. Pfullingen: G. Neske, 1961, Vol. I.
- QCT
The Question Concerning Technology. W. Lovitt, trans. New York: Harper & Row, 1977.
- SZ
Sein und Zeit. Tübingen: M. Niemeyer, 1993.
- WCT
What Is Called Thinking? J. G. Gray, trans. New York: Harper & Row, 1968.
Acknowledgments
I presented earlier versions of this work at Franklin & Marshall College (October 11, 2023) and the American Society for Existential Phenomenology (January 13, 2024), then delivered excerpts at the Bombay Beach Biennale (March 24, 2024) and as a keynote address for “Disentangling Heidegger on Technology” in Alto Adige, Italy (June 18, 2024). For helpful criticisms and suggestions, I would especially like to thank Dana Belu, Patricia Benner, Lee Braver, Taylor Carman, Dave Cerbone, Megan Flocken, Morganna Lambeth, Eric Kaplan, Stephan Käufer, Jonathan Krude, Peter Leiser, Dave Liakos, B. Scott Rouse, Tao Ruspoli, Robert Shafer, Darian, Kirsten, and Tamsin Thomson, and Mark Wrathall. Heartfelt thanks as well to Bert Dreyfus, Jacques Derrida, and Andy Feenberg for putting me on this conjunction of paths and to Dan Dahlstrom and Filippo Casati for inviting me to write this Element.
About the editors
Filippo Casati
Lehigh University
Filippo Casati is an Assistant Professor at Lehigh University. He has published an array of articles in such venues as The British Journal for the History of Philosophy, Synthese, Logique et Analyse, Philosophia, Philosophy Compass, and The European Journal of Philosophy. He is the author of Heidegger and the Contradiction of Being (Routledge) and, with Daniel O. Dahlstrom, edited Heidegger on Logic (Cambridge University Press).
Daniel O. Dahlstrom
Boston University
Daniel O. Dahlstrom, John R. Silber Professor of Philosophy at Boston University, has edited twenty volumes, translated Mendelssohn, Schiller, Hegel, Husserl, Heidegger, and Landmann-Kalischer, and authored Heidegger’s Concept of Truth (2001), The Heidegger Dictionary (2013; second, extensively expanded edition, 2023), Identity, Authenticity, and Humility (2017), and over 185 essays, principally on 18th–20th-century German philosophy. With Filippo Casati, he edited Heidegger on Logic (Cambridge University Press).
About the Series
A continual source of inspiration and controversy, the work of Martin Heidegger challenges thinkers across traditions and has opened up previously unexplored dimensions of Western thinking. The Elements in this series critically examine the continuing impact and promise of a thinker who transformed early twentieth-century phenomenology, spawned existentialism, gave new life to hermeneutics, celebrated the truthfulness of art and poetry, uncovered the hidden meaning of language and being, warned of “forgetting” being, and exposed the ominously deep roots of the essence of modern technology in Western metaphysics. Concise and structured overviews of Heidegger’s philosophy offer original and clarifying approaches to the major themes of Heidegger’s work, with fresh and provocative perspectives on its significance for contemporary thinking and existence.