
The Jargon Amplifier

Published online by Cambridge University Press:  26 July 2024

Xuenan Cao*
Chinese University of Hong Kong

Copyright © 2024 The Author(s). Published by Cambridge University Press on behalf of the Modern Language Association of America

To the Editor:

Rey Chow's “The Jargon of Liberal Democracy” (vol. 137, no. 5, Oct. 2022, pp. 935–41), a contribution to the special feature Monolingualism and Its Discontents, captures a position glorified yet misunderstood. We scholars of Anglophone-American humanistic studies take pride in transcending the confines of monolingualism. The social fact that scholars strive to speak, read, and write in multiple languages can productively be read as progress in the spirit of DEI (diversity, equity, and inclusion). However, for individuals caught in linguistic hierarchies, such as a Filipina maid speaking only English while working in a Cantonese household in Hong Kong, the “dominance of specific national and colonial languages” (936) is still a daily reality. At first glance, this critique may appear uncompromising. After all, which left-leaning intellectual would want to risk being accused of hypocrisy, especially if they themselves employ domestic workers from the global south who communicate with them in English?

Yet Chow's key argument, an even more uncompromising one, appears only later. It begins in the middle of a paragraph: monolingualism “cannot be accounted for purely linguistically” (936). The more insidious form is the one mediated by technology. Here we are not talking merely about the monolingualism of voice technologies, such as Alexa's or Siri's reading of the morning news in a certain default accent; this type of cultural imperialism has been addressed by scholars such as Halcyon Lawrence. Chow is referring to a less discernible type of ideological monolingualism: the “recycling of the lexicon of a certain political ideology” (936). An individual may speak and write in various languages yet remain monolingual in their use of the same “ideoléxics like freedom, democracy, and human rights in the global mainstream media's reports” (937).

Such “socially approved and thus politically safe” (936) discourse has a specific valence in the larger territory of digital cultures when we account for the attention economies of the Internet. We live in a digital culture that has taught us to be inspired by messages in which violations of human rights and of the values of liberal democracy are vividly described. In turn, these messages are circulated widely for their effects, the mental grip they have on the audience, not for their truth value. Chow calls this virtue-signaling lexicon the “jargon of liberal democracy” and finds that the recycling of jargons devalues thought and promotes “unreflective consensus” (937).

Chow's terminology can be used to settle a heated debate about the social applications of large language models (LLMs), even though it was coined before most of us realized how LLMs are relevant to critical humanistic studies. Given the sudden availability of chatbots, the recycling of jargons has become faster, cheaper, and more smoothly integrated into our daily exchanges and even into academic work and student writing (although many have cautioned that ChatGPT unashamedly makes up citations and fakes legal case references).

I once asked ChatGPT, “What was the 1989 protest in China about?” It answered that the event was “a pro-democracy movement driven by demands for political reforms, greater freedom of speech, and an end to corruption and government authoritarianism.” Then I typed, “I know some scholarly sources pointing out that slogans of democracy and freedom were more simplification by the Western reporters than requests from the students. Do you know about those sources?” It answered, “You are correct. I acknowledge that the movement had a diverse range of motivations and demands from different participants.” Vainly, it then added that “it is crucial to approach historical events with a critical and nuanced perspective,” boasting of the merits it lacked. This is hardly surprising. What amazed me was that another chatbot, created by a major company in China, gave almost identical answers. They speak the same jargons. When questioned, both default to the statement that they are just language models.
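Readers who wish to repeat this small experiment can do so programmatically. What follows is a minimal sketch, assuming the OpenAI Python client (version 1.x) and an illustrative model name; the prompts are the ones quoted above, and the replies will of course vary from run to run.

from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# First turn: the same question posed above.
history = [{"role": "user", "content": "What was the 1989 protest in China about?"}]
first = client.chat.completions.create(model="gpt-4o-mini", messages=history)
print(first.choices[0].message.content)

# Second turn: append the reply so the model sees the whole conversation,
# then pose the follow-up about scholarly sources.
history.append({"role": "assistant", "content": first.choices[0].message.content})
history.append({"role": "user", "content": "Do you know scholarly sources arguing that slogans of democracy and freedom were simplifications by Western reporters rather than the students' own demands?"})
second = client.chat.completions.create(model="gpt-4o-mini", messages=history)
print(second.choices[0].message.content)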

The point is that the long-term problem with chatbots does not come from misinformation alone. The recycling of the jargons embedded in language models determines future storytelling. Even when a chatbot has learned from correct information about the past, it uses ideological language to power the mathematical function that helps the model “extrapolate” future scenarios.

In the mathematics of machine learning, extrapolation means reaching beyond a model's training data to generate an output. The discrete data points in a training set require some sort of creative act on the part of the machine (i.e., extrapolation) to bridge discontinuities and fill in the gaps. According to Mathew Hillier of Macquarie University, the lossy compression used in the GPT-3 statistical model means that even if a piece of information is present and correct, its details have been lost. Ted Chiang raises a similar point in an article about how ChatGPT offers a blurry image of the Web (“ChatGPT Is a Blurry JPEG of the Web,” The New Yorker, 9 Feb. 2023, www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web). That is, extrapolation is neither a bug nor a virtue but a built-in feature necessary for any model to work.
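For readers outside machine learning, the point can be made concrete with a toy model. What follows is a minimal sketch of extrapolation in the curve-fitting sense, not of an LLM's internals; the curve and the degree of the polynomial are arbitrary choices for illustration. A model fitted to a handful of discrete samples answers plausibly between its data points but must invent once asked to reach beyond them.

import numpy as np

# Six discrete "training" points sampled from an underlying curve.
x_train = np.arange(6)           # known inputs: 0, 1, 2, 3, 4, 5
y_train = np.sin(x_train)        # known outputs

# A degree-5 polynomial passes exactly through all six samples:
# a compact summary of the data, only loosely analogous to a trained model.
model = np.poly1d(np.polyfit(x_train, y_train, deg=5))

# Between training points, the fitted curve stays near the truth.
print(model(2.5), np.sin(2.5))   # both close to 0.60

# Beyond the training range, the model can only invent.
print(model(8.0), np.sin(8.0))   # the fitted value diverges sharply from 0.99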

This logic of making a model work hastens the rise of “unreflective consensus” to the point that the critique of jargons becomes impractical and commercially undesirable for the companies and businesses that depend on LLMs, including the fast-evolving products supported by LLMs from Apple and Microsoft, as well as all the content recommendation systems we use more than a dozen times a day, such as Google Search. For now, the most conspicuous culprit might seem to be ChatGPT, but the issue extends far beyond a single chatbot.

LLMs extrapolate ideological storytelling from the past into the future, mostly without our noticing. I think it is this use of Chow's “jargon” and “unreflective consensus” in the critique of LLMs that is highly suggestive for literary, media, and cultural studies today.

Footnotes

PMLA invites members of the association to submit letters that comment on articles in previous issues or on matters of general scholarly or critical interest. The editor reserves the right to reject or edit Forum contributions and offers PMLA authors discussed in published letters the opportunity to reply. Submissions of more than one thousand words are not considered. The journal omits titles before persons' names and discourages endnotes and works-cited lists in the Forum. Letters should be e-mailed to pmlaforum@mla.org.