This chapter launches the contemporary section of the book. The overarching argument is that despite the binaries leveraged by leaders and analysts alike, political contestation in the twenty-first century, as in the nineteenth and twentieth, is not reducible to an “Islamist vs. secularist” cleavage. Instead, contestation and key outcomes are driven by shifting coalitions for and against pluralism, notably an Islamo-liberal/secular-liberal coalition that marked the sixth major, pluralizing alignment since the Tanzimat reforms. It would transform state and society, even though the coalition itself proved short-lived as democratization stalled against a backdrop of debates over Islamophobia, the headscarf, minority rights, freedom of expression, media freedoms, and sweeping show trials.
The digital revolution has transformed the dissemination of messages and the construction of public debate. This article examines the disintermediation and fragmentation of the public sphere by digital platforms. Disinformation campaigns, which seek to arrogate the power to define a truth alternative to reality, highlight the need to supplement the traditional view of freedom of expression as a negative freedom with an institutional perspective. The article argues that freedom of expression should be seen as an institution of freedom: an organizational space that grounds a normative theory of public discourse. This theory legitimizes democratic systems and requires proactive regulation to enforce its values.
Viewing freedom of expression as an institution changes the role of public power: it can no longer be limited to abstention but instead carries a positive obligation to regulate the spaces where communicative interactions occur. The article discusses how this regulatory need led to the European adoption of the Digital Services Act (DSA), which corrects digital platforms through procedural constraints. Despite some criticisms, the DSA establishes a foundation for a transnational European public discourse aligned with the Charter of Fundamental Rights and member states’ constitutional traditions.
The “Danish cartoons controversy” has often been cast as a paradigm case of the blindness of liberal language ideologies to anything beyond the communication of referential meaning. This article returns to the case from a different angle and draws a different conclusion. Following recent anthropological interest in the way legal speech grounds the force of law, the article takes as its ethnographic object a 2007 ruling by the French Chamber of the Press and of Public Liberties. This much-trumpeted document ruled that the Charlie Hebdo magazine’s republication of the cartoons did not constitute a hate speech offense. The article examines the form as well as the content of the ruling itself and situates it within the entangled histories of French press law, revolutionary antinomianism, and the surprisingly persistent legal concern with matters of honor. The outcome of the case (the acquittal of Charlie Hebdo) may seem to substantiate a view of liberal language ideology as incapable of attending to the performative effects of signs. Yet, a closer look challenges this now familiar image of Euro-American “representationalism,” and suggests some broader avenues of investigation for a comparative anthropology of liberalism and free speech.
Violence and time are elements that shape children’s lives. For children, time lies largely in the future, while for adults it lies in the past; yet it is within this time that violence is directed toward children because they are children, often with the purpose of shaping their personhood and controlling them. Being able to speak freely about how time and violence socially construct the self-identity of the child is both an important act of resistance against the violence that constructs childhood and an important form of protection. To fight violence, the child rights discourse must move beyond the child’s right to be heard and also take seriously the child’s right to freedom of speech.
Germany’s content moderation law, NetzDG, is often the target of criticism in English-language scholarship as antithetical to Western notions of free speech and the First Amendment. The purpose of this Article is to encourage those engaged in the analysis of transatlantic content moderation schemes to consider how Germany’s self-ideation influences policy decisions. By considering what international relations scholars term ontological security, Germany’s aggressive forays into the content moderation space are better understood as an externalization of Germany’s ideation of itself, which rests upon an absolutist domestic moral and constitutional hierarchy based on the primacy of human dignity. Ultimately, this Article implores American scholars and lawmakers to consider the impact of this subconscious ideation when engaging with Germany and the European Union in an increasingly multi-polar cyberspace.
Dean John Wade, who replaced the great torts scholar William Prosser on the Restatement (Second) of Torts, put the finishing touches on the defamation sections in 1977.1 Apple Computer had been founded a year before, and Microsoft two, but relatively few people owned computers yet. The twenty-four-hour news cycle was not yet a thing, and most Americans still trusted the press.2
The term “content moderation,” a holdover from the days of small bulletin-board discussion groups, is quite a bland way to describe an immensely powerful and consequential aspect of social governance. Today’s largest platforms make judgments on millions of pieces of content a day, with world-shaping consequences. And in the United States, they do so mostly unconstrained by legal requirements. One senses that “content moderation” – the preferred term in industry and in the policy community – is something of a euphemism for content regulation, a way to cope with the unease that attends the knowledge (1) that so much unchecked power has been vested in so few hands and (2) that the alternatives to this arrangement are so hard to glimpse.
This chapter addresses an underappreciated source of epistemic dysfunction in today’s media environment: true-but-unrepresentative information. Because media organizations are under tremendous competitive pressure to craft news that is in harmony with their audience’s preexisting beliefs, they have an incentive to accurately report on events and incidents that are selected, consciously or not, to support an impression that is exaggerated or ideologically convenient. Moreover, these organizations have to engage in this practice in order to survive in a hypercompetitive news environment.1
What is the role of “trusted communicators” in disseminating knowledge to the public? The trigger for this question, which is the topic of this set of chapters, is the widely shared belief that one of the most notable, and noted, consequences of the spread of the internet and social media is the collapse of sources of information that are broadly trusted across society, because the internet has eliminated the power of the traditional gatekeepers1 who identified and created trusted communicators for the public. Many commentators argue this is a troubling development because trusted communicators are needed for our society to create and maintain a common base of facts, accepted by the broader public, that is essential to a system of democratic self-governance. Absent such a common base or factual consensus, democratic politics will tend to collapse into polarized camps that cannot accept the possibility of electoral defeat (as they arguably have in recent years in the United States). I aim here to examine recent proposals to resurrect a set of trusted communicators and the gatekeeper function, and to critique them from both practical and theoretical perspectives. But before we can discuss possible “solutions” to the lack of gatekeepers and trusted communicators in the modern era, it is important to understand how those functions arose in the pre-internet era.
The laws of defamation and privacy are at once similar and dissimilar. Falsity is the hallmark of defamation – the sharing of untrue information that tends to harm the subject’s standing in their community. Truth is the hallmark of privacy – the disclosure of facts about an individual who would prefer those facts to be private. Publication of true information cannot be defamatory; spreading of false information cannot violate an individual’s privacy. Scholars of either field could surely add epicycles to that characterization – but it does useful work as a starting point of comparison.
The commercial market for local news in the United States has collapsed. Many communities lack a local paper. These “news deserts,” comprising about two-thirds of the country, have lost a range of benefits that local newspapers once provided. Foremost among these benefits was investigative reporting – local newspapers at one time played a primary role in investigating local government and commerce and then reporting the facts to the public. It is rare for someone else to pick up the slack when the newspaper disappears.
An entity – a landlord, a manufacturer, a phone company, a credit card company, an internet platform, a self-driving-car manufacturer – is making money off its customers’ activities. Some of those customers are using the entity’s services in ways that are criminal, tortious, or otherwise reprehensible. Should the entity be held responsible, legally or morally, for its role (however unintentional) in facilitating its customers’ activities? This question has famously been at the center of the debates about platform content moderation,1 but it can come up in other contexts as well.2
Coordinated campaigns of falsehoods are poisoning public discourse.1 Amidst a torrent of social-media conspiracy theories and lies – on topics as central to the nation’s wellbeing as elections and public health – scholars and jurists are turning their attention to the causes of this disinformation crisis and the potential solutions to it.
Current approaches to content moderation generally assume the continued dominance of “walled gardens”: social-media platforms that control who can use their services and how. Whether the discussion is about self-regulation, quasi-public regulation (e.g., Facebook’s Oversight Board), government regulation, tort law (including changes to Section 230), or antitrust enforcement, the assumption is that the future of social media will remain a matter of incrementally reforming a small group of giant, closed platforms. But, viewed from the perspective of the broader history of the internet, the dominance of closed platforms is an aberration. The internet initially grew around a set of open, decentralized applications, many of which remain central to its functioning today.
Political scientist and ethicist Russell Hardin observed that “trust depends on two quite different dimensions: the motivation of the potentially trusted person to attend to the truster’s interests and his or her competence to do so.”1 Our willingness to trust an actor thus generally turns on inductive reasoning: our perceptions of that actor’s motives and competence, based on our own experiences with that actor.2 Trust and distrust are also both episodic and comparative concepts, as whether we trust a particular actor depends in part on when we are asked – and to whom we are comparing them.3 And depending on our experience, distrust is sometimes wise: “[D]istrust is sometimes the only credible implication of the evidence. Indeed, distrust is sometimes not merely a rational assessment but it is also benign, in that it protects against harms rather than causing them.”4
Almost all platforms for user-generated content have written policies around what content they are and are not willing to host, even if these policies are not always public. Even platforms explicitly designed to host adult content, such as OnlyFans,1 have community guidelines. Of course, different platforms’ content policies can differ widely in multiple regards. Platforms differ on everything from what content they do and do not allow, to how vigorously they enforce their rules, to the mechanisms for enforcement itself. Nevertheless, nearly all platforms have two sets of content criteria: one set of rules setting a minimum floor for what content the platform is willing to host at all, and a more rigorous set of rules defining standards for advertising content. Many social-media platforms also have additional criteria for what content they will actively recommend to users that differ from their more general standards of what content they are willing to host at all.
A central tenet of contemporary First Amendment law is the metaphor of the marketplace of ideas – that the solution to bad speech is more, better speech.1 This basic idea is well established in both judicial and scholarly writing – but it is not without its critics. My contribution to this volume adds a new criticism of the marketplace-of-ideas metaphor. I argue that there are circumstances where ostensibly “good” speech may be indistinguishable by listeners from bad speech – indeed, that there are cases in which any incremental speech can actually make other good speech indistinguishable from bad speech. In such cases, seemingly “good” speech has the effect of “bad” speech. I call this process, by which ostensibly good speech turns the effects of other speech bad, a “noisy speech externality.”
It’s accually obsene what you can find out about a person on the internet.1
To some, this typo-ridden remark might sound banal. We know that our data drifts around online, with digital flotsam and jetsam washing up sporadically on different websites across the internet. Surveillance has been so normalized that, these days, many people aren’t distressed when their information appears in a Google search, even if they sometimes fret about their privacy in other settings.
It was 1971 and Los Angeles Times editor Nick Williams had what he called a “terribly uneasy feeling.” In a letter to one of the paper’s Washington correspondents, he wrote of his suspicion that journalism had “lost credibility … with an alarming percentage of the people.” If the plummet continued, Williams fretted, journalists would have “destroyed or weakened a keystone of our Constitution.”1