This chapter details the formation of the MAS movement, from local teachers, students, artists, and activists to national-level supporters (e.g., professional and scholarly organizations, the hip hop/funk group Ozomatli, and the cartoonist Lalo Alcaraz). Of particular importance was the formation of the “Tucson 11” – a group of MAS educators who filed a federal lawsuit challenging the constitutionality of the state law on First and Fourteenth Amendment grounds. Additionally, in this chapter, we explore both the importance of the documentary Precious Knowledge in supporting this movement and how the director’s alleged rape of one of the former MAS students became the source of lasting community wounds that ran throughout the movement.
On August 22, 2017, Judge Tashima issued a blistering ruling finding that state representatives created the law and banned MAS based upon racial animus and partisan political gain in violation of the First and Fourteenth Amendment rights of Mexican American students in TUSD. There was a massive local and national uproar, celebrating the end of this racist law. Though different Tucson factions claimed shared victory due to the ruling, persistent community divisions remained. This chapter details the post-ruling celebrations, the continued community divisions, a summary of where the key actors in this drama ended up, the current state of MAS in TUSD, and the national Ethnic Studies renaissance that the Tucson struggle spawned. Of equal importance, this chapter details how the lessons of the MAS controversy can help inform the work of those challenging Critical Race Theory bans throughout the country.
This chapter looks at how the police power has evolved in judicial interpretations and legislative enactments to the present day. It begins by exploring how shifting approaches to regulatory governance more generally, along with various state constitutional developments of the past two centuries, affected thinking about the overall structure and purpose of state regulatory authority. It then turns to a number of critical areas in which the police power was used as a tool for protecting health, safety, welfare, and the common good. It begins with morals, a linchpin of traditional police power regulation, and then proceeds to discuss urban blight, occupational licensing, and public health emergencies.
This chapter touches upon the very large topic of how individual rights interact with the police power. In what sense and to what degree do rights constrain state and local exercises of the police power? It is a shibboleth that regulatory power is limited by rights. But this chapter interrogates these issues in more depth and detail, discussing how rights claims are framed in connection with the police power and how the government’s assertions of power are circumscribed by particular doctrines and arguments in courts. Further, the chapter considers how the debate over the nature and content of so-called positive rights implicates police power questions – questions concerning authority and content.
Germany’s content moderation law – NetzDG – is often the target of criticism in English-language scholarship as antithetical to Western notions of free speech and the First Amendment. The purpose of this Article is to encourage those engaged in the analysis of transatlantic content moderation schemes to consider how Germany’s self-ideation influences policy decisions. By considering what international relations scholars term ontological security, Germany’s aggressive forays into the content moderation space are better understood as an externalization of Germany’s ideation of itself, which rests upon an absolutist domestic moral and constitutional hierarchy based on the primacy of human dignity. Ultimately, this Article implores American scholars and lawmakers to consider the impact of this subconscious ideation when engaging with Germany and the European Union in an increasingly multi-polar cyberspace.
The United States’ free speech regime, as codified in the First Amendment to the United States Constitution, stands in obvious contrast to Thailand’s ill-famed lèse-majesté law – Section 112 of the Thai Criminal Code – which prohibits defamation or even truthful degradation of the Thai King and Royal Family. Recent scholarship has focused on such differences and has largely depicted the two regimes as diametric opposites. When viewing the First Amendment and Thailand’s lèse-majesté law in temporal isolation, the recent scholarly consensus has significant merit. However, analyzing the two regimes over time reveals similarities suggesting that each regime represents its country’s attempt to accommodate competing and changing values present within that country.
Dean John Wade, who replaced the great torts scholar William Prosser on the Restatement (Second) of Torts, put the finishing touches on the defamation sections in 1977.1 Apple Computer had been founded a year before, and Microsoft two, but relatively few people owned computers yet. The twenty-four-hour news cycle was not yet a thing, and most Americans still trusted the press.2
The term “content moderation,” a holdover from the days of small bulletin-board discussion groups, is quite a bland way to describe an immensely powerful and consequential aspect of social governance. Today’s largest platforms make judgments on millions of pieces of content a day, with world-shaping consequences. And in the United States, they do so mostly unconstrained by legal requirements. One senses that “content moderation” – the preferred term in industry and in the policy community – is something of a euphemism for content regulation, a way to cope with the unease that attends the knowledge (1) that so much unchecked power has been vested in so few hands and (2) that the alternatives to this arrangement are so hard to glimpse.
This chapter addresses an underappreciated source of epistemic dysfunction in today’s media environment: true-but-unrepresentative information. Because media organizations are under tremendous competitive pressure to craft news that is in harmony with their audience’s preexisting beliefs, they have an incentive to accurately report on events and incidents that are selected, consciously or not, to support an impression that is exaggerated or ideologically convenient. Moreover, these organizations have to engage in this practice in order to survive in a hypercompetitive news environment.1
What is the role of “trusted communicators” in disseminating knowledge to the public? The trigger for this question, which is the topic of this set of chapters, is the widely shared belief that one of the most notable, and noted, consequences of the spread of the internet and social media is the collapse of sources of information that are broadly trusted across society, because the internet has eliminated the power of the traditional gatekeepers1 who identified and created trusted communicators for the public. Many commentators argue this is a troubling development because trusted communicators are needed for our society to create and maintain a common base of facts, accepted by the broader public, that is essential to a system of democratic self-governance. Absent such a common base or factual consensus, democratic politics will tend to collapse into polarized camps that cannot accept the possibility of electoral defeat (as they arguably have in recent years in the United States). I aim here to examine recent proposals to resurrect a set of trusted communicators and the gatekeeper function, and to critique them from both practical and theoretical perspectives. But before we can discuss possible “solutions” to the lack of gatekeepers and trusted communicators in the modern era, it is important to understand how those functions arose in the pre-internet era.
The laws of defamation and privacy are at once similar and dissimilar. Falsity is the hallmark of defamation – the sharing of untrue information that tends to harm the subject’s standing in their community. Truth is the hallmark of privacy – the disclosure of facts about an individual who would prefer those facts to be private. Publication of true information cannot be defamatory; spreading of false information cannot violate an individual’s privacy. Scholars of either field could surely add epicycles to that characterization – but it does useful work as a starting point of comparison.
The commercial market for local news in the United States has collapsed. Many communities lack a local paper. These “news deserts,” comprising about two-thirds of the country, have lost a range of benefits that local newspapers once provided. Foremost among these benefits was investigative reporting – local newspapers at one time played a primary role in investigating local government and commerce and then reporting the facts to the public. It is rare for someone else to pick up the slack when the newspaper disappears.
An entity – a landlord, a manufacturer, a phone company, a credit card company, an internet platform, a self-driving-car manufacturer – is making money off its customers’ activities. Some of those customers are using the entity’s services in ways that are criminal, tortious, or otherwise reprehensible. Should the entity be held responsible, legally or morally, for its role (however unintentional) in facilitating its customers’ activities? This question has famously been at the center of the debates about platform content moderation,1 but it can come up in other contexts as well.2
Coordinated campaigns of falsehoods are poisoning public discourse.1 Amidst a torrent of social-media conspiracy theories and lies – on topics as central to the nation’s wellbeing as elections and public health – scholars and jurists are turning their attention to the causes of this disinformation crisis and the potential solutions to it.
Current approaches to content moderation generally assume the continued dominance of “walled gardens”: social-media platforms that control who can use their services and how. Whether the discussion is about self-regulation, quasi-public regulation (e.g., Facebook’s Oversight Board), government regulation, tort law (including changes to Section 230), or antitrust enforcement, the assumption is that the future of social media will remain a matter of incrementally reforming a small group of giant, closed platforms. But, viewed from the perspective of the broader history of the internet, the dominance of closed platforms is an aberration. The internet initially grew around a set of open, decentralized applications, many of which remain central to its functioning today.
Political scientist and ethicist Russell Hardin observed that “trust depends on two quite different dimensions: the motivation of the potentially trusted person to attend to the truster’s interests and his or her competence to do so.”1 Our willingness to trust an actor thus generally turns on inductive reasoning: our perceptions of that actor’s motives and competence, based on our own experiences with that actor.2 Trust and distrust are also both episodic and comparative concepts, as whether we trust a particular actor depends in part on when we are asked – and to whom we are comparing them.3 And depending on our experience, distrust is sometimes wise: “[D]istrust is sometimes the only credible implication of the evidence. Indeed, distrust is sometimes not merely a rational assessment but it is also benign, in that it protects against harms rather than causing them.”4
A central tenet of contemporary First Amendment law is the metaphor of the marketplace of ideas – that the solution to bad speech is more, better, speech.1 This basic idea is well established in both judicial and scholarly writing – but it is not without its critics. My contribution to this volume adds a new criticism of the marketplace-of-ideas metaphor. I argue that there are circumstances where ostensibly “good” speech may be indistinguishable by listeners from bad speech – indeed, that there are cases in which any incremental speech can actually make other good speech indistinguishable from bad speech. In such cases, seemingly “good” speech has the effect of “bad” speech. I call this process by which ostensibly good speech turns the effects of other speech bad “a noisy speech externality.”
It’s accually obsene what you can find out about a person on the internet.1
To some, this typo-ridden remark might sound banal. We know that our data drifts around online, with digital flotsam and jetsam washing up sporadically on different websites across the internet. Surveillance has been so normalized that, these days, many people aren’t distressed when their information appears in a Google search, even if they sometimes fret about their privacy in other settings.
It was 1971 and Los Angeles Times editor Nick Williams had what he called a “terribly uneasy feeling.” In a letter to one of the paper’s Washington correspondents, he wrote of his suspicion that journalism had “lost credibility … with an alarming percentage of the people.” If the plummet continued, Williams fretted, journalists will have “destroyed or weakened a keystone of our Constitution.”1