As its name indicates, algorithmic regulation relies on the automation of regulatory processes through algorithms. Examining the impact of algorithmic regulation on the rule of law hence first requires an understanding of how algorithms work. In this chapter, I therefore start by focusing on the technical aspects of algorithmic systems (Section 2.1), and complement this discussion with an overview of their societal impact, emphasising their societal embeddedness and the consequences thereof (Section 2.2). Next, I examine how and why public authorities rely on algorithmic systems to inform and take administrative acts, with special attention to the historical adoption of such systems, and their impact on the role of discretion (Section 2.3). Finally, I draw some conclusions for subsequent chapters (Section 2.4).
This chapter focuses on the Black body in the narrative genre of passing literature, which combines issues of embodiment with those of visuality. It begins by arguing that, whereas recent literary culture habituates us to immediacy, access, and confession, the passing plot operates on different terms. At a moment when many artists and critics are arguing for the importance of opacity to relational frameworks, the passing plot comes into focus as a special testing ground for viewing racialized embodiment and ethical sociality in fresh ways. The chapter goes on to argue that just as the passing plot proves a rich container for considering the ethics of relation, dramatic literature offers a particularly productive platform for considering passing literature today. My case study for these claims is Branden Jacobs-Jenkins’s play An Octoroon (2014). A metatheatrical riff on a prominent nineteenth-century melodrama called The Octoroon (1859), the play avoids conveying some intimate truth about racial embodiment – the secret ostensibly kept by the passing figure – in order to offer new opportunities for Jacobs-Jenkins’s audience to become aware of their embodied participation in acts of racialization.
Deep neural networks are said to be opaque, impeding the development of safe and trustworthy artificial intelligence, but where this opacity stems from is less clear. What are the sufficient properties for neural network opacity? Here, I discuss five common properties of deep neural networks and two different kinds of opacity. Which of these properties are sufficient for what type of opacity? I show how each kind of opacity stems from only one of these five properties, and then discuss to what extent the two kinds of opacity can be mitigated by explainability methods.
There is a broad consensus that human supervision holds the key to sound automated decision-making: if a decision-making policy uses the predictive outputs of a statistical algorithm, but those outputs form only part of a decision that is made ultimately by a human actor, use of those outputs will not (per se) fall foul of the requirements for due process in public and private decision-making. Thus, the focus in academic and judicial spheres has been on making sure that humans are equipped and willing to wield this ultimate decision-making power. Yet, proprietary software obscures the reasons for any given prediction; this is true both for machine learning and deterministic algorithms. And without these reasons, the decision-maker cannot accord appropriate weight to that prediction in their reasoning process. Thus, a policy of using opaque statistical software to make decisions about how to treat others is unjustified, however involved humans are along the way.
This chapter closes Part 1 by analysing how the opacity surrounding the use of AI and ADM tools by financial corporations is enabled, and even encouraged, by the law. As other chapters in the book demonstrate, such opacity brings about significant risks to fundamental rights, consumer rights, and the rule of law. Analysing examples from jurisdictions including the US, UK, EU, and Australia, Bednarz and Przhedetsky unpack how financial entities often rely on rules and market practices protecting corporate secrecy, such as complex credit scoring systems, proprietary rights to AI models and data, as well as the carve-out of ‘non-personal’ information from data and privacy protection laws. The authors then focus on the rules incentivising the use of AI and ADM tools by financial entities, showing how they provide a shield behind which corporations can hide their consumer scoring and rating practices. The authors also explore potential regulatory solutions that could break the opacity and ensure transparency, introducing direct accountability and scrutiny of ADM and AI tools, and reducing the control of financial corporations over people’s data.
This essay identifies two approaches to theorizing the relationship between financialization and contemporary art. The first departs from an analysis of how market logics in non-financial spheres are being transformed to facilitate financial circulation; the other considers valuation practices in financial markets (and those related to derivative instruments in particular) from a socio-cultural perspective. According to the first approach, the contemporary art market is in theory a hostile environment for financialization, although new practices are emerging that are increasing its integration with the financial sphere. The second approach identifies socio-cultural similarities between the logics by which value is extracted, amplified, and distributed through derivative instruments and contemporary art. The two approaches present a discrepancy: on the one hand, contemporary art functions as an impediment to outright financialization because of market opacity; on the other, contemporary art represents a socio-cultural analog to derivative instruments. The essay concludes by setting out the terms for a more holistic understanding of contemporary art's relationship to financialization, which would enable an integration of its economic and socio-cultural dimensions.
This chapter discusses the reflexive relationship between qualitative researchers and the process of selecting, forming, processing and interpreting data in algorithmic qualitative research. Drawing on Heidegger’s ideas, it argues that such research is necessarily synthetic – even creative – in that these activities inflect, and are in turn inflected by, the data itself. Thus, methodological transparency is key to understanding how different types of meanings become infused in the process of algorithmic qualitative research. While algorithmic research practices provide multiple opportunities for creating transparent meaning, researchers are urged to consider how such practices can also introduce and reinforce human and algorithmic bias in the form of unacknowledged introduction of perspectives into the data. The chapter demonstrates this reflexive dance of meaning and bias using an illustrative case of topic modelling. It closes by offering some recommendations for engaging actively with the domain, considering a multi-disciplinary approach, and adopting complementary methods that could potentially help researchers in fostering transparency and meaning.
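To make the chapter’s illustrative case more concrete, the following is a minimal, hypothetical sketch of a topic-modelling workflow of the kind discussed, using scikit-learn’s LatentDirichletAllocation on an invented three-document corpus. It is not the authors’ own pipeline; its point is simply to show where researcher choices (stop-word list, token filters, number of topics) enter the process and shape what the algorithm appears to “find” in the data.

```python
# Minimal topic-modelling sketch (hypothetical corpus and parameters).
# Preprocessing and parameter choices are themselves interpretive acts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = [
    "interview transcript about workplace transparency and trust",
    "field notes on algorithmic decision making in hiring",
    "interview transcript about bias in automated credit scoring",
]

# The stop-word list and token filters decide what counts as "data".
vectorizer = CountVectorizer(stop_words="english", min_df=1)
doc_term = vectorizer.fit_transform(documents)

# The number of topics is set by the researcher, not discovered by the corpus.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top_terms = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top_terms)}")
```

Changing any of the assumed settings above (for example, n_components or the stop-word list) yields different topics from the same texts, which is precisely the kind of unacknowledged introduction of perspective the chapter asks researchers to make transparent.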
Kant’s moral philosophy both enjoins the acquisition of self-knowledge as a duty, and precludes certain forms of its acquisition via what has become known as the Opacity Thesis. This article looks at several recent attempts to solve this difficulty and argues that they are inadequate. I argue instead that the Opacity Thesis rules out only the knowledge that one has acted from genuine moral principles, but does not apply in cases of moral failure. The duty of moral self-knowledge applies therefore only to one’s awareness of one’s status as a moral being and to the knowledge of one’s moral failings, both in particular actions and one’s overall character failings, one’s vices. This kind of knowledge is morally salutary as an aid to discovering one’s individual moral weakness as well as the subjective ends for which one acts, and in this way for taking up the morally required end of treating human beings as human beings. In this way, moral self-knowledge can be understood as a necessary element of moral improvement, and I conclude by suggesting several ways to understand it thereby as genuinely primary among the duties to oneself.
It is natural to think that social groups are concrete material particulars, but this view faces an important objection. Suppose the chess club and nature club have the same members. Intuitively, these are different clubs even though they have a common material basis. Some philosophers take these intuitions to show that the materialist view must be abandoned. I propose an alternative explanation. Social groups are concrete material particulars, but there is a psychological explanation of nonidentity intuitions. Social groups appear coincident but nonidentical because they are perceived to be governed by conflicting social norms.
In this article, the three co-authors collaboratively address practices of queering in relation to the Parisian choreographer of color Nyota Inyoka (1896–1971), whose biography and identity remain mysterious even after extensive research. Writing from three different research perspectives and relating to three different aspects of her life and work, the co-authors analyze Nyota Inyoka and practices of Queering the Archive, her staging of Shiva as a performance of (culturally) “queer possibility,” and the act of remembering Nyota Inyoka in a contemporary context in terms of queering ethnicity and “cultural belonging.” Juxtaposing and interweaving notions and practices of queering and créolité/creolizing over the course of the article, the co-authors attempt to respect Nyota Inyoka's “right to opacity” (Glissant [1996] 2020, 45) and remember her on her own terms.
Chapter 4 conducts a qualitative assessment of the substantive rules underlying Eurozone economic governance, with a view to testing their actual contribution to trust-generation and uncertainty-reduction. Focusing on the Eurozone’s fiscal policy rules, the Chapter shows that the common fiscal discipline of the Eurozone suffers from serious qualitative flaws, pertaining to the rules’ complexity, their opacity, their internal inconsistency and the unconstrained discretion that the Commission enjoys as their main enforcer. It argues that the system’s reliance on policy rules has become excessive and counterproductive, as it now works against the objectives of certainty, stability and equality that it was supposed to achieve, instils distrust and facilitates arbitrariness. Hence, the Chapter highlights the pressing need for an overhaul of the existing policy rules and a deeper institutional reflection on the legitimacy of the rules-based approach to economic and fiscal governance.
The primary interest of sandhi in Romance is as a morphological phenomenon. Adaptation of word forms to a variety of sandhi contexts gives rise to allomorphy (paradigmatic variation). Such adaptation reflects natural phonological processes which tend to reduce the markedness of sequences of phonological elements. We illustrate from Catalan and French strategies to avoid hiatus, and from Catalan and Occitan strategies to simplify consonant clusters. Romance also attests subphonemic alternations in sandhi environments, and we draw attention to cases such as intersonorant lenition of initial voiced stops in much of south-western Romance. A striking feature of Romance sandhi alternations is the readiness with which they may become morphologized or lexicalized. This outcome may arise from subsequent sound changes that make the original motivated alternation opaque, or from levelling of allomorphic alternation that makes the distribution of allomorphs opaque. We review an example of such a change in progress: the aspiration/loss of coda /s/ in Andalusian Spanish. Occasionally, a morphologized/lexicalized alternation may be (partly) remotivated, as is famously the case with rafforzamento fonosintattico ‘phonosyntactic strengthening’ in standard Italian. However, the phenomena of elision and liaison in modern French exemplify morphophonemic arbitrariness with very extensive incidence.
This chapter covers umlaut and ablaut as morphological (rather than phonological) processes, affix order and bracketing paradoxes, subcategorization and stratum ordering, and a critique of Optimality Theory with respect to its ability to account for major phonological patterns in English, as described in rule terms in the preceding chapters. These include stress, vowel shift, and laxing. Special attention is given to opacity. Opacity presents the same problem to Optimality Theory as it does to pre-Generative structuralist phonology, due to its output orientation. Velar Softening is opaque in medicate (underapplication) and in criticize (overapplication). Various patches proposed to deal with this issue have involved the reintroduction of the intermediate derivational stages that Optimality Theory was designed to eliminate. These patches do not allow for Duke of York derivations such as that which appears in English in the derivation of pressure. The device of stratal Optimality Theory, combining level ordering and constraints differently ranked on different strata, can account for some Duke of York derivations but at the expense of making some postlexical processes lexical.
One characteristic interpretive technique in the discourse of customary international law is the identification of such norms as 'possibly emerging' or possibly in existence. Thus it is frequently asserted that a putative norm 'may' have or 'probably has' customary status. This hypothetical mode of analysis can give rise to the speculative construction of international obligations driven more by preference than by evidence. This speculative rhetorical technique is examined by reference to the account of the temporal dimensions of the emergence of customary international law provided in the Chagos Archipelago Advisory Opinion of 2019. Here the International Court of Justice endeavoured to pin down the time of origin and path of evolution of a customary norm requiring territorial integrity in the context of decolonisation as self-determination. This chapter engages with this ubiquitous characteristic of the interpretation of customary international law and argues that the accompanying opacity in relation to international legal norms – norms that are held to generate obligations – is to be deplored.
This paper explores a neglected normative dimension of algorithmic opacity in the workplace and the labor market. It argues that explanations of algorithms and algorithmic decisions are of noninstrumental value. That is because explanations of the structure and function of parts of the social world form the basis for reflective clarification of our practical orientation toward the institutions that play a central role in our life. Using this account of the noninstrumental value of explanations, the paper diagnoses distinctive normative defects in the workplace and economic institutions which a reliance on AI can encourage, and which lead to alienation.
Between 1958 and 2016, the French Caribbean novel is resoundingly about the French Caribbean, less invested in dislocation and displacement—a number of novels of the 1960s and 1970s do focus on the alienation of exiled female protagonists in Africa and France—than in grounding, naming, reclaiming, bringing home. This foregrounding of the local acquired particular political urgency in the wake of departmentalisation (1946), which sparked a process of decreolisation that was accelerated through the French education system and media in subsequent decades. The urge to explore and validate home ground, and to preserve and celebrate Creole memory, becomes more explicit from the late 1980s, and reaches its fullest articulation in the Eloge de la créolité (1989). Despite accusations of nostalgia, even very contemporary novels look to the past, often celebrating a waning Creole culture. That such novels are usually set after Abolition (1848), and that so few novels place slavery front and centre of the narrative, does not, however, mean that the story of slavery is ignored, marginalised or irrelevant. The discontinuity between the overwhelming extra-literary presence of slavery (in interviews with novelists, and in their cultural/media work), and its relative diegetic absence, is more apparent than real: almost all Antillean fiction is haunted by this absent-presence, and can only be fully understood through it.
A recently published scheme for inertial fusion based on instantaneous heating of an uncompressed fuel is criticized. It is shown that efficient fusion and “time-like” fusion burn propagation cannot be realized due to the low nuclear reaction cross-sections. The suggested use of nanospheres inside the volume of the target to support the fast heating of the fuel is also questioned.
In neutron-star mergers, the radioactive decay of freshly synthesized heavy elements produces emission in the ultraviolet-optical-infrared range, giving rise to a transient called a kilonova. The observational properties of the kilonova depend on the bound-bound opacity of the heavy elements, which was largely unavailable for the conditions relevant at early times (t < 1 day). In this article, I share some of our recent progress on modeling the early kilonova light curve, focusing on the atomic opacity calculation.
Coalescence of binary neutron stars gives rise to a kilonova: thermal emission powered by the radioactive decay of newly synthesized r-process nuclei. The observational properties of kilonovae are largely affected by the bound-bound opacities of r-process elements. It is, thus, important to understand the atomic properties of heavy elements to link the observed signals with nucleosynthesis in neutron star mergers. In this paper, we introduce the latest status of kilonova modeling, focusing on the aspects of atomic physics. We perform systematic atomic structure calculations of r-process elements to understand the element-to-element variation in the opacities. We demonstrate that the properties of the atomic structure of heavy elements are imprinted in the opacities of the neutron star merger ejecta and consequently in kilonova light curves and spectra. Using this latest opacity dataset, we briefly discuss implications for GW170817, the expected diversity of kilonova emission, and prospects for element identification in kilonova spectra.
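As a concrete, hedged illustration of how bound-bound line data enter such modelling, the sketch below implements the expansion-opacity formalism widely used for homologously expanding ejecta (in the spirit of Eastman & Pinto 1993 and Kasen et al. 2013). The density, epoch, and line list here are invented placeholders, not the dataset described in the paper; real calculations bin millions of transitions from atomic structure codes.

```python
# Sketch of the expansion-opacity formalism for homologous ejecta:
#   kappa_exp(lambda) = 1/(c * t * rho) * sum_lines (lambda_l / dlambda) * (1 - exp(-tau_l))
# where tau_l is the Sobolev optical depth of each line. Line data are toy values.
import numpy as np

C_CGS = 2.99792458e10  # speed of light [cm/s]

def expansion_opacity(wl_grid_cm, line_wl_cm, line_tau, rho, t):
    """Bin Sobolev line contributions onto a wavelength grid; returns cm^2/g per bin."""
    kappa = np.zeros(len(wl_grid_cm) - 1)
    for lam, tau in zip(line_wl_cm, line_tau):
        idx = np.searchsorted(wl_grid_cm, lam) - 1
        if 0 <= idx < len(kappa):
            dlam = wl_grid_cm[idx + 1] - wl_grid_cm[idx]
            kappa[idx] += (lam / dlam) * (1.0 - np.exp(-tau)) / (C_CGS * t * rho)
    return kappa

# Hypothetical ejecta conditions and a toy line list.
rho = 1e-13                                      # density [g/cm^3]
t = 1.0 * 86400.0                                # 1 day after merger [s]
wl_grid = np.linspace(1e-5, 1e-4, 101)           # 1000-10000 Angstrom in cm
line_wl = np.random.uniform(1e-5, 1e-4, 1000)    # toy line wavelengths
line_tau = np.random.lognormal(-2.0, 2.0, 1000)  # toy Sobolev optical depths

kappa = expansion_opacity(wl_grid, line_wl, line_tau, rho, t)
print(f"mean expansion opacity: {kappa.mean():.3e} cm^2/g")
```

The sketch makes the paper's point tangible: the opacity, and hence the light curve, is controlled by how many strong lines each element contributes per wavelength interval, which is exactly what systematic atomic structure calculations supply.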
This chapter discusses the problem of modelling mixing and chemical element transport in low- and intermediate-mass stellar evolution calculations. In particular, emphasis is given to the uncertainties and parametrisations involved, and hopes of future developments based on asteroseismic data and hydrodynamics simulations.
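As a hedged illustration of the parametrised treatments the chapter has in mind, the sketch below evolves a toy composition profile with the diffusive-mixing approximation commonly used in 1D stellar evolution codes. The grid, the diffusion coefficient D, and the initial profile are invented; in real codes D is derived from convection, overshooting, or rotation prescriptions, which is where the quoted uncertainties enter.

```python
# Toy sketch: chemical mixing treated as diffusion, dX/dt = d/dr (D dX/dr),
# the standard parametrisation in 1D stellar evolution codes. All inputs are invented.
import numpy as np

def diffuse_abundance(X, D, r, dt):
    """One explicit finite-difference step of dX/dt = d/dr (D dX/dr)."""
    dr = np.diff(r)
    flux = -0.5 * (D[1:] + D[:-1]) * np.diff(X) / dr          # flux at cell faces
    dXdt = np.zeros_like(X)
    dXdt[1:-1] = -(flux[1:] - flux[:-1]) / (0.5 * (r[2:] - r[:-2]))
    return X + dt * dXdt

r = np.linspace(0.0, 1.0, 200)                   # normalised radius
X = np.where(r < 0.3, 0.7, 0.0)                  # toy hydrogen profile
D = np.where((r > 0.25) & (r < 0.5), 1e-3, 0.0)  # toy mixed region (e.g. overshooting zone)

for _ in range(2000):
    X = diffuse_abundance(X, D, r, dt=5e-3)

print(f"abundance at r=0.4 after mixing: {X[np.searchsorted(r, 0.4)]:.3f}")
```

The free choices here, the extent of the mixed region and the magnitude of D, stand in for the parametrisations whose calibration against asteroseismic data and hydrodynamics simulations the chapter discusses.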