In this chapter we consider aspects of phonology for bimodal bilinguals, whose languages span distinct modalities (spoken/signed/written). As for other bilinguals, the primary issues concern the representation of the phonology for each language individually, the ways that the phonological representations interact with each other (in grammar and in processing), and the development of the two phonologies, whether for children developing as simultaneous bilinguals or for learners of a second language in a second modality. Research on these topics has been sparse, and some of them have hardly been explored at all. Findings so far indicate that despite the modality difference between their two languages, phonological interactions still occur for bimodal bilinguals, providing crucial data for linguistic theories about the locus and mechanisms of such interactions, and carrying important practical implications for language learners.
The history, and parameters, of specialized dictionaries are of necessity somewhat vague, as they invite examination of the role and content of a dictionary. This chapter looks at works ranging from hard word dictionaries to treatises on communicating via naval flags and telegraphic code books, all of which might be categorized as specialized dictionaries.
Even though the word has been around for over one thousand years, bitch has proven that an old dog can be taught new tricks. Over the centuries, bitch has become a linguistic chameleon with many different meanings and uses. Bitch has become a shape-shifter too, morphing into modern slang spellings like biatch, biznatch, and betch. Bitch is a versatile word. It can behave like a noun, an adjective, a verb, or an interjection, while it also makes a cameo appearance in lots of idioms. Bitch can be a bitch of a word. Calling someone a bitch once seemed to be a pretty straightforward insult, but today – after so many variations, reinventions, and attempts to reclaim the word – it’s not always clear what bitch really means. Nowadays, the word appears in numerous other languages too, from Arabic and Japanese to Spanish and Zulu. This chapter takes a look at bitch in the present day, and beyond.
Phonology has generally been neglected as a nexus of philosophical interest despite certain debates within the field both inviting and needing philosophical reflection. Yet, the few who have attempted such inquiry have noted something special about the field and its target. On the one hand, it shares formal and structural aspects with syntax. On the other, it seems to require more literal interpretation in terms of components such as hierarchy and sequential ordering. In this chapter, the nature of the phoneme, the theoretical centrepiece of traditional phonology, comes under scrutiny. The notion, as well as the field itself, is extended to other modalities, such as sign, in accordance with the contemporary trajectory of the field. This extension, and the connections with language and gesture in general, open up the possibility of a philosophical action theory with phonology as its basis. Motor and action theory have been proffered recently in connection with syntax, with little success. However, it’s argued that phonology serves as a better point of comparison. The chapter discusses a range of issues from autosegmental phonology, feature grammar, and sign language, to gestural grammar, motor cognition, and recent 4E approaches to cognition.
Recurrent gestures are stabilized forms that embody a practical knowledge of dealing with different communicative, interactional, cognitive, and affective tasks. They are often derived from practical actions and engage in semantic and pragmatic meaning-making. They occupy a place between spontaneous (singular) gestures and emblems on a continuum of increasing stabilization. The chapter reconstructs the beginnings of research on recurrent gestures and illuminates different disciplinary perspectives that have explored processes of their emergence and stabilization, as well as facets of their communicative potential. The early days of recurrent-gesture research focused on the identification of single specimens and on the refinement of descriptive methods. In recent years, their role in self-individuation, their social role, and their relationship to signs of sign language have become a focus of interest. The chapter explores the individual, the linguistic, and the cultural side of recurrent gestures. Recurrent gestures are introduced as sedimented individual and social practices, as revealing the linguistic potential of gestures, and as a type that forms culturally shared repertoires.
The classical approach to gesture and sign language analysis focuses on the forms and locations of the hands. This constitutes an external point of view on the gesturing subject. The kinesiological approach presented in this chapter looks at gesture from the inside out, at how it is produced, taking a first-person perspective. This involves a physiological description of the parts of the body that are moving (the segments) and the joints at which they can move (providing the degrees of freedom of movement). This type of analysis makes it possible to distinguish the proper movement of a segment from displacement caused by the movement of another segment. Movement is distinguished according to muscular properties such as flexion versus extension, abduction versus adduction, exterior versus interior rotation, and supination versus pronation. The propagation of movement in the body is considered in terms of its flow across connected segments of the body, from more proximal to more distal segments or vice versa. These distinctions differentiate functions of gestures (e.g. showing that you don’t care vs. expressing negation) and meanings of signs in a sign language.
Gestures associated with negation have become a well-defined area for gesture studies research. The chapter offers an overview of this area, identifies distinct empirical lines of enquiry, and highlights their contribution to aspects of linguistic and embodiment theory. After relating a surge of interest in this topic to the notion of recurrent gestures (but not restricted to it), the chapter offers a visualization of the widespread geographical coverage of studies of gestures associated with negation, then distils a set of common observations concerning the form, organizational properties, and functions of such gestures. This area of research is then further thematized by exploring distinct chains of studies that have adopted linguistic, cognitive-semantic, functional, psycholinguistic, comparative, and cultural perspectives to analyze the gestural expression of negation. Studies of gestures associated with negation are shown to have played a vital role in shaping understandings of the multimodality of grammar, the embodiment of cognition, and the relations between gestures and sign.
The chapter provides an overview of Kendon’s research biography, describing the origins of the theoretical notions and categories for analysis that he developed, e.g. gesture unit, gesture phrase, preparation, stroke, hold, and kinesic action, as well as the ways in which gestures can perform referential functions (through forms of pointing and depiction) and pragmatic functions (including operational, performative, modal, and parsing functions). The data he considered included not only speakers’ gestures, but also signed languages of different types, from those used by the Deaf (primary sign languages) to those used for ritualistic or professional reasons (alternate sign languages). Discussion of the latter notes their structural relation to the spoken languages of their users. Locations and communities in which Kendon studied visible action as utterance include Great Britain; Naples, Italy; Papua New Guinea; the United States; and Aboriginal communities in Australia. The work finishes with issues related to the study of language origins. Emphasis is placed throughout on the limitations of the term ‘gesture’ and the author’s preference for other terms, such as ‘utterance dedicated visible action’.
While iconicity has sometimes been defined as meaning transparency, it is better defined as a subjective phenomenon bound to an individual’s perception and influenced by their previous language experience. In this article, I investigate the subjective nature of iconicity through an experiment in which 72 deaf, hard-of-hearing and hearing (signing and non-signing) participants rate the iconicity of individual letters of the American Sign Language (ASL) and Swedish Sign Language (STS) manual alphabets. It is shown that L1 signers of ASL and STS rate their own (L1) manual alphabet as more iconic than the foreign one. Hearing L2 signers of ASL and STS exhibit the same pattern as L1 signers, showing an iconic preference for their own (L2) manual alphabet. In comparison, hearing non-signers show no general iconic preference for either manual alphabet. Across all groups, some letters are consistently rated as more iconic in one sign language than the other, illustrating general iconic preferences. Overall, the results align with earlier findings from sign language linguistics that point to language experience affecting iconicity ratings and that one’s own signs are rated as more iconic than foreign signs with the same meaning, even if similar iconic mappings are used.
Language, a hallmark of human cognition, is a complex and universal tool for conveying thoughts and ideas. This chapter navigates the intricate landscape of language development, spanning its various dimensions. We begin by dissecting language into its components, be it spoken or signed, and explore its dual nature – both specific and universal. The chapter illuminates the brain’s remarkable capacity to derive meaning from linguistic input, pinpointing the neural structures underpinning language comprehension and production. Distinguishing between language quantity and quality, we delve into the role of contingent learning and experiential adaptation in molding linguistic abilities. Additionally, we ponder the evolutionary origins of language, contemplating whether it is an exclusively human attribute. Drawing from a diverse pool of research, including neuroimaging, behavioral assessments, and developmental studies, this chapter offers a comprehensive view of language development. It underscores the profound influence of gene–environment interactions in enabling infants to acquire language organically, without explicit instruction.
Sign languages, the languages used by and among deaf people, have long been misunderstood and undervalued. Chapter 13 shows what they really are: human languages. First, we have to rid ourselves of various misconceptions about sign languages. I then formulate the sign language argument for the Innateness Hypothesis, which is based on various parallelisms between signed and spoken languages that strongly suggest that, despite operating in completely different sensory channels, both are likely instantiations of the same mental language system. Both types of languages are processed in the same brain areas and show similar developmental patterns during acquisition and language breakdown. This supports the idea of a genetically anchored default language function for these brain areas. In support of this idea, sign language studies also provide us with examples in which grammatical structure emerges spontaneously when deaf children grow up without being exposed to a sign language. These so-called home sign systems can even give rise to new sign languages. This adds the argument from spontaneous emergence to our list of arguments that potentially support the Innateness Hypothesis.
Chapter 2 opens by asking readers to reflect on strengths and weaknesses of experts of their choice and then to consider overlap between their own strengths and weaknesses and those of these experts. Variation in personalities and styles is useful in public engagement because we meet many different kinds of people in informal learning venues. The chapter thus encourages readers to be themselves as they talk about their science. Genuine passion combines well with any level of expertise. Further, saying "I don’t know" when you reach the edge of your expertise shows your conversational partners that you are honest. A demonstration of counting in different sign languages exemplifies these concepts. This chapter also encourages a growth mindset so that both success and failure during public engagement contribute to improved skills. This chapter’s Closing Worksheet asks readers to choose the topic area that they’ll develop into a demonstration through activities later in the book.
This paper examines how signers make lists. One way is to use the fingers on the signer’s nondominant hand to enumerate items on a list. The signer points to these list-fingers with the dominant hand. Previous analyses treated lists as nondominant, one-handed signs and called them list buoys because the nondominant hand often remains in place during the production of the list. The pointing hand was largely ignored as a nonlinguistic gesture. We take a constructional approach based on Cognitive Grammar. In our approach, we analyze lists as a type of pointing construction consisting of two meaningful components: a pointing device (the pointing hand) used to direct attention, and a Place, which likewise consists of a form and a meaning. Using data from Brazilian Sign Language (Libras) and Finland–Swedish Sign Language (FinSSL), we examine the semantic role of each component, showing how the nondominant list-fingers identify and track discourse referents, and how the pointing hand is used to create higher-order entities by grouping list-fingers. We also examine the integration of list constructions and their components with other conventional constructions.
Expressing Left-Right relations is challenging for speaking children. Yet this challenge is absent for signing children, possibly due to iconicity in the visual-spatial modality of expression. We investigate whether there is also a modality advantage when speaking children’s co-speech gestures are considered. Eight-year-old children and adults who were either hearing monolingual Turkish speakers or deaf signers of Turkish Sign Language described pictures of objects in various spatial relations. Descriptions were coded for informativeness in speech, sign, and speech-gesture combinations for encoding Left-Right relations. The use of co-speech gestures increased the informativeness of speakers’ spatial expressions compared to speech alone. This pattern was more prominent for children than adults. However, signing adults and children were more informative than child and adult speakers even when co-speech gestures were considered. Thus, both speaking and signing children benefit from iconic expressions in the visual modality. Finally, in each modality, children were less informative than adults, pointing to the challenge of this spatial domain in development.
This chapter looks at sociolinguistic variation (resulting from sociohistorical factors that differentiate people into groups) and how it interfaces with phonological differences. The specific phonological difference under study is the use of the two hands versus the use of only one as an aspect of sublexical structure (i.e., as a phonological feature) in individual signs, and their overall patterning throughout the lexicon and morphophonology. We examine how Black ASL demonstrates the distribution of allophony in two-handed vs. one-handed lexical variants of signs.
Across sign languages, nouns can be derived from verbs through morphophonological changes in movement by (1) movement reduplication and size reduction or (2) size reduction alone. We asked whether these cross-linguistic similarities arise from cognitive biases in how humans construe objects and actions. We tested nonsigners’ sensitivity to differences in noun–verb pairs in American Sign Language (ASL) by asking MTurk workers to match images of actions and objects to videos of ASL noun–verb pairs. Experiment 1a’s match-to-sample paradigm revealed that nonsigners interpreted all signs, regardless of lexical class, as actions. The remaining experiments used a forced-matching procedure to avoid this bias. Counter to our predictions, nonsigners associated reduplicated movement with actions, not objects (inverting the sign language pattern), and exhibited only a minimal bias to associate large movements with actions (as found in sign languages). Whether signs had pantomimic iconicity did not alter nonsigners’ judgments. We speculate that the morphophonological distinctions in noun–verb pairs observed in sign languages did not emerge as a result of cognitive biases, but rather as a result of the linguistic pressures of a growing lexicon and the use of space for verbal morphology. Such pressures may override an initial bias to map reduplicated movement to actions, but nevertheless reflect new iconic mappings shaped by linguistic and cognitive experiences.
An ideography is a general-purpose code made of pictures that do not encode language, which can be used autonomously – not just as a mnemonic prop – to encode information on a broad range of topics. Why are viable ideographies so hard to find? I contend that self-sufficient graphic codes need to be narrowly specialized. Writing systems are only an apparent exception: At their core, they are notations of a spoken language. Even if they also encode nonlinguistic information, they are useless to someone who lacks linguistic competence in the encoded language or a related one. The versatility of writing is thus vicarious: Writing borrows it from spoken language. Why is it so difficult to build a fully generalist graphic code? The most widespread answer points to a learnability problem. We possess specialized cognitive resources for learning spoken language, but lack them for graphic codes. I argue in favor of a different account: What is difficult about graphic codes is not so much learning or teaching them as getting every user to learn and teach the same code. This standardization problem does not affect spoken or signed languages as much. Those are based on cheap and transient signals, allowing for easy online repair of miscommunication, and require face-to-face interactions where the advantages of common ground are maximized. Graphic codes lack these advantages, which makes them smaller in size and more specialized.
This chapter uses notions of language and civilisation to examine how disabled people were understood in the context of the British empire. The chapter begins with a discussion of the long-standing association in western European thought between language and humanity. Debates about whether animals had access to language, which gained renewed importance during the Enlightenment and again with evolutionism, identified language acquisition as the ‘true marker’ of what it meant to be human. This left those with aphasia, intellectual disabilities that affected speech, or deaf-muteness in a highly vulnerable position. Whilst the development of deaf education helped mitigate fears around disability, it became deeply contested during the nineteenth century, and the question of whether deaf children should be educated using sign language or spoken English took on surprisingly wide importance. Drawing on the work of Douglas Baynton, who argues that in nineteenth-century America sign language was increasingly linked with an early stage of evolution and ‘foreignness’, I resituate these trends, found also in Britain, in the context of colonialism and imperialism. I argue that the intolerance of sign language was linked to a wider intolerance of difference, both the difference of disability and the difference of ‘race’.
This chapter focuses on how working memory develops in children who are born deaf. It includes studies of deaf users of spoken and signed languages from within the medical and social models of deafness. It also reviews how differences in working memory capacity between deaf and hearing children have been explained. It considers the role of auditory function in the establishment of working memory, as well as the role of language as a mediator. It concludes with a proposal that deafness leads to disrupted early exposure to language and reduced subvocal rehearsal abilities, both of which impact the operation of the working memory system.
Chapter 22 offers an overview of the field focusing on both established and emerging modalities, from traditional transfer modes such as dubbing, subtitling and voice-over, to modes that provide accessibility for people with sensory impairment, such as subtitling for the deaf and hard-of-hearing, audio description, live-subtitling and sign language. Non-professional translation practices such as fansubbing, fandubbing and film remaking are also discussed. For each mode, the chapter illustrates the associated medium-specific constraints and creative possibilities, highlighting the power of audiovisuals to contribute to meaning in ways that lend themselves to manipulation during the translation process.