English-speaking children sometimes make errors in production and comprehension of biclausal questions, known as “Scope-Marking Errors”. In production, these errors surface as medial wh questions (e.g., What do you think who the cat chased? (Thornton, 1990)). In comprehension, children respond to questions like How did the boy say what he caught? by answering what was caught (de Villiers & Roeper, 1995). These errors resemble wh-scope marking questions, attested in languages like German. Together, these errors suggest temporary adoption of multiple UG-licensed grammars (e.g., Yang, 2002). However, Lutken et al. (2020) found that children who make these errors in production do not necessarily make errors in comprehension and vice versa. They suggest these errors stem from children’s immature processing mechanisms. This article examines children’s production, comprehension, and processing capabilities, specifically working memory (WM). We find a correlation between WM and error rate and suggest separate causes for production and comprehension errors.
Phoneme discrimination is believed to be less accurate in non-native languages compared to native ones. What remains unclear is whether differences in pre-attentive phonological processing emerge between the first foreign language (L2) and additional ones (L3/Ln), and whether they might be influenced by the acquisition setting (formal vs. naturalistic). We conducted an event-related brain potential oddball study with native Polish learners of English (L2) and Norwegian (L3/Ln). The results revealed a graded amplitude of the mismatch negativity (MMN) effect, which was largest in L1, smaller in L2, and smallest in L3/Ln. Considering the previously obtained results for naturalistic/mixed learners with the same language combination, we believe that the acquisition setting is an important factor influencing the perception of phonemic contrasts. In the naturalistic group, no difference was observed between L1 and L2, while the instructed group exhibited more fine-grained distinctions between all tested languages.
The Perceptual Assimilation Model (PAM) accounts for how native-language (L1) experience shapes speech perception. According to PAM, infants develop phonological categories by attuning to the critical phonetic features that set phonological categories apart (phonological distinctiveness) and to the phonetic variability that defines each category (phonological constancy). The effects of L1 attunement on perception can also be seen in adults. PAM generates predictions about discrimination accuracy for non-native contrasts by comparing how the non-native phones are perceived in terms of L1 phonological categories. The extent to which perception might be altered further by experience with a second language (L2) is outlined by PAM-L2. While PAM has focused on L1 attunement in monolinguals, and PAM-L2 on L2 acquisition in adulthood, their principles also apply to early bilingual language acquisition. In this chapter, we will consider the various contexts of acquisition and language use in early bilinguals to sketch out how experience with more than one native language shapes perception and how childhood L2 acquisition might modify the emerging phonological system.
Learning the meaning of a word is a difficult task due to the variety of possible referents present in the environment. Visual cues such as gestures frequently accompany speech and have the potential to reduce referential uncertainty and promote learning, but the dynamics of pointing cues and speech integration are not yet known. If word learning is influenced by when, as well as whether, a learner is directed correctly to a target, then this would suggest temporal integration of visual and speech information can affect the strength of association of word–referent mappings. Across two pre-registered studies, we tested the conditions under which pointing cues promote learning. In a cross-situational word learning paradigm, we showed that the benefit of a pointing cue was greatest when the cue preceded the speech label, rather than following the label (Study 1). In an eye-tracking study (Study 2), the early cue advantage was due to participants’ attention being directed to the referent during label utterance, and this advantage was apparent even at initial exposures of word–referent pairs. Pointing cues promote time-coupled integration of visual and auditory information that aids encoding of word–referent pairs, demonstrating the cognitive benefits of pointing cues occurring prior to speech.
To bring linguistic theory back in touch with commonplace observations concerning the resilience of language in use to language change, language acquisition and ungrammaticality, Pullum and colleagues have argued for a ‘model-theoretic’ theory of syntax. The present paper examines the implications for linguists working in standard formal frameworks and argues that, to the extent that such theories embrace monotonicity in syntactic operations, they qualify as model-theoretic under some minor modifications to allow for the possibility of unknown words.
This chapter addresses the phenomenon of false cognates in Slavic languages, i.e. words that sound alike but have different meanings, such as Polish brak ‘deficiency’ and Bulgarian brak ‘marriage’. Inter-Slavic false cognates are seen primarily as endpoints of diverging semantic and derivation processes in various Slavic languages. The chapter also discusses Slavic to non-Slavic false cognates, where in most cases diverging processes in lexical borrowing represent generators of this kind of relationship. False cognates thus provide important insights into all major processes of lexical enrichment. Pedagogical implications of false cognates in teaching Slavic languages are also presented.
Many studies have explored children’s acquisition of temporal adverbs. However, the extent to which children’s early temporal language has discursive instead of solely temporal meanings has been largely ignored. We report two corpus-based studies that investigated temporal adverbs in Finnish child-parent interaction between the children’s ages of 1;7 and 4;11. Study 1 shows that the two corpus children used temporal adverbs to construe both temporal and discursive meanings from their earliest adverb production, and that the children’s usage broadly reflected the syntax of the input they received. Study 2 shows that the discursive uses of adverbs appeared to be learned from contextually anchored caregiver constructions that convey discourse functions like urging and reassuring, and that this usage is related to the children’s and caregivers’ interactional roles. Our study adds to the literature on the acquisition of temporal adverbs by demonstrating that these items are also learned with additional discursive meanings in family interaction.
This chapter reviews the relation between gesture and the natural signed languages of deaf communities. Signs were for centuries considered to be unanalyzable depictive gestures. Modern linguistic research has demonstrated that signs are composed of meaningless parts, equivalent to spoken language phonemes, that are combined to form meaningful signs. The chapter discusses a system called homesign used where a deaf child with hearing parents is not exposed to signed languages during language acquisition. Two ways in which gesture may become incorporated into a signed language through the historical process of grammaticalization are described. In the first, gestures are incorporated into a signed language as lexical signs, which go on to develop grammatical meaning. In the second, ways in which the sign is produced, its manner of movement, and certain facial displays, are incorporated not as lexical signs but as prosody or intonation, which may develop grammatical meaning. Finally, the chapter critically examines a new view in which certain signs are considered to be fusions of sign and gesture and proposes a cognitive linguistic analysis based in the theory of cognitive grammar.
The chapters in the handbook cover five main topics: (1) gesture types in terms of forms and functions, focusing on manual gestures and their use as emblems, recurrent gestures, pointing gestures, and iconic representational gestures, with attention also given to facial gestures; (2) the different methods by which gestures have been annotated and analyzed, and different theoretical and methodological approaches, including semiotic analysis; (3) the relation of gesture to language use, covering language evolution as well as first and second language acquisition; (4) gestures in relation to cognition, including an overview of McNeill’s growth point theory; and (5) gestures in interaction, considering variation in gesture use and intersubjectivity. Across the chapters, the meaning of the term ‘gesture’ is itself debated, as is the relation of gesture to language (as multimodal communication or in terms of different semiotic systems). Gesture use is studied based on data from speakers of various languages and cultures, but there is a bias toward European cultures, which remains to be addressed. The handbook provides overviews of the work of some scholars which was previously not widely available in English.
We present an analysis of phrasal prosody, with an emphasis on focus-marking, for heritage speakers of Samoan in Aotearoa New Zealand. The analysis is based on recordings of four speakers doing a picture-description task designed to elicit different focus positions and types, from an earlier study of home country Samoan (Calhoun, 2015). All speakers showed features of phrasal prosody similar to those found for home country Samoan; however, there was considerable variation between speakers. We relate this to the language background of the speakers, and their attitudes and beliefs toward their heritage language. In particular, there were differences between generation 1.5 and 2 speakers, relating to their engagement with and beliefs about their university Samoan language classes. This shows the importance of these factors in the acquisition and maintenance of prosodic features, similar to other more-studied language features.
This paper examines patterns in an Estonian–English bilingual child’s spontaneous speech, employing a computational application of the traceback method, which is used in usage-based linguistics. Forty-five hours of data were analyzed to check what proportion of patterns from code-mixed utterances are attested in the child’s monolingual data and in her input. Pattern overlap between the child’s and the caregivers’ speech was also examined. Results show that about one-third of code-mixed utterances can be traced back to the child’s input and one-third also to her own monolingual data. A little over half of the child’s utterances are either chunks or frame-and-slot patterns from the caregivers’ speech. These results make it evident that the traceback method can also be applied to language pairs that are genealogically more distant, though limitations exist.
Young children sometimes incorrectly interpret verbs that have a “result” meaning, such as understanding ‘fill’ to refer to adding liquid to a cup rather than filling it to a particular level. Given cross-linguistic differences in how event components are realized in language, past research has attributed such errors to non-adultlike event-language mappings. In the current study, we explore whether these errors have a non-linguistic origin. That is, when children view an event, is their encoding of the event end-state too imprecise to discriminate between events that do versus do not arrive at their intended endpoints? Using a habituation paradigm, we tested whether 13-month-old English-learning infants (N = 86) discriminated events with different degrees of completion (e.g., draw a complete triangle versus draw most of a triangle). Results indicated successful discrimination, suggesting that sensitivity to the precise event end-state is already in place in early infancy. Thus, our results rule out one possible explanation for children’s errors with change-of-state predicates—that they do not notice the difference between end-states that vary in completion.
How does human language arise in the mind? To what extent is it innate, or something that is learned? How do these factors interact? The questions surrounding how we acquire language are some of the most fundamental about what it means to be human and have long been at the heart of linguistic theory. This book provides a comprehensive introduction to this fascinating debate, unravelling the arguments for the roles of nature and nurture in the knowledge that allows humans to learn and use language. An interdisciplinary approach is used throughout, allowing the debate to be examined from philosophical and cognitive perspectives. It is illustrated with real-life examples and the theory is explained in a clear, easy-to-read way, making it accessible for students, and other readers, without a background in linguistics. An accompanying website contains a glossary, questions for reflection, discussion themes and project suggestions, to further deepen students’ understanding of the material.
Chapter 2 first discusses the fact that humans form one of the many millions of animal species that, along with non-animal species, all occupy a place in the big “tree of life,” followed by two responses which aim to single out humans as fundamentally different, especially in terms of their mental capacities. Given our focus of attention on the mind, we discuss the notions of mind–body dualism and modularity. The remainder of this chapter offers a preview of many issues that will be discussed in more detail in subsequent chapters. We review in some detail the central question of how people come to know what they know, which allows us to be more precise about what we mean by “nature” and “nurture.” We then focus on Noam Chomsky’s Innateness Hypothesis for language, considering its impact in all fields that study human behavior. We preview what this hypothesis entails about how children acquire their language and the predictions it makes about general, universal properties that all languages share. We discuss why Chomsky’s Innateness Hypothesis is controversial and conclude the chapter with some genetic and neurological aspects of the innateness claim.
Previous research on infant-directed speech (IDS) and its role in infants’ language development has largely focused on mothers, with fathers only rarely investigated. Here we examine the acoustics of IDS as compared to adult-directed speech (ADS) in Norwegian mothers and fathers speaking to 8-month-old infants, and whether these relate to direct (eye-tracking) and indirect (parental report) measures of infants’ word comprehension. Forty-five parent-infant dyads participated in the study. Parents (24 mothers, 21 fathers) were recorded reading a picture book to their infant (IDS), and to an experimenter (ADS), ensuring identical linguistic context across speakers and registers. Results showed that both mothers’ and fathers’ IDS had exaggerated prosody, expanded vowel spaces, as well as more variable and less distinct vowels. We found no evidence that acoustic features of parents’ speech were associated with infants’ word comprehension. Potential reasons for the lack of such a relationship are discussed.
What do speakers of a language have to know, and what can they 'figure out' on the basis of that knowledge, in order for them to use their language successfully? This is the question at the heart of Construction Grammar, an approach to the study of language that views all dimensions of language as equal contributors to shaping linguistic expressions. The trademark characteristic of Construction Grammar is the insight that language is a repertoire of more or less complex patterns – constructions – that integrate form and meaning. This textbook shows how a Construction Grammar approach can be used to analyse the English language, offering explanations for language acquisition, variation and change. It covers all levels of syntactic description, from word-formation and inflectional morphology to phrasal and clausal phenomena and information-structure constructions. Each chapter includes exercises and further readings, making it an accessible introduction for undergraduate students of linguistics and English language.
In this article, we report on a number of violations of (or exceptions to) the so-called V2 constraint in different variants of Icelandic. The main purpose is to investigate what these violations can tell us about the nature of the V2 constraint, its vulnerability, the limits of syntax, and about children’s ability to sort out what is relevant and what is not in the input they hear during the acquisition period. Three main explanatory possibilities are taken into consideration: the use and acceptance of sentences with V2 violations in Icelandic (i) is due to English influence, (ii) indicates an expansion of patterns existing in the language for language-internal reasons, (iii) is due to a task effect. In brief, our results support (i) for heritage Icelandic but not for non-heritage Icelandic, while different subsets of our data are best accounted for in terms of either (ii) or (iii).
This chapter analyzes how languages are acquired in multilingual contexts and focuses on the development of phonological competence, morphosyntactic skills, and vocabulary. We discuss the differences and similarities that monolingual and bilingual children may show in language development, as well as possible advantages and disadvantages related to bilingualism. We explain that simultaneous exposure to two (or more) languages does not cause confusion or developmental delays, as is often feared by caregivers. We demonstrate that simultaneous bilinguals generally reach language milestones at the same age as monolinguals.
This chapter proposes a functional theory of language acquisition based on the idea that children utilize their understanding of cognitive and communicative principles to construct a grammar that integrates semantic and pragmatic notions. The chapter explores child language data that are relevant to such issues as how layered clause structure, operator projection, predicate structure and grammatical relations are acquired within a communication-and-cognition framework. In showing how the language acquisition data map to the Role and Reference Grammar framework, the chapter includes contrasts with alternative theories, such as autonomous syntax theory. From the perspective of conceptual development, the infant-toddler is viewed as a relatively proficient information processor with the capacity to discover fundamental linguistic relationships, in the spirit of the theory of Operating Principles (Slobin 1985).
Pointing plays a significant role in communication and language development. However, in spoken languages pointing has been viewed as a non-verbal gesture, whereas in sign languages, pointing is regarded as a linguistic unit of language. This study compared the use of pointing between seven bilingual hearing children of deaf parents (Kids of Deaf Adults [KODAs]) interacting with their deaf parents and five hearing children interacting with their hearing parents. Data were collected at 6-month intervals from the age of 1;0 to 3;0. Pointing frequency among the deaf parents and KODAs was significantly higher than among the hearing parents and their children. In signing dyads pointing frequency remained stable, whereas in spoken dyads it decreased during the follow-up. These findings suggest that pointing is a fundamental element of parent-child interaction regardless of the language, but is guided by the modality and the gestural and linguistic features of the language in question.