The Automatic Selective Perception (ASP) model posits that listeners make use of selective perceptual routines (SPRs) that are fast and efficient for recovering lexical meaning. These SPRs serve as filters to accentuate relevant cues and minimize irrelevant information. Years of experience with the first language (L1) lead to fairly automatic L1 SPRs; consequently, few attentional resources are needed in processing L1 speech. In contrast, L2 SPRs are less automatic. Under difficult task or stimulus conditions, listeners fall back on more automatic processes, specifically L1 SPRs. L2 speech perception thus suffers when there is a mismatch between L1 and L2 phonetics, because L1 SPRs may not extract the cues needed for identifying L2 phonemes. This chapter presents behavioral and neurophysiological evidence that supports the ASP model but also indicates the need for some modification. We offer suggestions for future directions in extending this model.
This chapter provides a cross-sectional overview of current neuroimaging techniques and signals used to investigate the processing of linguistically relevant speech units in the bilingual brain. These techniques are reviewed in the light of important contributions to the understanding of perceptual and production processes in different bilingual populations. The chapter is structured as follows. First, we discuss several non-invasive technologies that provide unique insights in the study of bilingual phonetics and phonology. This introductory section is followed by a brief review of the key brain regions and pathways that support the perception and production of speech units. Next, we discuss the neuromodulatory effects of different bilingual experiences on these brain regions from shorter to longer neural latencies and timescales. As we will show, bilingualism can significantly alter the time course, strength, and nature of the neural responses to speech, when compared with monolinguals.
Since the rules of civility are often abandoned for the sake of the goals activists are pursuing, this chapter considers whether these goals – rather than a set of universal rules – might themselves suggest moral constraints. To illustrate this point, I analyze two authors who believe that how one communicates is integrally related to what one actually conveys, and thus morality and effectiveness cannot always be separated. Karlyn Kohrs Campbell argues women must be free to reflect on their own experiences rather than being subjected to authoritative interpretations. Even when done in the name of women’s liberation, telling women how they should feel ironically stifles women’s voices. Thus, a dialogical, consciousness-raising style of communication is integrally related to the pursuit of women’s liberation. Paulo Freire likewise argues that propaganda for the cause of liberation ironically perpetuates oppression. Liberators need to be committed to dialogue because the task of liberation itself demands dialogical engagement.
This chapter considers the relationship between the historical Gorgias of Leontini and Plato’s portrayal of him and his ideas in the Gorgias. By drawing on fragments and testimonia of the historical figure, it shows that Plato’s understanding of Gorgias and his views informs both his characterization of the orator himself in the Gorgias, as well as that dialogue’s philosophical content and aims. In particular, three of the central themes of the Gorgias – ones that the character himself introduces – are prominent in Gorgias’ own works and in the doxographical reception of him: (1) the conception of speech as a form of power or dunamis; (2) the relation between power and wish or boulēsis and their joint role in human action; and (3) the contrast between – and contrasting relationships speech itself has with – belief on the one hand and knowledge on the other. Whether the historical Gorgias was ever personally committed to the relevant ideas in question or not, the chapter argues that he at least gave voice to them in his works, and that Plato, at least, evidently took them seriously as expressions of Gorgianic theory and practice.
Growing evidence demonstrates that subtle changes in spontaneous speech can be used to distinguish older adults with and without cognitive impairment, including those with Alzheimer's disease (AD). Recent work suggests that quantification of the meaningful connectedness of speech – termed semantic coherence – may be sensitive to cognitive dysfunction. The current study compared global coherence (GC; the degree to which individual utterances relate to the overall topic being discussed) and local coherence (LC; the degree to which adjoining utterances relate meaningfully to each other) in persons with AD and healthy controls.
Participants and Methods:
Speech transcripts from 81 individuals with probable AD (Mage = 72.7 years, SD = 8.8, 70.3% female) and 61 healthy controls (HC) (Mage = 63.9 years, SD = 8.5, 62.2% female) from DementiaBank were analyzed. All participants completed the Cookie Theft picture description task and the MMSE as part of that larger project. Machine learning analyses of GC and LC were conducted, and models were evaluated on classification accuracy (AD vs. HC) as well as ROC-AUC. Relationships between coherence indices and MMSE performance were also quantified.
Results:
Though no significant group differences emerged in LC (Estimate = 0.012, p = 0.32), persons with AD differed from healthy controls in GC (Estimate = 0.03, p = 0.006) and produced less semantically coherent speech. GC indices predicted AD diagnoses with 65% accuracy. Interestingly, coherence indices showed only modest correlation with MMSE scores (r = .19).
Conclusions:
GC metrics of spontaneous speech differentiated between persons with AD and controls, but did not strongly correlate with MMSE performance. Such findings support the notion that many aspects of language are impacted in persons with AD. In addition to replication, future work should evaluate whether GC is also disrupted in persons with pre-clinical AD and its potential to assist with early detection.
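The abstract above defines GC and LC conceptually but does not specify an implementation. A minimal sketch of both measures, assuming utterances are represented as simple bag-of-words count vectors (a stand-in for the distributional word or sentence embeddings such pipelines typically use; all function names here are illustrative), might look like:

```python
import math
from collections import Counter

def vectorize(utterance):
    """Bag-of-words count vector for one utterance (a stand-in for
    the learned embeddings used in practice)."""
    return Counter(utterance.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def local_coherence(utterances):
    """LC: mean similarity of each utterance to the one preceding it."""
    vecs = [vectorize(u) for u in utterances]
    return sum(cosine(vecs[i - 1], vecs[i])
               for i in range(1, len(vecs))) / (len(vecs) - 1)

def global_coherence(utterances):
    """GC: mean similarity of each utterance to the centroid of the
    whole speech sample (a proxy for the overall topic)."""
    vecs = [vectorize(u) for u in utterances]
    centroid = Counter()
    for v in vecs:
        centroid.update(v)
    return sum(cosine(v, centroid) for v in vecs) / len(vecs)
```

Both scores fall in [0, 1]; lower GC would correspond to the less topically connected speech the study reports in the AD group.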
The Children’s Communication Checklist-Second Edition (CCC-2) is a rating scale designed to assess domains of communication skills with an emphasis on pragmatics (Bishop, 2006). The CCC-2 comprises 10 subtests addressing various aspects of oral communication skills: Speech, Syntax, Semantics, Coherence, Initiation, Scripted Language, Context, Nonverbal Communication, Social Relations, and Interests. Using the original CCC, Geurts et al. (2004) found that, compared to typical controls, children with either high-functioning autism (HFA) or ADHD showed pragmatic difficulties. Since that initial version, no study has examined whether the revised version can differentiate children with HFA, ADHD, and learning disabilities (LD); that was the purpose of the present study. The focus was on derived factors reflecting the structure/content of language and the pragmatics of language.
Participants and Methods:
Forty-one participants grouped according to diagnosis were drawn from two archival data pools, one adapted from a previous study conducted by Casey and Scott (2016) and the other from a set of anonymized patients from a neuropsychological clinic. Fourteen participants met clinical criteria for autism (Mage = 11.95), 12 participants met criteria for ADHD without co-morbid disorders (Mage = 9.5), and 15 participants met criteria for a learning disability involving reading, writing, math, or some combination (Mage = 10.13). Group-specific descriptive statistics were computed for the participants’ age, full scale intelligence quotient (IQ), and General Communication Composite (GCC). Two factor scores were computed, one composed of the subtests that constitute the structure/content aspects of language (Speech, Syntax, Semantics, and Coherence) and one composed of the pragmatic aspects of language (Initiation, Nonverbal Communication, Social Relations, and Interests), an area of particular weakness in HFA. Independent samples ANOVAs were conducted on both factor scores to determine whether the CCC-2 could differentiate the three groups. Post-hoc comparisons were planned for the subtests comprising the factor(s) that differentiated the groups.
Results:
Participants in the ADHD group (M = 9.45, SD = 2.45) were significantly younger than those in the HFA (M = 11.95, SD = 2.24) and LD (M = 10.13, SD = 2.58) groups, the latter two not differing significantly. The groups did not differ significantly on IQ, nor on the structure/content factor. On the pragmatic factor, the LD group (M = 10.18, SD = 9.91) had significantly higher scores than the ADHD group (M = 7.79, SD = 6.54), which, in turn, had significantly higher scores than the HFA group (M = 5.48, SD = 8.26), F(2, 38) = 17.81, p < .01. Within this composite, the same pattern was shown on Nonverbal Communication, F(2, 38) = 9.29, p < .01, and Interests, F(2, 38) = 17.81, p < .01.
Conclusions:
Compared to children with an academically-based learning disability, children with ADHD and HFA demonstrated pragmatic difficulties on the CCC-2. Although there was overlap between the pragmatic language characteristics of children with ADHD and children with HFA, the CCC-2 demonstrated utility in distinguishing the two disorders on certain aspects of communication skills, suggesting that it is a useful tool in differential diagnosis.
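The group comparisons in the study above rest on one-way independent-samples ANOVAs. As an illustrative aside (the study itself presumably used standard statistical software; this is not its code), the F statistic behind results such as F(2, 38) = 17.81 can be sketched in plain Python:

```python
def one_way_anova_F(*groups):
    """F statistic for a one-way (independent-samples) ANOVA,
    comparing the means of two or more groups of scores."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total sample size
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-groups sum of squares: spread of group means around the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-groups sum of squares: spread of scores around their own group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)
```

With three groups of total n = 41, df_between = 2 and df_within = 38, matching the degrees of freedom reported in the results.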
Neuroimaging research demonstrates that people who stutter (PWS) exhibit insufficient activation in the auditory speech area of the left hemisphere (Kikuchi et al., 2011; Garnett et al., 2018). We previously reported on the auditory brainstem response of PWS: in PWS with moderate and severe impairment, the interpeak latencies (IPLs) between waves I and V (IPL [I-V]) of the right ear were significantly longer than those of the left ear. However, in PWS with mild impairment, the IPLs (I-V) of the left ear were significantly longer than those of the right ear (Anzaki et al., 2020). We hypothesized that these left-right differences in the IPLs (I-V) cause a monitoring disturbance in communication, which results in developmental stuttering. Stuttering has been reported to improve under delayed auditory feedback (DAF) (Stromsta, 1956; Sakai, 2008). We therefore improved the DAF system and developed an application that lets PWS listen to their own voices with no difference in the IPLs (I-V) between the left and right ears. This study verified the effectiveness of the application.
Participants and Methods:
This study included five male adults with developmental stuttering (ADSs), with a mean age of 36 years and a mean handedness index of 84. The application was adjusted for each participant so that the IPLs (I-V) of the left and right ears became equal. For example, one ADS showed an IPL (I-V) 0.5 msec longer in the right ear than in the left; the application was accordingly adjusted so that the IPL (I-V) of his left ear would be delayed by 0.5 msec. We asked the participants to use the application for six months during free conversation and reading aloud. Using the Japanese Standardized Test for Stuttering (JSTS) (Ozawa et al., 2013), we compared their disfluencies with and without the application.
Results:
As per the JSTS, stuttering severity improved in all participants. Case 1, who had severe impairment (level 5), improved to moderate (level 4); Cases 2 and 3, who had moderate impairment (level 4), improved to mild (level 3); and Cases 4 and 5, who had mild impairment (level 3), improved to the normal level (level 1). We calculated z-scores for the JSTS improvement rates based on the standard deviations for each severity level (Anzaki, 2019). The z-scores of Cases 4 and 5 were 4.01 and 2.01, respectively, indicating a significant improvement.
Conclusions:
In our report last year, ADSs with moderate and severe impairment showed improvement on the JSTS following stimulation of the left hemisphere through the right ear, whereas those with mild impairment exhibited only slight or no improvement (Anzaki et al., 2021). The application developed in this study significantly improved the disfluencies of all participants on the JSTS, especially those with mild impairment. We therefore consider stuttering disorders to be layered, with an auditory monitoring disorder at their base.
Early detection is critical to the effective management of Alzheimer’s disease (AD) and other forms of dementia. Existing screening assessments are often costly, require substantial expertise to administer, and may be insensitive to mild changes in cognition. A promising alternative is to automatically measure features of connected speech (cf. Ostrand & Gunstad, 2021, Journal of Geriatric Psychiatry and Neurology) to predict impairment. Here, we built on prior work examining how well speech features predicted cognitive impairment. Unique to the current work, we attempted to capture more holistic effects of cognitive impairment by examining linguistic features that measure sentential or discourse-context properties of speech, including the context in which filler words (e.g., um) occur and the predictability of individual words within their sentence context, computed from a large computational language model (GPT-2).
Participants and Methods:
Participants completed the Cookie Theft picture description task, with data available in the DementiaBank corpus (Becker et al., 1994, Arch Neurol). Descriptions that contained at least 50 words (N = 214) were submitted to an automatic feature calculation pipeline written in Python to calculate various part-of-speech counts, lexical diversity metrics, and mean lexical frequency, as well as multiple metrics related to lexical surprisal (i.e., how surprising a word is given its context).
Surprisal of individual words was computed using the pre-trained GPT-2 transformer language model (Radford et al., 2019, Comput. Sci.) by computing each word’s probability given the previous 12 words. Multiple linear regression was performed using 17 linguistic features jointly as predictors and Mini-Mental State Examination (MMSE) score as the outcome variable. Simple regressions were also calculated between each feature and MMSE scores to examine how well each linguistic feature individually predicted cognitive decline.
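As context for the surprisal metrics, a minimal sketch of the quantity involved: surprisal is the negative log probability of a word given its context. The sketch below takes per-word conditional probabilities as input (in the study these come from GPT-2 conditioned on the previous 12 words; here the model is abstracted away, and the summary statistics mirror the median and interquartile-range predictors reported):

```python
import math
import statistics

def surprisal(prob):
    """Surprisal in bits: -log2 P(word | context)."""
    return -math.log2(prob)

def surprisal_features(word_probs):
    """Summarize per-word conditional probabilities (as produced by a
    language model) into the median surprisal and interquartile range
    used as regression predictors."""
    s = sorted(surprisal(p) for p in word_probs)
    q1, median, q3 = statistics.quantiles(s, n=4)
    return {"median": median, "iqr": q3 - q1}
```

A word the model assigns probability 0.5 carries 1 bit of surprisal; halving the probability adds one bit, so rare, unexpected word choices push the median upward.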
Results:
A multiple linear regression model containing all linguistic features plus demographic information (age, sex, education) significantly predicted MMSE scores (Adjusted R² = 0.41, F(20, 193) = 8.37, p < .001) and explained significantly more variance in MMSE scores than did demographic variables alone (F(17, 193) = 6.85, p < .001). Individual predictors that were significantly correlated with MMSE score included: how unexpected an individual’s word choices were, given the preceding context (median surprisal: r = -0.33, p < .001; interquartile range: r = 0.18, p < .02), mean lexical frequency (r = -0.50, p < .001), and usage of definite articles (r = 0.31, p < .001), nouns (r = 0.26, p < .001), and empty words (r = -0.25, p < .001).
Conclusions:
Participants with lower MMSE scores, indicating greater impairment, used more frequent, yet more surprising, words, and produced more empty words and fewer definite articles and nouns. These results suggest that measures of semantic specificity and coherence of speech could be meaningful predictors of cognitive decline, and can be computed automatically from speech transcriptions. The results also provide novel evidence that computational approaches to estimating lexical predictability may have value in predicting the degree of decline, motivating future work in other speech elicitation tasks and differing clinical groups.
Schizophrenia is often masked as a crisis of adolescence. We want to understand how schizophrenia manifests itself in speech. We expected group differences on the Word Fluency Association Task (WFAT) in the Normality Index and the Time Delay.
Participants and Methods:
Data were collected with the WFAT (authored by V. Kritskaya), which actualizes speech connections based on past experience. The stimuli were two-letter syllables (20 items) and three-letter syllables (30 items) varying in frequency of use, compiled on the basis of corpora of the Russian language. Instruction: “Now I will tell you a syllable; your task is to complete the proposed syllable into real words as quickly as possible.” For study purposes we subdivided the sample into two age subgroups: 12-14 years and 15-17 years. For statistical analysis we used the Mann-Whitney U test. Analyzed parameters:
Normality Index (NI) - the ratio of productive nouns to the number of standard associations
Time Delay (TD) - the participant's response delay (in seconds and milliseconds)
The study involved 57 participants: 27 adolescents with schizophrenia (Cl_G), aged 12 to 18 years, assessed in a hospital setting (diagnoses: F20.xx, F21.xx), and 30 neurotypical peers (Co_G) assessed in a school setting. Exclusion criteria: patients in the acute psychotic phase and left-handers.
Results:
NI showed the following values: subgroup 1, Co_G 70.4 vs. Cl_G 58.5; subgroup 2, Co_G 65.2 vs. Cl_G 56.1. There was no statistically significant difference between Cl_G and Co_G in NI (p = 0.432 and p = 0.25, respectively). TD showed the following values: subgroup 1, Co_G 2.42 vs. Cl_G 22.63; subgroup 2, Co_G 4.48 vs. Cl_G 3.14. There was a statistically significant difference between Cl_G and Co_G in TD (p = 0.000 and p = 0.002, respectively).
Conclusions:
Temporal indicators of cognitive activity differed between the clinical and control groups, which testifies to the significance of this indicator in the context of the WFAT. There was no difference in terms of semantics. However, we expected to see one, because such a difference has previously been shown in a group of adult patients with a severe type of schizophrenia. In future work, we would like to expand the group and select additional methods that evaluate the semantic component.
While physician-assisted suicide legislation is being drafted and passed across the United States, a gray area remains regarding the legality of a lay person’s assistance with suicide. Several high-profile cases have been covered in the media, namely those of Michelle Carter in Massachusetts and William Melchert-Dinkel in Minnesota, but there is also a growing volume of anonymous pro-suicide materials online. Pro-suicide groups fly under the radar and claim to help those desiring to take their own lives. This paper aims to identify the point at which an individual or group can be held civilly or criminally liable for assisting suicide and discusses how the First Amendment can be used to shield authors from such liability.
The end of the constitutional right to abortion with Dobbs v. Jackson Women’s Health stands to generate massive conflict between abortion regulation and the First Amendment. Abortion exceptionalism within constitutional doctrine – which both treats abortion differently than other areas and favors anti-abortion over pro-choice viewpoints – will not retreat but advance, unless confronted by the courts.
This chapter examines constitutional theory and doctrine as applied to emerging government regulations of video image capture and proposes a framework that will promote free speech to the fullest extent possible without facilitating unnecessary intrusions into legitimate privacy interests.
This global overview of how translation is understood as a performative practice across genres, media and disciplines illuminates the broad impact of the 'performance turn' in the arts and humanities. Combining key concepts in comparative literature, performance studies and translation theory, the volume provides readers with a dynamic account of the ways in which these fields fruitfully interact. The chapters display interdisciplinary thinking in action across a wide spectrum of performance practices and media from around the world, from poetry and manuscripts to theatre surtitles, audio description, archives, installations, dialects, movement and dance. Paying close attention to questions of race, gender, sexuality, embodiment and accessibility, the collection's rich array of methodological approaches and experiments with scholarly writing demonstrate how translation as a performative practice can enrich our understanding of language and politics.
Research has shown sound-symbolic associations between speech sounds and conceptual and/or perceptual properties of a referent. This study used the choice response time method to investigate hypothesized associations between a high/low vowel and spatial concepts of up/down and above/below. The participants were presented with a stimulus that moved either upward or downward (Experiments 1 and 2), or that was located above or below the reference stimulus (Experiment 3), and they had to pronounce a vowel ([i] or [æ]) based on the spatial location of the stimulus. The study showed that the high vowel [i] was produced faster in relation to the up-directed and the above-positioned stimulus, while the low vowel [æ] was produced faster in relation to the down-directed and the below-positioned stimulus. In addition, the study replicated the pitch-elevation effect showing a raising of the vocalization pitch when vocalizations were produced to the up-directed stimulus. The article discusses these effects in terms of the involvement of sensorimotor processes in representing spatial concepts.
This article discusses speech and hearing disabled Americans’ claims to citizenship during World War I, and the ways American policymakers sought to rehabilitate American soldiers treated in the U.S. Army Section of Defects of Hearing and Speech—or those classified after the Section’s closure as deaf, hard-of-hearing, or “speech defective.” Ultimately, I argue that one’s aural communication abilities were indicators of worthiness in American society and that this was especially the case during World War I, when tensions about speech and hearing heightened within and outside of the Deaf community due to significant pressures placed on Americans to show support for the war. Such pressures also shaped the experiences of American soldiers treated for speech and hearing disabilities after 1918, by suggesting that their service to the United States could not be complete until they were successfully rehabilitated through lip-reading training. To be able to aurally communicate signified the veterans’ sound citizenship in a literal and a metaphorical sense.
This chapter argues that anthropological studies of human relationships with animals can draw our attention towards – rather than away from – the importance of representation within human ethical lives. The chapter posits that the anthropological ethical turn and the animal turn are each orientated in different ways to the concept of ‘representation’. This means the two traditions speak at cross-purposes to one another, visible particularly in their distinctive innovations in relation to ‘personhood’. From the perspective of the ethical turn, the personhood (or agency) attributed to animals by multispecies ethnographers appears ethically thin. From the perspective of the animal turn, the ethicists’ focus on varied human-held forms of reflection risks repeating dualistic distinctions and portraying animals only as they appear within human accounts. Yet the chapter shows that these two emerging traditions can benefit from one another, in attending to ‘more-than-representational’ (not non-representational) aspects of ethics. These include (1) the material/embodied form of ethical representations, (2) the fact that ethical life often seems inherently hard to represent, and (3) varied ethical attitudes towards representation (such as the ‘honesty’ sometimes associated with lack of speech) that can be seen as and when people interact with their animals.
Phonetics is the science explaining what happens as people talk – that is to say, what happens as we produce the sounds of speech. Speech is a functional part of language, as language is most commonly used in human interaction. Language, in an abstract sense, is something common to all neurotypical humans. This abstract sense of “language” contrasts with specific languages, each one unique. The level of phonetic analysis of language is separate from, but overlapping with, phonology. Phonology focuses more on contrasts, whereas phonetics focuses more on differences. Phonetic variables can be used at very different levels of the grammar in different languages. Traunmüller distinguishes four types of information in speech: phonetic (linguistic), affective, personal, and transmittal. Dialect, register, and the hyperspeech–hypospeech continuum affect specific aspects of phonetic production in a given language. The science of phonetics uses terminology often consisting of ordinary words whose meanings are frequently different from the technical sense.
The mass street demonstrations that followed the 2020 police murder of George Floyd were perhaps the largest in American history. These events confirmed that even in a digital era, people rely on public dissent to communicate grievances, change public discourse, and stand in collective solidarity with others. However, the demonstrations also showed that the laws surrounding public protest make public contention more dangerous, more costly, and less effective. Police fired tear gas into peaceful crowds, used physical force against compliant demonstrators, imposed broad curfews, limited the places where protesters could assemble, and abused 'unlawful assembly' and other public disorder laws. These and other pathologies epitomize a system in which public protest is tightly constrained in the name of public order. Managed Dissent argues that in order to preserve the venerable tradition of public protest in the US, we must reform several aspects of the law of public protest.