
Producing a smaller sound system: Acoustics and articulation of the subset scenario in Gaelic–English bilinguals

Published online by Cambridge University Press:  11 December 2023

Claire Nance*
Affiliation:
Department of Linguistics and English Language, Lancaster, UK
Sam Kirkham
Affiliation:
Department of Linguistics and English Language, Lancaster, UK
Corresponding author: Claire Nance; E-mail: c.nance@lancaster.ac.uk

Abstract

When a bilingual speaker has a larger linguistic sub-system in their L1 than their L2, how are L1 categories mapped to the smaller set of L2 categories? This article investigates this “subset scenario” (Escudero, 2005) through an analysis of laterals in highly proficient bilinguals (Scottish Gaelic L1, English L2). Gaelic has three lateral phonemes and English has one. We examine the acoustics and articulation (using ultrasound tongue imaging) of lateral production in speakers’ two languages. Our results suggest that speakers do not copy a relevant Gaelic lateral into their English, instead maintaining language-specific strategies and producing English laterals with positional allophony. These results show that speakers develop a separate production strategy for their L2. Our results advance models such as the L2LP, which has mainly considered perception data, and contribute articulatory data to this area of study.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike licence (http://creativecommons.org/licenses/by-nc-sa/4.0), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the same Creative Commons licence is used to distribute the re-used or adapted article and the original article is properly cited. The written permission of Cambridge University Press must be obtained prior to any commercial use.
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press

1. Introduction

A fundamental characteristic of learning a new language is adapting the sounds of your first language to the new sound system of a second language. In some cases, there may be a sound in your L1 that is very similar to a sound in the L2, such that only small adjustments in pronunciation are needed. In other cases, there will be entirely new sounds to be learned, requiring you to develop new motions of the tongue, lips and larynx to produce second language speech. A different scenario from both of the above concerns learning a sound in the L2 that is similar to more than one sound in the L1. In this study we examine such a case, whereby Scottish Gaelic has three lateral phonemes and Scottish English has only one. The key question here is: how do L1 Scottish Gaelic speakers learn to produce English laterals? Do they select a particular Gaelic lateral and use this in their English? Do they adopt a combination of strategies that exist across Gaelic laterals? Or do they do something entirely different in English? This type of learning challenge has been called the “subset” scenario by Escudero (2005), and it provides an invaluable context for studying how speakers make links between corresponding phonemes across two or more languages. While the subset scenario has been examined fairly widely in speech perception, we know much less about how speakers develop strategies for L2 sounds in this context, given the wide range of competing options that may exist in their L1.

In this study, we examine the speech of L1 Scottish Gaelic, L2 Scottish English bilinguals, focusing on how they produce the three phonemic laterals of Scottish Gaelic and the one phonemic lateral of Scottish English. An important aspect of our approach is the use of ultrasound tongue imaging, which permits deep insights into the specific articulatory mechanisms used by individual speakers to produce cross-linguistic phoneme categories. This allows us to investigate whether bilingual speakers develop similar or separate speech production targets in Gaelic and English, which has a number of implications for our understanding of bilingual sound systems.

1.1. The subset scenario

When a bilingual speaker has a larger linguistic sub-system in their L1 than their L2, how are L1 categories mapped to the smaller set of L2 categories? In Escudero's (2005) L2LP model of speech learning, the subset scenario refers to a case where speakers have a larger sub-system in their L1, and a smaller sub-system in their L2. Escudero's classic example compares Dutch, which has three mid/high front vowels /i ɪ ɛ/, with Spanish, which only has /i e/. Escudero refers to this learning process as “multiple-category assimilation”, such that several L1 categories have to be compressed into a smaller L2 system. In a series of experiments, Escudero shows that Dutch L1 perceivers initially perceive Spanish vowels according to the nearest Dutch perceptual boundary, resulting in some misperception. With time and experience, Dutch listeners adjust their perceptual boundaries relative to this novel scenario. Further work by van Leussen and Escudero (2015) demonstrates that the process of learning new boundaries is driven by learning new lexical items rather than an awareness of the phonemes in a new language. The subset scenario is predicted to be of “medium difficulty” for learners (Escudero, 2005, p. 125), as it is more difficult than learning to perceive “similar sounds” (e.g., L1 Dutch /u ɔ/ and L2 Spanish /u o/). On the other hand, it is less difficult than learning entirely “new sounds” where more categories are needed in the L2, such as Spanish L1 speakers learning the English /i ɪ/ contrast. In this case, either an entirely new category needs to be created, or an existing Spanish category is split into two.

Other theoretical models of bilingual phonetics and phonology do not explicitly conceptualise the subset scenario in the above way, but would incorporate it as follows. In the Perceptual Assimilation Model of L2 speech learning (PAM-L2), Best and Tyler (2007) argue that L2 sounds are initially assimilated to the L1 system. Extending their discussion of comparing different pairs of sounds, we can assume that the subset scenario would not be too difficult for L2 users. Escudero and Boersma (2002) argue that the subset scenario is similar to two-category assimilation in the PAM (Best, 1994). In the Dutch–Spanish context described above, Dutch listeners are hypothesised to assimilate Dutch /i/ to Spanish /i/, and Dutch /ɛ/ to Spanish /e/. But it is unclear how Dutch /ɪ/ would fit in this model, inspiring the L2LP-specific work on the subset scenario. Similarly, the Speech Learning Model (SLM) and its revised version (SLM-r) do not consider the subset scenario specifically (Flege, 1995; Flege & Bohn, 2021). Flege's model, in contrast to the L2LP, assumes that similar sounds are difficult to acquire in an L2 (Flege & Bohn, 2021, p. 33). From this, we can assume that an L1 Dutch user would find it challenging to produce Spanish /i/ and /e/ in a similar way to a Spanish L1 speaker. But again, the model does not predict how Dutch speakers might allocate the acoustic space reserved for /ɪ/ in their L1. Flege explains that very proficient bilinguals will learn new phonetic categories in their L2, even for similar sounds. It might be assumed from this that very proficient Dutch–Spanish sequential bilinguals will learn Spanish phonetic boundaries rather than just using their Dutch ones.

1.2. The subset scenario in speech production

The models described above have largely focussed on speech perception. Escudero's L2LP covers the subset scenario in most detail, but has been largely tested in terms of the perceptual boundaries between vowels. A small number of studies have investigated how these ideas apply to speech production, and the current study continues research in this vein. For example, Kang and Guion (2006) measured the acoustics of Korean and English plosives in early and late bilinguals (Korean L1, English L2). Korean has a three-way stop contrast among aspirated, fortis and lenis stops, whereas English only has a two-way contrast, phonologically described as voiceless vs. voiced. Kang and Guion's results show that early bilinguals developed five acoustically different stop categories across the two languages, whereas late bilinguals merged Korean and English into three categories: (1) Korean/English aspirated/voiceless, (2) Korean/English fortis/voiced, (3) Korean lenis.

Simonet (2011) considers the production of Catalan and Spanish vowels in Catalan-dominant and Spanish-dominant bilinguals. Catalan has two mid-back vowels /o ɔ/, whereas Spanish has only /o/. Of relevance to the subset scenario is how the Catalan-dominant bilinguals reduce their Catalan vowel system to fit into Spanish. Simonet found that the Catalan-dominant bilinguals produced Catalan /ɔ/ as a distinct category, while Spanish /o/ and Catalan /o/ were merged. Similar to Catalan, Galician has the mid vowel contrasts /e ɛ/ and /o ɔ/, whereas Spanish only has /e/ and /o/. Mayr et al. (2019) investigated the production of these vowels in Galician–Spanish bilinguals from a variety of backgrounds. Their overall results show a merged system in which participants produced Galician /e ɛ/ and Spanish /e/ in a similar manner, as well as Galician /o ɔ/ and Spanish /o/. However, there was substantial individual variation in the results. A small number of participants, mainly Galician-dominant bilinguals, did not merge the vowels in Galician, and differentiated one Galician vowel from one Spanish vowel. Although there were no significant group-level effects, these results suggest different behaviour for bilinguals who are dominant in the language with the larger sub-system. Finally, Beristain (2021) considers the acoustics of Basque and Spanish sibilants in bilingual speakers from three Basque dialects: Azpeitia, Goizueta, and Lemoa. Conservative dialects of Basque such as Goizueta contrast apical /s̺/ and laminal /s̻/, whereas Northern Peninsular Spanish only has /s/ (see footnote 1). Beristain's results showed that Goizueta speakers transferred one of their Basque sibilants into Spanish.

These studies indicate that speakers might develop entirely separate production categories in their L2 (Kang & Guion, 2006), or might merge acoustically similar categories across languages (Beristain, 2021; Mayr et al., 2019; Simonet, 2011). It might be the case that in situations where there is a long history of language contact (Catalonia, the Basque Country, Galicia), languages are more likely to converge. We investigate the subset scenario in bilingual speakers in a different context of long-term language contact outwith Spain: Scotland.

1.3. Speech articulation in two languages

While the research discussed above considers the acoustics of speech production, our research also considers speech articulation, that is, the mechanics of how specific sounds are produced using different parts of the vocal tract. Specifically, we wish to ascertain how the articulatory gestures used by speakers differ between their two languages. The motivation behind this question is to gain a more profound understanding of the nature of bilingual speech. For example, by studying articulation we can begin to better understand the origin of differences or similarities between a speaker's languages. When discussing whether or not speakers develop separate production categories, it is enlightening to determine the articulatory mechanisms behind this. For example, in the production of rhotics, speakers can use very different tongue configurations to produce similar acoustic output (Zhou et al., 2008). By also investigating speech articulation, we are able to gain an alternative but complementary picture of bilingual speech production.

The majority of articulatory research on bilingual speech has used ultrasound tongue imaging, an application of general medical ultrasound imaging to the study of tongue postures and lingual dynamics. In second language applications, recent research has noted the possible benefits of using ultrasound to visualise speech articulation in L2 pronunciation training. For example, Gick et al. (2008) used ultrasound to train three Japanese learners of English to better discriminate English /l/ and /ɹ/. When learning new sounds in a previously unknown L2, there is limited evidence of efficacy for ultrasound training (Cleland et al., 2015; Lin et al., 2019; Roon et al., 2020). However, similar to Gick et al. (2008), other studies which aim to train people already using the L2 to improve the intelligibility of their speech do report positive benefits of ultrasound visualisation and training for target articulations (Kocjančič Antolík et al., 2019; Sisinni et al., 2016; Tateishi & Winters, 2013).

In addition to the above research on using ultrasound for L2 speech training, there is a growing body of work focusing on bilingual and cross-linguistic articulatory strategies. Wilson and Gick (2014) demonstrate that highly proficient bilingual speakers use language-specific tongue shapes during the pauses between stretches of speech. Investigating similar sounds in two languages, Kirkham and Nance (2017) compared tongue shapes used in Twi, Ghanaian English, and British English. Twi has advanced tongue root vowel contrasts, while British English has tense/lax vowel contrasts which superficially sound similar (Halle & Stevens, 1969; Perkell, 1971). In that study, we demonstrated that different articulatory strategies are used in Twi, Ghanaian English, and British English to produce these sounds: Ghanaian English vowels represent an articulatory mid-point between the advanced tongue root vowels in Twi and the tense/lax contrasts in British English. Similarly, Oakley's (2019) analysis shows that many L2 speakers of French use differing tongue shapes for similar sounds in French and English (though some speakers used the same shapes).

The research discussed here suggests that proficient bilingual speakers mainly use differing tongue strategies across their two languages, even for acoustically similar sounds. However, previous work has compared pairs of sounds – for example, British English /i ɪ/ compared to Twi /i̘ i̙/ – and has investigated Flege's (1995) claim that proficient bilinguals will develop separate phonetic categories in their two languages. Articulatory research has not yet considered the subset scenario in detail, which is the aim of the current study.

1.4. Scottish Gaelic context and phonology

Our study considers laterals in Scottish Gaelic, an endangered language spoken in Scotland (and amongst an immigrant population in Nova Scotia, Canada). The language is usually referred to by its speakers simply as “Gaelic” [ɡalɪk] and it is spoken by approximately 58,000 people in Scotland according to the most recently available data (Scottish Government, 2015). Our data were collected on the Isle of Lewis, the largest and most northerly island in the Outer Hebrides, where approximately 60% of the population reported the ability to speak Gaelic (Scottish Government, 2015). Gaelic is currently undergoing a language revitalisation programme including investment in media, arts and culture, immersion education, language learning, and adult education. Since 2005, Gaelic has theoretically enjoyed equal legal status to English. Family transmission of Gaelic is very limited and even in communities like the Isle of Lewis, most children acquiring Gaelic do so mainly through the school system (Nance, 2020; Will, 2012). For an overview of Gaelic policy and language development see McLeod (2020).

Gaelic phonology differs substantially from English in many ways, including pre-aspirated plosives, a pitch accent system in the Lewis dialect, nasal vowels, and svarabhakti vowels (Nance & Ó Maolalaigh, 2021). Perhaps the most substantial difference between Gaelic and English phonology is that consonants in Gaelic mainly have palatalised and non-palatalised counterparts, similar to Russian. Laterals, nasals and rhotics, however, have three phonemic variants: palatalised, plain, and velarised (Nance & Kirkham, 2020, 2022). We focus here on laterals, which can be shown in the IPA as follows: /l̪ʲ l l̪ˠ/. This system is large and typologically unusual, with only 12% of languages in Maddieson (1984) having three or more laterals.

The variety of English spoken on the Isle of Lewis is influenced by Gaelic due to long-term language contact. English, including Lewis English, has only one lateral phoneme. In Lewis English, the lateral has been referred to as a “clear” lateral approximant (Shuken, 1984). Laterals in English have positional allophony such that syllable-initial laterals are “clear” (produced with tongue body fronting), and syllable-final laterals are “dark” (produced with tongue body retraction) (Sproat & Fujimura, 1993). Even in varieties where the lateral is referred to as “clear”, there appears to be some positional allophony by which syllable-initial laterals are clearer than syllable-final ones (Carter & Local, 2007). The context of Gaelic–English bilingualism therefore represents an interesting combination of phonetic and phonological learning. In terms of phonemic structure, Gaelic has three laterals and English only one. Phonetically, however, English laterals differ depending on syllable position. Gaelic speakers acquiring English have to learn an allophonic rule governing which of two lateral variants appears in which syllable position. Previous work generally indicates that late bilinguals or lower-proficiency bilinguals might find it challenging to acquire allophonic rules in their L2, but early/higher-proficiency bilinguals do successfully acquire them (Dmitrieva et al., 2010; Lee et al., 2006; see also the early bilinguals in Amengual & Simonet, 2020).

1.5. Summary and Research Questions

In this study we use acoustic analysis and ultrasound tongue imaging to investigate speech production in a subset scenario: our speakers acquired Gaelic on the Isle of Lewis as their L1 and acquired English sequentially through interaction with society outside the home. Phonologically, Gaelic has three laterals /l̪ʲ l l̪ˠ/, while English has only one /l/ phoneme, with different phonetic variants as a consequence of onset/coda syllabification. Our study uses ultrasound to allow (relatively) easy viewing of the midsagittal tongue shape, a vital articulatory dimension for distinguishing palatalised and velarised consonants. Ultrasound has been widely used to investigate tongue body fronting and retraction in British English sonorants (Lawson et al., 2011, 2013, 2018; Lawson & Stuart-Smith, 2021; Turton, 2017), as well as phonemic palatalisation/velarisation in a variety of languages including Gaelic and closely-related Irish (Bennett et al., 2018; Howson et al., 2014; Nance & Kirkham, 2022; Sung et al., 2015).

Based on the literature, we see two potential production strategies. First, speakers could pick one L1 sound as phonetically “similar” and use this in the L2 (Beristain, 2021; Mayr et al., 2019; Simonet, 2011). The Gaelic palatalised or plain laterals might be good candidates for this, given that Lewis English laterals have previously been described as “clear” even in syllable-final position (Shuken, 1984). Alternatively, speakers could develop a novel L2 category for their English lateral unlike any of their Gaelic laterals (Kang & Guion, 2006).

We address two specific research questions, which concern the degree of separation between Gaelic and English laterals and the extent to which Gaelic L1 speakers acquire English allophonic patterns in their L2:

1) Do Gaelic–English bilinguals develop a separate category for their English lateral?

2) Do Gaelic–English bilinguals learn clear/dark lateral allophony in English?

In order to address these questions, we conducted a series of acoustic and articulatory analyses, as described below.

2. Methods

2.1. Speakers and bilingualism

The speakers for this study were recorded in Stornoway on the Isle of Lewis, Outer Hebrides, Scotland. Lewis is the most northerly and largest island in the Outer Hebrides chain, which lies off the north-west coast of Scotland. We originally recorded fifteen speakers for this study. After eliciting the experimental materials, speakers were interviewed about their Gaelic acquisition trajectory, frequency of Gaelic usage in daily life, contexts of Gaelic use in daily life, and their occupation and professional Gaelic usage. Twelve of the speakers were born and raised on Lewis and grew up in Gaelic-speaking homes. They learned English sequentially when interacting outside the family aged 3–4, and all used Gaelic professionally, apart from two retired participants. Seven of these speakers were suitable for analysis of ultrasound tongue images; this smaller number was due to some speakers’ anatomy imaging poorly, or technical issues during recording. As we aim to compare acoustic and ultrasound data, we included only these seven speakers (three female, four male) in both analyses. The speakers were aged 21–72 at the time of recording. We recognise this is a large age range; however, our speakers were socially consistent: all grew up using Gaelic, use more Gaelic than English in their daily lives, and use Gaelic in professional settings (or did so before retirement). Due to the fragility of Gaelic language transmission, there are not a large number of Gaelic-dominant bilinguals, and our sample represents an important proportion of those available on Lewis. We acknowledge that our sample size is relatively small, though this is not unusual in studies employing ultrasound tongue imaging due to the time needed for analysis (see footnote 2). Given the small sample size, our results must be treated with caution, and we encourage future research to explore this topic further.

2.2. Data collection

Data were collected in a community centre on Lewis, or at the speaker's workplace. Acoustic and ultrasound data were collected simultaneously. We collected the acoustic data with a Beyerdynamic Opus 55 headset microphone, a Sound Devices USBPre audio interface, and a laptop computer. The microphone was attached to a headset used to stabilise the ultrasound probe (Articulate Instruments, 2008). Our word list was presented to participants on the computer screen using the Articulate Assistant Advanced (AAA) software (Articulate Instruments, 2018). We recorded midsagittal B-mode ultrasound using a Telemed MicrUs system at a frequency of 2 MHz and a frame rate of ~92 Hz. The location of each speaker's occlusal plane was imaged by asking them to bite on a plastic bite plate fixed behind the upper teeth and push their tongue up against the plate (Scobbie et al., 2011). The ultrasound and audio were synchronised by recording the TTL pulse that the ultrasound emits at the completion of each frame to a simultaneous audio track, giving frame-level synchronisation between ultrasound and audio.

The stimuli used in this study are shown in Table 1. We also recorded nasals and rhotics in Gaelic, but these are not reported on here (see Kirkham & Nance, 2022; Nance & Kirkham, 2020). In total, the participants read 70 words three times (210 tokens each), which took 20–25 minutes. The whole recording session, including our ethics procedures, fitting the ultrasound helmet, reading the list, and the bilingualism questionnaire, took around 1 hour. We aimed to elicit laterals in the context of /i a u/ in word-initial and word-final position, though this was not always possible due to lexical gaps in Gaelic and the historical evolution of palatalisation in front vowel contexts and velarisation in back vowel contexts. The list was repeated three times in random order without a carrier phrase (in order to reduce the total time of the experiment and prevent fatigue from wearing the ultrasound headset). In word-initial position, the plain Gaelic phonemes are produced in mutation contexts (a set of morphophonological consonant alternations in the Celtic languages). As such, we elicited them in short phrases – for example, preceded by the word mo ‘my’, or by prepositions which trigger mutation. For more information and examples of mutation in Gaelic, see Nance and Ó Maolalaigh (2021).

Table 1. Stimuli used in this study.

2.3. Analysis

Data segmentation

The duration of the sonorant and the preceding/following vowel was segmented in Praat (Boersma & Weenink, 2021) by a research assistant and checked by the first author. The TextGrids were then used to determine the timepoints needed for the acoustic and articulatory analysis. Due to long-range coarticulatory effects on formants, it is not always straightforward to place a boundary between the lateral and the adjacent vowel. In these cases, we used acoustic information, such as a change in the second formant and a change in the amplitude of the waveform, to indicate boundaries between vowels and sonorants, as well as auditory cues. Five tokens were removed because it was not possible to segment them confidently, or due to technical errors. In these data, all laterals were produced as laterals. Previous work has shown that some younger speakers produce palatalised laterals as palatal glides (Nance, 2014). Vocalised productions have not been noted for velarised laterals in Gaelic, though it is possible some speakers do this; vocalised laterals were not found in these data. For examples of the segmentation carried out, see Figure 1.

Figure 1. Example segmentation of a word-initial (velarised) Gaelic lateral, and word-final (plain) Gaelic lateral from a female speaker.

Acoustic analysis

The acoustic data were low-pass filtered at 11,025 Hz and downsampled to 22,050 Hz. We extracted formant measures using LPC analysis, which was set to detect five formants below 5500 Hz (female speakers) or 5000 Hz (male speakers). Values were z-scored within speakers. Our measurements were taken at the sonorant midpoint for word-initial sonorants, and at 20% of the interval duration for word-final sonorants. The reason for this timepoint in word-final position is that our analyses have revealed significant devoicing in word-final sonorants in Gaelic, so it would not be meaningful to extract formant values any later in the duration of the consonant (note the devoicing evident in the word-final lateral in the second panel of Figure 1). Measurements were taken using Praat, which was run from R using the speakr package (Coretta, 2021b).

This analysis aims to examine the acoustic properties of sonorants in Gaelic and English. To do this, we measured the difference between the second and first formant (F2–F1), which approximates a continuum of lateral clearness/darkness (Morris, 2017; Simonet, 2010; Sproat & Fujimura, 1993) and corresponds to differences in phonemic palatalisation (Howson, 2018; Iskarous & Kavitskaya, 2010; Nance, 2014; Nance & Kirkham, 2020).
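To illustrate these measures concretely, the following R sketch (not the authors' published code; the data frame and column names are hypothetical) shows how the measurement timepoints, the F2–F1 difference, and the within-speaker z-scoring described above could be computed:

library(dplyr)

# `formants` is a hypothetical data frame with one row per token and columns:
# speaker, word, position ("initial"/"final"), seg_start, seg_end (segment
# boundaries in seconds), and f1, f2 (formant values in Hz extracted in Praat
# at the measurement timepoint).

# Measurement timepoint: midpoint for word-initial laterals, 20% of the
# interval for word-final laterals (to avoid the devoiced portion).
formants <- formants %>%
  mutate(t_meas = seg_start +
           ifelse(position == "initial", 0.5, 0.2) * (seg_end - seg_start))

# F2-F1 as an index of clearness/darkness, z-scored within each speaker
laterals <- formants %>%
  mutate(f2_f1 = f2 - f1) %>%
  group_by(speaker) %>%
  mutate(f2_f1_z = as.numeric(scale(f2_f1))) %>%
  ungroup()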

Articulatory analysis

We fitted splines to the ultrasound images automatically in AAA. A research assistant then hand-corrected any major tracking errors within a region of interest. The splines were rotated and scaled to the occlusal plane using the occlusal plane estimate for each speaker. Data were exported from AAA and analysed in R using the rticulate package (Coretta, 2021a).

The articulatory analysis aims to capture a comparable snapshot of consonant articulation in both languages. In order to obtain a numerical value characterising the extent of palatalisation and velarisation, we conducted Principal Component Analysis (PCA) on the tongue splines in order to reduce the dimensionality of the data and compare across speakers (Bennett et al., 2018; Johnson, 2011; Stone, 2005; Turton, 2017). The PCA was run on the 42 pairs of x/y coordinates of the tongue splines at sonorant midpoint. The PCA reduces these 84 axes of variation to a smaller number of values which characterise the shape of each curved tongue spline. Prior to analysis, the values from each speaker were z-scored. PCA was then run using the princomp function in R. Following Baayen (2008, p. 130), we retain PCs that account for >5% of the variance. In these data, this spans PCs 1–4, which together explain 89% of the variance in the dataset.
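A minimal sketch of this dimensionality-reduction step is given below (illustrative object names only; the full analysis code is available in the OSF repository linked in the Data availability statement):

# `splines` is a hypothetical data frame with one row per token and 84 columns
# (x1, y1, ..., x42, y42), containing coordinates already z-scored within speaker.
pca <- princomp(splines)

# Proportion of variance accounted for by each component
var_prop <- pca$sdev^2 / sum(pca$sdev^2)

# Retain the components that each account for more than 5% of the variance
keep <- which(var_prop > 0.05)
pc_scores <- pca$scores[, keep]   # per-token PC values used in the statistical models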

Each Principal Component (PC) is a linear function that accounts for a certain dimension of variation in the data. We develop an interpretable account of each PC by plotting the mean of all the tongue splines, along with this mean plus/minus the standard deviation of each PC's values multiplied by the intercept of the PC function. This shows the extent to which each PC represents different axes of variation in the dataset (see Johnson, 2011, pp. 95–102, for more information). The results of this analysis and the proportions of variation explained by the first four components are shown in Figure 2.

Figure 2. Axes of variation explained by the first four Principal Components, and the proportion of the variation for which each one accounts.
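One common way of constructing such axis-of-variation plots (a sketch only, not necessarily the authors' exact procedure) is to reconstruct tongue shapes from the overall mean spline plus or minus one standard deviation of a PC's scores along that PC's loading vector:

# Continues from the `pca` object sketched above (princomp over 84 spline coordinates).
# Note: on within-speaker z-scored data the "mean shape" is in normalised units.
mean_shape <- pca$center                   # overall mean spline
pc1_load   <- pca$loadings[, 1]            # loading (direction) vector for PC1
pc1_sd     <- sd(pca$scores[, 1])          # spread of the PC1 scores

shape_hi <- mean_shape + pc1_sd * pc1_load # tongue shape at +1 SD on PC1
shape_lo <- mean_shape - pc1_sd * pc1_load # tongue shape at -1 SD on PC1

# Plot the mean shape against the +/- 1 SD reconstructions (x against y coordinates)
xs <- seq(1, 83, by = 2); ys <- seq(2, 84, by = 2)
plot(mean_shape[xs], mean_shape[ys], type = "l", xlab = "x", ylab = "y")
lines(shape_hi[xs], shape_hi[ys], lty = 2)
lines(shape_lo[xs], shape_lo[ys], lty = 3)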

Our analysis focuses on PC1, which explains 49% of the variation in the data. Figure 2 shows that this PC corresponds to tongue body fronting and backing, which is the primary articulatory correlate of palatalisation/velarisation (Kochetov, 2002; Malmi & Lippus, 2019). Analysis of PC2–4 can be found in the supplementary materials (Figures S3–S5; Tables S3–S5).

2.4. Statistics

Our acoustic analysis concerns one continuous variable, F2–F1, and our articulatory analysis centres on one continuous variable, PC1 (PC2–4 are reported in the supplementary materials: Figures S3–S5; Tables S3–S5). These values were modelled using linear mixed effects modelling in R with the lme4 package (Bates et al., 2015). Separate models were fitted for F2–F1 and PC1. In each of the models, the baseline was a Gaelic palatalised sonorant in initial position in /i/ vowel context. Velarised, plain and English sonorants were compared against this via dummy coding. Further comparisons between the sonorant categories were made via post hoc testing (Tukey method) using the emmeans package (Lenth, 2021). Each model included word position, vowel category, and an interaction of word position and phoneme category. Word was included as a random intercept, and we also included a random slope of category by speaker. In two of the models, we used a random intercept of word and a random intercept of speaker only (no slopes) as the model would not otherwise converge. We used the bobyqa optimisation method from the optimx package (Nash & Varadhan, 2011; Nash, 2014) (see footnote 3). A sketch of this model structure is given below.
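The following R sketch illustrates the model structure and post hoc comparisons described above (variable names are illustrative; for simplicity it uses lme4's built-in bobyqa optimiser rather than the optimx implementation cited above):

library(lme4)
library(emmeans)

# `laterals` is a hypothetical data frame containing the z-scored outcome (here
# f2_f1_z) and factors: category (palatalised/plain/velarised/English),
# position (initial/final), vowel (/i a u/), word, and speaker.
m <- lmer(f2_f1_z ~ vowel + position * category +
            (1 | word) + (category | speaker),
          data = laterals,
          control = lmerControl(optimizer = "bobyqa"))

# Post hoc pairwise comparisons between categories within each word position
emmeans(m, pairwise ~ category | position, adjust = "tukey")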

We had no theoretically motivated reason to expect gender differences between the speakers, and previous analyses of Gaelic laterals have not exhibited systematic sociolinguistic variation along this dimension (Nance, 2020). In addition, given the relatively small number of speakers in this study, including a further factor of gender would risk an underpowered model. For both the F2–F1 formant data and the PC1 values, we transformed the data to z-scores in order to normalise for expected differences between speakers with vocal anatomy of differing sizes (Lobanov, 1971).

3. Results

3.1. Lateral acoustics

The F2–F1 values for Gaelic and English are shown in Figure 3, and the results of the statistical modelling in Table 2. The modelling shows significant effects of word position, vowel context, and a significant interaction of category and word position. In word-initial position, the English lateral is closest to the Gaelic palatalised lateral, and significantly different from the Gaelic velarised lateral. In word-final position, the English lateral is closest to the Gaelic plain lateral and significantly different from the Gaelic palatalised lateral. In word-initial position, there is no significant difference between the plain and velarised Gaelic laterals. In word-final position, there is no significant difference between the palatalised and plain Gaelic laterals. The significant interaction of category and word position, combined with the results of the post hoc testing, indicates that the English lateral has quite different acoustics in word-initial and word-final position. This interaction is plotted in Figure 5. In this analysis and the following articulatory analysis, we have not reported post hoc testing for vowel context in order to better focus on the phonemic/language category results. For the interested reader, analyses by vowel context are included in the supplementary materials (Figures S1–S2; Tables S1–S2).

Figure 3. Lateral acoustic results.

Table 2. Lateral acoustic statistics. Full model AIC is 1233.34, compared to a null model AIC of 1273.84.

3.2. Lateral articulation

The PC1 values for Gaelic and English are shown in Figure 4, and the results of the statistical modelling in Table 3. The statistical modelling shows significant effects of phoneme/language category and vowel context. In word-initial position, the velarised and English laterals are significantly different, and the English lateral is closest to the plain Gaelic lateral. The palatalised and velarised Gaelic laterals are significantly different, as are the plain and velarised. In word-final position, the English lateral is most different from the palatalised Gaelic lateral, and most like the plain. The palatalised and velarised Gaelic laterals are significantly different. The different results of the post hoc testing in word-initial and word-final position suggest that the English lateral is somewhat different in word-initial and word-final position, although the category*position interaction was not significant. For ease of interpreting our results, the non-significant interaction is plotted in Figure 5.

Figure 4. Lateral articulatory results.

Table 3. Lateral articulatory statistics. Full model AIC is 1063.22, compared to a null model AIC of 1093.70.

Figure 5. Interaction of phoneme/language category and word position for acoustics (left panel) and articulation (right panel). Note the interaction for articulation was non-significant, but is plotted here to aid in interpreting the results.

3.3. Results summary

Our results do not straightforwardly indicate that speakers select one Gaelic phoneme and produce their English lateral in a phonetically similar way (RQ1). Instead, English laterals appear to show values in-between particular Gaelic categories. In acoustics, the English lateral is most similar to the palatalised Gaelic lateral in word-initial position, and somewhere between the velarised and plain Gaelic laterals in word-final position. In articulation, the English lateral is very similar to the plain Gaelic lateral in word-initial position, and in-between the plain and velarised Gaelic laterals in word-final position.

Comparing the acoustic and articulatory results in Figure 5, it is clear that there is a large difference between the English word-initial and word-final laterals in acoustics. Values for the Gaelic laterals are almost the same in word-initial and word-final position. In articulation, there is a (non-significant) difference between word-initial and word-final position for the English laterals, and the values of the Gaelic laterals are almost identical in word-initial and word-final position. Taken together, the results suggest positional allophony in English, but not in Gaelic (RQ2).

4. Discussion

In this Discussion, we return to our two research questions:

1) Do Gaelic–English bilinguals develop a separate category for their English lateral?

2) Do Gaelic–English bilinguals learn clear/dark lateral allophony in English?

We then return to the broader aim of this study: the production of sounds in a subset scenario. We consider this question and its wider implications for theories of bilingualism in Section 4.2.

4.1. Laterals in Gaelic–English bilinguals

We hypothesised two potential production possibilities for the English lateral by Gaelic L1 speakers: speakers could pick one L1 sound as phonetically “similar” and use this in the L2 (Beristain, 2021; Mayr et al., 2019; Simonet, 2011); or speakers could develop a novel L2 category (Kang & Guion, 2006).

Our results suggest that speakers do not straightforwardly copy a Gaelic lateral into English (RQ1). However, the statistical modelling is inconclusive here: the English lateral was not significantly different from all Gaelic laterals in all positions. In our acoustic results, the English lateral was significantly different from the Gaelic velarised lateral in word-initial position, and significantly different from the Gaelic palatalised lateral in word-final position. In articulation, the English lateral was again significantly different from the Gaelic velarised lateral in word-initial position, and most different from the Gaelic palatalised lateral in word-final position. We suggest that the spread of data between Gaelic categories is more likely to support the argument that speakers are doing something different in English, rather than merely using a particular Gaelic lateral category in their L2 (Figures 3 and 4). English laterals, therefore, appear to occupy a slightly different acoustic/articulatory space compared with Gaelic. Future research with a larger dataset will be needed to confirm this interpretation. An alternative interpretation is that Gaelic speakers use the Gaelic palatalised lateral in word-initial position in English, and the Gaelic plain lateral in word-final position in English. We think this unlikely, however, and instead propose that speakers have developed language-specific categories within a broader acoustic-articulatory space. Beyond the distributional nature of our data, the second reason for favouring our interpretation is the large positional difference in English values, with this prosodic conditioning suggesting a different system in English than in Gaelic.

We argue that the large positional differences in the production of the English laterals are evidence of the positional allophony found in English (RQ2), where syllable onsets display more palatalised realisations and syllable codas display more velarised realisations (Sproat & Fujimura, 1993). We find no strong evidence of this allophony in Gaelic. Examining the interaction plots shown in Figure 5, there may be a very small difference between word-initial and word-final laterals in Gaelic, but the magnitude of this difference is extremely small in comparison to English. With the current dataset, we suggest that the significant position*category interaction in the acoustic data is driven by the differences in realisation of the English lateral. The differences we find between Gaelic and English are even more noteworthy when we take into account that our speakers are using Lewis English, a variety that has developed from close contact between Gaelic and English (Shuken, 1984). This agrees with previous work demonstrating that early high-proficiency bilinguals acquire the allophonic rules of their L2 (Amengual & Simonet, 2020; Lee et al., 2006).

Our results appear to be more consistent with Kang and Guion's (2006) early bilinguals, who developed separate production categories for Korean and English stops. Our results are less consistent with work on Spanish–Catalan/Basque/Galician bilinguals in Simonet (2011), Beristain (2021; Goizueta dialect), and Mayr et al. (2019), where speakers instead used production values from the larger system (Catalan/Basque/Galician) in the smaller system (Spanish). We suggest that this discrepancy comes from the nature of laterals in English. Our speakers have acquired a rule by which there are two different production targets depending on syllable position in English (i.e., the clear/dark allophony). We argue that this systematic variation in English has led to our speakers acquiring a different lateral realisation system in their L2 compared to their L1 Gaelic. Such effects would not be present in the contexts described by Simonet, Beristain, and Mayr et al. A further factor in our analysis is that the speakers in question acquired Gaelic in a world which is now dominated by English. Even in the north-west islands of Scotland such as Lewis, it is now rare for children to grow up using Gaelic in the home as our speakers did. In this respect, our speakers are potentially more comparable with the Korean–English immigrant children described in Kang and Guion (2006) than with contexts where the minority language is in a stronger position, such as the Catalan-, Galician-, and Basque-speaking regions of Spain.

Our analysis showed more significant results in the acoustic analysis than in the articulatory analysis. There are at least three potential explanations for this. First, our ultrasound measure is a relatively sparse measure of articulation: we only consider a single principal component, which captures around 50% of the variability in the data (though see the supplementary materials for analysis of PC2–4). Second, a midsagittal tongue shape is a sparse representation of speech articulation, albeit one that is well-known to adequately capture lateral productions. Third, this could represent greater speaker-specific variation in articulation, despite greater consistency in acoustics. The latter point is partly corroborated by our previous analysis of rhotic productions from the same speakers, which indicates substantial individual differences in the tongue shape strategies used in rhotic production. Specifically, half our speakers produced rhotics with the tongue tip raised, and half with the tongue front bunched (Nance & Kirkham, 2022). This result is similar to previous studies showing individual strategies for rhotic production (King & Ferragne, 2020; Mielke et al., 2016; Zhou et al., 2008). It may be the case that speakers aim for an acoustic target and then optimise their articulations to achieve this target (Tourville & Guenther, 2011). As well as more significant results in the acoustic data, we also generally found more significant results in the word-initial data compared to the word-final data. We suggest that this is due to speakers hyperarticulating in the prosodically strong context of syllable onsets, as opposed to hypoarticulating in the prosodically weaker context of syllable codas (Lindblom, 1990).

4.2. Producing sounds in a smaller sub-system

We now return to theoretical models of bilingualism and the subset scenario discussed in Section 1. The subset scenario is most clearly theorised in the L2LP model (Escudero, 2005), but research so far has mainly been concerned with perceptual boundaries in vowels (Escudero, 2005; van Leussen & Escudero, 2015). Our data support the proposal of Escudero and colleagues that, with extensive learning experience, speakers adjust phonetic boundaries and develop the categories needed in the subset language. Similarly, Flege's SLM would predict that highly proficient bilinguals, such as our speakers, would develop new production categories in their sequentially acquired language, although the SLM makes no unique predictions about the subset scenario.

Escudero and Boersma (2002) suggested that the Perceptual Assimilation Model (PAM-L2) predicts two-category assimilation in the subset scenario (and an unknown outcome for the third sound in the larger system). In our context, Gaelic has three phonemes and English only one, rather than the three-to-two situation investigated in Escudero and Boersma (2002). However, we find that Gaelic–English bilinguals have acquired a separate allophonic rule in English, rather than copying over a specific Gaelic lateral into their L2. As such, our results do not appear to support the predictions of the PAM-L2.

Our analysis of the subset scenario is novel in including articulatory data alongside the acoustic analysis. Our articulatory results broadly support the results from acoustics, with the addition of more individual variability in the articulatory data. The analysis shows that even in a crowded acoustic/articulatory space, Gaelic L1 speakers adopt new strategies for tongue movements in English, their L2. These results mirror previous articulatory studies of highly proficient bilinguals and indicate that speakers develop language-specific articulatory routines (Kirkham & Nance, 2017; Oakley, 2019; Wilson & Gick, 2014). Our motivation for analysing speech articulation here was to gain a more profound understanding of what is meant by “developing separate categories” and how that development is achieved. Our results suggest that speakers can develop separate categories in their sequentially acquired language in both acoustics and articulation. We have suggested some reasons why we found more results in the acoustic data than the articulatory data – however, more research is needed on this topic to better understand the nature of acoustic-articulatory mappings.

5. Conclusion

Our results support the broad predictions of the L2LP, and implied predictions of the SLM, with regard to the subset scenario: highly proficient bilinguals will develop separate categories in their two languages. Here, it does not seem to be the case that Gaelic–English bilinguals pick one Gaelic lateral and copy this onto the English lateral. Instead, their English laterals appear to occupy separate acoustic/articulatory space. In addition to this, Gaelic speakers acquire English lateral positional allophony, such that English lateral realisation varies depending on prosodic context. Our data contribute a perspective from articulation which broadly backs up our observations from the acoustic data. We found fewer significant results in our articulatory dataset and suggest that this stems from greater variability in these data, which could represent unmeasured aspects of articulation or speaker-specific articulations.

Our study is limited by its sample size and the small number of speakers in the community under consideration here. The current results should be considered in light of this and tested via future analysis of the same questions in larger populations. In the Gaelic context, future work could also compare speakers to those in locations in central Scotland where the local variety of English is less influenced by language contact with Gaelic, in order to provide another perspective on the effects of individual bilingualism on the one hand, and long-term language contact on the other. Further research in this area could also investigate the speech production of differing configurations of bilinguals, or adopt a longitudinal design, which would allow examination of how speakers initially negotiate the subset scenario and how their categories evolve over time.

Supplementary Material

The supplementary material for this article can be found at https://doi.org/10.1017/S1366728923000688.

Figure S1: Values of F2–F1 in laterals preceding the three different vowel contexts in this analysis i.e., /i a u/.

Table S1: Posthoc testing comparing values for F2–F1 in the different vowel contexts.

Figure S2: Values of PC1 in laterals preceding the three different vowel contexts in this analysis i.e., /i a u/.

Table S2: Posthoc testing comparing values for PC1 in the different vowel contexts.

Figure S3: PC2 values according to each phoneme/language category and word position.

Table S3: Statistical modelling of PC2 values considering the effects of phoneme/language category, vowel, word position, and position/category interaction.

Figure S4: PC3 values according to each phoneme/language category and word position.

Table S4: Statistical modelling of PC3 values considering the effects of phoneme/language category, vowel, word position, and position/category interaction.

Figure S5: PC4 values according to each phoneme/language category and word position.

Table S5: Statistical modelling of PC4 values considering the effects of phoneme/language category, vowel, word position, and position/category interaction.

Data availability statement

The data and code for our analysis are available at https://osf.io/4r3f9/. With the kind permission of two of the speakers involved, example audio and ultrasound videos can be viewed and listened to at https://seeingspeech.ac.uk/gaelic-tongues/.

Acknowledgements

Mòran taing to the participants in this study who generously gave up time to take part. We are very grateful to the Editors of Bilingualism: Language and Cognition, and the anonymous reviewers, for their constructive comments on this article. Thanks also to Lois Fairclough for manual correction of the ultrasound data. This study was conducted with the assistance of two grants from the Faculty of Arts and Social Sciences at Lancaster University.

Footnotes

This article has earned badges for transparent research practices: Open Data and Open Materials. For details see the Data Availability Statement.

1 Azpeitia and Lemoa Basque have different sibilant mergers, meaning that the subset scenario is not relevant here. Specifically, the Azpeitia dialect has one merged Basque sibilant /s̻/, and the Lemoa dialect has one merged Basque sibilant /s̺/. For more information see Beristain (2021).

2 For example, recent comparable papers on ultrasound and endangered languages in the Journal of Phonetics include analyses of four Kalasha speakers (Hussain & Mielke, 2021) and six Arrernte speakers (Tabain & Beare, 2018).

3 Model formula: lmer(analysis variable ~ vowel + position*category + (1|word) + (category|speaker)).

References

Amengual, M., & Simonet, M. (2020). Language dominance does not always predict cross-linguistic interactions in bilingual speech production. Linguistic Approaches to Bilingualism, 10(6), 847–872. https://doi.org/10.1075/lab.18042.ame
Articulate Instruments. (2008). Ultrasound stabilisation headset: User's manual revision 1.5. Articulate Instruments.
Articulate Instruments (Ed.). (2018). Articulate Assistant Advanced version 2.17. Articulate Instruments.
Baayen, R. H. (2008). Analyzing linguistic data: A practical introduction to statistics. Cambridge University Press.
Bates, D., Mächler, M., Bolker, B., & Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1–48.
Bennett, R., Ní Chiosáin, M., Padgett, J., & McGuire, G. (2018). An ultrasound study of Connemara Irish palatalization and velarization. Journal of the International Phonetic Association, 1–44.
Beristain, A. (2021). Spectral properties of anterior sibilant fricatives in Northern Peninsular Spanish and sibilant-merging and non-merging varieties of Basque. Journal of the International Phonetic Association, 1–32. https://doi.org/10.1017/s0025100320000274
Best, C. (1994). The emergence of native-language phonological influences in infants: A perceptual assimilation model. In Goodman, J. C., & Nusbaum, H. C. (Eds.), The development of speech perception: The transition from speech sounds to spoken words (pp. 167–224). MIT Press.
Best, C., & Tyler, M. (2007). Nonnative and second-language speech perception: Commonalities and complementarities. In Munro, M., & Bohn, O.-S. (Eds.), Second language speech learning: The role of language experience in speech perception and production (pp. 13–34). John Benjamins.
Boersma, P., & Weenink, D. (2021). Praat: Doing phonetics by computer [Computer program]. http://www.praat.org/
Carter, P., & Local, J. (2007). F2 variation in Newcastle and Leeds English liquid systems. Journal of the International Phonetic Association, 37(2), 183–199.
Cleland, J., Scobbie, J., Nakai, S., & Wrench, A. (2015). Helping children learn non-native articulations: The implications for ultrasound-based clinical intervention. Proceedings of the 18th International Congress of Phonetic Sciences.
Coretta, S. (2021a). rticulate: Ultrasound tongue imaging in R. https://CRAN.R-project.org/package=rticulate
Coretta, S. (2021b). speakr: A wrapper for the phonetic software 'Praat'. https://CRAN.R-project.org/package=speakr
Dmitrieva, O., Jongman, A., & Sereno, J. (2010). Phonological neutralization by native and non-native speakers: The case of Russian final devoicing. Journal of Phonetics, 38(3), 483–492. https://doi.org/10.1016/j.wocn.2010.06.001
Escudero, P. (2005). Linguistic perception and second language acquisition: Explaining the attainment of optimal phonological categorization [PhD thesis]. University of Utrecht.
Escudero, P., & Boersma, P. (2002). The subset problem in L2 perceptual development: Multiple-category assimilation by Dutch learners of Spanish. In Skarabela, B., Fish, S., & Do, A. H.-J. (Eds.), Proceedings of the 26th Annual Boston University Conference on Language Development (pp. 208–221). Cascadilla Press.
Flege, J. (1995). Second language speech learning: Theory, findings, and problems. In Strange, W. (Ed.), Speech perception and linguistic experience: Issues in cross-language research (pp. 233–277). York Press.
Flege, J., & Bohn, O.-S. (2021). The revised Speech Learning Model (SLM-r). In Wayland, R. (Ed.), Second language speech learning: Theoretical and empirical progress (pp. 84–118). Cambridge University Press.
Gick, B., Bernhardt, B. M., Bacsfalvi, P., & Wilson, I. (2008). Ultrasound imaging applications in second language acquisition. In Hansen Edwards, J. G., & Zampini, M. L. (Eds.), Phonology and second language acquisition (pp. 309–322). John Benjamins. https://doi.org/10.1075/sibil.36
Halle, M., & Stevens, K. (1969). On the feature 'Advanced Tongue Root'. Quarterly Progress Report MIT Research Laboratory of Electronics, 94, 209–215.
Howson, P. (2018). Rhotics and palatalization: An acoustic examination of Upper and Lower Sorbian. Phonetica, 75, 132–150.
Howson, P., Komova, E., & Gick, B. (2014). Czech trills revisited: An ultrasound EGG and acoustic study. Journal of the International Phonetic Association, 44, 115–132.
Hussain, Q., & Mielke, J. (2021). An acoustic and articulatory study of rhotic and rhotic-nasal vowels of Kalasha. Journal of Phonetics, 87, 101028. https://doi.org/10.1016/j.wocn.2020.101028
Iskarous, K., & Kavitskaya, D. (2010). The interaction between contrast, prosody, and coarticulation in structuring phonetic variability. Journal of Phonetics, 38, 625–639.
Johnson, K. (2011). Quantitative methods in linguistics. Blackwell.
Kang, K.-H., & Guion, S. G. (2006). Phonological systems in bilinguals: Age of learning effects on the stop consonant systems of Korean-English bilinguals. Journal of the Acoustical Society of America, 119, 1672–1683.
King, H., & Ferragne, E. (2020). Loose lips and tongue tips: The central role of the /r/-typical labial gesture in Anglo-English. Journal of Phonetics, 80, 1–19.
Kirkham, S., & Nance, C. (2017). An acoustic-articulatory study of bilingual vowel production: Advanced tongue root vowels in Twi and tense/lax vowels in Ghanaian English. Journal of Phonetics, 62, 65–81.
Kirkham, S., & Nance, C. (2022). Diachronic phonological asymmetries and the variable stability of synchronic contrast. Journal of Phonetics, 94, 101176. https://doi.org/10.1016/j.wocn.2022.101176
Kochetov, A. (2002). Production, perception, and emergent phonotactic patterns: A case of contrastive palatalization [PhD thesis]. University of Toronto.
Kocjančič Antolík, T., Pillot-Loiseau, C., & Kamiyama, T. (2019). The effectiveness of real-time ultrasound visual feedback on tongue movements in L2 pronunciation training. Journal of Second Language Pronunciation, 5(1), 72–97. https://doi.org/10.1075/jslp.16022.ant
Lawson, E., & Stuart-Smith, J. (2021). Lenition and fortition of /r/ in utterance-final position, an ultrasound tongue imaging study of lingual gesture timing in spontaneous speech. Journal of Phonetics, 86, 101053. https://doi.org/10.1016/j.wocn.2021.101053
Lawson, E., Scobbie, J. M., & Stuart-Smith, J. (2011). The social stratification of tongue shape in postvocalic /r/ in Scottish English. Journal of Sociolinguistics, 15(2), 256–268.
Lawson, E., Scobbie, J. M., & Stuart-Smith, J. (2013). Bunched /r/ promotes vowel merger to schwar: An ultrasound tongue imaging study of Scottish sociophonetic variation. Journal of Phonetics, 41, 198–210.
Lawson, E., Stuart-Smith, J., & Scobbie, J. M. (2018). The role of gesture delay in coda /r/ weakening: An articulatory, auditory and acoustic study. The Journal of the Acoustical Society of America, 143(3), 1646–1657. https://doi.org/10.1121/1.5027833
Lee, B., Guion, S. G., & Harada, T. (2006). Acoustic analysis of the production of unstressed English vowels by early and late Korean and Japanese bilinguals. Studies in Second Language Acquisition, 28(3). https://doi.org/10.1017/S0272263106060207
Lenth, R. V. (2021). emmeans: Estimated marginal means, aka least-squares means. https://CRAN.R-project.org/package=emmeans
Lin, S., Cychosz, M., Shen, A., & Cibelli, E. (2019). The effects of phonetic training and visual feedback on novel contrast production. Proceedings of the 19th International Congress of Phonetic Sciences.
Lindblom, B. (1990). Explaining phonetic variation: A sketch of the H&H theory. In Hardcastle, W., & Marchal, A. (Eds.), Speech production and speech modelling (pp. 403–439). Springer Verlag.
Lobanov, B. (1971). Classification of Russian vowels spoken by different speakers. Journal of the Acoustical Society of America, 49(2B), 606–608.
Maddieson, I. (1984). Patterns of sounds. Cambridge University Press.
Malmi, A., & Lippus, P. (2019). Keele asend eesti palatalisatsioonis / The position of the tongue in Estonian palatalization. Eesti Ja Soome-Ugri Keeleteaduse Ajakiri, 10(1), 105–128.
Mayr, R., López-Bueno, L., Vásquez Fernández, M., & Tomé Lourido, G. (2019). The role of early experience and continued language use in bilingual speech production: A study of Galician and Spanish mid vowels by Galician–Spanish bilinguals. Journal of Phonetics, 72, 1–16.
McLeod, W. (2020). Gaelic in Scotland: Policies, movements and ideologies. Edinburgh University Press.
Mielke, J., Baker, A., & Archangeli, D. (2016). Individual-level contact limits phonological complexity: Evidence from bunched and retroflex /ɹ/. Language, 92(1), 101–140.
Morris, J. (2017). Sociophonetic variation in a long-term language contact situation: /l/-darkening in Welsh-English bilingual speech. Journal of Sociolinguistics, 21(2), 183–207.
Nance, C. (2014). Phonetic variation in Scottish Gaelic laterals. Journal of Phonetics, 47, 1–17.
Nance, C. (2020). Bilingual language exposure and the peer group: Acquiring phonetics and phonology in Gaelic Medium Education. International Journal of Bilingualism, 24(2), 360–375.
Nance, C., & Kirkham, S. (2020). The acoustics of three-way lateral and nasal palatalisation contrasts in Scottish Gaelic. Journal of the Acoustical Society of America, 147(4), 2858–2872.
Nance, C., & Kirkham, S. (2022). Phonetic typology and articulatory constraints: The realisation of secondary articulations in Scottish Gaelic rhotics. Language, 98(3).
Nance, C., & Ó Maolalaigh, R. (2021). Scottish Gaelic. Journal of the International Phonetic Association, 51(2), 261–275.
Nash, J. C. (2014). On best practice optimization methods in R. Journal of Statistical Software, 60(2), 1–14.
Nash, J. C., & Varadhan, R. (2011). Unifying optimization algorithms to aid software system users: optimx for R. Journal of Statistical Software, 43(9), 1–14.
Oakley, M. (2019). Articulation of L2 French mid and high vowels. Proceedings of the 19th International Congress of Phonetic Sciences, 1724–1728.
Perkell, J. S. (1971). Physiology of speech production: A preliminary study of two suggested revisions of the features specifying vowels. Quarterly Progress Report MIT Research Laboratory of Electronics, 102, 123–139.
Roon, K. D., Kang, J., & Whalen, D. H. (2020). Effects of ultrasound familiarization on production and perception of nonnative contrasts. Phonetica, 77(5), 350–393. https://doi.org/10.1159/000505298
Scobbie, J. M., Lawson, E., Cowen, S., Cleland, J., & Wrench, A. (2011). A common co-ordinate system for mid-sagittal articulatory measurement. QMU CASL Working Papers, 20.
Scottish Government. (2015). Scotland's Census 2011: Gaelic report (part 2). National Records of Scotland.
Shuken, C. (1984). Highland and Island English. In Trudgill, P. (Ed.), Language in the British Isles (pp. 152–167). Cambridge University Press.
Simonet, M. (2010). Dark and clear laterals in Catalan and Spanish: Interaction of phonetic categories in early bilinguals. Journal of Phonetics, 38, 663–678.
Simonet, M. (2011). Production of a Catalan-specific vowel contrast by early Spanish-Catalan bilinguals. Phonetica, 68(1–2), 88–110. https://doi.org/10.1159/000328847
Sisinni, B., d'Apolito, S., Gili Fivela, B., & Grimaldi, M. (2016). Ultrasound articulatory training for teaching pronunciation of L2 vowels. In Pixel (Ed.), ICT for language learning (pp. 265–270).
Sproat, R., & Fujimura, O. (1993). Allophonic variation in English /l/ and its implications for phonetic implementation. Journal of Phonetics, 21, 291–311.
Stone, M. (2005). A guide to analysing tongue motion from ultrasound images. Clinical Linguistics and Phonetics, 19(6/7), 455–501.
Sung, J.-H., Archangeli, D., Johnston, S., Clayton, I., & Carnie, A. (2015). The articulation of mutated consonants: Palatalization in Scottish Gaelic. Proceedings of the 18th International Congress of Phonetic Sciences.
Tabain, M., & Beare, R. (2018). An ultrasound study of coronal places of articulation in Central Arrernte: Apicals, laminals and rhotics. Journal of Phonetics, 66, 63–81.
Tateishi, M., & Winters, S. (2013). Does ultrasound training lead to improved perception of a non-native sound contrast? Evidence from Japanese learners of English. Proceedings of the 2013 Annual Conference of the Canadian Linguistic Association, 1–15.
Tourville, J. A., & Guenther, F. H. (2011). The DIVA model: A neural theory of speech acquisition and production. Language and Cognitive Processes, 26(7), 952–981. https://doi.org/10.1080/01690960903498424
Turton, D. (2017). Categorical or gradient? An ultrasound investigation of /l/-darkening and vocalization in varieties of English. Laboratory Phonology: Journal of the Association for Laboratory Phonology, 8(1), 1–31.
van Leussen, J.-W., & Escudero, P. (2015). Learning to perceive and recognize a second language: The L2LP model revised. Frontiers in Psychology, 6(1000), 1–12.
Will, V. (2012). Why Kenny can't can: The language socialization experiences of Gaelic-medium educated children in Scotland [PhD thesis]. University of Michigan.
Wilson, I., & Gick, B. (2014). Bilinguals use language-specific articulatory settings. Journal of Speech, Language, and Hearing Research, 57, 361–373.
Zhou, X., Espy-Wilson, C. Y., Boyce, S., Tiede, M., Holland, C., & Choe, A. (2008). A magnetic resonance imaging-based articulatory and acoustic study of "retroflex" and "bunched" American English /r/. The Journal of the Acoustical Society of America, 123(6), 4466–4481. https://doi.org/10.1121/1.2902168
Table 1. Stimuli used in this study.

Figure 1. Example segmentation of a word-initial (velarised) Gaelic lateral, and word-final (plain) Gaelic lateral from a female speaker.

Figure 2. Axes of variation explained by the first four Principal Components, and the proportion of the variation for which each one accounts.

Figure 3. Lateral acoustic results.

Table 2. Lateral acoustic statistics. Full model AIC is 1233.34, compared to a null model AIC of 1273.84.

Figure 4. Lateral articulatory results.

Table 3. Lateral articulatory statistics. Full model AIC is 1063.22, compared to a null model AIC of 1093.70.

Figure 5. Interaction of phoneme/language category and word position for acoustics (left panel) and articulation (right panel). Note the interaction for articulation was non-significant, but is plotted here to aid in interpreting the results.
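As an illustrative aside only (the paper does not describe its post-hoc procedure), pairwise comparisons underlying an interaction plot like Figure 5 could be obtained with the emmeans package (Lenth, 2021), applied to a fitted model such as the hypothetical full_model sketched after footnote 3:

library(emmeans)

# Estimated marginal means of the response for each category within each
# word position, followed by pairwise contrasts between categories.
emm <- emmeans(full_model, ~ category | position)
pairs(emm)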

Supplementary material:
Nance and Kirkham supplementary material 1 (File, 489.3 KB)
Nance and Kirkham supplementary material 2 (File, 566.4 KB)