
Morphosyntactic agreement in English: does it help the listener in noise?

Published online by Cambridge University Press:  03 May 2024

Marcel Schlechtweg*
Affiliation:
Department of English and American Studies and Cluster of excellence ‘Hearing4all’ Carl von Ossietzky Universität Oldenburg Ammerländer Heerstraße 114–118 26129 Oldenburg Germany marcelschlechtweg@gmail.com

Abstract

English morphosyntactic agreement, such as determiner–noun agreement in These cabs broke down and noun–verb agreement in The cabs break down, has a few interesting properties that enable us to investigate whether agreement has a psycholinguistic function, that is, whether it helps the listener process linguistic information expressed by a speaker. The present project relies on these properties in a perception experiment, examines the two aforementioned types of English agreement, and aims at analyzing whether and how native English listeners benefit from agreement. The two types of agreement were contrasted with cases without any overtly agreeing elements (e.g. The cabs broke down). Native speakers of English with normal hearing heard short English sentences in quiet and in more or less intense white noise and were requested to indicate whether the second word of the sentence (e.g. These cabs broke down) was a singular or plural noun. Accuracy was entered as the response variable in the binomial logistic regression model. Results showed that overt determiner–noun agreement clearly increased response accuracy, while noun–verb agreement had at best marginal effects. The findings are interpreted against the background of functional aspects of linguistic structures in English, in the context of unfavorable listening conditions in particular.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
Copyright © The Author(s), 2024. Published by Cambridge University Press

1 Introduction

Morphosyntactic agreement in English has a few outstanding properties that make it an ideal testing ground for examining whether agreement has a psycholinguistic function, that is, whether it supports the listener in detecting and unpacking the linguistic information of what is expressed by a speaker. Consider the examples in (1).

  1. (1)

    1. (a) These cabs broke down.

    2. (b) The cabs break down.

    3. (c) The cabs broke down.

First, the agreement target can either precede or follow the agreement controller, which is the noun. While the target (these) occurs before the noun in (1a), it is positioned after the noun in (1b) (break); both these and break agree with the noun cabs with respect to the number feature and the plural value. Second, different types of syntactic structure operate in English agreement. While these in (1a) belongs to the same nominal phrase as cabs, break in (1b) is more distant from cabs in syntactic terms. Third, due to the existence of the two types of agreement illustrated in (1a, b), namely determiner–noun and noun–verb agreement, as well as the two properties just introduced, English agreement offers a promising way to investigate whether various kinds of agreement and their characteristics are indeed beneficial for the listener while others are not, or at least less so. Put differently, one can verify whether linear precedence and/or syntactic structure affect the potentially supportive function of agreement (on linear precedence and syntactic structure, see also Corbett 2006: 180, 206–30). Fourth, thanks to this special situation in English, a well-controlled, and therefore more conclusive, experiment can be built to test whether agreement facilitates the task of the listener. More precisely, the sentence given in (1c) can serve as a baseline for the sentences presented in (1a, b). The sentences in (1a, b) show the respective cases of agreement; (1c) does not, but it is linguistically comparable to (1a, b). Both the pair (1a, c) and the pair (1b, c) form a kind of minimal pair; each differs in one respect only: in the pair (1a, c), only the determiner is different; in the pair (1b, c), only the verb form is different. Potentially confounding variables – for instance, phonetic, phonological, morphological, syntactic, semantic, lexical or probabilistic ones – are hence controlled for.

The objective of the present article is to rely on the four mentioned properties of English agreement to investigate whether agreement possibly serves a psycholinguistic function and is not just an unnecessary ingredient of English grammar (for discussion, see also Corbett 2006: 274). The focus lies only on the perspective of the listener in the experiment reported in section 4, but the work connects to recent research on how English agreement affects the speaker side (see Schlechtweg & Corbett 2023). In this previous contribution, we analyzed whether the duration of the English s suffix was modulated by English agreement. Consider the examples in (2).

  1. (2)

    1. (a) The blue cabs always broke down.

    2. (b) These blue cabs always broke down.

    3. (c) The blue cabs always break down.

    4. (d) These blue cabs always break down.

One finds no overt agreement at all in (2a), overt agreement between determiner and noun in (2b), between noun and verb in (2c), and between both determiner and noun and noun and verb in (2d). This production experiment was carefully controlled for many potentially confounding variables, and we found, first of all, that agreement between noun and verb did not affect suffix durations: the duration of the s was comparable between sentences with a past tense verb form (no overt agreement, see (2a, b)) and sentences with a present tense form (overt agreement, see (2c, d)). Overt agreement between determiner and noun, by contrast, did trigger variation in s duration: the absolute duration of the s was systematically shorter when preceded by these than when it was preceded by the. We interpreted this subtle effect as slight evidence for phonetic reduction if syntagmatic predictability was high. The presence of these, but not the presence of the, already pointed to the fact that the following noun was plural and hence made the occurrence of a plural noun predictable. If these preceded the noun, the s suffix on the plural noun was less important and could be shortened. If the preceded the noun, however, the s suffix on the plural noun was the only plural marker, therefore decisive for expressing the number information, and was enhanced. We further interpreted the finding that only determiner–noun but not noun–verb agreement played a role by referring to syntactic structure and linear precedence (see also Corbett 2006: 180, 206–30): while the determiner was located within the same nominal phrase as the noun, the verb was more distant syntactically. Also, the determiner preceded but the verb followed the noun.

In the present contribution, the objective is to analyze whether and how different types of number agreement in English also affect perception, not production. To do so, section 2 takes previous work on psycholinguistic effects of agreement into consideration. Section 3 focuses on the challenges the listener faces in communication. It is obvious that listeners are rarely exposed to spoken language in quiet; some type of noise typically competes with what is said by a speaker. Readers are therefore introduced to the literature on speech perception in noise. Section 4 describes a new forced-choice experiment in detail, section 5 presents the results, and section 6 discusses the findings against the background of previous work, before section 7 concludes the article.

2 Psycholinguistic effects of agreement

Agreement occurs in many of the world's languages and means that different elements of a sentence systematically share a grammatical property (Steele 1978: 610, cited in Corbett 2006: 4). In the empirical literature, agreement and agreement violations have been shown to affect how language users process sentences. Two examples focusing on English are the studies by Squires (2014) and Dube et al. (2019). Squires (2014) found in a self-paced reading study with native speakers of US American English that the presence and absence of overt subject–verb agreement had an impact on how quickly individuals read. She compared the standard forms the turtle doesn't (singular noun + doesn't, singular agreement) and the turtles don't (plural noun + don't, plural agreement) to the nonstandard but attested form the turtle don't (singular noun + don't, no agreement) and to the unknown form the turtles doesn't (plural noun + doesn't, no agreement). The standard forms (with agreement) showed the shortest reading times, the nonstandard forms triggered longer reading times, and the unknown forms the longest. Dube et al.'s (2019) analysis revealed for Australian-English-speaking adults that subject–verb agreement violations as in The boy often cook or The boys often cooks led to a P600 effect in an experiment using event-related brain potentials (ERPs).

These and other studies demonstrate that agreement is processed and that violations are detected. What such contributions do not show, however, is whether agreement supports the language user in that, for instance, intact sentences with agreement are processed distinctly from intact sentences without agreement. Put differently, it does not become clear whether agreement has any function and potentially represents a psycholinguistic help. The theoretical literature discusses whether agreement has a function and, if so, which. Levin (2001: 21–7; see also Corbett 2006: 274–5 for discussion) summarizes three functions of agreement. First, if elements agree, one finds redundancy in the signal, which, in turn, represents a potential help for the listener to understand the message uttered by a speaker. Second, agreement has a referential function: it indicates which elements of a sentence belong together referentially. For example, if Lisa and Matthew appear in a specific situation, the personal pronoun she would often pick up only the former. Third, syntactically speaking, agreement signals which constituents belong together. In the present article, we are only concerned with the first function, with the idea that listeners benefit from agreement.

To approach this issue, we consider as a first step two studies that did not analyze English data. While the first contribution concentrates on redundancy more generally, the second one reflects upon a specific case of agreement. Caballero & Kapatsinski (2015) analyzed whether redundant morphological markers are functional in perception. They focused on Choguita Rarámuri (Tarahumara), a language of the Uto-Aztecan language family spoken in northern Mexico. Here, meaning can be redundantly expressed by two consecutive suffixes, with the second one being optional. Caballero & Kapatsinski (2015) used a speech-in-noise gating task in which participants were exposed to items expressing a causative or applicative meaning with either one or two suffixes. These items were embedded in speech-shaped pink noise and the signal-to-noise ratio (SNR) was manipulated from -10 decibels (dB) to +20 dB in steps of 2 dB. Negative values mean that the noise is more dominant than the actual speech signal; positive values refer to the opposite. For instance, -10 dB represents a more adverse listening condition than -6 dB, which, in turn, is more adverse than +2 dB. At each SNR, subjects had to describe what a given item meant. The authors considered whether the words or their meanings were recognized and at which noise level. That is, for instance, if redundancy was helpful, a word with two suffixes could be recognized earlier or under more severe listening conditions than the same word with one suffix only. It was found that the additional (and optional) suffix can, depending on the predictability of the meaning in a specific context, be either beneficial or inhibitory. More precisely, the presence of both suffixes seemed to help only if the recognition of the meaning was challenging; if it was easy, the presence of both suffixes represented a disadvantage. Harris & Samuel (2011) investigated whether the marking of agreement was functional in Batsbi, an endangered language of the Nakh-Dagestanian language family spoken in the Republic of Georgia. They asked whether listeners benefited from (more) agreement markers. Relying on an auditory lexical decision task, grammaticality judgments and a recall task, the authors examined whether individuals performed differently for verbs without any agreement marker, verbs with one and verbs with two such markers. Comparing items without markers to items with one marker, they found that although one marker triggered higher accuracy in the grammaticality judgment and recall experiments, response times were longer for items with one marker (in both the lexical decision and the grammaticality judgment studies). The comparison of items with one marker and items with two markers revealed that two markers represented a greater burden, as mirrored in both response time (lexical decision and grammaticality judgment tasks) and accuracy (lexical decision and recall tasks). Overall, Harris & Samuel (2011) conclude that multiple marking barely helps the listener and may even represent an obstacle. The studies by Caballero & Kapatsinski (2015) as well as Harris & Samuel (2011) give us some insights but also need to be treated with caution. For one, the two languages in focus differ typologically and morphologically quite substantially from English, the language of investigation in the present work.
Moreover, the materials used in these studies were probably not as neatly controlled as the materials of the current experiment, simply because much less is known about these languages and comprehensive corpora do not exist. Caballero & Kapatsinski (2015) mention this problem in their paper. Finally, it is possible that different types of agreement behave differently, with only some being a potential help for the listener. It would therefore be interesting to contrast different types of agreement and their potential psycholinguistic effects.
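Since SNR values of this kind recur throughout this article, a brief illustration may be useful. The sketch below, in R, is not taken from any of the studies discussed and uses made-up waveforms; it simply shows how an SNR in dB relates the level of a speech signal to that of a competing noise.

```r
# SNR in dB compares the RMS level of the speech signal to that of the noise;
# negative values mean the noise dominates, positive values mean the speech does.
snr_db <- function(signal, noise) {
  rms <- function(x) sqrt(mean(x^2))      # root-mean-square amplitude
  20 * log10(rms(signal) / rms(noise))
}

# Toy illustration with made-up waveforms (16 kHz sampling rate assumed):
set.seed(1)
speech <- sin(2 * pi * 220 * seq(0, 1, by = 1 / 16000))  # 1 s of a 220 Hz tone
noise  <- rnorm(length(speech), sd = 0.5)                # white noise
snr_db(speech, noise)        # about +3 dB: the 'speech' is slightly dominant
snr_db(speech, noise * 4)    # about -9 dB: a louder noise makes listening harder
```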

In contrast to the two previous studies, the following work studied English. Tanner & Bulkes (2015) recorded ERPs and collected acceptability judgments from native speakers of US American English and contrasted the following four conditions; see (3).

  1. (3)

    1. (a) The cookies taste the best when dipped in milk.

    2. (b) *The cookies tastes the best when dipped in milk.

    3. (c) Many cookies taste the best when dipped in milk.

    4. (d) *Many cookies tastes the best when dipped in milk.

While (3a, c) are well-formed, (3b, d) show subject–verb agreement violations. Moreover, while plurality is marked on the noun (cookies) and verb (taste) in (3a) (and 3b), it is marked on the pre-nominal element (many), the noun, and the verb in (3c) (and 3d). Tanner & Bulkes’ (2015) data (both the acceptability judgments and the ERP data) indicate that participants were more sensitive to agreement violations in cases like (3d), where the pre-nominal plural marker occurred. Put differently, the participants took advantage of the additional encoding of plurality when detecting agreement violations.

Tanner & Bulkes’ (2015) study suggests that both agreement (and agreement violations) and redundant marking play a role in language processing in English. The aim of the current work is to expand this work by (a) systematically contrasting the effects of different types of agreement, namely determiner–noun and noun–verb agreement, (b) focusing on intact sentences rather than on sentences with agreement violations, (c) relying on auditory (not visually presented) stimuli and concentrating on the listener, and (d) embedding the test sentences not only in quiet but also in noise. It is asked whether agreement, and what type(s) of agreement, supports the listener in decoding the linguistic information of a message. For this purpose, four different scenarios are considered, once in the singular and once in the plural. While we find no overt agreement in (4a, b), we see overt agreement between determiner and noun in (4c, d), between noun and verb in (4e, f), and between both determiner and noun and noun and verb in (4g, h).

  1. (4)

    1. (a) The cab broke down.

    2. (b) The cabs broke down.

    3. (c) This cab broke down.

    4. (d) These cabs broke down.

    5. (e) The cab breaks down.

    6. (f) The cabs break down.

    7. (g) This cab breaks down.

    8. (h) These cabs break down.

Sentences similar to those in (4) have recently been examined in a production study (see Schlechtweg & Corbett 2023 and (2) above). The present work focuses on perception, relying on the eight sentence versions given in (4), that is, on both singular and plural nouns, while the production study by Schlechtweg & Corbett (2023) considered plural nouns only.

3 Perception of spoken language in noise

Language users can face difficulties when perceiving spoken language in noise, and the misperception of linguistic information is a possible consequence. A variety of factors are involved and may affect how an individual perceives spoken language in noisy environments. First of all, the characteristics of the noise itself play a role in perception. The term signal-to-noise ratio (SNR) is used to describe whether the signal or the noise is dominant in a particular situation. Research has shown that a lower SNR, that is, a less favorable listening condition, typically has a more negative impact on perception (see, e.g., Bent et al. 2010). One further finds different types of noise. Multi-speaker background babble noise means that multiple speakers talk in the background of the actual signal; a listener is exposed not only to what an interlocutor says but also to background babble, which competes with the actual signal. Listeners typically experience more difficulties in detecting the linguistic information present in the signal if the number of background speakers increases or if the speakers in the background speak in the listener's native language, rather than in a non-native language (see, e.g., Van Engen & Bradlow 2007). Apart from multi-speaker background babble noise, stationary noise, such as speech-shaped or white noise, exists. Research has been inconclusive when it comes to the question of which type(s) of noise has/have more severe consequences for the listener, with some data supporting the idea that multi-speaker background babble noise poses greater challenges to the listener (see, e.g., Garcia Lecumberri & Cooke 2006) and with other findings pointing in the opposite direction, that is, suggesting that stationary noise is more difficult to deal with (see, e.g., Liu & Kewley-Port 2004).

Second, the characteristics of the listener play a role in the perception of spoken language in noise. Perception in noise is typically more challenging for non-native than for native speakers (see, e.g., Garcia Lecumberri & Cooke 2006), for bilinguals than for monolinguals (see, e.g., Schmidtke 2016), for children than for adults (see, e.g., Bent & Atagi 2015) and for individuals with a type of hearing loss than for individuals with typical hearing (see, e.g., Smiljanic & Sladen 2013).

Third, the properties of the linguistic materials play a role in the perception of spoken language in noise. It has been reported that perception in noise is more challenging if the signal represents an unfamiliar native accent (see, e.g., Adank et al. 2009), if the signal contains foreign-accented speech (see, e.g., Bent & Atagi 2015), if the signal involves code-switching (see, e.g., Gross et al. 2021), if the (semantic) context is not rich enough to compensate for the distorted signal (see, e.g., Smiljanic & Sladen 2013), or if the items competing with the noise have a low frequency of occurrence (see, e.g., Schmidtke 2016). Moreover, phonetic and phonological aspects have been extensively examined in the literature, for instance, by investigating how listeners perceive different vowels or consonants in noise (see, e.g., Cutler et al. 2004). Also, variation in syntactic structure can lead to more or less successful perception in noise. For instance, syntactically simpler portions of a sentence seem to be less adversely affected by the presence of noise (see, e.g., Carroll & Ruigendijk 2013).

The primary aim of the present contribution is to expand work in the third category and to look at whether and how specific manipulations of the linguistic materials affect the perception of spoken language in noise. The study reported in this article concentrates on agreement, ‘arguably the major interface problem between morphology and syntax’ (Corbett 2006: 3), and thus adds to our understanding of the role of grammatical relations in perception under unfavorable listening conditions.

4 Methodology

The study was a forced-choice task in which native speakers of English heard short English sentences via headphones, either in quiet or in white noise, and were asked to decide whether the second word of the sentence was a singular or plural noun. It was examined whether response accuracy was affected by the presence and type of morphosyntactic agreement in English.

4.1 Participants

Forty-eight native speakers of US American English participated in the experiment (mean age: 29.5 years; standard deviation age: 4.5 years; age range: 19–36 years). They declared normal hearing and did not have any other native language.

4.2 Materials

Thirteen English nouns were embedded in short sentences (see footnote 2). The nouns were inanimate, monosyllabic, showed a higher frequency in the singular than in the plural (hence, they were singular-dominant), and formed the plural in a regular way, with the voiced sibilant /z/ at the end of the noun (see footnote 3). Each of the thirteen nouns occurred in the eight slightly different sentence versions illustrated in table 1; the full list is presented in the Appendix.

Table 1. Eight versions of a test sentence

Overall, the eight versions differed in three respects: the target noun was presented in either the singular (cab) or the plural form (cabs), the determiner was either a definite (the, the) or a demonstrative article (this, these), and the verb was in either the present (breaks, break) or past tense (broke, broke).

In all of the thirteen sentences, an irregular verb was used that had the same number of syllables in the present and past tense. Hence, the eight different test versions were matched as closely as possible for length. Moreover, the eight sentence variants were only minimally different: apart from the variation illustrated in table 1, the sentence versions were phonetically, phonologically, syntactically and lexically identical.

4.3 Procedure

The study was run online with E-Prime (Psychology Software Tools 2016). Participants were recruited via www.prolific.co, a platform that enables researchers to find, screen, recruit and pay participants anonymously, and they received payment for completing the study. They were instructed to wear headphones while doing the perception study (see footnote 4).

Participants heard short English sentences, which were presented in either quiet or white noise. The sentences had previously been spoken by a 27-year-old female native speaker of US American English from Wisconsin. The recordings were made in a sound-proof booth, with a large-diaphragm condenser microphone (see footnote 5), and with Praat (Boersma & Weenink 2023). After the recording, each sentence was cut (to delete silence before and after the sentence) and normalized to 60 dB.
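As an aside on the normalization step, the following R sketch shows one way to scale a waveform to a nominal target level. It is an illustration under stated assumptions rather than the script used for the study (the recordings were normalized in Praat), and the reference value ref is arbitrary.

```r
# Scale a waveform so that its RMS corresponds to a nominal target level in dB
# relative to an arbitrary reference `ref` (hypothetical; Praat's intensity
# scaling performs the equivalent operation on its own internal scale).
normalize_to_db <- function(wave, target_db = 60, ref = 2e-5) {
  current_rms <- sqrt(mean(wave^2))
  target_rms  <- ref * 10^(target_db / 20)   # RMS implied by the target level
  wave * (target_rms / current_rms)          # rescale all samples uniformly
}

# Usage with a made-up waveform standing in for a recorded sentence:
sentence <- rnorm(16000)                     # hypothetical 1 s of audio at 16 kHz
sentence <- normalize_to_db(sentence, 60)
```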

Each trial of the experiment started with a ‘+’ lasting for 500 milliseconds (ms) on the computer screen. Afterwards, subjects heard a sentence and were asked to indicate whether the second word of the sentence was a singular or plural noun. If it was a singular noun, they were supposed to press the left arrow on a regular keyboard; if it was a plural noun, the right arrow. On the left side of the computer screen, participants saw the singular version of the noun with the respective button to be pressed (Example: cab = Left arrow). On the right side of the screen, they saw the plural version with the respective button to be pressed (Example: cabs = Right arrow) (see footnote 6). Upon button press, the next ‘+’ appeared.

The thirteen test sentences were manipulated on the basis of the four variables Determiner (definite, demonstrative), VerbTense (past, present), Number (singular, plural) and signal-to-noise ratio (SNR). SNR refers to the listening conditions and had the four levels ‘quiet’, ‘zero’ (0 dB), ‘sixminus’ (-6 dB), and ‘twelveminus’ (-12 dB). ‘quiet’ means that the participants heard the test sentences without noticeable background noise. ‘zero’ (0 dB) means that both the sentence and the noise were played at 60 dB. Specifically, white noise was used, which was generated in Praat (Praat extension Vocal Toolkit, Corretge 2012–23). The noise started and ended concurrently with the sentences. ‘sixminus’ (-6 dB) means that the white noise was more dominant than the sentence, that is, while the sentence was played at 60 dB, the noise had an intensity of 66 dB. ‘twelveminus’ (-12 dB) means that the noise was even more dominant (72 dB), in comparison to the sentence (60 dB). Overall, and similar to previous studies (e.g. Mi et al. 2013), performance in a quiet condition (baseline) was compared to performance in increasingly adverse listening conditions.
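To make the four listening conditions concrete, here is a minimal R sketch of how white noise might be mixed with a normalized sentence at the three SNRs used. It is an assumption-laden illustration, not the actual stimulus-preparation script (the stimuli were prepared with the Praat Vocal Toolkit); sentence is the hypothetical waveform from the sketch above.

```r
# Add white noise to a sentence at a given SNR (in dB). The noise is generated
# with the same number of samples as the sentence, so it starts and ends with it.
add_white_noise <- function(sentence, snr_db) {
  rms   <- function(x) sqrt(mean(x^2))
  noise <- rnorm(length(sentence))
  # Scale the noise so that its level sits snr_db below the sentence level
  # (a negative snr_db therefore places the noise above the sentence).
  noise <- noise * rms(sentence) / rms(noise) / 10^(snr_db / 20)
  sentence + noise
}

# The four listening conditions of the experiment; 'quiet' simply adds no noise.
stimuli <- list(
  quiet       = sentence,
  zero        = add_white_noise(sentence, 0),
  sixminus    = add_white_noise(sentence, -6),
  twelveminus = add_white_noise(sentence, -12)
)
```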

In total, we used 416 trials (13 sentences × 2 determiners × 2 verb tenses × 2 numbers × 4 SNRs). Each person was exposed to all of these 416 trials and was therefore tested on all sentences in all possible conditions, which means that participants served as their own control (inter-subject variation balanced). The 416 trials were presented in a random order for each subject.
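A compact way to see where the 416 trials come from, and how each participant could receive them in a random order, is the following R sketch; the factor and level names mirror those used in the text, but the item labels are placeholders.

```r
# Fully crossed design: 13 items x 2 determiners x 2 verb tenses x 2 numbers x 4 SNRs
design <- expand.grid(
  Item       = paste0("item", 1:13),
  Determiner = c("definite", "demonstrative"),
  VerbTense  = c("past", "present"),
  Number     = c("singular", "plural"),
  SNR        = c("quiet", "zero", "sixminus", "twelveminus")
)
nrow(design)   # 416: every participant is tested on all of these trials

# A fresh random trial order for one participant:
set.seed(42)
trial_order <- design[sample(nrow(design)), ]
```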

4.4 Data analysis

A total of 19,968 data points was collected (48 participants × 416 trials per participant). The data from five subjects were not considered during the analysis since more than one-third of their responses were incorrect; two-thirds was considered a value sufficiently beyond the chance level of 50 percent (two buttons could be pressed). Two analyses were then conducted to investigate the response variable Accuracy (accurate versus inaccurate), one focusing only on the four main effects (fixed effects) Determiner (definite, demonstrative), VerbTense (past, present), Number (singular, plural) and SNR (quiet, zero, sixminus, twelveminus), and one also including all possible two-way interactions. In each analysis, descriptive values were taken into consideration before a binomial logistic regression was completed in R (R Core Team 2023) relying on the packages lme4 (Bates et al. 2015) and lmerTest (Kuznetsova et al. 2017) (fit by maximum likelihood, Laplace approximation; see, e.g., Field et al. 2012). Apart from the above-named fixed-effect structure (only main effects in the first analysis; two-way interactions plus simple effects in the second), random intercepts by Participant and Item were included in the model. Tukey tests were further performed with the emmeans package (Lenth 2020) to examine the relevant comparisons that were not part of the model output. ‘Relevant comparisons’ referred to all comparisons for a main effect. For the interactions, it included all comparisons in which the two compared conditions shared one factor level. For example, ‘Demonstrative + Present’ was compared to ‘Demonstrative + Past’ (both sharing ‘Demonstrative’) and to ‘Definite + Present’ (both sharing ‘Present’), but not to ‘Definite + Past’ (no level shared). Both for the analysis without and the analysis with the interactions, non-significant terms (main effects, interactions) were discarded from the model: if more than one term did not reach significance, the one furthest from significance, based on the p value, was eliminated, the reduced model was verified again, and so on.
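For readers who want a concrete picture of the analysis, the sketch below fits a binomial mixed-effects model of the kind described and runs Tukey-adjusted comparisons with emmeans. The data frame is simulated so that the code runs on its own; the column names follow the text, but the numbers are meaningless placeholders, not the study's data.

```r
library(lme4)
library(emmeans)

# Simulated stand-in for the trial-level data (43 retained participants x 416 trials):
set.seed(1)
d <- expand.grid(
  Participant = factor(1:43),
  Item        = factor(1:13),
  Determiner  = c("definite", "demonstrative"),
  VerbTense   = c("past", "present"),
  Number      = c("singular", "plural"),
  SNR         = c("quiet", "zero", "sixminus", "twelveminus")
)
d$Accuracy <- rbinom(nrow(d), 1, 0.9)   # placeholder responses, not real results

# First analysis: main effects only, random intercepts by Participant and Item.
m1 <- glmer(Accuracy ~ Determiner + VerbTense + Number + SNR +
              (1 | Participant) + (1 | Item),
            data = d, family = binomial)
summary(m1)

# Tukey-adjusted pairwise comparisons for a main effect, here SNR:
emmeans(m1, pairwise ~ SNR, adjust = "tukey")

# Second analysis: all two-way interactions plus simple effects.
m2 <- glmer(Accuracy ~ (Determiner + VerbTense + Number + SNR)^2 +
              (1 | Participant) + (1 | Item),
            data = d, family = binomial)
```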

5 Results

The descriptive statistics of the three significant main effects are presented in figures 1 to 3; the results of the final model are given in tables 2 and 3.

Figure 1. Accuracy for Determiner (error bars, confidence intervals 95 percent, circles represent the means; see footnote 7)

Figure 2. Accuracy for Number (error bars, confidence intervals 95 percent, circles represent the means)

Figure 3. Accuracy for SNR (error bars, confidence intervals 95 percent, circles represent the means)

Table 2. Random effects statistics of the model of Accuracy (without interaction)

Table 3. Fixed effects statistics of the model of Accuracy (without interaction)

* p < 0.05; *** p < 0.001

Figures 1 to 3 as well as table 3 show that response accuracy was significantly higher for (a) demonstrative (this, these) than for definite articles (the, the), (b) singular than for plural cases and (c) quiet in comparison to all noise conditions. The two levels of VerbTense (present, past) did not significantly differ. Further, focusing on SNR, all Tukey comparisons except for one (quiet versus SNR of 0 dB) reached significance.

The descriptive statistics of the significant two-way interactions (Determiner*VerbTense, Determiner*Number, VerbTense*Number and Number*SNR) are given in figures 4 to 7, the results of the final model are given in tables 4 and 5.

Figure 4. Accuracy for Determiner*VerbTense (error bars, confidence intervals 95 percent, circles represent the means)

Figure 5. Accuracy for Determiner*Number (error bars, confidence intervals 95 percent, circles represent the means)

Figure 6. Accuracy for VerbTense*Number (error bars, confidence intervals 95 percent, circles represent the means)

Figure 7. Accuracy for Number*SNR (error bars, confidence intervals 95 percent, circles represent the means)

Table 4. Random effects statistics of the model of Accuracy (with interaction)

Table 5. Fixed effects statistics of the model of Accuracy (with interaction)

** p < 0.01; *** p < 0.001

Considering the interaction Determiner*VerbTense in more detail by relying on Tukey tests, I found significantly higher accuracy in the presence of a demonstrative article (this, these) than in the presence of a definite article (the), for both sentences with a present tense verb (breaks, break) and sentences with a past tense verb (broke). No significant differences were found for the comparisons ‘definite + past versus definite + present’ and ‘demonstrative + past versus demonstrative + present’. Concentrating on the interaction Determiner*Number, I found that while in the presence of a definite article (the) accuracy was significantly higher for singular than for plural cases, it was the other way around for demonstrative articles (this, these). For both singular and plural cases, demonstratives (this, these) were responded to significantly more accurately than definite articles (the). Examining the interaction VerbTense*Number, I found that singulars were responded to significantly more accurately than plurals if there was a past tense verb (broke) in the sentence; there was no significant difference in the presence of a present tense verb (break). No significant differences were found for the comparisons ‘past + plural versus present + plural’ and ‘past + singular versus present + singular’. Finally, with regard to the interaction Number*SNR, the following patterns were detected. Singulars were responded to significantly more accurately than plurals only at an SNR of -12 dB. Both in the singular and the plural, the quiet condition triggered significantly higher accuracy than the three noise conditions, with one exception (in the plural, no significant difference was found between quiet and 0 dB). Further, both in the singular and the plural, the condition 0 dB showed significantly higher accuracy than -6 dB and -12 dB, again with one exception (in the singular, there was no significant difference between 0 dB and -6 dB). In both the singular and the plural, significantly higher accuracy was reached for -6 dB than for -12 dB.

6 Discussion

In the forced-choice study, native speakers of English were exposed to short English sentences and had to decide whether the second word of these sentences was a singular or plural noun. The objective was to examine whether and how two types of English agreement, determiner–noun and noun–verb agreement, supported the listener in perception and increased response accuracy, in comparison to cases without overt morphosyntactic agreement. Relying on English and these two types of agreement, it was possible to investigate the potential psycholinguistic role of linear precedence and syntactic structure, two well-known factors in the context of agreement, in a well-controlled linguistic experiment. The study investigated perception in noise, compared to quiet, since listeners are rarely exposed to spoken language in ideal listening conditions.

Considering only the main effects in the statistical model, we found main effects of Determiner, Number and SNR. That is, demonstratives (this, these) triggered a higher response accuracy than definite articles (the), singular cases (e.g. The cab broke down) were responded to more correctly than plural cases (e.g. The cabs broke down) and response accuracy decreased with more adverse listening conditions. No main effect of VerbTense was detected, that is, present tense verbs (e.g. breaks, break) did not help listeners more than past tense verbs (e.g. broke).

The major observation is that English agreement can indeed help the listener and can be argued to have a supportive function (see Levin 2001: 21–7); crucially, however, not all types of agreement seem to do so. Relying on the main effects analysis, we see that while determiner–noun agreement increased response accuracy, noun–verb agreement did not. This is in line with Schlechtweg & Corbett's (2023) findings for production. In that study, we found that the type of determiner affected the duration of the English plural s suffix: if these preceded the target noun (e.g. These blue cabs), the s was slightly shortened in comparison to when the was used (e.g. The blue cabs). A possible reason for reducing the duration of the suffix was that it was less important in sentences with these since these expressed plurality as well. In contrast, the did not indicate whether the following noun was singular or plural and hence the nominal plural suffix remained the only marker of plurality. The decisive and often only plural marker, the s suffix, was lengthened in constructions like The blue cabs. In both this previous production study and the main effects analysis of the current perception experiment, only determiner–noun but not noun–verb agreement affected processing. It seems that, as in the case of production, syntactic structure and linear precedence are two candidates to explain this divergence in perception, too (see also Corbett 2006: 180, 206–30). While a determiner like these occurs in the same nominal phrase as a noun like cabs, a verb form like break does not; it is more distant in syntactic terms. Moreover, the determiner precedes the noun but the verb follows it.

The result concerning the determiner is compatible with Tanner & Bulkes’ (2015) findings. Their data revealed that agreement violations showed greater effects in English if the plural noun was preceded by an item signaling plurality in addition to the nominal suffix (and the verb) (e.g. *Many cookies tastes the best when dipped in milk), compared to when only the nominal suffix (and the verb) indicated plurality (e.g. *The cookies tastes the best when dipped in milk). The two other studies we discussed earlier, Caballero & Kapatsinski (2015) as well as Harris & Samuel (2011), only partially support the idea that agreement or redundancy is beneficial for language users. Generally speaking, as mentioned earlier, it is difficult to compare these two studies to the present work for multiple reasons. The languages at the center of investigation (Choguita Rarámuri, Batsbi, English) are genetically and linguistically quite distinct, each belonging to a different language family. Further, we know much less about these languages than about English, which means that the materials were potentially less well controlled than the materials of the present work. Caballero & Kapatsinski (2015) acknowledge these shortcomings themselves. Finally, as we see in the current study, different types of agreement can have different effects even in the same language, so one obviously has to consider the possibility that various types of agreement affect perception (and/or production) differently.

The second main effect revealed in the data, the main effect of Number, is less relevant to us: singular cases (e.g. The cab broke down) were generally responded to with higher accuracy than plural cases (e.g. The cabs broke down). Two possible origins of this effect are frequency of occurrence and the acoustics of the plural [z]. First, all of our thirteen target nouns were singular-dominant; they were more frequent in the singular than in the plural. This frequency asymmetry might have translated into the difference in accuracy (for a review of frequency effects, see, e.g., Diessel 2007). Second, [z] is a sibilant, which shows acoustic similarities to white noise and could therefore be hard to detect (see footnote 8).

The third main effect, the main effect of SNR, showed that more favorable listening conditions resulted in higher response accuracy. This is compatible with earlier work indicating that lowering the SNR has a negative impact on perception (see, e.g., Bent et al. 2010). More precisely, we made two observations with regard to the most adverse listening condition (-12 dB). On the one hand, response accuracy was clearly lower (80 percent) than in the other three conditions (90 percent (-6 dB), 92 percent (0 dB), 93 percent (quiet)). On the other hand, a response accuracy of 80 percent is still quite high, suggesting that listeners could overall sufficiently hear the information present in the signal. It therefore seems that it was specifically the processing system that suffered more at -12 dB than at more favorable listening conditions. That is, noise competing with a signal might increase the ‘strain on memory or attention capacity’ (Carroll & Ruigendijk 2013: 153). Put differently, although individuals can still hear the information in the signal necessary to solve the given task, they need to invest more cognitive effort to extract this information, which, in turn, affects overall performance (see, e.g., Carroll & Ruigendijk 2013; Mattys et al. 2012).

In the next step, all possible two-way interactions were included in the statistical model; four of the six interactions reached significance. The analysis of Determiner*VerbTense showed that participants responded significantly more accurately to these than to the in the presence of both present and past tense verbs. In addition, we can cautiously qualify what we said above. We stated that noun–verb agreement did not have an effect. However, considering this interaction, we can see that the difference in response accuracy between the and these was slightly larger if a past tense verb occurred in the sentence, that is, in the absence of noun–verb agreement. In the presence of a present tense verb agreeing with the noun, the difference between the and these was smaller, which might suggest that the cases with the caught up thanks to the support of the verb form that signaled the respective number value.

The second significant interaction, Determiner*Number, showed, first, a higher accuracy for demonstratives (this, these) than for definite articles (the) for both singular and plural cases, which confirms the importance of determiner–noun agreement. Second, a significantly higher response accuracy for singular than for plural cases was detected in the presence of the; the opposite pattern was found for these, but with a slightly smaller difference between singular and plural trials. One could argue that the higher frequency of occurrence of the singular nouns led to higher accuracy if the determiner had the same form in both the singular and the plural (the). For the demonstratives, in turn, this effect might have disappeared or even turned into its opposite due to acoustic differences between this and these: since both the vowel of these and the word as a whole are longer than the vowel of this and that word as a whole, listeners had more time to detect the demonstrative these, which could increase response accuracy. In the literature, there is some evidence suggesting that [iː] (as in these) is responded to more accurately than [ɪ] (as in this) by American English listeners (see Cutler et al. 2004; Bent et al. 2010).

The third significant interaction, VerbTense*Number, points to a significant difference between singular and plural cases in the presence of past tense verbs (no overt agreement) only. Here, participants responded more accurately to singular forms, which, again, might have derived, for instance, from the variation in the frequency of occurrence between the nouns in their singular and plural forms. No such difference was observed if a present tense verb, overtly agreeing with the noun, followed. This provides some additional evidence, qualifying our initial observation in the analysis with main effects only, that noun–verb agreement could play a role in perception and could cancel out frequency effects.

The fourth significant interaction, Number*SNR, showed that response accuracy was higher for singular than for plural forms only in the most adverse listening condition, at an SNR of -12 dB. Moreover, for both singular and plural cases, more unfavorable listening conditions generally led to lower accuracy and hence represented a processing burden, as has been documented in the literature before (see, e.g., Bent et al. 2010).

Overall, it is worth investigating the functional role of English agreement due to its theoretical properties and the empirical benefits it offers. Several avenues for future research arise. First, nouns like committee could be examined, which have been shown to take either singular (if conceptualized as a single entity) or plural (if conceptualized as more than one individual) agreement (e.g. The committee has/have) (see, e.g., Corbett 2006: 2). It would be interesting to see whether the two different conceptualizations are processed differently in quiet and in noise, and whether agreement plays a role for the listener here. Second, one could take into account different types of plural nouns, say nouns with a specific vowel change between the singular and plural (e.g. foot/feet, goose/geese) or nouns in which the singular has a different number of syllables than the plural (e.g. fox/foxes, child/children). Here, the role of syntagmatic aspects (e.g. the cabs versus these cabs), paradigmatic information (e.g. cab/cabs versus foot/feet) and their possible interaction (e.g. the cabs versus these feet) during the perception of spoken language in adverse listening conditions could be examined. Third, one could try to disentangle the contribution of the two possible driving forces behind the main effect of Determiner, namely linear precedence and syntactic structure. In the experiment reported here, the determiner always preceded and the verb always followed the noun. Sentences in which the verb precedes the noun could now be added. If the effect of sentences with a verb preceding the noun were comparable to that of determiners in the current study, this would be a reason to argue that linear precedence represents the driving force behind the effect. If, however, verbs preceding and verbs following the noun triggered an identical effect, linear precedence would be unlikely to play the major role. It would obviously be difficult to come up with enough naturally sounding English sentences, but questions could possibly be a step in the right direction (e.g. Have the cabs been waiting for us since eight o'clock? (example from p.c. with Grev Corbett and Matthew Baerman)).

7 Conclusion

The findings suggest that agreement has a function in English: it supports the listener when decoding the linguistic information of a message in noise. However, different types of agreement do not seem to be equally helpful: determiner–noun agreement clearly improved accuracy in perception, while noun–verb agreement did so only slightly at best.

Appendix: Test sentences

The/This/These cab/cabs broke/breaks/break down.

The/This/These pear/pears fell/falls/fall from the tree.

The/This/These screen/screens became/becomes/become useless.

The/This/These ride/rides took/takes/take an hour.

The/This/These cream/creams stung/stings/sting her skin.

The/This/These wave/waves shook/shakes/shake the house.

The/This/These stone/stones sank/sinks/sink in the lake.

The/This/These fig/figs grew/grows/grow sweeter and sweeter.

The/This/These phone/phones rang/rings/ring loudly.

The/This/These pad/pads fell/falls/fall out.

The/This/These tag/tags made/makes/make the price clear.

The/This/These car/cars had/has/have mechanical problems.

The/This/These nail/nails held/holds/hold up the picture.

Footnotes

This work has been funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy – EXC 2177/1 – Project ID 390895286.

2 In Schlechtweg & Corbett (2023), these nouns were also used in similar but longer sentences; that study, however, contained sixteen sentences.

3 The frequencies were checked on IntelliText (Hartley et al. 2011), relying on the data from the UKWAC corpus.

4 One might argue that conducting the experiment online has the disadvantage that participants use different types of headphones. I still relied on the online procedure for two reasons. First, it would have been difficult, if not impossible, to recruit a fair number of native speakers of English, and even more difficult to recruit a fair number of native speakers of English from the same variety of English (e.g. from the US or UK), in the (rather small) city where our institution is located in Germany. Doing the experiment online enabled us to recruit enough participants and to focus on a homogeneous group, namely only speakers from the US. Second, and crucially, all our participants were tested on all test sentences, that is, on all test items in all test conditions. Put differently, each participant served as her or his own control and the experiment was balanced for inter-subject variation.

5 Røde NT USB (transmission range: 20 Hz to 20 kHz; limit sound pressure level: 110 dB SPL).

6 The position where the singular and plural appeared on the screen as well as the association between singular/plural and left/right arrow was held constant across all participants.

7 The figures were created in Minitab (2019).

8 Note one aspect here, however. Consider any of our test sentences, for instance, The screens became useless. Although it might be hard to detect the word-final [z] of screens embedded in white noise due to the spectral similarities of the speech sound and the noise type, one could argue that listeners could still benefit from a durational cue. That is, for a plural case, the temporal distance between the end of the stem (say the [n] of screens) and the beginning of the verb (say the [b] of became) is longer than for a singular case because there is an additional sound between [n] and [b] in the plural, namely [z].

References

Adank, Patti, Evans, Bronwen G., Stuart-Smith, Jane & Scott, Sophie K. 2009. Comprehension of familiar and unfamiliar native accents under adverse listening conditions. Journal of Experimental Psychology: Human Perception and Performance 35(2), 520–9.
Bates, Douglas, Mächler, Martin, Bolker, Ben & Walker, Steve. 2015. Fitting linear mixed-effects models using lme4. Journal of Statistical Software 67(1), 1–48. R package version 1.1.34.
Bent, Tessa & Atagi, Eriko. 2015. Children's perception of nonnative-accented sentences in noise and quiet. The Journal of the Acoustical Society of America 138(6), 3985–93. https://doi.org/10.1121/1.4938228
Bent, Tessa, Kewley-Port, Diane & Ferguson, Sarah Hargus. 2010. Across-talker effects on non-native listeners’ vowel perception in noise. The Journal of the Acoustical Society of America 128(5), 3142–51. https://doi.org/10.1121/1.3493428
Boersma, Paul & Weenink, David. 2023. Praat: Doing phonetics by computer [computer program]. Version 6.3.10. www.praat.org
Caballero, Gabriela & Kapatsinski, Vsevolod. 2015. Perceptual functionality of morphological redundancy in Choguita Rarámuri (Tarahumara). Language, Cognition and Neuroscience 30(9), 1134–43. https://doi.org/10.1080/23273798.2014.940983
Carroll, Rebecca & Ruigendijk, Esther. 2013. The effects of syntactic complexity on processing sentences in noise. Journal of Psycholinguistic Research 42(2), 139–59. https://doi.org/10.1007/s10936-012-9213-7
Corbett, Greville G. 2006. Agreement. Cambridge: Cambridge University Press.
Corretge, Ramon. 2012–23. Praat Vocal Toolkit. www.praatvocaltoolkit.com
Cutler, Anne, Weber, Andrea, Smits, Roel & Cooper, Nicole. 2004. Patterns of English phoneme confusions by native and non-native listeners. The Journal of the Acoustical Society of America 116(6), 3668–78. https://doi.org/10.1121/1.1810292
Diessel, Holger. 2007. Frequency effects in language acquisition, language use, and diachronic change. New Ideas in Psychology 25(2), 108–27. https://doi.org/10.1016/j.newideapsych.2007.02.002
Dube, Sithembinkosi, Kung, Carmen, Brock, Jon & Demuth, Katherine. 2019. Perceptual salience and the processing of subject–verb agreement in 9–11-year-old English-speaking children: Evidence from ERPs. Language Acquisition 26(1), 73–96. https://doi.org/10.1080/10489223.2017.1394305
Field, Andy, Miles, Jeremy & Field, Zoë. 2012. Discovering statistics using R. Los Angeles, CA: Sage.
Garcia Lecumberri, Maria Luisa & Cooke, Martin. 2006. Effect of masker type on native and non-native consonant perception in noise. The Journal of the Acoustical Society of America 119(4), 2445–54. https://doi.org/10.1121/1.2180210
Gross, Megan C., Patel, Haliee & Kaushanskaya, Margarita. 2021. Processing of code-switched sentences in noise by bilingual children. Journal of Speech, Language, and Hearing Research 64(4), 1283–302. https://doi.org/10.1044/2020_JSLHR-20-00388
Harris, Alice C. & Samuel, Arthur G. 2011. Perception of exuberant exponence in Batsbi: Functional or incidental? Language 87(3), 447–69. https://doi.org/10.1353/lan.2011.0068
Hartley, Tony, Sharoff, Serge, Stephenson, Paul, Wilson, James, Babych, Bogdan & Thomas, Martin. 2011. IntelliText [computer program]. http://corpus.leeds.ac.uk/itweb/htdocs/Query.html#
Kuznetsova, Alexandra, Brockhoff, Per B. & Christensen, Rune H. B. 2017. lmerTest package: Tests in linear mixed effects models. Journal of Statistical Software 82(13), 1–26. R package version 3.1.3. https://doi.org/10.18637/jss.v082.i13
Lenth, Russell V. 2020. emmeans: Estimated marginal means, aka least-squares means. R package version 1.8.7. https://cran.r-project.org/package=emmeans
Levin, Magnus. 2001. Agreement with collective nouns in English (Lund Studies in English 103). Lund: Lund University.
Liu, Chang & Kewley-Port, Diane. 2004. Formant discrimination in noise for isolated words. The Journal of the Acoustical Society of America 116(5), 3119–29.
Mattys, Sven L., Davis, Matthew H., Bradlow, Ann R. & Scott, Sophie K. 2012. Speech recognition in adverse conditions: A review. Language and Cognitive Processes 27(7–8), 953–78.
Mi, Lin, Tao, Sha, Wang, Wenjing, Dong, Qi, Jin, Su-Hyun & Liu, Chang. 2013. English vowel identification in long-term speech-shaped noise and multi-talker babble for English and Chinese listeners. The Journal of the Acoustical Society of America 133(5), EL391–7. https://doi.org/10.1121/1.4800191
Minitab. 2019. Minitab 19 [computer program]. www.minitab.com
Psychology Software Tools. 2016. E-Prime [computer program]. Pittsburgh, PA.
R Core Team. 2023. R: A language and environment for statistical computing. R version 4.3.1. Vienna: R Foundation for Statistical Computing. www.R-project.org
Schlechtweg, Marcel & Corbett, Greville G. 2023. Is morphosyntactic agreement reflected in acoustic detail? The s duration of English regular plural nouns. English Language and Linguistics 27(1), 67–92. https://doi.org/10.1017/S1360674322000223
Schmidtke, Jens. 2016. The bilingual disadvantage in speech understanding in noise is likely a frequency effect related to reduced language exposure. Frontiers in Psychology 7(article 678), 1–15. https://doi.org/10.3389/fpsyg.2016.00678
Smiljanic, Rajka & Sladen, Douglas. 2013. Acoustic and semantic enhancements for children with cochlear implants. Journal of Speech, Language, and Hearing Research 56(4), 1085–96. https://doi.org/10.1044/1092-4388(2012/12-0097)
Squires, Lauren. 2014. Processing, evaluation, knowledge: Testing the perception of English subject–verb agreement variation. Journal of English Linguistics 42(2), 144–72. https://doi.org/10.1177/0075424214526057
Steele, Susan. 1978. Word order variation: A typological study. In Greenberg, Joseph H., Ferguson, Charles A. & Moravcsik, Edith A. (eds.), Universals of human language IV: Syntax, 585–623. Stanford, CA: Stanford University Press.
Tanner, Darren & Bulkes, Nyssa Z. 2015. Cues, quantification, and agreement in language comprehension. Psychonomic Bulletin & Review 22, 1753–63. https://doi.org/10.3758/s13423-015-0850-3
Van Engen, Kristin J. & Bradlow, Ann R. 2007. Sentence recognition in native- and foreign-language multi-talker background noise. The Journal of the Acoustical Society of America 121(1), 519–26. https://doi.org/10.1121/1.2400666