
ARE REPLICATION STUDIES INFREQUENT BECAUSE OF NEGATIVE ATTITUDES?

INSIGHTS FROM A SURVEY OF ATTITUDES AND PRACTICES IN SECOND LANGUAGE RESEARCH

Published online by Cambridge University Press:  07 December 2021

Kevin McManus*
Affiliation:
The Pennsylvania State University, University Park, PA, USA
*Corresponding author. E-mail: kmcmanus@psu.edu

Abstract

Replication is a research methodology designed to verify, consolidate, and generalize knowledge and understanding within empirical fields of study. In second language studies, however, reviews share widespread concern about the infrequency of replication. A common but speculative explanation for this situation is that replication studies are not valued because they lack originality and/or innovation. To better understand and respond to the infrequency of replication in our field, 354 researchers were surveyed about their attitudes toward replication and their practices conducting replication studies. Responses included worldwide participation from researchers with and without replication experience. Overall, replications were evaluated as relevant and valuable to the field. Claims that replication studies lack originality/innovation were not supported. However, dissemination issues were identified: half of published replication studies lacked explicit labeling and one quarter of completed replications were unpublished. Explicit labeling of replication studies and training in research methodology and dissemination can address this situation.

Type
Research Report
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2021. Published by Cambridge University Press

Study quality and methodological rigor, including how research data are collected, analyzed, and interpreted, are now a major focus of attention in the field of second language (L2) studies (Gass et al., 2021; Plonsky, 2013). This “methodological turn” (Byrnes, 2013) has been fundamental to the growth and credibility of the discipline because theories about L2 learning are built and developed by collecting, analyzing, and interpreting data. However, a critical problem facing empirical disciplines like L2 studies is that established findings and previous studies appear to be seldom revisited (Marsden et al., 2018; Zwaan et al., 2018). This means that new studies, theories, and applications regularly build on unverified, unconfirmed, and sometimes scanty results.

As reviews of the field have repeatedly noted, replication studies are critically needed to consolidate and strengthen the field’s evidence base (Marsden et al., 2018; Porte, 2012). This is because replication allows us to better understand how a study’s research data were collected, measured, and analyzed, as well as the extent to which unexpected and/or unanticipated factors potentially influenced the results (Porte & McManus, 2019; Schmidt, 2009). Replication is therefore one way to assure the quality of our work. In L2 research, however, revisiting a study to understand the nature, validity, and reliability of its findings is not yet considered an accepted or necessary part of the research process.

One explanation for the infrequency of replication in the field is that replication studies are not valued because they lack originality and/or innovation (Marsden et al., 2018; Porte, 2012). For example, Porte and Richards (2012) surmised that replication “is regarded as low-prestige, mundane, ‘unoriginal’, or ‘non-academic’ and therefore not encouraged by faculty” (p. 285; see also Marsden et al., 2018; Porte & McManus, 2019). Small-scale surveys from neighboring disciplines provide some support for these claims. In translation and interpreting studies, for example, Olalla-Soler’s (2020) survey of 52 researchers indicated that approximately 25% of respondents saw replication as “uninteresting” and “not a priority,” despite views that replication is needed to grow the discipline. In a survey of 73 researchers in computer science education (Ahadi et al., 2016), respondents agreed that “original studies are more prestigious than replication studies” (p. 7) and thought that replications contributed little toward citation and grant success. These attitudes were reinforced by Easley et al.’s (2013) survey of journal editors: “social science editors [n = 31] think of replication as an uncreative process that unfairly displaces ‘original’ and important studies” (p. 1459). However, slightly more optimistic attitudes were reported by Mu and Matsuda (2016) in a survey of 107 authors in the Journal of Second Language Writing. Close to half of respondents agreed that “replication studies count towards hiring, tenure, and promotion decisions as much as original studies do” (p. 207), but this question was ignored by 40% of respondents. Respondents additionally evaluated replication studies as “too risky” and “potentially injurious” to tenure and/or promotion (p. 208).

Taken together, even though a small body of research suggests that replication is needed, the perceived relevance and value of conducting replication studies remain unclear. As a result, negative or ambivalent attitudes toward the relevance, value, and originality/innovation of replication studies could explain their infrequency. It is also possible that attitudes toward replication might be moderated by researcher experience and training (e.g., career stage, research methodology courses taken, years since receiving a PhD), in line with findings recently reported by Isbell et al. (2021) and Loewen et al. (2020) for research ethics and statistical knowledge. Apart from a few small-scale surveys, however, very little is known about researchers’ attitudes toward replication and their practices conducting replication studies. To address concerns about the infrequency of replication in the field, research is needed that investigates (a) researchers’ attitudes toward replication, (b) how replication studies are reported and disseminated, and (c) potential relationships between attitudes and researcher experience and training.

Current Study

This study addressed the aforementioned gaps by investigating researchers’ attitudes toward replication and the practices of researchers who have carried out replications in L2 studies. One particular motivation for this investigation is that discourse around study quality, replication, and a perceived crisis in the reproducibility of empirical results has increased in recent years (e.g., Gass & Plonsky, 2020; Marsden et al., 2018). One example of this is Marsden et al.’s (2018) narrative and systematic review, which refocused attention on the lack of replication in the field (see also Language Teaching Review Panel, 2008; Porte, 2012). Based on a sample of 67 self-identified replication studies, Marsden et al. estimated a mean rate of 1 published replication for every 400 articles. The review concluded with 16 recommendations to support future replication research, including ways to address publication bias and labeling ambiguities and to promote greater openness and transparency in research. The field has also witnessed a variety of initiatives designed to support and promote replication, including the creation of replication studies as specific manuscript types in some journals (e.g., Language Teaching, Studies in Second Language Acquisition), funding mechanisms to support replication of influential studies (e.g., Institute of Education Sciences), explicit guidelines from professional societies articulating the value and place of replication studies in decisions about tenure and promotion (e.g., American Association for Applied Linguistics), as well as a recent textbook that guides researchers through the replication research process (Porte & McManus, 2019). Together, these are important initiatives that may have influenced attitudes toward replication.

In the current study, three related questions were investigated to better understand why replication studies appear infrequent in L2 studies. First, researchers were surveyed about their attitudes toward replication. This question responds to claims that replication studies are infrequent because they are not relevant and/or are not valuable to the field. Second, researchers with replication experience were surveyed about their practices conducting and reporting replication studies. This question seeks to understand in what ways claims about the infrequency of replication studies might be explained by factors related to how replications are reported and disseminated. Third, relationships between researcher background characteristics and attitudes toward replication were examined to understand in what ways experience in the field, including career stage, rate of publication, and research methods training, potentially shapes attitudes toward replication. The following research questions were investigated:

RQ1. What are the attitudes of researchers toward replication?

RQ2. What are the practices of researchers who have carried out replication studies?

RQ3. In what ways do researcher background variables relate to attitudes toward replication?

Method

Participants

The target population was researchers, including PhD students, in the field of L2 studies. Data were collected between September 2020 and March 2021. Following Isbell et al. (2021) and Loewen et al. (2020), two techniques were used to obtain a broad and large sample of the target population. First, names and e-mails were extracted from recent conference programs connected with L2 research: International Association of Applied Linguistics (2017); the L2 acquisition, language acquisition, and attrition strand of the American Association for Applied Linguistics (2017–2020); British Association for Applied Linguistics (2017–2019); European Second Language Association (2018–2019); Japan Second Language Association (2017–2019); and Second Language Research Forum (2017–2019). Internet searches using google.com were conducted to collect missing e-mail addresses. Using this procedure, an invitation to the survey was sent to, and received by, 3,285 unique contacts. Second, survey links were posted to the International Symposium on Bilingualism and Info-CHILDES listservs, social media (Twitter and the Applied Linguistics Research Methods Facebook Group), and the Linguist List. Researchers were also invited to share the link with colleagues in L2 studies.

Using these recruitment methods, 556 people started the survey by clicking on the survey link and 354 completed the survey (i.e., provided a response for all questions). The final sample included 354 respondents from 45 different countries (44% in North America, 29% in Europe, 10% in Asia, 6% in South America, 2% in each of Africa and Australia; “no response” = 7%). In terms of career stage, 24% of respondents self-identified as PhD students, 8% as postdoctoral researchers, 25% as assistant professors, 19% as associate professors, and 16% as full professors (“other” = 1%, “no response” = 7%). Additional characteristics of the data sample with information about years since receiving the PhD, publishing experience, research orientation, and research training are summarized in Table 1.

TABLE 1. Characteristics of the data sample

Note: Research orientation was coded as follows: 1 = quantitative only, 2 = mostly quantitative, 3 = equal parts quantitative and qualitative, 4 = mostly qualitative, 5 = all qualitative.

Survey

The survey’s design was informed by previous work investigating researchers’ attitudes toward replication and their practices conducting replication studies (Ahadi et al., 2016; Mu & Matsuda, 2016; Olalla-Soler, 2020). To increase comparability between the current study and recent research on this topic, a modified version of Olalla-Soler’s (2020) survey was used. Pilot testing with 12 researchers in L2 studies resulted in revisions to the survey’s flow, labeling, questions, and response options. The survey is available in the Online Supplementary Materials and from OSF (https://osf.io/6kfxz; see also IRIS).

The survey included three sections: section 1: attitudes toward replication; section 2: replication practices; and section 3: background information. Before beginning the survey, respondents were provided with a concise definition of a replication study (“A replication study is defined as an empirical study that involves repeating the research procedure of a previous piece of work, with or without changes [Schmidt, 2009]”) and a definition of L2 research (“L2 research includes any research that involves analyzing data from L2 speakers”). The survey was designed and administered using Qualtrics software (2021). The “survey flow” option in Qualtrics customized the order of the questions depending on respondents’ answers; therefore, no respondent was asked about replication practices if they had not conducted a replication study.

In section 1, respondents were asked about their attitudes toward conducting replication research in the field of L2 studies (e.g., “Do you think researchers should replicate their own studies, those of other researchers, or both?,” “Do you think the amount of replication in L2 research should remain the same, increase, or decrease?”). In section 2, respondents who indicated that they had replicated one or more studies were asked about their experiences and practices carrying out, reporting, and publishing their replication(s) (e.g., “How many of your replications were exact/direct, close/partial, or conceptual?” [definitions of each type were provided], “How many of your replications have been explicitly labeled as a replication in the title or abstract of the publication?”). In section 3, background information was requested, including country of residence, year PhD was received, current position, publishing experience, and research orientation.

Analyses

Only data from completed surveys were used in the analysis, with completion defined as a respondent providing a response for all questions, including “no response” (data are available from OSF, https://osf.io/6kfxz). An estimation approach to data analysis was used (Cumming & Calin-Jageman, 2017), in which analyses primarily involved descriptive statistics for each survey question (mean, standard deviation, 95% confidence intervals [CIs], median, interquartile range). CIs with short intervals that do not cross zero are interpreted as indicators of a statistically meaningful effect. In addition, CIs can be used to generalize beyond a specific sample and make predictions for future replication studies (see Cumming & Calin-Jageman, 2017).
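To make the estimation approach concrete, the following minimal base-R sketch computes these descriptive statistics and a 95% CI of the mean for a single survey item; the ratings vector is a hypothetical illustration on the 0–100 scale, not the study’s data (which are available from OSF).

# Minimal sketch of the estimation approach for one survey item.
# The ratings below are hypothetical 0-100 agreement ratings, not study data.
ratings <- c(95, 88, 100, 72, 60, 90, 85, 100, 78, 92)

n    <- sum(!is.na(ratings))
m    <- mean(ratings, na.rm = TRUE)
s    <- sd(ratings, na.rm = TRUE)
ci95 <- m + c(-1, 1) * qt(0.975, df = n - 1) * s / sqrt(n)  # 95% CI of the mean

c(mean = m, sd = s, ci_lower = ci95[1], ci_upper = ci95[2],
  median = median(ratings), IQR = IQR(ratings))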

To address RQ3 about potential relationships between attitudes toward replication and researcher experience/background, Spearman rank correlations with bootstrapped 95% CIs (1,000 replicates) were computed using RVAideMemoire (Hervé, 2021) in R (R Core Team, 2021). The attitude data came from question 1.4 in the survey, and the background variables were career stage, years since PhD, publication rate, and statistics courses taken. All instances of “no response” were coded as NA. Estimates of effect were interpreted using correlation coefficients and their 95% CIs. Correlation coefficients around 0.25, 0.40, and 0.60 are interpreted as small, medium, and large, respectively (Plonsky & Oswald, 2014).
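A hedged sketch of this correlation analysis is given below, using the spearman.ci() function from the cited RVAideMemoire package; the data frame and column names are hypothetical placeholders rather than the study’s actual variables.

# Bootstrapped Spearman correlation (1,000 replicates, 95% CI), as described above.
# The data frame and column names are hypothetical placeholders, not the study's data.
library(RVAideMemoire)  # Hervé, 2021

dat <- data.frame(
  attitude_value  = c(90, 75, 100, 60, 85, 95, 70, 80, 88, 65),  # 0-100 attitude rating
  methods_courses = c(3, 1, 4, 0, 2, 5, 1, 2, 3, 0)              # methods/statistics courses taken
)

# Spearman's rho with a bootstrapped 95% confidence interval
spearman.ci(dat$attitude_value, dat$methods_courses, nrep = 1000, conf.level = 0.95)

# Interpretation follows Plonsky and Oswald (2014): rho around .25 = small,
# .40 = medium, .60 = large; CIs that do not cross zero indicate a meaningful effect.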

Results

Results are presented as follows: attitudes toward replication (RQ1), practices conducting replication studies (RQ2), and relationships between attitudes toward replication and researcher experience/background characteristics (RQ3). Results reported for specific survey questions are cross-referenced to the supplementary materials.

What Are the Attitudes of Researchers toward Replication?

Overall, respondents expressed very positive attitudes toward the place and value of replication in the field (Q1.4; see Table 2 and Figure 1). Using a scale from 0 (strongly disagree) to 100 (strongly agree), respondents strongly agreed that replications “are valuable to the field,” “strengthen a discipline,” and “are relevant to L2 research.” Respondents also expressed considerable agreement with claims that replications can “consolidate,” “expand,” and “verify” previous results. In addition, respondents disagreed with claims that replications lack “innovation” and “originality.” Some statements elicited relatively ambivalent ratings, however, including that replications “are negatively evaluated by reviewers” and “question the original researcher and their findings.”

TABLE 2. Descriptive results for statements about replication research ranked by mean rating

Note: Respondents rated each item on a scale of 0 (“strongly disagree”) to 100 (“strongly agree”).

Figure 1. Histograms showing respondents’ ratings of statements about replication research.

Note: Each statement was rated on a scale from 0 (“strongly disagree”) to 100 (“strongly agree”).

The histograms in Figure 1, which show counts of individual responses (in bins of 10), both reinforce these measures of central tendency and provide more nuance. For example, statements about the relevance and value of replication show counts that are very heavily skewed toward the “strongly agree” side of the scale. However, even though counts for statements about replications lacking in “originality” and “innovation” tend to collect toward the “strongly disagree” side of the scale, greater dispersion across the full scale is also visible. Taken together, these results suggest positive attitudes toward replication in our field, especially with regard to its relevance and value. In addition, respondents tend to disagree with claims that replications lack innovation and originality.

To understand attitudes toward replication more fully, respondents were asked whether they would recommend that others replicate studies (and which studies should be replicated), whether they would support a PhD student conducting a replication, and whether they thought the amount of replication in the field should change.

First, 89.3% (n = 316) of the 354 respondents indicated that they would recommend that other researchers carry out replication studies (Q1.1). Only 5.4% (n = 19) would not (“no response” = 5.4%, n = 19). When asked which studies should be replicated (Q1.3), most respondents thought that researchers should replicate both their own studies and those of other people (81.9%, n = 290). A small number of respondents thought that researchers should replicate either the studies of other people only (9.9%, n = 35) or their own studies only (3.1%, n = 11). Very few respondents thought that researchers should not conduct replications (2.5%, n = 9; “no response” = 2.5%, n = 9).

Second, respondents were asked if they would support a graduate student who wanted to conduct a replication as part of their PhD project (Q1.2). Most respondents indicated that they would support this: fully support (33.9%, n = 120), support but an additional study is required (28.8%, n = 102), and support but with reservations (13%, n = 46). A minority of respondents selected “No, I don’t think replication is appropriate for a PhD project” (8.8%, n = 31; “maybe” = 4.8%, n = 17; “no response” = 5.4%, n = 19). Some respondents added that their support would depend on the quality of the study to be replicated or that a replication is more appropriate for MA students, and some thought that including a replication study could “limit options for getting some jobs” (respondent 204) or “harm their career” (respondent 115).

Third, when asked what percentage of empirical studies in L2 research they thought had been replicated (Q1.5), the most common response was “up to 10%” (56.5%, n = 200), followed by “up to 25%” (22.6%, n = 80) and “about 50%” (6.5%, n = 23). Most respondents therefore agreed that the amount of replication in the field is low. Very few respondents selected “up to 75%” (2%, n = 7) or “nearly all” (0.8%, n = 3). In addition, 79.9% (n = 283) of respondents thought the amount of replication in our field should increase, while 6.8% (n = 24) thought it should remain the same and 1.4% (n = 5) thought it should decrease (Q1.6).

Overall, these findings suggest generally positive attitudes toward conducting replication studies in our field as well as an awareness that replication studies are infrequent.

What Are the Practices of Respondents Who Have Carried Out Replication Studies?

Just over half of respondents (54.8%, n = 194) had tried to replicate an empirical study that was initially carried out by themselves or somebody else (“no response” = 0.8%, n = 3; Q2.2). Figure 2 shows that respondents with replication experience are quite evenly distributed across career stages.

Figure 2. Percentage of respondents with and without replication experience by career stage.

Of the 157 respondents who had never tried to replicate a study, the most common reasons for not doing so were “I did not have a reason” (31.8%, n = 50), “I am concentrated on an original line of research with no time/interest/wish to replicate others” (22.3%, n = 35), and “Replicating an empirical study is less impactful than conducting an original empirical study” (18.5%, n = 29; Q2.3). In addition, 65% (n = 102) of the 157 respondents without replication experience indicated that they wanted to carry out a replication at some point in the future (“no response” = 17.2%, n = 27; Q2.4). Also connected with replication practices is the extent to which respondents (with and without replication experience) had been contacted by someone else who wanted to replicate one of their studies (Q2.1). The majority of respondents (75.4%, n = 267) had never been contacted by another researcher who wanted to replicate one of their studies (23.4%, n = 83, had been contacted; 4 respondents selected “no response”).

The 194 respondents with replication experience were asked additional questions about their practices. Just over half of this sample had replicated at least one of their own studies (52.6%, n = 102; “no response” = 6; Q2.6). In terms of the types of replication studies conducted, close/partial replications were the most common: 69.1% (n = 134) of respondents had carried out at least one close/partial replication study, 56.7% (n = 110) had carried out at least one conceptual replication, and 21.6% (n = 42) had carried out at least one exact replication study (Q2.7). In addition, respondents were asked about the conclusions of their completed replication studies (Q2.8): 56.2% (n = 109) of replications reported in the sample reached the same conclusions as the initial study.

In terms of dissemination, most respondents indicated that they had presented a replication study at a conference or meeting (74.2%, n = 144). Just under one quarter of respondents reported that they had not presented their replication work at conferences/meetings (23.2%, n = 45; “no response” = 2.6%, n = 5; Q2.9). Respondents were also asked about the publishing venues used for disseminating their replication work. Out of the 340 completed replication studies reported in the sample, 52.4% (n = 178) of replications had been published in a peer-reviewed journal, with fewer replication studies published in books and book chapters (17.6%, n = 60) and in non-peer-reviewed journal articles (6.2%, n = 21; Q2.10). However, 23.8% (n = 81) of completed replications had not been published (excluding replications under review and/or in preparation). These findings indicate that peer-reviewed journals represent the primary publishing venue of replication studies in the field. However, almost one quarter of completed replications in our field may be unpublished.

Related to publishing venues, respondents were asked whether they had encountered any difficulties publishing their replications (Q2.11). Almost half of respondents indicated no difficulties publishing the results of their replications (49.1%, n = 83, excluding replications in progress, n = 25; “no response” = 33 respondents). For respondents who did encounter difficulties publishing their replication studies, the main reasons were that the manuscript had to be submitted to several journals before it was published (8.9%, n = 15), that the manuscript had to be expanded by including a new study (8.3%, n = 14), and that the editors and/or reviewers were reluctant to publish the results (5.9%, n = 10). Lastly, respondents were asked how many of their published replication studies had been explicitly labeled as a replication in the title or abstract (Q2.12). Half of respondents stated that none of their replications were explicitly labeled in the title or abstract (50%, n = 97). In terms of the raw number of published replications reported in the sample, only 111 out of 208 replication studies (or 53.4%) were reported to include explicit labeling. These findings suggest that close to half of replications in our field might not be explicitly labeled as replication studies.

In What Ways Do Researcher Background Variables Relate to Attitudes toward Replication?

Finally, the extent to which respondents’ attitudes toward replication (Q1.4) might be related to the background variables of years since PhD (Q3.2), career stage (Q3.3), publication rate (Q3.4), and number of research methods and/or statistics courses taken (Q3.5) was examined. Overall, no meaningful relationships were found among attitudes and career stage, years since PhD, and publication rate. For research methodology courses taken, however, a small number of relationships emerged (see Table 3 and supplementary materials for scatterplots). The magnitude of these relationships was small with CIs that did not cross zero. In particular, respondents who reported having taken more research methods courses indicated more positive attitudes toward replication, both in terms of the relevance and value of replications to the field as well as a means to “strengthen a discipline.” In addition, a negative relationship was found between research methods experience and the statement “replications question the original researcher and their findings.” Even though the magnitude of these relationships is small and should therefore be interpreted cautiously, they appear to suggest connections between studying research methodology and positive attitudes toward replication.

TABLE 3. Spearman’s rho correlations with 95% confidence intervals between attitudes toward replication and background characteristics

Note: Gray highlighting indicates correlations with CIs that do not cross zero.

Discussion

The current study aimed to better understand why replication studies appear to be infrequent in L2 research by surveying 354 researchers about their attitudes toward replication and their practices conducting replication studies. Overall, respondents evaluated replication studies as valuable and relevant to the field and thought that replications can strengthen a discipline and consolidate previous results. In addition, respondents tended to disagree with claims that replication studies lack originality or innovation. Furthermore, most respondents would recommend others to carry out replication studies, would support PhD students to carry out replications, and considered that the amount of replication in the field should increase. Altogether, these findings suggest that researchers in L2 studies judge replications to be valuable and relevant to the discipline.

In terms of the practices of researchers who have conducted replications in L2 studies, close replications were reported to be the most common and more than half of replications reached the same conclusions as the initial study. In terms of dissemination, four trends emerged. First, approximately one quarter of respondents had not presented their replication work at conferences/meetings. Second, one third of respondents had not published their completed replications (excluding replications under review or in preparation). Third, approximately half of completed replication studies were published in peer-reviewed journals. Fourth, only half of replication studies in the sample were reported to be explicitly labeled as a replication in the title or abstract. This last finding indicates that almost half of published replication studies in the field might not be identified as replications.

Lastly, few relationships were found among attitudes toward replication and the background characteristics of career stage, years since PhD, and publication rate. However, a small number of associations with research methods training were evident, potentially suggesting that experience studying and reflecting on research methodology could impact attitudes toward replication research in our field.

Taken together, these findings suggest positive attitudes toward replication in L2 research and room for improvement in the reporting and dissemination of replication studies. However, the effects of self-selection on the current study’s conclusions cannot be ruled out (see Dörnyei & Taguchi, 2010), even though attempts were made to obtain a broad and large sample of the target population, including respondents with and without replication experience and from different career stages around the world. This is because all respondents were volunteers. It is therefore important for future research to replicate this study. In addition, replications in other disciplines and at different points in time (e.g., 5 to 10 years from now) are needed to document potential variations across disciplines and changing attitudes and practices. Integrating qualitative data from follow-up interviews or focus groups into future replications of the current study could also offer a more comprehensive account of replication practices and attitudes in the field.

Improving the Discoverability of Replication Studies

Returning to this study’s general aim to better understand why replications appear to be infrequent in L2 research, the findings indicate that negative attitudes are probably not the sole explanation. Reporting and dissemination practices have likely played an important role in reducing the discoverability of replication studies and limiting their potential impact on the field.

One simple way to address this issue and improve the discoverability of replication studies is to use explicit labeling in the title and abstract (see Appelbaum et al., 2018; Porte & McManus, 2019). As previously noted, what might be perceived as a lack of replication could be a lack of transparent labeling. For example, in Marsden et al.’s (2018) review of self-labeled replication studies in L2 research, only 13 of the 63 published articles (or 21%) included the label “replication” in the title. In contrast, a recent meta-analysis of meta-analyses in English language teaching showed that 84 of the 90 meta-analyses (or 93%) included the label “meta-analysis” or “research synthesis” in the title (see Alsowat, 2020). When the label “replication” is not used in the study title, the discoverability of replications and their potential impact on the field are limited. Explicit labeling in the title and abstract can therefore improve the discoverability of replication research.

Providing targeted support and training to researchers in planning and reporting their replication studies is another way to address reporting and dissemination issues (see also Marsden et al., 2018). Compared with other types of research, such as meta-analysis and “original” research, very little guidance exists to support researchers in the design, conduct, and reporting of replications (but see Porte & McManus, 2019). Because replications appear to be infrequent, researchers likely require additional support to design and report replication studies. As a result, researchers may lack awareness about (a) how replication and “original” research studies differ in function, structure, and presentation as well as (b) how to convey the unique value of replication studies to editors, reviewers, and readers (see Appelbaum et al., 2018; Porte & McManus, 2019). An additional explanation for the infrequency of replication studies in our field, therefore, could be that researchers lack the exposure, experience, and training in research methodology and dissemination needed to effectively design and report replication studies. The field can address this potential challenge by creating discipline-specific resources for planning and conducting replication studies.

Based on the current study’s findings, five recommendations are proposed to facilitate and improve the discoverability, amount, and quality of replication in L2 studies:

  1. Replication studies should include the label “replication” in the study title and abstract.

  2. Systematic research training should be provided during conferences, at no additional cost to attendees, about how to design, present, and write up (replication) research studies.

  3. Field-specific resources and reporting guidelines should be developed that include recommendations for designing, presenting, and disseminating replication studies, including interpretation guidelines (e.g., Norris et al., 2015).

  4. Journals should tag replications on their websites and link them to the initial study.

  5. Authors should systematically discuss approaches to replication, with rationales, in published research studies.

By improving the discoverability of replications through transparent labeling, tagging, and linking as well as investing in the research methodology and dissemination training of researchers, we can at least begin to address concerns about the frequency and quality of replication studies in our field.

Conclusions

The current study’s findings show that researchers in L2 studies attach considerable relevance and value to replication. As a result, additional explanations for the infrequency of replication studies in our field are needed. Results concerning dissemination practices indicate systemic reporting limitations that are likely reducing the discoverability and impact of replication studies. Based on the current study’s findings, recommendations are proposed to address reporting and dissemination issues in the field, including explicit labeling of replication studies in the title and abstract; the development of field-specific resources and standards for reporting and interpreting replication studies; and systematic training in how to plan, present, and disseminate replication studies.

Data Availability Statement

The experiment in this article earned Open Materials and Open Data badges for transparent practices. The materials and data are available at https://osf.io/6kfxz/.

Supplementary Materials

To view supplementary material for this article, please visit http://doi.org/10.1017/S0272263121000838.

Footnotes

I am very grateful to all survey respondents for their time and contributions to this study, to members of the SLA reading group at Penn State, and to Yingying Liu and Amanda Huensch. I would also like to thank SSLA editor Susan Gass and four reviewers for their valuable feedback.

References

Ahadi, A., Hellas, A., Ihantola, P., Korhonen, A., & Petersen, A. (2016). Replication in computing education research: Researcher attitudes and experiences. Proceedings of the 16th Koli Calling International Conference on Computing Education Research, 2–11. https://doi.org/10.1145/2999541.2999554
Alsowat, H. H. (2020). Evidence-based practices of English language teaching: A meta-analysis of meta-analyses. English Language Teaching, 13, 75. https://doi.org/10.5539/elt.v13n11p75
Appelbaum, M., Cooper, H., Kline, R. B., Mayo-Wilson, E., Nezu, A. M., & Rao, S. M. (2018). Journal article reporting standards for quantitative research in psychology: The APA Publications and Communications Board task force report. American Psychologist, 73, 3–25. https://doi.org/10.1037/amp0000191
Byrnes, H. (2013). Notes from the editor. The Modern Language Journal, 97, 825–827. https://doi.org/10.1111/j.1540-4781.2013.12051.x
Cumming, G., & Calin-Jageman, R. (2017). Introduction to the new statistics: Estimation, open science, and beyond. Routledge.
Dörnyei, Z., & Taguchi, T. (2010). Questionnaires in second language research: Construction, administration, and processing (2nd ed.). Routledge.
Easley, R. W., Madden, C. S., & Gray, V. (2013). A tale of two cultures: Revisiting journal editors’ views of replication research. Journal of Business Research, 66, 1457–1459. https://doi.org/10.1016/j.jbusres.2012.05.013
Gass, S. M., Loewen, S., & Plonsky, L. (2021). Coming of age: The past, present, and future of quantitative SLA research. Language Teaching, 54, 245–258. https://doi.org/10.1017/S0261444819000430
Gass, S. M., & Plonsky, L. (2020). Introducing the SSLA Methods Forum. Studies in Second Language Acquisition, 42, 667–669. https://doi.org/10.1017/S0272263120000364
Hervé, M. (2021). RVAideMemoire: Testing and plotting procedures for biostatistics (0.9-79) [Computer software]. https://CRAN.R-project.org/package=RVAideMemoire
Isbell, D. R., Brown, D., Chan, M., Derrick, D., Ghanem, R., Gutiérrez Arvizu, M. N., Schnur, E., Zhang, M., & Plonsky, L. (2021). Misconduct and questionable research practices: The ethics of quantitative data handling and reporting in applied linguistics. Manuscript submitted for publication.
Language Teaching Review Panel. (2008). Replication studies in language learning and teaching: Questions and answers. Language Teaching, 41, 1–14. https://doi.org/10.1017/S0261444807004727
Loewen, S., Gönülal, T., Isbell, D. R., Ballard, L., Crowther, D., Lim, J., Maloney, J., & Tigchelaar, M. (2020). How knowledgeable are applied linguistics and SLA researchers about basic statistics? Data from North America and Europe. Studies in Second Language Acquisition, 42, 871–890. https://doi.org/10.1017/S0272263119000548
Marsden, E., Morgan-Short, K., Thompson, S., & Abugaber, D. (2018). Replication in second language research: Narrative and systematic reviews and recommendations for the field. Language Learning, 68, 321–391. https://doi.org/10.1111/lang.12286
Mu, C., & Matsuda, P. K. (2016). Replication in L2 writing research: Journal of Second Language Writing authors’ perceptions. TESOL Quarterly, 50, 201–219. https://doi.org/10.1002/tesq.284
Norris, J. M., Plonsky, L., Ross, S. J., & Schoonen, R. (2015). Guidelines for reporting quantitative methods and results in primary research. Language Learning, 65, 470–476. https://doi.org/10.1111/lang.12104
Olalla-Soler, C. (2020). Practices and attitudes toward replication in empirical translation and interpreting studies. Target: International Journal on Translation Studies, 32, 3–36. https://doi.org/10.1075/target.18159.ola
Plonsky, L. (2013). Study quality in SLA: An assessment of designs, analyses, and reporting practices in quantitative L2 research. Studies in Second Language Acquisition, 35, 655–687. https://doi.org/10.1017/S0272263113000399
Plonsky, L., & Oswald, F. L. (2014). How big is “big”? Interpreting effect sizes in L2 research. Language Learning, 64, 878–912. https://doi.org/10.1111/lang.12079
Porte, G. K. (Ed.). (2012). Replication research in applied linguistics. Cambridge University Press.
Porte, G. K., & McManus, K. (2019). Doing replication research in applied linguistics. Routledge.
Porte, G. K., & Richards, K. (2012). Replication in second language writing research. Journal of Second Language Writing, 21, 284–293. https://doi.org/10.1016/j.jslw.2012.05.002
Qualtrics. (2021). Qualtrics (July 2021) [Computer software]. https://www.qualtrics.com
R Core Team. (2021). R: A language and environment for statistical computing (1.4.1717) [Computer software]. R Foundation for Statistical Computing. https://www.R-project.org/
Schmidt, S. (2009). Shall we really do it again? The powerful concept of replication is neglected in the social sciences. Review of General Psychology, 13, 90–100. https://doi.org/10.1037/a0015108
Zwaan, R. A., Etz, A., Lucas, R. E., & Donnellan, M. B. (2018). Making replication mainstream. Behavioral and Brain Sciences, 41, e120. https://doi.org/10.1017/S0140525X17001972