
Revisiting communicative competence in the age of AI: Implications for large-scale testing

Published online by Cambridge University Press:  13 June 2025

Xiaoming Xi*
Affiliation: Hong Kong Examinations and Assessment Authority, Hong Kong SAR

Abstract

Changes in the characterization of communicative competence, especially in the context of large-scale testing, are typically driven by an evolving understanding of real-world communication and advancements in test construct theories. Recent advances in AI technology have fundamentally altered the way language users communicate and interact, prompting a reassessment of how communicative competence is defined and how language tests are constructed.

In response to these significant changes, an AI-mediated interactionalist approach is proposed to expand communicative competence. This approach advocates extending the traditional concept of communicative competence to encompass AI digital literacy skills and broadened cognitive and linguistic capabilities. These skills enable effective use of AI tools, as well as the interpretation and application of AI-generated outputs and feedback, to improve communication. Embedding these competencies into language assessments aligns them with contemporary communication dynamics, enhances their relevance, and prepares learners to navigate AI-augmented communication environments.

While high-stakes testing faces considerable challenges in adopting this expanded construct, low-stakes formative assessments, where scores do not influence critical decisions about individuals and where any errors in score-based actions can be rectified, provide fertile ground for exploring the integration of AI tools into assessments. In these contexts, educators can explore giving learners access to various AI tools, such as editing and generative tools, to enhance assessment practices. These explorations can begin to address some of the conceptual challenges involved in applying this expanded construct definition in high-stakes environments and contribute to resolving practical issues.

Type: Research Article

Copyright: © The Author(s), 2025. Published by Cambridge University Press.

