Bridging the Gap: Aligning Communicative Language Testing Principles with AI-Driven Assessment in Civil Aviation Ground Service English

  • Jinmei Fan, Guangzhou Civil Aviation College, Guangzhou 510000, Guangdong, China
Keywords: Civil Aviation Ground Service English, Communicative Language Testing (CLT), AI automated assessment, Human-AI interaction scenario, Logical reconciliation

Abstract

The integration of Communicative Language Testing (CLT) principles with AI-driven automated assessment poses a significant challenge in professional language testing. Addressing this issue within the specific context of Civil Aviation Ground Service English, this study explores pathways for their logical reconciliation. Through conceptual analysis and theoretical deduction, with a focus on human-AI interaction scenarios, we demonstrate that the synergy between CLT and AI stems from a shared focus on competency measurement. Key findings reveal that: (1) standardized competency dimensions in CLT can be operationalized into data-processable formats for AI; (2) within professional contexts, AI algorithms can be tailored using authentic service corpora to meet CLT’s demand for situational authenticity; and (3) a division of labor based on competency level—where AI handles standardized scoring of lower-order competencies and human-AI collaboration assesses higher-order competencies—effectively resolves the tension between CLT’s dynamic communication and AI’s static algorithms. Ultimately, the study constructs a three-dimensional integration framework encompassing “professional register,” “competency level,” and “human-AI division of labor,” offering a theoretical model for CLT-AI integration and a practical blueprint for innovating Civil Aviation Ground Service English assessment.

References

Chen X, 2010, The Shift of “Authenticity” Criteria under Bachman’s Communicative Language Testing Theoretical Model. Journal of Jiangsu University (Social Science Edition), (02): 81–84.

Xue R, 2008, Communicative Language Testing: Theoretical Models and Assessment Criteria. Foreign Language Teaching, (03): 68–76.

Burstein J, LaFlair GT, von Davier AA, 2025, A Theoretical Assessment Ecosystem for Digital-First Language Assessment: The Duolingo English Test, Duolingo Research Report DRR, 4–16.

Wang H, Wu H, He Z, et al., 2022, Progress in Machine Translation. Engineering, 18(11): 143–153.

Canale M, Swain M, 1980, Theoretical Bases of Communicative Approaches to Second Language Teaching and Testing. Applied Linguistics, 31–47.

Russell S, Norvig P, 2020, Artificial Intelligence: A Modern Approach (4th Edition), Pearson, Hoboken.

Jin Y, Xu MJ, 2024, Ethical Reflections on Intelligent Technology Empowering Language Assessment. Foreign Languages and Their Teaching, (03): 1–10 + 145.

Hao J, von Davier A, Yaneva V, et al., 2024, Transforming Assessment: The Impacts and Implications of Large Language Models and Generative AI. Educational Measurement: Issues and Practice, 43(2): 16–29.

Alam S, Usama M, Alam MM, et al., 2023, Artificial Intelligence in the Global World: A Case Study of Grammarly as an E-Tool on ESL Learners’ Writing at Darul Uloom Nadwa. International Journal of Educational Technology in Higher Education, 13(11): 1741–1747.

Schaefer E, Martin J, 2023, Language Testing in Changing Times: An Interview with Professor Daniel Isbell, TEVAL: Technology-Enhanced Language Assessment Review, 1–5.

Mahmoud RH, 2022, Implementing AI-Based Conversational Chatbots in EFL Speaking Classes: An Evolutionary Perspective. Journal of Technology-Enhanced Language Learning, 1–21.

Ockey GJ, Chukharev-Hudilainen E, 2021, Human Versus Computer Partner in the Paired Oral Discussion Test. Applied Linguistics, 42(5): 924–944.

Spolsky B, 1985, The Limits of Authenticity in Language Testing. Language Testing, 31–40.

Bachman LF, Palmer AS, 1996, Language Testing in Practice, Oxford University Press, Oxford.

Douglas D, 2000, Assessing Language for Specific Purposes, Cambridge University Press, New York.

Published
2025-12-08