Drawing on the DREAD threat assessment framework, this study systematically analyzes the security risks that arise when large language models (LLMs) are applied in language education. As "digital native speakers," LLMs are now deeply embedded across the entire teaching process, introducing novel educational risks such as "language hallucinations," cultural bias, and prompt injection. Mapped onto the five DREAD dimensions (damage, reproducibility, exploitability, affected users, discoverability), these risks exhibit high damage potential, as erroneous output is internalized at high cost during language acquisition; high reproducibility across classrooms; low exploitation thresholds; a broad scope of affected users; and low discoverability. In response, the study constructs a multi-layered, dynamic governance system: reducing damage through combined technical filtering and manual verification; containing reproducibility and raising exploitation thresholds through tiered access controls and full-process monitoring; and strengthening the digital literacy of teachers and students alike to narrow the scope of impact and improve risk discoverability. The study concludes that only a collaborative ecosystem led by educational principles, empowered by technology, supported by institutions, and grounded in literacy can turn LLMs into genuinely constructive tools for developing language proficiency and cross-cultural understanding.
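To make the DREAD rating concrete, the sketch below scores two of the risks named above on the framework's five dimensions. It is a minimal Python illustration only: the `DreadScore` class, the 1-3 scale, the level cut-offs, and the example scores are assumptions chosen for exposition, not the instrument or the values used in the study.

```python
from dataclasses import dataclass

@dataclass
class DreadScore:
    """One risk rated 1 (low) to 3 (high) on each DREAD dimension (illustrative scale)."""
    damage: int           # harm if realized, e.g. hallucinated usage internalized by learners
    reproducibility: int  # how reliably the risk recurs across classrooms and sessions
    exploitability: int   # how low the threshold is to trigger the risk
    affected_users: int   # breadth of learners and teachers exposed
    discoverability: int  # following the abstract's framing: risks invisible to teachers score high

    def total(self) -> int:
        # Simple additive DREAD score over the five dimensions (range 5-15).
        return (self.damage + self.reproducibility + self.exploitability
                + self.affected_users + self.discoverability)

    def level(self) -> str:
        # Illustrative cut-offs on the 5-15 range, not the study's thresholds.
        t = self.total()
        return "high" if t >= 12 else "medium" if t >= 8 else "low"

# Hypothetical ratings echoing the risk profile the abstract describes:
# easy to trigger, broadly felt, and hard for teachers to notice in class.
prompt_injection = DreadScore(damage=2, reproducibility=3, exploitability=3,
                              affected_users=3, discoverability=3)
language_hallucination = DreadScore(damage=3, reproducibility=3, exploitability=2,
                                    affected_users=3, discoverability=3)

for name, risk in [("prompt injection", prompt_injection),
                   ("language hallucination", language_hallucination)]:
    print(f"{name}: {risk.total()}/15 -> {risk.level()}")
```

Under this additive reading, governance measures act on individual dimensions: filtering and manual verification lower the damage term, access controls and monitoring lower reproducibility and exploitability, and literacy training narrows affected users while raising discoverability.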