This bibliometric analysis of 460 peer-reviewed articles (2020–2024) maps the rapid evolution of Large Language Models (LLMs) in machine translation. The study reveals a sharp surge in research output, driven by advances in transformer architectures and marked by robust international collaboration. Key themes include pre-trained models, neural machine translation, and specialized applications in domains such as healthcare, underscoring the field's interdisciplinary nature. The findings chart current research trends and likely future trajectories for LLM-driven translation.