A Reinforcement Generative Adversarial Network for Multilingual Neural Machine Translation
DOI: https://doi.org/10.70891/JAIR.2026.030001

Keywords: Chinese and English machine translation, adaptive context-aware, policy gradient reinforcement

Abstract
Multilingual neural machine translation (MNMT) has achieved remarkable performance across a wide range of benchmarks. However, traditional MNMT methods are largely constrained by a unidirectional decoding strategy, either left-to-right or right-to-left. This limitation restricts their capacity to fully capture contextual semantics and often leads to translation imbalances. To address these challenges, this paper proposes a reinforcement generative adversarial network for multilingual neural machine translation (RGAN). RGAN integrates a novel generative network that optimizes the decoding order of the conventional Transformer architecture. By dynamically adjusting the decoding order, RGAN mitigates output imbalances while capturing richer contextual information. Embedded within a generative adversarial network (GAN) framework, the generator is continuously refined through adversarial training and thus progressively produces higher-quality translations. Moreover, RGAN employs a policy-gradient reinforcement learning technique to overcome the non-differentiability of discrete text generation, which otherwise prevents the discriminator's feedback from propagating back to the generator. The proposed framework significantly improves translation fluency and accuracy for both Chinese and English.
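The policy-gradient idea the abstract refers to can be illustrated with a minimal, self-contained sketch: the generator is treated as a stochastic policy over tokens, the discriminator's score on a sampled sequence serves as the reward, and the REINFORCE estimator updates the policy without needing gradients through the discrete sampling step. All names here (the toy vocabulary, the stand-in `discriminator_reward`, the single shared token distribution) are illustrative assumptions, not the paper's actual architecture.

```python
import math
import random

# Toy vocabulary; a real generator would condition on the source sentence
# and the decoding history, which this sketch deliberately omits.
VOCAB = ["<s>", "a", "b", "</s>"]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def sample(probs, rng):
    # Draw one index from a categorical distribution.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

def discriminator_reward(tokens):
    # Stand-in discriminator: rewards sequences that end with "</s>".
    # In the GAN setting this would be a learned network's realism score.
    return 1.0 if tokens and tokens[-1] == "</s>" else 0.0

def reinforce_step(logits, rng, lr=0.5, seq_len=3):
    # Sample a sequence from the current policy, collect the reward from
    # the discriminator, then apply the REINFORCE gradient for a softmax
    # policy: d/d logit_j = reward * (1[j was sampled] - p_j).
    probs = softmax(logits)
    picks = [sample(probs, rng) for _ in range(seq_len)]
    tokens = [VOCAB[k] for k in picks]
    r = discriminator_reward(tokens)
    for k in picks:
        for j in range(len(logits)):
            indicator = 1.0 if j == k else 0.0
            logits[j] += lr * r * (indicator - probs[j])
    return r

rng = random.Random(0)
logits = [0.0] * len(VOCAB)
for _ in range(200):
    reinforce_step(logits, rng)
# After training, the policy shifts probability mass toward "</s>",
# the token the stand-in discriminator rewards.
```

Because the update uses only sampled indices and a scalar reward, no gradient ever flows through the discrete sampling operation, which is exactly why policy gradients are the standard workaround for adversarial training on text.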
