Concepedia

Publication | Open Access

Bridging the Gap between Training and Inference for Neural Machine Translation

Citations: 208
References: 20
Year: 2019

TLDR

Neural Machine Translation generates target words sequentially, but during training it conditions on ground‑truth context while inference relies on its own predictions, creating a discrepancy that causes error accumulation and over‑correction of otherwise acceptable translations. The study aims to mitigate this training‑inference mismatch by incorporating predicted context during training. The authors sample context words from both the ground‑truth and the model’s own predictions, selecting the predicted sequence that optimizes sentence‑level performance. Experiments on Chinese‑to‑English and WMT’14 English‑to‑German translation tasks show that this approach yields significant improvements across multiple datasets.

Abstract

Neural Machine Translation (NMT) generates target words sequentially by predicting each next word conditioned on the context words. At training time it predicts with the ground-truth words as context, while at inference it has to generate the entire sequence from scratch. This discrepancy in the fed context leads to error accumulation along the way. Furthermore, word-level training requires strict matching between the generated sequence and the ground-truth sequence, which leads to over-correction of different but reasonable translations. In this paper, we address these issues by sampling context words during training not only from the ground-truth sequence but also from the sequence predicted by the model, where the predicted sequence is selected with a sentence-level optimum. Experimental results on Chinese->English and WMT'14 English->German translation tasks demonstrate that our approach achieves significant improvements on multiple datasets.

