N-Gram Nearest Neighbor Machine Translation
Abstract
Nearest neighbor machine translation achieves non-parametric domain adaptation by augmenting Autoregressive Translation (AT) models with <inline-formula><tex-math notation="LaTeX">$k$</tex-math></inline-formula>-nearest-neighbor retrieval. Retrieval is conducted by comparing the similarity between the token-level context representations of target tokens in the query and in the datastore. However, token-level representations may introduce noise when translating ambiguous words, or may yield inaccurate retrieval results when the representations generated by the model carry indistinguishable context information, as in Non-Autoregressive Translation (NAT) models. In this paper, we propose a novel <inline-formula><tex-math notation="LaTeX">$n$</tex-math></inline-formula>-gram nearest neighbor retrieval method that is model-agnostic and applicable to both AT and NAT models. The method improves AT models by mitigating the ambiguous-word problem, and it achieves strong domain adaptation performance on NAT models while effectively alleviating the multi-modality problem, a well-known issue in NAT models. Specifically, we concatenate the hidden representations of adjacent <inline-formula><tex-math notation="LaTeX">$n$</tex-math></inline-formula>-gram positions as the key, and use the tuple of corresponding target tokens as the value. At inference time, we propose tailored decoding algorithms for AT and NAT models respectively. We demonstrate that the proposed method consistently outperforms the token-level method on both AT and NAT models, on general as well as domain adaptation translation tasks. On domain adaptation, the proposed method brings <inline-formula><tex-math notation="LaTeX">$1.23 / 0.91$</tex-math></inline-formula> and <inline-formula><tex-math notation="LaTeX">$2.25 / 1.75$</tex-math></inline-formula> improvements in the average BLEU / ChrF score on AT and NAT models respectively.
Regarding the multi-modality problem, the repetition ratio of NAT models is reduced by 2.64%.
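The key/value construction described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not the authors' implementation: the function names, the brute-force L2 search, and the small dense arrays are assumptions for clarity (a real datastore would hold decoder hidden states and use an approximate-nearest-neighbor index such as FAISS).

```python
import numpy as np

def build_ngram_datastore(hidden_states, target_tokens, n=2):
    """Sketch of n-gram datastore construction: concatenate the hidden
    representations of n adjacent positions as the key; the value is the
    tuple of the corresponding target tokens."""
    keys, values = [], []
    for i in range(len(target_tokens) - n + 1):
        key = np.concatenate(hidden_states[i:i + n])  # shape (n * d,)
        values.append(tuple(target_tokens[i:i + n]))
        keys.append(key)
    return np.stack(keys), values

def knn_retrieve(query, keys, values, k=2):
    """Brute-force k-nearest-neighbor lookup over the n-gram keys by
    L2 distance; returns the retrieved token n-grams and distances."""
    dists = np.linalg.norm(keys - query, axis=1)
    idx = np.argsort(dists)[:k]
    return [values[i] for i in idx], dists[idx]
```

During decoding, the query would be formed the same way from the model's current hidden states, and the retrieved token tuples would bias the next-token (AT) or parallel (NAT) predictions.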
Journal: IEEE Transactions on Audio, Speech and Language Processing