arxiv:2503.10351

New Trends for Modern Machine Translation with Large Reasoning Models

Published on Mar 13 · Submitted by ChenyangLyu on Mar 14

Abstract

Recent advances in Large Reasoning Models (LRMs), particularly those leveraging Chain-of-Thought (CoT) reasoning, have opened new possibilities for Machine Translation (MT). This position paper argues that LRMs have substantially transformed both traditional neural MT and LLM-based MT paradigms by reframing translation as a dynamic reasoning task that requires contextual, cultural, and linguistic understanding and reasoning. We identify three foundational shifts: 1) contextual coherence, where LRMs resolve ambiguities and preserve discourse structure through explicit reasoning over cross-sentence and complex context, or even in the absence of context; 2) cultural intentionality, enabling models to adapt outputs by inferring speaker intent, audience expectations, and socio-linguistic norms; 3) self-reflection, where LRMs correct potential translation errors during inference, especially in extremely noisy cases, showing better robustness than simple X->Y mapping. We explore various translation scenarios, including stylized translation, document-level translation, and multimodal translation, showcasing empirical examples that demonstrate the superiority of LRMs in translation. We also identify several interesting phenomena of LRMs for MT, including auto-pivot translation, as well as critical challenges such as over-localisation in translation and inference efficiency. In conclusion, we argue that LRMs redefine translation systems not merely as text converters but as multilingual cognitive agents capable of reasoning about meaning beyond the text. This paradigm shift invites us to consider translation problems in a much broader context than traditional scenarios, and to ask what we can achieve on top of LRMs.
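The self-reflection shift described above corresponds to an inference-time draft-then-revise loop. Below is a minimal sketch of that idea, not code from the paper: `llm` is a hypothetical callable standing in for any CoT-capable LRM endpoint, and the prompt wording and `TRANSLATION:` marker are illustrative assumptions.

```python
# A minimal sketch (not from the paper) of the draft -> self-reflect -> revise
# loop the abstract describes. `llm` is a hypothetical callable that sends a
# prompt to a chain-of-thought-capable LRM and returns its text response.

def translate_with_reflection(llm, source_text, src_lang="English", tgt_lang="German"):
    # Step 1: ask the model to reason explicitly (CoT) about ambiguity,
    # discourse context, and cultural intent before committing to a draft.
    draft_prompt = (
        f"Translate the following {src_lang} text into {tgt_lang}. "
        f"First reason step by step about ambiguities, discourse context, and "
        f"cultural intent, then give the translation after 'TRANSLATION:'.\n\n"
        f"Text: {source_text}"
    )
    draft = llm(draft_prompt)

    # Step 2: self-reflection pass - the model critiques its own draft, the
    # mechanism the paper credits for robustness on noisy inputs.
    reflect_prompt = (
        f"Source ({src_lang}): {source_text}\n"
        f"Draft translation ({tgt_lang}): {draft}\n"
        f"Check the draft for mistranslations, lost context, and unnatural "
        f"phrasing. If it is fine, repeat it; otherwise output a corrected "
        f"translation after 'TRANSLATION:'."
    )
    revised = llm(reflect_prompt)

    # Keep only the final translation, discarding the visible reasoning.
    return revised.rsplit("TRANSLATION:", 1)[-1].strip()

# Usage (any LRM wrapper works):
#   translate_with_reflection(my_model, "He saw her duck.")
```

The two-pass structure is one simple instantiation; the paper's broader claim is that such reasoning steps happen inside the LRM's own chain of thought rather than in external orchestration.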

Community

Summary from the paper author and submitter:

Recent advances in Large Reasoning Models (LRMs) with Chain-of-Thought (CoT) capabilities are transforming machine translation. This paper argues that LRMs reframe translation as a dynamic reasoning task requiring contextual, cultural, and linguistic understanding. Three key shifts are identified: contextual coherence, cultural intentionality, and self-reflection. We explore various translation scenarios, showcase LRMs' superiority, and discuss phenomena like auto-pivot translation. Challenges such as over-localization and inference efficiency are also addressed. We argue that LRMs redefine translation systems as multilingual cognitive agents capable of reasoning beyond text, opening new possibilities in a broader context.
