TY - JOUR
AU - Zhang, Min
AB - Large language models (LLMs) have achieved remarkable success in multilingual translation tasks. However, the inherent translation mechanisms of LLMs remain poorly understood, largely due to their sophisticated architectures and vast parameter scales. To address this issue, this study explores the translation mechanism of LLMs from the perspective of computational components (e.g., attention heads and MLPs). Path patching is used to explore causal relationships between components, detecting those crucial for translation tasks and subsequently analyzing their behavioral patterns in human-interpretable terms. Comprehensive analysis reveals that translation is predominantly facilitated by a sparse subset of specialized attention heads (less than 5%), which extract source language, indicator, and positional features. MLPs subsequently integrate and process these features by transitioning toward English-centric latent representations. Notably, building on these findings, targeted fine-tuning of only 64 heads achieves translation improvements comparable to full-parameter tuning while preserving general capabilities.
TI - Exploring Translation Mechanism of Large Language Models
JF - Computing Research Repository
DO - 10.48550/arxiv.2502.11806
DA - 2025-02-26
UR - https://www.deepdyve.com/lp/arxiv-cornell-university/exploring-translation-mechanism-of-large-language-models-2k3Cknqa0O
VL - 2025
IS - 2502
DP - DeepDyve
ER -