MiLMMT-46 Collection: Gemma3-based Multilingual Machine Translation Models (7 items)
MiLMMT-46-12B-Pretrain is a language model obtained by continually pretraining Gemma3-12B on a mixture of 143 billion tokens of monolingual and parallel data covering 46 languages. Please find more details in our paper: Scaling Model and Data for Multilingual Machine Translation with Open Large Language Models.
Note that MiLMMT-46-12B-Pretrain is NOT a translation model.
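Since the checkpoint is a continually pretrained base model rather than an instruction-tuned translator, it can be loaded and used like any other causal language model. Below is a minimal sketch using the Hugging Face transformers library; the repo id is an assumption and should be replaced with the actual Hub path of the released checkpoint.

```python
# Minimal sketch: loading MiLMMT-46-12B-Pretrain as a plain causal LM.
# The repo id below is an assumption; replace it with the actual Hub path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "MiLMMT/MiLMMT-46-12B-Pretrain"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# This is a pretrained base LM, not a translation model: it simply continues
# the given text rather than following translation instructions.
prompt = "Die Hauptstadt von Frankreich ist"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```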
We collect monolingual data from DCAD-2000. For parallel data, we collect all Chinese-centric and English-centric parallel datasets from the OPUS collection up to August 2025 and apply a series of filtering steps, including language identification, semantic deduplication, and quality filtering; a sketch of one such step is shown below.
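The exact filtering pipeline is described in the paper rather than here; the snippet below is only an illustrative sketch of the language-identification step for parallel pairs, assuming fastText's public lid.176 model, and is not the authors' actual code.

```python
# Illustrative sketch (assumption): language-ID filtering of parallel pairs
# using fastText's lid.176 model. Not the authors' actual pipeline.
import fasttext

lid = fasttext.load_model("lid.176.bin")  # download separately from fastText

def predicted_lang(text: str) -> str:
    # fastText labels look like "__label__en"; strip the prefix.
    labels, _ = lid.predict(text.replace("\n", " "))
    return labels[0].removeprefix("__label__")

def keep_pair(src: str, tgt: str, src_lang: str, tgt_lang: str) -> bool:
    # Keep a pair only if both sides are identified as the expected language.
    return predicted_lang(src) == src_lang and predicted_lang(tgt) == tgt_lang

print(keep_pair("Hello world.", "Bonjour le monde.", "en", "fr"))
```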
@misc{shang2026scalingmodeldatamultilingual,
title={Scaling Model and Data for Multilingual Machine Translation with Open Large Language Models},
author={Yuzhe Shang and Pengzhi Gao and Wei Liu and Jian Luan and Jinsong Su},
year={2026},
eprint={2602.11961},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2602.11961},
}