# Qwen3-Next-80B-A3B-Instruct-REAM
This model is a compressed version of Qwen/Qwen3-Next-80B-A3B-Instruct, obtained by reducing the number of experts in each MoE layer from 512 to 384 using the REAM method described at https://bknyaz.github.io/blog/2026/moe/. The compressed model has 60B parameters (120 GB in bf16) instead of the original 80B (160 GB), cutting storage and GPU memory requirements by roughly 25%. At the same time, it retains >=95% of the original model's average performance across a variety of benchmarks (see the Results section below). Additional efficiency optimizations (e.g., quantization) can be applied on top, just as for the original model.
See additional details at Qwen3-30B-A3B-Instruct-2507-REAM.
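The snippet below is a minimal usage sketch, assuming the REAM-compressed checkpoint loads through the standard `transformers` API like the original Qwen3-Next model (a recent `transformers` release with Qwen3-Next support is required). The `num_experts` config attribute used for the sanity check is an assumption based on other Qwen MoE configs.

```python
# Minimal usage sketch. Assumptions: the compressed checkpoint loads like the original
# Qwen3-Next model, and the per-layer expert count is exposed as `num_experts` in the
# config (as in other Qwen MoE configs).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "SamsungSAILMontreal/Qwen3-Next-80B-A3B-Instruct-REAM"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",   # keep the checkpoint's native precision
    device_map="auto",    # shard across available GPUs
)

# Sanity check: the expert count should reflect the 512 -> 384 reduction.
print("experts per MoE layer:", getattr(model.config, "num_experts", "n/a"))

messages = [{"role": "user", "content": "Give me a short introduction to mixture-of-experts models."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```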
## Results
| Model | IFEval | AIME25 | GSM8K | GPQA-D | HumanEval | LiveCodeBench | Avg |
|---|---|---|---|---|---|---|---|
| Qwen3-Next-80B-A3B-Instruct | 93.4 | 80.0 | 78.6 | 47.0 | 95.1 | 43.2 | 72.9 |
| Qwen3-Next-80B-A3B-Instruct-REAM | 91.5 | 73.3 | 78.4 | 36.9 | 92.7 | 42.9 | 69.3 |
## License
Please refer to the license of the original model, Qwen/Qwen3-Next-80B-A3B-Instruct.