Model Introduction
We introduce an updated version of LongCat-Flash-Thinking, named LongCat-Flash-Thinking-2601, a powerful and efficient Large Reasoning Model (LRM) with 560 billion total parameters, built upon an innovative Mixture-of-Experts (MoE) architecture.
To push reasoning capability beyond its current boundary, we built a Heavy Thinking Mode on top of LongCat-Flash-Thinking-2601. Specifically, we decompose challenging problem solving into two complementary stages, parallel thinking and summarization, thereby jointly scaling both reasoning width and depth. For width scaling, Heavy Thinking Mode independently generates multiple trajectories in parallel, enabling broad exploration of reasoning paths; a relatively high inference temperature is applied to encourage diversity among trajectories. For depth scaling, the trajectories refined during the summarization stage can be fed back into the summary model recursively, forming an iterative loop that supports progressively deeper reasoning. An additional reinforcement learning stage is specifically tailored to train the summarization ability, further unlocking the potential of this mode.
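The two-stage loop above can be sketched as a simple control flow. This is a minimal illustration, not the released LongCat API: `generate_trajectory` and `summarize` are hypothetical stand-ins for calls to the reasoning model and the summary model, and the parameter names (`width`, `depth`, `temperature`) are our own labels for the scaling knobs described in the text.

```python
import random

def generate_trajectory(problem: str, temperature: float, seed: int) -> str:
    # Hypothetical stand-in: one independently sampled reasoning trajectory.
    # A high temperature would encourage diversity across samples.
    rng = random.Random(seed)
    return f"trajectory(seed={seed}, temp={temperature}) for: {problem}"

def summarize(problem: str, trajectories: list[str]) -> str:
    # Hypothetical stand-in for the summary model, which condenses the
    # parallel trajectories into one refined trajectory.
    return f"summary of {len(trajectories)} trajectories for: {problem}"

def heavy_thinking(problem: str, width: int = 8, depth: int = 2,
                   temperature: float = 1.0) -> str:
    """Width scaling: sample `width` trajectories (independently; shown
    sequentially here for simplicity). Depth scaling: feed the refined
    summary back into the loop `depth` times."""
    refined = problem
    for _ in range(depth):
        trajectories = [generate_trajectory(refined, temperature, seed=s)
                        for s in range(width)]
        refined = summarize(problem, trajectories)
    return refined
```

In a real deployment the width dimension would be served as concurrent generation requests, and the recursion depth would be capped by a compute budget.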
We now release our LongCat-HeavyModel-Summary model at link, further trained from LongCat-Flash-Thinking-2601.
We've launched Heavy Thinking Mode on the Longcat AI platform. Feel free to try it out: https://longcat.chat/.