- Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach
  Paper • 2502.05171 • Published • 152
- Agency Is Frame-Dependent
  Paper • 2502.04403 • Published • 23
- Distillation Scaling Laws
  Paper • 2502.08606 • Published • 47
- LLM Pretraining with Continuous Concepts
  Paper • 2502.08524 • Published • 30
Collections including paper arxiv:2502.08606

- Learned Compression for Compressed Learning
  Paper • 2412.09405 • Published • 13
- Distillation Scaling Laws
  Paper • 2502.08606 • Published • 47
- TTRL: Test-Time Reinforcement Learning
  Paper • 2504.16084 • Published • 120
- Limitations of Normalization in Attention Mechanism
  Paper • 2508.17821 • Published • 7

- PDFTriage: Question Answering over Long, Structured Documents
  Paper • 2309.08872 • Published • 55
- Adapting Large Language Models via Reading Comprehension
  Paper • 2309.09530 • Published • 82
- Table-GPT: Table-tuned GPT for Diverse Table Tasks
  Paper • 2310.09263 • Published • 40
- Context-Aware Meta-Learning
  Paper • 2310.10971 • Published • 17

- Compression Represents Intelligence Linearly
  Paper • 2404.09937 • Published • 28
- MiniCPM: Unveiling the Potential of Small Language Models with Scalable Training Strategies
  Paper • 2404.06395 • Published • 24
- Long-context LLMs Struggle with Long In-context Learning
  Paper • 2404.02060 • Published • 37
- Are large language models superhuman chemists?
  Paper • 2404.01475 • Published • 19

- Self-Rewarding Language Models
  Paper • 2401.10020 • Published • 152
- Orion-14B: Open-source Multilingual Large Language Models
  Paper • 2401.12246 • Published • 14
- MambaByte: Token-free Selective State Space Model
  Paper • 2401.13660 • Published • 60
- MM-LLMs: Recent Advances in MultiModal Large Language Models
  Paper • 2401.13601 • Published • 48

- MotionBench: Benchmarking and Improving Fine-grained Video Motion Understanding for Vision Language Models
  Paper • 2501.02955 • Published • 44
- 2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining
  Paper • 2501.00958 • Published • 109
- MMVU: Measuring Expert-Level Multi-Discipline Video Understanding
  Paper • 2501.12380 • Published • 84
- VideoWorld: Exploring Knowledge Learning from Unlabeled Videos
  Paper • 2501.09781 • Published • 27

- LLM Pruning and Distillation in Practice: The Minitron Approach
  Paper • 2408.11796 • Published • 58
- TableBench: A Comprehensive and Complex Benchmark for Table Question Answering
  Paper • 2408.09174 • Published • 52
- To Code, or Not To Code? Exploring Impact of Code in Pre-training
  Paper • 2408.10914 • Published • 45
- Open-FinLLMs: Open Multimodal Large Language Models for Financial Applications
  Paper • 2408.11878 • Published • 64

- Training Software Engineering Agents and Verifiers with SWE-Gym
  Paper • 2412.21139 • Published • 25
- Evaluating Language Models as Synthetic Data Generators
  Paper • 2412.03679 • Published • 47
- Self-Rewarding Language Models
  Paper • 2401.10020 • Published • 152
- Self-Discover: Large Language Models Self-Compose Reasoning Structures
  Paper • 2402.03620 • Published • 117