PokeGym: A Visually-Driven Long-Horizon Benchmark for Vision-Language Models
Abstract
Vision-Language Models remain limited in 3D embodied environments due to weak physical reasoning: the PokeGym benchmark reveals that deadlock recovery, rather than high-level planning, is the primary bottleneck across its visual grounding, semantic reasoning, and exploration tasks.
While Vision-Language Models (VLMs) have achieved remarkable progress in static visual understanding, their deployment in complex 3D embodied environments remains severely limited. Existing benchmarks suffer from four critical deficiencies: (1) passive perception tasks circumvent interactive dynamics; (2) simplified 2D environments fail to assess depth perception; (3) privileged state leakage bypasses genuine visual processing; and (4) human evaluation is prohibitively expensive and unscalable. We introduce PokeGym, a visually-driven long-horizon benchmark instantiated within Pokémon Legends: Z-A, a visually complex 3D open-world role-playing game. PokeGym enforces strict code-level isolation: agents operate solely on raw RGB observations while an independent evaluator verifies success via memory scanning, ensuring pure vision-based decision-making and automated, scalable assessment. The benchmark comprises 30 tasks (30-220 steps) spanning navigation, interaction, and mixed scenarios, with three instruction granularities (Visual-Guided, Step-Guided, Goal-Only) to systematically deconstruct visual grounding, semantic reasoning, and autonomous exploration capabilities. Our evaluation reveals a key limitation of current VLMs: physical deadlock recovery, rather than high-level planning, constitutes the primary bottleneck, with deadlocks showing a strong negative correlation with task success. Furthermore, we uncover a metacognitive divergence: weaker models predominantly suffer from Unaware Deadlocks (oblivious to entrapment), whereas advanced models exhibit Aware Deadlocks (recognizing entrapment yet failing to recover). These findings highlight the need to integrate explicit spatial intuition into VLM architectures. The code and benchmark will be available on GitHub.
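The code-level isolation the abstract describes lends itself to a short illustration. The sketch below is a hypothetical rendering of that design, not the paper's actual code: every name in it (`VisionOnlyAgent`, `MemoryEvaluator`, `run_episode`, the `GOAL_FLAG_ADDR` address, and the `env`/`emulator` interfaces) is an assumption made for illustration. The point it demonstrates is the separation of the two code paths: the agent's policy sees only raw RGB frames, while success verification reads emulator memory the agent never touches.

```python
import numpy as np

# Hypothetical sketch of PokeGym-style agent/evaluator isolation.
# None of these names come from the paper; they only illustrate the
# protocol: the agent consumes raw RGB frames and nothing else, while
# an independent evaluator verifies success by scanning game memory.

class VisionOnlyAgent:
    """Agent interface: RGB frame in, action out -- no privileged state."""

    def act(self, frame: np.ndarray) -> str:
        # A real VLM agent would prompt the model with the frame plus the
        # task instruction; here we just return a placeholder action.
        assert frame.ndim == 3 and frame.shape[2] == 3  # H x W x RGB
        return "move_forward"

class MemoryEvaluator:
    """Evaluator interface: reads emulator memory, never shown to the agent."""

    GOAL_FLAG_ADDR = 0x00DEAD  # hypothetical address of a quest-completion flag

    def __init__(self, emulator):
        self.emulator = emulator

    def task_complete(self) -> bool:
        # Verification happens entirely on this side of the isolation
        # boundary; the agent cannot observe or influence this read.
        return self.emulator.read_u8(self.GOAL_FLAG_ADDR) == 1

def run_episode(env, agent, evaluator, max_steps=220):
    """Run one task episode; returns (success, steps_taken)."""
    frame = env.reset()                 # environment returns raw RGB only
    for step in range(max_steps):
        action = agent.act(frame)       # agent path: pixels -> action
        frame = env.step(action)        # no reward or state dict leaks back
        if evaluator.task_complete():   # evaluator path: memory scan
            return True, step + 1
    return False, max_steps
```

Keeping verification on a separate memory-scanning path is what would make such an assessment both leak-free (no privileged state reaches the agent) and automated (no human judge is needed per episode), matching the two design goals the abstract claims.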
Community
Librarian Bot found the following similar papers, recommended by the Semantic Scholar API:
- OmniEarth: A Benchmark for Evaluating Vision-Language Models in Geospatial Tasks (2026)
- AtomVLA: Scalable Post-Training for Robotic Manipulation via Predictive Latent World Models (2026)
- How Far Are Vision-Language Models from Constructing the Real World? A Benchmark for Physical Generative Reasoning (2026)
- GameVerse: Can Vision-Language Models Learn from Video-based Reflection? (2026)
- ESPIRE: A Diagnostic Benchmark for Embodied Spatial Reasoning of Vision-Language Models (2026)
- ForeAct: Steering Your VLA with Efficient Visual Foresight Planning (2026)
- VLA-IAP: Training-Free Visual Token Pruning via Interaction Alignment for Vision-Language-Action Models (2026)