
FocusUI: Efficient UI Grounding via Position-Preserving Visual Token Selection

Published on Jan 7 · Submitted by Yang on Jan 15 · Show Lab (showlab)

Abstract

AI-generated summary

FocusUI is an efficient UI grounding framework that reduces computational overhead by selecting relevant visual tokens while preserving positional continuity through a novel PosPad strategy.

Vision-Language Models (VLMs) have shown remarkable performance in User Interface (UI) grounding tasks, driven by their ability to process increasingly high-resolution screenshots. However, screenshots are tokenized into thousands of visual tokens (e.g., about 4700 for 2K resolution), incurring significant computational overhead and diluting attention. In contrast, humans typically focus on regions of interest when interacting with UIs. In this work, we pioneer the task of efficient UI grounding. Guided by practical analysis of the task's characteristics and challenges, we propose FocusUI, an efficient UI grounding framework that selects patches most relevant to the instruction while preserving positional continuity for precise grounding. FocusUI addresses two key challenges: (1) Eliminating redundant tokens in visual encoding. We construct patch-level supervision by fusing an instruction-conditioned score with a rule-based UI-graph score that down-weights large homogeneous regions to select distinct and instruction-relevant visual tokens. (2) Preserving positional continuity during visual token selection. We find that general visual token pruning methods suffer from severe accuracy degradation on UI grounding tasks due to broken positional information. We introduce a novel PosPad strategy, which compresses each contiguous sequence of dropped visual tokens into a single special marker placed at the sequence's last index to preserve positional continuity. Comprehensive experiments on four grounding benchmarks demonstrate that FocusUI surpasses GUI-specific baselines. On the ScreenSpot-Pro benchmark, FocusUI-7B achieves a performance improvement of 3.7% over GUI-Actor-7B. Even with only 30% visual token retention, FocusUI-7B's accuracy drops by only 3.2% while achieving up to 1.44x faster inference and 17% lower peak GPU memory.
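To make the PosPad idea concrete, here is a minimal Python sketch of how a contiguous run of dropped visual tokens could be collapsed into a single marker placed at the run's last index, while kept tokens retain their original positions. This is not the authors' implementation; the names `pospad_select`, `keep_mask`, and `PAD_MARKER` are hypothetical and the sketch only illustrates the indexing behavior described in the abstract.

```python
# Sketch of the PosPad selection step (assumed interface, not the paper's code):
# kept tokens keep their original positional index; each contiguous run of
# dropped tokens becomes one PAD_MARKER at the run's last index.
from typing import List, Tuple

PAD_MARKER = -1  # hypothetical id for the special "dropped-run" marker token


def pospad_select(token_ids: List[int], keep_mask: List[bool]) -> List[Tuple[int, int]]:
    """Return (token_id, position) pairs after instruction-guided selection."""
    out: List[Tuple[int, int]] = []
    run_end = None  # last index of the current run of dropped tokens
    for pos, (tok, keep) in enumerate(zip(token_ids, keep_mask)):
        if keep:
            if run_end is not None:       # close the pending dropped run
                out.append((PAD_MARKER, run_end))
                run_end = None
            out.append((tok, pos))        # kept token preserves its position
        else:
            run_end = pos                 # extend the run; remember its last index
    if run_end is not None:               # trailing dropped run
        out.append((PAD_MARKER, run_end))
    return out


# Example: tokens at positions 2-4 are dropped and collapse into one marker at position 4.
print(pospad_select([10, 11, 12, 13, 14, 15], [True, True, False, False, False, True]))
# [(10, 0), (11, 1), (-1, 4), (15, 5)]
```

The point of keeping the run's last index (rather than renumbering the remaining tokens) is that downstream position embeddings still see the original coordinate grid, which is what the paper argues generic token-pruning methods break.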

Community

Paper submitter

TL;DR:
High-res UI screenshots (2K/4K) force VLMs to process thousands of visual tokens. Inspired by human vision, which attends only to instruction-relevant image regions, FocusUI teaches VLMs where to look in UI screenshots.


📄 Paper: arXiv:2601.03928
🌐 Project Page: showlab.github.io/FocusUI
💻 Code: github.com/showlab/FocusUI
