lastdefiance20 committed
Commit e430be0 · verified · 1 Parent(s): 3d438ec

Update README.md

Files changed (1): README.md +92 -3
---
license: cc-by-nc-4.0
tags:
- vision-action
- embodied-ai
- game-dataset
- imitation-learning
- pretraining
---
# 🕹️ D2E: Scaling Vision-Action Pretraining on Desktop Data for Transfer to Embodied AI

> This repository hosts the **Vision-Action subset** of the D2E dataset, preprocessed at 480p for training **G-IDM**, **Vision-Action Pretraining**, or other game agents.
> If you need the original high-resolution dataset (HD/QHD) for **world-model** or **video-generation** training, please visit [open-world-agents/D2E-Original](https://huggingface.co/open-world-agents/D2E-Original).
## Dataset Description

This dataset is a curated subset of the **desktop gameplay data** introduced in the paper [**“D2E: Scaling Vision-Action Pretraining on Desktop Data for Transfer to Embodied AI”**](https://arxiv.org/abs/2510.05684).

The dataset enables **vision-action pretraining** on large-scale human gameplay data, facilitating **transfer to real-world embodied AI tasks** such as robotic manipulation and navigation.
## Motivation & Use Cases

- 🎮 **Train your own game agent** using high-quality vision-action trajectories.
- 🤖 **Pretrain vision-action or vision-language-action models** on diverse human gameplay to learn transferable sensorimotor primitives.
- 🌍 **Use as world-model data** for predicting future states or generating coherent action-conditioned videos (the original HD dataset is recommended for this).
- 🧠 **Generalist learning** — unify multiple game domains to train models capable of cross-environment reasoning.
## Dataset Structure

- Each **game** entry includes:
  - 🖥️ Video — desktop screen capture stored as (unknown).mkv
  - 🧩 Action Metadata — synchronized desktop interactions stored as (unknown).mcap
- **Format:** Each file is an OWAMcap sequence (a variant of MCAP) recorded using the **OWA Toolkit**, synchronizing:
  - Screen frames (up to 60 Hz)
  - Keyboard & mouse events
  - Window state changes
- **Compatibility:** Easily convertible to RLDS-style datasets for training or evaluation.

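Because the screen, keyboard, and mouse streams are timestamped independently, a common preprocessing step when building vision-action pairs is to bucket each input event onto its nearest screen frame. Below is a minimal sketch of that alignment; it assumes plain integer (e.g. nanosecond) timestamps and is illustrative only, not the actual OWAMcap schema or OWA Toolkit API:

```python
import bisect

def align_events_to_frames(frame_ts, event_ts):
    """Map each event timestamp to the index of the nearest frame timestamp.

    frame_ts: sorted list of frame timestamps (ns)
    event_ts: list of event timestamps (ns)
    Returns one frame index per event.
    """
    indices = []
    for t in event_ts:
        i = bisect.bisect_left(frame_ts, t)
        if i == 0:
            indices.append(0)
        elif i == len(frame_ts):
            indices.append(len(frame_ts) - 1)
        else:
            # pick whichever neighbouring frame is closer in time
            before, after = frame_ts[i - 1], frame_ts[i]
            indices.append(i - 1 if t - before <= after - t else i)
    return indices

# 60 Hz frames are ~16.67 ms apart
frame_ts = [int(n * 1e9 / 60) for n in range(5)]
event_ts = [1_000_000, 20_000_000, 70_000_000]   # three input events
print(align_events_to_frames(frame_ts, event_ts))  # → [0, 1, 4]
```

Events falling after the last frame are clamped to it, so trailing inputs are never dropped.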
## Dataset Details

- **Recording Tool:** [ocap](https://github.com/open-world-agents/ocap) — captures screen, keyboard, and mouse events with precise timestamps, stored efficiently in OWAMcap.
- **Game Genres:** Includes FPS (Apex Legends), open-world (Cyberpunk 2077, GTA V), simulation (Euro Truck Simulator 2), strategy (Stardew Valley, Eternal Return), sandbox (Minecraft), and more.
- **Data Collection:**
  - Human demonstrations collected across **31 games** (~335 h total).
  - Public release covers **29 games** (~**267.81 h**) after privacy filtering.
- **Frame Resolution:** 480p (originals are HD/QHD in D2E-Original).

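As a rough sense of scale: since capture runs at up to 60 Hz, the ~267.81 h public release corresponds to at most ~57.8 M screen frames (actual counts are lower wherever capture ran below 60 Hz). A quick back-of-envelope check:

```python
# Upper-bound frame count for the public release, assuming a constant 60 Hz.
hours = 267.81   # released hours after privacy filtering
fps = 60         # maximum capture rate
max_frames = round(hours * 3600 * fps)
print(f"{max_frames:,} frames")  # → 57,846,960 frames
```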
## Dataset Summary

| Game Title | Files | Total Duration (hours / seconds) | Average Duration (seconds / minutes) |
|-------------|--------|----------------------------------|--------------------------------------|
| Apex_Legends | 36 | **25.58 h (92093.44 s)** | 2558.15 s (42.64 min) |
| Euro_Truck_Simulator_2 | 14 | **19.62 h (70641.61 s)** | 5045.83 s (84.10 min) |
| Eternal_Return | 31 | **17.13 h (61677.25 s)** | 1989.59 s (33.16 min) |
| Cyberpunk_2077 | 7 | **14.22 h (51183.25 s)** | 7311.89 s (121.86 min) |
| MapleStory_Worlds_Southperry | 8 | **14.09 h (50720.40 s)** | 6340.05 s (105.67 min) |
| Stardew_Valley | 10 | **14.55 h (52381.45 s)** | 5238.14 s (87.30 min) |
| Rainbow_Six | 11 | **13.74 h (49472.80 s)** | 4497.53 s (74.96 min) |
| Grand_Theft_Auto_V | 11 | **11.81 h (42518.18 s)** | 3865.29 s (64.42 min) |
| Slime_Rancher | 9 | **10.68 h (38463.32 s)** | 4273.70 s (71.23 min) |
| Dinkum | 9 | **10.44 h (37600.32 s)** | 4177.81 s (69.63 min) |
| Medieval_Dynasty | 3 | **10.32 h (37151.27 s)** | 12383.76 s (206.40 min) |
| Counter-Strike_2 | 10 | **9.89 h (35614.96 s)** | 3561.50 s (59.36 min) |
| Satisfactory | 4 | **9.79 h (35237.30 s)** | 8809.32 s (146.82 min) |
| Grounded | 4 | **9.70 h (34912.31 s)** | 8728.08 s (145.47 min) |
| Ready_Or_Not | 11 | **9.59 h (34521.40 s)** | 3138.31 s (52.31 min) |
| Barony | 10 | **9.28 h (33406.96 s)** | 3340.70 s (55.68 min) |
| Core_Keeper | 7 | **9.02 h (32460.05 s)** | 4637.15 s (77.29 min) |
| Minecraft_1.21.8 | 8 | **8.64 h (31093.47 s)** | 3886.68 s (64.78 min) |
| Monster_Hunter_Wilds | 5 | **8.32 h (29951.88 s)** | 5990.38 s (99.84 min) |
| Raft | 5 | **9.95 h (35833.27 s)** | 7166.65 s (119.44 min) |
| Brotato | 13 | **5.99 h (21574.78 s)** | 1659.60 s (27.66 min) |
| PUBG | 7 | **4.88 h (17584.92 s)** | 2512.13 s (41.87 min) |
| Vampire_Survivors | 2 | **2.81 h (10132.96 s)** | 5066.48 s (84.44 min) |
| Battlefield_6_Open_Beta | 7 | **2.21 h (7965.42 s)** | 1137.92 s (18.97 min) |
| Skul | 1 | **1.97 h (7078.00 s)** | 7078.00 s (117.97 min) |
| PEAK | 2 | **1.75 h (6288.88 s)** | 3144.44 s (52.41 min) |
| OguForest | 1 | **0.84 h (3040.94 s)** | 3040.94 s (50.68 min) |
| Super_Bunny_Man | 2 | **0.72 h (2604.00 s)** | 1302.00 s (21.70 min) |
| VALORANT | 1 | **0.25 h (911.94 s)** | 911.94 s (15.20 min) |

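The per-game totals above can be summed as a sanity check against the stated ~267.81 h public release. A small sketch (values copied from the table):

```python
# Per-game totals in seconds, copied from the Dataset Summary table.
duration_s = {
    "Apex_Legends": 92093.44, "Euro_Truck_Simulator_2": 70641.61,
    "Eternal_Return": 61677.25, "Cyberpunk_2077": 51183.25,
    "MapleStory_Worlds_Southperry": 50720.40, "Stardew_Valley": 52381.45,
    "Rainbow_Six": 49472.80, "Grand_Theft_Auto_V": 42518.18,
    "Slime_Rancher": 38463.32, "Dinkum": 37600.32,
    "Medieval_Dynasty": 37151.27, "Counter-Strike_2": 35614.96,
    "Satisfactory": 35237.30, "Grounded": 34912.31,
    "Ready_Or_Not": 34521.40, "Barony": 33406.96,
    "Core_Keeper": 32460.05, "Minecraft_1.21.8": 31093.47,
    "Monster_Hunter_Wilds": 29951.88, "Raft": 35833.27,
    "Brotato": 21574.78, "PUBG": 17584.92,
    "Vampire_Survivors": 10132.96, "Battlefield_6_Open_Beta": 7965.42,
    "Skul": 7078.00, "PEAK": 6288.88, "OguForest": 3040.94,
    "Super_Bunny_Man": 2604.00, "VALORANT": 911.94,
}
total_h = sum(duration_s.values()) / 3600
print(f"{len(duration_s)} games, {total_h:.2f} h total")  # → 29 games, 267.81 h total
```

The sum matches the ~267.81 h figure quoted in Dataset Details.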
## Usage Example

```python
from datasets import load_dataset

dataset = load_dataset("open-world-agents/D2E", split="train")
```

## Citation

If you find this work useful, please cite our paper:

```bibtex
@article{choi2025d2e,
  title={D2E: Scaling Vision-Action Pretraining on Desktop Data for Transfer to Embodied AI},
  author={Choi, Suwhan and Jung, Jaeyoon and Seong, Haebin and Kim, Minchan and Kim, Minyeong and Cho, Yongjun and Kim, Yoonshik and Park, Yubeen and Yu, Youngjae and Lee, Yunsung},
  journal={arXiv preprint arXiv:2510.05684},
  year={2025}
}
```