# WildRoad

**Beyond Endpoints: Path-Centric Reasoning for Vectorized Off-Road Network Extraction**

Official repository: xiaofei-guan/MaGRoad
WildRoad is a global off-road road network dataset, constructed efficiently with a dedicated interactive annotation tool tailored to road-network labeling. It addresses the lack of large-scale vectorized datasets for off-road environments and provides a benchmark for challenging terrains.
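The graph annotations ship as Python pickle files. As a minimal sketch for inspecting one (it assumes, as is common for vectorized road-network datasets but is not confirmed by this card, that each pickle stores an adjacency dict mapping a node coordinate `(x, y)` to a list of neighbour coordinates):

```python
import pickle

def load_graph(path):
    """Load one road-graph pickle file (assumed adjacency-dict format)."""
    with open(path, "rb") as f:
        return pickle.load(f)

def graph_stats(graph):
    """Return (num_nodes, num_directed_edges) for an adjacency-dict graph."""
    num_nodes = len(graph)
    num_edges = sum(len(neighbours) for neighbours in graph.values())
    return num_nodes, num_edges
```

If the pickles turn out to use a different structure, only `graph_stats` needs adapting; `load_graph` is format-agnostic.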
## WildRoad Dataset Processing Pipeline
**Note on dataset size:** The fully processed dataset is quite large. If downloading the processed patches is inconvenient, you can download the raw source data instead and use the scripts provided here to reproduce the dataset exactly.
This repository contains scripts to process large-scale remote sensing images and their corresponding road network graphs into smaller, trainable patches.
## Overview
The processing pipeline crops each large map into patches using two strategies:
- Strategy A (Non-overlapping): Crops the image using a regular, non-overlapping grid.
- Strategy B (Overlapping): Crops the image using an overlapping sliding window.
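The two strategies differ only in their stride. As an illustrative sketch (the helper name and the overlapping stride are assumptions, not the script's actual internals; only the 1024 patch size matches the stated default):

```python
def grid_origins(width, height, patch, stride):
    """Top-left corners of crop windows over a (width x height) map.
    stride == patch gives the non-overlapping grid (Strategy A);
    stride < patch gives an overlapping sliding window (Strategy B)."""
    xs = range(0, width - patch + 1, stride)
    ys = range(0, height - patch + 1, stride)
    return [(x, y) for y in ys for x in xs]

# Strategy A on a 2048x2048 map: four 1024-pixel patches, no overlap.
strategy_a = grid_origins(2048, 2048, patch=1024, stride=1024)
# Strategy B with a hypothetical half-patch stride: nine overlapping windows.
strategy_b = grid_origins(2048, 2048, patch=1024, stride=512)
```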
**Filtering & de-duplication:** All patches must meet a minimum road-length density threshold. To avoid redundancy, the pipeline applies the Weisfeiler-Lehman (WL) topological similarity algorithm: if an overlapping patch (Strategy B) contains a road topology too similar to the existing non-overlapping patches (Strategy A), it is discarded; otherwise it is kept as a valuable topological supplement.
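The card does not spell out the exact WL similarity metric the scripts use. As a rough, stdlib-only sketch of the idea, one can refine node labels WL-style over an adjacency dict and compare the resulting label multisets; a Jaccard score above the threshold (default 0.7) would mark a Strategy B patch as redundant:

```python
import hashlib
from collections import Counter

def wl_labels(adj, iterations=3):
    """Weisfeiler-Lehman label refinement on an adjacency dict
    {node: [neighbours]}; returns the multiset of final node labels."""
    labels = {n: str(len(adj[n])) for n in adj}  # initial label: degree
    for _ in range(iterations):
        labels = {
            n: hashlib.sha1(
                (labels[n] + "|" + ",".join(sorted(labels[m] for m in adj[n]))).encode()
            ).hexdigest()[:16]
            for n in adj
        }
    return Counter(labels.values())

def wl_similarity(adj1, adj2, iterations=3):
    """Jaccard similarity of the two graphs' WL label multisets."""
    c1, c2 = wl_labels(adj1, iterations), wl_labels(adj2, iterations)
    union = sum((c1 | c2).values())
    return sum((c1 & c2).values()) / union if union else 1.0
```

This is a sketch of the technique, not the repository's implementation; the actual scripts may hash edges differently or compare graphs another way.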
## Folder Structure
Before running the script, ensure your raw data is organized into split folders. Inside each folder, images (`.jpg`) and their corresponding graph files (`.pickle`) should be paired by name (e.g., `data0.jpg` and `data0.pickle`).
```
Project Root/
├── script/
│   ├── process_single_split.py
│   ├── crop_patch_from_pickle_parallel.py
│   └── ...
├── train/
├── val/
└── test/
```
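A quick way to confirm that every image has its matching graph file before processing (a small helper sketch, not part of the repository's scripts):

```python
from pathlib import Path

def find_pairs(split_dir):
    """Return (paired stems, images missing a graph, graphs missing an image)
    for a split folder where .jpg and .pickle files are paired by name."""
    split = Path(split_dir)
    jpgs = {p.stem for p in split.glob("*.jpg")}
    pkls = {p.stem for p in split.glob("*.pickle")}
    return sorted(jpgs & pkls), sorted(jpgs - pkls), sorted(pkls - jpgs)
```

Running it on each of `train/`, `val/`, and `test/` should report no unmatched files.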
## How It Works
The main entry point is `process_single_split.py`. When you run it on a target folder (e.g., `test`), the script will:

1. Find all image-graph pairs in the folder.
2. Crop the large data into candidate patches in parallel and save them temporarily in `{split}_processed/`.
3. Filter redundant topological patches using WL similarity.
4. Collect the final valid patches into the `{split}_patches/` directory.
The output will contain two subdirectories for each split:

- `{split}_A`: contains strictly non-overlapping patches.
- `{split}_AB`: contains both non-overlapping and selected overlapping patches.
Note: Only the RGB image and the graph data are kept in the final output; the debug masks are discarded to save disk space.
## Usage
Process each split sequentially by running the following commands from the project root:

```bash
# 1. Process the training set
python script/process_single_split.py train --workers 4

# 2. Process the validation set
python script/process_single_split.py val --workers 4

# 3. Process the test set
python script/process_single_split.py test --workers 4
```
Optional arguments:

- `--workers`: number of parallel threads to speed up the cropping process (default: 4).
- `--patch_size`: output patch size (default: 1024).
- `--sim_threshold`: WL similarity threshold above which redundant Strategy B patches are discarded (default: 0.7).
## Verification (Expected Patch Counts)
After running the processing scripts, you can verify your results by checking the number of generated patches. The expected counts of data pairs (image + graph) for each split are as follows:
| Split | Strategy A (`_A`) | Strategy A+B (`_AB`) |
|---|---|---|
| train | 5566 | 12896 |
| val | 1306 | 2986 |
| test | 1146 | 2666 |
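A small script can compare the on-disk pair counts against the table. The directory layout assumed below (`{split}_patches/{split}_A`, etc.) is inferred from the description above and may need adjusting to your actual output paths:

```python
from pathlib import Path

# Expected (image + graph) pair counts, taken from the table above.
EXPECTED = {
    "train": {"A": 5566, "AB": 12896},
    "val":   {"A": 1306, "AB": 2986},
    "test":  {"A": 1146, "AB": 2666},
}

def count_pairs(patch_dir):
    """Count image/graph pairs in a patch directory (paired by file stem)."""
    d = Path(patch_dir)
    jpgs = {p.stem for p in d.glob("*.jpg")}
    pkls = {p.stem for p in d.glob("*.pickle")}
    return len(jpgs & pkls)

def verify(root="."):
    for split, expected in EXPECTED.items():
        for strategy, want in expected.items():
            patch_dir = Path(root) / f"{split}_patches" / f"{split}_{strategy}"
            got = count_pairs(patch_dir)
            status = "OK" if got == want else "MISMATCH"
            print(f"{split}_{strategy}: {got} / {want} {status}")
```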