---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: meta
    struct:
    - name: pile_set_name
      dtype: string
  splits:
  - name: train
    num_bytes: 3811000466
    num_examples: 79046
  - name: validation
    num_bytes: 115675626
    num_examples: 2434
  - name: test
    num_bytes: 113239914
    num_examples: 2407
  download_size: 1875191074
  dataset_size: 4039916006
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
language:
- en
---
This dataset contains all ArXiv documents from the `00.jsonl.zst` partition of The Pile. It was created with the following script:

```python
import json

import zstandard as zstd
from tqdm import tqdm

pile_path = "data/the_pile/train/00.jsonl.zst"

# Stream-decompress the partition line by line and keep only documents
# whose Pile subset label is "ArXiv".
with zstd.open(pile_path, 'r') as fr:
    with open("/tmp/arxiv.jsonl", "w") as fw:
        for i, line in enumerate(tqdm(fr)):
            doc = json.loads(line)
            source = doc['meta']['pile_set_name']
            if source == "ArXiv":
                fw.write(json.dumps(doc) + "\n")
```
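Each line the script keeps is a JSON object with a `text` field and a `meta` struct containing `pile_set_name`, matching the features block in the header. A minimal sketch of the same keep/drop rule on synthetic records (the sample strings below are illustrative, not real Pile data):

```python
import json

# Synthetic records in the Pile's one-JSON-object-per-line format
# (illustrative only; real documents are much longer).
lines = [
    '{"text": "a paper", "meta": {"pile_set_name": "ArXiv"}}',
    '{"text": "a web page", "meta": {"pile_set_name": "Pile-CC"}}',
]

# Keep only records whose Pile subset label is "ArXiv".
kept = [doc for doc in map(json.loads, lines)
        if doc["meta"]["pile_set_name"] == "ArXiv"]
```

Only the first record survives the filter; its `text` and `meta` fields are what end up as the two dataset columns.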
The validation and test sets are the full official releases.