---
license: cc-by-nc-sa-4.0
task_categories:
  - question-answering
language:
  - ar
  - en
tags:
  - question-answering
  - cultural-aligned
pretty_name: 'SpokenNativQA: Multilingual Everyday Spoken Queries for LLMs'
size_categories:
  - 10K<n<100K
dataset_info:
  - config_name: Arabic-ASR-Azure
    splits:
      - name: test
        num_examples: 988
  - config_name: Arabic-ASR-Whisper
    splits:
      - name: test
        num_examples: 985
  - config_name: Arabic-ASR-Fanar-Aura
    splits:
      - name: test
        num_examples: 988
  - config_name: Arabic-ASR-Google
    splits:
      - name: test
        num_examples: 985
  - config_name: English-ASR-Azure
    splits:
      - name: test
        num_examples: 2322
  - config_name: English-ASR-Whisper
    splits:
      - name: test
        num_examples: 2322
  - config_name: English-ASR-Fanar-Aura
    splits:
      - name: test
        num_examples: 2322
  - config_name: English-ASR-Google
    splits:
      - name: test
        num_examples: 2322
configs:
  - config_name: Arabic-ASR-Azure
    data_files:
      - split: test
        path: arabic_qa/spokenqa_arabic_qa_test_azure_asr.json
  - config_name: Arabic-ASR-Whisper
    data_files:
      - split: test
        path: arabic_qa/spokenqa_arabic_qa_test_whisper_asr.json
  - config_name: Arabic-ASR-Fanar-Aura
    data_files:
      - split: test
        path: arabic_qa/spokenqa_arabic_qa_test_fanar_asr.json
  - config_name: Arabic-ASR-Google
    data_files:
      - split: test
        path: arabic_qa/spokenqa_arabic_qa_test_google_asr.json
  - config_name: English-ASR-Azure
    data_files:
      - split: test
        path: english_qa/spokenqa_english_qa_test_azure_asr.json
  - config_name: English-ASR-Whisper
    data_files:
      - split: test
        path: english_qa/spokenqa_english_qa_test_whisper_asr.json
  - config_name: English-ASR-Fanar-Aura
    data_files:
      - split: test
        path: english_qa/spokenqa_english_qa_test_fanar_asr.json
  - config_name: English-ASR-Google
    data_files:
      - split: test
        path: english_qa/spokenqa_english_qa_test_google_asr.json
---

# SpokenNativQA: Multilingual Everyday Spoken Queries for LLMs

The **SpokenNativQA** dataset consists of question-answer (QA) pairs in which queries are sourced from real users and answers are manually reviewed and edited. The dataset covers 18 topics that reflect culturally and regionally specific knowledge as well as everyday queries: animals, business, clothing, education, events, food and drinks, general knowledge, geography, immigration, language, literature, names and persons, plants, religion, sports and games, tradition, travel, and weather.

SpokenNativQA provides **multilingual test sets of everyday spoken questions** to evaluate large language models (LLMs) and speech processing systems. The dataset contains **Arabic** and **English** queries, each transcribed by multiple automatic speech recognition (ASR) systems.

<p align="left"> <img src="./spokennativqa.png" style="width: 60%;" id="title-icon"> </p>


**<span style="color:blue">Note:</span>** This repository shares only the wav files used for evaluation. The full dataset reported in the paper may be made available by contacting the authors.


## Directory Overview

The dataset is organized into two main directories:

- **`arabic_qa/`**
  - `spokenqa_arabic_qa_test_azure_asr.jsonl`
  - `spokenqa_arabic_qa_test_fanar_asr.jsonl`
  - `spokenqa_arabic_qa_test_google_asr.jsonl`
  - `spokenqa_arabic_qa_test_whisper_asr.jsonl`
  - `spokenqa_arabic_qa_test.jsonl`
  - `speech/` -- wav files

- **`english_qa/`**
  - `spokenqa_english_qa_test_azure_asr.jsonl`
  - `spokenqa_english_qa_test_fanar_asr.jsonl`
  - `spokenqa_english_qa_test_google_asr.jsonl`
  - `spokenqa_english_qa_test_whisper_asr.jsonl`
  - `spokenqa_english_qa_test.jsonl`
  - `speech/` -- wav files
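
Since the card defines a Hugging Face `datasets` config for each language/ASR pair, any split can be loaded directly with the `datasets` library. A minimal sketch, assuming the dataset's repository id is `QCRI/SpokenNativQA` (an assumption; substitute the actual id if it differs):

```python
from datasets import load_dataset

# Config names come from the card metadata, e.g. "Arabic-ASR-Whisper".
# The repository id is an assumption; replace it with this dataset's actual id.
ds = load_dataset("QCRI/SpokenNativQA", "Arabic-ASR-Whisper", split="test")

print(len(ds))  # 985 examples, per the card metadata
print(ds[0]["question"], "->", ds[0]["asr_text"])
```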

### Dataset Structure and Format

Each `.jsonl` file contains one JSON object per line. Each object typically includes the following fields:

- **`lang`**: The language of the spoken query (e.g., "arabic", "english").
- **`data_id`**: A unique identifier for the data instance.
- **`file_name`**: The name of the audio file.
- **`file_path`**: The relative path of the audio file.
- **`question`**: The intended question in text form (reference).
- **`answer`**: The expected answer or reference answer for the question.
- **`location`**: The geographical location where the query was recorded.
- **`asr_text`**: The text output from the ASR system.

### Example of a JSON Entry
```json
{
  "lang": "arabic",
  "data_id": "3cdfcfd1acb722617ec8bbe6808114bc",
  "file_name": "3cdfcfd1acb722617ec8bbe6808114bc_1724222501325.wav",
  "file_path": "speech/3cdfcfd1acb722617ec8bbe6808114bc_1724222501325.wav",
  "question": "من هو الشاعر الذي سجن؟",
  "answer": "وهي القصائد التي كتبها أبو فراس الحمداني فترة أسره عند الروم في سجن خرشنة، وعرفت باسم الروميات نسبة لمكان أسره، وقد تميزت هذا القصائد بجزالتها وقوتها ورصانتها، وصدق عاطفتها.",
  "location": "qatar",
  "asr_text": "من هو الشاعر الذي زعل؟"
}
```
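
The reference `question` and the ASR output `asr_text` can be compared directly to surface recognition errors such as the one above (سجن, "imprisoned", misrecognized as زعل, "upset"). A minimal inspection sketch using `pandas`, assuming the file names and `speech/` layout listed in the directory overview, with `soundfile` (an assumed dependency) for reading the wav referenced by `file_path`:

```python
import pandas as pd
import soundfile as sf  # assumed dependency for reading the wav files

# Path as listed in the directory overview; lines=True reads JSON Lines.
df = pd.read_json("arabic_qa/spokenqa_arabic_qa_test_whisper_asr.jsonl", lines=True)

# Transcripts that diverge from the reference question flag ASR errors.
mismatches = df[df["question"].str.strip() != df["asr_text"].str.strip()]
print(f"{len(mismatches)}/{len(df)} transcripts differ from the reference question")

# file_path is relative to the language directory (arabic_qa/ here).
audio, sr = sf.read("arabic_qa/" + df.iloc[0]["file_path"])
print(f"first clip: {len(audio) / sr:.1f} s at {sr} Hz")
```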

## Experimental Scripts
All experimental scripts are available as part of the [LLMeBench](https://github.com/qcri/LLMeBench) framework.

## License
This dataset is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).


## Citation
If you use this dataset in your research, please cite our [paper](https://www.arxiv.org/pdf/2505.19163).

```bibtex
@inproceedings{alam2025spokennativqa,
  title     = {SpokenNativQA: Multilingual Everyday Spoken Queries for LLMs},
  author    = {Firoj Alam and Md Arid Hasan and Shammur Absar Chowdhury},
  booktitle = {Proceedings of the 26th Interspeech Conference (Interspeech 2025)},
  year      = {2025},
  address   = {Rotterdam, The Netherlands},
  month     = aug,
  organization = {ISCA},
}

```