You need to agree to share your contact information to access this dataset
This repository is publicly accessible, but you have to accept the conditions to access its files and content.
By requesting access to this dataset, you agree to the following terms:
- Usage Restriction: You agree not to use this dataset or any derivative of it for training machine learning models, including but not limited to fine-tuning, pretraining, or dataset augmentation.
- License Acceptance: You confirm that you have read, understood, and accept the dataset's license: Apache License, Version 2.0.
cieaCOVA Dataset
cieaCOVA is a Valencian-language (va) evaluation dataset designed to benchmark large language models (LLMs) on structured reasoning and generative tasks. The dataset contains 1,982 curated examples and is specifically developed for evaluation purposes — not for model training.
The dataset is organized into two task-oriented directories, each containing a train and test split:
- multiple_choice/ (train.parquet, test.parquet): multiple-choice question answering
- text_generation/ (train.parquet, test.parquet): open-ended text generation
All examples are written in Valencian, making cieaCOVA a valuable benchmark resource for assessing LLM performance in this language.
Dataset Structure
The dataset contains 9 columns, all stored as strings except for Metadata:
- Tarea: Task category (e.g., reasoning, factual knowledge, classification).
- Subtarea: Sub-task category (specific values: "text", "oracions", "definició", "frase").
- Instrucción: The instruction provided to the model describing what it must do.
- Pista: An optional hint or supporting clue to guide reasoning (present in 698 out of 1,982 examples).
- Pregunta: The main question posed to the model.
- Respuesta: The correct or reference answer.
- Opciones: Multiple-choice options associated with the question.
- Prompt: The full constructed prompt, typically combining instruction, question, and context.
- Metadata: Additional structured information related to the example (e.g., category, difficulty, or source).
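To illustrate how these fields relate, the sketch below assembles a prompt from the per-example columns. This is a hypothetical composition, not the dataset's documented construction: the dataset already ships a ready-made Prompt column, and the build_prompt helper and its formatting are assumptions for illustration only.

```python
def build_prompt(example):
    """Assemble a prompt from a cieaCOVA-style example dict.

    Hypothetical sketch: the actual Prompt column is precomputed in the
    dataset; this only shows how instruction, hint, question, and options
    might be combined.
    """
    parts = [example["Instrucción"]]
    # Pista is optional (present in 698 of 1,982 examples); skip when empty.
    if example.get("Pista"):
        parts.append(f"Pista: {example['Pista']}")
    parts.append(f"Pregunta: {example['Pregunta']}")
    # Opciones only applies to multiple-choice examples.
    if example.get("Opciones"):
        parts.append(f"Opcions: {example['Opciones']}")
    return "\n".join(parts)
```

Because the hint and options fields are optional, the same helper works for both multiple-choice and text-generation examples.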
Dataset Statistics
- Total examples: 1,982
- Language: Valencian (va)
- Splits: 4 task-specific Parquet files, covering train and test splits for multiple_choice and text_generation
- Hint availability (Pista): present in 698 examples
- Format: Apache Parquet
Design Principles
The dataset was curated under the following principles:
- Clear separation between instruction, question, and answer
- Consistent formatting of multiple-choice options
- Explicit reference answers for reproducibility
- Structured prompts to reduce ambiguity
- Linguistic consistency in Valencian
- Metadata support for filtered evaluation
Quality assurance procedures were applied to ensure structural consistency and minimize formatting errors.
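The last principle, metadata support for filtered evaluation, can be sketched as a simple filter over loaded examples. The filter_examples helper below is an assumption for illustration; it presumes the Metadata column has been parsed into a Python mapping (e.g. from the Parquet struct column), which the card does not specify.

```python
def filter_examples(examples, **criteria):
    """Keep examples whose Metadata matches all given key/value pairs.

    Hypothetical sketch: assumes each example's "Metadata" field is a dict
    (e.g. {"difficulty": "easy"}); adjust if the column is stored differently.
    """
    return [
        ex for ex in examples
        if all(ex.get("Metadata", {}).get(k) == v for k, v in criteria.items())
    ]
```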
Intended Uses
cieaCOVA is intended exclusively for:
- Evaluating LLM performance in Valencian
- Benchmarking instruction-following models
- Measuring multiple-choice reasoning accuracy
- Analyzing generation quality under structured prompts
- Studying prompt sensitivity and evaluation reproducibility
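For the multiple-choice accuracy use case, a minimal scorer can compare model outputs against the Respuesta reference answers. This is a sketch under assumptions: the function name and the normalization (strip + casefold) are not part of the dataset's documented evaluation protocol, and real pipelines may need answer-letter extraction or stricter matching.

```python
def multiple_choice_accuracy(predictions, references):
    """Exact-match accuracy against reference answers (Respuesta column).

    Hypothetical sketch: applies only light normalization (whitespace strip
    and case folding) before comparing prediction and reference strings.
    """
    if not references:
        raise ValueError("no references given")

    def norm(s):
        return s.strip().casefold()

    correct = sum(norm(p) == norm(r) for p, r in zip(predictions, references))
    return correct / len(references)
```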
Out-of-Scope Use
This dataset is strictly for evaluation purposes.
It must not be used for:
- Pretraining
- Fine-tuning
- Reinforcement learning
- Dataset augmentation
- Any other form of model training
Access requires explicit agreement to these restrictions.
Data Collection and Curation
The dataset has been curated to ensure:
- Structured and reusable prompt formatting
- Clean separation between model input and expected output
- Standardized evaluation across tasks
All content is written and validated in Valencian.
Funding
This work is funded by the Ministerio para la Transformación Digital y de la Función Pública (funded by the EU, NextGenerationEU) within the framework of the project Desarrollo de Modelos ALIA.
Acknowledgments
We would like to express our gratitude to all individuals and institutions that have contributed to the development of this work.
We also acknowledge the financial, technical, and scientific support of the Ministerio para la Transformación Digital y de la Función Pública (funded by the EU, NextGenerationEU) within the framework of the project Desarrollo de Modelos ALIA.
Reference
Please cite this dataset using the following BibTeX format:
@misc{cieacova2025,
author = {Dataset Contributors},
title = {cieaCOVA Dataset},
year = {2026},
institution = {Language and Information Systems Group (GPLSI) and Centro de Inteligencia Digital (CENID), University of Alicante (UA)},
howpublished = {\url{https://huggingface.co/datasets/gplsi/cieaCOVA}}
}
Disclaimer
Be aware that the data may contain biases or other unintended distortions. When third parties deploy systems or provide services based on this data, or use the data themselves, they bear the responsibility for mitigating any associated risks and ensuring compliance with applicable regulations, including those governing the use of Artificial Intelligence. The University of Alicante, as the owner and creator of the dataset, shall not be held liable for any outcomes resulting from third-party use.
License
This dataset is released under the Apache License, Version 2.0.