---
license: cc0-1.0
task_categories:
- question-answering
language:
- en
size_categories:
- 100M<n<1B
---
# ComplexTempQA Dataset
ComplexTempQA is a large-scale dataset designed for complex temporal question answering (TQA). It consists of over 100 million question-answer pairs, making it one of the most extensive datasets available for TQA. The dataset is generated using data from Wikipedia and Wikidata and spans questions over a period of 36 years (1987-2023).
## Dataset Description
ComplexTempQA categorizes questions into three main types:
- Attribute Questions
- Comparison Questions
- Counting Questions
These categories are further divided based on their relation to events, entities, or time periods.
### Question Types and Counts

| Question Type | Subtype | Count |
|---|---|---|
| Attribute | Event | 83,798 |
| Attribute | Entity | 84,079 |
| Attribute | Time | 9,454 |
| Comparison | Event | 25,353,340 |
| Comparison | Entity | 74,678,117 |
| Comparison | Time | 54,022,952 |
| Counting | Event | 18,325 |
| Counting | Entity | 10,798 |
| Counting | Time | 12,732 |
| Multi-Hop | - | 76,933 |
| Unnamed Event | - | 8,707,123 |
| **Total** | | 100,228,457 |
### Metadata
Each question in the dataset is accompanied by detailed metadata, including:
- Type of question based on taxonomy
- Wikidata IDs of the questioned entities or events
- Country information for both questions and answers
- Difficulty rating (easy or hard)
- Time span related to the question
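
To make the metadata concrete, here is a minimal sketch of what a single record and a simple filter over it might look like. The field names (`type`, `subtype`, `wikidata_ids`, `difficulty`, etc.) are illustrative assumptions based on the list above, not the dataset's actual schema.

```python
# Hypothetical ComplexTempQA record; field names are assumptions
# derived from the metadata description, not the real schema.
sample = {
    "question": "Which event happened first, X or Y?",
    "answer": "X",
    "type": "comparison",       # attribute / comparison / counting
    "subtype": "event",         # event / entity / time
    "wikidata_ids": ["Q362"],   # Wikidata IDs of questioned entities/events
    "question_country": "DE",   # country information for the question
    "answer_country": "DE",     # country information for the answer
    "difficulty": "hard",       # easy or hard
    "time_span": (1987, 2023),  # time span related to the question
}

def is_hard(record: dict) -> bool:
    """Return True for records with a 'hard' difficulty rating."""
    return record["difficulty"] == "hard"
```

A filter like `is_hard` can be used to build difficulty-stratified evaluation subsets once the actual field names are confirmed against the released data.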
## Dataset Characteristics

### Size
ComplexTempQA comprises over 100 million question-answer pairs, focusing on events, entities, and time periods from 1987 to 2023.
### Complexity

Questions require advanced reasoning skills, including multi-hop question answering, temporal aggregation, and comparisons across time.
### Taxonomy
The dataset follows a unique taxonomy categorizing questions into attributes, comparisons, and counting types, ensuring comprehensive coverage of temporal queries.
### Evaluation
The dataset has been evaluated for readability, ease of answering before and after web searches, and overall clarity. Human raters have assessed a sample of questions to ensure high quality.
## Usage

### Evaluation and Training
ComplexTempQA can be used for:
- Evaluating the temporal reasoning capabilities of large language models (LLMs)
- Fine-tuning language models for better temporal understanding
- Developing and testing retrieval-augmented generation (RAG) systems
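
For LLM evaluation on question-answer pairs like these, a common starting point is normalized exact-match accuracy. The sketch below is a generic implementation of that metric, not a scoring script shipped with ComplexTempQA:

```python
def normalize(text: str) -> str:
    """Lowercase and collapse whitespace for lenient string matching."""
    return " ".join(text.lower().split())

def exact_match(prediction: str, gold: str) -> bool:
    """True if prediction and gold answer match after normalization."""
    return normalize(prediction) == normalize(gold)

def accuracy(predictions: list[str], golds: list[str]) -> float:
    """Fraction of predictions that exactly match their gold answers."""
    matches = sum(exact_match(p, g) for p, g in zip(predictions, golds))
    return matches / len(golds)
```

In practice, temporal QA evaluations often supplement exact match with softer metrics (e.g. token-level F1), since dates and entity names can be expressed in several valid surface forms.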
### Research Applications
The dataset supports research in:
- Temporal question answering
- Information retrieval
- Language understanding
### Adaptation and Continual Learning
ComplexTempQA's temporal metadata facilitates the development of online adaptation and continual training approaches for LLMs, aiding in the exploration of time-based learning and evaluation.
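
One way to exploit this temporal metadata for continual learning is a chronological split: train on questions up to a cutoff year and evaluate on later ones. The sketch below assumes each record carries a `year` field, which is an illustrative simplification of the dataset's time-span metadata:

```python
def time_split(records: list[dict], cutoff_year: int) -> tuple[list[dict], list[dict]]:
    """Partition records chronologically: train on years before the
    cutoff, evaluate on the cutoff year and later.

    Assumes each record has a 'year' key (hypothetical field name)."""
    train = [r for r in records if r["year"] < cutoff_year]
    evaluation = [r for r in records if r["year"] >= cutoff_year]
    return train, evaluation
```

Sliding the cutoff forward year by year yields a sequence of train/eval splits for studying how model knowledge degrades or adapts over time.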
## Access
The dataset and code are freely available at https://github.com/DataScienceUIBK/ComplexTempQA.