---
license: mit
---
# TutorQA Benchmark

This dataset is part of the benchmark introduced in the paper [Graphusion: Leveraging Large Language Models for Scientific Knowledge Graph Fusion and Construction in NLP Education](https://arxiv.org/pdf/2407.10794v1). We also release more data on our [GitHub page](https://github.com/IreneZihuiLi/Graphusion/tree/main).
It contains six tasks designed to evaluate various aspects of reasoning, graph understanding, and language generation.

## Dataset Structure

Each task is a separate split:
- `task1`: Relation Judgment
- `task2`: Prerequisite Prediction
- `task3`: Path Searching
- `task4`: Subgraph Completion
- `task5`: Clustering
- `task6`: Idea Hamster (open-ended, no reference answers)

| Split | Fields               |
|:------|:---------------------|
| task1 | `question`, `answer` |
| task2 | `question`, `answer` |
| task3 | `question`, `answer` |
| task4 | `question`, `answer` |
| task5 | `question`, `answer` |
| task6 | `question`           |

## Usage Example

```python
from datasets import load_dataset

dataset = load_dataset("li-lab/tutorqa")

# Access individual tasks
task1 = dataset["task1"]
task6 = dataset["task6"]
```
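
A minimal follow-up sketch, assuming the splits load with the fields listed in the table above: it inspects each split's columns and pulls a single record.

```python
from datasets import load_dataset

dataset = load_dataset("li-lab/tutorqa")

# List each split with its column names and size
# (task1-task5 should expose `question` and `answer`; task6 only `question`)
for name, split in dataset.items():
    print(name, split.column_names, len(split))

# A single record is a plain dict keyed by the field names
example = dataset["task1"][0]
print(example["question"])
print(example["answer"])

# A single split can also be loaded directly
task6 = load_dataset("li-lab/tutorqa", split="task6")
```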