Modalities: Text · Formats: json · Size: < 1K · Libraries: Datasets, pandas · License: mit
Commit 1cf9f53 (verified), committed by barty · 1 parent: 2490f16

Update README.md

Files changed (1): README.md (+45, -1)
README.md CHANGED (the previous body, a bare [Paper](https://arxiv.org/abs/2404.12636) link, is replaced by the full dataset card below):

---
license: mit
---

## Dataset Description

- **Repository:** [MORepair](https://github.com/barty/morepair)
- **Paper:** [MORepair: Teaching LLMs to Repair Code via Multi-Objective Fine-tuning](https://arxiv.org/abs/2404.12636)
- **Point of Contact:** [Boyang Yang](mailto:yby@ieee.org)

### Dataset Summary

EvalRepair-Java is a benchmark for evaluating Java program repair performance, derived from HumanEval. It contains 163 single-function repair tasks, each with a buggy implementation and its corresponding fixed version.

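A minimal loading sketch using the `datasets` library (listed under Libraries above); the repository id and split name below are assumptions, so substitute the actual ones:

```python
# Sketch: load EvalRepair-Java with the Hugging Face `datasets` library.
# The repo id "barty/EvalRepair-Java" and the "train" split are assumptions.
from datasets import load_dataset

ds = load_dataset("barty/EvalRepair-Java", split="train")

print(len(ds))           # expected: 163 repair tasks
print(ds[0]["task_id"])  # each record pairs buggy and fixed code
```
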
### Supported Tasks

- Program Repair: Fixing bugs in Java functions (see the prompt sketch after this list)
- Code Generation: Generating correct implementations from buggy code

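To make the repair task concrete, here is an illustrative way to turn one record into a repair prompt. This is not the prompt format used in the MORepair paper, just a sketch over the fields described under Dataset Structure:

```python
# Illustrative only: build a repair prompt from one record.
# Field names follow the schema in "Dataset Structure"; the prompt wording is
# not the one used in the MORepair paper.
def build_repair_prompt(record: dict) -> str:
    return (
        f"// Bug: {record['issue_title']}\n"
        f"// {record['issue_description']}\n"
        "// Fix the bug in the following Java function:\n"
        f"{record['buggy_code']}\n"
    )

print(build_repair_prompt(ds[0]))  # `ds` from the loading sketch above
```
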
### Dataset Structure

Each example contains the following fields (a schema-inspection sketch follows this list):
- `task_id`: Unique identifier for the task
- `buggy_code`: The buggy implementation
- `fixed_code`: The correct implementation
- `file_path`: Original file path in the HumanEval dataset
- `issue_title`: Title of the bug
- `issue_description`: Description of the bug
- `start_line`: Start line of the buggy function
- `end_line`: End line of the buggy function

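Since the card lists pandas among the supported libraries, a quick way to inspect these fields is to convert the split to a DataFrame; `ds` is the object from the loading sketch above, and the stored column order may differ from the listing:

```python
# Sketch: inspect the schema with pandas (`ds` comes from the loading example).
df = ds.to_pandas()

# Expected columns (storage order may differ):
# task_id, buggy_code, fixed_code, file_path,
# issue_title, issue_description, start_line, end_line
print(df.columns.tolist())

row = df.iloc[0]
print(row["task_id"], row["start_line"], row["end_line"])
```
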
### Source Data

This dataset is derived from HumanEval, a benchmark for evaluating code generation capabilities. We manually introduced bugs into the original implementations and verified the fixes.

### Citation

```bibtex
@article{10.1145/3735129,
  author    = {Yang, Boyang and Tian, Haoye and Ren, Jiadong and Zhang, Hongyu and Klein, Jacques and Bissyande, Tegawende and Le Goues, Claire and Jin, Shunfu},
  title     = {MORepair: Teaching LLMs to Repair Code via Multi-Objective Fine-Tuning},
  year      = {2025},
  publisher = {Association for Computing Machinery},
  issn      = {1049-331X},
  url       = {https://doi.org/10.1145/3735129},
  doi       = {10.1145/3735129},
  journal   = {ACM Trans. Softw. Eng. Methodol.},
}
```