Reja1 committed on
Commit f0e1b80 · 1 Parent(s): 755f1a1

feat: Add Claude Sonnet 4 to OpenRouter benchmark models


Include `anthropic/claude-sonnet-4` in the `openrouter_models` list within `benchmark_config.yaml`. This enables running benchmarks with the Claude Sonnet 4 model, and adds the JEE Advanced 2025 results for Claude Sonnet 4.

configs/benchmark_config.yaml CHANGED
@@ -3,7 +3,7 @@
 # Ensure the models support vision input.
 openrouter_models:
   - "google/gemini-2.5-pro-preview-03-25"
-  #- "openai/gpt-4o"
+  - "anthropic/claude-sonnet-4"
   # - "google/gemini-pro-vision" # Example - uncomment or add others
   # - "anthropic/claude-3-opus" # Example - check vision support and access
   # - "anthropic/claude-3-sonnet"
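For reference, the updated model list can be read back out of the config with a few lines of Python. This is only a sketch: it hand-parses the simple `key:` + dash-list layout shown in the diff above rather than using a real YAML library (the repository presumably loads the file with a proper YAML parser), and `load_openrouter_models` is a hypothetical helper name.

```python
# Minimal sketch: extract the openrouter_models list from benchmark_config.yaml
# without a YAML dependency. Assumes the flat "openrouter_models:" + dash-list
# layout shown in the diff; a real loader (e.g. PyYAML's safe_load) is more robust.
def load_openrouter_models(text: str) -> list[str]:
    models = []
    in_list = False
    for raw in text.splitlines():
        line = raw.strip()
        if line.startswith("openrouter_models:"):
            in_list = True
            continue
        if in_list:
            if line.startswith("- "):
                # Drop trailing comments and surrounding quotes.
                item = line[2:].split("#", 1)[0].strip().strip('"')
                models.append(item)
            elif line.startswith("#") or not line:
                continue  # skip commented-out entries and blank lines
            else:
                break  # list ended at the next top-level key

    return models

config_text = """\
# Ensure the models support vision input.
openrouter_models:
  - "google/gemini-2.5-pro-preview-03-25"
  - "anthropic/claude-sonnet-4"
  # - "google/gemini-pro-vision" # Example - uncomment or add others
"""
print(load_openrouter_models(config_text))
# → ['google/gemini-2.5-pro-preview-03-25', 'anthropic/claude-sonnet-4']
```

Note that the commented-out entries stay invisible to the benchmark until uncommented, which is the behavior this commit relies on.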
results/anthropic_claude-sonnet-4_JEE_ADVANCED_2025_20250603_220354/predictions.jsonl ADDED
The diff for this file is too large to render.
 
results/anthropic_claude-sonnet-4_JEE_ADVANCED_2025_20250603_220354/summary.jsonl ADDED
@@ -0,0 +1,96 @@
+ {"question_id": "JA25P1P01", "marks_awarded": 3, "evaluation_status": "correct", "predicted_answer": ["A"], "ground_truth": ["A"], "attempt": 1}
+ {"question_id": "JA25P1P02", "marks_awarded": 3, "evaluation_status": "correct", "predicted_answer": ["D"], "ground_truth": ["D"], "attempt": 1}
+ {"question_id": "JA25P1P03", "marks_awarded": -1, "evaluation_status": "incorrect", "predicted_answer": ["D"], "ground_truth": ["A"], "attempt": 1}
+ {"question_id": "JA25P1P04", "marks_awarded": -1, "evaluation_status": "incorrect", "predicted_answer": ["A"], "ground_truth": ["C"], "attempt": 1}
+ {"question_id": "JA25P1P05", "marks_awarded": 1, "evaluation_status": "partial_1_of_2_plus", "predicted_answer": ["B"], "ground_truth": ["B", "D"], "attempt": 1}
+ {"question_id": "JA25P1P06", "marks_awarded": 4, "evaluation_status": "correct_full", "predicted_answer": ["D"], "ground_truth": ["D"], "attempt": 1}
+ {"question_id": "JA25P1P07", "marks_awarded": -2, "evaluation_status": "incorrect_negative", "predicted_answer": ["B", "D"], "ground_truth": ["A", "D"], "attempt": 1}
+ {"question_id": "JA25P1P08", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["2"], "ground_truth": ["2"], "attempt": 1}
+ {"question_id": "JA25P1P09", "marks_awarded": 0, "evaluation_status": "incorrect", "predicted_answer": ["7"], "ground_truth": [["21", "25"]], "attempt": 1}
+ {"question_id": "JA25P1P10", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["3"], "ground_truth": ["3"], "attempt": 1}
+ {"question_id": "JA25P1P11", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["0.5"], "ground_truth": ["0.5", "0.75"], "attempt": 1}
+ {"question_id": "JA25P1P12", "marks_awarded": 0, "evaluation_status": "incorrect", "predicted_answer": ["72"], "ground_truth": [["75", "79"]], "attempt": 1}
+ {"question_id": "JA25P1P13", "marks_awarded": 0, "evaluation_status": "incorrect", "predicted_answer": ["27"], "ground_truth": ["72"], "attempt": 1}
+ {"question_id": "JA25P1P14", "marks_awarded": -1, "evaluation_status": "incorrect", "predicted_answer": ["D"], "ground_truth": ["C"], "attempt": 1}
+ {"question_id": "JA25P1P15", "marks_awarded": -1, "evaluation_status": "incorrect", "predicted_answer": ["C"], "ground_truth": ["A"], "attempt": 1}
+ {"question_id": "JA25P1P16", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["C"], "ground_truth": ["C"], "attempt": 1}
+ {"question_id": "JA25P1C01", "marks_awarded": 3, "evaluation_status": "correct", "predicted_answer": ["A"], "ground_truth": ["A"], "attempt": 1}
+ {"question_id": "JA25P1C02", "marks_awarded": 3, "evaluation_status": "correct", "predicted_answer": ["A"], "ground_truth": ["A"], "attempt": 1}
+ {"question_id": "JA25P1C03", "marks_awarded": 3, "evaluation_status": "correct", "predicted_answer": ["B"], "ground_truth": ["B"], "attempt": 1}
+ {"question_id": "JA25P1C04", "marks_awarded": -1, "evaluation_status": "incorrect", "predicted_answer": ["C"], "ground_truth": ["B"], "attempt": 1}
+ {"question_id": "JA25P1C05", "marks_awarded": 4, "evaluation_status": "correct_full", "predicted_answer": ["B", "C"], "ground_truth": ["B", "C"], "attempt": 1}
+ {"question_id": "JA25P1C06", "marks_awarded": 4, "evaluation_status": "correct_full", "predicted_answer": ["A", "B"], "ground_truth": ["A", "B"], "attempt": 1}
+ {"question_id": "JA25P1C07", "marks_awarded": 4, "evaluation_status": "correct_full", "predicted_answer": ["B"], "ground_truth": ["B"], "attempt": 1}
+ {"question_id": "JA25P1C08", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["100"], "ground_truth": ["100"], "attempt": 1}
+ {"question_id": "JA25P1C09", "marks_awarded": 0, "evaluation_status": "incorrect", "predicted_answer": ["2"], "ground_truth": [["2.2", "2.3"]], "attempt": 1}
+ {"question_id": "JA25P1C10", "marks_awarded": 0, "evaluation_status": "incorrect", "predicted_answer": ["1.01"], "ground_truth": [["-7.2", "-7"]], "attempt": 1}
+ {"question_id": "JA25P1C11", "marks_awarded": 0, "evaluation_status": "incorrect", "predicted_answer": ["30"], "ground_truth": [["-29.95", "-29.8"], ["29.8", "29.95"]], "attempt": 1}
+ {"question_id": "JA25P1C12", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["280"], "ground_truth": ["280"], "attempt": 1}
+ {"question_id": "JA25P1C13", "marks_awarded": 0, "evaluation_status": "incorrect", "predicted_answer": ["138"], "ground_truth": ["175"], "attempt": 1}
+ {"question_id": "JA25P1C14", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["A"], "ground_truth": ["A"], "attempt": 1}
+ {"question_id": "JA25P1C15", "marks_awarded": -1, "evaluation_status": "incorrect", "predicted_answer": ["C"], "ground_truth": ["B"], "attempt": 1}
+ {"question_id": "JA25P1C16", "marks_awarded": -1, "evaluation_status": "incorrect", "predicted_answer": ["A"], "ground_truth": ["B"], "attempt": 1}
+ {"question_id": "JA25P1M01", "marks_awarded": -1, "evaluation_status": "failure_api_or_parse", "predicted_answer": null, "ground_truth": ["C"], "attempt": 2}
+ {"question_id": "JA25P1M02", "marks_awarded": 3, "evaluation_status": "correct", "predicted_answer": ["A"], "ground_truth": ["A"], "attempt": 1}
+ {"question_id": "JA25P1M03", "marks_awarded": 3, "evaluation_status": "correct", "predicted_answer": ["C"], "ground_truth": ["C"], "attempt": 1}
+ {"question_id": "JA25P1M04", "marks_awarded": -1, "evaluation_status": "incorrect", "predicted_answer": ["16"], "ground_truth": ["C"], "attempt": 1}
+ {"question_id": "JA25P1M05", "marks_awarded": -2, "evaluation_status": "incorrect_negative", "predicted_answer": ["A", "D"], "ground_truth": ["A", "C"], "attempt": 1}
+ {"question_id": "JA25P1M06", "marks_awarded": -2, "evaluation_status": "incorrect_negative", "predicted_answer": ["A", "C", "D"], "ground_truth": ["A", "D"], "attempt": 1}
+ {"question_id": "JA25P1M07", "marks_awarded": 4, "evaluation_status": "correct_full", "predicted_answer": ["A", "D"], "ground_truth": ["A", "D"], "attempt": 1}
+ {"question_id": "JA25P1M08", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["105"], "ground_truth": ["105"], "attempt": 1}
+ {"question_id": "JA25P1M09", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["1.2"], "ground_truth": [["1.15", "1.25"]], "attempt": 1}
+ {"question_id": "JA25P1M10", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["762"], "ground_truth": ["762"], "attempt": 1}
+ {"question_id": "JA25P1M11", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["2.4"], "ground_truth": [["2.35", "2.45"]], "attempt": 1}
+ {"question_id": "JA25P1M12", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["96"], "ground_truth": ["96"], "attempt": 1}
+ {"question_id": "JA25P1M13", "marks_awarded": 0, "evaluation_status": "incorrect", "predicted_answer": ["0"], "ground_truth": ["2"], "attempt": 1}
+ {"question_id": "JA25P1M14", "marks_awarded": -1, "evaluation_status": "incorrect", "predicted_answer": ["A"], "ground_truth": ["C"], "attempt": 1}
+ {"question_id": "JA25P1M15", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["B"], "ground_truth": ["B"], "attempt": 1}
+ {"question_id": "JA25P1M16", "marks_awarded": -1, "evaluation_status": "incorrect", "predicted_answer": ["C"], "ground_truth": ["A"], "attempt": 1}
+ {"question_id": "JA25P2P01", "marks_awarded": 3, "evaluation_status": "correct", "predicted_answer": ["B"], "ground_truth": ["B"], "attempt": 1}
+ {"question_id": "JA25P2P02", "marks_awarded": -1, "evaluation_status": "incorrect", "predicted_answer": ["B"], "ground_truth": ["C"], "attempt": 1}
+ {"question_id": "JA25P2P03", "marks_awarded": 3, "evaluation_status": "correct", "predicted_answer": ["C"], "ground_truth": ["C"], "attempt": 1}
+ {"question_id": "JA25P2P04", "marks_awarded": 3, "evaluation_status": "correct", "predicted_answer": ["C"], "ground_truth": ["A", "B", "C", "D"], "attempt": 1}
+ {"question_id": "JA25P2P05", "marks_awarded": -2, "evaluation_status": "incorrect_negative", "predicted_answer": ["A", "B", "C", "D"], "ground_truth": ["A", "B", "C"], "attempt": 1}
+ {"question_id": "JA25P2P06", "marks_awarded": -2, "evaluation_status": "incorrect_negative", "predicted_answer": ["C"], "ground_truth": ["A", "B"], "attempt": 1}
+ {"question_id": "JA25P2P07", "marks_awarded": -2, "evaluation_status": "incorrect_negative", "predicted_answer": ["A", "B", "C"], "ground_truth": ["A"], "attempt": 1}
+ {"question_id": "JA25P2P08", "marks_awarded": 2, "evaluation_status": "partial_2_of_3_plus", "predicted_answer": ["B", "C"], "ground_truth": ["A", "B", "C"], "attempt": 1}
+ {"question_id": "JA25P2P09", "marks_awarded": 0, "evaluation_status": "incorrect", "predicted_answer": ["0.5"], "ground_truth": [["1.65", "1.67"]], "attempt": 1}
+ {"question_id": "JA25P2P10", "marks_awarded": 0, "evaluation_status": "incorrect", "predicted_answer": ["12"], "ground_truth": [["11.7", "11.9"]], "attempt": 1}
+ {"question_id": "JA25P2P11", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["1.6"], "ground_truth": ["1.6"], "attempt": 1}
+ {"question_id": "JA25P2P12", "marks_awarded": 0, "evaluation_status": "incorrect", "predicted_answer": ["3"], "ground_truth": [["2.3", "2.4"]], "attempt": 1}
+ {"question_id": "JA25P2P13", "marks_awarded": 0, "evaluation_status": "incorrect", "predicted_answer": ["1"], "ground_truth": ["0.2"], "attempt": 1}
+ {"question_id": "JA25P2P14", "marks_awarded": 0, "evaluation_status": "incorrect", "predicted_answer": ["2.4"], "ground_truth": ["1.2"], "attempt": 1}
+ {"question_id": "JA25P2P15", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["170"], "ground_truth": [["167", "171"]], "attempt": 1}
+ {"question_id": "JA25P2P16", "marks_awarded": 0, "evaluation_status": "incorrect", "predicted_answer": ["16"], "ground_truth": [["26", "33"]], "attempt": 1}
+ {"question_id": "JA25P2C01", "marks_awarded": -1, "evaluation_status": "incorrect", "predicted_answer": ["B"], "ground_truth": ["A"], "attempt": 1}
+ {"question_id": "JA25P2C02", "marks_awarded": -1, "evaluation_status": "incorrect", "predicted_answer": ["B"], "ground_truth": ["A"], "attempt": 1}
+ {"question_id": "JA25P2C03", "marks_awarded": 3, "evaluation_status": "correct", "predicted_answer": ["D"], "ground_truth": ["D"], "attempt": 1}
+ {"question_id": "JA25P2C04", "marks_awarded": 3, "evaluation_status": "correct", "predicted_answer": ["C"], "ground_truth": ["C"], "attempt": 1}
+ {"question_id": "JA25P2C05", "marks_awarded": 4, "evaluation_status": "correct_full", "predicted_answer": ["C", "D"], "ground_truth": ["C", "D"], "attempt": 1}
+ {"question_id": "JA25P2C06", "marks_awarded": 4, "evaluation_status": "correct_full", "predicted_answer": ["B", "D"], "ground_truth": ["B", "D"], "attempt": 1}
+ {"question_id": "JA25P2C07", "marks_awarded": 4, "evaluation_status": "correct_full", "predicted_answer": ["A", "C"], "ground_truth": ["A", "C"], "attempt": 1}
+ {"question_id": "JA25P2C08", "marks_awarded": -2, "evaluation_status": "incorrect_negative", "predicted_answer": ["A", "B"], "ground_truth": ["B", "C"], "attempt": 1}
+ {"question_id": "JA25P2C09", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["11.0"], "ground_truth": [["10.85", "11.1"]], "attempt": 1}
+ {"question_id": "JA25P2C10", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["3.95"], "ground_truth": [["3.85", "4.15"]], "attempt": 1}
+ {"question_id": "JA25P2C11", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["16"], "ground_truth": [["15.5", "16.5"]], "attempt": 1}
+ {"question_id": "JA25P2C12", "marks_awarded": 0, "evaluation_status": "incorrect", "predicted_answer": ["67"], "ground_truth": [["4", "4.25"]], "attempt": 1}
+ {"question_id": "JA25P2C13", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["2.5"], "ground_truth": [["2.4", "2.55"]], "attempt": 1}
+ {"question_id": "JA25P2C14", "marks_awarded": 0, "evaluation_status": "incorrect", "predicted_answer": ["106"], "ground_truth": [["105.4", "105.6"]], "attempt": 1}
+ {"question_id": "JA25P2C15", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["7.73"], "ground_truth": [["7.5", "7.8"]], "attempt": 1}
+ {"question_id": "JA25P2C16", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["2"], "ground_truth": ["2"], "attempt": 1}
+ {"question_id": "JA25P2M01", "marks_awarded": -1, "evaluation_status": "incorrect", "predicted_answer": ["D"], "ground_truth": ["C"], "attempt": 1}
+ {"question_id": "JA25P2M02", "marks_awarded": -1, "evaluation_status": "incorrect", "predicted_answer": ["D"], "ground_truth": ["B"], "attempt": 1}
+ {"question_id": "JA25P2M03", "marks_awarded": -1, "evaluation_status": "incorrect", "predicted_answer": ["3"], "ground_truth": ["C"], "attempt": 1}
+ {"question_id": "JA25P2M04", "marks_awarded": -1, "evaluation_status": "incorrect", "predicted_answer": ["D"], "ground_truth": ["A"], "attempt": 1}
+ {"question_id": "JA25P2M05", "marks_awarded": -2, "evaluation_status": "incorrect_negative", "predicted_answer": ["D"], "ground_truth": ["A", "B"], "attempt": 1}
+ {"question_id": "JA25P2M06", "marks_awarded": 4, "evaluation_status": "correct_full", "predicted_answer": ["A", "C"], "ground_truth": ["A", "C"], "attempt": 1}
+ {"question_id": "JA25P2M07", "marks_awarded": 4, "evaluation_status": "correct_full", "predicted_answer": ["A", "C"], "ground_truth": ["A", "C"], "attempt": 1}
+ {"question_id": "JA25P2M08", "marks_awarded": 1, "evaluation_status": "partial_1_of_2_plus", "predicted_answer": ["B"], "ground_truth": ["B", "C", "D"], "attempt": 1}
+ {"question_id": "JA25P2M09", "marks_awarded": 0, "evaluation_status": "incorrect", "predicted_answer": ["2"], "ground_truth": [["0.7", "0.8"]], "attempt": 1}
+ {"question_id": "JA25P2M10", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["6"], "ground_truth": ["6"], "attempt": 1}
+ {"question_id": "JA25P2M11", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["0.3"], "ground_truth": [["0.27", "0.33"]], "attempt": 1}
+ {"question_id": "JA25P2M12", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["-2"], "ground_truth": ["-2"], "attempt": 1}
+ {"question_id": "JA25P2M13", "marks_awarded": 0, "evaluation_status": "incorrect", "predicted_answer": ["4"], "ground_truth": ["-2"], "attempt": 1}
+ {"question_id": "JA25P2M14", "marks_awarded": 4, "evaluation_status": "correct", "predicted_answer": ["0.25"], "ground_truth": [["0.2", "0.3"]], "attempt": 1}
+ {"question_id": "JA25P2M15", "marks_awarded": 0, "evaluation_status": "incorrect", "predicted_answer": ["1"], "ground_truth": ["3"], "attempt": 1}
+ {"question_id": "JA25P2M16", "marks_awarded": 0, "evaluation_status": "incorrect", "predicted_answer": ["7"], "ground_truth": ["21"], "attempt": 1}
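The per-question records above can be rolled up into the totals reported in `summary.md` with a few lines of Python. The repository's own scoring code is not shown in this commit, so this is only a sketch: the field names (`marks_awarded`, `evaluation_status`) come from the records themselves, while the `aggregate` helper and the tallying logic are assumptions.

```python
import json

# Sketch: roll up a summary.jsonl produced by a benchmark run.
# Sums marks_awarded and counts each evaluation_status value.
def aggregate(lines):
    total = 0
    counts = {}
    for line in lines:
        rec = json.loads(line)
        total += rec["marks_awarded"]
        status = rec["evaluation_status"]
        counts[status] = counts.get(status, 0) + 1
    return total, counts

# Two records copied verbatim from the file above.
sample = [
    '{"question_id": "JA25P1P01", "marks_awarded": 3, "evaluation_status": "correct", "predicted_answer": ["A"], "ground_truth": ["A"], "attempt": 1}',
    '{"question_id": "JA25P1P07", "marks_awarded": -2, "evaluation_status": "incorrect_negative", "predicted_answer": ["B", "D"], "ground_truth": ["A", "D"], "attempt": 1}',
]
total, counts = aggregate(sample)
print(total, counts)  # → 1 {'correct': 1, 'incorrect_negative': 1}
```

Running this over all 96 records reproduces the status counts in the summary below.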
results/anthropic_claude-sonnet-4_JEE_ADVANCED_2025_20250603_220354/summary.md ADDED
@@ -0,0 +1,33 @@
+ # Benchmark Results: anthropic/claude-sonnet-4
+ **Exam Name:** JEE_ADVANCED
+ **Exam Year:** 2025
+ **Timestamp:** 20250603_220354
+ **Total Questions in Dataset:** 578
+ **Questions Filtered Out:** 482
+ **Total Questions Processed in this Run:** 96
+
+ ---
+
+ ## Exam Scoring Results
+ **Overall Score:** **147** / **360**
+ - **Fully Correct Answers:** 47
+ - **Partially Correct Answers:** 3
+ - **Incorrectly Answered (Choice Made):** 45
+ - **Skipped Questions:** 0
+ - **API/Parse Failures:** 1
+ - **Total Questions Processed:** 96
+
+ ### Detailed Score Calculation by Question Type
+ **Integer (42 questions):** 88 marks
+ *Calculation:* 22 Correct (+4) + 20 Incorrect (0) = 88
+ **Mcq Multiple Correct (21 questions):** 28 marks
+ *Calculation:* 10 Correct (+4) + 3 Partial + 8 Incorrect (-2) = 28
+ **Mcq Single Correct (33 questions):** 27 marks
+ *Calculation:* 15 Correct (+3) + 17 Incorrect (-1) + 1 API/Parse Fail (-1) = 27
+
+ ### Section Breakdown
+ | Section | Score | Fully Correct | Partially Correct | Incorrect Choice | Skipped | API/Parse Failures |
+ |---------------|-------|---------------|-------------------|------------------|---------|--------------------|
+ | Chemistry | 68 | 20 | 0 | 12 | 0 | 0 |
+ | Math | 45 | 15 | 1 | 15 | 0 | 1 |
+ | Physics | 33 | 12 | 2 | 18 | 0 | 0 |
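The section breakdown can be derived directly from the question IDs: in the `JA25P<paper><subject><nn>` pattern seen in the records (e.g. `JA25P1P01`, `JA25P2C08`), the seventh character is `P` (Physics), `C` (Chemistry), or `M` (Math). A sketch of that grouping, with `section_scores` as a hypothetical helper name:

```python
# Sketch: rebuild per-section scores from summary.jsonl records.
# Assumes the JA25P<paper><subject><nn> question_id pattern seen above,
# where index 6 holds the subject letter.
SUBJECTS = {"P": "Physics", "C": "Chemistry", "M": "Math"}

def section_scores(records):
    scores = {"Physics": 0, "Chemistry": 0, "Math": 0}
    for rec in records:
        subject = SUBJECTS[rec["question_id"][6]]
        scores[subject] += rec["marks_awarded"]
    return scores

# Three abbreviated records for illustration.
records = [
    {"question_id": "JA25P1P01", "marks_awarded": 3},
    {"question_id": "JA25P1C04", "marks_awarded": -1},
    {"question_id": "JA25P2M10", "marks_awarded": 4},
]
print(section_scores(records))  # → {'Physics': 3, 'Chemistry': -1, 'Math': 4}
```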