Improve dataset card: Add text-classification task and sample usage (#2)
Co-authored-by: Niels Rogge <nielsr@users.noreply.huggingface.co>
README.md CHANGED
---
language:
- zh
license: cc-by-4.0
size_categories:
- 10K<n<100K
task_categories:
- question-answering
- text-classification
pretty_name: BizFinBench
tags:
- finance
configs:
- config_name: Anomalous_Event_Attribution
  data_files:
  - split: test
    path: Anomalous_Event_Attribution/*.jsonl
- config_name: Emotion_Recognition
  data_files:
  - split: test
    path: Emotion_Recognition/*.jsonl
- config_name: Financial_Data_Description
  data_files:
  - split: test
    path: Financial_Data_Description/*.jsonl
- config_name: Financial_Knowledge_QA
  data_files:
  - split: test
    path: Financial_Knowledge_QA/*.jsonl
- config_name: Financial_Named_Entity_Recognition
  data_files:
  - split: test
    path: Financial_Named_Entity_Recognition/*.jsonl
- config_name: Financial_Numerical_Computation
  data_files:
  - split: test
    path: Financial_Numerical_Computation/*.jsonl
- config_name: Financial_Time_Reasoning
  data_files:
  - split: test
    path: Financial_Time_Reasoning/*.jsonl
- config_name: Financial_Tool_Usage
  data_files:
  - split: test
    path: Financial_Tool_Usage/*.jsonl
- config_name: Stock_Price_Prediction
  data_files:
  - split: test
    path: Stock_Price_Prediction/*.jsonl
---

# BizFinBench: A Business-Driven Real-World Financial Benchmark for Evaluating LLMs

📖 <a href="https://arxiv.org/abs/2505.19457">Paper</a> | 🐙 <a href="https://github.com/HiThink-Research/BizFinBench/">GitHub</a> | 🤗 <a href="https://huggingface.co/datasets/HiThink-Research/BizFinBench">Hugging Face</a>
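
Each task above is declared as a separate config with a single `test` split, so the benchmark should load directly with the 🤗 `datasets` library. A minimal sketch (any `config_name` from the front matter works):

```python
from datasets import load_dataset

# Load one BizFinBench task; every config exposes a single "test" split.
ds = load_dataset("HiThink-Research/BizFinBench", "Anomalous_Event_Attribution", split="test")
print(len(ds))
print(ds[0])
```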
## 🛠️ Usage

### Quick Start – Evaluate a Local Model

```sh
export MODEL_PATH=model/Qwen2.5-0.5B   # Path to the model to be evaluated
export REMOTE_MODEL_PORT=16668
export REMOTE_MODEL_URL=http://127.0.0.1:${REMOTE_MODEL_PORT}/model
export MODEL_NAME=Qwen2.5-0.5B
export PROMPT_TYPE=chat_template       # Hithink llama3 llama2 none qwen chat_template; chat_template is recommended
export TENSOR_PARALLEL=1               # Number of GPUs for tensor parallelism; not set in the original snippet, 1 is an assumed default

# First start the model as a service
python inference/predict_multi_gpu.py \
    --model ${MODEL_PATH} \
    --server_port ${REMOTE_MODEL_PORT} \
    --prompt ${PROMPT_TYPE} \
    --preprocess preprocess \
    --run_forever \
    --max_new_tokens 4096 \
    --tensor_parallel ${TENSOR_PARALLEL} &

# Pass in the config file path to start evaluation
python run.py --config config/offical/eval_fin_eval_diamond.yaml --model_name ${MODEL_NAME}
```

### Quick Start – Evaluate a Local Model and Score with a Judge Model

```sh
export MODEL_PATH=model/Qwen2.5-0.5B   # Path to the model to be evaluated
export REMOTE_MODEL_PORT=16668
export REMOTE_MODEL_URL=http://127.0.0.1:${REMOTE_MODEL_PORT}/model
export MODEL_NAME=Qwen2.5-0.5B
export PROMPT_TYPE=chat_template       # llama3 llama2 none qwen chat_template; chat_template is recommended
export TENSOR_PARALLEL=1               # Number of GPUs for the evaluated model; not set in the original snippet, 1 is an assumed default

# First start the model as a service
python inference/predict_multi_gpu.py \
    --model ${MODEL_PATH} \
    --server_port ${REMOTE_MODEL_PORT} \
    --prompt ${PROMPT_TYPE} \
    --preprocess preprocess \
    --run_forever \
    --max_new_tokens 4096 \
    --tensor_parallel ${TENSOR_PARALLEL} \
    --low_vram &

# Start the judge model
export JUDGE_MODEL_PATH=/mnt/data/llm/models/base/Qwen2.5-7B
export JUDGE_TENSOR_PARALLEL=1
export JUDGE_MODEL_PORT=16667
python inference/predict_multi_gpu.py \
    --model ${JUDGE_MODEL_PATH} \
    --server_port ${JUDGE_MODEL_PORT} \
    --prompt chat_template \
    --preprocess preprocess \
    --run_forever \
    --manual_start \
    --max_new_tokens 4096 \
    --tensor_parallel ${JUDGE_TENSOR_PARALLEL} \
    --low_vram &

# Pass in the config file path to start evaluation
python run.py --config "config/offical/eval_fin_eval.yaml" --model_name ${MODEL_NAME}
```

> **Note**: Add the `--manual_start` argument when launching the judge model, because the judge must wait until the main model finishes inference before starting (this is handled automatically by the `maybe_start_judge_model` function in `run.py`).

## ✒️Results
The models are evaluated across multiple tasks, with results color-coded to represent the top three performers for each task:
- 🥇 indicates the top-performing model.
- 🥈 represents the second-best result.
- 🥉 denotes the third-best performance.

Task columns correspond to the benchmark's task configs: AEA (Anomalous Event Attribution), FNC (Financial Numerical Computation), FTR (Financial Time Reasoning), FTU (Financial Tool Usage), FQA (Financial Knowledge QA), FDD (Financial Data Description), ER (Emotion Recognition), SP (Stock Price Prediction), and FNER (Financial Named Entity Recognition).

| Model | AEA | FNC | FTR | FTU | FQA | FDD | ER | SP | FNER | Average |
| ---------------------------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- |
| **Proprietary LLMs** | | | | | | | | | | |
| ChatGPT-o3 | 🥈 86.23 | 61.30 | 🥈 75.36 | 🥇 89.15 | 🥈 91.25 | 🥉 98.55 | 🥉 44.48 | 53.27 | 65.13 | 🥇 73.86 |
| ChatGPT-o4-mini | 🥉 85.62 | 60.10 | 71.23 | 74.40 | 90.27 | 95.73 | 🥇 47.67 | 52.32 | 64.24 | 71.29 |
| GPT-4o | 79.42 | 56.51 | 🥇 76.20 | 82.37 | 87.79 | 🥇 98.84 | 🥈 45.33 | 54.33 | 65.37 | 🥉 71.80 |
| Gemini-2.0-Flash | 🥇 86.94 | 🥉 62.67 | 73.97 | 82.55 | 90.29 | 🥈 98.62 | 22.17 | 🥉 56.14 | 54.43 | 69.75 |
| Claude-3.5-Sonnet | 84.68 | 🥈 63.18 | 42.81 | 🥈 88.05 | 87.35 | 96.85 | 16.67 | 47.60 | 63.09 | 65.59 |
| **Open Source LLMs** | | | | | | | | | | |
| Qwen2.5-7B-Instruct | 73.87 | 32.88 | 39.38 | 79.03 | 83.34 | 78.93 | 37.50 | 51.91 | 30.31 | 56.35 |
| Qwen2.5-72B-Instruct | 69.27 | 54.28 | 70.72 | 85.29 | 87.79 | 97.43 | 35.33 | 55.13 | 54.02 | 67.70 |
| Qwen2.5-VL-3B | 53.85 | 15.92 | 17.29 | 8.95 | 81.60 | 59.44 | 39.50 | 52.49 | 21.57 | 38.96 |
| Qwen2.5-VL-7B | 73.87 | 32.71 | 40.24 | 77.85 | 83.94 | 77.41 | 38.83 | 51.91 | 33.40 | 56.68 |
| Qwen2.5-VL-14B | 37.12 | 41.44 | 53.08 | 82.07 | 84.23 | 7.97 | 37.33 | 54.93 | 47.47 | 49.52 |
| Qwen2.5-VL-32B | 76.79 | 50.00 | 62.16 | 83.57 | 85.30 | 95.95 | 40.50 | 54.93 | 🥉 68.36 | 68.62 |
| Qwen2.5-VL-72B | 69.55 | 54.11 | 69.86 | 85.18 | 87.37 | 97.34 | 35.00 | 54.94 | 54.41 | 67.53 |
| Qwen3-1.7B | 77.40 | 35.80 | 33.40 | 75.82 | 73.81 | 78.62 | 22.40 | 48.53 | 11.23 | 50.78 |
| Qwen3-4B | 83.60 | 47.40 | 50.00 | 78.19 | 82.24 | 80.16 | 42.20 | 50.51 | 25.19 | 59.94 |
| Qwen3-14B | 84.20 | 58.20 | 65.80 | 82.19 | 84.12 | 92.91 | 33.00 | 52.31 | 50.70 | 67.05 |
| Qwen3-32B | 83.80 | 59.60 | 64.60 | 85.12 | 85.43 | 95.37 | 39.00 | 52.26 | 49.19 | 68.26 |
| Xuanyuan3-70B | 12.14 | 19.69 | 15.41 | 80.89 | 86.51 | 83.90 | 29.83 | 52.62 | 37.33 | 46.48 |
| Llama-3.1-8B-Instruct | 73.12 | 22.09 | 2.91 | 77.42 | 76.18 | 69.09 | 29.00 | 54.21 | 36.56 | 48.95 |
| Llama-3.1-70B-Instruct | 16.26 | 34.25 | 56.34 | 80.64 | 79.97 | 86.90 | 33.33 | 🥇 62.16 | 45.95 | 55.09 |
| Llama 4 Scout | 73.60 | 45.80 | 44.20 | 85.02 | 85.21 | 92.32 | 25.60 | 55.76 | 43.00 | 61.17 |
| DeepSeek-V3 (671B) | 74.34 | 61.82 | 72.60 | 🥈 86.54 | 🥉 91.07 | 98.11 | 32.67 | 55.73 | 🥈 71.24 | 71.57 |
| DeepSeek-R1 (671B) | 80.36 | 🥇 64.04 | 🥉 75.00 | 81.96 | 🥇 91.44 | 98.41 | 39.67 | 55.13 | 🥇 71.46 | 🥈 73.05 |
| QwQ-32B | 84.02 | 52.91 | 64.90 | 84.81 | 89.60 | 94.20 | 34.50 | 🥈 56.68 | 30.27 | 65.77 |
| DeepSeek-R1-Distill-Qwen-14B | 71.33 | 44.35 | 16.95 | 81.96 | 85.52 | 92.81 | 39.50 | 50.20 | 52.76 | 59.49 |
| DeepSeek-R1-Distill-Qwen-32B | 73.68 | 51.20 | 50.86 | 83.27 | 87.54 | 97.81 | 41.50 | 53.92 | 56.80 | 66.29 |
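
The Average column is consistent with an unweighted mean of the nine task scores, which makes it easy to recompute when adding a new model to the table:

```python
# Sanity check: the Average column matches the unweighted mean of the nine task scores.
# Values below are the DeepSeek-R1-Distill-Qwen-32B row from the table above.
scores = [73.68, 51.20, 50.86, 83.27, 87.54, 97.81, 41.50, 53.92, 56.80]
print(round(sum(scores) / len(scores), 2))  # -> 66.29, as reported
```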

## 📚 Example
<img src="static/Anomalous Event Attribution.drawio.png" alt="Data Distribution">
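
Records are stored as one JSONL file per task, and all nine configs share the same loading pattern, so pulling a sample from each task for a quick look is straightforward (config names taken from the front matter; field names vary by task, so records are printed raw):

```python
from datasets import load_dataset

# Config names as declared in the YAML front matter.
CONFIGS = [
    "Anomalous_Event_Attribution", "Emotion_Recognition", "Financial_Data_Description",
    "Financial_Knowledge_QA", "Financial_Named_Entity_Recognition",
    "Financial_Numerical_Computation", "Financial_Time_Reasoning",
    "Financial_Tool_Usage", "Stock_Price_Prediction",
]

for name in CONFIGS:
    ds = load_dataset("HiThink-Research/BizFinBench", name, split="test")
    print(f"{name}: {len(ds)} examples; first record: {ds[0]}")
```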

## ✒️Citation

```bibtex
@article{lu2025bizfinbench,
  title={BizFinBench: A Business-Driven Real-World Financial Benchmark for Evaluating LLMs},
  author={Lu, Guilong and Guo, Xuntao and Zhang, Rongjunchen and Zhu, Wenqiao and Liu, Ji},
  journal={arXiv preprint arXiv:2505.19457},
  year={2025}
}
```

## 📄 License

**Usage and License Notices**: The data and code are intended and licensed for research use only. License: Attribution-NonCommercial 4.0 International. Use should also abide by the policy of OpenAI: https://openai.com/policies/terms-of-use