Update app.py
--- a/app.py
+++ b/app.py
@@ -282,7 +282,7 @@ def build_leaderboard_tab(leaderboard_table_file, text_recog_file, Inaccessible_
     md_tail = f"""
 # Notice
 Sometimes, API calls to closed-source models may not succeed. In such cases, we will repeat the calls for unsuccessful samples until it becomes impossible to obtain a successful response. It is important to note that due to rigorous security reviews by OpenAI, GPT4V refuses to provide results for the 84 samples in OCRBench.
-If you would like to include your model in the OCRBench leaderboard, please follow the evaluation instructions provided on [GitHub](https://github.com/Yuliang-Liu/MultimodalOCR), [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) or [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval) and feel free to contact us via email at zhangli123@hust.edu.cn. We will update the leaderboard in time."""
+If you would like to include your model in the OCRBench leaderboard, please follow the evaluation instructions provided on [GitHub](https://github.com/Yuliang-Liu/MultimodalOCR), [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) or [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval), and send us the result screenshots as well as the output files (predictions) generated by your model. If you have any questions, feel free to contact us via email at zhangli123@hust.edu.cn. We will update the leaderboard in time."""
     gr.Markdown(md_tail, elem_id="leaderboard_markdown")
 
 def build_demo(leaderboard_table_file, recog_table_file, Inaccessible_model_file):
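For context, the changed `md_tail` string is rendered inside the Gradio layout that `build_leaderboard_tab` builds. Below is a minimal sketch of that rendering pattern; the `build_notice_block` helper and the surrounding `gr.Blocks` scaffolding are simplified assumptions for illustration, not the actual app.py code — only the `gr.Markdown(md_tail, elem_id="leaderboard_markdown")` call mirrors the diff above.

```python
# Minimal sketch (assumed structure): rendering a notice string as Markdown
# in a Gradio app, as app.py does inside build_leaderboard_tab.
import gradio as gr

def build_notice_block():
    md_tail = """
# Notice
Sometimes, API calls to closed-source models may not succeed.
"""
    # elem_id lets the Space's custom CSS target this specific Markdown block.
    gr.Markdown(md_tail, elem_id="leaderboard_markdown")

if __name__ == "__main__":
    # Components must be created inside a Blocks context to be rendered.
    with gr.Blocks() as demo:
        build_notice_block()
    demo.launch()
```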