
GraphOmni: A Comprehensive and Extendable Benchmark Framework for Large Language Models on Graph-theoretic Tasks

Published on Apr 17

Abstract

AI-generated summary: GraphOmni is a benchmark framework that evaluates LLMs on graph reasoning tasks by analyzing serialization formats and prompt schemes, and proposes a reinforcement learning approach that selects serialization-prompt pairings to improve accuracy.

In this paper, we present GraphOmni, a comprehensive benchmark framework for systematically evaluating the graph reasoning capabilities of LLMs. By analyzing critical dimensions, including graph types, serialization formats, and prompt schemes, we provide extensive insights into the strengths and limitations of current LLMs. Our empirical findings emphasize that no single serialization format or prompting strategy consistently outperforms the others. Motivated by these insights, we propose a reinforcement learning-based approach that dynamically selects the best serialization-prompt pairings, resulting in significant accuracy improvements. GraphOmni's modular and extensible design establishes a robust foundation for future research, facilitating advancements toward general-purpose graph reasoning models.
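The abstract describes a reinforcement-learning-based selector for serialization-prompt pairings but does not spell out the algorithm on this page. As a purely illustrative stand-in, the sketch below uses a simple epsilon-greedy bandit over (serialization, prompt) arms; the arm names, the reward definition, and the `evaluate_llm` helper are all hypothetical and are not taken from the paper.

```python
# Illustrative sketch only: a generic epsilon-greedy bandit that picks a
# (serialization, prompt) pair per query. This is NOT the authors' method;
# all names below are hypothetical placeholders.
import random
from collections import defaultdict

SERIALIZATIONS = ["adjacency_list", "adjacency_matrix", "edge_list"]
PROMPTS = ["zero_shot", "few_shot", "chain_of_thought"]
ARMS = [(s, p) for s in SERIALIZATIONS for p in PROMPTS]

counts = defaultdict(int)
values = defaultdict(float)   # running mean reward (e.g. accuracy) per arm

def select_arm(epsilon: float = 0.1):
    """Explore a random pairing with probability epsilon, otherwise exploit."""
    if random.random() < epsilon or not counts:
        return random.choice(ARMS)
    return max(ARMS, key=lambda arm: values[arm])

def update(arm, reward: float):
    """Incrementally update the running mean reward for the chosen arm."""
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]

# Usage idea: for each graph task instance, pick an arm, query the LLM with
# that serialization and prompt scheme, score correctness in [0, 1], update.
# arm = select_arm(); reward = evaluate_llm(graph, task, *arm); update(arm, reward)
```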

Community

Paper author

🎓 GraphOmni delivers the most comprehensive evaluation of LLMs on graph reasoning tasks.

arXiv: https://arxiv.org/abs/2504.12764
GitHub: https://github.com/GAI-Community/GraphOmni
Project Page: https://gai-community.github.io/Graph-Omni/
HF dataset: https://huggingface.co/datasets/G-A-I/GraphOmni (loading sketch below)
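Since the benchmark data is hosted on the Hugging Face Hub, it can presumably be loaded with the standard `datasets` library. The snippet below is a minimal sketch; the configuration and split names are not stated here, so check the dataset card for the actual layout.

```python
# Minimal sketch: load the GraphOmni dataset from the Hugging Face Hub.
# Config and split names are assumptions -- consult the dataset card at
# https://huggingface.co/datasets/G-A-I/GraphOmni for the actual layout.
from datasets import load_dataset

ds = load_dataset("G-A-I/GraphOmni")   # may require an explicit config name
print(ds)                              # shows available splits and columns

split = next(iter(ds))                 # first available split name
for example in ds[split].select(range(3)):
    print(example)                     # peek at a few records
```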
