arXiv:2112.08542

QAFactEval: Improved QA-Based Factual Consistency Evaluation for Summarization

Published on Dec 16, 2021
Authors: Alexander R. Fabbri, Chien-Sheng Wu, Wenhao Liu, Caiming Xiong
AI-generated summary

This work compares entailment-based and question answering (QA)-based metrics for factual consistency in text summarization, proposes QAFactEval as an improved QA-based metric, and shows that combining the two paradigms yields a further performance boost.

Abstract

Factual consistency is an essential quality of text summarization models in practical settings. Existing work in evaluating this dimension can be broadly categorized into two lines of research, entailment-based and question answering (QA)-based metrics, and different experimental setups often lead to contrasting conclusions as to which paradigm performs the best. In this work, we conduct an extensive comparison of entailment and QA-based metrics, demonstrating that carefully choosing the components of a QA-based metric, especially question generation and answerability classification, is critical to performance. Building on those insights, we propose an optimized metric, which we call QAFactEval, that leads to a 14% average improvement over previous QA-based metrics on the SummaC factual consistency benchmark, and also outperforms the best-performing entailment-based metric. Moreover, we find that QA-based and entailment-based metrics can offer complementary signals and be combined into a single metric for a further performance boost.
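The pipeline the abstract describes has a fixed shape regardless of which learned components fill each slot: select answer candidates from the summary, generate a question for each, check whether the source can answer it, answer it from the source, and compare the two answers. Below is a minimal, self-contained illustration of that control flow; every learned component is replaced by a deliberately trivial heuristic, and none of the names are taken from the QAFactEval codebase, so treat it as a schematic of the paradigm rather than the actual metric.

```python
import re


def answer_candidates(summary: str) -> list[str]:
    """Stand-in for answer selection (real metrics extract noun phrases or
    named entities): here, just runs of capitalized words."""
    return [m.strip() for m in re.findall(r"(?:[A-Z][a-z]+\s?)+", summary)]


def answer_from_source(candidate: str, source: str) -> str | None:
    """Stand-in for question generation + answerability classification + QA:
    the implied question about `candidate` is treated as answerable iff the
    candidate's head token occurs in the source, and the 'answer' is the
    matching span plus one following word."""
    head = re.escape(candidate.split()[0])
    match = re.search(rf"{head}(?:\s\w+)?", source)
    return match.group(0) if match else None


def token_f1(pred: str, gold: str) -> float:
    """Stand-in for answer-overlap scoring (QAFactEval uses a learned
    comparison model rather than token F1)."""
    tp, tg = set(pred.lower().split()), set(gold.lower().split())
    common = len(tp & tg)
    if common == 0:
        return 0.0
    precision, recall = common / len(tp), common / len(tg)
    return 2 * precision * recall / (precision + recall)


def qa_consistency_score(source: str, summary: str) -> float:
    """Average per-candidate score; unanswerable questions score 0, which is
    how the answerability check penalizes hallucinated content."""
    scores = []
    for candidate in answer_candidates(summary):
        found = answer_from_source(candidate, source)
        scores.append(token_f1(candidate, found) if found else 0.0)
    return sum(scores) / len(scores) if scores else 0.0


# The fabricated "Berlin" is unanswerable from the source and drags the
# score down; a faithful summary would score higher.
print(qa_consistency_score(
    "Alice moved to Paris in 2020 to study art.",
    "Alice moved to Berlin.",
))
```

Note how the unanswerable branch contributes a score of 0: this is the answerability-classification signal the abstract flags as critical, since hallucinated summary content tends to yield questions the source cannot answer. A combined metric in the spirit of the paper's final suggestion would pair this score with an entailment (NLI) score for the same source-summary pair.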


Models citing this paper: 2

Datasets citing this paper: 0

Spaces citing this paper: 1

Collections including this paper: 1