---
title: relation_extraction
datasets:
- none
tags:
- evaluate
- metric
description: >-
  This metric is used for evaluating the F1 accuracy of input references and predictions.
sdk: gradio
sdk_version: 3.19.1
app_file: app.py
pinned: false
license: apache-2.0
---

# Metric Card for relation_extraction evaluation

This metric evaluates the quality of relation extraction output by computing the micro and macro F1 scores of the extracted relations.

## Metric Description

This metric can be used to evaluate relation extraction models.

## How to Use

This metric takes three inputs: predictions, references (the ground truth), and mode. Predictions and references are lists of lists of dictionaries, where each dictionary describes one relation (the head and tail entity names and types, plus the relation type). The mode defines how relations are matched:

```python
import evaluate

metric_path = "Ikala-allen/relation_extraction"
module = evaluate.load(metric_path)

references = [
    [
        {"head": "phipigments", "head_type": "brand", "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"},
        {"head": "tinadaviespigments", "head_type": "brand", "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"},
    ]
]

predictions = [
    [
        {"head": "phipigments", "head_type": "product", "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"},
        {"head": "tinadaviespigments", "head_type": "brand", "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"},
    ]
]

evaluation_scores = module.compute(predictions=predictions, references=references, mode="strict")
```

### Inputs
- **predictions** (`list` of `list` of `dict`): A list of predicted relations from the model.
- **references** (`list` of `list` of `dict`): A list of ground-truth (reference) relations to compare the predictions against.
- **mode** (`str`): Evaluation mode, either `"strict"` or `"boundaries"`. `"strict"` takes both the entity types and the entity spans of a relation into account, while `"boundaries"` only considers the entity spans.
- **detailed_scores** (`bool`): If `True`, returns a separate score for each relation type; if `False`, returns only the overall scores.
- **relation_types** (`list`): A list of relation types to consider during evaluation. If not provided, the relation types are constructed from the reference (ground-truth) data.

### Output Values

**output** (`dict` of `dict`s): A dictionary mapping each relation type to its scoring metrics (precision, recall, F1 score).

- **ALL** (`dict`): scores over all relation types
  - **tp**: true positive count
  - **fp**: false positive count
  - **fn**: false negative count
  - **p**: precision
  - **r**: recall
  - **f1**: micro F1 score
  - **Macro_f1**: macro F1 score
  - **Macro_p**: macro precision
  - **Macro_r**: macro recall
- **{selected relation type}** (`dict`): scores for that relation type
  - **tp**: true positive count
  - **fp**: false positive count
  - **fn**: false negative count
  - **p**: precision
  - **r**: recall
  - **f1**: micro F1 score

Output example:

```python
{'tp': 1, 'fp': 1, 'fn': 1, 'p': 50.0, 'r': 50.0, 'f1': 50.0, 'Macro_f1': 50.0, 'Macro_p': 50.0, 'Macro_r': 50.0}
```

Note: `Macro_f1`, `Macro_p`, `Macro_r`, `p`, `r`, and `f1` are percentages between 0 and 100, while `tp`, `fp`, and `fn` are counts that depend on how many relations are passed in.
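For reference, the micro scores in `ALL` are computed from the pooled `tp`/`fp`/`fn` counts, while the macro scores are the average of the per-type values. The snippet below is a minimal sketch of those standard formulas, not the metric's own implementation; the `prf1` helper and the per-type counts are illustrative only:

```python
def prf1(tp, fp, fn):
    """Precision, recall and F1 as percentages (0-100)."""
    p = 100 * tp / (tp + fp) if tp + fp else 0.0
    r = 100 * tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Illustrative per-relation-type counts
per_type = {
    "sell":       {"tp": 3, "fp": 1, "fn": 0},
    "belongs_to": {"tp": 0, "fp": 0, "fn": 1},
}

# Micro scores: pool the counts over all relation types, then apply the formulas
tp = sum(c["tp"] for c in per_type.values())
fp = sum(c["fp"] for c in per_type.values())
fn = sum(c["fn"] for c in per_type.values())
print(prf1(tp, fp, fn))  # (75.0, 75.0, 75.0)

# Macro scores: score each relation type separately, then average
per_type_scores = [prf1(**c) for c in per_type.values()]
macro_p = sum(s[0] for s in per_type_scores) / len(per_type_scores)
macro_r = sum(s[1] for s in per_type_scores) / len(per_type_scores)
macro_f1 = sum(s[2] for s in per_type_scores) / len(per_type_scores)
print(macro_p, macro_r, macro_f1)  # 37.5 50.0 42.857142857142854
```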
### Examples

Example 1: a single prediction/reference pair, `mode="strict"`, `detailed_scores=False`; only the overall scores are returned.

```python
import evaluate

metric_path = "Ikala-allen/relation_extraction"
module = evaluate.load(metric_path)

references = [
    [
        {"head": "phipigments", "head_type": "brand", "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"},
        {"head": "tinadaviespigments", "head_type": "brand", "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"},
        {'head': 'A醛賦活緊緻精華', 'tail': 'Serum', 'head_type': 'product', 'tail_type': 'category', 'type': 'belongs_to'},
    ]
]

predictions = [
    [
        {"head": "phipigments", "head_type": "product", "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"},
        {"head": "tinadaviespigments", "head_type": "brand", "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"},
    ]
]

evaluation_scores = module.compute(predictions=predictions, references=references, mode="strict", detailed_scores=False, relation_types=[])
print(evaluation_scores)
>>> {'tp': 1, 'fp': 1, 'fn': 2, 'p': 50.0, 'r': 33.333333333333336, 'f1': 40.0, 'Macro_f1': 25.0, 'Macro_p': 25.0, 'Macro_r': 25.0}
```

Example 2: the same prediction/reference pair as Example 1, `mode="boundaries"`, `detailed_scores=False`; only the overall scores are returned.

```python
import evaluate

metric_path = "Ikala-allen/relation_extraction"
module = evaluate.load(metric_path)

references = [
    [
        {"head": "phipigments", "head_type": "brand", "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"},
        {"head": "tinadaviespigments", "head_type": "brand", "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"},
        {'head': 'A醛賦活緊緻精華', 'tail': 'Serum', 'head_type': 'product', 'tail_type': 'category', 'type': 'belongs_to'},
    ]
]

predictions = [
    [
        {"head": "phipigments", "head_type": "product", "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"},
        {"head": "tinadaviespigments", "head_type": "brand", "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"},
    ]
]

evaluation_scores = module.compute(predictions=predictions, references=references, mode="boundaries", detailed_scores=False, relation_types=[])
print(evaluation_scores)
>>> {'tp': 2, 'fp': 0, 'fn': 1, 'p': 100.0, 'r': 66.66666666666667, 'f1': 80.0, 'Macro_f1': 50.0, 'Macro_p': 50.0, 'Macro_r': 50.0}
```

Example 3: multiple predictions and references, `mode="boundaries"`, `detailed_scores=True`; scores are returned for each relation type.

```python
import evaluate

metric_path = "Ikala-allen/relation_extraction"
module = evaluate.load(metric_path)

references = [
    [
        {"head": "phipigments", "head_type": "brand", "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"},
        {"head": "tinadaviespigments", "head_type": "brand", "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"},
    ],
    [
        {'head': 'SABONTAIWAN', 'tail': '大馬士革玫瑰有機光燦系列', 'head_type': 'brand', 'tail_type': 'product', 'type': 'sell'},
        {'head': 'A醛賦活緊緻精華', 'tail': 'Serum', 'head_type': 'product', 'tail_type': 'category', 'type': 'belongs_to'},
    ]
]

predictions = [
    [
        {"head": "phipigments", "head_type": "product", "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"},
        {"head": "tinadaviespigments", "head_type": "brand", "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"},
    ],
    [
        {'head': 'SABONTAIWAN', 'tail': '大馬士革玫瑰有機光燦系列', 'head_type': 'brand', 'tail_type': 'product', 'type': 'sell'},
        {'head': 'SNTAIWAN', 'tail': '大馬士革玫瑰有機光燦系列', 'head_type': 'brand', 'tail_type': 'product', 'type': 'sell'}
    ]
]

evaluation_scores = module.compute(predictions=predictions, references=references, mode="boundaries", detailed_scores=True, relation_types=[])
print(evaluation_scores)
>>> {'sell': {'tp': 3, 'fp': 1, 'fn': 0, 'p': 75.0, 'r': 100.0, 'f1': 85.71428571428571}, 'belongs_to': {'tp': 0, 'fp': 0, 'fn': 1, 'p': 0, 'r': 0, 'f1': 0}, 'ALL': {'tp': 3, 'fp': 1, 'fn': 1, 'p': 75.0, 'r': 75.0, 'f1': 75.0, 'Macro_f1': 42.857142857142854, 'Macro_p': 37.5, 'Macro_r': 50.0}}
```

Example 4: multiple predictions and references, `mode="boundaries"`, `detailed_scores=True`, `relation_types=["belongs_to"]`; only the `belongs_to` relation type is scored.

```python
import evaluate

metric_path = "Ikala-allen/relation_extraction"
module = evaluate.load(metric_path)

references = [
    [
        {"head": "phipigments", "head_type": "brand", "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"},
        {"head": "tinadaviespigments", "head_type": "brand", "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"},
    ],
    [
        {'head': 'SABONTAIWAN', 'tail': '大馬士革玫瑰有機光燦系列', 'head_type': 'brand', 'tail_type': 'product', 'type': 'sell'},
        {'head': 'A醛賦活緊緻精華', 'tail': 'Serum', 'head_type': 'product', 'tail_type': 'category', 'type': 'belongs_to'},
    ]
]

predictions = [
    [
        {"head": "phipigments", "head_type": "product", "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"},
        {"head": "tinadaviespigments", "head_type": "brand", "type": "sell", "tail": "國際認證之色乳", "tail_type": "product"},
    ],
    [
        {'head': 'SABONTAIWAN', 'tail': '大馬士革玫瑰有機光燦系列', 'head_type': 'brand', 'tail_type': 'product', 'type': 'sell'},
        {'head': 'SNTAIWAN', 'tail': '大馬士革玫瑰有機光燦系列', 'head_type': 'brand', 'tail_type': 'product', 'type': 'sell'}
    ]
]

evaluation_scores = module.compute(predictions=predictions, references=references, mode="boundaries", detailed_scores=True, relation_types=["belongs_to"])
print(evaluation_scores)
>>> {'belongs_to': {'tp': 0, 'fp': 0, 'fn': 1, 'p': 0, 'r': 0, 'f1': 0}, 'ALL': {'tp': 0, 'fp': 0, 'fn': 1, 'p': 0, 'r': 0, 'f1': 0, 'Macro_f1': 0.0, 'Macro_p': 0.0, 'Macro_r': 0.0}}
```

## Limitations and Bias

This metric supports both `strict` and `boundaries` modes, and the evaluation can be restricted to specific `relation_types`. Choose parameters that fit your use case, since the resulting F1 scores can differ substantially between settings. Predicted and reference entity names must match exactly, including case and spacing; a prediction that does not match a reference relation exactly is counted as a false positive, and the unmatched reference relation as a false negative (see the illustrative snippet at the end of this card).

## Citation

```bibtex
@misc{taille2020sincere,
  author = {Taillé, Bruno and Guigue, Vincent and Scoutheeten, Geoffrey and Gallinari, Patrick},
  title  = {Let's Stop Incorrect Comparisons in End-to-end Relation Extraction!},
  year   = {2020},
  url    = {https://arxiv.org/abs/2009.10684}
}
```

## Further References

This evaluation metric's implementation uses https://github.com/btaille/sincere/blob/master/code/utils/evaluation.py.
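As a concrete illustration of the exact-match requirement described under Limitations and Bias, the sketch below feeds the metric a prediction whose head differs from the reference only by an extra space. The expected counts in the comments follow from the exact-matching behaviour described above; they are illustrative rather than the output of a verified run.

```python
import evaluate

module = evaluate.load("Ikala-allen/relation_extraction")

references = [[
    {"head": "phipigments", "head_type": "brand", "type": "sell",
     "tail": "國際認證之色乳", "tail_type": "product"},
]]
# The predicted head contains an extra space ("phip igments"), so it does not
# match the reference name exactly, even though it refers to the same entity.
predictions = [[
    {"head": "phip igments", "head_type": "brand", "type": "sell",
     "tail": "國際認證之色乳", "tail_type": "product"},
]]

scores = module.compute(predictions=predictions, references=references, mode="boundaries")
print(scores)
# Expected (per the exact-match rule above): tp=0, fp=1, fn=1,
# so precision, recall and F1 are all 0.
```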