Modalities: Tabular, Text · Languages: English · Libraries: Datasets
kanishka committed · Commit d182636 · Parent: 95e4d2d

fixed citation

Files changed (1): README.md (+14 -5)
README.md CHANGED
@@ -33,10 +33,19 @@ subordinate concepts inherit the properties of their superordinate (hypernyms).
  ### Citation Information
 
  ```
- @article{misra2022comps,
-   title={COMPS: Conceptual Minimal Pair Sentences for testing Property Knowledge and Inheritance in Pre-trained Language Models},
-   author={Misra, Kanishka and Rayz, Julia Taylor and Ettinger, Allyson},
-   journal={arXiv preprint arXiv:2210.01963},
-   year={2022}
+ @inproceedings{misra-etal-2023-comps,
+   title = "{COMPS}: Conceptual Minimal Pair Sentences for testing Robust Property Knowledge and its Inheritance in Pre-trained Language Models",
+   author = "Misra, Kanishka and
+     Rayz, Julia and
+     Ettinger, Allyson",
+   booktitle = "Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics",
+   month = may,
+   year = "2023",
+   address = "Dubrovnik, Croatia",
+   publisher = "Association for Computational Linguistics",
+   url = "https://aclanthology.org/2023.eacl-main.213",
+   doi = "10.18653/v1/2023.eacl-main.213",
+   pages = "2928--2949",
+   abstract = "A characteristic feature of human semantic cognition is its ability to not only store and retrieve the properties of concepts observed through experience, but to also facilitate the inheritance of properties (can breathe) from superordinate concepts (animal) to their subordinates (dog){---}i.e. demonstrate property inheritance. In this paper, we present COMPS, a collection of minimal pair sentences that jointly tests pre-trained language models (PLMs) on their ability to attribute properties to concepts and their ability to demonstrate property inheritance behavior. Analyses of 22 different PLMs on COMPS reveal that they can easily distinguish between concepts on the basis of a property when they are trivially different, but find it relatively difficult when concepts are related on the basis of nuanced knowledge representations. Furthermore, we find that PLMs can show behaviors suggesting successful property inheritance in simple contexts, but fail in the presence of distracting information, which decreases the performance of many models sometimes even below chance. This lack of robustness in demonstrating simple reasoning raises important questions about PLMs{'} capacity to make correct inferences even when they appear to possess the prerequisite knowledge.",
  }
  ```
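
Since the card tags this dataset for the 🤗 Datasets library, here is a minimal sketch of loading it in Python. The repository id `kanishka/comps` is an assumption based on the uploader's username and may differ from the dataset's actual Hub id.

```python
from datasets import load_dataset

# NOTE: "kanishka/comps" is a hypothetical repository id inferred from the
# uploader's username; replace it with the dataset's actual Hub id.
comps = load_dataset("kanishka/comps")

print(comps)            # list the available splits and their columns
split = next(iter(comps))
print(comps[split][0])  # inspect the first record of the first split
```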