{ "paper_id": "F13-2012", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:41:57.879299Z" }, "title": "Lexical access via a simple co-occurrence network Trouver les mots dans un simple r\u00e9seau de co-occurrences", "authors": [ { "first": "Gemma", "middle": [], "last": "Bel-Enguix", "suffix": "", "affiliation": { "laboratory": "UMR 7279", "institution": "Aix Marseille Universit\u00e9", "location": { "settlement": "Marseille" } }, "email": "gemma.belenguix@gmail.com" }, { "first": "Michael", "middle": [], "last": "Zock", "suffix": "", "affiliation": { "laboratory": "UMR 7279", "institution": "Aix Marseille Universit\u00e9", "location": { "settlement": "Marseille" } }, "email": "michael.zock@lif.univ-mrs.fr" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Au cours des deux derni\u00e8res d\u00e9cennies des psychologues et des linguistes informaticiens ont essay\u00e9 de mod\u00e9liser l'acc\u00e8s lexical en construisant des simulations ou des ressources. Cependant, parmi ces chercheurs, pratiquement personne n'a vraiment cherch\u00e9 \u00e0 am\u00e9liorer la navigation dans des 'dictionnaires \u00e9lectroniques destin\u00e9s aux producteurs de langue'. Pourtant, beaucoup de travaux ont \u00e9t\u00e9 consacr\u00e9s \u00e0 l'\u00e9tude du ph\u00e9nom\u00e8ne du mot sur le bout de la langue et \u00e0 la construction de r\u00e9seaux lexicaux. Par ailleurs, vu les progr\u00e8s r\u00e9alis\u00e9s en neurosciences et dans le domaine des r\u00e9seaux complexes, on pourrait \u00eatre tent\u00e9 de construire un simulacre du dictionnaire mental, ou, \u00e0 d\u00e9faut une ressource destin\u00e9e aux producteurs de langue (\u00e9crivains, conf\u00e9renciers). Nous sommes restreints en construisant un r\u00e9seau de co-occurrences \u00e0 partir des r\u00e9sum\u00e9s de Wikipedia, le but \u00e9tant de v\u00e9rifier jusqu'o\u00f9 l'on pouvait pousser une telle ressource pour trouver un mot, sachant que la ressource ne contient pas de liens s\u00e9mantiques, car le r\u00e9seau est construit de mani\u00e8re automatique et \u00e0 partir de textes non-annot\u00e9s.", "pdf_parse": { "paper_id": "F13-2012", "_pdf_hash": "", "abstract": [ { "text": "Au cours des deux derni\u00e8res d\u00e9cennies des psychologues et des linguistes informaticiens ont essay\u00e9 de mod\u00e9liser l'acc\u00e8s lexical en construisant des simulations ou des ressources. Cependant, parmi ces chercheurs, pratiquement personne n'a vraiment cherch\u00e9 \u00e0 am\u00e9liorer la navigation dans des 'dictionnaires \u00e9lectroniques destin\u00e9s aux producteurs de langue'. Pourtant, beaucoup de travaux ont \u00e9t\u00e9 consacr\u00e9s \u00e0 l'\u00e9tude du ph\u00e9nom\u00e8ne du mot sur le bout de la langue et \u00e0 la construction de r\u00e9seaux lexicaux. Par ailleurs, vu les progr\u00e8s r\u00e9alis\u00e9s en neurosciences et dans le domaine des r\u00e9seaux complexes, on pourrait \u00eatre tent\u00e9 de construire un simulacre du dictionnaire mental, ou, \u00e0 d\u00e9faut une ressource destin\u00e9e aux producteurs de langue (\u00e9crivains, conf\u00e9renciers). 
Nous sommes restreints en construisant un r\u00e9seau de co-occurrences \u00e0 partir des r\u00e9sum\u00e9s de Wikipedia, le but \u00e9tant de v\u00e9rifier jusqu'o\u00f9 l'on pouvait pousser une telle ressource pour trouver un mot, sachant que la ressource ne contient pas de liens s\u00e9mantiques, car le r\u00e9seau est construit de mani\u00e8re automatique et \u00e0 partir de textes non-annot\u00e9s.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Lexical choice is an obligatory step in language production. During this stage, the author (speaker or writer) has to select a word expressing the concept or idea he/she has in mind. Of course, before choosing a word, one must have accessed a set of words from which to choose. While writers may use an external resource (dictionary) in case of word-finding problems, speakers always rely on the internal or mental lexicon (human brain), which is known for its remarkable organisation. It is still a matter of debate where and in what form words are stored in the brain, yet there is a general belief concerning dictionaries, namely: the bigger (the more entries), the better. While making sense from a practical point of view, this statement may nevertheless be misleading. Storage does not imply accessibility. This is well known via the 'tip of the tongue' problem (TOT, Brown & McNeill, 1966; Brown, 1991) 2 , but it also holds for electronic resources. For example, variations of the input (query) or variations concerning the principle underlying the building of the resource may considerably affect the success of finding a given target word (Zock & Schwab, 2013). While authors need dictionaries, the latter are only truly useful if the words they contain are easily accessible. To allow for this we need good indexes (Zock & Schwab, 2013).", "cite_spans": [ { "start": 868, "end": 896, "text": "(TOT, Brown & McNeill, 1966;", "ref_id": "BIBREF1" }, { "start": 897, "end": 909, "text": "Brown, 1991)", "ref_id": "BIBREF0" }, { "start": 1152, "end": 1173, "text": "(Zock & Schwab, 2013)", "ref_id": "BIBREF20" }, { "start": 1330, "end": 1351, "text": "(Zock & Schwab, 2013)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Lexical access has been widely studied and modelled by psychologists (Dell, 1986; Levelt et al. 1999). However, none of this work addresses the problem of word finding via an electronic resource. The work done by computational lexicographers is generally based on the readers' needs: words are listed alphabetically, and little if any provision is made to allow for conceptual input. Indeed, what kind of information (query, conceptual input) should a user give if the target words are 'avatar', 'tiara' or 'eschatology'? While there are many kinds of dictionaries, only very few of them are really helpful for the writer or speaker. Still, great efforts have been made to improve the situation. In fact, there are quite a few onomasiological dictionaries, like Roget's Thesaurus (Roget, 1852), and various network-based dictionaries, with WordNet (Fellbaum, 1998; Miller et al., 1990) being the best known. There are also various collocation dictionaries (BBI, OECD), reverse dictionaries (Edmonds, 1999, or Wordsmyth, www.wordsmyth.net) and OneLook, which combines a dictionary (WordNet) and an encyclopedia (Wikipedia). 
Finally, there is MEDAL (Rundell and Fox, 2002), a thesaurus produced with the help of Kilgarriff's Sketch Engine (Kilgarriff et al., 2004).", "cite_spans": [ { "start": 69, "end": 81, "text": "(Dell, 1986;", "ref_id": "BIBREF2" }, { "start": 82, "end": 101, "text": "Levelt et al. 1999)", "ref_id": "BIBREF9" }, { "start": 781, "end": 794, "text": "(Roget, 1852)", "ref_id": "BIBREF16" }, { "start": 850, "end": 866, "text": "(Fellbaum, 1998;", "ref_id": "BIBREF4" }, { "start": 867, "end": 887, "text": "Miller et al., 1990)", "ref_id": "BIBREF13" }, { "start": 992, "end": 1040, "text": "(Edmonds, 1999, or Wordsmyth, www.wordsmyth.net)", "ref_id": null }, { "start": 1238, "end": 1263, "text": "(Kilgarriff et al., 2004)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Despite its shortcomings, of all these proposals WordNet (WN) clearly stands out. While built manually, it embodies a number of features known from the mental lexicon: the lexicon is a multidimensional network whose nodes (words) are linked via various kinds of relations. WNs have been built for many languages (http://www.globalwordnet.org), and the initial resource has been adapted and improved to yield eXtended WordNet (Mihalcea et Moldovan, 2001), a resource able to support a great number of NLP tasks.", "cite_spans": [ { "start": 427, "end": 454, "text": "(Mihalcea et Moldovan, 2001", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Other networks have been built differently. For example, JeuxDeMots (JdM, Lafourcade, 2007) was built via a huge community (crowdsourcing) playing games. The approach is similar to other web-based resources, like Open Mind Word Expert (Mihalcea et Chklovski, 2003) and SemKey (Marchetti et al., 2007). JdM is coupled with AKI (Joubert et Lafourcade, 2012), which is supposed to allow for word access. To what extent this is truly so remains an empirical question, despite the fact that the initial results look quite promising (Joubert et al. 2011). Zock et al. (2010) propose an association-based index to support interactive lexical access for language producers. To this end they suggest building a matrix on the basis of co-occurrences. Put differently, they try to capture word associations and the links holding between them. This approach seems attractive as the network is built automatically, corpus-based and computer-supported, and the resource allows for graph-based analysis (relative distance, clustering effects, etc.). However, this work is also confronted with some unsolved problems, like disambiguation of the input (query, clue), explicit labelling of the link type and clustering of the output (the answers given in response to a query). Usability will be hampered as long as all this cannot be done automatically. We try to tackle a similar problem, but we do not address interactive search, only automatic access. More precisely, we try to address the tip-of-the-tongue problem by using a graph-based approach. Overall, the following ideas underlie this work: a) usage of a non-annotated source, containing a large number of words; b) structuring of the lexicon in the simplest way possible, i.e. by relying only on graph theory and statistics; c) exclusive usage of co-occurrences for building the graph. 
Semantic relations are ignored at this stage; d) exclusive reliance on automatic processing (hence, no manual annotation); e) design of very simple graph-search algorithms.", "cite_spans": [ { "start": 73, "end": 90, "text": "Lafourcade, 2007)", "ref_id": "BIBREF8" }, { "start": 234, "end": 263, "text": "(Mihalcea et Chklovski, 2003)", "ref_id": "BIBREF12" }, { "start": 275, "end": 299, "text": "(Marchetti et al., 2007)", "ref_id": "BIBREF10" }, { "start": 326, "end": 355, "text": "(Joubert et Lafourcade, 2012)", "ref_id": "BIBREF5" }, { "start": 528, "end": 549, "text": "(Joubert et al. 2011)", "ref_id": "BIBREF6" }, { "start": 552, "end": 570, "text": "Zock et al. (2010)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The approach is extremely simple. We use co-occurrences because they offer a straightforward way to structure words on the basis of weights, i.e. numerical values. We do not claim any cognitive relevance beyond statistics, which nevertheless seem to work when modelling language production (Levelt et al. 1999).", "cite_spans": [ { "start": 289, "end": 308, "text": "(Levelt et al. 1999", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our goal is to build a co-occurrence graph able to achieve results similar to those of annotated systems. To this end, we decided to start with a large, non-annotated corpus: the entire set of Wikipedia's abstracts, i.e. almost 4 million documents. To build the graph, our system runs through a pipeline of five modules:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Co-occurrence network", "sec_num": "2" }, { "text": "1. document cleaning (deletion of stop words); 2. parsing of the abstracts and extraction of nouns and adjectives; 3. lemmatisation of word forms to avoid duplicates (horse, horses); 4. computation of the (undirected) graph's nodes. Links are created between direct neighbours; 5. computation of the edges' weights. The weight of an edge is equal to the number of times the corresponding co-occurrence is observed. We only use absolute values.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Co-occurrence network", "sec_num": "2" }, { "text": "Performing the operations described above yields a graph of 1,595,133 distinct nodes, of which nearly half (48%, i.e. 765,081) are hapaxes, that is, terms occurring only once within the source. To understand why, one must take into account the nature of the resource and the nature of the words used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Co-occurrence network", "sec_num": "2" }, { "text": "Since our source is an encyclopaedia, it contains an unusually high number of terms related to science and history, people's names, names of geographical locations, concepts from other languages, etc. Concerning the extracted words, it should be noted that only nouns and adjectives were used. The deletion of verbs and adverbs is motivated by practical considerations: decreasing the size of the network alleviates processing. Put differently, this choice was made only for this specific experiment. We wanted to focus on nouns and adjectives, keeping them even if their weights are very low. Stop words have also been eliminated, but for a different reason: they are hardly ever used as 'clues', and using them nevertheless may bias the results. Finally, we get a weighted list of nodes. 
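To make the five-module pipeline above concrete, here is a minimal sketch of how such a co-occurrence graph could be built from raw abstracts. It is not the authors' implementation: it assumes spaCy for tagging and lemmatisation and NetworkX for the weighted graph (neither tool is mentioned in the paper), and it keeps proper nouns alongside common nouns, since the most heavily weighted nodes reported below include place names.

```python
# Minimal sketch of the five-module pipeline described above (not the
# authors' code). Assumptions: spaCy handles POS tagging and lemmatisation,
# NetworkX stores the undirected weighted graph.
import networkx as nx
import spacy

nlp = spacy.load("en_core_web_sm")  # hypothetical model choice


def build_cooccurrence_graph(abstracts):
    """Build an undirected graph whose edge weights count how often two
    noun/adjective lemmas occur as direct neighbours in an abstract."""
    graph = nx.Graph()
    for doc in nlp.pipe(abstracts):
        # Modules 1-3: drop stop words and non-alphabetic tokens, keep
        # nouns and adjectives only, and lemmatise them.
        lemmas = [tok.lemma_.lower() for tok in doc
                  if tok.pos_ in {"NOUN", "PROPN", "ADJ"}
                  and not tok.is_stop and tok.is_alpha]
        # Modules 4-5: link direct neighbours and accumulate absolute counts.
        for left, right in zip(lemmas, lemmas[1:]):
            if left == right:
                continue
            if graph.has_edge(left, right):
                graph[left][right]["weight"] += 1
            else:
                graph.add_edge(left, right, weight=1)
    return graph


if __name__ == "__main__":
    toy_abstracts = [
        "The elephant has a long trunk, two tusks and large ears.",
        "A computer mouse sits next to the keyboard and the screen.",
    ]
    g = build_cooccurrence_graph(toy_abstracts)
    print(g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges")
```

Linking only direct neighbours, as in module 4, keeps the graph relatively sparse; widening the co-occurrence window would yield denser but noisier counts.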
Note that the weight of more than two thirds of the edges is 1, while the weight of the remaining third is greater than 1, the proportion being 69/31. Moreover, there is only one edge with a value greater than 100,000, state-united, i.e. 'United States', the most frequently mentioned co-occurrence in the Wikipedia abstracts. The weights of a few other edges exceed 30,000. The distribution of the edges' weights is shown in Figure 2 ; data whose value is 1 are omitted from the figure. \u2022 Get the set of nodes V_T = N(c_1) \u2229 N(c_2) \u2229 N(c_3) and consider V_c = {c_1, c_2, c_3} to be the set of nodes representing the clues. We define a subgraph of G, G_T, that is a complete bipartite graph, where every element of V_T is connected to every element of V_c;", "cite_spans": [], "ref_spans": [ { "start": 1192, "end": 1200, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Co-occurrence network", "sec_num": "2" }, { "text": "\u2022 Rank the nodes of V_T according to their strength (s) in G_T. For every v in V_T, s_v = 1/3 (w(v,c_1) + w(v,c_2) + w(v,c_3)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Co-occurrence network", "sec_num": "2" }, { "text": "Judging from randomly chosen examples, the system's capacity to find words is remarkably good, provided that all the clues are from the same domain. Otherwise performance may degrade: compare (a1, b1) with (b2). In the first two cases the target appears at the top of the list, whereas in (b2) the target word gets demoted to the 13th position. Being from a different domain, the clue 'India' impedes performance. On the other hand, widening the clues' semantic scope has a positive effect, see (c1, c2). 2. If we provide 'tusk', 'trunk', 'India', we get the target only in the 13th position, right after 'first', 'year', 'country', 'name', 'member', 'species', 'born', 'family', 'small', 'large', 'long' and 'upper'. c) Target: 'computer':", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance", "sec_num": "4" }, { "text": "1. The clues 'mouse', 'keyboard', 'screen' produce a large number of hits, of which the program displays only the first fifty. The top ten are: 1 (player, 600); 2 (computer, 264); 3 (first, 192); 4 (appearance, 191); 5 (name, 178); 6 (album, 99); 7 (small, 90); 8 (role, 89); 9 (music, 89); 10 (band, 82).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance", "sec_num": "4" }, { "text": "2. The clues 'mouse', 'Prolog' and 'Java' (figure 3b) produce only four hits: 1 (program, 58); 2 (computer, 47); 3 (development, 31); 4 (book, 16).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance", "sec_num": "4" }, { "text": "The examples given above could make us believe that the program always works quite well. While this is often true, it is not always the case. For example, when we tried the examples used by other authors, namely 'wine', 'harvest', 'grape', the system was unable to find the target word 'vintage'. On the other hand, by slightly changing the input, providing 'vintage', 'harvest' and 'grape', we did get 'wine' in the first position and with a very strong score (735). 
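The ranking behaviour illustrated by all of the examples above can be sketched as follows. This is our own minimal reading of the algorithm of Section 3 (candidates are the common neighbours of the clues, scored by the mean weight of their edges to the clues), reusing the hypothetical NetworkX graph g from the earlier sketch; it is not the authors' implementation.

```python
# Minimal reading of the search step (not the authors' code): candidates are
# the common neighbours of the clues (V_T), ranked by the mean weight of
# their edges to the clues, i.e. s_v = 1/k * sum_i w(v, c_i).
def rank_candidates(graph, clues):
    """Return (word, strength) pairs sorted by decreasing strength."""
    if not all(c in graph for c in clues):
        return []  # at least one clue is missing from the graph
    neighbourhoods = [set(graph.neighbors(c)) for c in clues]
    candidates = set.intersection(*neighbourhoods) - set(clues)
    scored = [(sum(graph[v][c]["weight"] for c in clues) / len(clues), v)
              for v in candidates]
    return [(word, strength) for strength, word in sorted(scored, reverse=True)]


# Hypothetical usage, mirroring the 'elephant' example above:
# for word, strength in rank_candidates(g, ["tusk", "trunk", "ear"])[:10]:
#     print(word, strength)
```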
This asymmetry suggests both a conclusion and a question: (a) the algorithm is not yet good enough, since it works in some cases but not in others; (b) since some terms are definitely better triggers or cues than others, we may wonder which words make good cues, and we could use this resource to answer this question empirically. This is a possibility we are currently exploring.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance", "sec_num": "4" }, { "text": "Experiments done with the resource built on the basis of the co-occurrences extracted from Wikipedia show that it allows for accessing words. They also show, if ever necessary, that not all words are equally good as inputs. This being so, we could use this resource as a workbench to find out empirically which words, or which specific kinds of words, are good inputs for a given target word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "While there is little doubt that Wikipedia is quite a useful source, it also has its shortcomings. For example, it does not contain episodic knowledge (information concerning current events, anecdotes, ...), hence it may be good to consider other types of texts containing more common words (authentic exchanges between people).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "Concerning the system's performance, one may conclude that it is quite good, but we should bear in mind that we dealt with automatic access and not interactive word finding. While the number of hits is (within limits) of little importance in the former case (computers will quickly find a word even in a huge list, say, a list of 3,000 tokens), it becomes a critical issue in the latter case. This is why typing the links (i.e. labelling the relation they express) or clustering the output is an important component for supporting interactive word search. This being said, getting a clearer picture concerning clues may still be of interest to those designing tools to support word access.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "This work has been supported by the European Commission under a Marie Curie Fellowship.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The TOT problem refers to the fact that an author knows a word but is occasionally unable to access it. Typically, he has activated most of the target's features, but fails to retrieve some of the crucial, final, sound-related fragments. This is why the speaker has the impression that the word has nearly made it, but not quite: the word is stuck on the tip of the tongue.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A review of the tip of the tongue experience", "authors": [ { "first": "A", "middle": [], "last": "Brown", "suffix": "" } ], "year": 1991, "venue": "Psychological Bulletin", "volume": "109", "issue": "", "pages": "204--223", "other_ids": {}, "num": null, "urls": [], "raw_text": "BROWN, A. (1991). A review of the tip of the tongue experience. 
Psychological Bulletin, 109, pages 204-223", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The tip of the tongue phenomenon", "authors": [ { "first": "R", "middle": [], "last": "Brown", "suffix": "" }, { "first": "D", "middle": [], "last": "Mc Neill", "suffix": "" } ], "year": 1966, "venue": "Journal of Verbal Learning and Verbal Behavior", "volume": "5", "issue": "", "pages": "325--337", "other_ids": {}, "num": null, "urls": [], "raw_text": "BROWN, R. et MC NEILL, D. (1966). The tip of the tongue phenomenon. Journal of Verbal Learning and Verbal Behavior, 5, pages 325-337", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A spreading-activation theory of retrieval in sentence production", "authors": [ { "first": "G", "middle": [], "last": "Dell", "suffix": "" } ], "year": 1986, "venue": "Psychological Review", "volume": "93", "issue": "", "pages": "283--321", "other_ids": {}, "num": null, "urls": [], "raw_text": "DELL, G. (1986). A spreading-activation theory of retrieval in sentence production. Psychological Review, 93, pages 283-321.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The Oxford Reverse Dictionary", "authors": [ { "first": "D", "middle": [], "last": "Edmonds", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "EDMONDS, D. (ed.) (1999). The Oxford Reverse Dictionary, Oxford University Press, Oxford.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "WordNet: An Electronic Lexical Database and some of its Applications", "authors": [ { "first": "C", "middle": [], "last": "Fellbaum", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "FELLBAUM, C. (\u00e9d.) (1998). WordNet: An Electronic Lexical Database and some of its Applications. Cambridge, MA: MIT Press.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A new dynamic approach for lexical networks evaluation", "authors": [ { "first": "A", "middle": [], "last": "Joubert", "suffix": "" }, { "first": "M", "middle": [], "last": "Lafourcade", "suffix": "" } ], "year": 2012, "venue": "Proceedings LREC'12 (Eighth International Conference on Language Resources and Evaluation)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "JOUBERT, A., LAFOURCADE, M. (2012). A new dynamic approach for lexical networks evaluation. In Choukri et al. (eds.), Proceedings LREC'12 (Eighth International Conference on Language Resources and Evaluation), Istanbul, Turkey, European Language Resources Association (ELRA).", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "\u00c9valuation et consolidation d'un r\u00e9seau lexical via un outil pour retrouver le mot sur le bout de la langue", "authors": [ { "first": "A", "middle": [], "last": "Joubert", "suffix": "" }, { "first": "M", "middle": [], "last": "Lafourcade", "suffix": "" }, { "first": "D", "middle": [], "last": "Schwab", "suffix": "" }, { "first": "M", "middle": [], "last": "Zock", "suffix": "" } ], "year": 2011, "venue": "Actes de la 18\u00e8me conf\u00e9rence sur le Traitement Automatique des Langues Naturelles (TALN)", "volume": "", "issue": "", "pages": "295--306", "other_ids": {}, "num": null, "urls": [], "raw_text": "JOUBERT, A., LAFOURCADE, M., SCHWAB, D. et ZOCK, M. (2011). \u00c9valuation et consolidation d'un r\u00e9seau lexical via un outil pour retrouver le mot sur le bout de la langue. 
Actes de la 18\u00e8me conf\u00e9rence sur le Traitement Automatique des Langues Naturelles (TALN), Montpellier, pp. 295-306", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The Sketch Engine", "authors": [ { "first": "A", "middle": [], "last": "Kilgarriff", "suffix": "" }, { "first": "R", "middle": [], "last": "Rychly", "suffix": "" }, { "first": "P", "middle": [], "last": "Smrz", "suffix": "" }, { "first": "D", "middle": [], "last": "Tugwell", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 11th Euralex International Congress", "volume": "", "issue": "", "pages": "105--116", "other_ids": {}, "num": null, "urls": [], "raw_text": "KILGARRIFF, A., RYCHLY, R., SMRZ, P. et TUGWELL, D. (2004). The Sketch Engine. Proceedings of the 11th Euralex International Congress. Lorient, France, pages 105-116", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Making people play for lexical acquisition", "authors": [ { "first": "M", "middle": [], "last": "Lafourcade", "suffix": "" } ], "year": 2007, "venue": "Proceedings SNLP 2007 (7th Symposium on Natural Language Processing)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "LAFOURCADE, M. (2007). Making people play for lexical acquisition. In Proceedings SNLP 2007 (7th Symposium on Natural Language Processing), Pattaya, Thailand.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A theory of lexical access in speech production", "authors": [ { "first": "W", "middle": [], "last": "Levelt", "suffix": "" }, { "first": "A", "middle": [], "last": "Roelofs", "suffix": "" }, { "first": "A", "middle": [], "last": "Meyer", "suffix": "" } ], "year": 1999, "venue": "Behavioral and Brain Sciences", "volume": "22", "issue": "", "pages": "1--75", "other_ids": {}, "num": null, "urls": [], "raw_text": "LEVELT, W., ROELOFS, A. et MEYER, A. (1999). A theory of lexical access in speech production. Behavioral and Brain Sciences, 22, pages 1-75.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "SEMKEY. A semantic collaborative tagging system", "authors": [ { "first": "A", "middle": [], "last": "Marchetti", "suffix": "" }, { "first": "M", "middle": [], "last": "Tesconi", "suffix": "" }, { "first": "F", "middle": [], "last": "Ronzano", "suffix": "" }, { "first": "M", "middle": [], "last": "Rosella", "suffix": "" }, { "first": "S", "middle": [], "last": "Minutoli", "suffix": "" } ], "year": 2007, "venue": "Proceedings of WWW2007", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "MARCHETTI, A., TESCONI, M., RONZANO, F., ROSELLA, M. et MINUTOLI, S. (2007). SEMKEY. A semantic collaborative tagging system. In Proceedings of WWW2007, Banff, Canada.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Extended wordnet: progress report", "authors": [ { "first": "R", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "D", "middle": [], "last": "Moldovan", "suffix": "" } ], "year": 2001, "venue": "NAACL 2001 (Workshop on WordNet and Other Lexical Resources)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "MIHALCEA, R. et MOLDOVAN, D. (2001). Extended wordnet: progress report. 
In NAACL 2001 (Workshop on WordNet and Other Lexical Resources), Pittsburgh, USA.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Open Mind Word Expert: Creating large annotated data collections with web users' help", "authors": [ { "first": "R", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "T", "middle": [], "last": "Chklovski", "suffix": "" } ], "year": 2003, "venue": "LINC 2003 (Proceedings of the EACL 2003 Workshop on Linguistically Annotated Corpora)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "MIHALCEA, R. et CHKLOVSKI, T. (2003). Open Mind Word Expert: Creating large annotated data collections with web users' help. In LINC 2003 (Proceedings of the EACL 2003 Workshop on Linguistically Annotated Corpora), Budapest.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Introduction to WordNet: an on-line lexical database", "authors": [ { "first": "G", "middle": [], "last": "Miller", "suffix": "" }, { "first": "R", "middle": [], "last": "Beckwith", "suffix": "" }, { "first": "C", "middle": [], "last": "Fellbaum", "suffix": "" }, { "first": "D", "middle": [], "last": "Gross", "suffix": "" }, { "first": "K", "middle": [], "last": "Miller", "suffix": "" } ], "year": 1990, "venue": "International Journal of Lexicography", "volume": "3", "issue": "", "pages": "235--244", "other_ids": {}, "num": null, "urls": [], "raw_text": "MILLER, G., BECKWITH, R., FELLBAUM, C., GROSS, D., MILLER, K. (1990). Introduction to WordNet: an on-line lexical database. International Journal of Lexicography 3, pages 235-244.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A bootstrapping algorithm for automatically harvesting semantic relations", "authors": [ { "first": "M", "middle": [], "last": "Pennacchiotti", "suffix": "" }, { "first": "P", "middle": [], "last": "Pantel", "suffix": "" } ], "year": 2006, "venue": "Proceedings of ICoS (Inference in Computational Semantics)", "volume": "", "issue": "", "pages": "87--96", "other_ids": {}, "num": null, "urls": [], "raw_text": "PENNACCHIOTTI, M., PANTEL, P. (2006). A bootstrapping algorithm for automatically harvesting semantic relations. In Proceedings of ICoS (Inference in Computational Semantics), Buxton, England, pages 87-96", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Macmillan English Dictionary for Advanced Learners (MEDAL)", "authors": [], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "RUNDELL, M. et FOX, G. (eds.) (2002). Macmillan English Dictionary for Advanced Learners (MEDAL). Oxford: Macmillan Education.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Thesaurus of English Words and Phrases", "authors": [ { "first": "P", "middle": [], "last": "Roget", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "ROGET, P. (1852). Thesaurus of English Words and Phrases. Longman, London.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Similarity of semantic relations", "authors": [ { "first": "P", "middle": [ "D" ], "last": "Turney", "suffix": "" } ], "year": 2006, "venue": "Computational Linguistics", "volume": "32", "issue": "", "pages": "379--416", "other_ids": {}, "num": null, "urls": [], "raw_text": "TURNEY, P.D. (2006). Similarity of semantic relations. 
Computational Linguistics 32, pages 379-416", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Deliberate word access: an intuition, a roadmap and some preliminary empirical results", "authors": [ { "first": "M", "middle": [], "last": "Zock", "suffix": "" }, { "first": "O", "middle": [], "last": "Ferret", "suffix": "" }, { "first": "D", "middle": [], "last": "Schwab", "suffix": "" } ], "year": 2010, "venue": "Int J Speech Technol", "volume": "13", "issue": "", "pages": "201--218", "other_ids": {}, "num": null, "urls": [], "raw_text": "ZOCK, M., FERRET, O., SCHWAB, D. (2010). Deliberate word access: an intuition, a roadmap and some preliminary empirical results. Int J Speech Technol 13, pages 201-218", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Storage does not guarantee access: The problem of organizing and accessing words in a speaker's lexicon", "authors": [ { "first": "M", "middle": [], "last": "Zock", "suffix": "" }, { "first": "D", "middle": [], "last": "Schwab", "suffix": "" } ], "year": 2011, "venue": "Journal of Cognitive Science", "volume": "12", "issue": "", "pages": "233--259", "other_ids": {}, "num": null, "urls": [], "raw_text": "ZOCK, M. et SCHWAB, D. (2011). Storage does not guarantee access: The problem of organizing and accessing words in a speaker's lexicon. Journal of Cognitive Science 12, pages 233-259", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "L'index, une ressource vitale pour guider les auteurs \u00e0 trouver le mot bloqu\u00e9 sur le bout de la langue", "authors": [ { "first": "M", "middle": [], "last": "Zock", "suffix": "" }, { "first": "D", "middle": [], "last": "Schwab", "suffix": "" } ], "year": 2013, "venue": "Gala, N. et M. Zock (\u00e9ds). Ressources lexicales : construction et utilisation. Lingvisticae Investigationes", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "ZOCK, M. et SCHWAB, D. (2013). L'index, une ressource vitale pour guider les auteurs \u00e0 trouver le mot bloqu\u00e9 sur le bout de la langue. In Gala, N. et M. Zock (\u00e9ds). Ressources lexicales : construction et utilisation. Lingvisticae Investigationes, John Benjamins, Amsterdam, The Netherlands", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "Weights of the nodes of the graph. Figure 1 shows the distribution of frequencies. The weight of most nodes is below 10, speaking in absolute terms. Yet, 86 words are solid hubs with more than 100,000 occurrences. Here are the 20 nodes with the greatest weight: [(State, 502915), (Born, 424243), (New, 349236), (County, 348655), (District, 344620), (First, 339583), (American, 330643), (United, 320260), (School, 280589), (Village, 277337), (City, 276718), (Album, 272357), (Film, 260753), (National, 251727), (Family, 247912), (University, 239137), (Year, 238700), (South, 236760), (Part, 231373), (Football, 224046)]", "type_str": "figure", "num": null }, "FIGREF1": { "uris": null, "text": "Weights of the edges of the graph. 3 Search algorithm: The search for the target word T in a graph G is done via some clues, say c_1, c_2, c_3 (mouse, Prolog, Java in figure 3), which act as inputs. G = (V, E) stands for the graph, with V the set of vertices and E the set of edges. The clues c_1, c_2, c_3 \u2208 V. N(i) denotes the neighbourhood of a node i \u2208 V and is defined as {j \u2208 V | e_{i,j} \u2208 E}. 
The search algorithm is as follows: \u2022 Define the neighbourhood of c_1, c_2, c_3: N(c_1), N(c_2), N(c_3);", "type_str": "figure", "num": null }, "FIGREF2": { "uris": null, "text": "3A - Graph G_T for (tusk, trunk, ear); 3B - G_T for (mouse, Prolog, Java). a) Target: 'hand': The clues 'finger', 'wrist', 'glove' yield 9 hits, displaying the target in the first position: 1 (hand, 153); 2 (right, 29); 3 (arm, 25); 4 (part, 24); 5 (first, 21); 6 (side, 18); 7 (worn, 17); 8 (person, 12); 9 (game, 8). b) Target: 'elephant': 1. By entering the words 'tusk', 'trunk', 'ear' (figure 3a), we get a list of 14 items, of which the first 10 are as follows: 1 (elephant, 51); 2 (upper, 28); 3 (species, 28); 4 (single, 25); 5 (lower, 24); 6 (small, 23); 7 (album, 22); 8 (large, 19); 9 (name, 18); 10 (side, 17).", "type_str": "figure", "num": null } } } }