{ "paper_id": "P05-1016", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:37:55.339592Z" }, "title": "Inducing Ontological Co-occurrence Vectors", "authors": [ { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Southern California", "location": { "addrLine": "4676 Admiralty Way Marina del Rey", "postCode": "90292", "region": "CA" } }, "email": "pantel@isi.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper, we present an unsupervised methodology for propagating lexical cooccurrence vectors into an ontology such as WordNet. We evaluate the framework on the task of automatically attaching new concepts into the ontology. Experimental results show 73.9% attachment accuracy in the first position and 81.3% accuracy in the top-5 positions. This framework could potentially serve as a foundation for ontologizing lexical-semantic resources and assist the development of other largescale and internally consistent collections of semantic information. * the system's attachment was a parent of the correct attachment. \u2020 error due to case mix-up (our algorithm does not differentiate between case).", "pdf_parse": { "paper_id": "P05-1016", "_pdf_hash": "", "abstract": [ { "text": "In this paper, we present an unsupervised methodology for propagating lexical cooccurrence vectors into an ontology such as WordNet. We evaluate the framework on the task of automatically attaching new concepts into the ontology. Experimental results show 73.9% attachment accuracy in the first position and 81.3% accuracy in the top-5 positions. This framework could potentially serve as a foundation for ontologizing lexical-semantic resources and assist the development of other largescale and internally consistent collections of semantic information. * the system's attachment was a parent of the correct attachment. \u2020 error due to case mix-up (our algorithm does not differentiate between case).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Despite considerable effort, there is still today no commonly accepted semantic corpus, semantic framework, notation, or even agreement on precisely which aspects of semantics are most useful (if at all). We believe that one important reason for this rather startling fact is the absence of truly wide-coverage semantic resources.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recognizing this, some recent work on wide coverage term banks, like WordNet (Miller 1990) and CYC (Lenat 1995) , and annotated corpora, like FrameNet (Baker et al. 1998) , Propbank (Kingsbury et al. 2002) and Nombank (Meyers et al. 2004) , seeks to address the problem. But manual efforts such as these suffer from two drawbacks: they are difficult to tailor to new domains, and they have internal inconsistencies that can make automating the acquisition process difficult.", "cite_spans": [ { "start": 77, "end": 90, "text": "(Miller 1990)", "ref_id": "BIBREF17" }, { "start": 99, "end": 111, "text": "(Lenat 1995)", "ref_id": "BIBREF11" }, { "start": 151, "end": 170, "text": "(Baker et al. 1998)", "ref_id": "BIBREF1" }, { "start": 182, "end": 205, "text": "(Kingsbury et al. 2002)", "ref_id": "BIBREF9" }, { "start": 218, "end": 238, "text": "(Meyers et al. 
2004)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we introduce a general framework for inducing co-occurrence feature vectors for nodes in a WordNet-like ontology. We believe that this framework will be useful for a variety of applications, including adding additional semantic information to existing semantic term banks by disambiguating lexical-semantic resources.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently, researchers have applied text-and web-mining algorithms for automatically creating lexical semantic resources like similarity lists (Lin 1998) , semantic lexicons (Riloff and Shepherd 1997) , hyponymy lists (Shinzato and Torisawa 2004; Pantel and Ravichandran 2004) , partwhole lists (Girgu et al. 2003) , and verb relation graphs (Chklovski and Pantel 2004) . However, none of these resources have been directly linked into an ontological framework. For example, in VERBOCEAN (Chklovski and Pantel 2004) , we find the verb relation \"to surpass is-stronger-than to hit\", but it is not specified that it is the achieving sense of hit where this relation applies.", "cite_spans": [ { "start": 142, "end": 152, "text": "(Lin 1998)", "ref_id": "BIBREF12" }, { "start": 173, "end": 199, "text": "(Riloff and Shepherd 1997)", "ref_id": "BIBREF20" }, { "start": 217, "end": 245, "text": "(Shinzato and Torisawa 2004;", "ref_id": "BIBREF22" }, { "start": 246, "end": 275, "text": "Pantel and Ravichandran 2004)", "ref_id": null }, { "start": 294, "end": 313, "text": "(Girgu et al. 2003)", "ref_id": null }, { "start": 341, "end": 368, "text": "(Chklovski and Pantel 2004)", "ref_id": null }, { "start": 487, "end": 514, "text": "(Chklovski and Pantel 2004)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Ontologizing semantic resources", "sec_num": null }, { "text": "We term ontologizing a lexical-semantic resource as the task of sense disambiguating the resource. This problem is different but not orthogonal to word-sense disambiguation. If we could disambiguate large collections of text with high accuracy, then current methods for building lexical-semantic resources could easily be applied to ontologize them by treating each word's senses as separate words. Our method does not require the disambiguation of text. Instead, it relies on the principle of distributional similarity and that polysemous words that are similar in one sense are dissimilar in their other senses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ontologizing semantic resources", "sec_num": null }, { "text": "Given the enriched ontologies produced by our method, we believe that ontologizing lexicalsemantic resources will be feasible. For example, consider the example verb relation \"to surpass isstronger-than to hit\" from above. To disambiguate the verb hit, we can look at all other verbs that to surpass is stronger than (for example, in VERBOCEAN, \"to surpass is-stronger-than to overtake\" and \"to surpass is-stronger-than to equal\"). Now, we can simply compare the lexical co-occurrence vectors of overtake and equal with the ontological feature vectors of the senses of hit (which are induced by our framework). 
The sense whose feature vector is most similar is selected.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ontologizing semantic resources", "sec_num": null }, { "text": "It remains to be seen in future work how well this approach performs on ontologizing various semantic resources. In this paper, we focus on the general framework for inducing the ontological co-occurrence vectors and we apply it to the task of linking new terms into the ontology.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ontologizing semantic resources", "sec_num": null }, { "text": "Our framework aims at enriching WordNet-like ontologies with syntactic features derived from a non-annotated corpus. Others have also made significant additions to WordNet. For example, in eXtended WordNet (Harabagiu et al. 1999) , the rich glosses in WordNet are enriched by disambiguating the nouns, verbs, adverbs, and adjectives with synsets. Other work has enriched WordNet synsets with topically related words extracted from the Web (Agirre et al. 2001) . While this method takes advantage of the redundancy of the web, our source of information is a local document collection, which opens the possibility for domain-specific applications.", "cite_spans": [ { "start": 206, "end": 229, "text": "(Harabagiu et al. 1999)", "ref_id": "BIBREF5" }, { "start": 441, "end": 461, "text": "(Agirre et al. 2001)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Relevant work", "sec_num": "2" }, { "text": "Distributional approaches to building semantic repositories have shown remarkable power. The underlying assumption, called the Distributional Hypothesis (Harris 1985) , links the semantics of words to their lexical and syntactic behavior. The hypothesis states that words that occur in the same contexts tend to have similar meaning. Researchers have mostly looked at representing words by their surrounding words (Lund and Burgess 1996) and by their syntactic contexts (Hindle 1990; Lin 1998) . However, these representations do not distinguish between the different senses of words. Our framework utilizes these principles and representations to induce disambiguated feature vectors. We describe these representations further in Section 3.", "cite_spans": [ { "start": 153, "end": 166, "text": "(Harris 1985)", "ref_id": "BIBREF6" }, { "start": 414, "end": 437, "text": "(Lund and Burgess 1996)", "ref_id": "BIBREF15" }, { "start": 472, "end": 485, "text": "(Hindle 1990;", "ref_id": "BIBREF8" }, { "start": 486, "end": 495, "text": "Lin 1998)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Relevant work", "sec_num": "2" }, { "text": "In supervised word sense disambiguation, senses are commonly represented by their surrounding words in a sense-tagged corpus (Gale et al. 1992) . If we had a large collection of sense-tagged text, then we could extract disambiguated feature vectors by collecting co-occurrence features for each word sense. However, since there is little sense-tagged text available, the feature vectors for a random WordNet concept would be very sparse. In our framework, feature vectors are induced from much larger untagged corpora (currently 3GB of newspaper text).", "cite_spans": [ { "start": 125, "end": 142, "text": "(Gale et al. 1992)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Relevant work", "sec_num": "2" }, { "text": "Another approach to building semantic repositories is to collect and merge existing ontologies. 
Attempts to automate the merging process have not been particularly successful (Knight and Luk 1994; Hovy 1998; Noy and Musen 1999) . The principal problems of partial and unbalanced coverage and of inconsistencies between ontologies continue to hamper these approaches.", "cite_spans": [ { "start": 175, "end": 196, "text": "(Knight and Luk 1994;", "ref_id": "BIBREF10" }, { "start": 197, "end": 207, "text": "Hovy 1998;", "ref_id": "BIBREF7" }, { "start": 208, "end": 227, "text": "Noy and Musen 1999)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Relevant work", "sec_num": "2" }, { "text": "The framework we present in Section 4 propagates any type of lexical feature up an ontology. In previous work, lexicals have often been represented by proximity and syntactic features. Consider the following sentence:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Resources", "sec_num": "3" }, { "text": "The tsunami left a trail of horror.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Resources", "sec_num": "3" }, { "text": "In a proximity approach, a word is represented by a window of words surrounding it. For the above sentence, a window of size 1 would yield two features (-1:the and +1:left) for the word tsunami. In a syntactic approach, more linguistically rich features are extracted by using each grammatical relation in which a word is involved (e.g. the features for tsunami are determiner:the and subject-of:leave).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Resources", "sec_num": "3" }, { "text": "For the purposes of this work, we consider the propagation of syntactic features. We used Minipar (Lin 1994 ), a broad-coverage parser, to analyze text. We collected the statistics on the grammatical relations (contexts) output by Minipar and used these as the feature vectors. Following Lin (1998), we measure each feature f for a word e not by its frequency but by its pointwise mutual information, $mi_{ef}$:", "cite_spans": [ { "start": 98, "end": 107, "text": "(Lin 1994", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Resources", "sec_num": "3" }, { "text": "$mi_{ef} = \\log \\frac{P(e,f)}{P(e) \\times P(f)}$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Resources", "sec_num": "3" }, { "text": "The resource described in the previous section yields lexical feature vectors for each word in a corpus. We term these vectors lexical because they are collected by looking only at the lexicals in the text (i.e. no sense information is used). We use the term ontological feature vector to refer to a feature vector whose features are for a particular sense of the word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inducing ontological features", "sec_num": "4" }, { "text": "In this section, we describe our framework for inducing ontological feature vectors for each node of an ontology. Our approach employs two phases. A divide-and-conquer algorithm first propagates syntactic features to each node in the ontology. A final sweep over the ontology, which we call the Coup phase, disambiguates the feature vectors of lexicals (leaf nodes) in the ontology.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inducing ontological features", "sec_num": "4" }, { "text": "In the first phase of the algorithm, we propagate features up the ontology in a bottom-up approach. 
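The lexical vectors being propagated are the PMI-weighted vectors of Section 3. As a minimal sketch (in Python; the names are ours and practical details such as frequency cutoffs and smoothing are omitted, so this is an illustration rather than the paper's implementation), such vectors can be built from (word, feature) co-occurrence pairs:

import math
from collections import defaultdict

# Sketch of the PMI weighting of Section 3 (hypothetical names).
def pmi_vectors(pairs):
    # pairs: iterable of (word, feature) co-occurrences,
    # e.g. ('tsunami', 'subject-of:leave')
    cooc = defaultdict(float)   # frequency of each (word, feature)
    word = defaultdict(float)   # marginal frequency of each word
    feat = defaultdict(float)   # marginal frequency of each feature
    total = 0.0
    for e, f in pairs:
        cooc[(e, f)] += 1
        word[e] += 1
        feat[f] += 1
        total += 1
    vectors = defaultdict(dict)
    for (e, f), n in cooc.items():
        # mi_ef = log( P(e,f) / (P(e) * P(f)) )
        p_e, p_f = word[e] / total, feat[f] / total
        vectors[e][f] = math.log((n / total) / (p_e * p_f))
    return vectors
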
Figure 1 gives an overview of this phase.", "cite_spans": [], "ref_spans": [ { "start": 100, "end": 108, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Divide-and-conquer phase", "sec_num": null }, { "text": "The termination condition of the recursion is met when the algorithm processes a leaf node. The feature vector that is assigned to this node is an exact copy of the lexical feature vector for that leaf (obtained from a large corpus as described in Section 3). For example, for the two leaf nodes labeled chair in Figure 2 , we assign to both the same ambiguous lexical feature vector, an excerpt of which is shown in Figure 3 .", "cite_spans": [], "ref_spans": [ { "start": 313, "end": 321, "text": "Figure 2", "ref_id": null }, { "start": 417, "end": 425, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Divide-and-conquer phase", "sec_num": null }, { "text": "When the recursion meets a non-leaf node, like chairwoman in Figure 2 , the algorithm first recursively applies itself to each of the node's children. Then, the algorithm selects those features common to its children to propagate up to its own ontological feature vector. The assumption here is that features of other senses of polysemous words will not be propagated since they will not be common across the children. Below, we describe the two methods we used to propagate features: Shared and Committee.", "cite_spans": [], "ref_spans": [ { "start": 61, "end": 69, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Divide-and-conquer phase", "sec_num": null }, { "text": "The first technique for propagating features to a concept node n from its children C is the simplest and scored best in our evaluation (see Section 5.2). The goal is that the feature vector for n represents the general grammatical behavior that its children will have. For example, for the concept node furniture in Figure 2 , we would like to assign features like object-of:clean since most types of furniture can be cleaned. However, even though you can eat on a table, we do not want the feature on:eat for the furniture concept since we do not eat on mirrors or beds.", "cite_spans": [], "ref_spans": [ { "start": 308, "end": 316, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Shared propagation algorithm", "sec_num": null }, { "text": "Figure 1 algorithm. Input: A node n and a corpus C. Step 1 (Termination Condition): If n is a leaf node then assign to n its lexical feature vector as described in Section 3. Step 2 (Recursion Step): For each child c of n, recurse on c and C. Assign a feature vector to n by propagating features from its children. Output: A feature vector assigned to each node of the tree rooted by n.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shared propagation algorithm", "sec_num": null }, { "text": "In the Shared propagation algorithm, we propagate only those features that are shared by at least t children. 
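As a minimal sketch of this recursive phase (in Python; the Node structure, the lex mapping from words to lexical feature vectors, and the use of plain frequencies rather than frequency-plus-PMI pairs are our simplifying assumptions, not the paper's implementation):

# Sketch of the Divide-and-conquer phase with Shared propagation.
# Node (with .word, .children and .vector) and lex (word -> {feature:
# frequency}) are hypothetical; propagated frequencies follow Eq. 1 below.
def propagate(node, lex, t=3):
    if not node.children:                   # Step 1: a leaf copies its
        node.vector = dict(lex[node.word])  # lexical feature vector
        return
    for child in node.children:             # Step 2: recurse on children
        propagate(child, lex, t)
    thresh = min(t, len(node.children))
    N = sum(sum(c.vector.values()) for c in node.children)
    node.vector = {}
    for f in {f for c in node.children for f in c.vector}:
        having = [c for c in node.children if f in c.vector]
        if len(having) >= thresh:           # keep only shared features
            node.vector[f] = sum(           # Eq. 1: weighted sum
                c.vector[f] * sum(c.vector.values()) / N for c in having)
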
In our experiments, we set t = min(3, |C|).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shared propagation algorithm", "sec_num": null }, { "text": "The frequency of a propagated feature is obtained by taking a weighted sum of the frequency of the feature across its children. Let $f_i$ be the frequency of the feature for child $i$, let $c_i$ be the total frequency of child $i$, and let $N$ be the total frequency of all children. Then, the frequency $f$ of the propagated feature is given by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shared propagation algorithm", "sec_num": null }, { "text": "$f = \\sum_i f_i \\times \\frac{c_i}{N}$ (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shared propagation algorithm", "sec_num": null }, { "text": "The second propagation algorithm finds a set of representative children from which to propagate features. Pantel and Lin (2002) describe an algorithm, called Clustering By Committee (CBC), which discovers clusters of words according to their meanings in text. The key to CBC is finding for each class a set of representative elements, called a committee, which most unambiguously describe the members of the class. For example, for the color concept, CBC discovers the following committee members: purple, pink, yellow, mauve, turquoise, beige, fuchsia.", "cite_spans": [ { "start": 106, "end": 127, "text": "Pantel and Lin (2002)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Committee propagation algorithm", "sec_num": null }, { "text": "Words like orange and violet are avoided because they are polysemous. For a given concept c, we build a committee by clustering its children according to their similarity and then keep the largest and most interconnected cluster (see Pantel and Lin (2002) for details).", "cite_spans": [ { "start": 234, "end": 255, "text": "Pantel and Lin (2002)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Committee propagation algorithm", "sec_num": null }, { "text": "The propagated features are then those that are shared by at least two committee members. The frequency of a propagated feature is obtained using Eq. 1 where the children $i$ are chosen only among the committee members.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Committee propagation algorithm", "sec_num": null }, { "text": "Generating committees using CBC works best for classes with many members. In its original application (Pantel and Lin 2002) , CBC discovered a flat list of coarse concepts. In the finer-grained concept hierarchy of WordNet, there are many fewer children for each concept, so we expect to have more difficulty finding committees.", "cite_spans": [ { "start": 102, "end": 123, "text": "(Pantel and Lin 2002)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Committee propagation algorithm", "sec_num": null }, { "text": "At the end of the Divide-and-conquer phase, the non-leaf nodes of the ontology contain disambiguated features (by disambiguated features, we mean that the features are co-occurrences with a particular sense of a word; the features themselves are not sense-tagged). By design of the propagation algorithm, each concept node feature is shared by at least two of its children. We assume that two polysemous words, $w_1$ and $w_2$, that are similar in one sense will be dissimilar in their other senses. 
Under the distributional hypothesis, similar words occur in the same grammatical contexts and dissimilar words occur in different grammatical contexts. We expect then that most features that are shared between $w_1$ and $w_2$ will be the grammatical contexts of their similar sense. Hence, mostly disambiguated features are propagated up the ontology in the Divide-and-conquer phase.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coup phase", "sec_num": null }, { "text": "However, the feature vectors for the leaf nodes remain ambiguous (e.g. the feature vectors for both leaf nodes labeled chair in Figure 2 are identical). In this phase of the algorithm, leaf node feature vectors are disambiguated by looking at the parents of their other senses.", "cite_spans": [], "ref_spans": [ { "start": 128, "end": 136, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Coup phase", "sec_num": null }, { "text": "Leaf nodes that are unambiguous in the ontology will have unambiguous feature vectors. For ambiguous leaf nodes (i.e. leaf nodes that have more than one concept parent), we apply the algorithm described in Figure 4 . Given a polysemous leaf node n, we remove from its ambiguous feature vector those features that intersect with the ontological feature vector of any of its other senses' parent concepts but that are not in its own parent's ontological feature vector.", "cite_spans": [], "ref_spans": [ { "start": 206, "end": 214, "text": "Figure 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Coup phase", "sec_num": null }, { "text": "Figure 4 algorithm. Input: A node n and the enriched ontology O output from the algorithm in Figure 1 . Step 1: If n is not a leaf node then return. Step 2: Remove from n's feature vector all features that intersect with the feature vector of any of n's other senses' parent concepts, but are not in n's parent concept feature vector. Output: A disambiguated feature vector for each leaf node n.", "cite_spans": [], "ref_spans": [ { "start": 73, "end": 81, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Coup phase", "sec_num": null }, { "text": "For example, consider the furniture sense of the leaf node chair in Figure 2 . After the Divide-and-conquer phase, the node chair is assigned the ambiguous lexical feature vector shown in Figure 3 . Suppose that chair only has one other sense in WordNet, which is the chairwoman sense illustrated in Figure 2. The features in bold in Figure 3 represent those features of chair that intersect with the ontological feature vector of chairwoman. In the Coup phase of our system, we remove these bold features from the furniture sense leaf node chair. What remains are features like \"chair and sofa\", \"chair and cushion\", \"Ottoman is a chair\", and \"recliner is a chair\". 
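A compact sketch of this filtering step (Step 2 of Figure 4) follows; it is written in Python with assumed leaf/parent data structures, not the paper's actual code:

# Coup phase, Step 2: drop features that intersect another sense's
# parent concept vector but are not in the leaf's own parent's vector.
def coup(leaf, other_parents):
    own = set(leaf.parent.vector)    # features of n's own parent concept
    for p in other_parents:          # parent concepts of n's other senses
        shared = set(leaf.vector) & set(p.vector)
        for f in shared - own:
            del leaf.vector[f]
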
Similarly, for the chairwoman sense of chair, we remove those features that intersect with the ontological feature vector of the chair concept (the parent of the other chair leaf node).", "cite_spans": [], "ref_spans": [ { "start": 318, "end": 326, "text": "Figure 2", "ref_id": null }, { "start": 438, "end": 446, "text": "Figure 3", "ref_id": "FIGREF1" }, { "start": 550, "end": 556, "text": "Figure", "ref_id": null }, { "start": 584, "end": 592, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Coup phase", "sec_num": null }, { "text": "As shown at the beginning of this section, concept node feature vectors are mostly unambiguous after the Divide-and-conquer phase. However, the Divide-and-conquer phase may be repeated after the Coup phase using a different termination condition. Instead of assigning to leaf nodes ambiguous lexical feature vectors, we use the leaf node feature vectors from the Coup phase. In our experiments, we did not see any significant performance difference when skipping this extra Divide-and-conquer step.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coup phase", "sec_num": null }, { "text": "In this section, we provide a quantitative and qualitative evaluation of our framework.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental results", "sec_num": "5" }, { "text": "We used Minipar (Lin 1994 ), a broad-coverage parser, to parse two 3GB corpora (TREC-9 and TREC-2002). We collected the frequency counts of the grammatical relations (contexts) output by Minipar and used these to construct the lexical feature vectors as described in Section 3.", "cite_spans": [ { "start": 16, "end": 25, "text": "(Lin 1994", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.1" }, { "text": "WordNet 2.0 served as our testing ontology. Using the algorithm presented in Section 4, we induced ontological feature vectors for the noun nodes in WordNet using the lexical co-occurrence features from the TREC-2002 corpus. Due to memory limitations, we were only able to propagate features to one quarter of the ontology. We experimented with both the Shared and Committee propagation models described in Section 4.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.1" }, { "text": "To evaluate the resulting ontological feature vectors, we considered the task of attaching new nodes into the ontology. To automatically evaluate this, we randomly extracted a set of 1000 noun leaf nodes from the ontology and accumulated lexical feature vectors for them using the TREC-9 corpus (a separate corpus from the one used to propagate features, but of the same genre). 
We experimented with two test sets:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quantitative evaluation", "sec_num": null }, { "text": "\u2022 Full: the 424 of the 1000 random nodes that existed in the TREC-9 corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quantitative evaluation", "sec_num": null }, { "text": "\u2022 Subset: the subset of Full where only nodes that do not have concept siblings are kept (380 nodes).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quantitative evaluation", "sec_num": null }, { "text": "For each random node, we computed the similarity of the node with each concept node in the ontology by computing the cosine of the angle (Salton and McGill 1983) between the lexical feature vector of the random node $e_i$ and the ontological feature vector of each concept node $e_j$: $sim(e_i, e_j) = \\frac{\\sum_f mi_{e_i f} \\times mi_{e_j f}}{\\sqrt{\\sum_f mi_{e_i f}^2 \\times \\sum_f mi_{e_j f}^2}}$. We only kept those concept nodes that had a similarity above a threshold \u03c3. We experimentally set \u03c3 = 0.1.", "cite_spans": [ { "start": 137, "end": 161, "text": "(Salton and McGill 1983)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Quantitative evaluation", "sec_num": null }, { "text": "We collected the top-K most similar concept nodes (attachment points) for each node in the test sets and computed the accuracy of finding a correct attachment point in the top-K list. Table 1 shows the result.", "cite_spans": [], "ref_spans": [ { "start": 184, "end": 191, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Top-K accuracy", "sec_num": null }, { "text": "We expected the algorithm to perform better on the Subset data set since only concepts that have exclusively lexical children must be considered for attachment. In the Full data set, the algorithm must consider each concept in the ontology as a potential attachment point. However, considering the top-5 best attachments, the algorithm performed equally well on both data sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Top-K accuracy", "sec_num": null }, { "text": "The Shared propagation algorithm consistently performed slightly better than the Committee method. As described in Section 4.1, building a committee performs best for concepts with many children. Since many nodes in WordNet have few direct children, the Shared propagation method is more appropriate. One possible extension of the Committee propagation algorithm is to find committee members from the full list of descendants of a node rather than only its immediate children.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Top-K accuracy", "sec_num": null }, { "text": "We computed the precision and recall of our system on varying numbers of returned attachments. Figure 5 and Figure 6 show the attachment precision and recall of our system when the maximum number of returned attachments ranges between 1 and 5. In Figure 5 , we see that the Shared propagation method has better precision than the Committee method. Both methods perform similarly on recall. The recall of the system increases most dramatically when returning two attachments, without much loss in precision. The low recall when returning only one attachment is due both to system errors and to the fact that many nodes in the hierarchy are polysemous. In the next section, we discuss further experiments on polysemous nodes. Figure 6 illustrates the large difference on both precision and recall when using the simpler Subset data set. 
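For reference, the attachment scoring used in these experiments (the cosine above, with threshold \u03c3 and at most K returned attachments) can be sketched as follows; the Python code is a minimal illustration assuming vectors stored as dictionaries of mutual-information values, with names of our own choosing:

import math

# Cosine between two {feature: mi} vectors, as in Section 5.2.
def cosine(u, v):
    dot = sum(u[f] * v[f] for f in u if f in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Rank concept nodes as attachment points for a new node's vector.
def attach(node_vec, concept_vecs, k=5, sigma=0.1):
    scored = [(cosine(node_vec, v), c) for c, v in concept_vecs.items()]
    scored = [sc for sc in scored if sc[0] >= sigma]
    return sorted(scored, key=lambda sc: sc[0], reverse=True)[:k]
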
All 95% confidence bounds in Figure 5 and Figure 6 range between \u00b12.8% and \u00b15.3%.", "cite_spans": [], "ref_spans": [ { "start": 95, "end": 103, "text": "Figure 5", "ref_id": "FIGREF4" }, { "start": 108, "end": 116, "text": "Figure 6", "ref_id": null }, { "start": 247, "end": 255, "text": "Figure 5", "ref_id": "FIGREF4" }, { "start": 736, "end": 744, "text": "Figure 6", "ref_id": null }, { "start": 876, "end": 884, "text": "Figure 5", "ref_id": "FIGREF4" }, { "start": 889, "end": 897, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Precision and Recall", "sec_num": null }, { "text": "Of the nodes in the Full data set, 84 are polysemous (they are attached to more than one concept node in the ontology). On average, these nodes have 2.6 senses for a total of 219 senses. Figure 7 compares the precision and recall of the system on all nodes in the Full data set vs. the 84 polysemous nodes. The 95% confidence intervals range between \u00b13.8% and \u00b15.0% for the Full data set and between \u00b11.2% and \u00b19.4% for the polysemous nodes. The precision on the polysemous nodes is consistently better since these have more possible correct attachments.", "cite_spans": [], "ref_spans": [ { "start": 186, "end": 194, "text": "Figure 7", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "Polysemous nodes", "sec_num": null }, { "text": "Clearly, when the system returns at most one or two attachments, the recall on the polysemous nodes is lower than on the Full set. However, it is interesting to note that recall on the polysemous nodes equals the recall on the Full set after K=3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Polysemous nodes", "sec_num": null }, { "text": "Figure 6: Attachment precision and recall for the Full and Subset data sets when returning at most K attachments (using the Shared propagation method).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Polysemous nodes", "sec_num": null }, { "text": "Inspection of errors revealed that the system often makes plausible attachments. Table 2 shows some example errors generated by our system. For the word arsenic, the system attached it to the concept trioxide, which is the parent of the correct attachment.", "cite_spans": [], "ref_spans": [ { "start": 81, "end": 88, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Qualitative evaluation", "sec_num": null }, { "text": "The system results may be useful to help validate the ontology. For example, for the word law, the system attached it to the regulation (as an organic process) and ordinance (legislative act) concepts. According to WordNet, law has seven possible attachment points, none of which are a legislative act. Hence, the system has found that in the TREC-9 corpus, the word law has a sense of legislative act. Similarly, the system discovered the symptom sense of vomiting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative evaluation", "sec_num": null }, { "text": "The system discovered a potential anomaly in WordNet with the word slob. The system attached slob to the fool concept (see Table 2 ). The ontology could use this output to verify whether fool should link into the unpleasant person subtree. Capitalization is not very trustworthy in large collections of text. One of our design decisions was to ignore the case of words in our corpus, which in turn caused some errors since WordNet is case sensitive. 
For example, the lexical node Munch (Norwegian artist) was attached to the munch concept (food) in error because our system accumulated all features of the word Munch in text regardless of its capitalization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative evaluation", "sec_num": null }, { "text": "One question that remains unanswered is how clean an ontology must be in order for our methodology to work. Since the structure of the ontology guides the propagation of features, a very noisy ontology will result in noisy feature vectors. However, the framework is tolerant of some amount of noise and can in fact be used to correct some errors (as shown in Section 5.3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "We showed in Section 1 how our framework can be used to disambiguate lexical-semantic resources like hyponym lists, verb relations, and unknown words or terms. Other avenues of future work include:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Adapting/extending existing ontologies. It takes a large amount of time to build resources like WordNet. However, adapting existing resources to a new corpus might be possible using our framework. Once we have enriched the ontology with features from a corpus, we can rearrange the ontological structure according to the inter-conceptual similarity of nodes. For example, consider the word computer in WordNet, which has two senses: a) a machine; and b) a person who calculates. In a computer science corpus, sense b) occurs very infrequently and possibly a new sense of computer (e.g. a processing chip) occurs. A system could potentially remove sense b) since the similarity of the other children of b) and computer is very low. It could also uncover the new processing chip sense by finding a high similarity between computer and the processing chip concept.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "This is a holy grail problem in the knowledge representation community. As a small step, our framework can be used to flag potential anomalies to the knowledge engineer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Validating ontologies", "sec_num": null }, { "text": "Given an enriched ontology, we can remove from the feature vectors of chair and recliner those features that occur in their parent furniture concept. The features that remain describe their different syntactic behaviors in text. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "What makes a chair different from a recliner?", "sec_num": null }, { "text": "We presented a framework for inducing ontological feature vectors from lexical co-occurrence vectors. Our method does not require the disambiguation of text. Instead, it relies on the principle of distributional similarity and the fact that polysemous words that are similar in one sense tend to be dissimilar in their other senses. On the task of attaching new words to WordNet using our framework, our experiments showed that the first attachment has 73.9% accuracy and that a correct attachment is in the top-5 attachments with 81.3% accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "We believe this work to be useful for a variety of applications. 
Not only can sense selection tasks such as word sense disambiguation, parsing, and semantic analysis benefit from our framework, but so can more inference-oriented tasks such as question answering and text summarization. We hope that this work will assist with the development of other large-scale and internally consistent collections of semantic information. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Enriching WordNet concepts with topic signatures", "authors": [ { "first": "E", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "O", "middle": [], "last": "Ansa", "suffix": "" }, { "first": "D", "middle": [], "last": "Martinez", "suffix": "" }, { "first": "E", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the NAACL workshop on WordNet and Other Lexical Resources: Applications, Extensions and Customizations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Agirre, E.; Ansa, O.; Martinez, D.; and Hovy, E. 2001. Enriching WordNet concepts with topic signatures. In Proceedings of the NAACL workshop on WordNet and Other Lexical Resources: Applications, Extensions and Customizations. Pittsburgh, PA.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The Berkeley FrameNet project", "authors": [ { "first": "C", "middle": [], "last": "Baker", "suffix": "" }, { "first": "C", "middle": [], "last": "Fillmore", "suffix": "" }, { "first": "J", "middle": [], "last": "Lowe", "suffix": "" } ], "year": 1998, "venue": "Proceedings of COLING-ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baker, C.; Fillmore, C.; and Lowe, J. 1998. The Berkeley FrameNet project. In Proceedings of COLING-ACL. Montreal, Canada.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "VERBOCEAN: Mining the Web for Fine-Grained Semantic Verb Relations", "authors": [ { "first": "T", "middle": [], "last": "Chklovski", "suffix": "" }, { "first": "P", "middle": [], "last": "Pantel", "suffix": "" } ], "year": 2004, "venue": "Proceedings of EMNLP-2004", "volume": "", "issue": "", "pages": "33--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chklovski, T. and Pantel, P. 2004. VERBOCEAN: Mining the Web for Fine-Grained Semantic Verb Relations. In Proceedings of EMNLP-2004. pp. 33-40. Barcelona, Spain.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A method for disambiguating word senses in a large corpus", "authors": [ { "first": "W", "middle": [], "last": "Gale", "suffix": "" }, { "first": "K", "middle": [], "last": "Church", "suffix": "" }, { "first": "D", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 1992, "venue": "Computers and the Humanities", "volume": "26", "issue": "", "pages": "415--439", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gale, W.; Church, K.; and Yarowsky, D. 1992. A method for disambiguating word senses in a large corpus. 
Computers and the Humanities, 26:415-439.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Learning semantic constraints for the automatic discovery of part-whole relations", "authors": [ { "first": "R", "middle": [], "last": "Girju", "suffix": "" }, { "first": "A", "middle": [], "last": "Badulescu", "suffix": "" }, { "first": "D", "middle": [], "last": "Moldovan", "suffix": "" } ], "year": 2003, "venue": "Proceedings of HLT/NAACL-03", "volume": "", "issue": "", "pages": "80--87", "other_ids": {}, "num": null, "urls": [], "raw_text": "Girju, R.; Badulescu, A.; and Moldovan, D. 2003. Learning semantic constraints for the automatic discovery of part-whole relations. In Proceedings of HLT/NAACL-03. pp. 80-87. Edmonton, Canada.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "WordNet 2 - A Morphologically and Semantically Enhanced Resource", "authors": [ { "first": "S", "middle": [], "last": "Harabagiu", "suffix": "" }, { "first": "G", "middle": [], "last": "Miller", "suffix": "" }, { "first": "D", "middle": [], "last": "Moldovan", "suffix": "" } ], "year": 1999, "venue": "Proceedings of SIGLEX-99", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harabagiu, S.; Miller, G.; and Moldovan, D. 1999. WordNet 2 - A Morphologically and Semantically Enhanced Resource. In Proceedings of SIGLEX-99. pp. 1-8. University of Maryland.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Distributional structure", "authors": [ { "first": "Z", "middle": [], "last": "Harris", "suffix": "" } ], "year": 1985, "venue": "The Philosophy of Linguistics", "volume": "", "issue": "", "pages": "26--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harris, Z. 1985. Distributional structure. In: Katz, J. J. (ed.) The Philosophy of Linguistics. New York: Oxford University Press. pp. 26-47.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Combining and standardizing large-scale, practical ontologies for machine translation and other uses", "authors": [ { "first": "E", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 1998, "venue": "Proceedings LREC-98", "volume": "", "issue": "", "pages": "535--542", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hovy, E. 1998. Combining and standardizing large-scale, practical ontologies for machine translation and other uses. In Proceedings LREC-98. pp. 535-542. Granada, Spain.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Noun classification from predicate-argument structures", "authors": [ { "first": "D", "middle": [], "last": "Hindle", "suffix": "" } ], "year": 1990, "venue": "Proceedings of ACL-90", "volume": "", "issue": "", "pages": "268--275", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hindle, D. 1990. Noun classification from predicate-argument structures. In Proceedings of ACL-90. pp. 268-275. Pittsburgh, PA.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Adding semantic annotation to the Penn TreeBank", "authors": [ { "first": "P", "middle": [], "last": "Kingsbury", "suffix": "" }, { "first": "M", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "M", "middle": [], "last": "Marcus", "suffix": "" } ], "year": 2002, "venue": "Proceedings of HLT-2002", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kingsbury, P.; Palmer, M.; and Marcus, M. 2002. Adding semantic annotation to the Penn TreeBank. In Proceedings of HLT-2002. 
San Diego, California.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Building a large-scale knowledge base for machine translation", "authors": [ { "first": "K", "middle": [], "last": "Knight", "suffix": "" }, { "first": "S", "middle": [ "K" ], "last": "Luk", "suffix": "" } ], "year": 1994, "venue": "Proceedings of AAAI-1994", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Knight, K. and Luk, S. K. 1994. Building a large-scale knowledge base for machine translation. In Proceedings of AAAI-1994. Seattle, WA.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "CYC: A large-scale investment in knowledge infrastructure", "authors": [ { "first": "D", "middle": [], "last": "Lenat", "suffix": "" } ], "year": 1995, "venue": "Communications of the ACM", "volume": "38", "issue": "11", "pages": "33--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lenat, D. 1995. CYC: A large-scale investment in knowledge infrastructure. Communications of the ACM, 38(11):33-38.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Automatic retrieval and clustering of similar words", "authors": [ { "first": "D", "middle": [], "last": "Lin", "suffix": "" } ], "year": 1998, "venue": "Proceedings of COLING/ACL-98", "volume": "", "issue": "", "pages": "768--774", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, D. 1998. Automatic retrieval and clustering of similar words. In Proceedings of COLING/ACL-98. pp. 768-774.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Principar - an efficient, broad-coverage, principle-based parser", "authors": [ { "first": "D", "middle": [], "last": "Lin", "suffix": "" } ], "year": 1994, "venue": "Proceedings of COLING-94", "volume": "", "issue": "", "pages": "42--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, D. 1994. Principar - an efficient, broad-coverage, principle-based parser. Proceedings of COLING-94. pp. 42-48. Kyoto, Japan.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Producing high-dimensional semantic spaces from lexical co-occurrence. Behavior Research Methods, Instruments, and Computers", "authors": [ { "first": "K", "middle": [], "last": "Lund", "suffix": "" }, { "first": "C", "middle": [], "last": "Burgess", "suffix": "" } ], "year": 1996, "venue": "", "volume": "28", "issue": "", "pages": "203--208", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lund, K. and Burgess, C. 1996. Producing high-dimensional semantic spaces from lexical co-occurrence. Behavior Research Methods, Instruments, and Computers, 28:203-208.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Annotating noun argument structure for NomBank", "authors": [ { "first": "A", "middle": [], "last": "Meyers", "suffix": "" }, { "first": "R", "middle": [], "last": "Reeves", "suffix": "" }, { "first": "C", "middle": [], "last": "Macleod", "suffix": "" }, { "first": "R", "middle": [], "last": "Szekely", "suffix": "" }, { "first": "V", "middle": [], "last": "Zielinska", "suffix": "" }, { "first": "B", "middle": [], "last": "Young", "suffix": "" }, { "first": "R", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 2004, "venue": "Proceedings of LREC-2004", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Meyers, A.; Reeves, R.; Macleod, C.; Szekely, R.; Zielinska, V.; Young, B.; and Grishman, R. 2004. Annotating noun argument structure for NomBank. In Proceedings of LREC-2004. 
Lisbon, Portugal.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "WordNet: An online lexical database", "authors": [ { "first": "G", "middle": [], "last": "Miller", "suffix": "" } ], "year": 1990, "venue": "International Journal of Lexicography", "volume": "3", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miller, G. 1990. WordNet: An online lexical database. International Journal of Lexicography, 3(4).", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "An algorithm for merging and aligning ontologies: Automation and tool support", "authors": [ { "first": "N", "middle": [ "F" ], "last": "Noy", "suffix": "" }, { "first": "M", "middle": [ "A" ], "last": "Musen", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the Workshop on Ontology Management (AAAI-99)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Noy, N. F. and Musen, M. A. 1999. An algorithm for merging and aligning ontologies: Automation and tool support. In Proceedings of the Workshop on Ontology Management (AAAI-99). Orlando, FL.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Discovering Word Senses from Text", "authors": [ { "first": "P", "middle": [], "last": "Pantel", "suffix": "" }, { "first": "D", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2002, "venue": "Proceedings of SIGKDD-02", "volume": "", "issue": "", "pages": "613--619", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pantel, P. and Lin, D. 2002. Discovering Word Senses from Text. In Proceedings of SIGKDD-02. pp. 613-619. Edmonton, Canada.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A corpus-based approach for building semantic lexicons", "authors": [ { "first": "E", "middle": [], "last": "Riloff", "suffix": "" }, { "first": "J", "middle": [], "last": "Shepherd", "suffix": "" } ], "year": 1997, "venue": "Proceedings of EMNLP-1997", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Riloff, E. and Shepherd, J. 1997. A corpus-based approach for building semantic lexicons. In Proceedings of EMNLP-1997.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Introduction to Modern Information Retrieval", "authors": [ { "first": "G", "middle": [], "last": "Salton", "suffix": "" }, { "first": "M", "middle": [ "J" ], "last": "McGill", "suffix": "" } ], "year": 1983, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Salton, G. and McGill, M. J. 1983. Introduction to Modern Information Retrieval. McGraw Hill.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Acquiring hyponymy relations from web documents", "authors": [ { "first": "K", "middle": [], "last": "Shinzato", "suffix": "" }, { "first": "K", "middle": [], "last": "Torisawa", "suffix": "" } ], "year": 2004, "venue": "Proceedings of HLT-NAACL-2004", "volume": "", "issue": "", "pages": "73--80", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shinzato, K. and Torisawa, K. 2004. Acquiring hyponymy relations from web documents. In Proceedings of HLT-NAACL-2004. pp. 73-80. Boston, MA.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Divide-and-conquer phase.", "type_str": "figure", "num": null, "uris": null }, "FIGREF1": { "text": "Excerpt of a lexical feature vector for the word chair. Grammatical relations are in italics (conjunction and nominal-subject). 
The first column of numbers contains frequency counts and the second contains mutual information scores. In bold are the features that intersect with the induced ontological feature vector for the parent concept of chair's chairwoman sense.", "type_str": "figure", "num": null, "uris": null }, "FIGREF2": { "text": "Coup phase.", "type_str": "figure", "num": null, "uris": null }, "FIGREF4": { "text": "Attachment precision and recall for the Shared and Committee propagation methods when returning at most K attachments (on the Full set).", "type_str": "figure", "num": null, "uris": null }, "FIGREF6": { "text": "Attachment precision and recall on the Full set vs. the polysemous nodes in the Full set when the system returns at most K attachments.", "type_str": "figure", "num": null, "uris": null }, "TABREF1": { "text": "Correct attachment point in the top-K attachments (with 95% conf.)", "num": null, "content": "
K | Shared (Full) | Committee (Full) | Shared (Subset) | Committee (Subset)
1 | 73.9% \u00b1 4.5% | 72.0% \u00b1 4.9% | 77.4% \u00b1 3.6% | 76.1% \u00b1 5.1%
2 | 78.7% \u00b1 4.1% | 76.6% \u00b1 4.2% | 80.7% \u00b1 4.0% | 79.1% \u00b1 4.5%
3 | 79.9% \u00b1 4.0% | 78.2% \u00b1 4.2% | 81.2% \u00b1 3.9% | 80.5% \u00b1 4.8%
4 | 80.6% \u00b1 4.1% | 79.0% \u00b1 4.0% | 81.5% \u00b1 4.1% | 80.8% \u00b1 5.0%
5 | 81.3% \u00b1 3.8% | 79.5% \u00b1 3.9% | 81.7% \u00b1 4.1% | 81.3% \u00b1 4.9%
", "type_str": "table", "html": null }, "TABREF2": { "text": "Example attachment errors by our system.", "num": null, "content": "
Node | System Attachment | Correct Attachment
arsenic * | trioxide | arsenic OR element
law | regulation | law OR police OR \u2026
Munch \u2020 | munch | Munch
slob | fool | slob
vomiting | fever | emesis
", "type_str": "table", "html": null } } } }