{ "paper_id": "U11-1011", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:09:56.087515Z" }, "title": "Predicting Thread Linking Structure by Lexical Chaining", "authors": [ { "first": "Li", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "Victoria Research Laboratory", "institution": "University of Melbourne \u2665 NICTA", "location": {} }, "email": "li.wang.d@gmail.com" }, { "first": "Diana", "middle": [], "last": "Mccarthy", "suffix": "", "affiliation": { "laboratory": "Victoria Research Laboratory", "institution": "University of Melbourne \u2665 NICTA", "location": {} }, "email": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "", "affiliation": { "laboratory": "Victoria Research Laboratory", "institution": "University of Melbourne \u2665 NICTA", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Web user forums are valuable means for users to resolve specific information needs, both interactively for participants and statically for users who search/browse over historical thread data. However, the complex structure of forum threads can make it difficult for users to extract relevant information. Thread linking structure has the potential to help tasks such as information retrieval (IR) and threading visualisation of forums, thereby improving information access. Unfortunately, thread linking structure is not always available in forums. This paper proposes an unsupervised approach to predict forum thread linking structure using lexical chaining, a technique which identifies lists of related word tokens within a given discourse. Three lexical chaining algorithms, including one that only uses statistical associations between words, are experimented with. Preliminary experiments lead to results which surpass an informed baseline.", "pdf_parse": { "paper_id": "U11-1011", "_pdf_hash": "", "abstract": [ { "text": "Web user forums are valuable means for users to resolve specific information needs, both interactively for participants and statically for users who search/browse over historical thread data. However, the complex structure of forum threads can make it difficult for users to extract relevant information. Thread linking structure has the potential to help tasks such as information retrieval (IR) and threading visualisation of forums, thereby improving information access. Unfortunately, thread linking structure is not always available in forums. This paper proposes an unsupervised approach to predict forum thread linking structure using lexical chaining, a technique which identifies lists of related word tokens within a given discourse. Three lexical chaining algorithms, including one that only uses statistical associations between words, are experimented with. Preliminary experiments lead to results which surpass an informed baseline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Web user forums (or simply \"forums\") are online platforms for people to discuss and obtain information via a text-based threaded discourse, generally in a pre-determined domain (e.g. IT support or DSLR cameras). With the advent of Web 2.0, there has been rapid growth of web authorship in this area, and forums are now widely used in various areas such as customer support, community development, interactive reporting and online education. 
In addition to providing the means to interactively participate in discussions or obtain/provide answers to questions, the vast volumes of data contained in forums make them a valuable resource for \"support sharing\", i.e. looking over records of past user interactions to potentially find an immediately applicable solution to a current problem. On the one hand, more and more answers to questions over a wide range of domains are becoming available on forums; on the other hand, it is becoming harder and harder to extract and access relevant information due to the sheer scale and diversity of the data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Previous research shows that the thread linking structure can be used to improve information retrieval (IR) in forums, at both the post level (Xi et al., 2004; Seo et al., 2009) and thread level (Seo et al., 2009; Elsas and Carbonell, 2009) . These inter-post links also have the potential to enhance threading visualisation, thereby improving information access over complex threads. Unfortunately, linking information is not supported in many forums. While researchers have started to investigate the task of thread linking structure recovery (Kim et al., 2010; Wang et al., 2011b) , most research efforts focus on supervised methods.", "cite_spans": [ { "start": 142, "end": 159, "text": "(Xi et al., 2004;", "ref_id": "BIBREF30" }, { "start": 160, "end": 177, "text": "Seo et al., 2009)", "ref_id": "BIBREF24" }, { "start": 195, "end": 213, "text": "(Seo et al., 2009;", "ref_id": "BIBREF24" }, { "start": 214, "end": 240, "text": "Elsas and Carbonell, 2009)", "ref_id": "BIBREF6" }, { "start": 544, "end": 562, "text": "(Kim et al., 2010;", "ref_id": "BIBREF13" }, { "start": 563, "end": 582, "text": "Wang et al., 2011b)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To illustrate the task of thread linking recovery, we use an example thread, made up of 5 posts from 4 distinct participants, from the CNET forum dataset of Kim et al. (2010) , as shown in Figure 1 (Figure 1: A snippeted CNET thread annotated with linking structure). The linking structure of the thread is modelled as a rooted directed acyclic graph (DAG) . In this example, UserA initiates the thread with a question in the first post, by asking how to create an interactive input box on a webpage. This post is linked to a virtual root with link label 0. In response, UserB and UserC provide independent answers. Therefore their posts are linked to the first post, with link labels 1 and 2 respectively. UserA responds to UserC (link = 1) to confirm the details of the solution, and at the same time, adds extra information to his/her original question (link = 3); i.e., this one post has two distinct links associated with it. Finally, UserD proposes a different solution again to the original question (link = 4). Lexical chaining is a technique for identifying lists of related words (lexical chains) within a given discourse. The extracted lexical chains represent the discourse's lexical cohesion, or \"cohesion indicated by relations between words in the two units, such as use of an identical word, a synonym, or a hypernym\" (Jurafsky and Martin, 2008, pp. 685) .", "cite_spans": [ { "start": 157, "end": 174, "text": "Kim et al. 
(2010)", "ref_id": "BIBREF13" }, { "start": 283, "end": 288, "text": "(DAG)", "ref_id": null }, { "start": 1335, "end": 1371, "text": "(Jurafsky and Martin, 2008, pp. 685)", "ref_id": null } ], "ref_spans": [ { "start": 189, "end": 197, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Lexical chaining has been investigated in many research tasks such as text segmentation (Stokes et al., 2004) , word sense disambiguation (Galley and McKeown, 2003) , and text summarisation (Barzilay and Elhadad, 1997) . The lexical chaining algorithms used usually rely on domain-independent thesauri such as Roget's Thesaurus, the Macquarie Thesaurus (Bernard, 1986) and WordNet (Fellbaum, 1998) , with some algorithms also utilising statistical associations between words (Stokes et al., 2004; Marathe and Hirst, 2010) . This paper explores unsupervised approaches for forum thread linking structure recovery, by using lexical chaining to analyse the inter-post lexical cohesion. We investigate three lexical chaining algorithms, including one that only uses statistical associations between words. The contributions of this research are:", "cite_spans": [ { "start": 88, "end": 109, "text": "(Stokes et al., 2004)", "ref_id": "BIBREF26" }, { "start": 138, "end": 164, "text": "(Galley and McKeown, 2003)", "ref_id": "BIBREF8" }, { "start": 190, "end": 218, "text": "(Barzilay and Elhadad, 1997)", "ref_id": "BIBREF1" }, { "start": 353, "end": 368, "text": "(Bernard, 1986)", "ref_id": null }, { "start": 373, "end": 397, "text": "WordNet (Fellbaum, 1998)", "ref_id": null }, { "start": 475, "end": 496, "text": "(Stokes et al., 2004;", "ref_id": "BIBREF26" }, { "start": 497, "end": 521, "text": "Marathe and Hirst, 2010)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Proposal of an unsupervised approach using lexical chaining to recover the inter-post links in web user forum threads.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Proposal of a lexical chaining approach that only uses statistical associations between words, which can be calculated from the raw text of the targeted domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The remainder of this paper is organised as follows. Firstly, we review related research on forum thread linking structure classification and lexical chaining. Then, the three lexical chaining algorithms used in this paper are described in detail. Next, the dataset and the experimental methodology are explained, followed by the experiments and analysis. Finally, the paper concludes with a brief summary and possible future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The linking structure of web user forum threads can be used in tasks such as IR (Xi et al., 2004; Seo et al., 2009; Elsas and Carbonell, 2009) and threading visualisation. However, many user forums don't support the user input of linking information. Automatically recovering the linking structure of forum threads is therefore an interesting task, and has started to attract research efforts in recent years. 
All the methods investigated so far are supervised, such as ranking SVMs (Seo et al., 2009) , SVM-HMMs (Kim et al., 2010) , Maximum Entropy (Kim et al., 2010) and Conditional Random Fields (CRF) (Kim et al., 2010; Wang et al., 2011b; Wang et al., 2011a; Aumayr et al., 2011) , with CRF models frequently being reported to deliver superior performance. While there is research that attempts to conduct cross-forum classification (Wang et al., 2011a) -where classifiers are trained over linking labels from one forum and tested over threads from other forums -the results have not been promising. This research explores unsupervised methods for thread linking structure recovery, by exploiting lexical cohesion between posts via lexical chaining. The first computational model for lexical chain extraction was proposed by Morris and Hirst (1991) , based on the use of the hierarchical structure of Roget's International Thesaurus, 4th Edition (1977) . Because of the lack of a machine-readable copy of the thesaurus at the time, the lexical chains were built by hand. Lexical chaining has since been investigated in a range of research fields such as information retrieval and natural language processing. It has been demonstrated that the textual knowledge provided by lexical chains can benefit many tasks, including text segmentation (Kozima, 1993; Stokes et al., 2004) , word sense disambiguation (Galley and McKeown, 2003) , text summarisation (Barzilay and Elhadad, 1997) , topic detection and tracking (Stokes and Carthy, 2001) , information retrieval (Stairmand, 1997) , malapropism detection (Hirst and St-Onge, 1998) , and question answering (Moldovan and Novischi, 2002) .", "cite_spans": [ { "start": 80, "end": 97, "text": "(Xi et al., 2004;", "ref_id": "BIBREF30" }, { "start": 98, "end": 115, "text": "Seo et al., 2009;", "ref_id": "BIBREF24" }, { "start": 116, "end": 142, "text": "Elsas and Carbonell, 2009)", "ref_id": "BIBREF6" }, { "start": 483, "end": 501, "text": "(Seo et al., 2009)", "ref_id": "BIBREF24" }, { "start": 504, "end": 531, "text": "SVM-HMMs (Kim et al., 2010)", "ref_id": null }, { "start": 550, "end": 568, "text": "(Kim et al., 2010)", "ref_id": "BIBREF13" }, { "start": 605, "end": 623, "text": "(Kim et al., 2010;", "ref_id": "BIBREF13" }, { "start": 624, "end": 643, "text": "Wang et al., 2011b;", "ref_id": "BIBREF28" }, { "start": 644, "end": 663, "text": "Wang et al., 2011a;", "ref_id": "BIBREF27" }, { "start": 664, "end": 684, "text": "Aumayr et al., 2011)", "ref_id": "BIBREF0" }, { "start": 838, "end": 858, "text": "(Wang et al., 2011a)", "ref_id": "BIBREF27" }, { "start": 1230, "end": 1253, "text": "Morris and Hirst (1991)", "ref_id": "BIBREF21" }, { "start": 1306, "end": 1357, "text": "Roget's International Thesaurus, 4th Edition (1977)", "ref_id": null }, { "start": 1773, "end": 1787, "text": "(Kozima, 1993;", "ref_id": "BIBREF14" }, { "start": 1788, "end": 1808, "text": "Stokes et al., 2004)", "ref_id": "BIBREF26" }, { "start": 1837, "end": 1863, "text": "(Galley and McKeown, 2003)", "ref_id": "BIBREF8" }, { "start": 1885, "end": 1913, "text": "(Barzilay and Elhadad, 1997)", "ref_id": "BIBREF1" }, { "start": 1945, "end": 1970, "text": "(Stokes and Carthy, 2001)", "ref_id": null }, { "start": 1995, "end": 2012, "text": "(Stairmand, 1997)", "ref_id": "BIBREF25" }, { "start": 2037, "end": 2062, "text": "(Hirst and St-Onge, 1998)", "ref_id": "BIBREF9" }, { "start": 2088, "end": 2117, "text": "(Moldovan and Novischi, 2002)", "ref_id": "BIBREF20" } ], 
"ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Many types of lexical chaining algorithms rely on examining lexicographical relationships (i.e. semantic measures) between words using domainindependent thesauri such as the Longmans Dictionary of Contemporay English (Kozima, 1993) , Roget's Thesaurus (Jarmasz and Szpakowicz, 2003) , Macquarie Thesaurus (Marathe and Hirst, 2010) or WordNet (Barzilay and Elhadad, 1997; Hirst and St-Onge, 1998; Moldovan and Novischi, 2002; Galley and McKeown, 2003) . These lexical chaining algorithms are limited by the linguistic resources they depend upon, and often only apply to nouns.", "cite_spans": [ { "start": 217, "end": 231, "text": "(Kozima, 1993)", "ref_id": "BIBREF14" }, { "start": 252, "end": 282, "text": "(Jarmasz and Szpakowicz, 2003)", "ref_id": "BIBREF11" }, { "start": 305, "end": 330, "text": "(Marathe and Hirst, 2010)", "ref_id": "BIBREF18" }, { "start": 342, "end": 370, "text": "(Barzilay and Elhadad, 1997;", "ref_id": "BIBREF1" }, { "start": 371, "end": 395, "text": "Hirst and St-Onge, 1998;", "ref_id": "BIBREF9" }, { "start": 396, "end": 424, "text": "Moldovan and Novischi, 2002;", "ref_id": "BIBREF20" }, { "start": 425, "end": 450, "text": "Galley and McKeown, 2003)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Some lexical chaining algorithms also make use of statistical associations (i.e. distributional measures) between words which can be automatically generated from domain-specific corpora. For example, Stokes et al. (2004) 's lexical chainer extracts significant noun bigrams based on the G 2 statistic (Pedersen, 1996) , and uses these statistical word associations to find related words in the preceding context, building on the work of Hirst and St-Onge (1998) . Marathe and Hirst (2010) use distributional measures of conceptual distance, based on the methodology of Mohammad and Hirst (2006) to compute the relation between two words. This framework uses a very coarse-grained sense (con-cept or category) inventory from the Macquarie Thesaurus (Bernard, 1986) to build a word-category cooccurrence matrix (WCCM), based on the British National Corpus (BNC). Lin (1998a) 's measure of distributional similarity based on point-wise mutual information (PMI) is then used to measure the association between words.", "cite_spans": [ { "start": 200, "end": 220, "text": "Stokes et al. 
(2004)", "ref_id": "BIBREF26" }, { "start": 301, "end": 317, "text": "(Pedersen, 1996)", "ref_id": "BIBREF22" }, { "start": 437, "end": 461, "text": "Hirst and St-Onge (1998)", "ref_id": "BIBREF9" }, { "start": 464, "end": 488, "text": "Marathe and Hirst (2010)", "ref_id": "BIBREF18" }, { "start": 569, "end": 594, "text": "Mohammad and Hirst (2006)", "ref_id": "BIBREF19" }, { "start": 748, "end": 763, "text": "(Bernard, 1986)", "ref_id": null }, { "start": 861, "end": 872, "text": "Lin (1998a)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "This research will explore two thesaurus-based lexical chaining algorithms, as well as a novel lexical chaining approach which relies solely on statistical word associations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Three lexical chaining algorithms are experimented with in this research, as detailed in the following sections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical Chaining Algorithms", "sec_num": "3" }, { "text": "Chainer Roget is a Roget's Thesaurus based lexical chaining algorithm (Jarmasz and Szpakowicz, 2003) based on an off-the-shelf package, namely the Electronic Lexical Knowledge Base (ELKB) (Jarmasz and Szpakowicz, 2001 ).", "cite_spans": [ { "start": 70, "end": 100, "text": "(Jarmasz and Szpakowicz, 2003)", "ref_id": "BIBREF11" }, { "start": 188, "end": 217, "text": "(Jarmasz and Szpakowicz, 2001", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Chainer Roget", "sec_num": "3.1" }, { "text": "The underlying methodology of Chainer Roget is shown in Algorithm 1. Methods used to calculate the chain strength/weight are presented in Section 5. While the original Roget's Thesaurus-based algorithm by Morris and Hirst (1991) proposes five types of thesaural relations to add a candidate word in a chain, Chainer Roget only uses the first one, as is explained in Algorithm 1. Moreover, while Jarmasz and Szpakowicz (2003) use the 1987 Penguin's Roget's Thesaurus in their research, the ELKB package uses the Roget's Thesaurus from 1911 due to copyright restriction.", "cite_spans": [ { "start": 205, "end": 228, "text": "Morris and Hirst (1991)", "ref_id": "BIBREF21" }, { "start": 395, "end": 424, "text": "Jarmasz and Szpakowicz (2003)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Chainer Roget", "sec_num": "3.1" }, { "text": "Chainer W N is a non-greedy WordNet-based chaining algorithm proposed by Galley and McKeown (2003) . We reimplemented their method based on an incomplete implementation in NLTK. 1 The algorithm of Chainer W N is based on the assumption of one sense per discourse, and can be decomposed into three steps. Firstly, a \"disambiguation graph\" is built by adding the candidate nouns of Algorithm 1 Chainer Roget select a set of candidate nouns for each candidate noun do build all the possible chains, where each pair of nouns in each chain are either the same word or included in the same Head of Roget's Thesaurus, and select the strongest chain for each candidate noun. end for merge two chains if they contain at least one noun in common the discourse one by one. Each node in the graph represents a noun instance with all its senses, and each weighted edge represents the semantic relation between two senses of two nouns. The weight of each edge is calculated based on the distances between nouns in the discourse. 
Secondly, word sense disambiguation (WSD) is performed. In this step, a score for every sense of each noun node is calculated by summing the weight of all edges leaving that sense. The sense of each noun node with the highest score is taken to be the correct sense of that noun in the discourse. Lastly, all the edges of the disambiguation graph connecting (assumed) wrong senses of every noun node are removed, and the remaining edges linking noun nodes form the lexical chains of the discourse. The semantic relations exploited in this algorithm include hypernyms/hyponyms and siblings (i.e. hyponyms of hypernyms).", "cite_spans": [ { "start": 73, "end": 98, "text": "Galley and McKeown (2003)", "ref_id": "BIBREF8" }, { "start": 178, "end": 179, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Chainer W N", "sec_num": "3.2" }, { "text": "Chainer SV , as shown in Algorithm 2, is adapted from Marathe and Hirst (2010)'s lexical chaining algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chainer SV", "sec_num": "3.3" }, { "text": "The main difference between Chainer SV and the original algorithm is the method used to calculate associations between words. Marathe and Hirst (2010) use two different measures, including Lin (1998b)'s WordNet-based measure, and Mohammad and Hirst (2006) 's distributional measures of concept distance framework. In Chainer SV , we use word vectors from WORDSPACE (Sch\u00fctze, 1998) models and apply cosine similarity to compute the associations between words. WORDSPACE is a multi-dimensional real-valued space, where words, contexts and senses are represented as vectors. A vector for word w is derived from words that co-occur with w. A dimensionality reduction technique is often used to reduce the dimension of the vector. We build the WORDSPACE model with SemanticVectors (Widdows and Ferraro, 2008) , which is based on Random Projection dimensionality reduction (Bingham and Mannila, 2001 ).", "cite_spans": [ { "start": 229, "end": 254, "text": "Mohammad and Hirst (2006)", "ref_id": "BIBREF19" }, { "start": 364, "end": 379, "text": "(Sch\u00fctze, 1998)", "ref_id": "BIBREF23" }, { "start": 775, "end": 802, "text": "(Widdows and Ferraro, 2008)", "ref_id": "BIBREF29" }, { "start": 866, "end": 892, "text": "(Bingham and Mannila, 2001", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Chainer SV", "sec_num": "3.3" }, { "text": "The underlying methodology of Chainer SV is shown in Algorithm 2. This algorithm requires a method to calculate the similarity between two tokens (i.e. words): sim tt (x, y), which is done by computing the cosine similarity of the two tokens' semantic vectors. The similarity between a token t i and a lexical chain c j is then calculated by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chainer SV", "sec_num": "3.3" }, { "text": "sim_{tc}(t_i, c_j) = \\sum_{t_k \\in c_j} \\frac{1}{l_j} sim_{tt}(t_i, t_k)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chainer SV", "sec_num": "3.3" }, { "text": "where l j represents the length of lexical chain c j . 
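To make these similarity functions concrete, the following is a minimal sketch in Python. It is not the authors' implementation: the function names are hypothetical, and it assumes each token's semantic vector is available as a plain list of floats (e.g. exported from the trained WORDSPACE model).

```python
import math

def sim_tt(x, y, vectors):
    # Cosine similarity between the semantic vectors of tokens x and y.
    vx, vy = vectors[x], vectors[y]
    dot = sum(a * b for a, b in zip(vx, vy))
    norm_x = math.sqrt(sum(a * a for a in vx))
    norm_y = math.sqrt(sum(b * b for b in vy))
    return dot / (norm_x * norm_y) if norm_x > 0 and norm_y > 0 else 0.0

def sim_tc(t_i, chain, vectors):
    # Token-chain similarity: the average of sim_tt(t_i, t_k) over the
    # l_j tokens t_k in chain c_j, as in the formula above.
    return sum(sim_tt(t_i, t_k, vectors) for t_k in chain) / len(chain)
```

The chain-chain similarity sim cc defined next averages sim tt over all cross-chain token pairs in the same way.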
The similarity between two chains c i and c j is then computed by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chainer SV", "sec_num": "3.3" }, { "text": "sim_{cc}(c_i, c_j) = \\sum_{t_m \\in c_i, t_n \\in c_j} \\frac{1}{l_i \\times l_j} sim_{tt}(t_m, t_n)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chainer SV", "sec_num": "3.3" }, { "text": "where l i and l j are the lengths of c i and c j respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chainer SV", "sec_num": "3.3" }, { "text": "As is shown in Algorithm 2, Chainer SV has two parameters: the threshold for adding a token to a chain, threshold a ; and the threshold for merging two chains, threshold m . A larger threshold a leads to conservative chains where tokens in a chain are strongly related, while a smaller threshold a results in longer chains where the relationship between tokens in a chain may not be clear. Similarly, a larger threshold m is conservative and leads to less chain merging, while a smaller threshold m may create longer but less meaningful chains. Our initial experiments show that the combination of threshold a = 0.1 and threshold m = 0.05 often results in lexical chains with reasonable lengths and interpretations. Therefore, this parameter setting will be used throughout all the experiments described in this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chainer SV", "sec_num": "3.3" }, { "text": "Algorithm 2 Chainer SV : for each token t i do: compute sim tc (t i , c j ) for each existing chain c j , and identify the max score and the max chain(s); if chains = empty or max score < threshold a then create a new chain c k containing t i and add c k to chains; else if more than one max chain then merge chains if the two chains' similarity is larger than threshold m , and add t i to the resultant chain or the first max chain; else add t i to the max chain; end if; end for; return chains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chainer SV", "sec_num": "3.3" }, { "text": "The main task performed in this research is to recover inter-post links within forum threads, by analysing the lexical chains extracted from the posts. In this, we assume that a post can only link to an earlier post (or a virtual root node). Following Wang et al. (2011b), it is possible for there to be multiple links from a given post, e.g. if a post both confirms the validity of an answer and adds extra information to the original question (as happens in Post4 in Figure 1) .", "cite_spans": [], "ref_spans": [ { "start": 805, "end": 814, "text": "Figure 1)", "ref_id": null } ], "eq_spans": [], "section": "Task Description and Dataset", "sec_num": "4" }, { "text": "The dataset we use is the CNET forum dataset of Kim et al. (2010), 2 which contains 1332 annotated posts spanning 315 threads, collected from the Operating System, Software, Hardware and Web Development sub-forums of CNET. 3 Each post is labelled with one or more links (including the possibility of null-links, where the post doesn't link to any other post), and each link is labelled with a dialogue act. We only use the link part of the annotation in this research. For the details of the dialogue act tagset, see Kim et al. (2010) .", "cite_spans": [ { "start": 517, "end": 534, "text": "Kim et al. (2010)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Task Description and Dataset", "sec_num": "4" }, { "text": "We also obtain the original crawl of the CNET forum collected by Kim et al. (2010) , which contains 262,402 threads. To build a WORDSPACE model for Chainer SV , as is explained in Section 3, only the threads from the four sub-forums mentioned above are chosen, which consist of 536,482 posts spanning 114,139 threads. 
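To make the reconstructed Algorithm 2 concrete, here is a minimal sketch of the Chainer SV chaining loop. It is an illustrative reading of the algorithm rather than the authors' code; it reuses the hypothetical sim_tt/sim_tc helpers from the earlier sketch and adds the pairwise chain similarity sim_cc defined above.

```python
def sim_cc(c_i, c_j, vectors):
    # Chain-chain similarity: average sim_tt over all cross-chain token pairs.
    total = sum(sim_tt(t_m, t_n, vectors) for t_m in c_i for t_n in c_j)
    return total / (len(c_i) * len(c_j))

def chainer_sv(tokens, vectors, threshold_a=0.1, threshold_m=0.05):
    # Main chaining loop of Chainer_SV, following Algorithm 2.
    chains = []
    for t_i in tokens:
        scores = [sim_tc(t_i, c, vectors) for c in chains]
        max_score = max(scores) if scores else 0.0
        if not chains or max_score < threshold_a:
            # No sufficiently similar chain: start a new chain for t_i.
            chains.append([t_i])
            continue
        max_chains = [c for c, s in zip(chains, scores) if s == max_score]
        if len(max_chains) > 1:
            # Merge the first two tied chains if they are similar enough,
            # then add t_i to the resultant (or first) max chain.
            first, second = max_chains[0], max_chains[1]
            if sim_cc(first, second, vectors) > threshold_m:
                first.extend(second)
                chains.remove(second)
            first.append(t_i)
        else:
            max_chains[0].append(t_i)
    return chains
```

With the paper's setting of threshold_a = 0.1 and threshold_m = 0.05, a token joins an existing chain only when its average similarity to that chain clears threshold_a; otherwise it seeds a new chain.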
The reason for choosing only a subset of the whole dataset is to maintain the same types of technical dialogues as the annotated posts. The texts (with stop words and punctuation removed) from the titles and bodies of the posts are then extracted and fed into the SemanticVectors package with default settings to obtain the semantic vector for each word token.", "cite_spans": [ { "start": 61, "end": 78, "text": "Kim et al. (2010)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Task Description and Dataset", "sec_num": "4" }, { "text": "To the best of our knowledge, no previous research has adopted lexical chaining to predict inter-post links. The basic idea of our approach is to use lexical chains to measure the inter-post lexical cohesion (i.e. lexical similarity), and use these similarity scores to reconstruct inter-post links. To measure the lexical cohesion between two posts, the texts (with stop words and punctuation removed) from the titles and bodies of the two posts are first combined. Then, lexical chainers are applied over the combined texts to extract lexical chains. Lastly, the following weighting methods are used to calculate the lexical similarity between the two posts:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "5" }, { "text": "LCNum: the number of the lexical chains which span the two posts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "5" }, { "text": "LCLen: find the lexical chains which span the two posts, and use the total number of tokens contained in them as the similarity score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "5" }, { "text": "LCStr: find the lexical chains which span the two posts, and use the sum of each chain's chain strength as the similarity score. The chain strength is calculated by using a formula suggested by Barzilay and Elhadad (1997) :", "cite_spans": [ { "start": 194, "end": 221, "text": "Barzilay and Elhadad (1997)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "5" }, { "text": "Score(Chain) = Length \u00d7 Homogeneity", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "5" }, { "text": "where Length is the number of tokens in the chain, and Homogeneity is 1 \u2212 (the number of distinct tokens in the chain divided by Length).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "5" }, { "text": "LCBan: find the lexical chains which span the two posts, and use the sum of each chain's balance score as the similarity score. The balance score is calculated by using the following formula:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "5" }, { "text": "Score(Chain) = \\begin{cases} n_1/n_2 & \\text{if } n_1 < n_2 \\\\ n_2/n_1 & \\text{otherwise} \\end{cases}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "5" }, { "text": "where n 1 is the number of tokens from the chain belonging to the first post, and n 2 is the number of tokens from the chain belonging to the second post.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "5" }, { "text": "The experimental results are evaluated using micro-averaged Precision (P \u00b5 ), Recall (R \u00b5 ) and F-score (F \u00b5 : \u03b2 = 1), with F \u00b5 as the main evaluation metric. 
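The four weighting schemes reduce to simple aggregations over the chains that span a post pair. The sketch below is a hedged illustration under an assumed representation (not from the paper): each chain is a list of (token, post_id) pairs, where post_id is 1 or 2 depending on which of the two posts the token came from.

```python
def spans_both(chain):
    # A chain "spans" the two posts if it contains tokens from both.
    return len({post_id for _, post_id in chain}) == 2

def lc_scores(chains):
    # LCNum / LCLen / LCStr / LCBan similarity scores for one post pair.
    spanning = [c for c in chains if spans_both(c)]
    lc_num = len(spanning)
    lc_len = sum(len(c) for c in spanning)
    # LCStr: Score(Chain) = Length x Homogeneity, where Homogeneity is
    # 1 - (#distinct tokens / Length) (Barzilay and Elhadad, 1997).
    lc_str = sum(
        len(c) * (1.0 - len({tok for tok, _ in c}) / len(c)) for c in spanning
    )
    # LCBan: per-chain balance of token counts between the two posts.
    lc_ban = 0.0
    for c in spanning:
        n1 = sum(1 for _, post_id in c if post_id == 1)
        n2 = len(c) - n1
        lc_ban += n1 / n2 if n1 < n2 else n2 / n1
    return lc_num, lc_len, lc_str, lc_ban
```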
Statistical significance is tested using randomised estimation (Yeh, 2000) with p < 0.05. As our baseline for the unsupervised task, an informed heuristic (Heuristic) is used, where all first posts are labelled with link 0 (i.e. link to a virtual root) and all other posts are labelled with link 1 (i.e. link to the immediately preceding post).", "cite_spans": [ { "start": 223, "end": 234, "text": "(Yeh, 2000)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Assumptions, Experiments and Analysis", "sec_num": "6" }, { "text": "As is explained in Section 4, it is possible for there to be multiple links from a given post. Because these kinds of posts, which account for less than 5% of the total posts, are sparse in the dataset, we only consider recovering one link per post in our experiments. However, our evaluation still considers all links (meaning that it is not possible for our methods to achieve an F-score of 1.0).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Assumptions, Experiments and Analysis", "sec_num": "6" }, { "text": "We observe that in web user forum threads, if a post replies to a preceding post, the two posts are usually semantically related and lexically similar. Based on this observation, we make the following assumption:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initial Assumption and Experiments", "sec_num": "6.1" }, { "text": "Assumption 1. A post should be similar to the preceding post it is linked to.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initial Assumption and Experiments", "sec_num": "6.1" }, { "text": "This assumption leads to our first unsupervised model, which compares each post (except for the first and second) in a given thread with all its preceding posts one by one, by firstly identifying the lexical chains using the lexical chainers described in Section 3 and then calculating the inter-post lexical similarity using the methods explained in Section 5. The experimental results are shown in Table 1 .", "cite_spans": [], "ref_spans": [ { "start": 400, "end": 407, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Initial Assumption and Experiments", "sec_num": "6.1" }, { "text": "From Table 1 we can see that no results surpass the Heuristic baseline. Further investigation reveals that while Assumption 1 is reasonable, it is not always correct -i.e. similar posts are not always linked together. For example, an answer post later in a thread might be linked back to the first question post but be more similar to preceding answer posts, to which it is not linked, simply because they are all answers to the same question. The initial experiments show that more careful analysis is needed to use inter-post lexical similarity to reconstruct inter-post linking.", "cite_spans": [], "ref_spans": [ { "start": 5, "end": 12, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Initial Assumption and Experiments", "sec_num": "6.1" }, { "text": "Because Post 1 and Post 2 are always labelled with link 0 and 1 respectively, our analysis starts from Post 3 of each thread. Based on the analysis, the second assumption is made: Assumption 2. If the Post 3 vs. Post 1 lexical similarity is larger than the Post 2 vs. 
Post 1 lexical similarity, then Post 3 is more likely to be linked back to Post 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post 3 Analysis", "sec_num": "6.2" }, { "text": "Assumption 2 leads to an unsupervised approach which combines the three lexical chaining algorithms introduced in Section 3 with the four weighting schemes explained in Section 5 to measure Post 3 vs. Post 1 similarity and Post 2 vs. Post 1 similarity. If the former is larger, Post 3 is linked back to Post 1; otherwise Post 3 is linked back to Post 2. As for the other posts, the link labels are the same as the ones from the Heuristic baseline. The experimental results are shown in Table 2 .", "cite_spans": [], "ref_spans": [ { "start": 486, "end": 493, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Post 3 Analysis", "sec_num": "6.2" }, { "text": "From the results in Table 2 (Table 2: Results from the Assumption 2 based unsupervised approach, using three lexical chaining algorithms with four different weighting schemes) we can see that Chainer SV is the only lexical chaining algorithm", "cite_spans": [], "ref_spans": [ { "start": 20, "end": 27, "text": "Table 2", "ref_id": null }, { "start": 94, "end": 101, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Post 3 Analysis", "sec_num": "6.2" }, { "text": "that leads to results which are better than the Heuristic baseline. Analysis over the lexical chains generated by the three lexical chainers shows that both Chainer Roget and Chainer W N extract very few chains, most of which contain only repetitions of the same word. This is probably because these two lexical chainers only consider nouns, and therefore have limited input tokens. This is especially true for Chainer Roget , which uses an old thesaurus (the 1911 edition) that does not contain modern technical terms such as Windows, OSX and PC. While Chainer W N uses WordNet, which has a larger and more modern vocabulary, the chainer considers very limited semantic relations (i.e. hypernyms, hyponyms and hyponyms of hypernyms). Moreover, the texts in forum posts are usually relatively short and informal, and contain typos and non-standard acronyms. These factors make it very difficult for Chainer Roget and Chainer W N to extract lexical chains. As for Chainer SV , because all the words (except for stop words) are considered as candidate words, and relations between words are flexible according to the thresholds (i.e. threshold a and threshold m ), relatively abundant lexical chains are generated. While some of the chains clearly capture lexical cohesion among words, some of the chains are hard to interpret. Nevertheless, the results from Chainer SV are encouraging for the unsupervised approach, and therefore further investigation is conducted using only Chainer SV .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post 3 Analysis", "sec_num": "6.2" }, { "text": "Because the experiments based on Assumption 2 derive promising results, further analysis is conducted to strengthen this assumption (Table 3: Results from the Assumption 3 based unsupervised approach, using Chainer SV with different weighting schemes). We notice that the posts from the initiator of a thread are often outliers compared to other posts -i.e. these posts are similar to the first post because they are from the same author, but at the same time an initiator rarely replies to his/her own posts. 
This observation leads to a stricter assumption:", "cite_spans": [], "ref_spans": [ { "start": 45, "end": 52, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Post 3 Analysis", "sec_num": "6.2" }, { "text": "Assumption 3. If Post 3 vs. Post 1 lexical similarity is larger than Post 2 vs. Post 1 lexical similarity and Post 3 is not posted by the initiator of the thread, then Post 3 is more likely to be linked back to Post 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post 3 Analysis", "sec_num": "6.2" }, { "text": "Based on Assumption 3, experiments are carried out using Chainer SV with different weighting schemes. We also introduce a stronger baseline (Heuristic user ) based on Assumption 3, where Post 3 is linked to Post 1 if these two posts are from different users, and all the other posts are linked as in Heuristic. The experimental results are shown in Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 345, "end": 352, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Post 3 Analysis", "sec_num": "6.2" }, { "text": "From Table 3 we can see that while all the results from Chainer SV are significantly better than the result from the Heuristic baseline, with the LCBan weighting leading to the best F \u00b5 of 0.816, these results are not significantly different from the Heuristic user baseline. It is clear that the improvements are attributable to the user constraint introduced in Assumption 3. This observation matches up with the results of supervised classification from Wang et al. (2011b) , where the benefits brought by text similarity based features (i.e. TitSim and PostSim) are subsumed by more effective user information based features (i.e. UserProf). ", "cite_spans": [ { "start": 450, "end": 469, "text": "Wang et al. (2011b)", "ref_id": "BIBREF28" } ], "ref_spans": [ { "start": 5, "end": 12, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Post 3 Analysis", "sec_num": "6.2" }, { "text": "It is interesting to see whether our unsupervised approach can contribute to the supervised methods by providing additional features. To test this idea, we add a lexical chaining based feature to the classifier of Wang et al. (2011b) based on Assumption 3. The feature value for each post is calculated using the following formula:", "cite_spans": [ { "start": 214, "end": 233, "text": "Wang et al. (2011b)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Lexical Chaining for Supervised Learning", "sec_num": "6.3" }, { "text": "feature = \\begin{cases} \\frac{sim(post_3, post_1)}{sim(post_2, post_1)} & \\text{for Post 3} \\\\ 0 & \\text{for all other posts} \\end{cases}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical Chaining for Supervised Learning", "sec_num": "6.3" }, { "text": "where sim is calculated using Chainer SV with different weighting methods (a sketch of this computation is given below). The experimental results are shown in Table 4 . From the results we can see that, by adding the additional feature extracted from lexical chains, the results improve slightly. The feature from Chainer SV with LCBan weighting leads to the best F \u00b5 of 0.897. These improvements are statistically insignificant, possibly because the information introduced by the lexical chaining feature is already captured by existing features. 
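For concreteness, the feature just described can be computed as in the following sketch (hypothetical code, not the implementation of Wang et al. (2011b); sim stands for any of the Chainer SV post-pair similarities from Section 5):

```python
def linking_feature(position, sim_3_1, sim_2_1):
    # Assumption 3-based feature: for Post 3, the ratio of the Post 3 vs.
    # Post 1 similarity to the Post 2 vs. Post 1 similarity; 0 otherwise.
    if position != 3:
        return 0.0
    if sim_2_1 == 0.0:
        return 0.0  # assumed guard against division by zero
    return sim_3_1 / sim_2_1
```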
It is also possible that better feature representations are needed for the lexical chains.", "cite_spans": [], "ref_spans": [ { "start": 113, "end": 120, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Lexical Chaining for Supervised Learning", "sec_num": "6.3" }, { "text": "These results are preliminary but nonetheless suggest the potential of utilising lexical chaining in the domain of web user forums.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical Chaining for Supervised Learning", "sec_num": "6.3" }, { "text": "To date, all experiments have been based on just the first three posts in a thread, whereas the majority of our threads contain more than three posts. We carried out preliminary experiments over full thread data, by generalising Assumption 3 to Post N for N \u2265 3. However, no significant improvements were achieved over an informed baseline with our unsupervised approach. This is probably because the situation for later posts (after Post 3) is more complicated, as more linking options are possible. Relaxing the assumptions entirely also led to disappointing results. What appears to be needed is a more sophisticated set of constraints, to generalise the assumptions made for Post 3 to all the posts. We leave this for future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments over All the Posts", "sec_num": "6.4" }, { "text": "Web user forums are a valuable information source for users to resolve specific information needs. However, the complex structure of forum threads poses a challenge for users trying to extract relevant information. While the linking structure of forum threads has the potential to improve information access, these inter-post links are not always available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "In this research, we explore unsupervised approaches for thread linking structure recovery, by automatically analysing the lexical cohesion between posts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Lexical cohesion between posts is measured using lexical chaining, a technique to extract lists of related word tokens from a given discourse. Most lexical chaining algorithms use domain-independent thesauri and only consider nouns. In the domain of web user forums, where the texts of posts can be very short and contain various typos and special terms, these conventional lexical chaining algorithms often struggle to find proper lexical chains. To address this problem, we proposed the use of statistical associations between words, which are captured by the WORDSPACE model, to construct lexical chains. Our preliminary experiments derive results which are better than an informed baseline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "In future work, we want to explore methods which can be used to recover all the inter-post links. First, we plan to conduct more detailed analysis of inter-post lexical cohesion and its relationship with inter-post links. Second, we want to investigate human linking behaviour in web user forums, hoping to find significant linking patterns. Furthermore, we want to investigate more methods and resources for constructing lexical chains, e.g. Cramer et al. 
(2012)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "On top of exploring these potential approaches, it is worth considering stronger baseline methods such as using cosine similarity to measure inter-post similarity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "The Chainer SV , as described in Section 4, is built on a WORDSPACE model learnt over a subset of four domains. It is also worth comparing with a more general WORDSPACE model learnt over the whole dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "As for supervised learning, it would be interesting to conduct experiments out of domain (i.e. train the model over threads from one forum, and classify threads from another forum), and compare with the unsupervised approaches. We also hope to investigate more effective ways of extracting features from the created lexical chains to improve supervised learning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "http://people.virginia.edu/\u02dcma5ke/ classes/files/cs65lexicalChain.pdf", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Available from http://www.csse.unimelb.edu. au/research/lt/resources/conll2010-thread/ 3 http://forums.cnet.com/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The authors wish to thank Malcolm Augat and Margaret Ladlow for providing access to their lexical chaining code, which was used to implement Chainer W N . NICTA is funded by the Australian government as represented by Department of Broadband, Communication and Digital Economy, and the Australian Research Council through the ICT Centre of Excellence programme.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Reconstruction of threaded conversations in online discussion forums", "authors": [ { "first": "Erik", "middle": [], "last": "Aumayr", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Chan", "suffix": "" }, { "first": "Conor", "middle": [], "last": "Haye", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Fifth International AAAI Conference on Weblogs and Social Media (ICWSM-11)", "volume": "", "issue": "", "pages": "26--33", "other_ids": {}, "num": null, "urls": [], "raw_text": "Erik Aumayr, Jeffrey Chan, and Conor Haye. 2011. Re- construction of threaded conversations in online dis- cussion forums. In Proceedings of the Fifth Interna- tional AAAI Conference on Weblogs and Social Media (ICWSM-11), pages 26-33, Barcelona, Spain.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Using lexical chains for text summarization", "authors": [ { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Elhadad", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the Intelligent Scalable Text Summarization Workshop", "volume": "", "issue": "", "pages": "10--17", "other_ids": {}, "num": null, "urls": [], "raw_text": "Regina Barzilay and Michael Elhadad. 1997. Using lex- ical chains for text summarization. 
In Proceedings of the Intelligent Scalable Text Summarization Workshop, pages 10-17, Madrid, Spain.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The Macquarie Thesaurus", "authors": [], "year": 1986, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.R.L. Bernard, editor. 1986. The Macquarie Thesaurus. Macquarie Library,, Sydney, Australia.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Random projection in dimensionality reduction: applications to image and text data", "authors": [ { "first": "Ella", "middle": [], "last": "Bingham", "suffix": "" }, { "first": "Heikki", "middle": [], "last": "Mannila", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '01)", "volume": "", "issue": "", "pages": "245--250", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ella Bingham and Heikki Mannila. 2001. Random pro- jection in dimensionality reduction: applications to image and text data. In Proceedings of the Seventh ACM SIGKDD International Conference on Knowl- edge Discovery and Data Mining (KDD '01), pages 245-250, San Francisco, USA.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "CRFSGD software", "authors": [ { "first": "L\u00e9on", "middle": [], "last": "Bottou", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L\u00e9on Bottou. 2011. CRFSGD software. http:// leon.bottou.org/projects/sgd.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Exploring resources for lexical chaining: A comparison of automated semantic relatedness measures and human judgments", "authors": [ { "first": "Irene", "middle": [], "last": "Cramer", "suffix": "" }, { "first": "Tonio", "middle": [], "last": "Wandmacher", "suffix": "" }, { "first": "Ulli", "middle": [], "last": "Waltinger", "suffix": "" } ], "year": 2012, "venue": "Studies in Computational Intelligence", "volume": "370", "issue": "", "pages": "377--396", "other_ids": {}, "num": null, "urls": [], "raw_text": "Irene Cramer, Tonio Wandmacher, and Ulli Waltinger. 2012. Exploring resources for lexical chaining: A comparison of automated semantic relatedness measures and human judgments. In Alexander Mehler, Kai-Uwe K\u00fchnberger, Henning Lobin, Har- ald L\u00fcngen, Angelika Storrer, and Andreas Witt, edi- tors, Modeling, Learning, and Processing of Text Tech- nological Data Structures, volume 370 of Studies in Computational Intelligence, pages 377-396. Springer Berlin, Heidelberg.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "It pays to be picky: An evaluation of thread retrieval in online forums", "authors": [ { "first": "Jonathan", "middle": [ "L" ], "last": "Elsas", "suffix": "" }, { "first": "Jaime", "middle": [ "G" ], "last": "Carbonell", "suffix": "" } ], "year": 2009, "venue": "Proceedings of 32nd International ACM-SIGIR Conference on Research and Development in Information Retrieval (SIGIR'09)", "volume": "", "issue": "", "pages": "714--715", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan L. Elsas and Jaime G. Carbonell. 2009. It pays to be picky: An evaluation of thread retrieval in online forums. 
In Proceedings of 32nd Interna- tional ACM-SIGIR Conference on Research and De- velopment in Information Retrieval (SIGIR'09), pages 714-715, Boston, USA.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "WordNet: An Electronic Lexical Database", "authors": [], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christiane Fellbaum, editor. 1998. WordNet: An Elec- tronic Lexical Database. The MIT Press, Cambridge, USA.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Improving word sense disambiguation in lexical chaining", "authors": [ { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Kathleen", "middle": [], "last": "Mckeown", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 18th International Joint Conference on Artificial Intelligence (IJCAI-03)", "volume": "", "issue": "", "pages": "1486--1488", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michel Galley and Kathleen McKeown. 2003. Improv- ing word sense disambiguation in lexical chaining. In Proceedings of the 18th International Joint Confer- ence on Artificial Intelligence (IJCAI-03), pages 1486- 1488, Acapulco, Mexico.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Lexical chains as representations of context for the detection and correction of malapropisms", "authors": [ { "first": "Graeme", "middle": [], "last": "Hirst", "suffix": "" }, { "first": "David", "middle": [], "last": "St-Onge", "suffix": "" } ], "year": 1998, "venue": "WordNet: An electronic lexical database", "volume": "", "issue": "", "pages": "305--332", "other_ids": {}, "num": null, "urls": [], "raw_text": "Graeme Hirst and David St-Onge. 1998. Lexical chains as representations of context for the detection and cor- rection of malapropisms. In Christiane Fellbaum, ed- itor, WordNet: An electronic lexical database, pages 305-332. The MIT Press, Cambridge, USA.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The design and implementation of an electronic lexical knowledge base", "authors": [ { "first": "Mario", "middle": [], "last": "Jarmasz", "suffix": "" }, { "first": "Stan", "middle": [], "last": "Szpakowicz", "suffix": "" } ], "year": 2001, "venue": "Advances in Artificial Intelligence", "volume": "2056", "issue": "", "pages": "325--334", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mario Jarmasz and Stan Szpakowicz. 2001. The design and implementation of an electronic lexical knowledge base. Advances in Artificial Intelligence, 2056(2001):325-334.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Not as easy as it seems: Automating the construction of lexical chains using rogets thesaurus", "authors": [ { "first": "Mario", "middle": [], "last": "Jarmasz", "suffix": "" }, { "first": "Stan", "middle": [], "last": "Szpakowicz", "suffix": "" } ], "year": 2003, "venue": "Advances in Artificial Intelligence", "volume": "2671", "issue": "", "pages": "994--999", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mario Jarmasz and Stan Szpakowicz. 2003. Not as easy as it seems: Automating the construction of lexical chains using rogets thesaurus. 
Advances in Artificial Intelligence, 2671(2003):994-999.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "SPEECH and LANGUAGE PROCESSING: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition", "authors": [ { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "James", "middle": [ "H" ], "last": "Martin", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Jurafsky and James H. Martin. 2008. SPEECH and LANGUAGE PROCESSING: An Introduction to Natural Language Processing, Computational Lin- guistics, and Speech Recognition. Pearson Prentice Hall, 2nd edition.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Tagging and linking web forum posts", "authors": [ { "first": "Nam", "middle": [], "last": "Su", "suffix": "" }, { "first": "Li", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Wang", "suffix": "" }, { "first": "", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 14th Conference on Computational Natural Language Learning (CoNLL-2010)", "volume": "", "issue": "", "pages": "192--202", "other_ids": {}, "num": null, "urls": [], "raw_text": "Su Nam Kim, Li Wang, and Timothy Baldwin. 2010. Tagging and linking web forum posts. In Proceedings of the 14th Conference on Computational Natural Lan- guage Learning (CoNLL-2010), pages 192-202, Upp- sala, Sweden.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Text segmentation based on similarity between words", "authors": [ { "first": "Hideki", "middle": [], "last": "Kozima", "suffix": "" } ], "year": 1993, "venue": "Proceedings of the 31st", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hideki Kozima. 1993. Text segmentation based on sim- ilarity between words. In Proceedings of the 31st", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Annual Meeting of the Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "286--288", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 286-288, Columbus, USA.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Automatic retrieval and clustering of similar words", "authors": [ { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 36th Annual Meeting of the ACL and 17th International Conference on Computational Linguistics (COLING/ACL-98)", "volume": "", "issue": "", "pages": "768--774", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekang Lin. 1998a. Automatic retrieval and cluster- ing of similar words. In Proceedings of the 36th An- nual Meeting of the ACL and 17th International Con- ference on Computational Linguistics (COLING/ACL- 98), pages 768-774, Montreal, Canada.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "An information-theoretic definition of similarity", "authors": [ { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 15th International Conference on Machine Learning (ICML'98)", "volume": "", "issue": "", "pages": "296--304", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekang Lin. 1998b. An information-theoretic definition of similarity. 
In Proceedings of the 15th International Conference on Machine Learning (ICML'98), pages 296-304, Madison, USA.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Lexical chains using distributional measures of concept distance", "authors": [ { "first": "Meghana", "middle": [], "last": "Marathe", "suffix": "" }, { "first": "Graeme", "middle": [], "last": "Hirst", "suffix": "" } ], "year": 2010, "venue": "Proceedings, 11th International Conference on Intelligent Text Processing and Computational Linguistics (CICLing-2010)", "volume": "", "issue": "", "pages": "291--302", "other_ids": {}, "num": null, "urls": [], "raw_text": "Meghana Marathe and Graeme Hirst. 2010. Lexical chains using distributional measures of concept distance. In Proceedings, 11th International Conference on Intelligent Text Processing and Computational Linguistics (CICLing-2010), pages 291-302, Ia\u015fi, Romania.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Distributional measures of concept-distance: A task-oriented evaluation", "authors": [ { "first": "Saif", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Graeme", "middle": [], "last": "Hirst", "suffix": "" } ], "year": 2006, "venue": "Proceedings, 2006 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "35--43", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif Mohammad and Graeme Hirst. 2006. Distributional measures of concept-distance: A task-oriented evaluation. In Proceedings, 2006 Conference on Empirical Methods in Natural Language Processing (EMNLP 2006), pages 35-43, Sydney, Australia.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Lexical chains for question answering", "authors": [ { "first": "Dan", "middle": [], "last": "Moldovan", "suffix": "" }, { "first": "Adrian", "middle": [], "last": "Novischi", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 19th International Conference on Computational Linguistics (COLING 2002)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Moldovan and Adrian Novischi. 2002. Lexical chains for question answering. In Proceedings of the 19th International Conference on Computational Linguistics (COLING 2002), Taiwan.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Lexical cohesion computed by thesaural relations as an indicator of the structure of text", "authors": [ { "first": "Jane", "middle": [], "last": "Morris", "suffix": "" }, { "first": "Graeme", "middle": [], "last": "Hirst", "suffix": "" } ], "year": 1991, "venue": "Computational Linguistics", "volume": "17", "issue": "1", "pages": "21--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jane Morris and Graeme Hirst. 1991. Lexical cohesion computed by thesaural relations as an indicator of the structure of text. Computational Linguistics, 17(1):21-48.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Fishing for exactness", "authors": [ { "first": "Ted", "middle": [], "last": "Pedersen", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the South-Central SAS Users Group Conference (SCSUG-96)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ted Pedersen. 1996. Fishing for exactness.
In Proceedings of the South-Central SAS Users Group Conference (SCSUG-96), Austin, USA.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Automatic word sense discrimination", "authors": [ { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 1998, "venue": "Computational Linguistics", "volume": "24", "issue": "1", "pages": "97--123", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hinrich Sch\u00fctze. 1998. Automatic word sense discrimination. Computational Linguistics, 24(1):97-123.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Online community search using thread structure", "authors": [ { "first": "Jangwon", "middle": [], "last": "Seo", "suffix": "" }, { "first": "W", "middle": [ "Bruce" ], "last": "Croft", "suffix": "" }, { "first": "David", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 18th ACM Conference on Information and Knowledge Management (CIKM 2009)", "volume": "", "issue": "", "pages": "1907--1910", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jangwon Seo, W. Bruce Croft, and David A. Smith. 2009. Online community search using thread structure. In Proceedings of the 18th ACM Conference on Information and Knowledge Management (CIKM 2009), pages 1907-1910, Hong Kong, China.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Textual context analysis for information retrieval", "authors": [ { "first": "Mark", "middle": [ "A" ], "last": "Stairmand", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the 20th annual international ACM SIGIR conference on Research and development in information retrieval (SIGIR '97)", "volume": "", "issue": "", "pages": "140--147", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark A. Stairmand. 1997. Textual context analysis for information retrieval. In Proceedings of the 20th annual international ACM SIGIR conference on Research and development in information retrieval (SIGIR '97), pages 140-147, Philadelphia, USA. Nicola Stokes and Joe Carthy. 2001. Combining semantic and syntactic document classifiers to improve first story detection. In Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval (SIGIR 2001), pages 424-425, New Orleans, USA.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "SeLeCT: a lexical cohesion based news story segmentation system", "authors": [ { "first": "Nicola", "middle": [], "last": "Stokes", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Carthy", "suffix": "" }, { "first": "Alan", "middle": [ "F" ], "last": "Smeaton", "suffix": "" } ], "year": 2004, "venue": "AI Communications", "volume": "17", "issue": "1", "pages": "3--12", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nicola Stokes, Joe Carthy, and Alan F. Smeaton. 2004. SeLeCT: a lexical cohesion based news story segmentation system.
AI Communications, 17(1):3-12.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Learning online discussion structures by conditional random fields", "authors": [ { "first": "Hongning", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Chi", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Chengxiang", "middle": [], "last": "Zhai", "suffix": "" }, { "first": "Jiawei", "middle": [], "last": "Han", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 34th Annual International ACM SIGIR Conference (SIGIR 2011)", "volume": "", "issue": "", "pages": "435--444", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hongning Wang, Chi Wang, ChengXiang Zhai, and Jiawei Han. 2011a. Learning online discussion structures by conditional random fields. In Proceedings of the 34th Annual International ACM SIGIR Conference (SIGIR 2011), pages 435-444, Beijing, China.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Predicting thread discourse structure over technical web forums", "authors": [ { "first": "Li", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Lui", "suffix": "" }, { "first": "Su", "middle": [ "Nam" ], "last": "Kim", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "13--25", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li Wang, Marco Lui, Su Nam Kim, Joakim Nivre, and Timothy Baldwin. 2011b. Predicting thread discourse structure over technical web forums. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 13-25, Edinburgh, UK.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Semantic Vectors: a scalable open source package and online technology management application", "authors": [ { "first": "Dominic", "middle": [], "last": "Widdows", "suffix": "" }, { "first": "Kathleen", "middle": [], "last": "Ferraro", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Sixth International Language Resources and Evaluation (LREC'08)", "volume": "", "issue": "", "pages": "1183--1190", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dominic Widdows and Kathleen Ferraro. 2008. Semantic Vectors: a scalable open source package and online technology management application. In Proceedings of the Sixth International Language Resources and Evaluation (LREC'08), pages 1183-1190, Marrakech, Morocco.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Learning effective ranking functions for newsgroup search", "authors": [ { "first": "Wensi", "middle": [], "last": "Xi", "suffix": "" }, { "first": "Jesper", "middle": [], "last": "Lind", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Brill", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 27th International ACM-SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2004)", "volume": "", "issue": "", "pages": "394--401", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wensi Xi, Jesper Lind, and Eric Brill. 2004. Learning effective ranking functions for newsgroup search. In Proceedings of the 27th International ACM-SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2004), pages 394-401,
Sheffield, UK.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "More accurate tests for the statistical significance of result differences", "authors": [ { "first": "Alexander", "middle": [], "last": "Yeh", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 18th International Conference on Computational Linguistics (COLING 2000)", "volume": "", "issue": "", "pages": "947--953", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Yeh. 2000. More accurate tests for the statistical significance of result differences. In Proceedings of the 18th International Conference on Computational Linguistics (COLING 2000), pages 947-953, Saarbr\u00fccken, Germany.", "links": null } }, "ref_entries": { "FIGREF1": { "num": null, "uris": null, "type_str": "figure", "text": "" }, "TABREF0": { "text": "Algorithm 2 Chainer SV: chains = empty; select a set of candidate tokens; for each candidate token t_i do: max_score = max_{c_j \u2208 chains} sim_tc(t_i, c_j); max_chain = argmax_{c_j \u2208 chains} sim_tc(t_i, c_j)", "num": null, "content": "", "type_str": "table", "html": null }, "TABREF2": { "text": "", "num": null, "content": "
: Results from the Assumption 1-based unsupervised approach, using three lexical chaining algorithms with four different weighting schemes.
", "type_str": "table", "html": null }, "TABREF6": { "text": "", "num": null, "content": "
: Supervised linking classification by applying CRF SGD over features from Wang et al. (2011b) without (NoLC) and with (WithLC) features extracted from lexical chains created by Chainer SV with different weighting schemes.
", "type_str": "table", "html": null } } } }