{ "paper_id": "S15-1012", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:37:05.736952Z" }, "title": "Collective Document Classification with Implicit Inter-document Semantic Relationships", "authors": [ { "first": "Clinton", "middle": [], "last": "Burford", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Melbourne", "location": { "postCode": "3010", "settlement": "VIC", "country": "Australia" } }, "email": "" }, { "first": "Steven", "middle": [], "last": "Bird", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Melbourne", "location": { "postCode": "3010", "settlement": "VIC", "country": "Australia" } }, "email": "sbird@unimelb.edu.au" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Melbourne", "location": { "postCode": "3010", "settlement": "VIC", "country": "Australia" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper addresses the question of how document classifiers can exploit implicit information about document similarity to improve document classifier accuracy. We infer document similarity using simple n-gram overlap, and demonstrate that this improves overall document classification performance over two datasets. As part of this, we find that collective classification based on simple iterative classifiers outperforms the more complex and computationally-intensive dual classifier approach.", "pdf_parse": { "paper_id": "S15-1012", "_pdf_hash": "", "abstract": [ { "text": "This paper addresses the question of how document classifiers can exploit implicit information about document similarity to improve document classifier accuracy. We infer document similarity using simple n-gram overlap, and demonstrate that this improves overall document classification performance over two datasets. As part of this, we find that collective classification based on simple iterative classifiers outperforms the more complex and computationally-intensive dual classifier approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In machine learning, there is a rich tradition of research into the two tasks of: (1) \"point-wise\" classification, where each instance is represented as an independent instance, and the predictive model attempts to learn a decision boundary to capture instances of a given class; and (2) graphical learning and inference, where instances are connected in a graph, and learning/inference take place relative to the graph structure connecting those instances, based primarily on either conditional dependence (i.e. one event is dependent on the outcome of another) or \"homophily\" (i.e. the tendency for connected instances to share various properties). 
Various joint models that combine the two have also been proposed, although, in natural language processing at least, these have focused largely on conditional dependence, in the form of models such as hidden Markov models (Rabiner and Juang, 1986) and conditional random fields (Lafferty et al., 2001), where independent properties of words are combined with conditional dependencies based on their context of use to, for example, jointly predict the senses of all words in a given sentence (Ciaramita and Johnson, 2003; Johannsen et al., 2014).", "cite_spans": [ { "start": 875, "end": 900, "text": "(Rabiner and Juang, 1986)", "ref_id": "BIBREF24" }, { "start": 931, "end": 954, "text": "(Lafferty et al., 2001)", "ref_id": "BIBREF14" }, { "start": 1138, "end": 1167, "text": "(Ciaramita and Johnson, 2003;", "ref_id": "BIBREF4" }, { "start": 1168, "end": 1191, "text": "Johannsen et al., 2014)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper explores the utility of homophily within joint models for document-level semantic classification, focusing specifically on tasks which are not associated with any explicit graph structure. That is, we examine whether implicit semantic document links can improve the results of a point-wise (content-based) classification approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Explicit inter-document links have been variously shown to improve document classifier performance, based on information sources including hyperlinks in web documents (Slattery and Craven, 1998; Oh et al., 2000; Yang et al., 2002), direct name-references in congressional debates (Thomas et al., 2006; Burfoot et al., 2011; Stoyanov and Eisner, 2012), citations in scientific papers (Giles et al., 1998; Lu and Getoor, 2003; McDowell et al., 2007), and user mentions or retweets in social media (Jiang et al., 2011; Tan et al., 2011). However, document collections often don't contain explicit inter-document links, limiting the practical usefulness of such methods. 
In this paper, we seek to expand the reach of research that incorporates linking information, by inducing implicit links between documents and demonstrating that the resultant (noisy) network structure improves document classification accuracy.", "cite_spans": [ { "start": 167, "end": 194, "text": "(Slattery and Craven, 1998;", "ref_id": "BIBREF27" }, { "start": 195, "end": 211, "text": "Oh et al., 2000;", "ref_id": "BIBREF20" }, { "start": 212, "end": 230, "text": "Yang et al., 2002)", "ref_id": "BIBREF35" }, { "start": 281, "end": 302, "text": "(Thomas et al., 2006;", "ref_id": "BIBREF33" }, { "start": 303, "end": 324, "text": "Burfoot et al., 2011;", "ref_id": null }, { "start": 325, "end": 351, "text": "Stoyanov and Eisner, 2012)", "ref_id": "BIBREF29" }, { "start": 385, "end": 405, "text": "(Giles et al., 1998;", "ref_id": "BIBREF6" }, { "start": 406, "end": 426, "text": "Lu and Getoor, 2003;", "ref_id": "BIBREF16" }, { "start": 427, "end": 449, "text": "McDowell et al., 2007)", "ref_id": "BIBREF17" }, { "start": 498, "end": 515, "text": "Tan et al., 2011)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The intuition underlying this work is that some types of documents have features which are either absent or ambiguous in training data, but which have the special characteristic of indicating relationships between the labels of documents. Most often, an inter-document relationship indicates that two documents have the same label, but depending on the task, it may also indicate that they have different labels. In either case, classifiers gain an advantage if they can consider these features as well as conventional content-based features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The major contribution of this paper is in showing that document classification accuracy can be improved over a range of datasets using automatically-induced implicit semantic inter-document links, in combination with collective classification. We are the first to achieve this using a general-purpose setup, as applied to a range of datasets. Our results are achieved using n-gram overlap features for both the CONVOTE and BITTERLEMONS corpora, without the use of annotations for explicit semantic inter-document relationships. A second contribution of this work is the finding that simple iterative classifiers outperform more complex dual classifiers when using implicit inter-document links. This finding contradicts earlier work using explicit document links, where the dual classifier approach has generally been found to perform best (Thomas et al., 2006; Burfoot et al., 2011). While the work presented here is conceptually quite simple, the findings are significant and potentially open the door to accuracy improvements on a range of document-level semantic tasks.", "cite_spans": [ { "start": 827, "end": 848, "text": "(Thomas et al., 2006;", "ref_id": "BIBREF33" }, { "start": 849, "end": 870, "text": "Burfoot et al., 2011)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Previous work has dealt with the question of collective document classification using implicit inter-document relationships in two basic ways: 1. 
proximity: use a spatial or temporal dimension of the domain to relate documents (Agrawal et al., 2003; Goldberg et al., 2007; McDowell et al., 2009; Somasundaran et al., 2009).", "cite_spans": [ { "start": 226, "end": 248, "text": "(Agrawal et al., 2003;", "ref_id": "BIBREF0" }, { "start": 249, "end": 271, "text": "Goldberg et al., 2007;", "ref_id": "BIBREF7" }, { "start": 272, "end": 294, "text": "McDowell et al., 2009;", "ref_id": "BIBREF18" }, { "start": 295, "end": 321, "text": "Somasundaran et al., 2009)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "2. similarity: relate documents via some notion of their content-based similarity (Blum and Chawla, 2001; Joachims, 2003; Takamura et al., 2007; Sindhwani and Melville, 2008; Jurgens, 2013). The work using similarity-based links is the closest to ours but is also strongly differentiated because it focuses on transductive semi-supervised classification. That task begins with the premise that only a small amount of labelled training data is available, so content-only classification is likely to be inaccurate. By contrast, the supervised techniques in this paper deal with large amounts of labelled training data and relatively high content-only performance: 76% for CONVOTE and 87% for BITTERLEMONS. It is reasonable to assume that the types of similarity-based relationships derived for transductive semi-supervised classification would be ineffective in a supervised context. This conclusion is supported by an experiment that shows that the vocabularies of document pairs tend to overlap to similar degrees regardless of document class (Pang and Lee, 2005).", "cite_spans": [ { "start": 82, "end": 105, "text": "(Blum and Chawla, 2001;", "ref_id": "BIBREF2" }, { "start": 106, "end": 121, "text": "Joachims, 2003;", "ref_id": "BIBREF9" }, { "start": 122, "end": 144, "text": "Takamura et al., 2007;", "ref_id": "BIBREF30" }, { "start": 145, "end": 174, "text": "Sindhwani and Melville, 2008;", "ref_id": "BIBREF26" }, { "start": 175, "end": 189, "text": "Jurgens, 2013)", "ref_id": "BIBREF12" }, { "start": 1043, "end": 1063, "text": "(Pang and Lee, 2005)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We experiment with two corpora in this research: CONVOTE and BITTERLEMONS. These two are selected on the grounds that they satisfy two intuitive criteria about types of text collections that may contain features that are not useful for content-only classification, but which may indicate relationships between pairs of documents: (1) the corpora both use an unconstrained prose vocabulary, which increases the likelihood that authors will use distinctive words or sequences of words that are not frequent enough to be useful in training, but which can be used to semantically relate pairs of documents (cf. newswire articles); and (2) the majority of the text content in both corpora is clearly relevant to the dimension of classification, i.e. there is minimal use of \"boilerplate\" or \"background\" material, so the pool from which to select task-relevant content to form inter-document semantic relationships is larger.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": "3" }, { "text": "CONVOTE (Thomas et al., 2006) consists of US congressional speeches relating to a specific bill or resolution, and the ultimate vote of each speaker (\"for\" or \"against\"). 
The document classifier uses the text of each speech to predict the vote of the speaker. Three modifications are made to the corpus: (1) speeches by the same speaker are concatenated, to more naturally represent the requirement that each speaker only has one vote; (2) we drop the fixed train, test, development set assignments from the original dataset, and instead evaluate using leave-one-out cross-validation over the 53 debates contained in the dataset, to allow for a more statistically robust evaluation; and (3) we discard the manually annotated inter-document relationships based on references to speaker names, because implicit relationships are the focus of this work. Table 1 gives statistics for our rendering of CONVOTE. The identical figures for the average number of speeches and speakers per debate reflect the fact that each speaker now contributes only one unified speech.", "cite_spans": [ { "start": 8, "end": 29, "text": "(Thomas et al., 2006)", "ref_id": "BIBREF33" } ], "ref_spans": [ { "start": 851, "end": 858, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "CONVOTE", "sec_num": "3.1" }, { "text": "BITTERLEMONS (Lin et al., 2006) is a collection of articles on the Israeli-Arab conflict harvested from the Bitterlemons website. In each weekly issue, the editors contribute an article giving their perspectives on some aspect of the conflict, and two guest authors contribute articles, one from an Israeli perspective and the other from a Palestinian perspective. Sometimes these guest contributions take the form of an interview, in which case we remove the questions (from the editors) and retain only the answers.", "cite_spans": [ { "start": 13, "end": 30, "text": "(Lin et al., 2006", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "BITTERLEMONS", "sec_num": "3.2" }, { "text": "The statistics in Table 2 give a picture of the size and structure of BITTERLEMONS.", "cite_spans": [], "ref_spans": [ { "start": 18, "end": 25, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "BITTERLEMONS", "sec_num": "3.2" }, { "text": "In accordance with Lin et al. (2006), we experiment with held-out evaluation, with all articles contributed by the editors placed in the training set and those contributed by the guests in the test set. This allows the task to be framed as \"perspective\" classification, rather than author attribution, i.e. we are focused on the content of the contributions rather than stylistic or biographical features that may identify one editor or the other.", "cite_spans": [ { "start": 19, "end": 36, "text": "Lin et al. (2006)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "BITTERLEMONS", "sec_num": "3.2" }, { "text": "To implement the hypothesis that documents that use the same rare word or sequence of words are more likely to carry the same label, we calculate a cosine similarity metric between every pairing of documents in a given corpus, using an idf-weighted term vector to represent each document d_i. The idf weighting serves to emphasise terms that are rare within the corpus, and de-emphasise terms that are common. 
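As a concrete reference, the pairwise similarity computation can be sketched as follows (a minimal sketch rather than the exact implementation: the helper names are hypothetical, documents are token lists, and the binary n-gram features anticipate the existence-based refinement described in the next sentence):

```python
from collections import Counter
from math import log, sqrt

def ngrams(tokens, n):
    # Existence-based features: a set, so each n-gram counts at most once.
    return {tuple(tokens[k:k + n]) for k in range(len(tokens) - n + 1)}

def idf_weighted_vectors(docs, n):
    """Represent each document as an idf-weighted vector of its n-grams."""
    doc_grams = [ngrams(d, n) for d in docs]
    df = Counter(g for grams in doc_grams for g in grams)  # document frequency
    N = len(docs)
    # Rare n-grams get high weight; common n-grams get weight near zero.
    return [{g: log(N / df[g]) for g in grams} for grams in doc_grams]

def cosine(u, v):
    dot = sum(w * v[g] for g, w in u.items() if g in v)
    norm = sqrt(sum(w * w for w in u.values())) * sqrt(sum(w * w for w in v.values()))
    return dot / norm if norm > 0 else 0.0

# s(i, j) for every document pairing in a corpus, e.g. with 4-grams:
# vecs = idf_weighted_vectors(docs, n=4)
# sims = {(i, j): cosine(vecs[i], vecs[j])
#         for i in range(len(docs)) for j in range(i)}
```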
To further enhance this effect, we represent terms by existence-based rather than frequency-based features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implicit Inter-document Similarity", "sec_num": "4" }, { "text": "An example of a (tokenised) high-idf sentence pair from CONVOTE is (with the speaker, party affiliation and vote shown in each case, and the high-idf token underlined):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implicit Inter-document Similarity", "sec_num": "4" }, { "text": "(1) the president s top counselor dan bartlett said this week that there is no magic wand to reduce gas prices .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implicit Inter-document Similarity", "sec_num": "4" }, { "text": "[CROWLEY, JOE (D); AGAINST]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implicit Inter-document Similarity", "sec_num": "4" }, { "text": "(2) mr. chairman , yesterday the president said , i wish i could simply wave a magic wand and lower gas prices tomorrow. [EMANUEL, RAHM (D); AGAINST]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implicit Inter-document Similarity", "sec_num": "4" }, { "text": "An example for BITTERLEMONS is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implicit Inter-document Similarity", "sec_num": "4" }, { "text": "(3) Even if we /wanted/ to succumb to Israeli pressure, it is impossible to make a Palestinian teach his child that Jaffa or Haifa or Palestine ... For other examples and more justification of this methodology, see Burford (2013).", "cite_spans": [ { "start": 211, "end": 225, "text": "Burford (2013)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Implicit Inter-document Similarity", "sec_num": "4" }, { "text": "Two standard approaches to collective classification are: (1) the dual classifier approach; and (2) the iterative classifier approach. We briefly review these approaches below, but refer the reader to Sen et al. (2008), McDowell et al. (2009) and Burford (2013) for a more detailed methodological discussion.", "cite_spans": [ { "start": 201, "end": 217, "text": "Sen et al. (2008", "ref_id": "BIBREF25" }, { "start": 218, "end": 242, "text": "), McDowell et al. (2009", "ref_id": "BIBREF18" }, { "start": 247, "end": 261, "text": "Burford (2013)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Collective Classification", "sec_num": "5" }, { "text": "The dual classifier approach is made up of three steps, as depicted in Figure 1: 1. Base classification: Produce base classifications using (1) a content-only classifier; and (2) a relationship classifier. The content-only classifier makes a binary prediction: FOR or AGAINST for CONVOTE, and ISRAELI or PALESTINIAN for BITTERLEMONS. The relationship classifier indicates the preference that each document pair be SAME or not-SAME.", "cite_spans": [], "ref_spans": [ { "start": 71, "end": 80, "text": "Figure 1:", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Dual Classifier Approach", "sec_num": "5.1" }, { "text": "2. Normalisation: Normalise the scores, producing values for the classification preference functions, \\psi_i, which can be input into a collective classification algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dual Classifier Approach", "sec_num": "5.1" }, { "text": "3. 
Decoding: Produce final classifications by optimally decoding the content-only and relationship-level preferences using a collective classification algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dual Classifier Approach", "sec_num": "5.1" }, { "text": "For our content-only base classifier, we use the same bag-of-words SVM with binary (existence-based) unigram features as (Thomas et al., 2006). This classifier has been shown to be the best bag-of-words model for BITTERLEMONS (Beigman Klebanov et al., 2010). As our relationship base classifier, we use the cosine similarity scores described above, calculated using n-grams of several different lengths.", "cite_spans": [ { "start": 120, "end": 141, "text": "(Thomas et al., 2006)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Base classification", "sec_num": "5.1.1" }, { "text": "We use probabilistic SVM normalisation to convert the signed decision-plane distance output by the content-only classifier into the probability that the instance is in the positive class (Platt, 1999).", "cite_spans": [ { "start": 187, "end": 200, "text": "(Platt, 1999)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Normalisation", "sec_num": "5.1.2" }, { "text": "For the relationship classifier, the technique used to convert the cosine similarity score into a classification preference needs to fit complex criteria. Preliminary experiments suggested that while the very highest similarity scores are good indicators of SAME relationships, classifier precision drops quickly as recall increases. To avoid polluting the classification graph with large numbers of low-quality links, the normalisation method should incorporate a threshold that discards a significant proportion of the test set pairs. We adopt the following binning technique to convert the cosine similarity score into a probability that the two instances are SAME:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Normalisation", "sec_num": "5.1.2" }, { "text": "\\psi_{ij}(l, l) = \\begin{cases} 0.9 & s(i, j) \\geq b_1 \\\\ 0.8 & b_2 \\leq s(i, j) < b_1 \\\\ 0.7 & b_3 \\leq s(i, j) < b_2 \\\\ 0.6 & b_4 \\leq s(i, j) < b_3 \\\\ 0.5 & s(i, j) < b_4 \\end{cases}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Normalisation", "sec_num": "5.1.2" }, { "text": "where \\psi_{ij}(l, l) represents the SAME preference (i.e. the probability of i and j having the same label); the values for b_1, b_2, b_3, and b_4 are derived by sorting the relationships in the training data by similarity score, and separating them into intervals holding a proportion of SAME pairs equivalent to the nominated probability. 
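One way to realise this binning step in code is sketched below (hypothetical helper names; training relationships are given as (cosine score, is-SAME) pairs, and the interval-fitting procedure is our reading of the description above):

```python
def fit_bins(train_pairs, probs=(0.9, 0.8, 0.7, 0.6)):
    """Derive boundaries b1 > b2 > b3 > b4 from (score, is_same) training pairs.

    Scanning down from the highest-scoring pairs, each interval is closed as
    soon as its proportion of SAME pairs falls to the nominated probability.
    """
    pairs = sorted(train_pairs, key=lambda p: -p[0])
    bounds, idx = [], 0
    for target in probs:
        same = total = 0
        while idx < len(pairs):
            same += 1 if pairs[idx][1] else 0
            total += 1
            idx += 1
            if same / total <= target:
                break
        bounds.append(pairs[idx - 1][0])  # score at the interval's lower edge
    return bounds

def same_preference(score, bounds, probs=(0.9, 0.8, 0.7, 0.6)):
    """psi_ij(l, l): the probability that a pair with this score is SAME."""
    for prob, b in zip(probs, bounds):
        if score >= b:
            return prob
    return 0.5  # below b4: no preference, i.e. the link is effectively discarded
```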
This approach is similar to unsupervised discretisation (Kotsiantis and Kanellopoulos, 2006), except the intervals are arranged so that the output categories have a probabilistic interpretation.", "cite_spans": [ { "start": 397, "end": 433, "text": "(Kotsiantis and Kanellopoulos, 2006)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Normalisation", "sec_num": "5.1.2" }, { "text": "Decoding is carried out using three techniques: (1) loopy belief propagation (McDowell et al., 2009); (2) mean-field; and (3) minimum-cut.", "cite_spans": [ { "start": 77, "end": 100, "text": "(McDowell et al., 2009)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Decoding", "sec_num": "5.1.3" }, { "text": "Loopy belief propagation is a message passing algorithm that can be expressed as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Loopy Belief Propagation", "sec_num": null }, { "text": "m_{i \\to j}(l) = \\alpha \\sum_{l' \\in L} \\Big( \\psi_i(l') \\, \\psi_{ij}(l', l) \\prod_{k \\in N_i \\cap D^U \\setminus \\{j\\}} m_{k \\to i}(l') \\Big) \\qquad b_i(l) = \\alpha \\, \\psi_i(l) \\prod_{k \\in N_i \\cap D^U} m_{k \\to i}(l)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Loopy Belief Propagation", "sec_num": null }, { "text": "where m_{i \\to j} is a message sent by document d_i to document d_j, and \\alpha is a normalisation constant that ensures that each message and each set of marginal probabilities sum to 1. The message flow from d_i to d_j communicates the belief of d_i about the label of d_j. The algorithm proceeds by making each node communicate with its neighbours until the messages stabilise. The marginal probability is then derived by calculating b_i(l).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Loopy Belief Propagation", "sec_num": null }, { "text": "Loopy belief propagation was used in early collective classification work (Taskar et al., 2002) and has remained popular since (Sen et al., 2008; McDowell et al., 2009; Stoyanov and Eisner, 2012).", "cite_spans": [ { "start": 74, "end": 95, "text": "(Taskar et al., 2002)", "ref_id": "BIBREF32" }, { "start": 127, "end": 145, "text": "(Sen et al., 2008;", "ref_id": "BIBREF25" }, { "start": 146, "end": 169, "text": "McDowell et al., 2009;", "ref_id": null }, { "start": 170, "end": 196, "text": "Stoyanov and Eisner, 2012)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Loopy Belief Propagation", "sec_num": null }, { "text": "Mean-field is an alternative message passing algorithm that can be expressed as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mean-field", "sec_num": null }, { "text": "b_i(l) = \\alpha \\, \\psi_i(l) \\prod_{j \\in N_i \\cap D} \\prod_{l' \\in L} \\psi_{ij}(l', l)^{b_j(l')}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mean-field", "sec_num": null }, { "text": "and is re-computed for each document until the marginal probabilities stabilise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mean-field", "sec_num": null }, { "text": "Loopy belief propagation and mean-field have both been justified as variational methods for Markov random fields (Jordan et al., 1999; Weiss, 2001; Yedidia et al., 2005).", "cite_spans": [ { "start": 113, "end": 134, "text": "(Jordan et al., 1999;", "ref_id": "BIBREF11" }, { "start": 135, "end": 147, "text": "Weiss, 2001;", "ref_id": "BIBREF34" }, { "start": 148, "end": 169, "text": "Yedidia et al., 2005)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Mean-field", "sec_num": null },
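For reference, the loopy belief propagation update can be sketched as follows (a minimal in-sample sketch with hypothetical data structures: psi_node[i][l] stands for psi_i(l), psi_edge[(i, j)][(l', l)] for psi_ij(l', l), and neighbours[i] for the documents linked to d_i; the mean-field update is analogous):

```python
from math import prod

def loopy_bp(psi_node, psi_edge, neighbours, labels, iters=50):
    """Run loopy belief propagation and return marginal beliefs b_i(l).

    Assumes an undirected link graph (j in neighbours[i] iff i in neighbours[j])
    and that psi_edge holds both orientations of each link; for simplicity
    every document is treated as unlabelled.
    """
    # Messages m[(i, j)][l]: document i's belief about document j's label l.
    msg = {(i, j): {l: 1.0 for l in labels}
           for i in neighbours for j in neighbours[i]}
    for _ in range(iters):
        new_msg = {}
        for (i, j) in msg:
            raw = {l: sum(psi_node[i][lp] * psi_edge[(i, j)][(lp, l)] *
                          prod(msg[(k, i)][lp] for k in neighbours[i] if k != j)
                          for lp in labels)
                   for l in labels}
            z = sum(raw.values())  # alpha: each message sums to 1
            new_msg[(i, j)] = {l: raw[l] / z for l in labels}
        msg = new_msg
    # Marginal beliefs b_i(l) for each linked document.
    beliefs = {}
    for i in neighbours:
        raw = {l: psi_node[i][l] * prod(msg[(j, i)][l] for j in neighbours[i])
               for l in labels}
        z = sum(raw.values())
        beliefs[i] = {l: raw[l] / z for l in labels}
    return beliefs
```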
"Mean-field", "sec_num": null }, { "text": "The minimum-cut technique involves formulating a binary collective classification task as a flow graph and finding solutions using standard methods for solving minimum-cut (maximum-flow) problems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Minimum Cut", "sec_num": null }, { "text": "We use the method described by Blum and Chawla (2001) in an in-sample setting, which is equivalent to finding the optimal solution for the cost function for labellings:", "cite_spans": [ { "start": 31, "end": 53, "text": "Blum and Chawla (2001)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Minimum Cut", "sec_num": null }, { "text": "cost(Y ) = d i \u2208D w i (Y i ) + (d i ,d j )\u2208E:Y i =Y j w r (d i , d j )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Minimum Cut", "sec_num": null }, { "text": "The relative weights given to the content-only and relational classifiers can be tuned as follows (for CONVOTE, without loss of generality):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tuning", "sec_num": "5.1.4" }, { "text": "\u03c8 i (FOR) = \u03c8 i (FOR)+ min(0,\u03b3)(\u03c8 i (FOR)\u2212\u03c8 i (AGAINST)) 2 \u03c8 ij (FOR, FOR) = \u03c8 ij (FOR, FOR)\u2212 max(0,\u03b3)(\u03c8 ij (FOR,FOR)\u2212\u03c8 ij (FOR,AGAINST)) 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tuning", "sec_num": "5.1.4" }, { "text": "where \u03c8 i and \u03c8 ij refer to the dampened versions of the content-only and relationship preference functions, respectively, \u03b3 is the dampening parameter absolute value of the dampening parameter. If the dampening parameter is < 0, only the content-only preferences will be dampened (giving more relative weight to relationship preferences). If the dampening parameter is > 0, only the relationship preferences will be dampened (giving more relative weight to the content-only preferences).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tuning", "sec_num": "5.1.4" }, { "text": "\u2208 [\u22121, 1], \u03c8 i (AGAINST) = 1 \u2212 \u03c8 i (FOR), \u03c8", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tuning", "sec_num": "5.1.4" }, { "text": "For CONVOTE, the training fold is adapted for tuning by use of 52-fold cross-validation, where each of the 52 debates in the training fold is classified using all of the other debates as training data. BITTERLEMONS does not have internal structure within the training set, so it cannot be adapted in this way. Instead, we use leave-one-out cross-validation over the training set. Unfortunately this approach carries the risk of producing base classifications that are unrealistically accurate, because the training set is composed of articles by only two authors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tuning", "sec_num": "5.1.4" }, { "text": "The iterative classifier approach has three major components, as depicted in Figure 2: 1. Base classification. Produce base classifications using a content-only classifier. As with the dual classifier approach, the content-only classifier will give the preference that each instance be classified with FOR or AGAINST for CONVOTE, and ISRAELI or PALESTINIAN for BITTERLEMONS.", "cite_spans": [], "ref_spans": [ { "start": 77, "end": 86, "text": "Figure 2:", "ref_id": null } ], "eq_spans": [], "section": "Iterative Classifier Approach", "sec_num": "5.2" }, { "text": "2. Addition of relational features. 
Produce local vectors by adding relational features to the vectors previously used for content-only classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Iterative Classifier Approach", "sec_num": "5.2" }, { "text": "3. Iterative re-classification. Use a local classifier to classify the new feature vectors. Update the relational features after each iteration to reflect new class assignments. Repeat until class assignments stabilise or a threshold number of iterations is met.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Iterative Classifier Approach", "sec_num": "5.2" }, { "text": "Once again, content-only classification for the iterative classifier is performed using a bag-of-words SVM with binary unigram features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Base Classification", "sec_num": "5.2.1" }, { "text": "Let f_s be an average similarity score:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relational Features", "sec_num": "5.2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f_s(i, l) = \\frac{\\sum_{d_j \\in D \\setminus \\{d_i\\}} s(i, j) \\, \\delta_{Y_j, l}}{\\sum_{d_j \\in D \\setminus \\{d_i\\}} \\delta_{Y_j, l}}", "eq_num": "(5)" } ], "section": "Relational Features", "sec_num": "5.2.2" }, { "text": "where \\delta is the Kronecker delta. Put in words, f_s is the average of the similarity scores for the pairings of the given instance with each of the instances that have the label l. We derive relational features for the iterative classifier from the average similarity score as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relational Features", "sec_num": "5.2.2" }, { "text": "f_{as}(i, l) = \\begin{cases} 1 & f_s(i, l) > f_s(i, l') \\\\ 0 & \\text{otherwise} \\end{cases}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relational Features", "sec_num": "5.2.2" }, { "text": "This means that the feature f_{as}(i, l) is set to 1 iff the average similarity of document d_i to instances with label l is greater than its average similarity to instances with label l'. In training, document labels are used when counting negative and positive instances to determine the values for f_{as}. In evaluation, the classes assigned in the previous iteration are used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relational Features", "sec_num": "5.2.2" }, { "text": "We assess the accuracy of the dual classifier and iterative classifier approaches described above over CONVOTE and BITTERLEMONS in terms of classification accuracy, micro-averaging across the 53 folds of cross-validation in the case of CONVOTE. When quoted, statistical significance has been determined using approximate randomisation with p < 0.05 (Nooreen, 1989). [Table 4: Collective classification performance on BITTERLEMONS ( signifies a statistically significant improvement over the content-only baseline, p < 0.05).]", "cite_spans": [ { "start": 46, "end": 61, "text": "(Nooreen, 1989)", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 303, "end": 310, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "Two baseline scores are shown in the tables for collective classification results: (1) \"Majority\" gives the performance of the simplest possible classifier, which classifies every instance with the label that is most frequent in training data; and (2) \"Content-only\" gives the performance of the bag-of-words linear-kernel SVM used to perform base classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" },
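Before turning to the results, the iterative classifier of Section 5.2 can be summarised in code (a minimal sketch, assuming scikit-learn-style classifiers: content_clf trained on content-only vectors, local_clf trained on content vectors with the relational features appended, sims[i][j] holding the Section 4 cosine similarities, and feature vectors as plain lists):

```python
def relational_features(sims, labels, docs, classes):
    """f_as(i, l): 1 iff d_i is on average more similar to documents
    currently labelled l than to those labelled otherwise (Section 5.2.2)."""
    feats = {}
    for i in docs:
        f_s = {}
        for l in classes:
            members = [j for j in docs if j != i and labels[j] == l]
            f_s[l] = (sum(sims[i][j] for j in members) / len(members)
                      if members else 0.0)
        feats[i] = {l: int(all(f_s[l] > f_s[lp] for lp in classes if lp != l))
                    for l in classes}
    return feats

def iterative_classify(content_clf, local_clf, vecs, sims, docs, classes,
                       max_iter=10):
    # Step 1: base classification with the content-only classifier.
    labels = {i: content_clf.predict([vecs[i]])[0] for i in docs}
    for _ in range(max_iter):
        # Step 2: recompute relational features from the current assignments.
        feats = relational_features(sims, labels, docs, classes)
        # Step 3: re-classify with content plus relational features.
        new = {i: local_clf.predict(
                      [vecs[i] + [feats[i][l] for l in classes]])[0]
               for i in docs}
        if new == labels:  # assignments have stabilised
            break
        labels = new
    return labels
```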
{ "text": "Table 3 shows the overall collective classifier performance on CONVOTE. The best performer is the iterative classifier with 4-grams, with an accuracy of 79.05%. This is a statistically significant 2.65% absolute gain over the content-only baseline. The iterative classifier is the best performer in general, obtaining the next four best results with statistically significant absolute gains of 2.41%, 1.76%, 1.70% and 1.59% for 3-grams, 5-grams, 2-grams and 1-grams respectively.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 7, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Collective Classifier Performance", "sec_num": "6.1" }, { "text": "The dual classifier with minimum-cut is the next best performer, with a best score of 77.45% for 5-grams, a statistically significant absolute gain of 1.06%. 4-grams and 2-grams also provide statistically significant gains, but 3-grams and 1-grams do not. For loopy belief and mean-field the story is less positive. None of the variations gives a statistically significant improvement over the content-only baseline. The best performer is mean-field with 5-grams, with a score of 76.63%, a 0.23% absolute improvement over the baseline. Table 4 shows overall collective classifier performance on BITTERLEMONS. As with CONVOTE, the best performer is the iterative classifier. 4-grams and 3-grams are the top-performing variants, obtaining a score of 90.91%, a statistically significant 4.38% absolute gain over the content-only baseline. 2-grams and 5-grams are the next best, with a statistically significant 3.37% absolute gain over the content-only baseline. 1-grams are the only iterative classifier variant that does not yield a statistically significant improvement over the content-only baseline.", "cite_spans": [], "ref_spans": [ { "start": 535, "end": 542, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Collective Classifier Performance", "sec_num": "6.1" }, { "text": "The dual classifier results for BITTERLEMONS warrant special comment. As mentioned in Section 5.1.4, leave-one-out tuning with the BITTERLEMONS training corpus is compromised. The aim of cross-validation on the training set is to gain a picture of likely performance on the test set. Unfortunately, BITTERLEMONS is not homogeneous: articles in each class in the training set are contributed by just one author, whereas articles in the test set are contributed by different authors. Tuning on BITTERLEMONS failed because leave-one-out on the training set produced 100% accuracy, presumably because there are features specific to the two authors that make classification easy. This meant that the ideal dampening parameter was found to be exactly 1, i.e. collective classification was unnecessary, because the expected performance on the test set was 100%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collective Classifier Performance", "sec_num": "6.1" }, { "text": "As with CONVOTE, none of the loopy belief or mean-field variants provides a statistically significant improvement over the content-only baseline. 
The best performers are mean-field and loopy belief with 5-grams, with a score of 88.55%, a 2.02% absolute improvement over the baseline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collective Classifier Performance", "sec_num": "6.1" }, { "text": "We next examine the dampening response of the dual classifier methods, by presenting six graphs showing the performance of the three different decoding algorithms on the two test corpora. This analysis helps to establish a picture of the limitations of the dual classifier approach in comparison with the iterative classifier approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dual Classifier Dampening Response", "sec_num": "6.2" }, { "text": "Each of the graphs in this section shows the effect of a varying dampening factor on classification accuracy. In each graph only a small portion of the [\u22121, 1] range supported by the dampening parameter is shown. The reason for this is visible on many of the graphs: performance is fixed at or near 50% until the dampening parameter is close to 1. This indicates that the probabilities of the content-only classifier and relationship classifier are badly mismatched: performance only becomes reasonable after the relationship preferences have been massively reduced in strength relative to the content-only preferences. Figure 3 shows performance on CONVOTE for minimum-cut, loopy belief, and mean-field respectively. [Figure 3: The impact of the dampening factor on dual classifier performance for CONVOTE.] The trend is the same in each: performance is flat until a sudden jump-up, leading to steady improvement up to a peak, shortly before the maximum dampening value of 1. At 1, the relationship preferences are entirely dampened and performance is the same as the content-only baseline. For minimum-cut, 1-grams provide the highest peak accuracy with close to 78% at dampening factor 0.93. Each of the other n-gram orders jumps above the 76.40% baseline at close to this point, with 5-grams providing the most sustained period of high performance from dampening factor 0.85 through to almost 1.", "cite_spans": [], "ref_spans": [ { "start": 620, "end": 628, "text": "Figure 3", "ref_id": null }, { "start": 815, "end": 823, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Dual Classifier Dampening Response", "sec_num": "6.2" }, { "text": "Performance is worse for loopy belief and mean-field. Only 5-grams do better than the baseline, between approximately 0.92 and 0.95 dampening factor for both algorithms. Figure 4 shows performance on BITTERLEMONS for minimum-cut, loopy belief, and mean-field respectively. [Figure 4: The impact of the dampening factor on dual classifier performance for BITTERLEMONS.]", "cite_spans": [], "ref_spans": [ { "start": 169, "end": 177, "text": "Figure 4", "ref_id": null }, { "start": 212, "end": 220, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Dual Classifier Dampening Response", "sec_num": "6.2" }, { "text": "The trend is the same: after a period of flat performance, scores steadily improve as the dampening factor is increased, reaching a peak shortly before the maximum dampening value of 1. For minimum-cut, 5-grams give the best performance with a peak of 90.57% accuracy at dampening factor 0.95. 4-grams do the next best, followed by 3-grams, 2-grams and 1-grams. Each algorithm rises to a sudden peak and then trails off as it approaches maximum dampening. 
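The sweeps in these graphs amount to applying the Section 5.1.4 transform for each candidate gamma before decoding; a minimal sketch (binary labels, preferences as floats; dampen_all, decode and evaluate are hypothetical stand-ins for applying the transform corpus-wide, running minimum-cut / loopy belief / mean-field, and scoring the output):

```python
def dampen(psi_node_for, psi_edge_same, gamma):
    """Dampen a preference pair for gamma in [-1, 1] (Section 5.1.4).

    psi_node_for  -- psi_i(FOR); psi_i(AGAINST) is 1 - psi_i(FOR)
    psi_edge_same -- psi_ij(FOR, FOR), the SAME preference
    """
    # gamma < 0: pull the content-only preference toward 0.5.
    node = psi_node_for + min(0.0, gamma) * (2 * psi_node_for - 1) / 2
    # gamma > 0: pull the relationship preference toward 0.5;
    # at gamma = 1 it becomes exactly 0.5, i.e. entirely dampened.
    edge = psi_edge_same - max(0.0, gamma) * (2 * psi_edge_same - 1) / 2
    return node, edge

# for gamma in (g / 100 for g in range(-100, 101)):
#     accuracy = evaluate(decode(dampen_all(gamma)))
```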
Loopy belief and mean-field give almost identical performance. Both show the same peak-and-trail-off shape as with minimum-cut but the performance gain is smaller, with 5-grams obtaining a best score of 88.55%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dual Classifier Dampening Response", "sec_num": "6.2" }, { "text": "The collective classification experiments in this paper demonstrate that useful inter-document semantic relationships can be accurately predicted using features based on matching sequences of words, i.e. semantic relationships between pairs of documents that can be detected based on the mutual use of particular n-grams. These semantic relationships can be used to build collective classifiers that outperform standard content-based classifiers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "7" }, { "text": "Iterative classifiers do better than dual classifiers at collective classification using similarity-based relationships. Their superiority goes beyond measures of performance: iterative classifiers are simpler to implement, and more efficient. The key advantage of the iterative classifier seems to lie in its ability to sum up relationship information in a single average similarity score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "7" }, { "text": "Future work should consider the combination of the methods investigated in this paper with more advanced content-only approaches. For dual classifiers and iterative classifiers, it would be also interesting to explore whether alternative base classifiers can provide better performance. For example, confidence-weighted linear classification has been shown to be highly effective on non-collective document classification tasks, and could be easily adapted for use in a dual classifier or iterative classifier (Dredze et al., 2008) . Finally, there is significant scope to apply the techniques in this paper to other collective classification tasks and to unambiguously define the types of content for which collective document classification with implicit inter-document relationships can be expected to provide performance gains.", "cite_spans": [ { "start": 510, "end": 531, "text": "(Dredze et al., 2008)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "7" }, { "text": "In some tasks, it can also indicate heterophily, i.e. 
the tendency for connected instances to have contrasting properties, as we shall see for one of our two datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research was supported in part by the Australian Research Council.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Mining newsgroups using networks arising from social behavior", "authors": [ { "first": "Rakesh", "middle": [], "last": "Agrawal", "suffix": "" }, { "first": "Sridhar", "middle": [], "last": "Rajagopalan", "suffix": "" }, { "first": "Ramakrishnan", "middle": [], "last": "Srikant", "suffix": "" }, { "first": "Yirong", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 12th International Conference on World Wide Web", "volume": "", "issue": "", "pages": "529--535", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rakesh Agrawal, Sridhar Rajagopalan, Ramakrishnan Srikant, and Yirong Xu. 2003. Mining newsgroups using networks arising from social behavior. In Proceedings of the 12th International Conference on World Wide Web, pages 529-535, Budapest, Hungary.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Vocabulary choice as an indicator of perspective", "authors": [ { "first": "Beata", "middle": [], "last": "Beigman Klebanov", "suffix": "" }, { "first": "Eyal", "middle": [], "last": "Beigman", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Diermeier", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics: Short papers", "volume": "", "issue": "", "pages": "253--257", "other_ids": {}, "num": null, "urls": [], "raw_text": "Beata Beigman Klebanov, Eyal Beigman, and Daniel Diermeier. 2010. Vocabulary choice as an indicator of perspective. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics: Short papers, pages 253-257, Uppsala, Sweden.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Learning from labeled and unlabeled data using graph mincuts", "authors": [ { "first": "Avrim", "middle": [], "last": "Blum", "suffix": "" }, { "first": "Shuchi", "middle": [], "last": "Chawla", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 18th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "19--26", "other_ids": {}, "num": null, "urls": [], "raw_text": "Avrim Blum and Shuchi Chawla. 2001. Learning from labeled and unlabeled data using graph mincuts. In Proceedings of the 18th International Conference on Machine Learning, pages 19-26, Williamstown, USA. Clinton Burfoot, Steven Bird, and Timothy Baldwin. 2011. Collective classification of congressional floor-debate transcripts. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1506-1515, Portland, USA.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Collective Document Classification Using Explicit and Implicit Inter-document Relationships", "authors": [ { "first": "Clinton", "middle": [], "last": "Burford", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Clinton Burford. 2013. 
Collective Document Classifica- tion Using Explicit and Implicit Inter-document Rela- tionships. Ph.D. thesis, The University of Melbourne.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Supersense tagging of unknown nouns in WordNet", "authors": [ { "first": "Massimiliano", "middle": [], "last": "Ciaramita", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "168--175", "other_ids": {}, "num": null, "urls": [], "raw_text": "Massimiliano Ciaramita and Mark Johnson. 2003. Su- persense tagging of unknown nouns in WordNet. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, pages 168- 175, Sapporo, Japan.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Confidence-weighted linear classification", "authors": [ { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" }, { "first": "Koby", "middle": [], "last": "Crammer", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 25th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "264--271", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Dredze, Koby Crammer, and Fernando Pereira. 2008. Confidence-weighted linear classification. In Proceedings of the 25th International Conference on Machine Learning, pages 264-271, Helsinki, Finland.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Citeseer: an automatic citation indexing system", "authors": [ { "first": "Lee", "middle": [], "last": "Giles", "suffix": "" }, { "first": "Kurt", "middle": [ "D" ], "last": "Bollacker", "suffix": "" }, { "first": "Steve", "middle": [ "Lawrence" ], "last": "", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 3rd ACM Conference on Digital libraries", "volume": "", "issue": "", "pages": "89--98", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lee Giles, Kurt D. Bollacker, and Steve Lawrence. 1998. Citeseer: an automatic citation indexing sys- tem. In Proceedings of the 3rd ACM Conference on Digital libraries, pages 89-98, Pittsburgh, USA.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Dissimilarity in graph-based semi-supervised classification", "authors": [ { "first": "Andrew", "middle": [ "B" ], "last": "Goldberg", "suffix": "" }, { "first": "Xiaojin", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Wright", "suffix": "" } ], "year": 2007, "venue": "Journal of Machine Learning Research", "volume": "2", "issue": "", "pages": "155--162", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew B. Goldberg, Xiaojin Zhu, and Stephen Wright. 2007. Dissimilarity in graph-based semi-supervised classification. 
Journal of Machine Learning Research, 2:155-162.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Target-dependent Twitter sentiment classification", "authors": [ { "first": "Long", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Mo", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Xiaohua", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Tiejun", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "151--160", "other_ids": {}, "num": null, "urls": [], "raw_text": "Long Jiang, Mo Yu, Ming Zhou, Xiaohua Liu, and Tiejun Zhao. 2011. Target-dependent Twitter sentiment clas- sification. In Proceedings of the 49th Annual Meet- ing of the Association for Computational Linguistics: Human Language Technologies, pages 151-160, Port- land, USA.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Transductive learning via spectral graph partitioning", "authors": [ { "first": "Thorsten", "middle": [], "last": "Joachims", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the International Conference on Machine Learning", "volume": "", "issue": "", "pages": "290--297", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thorsten Joachims. 2003. Transductive learning via spectral graph partitioning. In Proceedings of the In- ternational Conference on Machine Learning, pages 290-297, Washington, USA.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "More or less supervised supersense tagging of twitter", "authors": [ { "first": "Anders", "middle": [], "last": "Johannsen", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "H\u00e9ctor Mart\u00ednez Alonso", "suffix": "" }, { "first": "Anders", "middle": [], "last": "Plank", "suffix": "" }, { "first": "", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Third Joint Conference on Lexical and Computational Semantics (*SEM 2014)", "volume": "", "issue": "", "pages": "1--11", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anders Johannsen, Dirk Hovy, H\u00e9ctor Mart\u00ednez Alonso, Barbara Plank, and Anders S\u00f8gaard. 2014. More or less supervised supersense tagging of twitter. In Pro- ceedings of the Third Joint Conference on Lexical and Computational Semantics (*SEM 2014), pages 1-11, Dublin, Ireland.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "An introduction to variational methods for graphical models", "authors": [ { "first": "Michael", "middle": [], "last": "Jordan", "suffix": "" }, { "first": "Zoubin", "middle": [], "last": "Ghahramani", "suffix": "" }, { "first": "Tommi", "middle": [], "last": "Jaakkola", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Saul", "suffix": "" }, { "first": "David", "middle": [], "last": "Heckerman", "suffix": "" } ], "year": 1999, "venue": "Machine Learning", "volume": "37", "issue": "", "pages": "183--233", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Jordan, Zoubin Ghahramani, Tommi Jaakkola, Lawrence Saul, and David Heckerman. 1999. An in- troduction to variational methods for graphical mod- els. 
Machine Learning, 37:183-233.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "That's what friends are for: Inferring location in online social media platforms based on social relationships", "authors": [ { "first": "David", "middle": [], "last": "Jurgens", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 7th International Conference on Weblogs and Social Media (ICWSM 2013)", "volume": "", "issue": "", "pages": "273--282", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Jurgens. 2013. That's what friends are for: In- ferring location in online social media platforms based on social relationships. In Proceedings of the 7th In- ternational Conference on Weblogs and Social Media (ICWSM 2013), pages 273-282, Dublin, Ireland.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Discretization techniques: A recent survey", "authors": [ { "first": "Sotiris", "middle": [], "last": "Kotsiantis", "suffix": "" }, { "first": "Dimitris", "middle": [], "last": "Kanellopoulos", "suffix": "" } ], "year": 2006, "venue": "GESTS International Transactions on Computer Science and Engineering", "volume": "32", "issue": "", "pages": "47--58", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sotiris Kotsiantis and Dimitris Kanellopoulos. 2006. Discretization techniques: A recent survey. In GESTS International Transactions on Computer Science and Engineering, volume 32, pages 47-58.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "authors": [ { "first": "John", "middle": [ "D" ], "last": "Lafferty", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "Fernando", "middle": [ "C N" ], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 18th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "282--289", "other_ids": {}, "num": null, "urls": [], "raw_text": "John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilis- tic models for segmenting and labeling sequence data. In Proceedings of the 18th International Conference on Machine Learning, pages 282-289, Williamstown, USA.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Which side are you on? Identifying perspectives at the document and sentence levels", "authors": [ { "first": "Wei-Hao", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Theresa", "middle": [], "last": "Wilson", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Hauptmann", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 10th Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "109--116", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei-Hao Lin, Theresa Wilson, Janyce Wiebe, and Alexander Hauptmann. 2006. Which side are you on? Identifying perspectives at the document and sen- tence levels. 
In Proceedings of the 10th Conference on Computational Natural Language Learning, pages 109-116, New York, USA.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Link-based classification", "authors": [ { "first": "Qing", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Lise", "middle": [], "last": "Getoor", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 20th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "496--503", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qing Lu and Lise Getoor. 2003. Link-based classifica- tion. In Proceedings of the 20th International Confer- ence on Machine Learning, pages 496-503, Washing- ton, USA.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Case-based collective classification", "authors": [ { "first": "Luke", "middle": [], "last": "Mcdowell", "suffix": "" }, { "first": "Kalyan", "middle": [], "last": "Moy Gupta", "suffix": "" }, { "first": "David", "middle": [ "W" ], "last": "Aha", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 20th International Florida Artificial Intelligence Research Society Conference", "volume": "", "issue": "", "pages": "399--404", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luke McDowell, Kalyan Moy Gupta, and David W. Aha. 2007. Case-based collective classification. In Pro- ceedings of the 20th International Florida Artificial Intelligence Research Society Conference, pages 399- 404, Key West, USA.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Cautious collective classification", "authors": [ { "first": "K", "middle": [], "last": "Luke", "suffix": "" }, { "first": "Kalyan", "middle": [], "last": "Mcdowell", "suffix": "" }, { "first": "David", "middle": [ "W" ], "last": "Moy Gupta", "suffix": "" }, { "first": "", "middle": [], "last": "Aha", "suffix": "" } ], "year": 2009, "venue": "Journal of Machine Learning Research", "volume": "10", "issue": "", "pages": "2777--2836", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luke K McDowell, Kalyan Moy Gupta, and David W Aha. 2009. Cautious collective classification. Journal of Machine Learning Research, 10:2777-2836.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Computer Intensive Methods for Testing Hypothesis", "authors": [ { "first": "Eric", "middle": [ "W" ], "last": "Nooreen", "suffix": "" } ], "year": 1989, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric W. Nooreen. 1989. Computer Intensive Methods for Testing Hypothesis. Wiley and Sons Inc., New York, USA.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A practical hypertext categorization method using links and incrementally available class information", "authors": [ { "first": "Hyo-Jung", "middle": [], "last": "Oh", "suffix": "" }, { "first": "Sung", "middle": [ "Hyon" ], "last": "Myaeng", "suffix": "" }, { "first": "Mann-Ho", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "264--271", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hyo-Jung Oh, Sung Hyon Myaeng, and Mann-Ho Lee. 2000. A practical hypertext categorization method using links and incrementally available class infor- mation. 
In Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 264-271, Athens, Greece.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales", "authors": [ { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "115--124", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 115-124, Ann Arbor, USA.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods", "authors": [ { "first": "John", "middle": [ "C" ], "last": "Platt", "suffix": "" } ], "year": 1999, "venue": "Advances in Large Margin Classifiers", "volume": "", "issue": "", "pages": "61--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "John C. Platt. 1999. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. In Alexander Smola, Peter Bartlett, and Bernhard Sch\u00f6lkopf, editors, Advances in Large Margin Classifiers, pages 61-74. MIT Press, Cambridge, USA.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "An introduction to hidden Markov models", "authors": [ { "first": "Lawrence", "middle": [ "R" ], "last": "Rabiner", "suffix": "" }, { "first": "Biing-Hwang", "middle": [], "last": "Juang", "suffix": "" } ], "year": 1986, "venue": "ASSP Magazine", "volume": "3", "issue": "1", "pages": "4--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lawrence R. Rabiner and Biing-Hwang Juang. 1986. An introduction to hidden Markov models. ASSP Magazine, IEEE, 3(1):4-16.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Collective classification in network data", "authors": [ { "first": "Prithviraj", "middle": [], "last": "Sen", "suffix": "" }, { "first": "Galileo", "middle": [ "Mark" ], "last": "Namata", "suffix": "" }, { "first": "Mustafa", "middle": [], "last": "Bilgic", "suffix": "" }, { "first": "Lise", "middle": [], "last": "Getoor", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Gallagher", "suffix": "" }, { "first": "Tina", "middle": [], "last": "Eliassi-Rad", "suffix": "" } ], "year": 2008, "venue": "AI Magazine", "volume": "29", "issue": "3", "pages": "93--106", "other_ids": {}, "num": null, "urls": [], "raw_text": "Prithviraj Sen, Galileo Mark Namata, Mustafa Bilgic, Lise Getoor, Brian Gallagher, and Tina Eliassi-Rad. 2008. Collective classification in network data.
AI Magazine, 29(3):93-106.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Document-word co-regularization for semi-supervised sentiment analysis", "authors": [ { "first": "Vikas", "middle": [], "last": "Sindhwani", "suffix": "" }, { "first": "Prem", "middle": [], "last": "Melville", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 2008 IEEE International Conference on Data Mining", "volume": "", "issue": "", "pages": "1025--1030", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vikas Sindhwani and Prem Melville. 2008. Document-word co-regularization for semi-supervised sentiment analysis. In Proceedings of the 2008 IEEE International Conference on Data Mining, pages 1025-1030, Washington, USA.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Combining statistical and relational methods for learning in hypertext domains", "authors": [ { "first": "Se\u00e1n", "middle": [], "last": "Slattery", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Craven", "suffix": "" } ], "year": 1998, "venue": "Proceedings of Inductive Logic Programming, 8th International Workshop", "volume": "", "issue": "", "pages": "38--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "Se\u00e1n Slattery and Mark Craven. 1998. Combining statistical and relational methods for learning in hypertext domains. In Proceedings of Inductive Logic Programming, 8th International Workshop, pages 38-52, Madison, USA.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Opinion graphs for polarity and discourse classification", "authors": [ { "first": "Swapna", "middle": [], "last": "Somasundaran", "suffix": "" }, { "first": "Galileo", "middle": [], "last": "Namata", "suffix": "" }, { "first": "Lise", "middle": [], "last": "Getoor", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 Workshop on Graph-based Methods for Natural Language Processing", "volume": "", "issue": "", "pages": "66--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "Swapna Somasundaran, Galileo Namata, Lise Getoor, and Janyce Wiebe. 2009. Opinion graphs for polarity and discourse classification. In Proceedings of the 2009 Workshop on Graph-based Methods for Natural Language Processing, pages 66-74, Singapore.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Minimum-risk training of approximate CRF-based NLP systems", "authors": [ { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "120--130", "other_ids": {}, "num": null, "urls": [], "raw_text": "Veselin Stoyanov and Jason Eisner. 2012. Minimum-risk training of approximate CRF-based NLP systems.
In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 120-130, Montr\u00e9al, Canada.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Extracting semantic orientations of phrases from dictionary", "authors": [ { "first": "Hiroya", "middle": [], "last": "Takamura", "suffix": "" }, { "first": "Takashi", "middle": [], "last": "Inui", "suffix": "" }, { "first": "Manabu", "middle": [], "last": "Okumura", "suffix": "" } ], "year": 2007, "venue": "Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "292--299", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hiroya Takamura, Takashi Inui, and Manabu Okumura. 2007. Extracting semantic orientations of phrases from dictionary. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics, pages 292-299, Rochester, USA.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "User-level sentiment analysis incorporating social networks", "authors": [ { "first": "Chenhao", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Jie", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Long", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Ping", "middle": [], "last": "Li", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining", "volume": "", "issue": "", "pages": "1397--1405", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chenhao Tan, Lillian Lee, Jie Tang, Long Jiang, Ming Zhou, and Ping Li. 2011. User-level sentiment analysis incorporating social networks. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1397-1405, San Diego, USA.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Discriminative probabilistic models for relational data", "authors": [ { "first": "Ben", "middle": [], "last": "Taskar", "suffix": "" }, { "first": "Pieter", "middle": [], "last": "Abbeel", "suffix": "" }, { "first": "Daphne", "middle": [], "last": "Koller", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 18th Conference on Uncertainty in Artificial Intelligence", "volume": "", "issue": "", "pages": "485--492", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ben Taskar, Pieter Abbeel, and Daphne Koller. 2002. Discriminative probabilistic models for relational data. In Proceedings of the 18th Conference on Uncertainty in Artificial Intelligence, pages 485-492, Alberta, Canada.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Get out the vote: Determining support or opposition from congressional floor-debate transcripts", "authors": [ { "first": "Matt", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "327--335", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matt Thomas, Bo Pang, and Lillian Lee. 2006.
Get out the vote: Determining support or opposition from congressional floor-debate transcripts. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 327-335, Sydney, Australia.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Comparing the mean field method and belief propagation for approximate inference in MRFs", "authors": [ { "first": "Yair", "middle": [], "last": "Weiss", "suffix": "" } ], "year": 2001, "venue": "Advanced mean field methods: theory and practice", "volume": "", "issue": "", "pages": "229--239", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yair Weiss. 2001. Comparing the mean field method and belief propagation for approximate inference in MRFs. In Manfred Opper and David Saad, editors, Advanced mean field methods: theory and practice, pages 229-239. MIT Press, Cambridge, USA.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "A study of approaches to hypertext categorization", "authors": [ { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Se\u00e1n", "middle": [], "last": "Slattery", "suffix": "" }, { "first": "Rayid", "middle": [], "last": "Ghani", "suffix": "" } ], "year": 2002, "venue": "Journal of Intelligent Information Systems", "volume": "18", "issue": "2-3", "pages": "219--241", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yiming Yang, Se\u00e1n Slattery, and Rayid Ghani. 2002. A study of approaches to hypertext categorization. Journal of Intelligent Information Systems, 18(2-3):219-241.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Constructing free-energy approximations and generalized belief propagation algorithms", "authors": [ { "first": "Jonathan", "middle": [], "last": "Yedidia", "suffix": "" }, { "first": "William", "middle": [], "last": "Freeman", "suffix": "" }, { "first": "Yair", "middle": [], "last": "Weiss", "suffix": "" } ], "year": 2005, "venue": "IEEE Transactions on Information Theory", "volume": "51", "issue": "", "pages": "2282--2312", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan Yedidia, William Freeman, and Yair Weiss. 2005. Constructing free-energy approximations and generalized belief propagation algorithms. IEEE Transactions on Information Theory, 51:2282-2312.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Dual classifier with similarity-based links.", "type_str": "figure", "num": null, "uris": null }, "FIGREF1": { "text": "$\\psi_{ij}(\\mathrm{AGAINST}, \\mathrm{AGAINST}) = \\psi_{ij}(\\mathrm{FOR}, \\mathrm{FOR})$, and $\\psi_{ij}(\\mathrm{FOR}, \\mathrm{AGAINST}) = \\psi_{ij}(\\mathrm{AGAINST}, \\mathrm{FOR}) = 1 - \\psi_{ij}(\\mathrm{FOR}, \\mathrm{FOR})$. This approach works by reducing the difference between the preferences for the two classes (FOR or AGAINST) by an amount that is proportional to the", "type_str": "figure", "num": null, "uris": null }, "TABREF2": { "text": "Corpus statistics for BITTERLEMONS.", "num": null, "html": null, "type_str": "table", "content": "" }, "TABREF5": { "text": "Collective classification performance on CONVOTE ( signifies a statistically significant improvement over the content-only baseline, p < 0.05).", "num": null, "html": null, "type_str": "table", "content": "
Type      | Description                                  | n-gram size: 1 | 2     | 3     | 4     | 5
Baseline  | Majority                                     | 49.83 | 49.83 | 49.83 | 49.83 | 49.83
Baseline  | Content-only                                 | 86.53 | 86.53 | 86.53 | 86.53 | 86.53
Dual      | Cosine similarity, min-cut                   | 87.88 | 88.55 | 88.89 | 89.90 | 90.57
Dual      | Cosine similarity, loopy belief propagation  | 87.54 | 86.87 | 87.88 | 87.88 | 88.55
Dual      | Cosine similarity, mean-field                | 87.54 | 86.87 | 87.88 | 87.88 | 88.55
Iterative | Average similarity score                     | 87.54 | 89.90 | 90.91 | 90.91 | 89.90
" } } } }