{ "paper_id": "N13-1039", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:41:21.680471Z" }, "title": "Improved Part-of-Speech Tagging for Online Conversational Text with Word Clusters", "authors": [ { "first": "Olutobi", "middle": [], "last": "Owoputi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University", "location": { "postCode": "15213", "settlement": "Pittsburgh", "region": "PA", "country": "USA" } }, "email": "" }, { "first": "Brendan", "middle": [], "last": "O'connor", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University", "location": { "postCode": "15213", "settlement": "Pittsburgh", "region": "PA", "country": "USA" } }, "email": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University", "location": { "postCode": "15213", "settlement": "Pittsburgh", "region": "PA", "country": "USA" } }, "email": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "", "affiliation": { "laboratory": "", "institution": "Toyota Technological Institute at Chicago", "location": { "postCode": "60637", "settlement": "Chicago", "region": "IL", "country": "USA" } }, "email": "" }, { "first": "Nathan", "middle": [], "last": "Schneider", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University", "location": { "postCode": "15213", "settlement": "Pittsburgh", "region": "PA", "country": "USA" } }, "email": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University", "location": { "postCode": "15213", "settlement": "Pittsburgh", "region": "PA", "country": "USA" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We consider the problem of part-of-speech tagging for informal, online conversational text. We systematically evaluate the use of large-scale unsupervised word clustering and new lexical features to improve tagging accuracy. With these features, our system achieves state-of-the-art tagging results on both Twitter and IRC POS tagging tasks; Twitter tagging is improved from 90% to 93% accuracy (more than 3% absolute). Qualitative analysis of these word clusters yields insights about NLP and linguistic phenomena in this genre. Additionally, we contribute the first POS annotation guidelines for such text and release a new dataset of English language tweets annotated using these guidelines. Tagging software, annotation guidelines, and large-scale word clusters are available at: http://www.ark.cs.cmu.edu/TweetNLP This paper describes release 0.3 of the \"CMU Twitter Part-of-Speech Tagger\" and annotated data.", "pdf_parse": { "paper_id": "N13-1039", "_pdf_hash": "", "abstract": [ { "text": "We consider the problem of part-of-speech tagging for informal, online conversational text. We systematically evaluate the use of large-scale unsupervised word clustering and new lexical features to improve tagging accuracy. With these features, our system achieves state-of-the-art tagging results on both Twitter and IRC POS tagging tasks; Twitter tagging is improved from 90% to 93% accuracy (more than 3% absolute). Qualitative analysis of these word clusters yields insights about NLP and linguistic phenomena in this genre. 
Additionally, we contribute the first POS annotation guidelines for such text and release a new dataset of English-language tweets annotated using these guidelines. Tagging software, annotation guidelines, and large-scale word clusters are available at http://www.ark.cs.cmu.edu/TweetNLP. This paper describes release 0.3 of the \"CMU Twitter Part-of-Speech Tagger\" and annotated data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Online conversational text, typified by microblogs, chat, and text messages,1 is a challenge for natural language processing. Unlike the highly edited genres that conventional NLP tools have been developed for, conversational text contains many nonstandard lexical items and syntactic patterns. These are the result of unintentional errors, dialectal variation, conversational ellipsis, topic diversity, and creative use of language and orthography (Eisenstein, 2013). An example is shown in Fig. 1. As a result of this widespread variation, standard modeling assumptions that depend on lexical, syntactic, and orthographic regularity are inappropriate. There is preliminary work on social media part-of-speech (POS) tagging (Gimpel et al., 2011), named entity recognition (Ritter et al., 2011; Liu et al., 2011), and parsing (Foster et al., 2011), but accuracy rates are still significantly lower than on traditional well-edited genres like newswire. Even web text parsing, a comparatively easier genre than social media, lags behind newspaper text (Petrov and McDonald, 2012), as does speech transcript parsing (McClosky et al., 2010).", "cite_spans": [ { "start": 450, "end": 468, "text": "(Eisenstein, 2013)", "ref_id": "BIBREF9" }, { "start": 783, "end": 804, "text": "(Gimpel et al., 2011)", "ref_id": "BIBREF14" }, { "start": 832, "end": 853, "text": "(Ritter et al., 2011;", "ref_id": "BIBREF32" }, { "start": 854, "end": 871, "text": "Liu et al., 2011)", "ref_id": "BIBREF21" }, { "start": 886, "end": 907, "text": "(Foster et al., 2011)", "ref_id": "BIBREF13" }, { "start": 1117, "end": 1144, "text": "(Petrov and McDonald, 2012)", "ref_id": "BIBREF29" }, { "start": 1181, "end": 1204, "text": "(McClosky et al., 2010)", "ref_id": "BIBREF25" } ], "ref_spans": [ { "start": 494, "end": 500, "text": "Fig. 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1 Also referred to as computer-mediated communication.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To tackle the challenge of novel words and constructions, we create a new Twitter part-of-speech tagger, building on previous work by Gimpel et al. (2011), that includes new large-scale distributional features. This leads to state-of-the-art results in POS tagging for both Twitter and Internet Relay Chat (IRC) text. We also annotated a new dataset of tweets with POS tags, improved the annotations in the previous dataset from Gimpel et al., and developed annotation guidelines for manual POS tagging of tweets. We release all of these resources to the research community:", "cite_spans": [ { "start": 133, "end": 153, "text": "Gimpel et al. 
(2011)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 an open-source part-of-speech tagger for online conversational text ( \u00a72); \u2022 unsupervised Twitter word clusters ( \u00a73); \u2022 an improved emoticon detector for conversational text ( \u00a74);", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 POS annotation guidelines ( \u00a75.1); and \u2022 a new dataset of 547 manually POS-annotated tweets ( \u00a75).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our tagging model is a first-order maximum entropy Markov model (MEMM), a discriminative sequence model for which training and decoding are extremely efficient (Ratnaparkhi, 1996; McCallum et al., 2000) . 2 The probability of a tag y t is conditioned on the input sequence x and the tag to its left y t\u22121 , and is parameterized by a multiclass logistic regression:", "cite_spans": [ { "start": 160, "end": 179, "text": "(Ratnaparkhi, 1996;", "ref_id": "BIBREF31" }, { "start": 180, "end": 202, "text": "McCallum et al., 2000)", "ref_id": "BIBREF24" }, { "start": 205, "end": 206, "text": "2", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "MEMM Tagger", "sec_num": "2" }, { "text": "p(y t = k | y t\u22121 , x, t; \u03b2) \u221d exp \u03b2 (trans) y t\u22121 ,k + j \u03b2 (obs) j,k f j (x, t)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MEMM Tagger", "sec_num": "2" }, { "text": "We use transition features for every pair of labels, and extract base observation features from token t and neighboring tokens, and conjoin them against all K = 25 possible outputs in our coarse tagset (Appendix A). Our feature sets will be discussed below in detail.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MEMM Tagger", "sec_num": "2" }, { "text": "Decoding. For experiments reported in this paper, we use the O(|x|K 2 ) Viterbi algorithm for prediction; K is the number of tags. This exactly maximizes p(y | x), but the MEMM also naturally allows a faster O(|x|K) left-to-right greedy decoding:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MEMM Tagger", "sec_num": "2" }, { "text": "for t = 1 . . . |x|: y t \u2190 arg max k p(y t = k |\u0177 t\u22121 , x, t; \u03b2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MEMM Tagger", "sec_num": "2" }, { "text": "which we find is 3 times faster and yields similar accuracy as Viterbi (an insignificant accuracy decrease of less than 0.1% absolute on the DAILY547 test set discussed below). Speed is paramount for social media analysis applications-which often require the processing of millions to billions of messages-so we make greedy decoding the default in the released software.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MEMM Tagger", "sec_num": "2" }, { "text": "2 Although when compared to CRFs, MEMMs theoretically suffer from the \"label bias\" problem (Lafferty et al., 2001) , our system substantially outperforms the CRF-based taggers of previous work; and when comparing to Gimpel et al. system with similar feature sets, we observed little difference in accuracy. This is consistent with conventional wisdom that the quality of lexical features is much more important than the parametric form of the sequence model, at least in our setting: part-ofspeech tagging with a small labeled training set. 
, { "text": "This greedy tagger runs at 800 tweets/sec. (10,000 tokens/sec.) on a single CPU core, about 40 times faster than Gimpel et al.'s system. The tokenizer by itself (\u00a74) runs at 3,500 tweets/sec.3", "cite_spans": [ { "start": 734, "end": 735, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "MEMM Tagger", "sec_num": "2" }, { "text": "Training and regularization. During training, the MEMM log-likelihood for a tagged tweet \\langle x, y \\rangle is the sum over the observed token tags y_t, each conditioned on the tweet being tagged and the observed previous tag (with a start symbol before the first token in x):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MEMM Tagger", "sec_num": "2" }, { "text": "\\ell(x, y, \\beta) = \\sum_{t=1}^{|x|} \\log p(y_t \\mid y_{t-1}, x, t; \\beta).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MEMM Tagger", "sec_num": "2" }, { "text": "We optimize the parameters \\beta with OWL-QN, an L_1-capable variant of L-BFGS (Andrew and Gao, 2007; Liu and Nocedal, 1989), to minimize the regularized objective", "cite_spans": [ { "start": 76, "end": 98, "text": "(Andrew and Gao, 2007;", "ref_id": "BIBREF1" }, { "start": 99, "end": 121, "text": "Liu and Nocedal, 1989)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "MEMM Tagger", "sec_num": "2" }, { "text": "\\arg\\min_{\\beta} \\; -\\frac{1}{N} \\sum_{\\langle x, y \\rangle} \\ell(x, y, \\beta) + R(\\beta)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MEMM Tagger", "sec_num": "2" }, { "text": "where N is the number of tokens in the corpus and the sum ranges over all tagged tweets \\langle x, y \\rangle in the training data. We use elastic net regularization (Zou and Hastie, 2005), a linear combination of L_1 and L_2 penalties; here j indexes over all features:", "cite_spans": [ { "start": 149, "end": 171, "text": "(Zou and Hastie, 2005)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "MEMM Tagger", "sec_num": "2" }, { "text": "R(\\beta) = \\lambda_1 \\sum_j |\\beta_j| + \\frac{1}{2} \\lambda_2 \\sum_j \\beta_j^2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MEMM Tagger", "sec_num": "2" }, { "text": "Using even a very small L_1 penalty eliminates many irrelevant or noisy features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MEMM Tagger", "sec_num": "2" }
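, { "text": "As a concrete illustration of the regularized objective above, the following sketch computes R(\\beta) and the overall objective from precomputed per-tweet log-likelihoods. It is a hedged example, not the released trainer; the function names and the default \\lambda values are arbitrary placeholders, not the tuned settings.

```python
import numpy as np

def elastic_net(beta, lam1, lam2):
    # R(beta) = lam1 * sum_j |beta_j| + (lam2 / 2) * sum_j beta_j**2
    return lam1 * np.abs(beta).sum() + 0.5 * lam2 * np.square(beta).sum()

def objective(beta, logliks, n_tokens, lam1=0.1, lam2=1.0):
    # -(1/N) * sum of ell(x, y, beta) over tagged tweets, plus the penalty;
    # 'logliks' holds ell(x, y, beta) for each training tweet at the current beta.
    return -sum(logliks) / n_tokens + elastic_net(beta, lam1, lam2)
```

The L_1 term is non-differentiable at zero, which is why an L_1-capable method such as OWL-QN is used in place of plain L-BFGS; weights driven exactly to zero remove their features from the model entirely.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MEMM Tagger", "sec_num": "2" }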
", "cite_spans": [ { "start": 390, "end": 408, "text": "(Koo et al., 2008;", "ref_id": "BIBREF17" }, { "start": 409, "end": 429, "text": "Turian et al., 2010;", "ref_id": "BIBREF36" }, { "start": 430, "end": 465, "text": "T\u00e4ckstr\u00f6m et al., 2012, inter alia)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Word Clusters", "sec_num": "3" }, { "text": "G4 111010110001 <3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Word Clusters", "sec_num": "3" }, { "text": "xoxo <33 xo <333 #love s2 #neversaynever <3333 Figure 2 : Example word clusters (HMM classes): we list the most probable words, starting with the most probable, in descending order. Boldfaced words appear in the example tweet ( Figure 1 ). The binary strings are root-to-leaf paths through the binary cluster tree. For example usage, see e.g. search.twitter.com, bing.com/social and urbandictionary.com.", "cite_spans": [], "ref_spans": [ { "start": 67, "end": 75, "text": "Figure 2", "ref_id": null }, { "start": 248, "end": 256, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Unsupervised Word Clusters", "sec_num": "3" }, { "text": "We obtained hierarchical word clusters via Brown clustering (Brown et al., 1992) on a large set of unlabeled tweets. 4 The algorithm partitions words into a base set of 1,000 clusters, and induces a hierarchy among those 1,000 clusters with a series of greedy agglomerative merges that heuristically optimize the likelihood of a hidden Markov model with a one-class-per-lexical-type constraint. Not only does Brown clustering produce effective features for discriminative models, but its variants are better unsupervised POS taggers than some models developed nearly 20 years later; see comparisons in Blunsom and Cohn (2011) . The algorithm is attractive for our purposes since it scales to large amounts of data. When training on tweets drawn from a single day, we observed time-specific biases (e.g., numerical dates appearing in the same cluster as the word tonight), so we assembled our unlabeled data from a random sample of 100,000 tweets per day from September 10, 2008 to August 14, 2012, and filtered out non-English tweets (about 60% of the sample) using langid.py (Lui and Baldwin, 2012) . 5 Each tweet was processed with our to-kenizer and lowercased. We normalized all atmentions to @MENTION and URLs/email addresses to their domains (e.g. http://bit.ly/ dP8rR8 \u21d2 URL-bit.ly ). In an effort to reduce spam, we removed duplicated tweet texts (this also removes retweets) before word clustering. This normalization and cleaning resulted in 56 million unique tweets (847 million tokens). We set the clustering software's count threshold to only cluster words appearing 40 or more times, yielding 216,856 word types, which took 42 hours to cluster on a single CPU. Fig. 2 shows example clusters. Some of the challenging words in the example tweet ( Fig. 1) are highlighted. The term lololol (an extension of lol for \"laughing out loud\") is grouped with a large number of laughter acronyms (A1: \"laughing my (fucking) ass off,\" \"cracking the fuck up\"). Since expressions of laughter are so prevalent on Twitter, the algorithm creates another laughter cluster (A1's sibling A2), that tends to have onomatopoeic, non-acronym variants (e.g., haha). The acronym ikr (\"I know, right?\") is grouped with expressive variations of \"yes\" and \"no\" (A4). 
Note that A1-A4 are grouped in a fairly specific subtree; and indeed, in this message ikr and lololol are both tagged as interjections.", "cite_spans": [ { "start": 60, "end": 80, "text": "(Brown et al., 1992)", "ref_id": "BIBREF5" }, { "start": 117, "end": 118, "text": "4", "ref_id": null }, { "start": 602, "end": 625, "text": "Blunsom and Cohn (2011)", "ref_id": "BIBREF2" }, { "start": 1076, "end": 1099, "text": "(Lui and Baldwin, 2012)", "ref_id": "BIBREF22" }, { "start": 1102, "end": 1103, "text": "5", "ref_id": null } ], "ref_spans": [ { "start": 1675, "end": 1681, "text": "Fig. 2", "ref_id": null }, { "start": 1759, "end": 1766, "text": "Fig. 1)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Clustering Method", "sec_num": "3.1" }, { "text": "smh (\"shaking my head,\" indicating disapproval) seems related, though it is always tagged in the annotated data as a miscellaneous abbreviation (G); the distinction between acronyms that are interjections and other acronyms may be complicated. Here, smh is in a related but distinct subtree from the above expressions (A5); its usage in this example is slightly different from its more common usage, which it shares with the other words in its cluster: message-ending expressions of commentary or emotional reaction, sometimes as a metacomment on the author's message; e.g., Maybe you could get a guy to date you if you actually respected yourself #smh or There is really NO reason why other girls should send my boyfriend a goodmorning text #justsaying.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cluster Examples", "sec_num": "3.2" }, { "text": "We observe many variants of categories traditionally considered closed-class, including pronouns (B: u = \"you\") and prepositions (C: fir = \"for\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cluster Examples", "sec_num": "3.2" }, { "text": "There is also evidence of grammatical categories specific to conversational genres of English; clusters E1-E2 demonstrate variations of single-word contractions for \"going to\" and \"trying to,\" some of which have more complicated semantics.6 Finally, the HMM learns about orthographic variants, even though it treats all words as opaque symbols; cluster F consists almost entirely of variants of \"so,\" their frequencies monotonically decreasing in the number of vowel repetitions, a phenomenon called \"expressive lengthening\" or \"affective lengthening\" (Brody and Diakopoulos, 2011; Schnoebelen, 2012). This suggests a future direction: jointly modeling class sequence and orthographic information (Clark, 2003; Smith and Eisner, 2005; Blunsom and Cohn, 2011).", "cite_spans": [ { "start": 240, "end": 241, "text": "6", "ref_id": null }, { "start": 552, "end": 581, "text": "(Brody and Diakopoulos, 2011;", "ref_id": null }, { "start": 582, "end": 599, "text": "Schnoebelen, 2012", "ref_id": "BIBREF33" }, { "start": 697, "end": 710, "text": "(Clark, 2003;", "ref_id": "BIBREF7" }, { "start": 711, "end": 734, "text": "Smith and Eisner, 2005;", "ref_id": "BIBREF34" }, { "start": 735, "end": 758, "text": "Blunsom and Cohn, 2011)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Cluster Examples", "sec_num": "3.2" }, { "text": "We have built an HTML viewer to browse these and numerous other interesting examples.7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cluster Examples", "sec_num": "3.2" }
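, { "text": "To show how hierarchical clusters like these can be plugged into a discriminative tagger, here is a minimal sketch of cluster-path prefix features in the style of Koo et al. (2008). It is illustrative only: the file name 'clusters.txt', the whitespace-separated 'binary-path word count' layout, and the particular prefix lengths are assumptions for the example, not a specification of the released resources.

```python
def load_clusters(path='clusters.txt'):
    # Map each word to its binary root-to-leaf path in the cluster tree.
    clusters = {}
    with open(path, encoding='utf-8') as f:
        for line in f:
            parts = line.split()  # expected fields: binary path, word, count
            if len(parts) >= 2:
                clusters[parts[1]] = parts[0]
    return clusters

def cluster_features(word, clusters, prefix_lengths=(2, 4, 6, 8, 12, 16)):
    # Shorter prefixes name coarser ancestors in the binary merge tree, so
    # rare spellings share features with their frequent cluster neighbors.
    bits = clusters.get(word.lower())
    if bits is None:
        return []
    return ['cprefix%d=%s' % (n, bits[:n]) for n in prefix_lengths if len(bits) >= n]
```

Under this scheme, a rare variant such as lololol fires the same coarse-prefix features as its laughter-cluster neighbors (A1-A2), which is what lets a tagger trained on a small labeled set generalize to it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cluster Examples", "sec_num": "3.2" }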
7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cluster Examples", "sec_num": "3.2" }, { "text": "We use the term emoticon to mean a face or icon constructed with traditional alphabetic or punctua- 6 One coauthor, a native speaker of the Texan English dialect, notes \"finna\" (short for \"fixing to\", cluster E1) may be an immediate future auxiliary, indicating an immediate future tense that is present in many languages (though not in standard English). To illustrate: \"She finna go\" approximately means \"She will go,\" but sooner, in the sense of \"She is about to go.\" 7 http://www.ark.cs.cmu.edu/TweetNLP/ cluster_viewer.html tion symbols, and emoji to mean symbols rendered in software as small pictures, in line with the text. Since our tokenizer is careful to preserve emoticons and other symbols (see \u00a74), they are clustered just like other words. Similar emoticons are clustered together (G1-G4), including separate clusters of happy [[ :) ", "cite_spans": [ { "start": 100, "end": 101, "text": "6", "ref_id": null }, { "start": 842, "end": 847, "text": "[[ :)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Emoticons and Emoji", "sec_num": "3.3" }, { "text": "=) \u2227 _ \u2227 ]], sad/disappointed [[ :/ :( -_-:) ;-p >:d 8-) ;-d G2 11101011001011 :) (: =) :)-_--.-:-( :'( d: :| :s -__-=( =/ >.< -___-:-/ _< =[ :[ #fml", "num": null, "uris": null, "type_str": "figure" }, "FIGREF2": { "text": "Figure 3: OCT27 development set accuracy using only clusters as features. Model In dict. Out of dict. Full 93.4 85.0 No clusters 92.0 (\u22121.4) 79.3 (\u22125.7) Total tokens 4,808 1,394", "num": null, "uris": null, "type_str": "figure" }, "FIGREF3": { "text": ".(2011), trained on more data 88.3", "num": null, "uris": null, "type_str": "figure" }, "TABREF0": { "text": "lmfao lmaoo lmaooo hahahahaha lool ctfu rofl loool lmfaoo lmfaooo lmaoooo lmbo lololol A2 111010100011 haha hahaha hehe hahahaha hahah aha hehehe ahaha hah hahahah kk hahaa ahah A3 111010100100 yes yep yup nope yess yesss yessss ofcourse yeap likewise yepp yesh yw yuup yus A4 111010100101 yeah yea nah naw yeahh nooo yeh noo noooo yeaa ikr nvm yeahhh nahh noooooA5 11101011011100 smh jk #fail #random #fact smfh #smh #winning #realtalk smdh #dead #justsaying B", "html": null, "num": null, "content": "
Cluster | Binary path | Top words (by frequency)
B | 011101011 | u yu yuh yhu uu yuu yew y0u yuhh youh yhuu iget yoy yooh yuo yue juud ya youz yyou
C | 11100101111001 | w fo fa fr fro ov fer fir whit abou aft serie fore fah fuh w/her w/that fron isn agains
D | 111101011000 | facebook fb itunes myspace skype ebay tumblr bbm flickr aim msn netflix pandora
E1 | 0011001 | tryna gon finna bouta trynna boutta gne fina gonn tryina fenna qone trynaa qon
E2 | 0011000 | gonna gunna gona gna guna gnna ganna qonna gonnna gana qunna gonne goona
F | 0110110111 | soo sooo soooo sooooo soooooo sooooooo soooooooo sooooooooo soooooooooo
", "type_str": "table" }, "TABREF2": { "text": "", "html": null, "num": null, "content": "
: Annotated datasets: number of messages, tokens, tagset, and date range. More information in \u00a75, \u00a76.3, and \u00a76.2.
", "type_str": "table" }, "TABREF4": { "text": "Accuracy comparison on Ritter et al.'s Twitter POS corpus ( \u00a76.2).", "html": null, "num": null, "content": "
Tagger | Accuracy
This work | 93.4 \u00b1 0.3
Forsyth (2007) | 90.8
Table 5: Accuracy comparison on Forsyth's NPSCHAT IRC POS corpus (\u00a76.3).
", "type_str": "table" }, "TABREF5": { "text": "", "html": null, "num": null, "content": "", "type_str": "table" } } } }