{ "paper_id": "N12-1033", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:05:39.073034Z" }, "title": "Stylometric Analysis of Scientific Articles", "authors": [ { "first": "Shane", "middle": [], "last": "Bergsma", "suffix": "", "affiliation": { "laboratory": "", "institution": "Johns Hopkins University Baltimore", "location": { "postCode": "21218", "region": "MD", "country": "USA" } }, "email": "sbergsma@jhu.edu" }, { "first": "Matt", "middle": [], "last": "Post", "suffix": "", "affiliation": { "laboratory": "", "institution": "Johns Hopkins University Baltimore", "location": { "postCode": "21218", "region": "MD", "country": "USA" } }, "email": "post@cs.jhu.edu" }, { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "", "affiliation": { "laboratory": "", "institution": "Johns Hopkins University Baltimore", "location": { "postCode": "21218", "region": "MD", "country": "USA" } }, "email": "yarowsky@cs.jhu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present an approach to automatically recover hidden attributes of scientific articles, such as whether the author is a native English speaker, whether the author is a male or a female, and whether the paper was published in a conference or workshop proceedings. We train classifiers to predict these attributes in computational linguistics papers. The classifiers perform well in this challenging domain, identifying non-native writing with 95% accuracy (over a baseline of 67%). We show the benefits of using syntactic features in stylometry; syntax leads to significant improvements over bag-of-words models on all three tasks, achieving 10% to 25% relative error reduction. 
We give a detailed analysis of which words and syntax most predict a particular attribute, and we show a strong correlation between our predictions and a paper's number of citations.", "pdf_parse": { "paper_id": "N12-1033", "_pdf_hash": "", "abstract": [ { "text": "We present an approach to automatically recover hidden attributes of scientific articles, such as whether the author is a native English speaker, whether the author is a male or a female, and whether the paper was published in a conference or workshop proceedings. We train classifiers to predict these attributes in computational linguistics papers. The classifiers perform well in this challenging domain, identifying non-native writing with 95% accuracy (over a baseline of 67%). We show the benefits of using syntactic features in stylometry; syntax leads to significant improvements over bag-of-words models on all three tasks, achieving 10% to 25% relative error reduction. We give a detailed analysis of which words and syntax most predict a particular attribute, and we show a strong correlation between our predictions and a paper's number of citations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Stylometry aims to recover useful attributes of documents from the style of their writing. In some domains, statistical techniques have successfully deduced author identities (Mosteller and Wallace, 1984) , gender (Koppel et al., 2003) , native language (Koppel et al., 2005) , and even whether an author has dementia (Le et al., 2011) . Stylometric analysis is important to marketers, analysts and social scientists because it provides demographic data directly from raw text. There has been growing interest in applying stylometry in Web 2.0 applications, e.g., detecting the ethnicity of Twitter users (Eisenstein et al., 2011; Rao et al., 2011) , or whether a person is writing deceptive online reviews (Ott et al., 2011) .", "cite_spans": [ { "start": 175, "end": 204, "text": "(Mosteller and Wallace, 1984)", "ref_id": "BIBREF28" }, { "start": 214, "end": 235, "text": "(Koppel et al., 2003)", "ref_id": "BIBREF25" }, { "start": 254, "end": 275, "text": "(Koppel et al., 2005)", "ref_id": "BIBREF26" }, { "start": 318, "end": 335, "text": "(Le et al., 2011)", "ref_id": "BIBREF27" }, { "start": 605, "end": 630, "text": "(Eisenstein et al., 2011;", "ref_id": "BIBREF10" }, { "start": 631, "end": 648, "text": "Rao et al., 2011)", "ref_id": "BIBREF41" }, { "start": 707, "end": 725, "text": "(Ott et al., 2011)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We evaluate stylometric techniques in the novel domain of scientific writing. Science is a difficult domain; authors are encouraged, often explicitly by reviewers/submission-guidelines, to comply with normative practices in style, spelling and grammar. Moreover, topical clues are less salient than in domains like social media. 
Success in this challenging domain can bring us closer to correctly analyzing the huge volumes of online text that are currently unmarked for useful author attributes such as gender and native-language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "New Stylometric Features:", "sec_num": null }, { "text": "Yet science is more than just a good steppingstone for stylometry; it is an important area in itself. Systems for scientific stylometry would give sociologists new tools for analyzing academic communities, and new ways to resolve the nature of collaboration in specific articles (Johri et al., 2011) . Authors might also use these tools, e.g., to help ensure a consistent style in multi-authored papers (Glover and Hirst, 1995) , or to determine sections of a paper needing revision.", "cite_spans": [ { "start": 279, "end": 299, "text": "(Johri et al., 2011)", "ref_id": "BIBREF23" }, { "start": 403, "end": 427, "text": "(Glover and Hirst, 1995)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "New Stylometric Features:", "sec_num": null }, { "text": "The contributions of our paper include: New Stylometric Tasks: We predict whether a paper is written: (1) by a native or non-native speaker, (2) by a male or female, and (3) in the style of a conference or workshop paper. The latter is a fully novel stylometric and bibliometric prediction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "New Stylometric Features:", "sec_num": null }, { "text": "We show the value of syntactic features for stylometry. Among others, we describe tree substitution grammar fragments, which have not previously been used in stylometry. 
TSG fragments are interpretable, efficient, and particularly effective for detecting non-native writing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "New Stylometric Features:", "sec_num": null }, { "text": "While recent studies have mostly evaluated single prediction tasks, we compare different strategies across different tasks on a common dataset and with a common infrastructure. In addition to contrasting different feature types, we compare different training strategies, exploring ways to make use of training instances with label uncertainty.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "New Stylometric Features:", "sec_num": null }, { "text": "We also provide a detailed analysis that is interesting from a sociolinguistic standpoint. Precisely what words distinguish non-native writing? How does the syntax of female authors differ from males? What are the hallmarks of top-tier papers? Finally, we identify some strong correlations between our predictions and a paper's citation count, even when controlling for paper venue and origin.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "New Stylometric Features:", "sec_num": null }, { "text": "Bibliometrics is the empirical analysis of scholarly literature; citation analysis is a well-known bibliometric approach for ranking authors and papers (Borgman and Furner, 2001 ). Bibliometry and stylometry can share goals but differ in techniques. For example, in a work questioning the blindness of double-blind reviewing, Hill and Provost (2003) predict author identities. They ignore the article body and instead consider (a) potential self-citations and (b) similarity between the article's citation list and the citation lists of known papers. Radev et al. (2009a) perform a bibliometric analysis of computational linguistics. 
Teufel and Moens (2002) and Qazvinian and Radev (2008) summarize scientific articles, the latter by automatically finding and filtering sentences in other papers that cite the target article.", "cite_spans": [ { "start": 152, "end": 177, "text": "(Borgman and Furner, 2001", "ref_id": "BIBREF4" }, { "start": 326, "end": 349, "text": "Hill and Provost (2003)", "ref_id": "BIBREF20" }, { "start": 634, "end": 657, "text": "Teufel and Moens (2002)", "ref_id": "BIBREF46" }, { "start": 662, "end": 688, "text": "Qazvinian and Radev (2008)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our system does not consider citations; it is most similar to work that uses raw article text. Hall et al. (2008) build per-year topic models over scientific literature to track the evolution of scientific ideas. Gerrish and Blei (2010) assess the influence of individual articles by modeling their impact on the content of future papers. Yogatama et al. (2011) predict whether a paper will be cited based on both its content and its meta-data such as author names and publication venues. Johri et al. (2011) use per-author topic models to assess the nature of collaboration in a particular article (e.g., apprenticeship or synergy).", "cite_spans": [ { "start": 95, "end": 113, "text": "Hall et al. (2008)", "ref_id": "BIBREF18" }, { "start": 213, "end": 236, "text": "Gerrish and Blei (2010)", "ref_id": "BIBREF16" }, { "start": 339, "end": 361, "text": "Yogatama et al. (2011)", "ref_id": "BIBREF51" }, { "start": 489, "end": 508, "text": "Johri et al. (2011)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "One of the tasks in Sarawgi et al. (2011) concerned predicting gender in scientific writing, but they use a corpus of only ten \"highly established\" authors and make the prediction using twenty papers for each. 
Finally, Dale and Kilgarriff (2010) initiated a shared task on automatic editing of scientific papers written by non-native speakers, with the objective of developing \"tools which can help non-native speakers of English (NNSs) (and maybe some native ones) write academic English prose of the kind that helps a paper get accepted.\" Lexical and pragmatic choices in academic writing have also been analyzed within the applied linguistics community (Myers, 1989; Vassileva, 1998) .", "cite_spans": [ { "start": 20, "end": 41, "text": "Sarawgi et al. (2011)", "ref_id": "BIBREF42" }, { "start": 656, "end": 669, "text": "(Myers, 1989;", "ref_id": "BIBREF29" }, { "start": 670, "end": 686, "text": "Vassileva, 1998)", "ref_id": "BIBREF49" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We use papers from the ACL Anthology Network (Radev et al., 2009b, Release 2011) and exploit its manually-curated meta-data such as normalized author names, affiliations (including country, available up to 2009), and citation counts. We convert each PDF to text 1 but remove text before the Abstract (to anonymize) and after the Acknowledgments/References headings. We split the text into sentences 2 and filter any documents with fewer than 100 (this removes some short/demo papers, malconverted PDFs, etc. -about 23% of the 13K papers with affiliation information). In case the text was garbled, we then filtered the first 3 lines from every file and any line with an '@' symbol (which might be part of an affiliation). We remove footers like Proceedings of ..., table/figure captions, and any lines with non-ASCII characters (e.g. math equations). 
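As a rough illustration, the line-level filters just described might look as follows (function names and the exact footer pattern are our own sketch, not the authors' released code; table/figure-caption removal is omitted):

```python
# Sketch of the per-line filtering rules described above (Section 3).
# Names and patterns are our own illustration of the stated heuristics.

def clean_lines(lines):
    kept = []
    for i, line in enumerate(lines):
        if i < 3:
            continue  # drop the first 3 lines of every file (possible garbled text)
        if '@' in line:
            continue  # likely part of an e-mail address / affiliation block
        if any(ord(ch) > 127 for ch in line):
            continue  # non-ASCII characters, e.g. math equations
        if line.strip().startswith('Proceedings of'):
            continue  # page footer
        kept.append(line)
    return kept

def keep_document(sentences, min_sentences=100):
    # Documents with fewer than 100 sentences are filtered out entirely.
    return len(sentences) >= min_sentences
```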
Papers are then parsed with the Berkeley parser (Petrov et al., 2006) , and part-of-speech (PoS) tagged using CRFTagger (Phan, 2006) .", "cite_spans": [ { "start": 889, "end": 910, "text": "(Petrov et al., 2006)", "ref_id": "BIBREF33" }, { "start": 961, "end": 973, "text": "(Phan, 2006)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "ACL Dataset and Preprocessing", "sec_num": "3" }, { "text": "Training sets always comprise papers from 2001-2007, while test sets are created by randomly shuffling the 2008-2009 portion and then dividing it into development/test sets. We also use papers from 1990-2000 for experiments in \u00a77.3 and \u00a77.4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ACL Dataset and Preprocessing", "sec_num": "3" }, { "text": "Each task has both a Strict training set, using only the data for which we are most confident in the labels (as described below), and a Lenient set, which forcibly assigns every paper in the training period to some class (Table 1) . All test papers are annotated using a Strict rule. While our approaches for automatically-assigning labels can be coarse, they allow us to scale our analysis to a realistic cross-section of academic papers, letting us discover some interesting trends.", "cite_spans": [], "ref_spans": [ { "start": 221, "end": 230, "text": "(Table 1)", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Stylometric Tasks", "sec_num": "4" }, { "text": "We introduce the task of predicting whether a scientific paper is written by a native English speaker (NES) or non-native speaker (NNS). Prior work has mostly made this prediction in learner corpora (Koppel et al., 2005; Tsur and Rappoport, 2007; Wong and Dras, 2011) , although there have been attempts in elicited speech transcripts (Tomokiyo and Jones, 2001 ) and e-mail (Estival et al., 2007) . 
There has also been a large body of work on correcting errors in non-native writing, with a specific focus on difficulties in preposition and article usage (Han et al., 2006; Chodorow et al., 2007; Felice and Pulman, 2007; Tetreault and Chodorow, 2008; Gamon, 2010) .", "cite_spans": [ { "start": 199, "end": 220, "text": "(Koppel et al., 2005;", "ref_id": "BIBREF26" }, { "start": 221, "end": 246, "text": "Tsur and Rappoport, 2007;", "ref_id": "BIBREF48" }, { "start": 247, "end": 267, "text": "Wong and Dras, 2011)", "ref_id": "BIBREF50" }, { "start": 335, "end": 360, "text": "(Tomokiyo and Jones, 2001", "ref_id": "BIBREF47" }, { "start": 374, "end": 396, "text": "(Estival et al., 2007)", "ref_id": "BIBREF11" }, { "start": 555, "end": 573, "text": "(Han et al., 2006;", "ref_id": "BIBREF19" }, { "start": 574, "end": 596, "text": "Chodorow et al., 2007;", "ref_id": "BIBREF7" }, { "start": 597, "end": 621, "text": "Felice and Pulman, 2007;", "ref_id": "BIBREF13" }, { "start": 622, "end": 651, "text": "Tetreault and Chodorow, 2008;", "ref_id": "BIBREF45" }, { "start": 652, "end": 664, "text": "Gamon, 2010)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "NativeL: Native vs. Non-Native English", "sec_num": "4.1" }, { "text": "We annotate papers using two pieces of associated meta-data: (1) author first names and (2) countries of affiliation. We manually marked each country for whether English is predominantly spoken there. We then built a list of common first names of English speakers via the top 150 male and female names from the U.S. census. 3 If the first author of a paper has an English first name and an English-speaking-country affiliation, we mark the paper as NES. 4 If none of the authors have an English first name or an English-speaking-country affiliation, we mark it as NNS. We use this rule to label our development and test data, as well as our Strict training set. 
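A minimal sketch of this Strict labeling rule (the function name and the set-based inputs standing in for the census name list and hand-marked country list are our own illustration):

```python
# Sketch of the Strict NativeL labeling rule (Section 4.1).
# english_names and english_countries stand in for the census-derived
# first-name list and the hand-marked English-speaking-country list.

def label_nativel_strict(authors, english_names, english_countries):
    # authors: list of (first_name, country) pairs; element 0 is the first author
    first_name, first_country = authors[0]
    if first_name in english_names and first_country in english_countries:
        return 'NES'
    if all(name not in english_names and country not in english_countries
           for name, country in authors):
        return 'NNS'
    return None  # ambiguous: excluded from Strict training and evaluation
```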
For Lenient training, we decide based solely on whether the first author is from an English-speaking country.", "cite_spans": [ { "start": 324, "end": 325, "text": "3", "ref_id": null }, { "start": 440, "end": 441, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "NativeL: Native vs. Non-Native English", "sec_num": "4.1" }, { "text": "This novel task aims to distinguish top-tier papers from those at workshops, based on style. We use the annual meeting of the ACL as our canonical toptier venue. For evaluation and Strict training, we label all main-session ACL papers as top-tier, and all workshop papers as workshop. For Lenient training, we assign all conferences (LREC, Coling, EMNLP, etc.) to be top-tier except for their non-main-session papers, which we label as workshop.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Venue: Top-Tier vs. Workshop", "sec_num": "4.2" }, { "text": "Because we are classifying an international set of authors, U.S. census names (the usual source of gender ground-truth) provide incomplete information. We therefore use the data of Bergsma and Lin (2006) . 5 This data has been widely used in coreference resolution but never in stylometry. Each line in the data lists how often a noun co-occurs with male, female, neutral and plural pronouns; this is commonly taken as an approximation of the true gender distribution. E.g., 'bill clinton' is 98% male (in 8344 instances) while 'elsie wayne' is 100% female (in 23). The data also has aggregate counts over all nouns with the same first token, e.g., 'elsie ...' is 94% female (in 255 instances). For Strict training/evaluation, we label papers with the following rule based on the first author's first name: if the name has an aggregate count >30 and female probability >0.85, label as female; otherwise if the aggregate count is >30 and male probability >0.85, label male. 
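This Strict rule can be sketched as follows (names and the count/probability encoding are our own illustration of the rule above, applied to aggregate first-name statistics in the style of the Bergsma and Lin (2006) data):

```python
# Sketch of the Strict Gender rule (Section 4.3), applied to the first
# author's first name: count is the aggregate occurrence count, p_female
# and p_male the estimated gender probabilities for that name.

def label_gender_strict(count, p_female, p_male, min_count=30, threshold=0.85):
    if count > min_count and p_female > threshold:
        return 'female'
    if count > min_count and p_male > threshold:
        return 'male'
    return None  # name too rare or too ambiguous for Strict labeling
```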
This rule captures many of ACL's unambiguously-gendered names, both male (Nathanael, Jens, Hiroyuki) and female (Widad, Yael, Sunita) . For Lenient training, we assign all papers based only on whether the male or female probability for the first author is higher. While potentially noisy, there is precedent for assigning a single gender to papers \"co-authored by researchers of mixed gender\" (Sarawgi et al., 2011) .", "cite_spans": [ { "start": 181, "end": 203, "text": "Bergsma and Lin (2006)", "ref_id": "BIBREF3" }, { "start": 206, "end": 207, "text": "5", "ref_id": null }, { "start": 1085, "end": 1106, "text": "(Widad, Yael, Sunita)", "ref_id": null }, { "start": 1366, "end": 1388, "text": "(Sarawgi et al., 2011)", "ref_id": "BIBREF42" } ], "ref_spans": [], "eq_spans": [], "section": "Gender: Male vs. Female", "sec_num": "4.3" }, { "text": "Model: We take a discriminative approach to stylometry, representing articles as feature vectors ( \u00a76) and classifying them using a linear, L2-regularized SVM, trained via LIBLINEAR (Fan et al., 2008) . SVMs are state-of-the-art and have been used previously in stylometry (Koppel et al., 2005) .", "cite_spans": [ { "start": 182, "end": 200, "text": "(Fan et al., 2008)", "ref_id": "BIBREF12" }, { "start": 273, "end": 294, "text": "(Koppel et al., 2005)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Models and Training Strategies", "sec_num": "5" }, { "text": "We test whether it's better to train with a smaller, more accurate Strict set, or a larger but noisier Lenient set. We also explore a third strategy, motivated by work in learning from noisy web images (Bergamo and Torresani, 2010) , in which we fix the Strict labels, but also include the remaining examples as unlabeled instances. We then optimize a Transductive SVM, solving an optimization problem where we not only choose the feature weights, but also labels for unlabeled training points. 
As with a regular SVM, the goal is to maximize the margin between the positive and negative vectors, but now the vectors have both fixed and imputed labels. We optimize using the software of Joachims (1999). While the classifier is trained using a transductive strategy, it is still tested inductively, i.e., on unseen data. Koppel et al. (2003) describe a range of features that have been used in stylometry, ranging from early manual selection of potentially discriminative words, to approaches based on automated text categorization (Sebastiani, 2002) . We use the following three feature classes; the particular features were chosen based on development experiments.", "cite_spans": [ { "start": 202, "end": 231, "text": "(Bergamo and Torresani, 2010)", "ref_id": "BIBREF2" }, { "start": 812, "end": 832, "text": "Koppel et al. (2003)", "ref_id": "BIBREF25" }, { "start": 1024, "end": 1042, "text": "(Sebastiani, 2002)", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "Strategy:", "sec_num": null }, { "text": "A variety of \"discouraging results\" in the text categorization literature have shown that simple bag-of-words (Bow) representations usually perform better than \"more sophisticated\" ones (e.g. using syntax) (Sebastiani, 2002) . This was also observed in sentiment classification (Pang et al., 2002) . One key aim of our research is to see whether this is true of scientific stylometry. Our Bow representation uses a feature for each unique lower-case word-type in an article. We also preprocess papers by making all digits '0'. Normalizing digits and filtering capitalized words helps ensure citations and named-entities are excluded from our features. 
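A minimal sketch of this preprocessing and the resulting bag-of-words vector (our own illustration, not the authors' feature-extraction code; we add 1 before taking the log, one common convention for log-count feature values):

```python
# Sketch of the Bow representation (Section 6.1): lower-cased word types
# with digits normalized to '0', valued by log-counts.
import math
import re
from collections import Counter

def bow_features(tokens):
    counts = Counter(re.sub('[0-9]', '0', tok.lower()) for tok in tokens)
    return {word: math.log(1 + c) for word, c in counts.items()}
```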
The feature value is the log-count of how often the corresponding word occurs in the document.", "cite_spans": [ { "start": 205, "end": 223, "text": "(Sebastiani, 2002)", "ref_id": "BIBREF43" }, { "start": 277, "end": 296, "text": "(Pang et al., 2002)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Bow Features", "sec_num": "6.1" }, { "text": "While text categorization relies on keywords, stylometry focuses on topic-independent measures like function word frequency (Mosteller and Wallace, 1984) , sentence length (Yule, 1939) , and PoS (Hirst and Feiguina, 2007) . We define a style-word to be:", "cite_spans": [ { "start": 124, "end": 153, "text": "(Mosteller and Wallace, 1984)", "ref_id": "BIBREF28" }, { "start": 172, "end": 184, "text": "(Yule, 1939)", "ref_id": "BIBREF52" }, { "start": 195, "end": 221, "text": "(Hirst and Feiguina, 2007)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Style Features", "sec_num": "6.2" }, { "text": "(1) punctuation, (2) a stopword, or (3) a Latin abbreviation. 6 We create Style features for all unigrams and bigrams, replacing non-style-words separately with both PoS-tags and spelling signatures. 7 Each feature is an N-gram, the value is its log-count in the article. We also include stylistic meta-features such as mean-words-per-sentence and mean-word-length.", "cite_spans": [ { "start": 62, "end": 63, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Style Features", "sec_num": "6.2" }, { "text": "Unlike recent work using generative PCFGs (Raghavan et al., 2010; Sarawgi et al., 2011) , we use syntax directly as features in discriminative models, which can easily incorporate arbitrary and overlapping syntactic clues. For example, we will see that one indicator of native text is the use of certain determiners as stand-alone noun phrases (NPs), like this in Figure 2 . 
This contrasts with a proposed non-native phrase, \"this/DT growing/VBG area/NN,\" where this instead modifies a noun. The Bow features are clearly unhelpful: this occurs in both cases. The Style features are likewise unhelpful; this-VBG also occurs in both cases. We need the deeper knowledge that a specific determiner is used as a complete NP.", "cite_spans": [ { "start": 42, "end": 65, "text": "(Raghavan et al., 2010;", "ref_id": "BIBREF40" }, { "start": 66, "end": 87, "text": "Sarawgi et al., 2011)", "ref_id": "BIBREF42" } ], "ref_spans": [ { "start": 364, "end": 372, "text": "Figure 2", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Syntax Features", "sec_num": "6.3" }, { "text": "We evaluate three feature types that aim to capture such knowledge. In each case, we aggregate the feature counts over all the parse trees constituting a document. The feature value is the log-count of how often each feature occurs. To remove content information from the features, we preprocess the parse tree terminals: all non-style-word terminals are replaced with their spelling signature (see \u00a76.2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax Features", "sec_num": "6.3" }, { "text": "We include a feature for every unique, single-level context-free-grammar (CFG) rule application in a paper (following Baayen et al. (1996) , Gamon (2004) , Hirst and Feiguina (2007) , Wong and Dras (2011)). The Figure 2 tree would have features: NP PRP, NP DT, DT this, etc. Such features do capture that a determiner was used as an NP, but they do not jointly encode which determiner was used. This is an important omission; we'll see that other determiners acting as stand-alone NPs indicate non-native writing (e.g., the word that, see \u00a77.2).", "cite_spans": [ { "start": 118, "end": 138, "text": "Baayen et al. 
(1996)", "ref_id": "BIBREF1" }, { "start": 141, "end": 153, "text": "Gamon (2004)", "ref_id": "BIBREF14" }, { "start": 156, "end": 181, "text": "Hirst and Feiguina (2007)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 211, "end": 219, "text": "Figure 2", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "CFG Rules:", "sec_num": null }, { "text": "TSG Fragments: A tree-substitution grammar is a generalization of CFGs that allows rewriting to tree fragments rather than sequences of non-terminals (Joshi and Schabes, 1997). Figure 2 gives the example NP (DT this). This fragment captures both the identity of the determiner and its syntactic function as an NP, as desired. Efficient Bayesian procedures have recently been developed that enable the training of large-scale probabilistic TSG grammars (Post and Gildea, 2009; Cohn et al., 2010) .", "cite_spans": [ { "start": 451, "end": 474, "text": "(Post and Gildea, 2009;", "ref_id": "BIBREF35" }, { "start": 475, "end": 493, "text": "Cohn et al., 2010)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 176, "end": 184, "text": "Figure 2", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "CFG Rules:", "sec_num": null }, { "text": "While TSGs have not been used previously in stylometry, Post (2011) uses them to predict sentence grammaticality (i.e. detecting pseudo-sentences following Okanohara and Tsujii (2007) and Cherry and Quirk (2008) ). We use Post's TSG training settings and his public code. 8 We parse with the TSG grammar and extract the fragments as features. We also follow Post by having features for aggregate TSG statistics, e.g., how many fragments are of a given size, tree-depth, etc. These syntactic meta-features are somewhat similar to the manually-defined stylometric features of Stamatatos et al. 
(2001) .", "cite_spans": [ { "start": 157, "end": 184, "text": "Okanohara and Tsujii (2007)", "ref_id": "BIBREF30" }, { "start": 189, "end": 212, "text": "Cherry and Quirk (2008)", "ref_id": "BIBREF6" }, { "start": 273, "end": 274, "text": "8", "ref_id": null }, { "start": 575, "end": 599, "text": "Stamatatos et al. (2001)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "CFG Rules:", "sec_num": null }, { "text": "We also extracted the reranking features of Charniak and Johnson (2005) . These features were hand-crafted for reranking the output of a parser, but have recently been used for other NLP tasks (Post, 2011; Wong and Dras, 2011) . They include lexicalized features for sub-trees and head-to-head dependencies, and aggregate features for conjunct parallelism and the degree of right-branching. We get the features using another script from Post. 9 While TSG fragments tile a parse tree into a few useful fragments, C&J features can produce thousands of features per sentence, and are thus much more computationally-demanding.", "cite_spans": [ { "start": 44, "end": 71, "text": "Charniak and Johnson (2005)", "ref_id": "BIBREF5" }, { "start": 193, "end": 205, "text": "(Post, 2011;", "ref_id": "BIBREF36" }, { "start": 206, "end": 226, "text": "Wong and Dras, 2011)", "ref_id": "BIBREF50" } ], "ref_spans": [], "eq_spans": [], "section": "C&J Reranking Features:", "sec_num": null }, { "text": "We take the minority class as the positive class: NES for NativeL, top-tier for Venue and female for Gender, and calculate the precision/recall of these classes. We tune three hyperparameters for F1 score on development data: (1) the SVM regularization parameter, (2) the threshold for classifying an instance as positive (using the signed hyperplane distance as the score), and (3) for transductive training ( \u00a75), the fraction of unlabeled data to label as positive. Statistical significance on held-out test data is assessed with McNemar's test, p<0.05. 
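As an illustration of hyperparameter (2), a decision threshold can be tuned for F1 by sweeping candidate thresholds over development-set scores (a pure-Python sketch with our own names, not the authors' tuning code):

```python
# Sketch of tuning the positive-classification threshold for F1 on
# development data; dev_scores are signed hyperplane distances.

def f1(gold, pred, positive):
    tp = sum(1 for g, p in zip(gold, pred) if g == positive and p == positive)
    fp = sum(1 for g, p in zip(gold, pred) if g != positive and p == positive)
    fn = sum(1 for g, p in zip(gold, pred) if g == positive and p != positive)
    if tp == 0:
        return 0.0
    prec, rec = tp / (tp + fp), tp / (tp + fn)
    return 2 * prec * rec / (prec + rec)

def tune_threshold(dev_scores, dev_gold, positive=1, negative=0):
    best_t, best_f1 = 0.0, -1.0
    for t in sorted(set(dev_scores)):  # each observed score is a candidate cutoff
        pred = [positive if s >= t else negative for s in dev_scores]
        score = f1(dev_gold, pred, positive)
        if score > best_f1:
            best_t, best_f1 = t, score
    return best_t, best_f1
```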
For F1-score, we use the following reasonable Baseline: we label all instances with the label of the minority class (achieving 100% recall but low precision).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "7" }, { "text": "Development experiments showed that using all features, Bow+Style+Syntax, works best on all tasks, but there was no benefit in combining different Syntax features. We also found no gain from transductive training, but greater cost, with more hyperparameter tuning and a slower SVM solver. The best Syntax features depend on the task (Table 2) . Under either Strict or Lenient training, TSG was best for NativeL, C&J was best for Venue, and CFG was best for Gender. These trends continue on test data, where TSG exceeds CFG (91.6% vs. 91.2%). For the training strategy, Strict was best on NativeL and Gender, while Lenient was best on Venue (Table 2) . This latter result is interesting: recall that for Venue, Lenient training considers all conferences to be top-tier, but evaluation is just on detecting ACL papers. We suggest some reasons for this below, highlighting some general features of conference papers that extend beyond particular venues. For the remainder of experiments on each task, we fix the syntactic features and training strategy to those that performed best on development data.", "cite_spans": [], "ref_spans": [ { "start": 333, "end": 342, "text": "(Table 2)", "ref_id": "TABREF3" }, { "start": 635, "end": 644, "text": "(Table 2)", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Selection of Syntax and Training Strategy", "sec_num": "7.1" }, { "text": "Gender remains the most difficult task on test data, but our F1 still substantially outperforms the baseline (Table 3) . Results on NativeL are particularly impressive; in terms of accuracy, we classify 94.6% of test articles correctly (the majority-class baseline is 66.9%). 
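Since this Baseline labels everything as the minority class, its recall is 1 and its precision equals the minority prior p, so its F1 follows directly from the class distribution; a one-function sketch (illustrative only):

```python
def all_minority_baseline_f1(minority_fraction):
    """F1 of labeling every instance with the minority class:
    recall = 1.0, precision = minority prior p, so F1 = 2p / (1 + p)."""
    p = minority_fraction
    return 2 * p / (1 + p)
```

For example, with a 66.9% majority class the minority prior is 0.331 and the baseline F1 is roughly 0.50.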
Regarding features, just using Style+Syntax always works better than using Bow. Combining all features always works better still. The gains of Bow+Style+Syntax over vanilla Bow are statistically significant in each case.", "cite_spans": [], "ref_spans": [ { "start": 109, "end": 118, "text": "(Table 3)", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Test Results and Feature Analysis", "sec_num": "7.2" }, { "text": "We also highlight important individual features: Natives also prefer certain abbreviations (e.g. 'e.g.') while non-natives prefer others ('i.e.', 'c.f.', 'etc.'). Exotic punctuation also suggests native text: the semi-colon, exclamation and question mark all predict NES. Note this also varies by region; semicolons are most popular in NES countries but papers from Israel and Italy are close behind. Table 5 gives highly-weighted TSG features for predicting NativeL. Note the determiner-as-NP usage described earlier ( \u00a7 6.3): these, this and each predict native when used as an NP; that-as-an-NP predicts non-native. Furthermore, while not all native speakers use a comma before a conjunction in a list, it's nevertheless a good flag for native writing ('NP NP, NP, (CC and) NP'). In terms of non-native syntax, the passive voice is more common ('VP (VBZ is) VP' and 'VP VBN (PP (IN as) NP)'). We also looked for features involving determiners since correct determiner usage is a common difficulty for non-native speakers. We found cases where determiners were missing where natives might have used one ('NP JJ JJ NN'), but also those where a determiner might be optional and skipped by a native speaker ('NP (DT the) NN NNS'). Note that examples are based on actual usage in ACL papers. We also found that complex NPs were more associated with native text. Features such as 'NP DT JJ NN NN NN', and 'NP DT NN NN NNS' predict native writing. Non-natives also rely more on boilerplate. 
For example, the exact phrase "The/This paper is organized as follows" occurs 3 times as often in non-native compared to native text (in 7.5% of all non-native papers). Sentence re-use is only indirectly captured by our features; it would be interesting to encode flags for it directly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Test Results and Feature Analysis", "sec_num": "7.2" }, { "text": "NativeL:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Test Results and Feature Analysis", "sec_num": "7.2" }, { "text": "In general, we found very few highly-weighted features that pinpoint 'ungrammatical' non-native writing (the feature 'associated to' in Table 4 is a rare example). Our classifiers largely detect non-native writing on a stylistic rather than grammatical basis.", "cite_spans": [], "ref_spans": [ { "start": 136, "end": 143, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Test Results and Feature Analysis", "sec_num": "7.2" }, { "text": "Venue: Table 6 provides important Bow and Style features for the Venue task (syntactic features omitted due to space). While some features are topical (e.g. 'biomedical'), the table gives a blueprint for writing a solid main-conference paper. That is, good papers often have an explicit probability model (or algorithm), experimental baselines, error analysis, Table 7 . Several of the most highly-weighted female features include pronouns (e.g. PRP$). A higher frequency of pronouns in female writing has been attested previously (Argamon et al., 2003) , but has not been traced to particular syntactic constructions. Likewise, we observe a higher frequency of not just negation (noted previously) but adverbs (RB) in general (e.g. 'VP MD RB VP'). In terms of Bow features (not shown), the words contrast and comparison highly predict female, as do topical clues like verb and resource. 
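The direct sentence-reuse flag suggested in the NativeL analysis above — marking papers that contain known boilerplate sentences — could be encoded as binary features; a hypothetical sketch (the phrase list and feature names are ours, not a feature set from the paper):

```python
BOILERPLATE_PHRASES = [
    "the paper is organized as follows",
    "this paper is organized as follows",
]

def boilerplate_flags(paper_text):
    """One binary feature per known boilerplate phrase:
    1 if the paper contains it (case-insensitive), else 0."""
    text = paper_text.lower()
    return {"boiler=" + p: int(p in text) for p in BOILERPLATE_PHRASES}
```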
The top-three male Bow features are (in order): simply, perform, parsing.", "cite_spans": [ { "start": 531, "end": 553, "text": "(Argamon et al., 2003)", "ref_id": "BIBREF25" } ], "ref_spans": [ { "start": 7, "end": 14, "text": "Table 6", "ref_id": "TABREF11" }, { "start": 361, "end": 368, "text": "Table 7", "ref_id": "TABREF13" } ], "eq_spans": [], "section": "Test Results and Feature Analysis", "sec_num": "7.2" }, { "text": "While our objective is to predict attributes of papers, we also show how we can identify author attributes using a larger body of work. We make NativeL and Gender predictions for all papers in the 1990-2000 era using our Bow+Style+Syntax system. For each author+affiliation with \u22653 first-authored papers, we take the average classifier score on these papers. Table 8 shows cases where our model strongly predicts native, showing top authors with foreign affiliations and top authors in English-speaking countries. 10 While not perfect, the predictions correctly identify some native authors that would be difficult to detect using only name and location data. For example, Dekai Wu (Hong Kong) speaks English natively; Christer Samuelsson lists near-native English on his C.V.; etc. Likewise, we have also been able to accurately identify a set of non-native speakers with common American names who were working at American universities. Table 9 provides some of the extreme predictions of our system on Gender. The extreme male and female predictions are based on both style and content; females tend to work on summarization, discourse, Black, Nigel Collier, Jean-Luc Gauvain, Dan Cristea, Graham J. Russell, Kenneth R. Beesley, Dekai Wu, Christer Samuelsson, Raquel Martinez Highest NES Scores, English-country: Eric V. Siegel, Lance A. Ramshaw, Stephanie Seneff, Victor W. Zue, Joshua Goodman, Patti J. Price, Stuart M. Shieber, Jean Carletta, Lynn Lambert, Gina-Anne Levow etc., while many males focus on parsing. 
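The per-author ranking above reduces to averaging paper-level classifier scores over each author's first-authored papers; a sketch with hypothetical data (the paper averages real NativeL/Gender scores per author+affiliation):

```python
from collections import defaultdict

def rank_authors(paper_scores, min_papers=3):
    """paper_scores: (author_affiliation, classifier_score) pairs.
    Keep authors with >= min_papers first-authored papers and
    sort them by mean classifier score, highest first."""
    by_author = defaultdict(list)
    for author, score in paper_scores:
        by_author[author].append(score)
    means = {a: sum(s) / len(s)
             for a, s in by_author.items() if len(s) >= min_papers}
    return sorted(means.items(), key=lambda kv: -kv[1])
```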
We also tried making these lists without Bow features, but the extreme examples still reflect topic to some extent. Topics themselves have their own style, which the style features capture; it is difficult to fully separate style from topic.", "cite_spans": [ { "start": 519, "end": 521, "text": "10", "ref_id": null }, { "start": 1145, "end": 1483, "text": "Black, Nigel Collier, Jean-Luc Gauvain, Dan Cristea, Graham J. Russell, Kenneth R. Beesley, Dekai Wu, Christer Samuelsson, Raquel Martinez Highest NES Scores, English-country: Eric V. Siegel, Lance A. Ramshaw, Stephanie Seneff, Victor W. Zue, Joshua Goodman, Patti J. Price, Stuart M. Shieber, Jean Carletta, Lynn Lambert, Gina-Anne Levow", "ref_id": null } ], "ref_spans": [ { "start": 364, "end": 371, "text": "Table 8", "ref_id": "TABREF14" }, { "start": 944, "end": 951, "text": "Table 9", "ref_id": "TABREF15" } ], "eq_spans": [], "section": "Author Rankings", "sec_num": "7.3" }, { "text": "We also test whether our systems' stylometric scores correlate with the most common bibliometric measure: citation count. To reduce the impact of topic, we only use Style+Syntax features. We plot results separately for ACL, Coling and Workshop papers (1990-2000 era) . Papers at each venue are sorted by their classifier scores and binned into five score bins. Each point in the plot is the mean-score/mean-number-of-citations for papers in a bin (within-community citation data is via the AAN \u00a73 and excludes self citations). We use a truncated mean for citation counts, leaving off the top/bottom five papers in each bin.", "cite_spans": [ { "start": 224, "end": 234, "text": "Coling and", "ref_id": null }, { "start": 235, "end": 266, "text": "Workshop papers (1990-2000 era)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Correlation with Citations", "sec_num": "7.4" }, { "text": "For NativeL, we only plot papers marked as native by our Strict rule (i.e. English name/country). 
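The plotting procedure just described — sort papers by classifier score, split into five contiguous bins, and take a truncated mean of citation counts per bin — can be sketched as follows (variable names are ours):

```python
def truncated_mean(values, trim=5):
    """Mean after dropping the trim largest and trim smallest values
    (plain mean if the bin is too small to trim)."""
    v = sorted(values)
    if trim > 0 and len(v) > 2 * trim:
        v = v[trim:-trim]
    return sum(v) / len(v)

def bin_by_score(papers, n_bins=5, trim=5):
    """papers: (classifier_score, citation_count) pairs. Sort by
    score, split into n_bins contiguous bins, and return one
    (mean score, truncated-mean citations) point per bin."""
    papers = sorted(papers)
    size = len(papers) // n_bins
    points = []
    for i in range(n_bins):
        lo = i * size
        hi = (i + 1) * size if i < n_bins - 1 else len(papers)
        chunk = papers[lo:hi]
        points.append((sum(s for s, _ in chunk) / len(chunk),
                       truncated_mean([c for _, c in chunk], trim)))
    return points
```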
Papers with the lowest NativeL-scores receive many fewer citations, but they soon level off (Figure 3(a) ). Many junior researchers at English universities are non-native speakers; early-career non-natives might receive fewer citations than well-known peers. The correlation between citations and Venue-scores is even stronger (Figure 3(b) ); the top-ranked workshop papers receive five times as many citations as the lowest ones, and are cited better than a good portion of ACL papers. These figures suggest that citation-predictors can get useful information beyond typical Bow features (Yogatama et al., 2011) . Although we focused on a past era, stylistic/syntactic features should also be more robust to the evolution of scientific topics; we plan to next test whether we can better forecast future citations. It would also be interesting to see whether these trends transfer to other academic disciplines.", "cite_spans": [ { "start": 687, "end": 710, "text": "(Yogatama et al., 2011)", "ref_id": "BIBREF51" } ], "ref_spans": [ { "start": 190, "end": 202, "text": "(Figure 3(a)", "ref_id": "FIGREF4" }, { "start": 425, "end": 437, "text": "(Figure 3(b)", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Correlation with Citations", "sec_num": "7.4" }, { "text": "For NativeL, we also created a special test corpus of 273 papers written by first-time ACL authors (2008-2009 era) . This set closely aligns with the system's potential use as a tool to help new authors compose papers. Two (native-speaking) annotators manually annotated each paper for whether it was primarily written by a native or non-native speaker (considering both content and author names/affiliations). The annotators agreed on 90% of decisions, with an inter-annotator kappa of 66%. We divided the papers into a test set and a development set. We applied our Bow+Style+Syntax system exactly as trained above, except we tuned its hyperparameters on the new development data. 
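The inter-annotator agreement figures above use standard Cohen's kappa — observed agreement corrected for chance agreement; a minimal sketch (hypothetical labels; assumes chance agreement < 1):

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators over the same items:
    (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is the agreement expected from each annotator's marginals."""
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    classes = set(labels_a) | set(labels_b)
    p_e = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
              for c in classes)
    return (p_o - p_e) / (1 - p_e)
```

With 90% observed agreement, a kappa of 66% implies chance agreement of roughly 71% under the annotators' label marginals.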
The system performed quite well on this set, reaching 68% F1 over a baseline of only 27%. Moreover, the system also reached 90% accuracy, matching the level of human agreement.", "cite_spans": [ { "start": 99, "end": 114, "text": "(2008-2009 era)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Further Experiments on NativeL", "sec_num": "7.5" }, { "text": "We have proposed, developed and successfully evaluated significant new tasks and methods in the stylometric analysis of scientific articles, including the novel resolution of publication venue based on paper style, and novel syntactic features based on tree substitution grammar fragments. In all cases, our syntactic and stylistic features significantly improve over a bag-of-words baseline, achieving 10% to 25% relative error reduction in all three major tasks. We have included a detailed and insightful analysis of discriminative stylometric features, and we showed a strong correlation between our predictions and a paper's number of citations. We observed evidence for L1-interference in non-native writing, for differences in topic between males and females, and for distinctive language usage which can successfully identify papers published in top-tier conferences versus workshop proceedings. We believe that this work can stimulate new research at the intersection of computational linguistics and bibliometrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "Via the open-source utility pdftotext 2 Splitter from cogcomp.cs.illinois.edu/page/tools", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "www.census.gov/genealogy/names/names_files.html We also manually added common nicknames for these, e.g. Rob for Robert, Chris for Christopher, Dan for Daniel, etc.4 Of course, assuming the first author writes each paper is imperfect. 
In fact, for some native/non-native collaborations, our system ultimately predicts the 2nd (non-native) author to be the main writer; in one case we confirmed the accuracy of this prediction by personal communication with the authors.5 www.clsp.jhu.edu/\u02dcsbergsma/Gender/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The stopword list is the standard set of 524 SMART-system stopwords (following Tomokiyo and Jones (2001)). Latin abbreviations are i.e., e.g., etc., c.f., et or al. 7 E.g., signature 'LC-ing' means lower-case, ending in ing. These are created via a script included with the Berkeley parser.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://github.com/mjpost/dptsg 9 http://github.com/mjpost/extract-spfeatures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note again that this is based on the affiliation of these authors during the 1990s; e.g. 
Gerald Penn published three papers while at the University of T\u00fcbingen.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Highest Model Scores (Male)", "authors": [ { "first": "Chao-Huang", "middle": [], "last": ": John Aberdeen", "suffix": "" }, { "first": "Giorgio", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Stanley", "middle": [ "F" ], "last": "Satta", "suffix": "" }, { "first": "Guodong", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Carl", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Akira", "middle": [], "last": "Weir", "suffix": "" }, { "first": "Hideki", "middle": [], "last": "Ushioda", "suffix": "" }, { "first": "Koichi", "middle": [], "last": "Tanaka", "suffix": "" }, { "first": "Douglas", "middle": [ "B" ], "last": "Takeda", "suffix": "" }, { "first": "Hideo", "middle": [], "last": "Paul", "suffix": "" }, { "first": "Adam", "middle": [ "L" ], "last": "Watanabe", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Berger", "suffix": "" }, { "first": "Jason", "middle": [ "M" ], "last": "Knight", "suffix": "" } ], "year": 2003, "venue": "", "volume": "23", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Highest Model Scores (Male): John Aberdeen, Chao-Huang Chang, Giorgio Satta, Stanley F. Chen, GuoDong Zhou, Carl Weir, Akira Ushioda, Hideki Tanaka, Koichi Takeda, Douglas B. Paul, Hideo Watan- abe, Adam L. Berger, Kevin Knight, Jason M. Eisner Highest Model Scores (Female): Julia B. Hirschberg, Johanna D. Moore, Judy L. Delin, Paola Merlo, Rebecca J. Passonneau, Bonnie Lynn Webber, Beth M. Sundheim, Jennifer Chu-Carroll, Ching-Long Yeh, Mary Ellen Okurowski, Erik-Jan Van Der Linden References Shlomo Argamon, Moshe Koppel, Jonathan Fine, and Anat Rachel Shimoni. 2003. Gender, genre, and writ- ing style in formal written texts. 
Text, 23(3), August.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Outside the cave of shadows: Using syntactic annotation to enhance authorship attribution", "authors": [ { "first": "Harald", "middle": [], "last": "Baayen", "suffix": "" }, { "first": "Fiona", "middle": [], "last": "Tweedie", "suffix": "" }, { "first": "Hans", "middle": [], "last": "Van Halteren", "suffix": "" } ], "year": 1996, "venue": "Literary and Linguistic Computing", "volume": "11", "issue": "3", "pages": "121--132", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harald Baayen, Fiona Tweedie, and Hans van Halteren. 1996. Outside the cave of shadows: Using syntactic annotation to enhance authorship attribution. Literary and Linguistic Computing, 11(3):121-132.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Exploiting weakly-labeled web images to improve object classification: a domain adaptation approach", "authors": [ { "first": "Alessandro", "middle": [], "last": "Bergamo", "suffix": "" }, { "first": "Lorenzo", "middle": [], "last": "Torresani", "suffix": "" } ], "year": 2010, "venue": "Proc. NIPS", "volume": "", "issue": "", "pages": "181--189", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alessandro Bergamo and Lorenzo Torresani. 2010. Ex- ploiting weakly-labeled web images to improve object classification: a domain adaptation approach. In Proc. NIPS, pages 181-189.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Bootstrapping path-based pronoun resolution", "authors": [ { "first": "Shane", "middle": [], "last": "Bergsma", "suffix": "" }, { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2006, "venue": "Proc. Coling-ACL", "volume": "", "issue": "", "pages": "33--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shane Bergsma and Dekang Lin. 2006. Bootstrapping path-based pronoun resolution. In Proc. 
Coling-ACL, pages 33-40.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Scholarly communication and bibliometrics", "authors": [ { "first": "Christine", "middle": [ "L" ], "last": "Borgman", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Furner", "suffix": "" } ], "year": 2001, "venue": "Annual Review of Information Science and Technology", "volume": "36", "issue": "", "pages": "3--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christine L. Borgman and Jonathan Furner. 2001. Schol- arly communication and bibliometrics. Annual Review of Information Science and Technology, 36:3-72.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Coarse-tofine n-best parsing and MaxEnt discriminative reranking", "authors": [ { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2005, "venue": "Proc. ACL", "volume": "", "issue": "", "pages": "173--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eugene Charniak and Mark Johnson. 2005. Coarse-to- fine n-best parsing and MaxEnt discriminative rerank- ing. In Proc. ACL, pages 173-180.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Discriminative, syntactic language modeling through latent SVMs", "authors": [ { "first": "Colin", "middle": [], "last": "Cherry", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Quirk", "suffix": "" } ], "year": 2008, "venue": "Proc. AMTA", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin Cherry and Chris Quirk. 2008. Discriminative, syntactic language modeling through latent SVMs. In Proc. 
AMTA.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Detection of grammatical errors involving prepositions", "authors": [ { "first": "Martin", "middle": [], "last": "Chodorow", "suffix": "" }, { "first": "Joel", "middle": [ "R" ], "last": "Tetreault", "suffix": "" }, { "first": "Na-Rae", "middle": [], "last": "Han", "suffix": "" } ], "year": 2007, "venue": "Proc. ACL-SIGSEM Workshop on Prepositions", "volume": "", "issue": "", "pages": "25--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Chodorow, Joel R. Tetreault, and Na-Rae Han. 2007. Detection of grammatical errors involving prepositions. In Proc. ACL-SIGSEM Workshop on Prepositions, pages 25-30.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Inducing tree-substitution grammars", "authors": [ { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" }, { "first": "Sharon", "middle": [], "last": "Goldwater", "suffix": "" } ], "year": 2010, "venue": "J. Mach. Learn. Res", "volume": "11", "issue": "", "pages": "3053--3096", "other_ids": {}, "num": null, "urls": [], "raw_text": "Trevor Cohn, Phil Blunsom, and Sharon Goldwater. 2010. Inducing tree-substitution grammars. J. Mach. Learn. Res., 11:3053-3096.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Helping our own: Text massaging for computational linguistics as a new shared task", "authors": [ { "first": "Robert", "middle": [], "last": "Dale", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Kilgarriff", "suffix": "" } ], "year": 2010, "venue": "Proc. 6th International Natural Language Generation Conference", "volume": "", "issue": "", "pages": "261--265", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Dale and Adam Kilgarriff. 2010. Helping our own: Text massaging for computational linguistics as a new shared task. In Proc. 
6th International Natural Language Generation Conference, pages 261-265.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Discovering sociolinguistic associations with structured sparsity", "authors": [ { "first": "Jacob", "middle": [], "last": "Eisenstein", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" }, { "first": "Eric", "middle": [ "P" ], "last": "Xing", "suffix": "" } ], "year": 2011, "venue": "Proc. ACL", "volume": "", "issue": "", "pages": "1365--1374", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Eisenstein, Noah A. Smith, and Eric P. Xing. 2011. Discovering sociolinguistic associations with structured sparsity. In Proc. ACL, pages 1365-1374.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Author profiling for English emails", "authors": [ { "first": "Dominique", "middle": [], "last": "Estival", "suffix": "" }, { "first": "Tanja", "middle": [], "last": "Gaustad", "suffix": "" }, { "first": "Son-Bao", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Will", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Hutchinson", "suffix": "" } ], "year": 2007, "venue": "Proc. PACLING", "volume": "", "issue": "", "pages": "263--272", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dominique Estival, Tanja Gaustad, Son-Bao Pham, Will Radford, and Ben Hutchinson. 2007. Author profiling for English emails. In Proc. PACLING, pages 263- 272.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "LIBLINEAR: A library for large linear classification", "authors": [ { "first": "Kai-Wei", "middle": [], "last": "Rong-En Fan", "suffix": "" }, { "first": "Cho-Jui", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Xiang-Rui", "middle": [], "last": "Hsieh", "suffix": "" }, { "first": "Chih-Jen", "middle": [], "last": "Wang", "suffix": "" }, { "first": "", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2008, "venue": "J. Mach. 
Learn. Res", "volume": "9", "issue": "", "pages": "1871--1874", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. 2008. LIBLINEAR: A li- brary for large linear classification. J. Mach. Learn. Res., 9:1871-1874.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Automatically acquiring models of preposition use", "authors": [ { "first": "Rachele", "middle": [], "last": "De Felice", "suffix": "" }, { "first": "Stephen", "middle": [ "G" ], "last": "Pulman", "suffix": "" } ], "year": 2007, "venue": "Proc. ACL-SIGSEM Workshop on Prepositions", "volume": "", "issue": "", "pages": "45--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rachele De Felice and Stephen G. Pulman. 2007. Au- tomatically acquiring models of preposition use. In Proc. ACL-SIGSEM Workshop on Prepositions, pages 45-50.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Linguistic correlates of style: authorship classification with deep linguistic analysis features", "authors": [ { "first": "Michael", "middle": [], "last": "Gamon", "suffix": "" } ], "year": 2004, "venue": "Proc. Coling", "volume": "", "issue": "", "pages": "611--617", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Gamon. 2004. Linguistic correlates of style: authorship classification with deep linguistic analysis features. In Proc. Coling, pages 611-617.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Using mostly native data to correct errors in learners' writing: a meta-classifier approach", "authors": [ { "first": "Michael", "middle": [], "last": "Gamon", "suffix": "" } ], "year": 2010, "venue": "Proc. HLT-NAACL", "volume": "", "issue": "", "pages": "163--171", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Gamon. 2010. Using mostly native data to cor- rect errors in learners' writing: a meta-classifier ap- proach. In Proc. 
HLT-NAACL, pages 163-171.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A languagebased approach to measuring scholarly impact", "authors": [ { "first": "Sean", "middle": [], "last": "Gerrish", "suffix": "" }, { "first": "M", "middle": [], "last": "David", "suffix": "" }, { "first": "", "middle": [], "last": "Blei", "suffix": "" } ], "year": 2010, "venue": "Proc. ICML", "volume": "", "issue": "", "pages": "375--382", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sean Gerrish and David M. Blei. 2010. A language- based approach to measuring scholarly impact. In Proc. ICML, pages 375-382.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Detecting stylistic inconsistencies in collaborative writing", "authors": [ { "first": "Angela", "middle": [], "last": "Glover", "suffix": "" }, { "first": "Graeme", "middle": [], "last": "Hirst", "suffix": "" } ], "year": 1995, "venue": "Writers at work: Professional writing in the computerized environment", "volume": "", "issue": "", "pages": "147--168", "other_ids": {}, "num": null, "urls": [], "raw_text": "Angela Glover and Graeme Hirst. 1995. Detecting stylistic inconsistencies in collaborative writing. In Writers at work: Professional writing in the comput- erized environment, pages 147-168.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Studying the history of ideas using topic models", "authors": [ { "first": "David", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2008, "venue": "Proc. EMNLP", "volume": "", "issue": "", "pages": "363--371", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Hall, Daniel Jurafsky, and Christopher D. Man- ning. 2008. Studying the history of ideas using topic models. In Proc. 
EMNLP, pages 363-371.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Detecting errors in English article usage by nonnative speakers", "authors": [ { "first": "Na-Rae", "middle": [], "last": "Han", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Chodorow", "suffix": "" }, { "first": "Claudia", "middle": [], "last": "Leacock", "suffix": "" } ], "year": 2006, "venue": "Nat. Lang. Eng", "volume": "12", "issue": "2", "pages": "115--129", "other_ids": {}, "num": null, "urls": [], "raw_text": "Na-Rae Han, Martin Chodorow, and Claudia Leacock. 2006. Detecting errors in English article usage by non- native speakers. Nat. Lang. Eng., 12(2):115-129.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "The myth of the double-blind review?: Author identification using only citations", "authors": [ { "first": "Shawndra", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Foster", "middle": [], "last": "Provost", "suffix": "" } ], "year": 2003, "venue": "SIGKDD Explor. Newsl", "volume": "5", "issue": "", "pages": "179--184", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shawndra Hill and Foster Provost. 2003. The myth of the double-blind review?: Author identification using only citations. SIGKDD Explor. Newsl., 5:179-184.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Bigrams of syntactic labels for authorship discrimination of short texts", "authors": [ { "first": "Graeme", "middle": [], "last": "Hirst", "suffix": "" }, { "first": "", "middle": [], "last": "Ol'ga Feiguina", "suffix": "" } ], "year": 2007, "venue": "Literary and Linguistic Computing", "volume": "22", "issue": "4", "pages": "405--417", "other_ids": {}, "num": null, "urls": [], "raw_text": "Graeme Hirst and Ol'ga Feiguina. 2007. Bigrams of syntactic labels for authorship discrimination of short texts. 
Literary and Linguistic Computing, 22(4):405- 417.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Transductive inference for text classification using support vector machines", "authors": [ { "first": "Thorsten", "middle": [], "last": "Joachims", "suffix": "" } ], "year": 1999, "venue": "Proc. ICML", "volume": "", "issue": "", "pages": "200--209", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thorsten Joachims. 1999. Transductive inference for text classification using support vector machines. In Proc. ICML, pages 200-209.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "A study of academic collaborations in computational linguistics using a latent mixture of authors model", "authors": [ { "first": "Nikhil", "middle": [], "last": "Johri", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Ramage", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Mcfarland", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2011, "venue": "Proc. 5th ACL-HLT Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities", "volume": "", "issue": "", "pages": "124--132", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nikhil Johri, Daniel Ramage, Daniel McFarland, and Daniel Jurafsky. 2011. A study of academic collabo- rations in computational linguistics using a latent mix- ture of authors model. In Proc. 
5th ACL-HLT Work- shop on Language Technology for Cultural Heritage, Social Sciences, and Humanities, pages 124-132.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Treeadjoining grammars", "authors": [ { "first": "K", "middle": [], "last": "Aravind", "suffix": "" }, { "first": "Yves", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "", "middle": [], "last": "Schabes", "suffix": "" } ], "year": 1997, "venue": "Handbook of Formal Languages: Beyond Words", "volume": "3", "issue": "", "pages": "71--122", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aravind K. Joshi and Yves Schabes. 1997. Tree- adjoining grammars. In G. Rozenberg and A. Salo- maa, editors, Handbook of Formal Languages: Beyond Words, volume 3, pages 71-122.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Automatically categorizing written texts by author gender", "authors": [ { "first": "Moshe", "middle": [], "last": "Koppel", "suffix": "" }, { "first": "Shlomo", "middle": [], "last": "Argamon", "suffix": "" }, { "first": "Anat Rachel", "middle": [], "last": "Shimoni", "suffix": "" } ], "year": 2003, "venue": "Literary and Linguistic Computing", "volume": "17", "issue": "4", "pages": "401--412", "other_ids": {}, "num": null, "urls": [], "raw_text": "Moshe Koppel, Shlomo Argamon, and Anat Rachel Shi- moni. 2003. Automatically categorizing written texts by author gender. Literary and Linguistic Computing, 17(4):401-412.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Determining an author's native language by mining a text for errors", "authors": [ { "first": "Moshe", "middle": [], "last": "Koppel", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Schler", "suffix": "" }, { "first": "Kfir", "middle": [], "last": "Zigdon", "suffix": "" } ], "year": 2005, "venue": "Proc. KDD", "volume": "", "issue": "", "pages": "624--628", "other_ids": {}, "num": null, "urls": [], "raw_text": "Moshe Koppel, Jonathan Schler, and Kfir Zigdon. 2005. 
Determining an author's native language by mining a text for errors. In Proc. KDD, pages 624-628.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Longitudinal detection of dementia through lexical and syntactic changes in writing: A case study of three British novelists. Literary and Linguistic Computing", "authors": [ { "first": "Xuan", "middle": [], "last": "Le", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Lancashire", "suffix": "" }, { "first": "Graeme", "middle": [], "last": "Hirst", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Jokel", "suffix": "" } ], "year": 2011, "venue": "", "volume": "26", "issue": "", "pages": "435--461", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xuan Le, Ian Lancashire, Graeme Hirst, and Regina Jokel. 2011. Longitudinal detection of dementia through lexical and syntactic changes in writing: A case study of three British novelists. Literary and Lin- guistic Computing, 26(4):435-461.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Applied Bayesian and Classical Inference: The Case of the Federalist Papers", "authors": [ { "first": "Frederick", "middle": [], "last": "Mosteller", "suffix": "" }, { "first": "David", "middle": [ "L" ], "last": "Wallace", "suffix": "" } ], "year": 1984, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Frederick Mosteller and David L. Wallace. 1984. Ap- plied Bayesian and Classical Inference: The Case of the Federalist Papers. Springer-Verlag.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "The pragmatics of politeness in scientific articles", "authors": [ { "first": "Greg", "middle": [], "last": "Myers", "suffix": "" } ], "year": 1989, "venue": "Applied Linguistics", "volume": "10", "issue": "1", "pages": "1--35", "other_ids": {}, "num": null, "urls": [], "raw_text": "Greg Myers. 1989. The pragmatics of politeness in sci- entific articles. 
Applied Linguistics, 10(1):1-35.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "A discriminative language model with pseudo-negative samples", "authors": [ { "first": "Daisuke", "middle": [], "last": "Okanohara", "suffix": "" }, { "first": "Jun'ichi", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": 2007, "venue": "Proc. ACL", "volume": "", "issue": "", "pages": "73--80", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daisuke Okanohara and Jun'ichi Tsujii. 2007. A discriminative language model with pseudo-negative samples. In Proc. ACL, pages 73-80.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Finding deceptive opinion spam by any stretch of the imagination", "authors": [ { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" }, { "first": "Jeffrey", "middle": [ "T" ], "last": "Hancock", "suffix": "" } ], "year": 2011, "venue": "Proc. ACL", "volume": "", "issue": "", "pages": "309--319", "other_ids": {}, "num": null, "urls": [], "raw_text": "Myle Ott, Yejin Choi, Claire Cardie, and Jeffrey T. Han- cock. 2011. Finding deceptive opinion spam by any stretch of the imagination. In Proc. ACL, pages 309- 319.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Thumbs up?: Sentiment classification using machine learning techniques", "authors": [ { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Shivakumar", "middle": [], "last": "Vaithyanathan", "suffix": "" } ], "year": 2002, "venue": "Proc. EMNLP", "volume": "", "issue": "", "pages": "79--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: Sentiment classification using machine learning techniques. In Proc. 
EMNLP, pages 79-86.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Learning accurate, compact, and interpretable tree annotation", "authors": [ { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Leon", "middle": [], "last": "Barrett", "suffix": "" }, { "first": "Romain", "middle": [], "last": "Thibaux", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2006, "venue": "Proc. Coling-ACL", "volume": "", "issue": "", "pages": "433--440", "other_ids": {}, "num": null, "urls": [], "raw_text": "Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning accurate, compact, and inter- pretable tree annotation. In Proc. Coling-ACL, pages 433-440.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "CRFTagger: CRF English POS Tagger. crftagger.sourceforge.net", "authors": [ { "first": "Xuan-Hieu", "middle": [], "last": "Phan", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xuan-Hieu Phan. 2006. CRFTagger: CRF English POS Tagger. crftagger.sourceforge.net.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Bayesian learning of a tree substitution grammar", "authors": [ { "first": "Matt", "middle": [], "last": "Post", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" } ], "year": 2009, "venue": "Proc. ACL-IJCNLP", "volume": "", "issue": "", "pages": "45--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matt Post and Daniel Gildea. 2009. Bayesian learning of a tree substitution grammar. In Proc. ACL-IJCNLP, pages 45-48.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Judging grammaticality with tree substitution grammar derivations", "authors": [ { "first": "Matt", "middle": [], "last": "Post", "suffix": "" } ], "year": 2011, "venue": "Proc. 
ACL", "volume": "", "issue": "", "pages": "217--222", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matt Post. 2011. Judging grammaticality with tree sub- stitution grammar derivations. In Proc. ACL, pages 217-222.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Scientific paper summarization using citation summary networks", "authors": [ { "first": "Vahed", "middle": [], "last": "Qazvinian", "suffix": "" }, { "first": "Dragomir", "middle": [ "R" ], "last": "Radev", "suffix": "" } ], "year": 2008, "venue": "Proc. Coling", "volume": "", "issue": "", "pages": "689--696", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vahed Qazvinian and Dragomir R. Radev. 2008. Scien- tific paper summarization using citation summary net- works. In Proc. Coling, pages 689-696.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "A bibliometric and network analysis of the field of computational linguistics", "authors": [ { "first": "R", "middle": [], "last": "Dragomir", "suffix": "" }, { "first": "Mark", "middle": [ "Thomas" ], "last": "Radev", "suffix": "" }, { "first": "Bryan", "middle": [], "last": "Joseph", "suffix": "" }, { "first": "Pradeep", "middle": [], "last": "Gibson", "suffix": "" }, { "first": "", "middle": [], "last": "Muthukrishnan", "suffix": "" } ], "year": 2009, "venue": "Journal of the American Society for Information Science and Technology", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dragomir R. Radev, Mark Thomas Joseph, Bryan Gib- son, and Pradeep Muthukrishnan. 2009a. A biblio- metric and network analysis of the field of computa- tional linguistics. 
Journal of the American Society for Information Science and Technology.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "The ACL anthology network corpus", "authors": [ { "first": "R", "middle": [], "last": "Dragomir", "suffix": "" }, { "first": "Pradeep", "middle": [], "last": "Radev", "suffix": "" }, { "first": "Vahed", "middle": [], "last": "Muthukrishnan", "suffix": "" }, { "first": "", "middle": [], "last": "Qazvinian", "suffix": "" } ], "year": 2009, "venue": "Proc. ACL Workshop on Natural Language Processing and Information Retrieval for Digital Libraries", "volume": "", "issue": "", "pages": "54--61", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dragomir R. Radev, Pradeep Muthukrishnan, and Vahed Qazvinian. 2009b. The ACL anthology network cor- pus. In Proc. ACL Workshop on Natural Language Processing and Information Retrieval for Digital Li- braries, pages 54-61.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Authorship attribution using probabilistic context-free grammars", "authors": [ { "first": "Adriana", "middle": [], "last": "Sindhu Raghavan", "suffix": "" }, { "first": "Raymond", "middle": [], "last": "Kovashka", "suffix": "" }, { "first": "", "middle": [], "last": "Mooney", "suffix": "" } ], "year": 2010, "venue": "Proc. ACL", "volume": "", "issue": "", "pages": "38--42", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sindhu Raghavan, Adriana Kovashka, and Raymond Mooney. 2010. Authorship attribution using proba- bilistic context-free grammars. In Proc. 
ACL, pages 38-42.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Hierarchical bayesian models for latent attribute detection in social media", "authors": [ { "first": "Delip", "middle": [], "last": "Rao", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Paul", "suffix": "" }, { "first": "Clay", "middle": [], "last": "Fink", "suffix": "" }, { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Oates", "suffix": "" }, { "first": "Glen", "middle": [], "last": "Coppersmith", "suffix": "" } ], "year": 2011, "venue": "Proc. ICWSM", "volume": "", "issue": "", "pages": "598--601", "other_ids": {}, "num": null, "urls": [], "raw_text": "Delip Rao, Michael Paul, Clay Fink, David Yarowsky, Timothy Oates, and Glen Coppersmith. 2011. Hierar- chical bayesian models for latent attribute detection in social media. In Proc. ICWSM, pages 598-601.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Gender attribution: tracing stylometric evidence beyond topic and genre", "authors": [ { "first": "Ruchita", "middle": [], "last": "Sarawgi", "suffix": "" }, { "first": "Kailash", "middle": [], "last": "Gajulapalli", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2011, "venue": "Proc. CoNLL", "volume": "", "issue": "", "pages": "78--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ruchita Sarawgi, Kailash Gajulapalli, and Yejin Choi. 2011. Gender attribution: tracing stylometric evidence beyond topic and genre. In Proc. CoNLL, pages 78- 86.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Machine learning in automated text categorization", "authors": [ { "first": "Fabrizio", "middle": [], "last": "Sebastiani", "suffix": "" } ], "year": 2002, "venue": "ACM Comput. Surv", "volume": "34", "issue": "", "pages": "1--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fabrizio Sebastiani. 2002. 
Machine learning in auto- mated text categorization. ACM Comput. Surv., 34:1- 47.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Automatic text categorization in terms of genre and author", "authors": [ { "first": "Efstathios", "middle": [], "last": "Stamatatos", "suffix": "" }, { "first": "Nikos", "middle": [], "last": "Fakotakis", "suffix": "" }, { "first": "George", "middle": [], "last": "Kokkinakis", "suffix": "" } ], "year": 2001, "venue": "Computational Linguistics", "volume": "26", "issue": "4", "pages": "471--495", "other_ids": {}, "num": null, "urls": [], "raw_text": "Efstathios Stamatatos, Nikos Fakotakis, and George Kokkinakis. 2001. Automatic text categorization in terms of genre and author. Computational Linguistics, 26(4):471-495.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "The ups and downs of preposition error detection in ESL writing", "authors": [ { "first": "Joel", "middle": [ "R" ], "last": "Tetreault", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Chodorow", "suffix": "" } ], "year": 2008, "venue": "Proc. Coling", "volume": "", "issue": "", "pages": "865--872", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joel R. Tetreault and Martin Chodorow. 2008. The ups and downs of preposition error detection in ESL writ- ing. In Proc. Coling, pages 865-872.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Summarizing scientific articles -experiments with relevance and rhetorical status", "authors": [ { "first": "Simone", "middle": [], "last": "Teufel", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Moens", "suffix": "" } ], "year": 2002, "venue": "Computational Linguistics", "volume": "28", "issue": "4", "pages": "409--445", "other_ids": {}, "num": null, "urls": [], "raw_text": "Simone Teufel and Marc Moens. 2002. Summariz- ing scientific articles -experiments with relevance and rhetorical status. 
Computational Linguistics, 28(4):409-445.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "You're not from 'round here, are you? Naive Bayes detection of non-native utterances", "authors": [ { "first": "Laura", "middle": [], "last": "Mayfield Tomokiyo", "suffix": "" }, { "first": "Rosie", "middle": [], "last": "Jones", "suffix": "" } ], "year": 2001, "venue": "Proc. NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laura Mayfield Tomokiyo and Rosie Jones. 2001. You're not from 'round here, are you? Naive Bayes detection of non-native utterances. In Proc. NAACL.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Using classifier features for studying the effect of native language on the choice of written second language words", "authors": [ { "first": "Oren", "middle": [], "last": "Tsur", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Rappoport", "suffix": "" } ], "year": 2007, "venue": "Proc. Workshop on Cognitive Aspects of Computational Language Acquisition", "volume": "", "issue": "", "pages": "9--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oren Tsur and Ari Rappoport. 2007. Using classi- fier features for studying the effect of native language on the choice of written second language words. In Proc. Workshop on Cognitive Aspects of Computa- tional Language Acquisition, pages 9-16.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Who am I/who are we in academic writing?", "authors": [ { "first": "Irena", "middle": [], "last": "Vassileva", "suffix": "" } ], "year": 1998, "venue": "International Journal of Applied Linguistics", "volume": "8", "issue": "2", "pages": "163--185", "other_ids": {}, "num": null, "urls": [], "raw_text": "Irena Vassileva. 1998. Who am I/who are we in aca- demic writing? 
International Journal of Applied Lin- guistics, 8(2):163-185.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Exploiting parse structures for native language identification", "authors": [ { "first": "Sze-Meng Jojo", "middle": [], "last": "Wong", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dras", "suffix": "" } ], "year": 2011, "venue": "Proc. EMNLP", "volume": "", "issue": "", "pages": "1600--1610", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sze-Meng Jojo Wong and Mark Dras. 2011. Exploiting parse structures for native language identification. In Proc. EMNLP, pages 1600-1610.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Predicting a scientific community's response to an article", "authors": [ { "first": "Dani", "middle": [], "last": "Yogatama", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Heilman", "suffix": "" }, { "first": "O'", "middle": [], "last": "Brendan", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Connor", "suffix": "" }, { "first": "Bryan", "middle": [ "R" ], "last": "Dyer", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Routledge", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2011, "venue": "Proc. EMNLP", "volume": "", "issue": "", "pages": "594--604", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dani Yogatama, Michael Heilman, Brendan O'Connor, Chris Dyer, Bryan R. Routledge, and Noah A. Smith. 2011. Predicting a scientific community's response to an article. In Proc. EMNLP, pages 594-604.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "On sentence-length as a statistical characteristic of style in prose: With application to two cases of disputed authorship", "authors": [ { "first": "G", "middle": [], "last": "Udny Yule", "suffix": "" } ], "year": 1939, "venue": "Biometrika", "volume": "30", "issue": "3/4", "pages": "363--390", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. 
Udny Yule. 1939. On sentence-length as a statis- tical characteristic of style in prose: With applica- tion to two cases of disputed authorship. Biometrika, 30(3/4):363-390.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "text": "Predicting hidden attributes in scientific articles", "type_str": "figure" }, "FIGREF1": { "uris": null, "num": null, "text": "Predicting hidden attributes in scientific articles", "type_str": "figure" }, "FIGREF3": { "uris": null, "num": null, "text": "Motivating deeper syntactic features: The shaded TSG fragment indicates native English, but is not directly encoded in Bow, Style, nor standard CFG-rules.", "type_str": "figure" }, "FIGREF4": { "uris": null, "num": null, "text": "Correlation between predictions (x-axis) and mean number of citations (y-axis, log-scale).", "type_str": "figure" }, "TABREF1": { "text": "Number of documents for each task ley parser", "content": "", "html": null, "num": null, "type_str": "table" }, "TABREF3": { "text": "F1 scores for Bow+Style+Syntax system on development data: The best training strategy and the best syntactic features depend on the task.", "content": "
", "html": null, "num": null, "type_str": "table" }, "TABREF4": { "text": "Bow and Style features for NativeL. Some reflect differences in common", "content": "
Features            NativeL   Venue   Gender
Baseline            49.8      45.5    33.1
Bow                 88.8      60.7    42.5
Style               90.6      61.9    39.8
Syntax              88.7      64.6    41.2
Bow+Style           90.4      64.0    45.1
Bow+Syntax          90.3      65.8    42.9
Style+Syntax        89.4      65.5    43.3
Bow+Style+Syntax    91.6      66.7    48.2
", "html": null, "num": null, "type_str": "table" }, "TABREF5": { "text": "", "content": "
: F1 scores with different features on held-out test
data: Including style and syntactic features is superior to
standard Bow features in all cases.
native/non-native topics; e.g., 'probabilities' predicts
native while 'morphological' predicts non-native. Several
features, like 'obtained', indicate L1
interference; i.e., many non-natives have a cognate
for obtain in their native language and thus adopt the
English word. As an example, the word obtained
occurs 3.7 times per paper from Spanish-speaking
areas (cognate obtenir) versus once per native paper
and 0.8 times per German-authored paper.
", "html": null, "num": null, "type_str": "table" }, "TABREF6": { "text": "", "content": "
Predicts native            Predicts non-native
Bow feature      Wt.       Bow feature      Wt.
initial          2.25      obtained         -2.15
techniques       2.11      proposed         -2.06
probabilities    1.38      method           -2.06
additional       1.23      morphological    -1.96
fewer            1.02      languages        -1.23
Style feature    Wt.       Style feature    Wt.
used to          1.92      , i.e.           -2.60
JJR NN           1.90      have to          -1.65
has VBN          1.90      the xxxx-ing     -1.61
example ,        1.75      thus             -1.61
all of           1.73      usually          -1.24
's               1.69      mainly           -1.21
allow            1.47      , because        -1.12
has xxxx-ed      1.45      the VBN          -1.12
may be           1.35      JJ for           -1.11
; and            1.21      cf               -0.97
e.g.             1.10      etc.             -0.55
must VB          0.99      associated to    -0.23
", "html": null, "num": null, "type_str": "table" }, "TABREF7": { "text": "NativeL: Examples of highly-weighted style and content features in the Bow+Style+Syntax system.", "content": "", "html": null, "num": null, "type_str": "table" }, "TABREF9": { "text": "", "content": "
: NativeL: Highly-weighted syntactic features
(descending order of absolute weight) and examples in
the Bow+Style+Syntax system.
and statistical significance checking. On the other
hand, there might be a bias at main conferences for
focused, incremental papers; features of workshop
papers highlight the exploration of 'interesting' new
ideas/domains. Here, the objective might only be to
show what is 'possible' or what one is 'able to' do.
Main conference papers prefer work that improves
'performance' by '#%' on established tasks.
Gender: The CFG features for Gender are given
in
", "html": null, "num": null, "type_str": "table" }, "TABREF11": { "text": "Venue: Examples of highly-weighted style content features in the Bow+Style+Syntax system.", "content": "", "html": null, "num": null, "type_str": "table" }, "TABREF12": { "text": "new NE tag), (or) (no NE tag)", "content": "
CFG Rule             Example
Predicts female author:
NP PRP$ NN NN        (our) (upper) (bound)
QP RB CD             (roughly) (6000)
NP NP, CC NP         (a
NP PRP$ JJ JJ NN     (our) (first) (new) (approach)
VP MD RB VP          (may) (not) (be useful)
ADVP RB RBR          (significantly) (more)
Predicts male author:
ADVP RB RB           (only) (superficially)
NP NP, SBAR          we use (XYZ), (which is ...)
S S: S.              (Trust me): (I'm a doctor)
S S, NP VP           (To do so), (it) (needs help)
WHNP WP NN           depending on (what) (path) is ...
PP IN PRN            (in) ((Jelinek, 1976))
", "html": null, "num": null, "type_str": "table" }, "TABREF13": { "text": "Gender: Highly-weighted syntactic features (descending order of weight) and examples in the Bow+Style+Syntax system.", "content": "
Highest NES Scores, non-English-country: Gerald Penn, 10 Ezra W.
", "html": null, "num": null, "type_str": "table" }, "TABREF14": { "text": "Authors scoring highest on NativeL, in descending order, based exclusively on article text.", "content": "", "html": null, "num": null, "type_str": "table" }, "TABREF15": { "text": "Authors scoring highest (absolute values) on Gender, in descending order, based exclusively on article text.", "content": "
", "html": null, "num": null, "type_str": "table" } } } }