{ "paper_id": "D08-1038", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:29:35.563445Z" }, "title": "Studying the History of Ideas Using Topic Models", "authors": [ { "first": "David", "middle": [], "last": "Hall", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University Stanford", "location": { "postCode": "94305", "region": "CA", "country": "USA" } }, "email": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "", "affiliation": { "laboratory": "", "institution": "Linguistics Stanford University Stanford", "location": { "postCode": "94305", "region": "CA", "country": "USA" } }, "email": "jurafsky@stanford.edu" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University Stanford", "location": { "postCode": "94305", "region": "CA", "country": "USA" } }, "email": "manning@stanford.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "How can the development of ideas in a scientific field be studied over time? We apply unsupervised topic modeling to the ACL Anthology to analyze historical trends in the field of Computational Linguistics from 1978 to 2006. We induce topic clusters using Latent Dirichlet Allocation, and examine the strength of each topic over time. Our methods find trends in the field including the rise of probabilistic methods starting in 1988, a steady increase in applications, and a sharp decline of research in semantics and understanding between 1978 and 2001, possibly rising again after 2001. We also introduce a model of the diversity of ideas, topic entropy, using it to show that COLING is a more diverse conference than ACL, but that both conferences as well as EMNLP are becoming broader over time. 
Finally, we apply Jensen-Shannon divergence of topic distributions to show that all three conferences are converging in the topics they cover.", "pdf_parse": { "paper_id": "D08-1038", "_pdf_hash": "", "abstract": [ { "text": "How can the development of ideas in a scientific field be studied over time? We apply unsupervised topic modeling to the ACL Anthology to analyze historical trends in the field of Computational Linguistics from 1978 to 2006. We induce topic clusters using Latent Dirichlet Allocation, and examine the strength of each topic over time. Our methods find trends in the field including the rise of probabilistic methods starting in 1988, a steady increase in applications, and a sharp decline of research in semantics and understanding between 1978 and 2001, possibly rising again after 2001. We also introduce a model of the diversity of ideas, topic entropy, using it to show that COLING is a more diverse conference than ACL, but that both conferences as well as EMNLP are becoming broader over time. Finally, we apply Jensen-Shannon divergence of topic distributions to show that all three conferences are converging in the topics they cover.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "How can we identify and study the exploration of ideas in a scientific field over time, noting periods of gradual development, major ruptures, and the waxing and waning of both topic areas and connections with applied topics and nearby fields? One important method is to make use of citation graphs (Garfield, 1955) . 
This enables the use of graph-based algorithms like PageRank for determining researcher or paper centrality, and examining whether their influence grows or diminishes over time.", "cite_spans": [ { "start": 299, "end": 315, "text": "(Garfield, 1955)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, because we are particularly interested in the change of ideas in a field over time, we have chosen a different method, following Kuhn (1962) . In Kuhn's model of scientific change, science proceeds by shifting from one paradigm to another. Because researchers' ideas and vocabulary are constrained by their paradigm, successive incommensurate paradigms will naturally have different vocabulary and framing. Kuhn's model is intended to apply only to very large shifts in scientific thought rather than at the micro level of trends in research foci. Nonetheless, we propose to apply Kuhn's insight that vocabulary and vocabulary shift are crucial indicators of ideas and of shifts in ideas. Our operationalization of this insight is based on the unsupervised topic model Latent Dirichlet Allocation (LDA; Blei et al. (2003) ).", "cite_spans": [ { "start": 138, "end": 149, "text": "Kuhn (1962)", "ref_id": "BIBREF9" }, { "start": 809, "end": 827, "text": "Blei et al. (2003)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For many fields, doing this kind of historical study would be very difficult. Computational linguistics has an advantage, however: the ACL Anthology, a public repository of all papers in the Computational Linguistics journal and the conferences and workshops associated with the ACL, COLING, EMNLP, and so on. The ACL Anthology (Bird, 2008) comprises over 14,000 documents from conferences and the journal, beginning as early as 1965 and running through 2008, indexed by conference and year. 
This resource has already been the basis of citation analysis work, for example, in the ACL Anthology Network of Joseph and Radev (2007) . We apply LDA to the text of the papers in the ACL Anthology to induce topics, and use the trends in these topics over time and over conference venues to address questions about the development of the field. Despite the relative youth of our field, computational linguistics has witnessed a number of research trends and shifts in focus. While some trends are obvious (such as the rise in machine learning methods), others may be more subtle. Has the field gotten more theoretical over the years, or has there been an increase in applications? What topics have declined over the years, and which ones have remained roughly constant? How have fields like Dialogue or Machine Translation changed over the years? Are there differences among the conferences, for example between COLING and ACL, in their interests and breadth of focus? As our field matures, it is important to go beyond anecdotal description to give grounded answers to these questions. Such answers could also help give formal metrics to model the differences between the many conferences and venues in our field, which could influence how we think about reviewing, about choosing conference topics, and about long-range planning in our field.", "cite_spans": [ { "start": 328, "end": 340, "text": "(Bird, 2008)", "ref_id": "BIBREF0" }, { "start": 599, "end": 622, "text": "Joseph and Radev (2007)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The analyses in this paper are based on a text-only version of the Anthology that comprises some 12,500 papers. The distribution of the Anthology data is shown in Table 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "2.1" }, { "text": "Our experiments employ Latent Dirichlet Allocation (LDA; Blei et al. 
(2003) ), a generative latent variable model that treats documents as bags of words generated by one or more topics. Each document is characterized by a multinomial distribution over topics, and each topic is in turn characterized by a multinomial distribution over words. We perform parameter estimation using collapsed Gibbs sampling (Griffiths and Steyvers, 2004) .", "cite_spans": [ { "start": 57, "end": 75, "text": "Blei et al. (2003)", "ref_id": "BIBREF2" }, { "start": 406, "end": 436, "text": "(Griffiths and Steyvers, 2004)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Topic Modeling", "sec_num": "2.2" }, { "text": "Possible extensions to this model would be to integrate topic modeling with citations (e.g., Dietz et al. (2007) , Mann et al. (2006), and Jo et al. (2007) ). Another option is the use of a more fine-grained or hierarchical model (e.g., Blei et al. (2004) , and Li and McCallum (2006) ).", "cite_spans": [ { "start": 94, "end": 113, "text": "Dietz et al. (2007)", "ref_id": "BIBREF5" }, { "start": 116, "end": 139, "text": "Mann et al. (2006), and", "ref_id": "BIBREF11" }, { "start": 140, "end": 156, "text": "Jo et al. (2007)", "ref_id": "BIBREF7" }, { "start": 236, "end": 254, "text": "Blei et al. (2004)", "ref_id": "BIBREF3" }, { "start": 261, "end": 283, "text": "Li and McCallum (2006)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Topic Modeling", "sec_num": "2.2" }, { "text": "All our studies measure change in various aspects of the ACL Anthology over time. LDA, however, does not explicitly model temporal relationships. One way to model temporal relationships is to employ an extension to LDA. The Dynamic Topic Model (Blei and Lafferty, 2006) , for example, represents each year's documents as generated from a normal distribution centroid over topics, with the following year's centroid generated from the preceding year's. 
The Topics over Time Model (Wang and McCallum, 2006) assumes that each document chooses its own time stamp based on a topic-specific beta distribution.", "cite_spans": [ { "start": 244, "end": 269, "text": "(Blei and Lafferty, 2006)", "ref_id": "BIBREF1" }, { "start": 479, "end": 504, "text": "(Wang and McCallum, 2006)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Topic Modeling", "sec_num": "2.2" }, { "text": "Both of these models, however, impose constraints on the time periods. The Dynamic Topic Model penalizes large changes from year to year, while the beta distributions in Topics over Time are relatively inflexible. We chose instead to perform post hoc calculations based on the observed probability of each topic given the current year. We define p(z|y) as the empirical probability that an arbitrary paper d written in year y was about topic z:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic Modeling", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\hat{p}(z|y) = \\sum_{d : t_d = y} \\hat{p}(z|d)\\,\\hat{p}(d|y) = \\frac{1}{C} \\sum_{d : t_d = y} \\hat{p}(z|d) = \\frac{1}{C} \\sum_{d : t_d = y} \\sum_{z_i \\in d} \\mathbb{I}(z_i = z)", "eq_num": "(1)" } ], "section": "Topic Modeling", "sec_num": "2.2" }, { "text": "where I is the indicator function, t_d is the date document d was written, and p(d|y) is set to a constant 1/C.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic Modeling", "sec_num": "2.2" }, { "text": "We first ran LDA with 100 topics, and took 36 that we found to be relevant. We then hand-selected seed words for 10 more topics to improve coverage of the field. These 46 topics were then used as priors for a new 100-topic run. The top ten most frequent words for 43 of the topics, along with hand-assigned labels, are listed in Table 2 . 
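Equation (1) amounts to averaging the per-document topic proportions p(z|d) over the documents written in each year, with p(d|y) held constant at 1/C. A minimal sketch (function and variable names are ours, not the authors'):

```python
from collections import defaultdict

def topic_strength_by_year(doc_topics, doc_years):
    """Empirical p(z|y) per equation (1): average the per-document topic
    proportions p(z|d) over all documents written in year y.

    doc_topics: one dict per document, mapping topic id -> p(z|d).
    doc_years:  the year each document was written.
    Returns {year: {topic: p(z|y)}}."""
    by_year = defaultdict(list)
    for p_zd, year in zip(doc_topics, doc_years):
        by_year[year].append(p_zd)
    strength = {}
    for year, year_docs in by_year.items():
        C = len(year_docs)  # p(d|y) is the constant 1/C
        acc = defaultdict(float)
        for p_zd in year_docs:
            for z, p in p_zd.items():
                acc[z] += p / C
        strength[year] = dict(acc)
    return strength
```

A year in which half the papers are entirely about topic 0 yields p(z=0|y) = 0.5, matching the "probability mass per year" plotted in the figures.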
Topics deriving from manual seeds are marked with an asterisk.", "cite_spans": [], "ref_spans": [ { "start": 326, "end": 333, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Summary of Topics", "sec_num": "3" }, { "text": "Given the space of possible topics defined in the previous section, we now examine the history of these topics in the entire ACL Anthology from 1978 until 2006. To visualize some trends, we show the probability mass associated with various topics over time, plotted as (a smoothed version of) p(z|y). Figure 1 shows topics that have become more prominent recently. Of these new topics, the rise in probabilistic models and classification/tagging is unsurprising. In order to distinguish these two topics, we show 20 of the most strongly weighted words for each. As Figure 1 shows, probabilistic models seem to have arrived significantly before classifiers. The probabilistic model topic increases around 1988, which seems to have been an important year for probabilistic models, including high-impact papers like A88-1019 and C88-1016 below. The ten papers from 1988 with the highest weights for the probabilistic model and classifier topics were the following. What do these papers tell us about how probabilistic models and classifiers entered the field? First, not surprisingly, we note that the vast majority (9 of 10) of the papers appeared in conference proceedings rather than the journal, confirming that in general new ideas appear in conferences. Second, of the 9 conference papers, most of them appeared in the COLING conference (5) or the ANLP workshop (3), compared to only 1 in the ACL conference. This suggests that COLING may have been more receptive than ACL to new ideas at the time, a point we return to in Section 6. Finally, we examined the background of the authors of these papers. 
Six of the 10 papers either focus on speech (C88-1010, A88-1028, C88-1071) or were written by authors who had previously published on speech recognition topics, including the influential IBM (Brown et al.) and AT&T (Church) labs (C88-1016, A88-1005, A88-1019). Speech recognition is historically an electrical engineering field which made quite early use of probabilistic and statistical methodologies. This suggests that researchers working on spoken language processing were an important conduit for the borrowing of statistical methodologies into computational linguistics. Figure 2 shows several topics that were more prominent at the beginning of the ACL but which have shown the most precipitous decline. Papers strongly associated with the plan-based dialogue topic are among those showing this decline. The declines in both computational semantics and conceptual semantics/story understanding suggest that it is possible that the entire field of natural language understanding and computational semantics, broadly construed, has fallen out of favor. To see if this was in fact the case, we created a metatopic called semantics in which we combined various semantics topics (not including pragmatic topics like anaphora resolution or discourse coherence), including: lexical semantics, conceptual semantics/story understanding, computational semantics, WordNet, word sense disambiguation, semantic role labeling, RTE and paraphrase, MUC information extraction, and events/temporal. We then plotted p(z \u2208 S|y), the sum of the proportions per year for these topics, as shown in Figure 3 . The steep decrease in semantics is readily apparent. The last few years have shown a levelling off of the decline, and possibly a revival of this topic; this possibility will need to be confirmed as we add data from 2007 and 2008. We next chose two fields, Dialogue and Machine Translation, in which it seemed to us that the topics discovered by LDA suggested a shift in paradigms in these fields. 
Figure 4 shows the shift in translation, while Figure 5 shows the change in dialogue.", "cite_spans": [], "ref_spans": [ { "start": 293, "end": 301, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 548, "end": 556, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 2125, "end": 2133, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 3098, "end": 3106, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 3506, "end": 3514, "text": "Figure 4", "ref_id": null }, { "start": 3553, "end": 3561, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Historical Trends in Computational Linguistics", "sec_num": "4" }, { "text": "The shift toward statistical machine translation is well known, at least anecdotally. The shift in dialogue seems to be a move toward more applied, speech-oriented, or commercial dialogue systems and away from more theoretical models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topics That Have Declined", "sec_num": "4.2" }, { "text": "Finally, Figure 6 shows the history of several topics that peaked at intermediate points throughout the history of the field. We can see the peak of unification around 1990, of syntactic structure around 1985, of automata in 1985 and again in 1997, and of word sense disambiguation around 1998.", "cite_spans": [], "ref_spans": [ { "start": 9, "end": 17, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Topics That Have Declined", "sec_num": "4.2" }, { "text": "More Applied?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Is Computational Linguistics Becoming", "sec_num": "5" }, { "text": "We don't know whether our field is becoming more applied, or whether perhaps there is a trend towards new but unapplied theories. We therefore looked at trends over time for the following applications: Machine Translation, Spelling Correction, Dialogue Systems, Information Retrieval, Call Routing, Speech Recognition, and Biomedical applications. Figure 7 shows a clear trend toward an increase in applications over time. The figure also shows an interesting bump near 1990. Why was there such a sharp temporary increase in applications at that time? 
Figure 8 shows details for each application, making it clear that the bump is caused by a temporary spike in the Speech Recognition topic.", "cite_spans": [], "ref_spans": [ { "start": 143, "end": 151, "text": "Figure 7", "ref_id": null }, { "start": 347, "end": 355, "text": "Figure 8", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Is Computational Linguistics Becoming", "sec_num": "5" }, { "text": "In order to understand why we see this temporary spike, Figure 9 shows the unsmoothed values of the Speech Recognition topic prominence over time. It clearly shows a huge spike for the years 1989-1994, which correspond exactly to the DARPA Speech and Natural Language Workshop, held at different locations from 1989 to 1994. That workshop contained a significant amount of speech until its last year (1994), and then it was revived in 2001 as the Human Language Technology workshop with a much smaller emphasis on speech processing. It is clear from Figure 9 that there is still some speech research appearing in the Anthology after 1995, certainly more than the period before 1989, but it's equally clear that speech recognition is not an application that the ACL community has been successful at attracting.", "cite_spans": [], "ref_spans": [ { "start": 56, "end": 64, "text": "Figure 9", "ref_id": null }, { "start": 416, "end": 424, "text": "Figure 9", "ref_id": null } ], "eq_spans": [], "section": "Is Computational Linguistics Becoming", "sec_num": "5" }, { "text": "The computational linguistics community has two distinct conferences, COLING and ACL, with different histories, organizing bodies, and philosophies. 
Traditionally, COLING was larger, with parallel sessions and presumably a wide variety of topics, while ACL had single sessions and a narrower scope. In recent years, however, ACL has moved to parallel sessions, and the conferences are of similar size. Has the distinction in breadth of topics also been blurred? What are the differences and similarities in topics and trends between these two conferences? More recently, the EMNLP conference grew out of the Workshop on Very Large Corpora, sponsored by the Special Interest Group on Linguistic Data and corpus-based approaches to NLP (SIGDAT). EMNLP started as a much smaller and narrower conference, but more recently, while still smaller than both COLING and ACL, it has grown large enough to be considered alongside them. How does the breadth of its topics compare with the others?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Differences and Similarities Among COLING, ACL, and EMNLP", "sec_num": "6" }, { "text": "Our hypothesis, based on our intuitions as conference attendees, is that ACL is still narrower in scope than COLING, but has broadened considerably. Similarly, our hypothesis is that EMNLP has begun to broaden considerably as well, although not to the extent of the other two.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Differences and Similarities Among COLING, ACL, and EMNLP", "sec_num": "6" }, { "text": "In addition, we're interested in whether the topics of these conferences are converging or not. Are the probabilistic and machine learning trends that are dominant in ACL becoming dominant in COLING as well? Is EMNLP adopting some of the topics that are popular at COLING?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Differences and Similarities Among COLING, ACL, and EMNLP", "sec_num": "6" }, { "text": "To investigate both of these questions, we need a model of the topic distribution for each conference. 
We define the empirical distribution of a topic z at a conference c, denoted by p(z|c), as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Differences and Similarities Among COLING, ACL, and EMNLP", "sec_num": "6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\hat{p}(z|c) = \\sum_{d : c_d = c} \\hat{p}(z|d)\\,\\hat{p}(d|c) = \\frac{1}{C} \\sum_{d : c_d = c} \\hat{p}(z|d) = \\frac{1}{C} \\sum_{d : c_d = c} \\sum_{z_i \\in d} \\mathbb{I}(z_i = z)", "eq_num": "(2)" } ], "section": "Differences and Similarities Among COLING, ACL, and EMNLP", "sec_num": "6" }, { "text": "We also condition on the year for each conference, giving us p(z|y, c). We propose to measure the breadth of a conference by using what we call topic entropy: the conditional entropy of this conference topic distribution. Entropy measures the average amount of information expressed by each assignment to a random variable. If a conference has higher topic entropy, then it more evenly divides its probability mass across the generated topics. If it has lower topic entropy, it focuses much more narrowly on just a few topics. We therefore measured topic entropy. Figure 10 shows the conditional topic entropy of each conference over time. We removed from the ACL and COLING lines the years when ACL and COLING are colocated (1984, 1998, 2006) , and marked those colocated years as points separate from either plot. As expected, COLING has been historically the broadest of the three conferences, though perhaps slightly less so in recent years. ACL started with a fairly narrow focus, but became nearly as broad as COLING during the 1990s. However, in the past 8 years it has become more narrow again, with a steeper decline in breadth than COLING. 
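Topic entropy as described here is just the Shannon entropy of a conference/year topic distribution. A minimal sketch (the function name is ours; it reports entropy in bits):

```python
import math

def topic_entropy(p_z):
    """Topic entropy H(z|c, y): Shannon entropy (in bits) of a topic
    distribution p_z, given as a sequence of probabilities summing to 1.
    Zero-probability topics contribute nothing to the sum."""
    return -sum(p * math.log2(p) for p in p_z if p > 0)
```

A conference that spreads its mass evenly over 4 topics scores 2 bits, while one that publishes on a single topic scores 0, matching the "broad vs. narrow" reading in the text.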
EMNLP, true to its status as a \"Special Interest\" conference, began as a very narrowly focused conference, but now it seems to be catching up to at least ACL in terms of the breadth of its focus.", "cite_spans": [ { "start": 854, "end": 860, "text": "(1984,", "ref_id": null }, { "start": 861, "end": 866, "text": "1998,", "ref_id": null }, { "start": 867, "end": 872, "text": "2006)", "ref_id": null } ], "ref_spans": [ { "start": 556, "end": 565, "text": "Figure 10", "ref_id": "FIGREF0" }, { "start": 769, "end": 778, "text": "Figure 10", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Differences and Similarities Among COLING, ACL, and EMNLP", "sec_num": "6" }, { "text": "H(z|c, y) = -\\sum_{i=1}^{K} \\hat{p}(z_i|c, y) \\log \\hat{p}(z_i|c, y) \\quad (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Differences and Similarities Among COLING, ACL, and EMNLP", "sec_num": "6" }, { "text": "Since the three major conferences seem to be converging in terms of breadth, we investigated whether or not the topic distributions of the conferences were also converging. To do this, we plotted the Jensen-Shannon (JS) divergence between each pair of conferences. The Jensen-Shannon divergence is a symmetric measure of the similarity of a pair of distributions. The measure is 0 only for identical distributions, and it grows as the two distributions differ (it is bounded above by log 2). Formally, it is defined as the average of the KL divergence of each distribution to the average of the two distributions. Figure 11 shows the JS divergence between each pair of conferences over time. Note that EMNLP and COLING have historically met very infrequently in the same year, so those similarity scores are plotted as points and not smoothed. The trend across all three conferences is clear: each conference is not only increasing in breadth, but also in similarity. 
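The Jensen-Shannon divergence just described can be computed directly from two conference topic distributions. A minimal sketch (function names are ours; using base-2 logarithms, the value lies between 0 for identical distributions and 1 bit for disjoint ones):

```python
import math

def kl_divergence(p, q):
    """KL divergence D(p || q) in bits. Assumes q[i] > 0 wherever p[i] > 0,
    which always holds when q is the midpoint of p and another distribution."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    """Jensen-Shannon divergence: the average KL divergence of each
    distribution to the average (midpoint) of the two distributions."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)
```

Unlike raw KL divergence, this measure is symmetric and always finite, which is what makes it suitable for comparing pairs of conference topic distributions.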
In particular, EMNLP and ACL's differences, once significant, are nearly erased.", "cite_spans": [], "ref_spans": [ { "start": 662, "end": 671, "text": "Figure 11", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Differences and Similarities Among COLING, ACL, and EMNLP", "sec_num": "6" }, { "text": "D_{JS}(P || Q) = \\frac{1}{2} D_{KL}(P || M) + \\frac{1}{2} D_{KL}(Q || M), \\quad M = \\frac{1}{2}(P + Q)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Differences and Similarities Among COLING, ACL, and EMNLP", "sec_num": "6" }, { "text": "Our method discovers a number of trends in the field, such as the general increase in applications, the steady decline in semantics, and its possible reversal. We also showed a convergence over time in the topic coverage of ACL, COLING, and EMNLP, as well as an expansion of topic diversity. This growth and convergence of the three conferences, perhaps influenced by the need to increase recall (Church, 2005) , seems to be leading toward a tripartite realization of a single new \"latent\" conference.", "cite_spans": [ { "start": 388, "end": 402, "text": "(Church, 2005)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" } ], "back_matter": [ { "text": "Many thanks to Bryan Gibson and Dragomir Radev for providing us with the data behind the ACL Anthology Network. Also to Sharon Goldwater and the other members of the Stanford NLP Group, as well as project Mimir, for helpful advice. 
Finally, many thanks to the Office of the President, Stanford University, for partial funding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Association of Computational Linguists Anthology", "authors": [ { "first": "Steven", "middle": [], "last": "Bird", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven Bird. 2008. Association of Computational Linguists Anthology. http://www.aclweb.org/anthology-index/.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Dynamic topic models. ICML", "authors": [ { "first": "David", "middle": [], "last": "Blei", "suffix": "" }, { "first": "John", "middle": [ "D" ], "last": "Lafferty", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Blei and John D. Lafferty. 2006. Dynamic topic models. ICML.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Latent Dirichlet allocation", "authors": [ { "first": "David", "middle": [], "last": "Blei", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Ng", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Jordan", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "993--1022", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Blei, Andrew Ng, and Michael Jordan. 2003. Latent Dirichlet allocation. 
Journal of Machine Learning Research, 3:993-1022.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Hierarchical topic models and the nested Chinese restaurant process", "authors": [ { "first": "D", "middle": [], "last": "Blei", "suffix": "" }, { "first": "T", "middle": [], "last": "Griffiths", "suffix": "" }, { "first": "M", "middle": [], "last": "Jordan", "suffix": "" }, { "first": "J", "middle": [], "last": "Tenenbaum", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Blei, T. Griffiths, M. Jordan, and J. Tenenbaum. 2004. Hierarchical topic models and the nested Chinese restaurant process.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Reviewing the reviewers", "authors": [ { "first": "Kenneth", "middle": [], "last": "Church", "suffix": "" } ], "year": 2005, "venue": "Comput. Linguist", "volume": "31", "issue": "4", "pages": "575--578", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenneth Church. 2005. Reviewing the reviewers. Comput. Linguist., 31(4):575-578.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Unsupervised prediction of citation influences", "authors": [ { "first": "Laura", "middle": [], "last": "Dietz", "suffix": "" }, { "first": "Steffen", "middle": [], "last": "Bickel", "suffix": "" }, { "first": "Tobias", "middle": [], "last": "Scheffer", "suffix": "" } ], "year": 2007, "venue": "ICML", "volume": "", "issue": "", "pages": "233--240", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laura Dietz, Steffen Bickel, and Tobias Scheffer. 2007. Unsupervised prediction of citation influences. In ICML, pages 233-240. 
ACM.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Citation indexes to science: A new dimension in documentation through association of ideas", "authors": [ { "first": "Eugene", "middle": [], "last": "Garfield", "suffix": "" } ], "year": 1955, "venue": "Science", "volume": "122", "issue": "", "pages": "108--111", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eugene Garfield. 1955. Citation indexes to science: A new dimension in documentation through association of ideas. Science, 122:108-111.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Detecting research topics via the correlation between graphs and texts", "authors": [ { "first": "L", "middle": [], "last": "Tom", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Griffiths", "suffix": "" }, { "first": "", "middle": [], "last": "Steyvers ; Yookyung", "suffix": "" }, { "first": "Carl", "middle": [], "last": "Jo", "suffix": "" }, { "first": "C. Lee", "middle": [], "last": "Lagoze", "suffix": "" }, { "first": "", "middle": [], "last": "Giles", "suffix": "" } ], "year": 2004, "venue": "KDD", "volume": "101", "issue": "", "pages": "370--379", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom L. Griffiths and Mark Steyvers. 2004. Finding sci- entific topics. PNAS, 101 Suppl 1:5228-5235, April. Yookyung Jo, Carl Lagoze, and C. Lee Giles. 2007. Detecting research topics via the correlation between graphs and texts. In KDD, pages 370-379, New York, NY, USA. ACM.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Citation analysis, centrality, and the ACL anthology", "authors": [ { "first": "T", "middle": [], "last": "Mark", "suffix": "" }, { "first": "Dragomir", "middle": [ "R" ], "last": "Joseph", "suffix": "" }, { "first": "", "middle": [], "last": "Radev", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark T. Joseph and Dragomir R. Radev. 2007. 
Citation analysis, centrality, and the ACL anthology. Techni- cal Report CSE-TR-535-07, University of Michigan. Department of Electrical Engineering and Computer Science.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The Structure of Scientific Revolutions", "authors": [ { "first": "Thomas", "middle": [ "S" ], "last": "Kuhn", "suffix": "" } ], "year": 1962, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas S. Kuhn. 1962. The Structure of Scientific Rev- olutions. University Of Chicago Press.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Pachinko allocation: DAG-structured mixture models of topic correlations", "authors": [ { "first": "Wei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2006, "venue": "ICML", "volume": "", "issue": "", "pages": "577--584", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Li and Andrew McCallum. 2006. Pachinko alloca- tion: DAG-structured mixture models of topic correla- tions. In ICML, pages 577-584, New York, NY, USA. ACM.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Bibliometric impact measures leveraging topic analysis", "authors": [ { "first": "Gideon", "middle": [ "S" ], "last": "Mann", "suffix": "" }, { "first": "David", "middle": [], "last": "Mimno", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2006, "venue": "JCDL '06: Proceedings of the 6th", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gideon S. Mann, David Mimno, and Andrew McCal- lum. 2006. Bibliometric impact measures leveraging topic analysis. 
In JCDL '06: Proceedings of the 6th", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "ACM/IEEE-CS joint conference on Digital libraries", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "65--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "ACM/IEEE-CS joint conference on Digital libraries, pages 65-74, New York, NY, USA. ACM.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Topics over time: a non-Markov continuous-time model of topical trends", "authors": [ { "first": "Xuerui", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2006, "venue": "KDD", "volume": "", "issue": "", "pages": "424--433", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xuerui Wang and Andrew McCallum. 2006. Topics over time: a non-Markov continuous-time model of topical trends. In KDD, pages 424-433, New York, NY, USA. ACM.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Topics in the ACL Anthology that show a strong recent increase in strength.", "type_str": "figure", "uris": null, "num": null }, "FIGREF1": { "text": "Topics in the ACL Anthology that show a strong decline from 1978 to 2006.", "type_str": "figure", "uris": null, "num": null }, "FIGREF2": { "text": "Semantics over time", "type_str": "figure", "uris": null, "num": null }, "FIGREF3": { "text": "Figure 4: Translation over time", "type_str": "figure", "uris": null, "num": null }, "FIGREF4": { "text": "Six applied topics over time looked at trends over time for the following applications: Machine Translation, Spelling Correction, Dialogue Systems, Information Retrieval, Call Routing, Speech Recognition, and Biomedical applications.", "type_str": "figure", "uris": null, "num": null }, "FIGREF5": { "text": "clearly shows a huge spike for the years 1989-1994. 
Speech recognition over time. These years correspond exactly to the DARPA Speech and Natural Language Workshop, held at different locations from", "type_str": "figure", "uris": null, "num": null }, "FIGREF7": { "text": "shows the JS divergence between each pair of conferences over time. Note that EMNLP", "type_str": "figure", "uris": null, "num": null }, "TABREF1": { "html": null, "num": null, "content": "
ing (ACL, 1996)
H89-2013 Church, Kenneth Ward, Gale, William A. Enhanced Good-Turing And CatCal: Two New Methods For Estimating Probabilities Of English Bigrams (Workshop On Speech And Natural Language, 1989)
P02-1023 Gao, Jianfeng, Zhang, Min. Improving Language Model Size Reduction Using Better Pruning Criteria (ACL, 2002)
P94-1038 Dagan, Ido, Pereira, Fernando C. N. Similarity-Based Estimation Of Word Cooccurrence Probabilities (ACL, 1994)
Some of the papers with the highest weights for the classification/tagging class include:
W00-0713 Van Den Bosch, Antal. Using Induced Rules As Complex Features In Memory-Based Language Learning (CoNLL, 2000)
W01-0709 Estabrooks, Andrew, Japkowicz, Nathalie. A Mixture-Of-Experts Framework For Text Classification (Workshop On Computational Natural Language Learning CoNLL, 2001)
A00-2035 Mikheev, Andrei. Tagging Sentence Boundaries (ANLP-NAACL, 2000)
H92-1022
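Lists like the one above can be read directly off an LDA document-topic matrix: each paper gets a distribution over topics, and the papers with the largest weight on a given topic are reported. A minimal sketch of that ranking step, with invented toy weights (the Anthology ids below are reused only as dictionary keys; the numbers are not the paper's actual estimates, and `top_papers_for_topic` is a hypothetical helper, not code from the paper):

```python
def top_papers_for_topic(doc_topic, topic, k=3):
    """Return the k paper ids with the largest weight on `topic`.

    doc_topic: dict mapping paper id -> list of per-topic probabilities
    (one row of LDA's theta per paper, summing to 1).
    """
    ranked = sorted(doc_topic, key=lambda d: doc_topic[d][topic], reverse=True)
    return ranked[:k]

# Toy document-topic distributions over 3 topics (invented numbers).
theta = {
    "N04-1039": [0.70, 0.20, 0.10],
    "W97-0309": [0.55, 0.30, 0.15],
    "W00-0713": [0.10, 0.80, 0.10],
    "A00-2035": [0.15, 0.75, 0.10],
}

# Papers ranked by their weight on topic 0.
print(top_papers_for_topic(theta, topic=0, k=2))  # → ['N04-1039', 'W97-0309']
```

With real LDA output, `theta` would come from the inferred posterior over topics for each document rather than hand-written numbers.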
", "text": "Probabilistic Models: model word probability set data number algorithm language corpus method figure probabilities table test statistical distribution function al values performance Classification/Tagging: features data corpus set feature table word tag al test accuracy pos classification performance tags tagging text task information class Some of the papers with the highest weights for the probabilistic models class include: N04-1039 Goodman, Joshua. Exponential Priors For Maximum Entropy Models (HLT-NAACL, 2004) W97-0309 Saul, Lawrence, Pereira, Fernando C. N. Aggregate And Mixed-Order Markov Models For Statistical Language Processing (EMNLP, 1997) P96-1041 Chen, Stanley F., Goodman, Joshua. An Empirical Study Of Smoothing Techniques For Language Model-", "type_str": "table" }, "TABREF2": { "html": null, "num": null, "content": "
Anaphora Resolution: resolution anaphora pronoun discourse antecedent pronouns coreference reference definite algorithm
Automata: string state set finite context rule algorithm strings language symbol
Biomedical: medical protein gene biomedical wkh abstracts medline patient clinical biological
Call Routing: call caller routing calls destination vietnamese routed router destinations gorin
Categorial Grammar: proof formula graph logic calculus axioms axiom theorem proofs lambek
Centering*: centering cb discourse cf utterance center utterances theory coherence entities local
Classical MT: japanese method case sentence analysis english dictionary figure japan word
Classification/Tagging: features data corpus set feature table word tag al test
Comp. Phonology: vowel phonological syllable phoneme stress phonetic phonology pronunciation vowels phonemes
Comp. Semantics*: semantic logical semantics john sentence interpretation scope logic form set
Dialogue Systems: user dialogue system speech information task spoken human utterance language
Discourse Relations: discourse text structure relations rhetorical relation units coherence texts rst
Discourse Segment.: segment segmentation segments chain chains boundaries boundary seg cohesion lexical
Events/Temporal: event temporal time events tense state aspect reference relations relation
French Function
WSD*: word senses wordnet disambiguation lexical semantic context similarity dictionary
Word Segmentation: chinese word character segmentation corpus dictionary korean language table system
WordNet*: synset wordnet synsets hypernym ili wordnets hypernyms eurowordnet hyponym ewn wn
C88-1071 Kuhn, Roland. Speech Recognition and the Frequency of Recently Used Words (COLING)
J88-1003 DeRose, Steven. Grammatical Category Disambiguation by Statistical Optimization. (CL Journal)
C88-2133 Su, Keh-Yi, and Chang, Jing-Shin. Semantic and Syntactic Aspects of Score Function. (COLING)
A88-1019 Church, Kenneth Ward. A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text. (ANLP)
C88-2134 Sukhotin, B.V. Optimization Algorithms of Deciphering as the Elements of a Linguistic Theory. (COLING)
P88-1013 Haigh, Robin, Sampson, Geoffrey, and Atwell, Eric. Project APRIL: a progress report. (ACL)
A88-1005 Boggess, Lois. Two Simple Prediction Algorithms to Facilitate Text Production. (ANLP)
C88-1016 Peter F. Brown, et al. A Statistical Approach to Machine Translation. (COLING)
A88-1028 Oshika, Beatrice, et al. Computational Techniques for Improved Name Search. (ANLP)
C88-1020 Campbell, W.N. Speech-rate Variation and the Prediction of Duration. (COLING)
What do these early papers tell us about how
", "text": "Paraphrase/RTE paraphrases paraphrase entailment paraphrasing textual para rte pascal entailed dagan Parsing parsing grammar parser parse rule sentence input left grammars np Plan-Based Dialogue plan discourse speaker action model goal act utterance user information Probabilistic Models model word probability set data number algorithm language corpus method Prosody prosodic speech pitch boundary prosody phrase boundaries accent repairs intonation Semantic Roles* semantic verb frame argument verbs role roles predicate arguments Yale School Semantics knowledge system semantic language concept representation information network concepts base Sentiment subjective opinion sentiment negative polarity positive wiebe reviews sentence opinions Speech Recognition speech recognition word system language data speaker error test spoken Spell Correction errors error correction spelling ocr correct corrections checker basque corrected detection Statistical MT english word alignment language source target sentence machine bilingual mt Statistical Parsing dependency parsing treebank parser tree parse head model al np Summarization sentence text evaluation document topic summary summarization human summaries score Syntactic Structure verb noun syntactic sentence phrase np subject structure case clause TAG Grammars* tree node trees nodes derivation tag root figure adjoining grammar Unification feature structure grammar lexical constraints unification constraint type structures rule", "type_str": "table" }, "TABREF3": { "html": null, "num": null, "content": "
[Figure: topic strength over time, 1980-2005 (y-axis 0 to 0.2), for Computational Semantics, Conceptual Semantics, and Plan-Based Dialogue and Discourse]
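Trend curves like the one above plot, for each year, a topic's average weight across the documents published that year. A hedged sketch of that aggregation, with invented toy distributions (the function name and all numbers are illustrative assumptions, not the paper's data):

```python
from collections import defaultdict

def topic_strength_by_year(docs, topic):
    """Mean per-document weight of `topic` for each publication year.

    docs: list of (year, theta) pairs, where theta is that document's
    distribution over topics (one row of LDA's theta matrix).
    """
    totals, counts = defaultdict(float), defaultdict(int)
    for year, theta in docs:
        totals[year] += theta[topic]
        counts[year] += 1
    return {year: totals[year] / counts[year] for year in totals}

# Toy corpus over 2 topics: topic 0 strong in 1980, weak in 2005.
docs = [
    (1980, [0.60, 0.40]),
    (1980, [0.40, 0.60]),
    (2005, [0.10, 0.90]),
    (2005, [0.20, 0.80]),
]

strengths = topic_strength_by_year(docs, topic=0)
print({y: round(s, 3) for y, s in strengths.items()})  # → {1980: 0.5, 2005: 0.15}
```

Plotting these yearly means for each topic yields curves of the kind shown in the figures, with rising and declining topics visible directly in the slopes.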
", "text": "Top 10 words for 43 of the topics. Starred topics are hand-seeded.", "type_str": "table" } } } }