{ "paper_id": "S17-1005", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:30:01.549695Z" }, "title": "Domain-Specific New Words Detection in Chinese", "authors": [ { "first": "Ao", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "State Key Lab on Intelligent Technology and Systems", "institution": "Tsinghua University", "location": { "country": "China" } }, "email": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "", "affiliation": { "laboratory": "State Key Lab on Intelligent Technology and Systems", "institution": "Tsinghua University", "location": { "country": "China" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "With the explosive growth of Internet, more and more domain-specific environments appear, such as forums, blogs, MOOCs and etc. Domain-specific words appear in these areas and always play a critical role in the domain-specific NLP tasks. This paper aims at extracting Chinese domain-specific new words automatically. The extraction of domain-specific new words has two parts including both new words in this domain and the especially important words. In this work, we propose a joint statistical model to perform these two works simultaneously. Compared to traditional new words detection models, our model doesn't need handcraft features which are labor intensive. Experimental results demonstrate that our joint model achieves a better performance compared with the state-of-the-art methods.", "pdf_parse": { "paper_id": "S17-1005", "_pdf_hash": "", "abstract": [ { "text": "With the explosive growth of Internet, more and more domain-specific environments appear, such as forums, blogs, MOOCs and etc. Domain-specific words appear in these areas and always play a critical role in the domain-specific NLP tasks. This paper aims at extracting Chinese domain-specific new words automatically. The extraction of domain-specific new words has two parts including both new words in this domain and the especially important words. In this work, we propose a joint statistical model to perform these two works simultaneously. Compared to traditional new words detection models, our model doesn't need handcraft features which are labor intensive. Experimental results demonstrate that our joint model achieves a better performance compared with the state-of-the-art methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Accompanying with the development of Internet, many new specific domains appear, such as forums, blogs, Massive Open Online Courses (MOOCs) and etc. There are always a group of important words in these domains, which are known as domain-specific words. Domainspecific words include two types as shown in Table 1. The first ones are rare and unambiguous words which will seldom appear in other domains such as \"\u6808\u9876\"(stack top) and \"\u4e8c\u53c9\u6811\"(binary tree). These words may cause word segmentation problems. For example, if we do not recognize \"\u6808\u9876\"(stack top) as a word, the segmentation \"\u6808 \u9876 \u8fd0\u7b97\u7b26 \u662f \u4e58\u53f7\"(the operator at stack top is multiplication sign) will be like \"\u6808 \u9876\u8fd0 \u7b97\u7b26 \u662f \u4e58\u53f7\". 
In this case, \"\u6808\u9876\" means \"stack top\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u6808\u9876 stack top 1 \u4e8c\u53c9\u6811 binary tree 1 \u590d\u6742\u5ea6 complexity 2 \u904d\u5386 iterate 2 Table 1 : Examples of domain-specific word in data structure domain and \"\u8fd0\u7b97\u7b26\" means \"operator\", but in the segmentation result, \"\u9876\u8fd0\" is segmented into a word in mistake and will bring lots of problems to the further applications. The other type is common and ambiguous words which have specific new meanings in this domain, such as \"\u590d\u6742\u5ea6\"(complexity) and \"\u904d \u5386\"(iterate). These words often play important roles in domain-specific tasks. For example\uff0cin MOOCs which are typical domain-specific environments, there is an Automated Navigation Suggestion(ANS) (Zhang et al., 2017) task which suggests a time point for users when they want to review the front contents of the video. With the help of the recognition of this type of words, we can easily give higher weights to those domainspecific contents.", "cite_spans": [ { "start": 616, "end": 636, "text": "(Zhang et al., 2017)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 63, "end": 70, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Domain words Translation Type", "sec_num": null }, { "text": "After extracting these two type of words, we can also use them for creating ontologies, term lists, and in the Semantic Web Area for finding novel entities (F\u00e4rber et al., 2016) . Besides, in MOOCs area it will also benefit Certification Prediction(CP)(Coleman et al., 2015) (which predicts whether a user will get a course certification or not), Course Recommendation(CR)(Aher and Lobo, 2013) and so on by providing textual knowledge.", "cite_spans": [ { "start": 156, "end": 177, "text": "(F\u00e4rber et al., 2016)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Domain words Translation Type", "sec_num": null }, { "text": "Researchers have made great efforts to extract domain-specific words. Traditional new word detection methods usually employ statistical methods according to the pattern that new words ap-pear constantly. Such methods like Pointwise Mutual Information (Church and Hanks, 1990) , Enhanced Mutual Information , and Multi-word Expression Distance (Bu et al., 2010) . These methods focus on extracting the first type of domain-specific words and conduct postprocessing to discover the second type of words. Deng et al. proposed a statistical model Top-Words (Deng et al., 2016) to extract the first type of words, it can imply some of these statistical measures into the model itself. Besides, it designs a feature called relative frequency to extract the second type of domain-specific words. TopWords is based on a Word Dictionary Model(WDM) (Ge et al., 1999; Chang and Su, 1997; Cohen et al., 2007) in which a sentence is sampled from a word dictionary. 
To extract the second type of words, it needs to train its model on a common background corpus which is expensive and time-consuming.", "cite_spans": [ { "start": 251, "end": 275, "text": "(Church and Hanks, 1990)", "ref_id": "BIBREF5" }, { "start": 343, "end": 360, "text": "(Bu et al., 2010)", "ref_id": "BIBREF2" }, { "start": 553, "end": 572, "text": "(Deng et al., 2016)", "ref_id": "BIBREF9" }, { "start": 839, "end": 856, "text": "(Ge et al., 1999;", "ref_id": "BIBREF11" }, { "start": 857, "end": 876, "text": "Chang and Su, 1997;", "ref_id": "BIBREF3" }, { "start": 877, "end": 896, "text": "Cohen et al., 2007)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Domain words Translation Type", "sec_num": null }, { "text": "To address these issues, we propose a Domain TopWords model by assuming that a sentence is sampled from two word dictionaries, one for common words and the other for domain-specific words. Besides, we propose a flexible domain score function to take the external information into consideration, such as word frequencies in common background corpus. Therefore, the proposed model can extract these two types of words jointly. The main contributions of this paper are summarized as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain words Translation Type", "sec_num": null }, { "text": "\u2022 We propose a novel Domain TopWords model that can extract both two types of domain-specific words jointly. Experimental results demonstrate the effectiveness of our model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain words Translation Type", "sec_num": null }, { "text": "\u2022 Our model achieves a comparable performance even with much less information comparing to the origin TopWords model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain words Translation Type", "sec_num": null }, { "text": "The rest of the paper is structured as follows: the related work will be introduced in section 2. Our model will be introduced in section 3, including model definition and the algorithm details. Then we will present the experiments in section 4. Finally, the work is summarized in section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain words Translation Type", "sec_num": null }, { "text": "New word detection as a superset of new domainspecific word detection has been investigated for a long time. New word detection methods mainly contain two directions: the first ones conduct the word segmentation and new word detection jointly. Most of them are supervised models, typical models include conditional random fields proposed by Peng et al. (2004) . These supervised models cannot be used in domain-specific words detection directly, due to the lack of annotated domain-specific data. In addition, there are also some unsupervised models, such as Top-Words proposed by Deng et al. (2016) . However, it needs time-consuming post-processing to extract the second type of domain-specific words.", "cite_spans": [ { "start": 341, "end": 359, "text": "Peng et al. (2004)", "ref_id": "BIBREF15" }, { "start": 581, "end": 599, "text": "Deng et al. (2016)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Another type treats new word detection as a separate task. This line of methods can be mainly divided into three genres. 
The first genre is usually preceded by part-of-speech tagging, and treats the new word detection task as a classification problem or directly extracts new words by semantic rules. For example, Argamon et al. (1998) segments the POS sequence of a multi-word into small POS tiles, and then counts tile frequency in both new words and non-new words on training sets, then uses these counts to extract new word. Chen and Ma (2002) uses statistical rules to extract new Chinese word. GuoDong 2005proposes a discriminative Markov Model to detect new words by chunking one or more separated words. However, these supervised models usually need expert knowledge to design linguistic features and lots of annotated data which are expensive and unavailable in the new arising domains.", "cite_spans": [ { "start": 314, "end": 335, "text": "Argamon et al. (1998)", "ref_id": "BIBREF1" }, { "start": 529, "end": 547, "text": "Chen and Ma (2002)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "The second genre employs user behavior data to detect new words. User typing behavior in Sogou Chinese Pinyin input method which is the most popular Chinese input method is used to detect new words by Zheng et al. (2009) . Zhang et al. (2010) proposed to utilize user query log to extract new words. However, these works are usually limited by the availability of the commercial resources.", "cite_spans": [ { "start": 201, "end": 220, "text": "Zheng et al. (2009)", "ref_id": "BIBREF19" }, { "start": 223, "end": 242, "text": "Zhang et al. (2010)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "The third genre employs statistical features and has been extensively studied. In this type of works, new word detection is usually considered as multi-word expression extraction. The measurements of multi-word association are crucial in this type of work. Traditional measurements include: Pointwise Mutual Information (PMI) (Church and Hanks, 1990) and Symmetrical Conditional Probability (SCP) (da Silva and Lopes, 1999) . Both these two measures are proposed to measure bi-gram association. Among all 84 bi-gram association measurements, PMI has been reported to be the best in Czech data (Pecina, 2005) . To measure arbitrary of n-grams, some works separate n-grams into two parts and adopt the existing bi-gram based measurements directly. Some other n-gram based measures are also proposed, such as Enhanced Mutual Information (EMI) . And Multi-word Expression Distance (MED) was proposed by Bu et al. (2010) which based on the information distance theory. The MED measure was reported superior performance to EMI, SCP and other measures. And a pattern based framework which integrates these statistical features together to detect new words was proposed by Huang et al. (2014) .", "cite_spans": [ { "start": 326, "end": 350, "text": "(Church and Hanks, 1990)", "ref_id": "BIBREF5" }, { "start": 401, "end": 423, "text": "Silva and Lopes, 1999)", "ref_id": "BIBREF8" }, { "start": 593, "end": 607, "text": "(Pecina, 2005)", "ref_id": "BIBREF14" }, { "start": 899, "end": 915, "text": "Bu et al. (2010)", "ref_id": "BIBREF2" }, { "start": 1165, "end": 1184, "text": "Huang et al. (2014)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "In this section, we propose a Domain Top-Words model. 
We introduce the Word Dictionary Model (Ge et al., 1999; Chang and Su, 1997; Cohen et al., 2007) and TopWords model proposed by Deng et al. (2016) in subsection 3.1 and 3.2. Then we introduce our Domain TopWords model in subsection 3.3, 3.4 and 3.5. At last, we introduce the modified EM algorithm for our model in 3.6.", "cite_spans": [ { "start": 93, "end": 110, "text": "(Ge et al., 1999;", "ref_id": "BIBREF11" }, { "start": 111, "end": 130, "text": "Chang and Su, 1997;", "ref_id": "BIBREF3" }, { "start": 131, "end": 150, "text": "Cohen et al., 2007)", "ref_id": "BIBREF6" }, { "start": 182, "end": 200, "text": "Deng et al. (2016)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "Word Dictionary Model (WDM) is a unigram language model. It treats a sentence as a sequence of basic units, i.e., words, phrases, idioms, which in this paper are broadly defined as \"words\". Let D = {w 1 , w 2 , . . . , w N } be the vocabulary (dictionary) which contains all interested words, then the sentence can be represented as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Dictionary Model", "sec_num": "3.1" }, { "text": "S i = w i i w i 2 . . . w i j .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Dictionary Model", "sec_num": "3.1" }, { "text": "And each word is a sequence of characters. Let A = {a 1 , . . . , a p } be the basic characters of the interested language which in English contain only 26 letters but may include thousands of distinct Chinese characters. Then the words can be represented as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Dictionary Model", "sec_num": "3.1" }, { "text": "w i = a i 1 a i 2 . . . a i j .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Dictionary Model", "sec_num": "3.1" }, { "text": "WDM treats each sentence S as a sampling of words from D with the sampling probability \u03b8 i for word w i . Let \u03b8 = (\u03b8 1 , \u03b8 2 , ...\u03b8 N ) be the sampling probability of the whole D, then the probability of sampling a specific sentence with length K is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Dictionary Model", "sec_num": "3.1" }, { "text": "P (S|D, \u03b8) = K k=1 \u03b8 k", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Dictionary Model", "sec_num": "3.1" }, { "text": "(1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Dictionary Model", "sec_num": "3.1" }, { "text": "TopWords algorithm based on WDM is introduced in Deng et al. 2016, and is used as an unsupervised Chinese text segmentation and new word discovery method. In English texts, words are split by spacing, but in Chinese, there is no spacing between words in a sentence. For unsegemented Chinese text T , let C T denote the set of all possible segmentations under the dictionary D. 
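To make this concrete, here is a minimal sketch (a toy dictionary with made-up probabilities, not the authors' code) that enumerates the segmentation set C T of a character string and scores each segmentation with equation (1):

```python
from functools import reduce

# Toy word dictionary D with sampling probabilities theta (illustrative values only).
theta = {"ab": 0.4, "a": 0.2, "b": 0.2, "c": 0.2}

def segmentations(text, theta):
    """Enumerate C T: every way to split text into words that all belong to the dictionary."""
    if not text:
        return [[]]
    results = []
    for i in range(1, len(text) + 1):
        word = text[:i]
        if word in theta:
            for rest in segmentations(text[i:], theta):
                results.append([word] + rest)
    return results

def sentence_prob(words, theta):
    # Equation (1): the probability of one segmentation is the product of its word probabilities.
    return reduce(lambda p, w: p * theta[w], words, 1.0)

for seg in segmentations("abc", theta):
    print(seg, sentence_prob(seg, theta))
# ['a', 'b', 'c'] with probability 0.008 and ['ab', 'c'] with probability 0.08 (up to rounding).
```

Exhaustive enumeration is exponential in sentence length; it is shown here only to make the definition of C T explicit.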
Then, under WDM, we have the probability of a Chinese text T :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TopWords", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (T |D, \u03b8) = S i \u2208C T P (S i |D, \u03b8)", "eq_num": "(2)" } ], "section": "TopWords", "sec_num": "3.2" }, { "text": "Then the likelihood of the parameter \u03b8 under the given corpus G is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TopWords", "sec_num": "3.2" }, { "text": "L(\u03b8|D, G) = P (G|D, \u03b8) = T j \u2208G P (T j |D, \u03b8) = n j=1 S i \u2208C T j P (S i |D, \u03b8) (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TopWords", "sec_num": "3.2" }, { "text": "where \u03b8 i k is the sampling probability of k-th word w i k in segmentation S i , n is the number of sentences in the corpus G. Then the value of \u03b8 can be estimated by the maximum-likelihood estimate(MLE) as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TopWords", "sec_num": "3.2" }, { "text": "\u03b8 * = arg max \u03b8 n j=1 S i \u2208C T j P (S i |D, \u03b8) (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TopWords", "sec_num": "3.2" }, { "text": "The MLE value of \u03b8 can be computed by the EM algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TopWords", "sec_num": "3.2" }, { "text": "After extracting the first type of domain-specific words, the author proposes a measure called relative frequency to extract the second type of domain-specific words. The relative frequency \u03c6 k i of word w i in domain k can be estimated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TopWords", "sec_num": "3.2" }, { "text": "\u03c6 k i = \u03b8 k i K j=1 \u03b8 j i (5) \u03b8 k", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TopWords", "sec_num": "3.2" }, { "text": "i is estimated probability of word w i from the kth domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TopWords", "sec_num": "3.2" }, { "text": "To add the ability to discover domain-specific words, we first use a Domain Word Dictionary Model (D-WDM) instead of the origin WDM model. D-WDM regards a sentence as a sampling from two word dictionaries, one is the common background word dictionary D c and the other is the domain word dictionary D d . So a word w i in a sentence S is sampling first with probability \u03d5 to determine which dictionary it is from, and then with probability \u03b8 \u03b9 i from D d or D c . 
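As a minimal illustration of this two-dictionary sampling (the dictionaries and probabilities below are toy assumptions), the probability of a word is a mixture of the common and the domain dictionaries, which the next equation applies to a whole sentence:

```python
# Toy common and domain dictionaries over the same vocabulary (illustrative values only).
theta_c = {"\u6211\u4eec": 0.6, "\u904d\u5386": 0.1, "\u6808\u9876": 0.3}  # common dictionary D_c
theta_d = {"\u6211\u4eec": 0.1, "\u904d\u5386": 0.4, "\u6808\u9876": 0.5}  # domain dictionary D_d
phi = 0.9  # probability that a word is drawn from the common dictionary

def word_prob(w, phi, theta_c, theta_d):
    # A word is drawn from D_c with probability phi and from D_d with probability 1 - phi.
    return phi * theta_c.get(w, 0.0) + (1 - phi) * theta_d.get(w, 0.0)

def sentence_prob(words, phi, theta_c, theta_d):
    p = 1.0
    for w in words:
        p *= word_prob(w, phi, theta_c, theta_d)
    return p

print(sentence_prob(["\u6211\u4eec", "\u904d\u5386"], phi, theta_c, theta_d))  # 0.55 * 0.13 = 0.0715
```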
So the probability of sampling in D-WDM a specific sentence with length K is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain Word Dictionary Model", "sec_num": "3.3" }, { "text": "P (S i |D, \u03b8, \u03d5) = K i k=1 (\u03d5\u03b8 c i k + (1 \u2212 \u03d5)\u03b8 d i k ) (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain Word Dictionary Model", "sec_num": "3.3" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain Word Dictionary Model", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b8 = (\u03b8 c , \u03b8 d )", "eq_num": "(7)" } ], "section": "Domain Word Dictionary Model", "sec_num": "3.3" }, { "text": "3.4 Domain TopWords", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain Word Dictionary Model", "sec_num": "3.3" }, { "text": "The main difference between Domain TopWords(D-TopWords) and TopWords is that D-TopWords is under the D-WDM model. So there are two word dictionaries, one for common words and the other for the domain-specific words. So the likelihood of \u03b8 with the given corpus G under the D-WDM model is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain Word Dictionary Model", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L(\u03b8|D, G, \u03d5) = T j \u2208G S i \u2208C T j P (S i |D, \u03b8, \u03d5) = n j=1 S i \u2208C T j K i k=1 (\u03d5\u03b8 c i k + (1 \u2212 \u03d5)\u03b8 d i k )", "eq_num": "(8)" } ], "section": "Domain Word Dictionary Model", "sec_num": "3.3" }, { "text": "where the parameter \u03d5 need to be fixed. If the \u03d5 is adapted, the model will converge at a point which maximize the probability difference of the words between the initial \u03b8 d and \u03b8 c . However, in the D-WDM model, there is no difference between the domain dictionary D d and the common dictionary D c except the parameter \u03d5. So if we use pure EM algorithm to estimate the parameter \u03b8 c and \u03b8 d , it is obvious that the algorithm cannot determine whether a word should be sampled from D c or D d . And even though the model has the ability to distinguish the two kinds of words, it can not find out which words are domain-specific words either if we only use the domain-specific corpus. So we must add the common background corpus knowledge into our model and denote this function as domain score function \u03c3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain Word Dictionary Model", "sec_num": "3.3" }, { "text": "Domain TopWords model uses an optimized probability function of a segmentation which can take the background knowledge into consideration. 
The probability of a segmentation S i of a sentence as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain Word Dictionary Model", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (S i |T ; D, \u03b8, \u03d5, \u03c3) = Q(S i |T ; D, \u03b8, \u03d5) S i \u2208C T Q(S i |T ; D, \u03b8, \u03d5) (9) Q(S i |T ; D, \u03b8, \u03d5, \u03c3) = K i k=1 (\u03d5\u03b8 c i k + (1 \u2212 \u03d5)\u03b8 d i k \u03c3 i k )", "eq_num": "(10)" } ], "section": "Domain Word Dictionary Model", "sec_num": "3.3" }, { "text": "is the score of the sampled segmentation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain Word Dictionary Model", "sec_num": "3.3" }, { "text": "S i of T. P (S i |T ; D, \u03b8, \u03d5, ) is the nomorlized version of Q(S i |T ; D, \u03b8, \u03d5, \u03c3). \u03c3 i k is the domain score of the word w i k .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain Word Dictionary Model", "sec_num": "3.3" }, { "text": "As mentioned above, we need a domain score function \u03c3 to tell our model how to distinguish whether a word is a common word or a domainspecific word. This function has several choices, i.e., the frequency of the word in a large background corpus, matches of specific templates, and so on. And we find that statistical features, like left(right) entropy and mutual information, are useless as the background knowledge function because the D-TopWords model itself has taken this part of features into consideration. We introduce some choices of the \u03c3 function and evaluate the effects in our experiment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selection of domain score \u03c3", "sec_num": "3.5" }, { "text": "Constant Score The first choice of \u03c3 function is a constant function which returns a constant number for all words. This means there is no encouragement for any word so that we will get a \u03b8 d which has almost the same word distribution as \u03b8 c . We denote D-TopWords with constant \u03c3 function as D-TopWords+Const.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selection of domain score \u03c3", "sec_num": "3.5" }, { "text": "Background Frequency Score It is a natural idea that uses the reciprocal of the frequency of the word in a common background corpus. This \u03c3 function encourages words with low background frequency to be sampled from \u03b8 d . The detailed function is as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selection of domain score \u03c3", "sec_num": "3.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03c3(w) = P F re(w)", "eq_num": "(11)" } ], "section": "Selection of domain score \u03c3", "sec_num": "3.5" }, { "text": "where P is a constant. The parameter P need to be tuned according to the size of the domain corpus, in our experiments we choose 900 to get a domain score in the range of 1-10 for domain words. And F re(w) is the frequency of word w in background corpus. We denote the result as D-TopWords+Fre. RF Score We use the reciprocal of word probability in the dictionary of the origin TopWords method estimated with common background corpus as our domain score. We denote this function as RF function respect to the relative frequency in TopWords. 
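Before giving the RF formula, the following small sketch shows the constant and background-frequency scores introduced above (the background counts are toy assumptions; real counts come from the background corpus); the RF score defined next plugs into the same interface:

```python
# Illustrative background-corpus frequencies (assumed values for three words from this paper).
background_freq = {"\u6211\u4eec": 120000, "\u904d\u5386": 300, "\u6808\u9876": 150}

def sigma_const(word, value=1.0):
    # D-TopWords+Const: every word receives the same domain score.
    return value

def sigma_background_freq(word, P=900):
    # D-TopWords+Fre, equation (11): sigma(w) = P / Fre(w), so words that are rare in the
    # background corpus receive a larger domain score. Unseen words fall back to frequency 1
    # (a simple smoothing assumption made for this sketch).
    return P / background_freq.get(word, 1)

for w in background_freq:
    print(w, sigma_const(w), round(sigma_background_freq(w), 4))
```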
The detailed function is as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selection of domain score \u03c3", "sec_num": "3.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03c3(w) = 1 W P (w) \u00d7 10 5", "eq_num": "(12)" } ], "section": "Selection of domain score \u03c3", "sec_num": "3.5" }, { "text": "where the W P (w) is the word probability of word w in the dictionary of origin TopWords model. We denote the result as D-TopWords+RF.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selection of domain score \u03c3", "sec_num": "3.5" }, { "text": "The parameter \u03b8 will be estimated by the EM algorithm as we will show below. In the beginning, we add all the words in vocabulary to \u03b8 and default values will be set for both \u03b8 c and \u03b8 d before EM steps. We employ a \"top-down\" strategy to discover words, and this is the reason why this method is called TopWords. It adds all words into its dictionary at first and then drops the words whose probability is close to zero (e.g., < 10 \u22128 , and we use this value in our experiments). A good choice of the default value for \u03b8s is the normalized frequency vector of the words in the corpus. Next, we will show the EM algorithm for our D-TopWords model. Let \u03b8 (r) be the estimated value of \u03b8 at the r-th iteration. Then the E-step and the M-step can be computed as follows. The E-step computes the Q-function:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EM estimation of \u03b8", "sec_num": "3.6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Q(\u03b8|\u03b8 (r) ) =E S|G,\u03b8 (r) [logL(\u03b8; G, S)] = n j=1 S\u2208C T j P (S|T j ; D, \u03b8 (r) ) logP (S|D, \u03b8)", "eq_num": "(13)" } ], "section": "EM estimation of \u03b8", "sec_num": "3.6" }, { "text": "and the M-step maximizes Q(\u03b8|\u03b8(r)) so as to update \u03b8 d and \u03b8 c as follows", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EM estimation of \u03b8", "sec_num": "3.6" }, { "text": "\u03b8 c(r+1) = (c (r) 1 , . . . , c (r) N , n)/(n + i (c (r) i )) \u03b8 d(r+1) = (d (r) 1 , . . . 
, d (r) N , n)/(n + i (d (r) i )) (14) where c (r) i = T j \u2208G c i (T j ) c i (T j ) = S\u2208T j c i (S) \u2022 P (S|T j ; D, \u03b8 (r) )\u2022 \u03d5\u03b8 c(r) i \u03d5\u03b8 c(r) i + (1 \u2212 \u03d5)\u03b8 d(r) i \u03c3 i (15) c i (S)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EM estimation of \u03b8", "sec_num": "3.6" }, { "text": "is the number of occurrences of w i which is sampled from common dictionary in sentence S, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EM estimation of \u03b8", "sec_num": "3.6" }, { "text": "d (r) i = T j \u2208G d i (T j ) d i (T j ) = S\u2208T j d i (S) \u2022 P (S|T j ; D, \u03b8 (r) )\u2022 (1 \u2212 \u03d5)\u03b8 d(r) i \u03d5\u03b8 c(r) i + (1 \u2212 \u03d5)\u03b8 d(r) i \u03c3 i (16) d i (S)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EM estimation of \u03b8", "sec_num": "3.6" }, { "text": "is the number of occurrences of w i which is sampled from domain dictionary D d in sentence S.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EM estimation of \u03b8", "sec_num": "3.6" }, { "text": "In the experiment, we found that because of the lack of domain-specific data the model tends to get long words and short segmentation. We add a segmentation length related factor to reduce this tendency, then our Q function of segmentation S i becomes:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EM estimation of \u03b8", "sec_num": "3.6" }, { "text": "Q(S i |\u03b8) = \u03b1 K i K i k=1 (\u03d5\u03b8 c i k + (1 \u2212 \u03d5)\u03b8 d i k \u03c3 i k ) (17)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EM estimation of \u03b8", "sec_num": "3.6" }, { "text": "\u03b1 is a constant parameter. K i is the length of the segmentation S i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EM estimation of \u03b8", "sec_num": "3.6" }, { "text": "In this section, we first perform an experiment to compare our method to several baselines. And then we perform parameter analysis to demonstrate how the parameters will affect our model. At last, we conduct some case studies to analysis these methods in details.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We use transcripts of an online course called Data Structure from Xuetangx.com. Xuetangx.com is one of the biggest MOOC platforms in China. These transcripts are a total of 55,045 lines, including 655312 Chinese characters in it and totally 1,792 different characters. We segment the corpus by characters and count the frequency of character-based n-grams from unigram up to 7-gram. We drop words with the frequency less than 5 and result in a 55,452 lines ngram list. The resulted n-gram list is very sparse (close to 1:170) and most of the results are obviously meaningless (like \"\u8fd9\u6837\u4e00\" which means \"one such\"). We asked two annotators to label these n-grams. These two annotators are requested to judge whether an n-gram is a domain-specific word or not, it takes almost one week to annotate these n-grams. If there is a disagreement in these annotations, the annotators will discuss the final annotation and result in a 12.6% disagreement ratio. Most of the disagreements are like \"\u8bbf \u95ee\"(visit) and \"\u63d2\u5165\"(insert) which are somewhat ambiguous. Finally, we use a relatively strict standard, this results in 326 domain-specific words. 
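A minimal sketch of the candidate-generation step just described (character n-grams from unigrams up to 7-grams, dropping candidates that occur fewer than 5 times); the file name and reading loop are assumptions for illustration:

```python
from collections import Counter

def ngram_candidates(lines, max_n=7, min_freq=5):
    # Count all character n-grams of length 1..max_n over the corpus and keep the frequent ones.
    counts = Counter()
    for line in lines:
        chars = line.strip()
        for n in range(1, max_n + 1):
            for i in range(len(chars) - n + 1):
                counts[chars[i:i + n]] += 1
    return {g: c for g, c in counts.items() if c >= min_freq}

# Usage sketch ("transcripts.txt" stands in for the course subtitle file):
# with open("transcripts.txt", encoding="utf-8") as f:
#     candidates = ngram_candidates(f)
```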
The final annotated file can be accessed in our Github repo 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Preparation", "sec_num": "4.1" }, { "text": "We use YUWEI corpus as our common background corpus. This corpus is developed by the National Language Commission, which contains 25,000,309 words with 51,311,659 characters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Preparation", "sec_num": "4.1" }, { "text": "The output of our method is a ranked list, so we use mean average precision (MAP) as one of our 1 http://github.com/dreamszl/dtopwords evaluation metrics. The MAP value is computed as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metric", "sec_num": "4.2" }, { "text": "M AP (K) = K k=1 P (k) \u00d7 rel(k) K k=1 rel(k) (18)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metric", "sec_num": "4.2" }, { "text": "where the P (k) is the precision of the top k words, rel(k) is a indicator function which return 1 when word at rank k is a domain-specific word and 0 otherwise. K is the length of the result list. When we get a list whose elements are all domainspecific words, the M AP (K) will be 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metric", "sec_num": "4.2" }, { "text": "We will also display the precision-recall curves of our results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metric", "sec_num": "4.2" }, { "text": "We compare different settings of our method with two baselines. The first baseline is pattern-based unsupervised new word detection method, which is proposed by Huang et al. (2014) . The following statistical features are taken into consideration: left pattern entropy (LPE), normalized multiword expression distance (NMED), enhanced mutual information (EMI). We implement both character based and word-based version, and the wordbased version outperforms character based version. We use the optimal parameter setting in Huang's method, which is the LPE+NMED setting in their paper. And we use annotated words to extract the candidate patterns which is a pretty good treatment for this method.", "cite_spans": [ { "start": 161, "end": 180, "text": "Huang et al. (2014)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Settings", "sec_num": "4.3.1" }, { "text": "The second baseline is origin TopWords method which has been mentioned in above section. We first run the TopWords method in the domainspecific corpus, and then use a function to rerank the word dictionary \u03b8. We use two functions to rerank the dictionary. The first one is the background frequency function and we denote this version as TopWords+Fre. The second one is the standard relative frequency method, we use the dictionary \u03b8 B of TopWords method run in background D-TopWords+Fre", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Settings", "sec_num": "4.3.1" }, { "text": "Huang et al. 
\u5177\u4f53\u6765\u8bf4(specifically speaking) \u63a5\u4e0b\u6765(next) \u786e\u5b9e(indeed) \u8bf7\u6ce8\u610f(attention please) \u6362\u800c\u8a00\u4e4b(in other words) \u81f3\u5c11(at least) \u6362\u800c\u8a00\u4e4b(in other words) \u5177\u4f53\u6765\u8bf4(specifically speaking) \u5bf9\u9f50\u4f4d\u7f6e(alignment position) \u5b57\u7b26(character) \u540c\u5b66\u4eec\u597d(hello students) \u987a\u5e8f\u6027(succession) \u62ec\u53f7(brackets) \u6211\u4eec(we) \u8bf8\u5982\u6b64\u7c7b(and so on) corpus to rerank \u03b8. We denote this version as Top-Words+RF.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TopWords+Fre", "sec_num": null }, { "text": "(1) The MAP values of all the methods are shown in Table 2 , and the PR-curves are shown in Figure 1 . From the results, we can see our D-TopWords+RF and TopWords+RF achieve the best performance. Our D-TopWords+RF achieves better performance than TopWords+RF method, especially when the recall is lower our D-TopWords+RF outperforms TopWords+RF obviously as shown in Figure 1 . In the actual application scenario, our model is more practical as the top results returned by the model are more important.", "cite_spans": [], "ref_spans": [ { "start": 51, "end": 58, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 92, "end": 100, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 367, "end": 375, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Result and Analysis", "sec_num": "4.3.2" }, { "text": "(2) Our D-TopWords methods achieve better performance than the corresponding TopWords results. We expect that our D-TopWords model can use the external information more effectively and accurately. Our D-TopWords model will give more weights to the probability whether a sequence can be a word or not, and the TopWords model will more reliable on the external information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Result and Analysis", "sec_num": "4.3.2" }, { "text": "(3) More than that, our D-TopWords+Fre meth-ods is significantly better than TopWords+Fre model and comparable to the D-TopWords+RF and TopWords+Rf model. The external background information RF takes the probability a sequence can be a word or not into consideration, however, our D-TopWords can consider this information in the model itself. So RF information is relative redundancy than Fre information to our D-TopWords model. The RF information needs to be trained on the common background corpus when the common background corpus is large it will take a very long time. 4We perform experiments of Huang et al.'s method with different domain score functions and all of these result in a poor performance. With the recall raising the precision decreases sharply, we suppose that it is because such statistical features based models cannot deal with low-frequency words well. However, our model can deal with this kind of words better by using the context information. And our model can hold a better balance between the probability whether a sequence can be a word or not and the domain score, which is hard for Huang et al.'s method. Table 5 shows how the performance changes with different \u03b1 which is the segmentation length related parameter and \u03d5 which is the dictionary weight parameter. As we can see, the performance gets better when \u03d5 increases and get the best result when \u03d5 is 0.9. 
\u03d5 represents the probability a word is sampled from the common dictionary, so it means that a word is sampled from the common dictionary with a 90% possibility and domain-specific dictionary with 10%.", "cite_spans": [], "ref_spans": [ { "start": 1138, "end": 1145, "text": "Table 5", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Result and Analysis", "sec_num": "4.3.2" }, { "text": "It achieves the best performance when \u03d5 is set as 0.9 and \u03b1 is set as 100. Looking into the results, we found \u03b1 determines the length of the words in \u03b8. When \u03b1 chooses a smaller value the results tend to be longer, when \u03b1 chooses bigger value the results tend to be shorter. And when the size of cor- pus increasing, a smaller \u03b1 value will get better performance. We set \u03b1 as 10 when estimates \u03b8 of the common background corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Tuning", "sec_num": "4.4" }, { "text": "(1) The top five wrong results of D-TopWords+RF and TopWords+RF are similar. There are some wrong results appearing in top 100 results in TopWords+RF but not in D-TopWords+RF such as \"\u5927 \u5bb6 \u6ce8 \u610f\"(everybody attention).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Case study", "sec_num": "4.5" }, { "text": "After inspecting the common dictionary \u03b8 c in D-TopWords+RF, we find both \"\u5927\u5bb6\"(everybody) and \"\u6ce8\u610f\"(attention) are in high ranks. We suppose that the usage of Domain Word Dictionary Model helps to deal with this type of sequences better.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Case study", "sec_num": "4.5" }, { "text": "(2) The teacher of this course uses \"\u6362 \u800c \u8a00\u4e4b\"(in other words), \"\u5177\u4f53\u6765\u8bf4\"(specifically speaking) very frequently, so the TopWords+Fre and D-TopWords+Fre cannot recognize them. And the wrong results \"\u63a5\u4e0b\u6765\"(next) and \"\u540c\u5b66 \u4eec\u597d\"(hello students) rank lower in our method compared to TopWords+Fre method (i.e., 25 and 41 vs 4 and 13). We suppose that it is because our method can keep a better balance of the domain score and the probability that a sequence be a new word. And we inspect other wrong results which have a similar situation, these words all have a much lower rank in our method. So these phenomena confirm our assumption that our model achieves better performance in the sequences that with low frequency in background corpus but cannot be a word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Case study", "sec_num": "4.5" }, { "text": "(3) The wrong result \"\u6211\u4eec\"(we) doesn't appear in the domain dictionary \u03b8 d , but appears at rank 7 in the \u03b8 c dictionary in our model. There are also some results appearing in a high rank in Top-Words+Fre method, but in a low rank in our D-TopWords+Fre method. For example, \u6bd4\u5982\u8bf4(for example) ranks in 39 in TopWords+Fre but rank in 574 in D-TopWords+Fre, \"\u8fd9\u4e48\u6837\"(the same as it) ranks in 31 in TopWords+Fre but ranks in 2759 in D-TopWords+Fre, \"\u4e5f\u5c31\u662f\"(that's it) ranks in 53 in TopWords+Fre but not appear in our method, and so on. 
We suppose that the usage of Domain Word Dictionary Model is the reason that our model can reach a better performance in these type of words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Case study", "sec_num": "4.5" }, { "text": "(4) The first 10 results (D-TopWords+Fre) in Data Structure course and two other courses are shown in table 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Case study", "sec_num": "4.5" }, { "text": "We propose a pure unsupervised D-TopWords model to extract new domain-specific words. Compared to traditional new word extraction model, our model doesn't need handcrafted lexical features or statistical features and starts from the unsegmented corpus. Compared to the origin TopWords model, our model can reach a better performance with the same information and can reach a comparable performance with only back-ground corpus frequency information to the Top-Words model with the relative frequency which is expensive and time-consuming.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Our D-TopWords model adds the ability to distinguish whether a word from common dictionary or domain dictionary to the origin TopWords model. We add a domain score parameter to let our model which can take the external information easily and efficiently. Experiments show that due to our modification our model can use much less external information to reach a comparable performance to the origin TopWords model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" } ], "back_matter": [ { "text": "I am very grateful to my friends in THUNLP lab and the reviewers for giving many suggestions in the course of my thesis writing. This work is supported by Center for Massive Online Education, Tsinghua University, and XuetangX (http://www.xuetangx.com/), the largest MOOC platform in China.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ackonwledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Combination of machine learning algorithms for recommendation of courses in e-learning system based on historical data", "authors": [ { "first": "B", "middle": [], "last": "Sunita", "suffix": "" }, { "first": "Lmrj", "middle": [], "last": "Aher", "suffix": "" }, { "first": "", "middle": [], "last": "Lobo", "suffix": "" } ], "year": 2013, "venue": "Knowledge-Based Systems", "volume": "51", "issue": "", "pages": "1--14", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sunita B Aher and LMRJ Lobo. 2013. Combination of machine learning algorithms for recommendation of courses in e-learning system based on historical data. Knowledge-Based Systems 51:1-14.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A memory-based approach to learning shallow natural language patterns", "authors": [ { "first": "Shlomo", "middle": [], "last": "Argamon", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Yuval", "middle": [], "last": "Krymolowski", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 17th international conference on Computational linguistics", "volume": "1", "issue": "", "pages": "67--73", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shlomo Argamon, Ido Dagan, and Yuval Kry- molowski. 1998. A memory-based approach to learning shallow natural language patterns. In Pro- ceedings of the 17th international conference on Computational linguistics-Volume 1. 
Association for Computational Linguistics, pages 67-73.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Measuring the non-compositionality of multiword expressions", "authors": [ { "first": "Fan", "middle": [], "last": "Bu", "suffix": "" }, { "first": "Xiaoyan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Ming", "middle": [ "Li" ], "last": "", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "116--124", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fan Bu, Xiaoyan Zhu, and Ming Li. 2010. Measuring the non-compositionality of multiword expressions. In Proceedings of the 23rd International Conference on Computational Linguistics. Association for Com- putational Linguistics, pages 116-124.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "An unsupervised iterative method for chinese new lexicon extraction", "authors": [ { "first": "Jing-Shin", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Keh-Yih", "middle": [], "last": "Su", "suffix": "" } ], "year": 1997, "venue": "Computational Linguistics and Chinese Language Processing", "volume": "2", "issue": "", "pages": "97--148", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jing-Shin Chang and Keh-Yih Su. 1997. An unsu- pervised iterative method for chinese new lexicon extraction. Computational Linguistics and Chinese Language Processing 2(2):97-148.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Unknown word extraction for chinese documents", "authors": [ { "first": "Keh-Jiann", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Wei-Yun", "middle": [], "last": "Ma", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 19th international conference on Computational linguistics", "volume": "1", "issue": "", "pages": "1--7", "other_ids": {}, "num": null, "urls": [], "raw_text": "Keh-Jiann Chen and Wei-Yun Ma. 2002. Unknown word extraction for chinese documents. In Proceed- ings of the 19th international conference on Compu- tational linguistics-Volume 1. Association for Com- putational Linguistics, pages 1-7.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Word association norms, mutual information, and lexicography", "authors": [ { "first": "Kenneth", "middle": [ "Ward" ], "last": "Church", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Hanks", "suffix": "" } ], "year": 1990, "venue": "Computational linguistics", "volume": "16", "issue": "1", "pages": "22--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicog- raphy. Computational linguistics 16(1):22-29.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Voting experts: An unsupervised algorithm for segmenting sequences", "authors": [ { "first": "Paul", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "Niall", "middle": [], "last": "Adams", "suffix": "" }, { "first": "Brent", "middle": [], "last": "Heeringa", "suffix": "" } ], "year": 2007, "venue": "Intelligent Data Analysis", "volume": "11", "issue": "6", "pages": "607--625", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Cohen, Niall Adams, and Brent Heeringa. 2007. Voting experts: An unsupervised algorithm for segmenting sequences. 
Intelligent Data Analysis 11(6):607-625.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Probabilistic use cases: Discovering behavioral patterns for predicting certification", "authors": [ { "first": "A", "middle": [], "last": "Cody", "suffix": "" }, { "first": "", "middle": [], "last": "Coleman", "suffix": "" }, { "first": "T", "middle": [], "last": "Daniel", "suffix": "" }, { "first": "Isaac", "middle": [], "last": "Seaton", "suffix": "" }, { "first": "", "middle": [], "last": "Chuang", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Second (2015) ACM Conference on Learning@ Scale", "volume": "", "issue": "", "pages": "141--148", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cody A Coleman, Daniel T Seaton, and Isaac Chuang. 2015. Probabilistic use cases: Discovering behav- ioral patterns for predicting certification. In Pro- ceedings of the Second (2015) ACM Conference on Learning@ Scale. ACM, pages 141-148.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A local maxima method and a fair dispersion normalization for extracting multi-word units from corpora", "authors": [ { "first": "J", "middle": [], "last": "Ferreira Da Silva", "suffix": "" }, { "first": "G Pereira", "middle": [], "last": "Lopes", "suffix": "" } ], "year": 1999, "venue": "Sixth Meeting on Mathematics of Language", "volume": "", "issue": "", "pages": "369--381", "other_ids": {}, "num": null, "urls": [], "raw_text": "J Ferreira da Silva and G Pereira Lopes. 1999. A lo- cal maxima method and a fair dispersion normaliza- tion for extracting multi-word units from corpora. In Sixth Meeting on Mathematics of Language. pages 369-381.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "On the unsupervised analysis of domainspecific chinese texts", "authors": [ { "first": "Ke", "middle": [], "last": "Deng", "suffix": "" }, { "first": "K", "middle": [], "last": "Peter", "suffix": "" }, { "first": "Kate", "middle": [ "J" ], "last": "Bol", "suffix": "" }, { "first": "Jun S", "middle": [], "last": "Li", "suffix": "" }, { "first": "", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the National Academy of Sciences page", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ke Deng, Peter K Bol, Kate J Li, and Jun S Liu. 2016. On the unsupervised analysis of domain- specific chinese texts. Proceedings of the National Academy of Sciences page 201516510.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "On emerging entity detection", "authors": [ { "first": "Michael", "middle": [], "last": "F\u00e4rber", "suffix": "" }, { "first": "Achim", "middle": [], "last": "Rettinger", "suffix": "" }, { "first": "Boulos", "middle": [], "last": "", "suffix": "" } ], "year": 2016, "venue": "Knowledge Engineering and Knowledge Management: 20th International Conference", "volume": "", "issue": "", "pages": "223--238", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael F\u00e4rber, Achim Rettinger, and Boulos El As- mar. 2016. On emerging entity detection. In Knowledge Engineering and Knowledge Manage- ment: 20th International Conference, EKAW 2016, Bologna, Italy, November 19-23, 2016, Proceedings 20. 
Springer, pages 223-238.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Discovering chinese words from unsegmented text (poster abstract)", "authors": [ { "first": "Xianping", "middle": [], "last": "Ge", "suffix": "" }, { "first": "Wanda", "middle": [], "last": "Pratt", "suffix": "" }, { "first": "Padhraic", "middle": [], "last": "Smyth", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval", "volume": "", "issue": "", "pages": "271--272", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xianping Ge, Wanda Pratt, and Padhraic Smyth. 1999. Discovering chinese words from unsegmented text (poster abstract). In Proceedings of the 22nd an- nual international ACM SIGIR conference on Re- search and development in information retrieval. ACM, pages 271-272.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A chunking strategy towards unknown word detection in chinese word segmentation", "authors": [ { "first": "", "middle": [], "last": "Zhou Guodong", "suffix": "" } ], "year": 2005, "venue": "International Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "530--541", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhou GuoDong. 2005. A chunking strategy towards unknown word detection in chinese word segmenta- tion. In International Conference on Natural Lan- guage Processing. Springer, pages 530-541.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "New word detection for sentiment analysis", "authors": [ { "first": "Minlie", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Borui", "middle": [], "last": "Ye", "suffix": "" }, { "first": "Yichen", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Haiqiang", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Junjun", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Xiaoyan", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2014, "venue": "ACL (1)", "volume": "", "issue": "", "pages": "531--541", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minlie Huang, Borui Ye, Yichen Wang, Haiqiang Chen, Junjun Cheng, and Xiaoyan Zhu. 2014. New word detection for sentiment analysis. In ACL (1). pages 531-541.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "An extensive empirical study of collocation extraction methods", "authors": [ { "first": "Pavel", "middle": [], "last": "Pecina", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the ACL Student Research Workshop", "volume": "", "issue": "", "pages": "13--18", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pavel Pecina. 2005. An extensive empirical study of collocation extraction methods. In Proceedings of the ACL Student Research Workshop. Association for Computational Linguistics, pages 13-18.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Chinese segmentation and new word detection using conditional random fields", "authors": [ { "first": "Fuchun", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Fangfang", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 20th international conference on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fuchun Peng, Fangfang Feng, and Andrew McCallum. 2004. 
Chinese segmentation and new word detec- tion using conditional random fields. In Proceed- ings of the 20th international conference on Compu- tational Linguistics. Association for Computational Linguistics, page 562.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Smart jump: Automated navigation suggestion for videos in moocs", "authors": [ { "first": "Han", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Xiaochen", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zhengyang", "middle": [], "last": "Song", "suffix": "" }, { "first": "Jie", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Jimeng", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 26th International Conference on World Wide Web Companion. International World Wide Web Conferences Steering Committee", "volume": "", "issue": "", "pages": "331--339", "other_ids": {}, "num": null, "urls": [], "raw_text": "Han Zhang, Maosong Sun, Xiaochen Wang, Zhengyang Song, Jie Tang, and Jimeng Sun. 2017. Smart jump: Automated navigation sug- gestion for videos in moocs. In Proceedings of the 26th International Conference on World Wide Web Companion. International World Wide Web Conferences Steering Committee, pages 331-339.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Improving effectiveness of mutual information for substantival multiword expression extraction", "authors": [ { "first": "Wen", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Taketoshi", "middle": [], "last": "Yoshida", "suffix": "" }, { "first": "Xijin", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Tu-Bao", "middle": [], "last": "Ho", "suffix": "" } ], "year": 2009, "venue": "Expert Systems with Applications", "volume": "36", "issue": "8", "pages": "10919--10930", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wen Zhang, Taketoshi Yoshida, Xijin Tang, and Tu- Bao Ho. 2009. Improving effectiveness of mu- tual information for substantival multiword expres- sion extraction. Expert Systems with Applications 36(8):10919-10930.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Chinese new word detection from query logs", "authors": [ { "first": "Yan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2010, "venue": "International Conference on Advanced Data Mining and Applications", "volume": "", "issue": "", "pages": "233--243", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yan Zhang, Maosong Sun, and Yang Zhang. 2010. Chinese new word detection from query logs. In In- ternational Conference on Advanced Data Mining and Applications. Springer, pages 233-243.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Incorporating user behaviors in new word detection", "authors": [ { "first": "Yabin", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Liyun", "middle": [], "last": "Ru", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2009, "venue": "IJCAI. Citeseer", "volume": "9", "issue": "", "pages": "2101--2106", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yabin Zheng, Zhiyuan Liu, Maosong Sun, Liyun Ru, and Yang Zhang. 2009. 
Incorporating user behav- iors in new word detection. In IJCAI. Citeseer, vol- ume 9, pages 2101-2106.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "PR-Curves of our methods and two baselines", "type_str": "figure", "num": null }, "TABREF1": { "num": null, "text": "Discovering new words in data structure domain(MAP)", "type_str": "table", "html": null, "content": "" }, "TABREF2": { "num": null, "text": "Top 5 wrong results of D-TopWords+Fre, TopWords+Fre and Huang et al.'s method", "type_str": "table", "html": null, "content": "
" }, "TABREF4": { "num": null, "text": "", "type_str": "table", "html": null, "content": "
Table 4: Top 10 results of D-TopWords+Fre in three courses
\u03d5 | \u03b1=10 | \u03b1=50 | \u03b1=100 | \u03b1=500 | \u03b1=1000
0.3 | 0.243 | 0.344 | 0.389 | 0.416 | 0.429
0.5 | 0.323 | 0.441 | 0.479 | 0.529 | 0.516
0.7 | 0.405 | 0.513 | 0.559 | 0.593 | 0.483
0.9 | 0.437 | 0.672 | 0.719 | 0.547 | 0.448
0.99 | 0.306 | 0.470 | 0.479 | 0.519 | 0.447
" }, "TABREF5": { "num": null, "text": "MAP of top 100 results'performance with different \u03b1 and \u03d5, under the D-TopWords+Fre model.", "type_str": "table", "html": null, "content": "" } } } }