{ "paper_id": "2019", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:27:19.053651Z" }, "title": "Computational Linguistics & Chinese Language Processing Aims and Scope", "authors": [ { "first": "\u7c21\u570b\u5cfb", "middle": [ "\uf02a" ], "last": "\u3001\u5f35\u5609\u60e0", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "" }, { "first": "Kuo-Chun", "middle": [], "last": "Chien", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "" }, { "first": "Chia-Hui", "middle": [], "last": "Chang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "" }, { "first": "\uf02a", "middle": [], "last": "\u570b\u7acb\u4e2d\u592e\u5927\u5b78\u8cc7\u8a0a\u5de5\u7a0b\u5b78\u7cfb", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Yen-Hao", "middle": [], "last": "Huang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "" }, { "first": "Ting-Wei", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "" }, { "first": "Ssu-Rui", "middle": [], "last": "Lee", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "" }, { "first": "Ya-Wen", "middle": [], "last": "Yu", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "" }, { "first": "Wan-Hsuan", "middle": [], "last": "Lee", "suffix": "", "affiliation": {}, "email": "" }, { "first": "\uf02a", "middle": [], "last": "Fernando", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "" }, { "first": "Henrique", "middle": [], "last": "Calderon", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "" }, { "first": "Yi-Shin", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "" }, { "first": "\u5f35\u4fee\u745e", "middle": [ "\uf02a" ], "last": "\u3001\u8d99\u5049\u6210", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "" }, { "first": "\uf02a", "middle": [], "last": "\u3001\u7f85\u5929\u5b8f", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "" }, { "first": "\uf02a", "middle": [], "last": "\u3001\u9673\u67cf\u7433", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "" }, { "first": "Hsiu-Jui", "middle": [], "last": "Chang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "" }, { "first": "Wei-Cheng", "middle": [], "last": "Chao", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Tien-Hong", "middle": [], "last": "Lo", "suffix": "", "affiliation": {}, "email": "teinhonglo@ntnu.edu.tw" }, { "first": "Berlin", "middle": [], "last": "Chen", "suffix": "", "affiliation": {}, "email": "berlin@ntnu.edu.tw" } ], "year": "", "venue": null, "identifiers": {}, 
"abstract": "Named Entity Recognition (NER) is an essential task in Natural Language Processing. Memory Enhanced CRF (MECRF) integrates external memory to extend Conditional Random Field (CRF) to capture long-range dependencies with attention mechanism. However, the performance of pure MECRF for Chinese NER is not good. In this paper, we enhance MECRF with Stacked CNNs and gated mechanism to capture better word and sentence representation for Chinese NER. Meanwhile, we combine both character and word information to improve the performance. We further improve the performance by importing common before and common after vocabularies of named entities as well as entity prefix and suffix via feature mining. The BAPS features are then combined with character embedding features to automatically adjust the weight. The model proposed in this research achieve 91.67% tagging accuracy on the online social media data for Chinese person name recognition, and reach the highest F1-score 92.45% for location name recognition and 90.95% overall recall rate in SIGHAN-MSRA dataset.", "pdf_parse": { "paper_id": "2019", "_pdf_hash": "", "abstract": [ { "text": "Named Entity Recognition (NER) is an essential task in Natural Language Processing. Memory Enhanced CRF (MECRF) integrates external memory to extend Conditional Random Field (CRF) to capture long-range dependencies with attention mechanism. However, the performance of pure MECRF for Chinese NER is not good. In this paper, we enhance MECRF with Stacked CNNs and gated mechanism to capture better word and sentence representation for Chinese NER. Meanwhile, we combine both character and word information to improve the performance. We further improve the performance by importing common before and common after vocabularies of named entities as well as entity prefix and suffix via feature mining. The BAPS features are then combined with character embedding features to automatically adjust the weight. The model proposed in this research achieve 91.67% tagging accuracy on the online social media data for Chinese person name recognition, and reach the highest F1-score 92.45% for location name recognition and 90.95% overall recall rate in SIGHAN-MSRA dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "\u547d\u540d\u5be6\u9ad4\u8fa8\u8b58", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u7dd2\u8ad6 (Introduction)", "sec_num": "1." 
}, { "text": "\u6211\u5011\u53c3\u8003 MECRF \u505a\u6cd5\uff0c\u4f7f\u7528\u4e8c\u7d44\u96d9\u5411\u9577\u77ed\u671f\u8a18\u61b6(LSTM)\u5206\u5225\u5c0d\u8a18\u61b6 GM \u9032\u884c\u7de8\u78bc\uff0c\u5c07 \u6642 \u9593 \u5e8f \u5217 \u8a0a \u865f \u52a0 \u5165 \u6a21 \u578b \u7576 \u4e2d \uff0c \u7522 \u751f \u8f38 \u5165 \u8a18 \u61b6 (Input Memory) \u4ee5 \u53ca \u8f38 \u51fa \u8a18 \u61b6 (Output Memory)\uff0c \u5982\u5f0f(4)\u3001 5 (6) \u6700\u5f8c\u4f7f\u7528\u52a0\u6b0a\u548c\u4f86\u8a08\u7b97\u7576\u524d\u7684\u8f38\u51fa \uff0c\u5982\u5f0f 7 ", "cite_spans": [ { "start": 138, "end": 141, "text": "(6)", "ref_id": "BIBREF54" } ], "ref_spans": [], "eq_spans": [], "section": "\u8a18\u61b6\u5c64(Memory Layer)", "sec_num": "3.2" }, { "text": ", , ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u8a18\u61b6\u5c64(Memory Layer)", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u6211 \u5011 \u8a66 \u5716 \u5c07 \u6b64 \u56db \u985e \u7279 \u5fb5 \u5404 \u4e09 \u7a2e \u9577 \u5ea6 (1-gram, 2-gram, 3-gram) \u5171 12 \u500b \u7279 \u5fb5 \uff0c \u7d93 \u904e CNN-BiGRU-MECRF \u8207\u4e0a\u8ff0\u6a21\u578b\u7d50\u5408\uff0c\u518d\u4f7f\u7528\u4e00\u500b\u53ef\u7531\u6a21\u578b\u81ea\u52d5\u8a13\u7df4\u7684\u8b8a\u6578\u03b1 (a\u2208[0,1]) \u4f86\u8abf\u6574\u5d4c\u5165\u5411\u91cf(EMB)\u8207 BAPS \u7279\u5fb5\u6240\u4f54\u7684\u6bd4\u91cd\uff0c\u7d93\u904e\u5f0f(10)\u7684\u8a08\u7b97\u5f8c\uff0c\u6700\u5f8c\u518d\u4f7f\u7528\u689d\u4ef6 \u96a8\u6a5f\u5834\u57df\u9032\u884c\u5e8f\u5217\u6a19\u8a18\u3002 \u22c5 1 \u22c5", "eq_num": "(10" } ], "section": "\u8a18\u61b6\u5c64(Memory Layer)", "sec_num": "3.2" }, { "text": "With the popularization of the Internet and communication devices, information can be sent more quickly and widely than ever before. However, technological advances have also made it difficult to avoid incorrect information. Sponsored reviews, which have recently become a popular marketing strategy in online forums, can provide incorrect information. The intention of these articles is to give their consumers a positive impression of the product. Some advertisement companies have even begun to use sponsored reviews as a new method of promoting their commodities. Such sponsored reviews usually only provide positive information about a product. Thus, these reviews may hide the disadvantages of a product and potentially mislead consumers into making an unbeneficial purchase.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "As unreliable data may contain incomplete or incorrect information, it is important to avoid them. Most of the filtering approaches on online social platforms rely on mutual reviewing from users or human-designed rules. However, no matter which approach is used, automatic filtering is still limited due to the various methods of writing sponsored reviews and how quickly information is generated. Consequently, a system to automatically identify these kinds of information has become an important issue in the information reliability research field.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In this work, we focus on recognizing the information reliability of review articles on online web platforms. Review articles are widely consumed by readers in order for them to purchase the best products. 
General filtering methods fail to address two main difficulties. First, current filters are easily fooled if the method considers only word-based characteristics; writers can simply avoid specific words or phrases to pass the filtering check. Second, there is a lack of defined and labeled sponsored review data for testing reliability problems. It is difficult enough to collect these articles manually, let alone to create rules for gathering them automatically, because these articles are written by experienced writers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "To address the first issue of keyword bias, this research focuses on extracting the latent writing style of review articles to avoid the specific word biases found in word-level methods. The presented research proposes a Contextualized Affect Representation for Implicit Style Recognition (CARISR) method to recognize the writing styles of various reviews. The proposed CARISR consists of an unsupervised approach for generating stylistic word patterns, which condenses patterns into distributed matrix representations, and a learning-based model. Sections 4 and 5 describe the details of the stylistic patterns and the model, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The biggest difference between the general methods and CARISR is that the latter defines two specific word groups, stylistic skeleton words (CW) and stylistic content words (SW), to capture writing style information. A set of stylistic word patterns is extracted based on the constructive relationship of the different stylistic skeleton words and content words in the sentence. By adopting stylistic word patterns, the experimental results show that CARISR is more robust compared to word-based approaches, including neural network methods. In other words, the contextualized affect representation model is less susceptible to changes to specific words. Consequently, CARISR has a better ability to deal with the first challenge, that is, to detect the implicit word usages of advertisement writers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "For the second difficulty, the lack of labeled data, we defined our recognition targets as sponsored reviews (業配文), trial product reviews (產品試用文), and self-purchased product reviews (自購心得文). Since it is rare for sponsored reviews to actually be labelled as such, we introduced a similar class that is more easily obtained, called official advertisements (廣告), as the weak label concept for model pre-training. The transfer learning approach can then be applied to the target label of sponsored review.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "This work proposes that the purpose of sponsored reviews is closer to that of official advertisements than to that of self-purchased product reviews. This similarity allows transfer learning to be adopted in our work. After preliminary training leveraging a large number of advertisements, the model should have the ability to classify the implicit writing style of advertisements. 
Further, we manually collected a small amount of sponsored reviews for transfer learning and fine-tuning. The proposed model achieves around 70 percent accuracy and shows better robustness than the compared models, which demonstrates that our framework works successfully even with scarce sponsored review resources.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "To summarize this research briefly, we highlight the following contributions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "• To quantify the problem of review article reliability, we defined different levels of reviews and collected the corresponding datasets for model training. • To prevent our model from being deceived by intentional word selection, our model recognizes reliability based on the implicit writing style instead of word-level features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "• To capture the implicit writing style, we first applied graph-based pattern extraction to the review articles. Then, we designed the embedding strategy of contextual stylistic patterns for the convolutional neural network model. • To overcome the insufficient quantity problem, we combined the weak label concept and the transfer learning approach to stabilize the learning process and improve the performance and robustness of our model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Information reliability research aims to distinguish whether given information is reliable or not. Most information reliability research can be considered credibility analysis of news. The main difficulty of credibility analysis is finding effective features for identifying whether a news item is reliable. To address this problem, researchers attempt to extract different features, which can be categorized into propagation-based, knowledge-based, and content-based approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Information Reliability", "sec_num": "2.1" }, { "text": "For propagation-based approaches, social media is one major domain for news sharing, and analysis within social media relies heavily on social context features such as author profiles, retweets, and likes. Social media rumor detection (Derczynski et al., 2017) utilized conversations on Twitter to determine veracity in the RumorEval tasks. By modeling post sequences and behaviors on social media, researchers (Kochkina, Liakata, & Zubiaga, 2018; Ruchansky, Seo, & Liu, 2017; Volkova, Shaffer, Jang, & Hodas, 2017) proposed supervised methods to detect rumors and fake content. These approaches assume that the footprint and network of fake news differ from those of real news. Moreover, it has been shown that fake news spreads faster than real news (Vosoughi, Roy, & Aral, 2018) . Propagation-based methods rely on social context features; therefore, it is difficult to capture enough information to detect fake news right after it emerges. They are also limited to social networks, where such social context features are available. 
In contrast, this work studies reliability using only textual information; therefore, it can recognize unreliable information in real time.", "cite_spans": [ { "start": 237, "end": 262, "text": "(Derczynski et al., 2017)", "ref_id": null }, { "start": 416, "end": 452, "text": "(Kochkina, Liakata, & Zubiaga, 2018;", "ref_id": "BIBREF17" }, { "start": 453, "end": 481, "text": "Ruchansky, Seo, & Liu, 2017;", "ref_id": null }, { "start": 482, "end": 520, "text": "Volkova, Shaffer, Jang, & Hodas, 2017)", "ref_id": "BIBREF18" }, { "start": 773, "end": 802, "text": "(Vosoughi, Roy, & Aral, 2018)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Information Reliability", "sec_num": "2.1" }, { "text": "Knowledge-based methods include traditional manual fact-checking by experts and automatic fact-checking (Shi & Weninger, 2016; Shiralkar, Flammini, Menczer, & Ciampaglia, 2017; Wu, Agarwal, Li, Yang, & Yu, 2014) . Several organizations, such as PolitiFact and Snopes, investigate news and related documents to report the credibility of a claim. Manual fact-checking is time-consuming and expert-oriented, making it difficult to handle the huge number of false claims in online news media. Thus, automated knowledge-based fact-checking systems have been developed. Such a system extracts the claims in news content and tries to match each claim to relevant data in an external knowledge base. In our work, we do not rely on external knowledge bases or web evidence; instead, we extract stylistic features from articles to automatically capture the implicit style of unreliable information.", "cite_spans": [ { "start": 103, "end": 125, "text": "(Shi & Weninger, 2016;", "ref_id": null }, { "start": 126, "end": 175, "text": "Shiralkar, Flammini, Menczer, & Ciampaglia, 2017;", "ref_id": null }, { "start": 176, "end": 210, "text": "Wu, Agarwal, Li, Yang, & Yu, 2014)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Information Reliability", "sec_num": "2.1" }, { "text": "Content-based methods aim to capture the keywords or writing style of maliciously fabricated news from its content. The advantage of content-based methods is that they can immediately alert the reader from the content alone, whether the news is newly emerged or not. Previous content-based works can be categorized into two groups. One group focused on \"textual content classification\" (Al-Anzi & AbuZeina, 2017; Pavlinek & Podgorelec, 2017; Qu et al., 2018; Wang, Luo, Li, & Wang, 2017) , which classifies content by \"content words\" that are meaningful and differ depending on the content. The other group is interested in \"writing style recognition\" (Gomez Adorno, Rios, Posadas Durán, Sidorov, & Sierra, 2018; Rexha, Kröll, Ziak, & Kern, 2018; Stamatatos, 2009) , which aims to find articles that have the same style but different content. Such methods pay more attention to \"function words\" and sentence structure, which were often regarded as less important. Several studies (Karimi and Tang (2019) ; Khan, Khondaker, Iqbal, and Afroz (2019) ; Wang et al. (2018) ) have shown promising results by taking advantage of machine learning techniques. However, Janicka, Pszona, and Wawer (2019) address the failure of cross-domain detection, which can be interpreted as a type of overfitting to the training domain. 
That work conducted experiments on four types of domains, including short-text claims, full-text content, fake news generated via Amazon Mechanical Turk (AMT), and fake news on Facebook. The experiments show that a model can fit well within a single domain, but accuracy drops sharply when testing on another domain.", "cite_spans": [ { "start": 408, "end": 434, "text": "(Al-Anzi & AbuZeina, 2017;", "ref_id": "BIBREF5" }, { "start": 435, "end": 463, "text": "Pavlinek & Podgorelec, 2017;", "ref_id": null }, { "start": 464, "end": 480, "text": "Qu et al., 2018;", "ref_id": null }, { "start": 481, "end": 509, "text": "Wang, Luo, Li, & Wang, 2017)", "ref_id": "BIBREF20" }, { "start": 748, "end": 772, "text": "Sidorov, & Sierra, 2018;", "ref_id": "BIBREF11" }, { "start": 773, "end": 806, "text": "Rexha, Kröll, Ziak, & Kern, 2018;", "ref_id": null }, { "start": 807, "end": 824, "text": "Stamatatos, 2009)", "ref_id": null }, { "start": 1082, "end": 1104, "text": "Karimi and Tang (2019)", "ref_id": "BIBREF13" }, { "start": 1107, "end": 1147, "text": "Khan, Khondaker, Iqbal, and Afroz (2019)", "ref_id": "BIBREF14" }, { "start": 1150, "end": 1168, "text": "Wang et al. (2018)", "ref_id": "BIBREF21" }, { "start": 1260, "end": 1293, "text": "Janicka, Pszona, and Wawer (2019)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Information Reliability", "sec_num": "2.1" }, { "text": "To represent the unique characteristics of different text documents, several feature extraction methods have been proposed. Before the widespread use of deep learning models, many methods relied on hand-crafted, lexicon-based, and syntactic approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text Representation", "sec_num": "2.2" }, { "text": "Hand-crafted approaches are based on predefined dictionaries or linguistic resources such as the linguistic inquiry and word count (LIWC) affect lexicon (Pennebaker, Booth, & Francis, 2007) . One of the advantages of using predefined dictionaries is that they are usually of high quality due to the rigorous labeling process. However, this also presents a scalability problem, as these features may not be representative of dynamically evolving language use.", "cite_spans": [ { "start": 155, "end": 191, "text": "(Pennebaker, Booth, & Francis, 2007)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Text Representation", "sec_num": "2.2" }, { "text": "Lexicon-based approaches automatically extract representative tokens from a corpus, such as bag-of-words (BOW) or term frequency-inverse document frequency (TF-IDF). BOW learns the distribution of word usages to represent the corpus. By integrating n-grams, the token units of BOW can be extended from single words to n-word phrases to extract higher-level features. TF-IDF further introduces a statistical weighting to reduce the importance of common tokens, such as \"the\" and \"or\". One benefit of the lexicon-based approach is that it is robust to misspellings and out-of-vocabulary (OOV) problems. However, it results in an extremely large vocabulary in memory and suffers from the curse of dimensionality due to vocabulary sparsity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text Representation", "sec_num": "2.2" }, { "text": "Syntactic approaches, including part-of-speech (POS) parsing trees and graph-based word patterns, consider the relations among words. 
A POS parsing tree converts words into POS tags and models the syntactic structure of a sentence. The syntactic POS tree benefits sentence understanding; however, the POS tagging process relies on predefined dictionaries, may encounter OOV problems, and may not perform stably for specific terminologies or across different languages. The graph-based word pattern approaches (Argueta, Saravia, & Chen, 2015; Saravia, Liu, Huang, Wu, & Chen, 2018) analyze hidden word relations by learning a word relation graph dynamically from the corpus. By adopting graph analysis techniques, words that are important to the connectivity of the graph structure can be extracted and used to construct n-gram word patterns. As the word graph can represent longer connections between words than n-gram approaches, the hidden relations among words are better preserved. The word patterns derived from the graph structure capture the syntactic features of the corpus rather than n-gram key tokens; such syntactic word patterns are thus considered a representation of the writing style. Although this method can learn syntactic writing styles from a word relation graph, current approaches have focused only on English corpora. This work aims to leverage the benefits of the word relation graph and proposes modifications to extract syntactic writing style features from a Mandarin corpus.", "cite_spans": [ { "start": 522, "end": 554, "text": "(Argueta, Saravia, & Chen, 2015;", "ref_id": "BIBREF6" }, { "start": 555, "end": 593, "text": "Saravia, Liu, Huang, Wu, & Chen, 2018)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Text Representation", "sec_num": "2.2" }, { "text": "In deep learning approaches, words are embedded as vector representations by different contextual learning techniques, such as word2vec (Mikolov, Chen, Corrado, & Dean, 2013) and GloVe (Pennington, Socher, & Manning, 2014) . The word vectors preserve the semantic reasoning capabilities of words and are treated as the input feature representations for deep learning models, such as the sequence-modeling recurrent neural network (RNN) and the convolutional neural network (CNN), which focuses on local pattern extraction.", "cite_spans": [ { "start": 144, "end": 182, "text": "(Mikolov, Chen, Corrado, & Dean, 2013)", "ref_id": "BIBREF3" }, { "start": 187, "end": 230, "text": "GloVE (Pennington, Socher, & Manning, 2014)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Text Representation", "sec_num": "2.2" }, { "text": "By integrating traditional methods and modern neural network approaches, this study proposes an approach that leverages graph pattern features and a convolutional neural network model to identify unreliable textual information. The proposed model not only captures textual and stylistic features from articles but also adapts to different writing styles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text Representation", "sec_num": "2.2" }, { "text": "To prevent keyword bias, we studied various writing styles with a focus on frequent word usages and the corresponding co-located words for each writing style. In this work, we adapted the concept of graph-based pattern extraction approaches to dynamically learn the writing style of Mandarin product review datasets. This approach has been applied in related works on emotion analysis by extracting the word patterns for each emotion. 
In the following sections, we highlight the adaptation of the graph-based emotion pattern approach to extract stylistic word patterns as the writing style.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contextualized Affect Representation for Implicit Style Recognition", "sec_num": "3." }, { "text": "The overall framework, which can be separated into stylistic pattern feature extraction (titles highlighted in orange) and model architecture (title highlighted in yellow), is shown in Figure 1 . By constructing the word relation graph, the hidden word relations are preserved to enrich the stylistic word patterns in comparison to traditional lexicon-based approaches. A weighting mechanism was proposed to learn the significance of each pattern for each style.", "cite_spans": [], "ref_spans": [ { "start": 185, "end": 193, "text": "Figure 1", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Contextualized Affect Representation for Implicit Style Recognition", "sec_num": "3." }, { "text": "Articles were first transformed into stylistic patterns by encoding each matched pattern and determining the corresponding score vector, which represents the article's stylistic pattern. In this work, the pattern representations were treated as the input of a neural network model for document classification based on writing style features. The details of the stylistic pattern feature extraction and model architecture are summarized in the following subsections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contextualized Affect Representation for Implicit Style Recognition", "sec_num": "3." }, { "text": "Given a set of corpora C and the sentences in a corpus c, the sequence of words in a sentence is denoted w_1 w_2 ... w_n. The word graph G then represents the graph structure for the corpus set C, such that G = (V, A, W). The vertex set V contains a node for every word token in the corpus, and A is a set of arcs, each representing a bi-gram relationship between two adjacent tokens. For example, the tokenized sentence \"用 _ 起來 _ 還有 _ 飾色 _ 效果 _，_ 給 _ 你 _ 無可取代 _ 的 _ 透亮 _ 蘋果光 _ 唷 _！！\" would construct the following bi-gram relations: \"用 → 起來\", \"起來 → 還有\", \"還有 → 飾色\", ..., \"蘋果光 → 唷\", \"唷 → ！！\". Note that the underscore \"_\" shows how the sentence is tokenized and the arrow \"→\" denotes a link relation in the word graph.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stylistic Graph Construction", "sec_num": "4.1" }, { "text": "For the edge weights W, instead of initializing them with a binary representation aligned with the adjacency matrix, the edge weight between two word tokens w_i and w_j is defined as their bi-gram probability in order to capture the significance of the link relation. The bi-gram probability is designed with the global bi-gram frequency, i.e., the total frequency of all bi-grams, as its denominator, rather than the degree of the word node or the frequency of outgoing links from the node. By comparing against all bi-gram tokens, the word graph can better capture and compare the global significance of each node. 
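As a minimal sketch of the graph construction and bi-gram edge weighting just described (the tokenizer, variable names, and toy sentences are illustrative assumptions, not part of the original work):

```python
from collections import Counter

def build_word_graph(tokenized_sentences):
    """Build a directed word graph whose edge weights are bi-gram
    frequencies normalized by the global bi-gram frequency."""
    bigram_counts = Counter()
    for tokens in tokenized_sentences:
        # every pair of adjacent tokens forms an arc w_i -> w_j
        bigram_counts.update(zip(tokens, tokens[1:]))

    total = sum(bigram_counts.values())  # global bi-gram frequency
    graph = {}
    for (w_i, w_j), freq in bigram_counts.items():
        # edge weight = freq(w_i, w_j) / sum of all bi-gram frequencies
        graph.setdefault(w_i, {})[w_j] = freq / total
    return graph

# toy usage with two tokenized sentences
sentences = [["用", "起來", "還有", "飾色", "效果"],
             ["用", "起來", "很", "保濕"]]
g = build_word_graph(sentences)
print(g["用"]["起來"])  # 2 / 7 ≈ 0.286
```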
Consistent with the edge weight setting, the weighted adjacency matrix is designed as the matrix representation of the edge weights and is defined in Definition 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stylistic Graph Construction", "sec_num": "4.1" }, { "text": "With this weighting mechanism, the word graph is better able to preserve the syntactic structure of words in a graph representation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stylistic Graph Construction", "sec_num": "4.1" }, { "text": "Let M be the weighted adjacency matrix in which each entry M_{i,j} represents the relation of the word pair (w_i, w_j) in the word graph G:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 1 (Weighted Adjacency Matrix)", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "M_{i,j} = \\frac{freq(w_i, w_j)}{\\sum_{(w_m, w_n) \\in A} freq(w_m, w_n)}", "eq_num": "(1)" } ], "section": "Definition 1 (Weighted Adjacency Matrix)", "sec_num": null }, { "text": "where freq(·) denotes the frequency of a bi-gram of two words, (w_i, w_j) or (w_m, w_n). ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 1 (Weighted Adjacency Matrix)", "sec_num": null }, { "text": "Writing styles vary from individual to individual. The idea that people use different distributions of words for different topics has been widely accepted in several topical methods, such as latent Dirichlet allocation (LDA) (Blei, Ng, & Jordan, 2003) . This work also uses this concept to extract and decompose the writing style into two elements: the stylistic skeleton and the stylistic contents. This work assumes that sentences and corpora are constructed by choosing the words of a selected style to form the skeleton and then deciding the content words to complete the sentence structure.", "cite_spans": [ { "start": 229, "end": 255, "text": "(Blei, Ng, & Jordan, 2003)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Stylistic Word Extraction", "sec_num": "4.2" }, { "text": "To extract the stylistic elements, two types of graph analyses, centrality and clustering, were applied to the word graph G. Each analysis method helps to generate a set of words: stylistic skeleton words (CW) (i.e., stylistic stop words) and stylistic content words (SW).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stylistic Word Extraction", "sec_num": "4.2" }, { "text": "The stylistic skeleton represents the fundamental elements of word usage in a style, where such words should be widely used in all the corpora of a given style. That is, all of the words included in the stylistic skeleton of a style should consistently appear in all of the corpora of that style. In the graphical representation, words that have strong connections to other words are considered suitable candidates for stylistic skeleton words, as they act as the fundamental nodes in the word relation graph G. 
Inspired by Google's PageRank (Page, Brin, Motwani, & Winograd, 1999) , in which highly connected nodes contribute more importance than weakly connected nodes, the eigenvector centrality was selected to measure the influence of each node in G.", "cite_spans": [ { "start": 583, "end": 622, "text": "(Page, Brin, Motwani, & Winograd, 1999)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Stylistic Skeleton", "sec_num": "4.2.1" }, { "text": "The eigenvector centrality is calculated as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 2 (Eigenvector Centrality)", "sec_num": null }, { "text": "e_i = (1/λ) Σ_{w_j ∈ N(w_i)} M_{j,i} e_j (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 2 (Eigenvector Centrality)", "sec_num": null }, { "text": "where 1/λ is a proportionality factor and e_i is the centrality score of word node w_i. Letting λ be the corresponding eigenvalue, the equation can be rewritten in vector form as Me = λe, where e is the eigenvector of M.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 2 (Eigenvector Centrality)", "sec_num": null }, { "text": "A word is selected as a connector word if its eigenvector centrality is higher than an empirically defined threshold θ_e, which ensures the quality of the high-connectivity words. The higher the centrality of a word, the more important the word is in the graph G. By this centrality measurement, a set of connector words with both high frequency and connectivity to other high-rank nodes is extracted from the word relation graph and considered the stylistic skeleton words CW, such that CW = {cw | e_{cw} > θ_e, cw ∈ V}. Examples of the stylistic skeleton words in this task (the makeup advertisement dataset) were as follows: \"我,\" \"的,\" \"因為,\" \"肌膚,\" and \"特別.\" The extracted stylistic skeleton words contained not only numerous traditional stopwords but also style-specific words, which are known as stylistic stopwords.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 2 (Eigenvector Centrality)", "sec_num": null }, { "text": "The stylistic contents represent frequently appearing topics within a style, where topics could be formed by several separate words (as in LDA) or by continuous word sequences. Apart from the skeleton, a topic can be presented using words in different ways; however, to represent the similar semantics of the topic, the topic words are generally interchangeable. For example, in the makeup advertisement dataset, there are several ways to describe a product's effect on skin care, such as \"能 _ 有效 _ 保養 _ 肌膚,\" \"保護 _ 嫩白 _ 肌膚,\" or \"擁有 _ 水嫩 _ 臉頰.\" In the above example, some word tokens can be changed while keeping the meaning the same, such as \"保養\" to \"保護\" or \"嫩白\" to \"水嫩\" and so on.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stylistic Content", "sec_num": "4.2.2" }, { "text": "To capture the stylistic content cues, this work focuses on interchangeable word usages. By converting the style corpus into the word relation graph, the cross connections between these interchangeable word nodes are discovered. Such stylistic content word nodes tend to cluster with other nodes that share the same or similar concepts. 
The clustering behavior in the graph can be measured by a graph analysis factor, namely the clustering coefficient, which determines how a node interconnects with its neighbor nodes. This work therefore applied the clustering coefficient to dynamically extract the stylistic contents, as shown below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stylistic Content", "sec_num": "4.2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "cl_i = \\frac{2\\,T(w_i)}{\\deg(w_i)\\,(\\deg(w_i)-1)}", "eq_num": "(3)" } ], "section": "Definition 3 (Clustering Coefficient)", "sec_num": null }, { "text": "where cl_i denotes the clustering coefficient of node w_i and T(w_i) denotes the number of triangles through w_i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 3 (Clustering Coefficient)", "sec_num": null }, { "text": "Similarly, the word nodes were also filtered by a predefined clustering coefficient threshold θ_cl to ensure clustering quality. While computing the clustering coefficient for each node, we discovered that there were many nodes with high coefficients. However, many of them belonged to local mini-clusters in which the degree of the node was too small, resulting in too many overly specific words for the stylistic contents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 3 (Clustering Coefficient)", "sec_num": null }, { "text": "A post-filtering step was then applied to remove the local mini-cluster and small-cluster words based on the number of triangles of the word nodes, where fewer node triangles indicate a smaller cluster. With the post-filtering step, a set of qualified stylistic content words SW was retrieved, such that SW = {sw | cl_{sw} > θ_cl, T(sw) > θ_T, sw ∈ V}, where θ_T denotes the empirical threshold for the number of triangles of a word node. Some examples of stylistic content words in this task were \"森林系,\" \"世界級,\" \"黏稠度,\" and \"可愛感.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stylistic Content", "sec_num": "4.2.2" }, { "text": "With the extracted stylistic skeleton and stylistic content words, this step aims to construct the stylistic word pattern templates. The stylistic word pattern is designed to capture hidden word usages in a writing style. For a word pattern, the length l of the pattern can be dynamic; that is, there may exist a longer stylistic word pattern (i.e., slogans) or a shorter one (i.e., topic tokens). 
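Before turning to the pattern length and template construction, the stylistic word extraction of Section 4.2 can be sketched as follows, assuming a networkx graph built from the weighted graph above; the threshold values are illustrative placeholders rather than the empirically chosen ones:

```python
import networkx as nx

def extract_stylistic_words(graph_dict, theta_e=0.05, theta_cl=0.3, theta_tri=5):
    """Extract stylistic skeleton words (CW) via eigenvector centrality and
    stylistic content words (SW) via clustering coefficient + triangle count."""
    G = nx.DiGraph()
    for w_i, neighbors in graph_dict.items():
        for w_j, weight in neighbors.items():
            G.add_edge(w_i, w_j, weight=weight)

    # Definition 2: eigenvector centrality on the weighted graph
    centrality = nx.eigenvector_centrality(G, weight="weight", max_iter=1000)
    CW = {w for w, e in centrality.items() if e > theta_e}

    # Definition 3: clustering coefficient, plus the triangle-based
    # post-filtering step that removes local mini-clusters
    U = G.to_undirected()
    clustering = nx.clustering(U)
    triangles = nx.triangles(U)
    SW = {w for w in U.nodes
          if clustering[w] > theta_cl and triangles[w] > theta_tri}
    return CW, SW
```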
In this work, a short pattern length was adopted, as a longer word pattern may be difficult to match in a real-world case.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stylistic Pattern Construction", "sec_num": "4.3" }, { "text": "To construct the word pattern templates, permutations of the stylistic skeleton and content words, CW and SW, were adopted in our work using the rules below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stylistic Pattern Construction", "sec_num": "4.3" }, { "text": "• The stylistic skeleton words are required to exist in the pattern at any position, as such words have the top connectivity in the corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stylistic Pattern Construction", "sec_num": "4.3" }, { "text": "• A word pattern may contain more than one skeleton word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stylistic Pattern Construction", "sec_num": "4.3" }, { "text": "For example, for pattern length 3, each pattern feature is composed of an arbitrary permutation, such as \"cw sw cw\" or \"cw sw sw,\" from the sets CW and SW. The word patterns are then used to search the corpus set to retrieve the pattern frequency. The word patterns that belong to the least frequent 20% are dropped, as they are not general enough.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stylistic Pattern Construction", "sec_num": "4.3" }, { "text": "Instead of using word patterns with exact matching (bag-of-words matching) as n-grams do, this work adopts a flexible representation to increase the versatility of the pattern templates, since exact n-gram matching overfits easily and inflates the pattern set. Compared to the stylistic skeleton words, the stylistic content words are relatively easier to update or replace (i.e., new terms develop), as they are determined by the clustering coefficient, which captures interchangeable words. With respect to the stylistic content characteristics, various words that may be beyond the knowledge coverage of the training dataset could be used to describe a topic. Therefore, a flexible representation was designed by replacing each SW in a word pattern with a placeholder <*>, which means any token can be matched at that position during the matching process (e.g., \"我 <*> 肌膚\", \"特別 <*> 的\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stylistic Pattern Construction", "sec_num": "4.3" }, { "text": "The flexibility of the pattern (the wildcard representation <*>) gives our model a robust generalization ability, which increases pattern coverage for dealing with out-of-vocabulary words and with slang or coded words used in specific domains when extracting features during testing. The complete steps for stylistic word extraction and stylistic pattern construction are formally summarized in Algorithm 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stylistic Pattern Construction", "sec_num": "4.3" }, { "text": "Calculate eigenvector centrality (e) and clustering coefficient (cl) for the topic graph. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1 Stylistic Pattern Features Extraction Algorithm", "sec_num": null }, { "text": "With the stylistic word patterns extracted, the key question is how to transform a set of patterns into features for classification. 
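As a rough illustration of the template construction and wildcard matching of Section 4.3 before turning to the numerical representation (the pattern length of 3, the skeleton-word rule, and the 20% cut-off come from the text; the enumeration strategy and helper names are assumptions):

```python
from itertools import product

def build_pattern_templates(CW, SW, length=3):
    """Enumerate length-3 permutations of skeleton/content words that contain
    at least one skeleton word; content-word slots become the wildcard <*>."""
    templates = set()
    vocab = [("cw", w) for w in CW] + [("sw", w) for w in SW]
    for combo in product(vocab, repeat=length):
        if not any(kind == "cw" for kind, _ in combo):
            continue  # rule: a skeleton word must appear at some position
        templates.add(tuple(w if kind == "cw" else "<*>" for kind, w in combo))
    return templates

def count_pattern(tokens, pattern):
    """Count matches of a wildcard pattern against one tokenized sentence."""
    n, l = len(tokens), len(pattern)
    return sum(
        all(p == "<*>" or p == tokens[i + k] for k, p in enumerate(pattern))
        for i in range(n - l + 1)
    )

def prune_infrequent(templates, tokenized_sentences, drop_ratio=0.2):
    """Drop the least frequent 20% of patterns, as they are not general enough."""
    freqs = {t: sum(count_pattern(s, t) for s in tokenized_sentences)
             for t in templates}
    ranked = sorted(freqs, key=freqs.get)
    return set(ranked[int(len(ranked) * drop_ratio):])
```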
One of the traditional ways is to present the word patterns as a bag of patterns with the frequency or normalized frequency (probability of occurrence) as the numerical features. However, such bag-of-pattern representations are limited compared with current state-of-the-art deep neural network (DNN) models, which apply word embedding techniques to represent the hidden information of a word. Such embedding features are very flexible and can be utilized not only in traditional classifiers (e.g., support vector machines (SVM) or random forests) but also in DNN models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representation of Stylistic Pattern", "sec_num": "4.4" }, { "text": "Inspired by this, this work proposes a flexible numerical vector representation for the extracted word patterns in a pre-training manner, which can serve as initialized parameters for the classification models. The numerical representation is designed to leverage the uniqueness of each word pattern for each label, which in this work is the style. The uniqueness of a pattern for different labels is calculated by a weighting schema, namely the identical stylistic degree. Formally, given a set of corpora C and a set of possible styles S, where each corpus c belongs to a style s, the identical stylistic degree is defined by two components: the pattern frequency and the inverse style frequency.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representation of Stylistic Pattern", "sec_num": "4.4" }, { "text": "The pattern frequency pf is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 4 (Pattern Frequency)", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "pf_{p,s} = \\log\\Bigl(1 + \\frac{freq_{p,s}}{\\sum_{p' \\in P} freq_{p',s}}\\Bigr)", "eq_num": "(4)" } ], "section": "Definition 4 (Pattern Frequency)", "sec_num": null }, { "text": "where freq_{p,s} represents the frequency of the pattern p in the style s, P is the set of all patterns, and pf_{p,s} is the logarithmically scaled relative frequency of p in all the articles of the style s.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 4 (Pattern Frequency)", "sec_num": null }, { "text": "Pattern frequency is designed to capture frequently appearing word patterns under the assumption that the more often a pattern occurs in the corpus of a style, the more important the pattern is. As the raw frequency differs dramatically from pattern to pattern, the scale of the freq_{p,s} score may be biased due to the large frequency gap. 
A logarithm function is thus applied to prevent the identical stylistic degree from being dominated by the pattern frequency.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 4 (Pattern Frequency)", "sec_num": null }, { "text": "The inverse style frequency is computed as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 5 (Inverse Style Frequency)", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "isf_p = \\log\\Bigl(\\frac{|S|}{\\sum_{s' \\in S} pf_{p,s'}}\\Bigr)", "eq_num": "(5)" } ], "section": "Definition 5 (Inverse Style Frequency)", "sec_num": null }, { "text": "where isf_p measures the rareness of the pattern p across all articles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 5 (Inverse Style Frequency)", "sec_num": null }, { "text": "The inverse style frequency aims to decrease the importance of patterns that appear commonly across many styles. The traditional inverse document frequency in TF-IDF only examines in how many styles a pattern exists. However, the pattern frequency within a style can be treated as the intensity of the pattern's presence. This work therefore refines the inverse style frequency by introducing the pattern frequency as the indicator for calculating cross-style uniqueness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 5 (Inverse Style Frequency)", "sec_num": null }, { "text": "Finally, the uniqueness of each stylistic pattern can be presented by the identical stylistic degree, defined as isd_{p,s} = pf_{p,s} · isf_p (6), ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 5 (Inverse Style Frequency)", "sec_num": null }, { "text": "where isd_{p,s} is the identical stylistic degree that represents the importance of the pattern p to the style s.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 5 (Inverse Style Frequency)", "sec_num": null }, { "text": "With the identical stylistic degree isd_{p,s}, the uniqueness of each stylistic word pattern can be quantified for each style s. A stylistic pattern p can then be presented in a vectorized form v_p = [isd_{p,s}] for s ∈ S, with v_p ∈ R^{|S|}, namely the stylistic pattern embedding, where each component represents the identical stylistic degree isd_{p,s} of pattern p for a style s. The flexibility of the proposed identical stylistic degree also allows the weighting schema to be extended when the number of styles |S| increases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representation of Stylistic Pattern", "sec_num": "4.4" }, { "text": "In this section, we describe the classification model and the transfer learning procedure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Training", "sec_num": "5." }, { "text": "Due to the strong performance of convolutional neural network architectures on several text classification tasks in the past, CARISR is based on a multi-layer ConvNet (Kim, 2014) architecture, as shown at the bottom of Figure 1 . Consider a set of corpora C = {c_1, c_2, ..., c_i, ..., c_N}, where i ∈ [1, N]. 
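Before describing the network itself, the weighting schema of Definitions 4 and 5 and the identical stylistic degree of Eq. (6) can be sketched as follows (a rough illustration following the reconstruction above; data structures and names are assumptions):

```python
import math
from collections import defaultdict

def stylistic_pattern_embeddings(freq_by_style):
    """freq_by_style[s][p] = frequency of pattern p in style s.
    Returns v[p] = [isd(p, s) for each style s], the stylistic pattern embedding."""
    styles = sorted(freq_by_style)
    patterns = {p for s in styles for p in freq_by_style[s]}

    # Definition 4: logarithmically scaled, style-normalized pattern frequency
    pf = defaultdict(dict)
    for s in styles:
        total = sum(freq_by_style[s].values()) or 1
        for p in patterns:
            pf[s][p] = math.log(1 + freq_by_style[s].get(p, 0) / total)

    # Definition 5: inverse style frequency (rareness across styles)
    isf = {}
    for p in patterns:
        denom = sum(pf[s][p] for s in styles) or 1e-12
        isf[p] = math.log(len(styles) / denom)

    # Eq. (6): identical stylistic degree, stacked into a |S|-dimensional vector
    return {p: [pf[s][p] * isf[p] for s in styles] for p in patterns}
```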
Each article c_i was transformed into a pattern degree matrix X_i based on the stylistic pattern embedding described in the previous section,", "cite_spans": [ { "start": 163, "end": 174, "text": "(Kim, 2014)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 215, "end": 223, "text": "Figure 1", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Model Architecture", "sec_num": "5.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "X_i \\in \\mathbb{R}^{L \\times |S|}", "eq_num": "(7)" } ], "section": "Model Architecture", "sec_num": "5.1" }, { "text": "where L denotes the threshold for the maximum number of patterns in an article and |S| denotes the number of categories. If the number of patterns in an article is less than L, the remaining entries are filled with zeros as pattern scores. For the sake of brevity, we use X to denote a single instance X_i. Each entry X_{k,j} in the pattern degree matrix represents the identical stylistic degree of the k-th pattern in category s_j, where j ∈ [1, |S|] and k ∈ [1, L].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "5.1" }, { "text": "X is then fed into three paths, each composed of 1-D convolutional layers with a different filter size of 1, 3, or 8. The output is passed through a ReLU activation function (Nair & Hinton, 2010) that produces a feature map. A 1-D max pooling layer of size 3 is then applied to each feature map.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "5.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "conv\\_block_k(X) = maxpool_3\\bigl(ReLU(conv1D_k(X))\\bigr)", "eq_num": "(10)" } ], "section": "Model Architecture", "sec_num": "5.1" }, { "text": "where k denotes the filter size. Each path stacks three conv_block_k blocks; the outputs of the three paths were concatenated together and passed through two fully connected layers of dimensions 256 and 16, in that order.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "5.1" }, { "text": "h = o_1 ⊕ o_3 ⊕ o_8 (11), z = FC_16(FC_256(h)) (12). Classification: ŷ = softmax(Wz + b) (13),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "5.1" }, { "text": "where ⊕ denotes the concatenation operation and o_k is the output of the stacked blocks whose kernel size is k. We used softmax to obtain the probability of each category and cross entropy as the loss function. In order to prevent overfitting to the training data, dropout was applied to the convolution layers and fully connected layers, with corresponding dropout rates of 0.5 and 0.7. L2 regularization is also applied in the loss function, with a coefficient of 0.05. We chose a batch size of 64 and trained for 12 epochs using the Adam optimizer (Kingma & Ba, 2014) . 
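A rough Keras sketch of the architecture just described (the filter sizes 1, 3, and 8, three stacked blocks per path, max pooling of size 3, dense layers of 256 and 16, dropout rates of 0.5 and 0.7, L2 coefficient of 0.05, batch size of 64, and 12 epochs come from the text; the number of filters and other details are assumptions):

```python
from tensorflow import keras
from tensorflow.keras import layers, regularizers

def build_carisr(max_patterns, n_styles, n_classes=3):
    inp = keras.Input(shape=(max_patterns, n_styles))
    paths = []
    for k in (1, 3, 8):                      # three parallel convolutional paths
        x = inp
        for _ in range(3):                   # three stacked conv blocks per path
            x = layers.Conv1D(64, k, padding="same",
                              kernel_regularizer=regularizers.l2(0.05))(x)
            x = layers.ReLU()(x)
            x = layers.MaxPooling1D(pool_size=3, padding="same")(x)
            x = layers.Dropout(0.5)(x)
        paths.append(layers.Flatten()(x))
    h = layers.Concatenate()(paths)          # Eq. (11): concatenation
    h = layers.Dense(256, activation="relu")(h)
    h = layers.Dropout(0.7)(h)
    h = layers.Dense(16, activation="relu")(h)
    out = layers.Dense(n_classes, activation="softmax")(h)

    model = keras.Model(inp, out)
    model.compile(optimizer=keras.optimizers.Adam(),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

# model = build_carisr(max_patterns=200, n_styles=3)
# model.fit(X_train, y_train, batch_size=64, epochs=12)
```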
We used Keras (Chollet et al., 2015) to implement the CARISR architecture.", "cite_spans": [ { "start": 525, "end": 544, "text": "(Kingma & Ba, 2014)", "ref_id": "BIBREF16" }, { "start": 561, "end": 583, "text": "(Chollet et al., 2015)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "\u2295 \u2295", "sec_num": null }, { "text": "Due to the difficulty of collecting labelled sponsored reviews and self-purchased product reviews, a limited dataset was available to train the classifier to distinguish sponsored reviews from self-purchased product reviews. Inspired by the idea of transfer learning, we predicted that the flexibility of the proposed stylistic patterns could enable the proposed model to be transferable. This research thus proposes a two-stage training process to recognize sponsored reviews.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transfer Learning", "sec_num": "5.2" }, { "text": "In the first stage, a large amount of advertisement and product review data were collected as weak label data to pre-train the CARISR model. In terms of writing styles, advertisements are designed to highlight the features of sale products, while sponsored reviews are written in a manner similar to trial reviews. However, sponsored reviews are considered a special kind of advertisement, as they aim to both introduce the product and spotlight it. More specifically, both advertisements and sponsored reviews have the same objective, which is to advertise the product in a positive manner. In other words, the model could learn the diverse writing styles of advertisements in the early stages (learning from advertisement) through the weak label pre-trained procedure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transfer Learning", "sec_num": "5.2" }, { "text": "In the second stage, the transfer learning concept was applied to fine-tune the pre-trained model with what little sponsored review data were available. Having the prior knowledge of the advertisement writing style, the model could more easily learn to distinguish sponsored reviews. To fine-tune it, the parameters of CNN blocks were fixed, and the first fully connected layer in CARISR was taken as the feature vector of articles. The feature vector was fed into another fully connected layer to examine the transformation from feature vector to classification result. This approach allows CARISR to distinguish sponsored reviews from true product reviews.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transfer Learning", "sec_num": "5.2" }, { "text": "In this two-stage transfer learning process, the model's feature representation improved thanks to pre-training with a large amount of weak label data. It learned to distinguish the writing style of sponsored reviews and product reviews through fine-tuning with the small amount of true label data available. Based on the training process, we predict that even with the lack of true labeled data, the model could still perform well and avoid overfitting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transfer Learning", "sec_num": "5.2" }, { "text": "To distinguish the sponsored and product review, this research utilized the transfer learning concept which leveraged user reviews and advertisement articles as pre-training corpus and fine-tune the model with sponsored and self-purchased product reviews. 
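A minimal sketch of the second-stage fine-tuning described in Section 5.2, assuming a pre-trained model like the one sketched above: the convolutional blocks and the first fully connected layer are frozen and reused as the article feature vector, and a new classification head is trained on the small labeled set (the layer index and names are assumptions tied to that sketch):

```python
from tensorflow import keras
from tensorflow.keras import layers

def fine_tune(pretrained, n_target_classes=3, feature_layer_index=-4):
    # Reuse the network up to the first fully connected layer as a frozen
    # feature extractor (index -4 points at the 256-unit dense layer in the
    # sketch above); its weights are not updated during fine-tuning.
    feature_extractor = keras.Model(
        pretrained.input, pretrained.layers[feature_layer_index].output)
    feature_extractor.trainable = False

    # New classification head trained on the small sponsored-review dataset
    inp = keras.Input(shape=pretrained.input_shape[1:])
    features = feature_extractor(inp)
    out = layers.Dense(n_target_classes, activation="softmax")(features)
    model = keras.Model(inp, out)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# tuned = fine_tune(pretrained_carisr)
# tuned.fit(X_pixnet_train, y_pixnet_train, batch_size=64, epochs=12)
```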
For the entire training process, two datasets were collected and are introduced below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "6.1" }, { "text": "The first dataset was collected from UrCosme, a famous makeup product review website in Taiwan, with three classes, Self-purchased product review, Trial product review, and Advertisement, where the three classes are tagged and verified by UrCosme. It contains a total of 194,099 makeup reviews from 17,006 users from 2015 to June 2018 and includes 22,094 products and 4,594 articles from 498 brands.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "6.1" }, { "text": "The second dataset was collected from PIXNET, an online social blogging platform in Taiwan; makeup product-related articles were collected with three classes, Self-purchased product review, Trial product review, and the target Sponsored review. Since PIXNET provides no article tags, several rules were defined to identify the three classes. First, Sponsored reviews are the articles that contain URL links with a blogger's specific identification tokens. Such URLs are widely used to trace web referrals from a blogger to the product web page, record the number of clicks, and generate profit for the blogger. The text content of articles with such URLs was collected under the Sponsored review label. Second, articles matching the keywords \"邀稿\" and \"試用\" were labeled as Trial product reviews, and other normal product reviews were labeled as Self-purchased product reviews. After categorizing the articles, we manually picked 125 articles from each category as the PIXNET dataset and cross-validated the labels with five experts. To prevent our model from learning these specific contents, all such clues (including the URLs, keywords, and tokens used to create the labels) were removed in advance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "6.1" }, { "text": "Due to the lack of sponsored reviews, the UrCosme dataset is considered the weak label dataset for the main task, the classification of sponsored and product reviews. The PIXNET dataset is treated as the ground truth dataset, as it is labeled manually. The detailed data distribution of the two datasets is shown in Table 1 and Table 2 . Experiment 6.3 uses the training part of the UrCosme dataset for model pre-training but evaluates on the testing part of the PIXNET dataset. In experiment 6.4, the complete PIXNET dataset is used to evaluate the model pre-trained on the UrCosme dataset. For experiment 6.5, the PIXNET dataset is down-sampled at a 4:1 ratio for fine-tuning and evaluation. 
{ "text": "Trial product review 87,508 10,000 2,423", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "6.1" }, { "text": "Self-purchased product review 106,591 10,000 2,423 ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "6.1" }, { "text": "Trial product review 125", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sponsored review 125", "sec_num": null }, { "text": "Self-purchased product review 125", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sponsored review 125", "sec_num": null }, { "text": "To represent a text corpus, term frequency-inverse document frequency (TF-IDF) has been widely used in text classification tasks. It automatically identifies the important n-grams in the corpus and represents each document in terms of those n-grams.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Methods", "sec_num": "6.2" }, { "text": "With this representation, every article was transformed into a 2,500-dimension TF-IDF feature vector built from the extracted important n-grams.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Methods", "sec_num": "6.2" }, { "text": "In deep neural network (DNN) approaches, a text corpus is usually represented by a sequence of word vectors, namely word embeddings. The word embeddings can either be provided as pre-trained word vectors or be learned by the DNN model during training. In this work, pre-trained 400-dimension word vectors from the YZU NLP Lab 1 , trained on traditional-Chinese Wikipedia, were used to initialize the word representations, and the embeddings were set as trainable so that they could be fine-tuned during learning. For the classification models, both a traditional model, Logistic Regression (LR), and DNN models were applied. The LR model learns a specific weight for each feature dimension, which provides a more interpretable basis for analysis. For the DNN models, text-CNN and LSTM were applied in the experiments. The text-CNN (Kim, 2014) captures local word features with n-gram windows; by stacking multiple convolutional layers, the model summarizes these local features into a representation of the document. This work set the filter size of the convolutional layers to 3, stacked 3 convolutional layers, and followed them with 512- and 128-dimension dense layers for feature summarization. The LSTM model consumes the input word sequence word by word and models the relations between words step by step; in this work, a bi-directional LSTM with an attention mechanism, an architecture that has achieved state-of-the-art performance on many NLP tasks, was applied and connected to a 128-dimension fully connected layer for feature summarization. For both DNN models, the categorical predictions were produced by a Softmax activation over the summarized features.", "cite_spans": [ { "start": 956, "end": 967, "text": "(Kim, 2014)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Methods", "sec_num": "6.2" }, { "text": "In the first training stage, all of the models were trained to distinguish the three classes using the UrCosme dataset as weak-label pre-training for the main task, the classification of sponsored versus product reviews.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weak Label Classification Training", "sec_num": "6.3" },
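As a concrete reference for the text-CNN baseline described above (filter size 3, three stacked convolution layers, 512- and 128-unit dense layers, and a softmax output), a minimal Keras sketch might look as follows; the sequence length, vocabulary size, pooling choice, and random embedding initialization are assumptions, whereas the paper initializes the embeddings from the pre-trained 400-dimension vectors.

```python
# Sketch of a text-CNN baseline with the hyperparameters stated above (illustrative only).
from tensorflow.keras import layers, models

def build_text_cnn(seq_len=500, vocab_size=50000, emb_dim=400, num_classes=3):
    inp = layers.Input(shape=(seq_len,))
    # In the paper the embeddings are initialized from pre-trained 400-d vectors; random here.
    x = layers.Embedding(vocab_size, emb_dim, trainable=True)(inp)
    for _ in range(3):                                    # three stacked convolution layers
        x = layers.Conv1D(128, kernel_size=3, activation="relu", padding="same")(x)
    x = layers.GlobalMaxPooling1D()(x)
    x = layers.Dense(512, activation="relu")(x)           # feature-summary layers
    x = layers.Dense(128, activation="relu")(x)
    out = layers.Dense(num_classes, activation="softmax")(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model
```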
{ "text": "After the model pre-training, the testing data from UrCosme was applied to evaluate the pre-training performance, the results of which are shown in Table 3 . Overall, the proposed CARISR did not have the best performance in the first stage of the training process compared with the TF-IDF baseline and the LSTM-based models. However, after analyzing the model weights, we observed that the baseline results were easily influenced by specific keywords. An example from a real article is discussed below. The example article was a trial product review; it was correctly classified by the baseline models but was misclassified as an advertisement by the CARISR model. Although the article was misclassified, by human judgement its writing style is in fact closer to an advertisement than to a genuine review. By analyzing the weight of each term in the LR model, we found that the model relied on a few specific terms, such as activity (活動), satisfy (滿意), and invite (邀請). In such cases, the model could easily be misled by malicious writers through these specific terms.", "cite_spans": [], "ref_spans": [ { "start": 385, "end": 392, "text": "Table 3", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Weak Label Classification Training", "sec_num": "6.3" }, { "text": "感謝 UrCosme 與 SK-II，讓我參與「超肌因鑽光淨白精華」新品活動！ 超肌因鑽光淨白精華 0.7ml x", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weak Label Classification Training", "sec_num": "6.3" }, { "text": "Based on this example, although the accuracy of the CARISR model was lower, it gave greater consideration to the relation between word structures in the article as a whole. The following experiment shows that the CARISR model was better able to resist the influence of specific terms. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weak Label Classification Training", "sec_num": "6.3" }, { "text": "In the previous section, the pre-trained models were evaluated with the testing data from the weak-labeled UrCosme dataset. In this section, the pre-trained models are evaluated with the human-labeled dataset; that is, the reviews from PIXNET were used as testing data, with the Advertisement label in UrCosme replaced by the Sponsored review label. As shown in Figure 2 , although the baseline models performed better in the pre-training setting, they performed worse than CARISR on the PIXNET dataset. More importantly, the baseline methods could not successfully differentiate sponsored reviews. This indicates that the baseline models fit the training dataset well but suffered from overfitting: they relied heavily on specific terms as clues, which left them insufficiently general to apply to different testing data, even data from the same domain (in this task, both datasets concern sponsored makeup reviews). Instead, CARISR leverages stylistic patterns that retain features of sentence structure and writing style rather than only specific keywords or n-grams.", "cite_spans": [], "ref_spans": [ { "start": 350, "end": 358, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Sponsored Review Testing", "sec_num": "6.4" },
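The term-weight analysis mentioned above can be reproduced with a sketch along these lines: fit the TF-IDF + Logistic Regression baseline and rank the terms by their learned weight for the Advertisement class. The variable names (train_texts, train_labels) and the assumption that the texts are already word-segmented are illustrative, not taken from the paper.

```python
# Hypothetical sketch of the baseline weight analysis (assumes pre-segmented texts).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

vectorizer = TfidfVectorizer(max_features=2500)          # 2,500-dimension TF-IDF features
X_train = vectorizer.fit_transform(train_texts)          # train_texts / train_labels: weak-label corpus (assumed)
clf = LogisticRegression(max_iter=1000).fit(X_train, train_labels)

# Rank features by their weight for the "Advertisement" class to surface the
# keywords (e.g. 活動, 滿意, 邀請) that dominate the baseline's decisions.
ad_idx = list(clf.classes_).index("Advertisement")
terms = vectorizer.get_feature_names_out()
top = sorted(zip(clf.coef_[ad_idx], terms), reverse=True)[:20]
for weight, term in top:
    print(f"{term}\t{weight:.3f}")
```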
{ "text": "Therefore, even if the testing dataset was slightly changed, the model was still able to recognize the advertisement writing style.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sponsored Review Testing", "sec_num": "6.4" }, { "text": "In real-world sponsored reviews, malicious writers usually pretend that the advertisement is a self-purchased product review. Many words used in commercial reviews also appear in self-purchased product reviews; therefore, it is easy for such writers to avoid detection when the model relies heavily on specific terms, as the baseline methods do. The proposed model, CARISR, was better able to avoid this problem, making it more suitable for real-world situations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sponsored Review Testing", "sec_num": "6.4" }, { "text": "when applied to the PIXNET dataset. AVG is the average F1-score for all three categories, and Sponsored is the F1-score for sponsored reviews.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2. Comparison of TF-IDF, text-CNN, bi-LSTM-attention, and CARISR", "sec_num": null }, { "text": "According to the classification results presented in the previous section, CARISR demonstrated the ability to recognize the latent writing styles of sponsored articles. Transfer learning was applied to fine-tune the DNN models to boost their performance using a small number of manually collected sponsored reviews from PIXNET. One-fifth of the PIXNET dataset (25 samples for each class) was kept for the final testing, and the rest of the data were used for fine-tuning (100 samples for each class). Note that the TF-IDF model was excluded from this section, as it cannot undergo standard transfer learning. The results are shown in Figure 3 .", "cite_spans": [], "ref_spans": [ { "start": 632, "end": 640, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Transfer Learning with Sponsored Reviews", "sec_num": "6.5" }, { "text": "All three of the tested models showed better performance after adjusting the parameters with transfer learning. For three-label classification, the text-CNN, bi-LSTM-attention, and CARISR had F1-scores of 0.21, 0.47, and 0.51, respectively. Furthermore, our analysis found that a large percentage of the collected sponsored reviews were very similar to advertisements, which may be the reason why CARISR-Trans3 did not perform as well as expected.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transfer Learning with Sponsored Reviews", "sec_num": "6.5" }, { "text": "Therefore, we conducted another experiment that used only sponsored reviews and self-purchased product reviews, as checked by humans, to build a binary classification model. As shown in Figure 3 , with the application of two-category transfer learning (Transfer-2), the CARISR F1-score improved to 0.70 and outperformed the bi-LSTM-attention by 0.07 points.", "cite_spans": [], "ref_spans": [ { "start": 186, "end": 194, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Transfer Learning with Sponsored Reviews", "sec_num": "6.5" }, { "text": "Transfer-3 indicates the results of the models after fine-tuning using three categories: sponsored, trial product, and self-purchased product review. Transfer-2 shows the results of the models after fine-tuning with only sponsored and self-purchased product reviews.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 3. Comparison between original method and transfer learning.", "sec_num": null },
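A minimal sketch of the 4:1 fine-tuning/testing split of the PIXNET data (100 fine-tuning and 25 test articles per class) is shown below; pixnet_texts and pixnet_labels are assumed to hold the 125 manually verified articles of each class, and the random seed is illustrative.

```python
# Stratified 4:1 split of the PIXNET dataset for the transfer learning experiment.
from sklearn.model_selection import train_test_split

finetune_x, test_x, finetune_y, test_y = train_test_split(
    pixnet_texts, pixnet_labels,
    test_size=0.2,            # keep one-fifth (25 per class) for final testing
    stratify=pixnet_labels,   # preserve the balanced class distribution
    random_state=42,
)
```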
{ "text": "This research mainly focused on quantifying the reliability problem caused by sponsored articles on popular Mandarin forums and websites. To address the problem with limited labeled data, we proposed a framework, CARISR, that combines weak-label and transfer learning methods. CARISR can learn implicit writing styles from weak-label data, and it can be further improved by transfer learning with a minimal amount of manually labelled data. Thanks to its graph-based features, CARISR is not only more robust but also generalizes better than traditional token-based features. Experimental results showed that our model can correctly recognize around 70% of the sponsored articles in the human-labeled dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7." }, { "text": "Our work provides a new perspective on, and a further improvement to, reliability tasks. In the future, we plan to merge graph-based and semantic features to capture more of the underlying meaning in context. Meanwhile, enriching the stylistic word patterns could also improve model comprehension.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7." }, { "text": "Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013) . Efficient estimation of word representations in vector space. In arXiv preprint arXiv:1301.3781.", "cite_spans": [ { "start": 45, "end": 98, "text": "Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7." }, { "text": "Nair, V., & Hinton, G. E. (2010) . Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), 807-814.", "cite_spans": [ { "start": 10, "end": 32, "text": "& Hinton, G. E. (2010)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7." }, { "text": "Page, L., Brin, S., Motwani, R., & Winograd, T. (1999) . The PageRank citation ranking: Bringing order to the web. (Technical Report No. 1999-66) Qu, Z., Song, X., Zheng, S., Wang, X., Song, X., & Li, Z. (2018) . Improved Bayes method based on TF-IDF feature and grade factor feature for Chinese information classification. Ruchansky, N., Seo, S., & Liu, Y. (2017) . CSI: A hybrid deep model for fake news detection.", "cite_spans": [ { "start": 10, "end": 54, "text": "Brin, S., Motwani, R., & Winograd, T. (1999)", "ref_id": null }, { "start": 115, "end": 145, "text": "(Technical Report No. 1999-66)", "ref_id": null }, { "start": 175, "end": 210, "text": "Wang, X., Song, X., & Li, Z. (2018)", "ref_id": null }, { "start": 348, "end": 364, "text": "& Liu, Y. (2017)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7." }, { "text": "In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, 797-806. doi: 10.1145/3132847.3132877 Saravia, E., Liu, H.-C. T., Huang, Y.-H., Wu, J., & Chen, Y.-S.
(2018) (Graves, Fernández, Gomez & Schmidhuber, 2006); when the amount of training data is large enough (typically more than 3,000 hours), such models can even map the acoustics directly to words (Soltau, Liao & Sak, 2016; Li, Ye, Das, Zhao & Gong, 2018).", "cite_spans": [ { "start": 86, "end": 123, "text": "797-806. doi: 10.1145/3132847.3132877", "ref_id": null }, { "start": 137, "end": 194, "text": "Liu, H.-C. T., Huang, Y.-H., Wu, J., & Chen, Y.-S. (2018)", "ref_id": null }, { "start": 195, "end": 240, "text": "(Graves, Fern\u00e1ndez, Gomez & Schmidhuber, 2006", "ref_id": "BIBREF27" }, { "start": 277, "end": 303, "text": "(Soltau, Liao & Sak, 2016)", "ref_id": "BIBREF43" }, { "start": 304, "end": 336, "text": "(Li, Ye, Das, Zhao & Gong, 2018)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Given an acoustic feature sequence X of length T and a label sequence C of length L, where C = {c_l ∈ U | l = 1, …, L} and U is the set of possible labels, CTC introduces an additional blank label as a boundary between labels, so that the frame-level label sequence is Z = {z_t ∈ U ∪ {blank} | t = 1, …, T}. The posterior probability of C given X can be written as p(C|X) = Σ_Z p(C|Z, X) p(Z|X) ≈ Σ_Z p(C|Z) p(Z|X) (1). Because CTC assumes that the character emitted at each frame is conditionally independent of the other frames, p(C|Z, X) ≈ p(C|Z), where p(C|Z) can be viewed as the CTC label model. Expanding it by Bayes' rule and the chain rule, and then applying the conditional-independence assumption, gives p(C|Z) ≈ ∏_{t=1}^{T} p(z_t | z_{t-1}, C) · p(C) / ∏_{t=1}^{T} p(z_t)", "eq_num": "(2)" } ], "section": "Conclusion", "sec_num": "7." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "where p(C) is a character-level language model, p(z_t) is the prior probability of each state, and p(z_t | z_{t-1}, C) is the state transition probability. To allow blank outputs, CTC adjusts the label sequence C of length L into the augmented sequence C′ = (blank, c_1, blank, c_2, …, blank, c_L, blank), with z_t ∈ U ∪ {blank} and |C′| = 2L + 1", "eq_num": "(3)" } ], "section": "Conclusion", "sec_num": "7." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "The state transition probability can be written as p(z_t = c′_k | z_{t-1} = c′_{k′}, C) = 1 if k = k′; 1 if k = k′ + 1; 1 if k = k′ + 2, k is even, and c′_k ≠ c′_{k-2}; 0 otherwise (4). These cases correspond, in order, to an HMM-like self-loop, a transition to the next state, and skipping the blank state, following the topology below. 圖 1. CTC 拓樸結構 [Figure 1. CTC topology] On the other hand, p(Z|X) is the CTC acoustic model; expanding it with the chain rule and then applying the conditional-independence assumption gives p(Z|X) = ∏_{t=1}^{T} p(z_t | z_{1:t-1}, X) ≈ ∏_{t=1}^{T} p(z_t | X)", "eq_num": "(5)" } ], "section": "Conclusion", "sec_num": "7." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "where p(z_t | X) is the softmax output of the network. Combining Eq. (2) and Eq. (5) gives p(C|X) ≈ Σ_Z ∏_{t=1}^{T} p(z_t | z_{t-1}, C) p(z_t | X) · p(C) / ∏_{t=1}^{T} p(z_t)", "eq_num": "(6)" } ], "section": "Conclusion", "sec_num": "7." }, { "text": "The CTC objective function usually does not include p(C), and is therefore defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p_CTC(C|X) ≜ Σ_Z ∏_{t=1}^{T} p(z_t | z_{t-1}, C) p(z_t | X)", "eq_num": "(7)" } ], "section": "Conclusion", "sec_num": "7." }, { "text": "The expression above is the CTC objective; during training, the loss function to be minimized is L_CTC = −ln p_CTC(C*|X), where C* is the correct character sequence of the training utterance. The smaller the loss, the larger the probability of outputting the correct labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7." }, { "text": "Unlike CTC, which assumes conditional independence between the acoustic frames and the characters they emit, the attention model directly estimates the posterior probability of the characters given the acoustic features. Its objective function can be defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attention Model (Attention-based Encoder-Decoder Network)", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(C|X) = ∏_{l=1}^{L} p(c_l | c_{1:l-1}, X)", "eq_num": "(8)" } ], "section": "Attention Model (Attention-based Encoder-Decoder Network)", "sec_num": "2.2" }, { "text": "Each factor p(c_l | c_{1:l-1}, X) is obtained from the following recursions: h_t = Encoder(X) (9); f_{l,t} = K ∗ a_{l-1} (10); e_{l,t} = gᵀ tanh(W q_{l-1} + V h_t + U f_{l,t} + b) (11); a_{l,t} = exp(e_{l,t}) / Σ_{t′} exp(e_{l,t′}) (12); r_l = Σ_t a_{l,t} h_t (13); p(c_l | c_{1:l-1}, X) = Decoder(r_l, q_{l-1}, c_{l-1}) (14). Here h_t is the encoder output at frame t, q_{l-1} is the decoder state, a_{l,t} are the attention weights, f_{l,t} is the location feature obtained by convolving the previous attention weights with the kernel K, and r_l is the context vector.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attention Model (Attention-based Encoder-Decoder Network)", "sec_num": "2.2" }, { "text": "http://nlp.innobic.yzu.edu.tw/demo/word-embedding.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This paper investigated the two mainstream approaches to end-to-end speech recognition, as well as the effect of the CTC-Attention model weighting on utterances of different lengths. We found that on short utterances the CTC-Attention model not only outperforms TDNN-LFMMI, but also offers the flexibility of adjusting the decoding weights according to utterance length. In addition, because character-level prediction targets and a character-level language model are used, the out-of-vocabulary problem can be handled more effectively. In recent years, many methods for optimizing the training of sequence-to-sequence models have been proposed, such as (Pereyra, Tucker, Chorowski, Kaiser & Hinton, 2017) , which avoids overconfidence, and Cold Fusion (Sriram, Jun, Satheesh & Coates, 2018 ", "cite_spans": [ { "start": 226, "end": 277, "text": "(Pereyra, Tucker, Chorowski, Kaiser & Hinton, 2017)", "ref_id": "BIBREF37" }, { "start": 314, "end": 351, "text": "(Sriram, Jun, Satheesh & Coates, 2018", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "結論與未來展望 (Conclusion and Future works)", "sec_num": "4."
}, { "text": "Please send application to:The ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "To Register\uff1a", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The Third International Chinese Language Processing Bakeoff: Word Segmentation and Named Entity Recognition", "authors": [ { "first": "G.-A", "middle": [], "last": "Levow", "suffix": "" } ], "year": 2006, "venue": "Proceedings the Fifth SIGHAN Workshop on Chinese Language Processing", "volume": "", "issue": "", "pages": "108--117", "other_ids": {}, "num": null, "urls": [], "raw_text": "Levow, G.-A. (2006). The Third International Chinese Language Processing Bakeoff: Word Segmentation and Named Entity Recognition. In Proceedings the Fifth SIGHAN Workshop on Chinese Language Processing, 108-117.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies", "authors": [ { "first": "T", "middle": [], "last": "Linzen", "suffix": "" }, { "first": "E", "middle": [], "last": "Dupoux", "suffix": "" }, { "first": "Y", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2016, "venue": "TACL", "volume": "4", "issue": "", "pages": "521--535", "other_ids": { "DOI": [ "10.1162/tacl_a_00115" ] }, "num": null, "urls": [], "raw_text": "Linzen, T., Dupoux, E., & Goldberg, Y. (2016). Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies. TACL, 4 (2016), 521-535. doi: 10.1162/tacl_a_00115", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Capturing Long-range Contextual Dependencies with Memory-enhanced Conditional Random Fields", "authors": [ { "first": "F", "middle": [], "last": "Liu", "suffix": "" }, { "first": "T", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "T", "middle": [], "last": "Cohn", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "555--565", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liu, F., Baldwin, T., & Cohn, T. (2017). Capturing Long-range Contextual Dependencies with Memory-enhanced Conditional Random Fields. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (IJCNLP 2017), 555-565.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Efficient Estimation of Word Representations in Vector Space", "authors": [ { "first": "T", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "K", "middle": [], "last": "Chen", "suffix": "" }, { "first": "G", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "J", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient Estimation of Word Representations in Vector Space. CoRR abs/1301.3781", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Jieba\" (Chinese for \"to stutter\") Chinese text segmentation: built to be the best Python Chinese word segmentation module", "authors": [ { "first": "J", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sun, J. (2012). 
\"Jieba\" (Chinese for \"to stutter\") Chinese text segmentation: built to be the best Python Chinese word segmentation module.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Toward an enhanced arabic text classification using cosine similarity and latent semantic indexing", "authors": [ { "first": "F", "middle": [ "S" ], "last": "Al-Anzi", "suffix": "" }, { "first": "D", "middle": [], "last": "Abuzeina", "suffix": "" } ], "year": 2017, "venue": "Journal of King Saud University-Computer and Information Sciences", "volume": "29", "issue": "2", "pages": "189--195", "other_ids": { "DOI": [ "10.1016/j.jksuci.2016.04.001" ] }, "num": null, "urls": [], "raw_text": "Al-Anzi, F. S., & AbuZeina, D. (2017). Toward an enhanced arabic text classification using cosine similarity and latent semantic indexing. Journal of King Saud University-Computer and Information Sciences, 29(2), 189-195. doi: 10.1016/j.jksuci.2016.04.001", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Unsupervised graph-based patterns extraction for emotion classification", "authors": [ { "first": "C", "middle": [], "last": "Argueta", "suffix": "" }, { "first": "E", "middle": [], "last": "Saravia", "suffix": "" }, { "first": "Y.-S", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 ieee/acm international conference on advances in social networks analysis and mining 2015", "volume": "", "issue": "", "pages": "336--341", "other_ids": { "DOI": [ "10.1145/2808797.2809419" ] }, "num": null, "urls": [], "raw_text": "Argueta, C., Saravia, E., & Chen, Y.-S. (2015). Unsupervised graph-based patterns extraction for emotion classification. In Proceedings of the 2015 ieee/acm international conference on advances in social networks analysis and mining 2015, 336-341. doi: 10.1145/2808797.2809419", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Latent dirichlet allocation", "authors": [ { "first": "D", "middle": [ "M" ], "last": "Blei", "suffix": "" }, { "first": "A", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "M", "middle": [ "I" ], "last": "Jordan", "suffix": "" } ], "year": 2003, "venue": "Journal of machine Learning research", "volume": "3", "issue": "", "pages": "993--1022", "other_ids": {}, "num": null, "urls": [], "raw_text": "Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent dirichlet allocation. Journal of machine Learning research, 3(Jan), 993-1022.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Rumoureval: Determining rumour veracity and support for rumours", "authors": [], "year": null, "venue": "", "volume": "8", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1704.05972" ] }, "num": null, "urls": [], "raw_text": "Semeval-2017 task 8: Rumoureval: Determining rumour veracity and support for rumours. 
In arXiv preprint arXiv:1704.05972.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Stylometrybased approach for detecting writing style changes in literary texts", "authors": [ { "first": "H", "middle": [ "M" ], "last": "Gomez Adorno", "suffix": "" }, { "first": "G", "middle": [], "last": "Rios", "suffix": "" }, { "first": "J", "middle": [ "P" ], "last": "Posadas Dur\u00e1n", "suffix": "" }, { "first": "G", "middle": [], "last": "Sidorov", "suffix": "" }, { "first": "G", "middle": [], "last": "Sierra", "suffix": "" } ], "year": 2018, "venue": "Computaci\u00f3n y Sistemas", "volume": "22", "issue": "1", "pages": "47--53", "other_ids": { "DOI": [ "10.13053/CyS-22-1-2882" ] }, "num": null, "urls": [], "raw_text": "Gomez Adorno, H. M., Rios, G., Posadas Dur\u00e1n, J. P., Sidorov, G., & Sierra, G. (2018). Stylometrybased approach for detecting writing style changes in literary texts. Computaci\u00f3n y Sistemas, 22(1), 47-53. doi: 10.13053/CyS-22-1-2882", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Cross-domain failures of fake news detection", "authors": [ { "first": "M", "middle": [], "last": "Janicka", "suffix": "" }, { "first": "M", "middle": [], "last": "Pszona", "suffix": "" }, { "first": "A", "middle": [], "last": "Wawer", "suffix": "" } ], "year": 2019, "venue": "Computaci\u00f3n y Sistemas", "volume": "23", "issue": "3", "pages": "1089--1097", "other_ids": { "DOI": [ "10.13053/CyS-23-3-3281" ] }, "num": null, "urls": [], "raw_text": "Janicka, M., Pszona, M., & Wawer, A. (2019). Cross-domain failures of fake news detection. Computaci\u00f3n y Sistemas, 23(3), 1089-1097. doi: 10.13053/CyS-23-3-3281", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Learning hierarchical discourse-level structure for fake news detection", "authors": [ { "first": "H", "middle": [], "last": "Karimi", "suffix": "" }, { "first": "J", "middle": [], "last": "Tang", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXivpreprintarXiv:1903.07389" ] }, "num": null, "urls": [], "raw_text": "Karimi, H., & Tang, J. (2019). Learning hierarchical discourse-level structure for fake news detection. In arXiv preprint arXiv:1903.07389.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A benchmark study on machine learning methods for fake news detection", "authors": [ { "first": "J", "middle": [ "Y" ], "last": "Khan", "suffix": "" }, { "first": "M", "middle": [ "T I" ], "last": "Khondaker", "suffix": "" }, { "first": "A", "middle": [], "last": "Iqbal", "suffix": "" }, { "first": "S", "middle": [], "last": "Afroz", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXivpreprintarXiv:1905.04749" ] }, "num": null, "urls": [], "raw_text": "Khan, J. Y., Khondaker, M. T. I., Iqbal, A., & Afroz, S. (2019). A benchmark study on machine learning methods for fake news detection. In arXiv preprint arXiv:1905.04749.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Convolutional neural networks for sentence classification", "authors": [ { "first": "Y", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1408.5882" ] }, "num": null, "urls": [], "raw_text": "Kim, Y. (2014). Convolutional neural networks for sentence classification. 
In arXiv preprint arXiv:1408.5882.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "D", "middle": [ "P" ], "last": "Kingma", "suffix": "" }, { "first": "J", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6980" ] }, "num": null, "urls": [], "raw_text": "Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. In arXiv preprint arXiv:1412.6980.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "All-in-one: Multi-task learning for rumour verification", "authors": [ { "first": "E", "middle": [], "last": "Kochkina", "suffix": "" }, { "first": "M", "middle": [], "last": "Liakata", "suffix": "" }, { "first": "A", "middle": [], "last": "Zubiaga", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXivpreprintarXiv:1806.03713" ] }, "num": null, "urls": [], "raw_text": "Kochkina, E., Liakata, M., & Zubiaga, A. (2018). All-in-one: Multi-task learning for rumour verification. In arXiv preprint arXiv:1806.03713.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Separating facts from fiction: Linguistic models to classify suspicious and trusted news posts on twitter", "authors": [ { "first": "S", "middle": [], "last": "Volkova", "suffix": "" }, { "first": "K", "middle": [], "last": "Shaffer", "suffix": "" }, { "first": "J", "middle": [ "Y" ], "last": "Jang", "suffix": "" }, { "first": "N", "middle": [], "last": "Hodas", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th annual meeting of the association for computational linguistics", "volume": "2", "issue": "", "pages": "647--653", "other_ids": { "DOI": [ "10.18653/v1/P17-2102" ] }, "num": null, "urls": [], "raw_text": "Volkova, S., Shaffer, K., Jang, J. Y., & Hodas, N. (2017). Separating facts from fiction: Linguistic models to classify suspicious and trusted news posts on twitter. In Proceedings of the 55th annual meeting of the association for computational linguistics,volume 2: Short papers, 647-653. doi: 10.18653/v1/P17-2102", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "The spread of true and false news online", "authors": [ { "first": "S", "middle": [], "last": "Vosoughi", "suffix": "" }, { "first": "D", "middle": [], "last": "Roy", "suffix": "" }, { "first": "S", "middle": [], "last": "Aral", "suffix": "" } ], "year": 2018, "venue": "Science", "volume": "359", "issue": "6380", "pages": "1146--1151", "other_ids": { "DOI": [ "10.1126/science.aap9559" ] }, "num": null, "urls": [], "raw_text": "Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146-1151. 
doi : 10.1126/science.aap9559", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Reasearch on feature mapping based on labels information in multi-label text classification", "authors": [ { "first": "T", "middle": [], "last": "Wang", "suffix": "" }, { "first": "T", "middle": [], "last": "Luo", "suffix": "" }, { "first": "J", "middle": [], "last": "Li", "suffix": "" }, { "first": "C", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2017, "venue": "Proceedings of 2017 7th ieee international conference on Electronics information and emergency communication (iceiec)", "volume": "", "issue": "", "pages": "452--456", "other_ids": { "DOI": [ "10.1109/ICEIEC.2017.8076603" ] }, "num": null, "urls": [], "raw_text": "Wang, T., Luo, T., Li, J., & Wang, C. (2017). Reasearch on feature mapping based on labels information in multi-label text classification. In Proceedings of 2017 7th ieee international conference on Electronics information and emergency communication (iceiec), 452-456. doi: 10.1109/ICEIEC.2017.8076603", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Eann: Event adversarial neural networks for multi-modal fake news detection", "authors": [ { "first": "Y", "middle": [], "last": "Wang", "suffix": "" }, { "first": "F", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Z", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Y", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "G", "middle": [], "last": "Xun", "suffix": "" }, { "first": "K", "middle": [], "last": "Jha", "suffix": "" }, { "first": "", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 24th acm sigkdd international conference on knowledge discovery & data mining", "volume": "", "issue": "", "pages": "849--857", "other_ids": { "DOI": [ "10.1145/3219819.3219903" ] }, "num": null, "urls": [], "raw_text": "Wang, Y., Ma, F., Jin, Z., Yuan, Y., Xun, G., Jha, K., \u2026 Gao, J. (2018). Eann: Event adversarial neural networks for multi-modal fake news detection. In Proceedings of the 24th acm sigkdd international conference on knowledge discovery & data mining, 849-857. doi :10.1145/3219819.3219903", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Toward computational fact-checking", "authors": [ { "first": "Y", "middle": [], "last": "Wu", "suffix": "" }, { "first": "P", "middle": [ "K" ], "last": "Agarwal", "suffix": "" }, { "first": "C", "middle": [], "last": "Li", "suffix": "" }, { "first": "J", "middle": [], "last": "Yang", "suffix": "" }, { "first": "C", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the VLDB Endowment", "volume": "7", "issue": "", "pages": "589--600", "other_ids": { "DOI": [ "10.14778/2732286.2732295" ] }, "num": null, "urls": [], "raw_text": "Wu, Y., Agarwal, P. K., Li, C., Yang, J., & Yu, C. (2014). Toward computational fact-checking. Proceedings of the VLDB Endowment, 7(7), 589-600. 
doi: 10.14778/2732286.2732295", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Listen, attend and spell: A neural network for large vocabulary conversational speech recognition", "authors": [ { "first": "W", "middle": [], "last": "Chan", "suffix": "" }, { "first": "N", "middle": [], "last": "Jaitly", "suffix": "" }, { "first": "Q", "middle": [], "last": "Le", "suffix": "" }, { "first": "O", "middle": [], "last": "Vinyals", "suffix": "" } ], "year": 2016, "venue": "Proceedings of ICASSP 2016", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1109/ICASSP.2016.7472621" ] }, "num": null, "urls": [], "raw_text": "Chan, W., Jaitly, N., Le, Q., & Vinyals, O. (2016). Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In Proceedings of ICASSP 2016. doi: 10.1109/ICASSP.2016.7472621", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Attention-Based Models for Speech Recognition", "authors": [ { "first": "J", "middle": [], "last": "Chorowski", "suffix": "" }, { "first": "D", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "D", "middle": [], "last": "Serdyuk", "suffix": "" }, { "first": "K", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Y", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "Proceedings of NIPS 2015", "volume": "", "issue": "", "pages": "577--585", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chorowski, J., Bahdanau, D., Serdyuk, D., Cho, K., & Bengio, Y. (2015). Attention-Based Models for Speech Recognition. In Proceedings of NIPS 2015, 577-585.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "The Application of Hidden Markov Models in Speech Recognition", "authors": [ { "first": "M", "middle": [], "last": "Gales", "suffix": "" }, { "first": "S", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2008, "venue": "Foundations and Trends\u00ae in Signal Processing", "volume": "1", "issue": "3", "pages": "195--304", "other_ids": { "DOI": [ "10.1561/2000000004" ] }, "num": null, "urls": [], "raw_text": "Gales, M. & Yang, S. (2008). The Application of Hidden Markov Models in Speech Recognition. Foundations and Trends\u00ae in Signal Processing, 1(3), 195-304. doi: 10.1561/2000000004", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Learning to forget: Continual prediction with LSTM", "authors": [ { "first": "F", "middle": [ "A" ], "last": "Gers", "suffix": "" }, { "first": "J", "middle": [], "last": "Schmidhuber", "suffix": "" }, { "first": "F", "middle": [], "last": "Cummins", "suffix": "" } ], "year": 1999, "venue": "Proceedings of ICANN 1999", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1049/cp:19991218" ] }, "num": null, "urls": [], "raw_text": "Gers, F. A., Schmidhuber, J., & Cummins, F. (1999). Learning to forget: Continual prediction with LSTM. In Proceedings of ICANN 1999. 
doi: 10.1049/cp:19991218", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks", "authors": [ { "first": "A", "middle": [], "last": "Graves", "suffix": "" }, { "first": "S", "middle": [], "last": "Fern\u00e1ndez", "suffix": "" }, { "first": "F", "middle": [], "last": "Gomez", "suffix": "" }, { "first": "J", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 2006, "venue": "Proceedings of ICML 2006", "volume": "", "issue": "", "pages": "369--376", "other_ids": { "DOI": [ "10.1145/1143844.1143891" ] }, "num": null, "urls": [], "raw_text": "Graves, A., Fern\u00e1ndez, S., Gomez, F., & Schmidhuber, J. (2006). Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of ICML 2006, 369-376. doi: 10.1145/1143844.1143891", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Hybrid speech recognition with deep bidirectional LSTM", "authors": [ { "first": "A", "middle": [], "last": "Graves", "suffix": "" }, { "first": "N", "middle": [], "last": "Jaitly", "suffix": "" }, { "first": "A", "middle": [], "last": "Mohamed", "suffix": "" } ], "year": 2013, "venue": "Proceedings of ASRU 2013", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1109/ASRU.2013.6707742" ] }, "num": null, "urls": [], "raw_text": "Graves, A., Jaitly, N., & Mohamed, A.-r. (2013). Hybrid speech recognition with deep bidirectional LSTM. In Proceedings of ASRU 2013. doi: 10.1109/ASRU.2013.6707742", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Speech recognition with deep recurrent neural networks", "authors": [ { "first": "A", "middle": [], "last": "Graves", "suffix": "" }, { "first": "", "middle": [], "last": "Mohamed", "suffix": "" }, { "first": "G", "middle": [], "last": "Hinton", "suffix": "" } ], "year": 2013, "venue": "Proceedings of ICASSP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Graves, A., Mohamed, A.-r., & Hinton, G. (2013). Speech recognition with deep recurrent neural networks. In Proceedings of ICASSP 2013.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "On Using Monolingual Corpora in Neural Machine Translation", "authors": [ { "first": "C", "middle": [], "last": "Gulcehre", "suffix": "" }, { "first": "O", "middle": [], "last": "Firat", "suffix": "" }, { "first": "K", "middle": [], "last": "Xu", "suffix": "" }, { "first": "K", "middle": [], "last": "Cho", "suffix": "" }, { "first": "L", "middle": [], "last": "Barrault", "suffix": "" }, { "first": "H.-C", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Y", "middle": [], "last": "\u2026bengio", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXivpreprintarXiv:1503.03535" ] }, "num": null, "urls": [], "raw_text": "Gulcehre, C., Firat, O., Xu, K., Cho, K., Barrault, L., Lin, H.-C., \u2026Bengio, Y. (2015). On Using Monolingual Corpora in Neural Machine Translation. 
In arXiv preprint arXiv: 1503.03535", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups", "authors": [ { "first": "G", "middle": [], "last": "Hinton", "suffix": "" }, { "first": "L", "middle": [], "last": "Deng", "suffix": "" }, { "first": "D", "middle": [], "last": "Yu", "suffix": "" }, { "first": "G", "middle": [ "E" ], "last": "Dahl", "suffix": "" }, { "first": "", "middle": [], "last": "Mohamed", "suffix": "" }, { "first": "N", "middle": [], "last": "Jaitly", "suffix": "" }, { "first": "B", "middle": [], "last": "\u2026kingsbury", "suffix": "" } ], "year": 2012, "venue": "IEEE Signal processing magazine", "volume": "29", "issue": "6", "pages": "82--97", "other_ids": { "DOI": [ "10.1109/MSP.2012.2205597" ] }, "num": null, "urls": [], "raw_text": "Hinton, G., Deng, L., Yu, D., Dahl, G. E., Mohamed, A.-r., Jaitly, N., \u2026Kingsbury, B. (2012). Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal processing magazine, 29(6), 82-97. doi: 10.1109/MSP.2012.2205597", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Long short-term memory", "authors": [ { "first": "S", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "9", "issue": "8", "pages": "", "other_ids": { "DOI": [ "10.1162/neco.1997.9.8.1735" ] }, "num": null, "urls": [], "raw_text": "Hochreiter, S. & Schmidhuber, J. (1997). Long short-term memory. Neural computation, 9(8), 1735-1780. doi: 10.1162/neco.1997.9.8.1735", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Joint CTC-Attention based end-to-end speech recognition using multi-task learning", "authors": [ { "first": "S", "middle": [], "last": "Kim", "suffix": "" }, { "first": "T", "middle": [], "last": "Hori", "suffix": "" }, { "first": "S", "middle": [], "last": "Watanabe", "suffix": "" } ], "year": 2017, "venue": "Proceedings of ICASSP 2017", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1109/ICASSP.2017.7953075" ] }, "num": null, "urls": [], "raw_text": "Kim, S., Hori, T., & Watanabe, S. (2017). Joint CTC-Attention based end-to-end speech recognition using multi-task learning. In Proceedings of ICASSP 2017. doi: 10.1109/ICASSP.2017.7953075", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Scalable minimum Bayes risk training of deep neural network acoustic models using distributed Hessian-free optimization", "authors": [ { "first": "B", "middle": [], "last": "Kingsbury", "suffix": "" }, { "first": "T", "middle": [ "N" ], "last": "Sainath", "suffix": "" }, { "first": "H", "middle": [], "last": "Soltau", "suffix": "" } ], "year": 2012, "venue": "Proceedings of Interspeech 2012", "volume": "", "issue": "", "pages": "10--13", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kingsbury, B., Sainath, T. N., & Soltau, H. (2012). Scalable minimum Bayes risk training of deep neural network acoustic models using distributed Hessian-free optimization. 
In Proceedings of Interspeech 2012, 10-13.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Constructing long short-term memory based deep recurrent neural networks for large vocabulary speech recognition", "authors": [ { "first": "X", "middle": [], "last": "Li", "suffix": "" }, { "first": "X", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2015, "venue": "Proceedings of ICASSP 2015", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1109/ICASSP.2015.7178826" ] }, "num": null, "urls": [], "raw_text": "Li, X. & Wu, X. (2015). Constructing long short-term memory based deep recurrent neural networks for large vocabulary speech recognition. In Proceedings of ICASSP 2015. doi: 10.1109/ICASSP.2015.7178826", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Advancing Acoustic-to-word CTC model", "authors": [ { "first": "J", "middle": [], "last": "Li", "suffix": "" }, { "first": "G", "middle": [], "last": "Ye", "suffix": "" }, { "first": "A", "middle": [], "last": "Das", "suffix": "" }, { "first": "R", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Y", "middle": [], "last": "Gong", "suffix": "" } ], "year": 2018, "venue": "Proceedings of ICASSP 2018", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1109/ICASSP.2018.8462017" ] }, "num": null, "urls": [], "raw_text": "Li, J., Ye, G., Das, A., Zhao, R., & Gong, Y. (2018). Advancing Acoustic-to-word CTC model. In Proceedings of ICASSP 2018. doi: 10.1109/ICASSP.2018.8462017", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Regularizing neural networks by penalizing confident output distributions", "authors": [ { "first": "G", "middle": [], "last": "Pereyra", "suffix": "" }, { "first": "G", "middle": [], "last": "Tucker", "suffix": "" }, { "first": "J", "middle": [], "last": "Chorowski", "suffix": "" }, { "first": "L", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "G", "middle": [], "last": "Hinton", "suffix": "" } ], "year": 2017, "venue": "Proceedings of ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pereyra, G., Tucker, G., Chorowski, J., Kaiser, L., & Hinton, G. (2017). Regularizing neural networks by penalizing confident output distributions. In Proceedings of ICLR 2017.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "The Kaldi Speech Recognition Toolkit", "authors": [ { "first": "D", "middle": [], "last": "Povey", "suffix": "" }, { "first": "A", "middle": [], "last": "Ghoshal", "suffix": "" }, { "first": "G", "middle": [], "last": "Boulianne", "suffix": "" }, { "first": "L", "middle": [], "last": "Burget", "suffix": "" }, { "first": "O", "middle": [], "last": "Glembek", "suffix": "" }, { "first": "N", "middle": [], "last": "Goel", "suffix": "" }, { "first": "K", "middle": [], "last": "\u2026vesely", "suffix": "" } ], "year": 2011, "venue": "Proceedings of ASRU", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Povey, D., Ghoshal, A., Boulianne, G., Burget, L., Glembek, O., Goel, N., \u2026Vesely, K. (2011). The Kaldi Speech Recognition Toolkit. 
In Proceedings of ASRU 2011.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Purely Sequence-Trained Neural Networks for ASR Based on Lattice-Free MMI", "authors": [ { "first": "D", "middle": [], "last": "Povey", "suffix": "" }, { "first": "V", "middle": [], "last": "Peddinti", "suffix": "" }, { "first": "D", "middle": [], "last": "Galvez", "suffix": "" }, { "first": "P", "middle": [], "last": "Ghahrmani", "suffix": "" }, { "first": "V", "middle": [], "last": "Manohar", "suffix": "" }, { "first": "X", "middle": [], "last": "Na", "suffix": "" }, { "first": "S", "middle": [], "last": "\u2026khudanpur", "suffix": "" } ], "year": 2016, "venue": "Proceedings of Interspeech", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.21437/Interspeech.2016-595" ] }, "num": null, "urls": [], "raw_text": "Povey, D., Peddinti, V., Galvez, D., Ghahrmani, P., Manohar, V., Na, X., \u2026Khudanpur, S. (2016). Purely Sequence-Trained Neural Networks for ASR Based on Lattice-Free MMI. In Proceedings of Interspeech 2016. doi: 10.21437/Interspeech.2016-595", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition", "authors": [ { "first": "L", "middle": [ "R" ], "last": "Rabiner", "suffix": "" } ], "year": 1989, "venue": "Proceedings of the IEEE", "volume": "77", "issue": "", "pages": "257--286", "other_ids": { "DOI": [ "10.1109/5.18626" ] }, "num": null, "urls": [], "raw_text": "Rabiner, L. R. (1989). A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. Proceedings of the IEEE, 77(2), 257 -286. doi: 10.1109/5.18626", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Long short-term memory recurrent neural network architectures for large scale acoustic modeling", "authors": [ { "first": "H", "middle": [], "last": "Sak", "suffix": "" }, { "first": "A", "middle": [], "last": "Senior", "suffix": "" }, { "first": "F", "middle": [], "last": "Beaufays", "suffix": "" } ], "year": 2014, "venue": "Proceedings of INTERSPEECH-2014", "volume": "", "issue": "", "pages": "338--342", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sak, H., Senior, A. & Beaufays, F. (2014). Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In Proceedings of INTERSPEECH-2014, 338-342.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Sequence discriminative distributed training of long short-term memory recurrent neural networks", "authors": [ { "first": "H", "middle": [], "last": "Sak", "suffix": "" }, { "first": "O", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "G", "middle": [], "last": "Heigold", "suffix": "" } ], "year": 2014, "venue": "Proceedings of Interspeech", "volume": "", "issue": "", "pages": "1209--1213", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sak, H., Vinyals, O., & Heigold, G. (2014). Sequence discriminative distributed training of long short-term memory recurrent neural networks. 
In Proceedings of Interspeech 2014, 1209-1213.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Neural speech recognizer: Acoustic-to-word LSTM model for large vocabulary speech recognition", "authors": [ { "first": "H", "middle": [], "last": "Soltau", "suffix": "" }, { "first": "H", "middle": [], "last": "Liao", "suffix": "" }, { "first": "H", "middle": [], "last": "Sak", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXivpreprintarXiv:1610.09975" ] }, "num": null, "urls": [], "raw_text": "Soltau, H., Liao, H., & Sak, H. (2016). Neural speech recognizer: Acoustic-to-word LSTM model for large vocabulary speech recognition. In arXiv preprint arXiv: 1610.09975", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Cold Fusion: Training Seq2Seq Models Together with Language Models", "authors": [ { "first": "A", "middle": [], "last": "Sriram", "suffix": "" }, { "first": "H", "middle": [], "last": "Jun", "suffix": "" }, { "first": "S", "middle": [], "last": "Satheesh", "suffix": "" }, { "first": "A", "middle": [], "last": "Coates", "suffix": "" } ], "year": 2018, "venue": "Proceedings of", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sriram, A., Jun, H., Satheesh, S., & Coates, A. (2018). Cold Fusion: Training Seq2Seq Models Together with Language Models. In Proceedings of ICLR 2018.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Sequence discriminative training of deep neural networks", "authors": [ { "first": "K", "middle": [], "last": "Vesel\u00fd", "suffix": "" }, { "first": "A", "middle": [], "last": "Ghoshal", "suffix": "" }, { "first": "L", "middle": [], "last": "Burget", "suffix": "" }, { "first": "D", "middle": [], "last": "Povey", "suffix": "" } ], "year": 2013, "venue": "Proceedings of Interspeech 2013", "volume": "", "issue": "", "pages": "2345--2349", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vesel\u00fd, K., Ghoshal, A., Burget, L., & Povey, D. (2013). Sequence discriminative training of deep neural networks. In Proceedings of Interspeech 2013, 2345-2349..", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "ESPnet: End-to-End Speech Processing Toolkit", "authors": [ { "first": "S", "middle": [], "last": "Watanabe", "suffix": "" }, { "first": "T", "middle": [], "last": "Hori", "suffix": "" }, { "first": "S", "middle": [], "last": "Karita", "suffix": "" }, { "first": "T", "middle": [], "last": "Hayashi", "suffix": "" }, { "first": "J", "middle": [], "last": "Nishitoba", "suffix": "" }, { "first": "Y", "middle": [], "last": "Unno", "suffix": "" }, { "first": "T", "middle": [], "last": "\u2026ochiai", "suffix": "" } ], "year": 2018, "venue": "Proceedings of", "volume": "", "issue": "", "pages": "2207--2211", "other_ids": { "DOI": [ "10.21437/Interspeech.2018-1456" ] }, "num": null, "urls": [], "raw_text": "Watanabe, S., Hori, T., Karita, S., Hayashi, T., Nishitoba, J., Unno, Y., \u2026Ochiai , T. (2018). ESPnet: End-to-End Speech Processing Toolkit. In Proceedings of Interspeech 2018, 2207-2211. 
doi: 10.21437/Interspeech.2018-1456", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Hybrid CTC/attention architecture for end-to-end speech recognition", "authors": [ { "first": "S", "middle": [], "last": "Watanabe", "suffix": "" }, { "first": "T", "middle": [], "last": "Hori", "suffix": "" }, { "first": "S", "middle": [], "last": "Kim", "suffix": "" }, { "first": "J", "middle": [ "R" ], "last": "Hershey", "suffix": "" }, { "first": "T", "middle": [], "last": "Hayash", "suffix": "" } ], "year": 2017, "venue": "IEEE Journal of Selected Topics in Signal Processing", "volume": "11", "issue": "8", "pages": "1240--1253", "other_ids": { "DOI": [ "10.1109/JSTSP.2017.2763455" ] }, "num": null, "urls": [], "raw_text": "Watanabe, S., Hori, T., Kim, S., Hershey, J. R., & Hayash, T. (2017). Hybrid CTC/attention architecture for end-to-end speech recognition. IEEE Journal of Selected Topics in Signal Processing, 11(8), 1240-1253. doi: 10.1109/JSTSP.2017.2763455", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Show, attend and tell: Neural image caption generation with visual attention", "authors": [ { "first": "K", "middle": [], "last": "Xu", "suffix": "" }, { "first": "J", "middle": [], "last": "Ba", "suffix": "" }, { "first": "R", "middle": [], "last": "Kiros", "suffix": "" }, { "first": "K", "middle": [], "last": "Cho", "suffix": "" }, { "first": "A", "middle": [], "last": "Courville", "suffix": "" }, { "first": "R", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Y", "middle": [], "last": "\u2026bengio", "suffix": "" } ], "year": 2015, "venue": "Proceedings of ICML 2015", "volume": "", "issue": "", "pages": "2048--2057", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhutdinov, R., \u2026Bengio, Y. (2015). Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of ICML 2015, 2048-2057.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Activities\uff1a 1. Holding the Republic of China Computational Linguistics Conference (ROCLING) annually. 2. Facilitating and promoting academic research, seminars, training, discussions, comparative evaluations and other activities related to computational linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Activities\uff1a 1. Holding the Republic of China Computational Linguistics Conference (ROCLING) annually. 2. 
Facilitating and promoting academic research, seminars, training, discussions, comparative evaluations and other activities related to computational linguistics.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Collecting information and materials on recent developments in the field of computational linguistics, domestically and internationally", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collecting information and materials on recent developments in the field of computational linguistics, domestically and internationally.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Publishing pertinent journals, proceedings and newsletters", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Publishing pertinent journals, proceedings and newsletters.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Setting of the Chinese-language technical terminology and symbols related to computational linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Setting of the Chinese-language technical terminology and symbols related to computational linguistics.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Maintaining contact with international computational linguistics academic organizations", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maintaining contact with international computational linguistics academic organizations.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Dealing with various other matters related to the development of computational linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dealing with various other matters related to the development of computational linguistics.", "links": null } }, "ref_entries": { "FIGREF1": { "text": "22Yen-Hao Huang et al.", "uris": null, "type_str": "figure", "num": null }, "FIGREF2": { "text": "The framework of CARISR. Discovering the Latent Writing Style from Articles: 23 A Contextualized Feature Extraction Approach", "uris": null, "type_str": "figure", "num": null }, "FIGREF3": { "text": ", clustering coefficient and number of triangles. CW\u2190 a set of stylistic skeleton words TW\u2190 a set of stylistic content words for all node v in V do = number of triangles for v Construct patterns P with the permutation of stylistic skeleton words and content words. for all pattern p in P do p = Replace the sw with wildcard (<*>) from p end for", "uris": null, "type_str": "figure", "num": null }, "FIGREF4": { "text": "Identical Stylistic Degree)The identical stylistic degree sd is calculated as:, ,", "uris": null, "type_str": "figure", "num": null }, "TABREF0": { "content": "
化之資料；再藉由卷積層、雙向 GRU 層提供模型更多的特徵，及整合長距離文章資訊的記憶層，使命名實體任務不同於往常僅能夠擷取小範圍的資訊，能夠獲取豐富完整的文章訊息。此外，也藉由特徵的探勘(Chou & Chang, 2017)，並使用深度學習模型可自動訓練的參數，自動調整詞向量及詞彙特徵，除長距離的文章資訊外，更能充分獲得文章所隱藏的訊息。
(padding)設為 SAME，意即輸出長度等於輸入長度。此處卷積運算後不採用池化層(Pooling Layer)，其原因為中文語意中每個特徵都有其意義，不像影像可能會經過放大、縮小或者(Concatenation)。
\u50b3\u7d71\u7684\u689d\u4ef6\u96a8\u6a5f\u5834\u57df\u6c92\u6709\u80fd\u529b\u53bb\u6293\u53d6\u8f03\u9577\u7bc4\u570d\u4ee5\u5916\u7684\u6587\u7ae0\u7279\u5fb5\uff0c\u800c\u905e\u6b78\u795e\u7d93\u7db2\u8def\u5728\u9577\u8ddd \u96e2\u7684\u6587\u7ae0\u8cc7\u8a0a\u64f7\u53d6\u4e0a\u6548\u80fd\u4e5f\u4e26\u4e0d\u51fa\u8272\uff0c\u56e0\u6b64\uff0cWeston \u7b49\u4eba\u63d0\u51fa\u8a18\u61b6\u7db2\u8def(Memory Network) \u4f86\u589e\u5f37\u64f7\u53d6\u9577\u7bc4\u570d\u6587\u7ae0\u7279\u5fb5\u7684\u8868\u73fe\uff0c\u4e26\u61c9\u7528\u65bc\u554f\u7b54(QA)\u7684\u4efb\u52d9\u7576\u4e2d(Weston, Chopra & Bordes, 2014)\uff0c\u8b49\u660e\u8a18\u61b6\u7684\u589e\u52a0\u5c0d\u65bc\u57f7\u884c\u9700\u8981\u5e38\u8ddd\u96e2\u6587\u7ae0\u8cc7\u8a0a\u7684\u63a8\u7406\u81f3\u95dc\u91cd\u8981\u3002 \u8fd1\u671f\uff0cLiu \u5c07\u8a18\u61b6\u7db2\u8def\u7684\u6982\u5ff5\u52a0\u5165\u689d\u4ef6\u96a8\u6a5f\u5834\u57df\u7576\u4e2d(Liu et al., 2017)\uff0c\u900f\u904e\u6574\u5408\u984d\u5916 \u7684\u8a18\u61b6(Memory)\uff0c\u4f7f\u6a21\u578b\u80fd\u5920\u7372\u53d6\u8f03\u9577\u7bc4\u570d\u4ee5\u5916\u7684\u6587\u7ae0\u7279\u5fb5\uff0c\u4e26\u4e14\u5728\u82f1\u6587\u8cc7\u6599\u96c6\u4e0a\u7372\u5f97 \u4e86\u51fa\u8272\u7684\u8868\u73fe\u3002 3. \u6a21\u578b\u67b6\u69cb\u53ca\u65b9\u6cd5(Model Architecture and Method) \u5728\u547d\u540d\u5be6\u9ad4\u8fa8\u8b58\u6a19\u8a18\u4efb\u52d9\u4e2d\uff0c\u6bcf\u4e00\u500b\u53e5\u5b50 S \u662f\u7531 T \u5b57\u5143(character)\u7d44\u5408\u800c\u6210\u7684\u5e8f\u5217 , \u22ef , \uff0c\u5176\u5c0d\u61c9\u7684\u6a19\u7c64\u5e8f\u5217\u53ef\u8868\u793a\u70ba , \u22ef , \u3002\u4e0d\u540c\u65bc\u50b3\u7d71\u7684\u689d\u4ef6\u96a8\u6a5f\u5834 \u57df\u50c5\u9700\u8981\u8f38\u5165\u53e5\u5b50\uff0cMECRF \u7684\u7279\u9ede\u662f\u53e6\u6709\u4e0a\u4e0b\u6587\u8cc7\u8a0a\u6216\u7a31\u4e4b\u70ba\u8a18\u61b6\u9ad4 \u3002\u5047\u8a2d\u6bcf\u7bc7\u6587 \u7ae0\u662f\u7531|D|\u53e5\u5b50\u7d44\u6210 , \u22ef , | | \uff0c\u8207\u5176\u5c0d\u61c9\u7684\u5e8f\u5217\u6a19\u7c64\u96c6\u5408 , \u2026 , | | \u3002\u70ba\u907f\u514d \u8f38\u5165\u6574\u7bc7\u6587\u7ae0\u9020\u6210\u8a18\u61b6\u9ad4\u6d88\u8017\u904e\u5927\uff0c\u6211\u5011\u50c5\u6293\u53d6\u7576\u524d\u53e5\u5b50 \u7684\u524d\u5f8c B \u53e5\u5171\u6293\u53d6 2B+1 \u500b \u672c\u7814\u7a76\u6240\u4f7f\u7528\u7684\u8cc7\u6599\u70ba Chou \u53ca 2.1 \u5377\u7a4d\u795e\u7d93\u7db2\u8def(Convolutional Neural Networks) \u53e5\u5b50 \uff1d , \u22ef , , \u22ef ,
\u5377\u7a4d\u795e\u7d93\u7db2\u8def\u662f\u4e00\u7a2e\u524d\u994b\u795e\u7d93\u7db2\u8def\uff0c\u901a\u5e38\u7531\u5377\u7a4d\u5c64(Convolutional)\u3001\u6c60\u5316\u5c64(Pooling)\u3001\u5168 \u9023\u63a5\u5c64(Fully-Connected)\u7d44\u6210\uff0c\u76f8\u8f03\u65bc\u5176\u4ed6\u7684\u7db2\u8def\uff0c\u5377\u7a4d\u795e\u7d93\u7db2\u8def\u6240\u9700\u8981\u4f7f\u7528\u7684\u53c3\u6578\u8f03\u5c11\uff0c \u56e0\u800c\u6210\u70ba\u4e00\u7a2e\u9817\u5177\u5438\u5f15\u529b\u7684\u6df1\u5ea6\u5b78\u7fd2\u6a21\u578b\u3002\u5377\u7a4d\u795e\u7d93\u7db2\u8def\u64c1\u6709\u80fd\u5920\u81ea\u52d5\u6293\u53d6\u76f8\u9130\u7279\u5fb5\u7684 \u512a\u9ede\uff0cCollobert \u7b49\u4eba(2011)\u9996\u5148\u5c07\u5377\u7a4d\u795e\u7d93\u7db2\u8def\u81ea\u52d5\u6293\u53d6\u76f8\u9130\u7279\u5fb5\u7684\u512a\u9ede\u61c9\u7528\u5728\u81ea\u7136\u8a9e \u8a00\u8655\u7406\u7684\u5e8f\u5217\u6a19\u8a18\u4efb\u52d9\u4e2d\uff0c\u8b93\u81ea\u7136\u8a9e\u8a00\u8655\u7406\u4e0d\u518d\u76f8\u4f9d\u65bc\u5c08\u696d\u77e5\u8b58\u7279\u88fd\u800c\u6210\u7684\u7279\u5fb5\u6a21\u677f\u3002 \u8fd1\u671f\uff0cWang \u7b49\u4eba(2017)\u900f\u904e\u5806\u758a\u5f0f\u7684\u5377\u7a4d\u795e\u7d93\u7db2\u8def\u66f4\u6709\u7d50\u69cb\u3001\u591a\u968e\u5c64\u5730\u8403\u53d6\u4e2d\u6587\u8a9e\u610f\u7279 \u5728\u672c\u7bc7\u8ad6\u6587\u4e2d\uff0c\u6211\u5011\u61c9\u7528\u591a\u5c64\u6b21\u5377\u7a4d(Convolution Layer)\u4f86\u8403\u53d6\u6587\u5b57\u7279\u5fb5\uff0c\u4e26\u53c3\u8003 Dauphin \u7b49\u4eba\u505a\u6cd5\u5728\u5c64\u8207\u5c64\u9593\u52a0\u5165\u9580\u63a7\u6a5f\u5236\u4f86\u6cdb\u5316\u8403\u53d6\u7684\u7279\u5fb5\u3002\u9580\u63a7\u6a5f\u5236\u5ee3\u6cdb\u5730\u61c9\u7528\u65bc \u5716 \u2022 (2)
\u5fb5\uff0c\u540c\u6642\u7d50\u5408 Dauphin \u7b49\u4eba(2016)\u63d0\u51fa\u7684\u9598\u9580\u7dda\u6027\u55ae\u5143(Gated linear unit, GLU)\uff0c\u61c9\u7528\u65bc \u5faa\u74b0\u795e\u7d93\u7db2\u8def\u67b6\u69cb\u4e2d\uff0c\u7528\u4f86\u63a7\u5236\u9577\u671f\u795e\u7d93\u7db2\u8def\u4e2d\u8cc7\u8a0a\u7684\u6d41\u52d5\uff1b\u5728\u5377\u7a4d\u795e\u7d93\u7db2\u8def\u4e2d\u96d6\u6c92\u6709
Baldwin & Cohn, 2017)\uff0c\u63d0\u51fa MECRF \u67b6\u69cb\uff0c\u900f\u904e\u6574\u5408\u4e0a\u4e0b\u6587\u984d\u5916\u7684\u8a18\u61b6\uff0c\u4f7f\u6a21\u578b\u80fd\u5920\u7372 \u4e2d\u6587\u65b7\u8a5e\u4efb\u52d9\u4e2d\u3002 \u9577\u671f\u4f9d\u8cf4\u7684\u554f\u984c\uff0c\u4e0d\u9700\u8981\u8f38\u5165\u95a5\u9580\u4ee5\u53ca\u907a\u5fd8\u95a5\u9580\uff0c\u4f46\u662f Dauphin \u7b49\u4eba\u8a8d\u70ba\u5728\u591a\u5c64\u6b21\u7684\u5377
\u53d6\u8f03\u9577\u7bc4\u570d\u4ee5\u5916\u7684\u6587\u7ae0\u7279\u5fb5\uff0c\u540c\u6a23\u5728\u82f1\u6587\u8cc7\u6599\u96c6\u4e0a\u7372\u5f97\u4e86\u51fa\u8272\u7684\u8868\u73fe\u3002\u7136\u800c\u9019\u4e9b\u57fa\u790e\u6df1 \u7a4d\u795e\u7d93\u7db2\u8def\u4e2d\uff0c\u5c64\u8207\u5c64\u4e4b\u9593\u53ef\u4ee5\u900f\u904e\u985e\u4f3c\u8f38\u51fa\u95a5\u9580\u7684\u9580\u63a7\u6a5f\u5236\u4f86\u6c7a\u5b9a\u795e\u7d93\u5143\u7684\u6d41\u901a\u8207\u5426\uff0c
\u5ea6\u5b78\u7fd2\u6a21\u578b\u61c9\u7528\u65bc\u8cc7\u6599\u54c1\u8cea\u8f03\u70ba\u512a\u826f\u7684\u8cc7\u6599\u96c6\u4e0a\uff0c\u96d6\u5747\u6709\u4e0d\u932f\u7684\u6548\u679c\uff0c\u4f46\u5728\u793e\u7fa4\u5a92\u9ad4\u8cc7 2.2 \u905e\u6b78\u795e\u7d93\u7db2\u8def(Recurrent Neural Networks) \u4e26\u6709\u6548\u7387\u5730\u64f7\u53d6\u6709\u6548\u7684\u7279\u5fb5\u3002\u5047\u8a2d\u524d\u9762\u5d4c\u5165\u5c64\u8f38\u51fa\u70baE (\u03f5 \uff0c\u5247\u6b64\u8655\u5377\u7a4d\u904b\u7b97\u53ef\u8868
\u6599\u96c6\u4e2d\u537b\u672a\u80fd\u9054\u5230\u50b3\u7d71\u6a5f\u5668\u5b78\u7fd2\u65b9\u5f0f\u4e4b\u57fa\u6e96\u503c\uff0c\u56e0\u6b64\u5982\u4f55\u6709\u6548\u5730\u64f7\u53d6\u6587\u5b57\u4e2d\u6240\u96b1\u542b\u7684\u8cc7 \u793a\u70ba\uff1a
\u8a0a\uff0c\u4f7f\u6a21\u578b\u6709\u8f03\u597d\u7684\u6ffe\u9664\u96dc\u8a0a\u4e4b\u80fd\u529b\uff0c\u4e5f\u662f\u5728\u61c9\u7528\u4e0a\u975e\u5e38\u91cd\u8981\u7684\u4e00\u74b0\u3002 E \u2295(1) (3)
\u70ba\u6539\u5584\u4e0a\u8ff0\u7684\u9650\u5236\uff0c\u672c\u7814\u7a76\u5ef6\u4f38\u8a18\u61b6\u589e\u5f37\u689d\u4ef6\u96a8\u6a5f\u5834\u57df MECRF \u65bc\u4e2d\u6587\u547d\u540d\u5be6\u9ad4\u8fa8 \u5176\u4e2d \u70ba\u5927\u5c0f\u70ba \u7684\u5377\u7a4d\u904b\u7b97\u904e\u6ffe\u5668 Kernel Filter \uff0cK \u82e5\u904e\u5c0f\u5c0e\u81f4\u4e0d\u80fd\u542b\u62ec\u6709\u6548\u8cc7\u8a0a\uff1b \u8b58\u4efb\u52d9\uff1bMECRF \u7684\u6982\u5ff5\u662f\u57fa\u65bc\u4e0a\u4e0b\u6587\u53ef\u80fd\u4e0d\u53ea\u4e00\u6b21\u63d0\u53ca\u5be6\u9ad4\u540d\u7a31\uff0c\u4ee5\u53ca Attention \u6a5f\u5236 \u82e5\u904e\u5927\u5c0e\u81f4\u542b\u62ec\u5197\u9918\u8cc7\u8a0a\u5c0d\u7cfb\u7d71\u7522\u751f\u4e0d\u5fc5\u8981\u7684\u5e72\u64fe\uff0c\u672c\u7814\u7a76\u4e2d\u5c07 K \u8a2d\u5b9a\u70ba 3\uff0c\u518d\u900f\u904e\u591a \u7684\u61c9\u7528\uff0c\u85c9\u4ee5\u66f4\u6b63\u78ba\u627e\u51fa\u547d\u540d\u5be6\u9ad4\u3002\u6211\u5011\u9996\u5148\u900f\u904e\u8a13\u7df4\u8a5e\u5411\u91cf\u6a21\u578b\uff0c\u5c07\u5b57\u5143\u8f49\u63db\u70ba\u6578\u503c \u5c64\u5377\u7a4d\u5c64\u64f4\u53ca\u5b57\u5143\u524d\u5f8c\u8cc7\u8a0a\uff1b\u6211\u5011\u5c07\u6ed1\u52d5\u8996\u7a97\u79fb\u52d5\u7684\u683c\u6578(strides)\u8a2d\u70ba 1\uff0c\u4e26\u5c07\u88dc\u96f6\u65b9\u5f0f
", "num": null, "type_str": "table", "html": null, "text": "(Named Entity Recognition, NER)\u662f\u81ea\u7136\u8a9e\u8a00\u8655\u7406\u4e2d\u8a0a\u606f\u7406\u89e3\u7684\u7b2c\u4e00\u6b65\uff0c\u5176 \u76ee\u6a19\u662f\u63d0\u53d6\u7576\u4e2d\u7684\u547d\u540d\u5be6\u9ad4\u4e26\u6b78\u985e\u5230\u9810\u5148\u5b9a\u7fa9\u7684\u5206\u985e\u7576\u4e2d\uff0c\u5982\uff1a\u4eba\u540d\u3001\u5730\u540d\u3001\u7d44\u7e54\u7b49\u3002 \u50b3\u7d71\u7684\u6a5f\u5668\u5b78\u7fd2\u65bc\u547d\u540d\u5be6\u9ad4\u7684\u8fa8\u8b58\u4efb\u52d9\u4e2d\uff0c\u5927\u591a\u4f7f\u7528\u7d71\u8a08\u5f0f\u689d\u4ef6\u96a8\u6a5f\u5834\u57df\u9032\u884c\u5e8f\u5217\u6a19\u8a18\uff0c \u56e0\u6b64\u53d7\u9650\u65bc\u5c0f\u7bc4\u570d\u7684\u7279\u5fb5\u64f7\u53d6\u3002\u5982\u4f55\u5728\u4e2d\u6587\u7684\u8cc7\u6599\u96c6\u7576\u4e2d\u64f7\u53d6\u53c3\u8003\u9577\u8ddd\u96e2\u4e0a\u4e0b\u6587\u8cc7\u8a0a\uff0c \u5224\u65b7\u7576\u524d\u5b57\u8a5e\u6b63\u78ba\u7684\u8a9e\u610f\uff0c\u9032\u800c\u6b63\u78ba\u7684\u8fa8\u8b58\u547d\u540d\u5be6\u9ad4\uff0c\u662f\u6a5f\u5668\u7406\u89e3\u8a0a\u606f\u6839\u672c\u7684\u4efb\u52d9\u3002 \u8fd1\u5e74\u4f86\u6df1\u5ea6\u5b78\u7fd2\u88ab\u904b\u7528\u5728\u5e8f\u5217\u6a19\u8a18\u7684\u6a21\u578b\u5efa\u7acb\uff0c\u5f97\u5230\u4e0d\u932f\u7684\u9032\u5c55\u3002\u4f8b\u5982 Huang \u5728\u5e8f \u5217\u6a19\u8a18\u7684\u4efb\u52d9\u4e0a\u4f7f\u7528\u9577\u77ed\u671f\u8a18\u61b6(Huang, Xu & Yu, 2015)\uff0c\u61c9\u7528\u65bc\u82f1\u6587\u7684\u8cc7\u6599\u96c6\u7576\u4e2d\u7372\u5f97 \u4e86\u975e\u5e38\u597d\u7684\u6548\u80fd\u3002Liu \u7b49\u4eba\u65bc IJCNLP 2017 \u5c07\u8a18\u61b6\u7db2\u8def\u7684\u6982\u5ff5\u52a0\u5165\u689d\u4ef6\u96a8\u6a5f\u5834\u57df\u7576\u4e2d(Liu, Chang \u6240\u4f7f\u7528\u7684 PerNews \u6e2c\u8a66\u8cc7\u6599\u96c6\uff0c\u4f46\u5176\u8cc7\u6599\u96c6 \u662f\u4ee5\u53e5\u5b50\u70ba\u55ae\u4f4d\u9032\u884c\u6a19\u8a18\uff0c\u4e26\u7121\u4e0a\u4e0b\u6587\uff0c\u56e0\u6b64\u6211\u5011\u81ea\u88fd\u722c\u87f2\u7a0b\u5f0f\uff0c\u8490\u96c6\u539f\u59cb\u8cc7\u6599\u7684\u7db2\u8def \u65b0\u805e\u53ca\u793e\u7fa4\u5a92\u9ad4\u505a\u70ba\u8a13\u7df4\u53ca\u6e2c\u8a66\u8cc7\u6599\u3002\u7d93\u5be6\u9a57\u7d50\u679c\u6bd4\u8f03\uff0c\u5728\u7db2\u8def\u793e\u7fa4\u5a92\u9ad4\u7684\u8cc7\u6599\u4e2d\u53ef\u4ee5 \u9054\u5230\u7684 91.67\uff05\u7684\u6a19\u8a18\u6e96\u78ba\u7387\uff0c\u8207\u5c1a\u672a\u52a0\u5165\u8a18\u61b6\u7684\u6a21\u578b\u76f8\u6bd4\u5927\u5e45\u63d0\u5347 2.9\uff05\uff0c\u518d\u52a0\u5165\u8a5e\u5411 \u91cf\u53ca\u8a5e\u5f59\u7279\u5fb5\uff0c\u8207\u57fa\u790e\u7684\u8a18\u61b6\u6a21\u578b\u76f8\u6bd4\u66f4\u662f\u63d0\u5347\u4e86 6.04\uff05\u3002\u672c\u7814\u7a76\u6240\u63d0\u51fa\u4e4b\u6a21\u578b\u4e5f\u5728 SIGHAN-MSRA \u4e2d\u5f97\u5230\u6700\u9ad8\u7684 92.45\uff05\u5730\u540d\u5be6\u9ad4\u8fa8\u8b58\u6548\u679c\u53ca 90.95\uff05\u53ec\u56de\u7387\u3002 \u5e8f\u5217\u6a19\u8a18\u5df2\u7d93\u767c\u5c55\u8a31\u4e45\uff0c\u5e38\u898b\u7684\u6a21\u578b\u6709\u96b1\u85cf\u5f0f\u99ac\u53ef\u592b\u6a21\u578b(Hidden Markov Model, HMM)\u3001 \u6700\u5927\u5316\u71b5\u99ac\u53ef\u592b\u6a21\u578b(Maximum Entropy Markov Model, MEMM)\u4ee5\u53ca\u689d\u4ef6\u96a8\u6a5f\u5834\u57df (Conditional Random Field, CRF)\u3002Lafferty \u7b49\u4eba(2001)\u6240\u63d0\u51fa\u7684\u689d\u4ef6\u96a8\u6a5f\u5834\u57df\u5728\u81ea\u7136\u8a9e\u8a00 \u8655\u7406\u5e8f\u5217\u6a19\u8a18(Sequence Labeling)\u7684\u4efb\u52d9\u4e2d\uff0c\u662f\u591a\u6578\u4eba\u7684\u9078\u64c7\u4e14\u88ab\u5ee3\u6cdb\u7684\u61c9\u7528\uff0c\u4f46\u662f\u689d\u4ef6 
\u96a8\u6a5f\u5834\u57df\u50c5\u80fd\u5920\u6293\u53d6\u5c0f\u7bc4\u570d\u7684\u6587\u7ae0\u8cc7\u8a0a(Finkel, Grenager & Manning, 2005)\uff0c\u5c0d\u65bc\u7372\u53d6\u6574 \u7bc7\u6587\u7ae0\u4e2d\u7684\u8cc7\u8a0a\u5247\u662f\u689d\u4ef6\u96a8\u6a5f\u5834\u57df\u95dc\u9375\u7684\u9650\u5236\u3002 RNN \u5247\u662f\u53e6\u4e00\u7a2e\u8655\u7406\u5e8f\u5217\u578b\u8f38\u5165\u7684\u795e\u7d93\u67b6\u69cb\uff0c\u4f46\u662f\u55ae\u7d14\u7684 RNN \u6a21\u578b\u7121\u6cd5\u64f7\u53d6\u9577\u8ddd\u96e2\u7684 \u6587\u7ae0\u8cc7\u8a0a\uff0c\u70ba\u4e86\u4e0d\u53d7\u5c40\u90e8\u9650\u5236\u7684\u5f71\u97ff\uff0c\u56e0\u6b64\u6709\u5e38\u77ed\u671f\u8a18\u61b6(Long Short Term Memory)\u7684\u63d0 \u51fa\u3002Huang \u7b49\u4eba(2015)\u5728\u5e8f\u5217\u6a19\u8a18\u7684\u4efb\u52d9\u4e0a\u4f7f\u7528\u9577\u77ed\u671f\u8a18\u61b6\uff0c\u5c0e\u5165\u96d9\u5411(Bidirectional)\u7684\u6982 \u5ff5\u4f86\u64f7\u53d6\u6b63\u5411\u53ca\u53cd\u5411\u7684\u8cc7\u8a0a\uff0c\u61c9\u7528\u65bc\u82f1\u6587\u7684\u8cc7\u6599\u96c6\u7576\u4e2d\u7372\u5f97\u4e86\u975e\u5e38\u597d\u7684\u6548\u80fd\u3002 \u4f46\u662f\u905e\u6b78\u795e\u7d93\u7db2\u8def\u96a8\u8457\u8f38\u5165\u53e5\u5b50\u7684\u9577\u5ea6\u589e\u52a0(Cho, van Merrienboer, Bahdanau & \u7c21\u570b\u5cfb\u8207\u5f35\u5609\u60e0 Bengio, 2014)\uff0c\u6703\u5e36\u4f86\u6548\u80fd\u7684\u60e1\u5316\u3002\u5728\u76f8\u95dc\u7684\u7814\u7a76(Lai, Xu, Liu & Zhao, 2015; Linzen, Dupoux & Goldberg, 2016)\u66f4\u986f\u793a\uff0c\u905e\u6b78\u795e\u7d93\u7db2\u8def\u5305\u62ec\u5176\u8b8a\u5316\u4e4b\u985e\u578b\uff0c\u5118\u7ba1\u5df2\u7d93\u52a0\u5165\u6642\u9593 \u5e8f\u5217\u7684\u6a19\u8a18\uff0c\u4f46\u4ecd\u504f\u5411\u65bc\u76f8\u9130\u7684\u5b57\u5143\u8cc7\u8a0a\uff0c\u5728\u6d89\u53ca\u9060\u7a0b\u4e0a\u4e0b\u6587\u4f9d\u8cf4\u6027\u7684\u5224\u65b7\u4e2d\u8868\u73fe\u4e0d\u4f73\u3002 2.3 \u8a18\u61b6\u7db2\u8def(Memory Networks) \u505a\u70ba\u77ed\u671f\u8a18\u61b6(short context)\uff0c\u5176\u9577\u5ea6\u53ef\u8a18\u70ba N=\u2211 (\u5176\u4e2d \u8868\u53e5\u5b50 \u7684\u9577\u5ea6)\u3002\u6bcf\u500b\u8f38\u5165\u5b57\u5143w \u53ef\u4ee5\u900f\u904e word2vec \u6216 GloVe \u5c0d\u6587\u5b57\u9032\u884c \u7de8\u78bc, \u4ee5EMB w \u4f86\u8868\u793a\u3002\u5047\u8a2d D \u70ba Embedding \u7684\u7dad\u5ea6\uff0c\u5247\u77ed\u671f\u8a18\u61b6\u589e\u5f37\u96a8\u6a5f\u5834\u57df\u7684\u8f38 \u5165\u5e8f\u5217\u70ba\u5927\u5c0f TxD \u7684\u53e5\u5b50E \u3001\u548c LxD \u7684\u77ed\u671f\u8a18\u61b6E \u3002 \u4f4d\u79fb \uff0c \u56e0 \u6b64 \u672c\u7814 \u7a76 \u76f4 \u63a5 \u5c07 L \u500b \u5377 \u7a4d Filters \u8f38 \u51fa\u7684 feature maps \u505a \u9023 \u63a5 A \u53ca B \u5206\u5225\u70ba\u7d93\u7531\u5169\u7d44 CNN \u5377\u7a4d\u904b\u7b97\u4e4b\u5f8c\u6240\u7522\u751f\u7684\u77e9\u9663\u3002\u524d\u8005\u4e0d\u7d93\u904e\u4efb\u4f55\u555f\u52d5\u51fd\u6578\uff0c \u5f8c\u8005\u5c07\u901a\u904e\u4e00\u975e\u7dda\u6027\u8f49\u63db(sigmoid function)\u7528\u4f86\u6c7a\u5b9a\u795e\u7d93\u5143\u7684\u53d6\u6368\uff0c\u518d\u5c07\u5169\u8f38\u51fa\u505a\u77e9\u9663\u9010 \u5143\u7d20\u4e58\u6cd5(element-wise multiplication)\uff0c\u5982\u5f0f (1)\u6240\u793a\u3002 \u61c9\u7528\u591a\u5c64\u5377\u7a4d(Stacked Convolution)\u53ca\u9580\u63a7\u6a5f\u5236(Gated-CNNs)\u64f7\u53d6\u76f8\u9130\u5b57\u8a5e\u7279\u5fb5\u5f8c\uff0c \u6211\u5011\u53c3\u8003 MECRF \u63a1\u7528\u905e\u6b78\u795e\u7d93\u7db2\u8def(RNN)\u7684\u8b8a\u5316\u9ad4 GRU\uff0c\u4e14\u900f\u904e\u96d9\u5411\u7684\u6280\u8853\u4f86\u64f7\u53d6\u7576 \u4e0b\u4f4d\u7f6e\u8655\u7684\u6587\u5b57 
\u6b63\u5411\u53ca\u53cd\u5411\u7684\u8cc7\u8a0a\uff0c\u4e26\u4e14\u65bc\u8f38\u51fa\u6642\uff0c\u5c07\u6b63\u5411\u53ca\u53cd\u5411\u7684\u8cc7\u8a0a\u5957\u7528\u4e00\u500b\u975e \u7dda\u6027\u55ae\u5143 tanh\uff0c\u505a\u70ba\u4f4d\u7f6e t \u7684\u8f38\u51fa\u8cc7\u8a0aG \uff0c\u5982\u5f0f(2)\u3002 \u5982\u5716 1 \u6240\u793a\uff0c\u6211\u5011\u4ee5 \u53caC \u5206\u5225\u4ee3\u8868\u53e5\u5b50 S \u53ca\u8a18\u61b6 M \u7d93\u904e\u5169\u7d44\u5377\u7a4d\u5c64\u5f8c\u7684\u8f38\u51fa\uff0cG \u53caG \u5206\u5225\u4ee3\u8868\u53e5\u5b50 S \u53ca\u8a18\u61b6 M \u7d93\u904e\u96d9\u5411 GRU \u5c64\u5f8c\u7684\u8f38\u51fa\u3002 6 \u7c21\u570b\u5cfb\u8207\u5f35\u5609\u60e0" }, "TABREF1": { "content": "
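以下為式(1)所述閘門卷積(GLU)的簡要示意（以 PyTorch 撰寫之假設性程式碼，非原論文實作；張量維度與變數名稱僅供說明，kernel 寬度、過濾器數與嵌入維度沿用上文所列設定）：

```python
# 式(1)閘門卷積的簡要示意（假設性程式碼）：兩組一維卷積作用於字元嵌入，
# 其中一組經 sigmoid 作為閘門，再與另一組做逐元素相乘(element-wise multiplication)。
# kernel 寬度 3、stride 1、SAME 補零方式依上文設定。
import torch
import torch.nn as nn

class GatedConvLayer(nn.Module):
    def __init__(self, emb_dim=250, n_filters=50, kernel_width=3):
        super().__init__()
        pad = kernel_width // 2                      # SAME padding：輸出長度等於輸入長度
        self.conv_a = nn.Conv1d(emb_dim, n_filters, kernel_width, stride=1, padding=pad)
        self.conv_b = nn.Conv1d(emb_dim, n_filters, kernel_width, stride=1, padding=pad)

    def forward(self, x):                            # x: (batch, seq_len, emb_dim)
        x = x.transpose(1, 2)                        # Conv1d 期望 (batch, emb_dim, seq_len)
        a = self.conv_a(x)                           # A：不經過任何啟動函數
        b = torch.sigmoid(self.conv_b(x))            # B：經 sigmoid 之閘門
        return (a * b).transpose(1, 2)               # 逐元素相乘，輸出 (batch, seq_len, n_filters)

h = GatedConvLayer()(torch.randn(1, 20, 250))        # 範例：T=20 個字元的句子 -> (1, 20, 50)
```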
", "num": null, "type_str": "table", "html": null, "text": "\u5047\u8a2d\u7576\u524d\u8f38\u5165\u662f\u53e5\u5b50\u7684\u7b2c t \u500b\u5b57\u5143 \uff0c\u70ba\u4e86\u8a08\u7b97 \u8207\u8a18\u61b6\u7576\u4e2d\u6bcf\u500b\u5143\u7d20 \u7684\u6ce8\u610f\u529b\u503c , \uff0c \u6211\u5011\u5c07\u7576\u524d\u8f38\u5165 \u8207\u8f38\u5165\u8a18\u61b6 \u505a\u5167\u7a4d\u904b\u7b97\uff0c\u4f46\u662f\u4e0d\u540c\u65bc MECRF \u63a1\u7528 Softmax\uff0c\u6b64\u8655\u6211 \u5011\u63a1\u7528 tanh \u51fd\u6578\u5f37\u5316\u91cd\u8981\u7684\u8a18\u61b6\u4f4d\u7f6e\uff0c\u5982\u5f0f(6)\uff0c\u5176\u4e2d j \u2208 [1, N]\u3002" }, "TABREF2": { "content": "
並結合當前輸入，做為最後的輸出，如式(7)及式(8)。Attention 的機制允許模型可以不受限制的訪問文章中短期記憶涵蓋的位置，讓我們的模型可以獲取較豐富的文章資訊。最後我們採用條件隨機場域，經由轉移矩陣考慮標記之間的依賴關係，用以增加準確率。完整架構可參考圖 2。

4. 實驗與系統效能 (Experiments)

Table 1（資料集基本統計）
Dataset | Sentences | Average Characters per Sentence | Entity
PerNews Train | 335,056 | 13.13 | PERSON: 54,338
PerNews Test | 363,572 | 13.14 | PERSON: 54,546
SIGHAN-MSRA Train | 141,546 | 14.94 | PERSON: 17,615；LOCATION: 36,861；ORGANIZATION: 20,584
SIGHAN-MSRA Test | 11,679 | 14.45 | PERSON: 1,973；LOCATION: 2,886；ORGANIZATION: 1,331

Table 2（模型參數）
Character Embedding | 250
Conv layer # filters | 50
Kernel width of filters | 3
Learning rate | 0.0005
Dropout rate | 0.2
Memory size | 200

4.1 PerNews Dataset
Models | Min/epoch | Precision | Recall | F1
DS4NER-CRF++ | 25* | 0.9347 | 0.7968 | 0.8603
CE-MECRF | 15 | 0.8881 | 0.8284 | 0.8572
CE-CNNs-BIGRU-CRF | 25 | 0.9345 | 0.8289 | 0.8785
CE-CNNs-BIGRU-MECRF | 65 | 0.9067 | 0.9084 | 0.9075
* DS4NER-CRF++ 為全部訓練時間

4.1.1 卷積層過濾器數量 (Filter Number of Convolution Layer)
在此實驗中，我們調整卷積(CNN)層的過濾器數量，比較不同過濾器數量對於效能的影響。如圖 3 所示，將過濾器數量設定為 50 的時候，效能表現最佳；過濾器數量逐漸增加的情形下，並無法顯著提升效能。且值得注意的是，越多的卷積過濾器雖因產生更多的特徵，可以得到較好的精準率，但是召回率的表現上則是逐步下滑。
圖 3. 不同過濾器數量對於效能的影響
PerNews 為網路上之資料，當中擁有許多雜訊，因此在記憶過大的情況下，模型參考到較多的雜訊資料，效能反而有所減損，而在 200 字元記憶體時效能最佳。
圖 4. 記憶體與效能的影響 [Figure 4. Effects of Memory Size]

4.1.3 加入詞向量 (Word Embedding)
雖然堆疊卷積可以找出相鄰字元之間的關係，但是其實無法像中文詞彙詞向量那麼有意義，因此我們在基於字元的標記當中加入以當前字為中心、與前後字元結合的適當詞向量。如式(8)所示，我們主動加入五個詞的詞向量，若查無此詞彙，則補零向量。
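以下為上述詞彙特徵查詢步驟之簡要示意（假設性程式碼，非原實作；原文所列之五個詞形於此不重現，僅示意查表與查無詞彙時補零向量之處理，維度僅供說明）：

```python
# 詞彙特徵步驟的簡要示意（假設性程式碼）：對以當前字元為中心組成的候選詞，
# 查詢其詞向量；若查無此詞彙，則補零向量。
import numpy as np

def word_features(candidate_words, word_vectors, dim=250):
    feats = []
    for w in candidate_words:                                    # 以當前字元為中心之候選詞
        vec = word_vectors.get(w)                                # 預先訓練之詞向量查表
        feats.append(vec if vec is not None else np.zeros(dim))  # 查無詞彙時補零向量
    return np.concatenate(feats)                                 # 後續再與字元特徵結合

toy_vectors = {"台北": np.ones(250)}                              # 示意用之小型詞向量表
f = word_features(["台北", "北市"], toy_vectors)                   # -> shape (500,)
```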
", "num": null, "type_str": "table", "html": null, "text": "\u5728\u672c\u7ae0\u7bc0\u4e2d\uff0c\u6211\u5011\u5c07\u91dd\u5c0d\u672c\u7814\u7a76\u6240\u63d0\u51fa\u7684\u5404\u5c64\u6a21\u7d44\u6548\u80fd\u53ca\u53ef\u8abf\u6574\u7684\u8b8a\u6578\u9032\u884c\u6bd4\u8f03\u3002\u6211\u5011 \u4f7f\u7528 PerNews \u53ca SIGHAN-MSRA \u5169\u7d44\u8cc7\u6599\u8a55\u4f30\u6a21\u578b\u4e4b\u6548\u80fd\u3002\u5176\u4e2d PerNews \u8cc7\u6599\u96c6(Chou & Chang, 2017)\u4fc2\u4ee5\u8fa8\u8b58\u793e\u7fa4\u5a92\u9ad4\u4e0a\u7684\u4eba\u540d\u5be6\u9ad4\u70ba\u4e3b\u8981\u7814\u7a76\u76ee\u6a19\uff0c\u85c9\u7531 7053 \u500b\u4eba\u540d\u6e05\u55ae\uff0c \u81ea\u52d5\u6a19\u8a18\u51fa\u73fe\u65bc\u8cc7\u6599\u96c6\u4e2d\u7684\u4eba\u540d\u3002\u4e0d\u540c\u65bc Chou \u8207 Chang \u7684\u505a\u6cd5\u50c5\u7559\u5305\u542b\u4eba\u540d\u7684\u53e5\u5b50\uff0c \u672c\u7814\u7a76\u56e0\u9700\u8981\u53c3\u8003\u4e0a\u4e0b\u6587\u8cc7\u8a0a\uff0c\u56e0\u6b64\u5728\u6a19\u8a18\u6642\u4e0d\u6703\u904e\u6ffe\u6389\u672a\u542b\u6709\u4efb\u4f55\u5be6\u9ad4\u7684\u53e5\u5b50\u3002 SIGHAN-MSRA \u5247\u662f\u5b78\u8853\u754c\u666e\u904d\u7528\u4f86\u8a55\u4f30\u4e2d\u6587\u65b7\u8a5e\u8207\u547d\u540d\u5be6\u9ad4\u8fa8\u8b58\u5de5\u5177\u6548\u80fd\u7684\u6a19\u6e96\u6578\u64da \u96c6(Levow, 2006)\uff0c\u672c\u7814\u7a76\u4e3b\u8981\u91dd\u5c0d\u4eba\u540d\u3001\u5730\u540d\u3001\u7d44\u7e54\u540d\u9032\u884c\u547d\u540d\u5be6\u9ad4\u8fa8\u8b58\u3002\u8cc7\u6599\u96c6\u4e4b\u57fa \u672c\u7d71\u8a08\u8cc7\u6599\u5982 Table 1 \u6240\u793a\u3002 \u672c\u7814\u7a76\u63a1\u7528\u7684\u6a19\u8a18\u6cd5\u70ba BIESO \u6a19\u8a18\u6cd5\uff0c\u8a55\u4f30\u65b9\u5f0f\u70ba\u7cbe\u6e96\u6bd4\u5c0d\u5b8c\u6574\u547d\u540d\u5be6\u9ad4\u5f8c\uff0c\u4ee5\u5e38\u7528\u7684 \u6307\u6a19\uff0c\u5373\u7cbe\u78ba\u7387\u3001\u53ec\u56de\u7387\u4ee5\u53ca F1-Score \u4f86\u9032\u884c\u6548\u80fd\u7684\u8a55\u4f30\u3002\u6a21\u578b\u6240\u63a1\u7528\u7684\u53c3\u6578\u5982 Table 2 \u6240\u793a\uff1a\u4e2d\u6587\u5b57\u5143\u5d4c\u5165\u7dad\u5ea6 250\u3001\u4e09\u5c64\u5377\u7a4d\u5c64\u3001\u6bcf\u5c64 50 \u500b Kernel Filters\u3001\u77ed\u671f\u8a18\u61b6\u9ad4\u70ba 200 \u5b57\u5143\uff0c\u5b78\u7fd2\u7387\u8207 dropout rate \u5206\u5225\u70ba 0.0005 \u53ca 0.2\u3002 8 \u7c21\u570b\u5cfb\u8207\u5f35\u5609\u60e0" }, "TABREF4": { "content": "
)
", "num": null, "type_str": "table", "html": null, "text": "\u6240\u793a\uff0c\u5169\u8005\u5404\u5225\u6539\u9032\u7684\u5e45\u5ea6\u4e0d\u5927\uff0c\u7d9c\u5408 \u4f86\u770b\u6709 1%\u7684\u9032\u6b65\uff0c\u8b93 F1 \u6548\u80fd\u9054\u5230 0.9176\u3002" }, "TABREF6": { "content": "
TotalTrainingTesting
Advertisement9,6819,6812,423
", "num": null, "type_str": "table", "html": null, "text": "" }, "TABREF7": { "content": "", "num": null, "type_str": "table", "html": null, "text": "" }, "TABREF9": { "content": "
MethodAvg.F1-ScoreAd.F1-Score
TF-IDF0.790.97
text-CNN0.790.98
bi-LSTM-attention0.820.98
CARISR0.700.97
", "num": null, "type_str": "table", "html": null, "text": "" }, "TABREF10": { "content": "
", "num": null, "type_str": "table", "html": null, "text": "TF-IDF and LR algorithms. The experimental result, labelled Transfer-3, is shown in" }, "TABREF14": { "content": "
Neural Networks, CLDNN) (Chan, Jaitly, Le & Vinyals, 2016)\u3002
\u96d6\u7136\u7aef\u5c0d\u7aef\u7684\u8a13\u7df4\u65b9\u5f0f\u76f8\u8f03\u65bc\u50b3\u7d71\u7684 DNN-HMM \u8a13\u7df4\u66f4\u52a0\u7c21\u55ae\uff0c\u4f46\u5728\u5c11\u91cf\u8a9e\u6599\u4e0b\uff0c
\u5176\u6548\u80fd\u4ecd\u8207\u50b3\u7d71\u7684 DNN-HMM \u6a21\u578b\u6709\u4e00\u6bb5\u5dee\u8ddd\u3002\u70ba\u6b64\uff0c(Kim, Hori & Watanabe, 2017)
(Watanabe, Hori, Kim, Hershey & Hayash, 2017)\uff0c\u4f7f\u7528 CTC-Attention \u6a21\u578b (Hybrid
CTC-Attention Model)\u3002\u8a72\u65b9\u6cd5\u70ba\u7d50\u5408 CTC \u8207 Attention \u6a21\u578b\u7684\u591a\u4efb\u52d9\u5b78\u7fd2\u67b6\u69cb\uff0c\u76ee\u7684\u662f
\u5e0c\u671b\u5229\u7528 CTC \u5f4c\u88dc Attention \u6a21\u578b\u5c0d\u9f4a\u932f\u8aa4(Misalignment)\u53ca\u6536\u6582\u6162\u7684\u554f\u984c\u3002\u5728(Kim et al.,
2017) (Watanabe et al., 2017)\u7684\u5be6\u9a57\u7d50\u679c\u986f\u793a\uff0cCTC-Attention \u6a21\u578b\u53ef\u5728\u5c11\u91cf\u8a9e\u6599\u4e0b\uff0c\u80fd\u5920
\u66f4\u63a5\u8fd1\u751a\u81f3\u4f4e\u65bc DNN-HMM \u6a21\u578b\u7684\u8fa8\u8b58\u7387\u3002\u56e0\u6b64\uff0c\u672c\u7bc7\u8ad6\u6587\u5e0c\u671b\u57fa\u65bc\u6b64\u6a21\u578b\u5c0d\u65bc\u4e2d\u6587\u6703
\u8b70\u8a9e\u6599\u7684\u8fa8\u8b58\u505a\u7814\u7a76\u63a2\u8a0e\uff0c\u6211\u5011\u7684\u8ca2\u737b\u53ef\u5206\u70ba\uff1a
1. \u4e0d\u540c Attention \u6a5f\u5236\u7684\u8fa8\u8b58\u7d50\u679c\uff1a\u5728\u9577\u53e5\u6e2c\u9a57\u96c6\u5be6\u9a57\u7d50\u679c\u4e2d\u767c\u73fe\u4f7f\u7528 Coverage Location
\u6548\u679c\u6bd4 Location \u6a5f\u5236\u597d\uff0c\u800c\u5728\u77ed\u53e5\u5be6\u9a57\u5247\u53cd\u4e4b\u3002
2. CTC \u7684\u6b0a\u91cd\u5c0d\u65bc\u8fa8\u8b58\u7d50\u679c\u4e4b\u5f71\u97ff\uff1a\u4e00\u822c\u4f86\u8aaa\u60c5\u6cc1\u4e0b\uff0c\u591a\u4efb\u52d9\u67b6\u69cb\u8a13\u7df4\u4e4b\u8072\u5b78\u6a21\u578b\u53ef\u512a\u65bc
\u50b3\u7d71 CTC \u6216 Attention \u6a21\u578b\u3002
3. CTC-Attention \u6df7\u5408\u6a21\u578b\u65bc\u77ed\u8a9e\u53e5\u6e2c\u8a66\u4e4b\u5f71\u97ff\uff1a\u77ed\u53e5\u8fa8\u8b58\u4efb\u52d9\u4e0a\uff0c\u7576\u4f7f\u7528\u8f03\u5927\u7684 CTC \u6b0a
\u91cd\u4f5c\u70ba\u89e3\u78bc\u53c3\u6578\uff0c\u53ef\u4ee5\u5f97\u5230\u6700\u597d\u7684\u6548\u679c\u3002
2. \u65b9\u6cd5 (Method)
2.1
\uff0c\u4e26\u4e14\u5728\u89e3\u78bc
\u6642\u53ef\u4ee5\u4e0d\u9700\u8981\u8a9e\u8a00\u6a21\u578b\uff0c\u9019\u6a23\u7684\u505a\u6cd5\u7a31\u4e4b\u70ba\u7aef\u5c0d\u7aef\u7684\u8a13\u7df4\u65b9\u5f0f\u3002\u53e6\u4e00\u65b9\u9762\uff0c\u6709\u9451\u65bc CTC
\u7aef\u5c0d\u7aef\u6a21\u578b\u7684\u6210\u529f\uff0c\u4e14\u57fa\u65bc Attention \u7684\u905e\u6b78\u985e\u795e\u7d93\u7db2\u8def\u5df2\u88ab\u5ee3\u6cdb\u61c9\u7528\u65bc\u5404\u500b\u7814\u7a76\u9818\u57df
(Bahdanau, Cho & Bengio, 2015) (Xu et al., 2015)\uff0c(Chorowski, Bahdanau, Serdyuk, Cho &
Bengio, 2015)\u4e5f\u5c07\u6b64\u6a21\u578b\u61c9\u7528\u65bc\u8a9e\u97f3\u8fa8\u8b58\u7684\u4efb\u52d9\u4e0a\uff0c\u5f97\u5230\u63a5\u8fd1 CTC \u7684 WER\u3002\u5728\u5f8c\u7e8c\u5176
\u4ed6\u5b78\u8005\u7814\u7a76\u4e2d\uff0c\u5728\u5927\u91cf\u8a9e\u6599\u7684\u60c5\u6cc1\u4e0b\uff0cAttention \u6a21\u578b\u7684 WER \u751a\u81f3\u80fd\u903c\u8fd1\u8fa8\u8b58\u6548\u679c\u5f88\u597d
\u7684 CLDNN-HMM \u6a21\u578b(Convolutional Long Short-Term Memory, Fully Connected Deep
", "num": null, "type_str": "table", "html": null, "text": "" }, "TABREF15": { "content": "
量 { , , … } 抽取的向量集合，Fl={fl1, fl2, …, flT}。v 為 Coverage Attention 機制 (Watanabe et al., 2017) 中負責紀錄所有 Decoder 過去的 Attention 權重分佈，加入該機制的目的是希望能夠減少插入錯誤(Insertion)與刪除錯誤(Deletion)的出現，以達到更低的 WER 或 CER。Attention 模型訓練時損失函數也同樣希望最小化 $-\ln p_{att}(C^{*}\mid X)$。Attention 模型與 CTC 損失函數差異在於前者計算時必須考慮過去輸出的字符。

2.3 CTC-Attention 模型 (Hybrid CTC-Attention Model)
由於語音的每個音框間彼此相關，所以 CTC 中對於每個音框對應文字輸出的獨立性假設是飽受批評。另一方面，Attention 模型有著非單調的左到右對齊和收斂較慢的缺點。(Kim et al., 2017)(Watanabe et al., 2017) 通過使用 CTC 目標函數作為輔助函數，將 Attention 模型與 CTC 結合作多任務學習。這種訓練方式可保留 Attention 模型的優勢，並能有效改善 Attention 模型的收斂速度與對齊錯誤的問題。綜合式(7)及式(8)，CTC-Attention 混合模型透過線性組合兩種模型的目標函數，其訓練的損失函數可以表示成：
$L_{MTL} = \lambda \log p_{ctc}(C \mid X) + (1-\lambda)\,\log p_{att}(C \mid X)$ (18)
其中 λ 的範圍為 0 ≤ λ ≤ 1。而在解碼時，我們可同時使用 CTC 及 Attention 模型的輸出，可表示為：
$\log p(c_{1:l} \mid X) = \lambda \log p_{ctc}(c_{1:l} \mid X) + (1-\lambda)\,\log p_{att}(c_{1:l} \mid X)$ (19)

2.4 聲學模型 (Acoustic model)
圖 2. CTC-Attention 混合模型架構 [Figure 2. Hybrid CTC-Attention model architecture]

3. 實驗結果與分析 (Experiments and Results)
3.1 實驗語料與設定 (Corpus and Setup)
本論文實驗使用的語料為華語會議語料，該語料為國內企業所收集整理的語料庫。其中談話內容沒有經過設計，而是一般公司在實際開會中討論面臨的問題與技術，而說話方式屬於正常交談，所以會有不少停頓、口吃、中英文轉換等情形，相較於新聞語料，較具有挑戰性。其訓練集為 230 小時，而測試集則為 2.6 小時兩場會議的內容，另外還有一額外 3 小時短句測試集，其內容多為在訓練語料中未曾出現的專有名詞，在辨識上更有難度。

表 1. 語料庫訓練集、測試集小時數與句數 [Table 1. Hours of training set and test set]
 | 總小時數 | 句數
訓練集 | 230 | 367434
測試集 | 2.6 | 2306
短句測試集 | 3 | 2809

Attention | CER | #Deletion | #Insertion
location | 24.7 | 3637 | 1474
Coverage location | 23.7 | 3378 | 1467

CER：location 64.8、Coverage location 67.7、TDNN-LFMMI 85.5
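以下以假設性的 Python 程式碼示意式(18)之多任務目標與式(19)之聯合解碼分數的線性組合（非 ESPnet 或原作者之實作，僅為依上文敘述所作之簡要示意）：

```python
# 式(18)多任務目標與式(19)聯合解碼分數之簡要示意（假設性程式碼）：
# CTC 與 Attention 的對數似然以權重 lambda 做線性內插；解碼時對假設分數做同樣的內插。
def mtl_loss(logp_ctc, logp_att, lam=0.5):
    # logp_ctc, logp_att: 參考序列 C 之 log p_ctc(C|X) 與 log p_att(C|X)
    return -(lam * logp_ctc + (1.0 - lam) * logp_att)      # 訓練時最小化，對應式(18)

def joint_score(logp_ctc_hyp, logp_att_hyp, lam=0.5):
    # 同一假設序列在兩模型下之分數，對應式(19)
    return lam * logp_ctc_hyp + (1.0 - lam) * logp_att_hyp

print(mtl_loss(-12.3, -10.8, lam=0.5))                     # 範例數值
```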
", "num": null, "type_str": "table", "html": null, "text": "\u672c\u7bc7\u8ad6\u6587\u5728\u8072\u5b78\u6a21\u578b\u7684 Encoder \u90e8\u5206\u4f7f\u7528\u7684\u662f\u5169\u5c64\u7684 VGG \u5c64\u52a0\u4e0a\u516b\u5c64 Long Short-Term Memory Projection(LSTMP)\uff0cLSTMP\u662f LSTM \u7684\u8b8a\u5f62\uff0c\u901a\u904e\u6dfb\u52a0\u6295\u5f71\u5c64 \u4f86\u9032\u4e00\u6b65\u512a\u5316 LSTM \u7684\u901f\u5ea6\u548c\u6548\u80fd\u3002\u800c VGG \u8207(Chan et al., 2016)\u7684\u91d1\u5b57\u5854\u578b\u7684 LSTM \u7d50 \u69cb\u4f5c\u70ba Encoder \u76f8\u6bd4\uff0c\u4f7f\u7528 VGG \u7684\u6548\u679c\u5728(Watanabe et al., 2018)\u8aaa\u660e\u4e86\u5728\u5927\u591a\u6578\u60c5\u6cc1\u6703 \u512a\u65bc\u91d1\u5b57\u5854\u578b\u7684 LSTM\uff0c\u56e0\u6b64\u6211\u5011\u63a1\u7528 VGG-LSTMP \u4f5c\u70ba Encoder \uff0c\u5b8c\u6574\u6a21\u578b\u67b6\u69cb\u5982\u5716 2\uff0c \u5176\u4e2d X \u4ee3\u8868\u8f38\u5165\u7279\u5fb5\uff0cC \u4ee3\u8868\u8f38\u51fa\u7684\u5b57\u7b26\u5e8f\u5217\u3002\u89e3\u78bc\u7b97\u6cd5\u63a1\u7528\u5149\u675f\u641c\u5c0b\uff0c\u641c\u5c0b\u6642\u7684\u5206\u6578 \u7d50\u5408\u53ef\u53c3\u8003 2.3 \u7bc0\u5f0f 19\u3002 \u7279\u5fb5\u90e8\u4efd\uff0c\u6211\u5011\u4f7f\u7528 80 \u7dad\u7684 Filterbank \u52a0 Pitch \u7279\u5fb5\uff1b\u8072\u5b78\u6a21\u578b\u90e8\u5206\uff0c\u6211\u5011\u4f7f\u7528\u5169 \u5c64 VGG \u5c64\u53ca\u516b\u5c64 LSTMP \u4f5c\u70ba Encoder\uff0c\u6bcf\u5c64 LSTMP \u5404\u6709 320 \u500b\u55ae\u5143\uff0cDecoder \u90e8\u5206\u5247 \u4f7f\u7528\u55ae\u5c64 300 \u500b\u55ae\u5143\u7684 LSTM\uff0c\u5982\u5716 2 \u6240\u793a\u3002Attention \u6a5f\u5236\u5206\u5225\u70ba Location \u53ca Coverage Location\u3002\u8a9e\u8a00\u6a21\u578b\u90e8\u5206\u6211\u5011\u7528\u8a13\u7df4\u96c6\u7684\u8f49\u5beb\u4f5c\u70ba\u8a9e\u6599\u8a13\u7df4\u5b57\u7b26\u7d1a\u5225\u7684 RNN \u8a9e\u8a00\u6a21\u578b\uff0c \u8a13\u7df4\u6642 CTC \u6b0a\u91cd\u8a2d\u70ba 0.5\uff0c\u5728\u89e3\u78bc\u6642\u4f7f\u7528(Watanabe et al., 2017)\u7684\u89e3\u78bc\u7b97\u6cd5\u4e26\u5229\u7528 \u5f35\u4fee\u745e \u7b49 Shallow Fusion (Gulcehre et al., 2015)\u7684\u65b9\u5f0f\uff0c\u63d2\u5165\u984d\u5916\u7684\u8a9e\u8a00\u6a21\u578b\u5206\u6578\u4ee5\u63d0\u5347\u6574\u9ad4\u8fa8\u8b58 \u6548\u80fd\uff0c\u5be6\u4f5c\u4e0a\u4f7f\u7528 Espnet (Watanabe et al., 2018)\u5de5\u5177\uff0c\u53e6\u5916\u70ba\u6211\u5011\u4e5f\u4f7f\u7528\u4e86 Kaldi (Povey et al., 2011) \u5de5 \u5177 \u5be6 \u4f5c \u6642 \u5ef6 \u5f0f \u985e \u795e \u7d93 \u7db2 \u8def (Time-delay Neural Network, TDNN) \u7d50 \u5408 Lattice-free Maximum Mutual Information (LF-MMI) (Povey et al., 2016)\u8a13\u7df4\u7684\u8072\u5b78\u6a21\u578b\u8207 \u7aef\u5c0d\u7aef\u6df7\u548c\u6a21\u578b\u505a\u6bd4\u8f03\u3002 \u5716 3 \u6a6b\u8ef8\u4ee3\u8868 CTC \u7684\u6b0a\u91cd\uff0c\u800c\u7e31\u8ef8\u4ee3\u8868 CER\u3002\u7531\u65bc CTC \u7684\u6b0a\u91cd\u5728\u89e3\u78bc\u6642\u662f\u53ef\u4ee5\u8b8a\u52d5\u7684\uff0c \u6211\u5011\u5229\u7528\u7aae\u8209\u7684\u65b9\u5f0f\u5617\u8a66\u4e0d\u540c\u7684\u6b0a\u91cd\u7d44\u5408\u3002\u7531\u5be6\u9a57\u7d50\u679c\u5f97\u77e5\uff0c\u6211\u5011\u767c\u73fe Location \u53ca Coverage Location \u7686\u767c\u73fe\u6b0a\u91cd\u8a2d\u70ba 0.5 \u5728\u6e2c\u8a66\u96c6\u4e0a\u8868\u73fe\u6700\u597d\uff0c\u800c\u6b0a\u91cd\u504f\u5411 CTC \u6216\u662f Attention \u90fd\u4f7f CER \u6709\u4e0a\u5347\u8da8\u52e2\u3002\u7576 CTC \u6b0a\u91cd\u70ba 1.0 \u6642\u53ef\u8996\u70ba\u50b3\u7d71 CTC \u6a21\u578b\uff0c\u53cd\u4e4b\u7576\u6b0a \u91cd\u70ba 0.0 \u6642\u70ba\u50b3\u7d71 Attention 
模型。另一方面，Coverage Location 在任一權重下其 CER 皆比 Location Attention 模型低，因此我們進一步去分析其解碼結果。已知道 CTC 的權重設為 0.5 時其 CER 為最低，因此表 1 為該權重下的辨識率，CER 分別為 24.7 及 23.7。在實驗的結果中，我們發現由 Coverage 機制的模型解碼後，插入錯誤與刪除錯誤數有些微但一致的進步，其結果也反映在 CER 上。其中可能的原因是 Coverage 機制，該機制避免了模型的注意力過度集中在同個音框的語音特徵上。另外 TDNN-LFMMI 於此測試集的 CER 為 17%，相較之下我們的方法仍有進步空間。 圖 4. 不同的 CTC 權重對於短句測試集 CER 的影響 在這次的實驗中，我們額外比較 CTC-Attention 混合模型於短句辨識任務上的表現，由圖 4 可以得知在任一權重下的 CER，與前一個測試集的實驗相反，Location 機制的模型反而較 Coverage Location 好。推測其原因可能在於語句過短，使得 Coverage Location 模型無法發揮 Coverage 機制的作用，因而表現較差。而 CTC 權重為 1.0 時，即僅使用 CTC 解碼，兩種模型皆為最佳表現，其原因可能在於 CTC 模型是為了解決輸出的文字序列長度小於輸入的聲音長度的情況而設計，而 Attention 模型，也出現了如同(Chan et al., 2016)的實驗結果，當測試語句與訓練語句長度差異太大時，解碼出來的 CER 變差許多，然而因為 CTC 權重的可變動性，可以看到 CTC-Attention 混合模型具有因應不同語句長度的彈性。" } } } }