{ "paper_id": "U16-1023", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:10:45.554500Z" }, "title": "Pairwise FastText Classifier for Entity Disambiguation", "authors": [ { "first": "Cheng", "middle": [], "last": "Yu", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Bing", "middle": [], "last": "Chu", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Australian National University", "location": {} }, "email": "" }, { "first": "Rohit", "middle": [], "last": "Ram", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Australian National University", "location": {} }, "email": "" }, { "first": "James", "middle": [], "last": "Aichinger", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Australian National University", "location": {} }, "email": "" }, { "first": "Lizhen", "middle": [], "last": "Qu", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Australian National University", "location": {} }, "email": "lizhen.qu@data61.csiro.au" }, { "first": "Hanna", "middle": [ "2016" ], "last": "Suominen", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Australian National University", "location": {} }, "email": "hanna.suominen@anu.edu.au" }, { "first": "", "middle": [], "last": "Pairwise Fasttext", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "For the Australasian Language Technology Association (ALTA) 2016 Shared Task, we devised Pairwise FastText Classifier (PFC), an efficient embedding-based text classifier, and used it for entity disambiguation. Compared with a few baseline algorithms, PFC achieved a higher F1 score at 0.72 (under the team name BCJR). To generalise the model, we also created a method to bootstrap the training set deterministically without human labelling and at no financial cost. 
By releasing PFC and the dataset augmentation software to the public, we hope to invite more collaboration.", "pdf_parse": { "paper_id": "U16-1023", "_pdf_hash": "", "abstract": [ { "text": "For the Australasian Language Technology Association (ALTA) 2016 Shared Task, we devised Pairwise FastText Classifier (PFC), an efficient embedding-based text classifier, and used it for entity disambiguation. Compared with a few baseline algorithms, PFC achieved a higher F1 score at 0.72 (under the team name BCJR). To generalise the model, we also created a method to bootstrap the training set deterministically without human labelling and at no financial cost. By releasing PFC and the dataset augmentation software to the public, we hope to invite more collaboration.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The goal of the ALTA 2016 Shared Task was to disambiguate two person or organisation entities. The real-world motivations for the Task include gathering information about potential clients, and law enforcement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We designed a Pairwise FastText Classifier (PFC) to disambiguate the entities. The major source of inspiration for PFC came from the FastText algorithm (whose original paper used the typography fastText), which achieved quick and accurate text classification (Joulin et al., 2016) . 
We also devised a method to augment our training examples deterministically, and released all source code to the public.", "cite_spans": [ { "start": 205, "end": 226, "text": "(Joulin et al., 2016)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of the paper starts with PFC and a mixture model based on PFC, and proceeds to present our solution for augmenting the labelled dataset deterministically. We then evaluate PFC's performance against a few baseline methods, including a support vector classifier (SVC) with hand-crafted text features. Finally, we discuss ways to improve disambiguation performance using PFC.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our Pairwise FastText Classifier is inspired by FastText. This section therefore starts with a brief description of FastText and proceeds to present PFC.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pairwise Fast-Text Classifier (PFC)", "sec_num": "2" }, { "text": "FastText maps each vocabulary word to a real-valued vector, with unknown words sharing a special vocabulary ID. A document is represented as the average of these word vectors. FastText then trains a maximum entropy multi-class classifier on the document vectors and the output labels. FastText has been shown to train quickly and to achieve prediction performance comparable to Recurrent Neural Network embedding models for text classification (Joulin et al., 2016) .", "cite_spans": [ { "start": 433, "end": 454, "text": "(Joulin et al., 2016)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "FastText", "sec_num": "2.1" }, { "text": "PFC is similar to FastText except that PFC takes two inputs, each in the form of a list of vocabulary IDs, because disambiguation requires two URL inputs. Both inputs are passed through the same embedding matrix. 
If each entity is represented by a d-dimensional vector, then we can concatenate the two vectors and represent the pair of entities by a 2d-dimensional vector. We then train a maximum entropy classifier on the concatenated vector. The diagram of the model is in Figure 1 . ", "cite_spans": [], "ref_spans": [ { "start": 474, "end": 482, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "PFC", "sec_num": "2.2" }, { "text": "The previous section introduced the word-embedding-based PFC. To improve disambiguation performance, we built a mixture model based on several PFC sub-models: besides the word-embedding-based PFC, we also trained character-embedding-based PFCs, namely one uni-character PFC and one bi-character PFC. In the following subsections, we first briefly explain the character-embedding-based PFCs and then present the mixture model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The PFC Mixture Model", "sec_num": "2.3" }, { "text": "Character-embedding-based PFC models typically have fewer parameters than the word-embedding-based PFC, which reduces the risk of overfitting. The uni-character embedding maps each character in the URL and search engine snippet to a 13-dimensional vector; we take the average over each input document, concatenate the two document vectors, and train a maximum entropy classifier on top of the concatenated vector.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Character-Embedding-Based PFCs", "sec_num": "2.3.1" }, { "text": "The bi-character embedding model has a moving window of two characters and maps every such character pair to a 16-dimensional vector.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Character-Embedding-Based PFCs", "sec_num": "2.3.1" }, { "text": "Our implementation of the character-embedding-based PFC models uses a vocabulary of only the lowercase English letters and the space character. 
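As an illustrative sketch (not our released implementation; all function and variable names are ours), the uni-character embedding average and pairwise concatenation can be written as:

```python
import numpy as np

CHARS = 'abcdefghijklmnopqrstuvwxyz '  # lowercase letters plus space
CHAR_TO_ID = {c: i for i, c in enumerate(CHARS)}
DIM = 13  # uni-character embedding dimension used in the paper

rng = np.random.default_rng(0)
EMB = rng.normal(size=(len(CHARS), DIM))  # learnable in the real model

def embed_doc(text):
    # Lowercase the text, keep only known characters, average their embeddings.
    ids = [CHAR_TO_ID[c] for c in text.lower() if c in CHAR_TO_ID]
    if not ids:
        return np.zeros(DIM)
    return EMB[ids].mean(axis=0)

def pair_features(doc_a, doc_b):
    # Concatenate the two document vectors into the 2*DIM input
    # on which the maximum entropy (logistic) classifier is trained.
    return np.concatenate([embed_doc(doc_a), embed_doc(doc_b)])
```

A logistic regression layer trained on the output of pair_features would complete the sketch.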
All letters are converted to lowercase; any other characters are simply skipped.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Character-Embedding-Based PFCs", "sec_num": "2.3.1" }, { "text": "The mixture model has two phases. In phase one, we train each sub-model independently. In phase two, we train a simple binary classifier on the probability outputs of the individual PFCs. The diagram of the PFC mixture model is shown in Figure 2.", "cite_spans": [], "ref_spans": [ { "start": 240, "end": 246, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Mixing PFC Sub-models", "sec_num": "2.3.2" }, { "text": "Embedding models tend to have a large number of parameters. Our word-embedding matrix has over 3700 rows, so it is natural to look for ways to augment the training set to prevent overfitting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Augmenting More Training Examples Deterministically", "sec_num": "3" }, { "text": "We created a method to harvest additional training examples deterministically, without the need for human labelling, so the data can be acquired at no additional cost.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Augmenting More Training Examples Deterministically", "sec_num": "3" }, { "text": "To acquire URL pairs that refer to different people, we wrote a scraping bot that visits LinkedIn and grabs hyperlinks in a section called \"People that are similar to the person\", where LinkedIn recommends profiles similar to the one currently being browsed. LinkedIn restricts the number of profiles a user can browse in a given month unless they are a Premium user, so we upgraded our LinkedIn account for scraping purposes. 
We used the LinkedIn URLs provided in the training samples and grabbed similar LinkedIn profiles, which yielded about 850 profiles; some of the LinkedIn URLs were no longer up to date.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acquiring Training Examples for the Negative Class", "sec_num": "3.1" }, { "text": "To acquire training examples of different social media profiles that belong to the same person, we used examples from about.me. About.me is a platform where people can create a personal page showing their professional portfolio and links to various social media sites. If a pair of URL entities refers to the same person or organisation, the pair belongs to the positive class.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acquiring Training Examples for the Positive Class", "sec_num": "3.2" }, { "text": "We wrote a scraping bot that visits about.me/discover, where the site showcases its users, clicks open each user's page, acquires their social media links, and randomly selects two of them as a training example. For example, for someone with 5 social media profiles, including Facebook, Twitter, LinkedIn, Pinterest, and Google+, the bot can generate C(5, 2) = 10 training examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acquiring Training Examples for the Positive Class", "sec_num": "3.2" }, { "text": "Using the training data provided by the Organiser and the data acquired using the method described in Section 3, we evaluated the performance of our PFC and PFC Mixture models against a few baseline models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "The organiser prepared 200 labelled pairs of training samples and 200 unlabelled test samples (Hachey, 2016) . All baseline methods and PFC methods are trained using the original 200 URL pairs. 
The only exception is \"PFC with augmented dataset\", which uses the method in the previous section to acquire 807 negative-class URL pairs and 891 positive-class URL pairs.", "cite_spans": [ { "start": 94, "end": 108, "text": "(Hachey, 2016)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "4.1" }, { "text": "Text content for the PFC comes from the search engine snippet file provided by the Organiser and from text scraped from the URLs given in the training examples. Unknown words in the test set are represented by a special symbol.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pre-Processing", "sec_num": "4.2" }, { "text": "We chose several baseline models because there is no gold-standard baseline model for URL entity disambiguation. The baseline models are explained as follows. Word-Embedding with Pre-Trained Vectors: The pre-trained Google word vectors were trained on a news corpus (Mikolov et al., 2013) . For each URL entity, we calculated the mean vector of the search result snippet text using the pre-trained Google word embedding vectors. Unknown words were ignored. We then concatenated the two mean vectors and trained a maximum entropy classifier on top of them.", "cite_spans": [ { "start": 259, "end": 281, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.3" }, { "text": "SVC with Hand-Selected Text Features: Our Support Vector Classifier is built on top of hand-selected text features. For each pair of URLs, we manually selected the following text features. 
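As an example of one such feature, the edit distance between the two URLs (Feature 2 in Appendix A) is the standard Levenshtein distance, which can be sketched as follows (our own helper, not the code we released):

```python
def levenshtein(a, b):
    # Dynamic-programming edit distance between two strings.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]
```

For example, levenshtein('kitten', 'sitting') returns 3.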
Explanation of these features is available in Appendix A.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.3" }, { "text": "LSTM Word-Embedding: We passed the word embedding of each document token sequentially into an LSTM layer with 50 LSTM units (Brownlee, 2016) (Goodfellow et al., 2016) , concatenated the two output vectors, and trained a maximum entropy classifier on top of them. To reduce overfitting, we added dropout layers with the dropout parameter set to 0.2 (Zaremba, Sutskever, & Vinyals, 2014) .", "cite_spans": [ { "start": 123, "end": 139, "text": "(Brownlee, 2016)", "ref_id": "BIBREF0" }, { "start": 140, "end": 165, "text": "(Goodfellow et al., 2016)", "ref_id": "BIBREF3" }, { "start": 345, "end": 382, "text": "(Zaremba, Sutskever, & Vinyals, 2014)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.3" }, { "text": "Neural Tensor Network: Inspired by Socher et al., we built a relationship classifier that passes a pair of documents represented in vector form into a tensor, following the architecture in the paper (Socher et al., 2013) . Document vectors are calculated from the pre-trained Google word embedding vectors.", "cite_spans": [ { "start": 197, "end": 218, "text": "(Socher et al., 2013)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.3" }, { "text": "The experimental results from this setup are summarised in Table 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5" }, { "text": "Adding more training data seems to hurt the F1 score for the Shared Task. However, if we allow the newly acquired training examples to be part of the validation set, the validation set accuracy can reach 0.92. Due to time constraints, we were only able to acquire about 1700 training examples, with approximately equal numbers in each category. 
Whether adding more training data can improve disambiguation performance remains an open question for future experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Issues with Augmented Dataset", "sec_num": "5.1" }, { "text": "The performance of the PFC might improve if we use a similarity scoring function s(v1, v2) = v1^T M v2, where M is a diagonal matrix. The binary classifier becomes y = sigmoid(s(v1, v2)), while the original PFC classifier is y = sigmoid(W [v1, v2]). Both M and W are learnable weights.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Improve PFC", "sec_num": "5.2" }, { "text": "In our experiments, the PFC mixture model achieves the best performance, comparable to the SVC with hand-selected features. The uni-character model by itself tends to underfit because the training data cannot be separated by that model alone. PFC is robust because it allows text features to be learnt automatically.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Compare PFC with Baseline SVC", "sec_num": "5.3" }, { "text": "We introduced the Pairwise FastText Classifier (PFC) to disambiguate URL entities. It uses embedding-based vector representations of text, can be trained quickly, and performs better than most of the alternative baseline models in our experiments. PFC has the potential to generalise to a wide range of disambiguation tasks. To support such generalisation, we created a method to deterministically harvest more training examples without manual labelling. By releasing all of them to the public, we hope for continual advancement in the field of disambiguation, which can be applied to identity verification, anti-terrorism, and online general knowledge-base creation. 
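As a minimal numerical sketch of the Section 5.2 proposal (a bilinear similarity with a diagonal weight matrix; the function and variable names are ours):

```python
import numpy as np

def similarity(v1, v2, m_diag):
    # Bilinear similarity with a diagonal matrix: s = sum_i v1_i * m_i * v2_i.
    return float(np.sum(v1 * m_diag * v2))

def classify(v1, v2, m_diag, bias=0.0):
    # Sigmoid of the similarity score: probability that the two entities match.
    return 1.0 / (1.0 + np.exp(-(similarity(v1, v2, m_diag) + bias)))
```

Storing only the diagonal reduces the bilinear form from d^2 to d learnable parameters.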
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Using the pretrained Stanford GloVe vectors (Pennington et al., 2014) .", "cite_spans": [ { "start": 44, "end": 69, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "All source code can be downloaded from: https://github.com/projectcleopatra/PFC", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In the Shared Task, if a pair of URL entities refer to different persons or organisations, the pair belongs to the negative class. Figure 2: The PFC Mixture Model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "F1 Total is the simple average of F1 Public (calculated from half of the test data) and F1 Private (calculated from the second half of the data).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Appendix A includes the manually selected text features for the SVC baseline model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix A", "sec_num": null }, { "text": "If one URL contains \"au\" and the other contains \"uk\", then the value is 1, otherwise 0. 2 Edit distance between the two URLs: Simply the Levenshtein distance between the string tokens of the two URLs (Jurafsky & Martin, 2007) . If the url contains \".org\" or \".gov\", then it returns 1. 7 isSportsStar(url_a): If the url contains \"espn\", \"ufc.com\", or \"sports\", then the feature is 1. 
Otherwise 0.", "cite_spans": [ { "start": 190, "end": 215, "text": "(Jurafsky & Martin, 2007)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "ID Feature Name Description 1 Country code difference", "sec_num": null }, { "text": "Features for url_b: Analogous to Features 3-7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "-12", "sec_num": "8" }, { "text": "13 Edit distance of the first part of the title for the two URLs: Due to the differences in length between titles, only the first part of each title is preserved for calculating the Levenshtein distance. This feature is chosen because the first part of the title usually contains the first and last name of the person or the name of the company. 14 Cosine distance of the embedded matrices: The vector representation of the text is the same as in FastText except that the embedding matrix is pre-trained by Google. Any token not in the Google vocabulary is ignored (Weston, Chopra, & Bordes, 2015) .", "cite_spans": [ { "start": 564, "end": 596, "text": "(Weston, Chopra, & Bordes, 2015)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "ID Feature Name Description", "sec_num": null }, { "text": "This refers to features made from the fields \"ASnippet\" and \"BSnippet\" of the search result file provided by the Organiser.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.3 Snippet Features", "sec_num": null }, { "text": "15 Word Mover Distance between the nouns and named entities of \"ASnippet\" and \"BSnippet\" (Pele & Werman, 2008), using pretrained Google word-embedding vectors", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ID Feature Name Description", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Sequence Classification with LSTM Recurrent Neural Networks in Python with Keras. 
Retrieved from Machine Learning Mastery", "authors": [ { "first": "J", "middle": [], "last": "Brownlee", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brownlee, J. (2016, July 26). Sequence Classification with LSTM Recurrent Neural Networks in Python with Keras. Retrieved from Machine Learning Mastery: http://machinelearningmastery.com/sequence -classification-lstm-recurrent-neural- networks-python-keras/", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Overview of the 2016 ALTA Shared Task: Cross-KB Coreference", "authors": [ { "first": "A", "middle": [], "last": "Chisholm", "suffix": "" }, { "first": "B", "middle": [], "last": "Hachey", "suffix": "" }, { "first": "D", "middle": [], "last": "Molla", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Australasian Language Technology Association Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chisholm, A., Hachey, B., & Molla, D. (2016). Overview of the 2016 ALTA Shared Task: Cross-KB Coreference. Proceedings of the Australasian Language Technology Association Workshop 2016.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Discovering Entity Knowledge Bases on the Web", "authors": [ { "first": "A", "middle": [], "last": "Chisholm", "suffix": "" }, { "first": "W", "middle": [], "last": "Radford", "suffix": "" }, { "first": "B", "middle": [], "last": "Hachey", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 5th Workshop on Automated Knowledge Base Construction", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chisholm, A., Radford, W., & Hachey, B. (2016). Discovering Entity Knowledge Bases on the Web. 
Proceedings of the 5th Workshop on Automated Knowledge Base Construction", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Deep Learning", "authors": [ { "first": "I", "middle": [], "last": "Goodfellow", "suffix": "" }, { "first": "Y", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "A", "middle": [], "last": "Courville", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "373--420", "other_ids": {}, "num": null, "urls": [], "raw_text": "Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning, pages 373 -420.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "How was the data obtained? -ALTA 2016", "authors": [ { "first": "B", "middle": [], "last": "Hachey", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hachey, B. (2016). How was the data obtained? - ALTA 2016. Retrieved from Kaggle: https://inclass.kaggle.com/c/alta-2016- challenge/forums/t/23480/how-was-the-data- obtained", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Bag of Tricks for Efficient Text Classification", "authors": [ { "first": "A", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "E", "middle": [], "last": "Grave", "suffix": "" }, { "first": "P", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "T", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1607.01759" ] }, "num": null, "urls": [], "raw_text": "Joulin, A., Grave, E., Bojanowski, P., & Mikolov, T. (2016). Bag of Tricks for Efficient Text Classification. 
arXiv preprint arXiv:1607.01759.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Speech and Language Processing", "authors": [ { "first": "D", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "J", "middle": [ "H" ], "last": "Martin", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jurafsky, D., & Martin, J. H. (2007). Speech and Language Processing, 3 rd edition [Draft].", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Chapter 2", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chapter 2.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Efficient Estimation of Word Representations in Vector Space", "authors": [ { "first": "T", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "K", "middle": [], "last": "Chen", "suffix": "" }, { "first": "G", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "J", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1301.3781" ] }, "num": null, "urls": [], "raw_text": "Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient Estimation of Word Representations in Vector Space. arXiv preprint arXiv:1301.3781.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A linear time histogram metric for improved sift matching", "authors": [ { "first": "O", "middle": [], "last": "Pele", "suffix": "" }, { "first": "M", "middle": [], "last": "Werman", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pele, O., & Werman, M. (2008). A linear time histogram metric for improved sift matching. 
Computer Vision--ECCV 2008.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Fast and robust earth mover's distances", "authors": [ { "first": "O", "middle": [], "last": "Pele", "suffix": "" }, { "first": "M", "middle": [], "last": "Werman", "suffix": "" } ], "year": 2009, "venue": "IEEE 12th International Conference on Computer Vision", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pele, O., & Werman, M. (2009). Fast and robust earth mover's distances. 2009 IEEE 12th International Conference on Computer Vision.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "GloVe: Global Vectors for Word Representation", "authors": [ { "first": "J", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "R", "middle": [], "last": "Socher", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pennington, J., Socher, R., & Manning, C. D. (2014). GloVe: Global Vectors for Word Representation. Empirical Methods in Natural Language Processing, pages 1532 - 1543.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Reasoning With Neural Tensor Networks for Knowledge Base Completion", "authors": [ { "first": "R", "middle": [], "last": "Socher", "suffix": "" }, { "first": "D", "middle": [], "last": "Chen", "suffix": "" }, { "first": "C", "middle": [], "last": "Manning", "suffix": "" }, { "first": "A", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2013, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Socher, R., Chen, D., Manning, C., & Ng, A. (2013). Reasoning With Neural Tensor Networks for Knowledge Base Completion. 
In Advances in Neural Information Processing Systems, 2013a.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Memory Networks", "authors": [ { "first": "J", "middle": [], "last": "Weston", "suffix": "" }, { "first": "S", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "A", "middle": [], "last": "Bordes", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1410.3916" ] }, "num": null, "urls": [], "raw_text": "Weston, J., Chopra, S., & Bordes, A. (2015). Memory Networks. arXiv:1410.3916.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Recurrent Neural Network Regularization", "authors": [ { "first": "W", "middle": [], "last": "Zaremba", "suffix": "" }, { "first": "I", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "O", "middle": [], "last": "Vinyals", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1409.2329" ] }, "num": null, "urls": [], "raw_text": "Zaremba, W., Sutskever, I., & Vinyals, O. (2014). Recurrent Neural Network Regularization. arXiv preprint arXiv:1409.2329.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "PFC model. W1 and W2 are trainable weights.", "type_str": "figure", "uris": null, "num": null }, "TABREF0": { "html": null, "content": "
Group      Method                            F1 Public   F1 Private   F1 Total
PFC-based  PFC with word embedding           0.75        0.64         0.69
           PFC Mixture Model                 0.74        0.71         0.72
           PFC with augmented dataset        0.65        0.69         0.67
Baseline   Neural tensor network             0.67        0.60         0.64
           SVC using hand-selected features  0.75        0.69         0.72
           LSTM word embedding               0.51        0.53         0.52
", "text": "", "type_str": "table", "num": null }, "TABREF1": { "html": null, "content": "", "text": "Result comparison.", "type_str": "table", "num": null } } } }