{ "paper_id": "Y14-1040", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:44:36.017462Z" }, "title": "Retrieval Term Prediction Using Deep Belief Networks", "authors": [ { "first": "Qing", "middle": [], "last": "Ma", "suffix": "", "affiliation": { "laboratory": "", "institution": "Ryukoku University", "location": {} }, "email": "" }, { "first": "Ibuki", "middle": [], "last": "Tanigawa", "suffix": "", "affiliation": { "laboratory": "", "institution": "Ryukoku University", "location": {} }, "email": "" }, { "first": "Masaki", "middle": [], "last": "Murata", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tottori University", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper presents a method to predict retrieval terms from relevant/surrounding words or descriptive texts in Japanese by using deep belief networks (DBN), one of two typical types of deep learning. To determine the effectiveness of using DBN for this task, we tested it along with baseline methods using examplebased approaches and conventional machine learning methods, i.e., multi-layer perceptron (MLP) and support vector machines (SVM), for comparison. The data for training and testing were obtained from the Web in manual and automatic manners. Automatically created pseudo data was also used. A grid search was adopted for obtaining the optimal hyperparameters of these machine learning methods by performing cross-validation on training data. Experimental results showed that (1) using DBN has far higher prediction precisions than using baseline methods and higher prediction precisions than using either MLP or SVM; (2) adding automatically gathered data and pseudo data to the manually gathered data as training data is an effective measure for further improving the prediction precisions; and (3) DBN is able to deal with noisier training data than MLP, i.e., the prediction precision of DBN can be improved by adding noisy training data, but that of MLP cannot be. 1 For example, according to a questionnaire administered by Microsoft in 2010, about 60% of users had difficulty deciding on the proper retrieval terms.", "pdf_parse": { "paper_id": "Y14-1040", "_pdf_hash": "", "abstract": [ { "text": "This paper presents a method to predict retrieval terms from relevant/surrounding words or descriptive texts in Japanese by using deep belief networks (DBN), one of two typical types of deep learning. To determine the effectiveness of using DBN for this task, we tested it along with baseline methods using examplebased approaches and conventional machine learning methods, i.e., multi-layer perceptron (MLP) and support vector machines (SVM), for comparison. The data for training and testing were obtained from the Web in manual and automatic manners. Automatically created pseudo data was also used. A grid search was adopted for obtaining the optimal hyperparameters of these machine learning methods by performing cross-validation on training data. 
Experimental results showed that (1) using DBN has far higher prediction precisions than using baseline methods and higher prediction precisions than using either MLP or SVM; (2) adding automatically gathered data and pseudo data to the manually gathered data as training data is an effective measure for further improving the prediction precisions; and (3) DBN is able to deal with noisier training data than MLP, i.e., the prediction precision of DBN can be improved by adding noisy training data, but that of MLP cannot be. 1 For example, according to a questionnaire administered by Microsoft in 2010, about 60% of users had difficulty deciding on the proper retrieval terms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The current Web search engines have a very high retrieval performance as long as the proper retrieval terms are given. However, many people, particularly children, seniors, and foreigners, have difficulty deciding on the proper retrieval terms for representing the retrieval objects, 1 especially with searches related to technical fields. The support systems are in place for search engine users that show suitable retrieval term candidates when some clues such as their descriptive texts or relevant/surrounding words are given by the users. For example, when the relevant/surrounding words \"computer\", \"previous state\", and \"return\" are given by users, \"system restore\" is predicted by the systems as a retrieval term candidate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our objective is to develop various domainspecific information retrieval support systems that can predict suitable retrieval terms from relevant/surrounding words or descriptive texts in Japanese. To our knowledge, no such studies have been done so far in Japanese. As the first step, here, we confined the retrieval terms to the computerrelated field and proposed a method to predict them using machine learning methods with deep belief networks (DBN), one of two typical types of deep learning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In recent years, deep learning/neural network techniques have attracted a great deal of attention in various fields and have been successfully applied not only in speech recognition (Li et al., 2013) and image recognition (Krizhevsky et al., 2012) tasks but also in NLP tasks including morphology & syn-", "cite_spans": [ { "start": 182, "end": 199, "text": "(Li et al., 2013)", "ref_id": "BIBREF14" }, { "start": 222, "end": 247, "text": "(Krizhevsky et al., 2012)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "! 339 tax (Billingsley and Curran, 2012; Hermann and Blunsom, 2013; Luong et al., 2013; Socher et al., 2013a) , semantics (Hashimoto et al., 2013; Srivastava et al., 2013; Tsubaki et al., 2013) , machine translation (Auli et al., 2013; Liu et al., 2013; Kalchbrenner and Blunsom, 2013; Zou et al., 2013 ), text classification (Glorot et al., 2011) , information retrieval (Huang et al., 2013; Salakhutdinov and Hinton, 2009) , and others (Seide et al., 2011; Socher et al., 2011; Socher et al., 2013b) . 
Moreover, a unified neural network architecture and learning algorithm has also been proposed that can be applied to various NLP tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling (Collobert et al., 2011) .", "cite_spans": [ { "start": 10, "end": 40, "text": "(Billingsley and Curran, 2012;", "ref_id": "BIBREF4" }, { "start": 41, "end": 67, "text": "Hermann and Blunsom, 2013;", "ref_id": "BIBREF8" }, { "start": 68, "end": 87, "text": "Luong et al., 2013;", "ref_id": "BIBREF16" }, { "start": 88, "end": 109, "text": "Socher et al., 2013a)", "ref_id": null }, { "start": 122, "end": 146, "text": "(Hashimoto et al., 2013;", "ref_id": "BIBREF7" }, { "start": 147, "end": 171, "text": "Srivastava et al., 2013;", "ref_id": "BIBREF22" }, { "start": 172, "end": 193, "text": "Tsubaki et al., 2013)", "ref_id": "BIBREF23" }, { "start": 216, "end": 235, "text": "(Auli et al., 2013;", "ref_id": "BIBREF0" }, { "start": 236, "end": 253, "text": "Liu et al., 2013;", "ref_id": "BIBREF15" }, { "start": 254, "end": 285, "text": "Kalchbrenner and Blunsom, 2013;", "ref_id": "BIBREF11" }, { "start": 286, "end": 302, "text": "Zou et al., 2013", "ref_id": "BIBREF26" }, { "start": 326, "end": 347, "text": "(Glorot et al., 2011)", "ref_id": "BIBREF6" }, { "start": 372, "end": 392, "text": "(Huang et al., 2013;", "ref_id": "BIBREF10" }, { "start": 393, "end": 424, "text": "Salakhutdinov and Hinton, 2009)", "ref_id": "BIBREF17" }, { "start": 438, "end": 458, "text": "(Seide et al., 2011;", "ref_id": "BIBREF18" }, { "start": 459, "end": 479, "text": "Socher et al., 2011;", "ref_id": "BIBREF19" }, { "start": 480, "end": 501, "text": "Socher et al., 2013b)", "ref_id": null }, { "start": 736, "end": 760, "text": "(Collobert et al., 2011)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "PACLIC 28", "sec_num": null }, { "text": "To our knowledge, however, there have been no studies on applying deep learning to information retrieval support tasks. We therefore have two main objectives in our current study. One is to develop an effective method for predicting suitable retrieval terms and the other is to determine whether deep learning is more effective than other conventional machine learning methods, i.e., multi-layer perceptron (MLP) and support vector machines (SVM), in such NLP tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PACLIC 28", "sec_num": null }, { "text": "The data used for experiments were obtained from the Web in both manual and automatic manners. Automatically created pseudo data was also used. A grid search was used to obtain the optimal hyperparameters of these machine learning methods by performing cross-validation on training data. 
Experimental results showed that (1) using DBN has a far higher prediction precision than using baseline methods and a higher prediction precision than using either MLP or SVM; (2) adding automatically gathered data and pseudo data to the manually gathered data as training data is an effective measure for further improving the prediction precision; and (3) the DBN can deal with noisier training data than the MLP, i.e., the prediction precision of DBN can be improved by adding noisy training data, but that of MLP cannot be.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PACLIC 28", "sec_num": null }, { "text": "For training, a corpus consisting of pairs of inputs and their responses (or correct answers) -in our case, pairs of the relevant/surrounding words or de-scriptive texts and retrieval terms -is needed. The responses are typically called labels in supervised learning and so here we call the retrieval terms labels. Table 1 shows examples of these pairs, where the \"Relevant/surrounding words\" are those extracted from descriptive texts in accordance with steps described in Subsection 2.4. In this section, we describe how the corpus is obtained and how the feature vectors of the inputs are constructed from the corpus for machine learning.", "cite_spans": [], "ref_spans": [ { "start": 315, "end": 322, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "The Corpus", "sec_num": "2" }, { "text": "Considering that the descriptive texts of labels necessarily include their relevant/surrounding words, we gather Web pages containing these texts in both manual and automatic manners. In the manual manner, we manually select the Web pages that describe the labels. In contrast, in the automatic manner, we respectively combine five words or parts of phrases (toha, \"what is\"), (ha, \"is\"), (toiumonoha, \"something like\"), (nitsuiteha, \"about\"), and (noimiha, \"the meaning of\"), on the labels to form the retrieval terms (e.g., if a label is (gurafikku boudo, \"graphic board\"), then the retrieval terms are (gurafikku boudo toha, \"what is graphic board\"), (gurafikku boudo ha, \"graphic board is\"), and etc.) and then use these terms to obtain the relevant Web pages by a Google search.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Manual and Automatic Gathering of Data", "sec_num": "2.1" }, { "text": "To acquire as high a generalization capability as possible, for training we use not only the small scale of manually gathered data, which is high precision, but also the large scale of automatically gathered data, which includes a certain amount of noise. In contrast to manually gathered data, automatically gathered data might have incorrect labels, i.e., labels that do not match the descriptive texts. We therefore also use pseudo data, which can be regarded as data that includes some noises and/or deficiencies added to the original data (i.e., to the descriptive texts of the manually gathered data) but with less noise than the automatically gathered data and with all the labels correct. 
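As a concrete illustration, the following minimal Python sketch (our own illustrative code, not the exact implementation used in the experiments; the function and variable names are ours) generates one pseudo example by injecting roughly 10% word-level noise and deficiencies into a descriptive text represented as a word list, while keeping its label unchanged; the concrete procedure we actually follow is described next.

import random

def make_pseudo_example(words, word_pool, noise_rate=0.1):
    # words: the words of one original descriptive text (its label is kept as-is)
    # word_pool: all different words extracted from the manually gathered data
    # noise_rate: fraction of positions to perturb (about 10% in our setting)
    words = list(words)
    n_changes = max(1, int(len(words) * noise_rate))
    for _ in range(n_changes):
        if random.random() < 0.5 and len(words) > 1:
            # deficiency: delete a word that originally existed in the text
            del words[random.randrange(len(words))]
        else:
            # noise: add a pool word that is not already in the text
            candidates = [w for w in word_pool if w not in words]
            if candidates:
                words.insert(random.randrange(len(words) + 1), random.choice(candidates))
    return words

Because every pseudo example keeps the label of the text it was generated from, all pseudo data remain correctly labeled.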
The procedure for creating pseudo data from the manually gathered data involves (1) extracting all the different words from the manually gathered data and (2) for each label, randomly adding words that were extracted in step (1) but not included in the descriptive texts and/or deleting words that originally existed in the descriptive texts, so that the newly generated data (i.e., the newly generated descriptive texts) have 10% noises and/or deficiencies added to the original data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pseudo Data", "sec_num": "2.2" }, { "text": "The data described in Subsections 2.1 and 2.2 are for training. The data used for testing are different from the training data and are also obtained from automatically gathered data. Since automatically gathered data may include many incorrect labels that cannot be used as objective assessment data, we manually select correct ones from the automatically gathered data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Testing Data", "sec_num": "2.3" }, { "text": "Relevant/surrounding words are extracted from descriptive texts by steps (1)-(4) below, and the inputs for machine learning are represented by feature vectors constructed by steps (1)-(6): (1) perform morphological analysis on the manually gathered data and extract all nouns, including proper nouns, verbal nouns (nouns forming verbs by adding the word (suru, \"do\")), and general nouns;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Extraction and Feature Vector Construction", "sec_num": "2.4" }, { "text": "(2) concatenate successively appearing nouns into single words;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Extraction and Feature Vector Construction", "sec_num": "2.4" }, { "text": "(3) extract the words whose appearance frequency in each label is ranked in the top 50; (4) exclude the words appearing in the descriptive texts of more than two labels; (5) use the words obtained by the above steps as the vector elements with binary values, taking value 1 if a word appears and 0 if not; and (6) perform morphological analysis on all data described in Subsections 2.1, 2.2, and 2.3 and construct the feature vectors in accordance with step (5).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Extraction and Feature Vector Construction", "sec_num": "2.4" }, { "text": "Two typical approaches have been proposed for implementing deep learning: using deep belief networks (DBN) (Hinton et al., 2006; Lee et al., 2009; Bengio et al., 2007; Bengio, 2009; Bengio et al., 2013) and using stacked denoising autoencoder (SdA) (Bengio et al., 2007; Bengio, 2009; Bengio et al., 2013; Vincent et al., 2008; Vincent et al., 2010). 
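Before turning to the learning methods, the word extraction and feature-vector construction of Subsection 2.4 can be summarized by the following minimal Python sketch (our own illustration; it assumes that the morphological analysis of steps (1), (2), and (6) has already been done, e.g., with a morphological analyzer such as MeCab, so that each descriptive text is given as a list of extracted nouns; all names are ours).

from collections import Counter

def build_vocabulary(nouns_per_label, top_n=50):
    # nouns_per_label: {label: [noun list of one descriptive text, ...]}
    top_words, label_count = {}, Counter()
    for label, noun_lists in nouns_per_label.items():
        counts = Counter(w for nouns in noun_lists for w in nouns)
        top_words[label] = [w for w, _ in counts.most_common(top_n)]   # step (3)
        for w in {w for nouns in noun_lists for w in nouns}:
            label_count[w] += 1     # number of labels whose texts contain w
    # step (4): exclude words appearing in the descriptive texts of more than two labels
    return sorted({w for ws in top_words.values() for w in ws if label_count[w] <= 2})

def to_feature_vector(nouns, vocabulary):
    # step (5): binary elements, taking value 1 if a word appears and 0 if not
    noun_set = set(nouns)
    return [1 if w in noun_set else 0 for w in vocabulary]

In our experiments this construction yielded the 182-dimensional binary feature vectors reported in Subsection 4.1.1.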
In this work we use DBN, which has an elegant architecture and a performance equal to or better than that of SdA in many tasks.", "cite_spans": [ { "start": 107, "end": 128, "text": "(Hinton et al., 2006;", "ref_id": null }, { "start": 129, "end": 146, "text": "Lee et al., 2009;", "ref_id": "BIBREF13" }, { "start": 147, "end": 167, "text": "Bengio et al., 2007;", "ref_id": "BIBREF1" }, { "start": 168, "end": 181, "text": "Bengio, 2009;", "ref_id": "BIBREF2" }, { "start": 182, "end": 202, "text": "Bengio et al., 2013)", "ref_id": "BIBREF3" }, { "start": 249, "end": 270, "text": "(Bengio et al., 2007;", "ref_id": "BIBREF1" }, { "start": 271, "end": 284, "text": "Bengio, 2009;", "ref_id": "BIBREF2" }, { "start": 285, "end": 305, "text": "Bengio et al., 2013;", "ref_id": "BIBREF3" }, { "start": 306, "end": 327, "text": "Vincent et al., 2008;", "ref_id": "BIBREF24" }, { "start": 328, "end": 349, "text": "Vincent et al., 2010)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Deep Learning", "sec_num": "3" }, { "text": "A DBN is a multi-layer neural network that combines unsupervised learning based on restricted Boltzmann machines (RBM), used for pre-training to extract features, with supervised learning, used for fine-tuning to output labels. The supervised learning can be implemented with a single-layer or multi-layer perceptron or with other models (linear regression, logistic regression, etc.).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Deep Learning", "sec_num": "3" }, { "text": "An RBM is a probabilistic graphical model that represents the probability distribution of training data and is trained with a fast unsupervised learning algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Restricted Boltzmann Machine", "sec_num": "3.1" }, { "text": "It consists of two layers, one visible and one hidden, which have visible units (v_1, v_2, ..., v_m) and hidden units (h_1, h_2, ..., h_n), respectively, connected to each other between the two layers (Figure 1). Given training data, the weights of the connections between units are modified by learning so that the behavior of the RBM stochastically fits the training data as well as possible. The learning algorithm is briefly described below.", "cite_spans": [], "ref_spans": [ { "start": 202, "end": 211, "text": "(Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Restricted Boltzmann Machine", "sec_num": "3.1" }, { "text": "First, sampling is performed on the basis of conditional probabilities when a piece of training data is given to the visible layer using Eqs. 
(1), (2), and then (1) again:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Restricted Boltzmann Machine", "sec_num": "3.1" }, { "text": "P(h_i^(k) = 1 | v^(k)) = sigmoid(∑_{j=1}^{m} w_{ij} v_j^(k) + c_i), (1) and P(v_j^(k+1) = 1 | h^(k)) = sigmoid(∑_{i=1}^{n} w_{ij} h_i^(k) + b_j), (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Restricted Boltzmann Machine", "sec_num": "3.1" }, { "text": "where k (≥ 1) is a repeat count of sampling and v^(1) = v, which is a piece of training data, w_{ij} is the weight of the connection between units v_j and h_i, and b_j and c_i are offsets for the units v_j and h_i of the visible and hidden layers, respectively. After k repetitions of sampling, the weights and offsets are updated by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Restricted Boltzmann Machine", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "W ← W + ϵ(h^(1) v^T − P(h^(k+1) = 1 | v^(k+1)) v^(k+1)T),", "eq_num": "(3)" } ], "section": "Restricted Boltzmann Machine", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "b ← b + ϵ(v − v^(k+1)),", "eq_num": "(4)" } ], "section": "Restricted Boltzmann Machine", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "c ← c + ϵ(h^(1) − P(h^(k+1) = 1 | v^(k+1))),", "eq_num": "(5)" } ], "section": "Restricted Boltzmann Machine", "sec_num": "3.1" }, { "text": "where ϵ is a learning rate and the initial values of W, b, and c are 0. Sampling with a large enough repeat count is called Gibbs sampling, which is computationally expensive. A method called k-step Contrastive Divergence (CD-k), which stops sampling after k repetitions, is therefore usually adopted. It is empirically known that even k = 1 (CD-1) often gives good results, and so we set k = 1 in this work. If we assume that a total of e epochs are performed for learning n training data using CD-k, the procedure for learning an RBM can be given as in Figure 2. As the learning progresses, the samples 2 of the visible layer v^(k+1) approach the training data v.", "cite_spans": [], "ref_spans": [ { "start": 543, "end": 551, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Restricted Boltzmann Machine", "sec_num": "3.1" }, { "text": "The procedure of Figure 2 reads:
For each of all epochs e do
    For each of all data n do
        For each repetition of CD-k do
            Sample according to Eqs. (1), (2), (1)
        End for
        Update using Eqs. (3), (4), (5)
    End for
End for
Figure 3 shows a DBN composed of three RBMs for pre-training and a supervised learning device for fine-tuning. Naturally, the number of RBMs is changeable as needed. As shown in the figure, the hidden layers of the earlier RBMs become the visible layers of the new RBMs. Below, for simplicity, we consider the layers of the RBMs (excluding the input layer) as hidden layers of the DBN. The DBN in the figure therefore has three hidden layers, and this number is equal to the number of RBMs. 
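To make the pre-training procedure concrete, the following minimal NumPy sketch (our own illustration under simplifying assumptions, not the exact implementation used in our experiments; all names are ours) performs the CD-1 update of Eqs. (1)-(5) for a single RBM and then stacks RBMs greedily, assuming the training data is given as an N x d array of binary feature vectors.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v, W, b, c, eps=0.01):
    # One CD-1 step for a single training vector v (Eqs. (1)-(5) with k = 1).
    # W has shape (n_hidden, n_visible); b and c are the visible/hidden offsets.
    p_h1 = sigmoid(W @ v + c)                              # Eq. (1)
    h1 = (np.random.rand(*p_h1.shape) < p_h1).astype(float)
    p_v2 = sigmoid(W.T @ h1 + b)                           # Eq. (2)
    v2 = (np.random.rand(*p_v2.shape) < p_v2).astype(float)
    p_h2 = sigmoid(W @ v2 + c)                             # Eq. (1) again
    W += eps * (np.outer(h1, v) - np.outer(p_h2, v2))      # Eq. (3)
    b += eps * (v - v2)                                    # Eq. (4)
    c += eps * (h1 - p_h2)                                 # Eq. (5)
    return W, b, c

def pretrain_dbn(data, hidden_sizes, epochs=10, eps=0.01):
    # Greedy layer-wise pre-training: the hidden activations of each trained RBM
    # are fed to the next RBM as its visible data.
    inputs, rbms = data, []
    for n_hid in hidden_sizes:
        n_vis = inputs.shape[1]
        W = np.zeros((n_hid, n_vis))        # initial values are 0, as in the text
        b, c = np.zeros(n_vis), np.zeros(n_hid)
        for _ in range(epochs):
            for v in inputs:
                W, b, c = cd1_update(v, W, b, c, eps)
        rbms.append((W, b, c))
        # use the hidden probabilities (rather than samples) as the next layer's input
        inputs = sigmoid(inputs @ W.T + c)
    return rbms

The outputs of the last RBM are then fed to the supervised layer for fine-tuning, as described next.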
Although supervised learning can be implemented by any method, in this work we use logistic regression.", "cite_spans": [], "ref_spans": [ { "start": 152, "end": 160, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Restricted Boltzmann Machine", "sec_num": "3.1" }, { "text": "The procedure for learning the DBN with three RBMs is shown in Figure 4. 1. Train RBM 1 with the training data as inputs by the procedure for learning RBM (Figure 2 ) and fix its weights and offsets.", "cite_spans": [], "ref_spans": [ { "start": 63, "end": 72, "text": "Figure 4.", "ref_id": "FIGREF4" }, { "start": 155, "end": 164, "text": "(Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Deep Belief Network", "sec_num": "3.2" }, { "text": "2. Train RBM 2 with the samples of the hidden layer of RBM 1 as inputs by the procedure for learning RBM (Figure 2 ) and fix its weights and offsets.", "cite_spans": [], "ref_spans": [ { "start": 105, "end": 114, "text": "(Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Deep Belief Network", "sec_num": "3.2" }, { "text": "3. Train RBM 3 with the samples of the hidden layer of RBM 2 as inputs by the procedure for learning RBM (Figure 2 ) and fix its weights and offsets.", "cite_spans": [], "ref_spans": [ { "start": 105, "end": 114, "text": "(Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Deep Belief Network", "sec_num": "3.2" }, { "text": "4. Perform supervised learning with the samples of the hidden layer of RBM 3 as inputs and the labels as the desired outputs. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Deep Belief Network", "sec_num": "3.2" }, { "text": "We formed 13 training data sets by adding different amounts of automatically gathered data and/or pseudo data to a base data set, as shown in Table 2 . In the table, m300 is the base data set including 300 pieces of manually gathered data and, for example, a2400 is a data set including 2,400 automatically gathered pieces of data and m300, p2400 is a data set including 2,400 pieces of pseudo data and m300, and a2400p2400 is a data set including 2,400 pieces of automatically gathered data, 2,400 pieces of pseudo data, and m300. Altogether there were 100 pieces of testing data. The number of labels was 10; i.e., the training data listed in Table 2 and the testing data have 10 labels. The dimension of the feature vectors constructed in accordance with the steps in Subsection 2.4 was 182. m300 a300 a600 a1200 a2400 p300 p600 p1200 p2400 a300p300 a600p600 a1200p1200 a2400p2400 ", "cite_spans": [], "ref_spans": [ { "start": 142, "end": 149, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 645, "end": 656, "text": "Table 2 and", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Data", "sec_num": "4.1.1" }, { "text": "The optimal hyperparameters of the various machine learning methods used were determined by a grid search using 5-fold cross-validation on training data. The hyperparameters for the grid search are shown in Table 3 . To avoid unfair bias toward the DBN during cross-validation due to the DBN having more hyperparameters than the other methods, we divided the MLP and SVM hyperparameter grids more finely than that of the DBN so that they had the same or more hyperparameter combinations than the DBN. For MLP, we also considered another case in which we used network structures, learning rates, and learning epochs completely the same as those of the DBN. 
In this case, the number of MLP hyperparameter combinations was quite small compared to that of the DBN. We refer to this MLP as MLP 1 and to the former MLP as MLP 2. Ultimately, the DBN and MLP 2 both had 864 hyperparameter combinations, the SVM (Linear) and SVM (RBF) had 900, and MLP 1 had 72.", "cite_spans": [], "ref_spans": [ { "start": 207, "end": 214, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Hyperparameter Search", "sec_num": "4.1.2" }, { "text": "For comparison, in addition to MLP and SVM, we run tests on baseline methods using examplebased approaches and compare the testing data of each with all the training data to determine which one had the largest number of words corresponding to the testing data. The algorithm is shown in Figure 5 , where the words used for counting are those extracted from the descriptive texts in accordance with steps (1)-(4) in Subsection 2.4. 137-91, 152-121-91, 273, 273-273, 273-273-273 \u03f5 0.001, 0.0025, 0.005, 0.0075, 0.01, 0.025, 0.05, 0.075, 0.1 epoch 6 divisions between 500-1000 and 10 divisions between 1200-3000 in a linear scale SVM (Linear) \u03b3 900 divisions between 10 \u22124 -10 4 in a logarithmic scale SVM (RBF) \u03b3 30 divisions between 10 \u22124 -10 4 in a logarithmic scale C 30 divisions between 10 \u22124 -10 4 in a logarithmic scale Table 3 : Hyperparameters for grid search.", "cite_spans": [ { "start": 431, "end": 478, "text": "137-91, 152-121-91, 273, 273-273, 273-273-273 \u03f5", "ref_id": null } ], "ref_spans": [ { "start": 287, "end": 295, "text": "Figure 5", "ref_id": "FIGREF5" }, { "start": 825, "end": 832, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Baselines", "sec_num": "4.1.3" }, { "text": "For each input i of testing data do For each input j of training data do 1. Count the same words between i and j 2. Find the j with the largest count and set m=j End for 1. Let the label of m of training data (r) be the predicting result of the input i 2. Compare r with the label of i of testing data and determine the correctness End for 1. Count the correct predicting results and compute the correct rate (precision) Figure 6 compares the testing data precisions when using different training data sets with individual machine learning methods. The precisions are averages when using the top N sets of the hyperparameters in ascending order of the cross-validation errors, with N varying from 5 to 30. As shown in the figure, both the DBN and the MLPs had the highest precisions overall and the SVMs had approximately the highest precision when using data set a2400p2400, i.e., in the case of adding the largest number of automatically gathered data and pseudo data to the manually gathered data as training data. Moreover, the DBN, MLPs, and SVM (RBF) all had higher precisions when adding the appropriate amount of automatically gathered data and pseudo data compared to the case of using only manually gathered data, but the SVM (Linear) did not have this tendency. 4 Further, the DBN and SVM (RBF) had higher precisions when adding the appropriate amount of automatically gathered data only, whereas the MLPs had higher precisions when adding the appropriate amount of pseudo data only compared to the case of using only manually gathered data. 
From these results, we can infer that (1) all the machine learning methods (excluding SVM (Linear)) can improve their precisions by adding automatically gathered and pseudo data as training data and that (2) the DBN and SVM (RBF) can deal with noisier data than the MLPs, as the automatically gathered data are noisier than the pseudo data. Figure 7 compares the testing data precisions of DBN and MLPs and of DBN and SVMs when using different training data sets (i.e., the data set of Table 2 ) that are not distinguished from each other. As in Figure 6 , the precisions are averages of using the top N sets of hyperparameters in ascending order of the cross-validation errors, with N varying from 5 to 30. We can see at a glance that the performance of the DBN was generally superior to all the other machine learning methods. We should point out that the ranges of the vertical axes of all the graphs are set to be the same and so four lines of the SVM (RBF) are not indicated in the DBN vs. SVM (RBF) graph because their precisions were lower than 0.9. Full results, however, are shown in Figure 6 . Table 4 , 5, and 6 show the precisions of the baseline method and the average precisions of the machine learning methods for the top 5 and 10 sets of hyperparameters in ascending order of the crossvalidation errors, respectively, when using different data sets for training. First, in contrast to the machine learning methods, we see that adding noisy training data (i.e., adding only the automatically gathered data or adding both the automatically gathered and the pseudo data) was not useful for the baseline method to improve the prediction precisions: on the contrary, the noisy data significantly reduced the prediction precisions. Second, in almost all cases, the precisions of the baseline method were far lower than those of all machine learning methods. Finally, we see that in almost all cases, the DBN had the highest precision (the bold figures in the tables) of all the machine learning methods.", "cite_spans": [], "ref_spans": [ { "start": 421, "end": 429, "text": "Figure 6", "ref_id": null }, { "start": 1894, "end": 1902, "text": "Figure 7", "ref_id": null }, { "start": 2039, "end": 2046, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 2099, "end": 2107, "text": "Figure 6", "ref_id": null }, { "start": 2646, "end": 2654, "text": "Figure 6", "ref_id": null }, { "start": 2657, "end": 2664, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Hyperparameters", "sec_num": null }, { "text": "In addition, even when only using the base data set (i.e., the manually gathered data (m300)) for training, we can conclude from Figure 6 and Table 5 and 6 that, in all cases, the precision of DBN was the highest.", "cite_spans": [], "ref_spans": [ { "start": 129, "end": 137, "text": "Figure 6", "ref_id": null }, { "start": 142, "end": 150, "text": "Table 5", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "We proposed methods to predict retrieval terms from the relevant/surrounding words or the descriptive texts in Japanese by using deep belief networks (DBN), one of the two typical types of deep learn- ing. To determine the effectiveness of using DBN for this task, we tested it along with baseline methods using example-based approaches and conventional machine learning methods such as MLP and SVM in comparative experiments. The data for training and testing were obtained from the Web in both manual and automatic manners. 
We also used automatically created pseudo data. We adopted a grid search to obtain the optimal hyperparameters of these methods by performing cross-validation on the training data. Experimental results showed that (1) using DBN has far higher prediction precisions than using the baseline methods and has higher prediction precisions than using either MLP or SVM;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "(2) adding automatically gathered data and pseudo data to the manually gathered data as training data further improves the prediction precisions; and (3) DBN and SVM (RBF) are able to deal with more noisier training data than MLP, i.e., the prediction precision of DBN can be improved by adding noisy training data, but that of MLP cannot be. In our future work, we plan to re-confirm the effectiveness of the proposed methods by scaling up the experimental data and then start developing various practical domain-specific systems that can predict suitable retrieval terms from the relevant/surrounding words or descriptive texts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "By \"samples\" here we mean the data generated on the basis of the conditional probabilities of Eqs. (1) and (2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "As an example, the structure (hidden layers) 152-121-91 shown in the table refers to a DBN with a 182-152-121-91-10 structure, where 182 and 10 refer to dimensions of the input and output layers, respectively. These figures were set not in an arbitrary manner but using regular intervals in a linear form, i.e., 152 = 182 \u00d7 5/6, 121 = 182 \u00d7 4/6, and 91 = 182 \u00d7 3/6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This is because the SVM (Linear) can only deal with data capable of linear separation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported by JSPS KAKENHI Grant Number 25330368.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Joint Language and Translation Modeling with Recurrent Neural Networks", "authors": [ { "first": "M", "middle": [], "last": "Auli", "suffix": "" }, { "first": "M", "middle": [], "last": "Galley", "suffix": "" }, { "first": "C", "middle": [], "last": "Quirk", "suffix": "" }, { "first": "G", "middle": [], "last": "Zweig", "suffix": "" } ], "year": 2013, "venue": "EMNLP", "volume": "2013", "issue": "", "pages": "1044--1054", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Auli, M. Galley, C. Quirk, and G. Zweig. 2013. Joint Language and Translation Modeling with Recurrent Neural Networks. EMNLP 2013, 1044-1054.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Greedy Layer-wise Training of Deep Networks. 153-160", "authors": [ { "first": "Y", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "P", "middle": [], "last": "Lamblin", "suffix": "" }, { "first": "D", "middle": [], "last": "Popovici", "suffix": "" }, { "first": "H", "middle": [], "last": "Larochelle", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "153--160", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. 2007. 
Greedy Layer-wise Training of Deep Networks. 153-160. NIPS 2006, 153-160.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Learning Deep Architectures for AI. Foundations and Trends in Machine Learning", "authors": [ { "first": "Y", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2009, "venue": "", "volume": "2", "issue": "", "pages": "1--127", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Bengio. 2009. Learning Deep Architectures for AI. Foundations and Trends in Machine Learning, 2(1):1- 127.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Representation Learning: A Review and New Perspectives", "authors": [ { "first": "Y", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "A", "middle": [], "last": "Courville", "suffix": "" }, { "first": "P", "middle": [], "last": "Vincent", "suffix": "" } ], "year": 2013, "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "volume": "35", "issue": "8", "pages": "1798--1828", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Bengio, A. Courville, and P. Vincent. 2013. Repre- sentation Learning: A Review and New Perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798-1828.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Improvements to Training an RNN Parser", "authors": [ { "first": "R", "middle": [], "last": "Billingsley", "suffix": "" }, { "first": "J", "middle": [], "last": "Curran", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "279--294", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Billingsley and J. Curran. 2012. Improvements to Training an RNN Parser. COLING 2012, 279-294.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Natural Language Processing (Almost) from Scratch", "authors": [ { "first": "R", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "J", "middle": [], "last": "Weston", "suffix": "" }, { "first": "L", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "M", "middle": [], "last": "Karlen", "suffix": "" }, { "first": "K", "middle": [], "last": "Kavukcuoglu", "suffix": "" }, { "first": "P", "middle": [], "last": "Kuksa", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2493--2537", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. 2011. Natural Language Processing (Almost) from Scratch. Journal of Ma- chine Learning Research, 12:2493-2537.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Domain Adaptation for Large-Scale Sentiment Classification: A Deep Learning Approach", "authors": [ { "first": "X", "middle": [], "last": "Glorot", "suffix": "" }, { "first": "A", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Y", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "513--520", "other_ids": {}, "num": null, "urls": [], "raw_text": "X. Glorot, A. Bordes, and Y. Bengio. 2011. Domain Adaptation for Large-Scale Sentiment Classification: A Deep Learning Approach. 
ICML 2011, 513-520.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Simple Customization of Recursive Neural Networks for Semantic Relation Classification", "authors": [ { "first": "K", "middle": [], "last": "Hashimoto", "suffix": "" }, { "first": "M", "middle": [], "last": "Miwa", "suffix": "" }, { "first": "Y", "middle": [], "last": "Tsuruoka", "suffix": "" }, { "first": "T", "middle": [], "last": "Chikayama", "suffix": "" } ], "year": 2013, "venue": "EMNLP", "volume": "2013", "issue": "", "pages": "1372--1376", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Hashimoto, M. Miwa, Y. Tsuruoka, and T. Chikayama. 2013. Simple Customization of Recur- sive Neural Networks for Semantic Relation Classifi- cation. EMNLP 2013, 1372-1376.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The Role of Syntax in Vector Space Models of Compositional Semantics", "authors": [ { "first": "K", "middle": [ "M" ], "last": "Hermann", "suffix": "" }, { "first": "P", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "894--904", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. M. Hermann and P. Blunsom. 2013. The Role of Syntax in Vector Space Models of Compositional Se- mantics. ACL 2013, 894-904.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A Fast Learning Algorithm for Deep Belief Nets", "authors": [ { "first": "G", "middle": [ "E" ], "last": "Hiton", "suffix": "" }, { "first": "S", "middle": [], "last": "Osindero", "suffix": "" }, { "first": "Y", "middle": [], "last": "Teh", "suffix": "" } ], "year": 2006, "venue": "Neural Computation", "volume": "18", "issue": "", "pages": "1527--1554", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. E. Hiton, S. Osindero, and Y. Teh. 2006. A Fast Learning Algorithm for Deep Belief Nets. Neural Computation, 18:1527-1554.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Learning deep structured semantic models for web search using clickthrough data", "authors": [ { "first": "P", "middle": [ "S" ], "last": "Huang", "suffix": "" }, { "first": "X", "middle": [], "last": "He", "suffix": "" }, { "first": "J", "middle": [], "last": "Gao", "suffix": "" }, { "first": "L", "middle": [], "last": "Deng", "suffix": "" }, { "first": "A", "middle": [], "last": "Acero", "suffix": "" }, { "first": "L", "middle": [], "last": "Heck", "suffix": "" } ], "year": 2013, "venue": "CIKM", "volume": "2013", "issue": "", "pages": "2333--2338", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. S. Huang, X. He, J. Gao, L. Deng, A. Acero, and L. Heck 2013. Learning deep structured semantic mod- els for web search using clickthrough data. CIKM 2013, 2333-2338.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Recurrent Continuous Translation Models", "authors": [ { "first": "N", "middle": [], "last": "Kalchbrenner", "suffix": "" }, { "first": "P", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2013, "venue": "EMNLP", "volume": "2013", "issue": "", "pages": "1700--1709", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. Kalchbrenner and P. Blunsom. 2013. Recurrent Con- tinuous Translation Models. EMNLP 2013, 1700- 1709.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Im-ageNet Classification with Deep Convolutional Neural Networks. 
NIPS 2012", "authors": [ { "first": "A", "middle": [], "last": "Krizhevsky", "suffix": "" }, { "first": "I", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "G", "middle": [ "E" ], "last": "Hinton", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "1097--1105", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Krizhevsky, I. Sutskever, and G. E. Hinton. 2012. Im- ageNet Classification with Deep Convolutional Neural Networks. NIPS 2012, 1097-1105.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations", "authors": [ { "first": "H", "middle": [], "last": "Lee", "suffix": "" }, { "first": "R", "middle": [], "last": "Grosse", "suffix": "" }, { "first": "R", "middle": [], "last": "Ranganath", "suffix": "" }, { "first": "A", "middle": [ "Y" ], "last": "Ng", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "609--616", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. 2009. Convolutional Deep Belief Networks for Scalable Un- supervised Learning of Hierarchical Representations. ICML 2009, 609-616.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Hybrid Deep Neural Network -Hidden Markov Model (DNN-HMM) Based Speech Emotion Recognition", "authors": [ { "first": "L", "middle": [], "last": "Li", "suffix": "" }, { "first": "Y", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Li and Y. Zhao, et al. 2013. Hybrid Deep Neural Net- work -Hidden Markov Model (DNN-HMM) Based Speech Emotion Recognition. ACII 2013.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Additive Neural Networks for Statistical Machine Translation. ACL 2013", "authors": [ { "first": "L", "middle": [], "last": "Liu", "suffix": "" }, { "first": "T", "middle": [], "last": "Watanabe", "suffix": "" }, { "first": "E", "middle": [], "last": "Sumita", "suffix": "" }, { "first": "T", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "791--801", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Liu, T. Watanabe, E. Sumita and T. Zhao. 2013. Ad- ditive Neural Networks for Statistical Machine Trans- lation. ACL 2013, 791-801.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Better Word Representations with Recursive Neural Networks for Morphology. ACL 2013", "authors": [ { "first": "T", "middle": [], "last": "Luong", "suffix": "" }, { "first": "R", "middle": [], "last": "Socher", "suffix": "" }, { "first": "C", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "104--113", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Luong, R. Socher, and C. Manning. 2013. Better Word Representations with Recursive Neural Networks for Morphology. ACL 2013, 104-113.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Semantic Hashing", "authors": [ { "first": "R", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "G", "middle": [ "E" ], "last": "Hinton", "suffix": "" } ], "year": 2009, "venue": "International Journal of Approximate Reasoning", "volume": "50", "issue": "7", "pages": "969--978", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Salakhutdinov and G. E. Hinton. 2009. 
Semantic Hashing. International Journal of Approximate Rea- soning, 50(7): 969-978.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Conversational Speech Transcription Using Context-Dependent Deep Neural Networks", "authors": [ { "first": "F", "middle": [], "last": "Seide", "suffix": "" }, { "first": "G", "middle": [], "last": "Li", "suffix": "" }, { "first": "D", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "437--440", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Seide, G. Li, and D. Yu. 2011. Conversational Speech Transcription Using Context-Dependent Deep Neural Networks. INTERSPEECH 2011, 437-440.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Dynamic Pooling and Unfolding Recursive Autoencoders for Paraphrase Detection", "authors": [ { "first": "R", "middle": [], "last": "Socher", "suffix": "" }, { "first": "E", "middle": [ "H" ], "last": "Huang", "suffix": "" }, { "first": "J", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "A", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "801--809", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Socher, E. H. Huang, J. Pennington, A. Y. Ng, and C. D. Manning. 2011. Dynamic Pooling and Unfold- ing Recursive Autoencoders for Paraphrase Detection. 801-809. NIPS 2011, 801-809.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Parsing with Computational Vector Grammars", "authors": [ { "first": "R", "middle": [], "last": "Socher", "suffix": "" }, { "first": "J", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "A", "middle": [ "Y" ], "last": "Ng", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "455--465", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Socher, J. Bauer, C. D. Manning, and A. Y. Ng. 2013. Parsing with Computational Vector Grammars. ACL 2013, 455-465.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank", "authors": [ { "first": "R", "middle": [], "last": "Socher", "suffix": "" }, { "first": "A", "middle": [], "last": "Perelygin", "suffix": "" }, { "first": "J", "middle": [ "Y" ], "last": "Wu", "suffix": "" }, { "first": "J", "middle": [], "last": "Chuang", "suffix": "" } ], "year": 2013, "venue": "EMNLP", "volume": "2013", "issue": "", "pages": "1631--1642", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Socher, A. Perelygin, J. Y. Wu, and J. Chuang. 2013. Recursive Deep Models for Semantic Compositional- ity Over a Sentiment Treebank. EMNLP 2013, 1631- 1642.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A Walk-Based Semantically Enriched Tree Kernel Over Distributed Word Representations", "authors": [ { "first": "S", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "D", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "E", "middle": [ "H" ], "last": "Hovy", "suffix": "" } ], "year": 2013, "venue": "EMNLP", "volume": "2013", "issue": "", "pages": "1411--1416", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Srivastava, D. Hovy, and E. H. Hovy. 2013. A Walk- Based Semantically Enriched Tree Kernel Over Dis- tributed Word Representations. 
EMNLP 2013, 1411- 1416.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Modeling and Learning Semantic Co-Compositionality through Prototype Projections and Neural Networks", "authors": [ { "first": "M", "middle": [], "last": "Tsubaki", "suffix": "" }, { "first": "K", "middle": [], "last": "Duh", "suffix": "" }, { "first": "M", "middle": [], "last": "Shimbo", "suffix": "" }, { "first": "Y", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2013, "venue": "EMNLP", "volume": "2013", "issue": "", "pages": "130--140", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Tsubaki, K. Duh, M. Shimbo, and Y. Mat- sumoto. 2013. Modeling and Learning Semantic Co- Compositionality through Prototype Projections and Neural Networks. EMNLP 2013, 130-140.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Extracting and Composing Robust Features with Denoising Autoencoders", "authors": [ { "first": "P", "middle": [], "last": "Vincent", "suffix": "" }, { "first": "H", "middle": [], "last": "Larochelle", "suffix": "" }, { "first": "Y", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "P", "middle": [ "A" ], "last": "Manzagol", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "1096--1103", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Vincent, H. Larochelle, Y. Bengio, and P. A. Manzagol. 2008. Extracting and Composing Robust Features with Denoising Autoencoders. ICML 2008, 1096- 1103.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion", "authors": [ { "first": "P", "middle": [], "last": "Vincent", "suffix": "" }, { "first": "H", "middle": [], "last": "Larochelle", "suffix": "" }, { "first": "I", "middle": [], "last": "Lajoie", "suffix": "" }, { "first": "Y", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "P", "middle": [ "A" ], "last": "Manzagol", "suffix": "" } ], "year": 2010, "venue": "Journal of Machine Learning Research", "volume": "11", "issue": "", "pages": "3371--3408", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P. A. Manzagol. 2010. Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion. Journal of Machine Learning Research, 11:3371-3408.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Bilingual Word Embeddings for Phrase-Based Machine Translation", "authors": [ { "first": "W", "middle": [ "Y" ], "last": "Zou", "suffix": "" }, { "first": "R", "middle": [], "last": "Socher", "suffix": "" }, { "first": "D", "middle": [ "M" ], "last": "Cer", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2013, "venue": "EMNLP", "volume": "2013", "issue": "", "pages": "1393--1398", "other_ids": {}, "num": null, "urls": [], "raw_text": "W. Y. Zou, R. Socher, D. M. Cer, and C. D. Manning. 2013. Bilingual Word Embeddings for Phrase-Based Machine Translation. 
EMNLP 2013, 1393-1398.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Restricted Boltzmann machine.", "num": null, "type_str": "figure", "uris": null }, "FIGREF2": { "text": "Procedure for learning RBM.", "num": null, "type_str": "figure", "uris": null }, "FIGREF3": { "text": "Example of a deep belief network.", "num": null, "type_str": "figure", "uris": null }, "FIGREF4": { "text": "Procedure for learning DBN with three RBMs.", "num": null, "type_str": "figure", "uris": null }, "FIGREF5": { "text": "Baseline algorithm.", "num": null, "type_str": "figure", "uris": null }, "FIGREF6": { "text": "Average precisions of DBN, MLP, and SVM for top N varying from 5 to 30. Comparison of average precisions for top N varying from 5 to 30.", "num": null, "type_str": "figure", "uris": null }, "TABREF0": { "type_str": "table", "content": "
Label (Retrieval term): Graphic board
Descriptive text: Also known as: graphic card, graphic accelerator, GB, VGA. While the screen outputs the picture actually seen by the eye, the screen only displays as commanded and does not output anything if ...
Relevant/surrounding words: screen, picture, eye, displays, as commanded, ...
Descriptive text: A device that provides independent functions for outputting or inputting video as signals on a PC or various other types of computer ...
Relevant/surrounding words: independent, functions, outputting, inputting, video, signals, PC, ...
Label (Retrieval term): Main memory
Descriptive text: ...
Relevant/surrounding words: ...
", "text": "Examples of pairs of inputs (descriptive texts or relevant/surrounding words) and labels (retrieval terms).", "num": null, "html": null }, "TABREF1": { "type_str": "table", "content": "", "text": "Training data sets.", "num": null, "html": null }, "TABREF4": { "type_str": "table", "content": "
m300a300a600a1200a2400p300p600
MLP 10.9440.9400.9420.9280.9220.9380.946
MLP 20.9540.9480.9460.9340.9240.9580.948
SVM (Linear) 0.9500.9300.9420.9280.9200.9300.930
SVM (RBF)0.9020.9460.9220.9320.9240.8540.888
DBN0.9580.9620.9640.9660.9460.9560.974
p1200p2400a300p300a600p600a1200p1200 a2400p2400
MLP 10.9440.9420.9500.9520.9580.956
MLP 20.9540.9480.9320.9600.9580.960
SVM (Linear) 0.9300.9300.9200.9400.9400.950
SVM (RBF)0.8340.6860.9440.9200.9640.956
DBN0.9440.9500.9580.9700.9660.968
", "text": "Precisions of the baseline.", "num": null, "html": null }, "TABREF5": { "type_str": "table", "content": "
m300a300a600a1200a2400p300p600
MLP 10.9450.9320.9390.9310.9140.9420.951
MLP 20.9510.9440.9430.9330.9240.9540.953
SVM (Linear) 0.9500.9300.9420.9270.9210.9300.930
SVM (RBF)0.9600.9410.9140.9360.9240.8420.872
DBN0.9610.9620.9650.9680.9480.9480.964
p1200p2400a300p300a600p600a1200p1200 a2400p2400
MLP 10.9440.9420.9450.9520.9570.956
MLP 20.9520.9490.9410.9550.9580.961
SVM (Linear) 0.9300.9300.9260.9380.9400.950
SVM (RBF)0.8220.7570.9360.9260.9520.951
DBN0.9540.9500.9530.9610.9630.968
", "text": "Average precisions of DBN, MLP, and SVM for top 5.", "num": null, "html": null }, "TABREF6": { "type_str": "table", "content": "", "text": "Average precisions of DBN, MLP, and SVM for top 10.", "num": null, "html": null } } } }