{ "paper_id": "P09-1001", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:55:10.995278Z" }, "title": "Heterogeneous Transfer Learning for Image Clustering via the Social Web", "authors": [ { "first": "Qiang", "middle": [], "last": "Yang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Hong Kong University of Science and Technology", "location": { "addrLine": "Clearway Bay", "settlement": "Kowloon, Hong Kong" } }, "email": "qyang@cs.ust.hk" }, { "first": "Yuqiang", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "Shanghai Jiao Tong University", "location": { "addrLine": "800 Dongchuan Road", "postCode": "200240", "settlement": "Shanghai", "country": "China" } }, "email": "yuqiangchen@apex.sjtu.edu.cn" }, { "first": "Gui-Rong", "middle": [], "last": "Xue", "suffix": "", "affiliation": { "laboratory": "", "institution": "Shanghai Jiao Tong University", "location": { "addrLine": "800 Dongchuan Road", "postCode": "200240", "settlement": "Shanghai", "country": "China" } }, "email": "grxue@apex.sjtu.edu.cn" }, { "first": "Wenyuan", "middle": [], "last": "Dai", "suffix": "", "affiliation": { "laboratory": "", "institution": "Shanghai Jiao Tong University", "location": { "addrLine": "800 Dongchuan Road", "postCode": "200240", "settlement": "Shanghai", "country": "China" } }, "email": "" }, { "first": "Yong", "middle": [], "last": "Yu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Shanghai Jiao Tong University", "location": { "addrLine": "800 Dongchuan Road", "postCode": "200240", "settlement": "Shanghai", "country": "China" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper, we present a new learning scenario, heterogeneous transfer learning, which improves learning performance when the data can be in different feature spaces and where no correspondence between data instances in these spaces is provided. In the past, we have classified Chinese text documents using English training data under the heterogeneous transfer learning framework. In this paper, we present image clustering as an example to illustrate how unsupervised learning can be improved by transferring knowledge from auxiliary heterogeneous data obtained from the social Web. Image clustering is useful for image sense disambiguation in query-based image search, but its quality is often low due to imagedata sparsity problem. We extend PLSA to help transfer the knowledge from social Web data, which have mixed feature representations. Experiments on image-object clustering and scene clustering tasks show that our approach in heterogeneous transfer learning based on the auxiliary data is indeed effective and promising.", "pdf_parse": { "paper_id": "P09-1001", "_pdf_hash": "", "abstract": [ { "text": "In this paper, we present a new learning scenario, heterogeneous transfer learning, which improves learning performance when the data can be in different feature spaces and where no correspondence between data instances in these spaces is provided. In the past, we have classified Chinese text documents using English training data under the heterogeneous transfer learning framework. In this paper, we present image clustering as an example to illustrate how unsupervised learning can be improved by transferring knowledge from auxiliary heterogeneous data obtained from the social Web. 
Image clustering is useful for image sense disambiguation in query-based image search, but its quality is often low due to imagedata sparsity problem. We extend PLSA to help transfer the knowledge from social Web data, which have mixed feature representations. Experiments on image-object clustering and scene clustering tasks show that our approach in heterogeneous transfer learning based on the auxiliary data is indeed effective and promising.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Traditional machine learning relies on the availability of a large amount of data to train a model, which is then applied to test data in the same feature space. However, labeled data are often scarce and expensive to obtain. Various machine learning strategies have been proposed to address this problem, including semi-supervised learning (Zhu, 2007) , domain adaptation (Wu and Dietterich, 2004; Blitzer et al., 2006; Blitzer et al., 2007; Arnold et al., 2007; Chan and Ng, 2007; Daume, 2007; Jiang and Zhai, 2007; Reichart and Rappoport, 2007; Andreevskaia and Bergler, 2008) , multi-task learning (Caruana, 1997; Reichart et al., 2008; Arnold et al., 2008) , self-taught learning (Raina et al., 2007) , etc. A commonality among these methods is that they all require the training data and test data to be in the same feature space. In addition, most of them are designed for supervised learning. However, in practice, we often face the problem where the labeled data are scarce in their own feature space, whereas there may be a large amount of labeled heterogeneous data in another feature space. In such situations, it would be desirable to transfer the knowledge from heterogeneous data to domains where we have relatively little training data available.", "cite_spans": [ { "start": 341, "end": 352, "text": "(Zhu, 2007)", "ref_id": "BIBREF37" }, { "start": 373, "end": 398, "text": "(Wu and Dietterich, 2004;", "ref_id": "BIBREF35" }, { "start": 399, "end": 420, "text": "Blitzer et al., 2006;", "ref_id": "BIBREF4" }, { "start": 421, "end": 442, "text": "Blitzer et al., 2007;", "ref_id": "BIBREF5" }, { "start": 443, "end": 463, "text": "Arnold et al., 2007;", "ref_id": "BIBREF1" }, { "start": 464, "end": 482, "text": "Chan and Ng, 2007;", "ref_id": "BIBREF8" }, { "start": 483, "end": 495, "text": "Daume, 2007;", "ref_id": "BIBREF12" }, { "start": 496, "end": 517, "text": "Jiang and Zhai, 2007;", "ref_id": "BIBREF20" }, { "start": 518, "end": 547, "text": "Reichart and Rappoport, 2007;", "ref_id": "BIBREF30" }, { "start": 548, "end": 579, "text": "Andreevskaia and Bergler, 2008)", "ref_id": "BIBREF0" }, { "start": 602, "end": 617, "text": "(Caruana, 1997;", "ref_id": "BIBREF7" }, { "start": 618, "end": 640, "text": "Reichart et al., 2008;", "ref_id": "BIBREF31" }, { "start": 641, "end": 661, "text": "Arnold et al., 2008)", "ref_id": "BIBREF2" }, { "start": 685, "end": 705, "text": "(Raina et al., 2007)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To learn from heterogeneous data, researchers have previously proposed multi-view learning (Blum and Mitchell, 1998; Nigam and Ghani, 2000) in which each instance has multiple views in different feature spaces. 
Different from previous works, we focus on the problem of heterogeneous transfer learning, which is designed for situation when the training data are in one feature space (such as text), and the test data are in another (such as images), and there may be no correspondence between instances in these spaces. The type of heterogeneous data can be very different, as in the case of text and image. To consider how heterogeneous transfer learning relates to other types of learning, Figure 1 presents an intuitive illustration of four learning strategies, including traditional machine learning, transfer learning across different distributions, multi-view learning and heterogeneous transfer learning. As we can see, an important distinguishing feature of heterogeneous transfer learning, as compared to other types of learning, is that more constraints on the problem are relaxed, such that data instances do not need to correspond anymore. This allows, for example, a collection of Chinese text documents to be classified using another collection of English text as the training data (c.f. (Ling et al., 2008) and Section 2.1).", "cite_spans": [ { "start": 91, "end": 116, "text": "(Blum and Mitchell, 1998;", "ref_id": "BIBREF6" }, { "start": 117, "end": 139, "text": "Nigam and Ghani, 2000)", "ref_id": "BIBREF28" }, { "start": 1301, "end": 1320, "text": "(Ling et al., 2008)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 691, "end": 699, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we will give an illustrative example of heterogeneous transfer learning to demonstrate how the task of image clustering can benefit from learning from the heterogeneous social Web data. A major motivation of our work is Web-based image search, where users submit textual queries and browse through the returned result pages. One problem is that the user queries are often ambiguous. An ambiguous keyword such as \"Apple\" might retrieve images of Apple computers and mobile phones, or images of fruits. Image clustering is an effective method for improving the accessibility of image search result. Loeff et al. (2006) addressed the image clustering problem with a focus on image sense discrimination. In their approach, images associated with textual features are used for clustering, so that the text and images are clustered at the same time. Specifically, spectral clustering is applied to the distance matrix built from a multimodal feature set associated with the images to get a better feature representation. This new representation contains both image and text information, with which the performance of image clustering is shown to be improved. A problem with this approach is that when images contained in the Web search results are very scarce and when the textual data associated with the images are very few, clustering on the images and their associated text may not be very effective.", "cite_spans": [ { "start": 612, "end": 631, "text": "Loeff et al. (2006)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Different from these previous works, in this paper, we address the image clustering problem as a heterogeneous transfer learning problem. We aim to leverage heterogeneous auxiliary data, social annotations, etc. to enhance image clustering performance. 
We observe that the World Wide Web has many annotated images in Web sites such as Flickr (http://www.flickr.com), which can be used as auxiliary information source for our clustering task. In this work, our objective is to cluster a small collection of images that we are interested in, where these images are not sufficient for traditional clustering algorithms to perform well due to data sparsity and the low level of image features. We investigate how to utilize the readily available socially annotated image data on the Web to improve image clustering. Although these auxiliary data may be irrelevant to the images to be clustered and cannot be directly used to solve the data sparsity problem, we show that they can still be used to estimate a good latent feature representation, which can be used to improve image clustering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section, we summarize our previous work on cross-language classification as an example of heterogeneous transfer learning. This example is related to our image clustering problem because they both rely on data from different feature spaces.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heterogeneous Transfer Learning Between Languages", "sec_num": "2.1" }, { "text": "As the World Wide Web in China grows rapidly, it has become an increasingly important problem to be able to accurately classify Chinese Web pages. However, because the labeled Chinese Web pages are still not sufficient, we often find it difficult to achieve high accuracy by applying traditional machine learning algorithms to the Chinese Web pages directly. Would it be possible to make the best use of the relatively abundant labeled English Web pages for classifying the Chinese Web pages?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heterogeneous Transfer Learning Between Languages", "sec_num": "2.1" }, { "text": "To answer this question, in (Ling et al., 2008) , we developed a novel approach for classifying the Web pages in Chinese using the training documents in English. In this subsection, we give a brief summary of this work. The problem to be solved is: we are given a collection of labeled English documents and a large number of unlabeled Chinese documents. The English and Chinese texts are not aligned. Our objective is to classify the Chinese documents into the same label space as the English data.", "cite_spans": [ { "start": 28, "end": 47, "text": "(Ling et al., 2008)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Heterogeneous Transfer Learning Between Languages", "sec_num": "2.1" }, { "text": "Our key observation is that even though the data use different text features, they may still share many of the same semantic information. What we need to do is to uncover this latent semantic information by finding out what is common among them. We did this in (Ling et al., 2008) by using the information bottleneck theory (Tishby et al., 1999) . In our work, we first translated the Chinese document into English automatically using some available translation software, such as Google translate. Then, we encoded the training text as well as the translated target text together, in terms of the information theory. We allowed all the information to be put through a 'bottleneck' and be represented by a limited number of code- words (i.e. labels in the classification problem). 
Finally, information bottleneck was used to maintain most of the common information between the two data sources, and discard the remaining irrelevant information. In this way, we can approximate the ideal situation where similar training and translated test pages shared in the common part are encoded into the same codewords, and are thus assigned the correct labels. In (Ling et al., 2008) , we experimentally showed that heterogeneous transfer learning can indeed improve the performance of cross-language text classification as compared to directly training learning models (e.g., Naive Bayes or SVM) and testing on the translated texts.", "cite_spans": [ { "start": 261, "end": 280, "text": "(Ling et al., 2008)", "ref_id": "BIBREF24" }, { "start": 324, "end": 345, "text": "(Tishby et al., 1999)", "ref_id": null }, { "start": 1153, "end": 1172, "text": "(Ling et al., 2008)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Heterogeneous Transfer Learning Between Languages", "sec_num": "2.1" }, { "text": "In the past, several other works made use of transfer learning for cross-feature-space learning. Wu and Oard (2008) proposed to handle the crosslanguage learning problem by translating the data into a same language and applying kNN on the latent topic space for classification. Most learning algorithms for dealing with cross-language heterogeneous data require a translator to convert the data to the same feature space. For those data that are in different feature spaces where no translator is available, Davis and Domingos (2008) proposed a Markov-logic-based transfer learning algorithm, which is called deep transfer, for transferring knowledge between biological domains and Web domains. Dai et al. (2008a) proposed a novel learning paradigm, known as translated learning, to deal with the problem of learning heterogeneous data that belong to quite different feature spaces by using a risk minimization framework.", "cite_spans": [ { "start": 97, "end": 115, "text": "Wu and Oard (2008)", "ref_id": "BIBREF36" }, { "start": 508, "end": 533, "text": "Davis and Domingos (2008)", "ref_id": "BIBREF13" }, { "start": 695, "end": 713, "text": "Dai et al. (2008a)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Other Works in Transfer Learning", "sec_num": "2.2" }, { "text": "Our work makes use of PLSA. Probabilistic latent semantic analysis (PLSA) is a widely used probabilistic model (Hofmann, 1999) , and could be considered as a probabilistic implementation of latent semantic analysis (LSA) (Deerwester et al., 1990 ). An extension to PLSA was proposed in (Cohn and Hofmann, 2000) , which incorporated the hyperlink connectivity in the PLSA model by using a joint probabilistic model for connectivity and content. 
Moreover, PLSA has shown a lot of applications ranging from text clustering (Hofmann, 2001) to image analysis (Sivic et al., 2005 ).", "cite_spans": [ { "start": 111, "end": 126, "text": "(Hofmann, 1999)", "ref_id": "BIBREF18" }, { "start": 221, "end": 245, "text": "(Deerwester et al., 1990", "ref_id": "BIBREF14" }, { "start": 286, "end": 310, "text": "(Cohn and Hofmann, 2000)", "ref_id": "BIBREF9" }, { "start": 554, "end": 573, "text": "(Sivic et al., 2005", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Relation to PLSA", "sec_num": "2.3" }, { "text": "Compared to many previous works on image clustering, we note that traditional image clustering is generally based on techniques such as Kmeans (MacQueen, 1967) and hierarchical clustering (Kaufman and Rousseeuw, 1990) . However, when the data are sparse, traditional clustering algorithms may have difficulties in obtaining high-quality image clusters. Recently, several researchers have investigated how to leverage the auxiliary information to improve target clustering performance, such as supervised clustering (Finley and Joachims, 2005) , semi-supervised clustering (Basu et al., 2004) , self-taught clustering (Dai et al., 2008b) , etc.", "cite_spans": [ { "start": 143, "end": 159, "text": "(MacQueen, 1967)", "ref_id": "BIBREF27" }, { "start": 188, "end": 217, "text": "(Kaufman and Rousseeuw, 1990)", "ref_id": "BIBREF21" }, { "start": 515, "end": 542, "text": "(Finley and Joachims, 2005)", "ref_id": "BIBREF16" }, { "start": 572, "end": 591, "text": "(Basu et al., 2004)", "ref_id": "BIBREF3" }, { "start": 617, "end": 636, "text": "(Dai et al., 2008b)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Relation to Clustering", "sec_num": "2.4" }, { "text": "In this section, we present our annotation-based probabilistic latent semantic analysis algorithm (aPLSA), which extends the traditional PLSA model by incorporating annotated auxiliary image data. Intuitively, our algorithm aPLSA performs PLSA analysis on the target images, which are converted to an image instance-to-feature cooccurrence matrix. At the same time, PLSA is also applied to the annotated image data from social Web, which is converted into a text-to-imagefeature co-occurrence matrix. In order to unify those two separate PLSA models, these two steps are done simultaneously with common latent variables used as a bridge linking them. Through these common latent variables, which are now constrained by both target image data and auxiliary annotation data, a better clustering result is expected for the target data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Image Clustering with Annotated Auxiliary Data", "sec_num": "3" }, { "text": "Let F = {f i } |F | i=1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Latent Semantic Analysis", "sec_num": "3.1" }, { "text": "be an image feature space, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Latent Semantic Analysis", "sec_num": "3.1" }, { "text": "V = {v i } |V| i=1 be the image data set. 
Each image v i \u2208 V is represented by a bag-of-features {f |f \u2208 v i \u2227 f \u2208 F}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Latent Semantic Analysis", "sec_num": "3.1" }, { "text": "Based on the image data set V, we can estimate an image instance-to-feature co-occurrence matrix A |V|\u00d7|F | \u2208 R |V|\u00d7|F | , where each element A ij (1 \u2264 i \u2264 |V| and 1 \u2264 j \u2264 |F|) in the matrix A is the frequency of the feature f j appearing in the instance", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Latent Semantic Analysis", "sec_num": "3.1" }, { "text": "v i . Let W = {w i } |W| i=1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Latent Semantic Analysis", "sec_num": "3.1" }, { "text": "be a text feature space. The annotated image data allow us to obtain the cooccurrence information between images v and text features w \u2208 W. An example of annotated image data is the Flickr (http://www.flickr. com), which is a social Web site containing a large number of annotated images.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Latent Semantic Analysis", "sec_num": "3.1" }, { "text": "By extracting image features from the annotated images v, we can estimate a text-to-image feature co-occurrence matrix B |W|\u00d7|F | \u2208 R |W|\u00d7|F | , where each element B ij (1 \u2264 i \u2264 |W| and 1 \u2264 j \u2264 |F|) in the matrix B is the frequency of the text feature w i and the image feature f j occurring together in the annotated image data set. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Latent Semantic Analysis", "sec_num": "3.1" }, { "text": "Let Z = {z i } |Z| i=1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Latent Semantic Analysis", "sec_num": "3.1" }, { "text": "be the latent variable set in our aPLSA model. In clustering, each latent variable z i \u2208 Z corresponds to a certain cluster.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Latent Semantic Analysis", "sec_num": "3.1" }, { "text": "Our objective is to estimate a clustering function g : V \u2192 Z with the help of the two cooccurrence matrices A and B as defined above.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Latent Semantic Analysis", "sec_num": "3.1" }, { "text": "To formally introduce the aPLSA model, we start from the probabilistic latent semantic analysis (PLSA) (Hofmann, 1999) model. PLSA is a probabilistic implementation of latent semantic analysis (LSA) (Deerwester et al., 1990) . 
In our image clustering task, PLSA decomposes the instance-feature co-occurrence matrix A under the assumption of conditional independence of image instances V and image features F, given the latent variables Z.", "cite_spans": [ { "start": 103, "end": 118, "text": "(Hofmann, 1999)", "ref_id": "BIBREF18" }, { "start": 199, "end": 224, "text": "(Deerwester et al., 1990)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Latent Semantic Analysis", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(f|v) = \\sum_{z \\in \\mathcal{Z}} P(f|z) P(z|v).", "eq_num": "(1)" } ], "section": "Probabilistic Latent Semantic Analysis", "sec_num": "3.1" }, { "text": "The graphical model representation of PLSA is shown in Figure 2 .", "cite_spans": [], "ref_spans": [ { "start": 55, "end": 63, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Probabilistic Latent Semantic Analysis", "sec_num": "3.1" }, { "text": "Based on the PLSA model, the log-likelihood can be defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Latent Semantic Analysis", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\mathcal{L} = \\sum_{i} \\sum_{j} \\frac{A_{ij}}{\\sum_{j'} A_{ij'}} \\log P(f_j|v_i)", "eq_num": "(2)" } ], "section": "Probabilistic Latent Semantic Analysis", "sec_num": "3.1" }, { "text": "where A \u2208 R^{|V|\u00d7|F|} is the image instance-feature co-occurrence matrix. The term A_ij / \u03a3_j' A_ij' in Equation (2) is a normalization term ensuring that each image is given the same weight in the log-likelihood.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Latent Semantic Analysis", "sec_num": "3.1" }, { "text": "Using the EM algorithm (Dempster et al., 1977) , which locally maximizes the log-likelihood of the PLSA model (Equation (2)), the probabilities P (f |z) and P (z|v) can be estimated.", "cite_spans": [ { "start": 23, "end": 46, "text": "(Dempster et al., 1977)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Latent Semantic Analysis", "sec_num": "3.1" },
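{ "text": "To make this step concrete, the following is a minimal numpy sketch of the PLSA EM iteration under our notation (A is the instance-feature co-occurrence matrix and K the number of latent variables); the function name, the dense-array layout, and the smoothing constant are our own illustrative choices, not the exact implementation used in our experiments.
import numpy as np

def plsa(A, K, iters=100, seed=0):
    # A: |V| x |F| co-occurrence matrix of Equation (2)
    rng = np.random.default_rng(seed)
    V, F = A.shape
    A_norm = A / (A.sum(axis=1, keepdims=True) + 1e-12)  # A_ij / sum_j' A_ij'
    p_f_z = rng.random((K, F)); p_f_z /= p_f_z.sum(axis=1, keepdims=True)  # P(f|z)
    p_z_v = rng.random((V, K)); p_z_v /= p_z_v.sum(axis=1, keepdims=True)  # P(z|v)
    for _ in range(iters):
        # E-step: P(z|v,f) proportional to P(f|z) P(z|v)
        post = p_z_v[:, :, None] * p_f_z[None, :, :]     # V x K x F
        post /= post.sum(axis=1, keepdims=True) + 1e-12
        # M-step: re-estimate P(z|v) and P(f|z) from the weighted posteriors
        w = A_norm[:, None, :] * post
        p_z_v = w.sum(axis=2); p_z_v /= p_z_v.sum(axis=1, keepdims=True) + 1e-12
        p_f_z = w.sum(axis=0); p_f_z /= p_f_z.sum(axis=1, keepdims=True) + 1e-12
    return p_f_z, p_z_v", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Latent Semantic Analysis", "sec_num": "3.1" },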
{ "text": "Then, the clustering function is derived as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Latent Semantic Analysis", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "g(v) = \\arg\\max_{z \\in \\mathcal{Z}} P(z|v).", "eq_num": "(3)" } ], "section": "Probabilistic Latent Semantic Analysis", "sec_num": "3.1" }, { "text": "Due to space limitations, we omit the details of the PLSA model, which can be found in (Hofmann, 1999) .", "cite_spans": [ { "start": 87, "end": 102, "text": "(Hofmann, 1999)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Latent Semantic Analysis", "sec_num": "3.1" }, { "text": "Figure 3: Graphical model representation of the aPLSA model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "aPLSA: Annotation-based PLSA", "sec_num": "3.2" }, { "text": "In this section, we consider how to incorporate a large number of socially annotated images in a unified PLSA model for the purpose of utilizing the correlation between text features and image features. In the auxiliary data, each image has certain textual tags that are attached by users. The correlation between text features and image features can be formulated as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "aPLSA: Annotation-based PLSA", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(f|w) = \\sum_{z \\in \\mathcal{Z}} P(f|z) P(z|w).", "eq_num": "(4)" } ], "section": "aPLSA: Annotation-based PLSA", "sec_num": "3.2" }, { "text": "It is clear that Equations (1) and (4) share the same term P (f |z). So we design a new PLSA model by joining the probabilistic model in Equation (1) and the probabilistic model in Equation (4) into a unified model, as shown in Figure 3 . In Figure 3 , the latent variables Z depend not only on the correlation between image instances V and image features F, but also on the correlation between text features W and image features F. 
Therefore, the auxiliary socially-annotated image data can be used to help the target image clustering performance by estimating a good set of latent variables Z.", "cite_spans": [], "ref_spans": [ { "start": 228, "end": 236, "text": "Figure 3", "ref_id": null }, { "start": 242, "end": 250, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "aPLSA: Annotation-based PLSA", "sec_num": "3.2" }, { "text": "Based on the graphical model representation in Figure 3 , we derive the log-likelihood objective function, in a similar way as in (Cohn and Hofmann, 2000) , as follows", "cite_spans": [ { "start": 130, "end": 154, "text": "(Cohn and Hofmann, 2000)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 47, "end": 55, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "aPLSA: Annotation-based PLSA", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\mathcal{L} = \\sum_{j} \\left[ \\lambda \\sum_{i} \\frac{A_{ij}}{\\sum_{j'} A_{ij'}} \\log P(f_j|v_i) + (1 - \\lambda) \\sum_{l} \\frac{B_{lj}}{\\sum_{j'} B_{lj'}} \\log P(f_j|w_l) \\right],", "eq_num": "(5)" } ], "section": "aPLSA: Annotation-based PLSA", "sec_num": "3.2" }, { "text": "where A \u2208 R^{|V|\u00d7|F|} is the image instance-feature co-occurrence matrix, and B \u2208 R^{|W|\u00d7|F|} is the text-to-image feature-level co-occurrence matrix. Similar to Equation (2), the terms A_ij / \u03a3_j' A_ij' and B_lj / \u03a3_j' B_lj' in Equation (5) are normalization terms that prevent imbalanced cases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "aPLSA: Annotation-based PLSA", "sec_num": "3.2" }, { "text": "Furthermore, \u03bb acts as a trade-off parameter between the co-occurrence matrices A and B. In the extreme case when \u03bb = 1, the log-likelihood objective function ignores all the biases from the text-to-image occurrence matrix B. In this case, the aPLSA model degenerates to the traditional PLSA model. Therefore, aPLSA is an extension of the PLSA model. Now, the objective is to maximize the log-likelihood L of the aPLSA model in Equation (5).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "aPLSA: Annotation-based PLSA", "sec_num": "3.2" },
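{ "text": "As a concrete companion to the updates derived next (Equations (6)-(10)), the following numpy sketch performs one EM iteration of aPLSA and evaluates the objective in Equation (5). The function name, the dense-array layout, and the smoothing constant are our own illustrative choices; A_norm and B_norm are assumed to be the row-normalized co-occurrence matrices.
import numpy as np

def aplsa_step(A_norm, B_norm, p_f_z, p_z_v, p_z_w, lam, eps=1e-12):
    # E-step (Eqs. 6-7): posteriors P(z|v,f) and P(z|w,f)
    post_v = p_z_v[:, :, None] * p_f_z[None, :, :]   # |V| x K x |F|
    post_v /= post_v.sum(axis=1, keepdims=True) + eps
    post_w = p_z_w[:, :, None] * p_f_z[None, :, :]   # |W| x K x |F|
    post_w /= post_w.sum(axis=1, keepdims=True) + eps
    # M-step (Eqs. 8-9): re-estimate P(z|v) and P(z|w)
    p_z_v = (A_norm[:, None, :] * post_v).sum(axis=2)
    p_z_v /= p_z_v.sum(axis=1, keepdims=True) + eps
    p_z_w = (B_norm[:, None, :] * post_w).sum(axis=2)
    p_z_w /= p_z_w.sum(axis=1, keepdims=True) + eps
    # M-step (Eq. 10): P(f|z) mixes the two data sources through lambda
    p_f_z = (lam * (A_norm[:, None, :] * post_v).sum(axis=0)
             + (1 - lam) * (B_norm[:, None, :] * post_w).sum(axis=0))
    p_f_z /= p_f_z.sum(axis=1, keepdims=True) + eps
    # objective of Equation (5); P(f|v) = sum_z P(f|z) P(z|v)
    L = (lam * (A_norm * np.log(p_z_v @ p_f_z + eps)).sum()
         + (1 - lam) * (B_norm * np.log(p_z_w @ p_f_z + eps)).sum())
    return p_f_z, p_z_v, p_z_w, L
Iterating this step until the change in L falls below a threshold gives Algorithm 1 below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "aPLSA: Annotation-based PLSA", "sec_num": "3.2" },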
{ "text": "Then we apply the EM algorithm (Dempster et al., 1977) to estimate the conditional probabilities P (f |z), P (z|w) and P (z|v) with respect to each dependence in Figure 3 as follows.", "cite_spans": [ { "start": 31, "end": 54, "text": "(Dempster et al., 1977)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 162, "end": 170, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "aPLSA: Annotation-based PLSA", "sec_num": "3.2" }, { "text": "\u2022 E-Step: calculate the posterior probability of each latent variable z given the observation of image features f , image instances v and text features w, based on the old estimates of P (f |z), P (z|w) and P (z|v):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "aPLSA: Annotation-based PLSA", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(z_k|v_i, f_j) = \\frac{P(f_j|z_k) P(z_k|v_i)}{\\sum_{k'} P(f_j|z_{k'}) P(z_{k'}|v_i)} \\quad (6) \\qquad P(z_k|w_l, f_j) = \\frac{P(f_j|z_k) P(z_k|w_l)}{\\sum_{k'} P(f_j|z_{k'}) P(z_{k'}|w_l)}", "eq_num": "(7)" } ], "section": "aPLSA: Annotation-based PLSA", "sec_num": "3.2" }, { "text": "\u2022 M-Step: re-estimate the conditional probabilities P (z k |v i ) and P (z k |w l ):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "aPLSA: Annotation-based PLSA", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(z_k|v_i) = \\sum_{j} \\frac{A_{ij}}{\\sum_{j'} A_{ij'}} P(z_k|v_i, f_j) \\quad (8) \\qquad P(z_k|w_l) = \\sum_{j} \\frac{B_{lj}}{\\sum_{j'} B_{lj'}} P(z_k|w_l, f_j)", "eq_num": "(9)" } ], "section": "aPLSA: Annotation-based PLSA", "sec_num": "3.2" }, { "text": "and the conditional probability P (f j |z k ), which is a mixture of the posterior probabilities of the latent variables:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "aPLSA: Annotation-based PLSA", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(f_j|z_k) \\propto \\lambda \\sum_{i} \\frac{A_{ij}}{\\sum_{j'} A_{ij'}} P(z_k|v_i, f_j) + (1 - \\lambda) \\sum_{l} \\frac{B_{lj}}{\\sum_{j'} B_{lj'}} P(z_k|w_l, f_j)", "eq_num": "(10)" } ], "section": "aPLSA: Annotation-based PLSA", "sec_num": "3.2" }, { "text": "Finally, the clustering function for a certain image v is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "aPLSA: Annotation-based PLSA", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "g(v) = \\arg\\max_{z \\in \\mathcal{Z}} P(z|v).", "eq_num": "(11)" } ], "section": "aPLSA: Annotation-based PLSA", "sec_num": "3.2" }, { "text": "From the above equations, we can derive our annotation-based probabilistic latent semantic analysis (aPLSA) algorithm. As shown in Algorithm 1, aPLSA iteratively performs the E-Step and the M-Step in order to seek a local optimum of the objective function L in Equation (5).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "aPLSA: Annotation-based PLSA", "sec_num": "3.2" }, { "text": "Algorithm 1 Annotation-based PLSA Algorithm (aPLSA) Input: The V-F co-occurrence matrix A and the W-F co-occurrence matrix B. Output: A clustering (partition) function g : V \u2192 Z, which maps an image instance v \u2208 V to a latent variable z \u2208 Z.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "aPLSA: Annotation-based PLSA", "sec_num": "3.2" }, { "text": "1: Initialize Z so that |Z| equals the number of clusters desired. 
2: Initialize P (z|v), P (z|w), P (f |z) randomly. 3: while the change of L in Eq. (5) exceeds a threshold do 4: E-Step: update P (z|v, f ) and P (z|w, f ) by Eqs. (6) and (7). 5: M-Step: update P (z|v), P (z|w) and P (f |z) by Eqs. (8), (9) and (10). 6: end while 7: for each image v in V do 8: g(v) \u2190 argmax_z P (z|v). 9: end for 10: Return g.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "aPLSA: Annotation-based PLSA", "sec_num": "3.2" }, { "text": "In this section, we empirically evaluate the aPLSA algorithm together with several state-of-the-art baseline methods on two widely used image corpora, to demonstrate the effectiveness of our algorithm aPLSA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "In order to evaluate the effectiveness of our algorithm aPLSA, we conducted experiments on several data sets generated from two image corpora, Caltech-256 (Griffin et al., 2007) and the fifteen-scene data set (Lazebnik et al., 2006) . The Caltech-256 data set has 256 image object categories, ranging from animals to buildings, from plants to automobiles, etc. The fifteen-scene data set contains 15 scenes such as store and forest. From these two corpora, we randomly generated eleven image clustering tasks, including seven 2-way clustering tasks, two 4-way clustering tasks, one 5-way clustering task and one 8-way clustering task. The detailed descriptions of these clustering tasks are given in Table 1 .

DATA SET | INVOLVED CLASSES | DATA SIZE
bi1 | skateboard, airplanes | 102, 800
bi2 | billiards, mars | 278, 155
bi3 | cd, greyhound | 102, 94
bi4 | electric-guitar, snake | 122, 112
bi5 | calculator, dolphin | 100, 106
bi6 | mushroom, teddy-bear | 202, 99
bi7 | MIThighway, livingroom | 260, 289
quad1 | calculator, diamond-ring, dolphin, microscope | 100, 118, 106, 116
quad2 | bonsai, comet, frog, saddle | 122, 120, 115, 110
quint1 | frog, kayak, bear, jesus-christ, watch | 115, 102, 101, 87, 201
oct1 | MIThighway, MITmountain, kitchen, MITcoast, PARoffice, MITtallbuilding, livingroom, bedroom | 260, 374, 210, 360, 215, 356, 289

Table 1 : The descriptions of all the image clustering tasks used in our experiment. 
Among these data sets, bi7 and oct1 were generated from the fifteen-scene data set, and the rest were from the Caltech-256 data set.", "cite_spans": [ { "start": 155, "end": 177, "text": "(Griffin et al., 2007)", "ref_id": "BIBREF17" }, { "start": 209, "end": 232, "text": "(Lazebnik et al., 2006)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 698, "end": 705, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Data Sets", "sec_num": "4.1" }, { "text": "To empirically investigate the parameter \u03bb and the convergence of our algorithm aPLSA, we generated five more data sets as the development sets. The detailed description of these five development sets, namely tune1 to tune5, is listed in Table 1 as well.", "cite_spans": [], "ref_spans": [ { "start": 238, "end": 245, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Data Sets", "sec_num": "4.1" }, { "text": "The auxiliary data were crawled from the Flickr (http://www.flickr.com/) web site during August 2007. Flickr is an Internet community where people share photos online and express their opinions as social tags (annotations) attached to each image. From Flickr, we collected 19,959 images and 91,719 related annotations, among which 2,600 words are distinct. Based on the method described in Section 3, we estimated the co-occurrence matrix B between text features and image features. This co-occurrence matrix B was used by all the clustering tasks in our experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Sets", "sec_num": "4.1" }, { "text": "For data preprocessing, we adopted the bag-of-features representation of images (Li and Perona, 2005) in our experiments. Interest points were detected in the images and described via SIFT descriptors (Lowe, 2004) . Then, the interest points were clustered to generate a codebook that forms an image feature space. The size of the codebook was set to 2,000 in our experiments.", "cite_spans": [ { "start": 80, "end": 101, "text": "(Li and Perona, 2005)", "ref_id": "BIBREF23" }, { "start": 201, "end": 213, "text": "(Lowe, 2004)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Data Sets", "sec_num": "4.1" },
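{ "text": "As an illustration of this pipeline, the following sketch builds the codebook by k-means over pooled SIFT descriptors and then histograms each image over the visual words. The helper names are hypothetical, the descriptors are assumed to be extracted already (any SIFT implementation can supply them), and scikit-learn provides the k-means step.
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(descriptor_sets, codebook_size=2000, seed=0):
    # descriptor_sets: one (n_i x 128) array of SIFT descriptors per image
    all_descriptors = np.vstack(descriptor_sets)
    return KMeans(n_clusters=codebook_size, random_state=seed).fit(all_descriptors)

def bag_of_features(descriptor_sets, codebook):
    # each image becomes a histogram over codebook entries (visual words)
    A = np.zeros((len(descriptor_sets), codebook.n_clusters))
    for i, descriptors in enumerate(descriptor_sets):
        words = codebook.predict(descriptors)
        A[i] = np.bincount(words, minlength=codebook.n_clusters)
    return A  # rows of the |V| x |F| co-occurrence matrix of Section 3.1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Sets", "sec_num": "4.1" },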
{ "text": "Based on the codebook, which serves as the image feature space, each image can be represented as a corresponding feature vector to be used in the next step.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Sets", "sec_num": "4.1" }, { "text": "To set our evaluation criterion, we used the entropy measure. Entropy (Shannon, 1948) is a measure of the uncertainty associated with a random variable. In our problem, entropy serves as a measure of the randomness of the clustering result. The entropy of g on a single latent variable z is defined to be H(g, z) = \u2212\u03a3_{c\u2208C} P(c|z) log_2 P(c|z), where C is the class label set of V and P(c|z) = |{v | g(v)=z \u2227 t(v)=c}| / |{v | g(v)=z}|, in which t(v) is the true class label of image v. Lower entropy H(g, Z) indicates less randomness and thus a better clustering result.", "cite_spans": [ { "start": 70, "end": 85, "text": "(Shannon, 1948)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Data Sets", "sec_num": "4.1" }, { "text": "We now empirically analyze the effectiveness of our aPLSA algorithm. Because, to the best of our knowledge, few existing methods have addressed the problem of image clustering with the help of socially annotated image data, we can only compare our aPLSA with several state-of-the-art clustering algorithms that are not directly designed for our problem. The first baseline is the well-known KMeans algorithm (MacQueen, 1967) . Since our algorithm is designed based on PLSA (Hofmann, 1999) , we also included PLSA for clustering as a baseline method in our experiments.", "cite_spans": [ { "start": 408, "end": 424, "text": "(MacQueen, 1967)", "ref_id": "BIBREF27" }, { "start": 473, "end": 488, "text": "(Hofmann, 1999)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Empirical Analysis", "sec_num": "4.2" }, { "text": "For each of the above two baselines, we have two strategies: (1) separated: the baseline method was applied to the target image data only; (2) combined: the baseline method was applied to cluster the combined data consisting of both the target image data and the annotated image data. Clustering results on the target image data were used for evaluation. Note that, in the combined data, all the annotations were thrown away, since the baseline methods evaluated in this paper do not leverage annotation information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Analysis", "sec_num": "4.2" }, { "text": "In addition, we compared our algorithm aPLSA to a state-of-the-art transfer clustering strategy, known as self-taught clustering (STC) (Dai et al., 2008b) . STC makes use of auxiliary data to estimate a better feature representation to benefit the target clustering. In these experiments, the annotated image data were used as auxiliary data in STC; note that STC does not use the annotation text.", "cite_spans": [ { "start": 135, "end": 154, "text": "(Dai et al., 2008b)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Empirical Analysis", "sec_num": "4.2" },
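{ "text": "For reference, the entropy criterion just defined can be computed directly from the cluster assignments and the true labels. A minimal sketch (our own helper; we aggregate over the latent variables z by weighting each cluster by its size, an aggregation the text leaves implicit):
import numpy as np

def clustering_entropy(assignments, labels):
    # H(g, z) = -sum_c P(c|z) log2 P(c|z), averaged over clusters z
    assignments = np.asarray(assignments)
    labels = np.asarray(labels)
    total = 0.0
    for z in np.unique(assignments):
        members = labels[assignments == z]
        p = np.bincount(members) / len(members)  # P(c|z)
        p = p[p > 0]                             # 0 log 0 = 0
        total += (len(members) / len(labels)) * -(p * np.log2(p)).sum()
    return total
Lower values indicate purer clusters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Analysis", "sec_num": "4.2" },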
{ "text": "In our experiments, the performance is reported as the average entropy and variance over five repeats, in each of which 50 images were randomly selected from each of the categories. We selected only 50 images per category, since this paper is focused on clustering sparse data. Table 2 shows the performance of all comparison methods on each of the image clustering tasks, measured by the entropy criterion. From the table, we can see that our algorithm aPLSA outperforms the baseline methods on all the data sets. We believe this is because aPLSA can effectively utilize the knowledge from the socially annotated image data. On average, aPLSA gives rise to a 21.8% entropy reduction as compared to KMeans, a 5.7% entropy reduction as compared to PLSA, and a 10.1% entropy reduction as compared to STC.", "cite_spans": [], "ref_spans": [ { "start": 278, "end": 285, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Empirical Analysis", "sec_num": "4.2" }, { "text": "We now show how the data size affects aPLSA, with the two baseline methods KMeans and PLSA as references. The experiments were conducted on different amounts of target image data, varying from 10 to 80. The corresponding experimental results, in average entropy over all the 11 clustering tasks, are shown in Figure 4(a) . From this figure, we observe that aPLSA always yields a significant reduction in entropy as compared with the two baseline methods KMeans and PLSA, regardless of the size of the target image data that we used.", "cite_spans": [], "ref_spans": [ { "start": 309, "end": 320, "text": "Figure 4(a)", "ref_id": null } ], "eq_spans": [], "section": "Varying Data Size", "sec_num": "4.2.1" }, { "text": "In aPLSA, there is a trade-off parameter \u03bb that affects how much the algorithm relies on the auxiliary data. When \u03bb = 0, aPLSA relies only on the annotated image data B. When \u03bb = 1, aPLSA relies only on the target image data A, in which case aPLSA degenerates to PLSA. A smaller \u03bb indicates heavier reliance on the annotated image data. We conducted experiments on the development sets to investigate how different values of \u03bb affect the performance of aPLSA. We set the number of images per category to 50, and tested the performance of aPLSA. The result, in average entropy over all development sets, is shown in Figure 4 (b). In the experiments described in this paper, we set \u03bb to 0.2, which is the best point in Figure 4(b) .", "cite_spans": [], "ref_spans": [ { "start": 615, "end": 623, "text": "Figure 4", "ref_id": null }, { "start": 717, "end": 728, "text": "Figure 4(b)", "ref_id": null } ], "eq_spans": [], "section": "Parameter Sensitivity", "sec_num": "4.2.2" }, { "text": "In our experiments, we tested the convergence property of our algorithm aPLSA as well. Figure 4(c) shows the average entropy curve given by aPLSA over all development sets. From this figure, we see that the entropy decreases very fast during the first 100 iterations and becomes stable after 150 iterations. 
We believe that 200 iterations are sufficient for aPLSA to converge.", "cite_spans": [], "ref_spans": [ { "start": 87, "end": 98, "text": "Figure 4(c)", "ref_id": null } ], "eq_spans": [], "section": "Convergence", "sec_num": "4.2.3" }, { "text": "In this paper, we proposed a new learning scenario called heterogeneous transfer learning and illustrated its application to image clustering. Image clustering, a vital component in organizing search results for query-based image search, was shown to be improved by transferring knowledge from unrelated images with annotations on the social Web. This is done by first learning the high-quality latent variables in the auxiliary data, and then transferring this knowledge to help improve the clustering of the target image data. We conducted experiments on two image data sets, using the Flickr data as the annotated auxiliary image data, and showed that our aPLSA algorithm can greatly outperform several state-of-the-art clustering algorithms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "In natural language processing, there are many future opportunities to apply heterogeneous transfer learning. In (Ling et al., 2008) we have shown how to classify Chinese text using English text as the training data. We may also consider clustering, topic modeling, question answering, etc., to be done using data in different feature spaces. We can consider data in different modalities, such as video, image and audio, as the training data. Finally, we will explore the theoretical foundations and limitations of heterogeneous transfer learning as well.", "cite_spans": [ { "start": 113, "end": 132, "text": "(Ling et al., 2008)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" } ], "back_matter": [ { "text": "Acknowledgement Qiang Yang thanks Hong Kong CERG grant 621307 for supporting the research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "When specialists and generalists work together: Overcoming domain dependence in sentiment tagging", "authors": [ { "first": "Alina", "middle": [], "last": "Andreevskaia", "suffix": "" }, { "first": "Sabine", "middle": [], "last": "Bergler", "suffix": "" } ], "year": 2008, "venue": "ACL-08: HLT", "volume": "", "issue": "", "pages": "290--298", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alina Andreevskaia and Sabine Bergler. 2008. When specialists and generalists work together: Overcoming domain dependence in sentiment tagging. In ACL-08: HLT, pages 290-298, Columbus, Ohio, June.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A comparative study of methods for transductive transfer learning", "authors": [ { "first": "Andrew", "middle": [], "last": "Arnold", "suffix": "" }, { "first": "Ramesh", "middle": [], "last": "Nallapati", "suffix": "" }, { "first": "William", "middle": [ "W" ], "last": "Cohen", "suffix": "" } ], "year": 2007, "venue": "ICDM 2007 Workshop on Mining and Management of Biological Data", "volume": "", "issue": "", "pages": "77--82", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew Arnold, Ramesh Nallapati, and William W. Cohen. 2007. A comparative study of methods for transductive transfer learning. 
In ICDM 2007 Workshop on Mining and Management of Biological Data, pages 77-82.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Exploiting feature hierarchy for transfer learning in named entity recognition", "authors": [ { "first": "Andrew", "middle": [], "last": "Arnold", "suffix": "" }, { "first": "Ramesh", "middle": [], "last": "Nallapati", "suffix": "" }, { "first": "William", "middle": [ "W" ], "last": "Cohen", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew Arnold, Ramesh Nallapati, and William W. Cohen. 2008. Exploiting feature hierarchy for transfer learning in named entity recognition. In ACL-08: HLT.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A probabilistic framework for semi-supervised clustering", "authors": [ { "first": "Sugato", "middle": [], "last": "Basu", "suffix": "" }, { "first": "Mikhail", "middle": [], "last": "Bilenko", "suffix": "" }, { "first": "Raymond", "middle": [ "J" ], "last": "Mooney", "suffix": "" } ], "year": 2004, "venue": "ACM SIGKDD 2004", "volume": "", "issue": "", "pages": "59--68", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sugato Basu, Mikhail Bilenko, and Raymond J. Mooney. 2004. A probabilistic framework for semi-supervised clustering. In ACM SIGKDD 2004, pages 59-68.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Domain adaptation with structural correspondence learning", "authors": [ { "first": "John", "middle": [], "last": "Blitzer", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2006, "venue": "EMNLP 2006", "volume": "", "issue": "", "pages": "120--128", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Blitzer, Ryan Mcdonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learn- ing. In EMNLP 2006, pages 120-128, Sydney, Australia.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification", "authors": [ { "first": "John", "middle": [], "last": "Blitzer", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2007, "venue": "ACL 2007", "volume": "", "issue": "", "pages": "440--447", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Do- main adaptation for sentiment classification. In ACL 2007, pages 440-447, Prague, Czech Republic.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Combining labeled and unlabeled data with co-training", "authors": [ { "first": "Avrim", "middle": [], "last": "Blum", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Mitchell", "suffix": "" } ], "year": 1998, "venue": "COLT 1998", "volume": "", "issue": "", "pages": "92--100", "other_ids": {}, "num": null, "urls": [], "raw_text": "Avrim Blum and Tom Mitchell. 1998. Combining labeled and unlabeled data with co-training. In COLT 1998, pages 92-100, New York, NY, USA. 
ACM.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Multitask learning", "authors": [ { "first": "Rich", "middle": [], "last": "Caruana", "suffix": "" } ], "year": 1997, "venue": "Machine Learning", "volume": "28", "issue": "", "pages": "41--75", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rich Caruana. 1997. Multitask learning. Machine Learning, 28(1):41-75.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Domain adaptation with active learning for word sense disambiguation", "authors": [ { "first": "Yee", "middle": [], "last": "Seng Chan", "suffix": "" }, { "first": "Hwee Tou", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yee Seng Chan and Hwee Tou Ng. 2007. Domain adaptation with active learning for word sense disambiguation. In ACL 2007, Prague, Czech Republic.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The missing link -a probabilistic model of document content and hypertext connectivity", "authors": [ { "first": "A", "middle": [], "last": "David", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "", "middle": [], "last": "Hofmann", "suffix": "" } ], "year": 2000, "venue": "NIPS 2000", "volume": "", "issue": "", "pages": "430--436", "other_ids": {}, "num": null, "urls": [], "raw_text": "David A. Cohn and Thomas Hofmann. 2000. The missing link -a probabilistic model of document content and hy- pertext connectivity. In NIPS 2000, pages 430-436.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Translated learning: Transfer learning across different feature spaces", "authors": [ { "first": "Wenyuan", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Yuqiang", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Gui-Rong", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Qiang", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Yong", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2008, "venue": "NIPS 2008", "volume": "", "issue": "", "pages": "353--360", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenyuan Dai, Yuqiang Chen, Gui-Rong Xue, Qiang Yang, and Yong Yu. 2008a. Translated learning: Transfer learn- ing across different feature spaces. In NIPS 2008, pages 353-360.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Self-taught clustering", "authors": [ { "first": "Wenyuan", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Qiang", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Gui-Rong", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Yong", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "200--207", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenyuan Dai, Qiang Yang, Gui-Rong Xue, and Yong Yu. 2008b. Self-taught clustering. In ICML 2008, pages 200- 207. Omnipress.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Frustratingly easy domain adaptation", "authors": [ { "first": "Iii", "middle": [], "last": "Hal Daume", "suffix": "" } ], "year": 2007, "venue": "ACL 2007", "volume": "", "issue": "", "pages": "256--263", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hal Daume, III. 2007. Frustratingly easy domain adaptation. 
In ACL 2007, pages 256-263, Prague, Czech Republic.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Deep transfer via second-order markov logic", "authors": [ { "first": "Jesse", "middle": [], "last": "Davis", "suffix": "" }, { "first": "Pedro", "middle": [], "last": "Domingos", "suffix": "" } ], "year": 2008, "venue": "AAAI 2008 Workshop on Transfer Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jesse Davis and Pedro Domingos. 2008. Deep transfer via second-order markov logic. In AAAI 2008 Workshop on Transfer Learning, Chicago, USA.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Indexing by latent semantic analysis", "authors": [ { "first": "Scott", "middle": [], "last": "Deerwester", "suffix": "" }, { "first": "Susan", "middle": [ "T" ], "last": "Dumais", "suffix": "" }, { "first": "George", "middle": [ "W" ], "last": "Furnas", "suffix": "" }, { "first": "Thomas", "middle": [ "K L" ], "last": "", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Harshman", "suffix": "" } ], "year": 1990, "venue": "Journal of the American Society for Information Science", "volume": "", "issue": "", "pages": "391--407", "other_ids": {}, "num": null, "urls": [], "raw_text": "Scott Deerwester, Susan T. Dumais, George W. Furnas, Thomas K. L, and Richard Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society for Information Science, pages 391-407.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Maximum likelihood from incomplete data via the em algorithm", "authors": [ { "first": "A", "middle": [ "P" ], "last": "Dempster", "suffix": "" }, { "first": "N", "middle": [ "M" ], "last": "Laird", "suffix": "" }, { "first": "D", "middle": [ "B" ], "last": "Rubin", "suffix": "" } ], "year": 1977, "venue": "J. of the Royal Statistical Society", "volume": "39", "issue": "", "pages": "1--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Max- imum likelihood from incomplete data via the em algo- rithm. J. of the Royal Statistical Society, 39:1-38.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Supervised clustering with support vector machines", "authors": [ { "first": "Thomas", "middle": [], "last": "Finley", "suffix": "" }, { "first": "Thorsten", "middle": [], "last": "Joachims", "suffix": "" } ], "year": 2005, "venue": "ICML 2005", "volume": "", "issue": "", "pages": "217--224", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Finley and Thorsten Joachims. 2005. Supervised clustering with support vector machines. In ICML 2005, pages 217-224, New York, NY, USA. ACM.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Caltech-256 object category dataset", "authors": [ { "first": "G", "middle": [], "last": "Griffin", "suffix": "" }, { "first": "A", "middle": [], "last": "Holub", "suffix": "" }, { "first": "P", "middle": [], "last": "Perona", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Griffin, A. Holub, and P. Perona. 2007. Caltech-256 ob- ject category dataset. Technical Report 7694, California Institute of Technology.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Probabilistic latent semantic analysis", "authors": [ { "first": "Thomas", "middle": [], "last": "Hofmann", "suffix": "" } ], "year": 1999, "venue": "Proc. of Uncertainty in Artificial Intelligence, UAI99. 
", "volume": "", "issue": "", "pages": "289--296", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Hofmann. 1999. Probabilistic latent semantic analysis. In Proc. of Uncertainty in Artificial Intelligence, UAI99, pages 289-296.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Unsupervised learning by probabilistic latent semantic analysis", "authors": [ { "first": "Thomas", "middle": [], "last": "Hofmann", "suffix": "" } ], "year": 2001, "venue": "Machine Learning", "volume": "42", "issue": "1-2", "pages": "177--196", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Hofmann. 2001. Unsupervised learning by probabilistic latent semantic analysis. Machine Learning, 42(1-2):177-196. Kluwer Academic Publishers.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Instance weighting for domain adaptation in NLP", "authors": [ { "first": "Jing", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Chengxiang", "middle": [], "last": "Zhai", "suffix": "" } ], "year": 2007, "venue": "ACL 2007", "volume": "", "issue": "", "pages": "264--271", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jing Jiang and Chengxiang Zhai. 2007. Instance weighting for domain adaptation in NLP. In ACL 2007, pages 264-271, Prague, Czech Republic, June.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Finding groups in data: an introduction to cluster analysis", "authors": [ { "first": "Leonard", "middle": [], "last": "Kaufman", "suffix": "" }, { "first": "Peter", "middle": [ "J" ], "last": "Rousseeuw", "suffix": "" } ], "year": 1990, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leonard Kaufman and Peter J. Rousseeuw. 1990. Finding groups in data: an introduction to cluster analysis. John Wiley and Sons, New York.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories", "authors": [ { "first": "Svetlana", "middle": [], "last": "Lazebnik", "suffix": "" }, { "first": "Cordelia", "middle": [], "last": "Schmid", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Ponce", "suffix": "" } ], "year": 2006, "venue": "CVPR 2006", "volume": "", "issue": "", "pages": "2169--2178", "other_ids": {}, "num": null, "urls": [], "raw_text": "Svetlana Lazebnik, Cordelia Schmid, and Jean Ponce. 2006. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In CVPR 2006, pages 2169-2178, Washington, DC, USA.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "A Bayesian hierarchical model for learning natural scene categories", "authors": [ { "first": "Fei-Fei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Pietro", "middle": [], "last": "Perona", "suffix": "" } ], "year": 2005, "venue": "CVPR 2005", "volume": "", "issue": "", "pages": "524--531", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fei-Fei Li and Pietro Perona. 2005. A Bayesian hierarchical model for learning natural scene categories.
In CVPR 2005, pages 524-531, Washington, DC, USA.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Can Chinese web pages be classified with English data source?", "authors": [ { "first": "Xiao", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Gui-Rong", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Wenyuan", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Yun", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Qiang", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Yong", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2008, "venue": "WWW 2008", "volume": "", "issue": "", "pages": "969--978", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiao Ling, Gui-Rong Xue, Wenyuan Dai, Yun Jiang, Qiang Yang, and Yong Yu. 2008. Can Chinese web pages be classified with English data source? In WWW 2008, pages 969-978, New York, NY, USA. ACM.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Discriminating image senses by clustering with multimodal features", "authors": [ { "first": "Nicolas", "middle": [], "last": "Loeff", "suffix": "" }, { "first": "Cecilia", "middle": [], "last": "Ovesdotter Alm", "suffix": "" }, { "first": "David", "middle": [ "A" ], "last": "Forsyth", "suffix": "" } ], "year": 2006, "venue": "COLING/ACL 2006 Main conference poster sessions", "volume": "", "issue": "", "pages": "547--554", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nicolas Loeff, Cecilia Ovesdotter Alm, and David A. Forsyth. 2006. Discriminating image senses by clustering with multimodal features. In COLING/ACL 2006 Main conference poster sessions, pages 547-554.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Distinctive image features from scale-invariant keypoints", "authors": [ { "first": "David", "middle": [ "G" ], "last": "Lowe", "suffix": "" } ], "year": 2004, "venue": "International Journal of Computer Vision", "volume": "60", "issue": "2", "pages": "91--110", "other_ids": {}, "num": null, "urls": [], "raw_text": "David G. Lowe. 2004. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision (IJCV), 60(2):91-110.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Some methods for classification and analysis of multivariate observations", "authors": [ { "first": "J", "middle": [ "B" ], "last": "MacQueen", "suffix": "" } ], "year": 1967, "venue": "Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability", "volume": "1", "issue": "", "pages": "281--297", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. B. MacQueen. 1967. Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, volume 1, pages 281-297, Berkeley, CA, USA.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Analyzing the effectiveness and applicability of co-training", "authors": [ { "first": "Kamal", "middle": [], "last": "Nigam", "suffix": "" }, { "first": "Rayid", "middle": [], "last": "Ghani", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the Ninth International Conference on Information and Knowledge Management", "volume": "", "issue": "", "pages": "86--93", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kamal Nigam and Rayid Ghani. 2000. Analyzing the effectiveness and applicability of co-training.
In Proceedings of the Ninth International Conference on Information and Knowledge Management, pages 86-93, New York, USA.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Self-taught learning: transfer learning from unlabeled data", "authors": [ { "first": "Rajat", "middle": [], "last": "Raina", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Battle", "suffix": "" }, { "first": "Honglak", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Packer", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" } ], "year": 2007, "venue": "ICML 2007", "volume": "", "issue": "", "pages": "759--766", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rajat Raina, Alexis Battle, Honglak Lee, Benjamin Packer, and Andrew Y. Ng. 2007. Self-taught learning: transfer learning from unlabeled data. In ICML 2007, pages 759-766, New York, NY, USA. ACM.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Self-training for enhancement and domain adaptation of statistical parsers trained on small datasets", "authors": [ { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Rappoport", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roi Reichart and Ari Rappoport. 2007. Self-training for enhancement and domain adaptation of statistical parsers trained on small datasets. In ACL 2007.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Multi-task active learning for linguistic annotations", "authors": [ { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" }, { "first": "Katrin", "middle": [], "last": "Tomanek", "suffix": "" }, { "first": "Udo", "middle": [], "last": "Hahn", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Rappoport", "suffix": "" } ], "year": 2008, "venue": "ACL-08: HLT", "volume": "", "issue": "", "pages": "861--869", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roi Reichart, Katrin Tomanek, Udo Hahn, and Ari Rappoport. 2008. Multi-task active learning for linguistic annotations. In ACL-08: HLT, pages 861-869.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "A mathematical theory of communication", "authors": [ { "first": "C", "middle": [ "E" ], "last": "Shannon", "suffix": "" } ], "year": 1948, "venue": "Bell System Technical Journal", "volume": "27", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. E. Shannon. 1948. A mathematical theory of communication. Bell System Technical Journal, 27.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Discovering object categories in image collections", "authors": [ { "first": "J", "middle": [], "last": "Sivic", "suffix": "" }, { "first": "B", "middle": [ "C" ], "last": "Russell", "suffix": "" }, { "first": "A", "middle": [ "A" ], "last": "Efros", "suffix": "" }, { "first": "A", "middle": [], "last": "Zisserman", "suffix": "" }, { "first": "W", "middle": [ "T" ], "last": "Freeman", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Sivic, B. C. Russell, A. A. Efros, A. Zisserman, and W. T. Freeman. 2005. Discovering object categories in image collections. In ICCV 2005.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "The information bottleneck method
", "authors": [ { "first": "Naftali", "middle": [], "last": "Tishby", "suffix": "" }, { "first": "Fernando", "middle": [ "C" ], "last": "Pereira", "suffix": "" }, { "first": "William", "middle": [], "last": "Bialek", "suffix": "" } ], "year": 1999, "venue": "Proc. of the 37th Annual Allerton Conference on Communication, Control and Computing", "volume": "", "issue": "", "pages": "368--377", "other_ids": {}, "num": null, "urls": [], "raw_text": "Naftali Tishby, Fernando C. Pereira, and William Bialek. 1999. The information bottleneck method. In Proc. of the 37th Annual Allerton Conference on Communication, Control and Computing, pages 368-377.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Improving SVM accuracy by training on auxiliary data sources", "authors": [ { "first": "Pengcheng", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Thomas", "middle": [ "G" ], "last": "Dietterich", "suffix": "" } ], "year": 2004, "venue": "ICML 2004", "volume": "", "issue": "", "pages": "110--117", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pengcheng Wu and Thomas G. Dietterich. 2004. Improving SVM accuracy by training on auxiliary data sources. In ICML 2004, pages 110-117, New York, NY, USA.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Bilingual topic aspect classification with a few training examples", "authors": [ { "first": "Yejun", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Douglas", "middle": [ "W" ], "last": "Oard", "suffix": "" } ], "year": 2008, "venue": "ACM SIGIR 2008", "volume": "", "issue": "", "pages": "203--210", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yejun Wu and Douglas W. Oard. 2008. Bilingual topic aspect classification with a few training examples. In ACM SIGIR 2008, pages 203-210, New York, NY, USA.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Semi-supervised learning literature survey", "authors": [ { "first": "Xiaojin", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaojin Zhu. 2007. Semi-supervised learning literature survey. Technical Report 1530, Computer Sciences, University of Wisconsin-Madison.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "An intuitive illustration of different kinds of learning strategies, using classification/clustering of apple and banana images as the example.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF1": { "text": "Graphical model representation of the PLSA model.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF2": { "text": "Algorithm listing (fragment): E-Step: update P(z|v, f) and P(z|w, f) based on Eq. (6) and (7), respectively. M-Step: update P(z|v), P(z|w) and P(f|z) based on Eq. (8), (9) and (10), respectively; repeat until convergence.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF3": { "text": "(a) The entropy curve as a function of the amount of data per category. (b) The entropy curve as a function of the number of iterations. (c) The entropy curve as a function of the trade-off parameter λ.", "num": null, "uris": null, "type_str": "figure" }, "TABREF1": { "content": "
entropy to measure the quality of our clustering results. In information theory, entropy (Shannon, 1948)
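
The clustering-quality criterion referenced here is the standard entropy measure: for each output cluster, compute the entropy of the distribution of gold category labels among its members, then average over clusters weighted by cluster size, so that purer clusters yield lower entropy. Below is a minimal sketch of this metric in Python; the function name and the list-based data layout are illustrative assumptions, not details taken from the paper.

import math
from collections import Counter

def clustering_entropy(clusters, labels):
    """Weighted average entropy of a clustering; lower is better.

    clusters: cluster assignment for each instance, e.g. [0, 0, 1, 1]
    labels:   gold category label for each instance
    """
    n = len(labels)
    total = 0.0
    for c in set(clusters):
        # Gold labels of the instances assigned to cluster c
        members = [labels[i] for i in range(n) if clusters[i] == c]
        # Entropy of the label distribution within this cluster
        h = -sum((m / len(members)) * math.log2(m / len(members))
                 for m in Counter(members).values())
        # Weight by the cluster's share of all instances
        total += (len(members) / n) * h
    return total

# One pure cluster plus one evenly mixed cluster: 0.5 * 0 + 0.5 * 1 = 0.5
print(clustering_entropy([0, 0, 1, 1], ["apple", "apple", "apple", "banana"]))

Base-2 logarithms give entropy in bits; using natural logarithms instead would only rescale the metric by a constant factor, leaving the ranking of clusterings unchanged.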