{ "paper_id": "P15-1046", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:09:55.570781Z" }, "title": "New Transfer Learning Techniques for Disparate Label Sets", "authors": [ { "first": "Young-Bum", "middle": [], "last": "Kim", "suffix": "", "affiliation": { "laboratory": "", "institution": "Microsoft Corporation", "location": { "settlement": "Redmond", "region": "WA" } }, "email": "ybkim@microsoft.com" }, { "first": "Karl", "middle": [], "last": "Stratos", "suffix": "", "affiliation": { "laboratory": "", "institution": "Columbia University", "location": { "settlement": "New York", "region": "NY" } }, "email": "stratos@cs.columbia.edu" }, { "first": "Ruhi", "middle": [], "last": "Sarikaya", "suffix": "", "affiliation": { "laboratory": "", "institution": "Microsoft Corporation", "location": { "settlement": "Redmond", "region": "WA" } }, "email": "ruhi.sarikaya@microsoft.com" }, { "first": "Minwoo", "middle": [], "last": "Jeong", "suffix": "", "affiliation": { "laboratory": "", "institution": "Microsoft Corporation", "location": { "settlement": "Redmond", "region": "WA" } }, "email": "minwoo.jeong@microsoft.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In natural language understanding (NLU), a user utterance can be labeled differently depending on the domain or application (e.g., weather vs. calendar). Standard domain adaptation techniques are not directly applicable to take advantage of the existing annotations because they assume that the label set is invariant. We propose a solution based on label embeddings induced from canonical correlation analysis (CCA) that reduces the problem to a standard domain adaptation task and allows use of a number of transfer learning techniques. We also introduce a new transfer learning technique based on pretraining of hidden-unit CRFs (HUCRFs). We perform extensive experiments on slot tagging on eight personal digital assistant domains and demonstrate that the proposed methods are superior to strong baselines.", "pdf_parse": { "paper_id": "P15-1046", "_pdf_hash": "", "abstract": [ { "text": "In natural language understanding (NLU), a user utterance can be labeled differently depending on the domain or application (e.g., weather vs. calendar). Standard domain adaptation techniques are not directly applicable to take advantage of the existing annotations because they assume that the label set is invariant. We propose a solution based on label embeddings induced from canonical correlation analysis (CCA) that reduces the problem to a standard domain adaptation task and allows use of a number of transfer learning techniques. We also introduce a new transfer learning technique based on pretraining of hidden-unit CRFs (HUCRFs). We perform extensive experiments on slot tagging on eight personal digital assistant domains and demonstrate that the proposed methods are superior to strong baselines.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The main goal of NLU is to automatically extract the meaning of spoken or typed queries. In recent years, this task has become increasingly important as more and more speech-based applications have emerged. Recent releases of personal digital assistants such as Siri, Google Now, Dragon Go and Cortana in smart phones provide natural language based interface for a variety of domains (e.g. places, weather, communications, reminders). 
NLU in these domains is based on statistical machine-learned models, which require annotated training data. Typically, each domain has its own schema for annotating the words and queries. However, the meaning of words and utterances can differ across domains. For example, \"sunny\" is considered a weather condition in the weather domain, but it may be a song title in a music domain. Thus, every time a new application is developed or a new domain is built, a significant amount of resources is invested in creating annotations specific to that application or domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One might attempt to apply existing techniques (Blitzer et al., 2006; Daum\u00e9 III, 2007) in domain adaptation to this problem, but a straightforward application is not possible because these techniques assume that the label set is invariant.", "cite_spans": [ { "start": 47, "end": 69, "text": "(Blitzer et al., 2006;", "ref_id": "BIBREF2" }, { "start": 70, "end": 86, "text": "Daum\u00e9 III, 2007)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we provide a simple and effective solution to this problem by abstracting the label types using canonical correlation analysis (CCA) (Hotelling, 1936), a powerful and flexible statistical technique for dimensionality reduction. We derive a low-dimensional representation for each label type that is maximally correlated with the average context of that label via CCA. These shared label representations, or label embeddings, allow us to map label types across different domains and reduce the setting to a standard domain adaptation problem. After the mapping, we can apply standard transfer learning techniques to solve the problem.", "cite_spans": [ { "start": 164, "end": 181, "text": "(Hotelling, 1936)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Additionally, we introduce a novel pretraining technique for hidden-unit CRFs (HUCRFs) to effectively transfer knowledge from one domain to another. In our experiments, we find that our pretraining method is almost always superior to strong baselines such as the popular domain adaptation method of Daum\u00e9 III (2007).", "cite_spans": [ { "start": 299, "end": 315, "text": "Daum\u00e9 III (2007)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Let D be the number of distinct domains. Let X_i be the space of observed samples for the i-th domain. Let Y_i be the space of possible labels for the i-th domain. In most previous work on domain adaptation (Blitzer et al., 2006; Daum\u00e9 III, 2007), observed data samples may vary but the label space is invariant. 1 That is,", "cite_spans": [ { "start": 208, "end": 230, "text": "(Blitzer et al., 2006;", "ref_id": "BIBREF2" }, { "start": 231, "end": 247, "text": "Daum\u00e9 III, 2007)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Problem description and related work", "sec_num": "2" }, { "text": "Y_i = Y_j \u2200i, j \u2208 {1 . . . D}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem description and related work", "sec_num": "2" }, { "text": "but X_i \u2260 X_j for some domains i and j. 
For example, in part-of-speech (POS) tagging on newswire and biomedical domains, the observed data samples may be radically different, but the POS tag set remains the same. In practice, there are cases where the same query is labeled differently depending on the domain, the application, and the context. For example, \"Fred Myer\" can be tagged differently in \"send a text message to Fred Myer\" and \"get me driving direction to Fred Myer\". In the first case, Fred Myer is a person in the user's contact list, but in the second it is a grocery store.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem description and related work", "sec_num": "2" }, { "text": "So, we relax the constraint that label spaces must be the same. Instead, we assume that surface forms (i.e., words) are similar. This is a natural setting when developing multiple applications over speech utterances; input spaces (service request utterances) do not change drastically, but output spaces (slot tags) might.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem description and related work", "sec_num": "2" }, { "text": "Multi-task learning differs from our task. In general, multi-task learning aims to improve performance across all domains, while our domain adaptation objective is to optimize the performance of the semantic slot tagger on the target domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem description and related work", "sec_num": "2" }, { "text": "Below, we review related work in domain adaptation and natural language understanding (NLU).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem description and related work", "sec_num": "2" }, { "text": "Domain adaptation has been widely used in many natural language processing (NLP) applications, including part-of-speech tagging (Schnabel and Sch\u00fctze, 2014), parsing (McClosky et al., 2010), and machine translation (Foster et al., 2010). Most of this work can be classified as either supervised domain adaptation (Chelba and Acero, 2006; Blitzer et al., 2006; Daume III and Marcu, 2006; Daum\u00e9 III, 2007; Finkel and Manning, 2009; Chen et al., 2011) or semi-supervised adaptation (Ando and Zhang, 2005; Jiang and Zhai, 2007; Kumar et al., 2010; Huang and Yates, 2010). 
Our problem setting falls into the former.", "cite_spans": [ { "start": 127, "end": 140, "text": "(Schnabel and", "ref_id": "BIBREF31" }, { "start": 141, "end": 188, "text": "Sch\u00fctze, 2014), parsing (McClosky et al., 2010)", "ref_id": null }, { "start": 215, "end": 236, "text": "(Foster et al., 2010)", "ref_id": "BIBREF12" }, { "start": 310, "end": 334, "text": "(Chelba and Acero, 2006;", "ref_id": "BIBREF4" }, { "start": 335, "end": 356, "text": "Blitzer et al., 2006;", "ref_id": "BIBREF2" }, { "start": 357, "end": 383, "text": "Daume III and Marcu, 2006;", "ref_id": "BIBREF7" }, { "start": 384, "end": 400, "text": "Daum\u00e9 III, 2007;", "ref_id": "BIBREF8" }, { "start": 401, "end": 426, "text": "Finkel and Manning, 2009;", "ref_id": "BIBREF11" }, { "start": 427, "end": 445, "text": "Chen et al., 2011)", "ref_id": "BIBREF5" }, { "start": 476, "end": 498, "text": "(Ando and Zhang, 2005;", "ref_id": "BIBREF1" }, { "start": 499, "end": 520, "text": "Jiang and Zhai, 2007;", "ref_id": "BIBREF16" }, { "start": 521, "end": 540, "text": "Kumar et al., 2010;", "ref_id": "BIBREF21" }, { "start": 541, "end": 563, "text": "Huang and Yates, 2010)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2.1" }, { "text": "Multi-task learning has become popular in NLP. Sutton and McCallum (2005) showed that joint learning and/or decoding of sub-tasks helps to improve performance. (Footnote 1: Multilingual learning (Kim et al., 2011; Kim and Snyder, 2012; Kim and Snyder, 2013) has the same setting.) Collobert and Weston (2008) demonstrated a similar claim in a deep learning architecture. While our problem resembles their settings, there are two clear distinctions. First, we aim to optimize performance on the target domain by minimizing the gap between the source and target domains, while multi-task learning jointly learns the shared tasks. Second, in our problem the domains are different but closely related, whereas prior work focuses on multiple subtasks of the same data.", "cite_spans": [ { "start": 47, "end": 73, "text": "Sutton and McCallum (2005)", "ref_id": "BIBREF32" }, { "start": 116, "end": 134, "text": "(Kim et al., 2011;", "ref_id": "BIBREF19" }, { "start": 135, "end": 156, "text": "Kim and Snyder, 2012;", "ref_id": "BIBREF17" }, { "start": 157, "end": 178, "text": "Kim and Snyder, 2013)", "ref_id": "BIBREF18" }, { "start": 265, "end": 292, "text": "Collobert and Weston (2008)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2.1" }, { "text": "Despite the increasing interest in NLU (De Mori et al., 2008; Xu and Sarikaya, 2013; Xu and Sarikaya, 2014; Anastasakos et al., 2014; El-Kahky et al., 2014; Marin et al., 2014; Celikyilmaz et al., 2015; Ma et al., 2015; Kim et al., 2015), transfer learning in the context of NLU has not been much explored. The most relevant previous work is Tur (2006) and Li et al. (2011), both of which described the effectiveness of multi-task learning in the context of NLU. For multi-task learning, they used shared slots by associating each slot type with an aggregate active feature weight vector based on an existing domain-specific slot tagger. 
Our empirical results show that these vector representations might be helpful for finding shared slots across domains, but they cannot recover a bijective mapping between domains.", "cite_spans": [ { "start": 39, "end": 61, "text": "(De Mori et al., 2008;", "ref_id": "BIBREF9" }, { "start": 62, "end": 84, "text": "Xu and Sarikaya, 2013;", "ref_id": "BIBREF34" }, { "start": 85, "end": 107, "text": "Xu and Sarikaya, 2014;", "ref_id": "BIBREF35" }, { "start": 108, "end": 133, "text": "Anastasakos et al., 2014;", "ref_id": "BIBREF0" }, { "start": 134, "end": 156, "text": "El-Kahky et al., 2014;", "ref_id": "BIBREF10" }, { "start": 157, "end": 176, "text": "Marin et al., 2014;", "ref_id": "BIBREF27" }, { "start": 177, "end": 202, "text": "Celikyilmaz et al., 2015;", "ref_id": "BIBREF3" }, { "start": 203, "end": 219, "text": "Ma et al., 2015;", "ref_id": "BIBREF25" }, { "start": 220, "end": 237, "text": "Kim et al., 2015)", "ref_id": "BIBREF20" }, { "start": 343, "end": 353, "text": "Tur (2006)", "ref_id": "BIBREF33" }, { "start": 358, "end": 374, "text": "Li et al. (2011)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2.1" }, { "text": "Also, Jeong and Lee (2009) presented a transfer learning approach in multi-domain NLU, where the model jointly learns slot taggers in multiple domains and simultaneously predicts domain detection and slot tagging results. 2 To share parameters across domains, they added an additional node for domain prediction on top of the slot sequence. However, this framework is also limited to a setting in which the label set remains invariant. In contrast, our method is not restricted to this setting and requires no modification of the models.", "cite_spans": [ { "start": 6, "end": 26, "text": "Jeong and Lee (2009)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2.1" }, { "text": "The techniques proposed in Sections 4 and 5 are generic methodologies and are not tied to any particular model, such as sequence models or instance-based models. However, because of its superior performance over the CRF, we use the hidden unit CRF (HUCRF) of Maaten et al. (2011). While popular and effective, a CRF is still a linear model. In contrast, a HUCRF benefits from nonlinearity, leading to superior performance over the CRF (Maaten et al., 2011). Thus we focus on HUCRFs to demonstrate our techniques in the experiments.", "cite_spans": [ { "start": 250, "end": 270, "text": "Maaten et al. (2011)", "ref_id": "BIBREF26" }, { "start": 423, "end": 444, "text": "(Maaten et al., 2011)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Sequence Modeling Technique", "sec_num": "3" }, { "text": "A HUCRF introduces a layer of binary-valued hidden units z = z_1 . . . z_n \u2208 {0, 1}^n for each pair of a label sequence y = y_1 . . . y_n and an observation sequence x = x_1 . . . x_n. 
A HUCRF parametrized by \u03b8 \u2208 R^d and \u03b3 \u2208 R^d defines a joint probability of y and z conditioned on x as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hidden Unit CRF (HUCRF)", "sec_num": "3.1" }, { "text": "p_{\u03b8,\u03b3}(y, z|x) = exp(\u03b8^T \u03a6(x, z) + \u03b3^T \u03a8(z, y)) / \u2211_{z' \u2208 {0,1}^n} \u2211_{y' \u2208 Y(x, z')} exp(\u03b8^T \u03a6(x, z') + \u03b3^T \u03a8(z', y')) (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hidden Unit CRF (HUCRF)", "sec_num": "3.1" }, { "text": "where Y(x, z) is the set of all possible label sequences for x and z, and \u03a6(x, z) \u2208 R^d and \u03a8(z, y) \u2208 R^d are global feature functions that decompose into local feature functions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hidden Unit CRF (HUCRF)", "sec_num": "3.1" }, { "text": "\u03a6(x, z) = \u2211_{j=1}^{n} \u03c6(x, j, z_j) and \u03a8(z, y) = \u2211_{j=1}^{n} \u03c8(z_j, y_{j\u22121}, y_j)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hidden Unit CRF (HUCRF)", "sec_num": "3.1" }, { "text": "A HUCRF forces the interaction between the observations and the labels at each position j to go through a latent variable z_j; see Figure 1 for an illustration. Then the probability of labels y is given by marginalizing over the hidden units:", "cite_spans": [], "ref_spans": [ { "start": 130, "end": 138, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Hidden Unit CRF (HUCRF)", "sec_num": "3.1" }, { "text": "p_{\u03b8,\u03b3}(y|x) = \u2211_{z \u2208 {0,1}^n} p_{\u03b8,\u03b3}(y, z|x)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hidden Unit CRF (HUCRF)", "sec_num": "3.1" }, { "text": "As in restricted Boltzmann machines (Larochelle and Bengio, 2008), hidden units are conditionally independent given observations and labels. This allows for efficient inference with HUCRFs despite their richness (see Maaten et al. (2011) for details). We use the perceptron-style algorithm of Maaten et al. (2011) for training HUCRFs.", "cite_spans": [ { "start": 36, "end": 65, "text": "(Larochelle and Bengio, 2008)", "ref_id": "BIBREF22" }, { "start": 218, "end": 238, "text": "Maaten et al. (2011)", "ref_id": "BIBREF26" }, { "start": 292, "end": 312, "text": "Maaten et al. (2011)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Hidden Unit CRF (HUCRF)", "sec_num": "3.1" }, { "text": "In this section, we describe three methods for utilizing annotations in domains with different label types. The first two methods transfer features, and the last transfers model parameters. Each of these methods requires some sort of mapping between label types: a fine-grained label type needs to be mapped to a coarse one, or a label type in one domain needs to be mapped to the corresponding label type in another domain. We provide a solution for obtaining these label mappings automatically in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transfer learning between domains with different label sets", "sec_num": "4" }, { "text": "This approach has some similarities to the method of Li et al. (2011) in that shared slots are used to transfer information between domains. In this two-stage approach, we train a model on the source domain, make predictions on the target domain, and then use the predicted labels as additional features to train a final model on the target domain. 
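To make the two-stage procedure concrete, here is a minimal sketch, not the authors' implementation: the tagger-training interface, the feature templates, and the label-reduction table are hypothetical placeholders, and only the overall flow (predicting labels with the source-domain model, possibly reduced to coarse types as described next, and appending them as features for the target-domain model) reflects the description above.

```python
# Minimal sketch of the two-stage scheme; all helpers are hypothetical.
# train_tagger(features, labels) is assumed to return a model with .predict(features).

def base_features(words, j):
    # Simple lexical features for position j (illustrative only).
    prev = words[j - 1] if j > 0 else "<s>"
    return {"w[0]=" + words[j]: 1.0, "w[-1]=" + prev: 1.0}

def two_stage_train(src_data, trg_data, to_coarse, train_tagger):
    # Stage 1: train on the source domain, with labels reduced to coarse types.
    src_feats = [[base_features(x, j) for j in range(len(x))] for x, _ in src_data]
    src_labels = [[to_coarse.get(t, t) for t in y] for _, y in src_data]
    stage1 = train_tagger(src_feats, src_labels)

    # Stage 2: predict coarse labels on target-domain sentences and append them
    # as extra features before training the final target-domain tagger.
    trg_feats, trg_labels = [], []
    for x, y in trg_data:
        feats = [base_features(x, j) for j in range(len(x))]
        for f, c in zip(feats, stage1.predict(feats)):
            f["coarse=" + c] = 1.0
        trg_feats.append(feats)
        trg_labels.append(y)
    return train_tagger(trg_feats, trg_labels)
```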
Such a two-stage scheme can be helpful if there is some correlation between the label types in the source domain and the label types in the target domain. However, it is not desirable to use the source domain label types directly, since they can be highly specific to that particular domain. An effective way to combat this problem is to reduce the original label types, such as start-time, contact-info, and restaurant, to a set of coarse label types, such as name, date, time, and location, that are universally shared across all domains. By doing so, we can use the first model to predict generic labels such as time, and the second model can then use this information to predict fine-grained labels such as start-time and end-time.", "cite_spans": [ { "start": 53, "end": 69, "text": "Li et al. (2011)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Coarse-to-fine prediction", "sec_num": "4.1" }, { "text": "In this popular technique for domain adaptation, we train a model on the union of the source domain data and the target domain data, but with the following preprocessing step: each feature is duplicated and the copy is conjoined with a domain indicator. For example, in a WEATHER domain dataset, a feature that indicates the identity of the string \"Sunny\" will generate both w(0) = Sunny and (w(0) = Sunny) \u2227 (domain = WEATHER) as feature types. This preprocessing allows the model to utilize all data through the common features and at the same time specialize to specific domains through the domain-specific features. This is especially helpful when there is label ambiguity on particular features (e.g., \"Sunny\" might be a weather-condition in a WEATHER domain dataset but a music-song-name in a MUSIC domain dataset).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method of Daum\u00e9 III (2007)", "sec_num": "4.2" }, { "text": "Note that a straightforward application of this technique is in general not feasible in our situation. This is because we have features conjoined with label types and our domains do not share label types. This breaks the sharing of features across domains: many feature types in the source domain are disjoint from those in the target domain due to the different labeling.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method of Daum\u00e9 III (2007)", "sec_num": "4.2" }, { "text": "Thus it is necessary to first map source domain label types to target domain label types. After the mapping, features are shared across domains and we can apply this technique.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method of Daum\u00e9 III (2007)", "sec_num": "4.2" }, { "text": "In this approach, we train a HUCRF on the source domain and transfer the learned parameters to initialize the training process on the target domain. This can be helpful for at least two reasons (a sketch follows the list below):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transferring model parameter", "sec_num": "4.3" }, { "text": "1. The resulting model will have parameters for feature types observed in the source domain as well as the target domain. Thus it has better feature coverage.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transferring model parameter", "sec_num": "4.3" }, { "text": "2. If the training objective is non-convex, this initialization can be helpful in avoiding bad local optima.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transferring model parameter", "sec_num": "4.3" },
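The following is a rough sketch of the parameter-transfer idea referenced above; the training interface and the weight containers are hypothetical, and the actual HUCRF objectives used for pretraining are given in Section 4.3.1 below.

```python
# Sketch of transferring HUCRF parameters from a source to a target domain.
# train_hucrf(data, init=None) is assumed to run perceptron-style HUCRF training,
# optionally starting from the given (theta, gamma) weight dictionaries.

def transfer_parameters(src_data, trg_data, label_map, train_hucrf):
    # Train on the source domain first.
    theta_src, gamma_src = train_hucrf(src_data)

    # Remap label-dependent parameters (gamma) so that source label types line up
    # with target label types; observation parameters (theta) carry over directly
    # and give broader feature coverage. gamma is assumed to be keyed by
    # (hidden_state, prev_label, label) triples.
    gamma_init = {}
    for (z, y_prev, y), weight in gamma_src.items():
        key = (z, label_map.get(y_prev, y_prev), label_map.get(y, y))
        gamma_init[key] = weight

    # Continue training on the target domain from the transferred weights.
    return train_hucrf(trg_data, init=(theta_src, gamma_init))
```

If the label mapping is unreliable, one could instead transfer only the theta weights and re-learn gamma from scratch, mirroring the option discussed in Section 4.3.1.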
{ "text": "Since the training objective of HUCRFs is non-convex, both of these benefits can apply. We show in our experiments that this is indeed the case: the model benefits from both better feature coverage and better initialization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transferring model parameter", "sec_num": "4.3" }, { "text": "Note that in order to use this approach, we need to map source domain label types to target domain label types so that we know which parameter in the source domain corresponds to which parameter in the target domain. This can be a many-to-one, one-to-many, or one-to-one mapping depending on the label sets. (Figure 2: Illustration of a pretraining scheme for HUCRFs.)", "cite_spans": [], "ref_spans": [ { "start": 145, "end": 153, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Transferring model parameter", "sec_num": "4.3" }, { "text": "In fact, pretraining HUCRFs in the source domain can be done in various ways. Recall that there are two parameter types: \u03b8 \u2208 R^d for scoring observations and hidden states and \u03b3 \u2208 R^d for scoring hidden states and labels (Eq. (1)). In pretraining, we first train a model (\u03b8_1, \u03b3_1) on the source data {(x_src^(i), y_src^(i))}_{i=1}^{n_src}:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pretraining with HUCRFs", "sec_num": "4.3.1" }, { "text": "(\u03b8_1, \u03b3_1) \u2248 arg max_{\u03b8,\u03b3} \u2211_{i=1}^{n_src} log p_{\u03b8,\u03b3}(y_src^(i) | x_src^(i))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pretraining with HUCRFs", "sec_num": "4.3.1" }, { "text": "Then we train a model (\u03b8_2, \u03b3_2) on the target data {(x_trg^(i), y_trg^(i))}_{i=1}^{n_trg} by initializing (\u03b8_2, \u03b3_2) \u2190 (\u03b8_1, \u03b3_1):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pretraining with HUCRFs", "sec_num": "4.3.1" }, { "text": "(\u03b8_2, \u03b3_2) \u2248 arg max_{\u03b8,\u03b3} \u2211_{i=1}^{n_trg} log p_{\u03b8,\u03b3}(y_trg^(i) | x_trg^(i))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pretraining with HUCRFs", "sec_num": "4.3.1" }, { "text": "Here, we can choose to initialize only \u03b8_2 \u2190 \u03b8_1 and discard the parameters for hidden states and labels, since they may not be the same. The \u03b8_1 parameters model the hidden structures in the source domain data and serve as a good initialization point for learning the \u03b8_2 parameters in the target domain. This can be helpful if the mapping between the label types in the source data and the label types in the target data is unreliable. This process is illustrated in Figure 2.", "cite_spans": [], "ref_spans": [ { "start": 468, "end": 476, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Pretraining with HUCRFs", "sec_num": "4.3.1" }, { "text": "All methods described in Section 4 require a way to propagate the information in label types across different domains. A straightforward solution would be to manually construct such mappings by inspection. For instance, we can specify that start-time and end-time are grouped as the same label time, or that the label public-transportation-route in the PLACES domain maps to the label implicit-location in the CALENDAR domain.
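A hand-built mapping of this kind might look like the following sketch (the label names and the relabeling helper are purely illustrative), which would be applied to rename source-domain annotations before using any of the methods in Section 4:

```python
# Illustrative, hand-specified label mapping (hypothetical label names).
MANUAL_LABEL_MAP = {
    "start-time": "time",                                  # group fine-grained types
    "end-time": "time",
    "public-transportation-route": "implicit-location",   # PLACES -> CALENDAR
}

def relabel(tagged_words, mapping=MANUAL_LABEL_MAP):
    # Labels with no specified correspondence are left unchanged.
    return [(w, mapping.get(label, label)) for w, label in tagged_words]
```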
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic generation of label mappings", "sec_num": "5" }, { "text": "Instead, we propose a technique that automatically generates the label mappings. We induce vector representations for all label types through canonical correlation analysis (CCA), a powerful and flexible technique for deriving low-dimensional representations. We give a review of CCA in Section 5.1 and describe how we use the technique to construct label mappings in Section 5.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic generation of label mappings", "sec_num": "5" }, { "text": "CCA is a general technique that operates on a pair of multi-dimensional variables. CCA finds k dimensions (k is a parameter to be specified) in which these variables are maximally correlated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Canonical Correlation Analysis (CCA)", "sec_num": "5.1" }, { "text": "Let x_1 . . . x_n \u2208 R^d and y_1 . . . y_n \u2208 R^d be n samples of the two variables. For simplicity, assume that these variables have zero mean. Then CCA computes the following for i = 1 . . . k:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Canonical Correlation Analysis (CCA)", "sec_num": "5.1" }, { "text": "arg max u i \u2208R d , v i \u2208R d : u i u i =0 \u2200i