|
{ |
|
"paper_id": "O16-2003", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T08:05:04.161031Z" |
|
}, |
|
"title": "Enriching Cold Start Personalized Language Model Using Social Network Information",
|
"authors": [ |
|
{ |
|
"first": "Yu-Yang", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Taiwan University", |
|
"location": { |
|
"settlement": "Taipei", |
|
"country": "Taiwan" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Rui", |
|
"middle": [], |
|
"last": "Yan", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Pennsylvania", |
|
"location": { |
|
"settlement": "Philadelphia", |
|
"country": "U.S.A" |
|
} |
|
}, |
|
"email": "ruiyan@seas.upenn.edu" |
|
}, |
|
{ |
|
"first": "Tsung-Ting", |
|
"middle": [], |
|
"last": "Kuo", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Taiwan University", |
|
"location": { |
|
"settlement": "Taipei", |
|
"country": "Taiwan" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Shou-De", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Taiwan University", |
|
"location": { |
|
"settlement": "Taipei", |
|
"country": "Taiwan" |
|
} |
|
}, |
|
"email": "sdlin@csie.ntu.edu.tw" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Personalized language models are useful in many applications, such as personalized search and personalized recommendation. Nevertheless, it is challenging to build a personalized language model for cold start users, whose training corpora are too small to create a reasonably accurate and representative model. We introduce a generalized framework to enrich the personalized language models of cold start users. The cold start problem is addressed with content written by friends on social network services. Our framework consists of a mixture language model whose mixture weights are estimated with a factor graph. The factor graph is used to incorporate prior knowledge and heuristics to identify the most appropriate weights. Intrinsic and extrinsic experiments show significant improvements for cold start users.",
|
"pdf_parse": { |
|
"paper_id": "O16-2003", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Personalized language models are useful in many applications, such as personalized search and personalized recommendation. Nevertheless, it is challenging to build a personalized language model for cold start users, whose training corpora are too small to create a reasonably accurate and representative model. We introduce a generalized framework to enrich the personalized language models of cold start users. The cold start problem is addressed with content written by friends on social network services. Our framework consists of a mixture language model whose mixture weights are estimated with a factor graph. The factor graph is used to incorporate prior knowledge and heuristics to identify the most appropriate weights. Intrinsic and extrinsic experiments show significant improvements for cold start users.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Personalized language models on social network services are useful in many aspects (Xue et al., 2009; Wen et al., 2012; Clements, 2007) . For instance, if the authorship of a document is in doubt, a language model may be used as a generative model to identify it. In this sense, a language model serves as a proxy of one's writing style. Furthermore, personalized language models can improve the quality of information retrieval and content-based recommendation systems, where documents or topics can be recommended based on the generative probabilities.", |
|
"cite_spans": [ |
|
{ |
|
"start": 83, |
|
"end": 101, |
|
"text": "(Xue et al., 2009;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 102, |
|
"end": 119, |
|
"text": "Wen et al., 2012;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 120, |
|
"end": 135, |
|
"text": "Clements, 2007)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "It is challenging, however, to build a personalized language model for users who just entered the system since the content posted by these users is insufficient to characterize them. These users are referred to as \"cold start\" users. Since the popularity of a system largely depends on whether new users will continue to stick to the system, it is even more critical to generate good recommendation results for new users. Therefore, this paper focuses on how to obtain a better personalized language model for cold start users.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "To achieve the aforementioned purpose, the content written by friends on a social media site is exploited. It can be a reply or a post written by friends on sites like Facebook or Twitter. Here, the hypothesis is that friends, who usually share common interests, tend to discuss more similar topics and use more similar words than non-friends do. In other words, we believe that a cold start user's language model can be enriched and better personalized by incorporating content written by friends.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Intuitively, a linear combination of language models can be used to mix the content written by friends into a user's language model. Nevertheless, it should be noticed that some documents are more relevant than others and should be weighted higher. To obtain better weights, some simple heuristics could be exploited. For example, we can measure the similarity or dissimilarity between a user language model and a document language model. In addition, documents being shared more frequently in a social network usually are considered to be more influential and should be expected to contribute more to the refinement of a user language model. More complex heuristics also can be derived. For instance, if we can categorize the documents and find that two documents are of the same category, then their weights should be more similar. The main challenge lies in how such heuristics can be utilized in a systematic manner to infer the weights of each document-level language model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "In this paper, we exploit the information on social network services in two ways. First, we impose the social dependency assumption via a finite mixture model. We model the true, albeit unknown, personalized language model as a combination of a biased user language model and a set of relevant document language models. Due to the noise inevitably contained in social media content, instead of using all available documents, we argue that, by properly specifying the set of relevant documents, a better personalized language model can be learnt. In other words, each user language model is enriched by a personalized collection of background documents.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Second, we propose a factor graph model to incorporate prior knowledge (e.g. the heuristics described above) into our model. Each mixture weight is represented by a random variable in the factor graph. An efficient algorithm is proposed to optimize the model and infer the marginal distribution of these variables. Useful information and heuristics are encoded into the model by a set of potential functions.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "The main contributions of this work are summarized below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "- To solve the cold start problem encountered during the estimation of personalized language models, a generalized probabilistic framework is proposed. We incorporate social network information into user language models through the use of a factor graph model. An iterative optimization procedure utilizing perplexity is presented to learn the model parameters. To our knowledge, this is the first proposal to use a factor graph model to enrich language models.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "- Perplexity is selected as an intrinsic evaluation, and an experiment on authorship attribution is used as an extrinsic evaluation. The results show that our model yields significant improvements for cold start users.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "We describe how to construct and enrich a personalized language model in this section. In the first subsection, we propose a social-driven, personalized mixture language model. The original, poorly estimated user language model is enriched with a set of relevant document language models. In Section 2.2, a graphical model is presented to identify the mixture weights of each mixture component. The relative importance of each mixture component, i.e. document language model, is determined with the use of prior knowledge that comes from a social network. In Section 2.3, we describe how the model is optimized under a lack of labelled information.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "The language model of a collection of documents can be estimated by normalizing the counts of words in the entire collection (Zhai, 2008). To build a user language model, one na\u00efve way is to first normalize the word frequencies within each document and then average over all the documents in a user's document collection. The resulting unigram user language model is:",
|
"cite_spans": [ |
|
{ |
|
"start": 125, |
|
"end": 137, |
|
"text": "(Zhai, 2008)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Social-Driven Personalized Language Model", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "p(w | θ_u) = (1/|D_u|) ∑_{d ∈ D_u} p(w | θ_d) (1)",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Social-Driven Personalized Language Model", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "where θ_d is the language model of a particular document, D_u is the user's document collection, and |·| denotes the number of elements in a set. This formulation is basically an equal-weighted finite mixture model.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Social-Driven Personalized Language Model", |
|
"sec_num": "2.1" |
|
}, |
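As an illustrative sketch (not the paper's code) of the equal-weighted mixture in Equation (1), assuming each document is given as a list of tokens; the function names are ours:

```python
from collections import Counter

def doc_lm(tokens):
    """Unigram language model of one document: normalized word counts."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def naive_user_lm(docs):
    """Equal-weighted mixture of the user's document LMs, as in Eq. (1)."""
    models = [doc_lm(d) for d in docs]
    vocab = set().union(*(m.keys() for m in models))
    return {w: sum(m.get(w, 0.0) for m in models) / len(models) for w in vocab}
```

Because each document model is normalized before averaging, the result is itself a valid probability distribution over the vocabulary.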
|
{ |
|
"text": "A simple yet effective way to smooth a language model is to linearly interpolate it with a background language model (Chen & Goodman, 1996; Zhai & Lafferty, 2001). In the linear interpolation method, all background documents are treated equally: the entire document collection is added to the user language model with the same interpolation coefficient. On social media, however, articles are often short and noisy, so user language models generated in this way are prone to overfitting. To obtain a better personalized user language model, we must take the complicated document-level correlations and dissimilarities into consideration; this is the motivation for our approach.",
|
"cite_spans": [ |
|
{ |
|
"start": 114, |
|
"end": 136, |
|
"text": "(Chen & Goodman, 1996;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 137, |
|
"end": 158, |
|
"text": "Zhai & Lafferty, 2001", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Social-Driven Personalized Language Model", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Our main idea is to specify a set of relevant documents for the target user and enrich the user language model with these documents. Then, through the use of the information embedded in a social network, the relative importance of these documents is learnt. Suppose that the target user is u. Letting D_f denote the content posted by the people that are most relevant to u (e.g. friends on a social network), our idea can be concisely expressed as:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Social-Driven Personalized Language Model", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "p(w | θ'_u) = (1 - λ) p(w | θ_u) + λ ∑_{d ∈ D_f} w_d p(w | θ_d) (2)",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Social-Driven Personalized Language Model", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "where w_d is the mixture weight of the language model of document d, and ∑_{d ∈ D_f} w_d = 1.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Social-Driven Personalized Language Model", |
|
"sec_num": "2.1" |
|
}, |
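A minimal sketch of the enrichment mixture in Equation (2), assuming language models are word-to-probability dicts; `lam` and the function names are our own placeholders, and setting `lam=0` recovers the base model:

```python
def enrich_lm(base_lm, doc_lms, weights, lam=0.5):
    """Eq. (2): interpolate the base user LM with weighted friend-document LMs.
    `weights` must sum to 1 over the friend documents D_f."""
    assert abs(sum(weights) - 1.0) < 1e-9
    vocab = set(base_lm).union(*(set(m) for m in doc_lms))
    mixed = {}
    for w in vocab:
        friends = sum(wt * m.get(w, 0.0) for wt, m in zip(weights, doc_lms))
        mixed[w] = (1.0 - lam) * base_lm.get(w, 0.0) + lam * friends
    return mixed
```

Since both the base model and the weighted friend mixture are distributions, any convex combination of them is again a valid distribution.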
|
{ |
|
"text": "Documents posted by irrelevant users are ignored, as we believe the user language model can be better personalized by exploiting the social relationship in a more structured way. In our experiments, we choose the documents posted by friends as D_f.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Social-Driven Personalized Language Model", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Also note that we have made no assumption about how the \"base\" user language model is built. In practice, it need not be a maximum likelihood estimate; any language model can be integrated into our framework to achieve a more refined model. Furthermore, any smoothing method can be applied to the language model without degrading the effectiveness.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Social-Driven Personalized Language Model", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Now, we discuss how the mixture weights can be estimated. We introduce a factor graph model to make use of the diverse information on a social network. A factor graph (Kschischang et al., 2006) is a bipartite graph consisting of a set of random variables and a set of factors that signify the relationships among the variables. It is best suited to situations where the data is clearly of a relational nature (Wang et al., 2012). The joint distribution of the variables is factored according to the graph structure. Using a factor graph model, we can incorporate the knowledge into the potential functions for optimization and perform joint inference over documents.",
|
"cite_spans": [ |
|
{ |
|
"start": 165, |
|
"end": 191, |
|
"text": "(Kschischang et al., 2006)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 409, |
|
"end": 428, |
|
"text": "(Wang et al., 2012)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Factor Graph Model", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "A factor graph model is presented in Figure 1. As can be seen from Equation 2, there are |D_f| unknown mixture weights to be estimated. For each mixture weight w_d, we put a Bernoulli random variable y_d in the factor graph. The value y_d = 1 means that the document d should be included in the enriched personalized language model of the target user. In this sense, a larger value of p(y_d = 1) implies a higher mixture weight of d. In particular, we set w_d to be proportional to p(y_d = 1) in the final estimation. The local potential function of each variable is defined as φ(y_d) = exp(α · f(y_d)) (3), where f(·) is a vector of predefined feature functions and α is the parameter vector to be learnt. We assume that all feature functions take a value of zero if y_d = 0. So, the larger the value of α · f(y_d = 1) is, the higher is the value φ(y_d = 1) relative to φ(y_d = 0).",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 37, |
|
"end": 45, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Factor Graph Model", |
|
"sec_num": "2.2" |
|
}, |
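A hedged sketch of the exponential local potential in Equation (3); the function names are ours, and the normalized "local preference" is only the belief implied by this single factor, not the full marginal inferred over the graph:

```python
import math

def phi(y, alpha, feats):
    """Local potential of Eq. (3): exp(alpha . f(y)); f(0) = 0, so phi(0) = 1."""
    active = feats if y == 1 else [0.0] * len(feats)
    return math.exp(sum(a * f for a, f in zip(alpha, active)))

def local_preference(alpha, feats):
    """Probability of y = 1 implied by the local potential alone."""
    p1, p0 = phi(1, alpha, feats), phi(0, alpha, feats)
    return p1 / (p0 + p1)
```

With all feature values zeroed at y = 0, the potential reduces to a logistic preference in alpha . f, which is exactly the quantity the "LR" baseline in Section 3.2 uses without the pairwise factors.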
|
{ |
|
"text": "In other words, w_d is (locally) believed to be higher.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Using Social Network Information", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In our experiment, we define the vector of local feature functions f as follows.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Using Social Network Information", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "- Similarity function f_sim . The similarity between the language models of the target user and a document should play an important role. We use the cosine similarity between two unigram models in our experiments.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Using Social Network Information", |
|
"sec_num": null |
|
}, |
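A small sketch of the cosine-similarity feature f_sim over two unigram models represented as word-to-probability dicts (our own representation, not specified in the paper):

```python
import math

def cosine_sim(lm_a, lm_b):
    """f_sim: cosine similarity between two unigram LMs (word -> prob dicts)."""
    dot = sum(p * lm_b.get(w, 0.0) for w, p in lm_a.items())
    na = math.sqrt(sum(p * p for p in lm_a.values()))
    nb = math.sqrt(sum(p * p for p in lm_b.values()))
    return dot / (na * nb) if na > 0 and nb > 0 else 0.0
```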
|
{ |
|
"text": "- Document quality function f_oov . The out-of-vocabulary (OOV) ratio is used to measure the quality of a document. It is defined as:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Using Social Network Information", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "f_oov(y_d = 1) = |{w : w ∈ d, w ∉ V}| / |d|",
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Using Social Network Information", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where V is the vocabulary set of the entire corpus, with stop words excluded. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Using Social Network Information", |
|
"sec_num": null |
|
}, |
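The OOV-ratio feature of Equation (4) can be sketched as follows, assuming a document is a token list and V is a set of in-vocabulary words (names are ours):

```python
def oov_ratio(doc_tokens, vocab):
    """f_oov (Eq. 4): fraction of a document's tokens outside the corpus vocabulary V."""
    if not doc_tokens:
        return 0.0
    return sum(1 for w in doc_tokens if w not in vocab) / len(doc_tokens)
```

A higher ratio signals a noisier, lower-quality document, which should receive a smaller mixture weight.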
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "ψ(y_i, y_j) = exp(β · g(y_i, y_j))",
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Using Social Network Information", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where g is a vector of pairwise feature functions indicating whether two variables are correlated, and β is the corresponding parameter vector to be learnt. We assume that the two variables are not connected in the factor graph if g(y_i, y_j) = 0.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Using Social Network Information", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "If we further denote the set of all variables linked to y_d as N(y_d), then, for any variable y_d, we obtain the following result:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Using Social Network Information", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P(y_d | N(y_d)) = φ(y_d) ∏_{y_i ∈ N(y_d)} ψ(y_d, y_i) / ∑_{y'_d ∈ {0,1}} φ(y'_d) ∏_{y_i ∈ N(y_d)} ψ(y'_d, y_i)",
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "Using Social Network Information", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "which is a function of y_d only. This expression will be used in the following equations.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Using Social Network Information", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We define the vector of pairwise feature functions g as follows.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Using Social Network Information", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "- User relationship function g_rel . We assume that two variables y_i and y_j are more highly correlated if the corresponding documents are posted by users who are friends.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Using Social Network Information", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The similarity between two documents is measured by the cosine similarity between their unigram language models.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Using Social Network Information", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "- Co-category function g_cat . For any two variables y_i and y_j, it is intuitive that the two variables are more highly correlated if the corresponding documents belong to the same category.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Using Social Network Information", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The flexibility of the proposed framework lies in the following aspects.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Using Social Network Information", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "- The factor graph model is adaptable. The feature functions are not restricted to the ones we have used and can be freely added or redesigned in order to properly model different datasets.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Using Social Network Information", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "- The set of relevant documents D_f can be changed. In our experiment, we used the documents posted by friends to enrich the language model. Nevertheless, this is not a requirement. Whenever appropriate, documents posted by friends of friends, or any arbitrary set of documents, can be used instead.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Using Social Network Information", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "- As we mentioned at the end of Section 2.1, the \"base\" user language model may already be smoothed by any technique. Furthermore, the language models need not be unigram models: if a higher order n-gram model is more suitable, it can be used in our framework. For our particular dataset, however, we find that higher order n-gram models give no advantage.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Using Social Network Information", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Let Y be the set of all random variables. The joint distribution encoded by the factor graph model is given by multiplying all potential functions:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Inference and Optimization", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P(Y) = (1/Z) ∏_d φ(y_d) ∏_{(i,j)} ψ(y_i, y_j)",
|
"eq_num": "(9)" |
|
} |
|
], |
|
"section": "Model Inference and Optimization", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "where Z is a normalization term to ensure that the probability sums to one.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Inference and Optimization", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "The desired marginal distribution can be obtained by marginalizing all other variables. Under most circumstances, however, the factor graph is densely connected. This makes exact inference intractable, and approximate inference is required. After obtaining the marginal probabilities with the approximate inference algorithm, the mixture weights in Eq. 2 are estimated by normalizing the corresponding marginal probabilities to satisfy the constraint ∑_{d ∈ D_f} w_d = 1. The normalization can be written as:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Inference and Optimization", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "w_d = p(y_d = 1) / ∑_{d' ∈ D_f} p(y_{d'} = 1)",
|
"eq_num": "(10)" |
|
} |
|
], |
|
"section": "Model Inference and Optimization", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "It can be verified that the above equation leads to a valid probability distribution for our mixture model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Inference and Optimization", |
|
"sec_num": "2.3" |
|
}, |
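The normalization of Equation (10) is a one-liner; here is a sketch with marginals keyed by document id (our own representation):

```python
def mixture_weights(marginals):
    """Eq. (10): normalize p(y_d = 1) over documents so the weights sum to 1."""
    total = sum(marginals.values())
    return {d: p / total for d, p in marginals.items()}
```

Because each weight is a non-negative marginal divided by their sum, the weights form a valid distribution over D_f, which is what Equation (2) requires.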
|
{ |
|
"text": "The proposed factor graph model has |α| + |β| parameters, where |·| here denotes the dimensionality of a vector. Combining Equation 2 and Equation 10, it can be observed that the total number of parameters in the mixture model is reduced from 1 + |D_f| to",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Inference and Optimization", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "1 + |α| + |β|, lowering the risk of overfitting.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Inference and Optimization", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "A factor graph is often optimized by gradient-based methods. Unfortunately, since the ground truth values of the mixture weights are not available, we are prohibited from using these approaches. Here, we propose a two-step iterative procedure to optimize our model with respect to the model perplexity on held-out data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Inference and Optimization", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "At first, all of the model parameters (i.e. λ, α, and β) are initialized randomly. Then, we infer the marginal probabilities of the random variables. Given these marginal probabilities, we evaluate the perplexity of the user language model on a held-out dataset and search for better parameters. This procedure is repeated until convergence. We also tried training the model by optimizing the accuracy of the authorship attribution task; nevertheless, we find that models trained by optimizing the perplexity give better performance.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Inference and Optimization", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "In this section, we evaluate the performance of language model enrichment with both intrinsic (perplexity) and extrinsic (authorship attribution) metrics. We compare the enriched language model with the original base language model and three intuitive enrichment methods.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "We performed experiments on the Twitter dataset collected by Galuba et al. (2010) . Twitter data have been used to verify models with different purposes (Lin et al., 2011; Tan et al., 2011). To emphasize the cold start scenario, we randomly selected 15 users with about 35 tweets and 70 friends as candidates for an authorship attribution task. Our corpus consists of 4322 tweets. All words with fewer than 5 occurrences were removed. Stop words and URLs were also removed, and all words were stemmed. We identified the 100 most frequent terms as categories. The size of the vocabulary set is 1377.",
|
"cite_spans": [ |
|
{ |
|
"start": 61, |
|
"end": 81, |
|
"text": "Galuba et al. (2010)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 153, |
|
"end": 171, |
|
"text": "(Lin et al., 2011;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 172, |
|
"end": 189, |
|
"text": "Tan et al., 2011)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset and Experiment Setup", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We randomly partitioned the tweets of each user into training, validation, and testing sets. The reported result is the average of 20 random splits. In all experiments, we varied the size of training data from 1% to 15%, and held the same number of tweets from each user as validation and testing data. The statistics of our dataset, given 15% training data, are shown in Table 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 372, |
|
"end": 379, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dataset and Experiment Setup", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Loopy belief propagation (Murphy et al., 1999) was used to obtain the marginal probabilities. Parameters were searched with the pattern search algorithm (Audet & Dennis, 2002). To retain generality, we used the default configuration in all experiments. ",
|
"cite_spans": [ |
|
{ |
|
"start": 25, |
|
"end": 46, |
|
"text": "(Murphy et al., 1999)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 153, |
|
"end": 175, |
|
"text": "(Audet & Dennis, 2002)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset and Experiment Setup", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We compared our framework with three baseline methods. The first (\"Cosine\") is a straightforward implementation that sets all mixture weights to the cosine similarity between the probability mass vectors of the document and user unigram language models. The second (\"PS\") uses the pattern search algorithm to perform constrained optimization over the mixture weights. Subject to the constraint ∑_{d ∈ D_f} w_d = 1, the algorithm iteratively searches for the optimal mixture weights w_d and λ to lower the perplexity on the validation data. As mentioned in Section 2.3, the main difference between this method and ours (\"FGM\") is that we reduce the search space of the parameters by the factor graph model. Furthermore, social network information is exploited in our framework, while the PS method performs a direct search over mixture weights, discarding valuable knowledge. The third (\"LR\") models the probability with the logistic function, where the local feature functions of each node are regarded as the independent variables, i.e. p(y_d = 1) = 1 / (1 + exp(-α · f(y_d = 1))). By comparing our model to this baseline, we want to show that the pairwise connections between nodes are useful.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline Methods", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Unlike conventional smoothing methods, which are usually mutually exclusive, our framework can easily incorporate any other smoothing method. In Equation 2, the base language model may already be smoothed by any technique before being plugged into our framework. Our framework then enriches the user language model with social network information. We selected four popular smoothing methods to demonstrate this effect, namely additive smoothing, absolute smoothing (Ney et al., 1995), Jelinek-Mercer (JM) smoothing (Jelinek & Mercer, 1980), and Dirichlet smoothing (MacKay & Peto, 1994). Except for additive smoothing, the three other smoothing methods are all based on interpolation with the background corpus. The results of using only the base model (i.e. setting λ = 0 in Eq. 2) are denoted as \"Base\" in the following tables.",
|
"cite_spans": [ |
|
{ |
|
"start": 473, |
|
"end": 491, |
|
"text": "(Ney et al., 1995)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 494, |
|
"end": 513, |
|
"text": "Jelinek-Mercer (JM)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 524, |
|
"end": 548, |
|
"text": "(Jelinek & Mercer, 1980)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 575, |
|
"end": 596, |
|
"text": "(MacKay & Peto, 1994)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline Methods", |
|
"sec_num": "3.2" |
|
}, |
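For concreteness, interpolation-based pre-smoothing of the base model can be sketched as Jelinek-Mercer interpolation with a background LM; the coefficient `lam` and the function name are our placeholders, not values from the paper:

```python
def jm_smooth(user_lm, background_lm, lam=0.1):
    """Jelinek-Mercer smoothing: fixed-coefficient interpolation with a background LM."""
    vocab = set(user_lm) | set(background_lm)
    return {w: (1.0 - lam) * user_lm.get(w, 0.0) + lam * background_lm.get(w, 0.0)
            for w in vocab}
```

The smoothed output is again a word-to-probability dict, so it can serve directly as the base model in Equation (2).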
|
{ |
|
"text": "As an intrinsic evaluation, we computed the perplexity of each tweet in the testing set under its author's own language model. The idea is that a better personalized language model should assign a higher probability to a tweet if and only if it is written by the user himself.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Perplexity", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The perplexity of a single sentence is defined as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Perplexity", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "\u22ef 2 \u2211 (11)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Perplexity", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "where \u22ef is an unseen testing sentence. The overall perplexity on a collection of sentences is simply computed by concatenating them. A smaller value signifies better performance. The results are shown in Table 2 , where the asterisks indicate a significant difference between the best score and the second best score by t-test at a significance level of 0.05. Our method significantly outperforms all of the methods in almost all settings. Furthermore, all methods gradually improve as more data are used to train the model. As expected, the advantage of our method is more apparent when data is sparse. Also, our method works much better than the \"LR\" method, which demonstrates the usefulness of the pairwise connections between documents in the factor graph.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 204, |
|
"end": 211, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Perplexity", |
|
"sec_num": "3.3" |
|
}, |
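Eq. 11 amounts to exponentiating the negative average log-probability per token. A minimal sketch, assuming a unigram model stored as a word-to-probability dict (the `floor` value for unseen words is a hypothetical choice, standing in for whatever smoothing the model already applies):

```python
import math

def sentence_perplexity(tokens, lm, floor=1e-12):
    """Eq. 11: PP = 2^(-(1/N) * sum_i log2 p(w_i)); lower is better."""
    log_sum = sum(math.log2(max(lm.get(w, 0.0), floor)) for w in tokens)
    return 2.0 ** (-log_sum / len(tokens))

# Sanity check: a uniform model over 4 word types has perplexity 4.
uniform = {w: 0.25 for w in "a b c d".split()}
pp = sentence_perplexity("a b c d".split(), uniform)
```

Concatenating the test sentences, as described above, is equivalent to averaging the log-probabilities over all tokens in the collection before exponentiating.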
|
{ |
|
"text": "We observe that the \"PS\" method takes a long time to converge and is prone to overfitting, likely because it has to search a few hundred parameters on average. It can also be observed from the table that, if the base language model is already smoothed by JM or Dirichlet smoothing, the \"PS\" method will only worsen, instead of enrich, the language model. In terms of testing set perplexity, the \"Cosine\" method is the second best method to enrich a user's language model. The gap between the \"Cosine\" method and our method becomes smaller as more training data is available. When the base model is already smoothed by JM or Dirichlet smoothing and the data is less sparse (for example, the \"15%\" row), the \"Cosine\" method performs almost as good as our method. Nevertheless, when the base user language model is sparse (i.e. the \"1%\" or \"5%\" rows), the similarity scores are not reliable and the performance of this method is restricted due to the bias coming from the base model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Perplexity", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The authorship attribution task is chosen as the extrinsic evaluation metric. Given a sentence of unknown authorship, the task is to identify its author from a finite set of candidate users. For a thorough survey of recent works on this topic, see (Stamatatos, 2009) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 248, |
|
"end": 266, |
|
"text": "(Stamatatos, 2009)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Authorship Attribution", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Here, the goal is not about comparing with the state-of-the-art approaches in authorship attribution, but showing that typical applications of language model techniques can benefit from our framework.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Authorship Attribution", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "To apply a personalized language model on this task, a na\u00efve Bayes classifier is implemented (Peng et al., 2004) . The most probable author of a document d is the one whose personalized language model yields the highest probability. It is determined by the following equation: * \u220f \u2208", |
|
"cite_spans": [ |
|
{ |
|
"start": 93, |
|
"end": 112, |
|
"text": "(Peng et al., 2004)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Authorship Attribution", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "where we assume uniformity in the candidate users, and is as defined in Eq. 2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Authorship Attribution", |
|
"sec_num": "3.4" |
|
}, |
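The classification rule in Eq. 12 can be sketched as follows, working in log space to avoid underflow. The prior drops out under the uniformity assumption; the function name `most_probable_author`, the toy user models, and the `floor` for unseen words are illustrative assumptions, not the paper's implementation.

```python
import math

def most_probable_author(tokens, user_lms, floor=1e-12):
    """argmax over users of sum_w log p(w|u), with a uniform prior P(u)."""
    def score(lm):
        return sum(math.log(max(lm.get(w, 0.0), floor)) for w in tokens)
    return max(user_lms, key=lambda name: score(user_lms[name]))

# Two hypothetical candidate users with very different vocabularies.
lms = {
    "user_a": {"cat": 0.5, "dog": 0.5},
    "user_b": {"car": 0.5, "bus": 0.5},
}
author = most_probable_author(["cat", "dog"], lms)
```

Any enrichment that sharpens p(w|u) for the true author directly improves this classifier, which is why attribution accuracy serves as the extrinsic metric.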
|
{ |
|
"text": "Notice that we have used the unigram probability in Eq. 12, as in all following experiments. In fact, we also have conducted experiments with bigram models. Although the bigram model achieves lower perplexity as expected, we have observed a 15 to 20 percent decrease of the accuracy on the authorship attribution task, compared to the unigram model. This signifies that higher order n-gram models may not be suitable for the sparse data scenario. Similar arguments also have been given by Peng (2004) . The results are shown in Table 3 , where the asterisks indicate a significant difference between the best score and the second best score, by t-test at a significance level of 0.05.", |
|
"cite_spans": [ |
|
{ |
|
"start": 489, |
|
"end": 500, |
|
"text": "Peng (2004)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 528, |
|
"end": 535, |
|
"text": "Table 3", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Authorship Attribution", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Comparing the four \"Base\" columns, we find that additive smoothing performs about as well as the other three smoothing methods. That is, blindly interpolating with the entire background corpus does not fix the sparse data problem.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Authorship Attribution", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Similar to the result in Section 3.3, the \"Cosine\" method gets better (even better than our method, if the JM or Dirichlet smoothing is applied) when more data is available. A possible explanation is that the cosine similarity between the base user language model and a document language model is not reliable when is estimated from a small corpus. In these cases, using the cosine similarity alone is not enough and it is better to rely less on this information and include other types of features. Our factor graph model can alleviate this problem by bringing more information into the language model; hence, it performs better than this baseline method under such circumstances. Conclusions that can be drawn from the results of the \"PS\" and \"LR\" methods are similar to those in the previous section. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Authorship Attribution", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "In this section, we evaluate the effectiveness of the feature functions mentioned in Section 2.2. There are five local feature functions ( , , , , ) and two pairwise feature functions ( , ) in our model. Including , there are a total of eight parameters for each user language model . Since all features are standardized before training, we can inspect the feature weights learnt from data to determine the relative importance of them. To be focused on the cold start setting, we choose to analyze the case where only 1% of the data is used during model training. We will refer to the feature weights as, say , for simplicity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Effectiveness", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "First, we inspect the mixture weight for the base user language model. The higher this value is, the better is the original base user language model . We plot this value for all 15 users (the x-axis) in Figure 2 . The different colors of the bars represent the different smoothing methods that were applied to the base model .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 203, |
|
"end": 211, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Feature Effectiveness", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "As can be observed from Figure 2 , the users can be roughly categorized into three types:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 24, |
|
"end": 32, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Feature Effectiveness", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "\uf0a7 is low for all smoothing methods, e.g. Users 1, 2, and 4.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Effectiveness", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "\uf0a7 is high for all smoothing methods, e.g. Users 5, 7, 11, and 13. \uf0a7 is higher for the JM and Dirichlet smoothing, but is lower for the additive and absolute smoothing, e.g. Users 3, 6, and 9. Figure 2 . The mixture weight for each user.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 192, |
|
"end": 200, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Feature Effectiveness", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "We take Users 1, 5, and 6 as examples of the three types, respectively. Their feature weights are plotted in Figures 3, 4 , and 5. It is worthwhile to mention that there is no user whose is lower for the JM and Dirichlet smoothing but is higher for the additive and absolute smoothing. This is consistent with the general idea that the JM and Dirichlet smoothing are better techniques than the other two.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 109, |
|
"end": 121, |
|
"text": "Figures 3, 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Using Social Network Information", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Among all feature functions, generally has a higher feature weight . This is consistent with the experiments in Sections 3.3 and 3.4, where it has been shown that the cosine similarity may lead to an improved model performance. The weight has the highest value when is also high, as in Figure 4 . This is reasonable because a higher signifies that the best model selected (with respect to the validation set perplexity, as described in Section 2.3) relies more on the base user language model . The similarity score computed from should also be reliable.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 286, |
|
"end": 294, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Using Social Network Information", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "By regarding each user as a sample, the correlation between and the feature weights can be computed. The results are shown in Table 4 . There exists a relatively high positive correlation between and , as expected from the figures above. Also, for the pairwise features, the correlation coefficients are negative. This makes sense because, if the best model selected favors the base user LM more (i.e. a higher ), then there should be no need to include the complicated pairwise features to enrich the language model. The negative correlation coefficients for and can be explained in the same way.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 126, |
|
"end": 133, |
|
"text": "Table 4", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Using Social Network Information", |
|
"sec_num": null |
|
}, |
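The per-user correlation analysis above reduces to a Pearson correlation over 15 (base weight, feature weight) pairs. A minimal sketch, with made-up toy values standing in for the learned weights:

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy check: a feature weight that rises with the base mixture weight
# correlates at +1; one that falls with it correlates at -1.
r_pos = pearson([0.1, 0.2, 0.3], [1.0, 2.0, 3.0])
r_neg = pearson([0.1, 0.2, 0.3], [3.0, 2.0, 1.0])
```

With only 15 samples per correlation, the coefficients in Table 4 should be read as indicative trends rather than precise estimates.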
|
{ |
|
"text": "For some of the users whose is higher for JM and Dirichlet smoothing but lower for Additive Absolute JM Dirichlet the other two, the feature weights exhibit a similar grouping. For User 6 (as shown in Figure 5) , , , , , and are lower if JM or Dirichlet smoothing is applied and higher if either of the other two is applied. This also explains the negative correlation between and (or ). As has been mentioned, we want to demonstrate the usefulness of the pairwise feature functions ( and ) by comparing our model to the \"LR\" baseline. In fact, the usefulness of these feature functions can also be verified by observing that both and are greater than zero. By t-test at a significance level of 0.05, is significantly greater than zero for all users and all smoothing methods. Similarly, is significantly greater than zero for all except three of the \u2329user,smoothing method\u232a pairs (out of 60).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 201, |
|
"end": 211, |
|
"text": "Figure 5)", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Using Social Network Information", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Personalization has been studied for a long time in various textual related tasks. Personalized search is established by modeling user behavior when using search engines (Shen et al., 2005; Xue et al., 2009) . A query language model could be also expanded based on personalized user modeling (Chirita et al., 2007) . Personalization has also been modeled in many NLP tasks, such as summarization (Yan et al., 2011) and recommendation (Yan et al., 2012) . Different from our purpose, these models do not aim at exploiting social media content to enrich a language model. Wen et al. (2012 Wen et al. ( , 2013 combines user-level language models from a social network, but instead of focusing on the cold start problem, they try to improve the speech recognition performance using a mass amount of texts on a social network. On the other hand, our work explicitly models the more sophisticated document-level relationships using a probabilistic graphical model.", |
|
"cite_spans": [ |
|
{ |
|
"start": 170, |
|
"end": 189, |
|
"text": "(Shen et al., 2005;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 190, |
|
"end": 207, |
|
"text": "Xue et al., 2009)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 292, |
|
"end": 314, |
|
"text": "(Chirita et al., 2007)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 396, |
|
"end": 414, |
|
"text": "(Yan et al., 2011)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 434, |
|
"end": 452, |
|
"text": "(Yan et al., 2012)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 570, |
|
"end": 586, |
|
"text": "Wen et al. (2012", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 587, |
|
"end": 606, |
|
"text": "Wen et al. ( , 2013", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "The advantage of our model is threefold. First, prior knowledge and heuristics about the social network can be adapted in a structured way through the use of a factor graph model. Second, by exploiting a well-studied graphical model, mature inference techniques can be applied in the optimization procedure, making it much more effective and efficient. Finally, different from most smoothing methods, which are mutually exclusive, any other smoothing method can be incorporated into our framework to be further enhanced. Using only 1% of the training corpus, our model can improve the perplexity of base models by as much as 40% and the accuracy of authorship attribution by at most 13%.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5." |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Analysis of generalized pattern searches", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Audet", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Dennis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "SIAM Journal on Optimization", |
|
"volume": "13", |
|
"issue": "3", |
|
"pages": "889--903", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Audet, C., & Dennis Jr, J. E. (2002). Analysis of generalized pattern searches. SIAM Journal on Optimization, 13(3), 889-903.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "An empirical study of smoothing techniques for language modeling", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Goodman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of the 34th annual meeting on Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "310--318", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chen, S. F., & Goodman, J. (1996). An empirical study of smoothing techniques for language modeling. In Proceedings of the 34th annual meeting on Association for Computational Linguistics, 310-318. Association for Computational Linguistics. Using Social Network Information", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Personalized query expansion for the web", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Chirita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Firan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Nejdl", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7--14", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chirita, P. A., Firan, C. S., & Nejdl, W. (2007). Personalized query expansion for the web. In Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval, 7-14. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Personalization of Social Media", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Clements", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the BCS IRSG Symposium: Future Directions in Information Access", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Clements, M. (2007). Personalization of Social Media. In Proceedings of the BCS IRSG Symposium: Future Directions in Information Access 2007.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Outtweeting the twitterers -predicting information cascades in microblogs", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Galuba", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Aberer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Chakraborty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Despotovic", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Kellerer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 3rd conference on Online social networks", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3--3", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Galuba, W., Aberer, K., Chakraborty, D., Despotovic, Z., & Kellerer, W. (2010). Outtweeting the twitterers -predicting information cascades in microblogs. In Proceedings of the 3rd conference on Online social networks, 3-3. USENIX Association.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Interpolated estimation of Markov source parameters from sparse data", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Jelinek", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1980, |
|
"venue": "Proceedings of the Workshop on Pattern Recognition in Practice", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "381--397", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jelinek, F. (1980). Interpolated estimation of Markov source parameters from sparse data. In Proceedings of the Workshop on Pattern Recognition in Practice, 381-397.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Factor graphs and the sum-product algorithm. Information Theory", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Kschischang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Frey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Loeliger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "IEEE Transactions on", |
|
"volume": "47", |
|
"issue": "2", |
|
"pages": "498--519", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kschischang, F. R., Frey, B. J., & Loeliger, H. A. (2001). Factor graphs and the sum-product algorithm. Information Theory, IEEE Transactions on, 47(2), 498-519.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Smoothing techniques for adaptive online language models: topic tracking in tweet streams", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Snow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Morgan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "422--429", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lin, J., Snow, R., & Morgan, W. (2011). Smoothing techniques for adaptive online language models: topic tracking in tweet streams. In Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining, 422-429. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "A hierarchical Dirichlet language model", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Mackay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"C B" |
|
], |
|
"last": "Peto", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Natural language engineering", |
|
"volume": "1", |
|
"issue": "03", |
|
"pages": "289--308", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "MacKay, D. J., & Peto, L. C. B. (1995). A hierarchical Dirichlet language model. Natural language engineering, 1(03), 289-308.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Loopy belief propagation for approximate inference: An empirical study", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Murphy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Weiss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "Jordan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proceedings of the Fifteenth conference on Uncertainty in artificial intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "467--475", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Murphy, K. P., Weiss, Y., & Jordan, M. I. (1999). Loopy belief propagation for approximate inference: An empirical study. In Proceedings of the Fifteenth conference on Uncertainty in artificial intelligence, 467-475. Morgan Kaufmann Publishers Inc..", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "On the estimation ofsmall'probabilities by leaving-one-out. Pattern Analysis and Machine Intelligence", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "U", |
|
"middle": [], |
|
"last": "Essen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Kneser", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "IEEE Transactions on", |
|
"volume": "17", |
|
"issue": "12", |
|
"pages": "1202--1212", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ney, H., Essen, U., & Kneser, R. (1995). On the estimation ofsmall'probabilities by leaving-one-out. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 17(12), 1202-1212.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Augmenting naive bayes classifiers with statistical language models", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Schuurmans", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Information Retrieval", |
|
"volume": "7", |
|
"issue": "3-4", |
|
"pages": "317--345", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peng, F., Schuurmans, D., & Wang, S. (2004). Augmenting naive bayes classifiers with statistical language models. Information Retrieval, 7(3-4), 317-345.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Implicit user modeling for personalized search", |
|
"authors": [ |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Tan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Zhai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 14th ACM international conference on Information and knowledge management", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "824--831", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shen, X., Tan, B., & Zhai, C. (2005). Implicit user modeling for personalized search. In Proceedings of the 14th ACM international conference on Information and knowledge management, 824-831. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "A survey of modern authorship attribution methods", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Stamatatos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Journal of the American Society for information Science and Technology", |
|
"volume": "60", |
|
"issue": "3", |
|
"pages": "538--556", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stamatatos, E. (2009). A survey of modern authorship attribution methods. Journal of the American Society for information Science and Technology, 60(3), 538-556.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "User-level sentiment analysis incorporating social networks", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Tan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Tang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1397--1405", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tan, C., Lee, L., Tang, J., Jiang, L., Zhou, M., & Li, P. (2011). User-level sentiment analysis incorporating social networks. In Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining, 1397-1405. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Cross-lingual knowledge linking across wiki knowledge bases", |
|
"authors": [ |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Tang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 21st international conference on World Wide Web", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "459--468", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wang, Z., Li, J., Wang, Z., & Tang, J. (2012). Cross-lingual knowledge linking across wiki knowledge bases. In Proceedings of the 21st international conference on World Wide Web, 459-468. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Personalized language modeling by crowd sourcing with social network data for voice access of cloud applications", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Wen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Spoken Language Technology Workshop (SLT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "188--193", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wen, T. H., Lee, H. Y., Chen, T. Y., & Lee, L. S. (2012). Personalized language modeling by crowd sourcing with social network data for voice access of cloud applications. In Spoken Language Technology Workshop (SLT), 2012 IEEE, 188-193.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Recurrent neural network based language model personalization by social network crowdsourcing", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Wen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Heidel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Tsao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "INTERSPEECH", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2703--2707", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wen, T. H., Heidel, A., Lee, H. Y., Tsao, Y., & Lee, L. S. (2013). Recurrent neural network based language model personalization by social network crowdsourcing. In INTERSPEECH, 2703-2707.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "User language model for collaborative personalized search", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Xue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Q", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "ACM Transactions on Information Systems (TOIS)", |
|
"volume": "27", |
|
"issue": "2", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xue, G. R., Han, J., Yu, Y., & Yang, Q. (2009). User language model for collaborative personalized search. ACM Transactions on Information Systems (TOIS), 27(2), 11.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Summarize what you are interested in: an optimization framework for interactive personalized summarization", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Yan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Nie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1342--1351", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yan, R., Nie, J. Y., & Li, X. (2011). Summarize what you are interested in: an optimization framework for interactive personalized summarization. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 1342-1351. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Tweet recommendation with graph co-ranking", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Yan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "516--525", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yan, R., Lapata, M., & Li, X. (2012). Tweet recommendation with graph co-ranking. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, 516-525. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Statistical language models for information retrieval", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Zhai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Synthesis Lectures on Human Language Technologies", |
|
"volume": "1", |
|
"issue": "1", |
|
"pages": "1--141", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhai, C. (2008). Statistical language models for information retrieval. Synthesis Lectures on Human Language Technologies, 1(1), 1-141.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "A study of smoothing methods for language models applied to ad hoc information retrieval", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Zhai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Lafferty", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "334--342", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhai, C., & Lafferty, J. (2001). A study of smoothing methods for language models applied to ad hoc information retrieval. In Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval, 334-342. ACM.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"num": null, |
|
"text": "The proposed factor graph model. Two kinds of potential functions are defined in the proposed factor graph model. Local potential function f(y d ). This potential function captures the local attributes of a random variable y d . Suppose that the random variable y d corresponds to a document d. The local potential function 1 should take a larger value relative to 0 if d is likely to contribute more significantly to the language model . The local potential function f(y d ) is parameterized in a log-linear form:", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"num": null, |
|
"text": "Document popularity function f pop . To model the popularity of a document, this function is defined as the number of times the document d is shared. \uf0a7 Common friend function f cmf . It is defined as the number of common friends between the target user and the author of d. \uf0a7 Author friendship function f af . Assuming that documents posted by a user with more friends are more influential, this function is defined as the number of friends of the author of document d.Pairwise potential functiong(y d , G(y d )). For any two documents d i and d j , let the corresponding variables in the factor graph be y i and y j , respectively. The pairwise potential function defines the correlation of a variable y i with another variable y j . Similar to the local potential function, this function is parameterized as:", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"num": null, |
|
"text": "d i and d j are of the same author or the two authors are friends. The correlation should be even greater if the two documents are similar. Letting denote the author of a document d and denote the closed neighborhood of a user u, we define", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"uris": null, |
|
"num": null, |
|
"text": "would have a higher correlation if d i and d j are of the same category. Letting denote the category of document d, we define , 0 if if .", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF4": { |
|
"uris": null, |
|
"num": null, |
|
"text": "The feature weights for User 6. ( . for additive and absolute smoothing, and . for JM and Dirichlet smoothing)", |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"type_str": "table", |
|
"num": null, |
|
"text": "", |
|
"html": null, |
|
"content": "<table><tr><td># of</td><td colspan=\"3\">Max. Min. Avg.</td></tr><tr><td>Tweets</td><td>70</td><td>19</td><td>35.4</td></tr><tr><td>Friends</td><td>139</td><td>24</td><td>68.9</td></tr><tr><td colspan=\"2\">Variables 467</td><td>97</td><td>252.7</td></tr><tr><td>Edges</td><td colspan=\"3\">9216 231 3427.1</td></tr></table>" |
|
}, |
|
"TABREF1": { |
|
"type_str": "table", |
|
"num": null, |
|
"text": "", |
|
"html": null, |
|
"content": "<table><tr><td>Train %</td><td>Base Cosine</td><td>Additive PS</td><td>LR</td><td>FGM Base Cosine</td><td>Absolute PS</td><td>LR</td><td>FGM</td></tr><tr><td>1%</td><td colspan=\"7\">903.1 711.6 701.3 843.9 534.8 * 864.7 695.3 696.7 814.7 540.4 *</td></tr><tr><td>5%</td><td colspan=\"7\">818.6 622.9 681.0 784.1 504.3 * 776.2 608.1 672.9 748.9 509.6 *</td></tr><tr><td colspan=\"8\">10% 749.9 559.8 658.1 726.4 479.1 * 703.2 543.7 638.9 681.3 483.6 *</td></tr><tr><td colspan=\"8\">15% 696.3 517.2 633.7 683.9 471.7 * 647.0 498.7 613.8 630.7 469.4 *</td></tr><tr><td>Train %</td><td colspan=\"3\">Jelinek-Mercer Base Cosine PS LR</td><td>FGM Base Cosine</td><td>Dirichlet PS</td><td>LR</td><td>FGM</td></tr><tr><td>1%</td><td colspan=\"7\">631.7 571.3 640.6 615.7 533.1 * 632.6 571.6 635.8 616.2 532.3 *</td></tr><tr><td>5%</td><td colspan=\"7\">588.9 524.5 602.2 574.9 505.3 * 589.8 525.0 602.1 575.6 503.0 *</td></tr><tr><td colspan=\"8\">10% 553.0 489.9 570.2 545.4 478.7 * 554.0 490.7 581.4 546.2 477.4 *</td></tr><tr><td colspan=\"8\">15% 529.0 469.3 561.4 523.5 465.6 * 529.8 469.8 567.1 524.5 466.2</td></tr></table>" |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"num": null, |
|
"text": "", |
|
"html": null, |
|
"content": "<table><tr><td>Train %</td><td>Base Cosine</td><td>Additive PS</td><td>LR</td><td>FGM Base Cosine</td><td>Absolute PS</td><td>LR</td><td>FGM</td></tr><tr><td colspan=\"8\">1% 0.5307 0.5853 0.6040 0.5700 0.6587 * 0.5120 0.5787 0.5947 0.5640 0.6413 *</td></tr><tr><td colspan=\"8\">5% 0.5967 0.6313 0.6267 0.6073 0.6760 * 0.5753 0.6233 0.6080 0.5947 0.6687 *</td></tr><tr><td colspan=\"8\">10% 0.6307 0.6640 0.6280 0.6433 0.7053 * 0.6153 0.6733 0.6220 0.6400 0.6920</td></tr><tr><td colspan=\"8\">15% 0.6513 0.6867 0.6400 0.6560 0.7113 * 0.6487 0.6967 0.6260 0.6560 0.7200 *</td></tr><tr><td>Train %</td><td colspan=\"3\">Jelinek-Mercer Base Cosine PS LR</td><td>FGM Base Cosine</td><td>Dirichlet PS</td><td>LR</td><td>FGM</td></tr><tr><td colspan=\"8\">1% 0.5400 0.6127 0.6107 0.5967 0.6560 * 0.5233 0.6127 0.6093 0.5860 0.6527 *</td></tr><tr><td colspan=\"8\">5% 0.6147 0.6607 0.6420 0.6173 0.6787 * 0.6020 0.6567 0.6360 0.6187 0.6787 *</td></tr><tr><td colspan=\"8\">10% 0.6373 0.6967 0.6633 0.6513 0.6860 0.6293 0.6927 0.6507 0.6433 0.6947</td></tr><tr><td colspan=\"8\">15% 0.6553 0.7094 0.6520 0.6593 0.6987 0.6533 0.7100 * 0.6467 0.6567 0.6907</td></tr></table>" |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"num": null, |
|
"text": "", |
|
"html": null, |
|
"content": "<table><tr><td>Feature type</td><td>Local</td><td>Pairwise</td></tr><tr><td>Feature weight</td><td/><td/></tr><tr><td/><td colspan=\"2\">0.3429 -0.0682 0.0112 -0.2316 -0.2166 -0.1583 -0.4920</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |