{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:33:46.491935Z"
},
"title": "Item-based Collaborative Filtering with BERT",
"authors": [
{
"first": "Yuyangzi",
"middle": [],
"last": "Fu",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Tian",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {},
"email": "twang5@ebay.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In e-commerce, recommender systems have become an indispensable part of helping users explore the available inventory. In this work, we present a novel approach for item-based collaborative filtering, by leveraging BERT to understand items, and score relevancy between different items. Our proposed method could address problems that plague traditional recommender systems such as cold start, and \"more of the same\" recommended content. We conducted experiments on a large-scale realworld dataset with full cold-start scenario, and the proposed approach significantly outperforms the popular Bi-LSTM model.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In e-commerce, recommender systems have become an indispensable part of helping users explore the available inventory. In this work, we present a novel approach for item-based collaborative filtering, by leveraging BERT to understand items, and score relevancy between different items. Our proposed method could address problems that plague traditional recommender systems such as cold start, and \"more of the same\" recommended content. We conducted experiments on a large-scale realworld dataset with full cold-start scenario, and the proposed approach significantly outperforms the popular Bi-LSTM model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recommender systems are an integral part of ecommerce platforms, helping users pick out items of interest from large inventories at scale. Traditional recommendation algorithms can be divided into two types: collaborative filtering-based (Schafer et al., 2007; Linden et al., 2003) and content-based (Lops et al., 2011; Pazzani and Billsus, 2007) . However, these have their own limitations when applied directly to real-world ecommerce platforms. For example, traditional userbased collaborative filtering recommendation algorithms (see, e.g., Schafer et al., 2007) find the most similar users based on the seed user's rated items, and then recommend new items which other users rated highly. For item-based collaborative filtering (see, e.g., Linden et al., 2003) , given a seed item, recommended items are chosen to have most similar user feedback. However, for highly active e-commerce platforms with large and constantly changing inventory, both approaches are severely impacted by data sparsity in the user-item interaction matrix.",
"cite_spans": [
{
"start": 238,
"end": 260,
"text": "(Schafer et al., 2007;",
"ref_id": "BIBREF10"
},
{
"start": 261,
"end": 281,
"text": "Linden et al., 2003)",
"ref_id": "BIBREF7"
},
{
"start": 300,
"end": 319,
"text": "(Lops et al., 2011;",
"ref_id": "BIBREF8"
},
{
"start": 320,
"end": 346,
"text": "Pazzani and Billsus, 2007)",
"ref_id": "BIBREF9"
},
{
"start": 545,
"end": 566,
"text": "Schafer et al., 2007)",
"ref_id": "BIBREF10"
},
{
"start": 745,
"end": 765,
"text": "Linden et al., 2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Content-based recommendation algorithms calculate similarities in content between candidate items and seed items that the user has provided feedback for (which may be implicit e.g. clicking, or explicit e.g. rating), and then select the most similar items to recommend. Although less impacted by data sparsity, due to their reliance on content rather than behavior, they can struggle to provide novel recommendations which may activate the user's latent interests, a highly desirable quality for recommender systems (Castells et al., 2011) .",
"cite_spans": [
{
"start": 516,
"end": 539,
"text": "(Castells et al., 2011)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Due to the recent success of neural networks in multiple AI domains (LeCun et al., 2015) and their superior modeling capacity, a number of research efforts have explored new recommendation algorithms based on Deep Learning (see, e.g., Barkan and Koenigstein, 2016; He et al., 2017; Hidasi et al., 2015; Covington et al., 2016) .",
"cite_spans": [
{
"start": 68,
"end": 88,
"text": "(LeCun et al., 2015)",
"ref_id": "BIBREF6"
},
{
"start": 235,
"end": 264,
"text": "Barkan and Koenigstein, 2016;",
"ref_id": "BIBREF0"
},
{
"start": 265,
"end": 281,
"text": "He et al., 2017;",
"ref_id": "BIBREF4"
},
{
"start": 282,
"end": 302,
"text": "Hidasi et al., 2015;",
"ref_id": "BIBREF5"
},
{
"start": 303,
"end": 326,
"text": "Covington et al., 2016)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a novel approach for item-based collaborative filtering, by leveraging the BERT model (Devlin et al., 2018) to understand item titles and model relevance between different items. We adapt the masked language modelling and next sentence prediction tasks from the natural language context to the e-commerce context. The contributions of this work are summarized as follows:",
"cite_spans": [
{
"start": 112,
"end": 133,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Instead of relying on unique item identifier to aggregate history information, we only use item's title as content, along with token embeddings to solve the cold start problem, which is the main shortcoming of traditional recommendation algorithms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 By training model with user behavior data, our model learns user's latent interests more than item similarities, while traditional recommendation algorithms and some pair-wise deep learning algorithms only provide similar items which users may have bought.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We conduct experiments on a large-scale e-commerce dataset, demonstrating the effectiveness of our approach and producing recommendation results with higher quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As mentioned earlier, for a dynamic e-commerce platform, items enter and leave the market continuously, resulting in an extremely sparse user-item interaction matrix. In addition to the challenge of long-tail recommendations, this also requires the recommender system to be continuously retrained and redeployed in order to accommodate newly listed items. To address these issues, in our proposed approach, instead of representing each item with a unique identifier, we choose to represent each item with its title tokens, which are further mapped to a continuous vector representation. By doing so, essentially two items with the same title would be treated as the same, and can aggregate user behaviors accordingly. For a newly listed item in the cold-start setting, the model can utilize the similarity of the item title to ones observed before to find relevant recommended items. The goal of item-based collaborative filtering is to score the relevance between two items, and for a seed item, the top scored items would be recommended as a result. Our model is based on BERT (Devlin et al., 2018) . Rather than the traditional RNN / CNN structure, BERT adopts transformer encoder as a language model, and use attention mechanism to calculate the relationship between input and output. The training of BERT model can be divided into two parts: Masked Language Model and Next Sentence Prediction. We re-purpose these tasks for the e-commerce context into Masked Language Model on Item Titles, and Next Purchase Prediction. Since the distribution of item title tokens differs drastically from words in natural language which the original BERT model is trained on, retraining the masked language model allows better understanding of item information for our use-case. Next Purchase Prediction can directly be used as the relevance scoring function for our item collaborative filtering task.",
"cite_spans": [
{
"start": 1079,
"end": 1100,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Item-based Collaborative Filtering with BERT",
"sec_num": "2"
},
{
"text": "Our model is based on the architecture of BERT base (Devlin et al., 2018) . The encoder of BERT base contains 12 Transformer layers, with 768 hidden units, and 12 self-attention heads.",
"cite_spans": [
{
"start": 52,
"end": 73,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2.1"
},
{
"text": "The goal of this task is to predict the next item a user is going to purchase given the seed item he/she has just bought. We start with a pre-trained BERT base model, and fine-tune it for our next purchase prediction task. We feed seed item as sentence A, and target item as sentence B. Both item titles are concatenated and truncated to have at most 128 tokens, including one [CLS] and two [SEP] tokens. For a seed item, its positive items are generated by collecting items purchased within the same user session, and the negative ones are randomly sampled. Given the positive item set I p , and the negative item set I n , the cross-entropy loss for next purchase prediction may be computed as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Next Purchase Prediction",
"sec_num": "2.1.1"
},
{
"text": "L np = \u2212 i j \u2208Ip log p(i j ) \u2212 i j \u2208In log(1 \u2212 p(i j )).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Next Purchase Prediction",
"sec_num": "2.1.1"
},
{
"text": "(1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Next Purchase Prediction",
"sec_num": "2.1.1"
},
{
"text": "As the distribution of item title tokens is different from the natural language corpus used to train BERT base , we further fine-tune the model for the masked language model (MLM) task as well. In the masked language model task, we follow the training schema outlined in Devlin et al. (2018) wherein 15% of the tokens in the title are chosen to be replaced by [MASK] , random token, or left unchanged, with a probability of 80%, 10% and 10% respectively. Given the set of chosen tokens M , the corresponding loss for masked language model is",
"cite_spans": [
{
"start": 271,
"end": 291,
"text": "Devlin et al. (2018)",
"ref_id": "BIBREF3"
},
{
"start": 360,
"end": 366,
"text": "[MASK]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Masked Language Model",
"sec_num": "2.1.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L lm = \u2212 m i \u2208M log p(m i ).",
"eq_num": "(2)"
}
],
"section": "Masked Language Model",
"sec_num": "2.1.2"
},
{
"text": "The whole model is optimized against the joint loss L lm + L np .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Masked Language Model",
"sec_num": "2.1.2"
},
{
"text": "As the evaluation is conducted on the dataset having a complete cold-start setting, for the sake of comparison, we build a baseline model consisting of a title token embedding layer with 768 dimensions, a bidirectional LSTM layer with 64 hidden units, and a 2-layer MLP with 128 and 32 hidden units respectively. For every pair of items, the two titles are concatenated into a sequence. After going through the embedding layer, the bidirectional LSTM reads through the entire sequence and generates a representation at the last timestep. The MLP layer with logistic function produces the estimated Table 1 : Result on ranking the item probability score. The baseline model is trained using the same cross-entropy loss shown in Eq. 1.",
"cite_spans": [],
"ref_spans": [
{
"start": 598,
"end": 605,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Bi-LSTM Model (baseline)",
"sec_num": "2.1.3"
},
{
"text": "We train our models on an e-commerce website data. We collected 8,001,577 pairs of items, of which 33% are co-purchased (BIN event) within the same user session, while the rest are randomly sampled as negative samples. 99.9999% of entries of the item-item interaction matrix is empty. The sparsity of data forces the model to focus on generalization rather than memorization. The rationale would be further explained with the presence of the statistics of our dataset. Another 250,799 pairs of items are sampled in the same manner for use as a validation set, for conducting early stopping for training. For testing, in order to mimic the cold-start scenario in the production system wherein traditional item-item collaborative filtering fails completely, we sampled 10,000 pairs of co-purchased items with the seed item not present in the training set. For each positive sample containing a seed item and a ground-truth co-purchased item, we paired the seed item with 999 random negative samples, and for testing, we use the trained model to rank the total of 1000 items given each seed item.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "2.2"
},
{
"text": "The results of our evaluation are presented in Table. 1. We do not consider the traditional item-toitem collaborative filtering model (Linden et al., 2003) here since the evaluation is conducted assuming a complete cold-start setting, with all seed items unobserved in the training set, resulting in complete failure of such a model. Following the same reason, other approaches relying on unique item identifier (e.g. itemId) couldn't be considered either in our experiment. We believe its a practical experiment setting, as for a large-scale e-commerce platform, a massive amount of new items would be created every moment, and ignoring those items from the recommender system would be costly and inefficient.",
"cite_spans": [
{
"start": 134,
"end": 155,
"text": "(Linden et al., 2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "We observe that the proposed BERT model greatly outperforms the LSTM-based model. When only fine-tuned for the Next Purchase Prediction task, our model exceeds the baseline by 310.9%, 96.6%, 93.9%, and 150.3% in precision@1, precision@10, recall@10, and NDCG@10 respectively. When fine tuning for the masked language model task is added, we see the metrics improved further by another 111.0%, 38.6%, 38.3%, and 64.0%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "From the experiment, the superiority of proposed BERT model for item-based collaborative filtering is clear. It is also clear that adapting the token distribution for the e-commerce context with masked language model within BERT is essential for achieving the best performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "In order to visually examine the quality of recommendations, we present the recommended items for two different seed items in Table. 2. For the first seed item 'Marvel Spiderman T-shirt Small Black Tee Superhero Comic Book Character', most of the recommended items are T-shirts, paired with clothing accessories and tableware decoration, all having Marvel as the theme. For the second seed item 'Microsoft Surface Pro 4 12.3\" Multi-Touch Tablet (Intel i5, 128GB) + Keyboard', the recommended items span a wide range of categories including tablets, digital memberships, electronic accessories, and computer hardware. From these two examples, we see that the proposed model appears to automatically find relevant selection criteria without manual specification, as well as make decisions between focusing on a specific category and catering to a wide range of inventory by learning from the data.",
"cite_spans": [],
"ref_spans": [
{
"start": 126,
"end": 132,
"text": "Table.",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "In this paper, we adapt the BERT model for the task of item-based recommendations. Instead of directly representing an item with a unique identifier, we use the item's title tokens as content, along with token embeddings, to address the cold start problem. We demonstrate the superiority of our model over a traditional neural network model in understanding item titles and learning relationships between items across vast inventory. their helpful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "4"
}
],
"back_matter": [
{
"text": "The authors would like to thank Sriganesh Madhvanath, Hua Yang, Xiaoyuan Wu, Alan Lu, Timothy Heath, and Kyunghyun Cho for their support and discussion, as well as anonymous reviewers for",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Item2vec: neural item embedding for collaborative filtering",
"authors": [
{
"first": "Oren",
"middle": [],
"last": "Barkan",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Koenigstein",
"suffix": ""
}
],
"year": 2016,
"venue": "2016 IEEE 26th International Workshop on Machine Learning for Signal Processing",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oren Barkan and Noam Koenigstein. 2016. Item2vec: neural item embedding for collaborative filtering. In 2016 IEEE 26th International Workshop on Machine Learning for Signal Processing (MLSP), pages 1-6. IEEE.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Novelty and diversity metrics for recommender systems: choice, discovery and relevance",
"authors": [
{
"first": "P",
"middle": [],
"last": "Castells",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Vargas",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2011,
"venue": "International Workshop on Diversity in Document Retrieval (DDR 2011) at the 33rd European Conference on Information Retrieval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P Castells, S Vargas, and J Wang. 2011. Novelty and diversity metrics for recommender systems: choice, discovery and relevance. In International Workshop on Diversity in Document Retrieval (DDR 2011) at the 33rd European Conference on Information Re- trieval (ECIR 2011).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Deep neural networks for youtube recommendations",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Covington",
"suffix": ""
},
{
"first": "Jay",
"middle": [],
"last": "Adams",
"suffix": ""
},
{
"first": "Emre",
"middle": [],
"last": "Sargin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th ACM conference on recommender systems",
"volume": "",
"issue": "",
"pages": "191--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Covington, Jay Adams, and Emre Sargin. 2016. Deep neural networks for youtube recommendations. In Proceedings of the 10th ACM conference on rec- ommender systems, pages 191-198. ACM.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Neural collaborative filtering",
"authors": [
{
"first": "Xiangnan",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Lizi",
"middle": [],
"last": "Liao",
"suffix": ""
},
{
"first": "Hanwang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Liqiang",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Xia",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Tat-Seng",
"middle": [],
"last": "Chua",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 26th international conference on world wide web",
"volume": "",
"issue": "",
"pages": "173--182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. 2017. Neural collab- orative filtering. In Proceedings of the 26th inter- national conference on world wide web, pages 173- 182. International World Wide Web Conferences Steering Committee.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Linas Baltrunas, and Domonkos Tikk",
"authors": [
{
"first": "Bal\u00e1zs",
"middle": [],
"last": "Hidasi",
"suffix": ""
},
{
"first": "Alexandros",
"middle": [],
"last": "Karatzoglou",
"suffix": ""
}
],
"year": 2015,
"venue": "Session-based recommendations with recurrent neural networks",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1511.06939"
]
},
"num": null,
"urls": [],
"raw_text": "Bal\u00e1zs Hidasi, Alexandros Karatzoglou, Linas Bal- trunas, and Domonkos Tikk. 2015. Session-based recommendations with recurrent neural networks. arXiv preprint arXiv:1511.06939.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Deep learning. nature",
"authors": [
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "521",
"issue": "",
"pages": "436--444",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. nature, 521(7553):436-444.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Amazon. com recommendations: Item-to-item collaborative filtering",
"authors": [
{
"first": "Greg",
"middle": [],
"last": "Linden",
"suffix": ""
},
{
"first": "Brent",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "York",
"suffix": ""
}
],
"year": 2003,
"venue": "IEEE Internet computing",
"volume": "",
"issue": "1",
"pages": "76--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Greg Linden, Brent Smith, and Jeremy York. 2003. Amazon. com recommendations: Item-to-item col- laborative filtering. IEEE Internet computing, (1):76-80.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Content-based recommender systems: State of the art and trends",
"authors": [
{
"first": "Pasquale",
"middle": [],
"last": "Lops",
"suffix": ""
},
{
"first": "Giovanni",
"middle": [],
"last": "Marco De Gemmis",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Semeraro",
"suffix": ""
}
],
"year": 2011,
"venue": "Recommender systems handbook",
"volume": "",
"issue": "",
"pages": "73--105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pasquale Lops, Marco De Gemmis, and Giovanni Se- meraro. 2011. Content-based recommender sys- tems: State of the art and trends. In Recommender systems handbook, pages 73-105. Springer.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Contentbased recommendation systems",
"authors": [
{
"first": "J",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Pazzani",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Billsus",
"suffix": ""
}
],
"year": 2007,
"venue": "The adaptive web",
"volume": "",
"issue": "",
"pages": "325--341",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael J Pazzani and Daniel Billsus. 2007. Content- based recommendation systems. In The adaptive web, pages 325-341. Springer.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Collaborative filtering recommender systems",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Schafer",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Frankowski",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Herlocker",
"suffix": ""
},
{
"first": "Shilad",
"middle": [],
"last": "Sen",
"suffix": ""
}
],
"year": 2007,
"venue": "The adaptive web",
"volume": "",
"issue": "",
"pages": "291--324",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J Ben Schafer, Dan Frankowski, Jon Herlocker, and Shilad Sen. 2007. Collaborative filtering recom- mender systems. In The adaptive web, pages 291- 324. Springer.",
"links": null
}
},
"ref_entries": {}
}
}