{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:33:34.429317Z"
},
"title": "BERT Goes Shopping: Comparing Distributional Models for Product Representations",
"authors": [
{
"first": "Federico",
"middle": [],
"last": "Bianchi",
"suffix": "",
"affiliation": {},
"email": "f.bianchi@unibocconi.it"
},
{
"first": "Jacopo",
"middle": [],
"last": "Tagliabue",
"suffix": "",
"affiliation": {},
"email": "jtagliabue@coveo.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Word embeddings (e.g., word2vec) have been applied successfully to eCommerce products through prod2vec. Inspired by the recent performance improvements on several NLP tasks brought by contextualized embeddings, we propose to transfer BERT-like architectures to eCommerce: our model-Prod2BERT-is trained to generate representations of products through masked session modeling. Through extensive experiments over multiple shops, different tasks, and a range of design choices, we systematically compare the accuracy of Prod2BERT and prod2vec embeddings: while Prod2BERT is found to be superior in several scenarios, we highlight the importance of resources and hyperparameters in the best performing models. Finally, we provide guidelines to practitioners for training embeddings under a variety of computational and data constraints.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Word embeddings (e.g., word2vec) have been applied successfully to eCommerce products through prod2vec. Inspired by the recent performance improvements on several NLP tasks brought by contextualized embeddings, we propose to transfer BERT-like architectures to eCommerce: our model-Prod2BERT-is trained to generate representations of products through masked session modeling. Through extensive experiments over multiple shops, different tasks, and a range of design choices, we systematically compare the accuracy of Prod2BERT and prod2vec embeddings: while Prod2BERT is found to be superior in several scenarios, we highlight the importance of resources and hyperparameters in the best performing models. Finally, we provide guidelines to practitioners for training embeddings under a variety of computational and data constraints.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Distributional semantics (Landauer and Dumais, 1997) is built on the assumption that the meaning of a word is given by the contexts in which it appears: word embeddings obtained from co-occurrence patterns through word2vec (Mikolov et al., 2013) , proved to be both accurate by themselves in representing lexical meaning, and very useful as components of larger Natural Language Processing (NLP) architectures (Lample et al., 2018) . The empirical success and scalability of word2vec gave rise to many domain-specific models (Ng, 2017; Grover and Leskovec, 2016; Yan et al., 2017) : in eCommerce, prod2vec is trained replacing words in a sentence with product interactions in a shopping session (Grbovic et al., 2015) , eventually generating vector representations of the products. The key intuition is the same underlying word2vec -you can tell a lot about a product by the company it keeps (in shopping sessions). The model enjoyed immediate success in the field and is now essential to NLP and Information Retrieval (IR) use cases in eCommerce (Vasile et al., 2016a; .",
"cite_spans": [
{
"start": 25,
"end": 52,
"text": "(Landauer and Dumais, 1997)",
"ref_id": "BIBREF16"
},
{
"start": 223,
"end": 245,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF24"
},
{
"start": 410,
"end": 431,
"text": "(Lample et al., 2018)",
"ref_id": "BIBREF14"
},
{
"start": 525,
"end": 535,
"text": "(Ng, 2017;",
"ref_id": "BIBREF27"
},
{
"start": 536,
"end": 562,
"text": "Grover and Leskovec, 2016;",
"ref_id": "BIBREF11"
},
{
"start": 563,
"end": 580,
"text": "Yan et al., 2017)",
"ref_id": "BIBREF57"
},
{
"start": 695,
"end": 717,
"text": "(Grbovic et al., 2015)",
"ref_id": "BIBREF10"
},
{
"start": 1047,
"end": 1069,
"text": "(Vasile et al., 2016a;",
"ref_id": "BIBREF53"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As a key improvement over word2vec, the NLP community has recently introduced contextualized representations, in which a word like play would have different embeddings depending on the general topic (e.g. a sentence about theater vs soccer), whereas in word2vec the word play is going to have only one vector. Transformer-based architectures (Vaswani et al., 2017) in large-scale models -such as BERT (Devlin et al., 2019) -achieved SOTA results in many tasks (Nozza et al., 2020; Rogers et al., 2020) . As Transformers are being applied outside of NLP , it is natural to ask whether we are missing a fruitful analogy with product representations. It is a priori reasonable to think that a pair of sneakers can have different representations depending on the shopping context: is the user interested in buying these shoes because they are running shoes, or because these shoes are made by her favorite brand?",
"cite_spans": [
{
"start": 342,
"end": 364,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF55"
},
{
"start": 401,
"end": 422,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 460,
"end": 480,
"text": "(Nozza et al., 2020;",
"ref_id": "BIBREF29"
},
{
"start": 481,
"end": 501,
"text": "Rogers et al., 2020)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we explore the adaptation of BERTlike architectures to eCommerce: through extensive experimentation on downstream tasks and empirical benchmarks on typical digital retailers, we discuss advantages and disadvantages of contextualized embeddings when compared to traditional prod2vec. We summarize our main contributions as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. we propose and implement a BERT-based contextualized product embeddings model (hence, Prod2BERT), which can be trained with online shopper behavioral data and produce product embeddings to be leveraged by downstream tasks;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. we benchmark Prod2BERT against prod2vec embeddings, showing the potential accuracy gain of contextual representations across different shops and data requirements. By testing on shops that differ for traffic, catalog, and data distribution, we increase our confidence that our findings are indeed applicable to a vast class of typical retailers;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3. we perform extensive experiments by varying hyperparameters, architectures and finetuning strategies. We report detailed results from numerous evaluation tasks, and finally provide recommendations on how to best trade off accuracy with training cost;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "4. we share our code 1 , to help practitioners replicate our findings on other shops and improve on our benchmarks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The eCommerce industry has been steadily growing in recent years: according to U.S. Department of Commerce (2020), 16% of all retail transactions now occur online; worldwide eCommerce is estimated to turn into a $4.5 trillion industry in 2021 (Statista Research Department, 2020). Interest from researchers has been growing at the same pace (Tsagkias et al., 2020) , stimulated by challenging problems and by the large-scale impact that machine learning systems have in the space (Pichestapong, 2019) . Within the fast adoption of deep learning methods in the field (Ma et al., 2020; Yuan et al., 2020) , product representations obtained through prod2vec play a key role in many neural architectures: after training, a product space can be used directly (Vasile et al., 2016b) , as a part of larger systems for recommendation (Tagliabue et al., 2020b) , or in downstream NLP/IR tasks . Combining the size of the market with the past success of NLP models in the space, investigating whether Transformer-based architectures result in superior product representations is both theoretically interesting and practically important. Anticipating some of the themes below, it is worth mentioning that our study sits at the intersection of two important trends: on one side, neural models typically show significant improvements at large scale (Kaplan et al., 2020) -by quantifying expected gains for \"reasonable-sized\" shops, our results are relevant also outside a few public companies (Tagliabue et al., 2021) , and allow for a principled trade-off between accuracy and ethical considerations (Strubell et al., 2019) ; on the other side, the rise of multi-tenant players 2 makes sophisticated models potentially available to an unprecedented number of shops -in this regard, we design our methodology to include multiple shops in our benchmarks, and report how training resources and accuracy scale across deployments. For these reasons, we believe our findings will be interesting to a wide range of researchers and practitioners.",
"cite_spans": [
{
"start": 341,
"end": 364,
"text": "(Tsagkias et al., 2020)",
"ref_id": null
},
{
"start": 480,
"end": 500,
"text": "(Pichestapong, 2019)",
"ref_id": "BIBREF33"
},
{
"start": 566,
"end": 583,
"text": "(Ma et al., 2020;",
"ref_id": "BIBREF22"
},
{
"start": 584,
"end": 602,
"text": "Yuan et al., 2020)",
"ref_id": "BIBREF58"
},
{
"start": 754,
"end": 776,
"text": "(Vasile et al., 2016b)",
"ref_id": "BIBREF54"
},
{
"start": 826,
"end": 851,
"text": "(Tagliabue et al., 2020b)",
"ref_id": "BIBREF43"
},
{
"start": 1336,
"end": 1357,
"text": "(Kaplan et al., 2020)",
"ref_id": null
},
{
"start": 1480,
"end": 1504,
"text": "(Tagliabue et al., 2021)",
"ref_id": "BIBREF40"
},
{
"start": 1588,
"end": 1611,
"text": "(Strubell et al., 2019)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Product Embeddings: an Industry Perspective",
"sec_num": "1.1"
},
{
"text": "Distributional Models. Word2vec (Mikolov et al., 2013) enjoyed great success in NLP thanks to its computational efficiency, unsupervised nature and accurate semantic content (Levy et al., 2015; Al-Saqqa and Awajan, 2019; Lample et al., 2018) . Recently, models such as BERT (Devlin et al., 2019) and RoBERTa shifted much of the community attention to Transformer architectures and their performance (Talmor and Berant, 2019; Vilares et al., 2020) , while it is increasingly clear that big datasets (Kaplan et al., 2020) and substantial computing resources play a role in the overall accuracy of these architectures; in our experiments, we explicitly address robustness by i) varying model designs, together with other hyperparameters; and ii) test on multiple shops, differing in traffic, industry and product catalog.",
"cite_spans": [
{
"start": 32,
"end": 54,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF24"
},
{
"start": 174,
"end": 193,
"text": "(Levy et al., 2015;",
"ref_id": "BIBREF19"
},
{
"start": 194,
"end": 220,
"text": "Al-Saqqa and Awajan, 2019;",
"ref_id": "BIBREF0"
},
{
"start": 221,
"end": 241,
"text": "Lample et al., 2018)",
"ref_id": "BIBREF14"
},
{
"start": 274,
"end": 295,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 399,
"end": 424,
"text": "(Talmor and Berant, 2019;",
"ref_id": "BIBREF44"
},
{
"start": 425,
"end": 446,
"text": "Vilares et al., 2020)",
"ref_id": null
},
{
"start": 498,
"end": 519,
"text": "(Kaplan et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Product Embeddings. Prod2vec is a straightforward adaptation to eCommerce of word2vec (Grbovic et al., 2015) . Product embeddings quickly became a fundamental component for recommendation and personalization systems (Caselles-Dupr\u00e9 et al., 2018; Tagliabue et al., 2020a) , as well as NLP-based predictions . To the best of our knowledge, this work is the first to explicitly investigate whether Transformer-based architectures deliver higher-quality product representations compared to non-contextual embeddings. Eschauzier (2020) uses Transformers on cart co-occurrence patterns with the specific goal of basket completion -while similar in the masking procedure, the breadth of the work and the evaluation methodology is very different: as convincingly argued by Requena et al. (2020) , benchmarking models on unrealistic datasets make findings less relevant for practitioners outside of \"Big Tech\". Our work features extensive tests on real-world datasets, which are indeed representative of a large portion of the mid-to-long tail of the market; moreover, we benchmark several fine-tuning strategies from the latest NLP literature (Section 5.2), sharing -together with our code -important practical lessons for academia and industry peers. The closest work in the literature as far as architecture goes is BERT4Rec (Sun et al., 2019) , i.e. a model based on Transformers trained end-to-end for recommendations. The focus of this work is not so much the gains induced by Transformers in sequence modelling, but instead is the quality of the representations obtained through unsupervised pretraining -while recommendations are important, the breadth of prod2vec literature (Bianchi et al., 2021b,a; shows the need for a more thorough and general assessment. Our methodology helps uncover a tighter-than-expected gap between the models in downstream tasks, and our industry-specific benchmarks allow us to draw novel conclusions on optimal model design across a variety of scenarios, and to give practitioners actionable insights for deployment.",
"cite_spans": [
{
"start": 86,
"end": 108,
"text": "(Grbovic et al., 2015)",
"ref_id": "BIBREF10"
},
{
"start": 216,
"end": 245,
"text": "(Caselles-Dupr\u00e9 et al., 2018;",
"ref_id": "BIBREF6"
},
{
"start": 246,
"end": 270,
"text": "Tagliabue et al., 2020a)",
"ref_id": "BIBREF42"
},
{
"start": 765,
"end": 786,
"text": "Requena et al. (2020)",
"ref_id": "BIBREF35"
},
{
"start": 1319,
"end": 1337,
"text": "(Sun et al., 2019)",
"ref_id": "BIBREF39"
},
{
"start": 1675,
"end": 1700,
"text": "(Bianchi et al., 2021b,a;",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The Prod2BERT model is taking inspiration from BERT architecture and aims to learn contextdependent vector representation of products from online session logs. By considering a shopping session as a \"sentence\" and the products shoppers interact with as \"words\", we can transfer masked language modeling (MLM) from NLP to eCommerce. Framing sessions as sentences is a natural modelling choice for several reasons: first, it mimics the successful architecture of prod2vec; second, by exploiting BERT bi-directional nature, each prediction of a masked token/product will make use of past and future shopping choices: if a shopping journey is (typically) a progression of intent from exploration to purchase (Harbich et al., 2017) , it seems natural that sequential modelling may capture relevant dimensions in the underlying vocabu- lary/catalog. Once trained, Prod2BERT becomes capable of predicting masked tokens, as well as providing context-specific product embeddings for downstream tasks.",
"cite_spans": [
{
"start": 704,
"end": 726,
"text": "(Harbich et al., 2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1"
},
{
"text": "As shown in Figure 1 , Prod2BERT is based on a transformed based architecture Vaswani et al. (2017) , emulating the successful BERT model. Please note that, different from BERT's original implementation, a white-space tokenizer is first used to split an input session into tokens, each one representing a product ID; tokens are combined with positional encodings via addition and fed into a stack of self-attention layers, where each layer contains a block for multi-head attention, followed by a simple feed forward network. After obtaining the output from the last self-attention layer, the vectors corresponding to the masked tokens pass through a softmax to generate the final predictions.",
"cite_spans": [
{
"start": 78,
"end": 99,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF55"
}
],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "3.2"
},
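The following is a minimal sketch of the kind of architecture described above, assuming product IDs have already been mapped to integer indices; layer sizes, names and defaults are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class Prod2BERTSketch(nn.Module):
    """Illustrative Transformer encoder over product-ID tokens (not the official implementation)."""

    def __init__(self, vocab_size, d_model=64, n_heads=4, n_layers=4, max_len=20):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)   # one embedding per product ID (plus MASK/PAD)
        self.pos_emb = nn.Embedding(max_len, d_model)        # positional encodings, added to token embeddings
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)  # stack of self-attention + feed-forward blocks
        self.to_vocab = nn.Linear(d_model, vocab_size)          # projects hidden states to logits over the catalog

    def forward(self, session_ids):
        # session_ids: (batch, seq_len) integer product IDs
        positions = torch.arange(session_ids.size(1), device=session_ids.device)
        h = self.token_emb(session_ids) + self.pos_emb(positions)  # token + positional embeddings
        h = self.encoder(h)                                         # contextualized product representations
        return self.to_vocab(h)                                     # logits; softmax over masked positions at train time
```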
{
"text": "Similar to ; Sun et al. (2019), we train Prod2BERT from scratch with the MLM objective. A random portion of the tokens (i.e., the product IDs) in the original sequence are chosen for possible replacements with the MASK token; and the masked version of the sequence is fed into the model as input: Figure 2 shows qualitatively the relevant data transformations, from the original Figure 2 : Transformation of sequential data, from the original data generating process -i.e. a shopping session -, to telemetry data collected by the SDK, to the masked sequence fed into Prod2BERT.",
"cite_spans": [],
"ref_spans": [
{
"start": 297,
"end": 305,
"text": "Figure 2",
"ref_id": null
},
{
"start": 379,
"end": 387,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training Objective",
"sec_num": "3.3"
},
{
"text": "shopping session, to the telemetry data, to the final masking sequence. The target output sequence is exactly the original sequence without any masking, thus the training objective is to predict the original value of the masked tokens, based on the context provided by their surrounding unmasked tokens. The model learns to minimize categorical crossentropy loss, taking into account only the predicted masked tokens, i.e. the output of the non-masked tokens is discarded for back-propagation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Objective",
"sec_num": "3.3"
},
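A sketch of the masking and loss computation described above; the reserved MASK_ID, the ignore value and the masking probability are illustrative assumptions (the paper experiments with probabilities around 0.15-0.25).

```python
import torch
import torch.nn.functional as F

MASK_ID = 1    # hypothetical ID reserved for the [MASK] token
IGNORE = -100  # positions excluded from the loss

def mask_session(session_ids, mask_prob=0.25):
    """Randomly replace product IDs with MASK_ID; labels keep originals only at masked positions."""
    mask = torch.rand(session_ids.shape, device=session_ids.device) < mask_prob
    inputs = session_ids.clone()
    inputs[mask] = MASK_ID
    labels = torch.full_like(session_ids, IGNORE)
    labels[mask] = session_ids[mask]
    return inputs, labels

def mlm_loss(model, session_ids):
    inputs, labels = mask_session(session_ids)
    logits = model(inputs)  # (batch, seq_len, vocab_size)
    # cross-entropy only over masked tokens; unmasked positions are ignored in back-propagation
    return F.cross_entropy(logits.view(-1, logits.size(-1)), labels.view(-1), ignore_index=IGNORE)
```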
{
"text": "There is growing literature investigating how different hyperparameters and architectural choices can affect Transformer-based models. For example, Lan et al. (2020) observed diminishing returns when increasing the number of layers after a certain point; showed improved performance when modifying masking strategy and using duplicated data; finally, Kaplan et al. (2020) reported slightly different findings from previous studies on factors influencing Transformers performance. Hence, it is worth studying the role of hyperparameters and model designs for Prod2BERT, in order to narrow down which settings are the best given the specific target of our work, i.e. product representations. Table 1 shows the relevant hyperparameter and design variants for Prod2BERT; following improvement in data generalization reported by , when duplicated = 1 we augmented the original dataset repeating each session 5 times. 3 We set the embedding size to 64 after preliminary optimizations: as other values offered no improvements, we report results only for one size.",
"cite_spans": [
{
"start": 148,
"end": 165,
"text": "Lan et al. (2020)",
"ref_id": "BIBREF15"
},
{
"start": 351,
"end": 371,
"text": "Kaplan et al. (2020)",
"ref_id": null
},
{
"start": 912,
"end": 913,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 690,
"end": 697,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Hyperparameters and Design Choices",
"sec_num": "3.4"
},
{
"text": "We benchmark Prod2BERT against the industry standard prod2vec (Grbovic et al., 2015) . More specifically, we train a CBOW model with negative sampling over shopping sessions (Mikolov et al., 2013) . Since the role of hyperparameters in prod2vec has been extensively studied before (Caselles-Dupr\u00e9 et al., 2018), we prepare embeddings according to the best practices in Bianchi et al. 2020 While prod2vec is chosen because of our focus on the quality of the learned representations -and not just performance on sequential inference per se -it is worth nothing that kNN (Latifi et al., 2020) over appropriate spaces is also a surprisingly hard baseline to beat in many practical recommendation settings. It is worth mentioning that for both prod2vec and Prod2BERT we are mainly interested in producing a dense space capturing the latent similarity between SKUs: other important relationships between products (substitution (Zuo et al., 2020), hierarchy (Nickel and Kiela, 2017) etc.) may require different embedding techniques (or extensions, such as interaction-specific embeddings (Zhao et al., 2020) ).",
"cite_spans": [
{
"start": 62,
"end": 84,
"text": "(Grbovic et al., 2015)",
"ref_id": "BIBREF10"
},
{
"start": 174,
"end": 196,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF24"
},
{
"start": 568,
"end": 589,
"text": "(Latifi et al., 2020)",
"ref_id": "BIBREF17"
},
{
"start": 951,
"end": 975,
"text": "(Nickel and Kiela, 2017)",
"ref_id": "BIBREF28"
},
{
"start": 1081,
"end": 1100,
"text": "(Zhao et al., 2020)",
"ref_id": "BIBREF60"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Prod2vec: a Baseline Model",
"sec_num": "4.1"
},
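A minimal sketch of a prod2vec-style baseline as described above, using gensim's word2vec with CBOW and negative sampling over sessions of product IDs; the hyperparameter values and toy sessions below are illustrative assumptions, not the exact best practices adopted in the paper.

```python
from gensim.models import Word2Vec

# sessions: shopping sessions, each a list of product-ID strings (toy examples)
sessions = [["sku_1", "sku_7", "sku_3"], ["sku_7", "sku_2", "sku_2", "sku_9"]]

prod2vec = Word2Vec(
    sentences=sessions,
    vector_size=64,   # embedding dimension (the paper also uses a 64-d space)
    sg=0,             # CBOW
    negative=5,       # negative sampling
    window=5,
    min_count=1,
    epochs=10,
)

# nearest neighbours of a product in the learned space
print(prod2vec.wv.most_similar("sku_7", topn=3))
```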
{
"text": "We collected search logs and detailed shopping sessions from two partnering shops, Shop A and Shop B: similarly to the dataset released by Requena et al. (2020) , we employ the standard definition of \"session\" from Google Analytics 4 , with a total of five different product actions tracked: detail, add, purchase, remove mid-sized digital shops, with revenues between 25 and 100 millions USD/year; however, they differ in many aspects, from traffic, to conversion rate, to catalog structure: Shop A is in the sport apparel category, whereas Shop B is in home improvement. Sessions for training are sampled with undisclosed probability from the period of March-December 2019; testing sessions are a completely disjoint dataset from January 2020. After pre-processing 6 , descriptive statistics for the training set for Shop A and Shop B are detailed in Table 2 . For fairness of comparison, the exact same datasets are used for both Prod2BERT and prod2vec.",
"cite_spans": [
{
"start": 139,
"end": 160,
"text": "Requena et al. (2020)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [
{
"start": 853,
"end": 860,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.2"
},
{
"text": "Testing on fine-grained, recent data from multiple shops is important to support the internal validity (i.e. \"is this improvement due to the model or some underlying data quirks?\") and the external validity (i.e. \"can this method be applied robustly across deployments, e.g. Tagliabue et al. (2020b)\"?) of our findings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.2"
},
{
"text": "Next Event Prediction (NEP) is our first evaluation task, since it is a standard way to evaluate the quality of product representations (Letham et al., 2013; Caselles-Dupr\u00e9 et al., 2018) : briefly, NEP consists in predicting the next action the shopper is going to perform given her past actions. Hence, in the case of Prod2BERT, we mask the last item of every session and fit the sequence as input to a pre-trained Prod2BERT model 7 . Provided with the model's output sequence, we take the top K most likely values for the masked token, and perform comparison with the true interaction. As for prod2vec, we perform the NEP task by following industry best practices : given a type is not considered when preparing session for training. trained prod2vec, we take all the before-last items in a session to construct a session vector by average pooling, and use kNN to predict the last item 8 . Following industry standards, nDCG@K (Mitra and Craswell, 2018) with K = 10 is the chosen metric 9 , and all tests ran on 10, 000 testing cases (test set is randomly sampled first, and then shared across Prod2BERT and prod2vec to guarantee a fair comparison). Table 3 : nDCG@10 on NEP task for both shops with Prod2BERT and prod2vec (bold are best scores for Prod2BERT; underline are best scores for prod2vec). Table 3 reports results on the NEP task by highlighting some key configurations that led to competitive performances. Prod2BERT is significantly superior to prod2vec, scoring up to 40% higher than the best prod2vec configurations. Since shopping sessions are significantly shorter than sentence lengths in Devlin et al. (2019), we found that changing masking probability from 0.15 (value from standard BERT) to 0.25 consistently improved performance by making the training more effective. As for the number of layers, similar to Lan et al. (2020), we found that adding layers helps only up until a point: with l = 8, training Prod2BERT with more layers resulted in a catastrophic drop in model performance for the smaller Shop A; however, the",
"cite_spans": [
{
"start": 136,
"end": 157,
"text": "(Letham et al., 2013;",
"ref_id": "BIBREF18"
},
{
"start": 158,
"end": 186,
"text": "Caselles-Dupr\u00e9 et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 929,
"end": 955,
"text": "(Mitra and Craswell, 2018)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 1152,
"end": 1159,
"text": "Table 3",
"ref_id": null
},
{
"start": 1303,
"end": 1310,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment #1: Next Event Prediction",
"sec_num": "5.1"
},
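A sketch of the prod2vec side of the NEP evaluation described above (average pooling of the before-last items, kNN over the product space, nDCG@10 with a single relevant item); the helper names are ours and the pooling/kNN details are assumptions about one reasonable implementation, not the authors' exact evaluation code.

```python
import numpy as np

def ndcg_at_k(predicted_ids, true_id, k=10):
    """nDCG@k with one relevant item: 1/log2(rank+1) if the true item is in the top k, else 0."""
    topk = predicted_ids[:k]
    if true_id in topk:
        rank = topk.index(true_id) + 1
        return 1.0 / np.log2(rank + 1)
    return 0.0

def nep_prod2vec(model, session, k=10):
    """Predict the last product of a session from the average of the preceding product vectors."""
    context, target = session[:-1], session[-1]
    vectors = [model.wv[p] for p in context if p in model.wv]
    if not vectors:
        return 0.0
    query = np.mean(vectors, axis=0)  # session vector by average pooling
    neighbours = [p for p, _ in model.wv.similar_by_vector(query, topn=k)]  # kNN in the embedding space
    return ndcg_at_k(neighbours, target, k=k)

# scores = [nep_prod2vec(prod2vec, s) for s in test_sessions]; print(np.mean(scores))
```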
{
"text": "Time A-B Cost A-B prod2vec 4-20 0.006-0.033$ Prod2BERT 240-1200 48.96-244.8$ Table 4 : Time (minutes) and cost (USD) for training one model instance, per shop: prod2vec is trained on a c4.large instance, Prod2BERT is trained (10 epochs) on a Tesla V100 16GB GPU from p3.8xlarge instance. same model trained on the bigger Shop B obtained a small boost. Finally, duplicating training data has been shown to bring consistent improvements: while keeping all other hyperparameters constant, using duplicated data results in an up to 9% increase in nDCG@10, not to mention that after only 5 training epochs the model outperforms other configurations trained for 10 epochs or more.",
"cite_spans": [],
"ref_spans": [
{
"start": 77,
"end": 84,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "While encouraging, the performance gap between Prod2BERT and prod2vec is consistent with Transformers performance on sequential tasks (Sun et al., 2019) . However, as argued in Section 1.1, product representations are used as input to many downstream systems, making it essential to evaluate how the learned embeddings generalize outside of the pure sequential setting. Our second experiment is therefore designed to test how well contextual representations transfer to other eCommerce tasks, helping us to assess the accuracy/cost tradeoff when difference in training resources between the two models is significant: as reported by Table 4, the difference (in USD) between prod2vec and Prod2BERT is several order of magnitudes. 10",
"cite_spans": [
{
"start": 134,
"end": 152,
"text": "(Sun et al., 2019)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "A crucial element in the success of Transformerbased language model is the possibility of adapting the representation learned through pre-training to new tasks: for example, the original Devlin et al. (2019) fine-tuned the pre-trained model on 11 downstream NLP tasks. However, the practical significance of these results is still unclear: on one hand, Li et al. (2020) ; Reimers and Gurevych (2019) observed that sometimes BERT contextual embeddings can underperform a simple GloVe (Pennington et al., 2014) model; on the other, Mosbach et al. (2020) highlights catastrophic forgetting, vanishing gradients and data variance as important factors in practical failures. Hence, given the range of downstream applications and the active debate on transferability in NLP, we investigate how Prod2BERT representations perform when used in the intent prediction task.",
"cite_spans": [
{
"start": 187,
"end": 207,
"text": "Devlin et al. (2019)",
"ref_id": "BIBREF8"
},
{
"start": 353,
"end": 369,
"text": "Li et al. (2020)",
"ref_id": "BIBREF20"
},
{
"start": 372,
"end": 399,
"text": "Reimers and Gurevych (2019)",
"ref_id": "BIBREF34"
},
{
"start": 483,
"end": 508,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF30"
},
{
"start": 530,
"end": 551,
"text": "Mosbach et al. (2020)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment #2: Intent Prediction",
"sec_num": "5.2"
},
{
"text": "Intent prediction is the task of guessing whether a shopping session will eventually end in the user adding items to the cart (signaling purchasing intention). Since small increases in conversion can be translated into massive revenue boosting, this task is both a crucial problem in the industry and an active area of research (Toth et al., 2017; Requena et al., 2020) . To implement the intent prediction task, we randomly sample from our dataset 20, 000 sessions ending with an add-to-cart actions and 20, 000 sessions without add-to-cart, and split the resulting dataset for training, validation and test. Hence, given the list of previous products that a user has interacted with, the goal of the intent model is to predict whether an add-to-cart event will happen or not. We experimented with several adaptation techniques inspired by the most recent NLP literature (Peters et al., 2019; Li et al., 2020) :",
"cite_spans": [
{
"start": 328,
"end": 347,
"text": "(Toth et al., 2017;",
"ref_id": "BIBREF50"
},
{
"start": 348,
"end": 369,
"text": "Requena et al., 2020)",
"ref_id": "BIBREF35"
},
{
"start": 872,
"end": 893,
"text": "(Peters et al., 2019;",
"ref_id": "BIBREF31"
},
{
"start": 894,
"end": 910,
"text": "Li et al., 2020)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment #2: Intent Prediction",
"sec_num": "5.2"
},
{
"text": "1. Feature extraction (static): we extract the contextual representations from a target hidden layer of pre-trained Prod2BERT, and through average pooling, feed them as input to a multilayer perceptron (MLP) classifier to generate the binary prediction. In addition to alternating between the first hidden layer (enc 0) to the last hidden layer (enc 3), we also tried concatenation (concat), i.e. combining embeddings of all hidden layers via concatenation before average pooling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment #2: Intent Prediction",
"sec_num": "5.2"
},
{
"text": "2. Feature extraction (learned): we implement a linear weighted combination of all hidden layers (wal), with learnable parameters, as input features to the MLP model (Peters et al., 2019) .",
"cite_spans": [
{
"start": 166,
"end": 187,
"text": "(Peters et al., 2019)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment #2: Intent Prediction",
"sec_num": "5.2"
},
{
"text": "3. Fine-tuning: we take the pre-trained model up until the last hidden layer and add the MLP classifier on top for intent prediction (finetune). During training, both Prod2BERT and task-specific parameters are trainable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment #2: Intent Prediction",
"sec_num": "5.2"
},
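As an illustration of the feature-extraction strategy (option 1 above), here is a minimal sketch in which average-pooled hidden states from one pre-trained Prod2BERT layer feed a small MLP classifier; the class names, sizes and the way layer outputs are obtained are assumptions for illustration, not the authors' exact code.

```python
import torch
import torch.nn as nn

class IntentMLP(nn.Module):
    """Binary intent classifier on top of frozen, average-pooled Prod2BERT features."""

    def __init__(self, d_model=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_model, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, pooled):
        return self.net(pooled).squeeze(-1)  # logit for add-to-cart vs. no add-to-cart

def extract_features(encoder_layer_output):
    """encoder_layer_output: (batch, seq_len, d_model) hidden states from one pre-trained layer."""
    with torch.no_grad():                        # feature extraction: pre-trained weights stay frozen
        return encoder_layer_output.mean(dim=1)  # average pooling over the session

# training step sketch:
# loss = nn.BCEWithLogitsLoss()(IntentMLP()(extract_features(hidden_states)), labels.float())
```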
{
"text": "As for our baseline, i.e. prod2vec, we implement the intent prediction task by encoding each product within a session with its prod2vec embeddings, and Table 5 : Accuracy scores in the intent prediction task (best scores for each shop in bold).",
"cite_spans": [],
"ref_spans": [
{
"start": 152,
"end": 159,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment #2: Intent Prediction",
"sec_num": "5.2"
},
{
"text": "feeding them to a LSTM network (so that it can learn sequential information) followed by a binary classifier to obtain the final prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment #2: Intent Prediction",
"sec_num": "5.2"
},
{
"text": "From our experiments, Table 5 highlights the most interesting results obtained from adapting to the new task the best-performing Prod2BERT and prod2vec models from NEP. As a first consideration, the shallowest layer of Prod2BERT for feature extraction outperforms all other layers, and even beats concatenation and weighted average strategies 11 . Second, the quality of contextual representations of Prod2BERT is highly dependent on the amount of data used in the pre-training phase.",
"cite_spans": [],
"ref_spans": [
{
"start": 22,
"end": 29,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2.1"
},
{
"text": "Comparing Table 3 with Table 5 , even though the model delivers strong results in the NEP task on Shop A, its performance on the intent prediction task is weak, as it remains inferior to prod2vec across all settings. In other words, the limited amount of traffic from Shop A is not enough to let Prod2BERT form high-quality product representations; however, the model can still effectively perform well on the NEP task, especially since the nature of NEP is closely aligned with the pretraining task. Third, fine-tuning instability is encountered and has a severe impact on model performance. Since the amount of data available for intent prediction is not nearly as important as the data utilized for pre-training Prod2BERT, overfitting proved to be a challenging aspect throughout our fine-tuning experiments. Fourth, by comparing the results of our best method against the model learnt with prod2vec embeddings, we observed prod2vec embeddings can only provide limited values for intent estimation and the LSTM-based model stops to improve very quickly; in contrast, the features provided by Prod2BERT embeddings seem to encode more valuable information, allowing the model to be trained for longer epochs and eventually reaching a higher accuracy score. As a more general consideration -reinforced by a qualitative visual assessment of clusters in the resulting vector space -, the performance gap is very small, especially considering that long training and extensive optimizations are needed to take advantage of the contextual embeddings.",
"cite_spans": [],
"ref_spans": [
{
"start": 10,
"end": 30,
"text": "Table 3 with Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2.1"
},
{
"text": "Inspired by the success of Transformer-based models in NLP, this work explores contextualized product representations as trained through a BERT-inspired neural network, Prod2BERT. By thoroughly benchmarking Prod2BERT against prod2vec in a multi-shop setting, we were able to uncover important insights on the relationship between hyperparameters, adaptation strategies and eCommerce performances on one side, and we could quantify for the first time quality gains across different deployment scenarios, on the other. If we were to sum up our findings for interested practitioners, these are our highlights:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "1. Generally speaking, our experimental setting proved that pre-training Prod2BERT with Mask Language Modeling can be applied successfully to sequential prediction problems in eCommerce. These results provide independent confirmation for the findings in Sun et al. (2019) , where BERT was used for in-session recommendations over academic datasets. However, the tighter gap on downstream tasks suggests that Transformers' ability to model long-range dependencies may be more important than pure representational quality in the NEP task, as also confirmed by human inspection of the product spaces (see Appendix A for comparative t-SNE plots).",
"cite_spans": [
{
"start": 254,
"end": 271,
"text": "Sun et al. (2019)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "2. Our investigation on adapting pre-trained contextual embeddings for downstream tasks featured several strategies in feature extraction and fine-tuning. Our analysis showed that feature-based adaptation leads to the peak performance, as compared to its fine-tuning counterpart.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "3. Dataset size does indeed matter: as evident from the performance difference in Table 5, Prod2BERT shows bigger gains with the largest amount of training data available. Considering the amount of resources needed to train and optimize Prod2BERT (Section 5.1.1), the gains of contextualized embedding may not be worth the investment for shops outside the top 5k in the Alexa ranking 12 ; on the other hand, our results demonstrate that with careful optimization, shops with a large user base and significant resources may achieve superior results with Prod2BERT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "While our findings are encouraging, there are still many interesting questions to tackle when pushing Prod2BERT further. In particular, our results require a more detailed discussion with respect to the success of BERT for textual representations, with a focus on the differences between words and products: for example, an important aspect of BERT is the tokenizer, that splits words into subwords; this component is absent in our setting because there exists no straightforward concept of \"sub-product\" -while far from conclusive, it should be noted that our preliminary experiments using categories as \"morphemes\" that attach to product identifiers did not produce significant improvements. We leave the answer to these questions -as well as the possibility of adapting Prod2BERT to even more tasks -to the next iteration of this project.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "As a parting note, we would like to emphasize that Prod2BERT has been so far the largest and (economically) more significant experiment run by Coveo: while we do believe that the methodology and findings here presented have significant practical value for the community, we also recognize that, for example, not all possible ablation studies were performed in the present work. As Bianchi and Hovy (2021) describe, replicating and comparing some models is rapidly becoming prohibitive in term of costs for both companies and universities. Even if the debate on the social impact of largescale models often feels very complex (Thompson et al., 2020; Bender et al., 2021) -and, sometimes, removed from our day-to-day duties -Prod2BERT gave us a glimpse of what unequal access to resources may mean in more meaningful contexts. While we (as in \"humanity we\") try to find a solution, we (as in \"authors we\") may find temporary",
"cite_spans": [
{
"start": 625,
"end": 648,
"text": "(Thompson et al., 2020;",
"ref_id": null
},
{
"start": 649,
"end": 669,
"text": "Bender et al., 2021)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "User data has been collected by Coveo in the process of providing business services: data is collected and processed in an anonymized fashion, in compliance with existing legislation. In particular, the target dataset uses only anonymous uuids to label events and, as such, it does not contain any information that can be linked to physical entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Considerations",
"sec_num": "7"
},
{
"text": "Figures 3 to 6 represent browsing sessions projected in two-dimensions with t-SNE (van der Maaten and Hinton, 2008) : for each browsing session, we retrieve the corresponding type (e.g. shoes, pants, etc.) of each product in the session, and use majority voting to assign the most frequent product type to the session. Hence, the dots are colorcoded by product type and each dot represents a unique session from our logs. It is easy to notice that, first, both contextual and non-contextual embeddings built with a smaller amount of data, i.e. Figures 3 and 4 from Shop A, have a less clear separation between clusters; moreover, the quality of Prod2BERT seems even lower than prod2vec, as there exists a larger central area where all types are heavily overlapping. Second, comparing Figure 5 with Figure 6 , both Prod2BERT and prod2vec improve, which confirms Prod2BERT, given enough pre-training data, is able to deliver better separations in terms of product types and more meaningful representations.",
"cite_spans": [
{
"start": 91,
"end": 115,
"text": "Maaten and Hinton, 2008)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 544,
"end": 559,
"text": "Figures 3 and 4",
"ref_id": "FIGREF3"
},
{
"start": 784,
"end": 792,
"text": "Figure 5",
"ref_id": "FIGREF5"
},
{
"start": 798,
"end": 806,
"text": "Figure 6",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Ethical Considerations",
"sec_num": "7"
},
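A sketch of how such plots can be produced (session vectors by average pooling, majority-vote product type per session, t-SNE via scikit-learn); function names, inputs and plotting details are illustrative assumptions, not the code used for the paper's figures.

```python
import numpy as np
from collections import Counter
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def session_vector(session, embeddings):
    """Average-pool the embeddings of the products in a session."""
    return np.mean([embeddings[p] for p in session if p in embeddings], axis=0)

def session_type(session, product_types):
    """Majority vote over the product types seen in the session."""
    return Counter(product_types[p] for p in session if p in product_types).most_common(1)[0][0]

def plot_sessions(sessions, embeddings, product_types):
    vectors = np.stack([session_vector(s, embeddings) for s in sessions])
    labels = [session_type(s, product_types) for s in sessions]
    coords = TSNE(n_components=2).fit_transform(vectors)  # 2-D projection of session vectors
    for t in sorted(set(labels)):
        idx = [i for i, l in enumerate(labels) if l == t]
        plt.scatter(coords[idx, 0], coords[idx, 1], s=5, label=t)  # one color per product type
    plt.legend()
    plt.show()
```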
{
"text": "Code available at https://github.com/vinid/ prodb",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "As an indication of the market opportunity, in the space of AI-powered search and recommendations we recently witnessed Algolia(Techcrunch, 2019a) and Lucidworks raising 100M USD(Techcrunch, 2019c), Coveo raising 227M CAD(Techcrunch, 2019b), Bloomreach raising 115M USD(Techcrunch, 2021).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This procedure ensures that each sequence can be masked in 5 different ways during training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://support.google.com/analytics/ answer/2731565?hl=en 5 Please note that, as in many previous embedding studies(Caselles-Dupr\u00e9 et al., 2018;, action",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We only keep sessions that have between 3 and 20 product interactions, to eliminate unreasonably short sessions and ensure computation efficiency.7 Note that this is similar to the word prediction task for cloze sentences in the NLP literature(Petroni et al., 2019).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Previous work using LSTM in NEP(Tagliabue et al., 2020b) showed some improvements over kNN; however, the differences cannot explain the gap we have found between prod2vec and Prod2BERT. Hence, kNN is chosen here for consistency with the relevant literature.9 We also tracked HR@10, but given insights were similar, we omitted it for brevity in what follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Costs are from official AWS pricing, with 0.10 USD/h for the c4.large (https://aws.amazon.com/ it/ec2/pricing/on-demand/), and 12,24 USD/h for the p3.8xlarge (https://aws.amazon.com/it/ec2/ instance-types/p3/). While obviously cost optimizations are possible, the \"naive\" pricing is a good proxy to appreciate the difference between the two methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This is consistent withPeters et al. (2019), which states that inner layers of a pre-trained BERT encode more transferable features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "See https://www.alexa.com/topsites. solace knowing that good ol' prod2vec is still pretty competitive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The use of word2vec model in sentiment analysis: A survey",
"authors": [
{
"first": "Samar",
"middle": [],
"last": "Al-Saqqa",
"suffix": ""
},
{
"first": "Arafat",
"middle": [],
"last": "Awajan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 International Conference on Artificial Intelligence, Robotics and Control",
"volume": "",
"issue": "",
"pages": "39--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samar Al-Saqqa and Arafat Awajan. 2019. The use of word2vec model in sentiment analysis: A sur- vey. In Proceedings of the 2019 International Con- ference on Artificial Intelligence, Robotics and Con- trol, pages 39-43.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "On the dangers of stochastic parrots: Can language models be too big?",
"authors": [
{
"first": "Emily",
"middle": [
"M"
],
"last": "Bender",
"suffix": ""
},
{
"first": "Timnit",
"middle": [],
"last": "Gebru",
"suffix": ""
},
{
"first": "Angelina",
"middle": [],
"last": "Mcmillan-Major",
"suffix": ""
},
{
"first": "Shmargaret",
"middle": [],
"last": "Shmitchell",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT '21",
"volume": "",
"issue": "",
"pages": "610--623",
"other_ids": {
"DOI": [
"10.1145/3442188.3445922"
]
},
"num": null,
"urls": [],
"raw_text": "Emily M. Bender, Timnit Gebru, Angelina McMillan- Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Confer- ence on Fairness, Accountability, and Transparency, FAccT '21, page 610-623, New York, NY, USA. As- sociation for Computing Machinery.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Language in a (search) box: Grounding language learning in real-world human-machine interaction",
"authors": [
{
"first": "Federico",
"middle": [],
"last": "Bianchi",
"suffix": ""
},
{
"first": "Ciro",
"middle": [],
"last": "Greco",
"suffix": ""
},
{
"first": "Jacopo",
"middle": [],
"last": "Tagliabue",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "4409--4415",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Federico Bianchi, Ciro Greco, and Jacopo Tagliabue. 2021a. Language in a (search) box: Grounding lan- guage learning in real-world human-machine inter- action. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 4409-4415, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "On the gap between adoption and understanding in nlp",
"authors": [
{
"first": "Federico",
"middle": [],
"last": "Bianchi",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2021,
"venue": "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Federico Bianchi and Dirk Hovy. 2021. On the gap between adoption and understanding in nlp. In Find- ings of the Association for Computational Linguis- tics: ACL-IJCNLP 2021. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Query2Prod2Vec: Grounded word embeddings for eCommerce",
"authors": [
{
"first": "Federico",
"middle": [],
"last": "Bianchi",
"suffix": ""
},
{
"first": "Jacopo",
"middle": [],
"last": "Tagliabue",
"suffix": ""
},
{
"first": "Bingqing",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Papers",
"volume": "",
"issue": "",
"pages": "154--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Federico Bianchi, Jacopo Tagliabue, and Bingqing Yu. 2021b. Query2Prod2Vec: Grounded word em- beddings for eCommerce. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Papers, pages 154-162, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Fantastic embeddings and how to align them: Zero-shot inference in a multi-shop scenario",
"authors": [
{
"first": "Federico",
"middle": [],
"last": "Bianchi",
"suffix": ""
},
{
"first": "Jacopo",
"middle": [],
"last": "Tagliabue",
"suffix": ""
},
{
"first": "Bingqing",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Bigon",
"suffix": ""
},
{
"first": "Ciro",
"middle": [],
"last": "Greco",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the SIGIR 2020 eCom workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Federico Bianchi, Jacopo Tagliabue, Bingqing Yu, Luca Bigon, and Ciro Greco. 2020. Fantastic embed- dings and how to align them: Zero-shot inference in a multi-shop scenario. In Proceedings of the SIGIR 2020 eCom workshop, July 2020, Virtual Event, pub- lished at http://ceur-ws.org (to appear).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Word2vec applied to recommendation: hyperparameters matter",
"authors": [
{
"first": "Hugo",
"middle": [],
"last": "Caselles-Dupr\u00e9",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Lesaint",
"suffix": ""
},
{
"first": "Jimena",
"middle": [],
"last": "Royo-Letelier",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 12th ACM Conference on Recommender Systems, RecSys",
"volume": "",
"issue": "",
"pages": "352--356",
"other_ids": {
"DOI": [
"10.1145/3240323.3240377"
]
},
"num": null,
"urls": [],
"raw_text": "Hugo Caselles-Dupr\u00e9, Florian Lesaint, and Jimena Royo-Letelier. 2018. Word2vec applied to recom- mendation: hyperparameters matter. In Proceedings of the 12th ACM Conference on Recommender Sys- tems, RecSys 2018, Vancouver, BC, Canada, Octo- ber 2-7, 2018, pages 352-356. ACM.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Generative pretraining from pixels",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Heewoo",
"middle": [],
"last": "Jun",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 37th International Conference on Machine Learning",
"volume": "2020",
"issue": "",
"pages": "1691--1703",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, and Ilya Sutskever. 2020. Generative pretraining from pixels. In Proceed- ings of the 37th International Conference on Ma- chine Learning, ICML 2020, 13-18 July 2020, Vir- tual Event, volume 119 of Proceedings of Machine Learning Research, pages 1691-1703. PMLR.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "ProdBERT: Shopping basket completion using bidirectional encoder representations from transformers",
"authors": [
{
"first": "Ruben",
"middle": [],
"last": "Eschauzier",
"suffix": ""
}
],
"year": 2020,
"venue": "Bachelor's Thesis",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruben Eschauzier. 2020. ProdBERT: Shopping basket completion using bidirectional encoder representa- tions from transformers. In Bachelor's Thesis.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "E-commerce in your inbox: Product recommendations at scale",
"authors": [
{
"first": "Mihajlo",
"middle": [],
"last": "Grbovic",
"suffix": ""
},
{
"first": "Vladan",
"middle": [],
"last": "Radosavljevic",
"suffix": ""
},
{
"first": "Nemanja",
"middle": [],
"last": "Djuric",
"suffix": ""
},
{
"first": "Narayan",
"middle": [],
"last": "Bhamidipati",
"suffix": ""
},
{
"first": "Jaikit",
"middle": [],
"last": "Savla",
"suffix": ""
},
{
"first": "Varun",
"middle": [],
"last": "Bhagwan",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Sharp",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "1809--1818",
"other_ids": {
"DOI": [
"10.1145/2783258.2788627"
]
},
"num": null,
"urls": [],
"raw_text": "Mihajlo Grbovic, Vladan Radosavljevic, Nemanja Djuric, Narayan Bhamidipati, Jaikit Savla, Varun Bhagwan, and Doug Sharp. 2015. E-commerce in your inbox: Product recommendations at scale. In Proceedings of the 21th ACM SIGKDD Inter- national Conference on Knowledge Discovery and Data Mining, Sydney, NSW, Australia, August 10-13, 2015, pages 1809-1818. ACM.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "node2vec: Scalable feature learning for networks",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Grover",
"suffix": ""
},
{
"first": "Jure",
"middle": [],
"last": "Leskovec",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "855--864",
"other_ids": {
"DOI": [
"10.1145/2939672.2939754"
]
},
"num": null,
"urls": [],
"raw_text": "Aditya Grover and Jure Leskovec. 2016. node2vec: Scalable feature learning for networks. In Proceed- ings of the 22nd ACM SIGKDD International Con- ference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13-17, 2016, pages 855-864. ACM.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Discovering customer journey maps using a mixture of markov models",
"authors": [
{
"first": "Matthieu",
"middle": [],
"last": "Harbich",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Bernard",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Berkes",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Garbinato",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Andritsos",
"suffix": ""
}
],
"year": 2017,
"venue": "SIMPDA",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthieu Harbich, Ga\u00ebl Bernard, P. Berkes, B. Garbinato, and P. Andritsos. 2017. Discov- ering customer journey maps using a mixture of markov models. In SIMPDA.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "and Dario Amodei. 2020. Scaling laws for neural language models",
"authors": [
{
"first": "Jared",
"middle": [],
"last": "Kaplan",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Mccandlish",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Henighan",
"suffix": ""
},
{
"first": "Tom",
"middle": [
"B"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Chess",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Gray",
"suffix": ""
},
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2001.08361"
]
},
"num": null,
"urls": [],
"raw_text": "Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Word translation without parallel data",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Herv\u00e9",
"middle": [],
"last": "J\u00e9gou",
"suffix": ""
}
],
"year": 2018,
"venue": "6th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Alexis Conneau, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2018. Word translation without parallel data. In 6th Inter- national Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 -May 3, 2018, Conference Track Proceedings. OpenRe- view.net.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "ALBERT: A lite BERT for self-supervised learning of language representations",
"authors": [
{
"first": "Zhenzhong",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Mingda",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
}
],
"year": 2020,
"venue": "8th International Conference on Learning Representations",
"volume": "2020",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In 8th Inter- national Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A solution to plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge",
"authors": [
{
"first": "Thomas",
"middle": [
"K"
],
"last": "Landauer",
"suffix": ""
},
{
"first": "Susan",
"middle": [
"T"
],
"last": "Dumais",
"suffix": ""
}
],
"year": 1997,
"venue": "Psychological review",
"volume": "104",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas K Landauer and Susan T Dumais. 1997. A solution to plato's problem: The latent semantic analysis theory of acquisition, induction, and rep- resentation of knowledge. Psychological review, 104(2):211.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Session-aware recommendation: A surprising quest for the state-of-the-art",
"authors": [
{
"first": "Sara",
"middle": [],
"last": "Latifi",
"suffix": ""
},
{
"first": "Noemi",
"middle": [],
"last": "Mauro",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jannach",
"suffix": ""
}
],
"year": 2020,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sara Latifi, Noemi Mauro, and D. Jannach. 2020. Session-aware recommendation: A surprising quest for the state-of-the-art. ArXiv, abs/2011.03424.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Sequential event prediction. Machine learning",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Letham",
"suffix": ""
},
{
"first": "Cynthia",
"middle": [],
"last": "Rudin",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Madigan",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "93",
"issue": "",
"pages": "357--380",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin Letham, Cynthia Rudin, and David Madigan. 2013. Sequential event prediction. Machine learn- ing, 93(2-3):357-380.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Improving distributional similarity with lessons learned from word embeddings",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "3",
"issue": "",
"pages": "211--225",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00134"
]
},
"num": null,
"urls": [],
"raw_text": "Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Im- proving distributional similarity with lessons learned from word embeddings. Transactions of the Associ- ation for Computational Linguistics, 3:211-225.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "On the sentence embeddings from pre-trained language models",
"authors": [
{
"first": "Bohan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Junxian",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Mingxuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "9119--9130",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.733"
]
},
"num": null,
"urls": [],
"raw_text": "Bohan Li, Hao Zhou, Junxian He, Mingxuan Wang, Yiming Yang, and Lei Li. 2020. On the sentence embeddings from pre-trained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9119-9130, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "RoBERTa: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. ArXiv, abs/1907.11692.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Temporalcontextual recommendation in real-time",
"authors": [
{
"first": "Yifei",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Balakrishnan",
"middle": [
"(Murali)"
],
"last": "Narayanaswamy",
"suffix": ""
},
{
"first": "Haibin",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Ding",
"suffix": ""
}
],
"year": 2020,
"venue": "KDD '20: The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "2291--2299",
"other_ids": {
"DOI": [
"https://dl.acm.org/doi/10.1145/3394486.3403278"
]
},
"num": null,
"urls": [],
"raw_text": "Yifei Ma, Balakrishnan (Murali) Narayanaswamy, Haibin Lin, and Hao Ding. 2020. Temporal- contextual recommendation in real-time. In KDD '20: The 26th ACM SIGKDD Conference on Knowl- edge Discovery and Data Mining, Virtual Event, CA, USA, August 23-27, 2020, pages 2291-2299. ACM.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Viualizing data using t-sne",
"authors": [
{
"first": "Laurens",
"middle": [],
"last": "Van Der Maaten",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of Machine Learning Research",
"volume": "9",
"issue": "",
"pages": "2579--2605",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laurens van der Maaten and Geoffrey Hinton. 2008. Viualizing data using t-sne. Journal of Machine Learning Research, 9:2579-2605.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Gregory",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Efficient estimation of word rep- resentations in vector space. CoRR, abs/1301.3781.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "An introduction to neural information retrieval",
"authors": [
{
"first": "Bhaskar",
"middle": [],
"last": "Mitra",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Craswell",
"suffix": ""
}
],
"year": 2018,
"venue": "Foundations and Trends\u00ae in Information Retrieval",
"volume": "13",
"issue": "1",
"pages": "1--126",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bhaskar Mitra and Nick Craswell. 2018. An introduc- tion to neural information retrieval. Foundations and Trends\u00ae in Information Retrieval, 13(1):1-126.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "On the stability of fine-tuning bert: Misconceptions, explanations, and strong baselines",
"authors": [
{
"first": "Marius",
"middle": [],
"last": "Mosbach",
"suffix": ""
},
{
"first": "Maksym",
"middle": [],
"last": "Andriushchenko",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Klakow",
"suffix": ""
}
],
"year": 2020,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marius Mosbach, Maksym Andriushchenko, and D. Klakow. 2020. On the stability of fine-tuning bert: Misconceptions, explanations, and strong base- lines. ArXiv, abs/2006.04884.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "dna2vec: Consistent vector representations of variable-length k-mers",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2017,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Ng. 2017. dna2vec: Consistent vector rep- resentations of variable-length k-mers. ArXiv, abs/1701.06279.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Poincar\u00e9 embeddings for learning hierarchical representations",
"authors": [
{
"first": "Maximilian",
"middle": [],
"last": "Nickel",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "6338--6347",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maximilian Nickel and Douwe Kiela. 2017. Poincar\u00e9 embeddings for learning hierarchical representa- tions. In Advances in Neural Information Process- ing Systems 30: Annual Conference on Neural In- formation Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 6338-6347.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "What the [MASK]? making sense of language-specific BERT models",
"authors": [
{
"first": "Debora",
"middle": [],
"last": "Nozza",
"suffix": ""
},
{
"first": "Federico",
"middle": [],
"last": "Bianchi",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2003.02912"
]
},
"num": null,
"urls": [],
"raw_text": "Debora Nozza, Federico Bianchi, and Dirk Hovy. 2020. What the [MASK]? making sense of language-specific BERT models. arXiv preprint arXiv:2003.02912.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Confer- ence on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "To tune or not to tune? adapting pretrained representations to diverse tasks",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019)",
"volume": "",
"issue": "",
"pages": "7--14",
"other_ids": {
"DOI": [
"10.18653/v1/W19-4302"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew E. Peters, Sebastian Ruder, and Noah A. Smith. 2019. To tune or not to tune? adapting pre- trained representations to diverse tasks. In Proceed- ings of the 4th Workshop on Representation Learn- ing for NLP (RepL4NLP-2019), pages 7-14, Flo- rence, Italy. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Association for Computational Linguistics",
"authors": [
{
"first": "Fabio",
"middle": [],
"last": "Petroni",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Anton",
"middle": [],
"last": "Bakhtin",
"suffix": ""
},
{
"first": "Yuxiang",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "2463--2473",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1250"
]
},
"num": null,
"urls": [],
"raw_text": "Fabio Petroni, Tim Rockt\u00e4schel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowl- edge bases? In Proceedings of the 2019 Confer- ence on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 2463-2473, Hong Kong, China. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Website personalization: Improving conversion with personalized shopping experiences",
"authors": [
{
"first": "Ann",
"middle": [],
"last": "Pichestapong",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ann Pichestapong. 2019. Website personalization: Im- proving conversion with personalized shopping ex- periences.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Sentence-BERT: Sentence embeddings using Siamese BERTnetworks",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3982--3992",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1410"
]
},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- BERT: Sentence embeddings using Siamese BERT- networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Shopper intent prediction from clickstream e-commerce data with minimal browsing information",
"authors": [
{
"first": "Borja",
"middle": [],
"last": "Requena",
"suffix": ""
},
{
"first": "Giovanni",
"middle": [],
"last": "Cassani",
"suffix": ""
},
{
"first": "Jacopo",
"middle": [],
"last": "Tagliabue",
"suffix": ""
},
{
"first": "Ciro",
"middle": [],
"last": "Greco",
"suffix": ""
},
{
"first": "Lucas",
"middle": [],
"last": "Lacasa",
"suffix": ""
}
],
"year": 2020,
"venue": "Scientific Reports",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1038/s41598-020-73622-y"
]
},
"num": null,
"urls": [],
"raw_text": "Borja Requena, Giovanni Cassani, Jacopo Tagliabue, Ciro Greco, and Lucas Lacasa. 2020. Shopper intent prediction from clickstream e-commerce data with minimal browsing information. Scientific Reports, 2020:16983.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "A primer in BERTology: What we know about how BERT works",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Rogers",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Kovaleva",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "842--866",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00349"
]
},
"num": null,
"urls": [],
"raw_text": "Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2020. A primer in BERTology: What we know about how BERT works. Transactions of the Associ- ation for Computational Linguistics, 8:842-866.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Global retail ecommerce sales",
"authors": [
{
"first": "",
"middle": [],
"last": "Statista Research Department",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Statista Research Department. 2020. Global retail e- commerce sales 2014-2023.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Energy and policy considerations for deep learning in NLP",
"authors": [
{
"first": "Emma",
"middle": [],
"last": "Strubell",
"suffix": ""
},
{
"first": "Ananya",
"middle": [],
"last": "Ganesh",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3645--3650",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1355"
]
},
"num": null,
"urls": [],
"raw_text": "Emma Strubell, Ananya Ganesh, and Andrew McCal- lum. 2019. Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computa- tional Linguistics, pages 3645-3650, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Bert4rec: Sequential recommendation with bidirectional encoder representations from transformer",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Changhua",
"middle": [],
"last": "Pei",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Wenwu",
"middle": [],
"last": "Ou",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 28th ACM International Conference on Information and Knowledge Management, CIKM 2019, Beijing",
"volume": "",
"issue": "",
"pages": "1441--1450",
"other_ids": {
"DOI": [
"10.1145/3357384.3357895"
]
},
"num": null,
"urls": [],
"raw_text": "Fei Sun, Jun Liu, Jian Wu, Changhua Pei, Xiao Lin, Wenwu Ou, and Peng Jiang. 2019. Bert4rec: Se- quential recommendation with bidirectional encoder representations from transformer. In Proceedings of the 28th ACM International Conference on Informa- tion and Knowledge Management, CIKM 2019, Bei- jing, China, November 3-7, 2019, pages 1441-1450. ACM.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Sigir 2021 ecommerce workshop data challenge",
"authors": [
{
"first": "Jacopo",
"middle": [],
"last": "Tagliabue",
"suffix": ""
},
{
"first": "Ciro",
"middle": [],
"last": "Greco",
"suffix": ""
},
{
"first": "Jean-Francis",
"middle": [],
"last": "Roy",
"suffix": ""
},
{
"first": "Bingqing",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Patrick",
"middle": [
"John"
],
"last": "Chia",
"suffix": ""
},
{
"first": "Federico",
"middle": [],
"last": "Bianchi",
"suffix": ""
},
{
"first": "Giovanni",
"middle": [],
"last": "Cassani",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacopo Tagliabue, Ciro Greco, Jean-Francis Roy, Bingqing Yu, Patrick John Chia, Federico Bianchi, and Giovanni Cassani. 2021. Sigir 2021 e- commerce workshop data challenge.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Shopping in the multiverse: A counterfactual approach to insession attribution. ArXiv, abs",
"authors": [
{
"first": "Jacopo",
"middle": [],
"last": "Tagliabue",
"suffix": ""
},
{
"first": "Bingqing",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacopo Tagliabue and Bingqing Yu. 2020. Shopping in the multiverse: A counterfactual approach to in- session attribution. ArXiv, abs/2007.10087.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "How to grow a (product) tree: Personalized category suggestions for eCommerce type-ahead",
"authors": [
{
"first": "Jacopo",
"middle": [],
"last": "Tagliabue",
"suffix": ""
},
{
"first": "Bingqing",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Marie",
"middle": [],
"last": "Beaulieu",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of The 3rd Workshop on e-Commerce and NLP",
"volume": "",
"issue": "",
"pages": "7--18",
"other_ids": {
"DOI": [
"10.18653/v1/2020.ecnlp-1.2"
]
},
"num": null,
"urls": [],
"raw_text": "Jacopo Tagliabue, Bingqing Yu, and Marie Beaulieu. 2020a. How to grow a (product) tree: Personalized category suggestions for eCommerce type-ahead. In Proceedings of The 3rd Workshop on e-Commerce and NLP, pages 7-18, Seattle, WA, USA. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "The embeddings that came in from the cold: Improving vectors for new and rare products with content-based inference",
"authors": [
{
"first": "Jacopo",
"middle": [],
"last": "Tagliabue",
"suffix": ""
},
{
"first": "Bingqing",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Federico",
"middle": [],
"last": "Bianchi",
"suffix": ""
}
],
"year": 2020,
"venue": "RecSys 2020: Fourteenth ACM Conference on Recommender Systems, Virtual Event",
"volume": "",
"issue": "",
"pages": "577--578",
"other_ids": {
"DOI": [
"10.1145/3383313.3411477"
]
},
"num": null,
"urls": [],
"raw_text": "Jacopo Tagliabue, Bingqing Yu, and Federico Bianchi. 2020b. The embeddings that came in from the cold: Improving vectors for new and rare products with content-based inference. In RecSys 2020: Four- teenth ACM Conference on Recommender Systems, Virtual Event, Brazil, September 22-26, 2020, pages 577-578. ACM.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "MultiQA: An empirical investigation of generalization and transfer in reading comprehension",
"authors": [
{
"first": "Alon",
"middle": [],
"last": "Talmor",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4911--4921",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1485"
]
},
"num": null,
"urls": [],
"raw_text": "Alon Talmor and Jonathan Berant. 2019. MultiQA: An empirical investigation of generalization and trans- fer in reading comprehension. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 4911-4921, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Algolia finds $110m from accel and salesforce",
"authors": [
{
"first": "",
"middle": [],
"last": "Techcrunch",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Techcrunch. 2019a. Algolia finds $110m from accel and salesforce.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Coveo raises 227m at 1b valuation",
"authors": [
{
"first": "",
"middle": [],
"last": "Techcrunch",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Techcrunch. 2019b. Coveo raises 227m at 1b valua- tion.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Lucidworks raises $100m to expand in ai finds",
"authors": [
{
"first": "",
"middle": [],
"last": "Techcrunch",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Techcrunch. 2019c. Lucidworks raises $100m to ex- pand in ai finds.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Bloomreach raises $150m on $900m valuation and acquires exponea",
"authors": [
{
"first": "",
"middle": [],
"last": "Techcrunch",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Techcrunch. 2021. Bloomreach raises $150m on $900m valuation and acquires exponea.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "2020. The computational limits of deep learning",
"authors": [
{
"first": "C",
"middle": [],
"last": "Neil",
"suffix": ""
},
{
"first": "Kristjan",
"middle": [],
"last": "Thompson",
"suffix": ""
},
{
"first": "Keeheon",
"middle": [],
"last": "Greenewald",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manso",
"suffix": ""
}
],
"year": null,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Neil C. Thompson, Kristjan Greenewald, Keeheon Lee, and G. Manso. 2020. The computational limits of deep learning. ArXiv, abs/2007.05558.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Predicting shopping behavior with mixture of rnns",
"authors": [
{
"first": "Arthur",
"middle": [],
"last": "Toth",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Fabbrizio",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Datta",
"suffix": ""
}
],
"year": 2017,
"venue": "eCOM@SIGIR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arthur Toth, L. Tan, G. Fabbrizio, and Ankur Datta. 2017. Predicting shopping behavior with mixture of rnns. In eCOM@SIGIR.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Vanessa Murdock, and Maarten de Rijke. 2020. Challenges and research opportunities in ecommerce search and recommendations",
"authors": [
{
"first": "Manos",
"middle": [],
"last": "Tsagkias",
"suffix": ""
},
{
"first": "Tracy",
"middle": [
"Holloway"
],
"last": "King",
"suffix": ""
},
{
"first": "Surya",
"middle": [],
"last": "Kallumadi",
"suffix": ""
},
{
"first": "Vanessa",
"middle": [],
"last": "Murdock",
"suffix": ""
},
{
"first": "Maarten",
"middle": [],
"last": "De Rijke",
"suffix": ""
}
],
"year": null,
"venue": "SIGIR Forum",
"volume": "54",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manos Tsagkias, Tracy Holloway King, Surya Kallumadi, Vanessa Murdock, and Maarten de Ri- jke. 2020. Challenges and research opportunities in ecommerce search and recommendations. In SIGIR Forum, volume 54.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "2020. U.s. census bureau news",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "U.S. Department of Commerce. 2020. U.s. census bu- reau news.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Meta-prod2vec: Product embeddings using side-information for recommendation",
"authors": [
{
"first": "Flavian",
"middle": [],
"last": "Vasile",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Smirnova",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th ACM Conference on Recommender Systems",
"volume": "",
"issue": "",
"pages": "225--232",
"other_ids": {
"DOI": [
"10.1145/2959100.2959160"
]
},
"num": null,
"urls": [],
"raw_text": "Flavian Vasile, Elena Smirnova, and Alexis Conneau. 2016a. Meta-prod2vec: Product embeddings using side-information for recommendation. In Proceed- ings of the 10th ACM Conference on Recommender Systems, Boston, MA, USA, September 15-19, 2016, pages 225-232. ACM.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Meta-prod2vec: Product embeddings using side-information for recommendation",
"authors": [
{
"first": "Flavian",
"middle": [],
"last": "Vasile",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Smirnova",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th ACM Conference on Recommender Systems",
"volume": "",
"issue": "",
"pages": "225--232",
"other_ids": {
"DOI": [
"10.1145/2959100.2959160"
]
},
"num": null,
"urls": [],
"raw_text": "Flavian Vasile, Elena Smirnova, and Alexis Conneau. 2016b. Meta-prod2vec: Product embeddings using side-information for recommendation. In Proceed- ings of the 10th ACM Conference on Recommender Systems, Boston, MA, USA, September 15-19, 2016, pages 225-232. ACM.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4- 9, 2017, Long Beach, CA, USA, pages 5998-6008.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "From itdl to place2vec: Reasoning about place type similarity and relatedness by learning embeddings from augmented spatial contexts",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Krzysztof",
"middle": [],
"last": "Janowicz",
"suffix": ""
},
{
"first": "Gengchen",
"middle": [],
"last": "Mai",
"suffix": ""
},
{
"first": "Song",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 25th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, SIGSPATIAL '17",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3139958.3140054"
]
},
"num": null,
"urls": [],
"raw_text": "Bo Yan, Krzysztof Janowicz, Gengchen Mai, and Song Gao. 2017. From itdl to place2vec: Reasoning about place type similarity and relatedness by learn- ing embeddings from augmented spatial contexts. In Proceedings of the 25th ACM SIGSPATIAL Interna- tional Conference on Advances in Geographic Infor- mation Systems, SIGSPATIAL '17, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Parameter-efficient transfer from sequential behaviors for user modeling and recommendation",
"authors": [
{
"first": "Fajie",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Xiangnan",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Alexandros",
"middle": [],
"last": "Karatzoglou",
"suffix": ""
},
{
"first": "Liguang",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, SIGIR 2020, Virtual Event",
"volume": "",
"issue": "",
"pages": "1469--1478",
"other_ids": {
"DOI": [
"10.1145/3397271.3401156"
]
},
"num": null,
"urls": [],
"raw_text": "Fajie Yuan, Xiangnan He, Alexandros Karatzoglou, and Liguang Zhang. 2020. Parameter-efficient trans- fer from sequential behaviors for user modeling and recommendation. In Proceedings of the 43rd Inter- national ACM SIGIR conference on research and development in Information Retrieval, SIGIR 2020, Virtual Event, China, July 25-30, 2020, pages 1469- 1478. ACM.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Towards personalized and semantic retrieval: An end-to-end solution for ecommerce search via embedding learning",
"authors": [
{
"first": "Han",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Songlin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhiling",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Yunjiang",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Yun",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Weipeng",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Wenyun",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, SIGIR 2020, Virtual Event",
"volume": "",
"issue": "",
"pages": "2407--2416",
"other_ids": {
"DOI": [
"10.1145/3397271.3401446"
]
},
"num": null,
"urls": [],
"raw_text": "Han Zhang, Songlin Wang, Kang Zhang, Zhiling Tang, Yunjiang Jiang, Yun Xiao, Weipeng Yan, and Wenyun Yang. 2020. Towards personalized and semantic retrieval: An end-to-end solution for e- commerce search via embedding learning. In Pro- ceedings of the 43rd International ACM SIGIR con- ference on research and development in Information Retrieval, SIGIR 2020, Virtual Event, China, July 25- 30, 2020, pages 2407-2416. ACM.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "The difference between a click and a cart-add: Learning interaction-specific embeddings",
"authors": [
{
"first": "Xiaoting",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Raphael",
"middle": [],
"last": "Louca",
"suffix": ""
},
{
"first": "Diane",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Liangjie",
"middle": [],
"last": "Hong",
"suffix": ""
}
],
"year": 2020,
"venue": "Companion Proceedings of the Web Conference 2020, WWW '20",
"volume": "",
"issue": "",
"pages": "454--460",
"other_ids": {
"DOI": [
"10.1145/3366424.3386197"
]
},
"num": null,
"urls": [],
"raw_text": "Xiaoting Zhao, Raphael Louca, Diane Hu, and Liangjie Hong. 2020. The difference between a click and a cart-add: Learning interaction-specific embeddings. In Companion Proceedings of the Web Conference 2020, WWW '20, page 454-460, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "2020. A flexible large-scale similar product identification system in e-commerce",
"authors": [
{
"first": "L",
"middle": [],
"last": "Zhen Zuo",
"suffix": ""
},
{
"first": "Michinari",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Momma",
"suffix": ""
},
{
"first": "Yikai",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Ni",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhen Zuo, L. Wang, Michinari Momma, W. Wang, Yikai Ni, Jianfeng Lin, and Y. Sun. 2020. A flexi- ble large-scale similar product identification system in e-commerce.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "A Visualization of Session Embeddings",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A Visualization of Session Embeddings",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Overall architecture of Prod2BERT pretrained on MLM task."
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "and employ the following configuration: window = 15, iterations = 30, ns exponent = 0.75, dimensions = [48, 100]."
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "T-SNE plot of browsing session vector space from Shop A and built with the first hidden layer of pre-trained Prod2BERT."
},
"FIGREF4": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "T-SNE plot of browsing session vector space from Shop A and built with prod2vec embeddings."
},
"FIGREF5": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "T-SNE plot of browsing session vector space from Shop B and built with the first hidden layer of pre-trained Prod2BERT."
},
"FIGREF6": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "T-SNE plot of browsing session vector space from Shop B and built with prod2vec embeddings."
},
"TABREF1": {
"num": null,
"html": null,
"content": "<table/>",
"text": "Hyperparameters and their ranges.",
"type_str": "table"
},
"TABREF3": {
"num": null,
"html": null,
"content": "<table/>",
"text": "Descriptive statistics for the training dataset. pct shows 50 th and 75 th percentiles of the session length.",
"type_str": "table"
}
}
}
}