|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T10:33:07.145077Z" |
|
}, |
|
"title": "", |
|
"authors": [], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Alternative recommender systems are critical for ecommerce companies. They guide customers to explore a massive product catalog and assist customers to find the right products among an overwhelming number of options. However, it is a non-trivial task to recommend alternative products that fit customers' needs. In this paper, we use both textual product information (e.g. product titles and descriptions) and customer behavior data to recommend alternative products. Our results show that the coverage of alternative products is significantly improved in offline evaluations as well as recall and precision. The final A/B test shows that our algorithm increases the conversion rate by 12% in a statistically significant way. In order to better capture the semantic meaning of product information, we build a Siamese Network with Bidirectional LSTM to learn product embeddings. In order to learn a similarity space that better matches the preference of real customers, we use co-compared data from historical customer behavior as labels to train the network. In addition, we use NMSLIB to accelerate the computationally expensive kNN computation for millions of products so that the alternative recommendation is able to scale across the entire catalog of a major ecommerce site.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Alternative recommender systems are critical for ecommerce companies. They guide customers to explore a massive product catalog and assist customers to find the right products among an overwhelming number of options. However, it is a non-trivial task to recommend alternative products that fit customers' needs. In this paper, we use both textual product information (e.g. product titles and descriptions) and customer behavior data to recommend alternative products. Our results show that the coverage of alternative products is significantly improved in offline evaluations as well as recall and precision. The final A/B test shows that our algorithm increases the conversion rate by 12% in a statistically significant way. In order to better capture the semantic meaning of product information, we build a Siamese Network with Bidirectional LSTM to learn product embeddings. In order to learn a similarity space that better matches the preference of real customers, we use co-compared data from historical customer behavior as labels to train the network. In addition, we use NMSLIB to accelerate the computationally expensive kNN computation for millions of products so that the alternative recommendation is able to scale across the entire catalog of a major ecommerce site.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Recommender systems are pervasive in ecommerce and other web systems (Zhang et al. 2019) . Alternative product recommendation is an important way to help customers easily find the right products and speed up their buying decision process. For example, if a customer is viewing a \"25.5 cu. ft. Counter Depth French Door Refrigerator in Stainless Steel\", she may also be interested in other french door refrigerators in different brands but with similar features such as capacity, counter depth, material, etc.", |
|
"cite_spans": [ |
|
{ |
|
"start": 69, |
|
"end": 88, |
|
"text": "(Zhang et al. 2019)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "There are two main ways to obtain an alternative product list for a given product. First is a content-based recommendation approach. If two products have similar attributes or content so that one can be replaced by the other, we can consider them as alternative products. Word2vec has been used to learn item embeddings for comparing item similarities (Caselles-Dupre, Lesaint, and Royo-Letelier 2018). However, this unsupervised learning process does not guarantee the embedding distance is consistent with customers' shopping preference. The second way is to leverage customer behavior to find alternative products in the style of item-to-item collaborative filtering (Linden, Smith, and York 2003) . If customers frequently consider two products together, one product can be recommended as an alternative for the other. Unfortunately, this approach has a cold start problem.", |
|
"cite_spans": [ |
|
{ |
|
"start": 670, |
|
"end": 700, |
|
"text": "(Linden, Smith, and York 2003)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this work, we formulate the recommendation problem into a supervised product embedding learning process. To be specific, we develop a deep learning based embedding approach using Siamese Network, which leverages both product content (including title and description) and customer behavior to generate Top-N recommendations for an anchor product. Our contributions are as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 Recommend alternative products using both product textual information and customer behavior data. This allows us to better handle both the cold start and relevancy problems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 Use a Bidirectional LSTM structure to better capture the semantic meaning of product textual information.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 Build a Siamese Network to incorporate co-compared customer behavior data to guide the supervised learning process and generate a product embedding space that better matches customer's preference.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 Our model outperforms baselines in both offline validations and an online A/B test.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
|
{ |
|
"text": "We have the textual information = { ! , \u2026 , \" } (a concatenation of product title and description) of a catalog of products = { ! , \u2026 , \" } to make recommendations. The goal of the alternative recommendation is to learn a embedding projection function # so that the embedding of an anchor product that is viewed by a customer # ( $ ) is close to the embeddings of its alternatives # ( % ) . In this paper, we use the cosine similarity between the embeddings of # ( $ ) and # ( % ) as the energy function.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Formulation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "# = \u2329' ! () \" ) ,' ! () # ) \u232a \u2016' ! () \" ) \u2016\u2016' ! () # ) \u2016", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Problem Formulation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The problem is how to learn a function as the embedding projection function # to better capture the semantic meanings of the product textual information and project a sequence of tokens / into an embedding vector of size d. The total loss over the training data set = 0 $ (/) , % (/) , (/) 2 is given by", |
|
"cite_spans": [ |
|
{ |
|
"start": 286, |
|
"end": 289, |
|
"text": "(/)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Formulation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "# ( ) = \u2211 # (/) ( 0 /1! $ (/) , % (/) , (/) ) (2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Formulation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "where the instance loss function # (/) is a contrastive loss function. It consists of a term 2 for the positive cases ( (/) = 1), where the product pair are alternative to each other. In addition, it consists of a term 3 for the negative cases ( (/) = 0), where the product pair are not often considered together by customers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Formulation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "! (#) = (#) % $ & (#) , ' (#) ' + (1 \u2212 (#) ) ( ( & (#) , ' (#) ) (3)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Formulation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The loss functions for the positive and negative cases are given by:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Formulation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "2 7 $ (/) , % (/) 8 = |1 \u2212 # | (4) ( $ & (#) , ' (#) ' = - | | ! > 0 0 \u210e", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Problem Formulation", |
|
"sec_num": "2" |
|
}, |
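
{

"text": "To make Equations (1)-(5) concrete, the following is a minimal NumPy sketch of the energy function and the contrastive instance loss; the function names are ours for illustration and not from the original implementation.\n\nimport numpy as np\n\ndef cosine_energy(u, v):\n    # Equation (1): cosine similarity between two product embeddings.\n    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))\n\ndef instance_loss(u, v, y):\n    # Equations (3)-(5): contrastive loss for one labeled product pair.\n    e = cosine_energy(u, v)\n    l_pos = abs(1.0 - e)               # Equation (4): pull alternatives together.\n    l_neg = abs(e) if e > 0 else 0.0   # Equation (5): push non-alternatives apart.\n    return y * l_pos + (1 - y) * l_neg\n\n# Equation (2): the total loss is the sum of instance losses over the training set:\n# total = sum(instance_loss(u, v, y) for u, v, y in dataset)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Problem Formulation",

"sec_num": "2"

},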
|
{ |
|
"text": "Based on the loss function, the problem is how to build a network that can learn part of the product information that is important for customers and project a product to the right embedding space that is consistent with customers' preference.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Formulation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Textual Data and Co-compared Data Product Information: From the ecommerce site catalog data, we extract the product ID, product title and description as the raw textual data with an example in Table 1 . Co-compared Data: Customers can select several products on a search result page for cocomparison to verify how they are similar and different based on their features. Those products are considered alternative to each other. The cocompared is a strong signal of the similarity between products within same product taxonomy. We extract co-compared data from clickstream to create the training data. Some examples of the cocompared data are shown in Table 2 . We build a Siamese Network (Bromley et al. 1994) with Bidirectional LSTM (Graves and Schmidhuber 2005) components to learn and generate embeddings for all products. The product embedding space better captures the semantic meaning of the product textural information and customer preferences. Textual data are in a sequential format and the order of the texts matters for the network. We choose Bidirectional LSTM to learn representation in both directions from the input sequences. We use Keras with TensorFlow to build and train the network. We choose RMSprop (Hinton et al. 2012) as the optimizer. The loss function is the binary cross entropy. The network architecture is shown in Figure 1 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 687, |
|
"end": 708, |
|
"text": "(Bromley et al. 1994)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 733, |
|
"end": 762, |
|
"text": "(Graves and Schmidhuber 2005)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 1221, |
|
"end": 1241, |
|
"text": "(Hinton et al. 2012)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 193, |
|
"end": 200, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 650, |
|
"end": 657, |
|
"text": "Table 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1344, |
|
"end": 1352, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Deep Learning Embedding Approach", |
|
"sec_num": "3" |
|
}, |
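
{

"text": "A minimal Keras sketch of this architecture is shown below. The vocabulary size, sequence length, and layer dimensions are illustrative assumptions rather than our production settings, and the two text fields are folded into a single token sequence per product for brevity.\n\nimport tensorflow as tf\nfrom tensorflow.keras import layers, Model\n\nVOCAB_SIZE, SEQ_LEN, EMB_DIM, LSTM_UNITS = 50000, 200, 128, 64  # assumed values\n\ndef build_encoder():\n    # Shared tower: token embedding followed by a Bidirectional LSTM.\n    inp = layers.Input(shape=(SEQ_LEN,))\n    x = layers.Embedding(VOCAB_SIZE, EMB_DIM, mask_zero=True)(inp)\n    x = layers.Bidirectional(layers.LSTM(LSTM_UNITS))(x)\n    return Model(inp, x, name='encoder')\n\nencoder = build_encoder()  # both branches share these weights\nleft = layers.Input(shape=(SEQ_LEN,), name='anchor_text')\nright = layers.Input(shape=(SEQ_LEN,), name='candidate_text')\n# Cosine similarity between the two towers, squashed for binary cross entropy.\nsim = layers.Dot(axes=1, normalize=True)([encoder(left), encoder(right)])\nout = layers.Dense(1, activation='sigmoid')(sim)\n\nsiamese = Model([left, right], out)\nsiamese.compile(optimizer='rmsprop', loss='binary_crossentropy')",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Deep Learning Embedding Approach",

"sec_num": "3"

},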
|
{ |
|
"text": "Positive and negative sampling: we filter out the products without titles and/or descriptions from the co-compared data. We form a connected graph from the co-compared product pairs. For example, if product A and B are co-compared and product B and C are co-compared, then (A, B, C) forms a connected graph. If products D and E are cocompared and E and F are co-compared, then (D, E, F) forms a connection graph. We create positive samples for each product by randomly sampling another product within the same set, e.g. [A, C, 1] . We also create negative samples for each product in a connected graph by randomly sampling a different connected graph first, then randomly sampling a product in that graph, e.g. [A, D, -1], as shown in Figure 2 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 520, |
|
"end": 529, |
|
"text": "[A, C, 1]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 735, |
|
"end": 743, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Fig. 2 Connected Graphs", |
|
"sec_num": null |
|
}, |
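
{

"text": "A sketch of this sampling scheme follows; networkx's connected components stand in for our production graph processing, and the names are illustrative. Labels here use 0 for negatives, matching the loss formulation in Section 2.\n\nimport random\nimport networkx as nx\n\ndef build_samples(co_compared_pairs, neg_per_pos=3):\n    # Group co-compared products into connected graphs, e.g. (A, B, C) and (D, E, F).\n    g = nx.Graph(co_compared_pairs)\n    components = [list(c) for c in nx.connected_components(g)]\n    samples = []\n    for idx, comp in enumerate(components):\n        others = [c for i, c in enumerate(components) if i != idx]\n        if len(comp) < 2 or not others:\n            continue\n        for product in comp:\n            # Positive: another product from the same connected graph, e.g. [A, C, 1].\n            pos = random.choice([p for p in comp if p != product])\n            samples.append((product, pos, 1))\n            # Negatives: sample a different connected graph first, then a product in it.\n            for _ in range(neg_per_pos):\n                other = random.choice(others)\n                samples.append((product, random.choice(other), 0))\n    return samples\n\nprint(build_samples([('A', 'B'), ('B', 'C'), ('D', 'E'), ('E', 'F')]))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Deep Learning Embedding Approach",

"sec_num": null

},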
|
{ |
|
"text": "What's the time period (year) 331900 65684 1 The negative sampling space is much larger than the positive sampling space because only a small number of products are frequently cocompared together by our customers. Thus, for each anchor product, we sample more negative samples than positive samples. Based on our experiments and empirically analysis, for each positive sample, three negative samples are created which gives the best performance on the validation loss when training the model. The statistics of the training data is shown in Table 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 541, |
|
"end": 548, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "How many products", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The Siamese Network training process takes about 10 hours to converge. The next step is to load the best model weights to generate product embeddings. Specifically, from the Siamese Network, we remove the last cosine similarity layer and the second input branch which processes the second product of the product pairs. We only use the Embedding layer and the Bidirectional LSTM layer. The final result is the concatenation of the hidden state of the product title and the hidden state of the product description.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training the Model and Generating Embeddings", |
|
"sec_num": null |
|
}, |
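
{

"text": "Continuing the Keras sketch above, extracting the shared tower from the trained Siamese model might look as follows. Here the encoder is applied to title and description token matrices separately and the hidden states are concatenated, as described above; the weight file name and input variables are assumptions.\n\nimport numpy as np\n\n# Load the best checkpoint, then reuse the shared tower; it already holds the\n# learned Embedding and Bidirectional LSTM weights.\nsiamese.load_weights('best_model.h5')\nencoder = siamese.get_layer('encoder')\n\n# title_tokens / desc_tokens: integer-encoded matrices, shape (n_products, SEQ_LEN).\ntitle_states = encoder.predict(title_tokens)\ndesc_states = encoder.predict(desc_tokens)\n\n# Final product embedding: concatenation of title and description hidden states.\nproduct_embeddings = np.concatenate([title_states, desc_states], axis=1)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Training the Model and Generating Embeddings",

"sec_num": null

},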
|
{ |
|
"text": "We generate millions of embeddings based on product titles and descriptions. For each product, the task is to compute distances with the rest of millions of product embeddings using a similarity metric, e.g. cosine similarity, and rank the similarity scores from higher to lower to get the Top-N recommended products. According to the detailed analysis from (Aum\u00fcller et al. 2019), we choose NMSLIB (Boytsov et al. 2016) library to conduct heavy kNN computations because it has high performance in both recall and queries per second.", |
|
"cite_spans": [ |
|
{ |
|
"start": 399, |
|
"end": 420, |
|
"text": "(Boytsov et al. 2016)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scalable Recommendation Generation", |
|
"sec_num": null |
|
}, |
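
{

"text": "A minimal NMSLIB usage sketch is shown below; the HNSW index parameters are illustrative, not our production settings.\n\nimport nmslib\n\n# Build an HNSW index over the product embeddings using cosine similarity.\nindex = nmslib.init(method='hnsw', space='cosinesimil')\nindex.addDataPointBatch(product_embeddings)  # shape: (n_products, d)\nindex.createIndex({'M': 16, 'efConstruction': 200}, print_progress=True)\n\n# Batch kNN query: Top-N (plus the self-match) neighbors for every product.\nneighbors = index.knnQueryBatch(product_embeddings, k=11, num_threads=8)\nfor ids, dists in neighbors[:3]:\n    # Drop the self-match; cosinesimil distance is 1 - cosine similarity.\n    print(ids[1:], 1.0 - dists[1:])",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Scalable Recommendation Generation",

"sec_num": null

},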
|
{ |
|
"text": "In this section, we describe how we evaluate the effectiveness and efficiency of our deep learning model with offline evaluation and online A/B test. We use our production data to validate the results since this is a unique case for us. We did not find exact similar open data set with similar customer behaviors that can be used for our evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Performance Evaluation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "This baseline algorithm uses product attributes to generate recommendations. The attributes contain numerical and categorical data. The categorical features are converted into numerical format using one-hot encoding. The distance between two products is computed using cosine similarity. This is the content-based method we compare with.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithms: 1) Baseline 1: Attributed Based", |
|
"sec_num": null |
|
}, |
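
{

"text": "A minimal sketch of this baseline; the attribute values are made up for illustration.\n\nimport numpy as np\nfrom sklearn.preprocessing import OneHotEncoder\nfrom sklearn.metrics.pairwise import cosine_similarity\n\n# Hypothetical attribute table: one row per product.\nnumeric = np.array([[25.5], [26.1], [4.5]])  # e.g. capacity in cu. ft.\ncategorical = [['french_door'], ['french_door'], ['top_freezer']]  # e.g. door style\n\n# One-hot encode the categorical features and append the numerical ones.\nonehot = OneHotEncoder().fit_transform(categorical).toarray()\nfeatures = np.hstack([numeric, onehot])\n\n# Pairwise cosine similarities between products.\nprint(cosine_similarity(features))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Algorithms: 1) Baseline 1: Attributed Based",

"sec_num": null

},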
|
{ |
|
"text": "This baseline algorithm uses the actual customer co-compared data. The recommendations are ranked by the co-comparison counts. Due to the cold-start problem, many products in the catalog do not have such recommendations even we create labels from the co-compared data. This is the collaborative filtering method we compare with since it's based on item-to-item relationships built by customer browsing behaviors.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2) Baseline 2: Frequently Compared", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For Deep Learning Based, we choose 0.8 as the cutting threshold for the cosine similarity score. This threshold is selected and validated based on the judgement from our human expert validators after they examine thousands of random sampled anchors from the catalog data and the recommendations generated from our model. We only keep the recommendations that have at least 0.8 similarity with each anchor product.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "3) Proposed: Deep Learning Based", |
|
"sec_num": null |
|
}, |
|
|
{ |
|
"text": "Two weeks of actual customer purchase data from clickstream data is used to evaluate the performance of all 3 algorithms based on precision and recall. There are total 1.1 million purchase sessions. In this comparison, we use the raw data regardless if each session has all two baselines. This is a fair comparison since not all anchors can be covered by both algorithms. For example, a product may not have the same set of attributes as other products so this product cannot be covered by Attributed Based algorithm. This is because there are vast variants of similar products without same set of attributes. Another scenario is that this product has never been compared with other products by our customers so this product cannot be covered by Frequently Compared algorithm. For the Deep Learning Based, we compare its recommendations with the purchased items. Table 4 shows our algorithm performs much better than the baseline algorithms for all top 1, 5, and 10 items precision and recall scores, especially for precision top 1, recall top 5 and top 10. The main reasons are: i) Frequently Compared recommends co-compared products by customers and only covers small sets of products; ii) Attributed Based approach has a higher coverage but a lower relevancy.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 863, |
|
"end": 871, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparison 1:", |
|
"sec_num": null |
|
}, |
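
{

"text": "For reference, a sketch of how precision and recall at k can be computed for one purchase session; the data layout is assumed.\n\ndef precision_recall_at_k(recommended, purchased, k):\n    # recommended: ranked recommendation IDs for the anchor product.\n    # purchased: products actually bought in the session.\n    top_k = set(recommended[:k])\n    hits = len(top_k & set(purchased))\n    precision = hits / k\n    recall = hits / len(purchased) if purchased else 0.0\n    return precision, recall\n\n# Session-level scores are then averaged over all 1.1 million purchase sessions.\np, r = precision_recall_at_k(['p1', 'p2', 'p3'], ['p2'], k=1)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Comparison 1:",

"sec_num": null

},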
|
{ |
|
"text": "In this comparison, we select sessions that have both Attributed Based and Frequently Compared. Table 5 shows our Deep Learning Based still performs much better than Attributed Based but not Frequently Compared. The reason is that the label we used to train our model is from cocompared data, so our model has the upbound from Frequently Compared's performance. This experiment validated our hypothesis. 2) Coverage: The anchor coverages of all the algorithms are also computed. The Attributes Based and Frequently Compared approaches cover 31.5% and 47.1% of anchors, respectively, and those two numbers are increased to 81.2% and 83.4% with the incremental increase from our Deep Learning Based approach. Since most of our products have titles and descriptions, so our Deep Learning Based significantly boosts the coverage of anchor products from our catalog to have good recommendations.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 96, |
|
"end": 103, |
|
"text": "Table 5", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparison 2:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Conversion Rate: The A/B test was run for three weeks and success was measured using conversion rate. Conversion rate is the number of purchases divided by number of visits which captures the similarity between anchor and recommendations. Our deep learning model outperforms the existing hybrid algorithm which combined Attribute Based and Frequently Compared with a 12% higher conversion rate. This is a very successful test for our business. We're implementing the deep learning algorithm on our production site.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Online A/B Testing:", |
|
"sec_num": null |
|
}, |
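
{

"text": "As an illustration, a pooled two-proportion z-test is a standard way to check a conversion-rate lift for significance; the test choice and the counts below are illustrative assumptions, not our reported methodology.\n\nfrom math import sqrt\nfrom statistics import NormalDist\n\ndef two_proportion_z(conv_a, n_a, conv_b, n_b):\n    # Pooled two-proportion z-test for a difference in conversion rates.\n    p_a, p_b = conv_a / n_a, conv_b / n_b\n    p = (conv_a + conv_b) / (n_a + n_b)\n    z = (p_b - p_a) / sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))\n    return z, 2 * (1 - NormalDist().cdf(abs(z)))  # z-score and two-sided p-value\n\n# Made-up counts: control vs. treatment visits and purchases.\nz, p_value = two_proportion_z(conv_a=5000, n_a=100000, conv_b=5600, n_b=100000)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Online A/B Testing:",

"sec_num": null

},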
|
{ |
|
"text": "The traditional method for recommender systems is content-based recommendations (Lops et al. 2011) . This method can handle the cold start problem well. Collaborative Filtering is another method based on user behaviors. For example, Matrix Factorization (Koren et al. 2009 ) is a widely used method for collaborative filtering. Our two baseline algorithms, one is considered as content-based and the other is considered as collaborative filtering using user behavior data with the co-compared format. Deep learning now has been widely used not only in the academic community, but also in industrial recommender system settings, such as Airbnb's listing recommendations (Grbovic and Cheng 2018) and Pinterest's recommendation engine (Ying et al. 2018) . Most of recent deep learning papers (e.g., Wang et al. 2019; Ebesu, Shen, and Fang 2018) have been focused on sequential recommendations. (Neculoiu et al. 2016) presents a deep network using Siamese architecture with character-level Bidirectional LSTM for job title normalization. (Mueller and Thyagarajan 2016) also presents a Siamese adaptation of the LSTM to learn sentence embedding. However, this work needs human annotated labels while our labels are extracted from clickstream data. Our work more focuses on providing alternative recommendations by learning product embedding from product textual data and customer signals.", |
|
"cite_spans": [ |
|
{ |
|
"start": 80, |
|
"end": 98, |
|
"text": "(Lops et al. 2011)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 254, |
|
"end": 272, |
|
"text": "(Koren et al. 2009", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 669, |
|
"end": 693, |
|
"text": "(Grbovic and Cheng 2018)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 732, |
|
"end": 750, |
|
"text": "(Ying et al. 2018)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 796, |
|
"end": 813, |
|
"text": "Wang et al. 2019;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 814, |
|
"end": 841, |
|
"text": "Ebesu, Shen, and Fang 2018)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Recommender Systems are core functions for online retailers to increase their revenue. To help customers easily find alternative products in an automated way, we develop a deep learning approach to generate product embeddings based on a Siamese Network with Bidirectional LSTM. We extract co-compared data from customer clickstream and product textual data to train the network and generate the embedding space. Our approach significantly improves the coverage of similar products as well as improving recall and precision. Our algorithm also shows promising results on conversion rate in an online A/B test.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "ANN-Benchmarks: A Benchmarking Tool for Approximate Nearest Neighbor Algorithms", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Aum\u00fcller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Bernhardsson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Faithfull", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Information Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aum\u00fcller, M.; Bernhardsson, E.; Faithfull, A. 2019. ANN-Benchmarks: A Benchmarking Tool for Approximate Nearest Neighbor Algorithms. Information Systems.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Off the Beaten Path: Let's Replace Term-Based Retrieval with k-NN Search", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Boytsov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Novak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Malkov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Nyberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "proceedings of CIKM", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Boytsov, L.; Novak, D.; Malkov, Y.; Nyberg, E. 2016. Off the Beaten Path: Let's Replace Term- Based Retrieval with k-NN Search. In proceedings of CIKM.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Signature verification using a \"Siamese\" time delay neural network", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Bromley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Guyon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Lecun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Sackinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Shah", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Proceedings of Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "737--744", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bromley, J.; Guyon, I.; LeCun, Y.; Sackinger, E.; and Shah, R. 1994. Signature verification using a \"Siamese\" time delay neural network. In Proceedings of Advances in Neural Information Processing Systems, 737-744.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Word2vec applied to recommendation: hyperparameters matter", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Caselles-Dupre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Lesaint", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Royo-Letelier", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of RecSys", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "352--356", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Caselles-Dupre, H.; Lesaint, F.; and Royo-Letelier, J. 2018. Word2vec applied to recommendation: hyperparameters matter. In Proceedings of RecSys, 352-356.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Collaborative memory network for recommendation systems", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Ebesu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Fang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of SIGIR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "515--524", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ebesu, T.; Shen, B.; and Fang, Y. 2018. Collaborative memory network for recommendation systems. In Proceedings of SIGIR, 515-524.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Framewise phoneme classification with Bidirectional LSTM and other neural network architectures", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Graves", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Schmidhuber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of IEEE International Join Conference on Neural Networks", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Graves, A. and Schmidhuber, J. 2005. Framewise phoneme classification with Bidirectional LSTM and other neural network architectures. In Proceedings of IEEE International Join Conference on Neural Networks, July 31-Aug 4.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Real-time personalization using embeddings for search ranking at Airbnb", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Grbovic", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proc. Of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "311--320", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Grbovic, M. and Cheng, H. 2018. Real-time personalization using embeddings for search ranking at Airbnb. In Proc. Of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 311-320.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Neural networks for machine learning lecture 6a overview of mini-batch gradient descent", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Hinton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Srivastava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Swersky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hinton, G.; Srivastava, N.; and Swersky, K. 2012. Neural networks for machine learning lecture 6a overview of mini-batch gradient descent.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Matrix factorization techniques for recommender systems", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Koren", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Bell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Volinsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "In Proceedings of IEEE Computer", |
|
"volume": "42", |
|
"issue": "8", |
|
"pages": "30--37", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Koren, Y.; Bell, R.; and Volinsky C. 2009. Matrix factorization techniques for recommender systems. In Proceedings of IEEE Computer, Vol. 42, No. 8, 30-37.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Amazon.com Recommendations: Item-to-Item Collaborative Filtering", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Linden", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "York", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "IEEE Internet Computing", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "76--80", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Linden, G.; Smith, B.; and York, J. 2003. Amazon.com Recommendations: Item-to-Item Collaborative Filtering. In IEEE Internet Computing, Vol. 7, Issue 1, 76-80.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Content-based recommender systems: state of the art and trends", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Lops", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Gemmis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Semeraro", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Recommender Systems Handbook", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "73--100", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lops, P.; Gemmis, M. de; and Semeraro, G. 2011. Content-based recommender systems: state of the art and trends. In Recommender Systems Handbook, 73-100.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Neural graph collaborative filtering", |
|
"authors": [ |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Chua", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of SIGIR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wang, X.; He, X.; Wang, M.; Feng, F.; and Chua T.S. 2019. Neural graph collaborative filtering. In Proceedings of SIGIR, July 21-25.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Graph convolutional neural networks for web-scale recommender systems", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Ying", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Eksombatchai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Hamilton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Leskovec", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "24 th SIGKDD", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "974--983", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ying, R.; He, R.; Chen, K.; Eksombatchai, P.; Hamilton W. L.; and Leskovec, J. 2018. Graph convolutional neural networks for web-scale recommender systems. In 24 th SIGKDD, 974-983.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Deep learning based recommender system: a survey and new perspectives", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Yao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Tay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "In Journal of ACM Computing Surveys (CSUR)", |
|
"volume": "52", |
|
"issue": "5", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhang, S.; Yao, L.; Sun, A.; Tay, Y. 2019. Deep learning based recommender system: a survey and new perspectives. In Journal of ACM Computing Surveys (CSUR), Vol 52, Issue 1, No. 5.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF1": { |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"text": "" |
|
}, |
|
"TABREF3": { |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"text": "Training Data Statistics" |
|
}, |
|
"TABREF5": { |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"text": "Precision and Recall" |
|
}, |
|
"TABREF7": { |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"text": "Precision and Recall" |
|
} |
|
} |
|
} |
|
} |