{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:33:30.670082Z"
},
"title": "How to Grow a (Product) Tree Personalized Category Suggestions for eCommerce Type-Ahead",
"authors": [
{
"first": "Jacopo",
"middle": [],
"last": "Tagliabue",
"suffix": "",
"affiliation": {
"laboratory": "Coveo Labs",
"institution": "",
"location": {
"region": "New York",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Bingqing",
"middle": [],
"last": "Yu",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Marie",
"middle": [],
"last": "Beaulieu",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In an attempt to balance precision and recall in the search page, leading digital shops have been effectively nudging users into select category facets as early as in the type-ahead suggestions. In this work, we present Session-Path, a novel neural network model that improves facet suggestions on two counts: first, the model is able to leverage session embeddings to provide scalable personalization; second, SessionPath predicts facets by explicitly producing a probability distribution at each node in the taxonomy path. We benchmark SessionPath on two partnering shops against count-based and neural models, and show how business requirements and model behavior can be combined in a principled way.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In an attempt to balance precision and recall in the search page, leading digital shops have been effectively nudging users into select category facets as early as in the type-ahead suggestions. In this work, we present Session-Path, a novel neural network model that improves facet suggestions on two counts: first, the model is able to leverage session embeddings to provide scalable personalization; second, SessionPath predicts facets by explicitly producing a probability distribution at each node in the taxonomy path. We benchmark SessionPath on two partnering shops against count-based and neural models, and show how business requirements and model behavior can be combined in a principled way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Modern eCommerce search engines need to work on millions of products; in an effort to fight \"zero result\" pages, digital shops often sacrifice precision to increase recall 1 , relying on Learn2Rank (Liu, 2009) to show the most relevant results in the top positions (Matveeva et al., 2006) . While this strategy is effective in web search, when users rarely go after page one (Granka et al., 2004; Guan and Cutrell, 2007) , it is only partially successful in product search: shoppers may spend time browsing several pages in the result set and re-order products based on custom criteria ( Figure 1) ; analyzing industry data, up to 20% of clicked products occur not on the first page, with re-ranking in approximately 10% of search sessions.",
"cite_spans": [
{
"start": 198,
"end": 209,
"text": "(Liu, 2009)",
"ref_id": "BIBREF22"
},
{
"start": 265,
"end": 288,
"text": "(Matveeva et al., 2006)",
"ref_id": "BIBREF24"
},
{
"start": 375,
"end": 396,
"text": "(Granka et al., 2004;",
"ref_id": "BIBREF12"
},
{
"start": 397,
"end": 420,
"text": "Guan and Cutrell, 2007)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 588,
"end": 597,
"text": "Figure 1)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Leading eCommerce websites leverage machine learning to suggest facets -i.e. product categories, such as Video Games for \"nintento switch\" -during type-ahead ( Figure 2 ): narrowing down candidate products explicitly by matching the selected categories, shops are able to present less noisy result pages and increase the perceived relevance of their search engine. In this work we present Session-Path, a scalable and personalized model to solve facet prediction for type-ahead suggestions: given a shopping session and candidate queries in the suggestion dropdown menu, the model is asked to predict the best category facet to help users narrow down search intent. A big advantage of Session-Path is that it can complement any existing stack by adding facet prediction to items as retrieved by the type-ahead API.",
"cite_spans": [],
"ref_spans": [
{
"start": 160,
"end": 168,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We summarize the main contributions of this work as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 we devise, implement and benchmark several models of incremental complexity (as measured by features and engineering requirements); starting from a non-personalized count-based baseline, we arrive at Session-Path, an encoder-decoder architecture that explicitly models the real-time generation of paths in the catalog taxonomy; Figure 2 : Facet suggestions during type-ahead: shoppers can be nudged to pick a facet before querying, to help the search engine present more relevant results.",
"cite_spans": [],
"ref_spans": [
{
"start": 330,
"end": 338,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 we discuss the importance of false positives and false negatives in the relevant business context, and provide decision criteria to adjust the precision/recall boundary after training. By combining the predictions of the neural network with a decision module, we show how model behavior can be tuned in a principled way by human decision makers, without interfering with the underlying inference process or introducing ad hoc manual rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To the best of our knowledge, SessionPath is the first type-ahead model that allows dynamic facet predictions: linguistic input and in-session intent are combined to adjust the target taxonomy depth (sport / basketball vs sport / basketball / lebron) based on real-time shopper behavior and model confidence. For this reason, we believe the methods and results here presented will be of great interest to any digital shop struggling to strike the right balance between precision and recall in a catalog with tens-of-thousands-to-millions of items.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Less (Choice) is More: Considerations From Industry Use Cases",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The problem of narrowing down the result set before re-ranking is a known concern for mid-to-bigsize shops: as shown in Figure 1 -A, a common solution is to invite shoppers to select a category facet when still typing. Aside from UX considerations, restricting the result set may be beneficial for other reasons. On one hand, decision science proved that providing shoppers with more alternatives is actually less efficient (the so-called \"paradox of choice\" (Scheibehenne et al., 2010; Iyengar and Lepper, 2001 )) -insofar as SessionPath helps avoiding unnecessary \"cognitive load\", it may be a welcomed ally in fighting irrational decision making; on the other, by restricting result set through Figure 3 : Shoppers in Session A and Session B have different sport intent, as shown by the products visited. By combining linguistic and behavioral in-session data, SessionPath provides in real-time personalized facet suggestions to the same \"nike\" query in the type-ahead.",
"cite_spans": [
{
"start": 459,
"end": 486,
"text": "(Scheibehenne et al., 2010;",
"ref_id": "BIBREF30"
},
{
"start": 487,
"end": 511,
"text": "Iyengar and Lepper, 2001",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 120,
"end": 128,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 698,
"end": 706,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "facet selection, the model may reduce the long-tail effect of many queries on product visibility: when results are too many and items frequently changed, standard Learn2Rank approaches tend to penalize less popular items (Abdollahpouri et al., 2017; Anderson, 2006) , which end up buried among noisy results far from the first few pages and never collect enough relevance feedback to rise through the top.",
"cite_spans": [
{
"start": 221,
"end": 249,
"text": "(Abdollahpouri et al., 2017;",
"ref_id": "BIBREF0"
},
{
"start": 250,
"end": 265,
"text": "Anderson, 2006)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we extend industry best practices of facet suggestion in type-ahead by providing a solution that is dynamic in two ways: i) given the same query, session context may be used to provide a contextualized suggestion ( Figure 3) ; ii) given two queries, the model will decide in real-time how deep in the taxonomy path the proposed suggestion needs to be ( Figure 4 ): for some queries, a generic facet may be optimal (as we do not want to narrow the result set too much), for others a more specific suggestion may be more suitable. Given the natural trade-off between precision and recall at different depths, Section 7.2 is devoted to provide a principled solution.",
"cite_spans": [],
"ref_spans": [
{
"start": 229,
"end": 238,
"text": "Figure 3)",
"ref_id": null
},
{
"start": 367,
"end": 375,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Facet selection. Facet selection is linked to query classification on the research side (Lin et al., 2018; Skinner and Kallumadi, 2019) and query scoping on the product side, i.e. pre-selecting, say, the facet color with value black for a query such as \"black basketball shoes\" (Liberman and Lempel, 2014; Vandic et al., 2013) . Scoping may result in an aggressive restriction of the result set, lowering recall excessively: in most cases, an acceptable shopping experience would need to combine scoping with query expansion (Diaz et al., 2016) . SessionPath is more flexible than query classification, by supporting explicit path prediction and incorporating in-session information; it is more gentle than scop-ing (by nudging transparently the final user instead of forcing a selection behind the scene); it is more principled than expansion in balancing precision and recall.",
"cite_spans": [
{
"start": 88,
"end": 106,
"text": "(Lin et al., 2018;",
"ref_id": "BIBREF21"
},
{
"start": 107,
"end": 135,
"text": "Skinner and Kallumadi, 2019)",
"ref_id": "BIBREF32"
},
{
"start": 278,
"end": 305,
"text": "(Liberman and Lempel, 2014;",
"ref_id": "BIBREF20"
},
{
"start": 306,
"end": 326,
"text": "Vandic et al., 2013)",
"ref_id": "BIBREF34"
},
{
"start": 525,
"end": 544,
"text": "(Diaz et al., 2016)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "Deep Learning in IR. The development of deep learning models for IR has been mostly restricted to the retrieve-and-rerank paradigm (Mitra and Craswell, 2017; Guo et al., 2016) . Some recent works have been focused specifically on ranking suggestions for type-ahead: neural language models are proposed by Park and Chiba (2017); Wang et al. (2018b) ; specifically in eCommerce, Kannadasan and Aslanyan (2019) employs fastText to represent queries in the ranking phase and Yu et al. (2020) leverages deep image features for in-session personalization. While this work employs deep neural networks both for feature encoding and the inference itself, the proposed methods are agnostic on the underlying retrieval algorithm, as long as platforms can enrich type-ahead response with the predicted category. By providing a gentle entry point into existing workflows, a great product strength of SessionPath is the possibility of deploying the new functionalities with minimal changes to any architecture, neural or traditional (see also Appendix A).",
"cite_spans": [
{
"start": 131,
"end": 157,
"text": "(Mitra and Craswell, 2017;",
"ref_id": "BIBREF27"
},
{
"start": 158,
"end": 175,
"text": "Guo et al., 2016)",
"ref_id": "BIBREF16"
},
{
"start": 328,
"end": 347,
"text": "Wang et al. (2018b)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "Suggesting a category facet can be modelled with the help of few formal definitions. A target shop E has products p 1 , p 2 , ...p n \u2208 P (e.g. nike air max 97) and categories c 1,1 , c 1,2 , ...c n,m \u2208 C, where c n,m is the category n at depth m (e.g. at m = 1, [soccer, volley, football, basketball] , at m = 2 [shoes, pants, t-shirts], etc.); a taxonomy tree T m is an indexed mapping P \u2192 C m , assigning a category to products for each depth m (e.g. air max 97 \u2192 0 root, \u2192 1 soccer, \u2192 2 shoes, \u2192 3 messi etc.); root is the base category in the taxonomy, and it is common to all products (we will omit it for brevity in all our examples). In what follows, we use path to denote a sequence of categories (hierarchically structured) in our target shop (e.g. root / soccer / shoes / messi), and nodes to denote the categories in a path (e.g. soccer is a node of soccer / shoes / messi).",
"cite_spans": [
{
"start": 262,
"end": 300,
"text": "[soccer, volley, football, basketball]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement",
"sec_num": "4"
},
{
"text": "Given a browsing session s containing products p x , p y , ...p z , and a candidate type-ahead query q, the model's goal is to learn both the optimal depth value m and, for each k \u2264 m, a contextual function f (q, s) \u2192 C k . As we shall see in the ensuing Figure 4 : Functional flow for SessionPath: the current session and the candidate query \"shoes\" are embedded and fed to the model; the distribution over possible categories at each step of the taxonomy is passed to a decision module, that either cuts the generation at that step or includes the step in the final prediction. The decision process is repeated until either the module cuts or a max-length path is generated.",
"cite_spans": [],
"ref_spans": [
{
"start": 255,
"end": 263,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Problem Statement",
"sec_num": "4"
},
{
"text": "section, SessionPath solution to this challenge is two-fold: a model generating a path first, and a decision module to pick the appropriate depth m (Figure 4 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 148,
"end": 157,
"text": "(Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Problem Statement",
"sec_num": "4"
},
{
"text": "We approach the challenge incrementally, by first developing a count-based model (CM) that learns a mapping from queries to all paths (i.e. sport and sport / soccer are treated purely as \"labels\", so they are two completely unrelated target classes for the model); CM will both serve as a baseline for more sophisticated methods and as a fast reference implementation not requiring any deep learning infrastructure. We improve on CM with SessionPath, a model based on deep neural networks. From a product perspective, it is important to remember ( Figure 4 ) that a decision module is called after a path prediction is made: we discuss how to tune this crucial part after establishing the general performance of the proposed models.",
"cite_spans": [],
"ref_spans": [
{
"start": 548,
"end": 556,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Baseline and Personalized Models",
"sec_num": "5"
},
{
"text": "The intuition behind the count-based model is that we may gain insights on relevant paths linked to a query from the clicks on search results. Therefore, we can build a simple count-based model by creating a map from each query in the search logs to their frequently associated paths. To build this map, we first retrieve all products clicked after each query, along with their path; for a given query, we can then calculate the percentage of occurrence of each path in the clicked products. Since the model is not hierarchical, it is important to note that sport and sport / basketball will be treated as completely disjoint target classes for the prediction. To avoid noisy results, we empirically determined a frequency threshold for paths to be counted as relevant to a certain query (80%); at prediction time, given a query in the training set, we retrieve all the paths associated with it and return the one with longest depth; for unseen queries, no prediction is made.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Baseline Model",
"sec_num": "5.1"
},
{
"text": "The main conceptual improvements over CM are three:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling Session Context and Taxonomy Paths",
"sec_num": "5.2"
},
{
"text": "\u2022 SessionPath produces predictions also for queries not in the training set;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling Session Context and Taxonomy Paths",
"sec_num": "5.2"
},
{
"text": "\u2022 SessionPath introduces personalization, by combining the linguistic information contained in the query with in-session shopping intent;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling Session Context and Taxonomy Paths",
"sec_num": "5.2"
},
{
"text": "\u2022 SessionPath is trained to produce the most accurate path by explicitly making a new prediction at each node, not predicting paths in a one-out-of-many scenario; in other words, Ses-sionPath knows that sport and sport / basketball are related, and that the second path is generated from the first when a given distribution over sport activities is present.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling Session Context and Taxonomy Paths",
"sec_num": "5.2"
},
{
"text": "To represent the current session in a dense architecture, we first train a skip-gram prod2vec model over user data for the entire website, mapping product to 50-dimensional vectors (Mikolov et al., 2013a; Grbovic et al., 2015) . At training and serving time SessionPath retrieves the embeddings of the products in the target session, and use average pooling to calculate the context vector from the sequence of embeddings, as shown by Covington et al. (2016) ; Yu et al. (2020) . To represent the candidate query, an encoding of linguistic behavior that generalizes to unseen queries is needed. We tested different strategies:",
"cite_spans": [
{
"start": 181,
"end": 204,
"text": "(Mikolov et al., 2013a;",
"ref_id": "BIBREF25"
},
{
"start": 205,
"end": 226,
"text": "Grbovic et al., 2015)",
"ref_id": "BIBREF14"
},
{
"start": 435,
"end": 458,
"text": "Covington et al. (2016)",
"ref_id": "BIBREF8"
},
{
"start": 461,
"end": 477,
"text": "Yu et al. (2020)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling Session Context and Taxonomy Paths",
"sec_num": "5.2"
},
{
"text": "\u2022 word2vec: we train a skip-gram model from Mikolov et al. (2013b) over product short descriptions from the catalog. Since most search queries are less than three words long, we opted for a simple and fast average pooling of the embeddings in the tokenized query;",
"cite_spans": [
{
"start": 44,
"end": 66,
"text": "Mikolov et al. (2013b)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling Session Context and Taxonomy Paths",
"sec_num": "5.2"
},
{
"text": "\u2022 character-based language model: inspired by Skinner (2018), we train a char-based language model (single LSTM layer with hidden dimension 50) on search logs and product descriptions from the target shop; a standard LSTM approach was found ineffective in preliminary tests, so we opted instead for using the \"Balanced pooling\" strategy from Skinner and Kallumadi (2019) , where the dense representation for the query is obtained by taking the last network state and then concatenating it together with average-pooling (Wang et al., 2018a) , max-pooling, and min-pooling;",
"cite_spans": [
{
"start": 342,
"end": 370,
"text": "Skinner and Kallumadi (2019)",
"ref_id": "BIBREF32"
},
{
"start": 519,
"end": 539,
"text": "(Wang et al., 2018a)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling Session Context and Taxonomy Paths",
"sec_num": "5.2"
},
{
"text": "\u2022 pre-trained language model: we map the query to a 768-size vector using BERT (Devlin et al., 2019) (as pre-trained for the target language by Magnini et al. (2006) );",
"cite_spans": [
{
"start": 144,
"end": 165,
"text": "Magnini et al. (2006)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling Session Context and Taxonomy Paths",
"sec_num": "5.2"
},
{
"text": "\u2022 Search2Prod2Vec + unigrams: we propose a \"small-data\" variation to Search2Vec by Grbovic et al. 2016, where queries (on a web search engine) are embedded through events happening before and after the search event.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling Session Context and Taxonomy Paths",
"sec_num": "5.2"
},
{
"text": "Adapting the intuition to product search, we propose to represent queries through the embeddings of products clicked in the search result page; in particular, each query q is the weighted average of the corresponding prod2vec embeddings; it can be argued that the clicking event is analogous to a \"pointing\" signal (Tagliabue and Cohn-Gordon, 2019) , when the meaning of a word (\"shoes\") is understood as a function from the string to a set of objects falling under that concept (e.g. Chierchia and McConnell-Ginet (2000) ).",
"cite_spans": [
{
"start": 315,
"end": 348,
"text": "(Tagliabue and Cohn-Gordon, 2019)",
"ref_id": "BIBREF33"
},
{
"start": 485,
"end": 521,
"text": "Chierchia and McConnell-Ginet (2000)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling Session Context and Taxonomy Paths",
"sec_num": "5.2"
},
{
"text": "In the spirit of compositional semantics (Baroni et al., 2014), we generalize this representation to unseen queries by building a unigrambased language model, so that \"nike shoes\" gets its meaning from the composition (average pooling) of the meaning of nike and shoes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling Session Context and Taxonomy Paths",
"sec_num": "5.2"
},
{
"text": "To generate a path explicitly, we opted for an encoder-decoder architecture. The encoder employs the wide-and-deep approach popularized by Cheng et al. (2016) , and concatenates textual and non-textual feature to obtain a wide representation of the current context, which is passed through a dense layer to represent the final encoded state. The decoder is a word-based language model (Zoph and Le, 2016) which produces a sequence of nodes (e.g.",
"cite_spans": [
{
"start": 139,
"end": 158,
"text": "Cheng et al. (2016)",
"ref_id": "BIBREF6"
},
{
"start": 385,
"end": 404,
"text": "(Zoph and Le, 2016)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling Session Context and Taxonomy Paths",
"sec_num": "5.2"
},
{
"text": "Queries ( sport, basketball, etc.) conditioned on the representation created by the encoder; more specifically, the architecture of the decoder consists of a single LSTM with 128 cells, a fully-connected layer and a final layer with softmax output activation. The output dimension corresponds to the total number of distinct nodes found in all the paths of the training data, including two additional tokens to encode the start-of-sequence and end-of-sequence. For training, the decoder uses the encoded information to fill its initial cell states; at each timestep, we use teacher forcing to pass the target character, offset by one position, as the next input character to the decoder (Williams and Zipser, 1989) . Empirically, we found that robust parameters for the deep learning methods are a learning rate of 0.001, time decay of 0.00001, early stopping with patience = 20, and mini-batch of size 128; furthermore, the Adam optimizer with cross-entropy loss is used for all networks, with training up to 300 epochs. Once trained, the model can generate a path given an encoded session representation and a start-of-sequence token: after the first step, the decoder uses autoregression sequence generation (Bahdanau et al., 2015) to predict the next output token.",
"cite_spans": [
{
"start": 687,
"end": 714,
"text": "(Williams and Zipser, 1989)",
"ref_id": "BIBREF37"
},
{
"start": 1211,
"end": 1234,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Shop",
"sec_num": null
},
{
"text": "We leverage behavioral and search data from two partnering shops in Company's network: Shop 1 and Shop 2 have uniform data ingestion, making it easy to compare how well models generalize; they are mid-size shops, with annual revenues between 20 and 100 million dollars. Shop 1 and Shop 2 differ however in many respects: they are in different verticals (apparel vs home improvement), they have a different catalog structure (603 paths organized in 2-to-4 levels for each product vs 985 paths in 3 levels for all products), and different traffic (top 200k vs top 15k in the Alexa Ranking). Descriptive statistics for the training dataset can be found in ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "6"
},
{
"text": "We perform offline experiments using search logs for Shop 1 and Shop 2: for each search event in the dataset, we use products seen before the query (if any) to build a session vector as explained in Section 5.2; the path of the products clicked after the query is used as the target variable for the model under examination.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7"
},
{
"text": "We benchmark CM and SessionPath from Section 5, plus a multi-layer perceptron (MLP) to investigate the performance of an intermediate model: while not as straightforward as CM, MLP is considerably easier to train and serve than Session-Path and it may therefore be a compelling architectural choice for many shops (see Appendix A for practical engineering details); MLP concatenates the session vector with the BERT encoding of the candidate query, and produces a distribution over all possible full-length paths (one-out-of-many classification, where the target class comprises all the paths at the maximum depth for the catalog at hand). Table 2 shows accuracy scores for three different depth levels in the predicted path: SP+BERT is SessionPath using BERT to encode linguistic behavior, SP+W2V is using word2vec, SP+SV is using Search2Prod2Vec and SP+LSTM is using the language model. Every SessionPath variant outperforms the count-based and neural baselines, with Search2Prod2Vec providing up to 150% increase over CM and 67% over MLP. CM score is penalized not only by the inability to generalize to unseen queries: even when considering previously seen queries in the test set, SP+SV's accuracy is significantly higher (0.58 vs 0.27 at D = last), showing that neural methods are more effective in capturing the underlying dynamics. Linguistic representations learned directly over the target shop outperform bigger models pre-trained on generic text sources, highlighting some differences between general-purpose embeddings and shop-specific ones, and suggesting that off-theshelf NLP models may not be readily applied to short, keyword-based queries. While fairly accurate, SP+W2V is much slower to train compared to SP+SV and harder to scale across clients, as it relies on having enough content in the catalog to train models that successfully deal with shop lingo. 
On a final language-related note, it is worth stressing that click-based embeddings built for SP+SV show not just better performance over seen queries (which is expected), but better generalization ability in the unseen part as well compared to BERT embeddings (0.82 vs 0.70 at D = 1 for Shop 1, 0.76 vs 0.63 for Shop 2).",
"cite_spans": [],
"ref_spans": [
{
"start": 640,
"end": 647,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Making predictions",
"sec_num": "7.1"
},
{
"text": "In the spirit of ablation studies, we rerun SP+SV and SP+BERT without session vector. Interestingly enough, context seems to play a slightly different role in the two shops and the two models: SP+BERT is greatly helped by contextual information, especially for unseen queries (0.28 vs 0.21 at D = last for Shop 1, 0.40 vs 0.15 for Shop 2), but the effect for SP+SV is smaller (0.50 vs 0.42 for Shop 2); while models on Shop 2 show a bigger drop in performance when removing session information, generally (and unsurprisingly) session-aware models provide better generalization on unseen queries across the board. By comparing SessionPath with a simpler neural model (such as MLP), it is clear that session plays a bigger role in MLP, suggesting that SessionPath architecture is able to better leverage linguistic information across cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Making predictions",
"sec_num": "7.1"
},
{
"text": "Finally, we investigate sample efficiency of chosen methods by training on smaller fractions of the original training dataset: Table 3 reports accuracy of four methods when downsampling the training set for Shop 1 to 1/10 th and 1/4 th of the dataset size. CM's inability to generalize cripples its total score; MLP is confirmed to be simple yet effective, performing significantly better than the count-based baseline; SP+SV is confirmed to be Model (D=last) 1/10 1/4 CM 0.18 0.20 MLP+BERT 0.28 0.30 SP+BERT 0.31 0.34 SP+SV 0.51 0.53 the best performing model, and even with only 1/10 th of samples outperforms all other models from Table 2 : by leveraging the bias encoded in the hierarchical structure of the products, SP+SV allows paths that share nodes (sport, sport / basketball) to also share statistical evidence, resulting in a very efficient learning. Accuracy provides a strong argument on the efficacy of the proposed models in industry, and it is in fact widely employed in the relevant literature: Vandic et al. (2013) employs click-based accuracy for label prediction, while Molino et al. (2018) (in a customer service use cases) uses accuracy at different depths for sequential predictions that are somewhat similar to SessionPath. However, accuracy by itself falls short to tell the whole story on product decisions: working with Coveo's clients, it is clear that not all shops are born equal -some (e.g. mono-brand fashion shops) strongly favor a smaller and cleaner result page; others (e.g. marketplaces) favor bigger, even if noisier, result sets. Section 7.2 presents our contribution in analyzing the business context and proposes viable solutions.",
"cite_spans": [
{
"start": 1012,
"end": 1032,
"text": "Vandic et al. (2013)",
"ref_id": "BIBREF34"
},
{
"start": 1090,
"end": 1110,
"text": "Molino et al. (2018)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 127,
"end": 134,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 634,
"end": 641,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Making predictions",
"sec_num": "7.1"
},
{
"text": "Consider the two possible decisions in the scenario depicted in Figure 5: given \"nike shoes\" as query and basketball shoes as session context, Session-Path prediction is shoes / nike / basketball. According to scenario 1, a decision is made to cut the path at shoes / nike: the resulting set of products contain a mixed set of shoes from the target brand, with no specific sport affinity; in scenario 2, the decision module allows the prediction of a longer path, shoes / nike / basketball: the result page is smaller and only contains basketball shoes of the target brand. Intuitively, a perfect model would choose 2 only when it is \"confident\" of the underlying intention, as expressed through the combination of language and behavioral clues; when the model is less confident, it should stick to 1 to avoid hiding from the shopper's possible interesting products. Figure 5 : Two scenarios for the decision module after SessionPath generates the shoes / nike / basketball path, with input query \"nike shoes\" and Lebron James basketball shoes in the session. In Scenario 1 (blue), we cut the result set after the second node -shoes / nike -resulting in a mix set of shoes; in Scenario 2 (red), we use the full path -shoes / nike / basketball -resulting in only basketball shoes (dotted line products). How can we define what is the optimal choice?",
"cite_spans": [],
"ref_spans": [
{
"start": 64,
"end": 73,
"text": "Figure 5:",
"ref_id": null
},
{
"start": 867,
"end": 875,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tuning the decision module",
"sec_num": "7.2"
},
{
"text": "To quantify how confident the model is at any given node in the predicted path, at each node s_n we output the multinomial distribution d over the next node s_{n+1} 2 and calculate the Gini coefficient of d, g(d):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuning the decision module",
"sec_num": "7.2"
},
{
"text": "g(d) = \\frac{\\sum_{i=1}^{n} \\sum_{j=1}^{n} |x_i - x_j|}{2 n^2 \\bar{x}} \\quad (GI)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuning the decision module",
"sec_num": "7.2"
},
{
"text": "where n is the total number of classes in the distribution d, x_i is the probability of class i and \\bar{x} is the mean probability of the distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuning the decision module",
"sec_num": "7.2"
},
{
"text": "Once GI = g(d) is computed, a decision rule DR(GI) for the decision module in Figure 4 is given by:",
"cite_spans": [],
"ref_spans": [
{
"start": 78,
"end": 86,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tuning the decision module",
"sec_num": "7.2"
},
{
"text": "DR(x) = \\begin{cases} 1 & \\text{if } x \\geq ct \\\\ 0 & \\text{otherwise} \\end{cases}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuning the decision module",
"sec_num": "7.2"
},
{
"text": "where 1 means that the module is confident enough to add the node to the final path that will be shown to the user, while 0 means that path generation is stopped at the current node. ct is our confidence threshold: since different values of ct imply more or less aggressive behavior from the model, it is important to tune ct by taking into account the relevant business constraints.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuning the decision module",
"sec_num": "7.2"
},
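To make the confidence computation concrete, here is a minimal sketch (our own illustration, not the paper's implementation) of the Gini coefficient in equation (GI) and the threshold-based rule DR; the function names, the example distributions and the threshold value are ours.

```python
import numpy as np

def gini(d):
    """Gini coefficient of a discrete distribution d (equation GI):
    sum of absolute differences over all pairs of class probabilities,
    normalized by 2 * n^2 * mean(d)."""
    d = np.asarray(d, dtype=float)
    n = d.size
    pairwise = np.abs(d[:, None] - d[None, :]).sum()
    return pairwise / (2 * n**2 * d.mean())

def decision_rule(d, ct):
    """DR: 1 -> confident enough to extend the path with the next node,
    0 -> stop path generation at the current node."""
    return 1 if gini(d) >= ct else 0

# A peaked distribution (model is confident about the next node)
# vs. a uniform one (maximally uncertain): the Gini coefficient is
# high for the former and exactly 0 for the latter.
peaked = [0.90, 0.05, 0.03, 0.02]
uniform = [0.25, 0.25, 0.25, 0.25]
```

With ct set between these two extremes, the rule extends the path only for the peaked distribution; raising or lowering ct makes the model more conservative or more aggressive, which is exactly the precision/recall trade-off explored in Section 7.2.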
{
"text": "2 Non-existent paths account for less than 0.005% of all the paths in the test set, proving that SessionPath is able to accurately learn transitions between nodes and suggesting that an explicit check at decision time is unnecessary. Of course, if needed, a \"safety check\" may be performed at query time by the search engine, to verify that filtering by the suggested path will result in a non-empty set. Due to the contextual and interactive nature of SessionPath, we turn search logs into a \"simulation\" of the interactions between hypothetical shoppers and our model (Kuzi et al., 2019). In particular, for any given search event in the test dataset - comprising the products seen in the session, the actual query issued, all the products returned by the search engine, and the products clicked by the shopper in the result page - and a model prediction (e.g. sport / basketball), we construct two items:",
"cite_spans": [
{
"start": 570,
"end": 589,
"text": "(Kuzi et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tuning the decision module",
"sec_num": "7.2"
},
{
"text": "\u2022 golden truth set: which is the set of the paths corresponding to the items the shopper deemed relevant in that context (relevance is therefore assessed as pseudo-feedback from clicks);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuning the decision module",
"sec_num": "7.2"
},
{
"text": "\u2022 filtered result set: which is the set of products returned by the engine, filtering out those not matching the prediction by the model (i.e. simulating the engine is actually working with the categories suggested by SessionPath).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuning the decision module",
"sec_num": "7.2"
},
{
"text": "With the golden truth set, the filtered result set and the original result page, we can calculate precision and recall at different values of ct (please refer to Appendix B for a fully worked-out example). Table 4 reports the chosen metrics calculated for Shop 1 at different values of ct; the trade-off between the two dimensions makes all the points Pareto-optimal: there is no way to increase performance in one dimension without hurting the other. Going from the first configuration (ct = 0.996) to the second (ct = 0.993) causes a big jump in the metric space, with the model losing some recall to gain considerably in precision. To get a sense of how the model performs in practice, Figure 6 shows three sessions for the query \"nike shoes\": when the session context is empty (session 1), the model defaults to the broadest category (sneakers); when the session is running-based or basketball-based, the model adjusts its aggressiveness depending on the threshold we set. It is interesting to note that while the prediction for 2 at ct = 0.97 is wrong at the last node (the product is a7, not a3), the model is still making a reasonable guess (e.g. by guessing sport and brand correctly).",
"cite_spans": [],
"ref_spans": [
{
"start": 205,
"end": 212,
"text": "Table 4",
"ref_id": "TABREF6"
},
{
"start": 692,
"end": 700,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tuning the decision module",
"sec_num": "7.2"
},
{
"text": "In our experience, the adoption of data-driven models in traditional digital shops is often received with some skepticism and a demand for \"supervision\" by business experts (Baer and Kamalnath, 2017): a common solution is to avoid the use of neural networks in favor of model interpretability. SessionPath's decision-based approach dares to dream a different dream, as the proposed architecture shows that we can retain the accuracy of deep learning and still provide a meaningful interface to business users - here, in the form of a precision/recall space to be explored with an easy-to-understand parameter.",
"cite_spans": [
{
"start": 165,
"end": 191,
"text": "(Baer and Kamalnath, 2017)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tuning the decision module",
"sec_num": "7.2"
},
{
"text": "This research paper introduced SessionPath, a personalized and scalable model that dynamically suggests product paths in type-ahead systems; SessionPath was benchmarked on data from two shops and tested against count-based and neural models, with explicit complexity-accuracy trade-offs. Finally, we proposed a confidence-based decision rule inspired by customer discussions: by abstracting away model behavior into one parameter, we wish to ease the often hard interplay between business requirements and machine behavior; furthermore, by leveraging a hierarchical structure of product concepts, the model produces predictions that are amenable to prima facie human inspection (e.g. Figure 6 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 686,
"end": 694,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "8"
},
{
"text": "While our evaluation shows very encouraging results, the next step will be to A/B test the proposed models on a variety of target clients: Shop 1 and Shop 2 data come from the search logs of a last-generation search engine, which may have skewed model behavior in subtle ways. With more data, it will be possible to extend the current work in some important directions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "8"
},
{
"text": "1. while this work showed that SessionPath is effective, the underlying deep architecture can be improved further: on one hand, by doing more extensive optimization; on the other, by focusing on how to best perform linguistic generalization: transfer learning (between tasks, as proposed by Skinner and Kallumadi (2019) , or across clients, as described in Yu et al. (2020) ) is a powerful tool that could be used to improve performance further; Figure 6 : Sample SessionPath predictions for the candidate query \"nike shoes\", with two thresholds (gray, green) and three sessions, 1, 2, 3 (no product for session 1, a pair of running shoes for 2, a pair of basketball shoes for 3). The model reacts quickly both across sessions (switching to relevant parts of the underlying product catalog) and across threshold values, making more aggressive decisions at the lower value (green).",
"cite_spans": [
{
"start": 290,
"end": 318,
"text": "Skinner and Kallumadi (2019)",
"ref_id": "BIBREF32"
},
{
"start": 356,
"end": 372,
"text": "Yu et al. (2020)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [
{
"start": 446,
"end": 454,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "8"
},
{
"text": "2. the same model can be applied with almost no changes to the search workflow, to provide a principled way to do personalized query scoping. A preliminary A/B test on Shop 1 using the MLP model on a minor catalog facet yielded a small (2%) but statistically significant improvement (p < 0.05) in click-through rate, and we look forward to extending our testing;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "8"
},
{
"text": "3. we could model path depth within the decoder itself, by teaching the model when to stop; as an alternative to learning a decision rule in a supervised setting, we could leverage reinforcement learning and let the system improve through iterations -in particular, the choice of cutting the path for a given query and session vector could be cast in terms of contextual bandits;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "8"
},
{
"text": "4. finally, precision and recall at different depths are just a first step; preliminary tests with balanced accuracy on selected examples show promising results, but we look forward to performing user studies to deepen our understanding of the ideal decision mechanism.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "8"
},
{
"text": "Personalization engines for digital shops are expected to drive an increase in profits of 15% by the end of 2020 (Gillespie et al., 2018) ; facet suggestions help personalize the search experience as early as the type-ahead drop-down window: considering that search users account globally for almost 14% of total revenues (Charlton, 2013) , and that category suggestions may improve click-through rate and reduce cognitive load, SessionPath (and similar models) may play an important role in next-generation online experiences. Figure 7 : High-level functional overview of an industry-standard API for type-ahead suggestions: the query seed and possibly session information about the user are sent by the client to the server, where a retrieval and re-ranking module produces the final top-k suggestions and prepares the response for front-end consumption.",
"cite_spans": [
{
"start": 113,
"end": 137,
"text": "(Gillespie et al., 2018)",
"ref_id": "BIBREF11"
},
{
"start": 331,
"end": 347,
"text": "(Charlton, 2013)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 537,
"end": 545,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "8"
},
{
"text": "As depicted in Figure 8 , category suggestions can be quickly added to any existing infrastructure by treating the current engine as a \"black-box\" and adding path predictions at run-time for the first (or the first k, since requests to the model at that point can be batched with little overhead) query candidate(s). In this scenario, the decoupling between retrieval and suggestions is absolute, which may be a good idea when the stacks are very different (say, traditional retrieval and neural suggestions), but less extreme solutions are obviously possible. The crucial engineering point is that path prediction (using any of the methods from Section 7) can be added and tested quickly, with few conceptual and engineering dependencies: the more traditional the existing stack, the more an incremental approach is recommended: count-based first, since predictions can be served simply from an in-memory map; MLP second, since predictions require a small neural network, but one fast enough to only require CPU at query time; and finally the full SessionPath, which requires dedicated hardware considerations to be effective within the time constraints of the type-ahead use case. As a practical suggestion, we also found it quite effective, when using simpler models (e.g. MLP), to first test at a given depth: for example, you start by only classifying the most likely nodes in the template sport / ?, and then incrementally increase the target classes by adding more diverse paths.",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 23,
"text": "Figure 8",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "8"
},
{
"text": "Adding a lightweight wrapper around the original bare-bone endpoint allows for other improvements as well: for example, considering the typical power-law distribution of query logs, a caching layer can be used to avoid a full retrieve-and-rerank iteration for frequent queries; obviously, this and similar features are independent from SessionPath itself. Figure 8 : Compared to Figure 7 , a simple wrapper around the existing module sends the same session information and the top-n suggestions to SessionPath, for dynamic path prediction. The final response is then obtained by simply augmenting the existing response containing query candidates with category predictions. Figure 9 : A sample row in the test set, displaying search results (7 products in 4 paths) for the query \"shoes\" and a session containing a pair of LeBron James basketball shoes. In this example, the shopper clicked on products P1 and P4.",
"cite_spans": [],
"ref_spans": [
{
"start": 341,
"end": 350,
"text": "(Figure 7",
"ref_id": null
},
{
"start": 638,
"end": 646,
"text": "Figure 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "8"
},
{
"text": "For the sake of reproducibility, we present a worked-out example of the metrics calculations for offline testing of the decision module (Section 7.2). Figure 9 depicts a historical interaction from the search logs: a session containing a product, a query issued by the user and the search result page (\"serp\"), containing seven items belonging to the following paths:",
"cite_spans": [],
"ref_spans": [
{
"start": 147,
"end": 155,
"text": "Figure 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "B Metrics Calculation: a Worked-Out Example",
"sec_num": null
},
{
"text": "P1 = sport / basketball / lebron; P2 = sport / basketball / lebron; P3 = sport / basketball / lebron; P4 = sport / running / sneakers; P5 = sport / basketball / jerseys; P6 = sport / basketball / curry; P7 = sport / running / sneakers. Click-through data (i.e. products in the serp clicked by the user) indicates that P1 and P4 are relevant, and so the associated paths are ground truths (sport / basketball / lebron and sport / running / sneakers). We now present the full calculations in three scenarios, corresponding to three levels of depth in the predicted path.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Metrics Calculation: a Worked-Out Example",
"sec_num": null
},
{
"text": "Scenario 1 (general): prediction is sport. In this case, the result set would be intact, so: True Positives (TP) are P1, P2, P3, P4, P7; False Positives (FP) are P5, P6; False Negatives (FN) are \u2205. Precision is TP / (TP + FP) = 5/(5 + 2) = 0.71; Recall is TP / (TP + FN) = 5/(5 + 0) = 1.0 (with no cut, all truths are retrieved, so 1.0 is the expected result).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Metrics Calculation: a Worked-Out Example",
"sec_num": null
},
{
"text": "Scenario 2 (intermediate): prediction is sport/basketball.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Metrics Calculation: a Worked-Out Example",
"sec_num": null
},
{
"text": "In this case, filtering the result set according to the decision made by the model would give P1, P2, P3, P5, P6 as the final set. So: TP = P1, P2, P3; FP = P5, P6; FN = P4, P7; Precision = 3/(3 + 2) = 0.6, Recall = 3/(3 + 2) = 0.6. Scenario 3 (specific): prediction is sport / basketball / lebron. In this case, filtering the result set according to the decision made by the model would give P1, P2, P3 as the final set. So: TP = P1, P2, P3; FP = \u2205; FN = P4, P7; Precision = 3/(3 + 0) = 1.0, Recall = 3/(3 + 2) = 0.6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Metrics Calculation: a Worked-Out Example",
"sec_num": null
},
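The three scenarios above can be reproduced with a short script. This is our own sketch (the helper name, path encoding and prefix-filtering shortcut are ours, not the paper's code), treating a path prediction as a prefix filter over the result page of Figure 9 and scoring it against the click-based pseudo-feedback.

```python
def precision_recall(serp_paths, truth_paths, prediction):
    """Filter the result page by the predicted path prefix, then compute
    precision/recall against the paths deemed relevant via clicks."""
    filtered = [p for p in serp_paths if p.startswith(prediction)]
    tp = sum(1 for p in filtered if p in truth_paths)
    fp = len(filtered) - tp
    relevant = sum(1 for p in serp_paths if p in truth_paths)
    fn = relevant - tp  # relevant products hidden by the filter
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return round(precision, 2), round(recall, 2)

# Result page of Figure 9: the paths of products P1..P7
serp = [
    "sport/basketball/lebron",   # P1 (clicked)
    "sport/basketball/lebron",   # P2
    "sport/basketball/lebron",   # P3
    "sport/running/sneakers",    # P4 (clicked)
    "sport/basketball/jerseys",  # P5
    "sport/basketball/curry",    # P6
    "sport/running/sneakers",    # P7
]
truths = {"sport/basketball/lebron", "sport/running/sneakers"}

print(precision_recall(serp, truths, "sport"))                    # (0.71, 1.0)
print(precision_recall(serp, truths, "sport/basketball"))         # (0.6, 0.6)
print(precision_recall(serp, truths, "sport/basketball/lebron"))  # (1.0, 0.6)
```

The three calls reproduce Scenarios 1-3 exactly, and make the trade-off explicit in code: a deeper prediction shrinks the filtered set, raising precision while lowering recall.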
{
"text": "The full calculations show very clearly the natural trade-off discussed at length in Section 7.2: the deeper the path, the more precise the results, but also the higher the chance of hiding valuable products from the shopper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Metrics Calculation: a Worked-Out Example",
"sec_num": null
},
{
"text": "The \"nintendo switch\" query for a gaming console returns 50k results on Amazon.com at the time of drafting this footnote; that is more products than the entire catalog of a mid-size shop such as Shop 1 below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Thanks to (in order of appearance) Andrea Polonioli, Federico Bianchi, Ciro Greco, Piero Molino for helpful comments to previous versions of this article. We also wish to thank our anonymous reviewers, who greatly helped in improving the clarity of our exposition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": " Figure 7 represents a functional overview of a typeahead service: when User X on a shop starts typing a query after browsing some products, the query seed and the session context are sent to the server. An existing engine -traditional or neural -will then take the query and the context and produce a list of top-k query candidates, ranked by relevance, which are then sent back to the client to populate the dropdown window of the search bar.",
"cite_spans": [],
"ref_spans": [
{
"start": 1,
"end": 9,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Architectural Considerations",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Controlling popularity bias in learning-to-rank recommendation",
"authors": [
{
"first": "Himan",
"middle": [],
"last": "Abdollahpouri",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Burke",
"suffix": ""
},
{
"first": "Bamshad",
"middle": [],
"last": "Mobasher",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3109859.3109912"
]
},
"num": null,
"urls": [],
"raw_text": "Himan Abdollahpouri, Robin Burke, and Bamshad Mobasher. 2017. Controlling popularity bias in learning-to-rank recommendation.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The Long Tail: Why the Future of Business Is Selling Less of More",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Anderson",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Anderson. 2006. The Long Tail: Why the Future of Business Is Selling Less of More. Hyperion.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Controlling machine-learning algorithms and their biases",
"authors": [
{
"first": "Tobias",
"middle": [],
"last": "Baer",
"suffix": ""
},
{
"first": "Vishnu",
"middle": [],
"last": "Kamalnath",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tobias Baer and Vishnu Kamalnath. 2017. Controlling machine-learning algorithms and their biases.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Frege in space: A program of compositional distributional semantics",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Raffaela",
"middle": [],
"last": "Bernardi",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Zamparelli",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Baroni, Raffaela Bernardi, and Roberto Zampar- elli. 2014. Frege in space: A program of composi- tional distributional semantics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Is site search less important for niche retailers?",
"authors": [
{
"first": "Graham",
"middle": [],
"last": "Charlton",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graham Charlton. 2013. Is site search less important for niche retailers?",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Wide & deep learning for recommender systems",
"authors": [
{
"first": "",
"middle": [],
"last": "Heng-Tze",
"suffix": ""
},
{
"first": "Levent",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Jeremiah",
"middle": [],
"last": "Koc",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Harmsen",
"suffix": ""
},
{
"first": "Tushar",
"middle": [],
"last": "Shaked",
"suffix": ""
},
{
"first": "Hrishi",
"middle": [],
"last": "Chandra",
"suffix": ""
},
{
"first": "Glen",
"middle": [],
"last": "Aradhye",
"suffix": ""
},
{
"first": "Gregory",
"middle": [
"S"
],
"last": "Anderson",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Mustafa",
"middle": [],
"last": "Chai",
"suffix": ""
},
{
"first": "Rohan",
"middle": [],
"last": "Ispir",
"suffix": ""
},
{
"first": "Zakaria",
"middle": [],
"last": "Anil",
"suffix": ""
},
{
"first": "Lichan",
"middle": [],
"last": "Haque",
"suffix": ""
},
{
"first": "Vihan",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "Xiaobing",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Hemal",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Shah",
"suffix": ""
}
],
"year": 2016,
"venue": "DLRS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen An- derson, Gregory S. Corrado, Wei Chai, Mustafa Ispir, Rohan Anil, Zakaria Haque, Lichan Hong, Vihan Jain, Xiaobing Liu, and Hemal Shah. 2016. Wide & deep learning for recommender systems. In DLRS 2016.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Meaning and Grammar (2nd Ed.): An Introduction to Semantics",
"authors": [
{
"first": "Gennaro",
"middle": [],
"last": "Chierchia",
"suffix": ""
},
{
"first": "Sally",
"middle": [],
"last": "Mcconnell-Ginet",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gennaro Chierchia and Sally McConnell-Ginet. 2000. Meaning and Grammar (2nd Ed.): An Introduction to Semantics. MIT Press, Cambridge, MA, USA.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Deep neural networks for youtube recommendations",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Covington",
"suffix": ""
},
{
"first": "Jay",
"middle": [],
"last": "Adams",
"suffix": ""
},
{
"first": "Emre",
"middle": [],
"last": "Sargin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th ACM Conference on Recommender Systems, RecSys '16",
"volume": "",
"issue": "",
"pages": "191--198",
"other_ids": {
"DOI": [
"10.1145/2959100.2959190"
]
},
"num": null,
"urls": [],
"raw_text": "Paul Covington, Jay Adams, and Emre Sargin. 2016. Deep neural networks for youtube recommendations. In Proceedings of the 10th ACM Conference on Recommender Systems, RecSys '16, page 191-198, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Query expansion with locally-trained word embeddings",
"authors": [
{
"first": "Fernando",
"middle": [],
"last": "Diaz",
"suffix": ""
},
{
"first": "Mitra",
"middle": [],
"last": "Bhaskar",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Craswell",
"suffix": ""
}
],
"year": 2016,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fernando Diaz, Bhaskar Mitra, and Nick Craswell. 2016. Query expansion with locally-trained word embeddings. ArXiv, abs/1605.07891.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Magic quadrant for digital commerce",
"authors": [
{
"first": "Penny",
"middle": [],
"last": "Gillespie",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Daigler",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lowndes",
"suffix": ""
},
{
"first": "Christina",
"middle": [],
"last": "Klock",
"suffix": ""
},
{
"first": "Yanna",
"middle": [],
"last": "Dharmasthira",
"suffix": ""
},
{
"first": "Sandy",
"middle": [],
"last": "Shen",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Penny Gillespie, Jason Daigler, Mike Lowndes, Christina Klock, Yanna Dharmasthira, and Sandy Shen. 2018. Magic quadrant for digital commerce. Technical report, Gartner.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Eye-tracking analysis of user behavior in www search",
"authors": [
{
"first": "Laura",
"middle": [
"A"
],
"last": "Granka",
"suffix": ""
},
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
},
{
"first": "Geri",
"middle": [],
"last": "Gay",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '04",
"volume": "",
"issue": "",
"pages": "478--479",
"other_ids": {
"DOI": [
"10.1145/1008992.1009079"
]
},
"num": null,
"urls": [],
"raw_text": "Laura A. Granka, Thorsten Joachims, and Geri Gay. 2004. Eye-tracking analysis of user behavior in www search. In Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '04, page 478-479, New York, NY, USA. Associa- tion for Computing Machinery.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Scalable semantic matching of queries to ads in sponsored search advertising",
"authors": [
{
"first": "Mihajlo",
"middle": [],
"last": "Grbovic",
"suffix": ""
},
{
"first": "Nemanja",
"middle": [],
"last": "Djuric",
"suffix": ""
},
{
"first": "Vladan",
"middle": [],
"last": "Radosavljevic",
"suffix": ""
},
{
"first": "Fabrizio",
"middle": [],
"last": "Silvestri",
"suffix": ""
},
{
"first": "Ricardo",
"middle": [],
"last": "Baeza-Yates",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Ordentlich",
"suffix": ""
},
{
"first": "Lee",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Gavin",
"middle": [],
"last": "Owens",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '16",
"volume": "",
"issue": "",
"pages": "375--384",
"other_ids": {
"DOI": [
"10.1145/2911451.2911538"
]
},
"num": null,
"urls": [],
"raw_text": "Mihajlo Grbovic, Nemanja Djuric, Vladan Radosavl- jevic, Fabrizio Silvestri, Ricardo Baeza-Yates, An- drew Feng, Erik Ordentlich, Lee Yang, and Gavin Owens. 2016. Scalable semantic matching of queries to ads in sponsored search advertising. In Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Infor- mation Retrieval, SIGIR '16, page 375-384, New York, NY, USA. Association for Computing Machin- ery.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "E-commerce in your inbox: Product recommendations at scale",
"authors": [
{
"first": "Mihajlo",
"middle": [],
"last": "Grbovic",
"suffix": ""
},
{
"first": "Vladan",
"middle": [],
"last": "Radosavljevic",
"suffix": ""
},
{
"first": "Nemanja",
"middle": [],
"last": "Djuric",
"suffix": ""
},
{
"first": "Narayan",
"middle": [],
"last": "Bhamidipati",
"suffix": ""
},
{
"first": "Jaikit",
"middle": [],
"last": "Savla",
"suffix": ""
},
{
"first": "Varun",
"middle": [],
"last": "Bhagwan",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Sharp",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of KDD '15",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/2783258.2788627"
]
},
"num": null,
"urls": [],
"raw_text": "Mihajlo Grbovic, Vladan Radosavljevic, Nemanja Djuric, Narayan Bhamidipati, Jaikit Savla, Varun Bhagwan, and Doug Sharp. 2015. E-commerce in your inbox: Product recommendations at scale. In Proceedings of KDD '15.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "An eye tracking study of the effect of target rank on web search",
"authors": [
{
"first": "Zhiwei",
"middle": [],
"last": "Guan",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Cutrell",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '07",
"volume": "",
"issue": "",
"pages": "417--420",
"other_ids": {
"DOI": [
"10.1145/1240624.1240691"
]
},
"num": null,
"urls": [],
"raw_text": "Zhiwei Guan and Edward Cutrell. 2007. An eye tracking study of the effect of target rank on web search. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '07, page 417-420, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A deep relevance matching model for ad-hoc retrieval",
"authors": [
{
"first": "Jiafeng",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Yixing",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Qingyao",
"middle": [],
"last": "Ai",
"suffix": ""
},
{
"first": "W",
"middle": [
"Bruce"
],
"last": "Croft",
"suffix": ""
}
],
"year": 2016,
"venue": "CIKM '16",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiafeng Guo, Yixing Fan, Qingyao Ai, and W. Bruce Croft. 2016. A deep relevance matching model for ad-hoc retrieval. In CIKM '16.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "When choice is demotivating: Can one desire too much of a good thing",
"authors": [
{
"first": "Sheena",
"middle": [],
"last": "Iyengar",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Lepper",
"suffix": ""
}
],
"year": 2001,
"venue": "Journal of personality and social psychology",
"volume": "79",
"issue": "",
"pages": "995--1006",
"other_ids": {
"DOI": [
"10.1037/0022-3514.79.6.995"
]
},
"num": null,
"urls": [],
"raw_text": "Sheena Iyengar and Mark Lepper. 2001. When choice is demotivating: Can one desire too much of a good thing? Journal of personality and social psychology, 79:995-1006.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Personalized query autocompletion through a lightweight representation of the user context",
"authors": [
{
"first": "Manojkumar Rangasamy",
"middle": [],
"last": "Kannadasan",
"suffix": ""
},
{
"first": "Grigor",
"middle": [],
"last": "Aslanyan",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manojkumar Rangasamy Kannadasan and Grigor Aslanyan. 2019. Personalized query autocompletion through a lightweight representation of the user context. CoRR, abs/1905.01386.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Help me search: Leveraging user-system collaboration for query construction to improve accuracy for difficult queries",
"authors": [
{
"first": "Saar",
"middle": [],
"last": "Kuzi",
"suffix": ""
},
{
"first": "Abhishek",
"middle": [],
"last": "Narwekar",
"suffix": ""
},
{
"first": "Anusri",
"middle": [],
"last": "Pampari",
"suffix": ""
},
{
"first": "Chengxiang",
"middle": [],
"last": "Zhai",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR'19",
"volume": "",
"issue": "",
"pages": "1221--1224",
"other_ids": {
"DOI": [
"10.1145/3331184.3331362"
]
},
"num": null,
"urls": [],
"raw_text": "Saar Kuzi, Abhishek Narwekar, Anusri Pampari, and ChengXiang Zhai. 2019. Help me search: Leveraging user-system collaboration for query construction to improve accuracy for difficult queries. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR'19, page 1221-1224, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Approximately optimal facet value selection",
"authors": [
{
"first": "Sonya",
"middle": [],
"last": "Liberman",
"suffix": ""
},
{
"first": "Ronny",
"middle": [],
"last": "Lempel",
"suffix": ""
}
],
"year": 2014,
"venue": "Sci. Comput. Program",
"volume": "94",
"issue": "P1",
"pages": "18--31",
"other_ids": {
"DOI": [
"10.1016/j.scico.2013.07.019"
]
},
"num": null,
"urls": [],
"raw_text": "Sonya Liberman and Ronny Lempel. 2014. Approximately optimal facet value selection. Sci. Comput. Program., 94(P1):18-31.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Ecommerce product query classification using implicit user's feedback from clicks",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Datta",
"suffix": ""
},
{
"first": "G",
"middle": [
"D"
],
"last": "Fabbrizio",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 IEEE International Conference on Big Data (Big Data)",
"volume": "",
"issue": "",
"pages": "1955--1959",
"other_ids": {
"DOI": [
"10.1109/BigData.2018.8622008"
]
},
"num": null,
"urls": [],
"raw_text": "Y. Lin, A. Datta, and G. D. Fabbrizio. 2018. E-commerce product query classification using implicit user's feedback from clicks. In 2018 IEEE International Conference on Big Data (Big Data), pages 1955-1959.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Learning to rank for information retrieval",
"authors": [
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2009,
"venue": "Found. Trends Inf. Retr",
"volume": "3",
"issue": "3",
"pages": "225--331",
"other_ids": {
"DOI": [
"10.1561/1500000016"
]
},
"num": null,
"urls": [],
"raw_text": "Tie-Yan Liu. 2009. Learning to rank for information retrieval. Found. Trends Inf. Retr., 3(3):225-331.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Annotazione di contenuti concettuali in un corpus italiano: I-CAB",
"authors": [
{
"first": "Bernardo",
"middle": [],
"last": "Magnini",
"suffix": ""
},
{
"first": "Amedeo",
"middle": [],
"last": "Cappelli",
"suffix": ""
},
{
"first": "Emanuele",
"middle": [],
"last": "Pianta",
"suffix": ""
},
{
"first": "Manuela",
"middle": [],
"last": "Speranza",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Bartalesi Lenzi",
"suffix": ""
},
{
"first": "Rachele",
"middle": [],
"last": "Sprugnoli",
"suffix": ""
},
{
"first": "Lorenza",
"middle": [],
"last": "Romano",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Girardi",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Negri",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc.of SILFI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernardo Magnini, Amedeo Cappelli, Emanuele Pianta, Manuela Speranza, V Bartalesi Lenzi, Rachele Sprugnoli, Lorenza Romano, Christian Girardi, and Matteo Negri. 2006. Annotazione di contenuti concettuali in un corpus italiano: I-CAB. In Proc. of SILFI 2006.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "High accuracy retrieval with multiple nested ranker",
"authors": [
{
"first": "Irina",
"middle": [],
"last": "Matveeva",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Burges",
"suffix": ""
},
{
"first": "Timo",
"middle": [],
"last": "Burkard",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Laucius",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Wong",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '06",
"volume": "",
"issue": "",
"pages": "437--444",
"other_ids": {
"DOI": [
"10.1145/1148170.1148246"
]
},
"num": null,
"urls": [],
"raw_text": "Irina Matveeva, Chris Burges, Timo Burkard, Andy Laucius, and Leon Wong. 2006. High accuracy retrieval with multiple nested ranker. In Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '06, page 437-444, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 26th International Conference on Neural Information Processing Systems",
"volume": "2",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2, NIPS'13, pages 3111-3119, USA. Curran Associates Inc.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "26",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 3111-3119. Curran Associates, Inc.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Neural models for information retrieval",
"authors": [
{
"first": "Bhaskar",
"middle": [],
"last": "Mitra",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Craswell",
"suffix": ""
}
],
"year": 2017,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bhaskar Mitra and Nick Craswell. 2017. Neural models for information retrieval. ArXiv, abs/1705.01509.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Cota: Improving the speed and accuracy of customer support through ranking and deep networks",
"authors": [
{
"first": "Piero",
"middle": [],
"last": "Molino",
"suffix": ""
},
{
"first": "Huaixiu",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Yi-Chia",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piero Molino, Huaixiu Zheng, and Yi-Chia Wang. 2018. Cota: Improving the speed and accuracy of customer support through ranking and deep networks. Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A neural language model for query auto-completion",
"authors": [
{
"first": "Dae Hoon",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Rikio",
"middle": [],
"last": "Chiba",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of SIGIR '17",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3077136.3080758"
]
},
"num": null,
"urls": [],
"raw_text": "Dae Hoon Park and Rikio Chiba. 2017. A neural language model for query auto-completion.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Can There Ever Be Too Many Options? A Meta-Analytic Review of Choice Overload",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Scheibehenne",
"suffix": ""
},
{
"first": "Rainer",
"middle": [],
"last": "Greifeneder",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"M"
],
"last": "Todd",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Consumer Research",
"volume": "37",
"issue": "3",
"pages": "409--425",
"other_ids": {
"DOI": [
"10.1086/651235"
]
},
"num": null,
"urls": [],
"raw_text": "Benjamin Scheibehenne, Rainer Greifeneder, and Peter M. Todd. 2010. Can There Ever Be Too Many Options? A Meta-Analytic Review of Choice Overload. Journal of Consumer Research, 37(3):409-425.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Product categorization with lstms and balanced pooling views",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Skinner",
"suffix": ""
}
],
"year": 2018,
"venue": "eCOM@SIGIR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Skinner. 2018. Product categorization with lstms and balanced pooling views. In eCOM@SIGIR.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Ecommerce query classification using product taxonomy mapping: A transfer learning approach",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Skinner",
"suffix": ""
},
{
"first": "Surya",
"middle": [],
"last": "Kallumadi",
"suffix": ""
}
],
"year": 2019,
"venue": "eCOM@SIGIR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Skinner and Surya Kallumadi. 2019. E-commerce query classification using product taxonomy mapping: A transfer learning approach. In eCOM@SIGIR.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Lexical learning as an online optimal experiment: Building efficient search engines through humanmachine collaboration",
"authors": [
{
"first": "Jacopo",
"middle": [],
"last": "Tagliabue",
"suffix": ""
},
{
"first": "Reuben",
"middle": [],
"last": "Cohn-Gordon",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacopo Tagliabue and Reuben Cohn-Gordon. 2019. Lexical learning as an online optimal experiment: Building efficient search engines through human-machine collaboration. ArXiv, abs/1910.14164.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Facet selection algorithms for web product search",
"authors": [
{
"first": "Damir",
"middle": [],
"last": "Vandic",
"suffix": ""
},
{
"first": "Flavius",
"middle": [],
"last": "Frasincar",
"suffix": ""
},
{
"first": "Uzay",
"middle": [],
"last": "Kaymak",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 22nd ACM International Conference on Information & Knowledge Management, CIKM '13",
"volume": "",
"issue": "",
"pages": "2327--2332",
"other_ids": {
"DOI": [
"10.1145/2505515.2505664"
]
},
"num": null,
"urls": [],
"raw_text": "Damir Vandic, Flavius Frasincar, and Uzay Kaymak. 2013. Facet selection algorithms for web product search. In Proceedings of the 22nd ACM International Conference on Information & Knowledge Management, CIKM '13, page 2327-2332, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "LRMM: Learning to recommend with missing modalities",
"authors": [
{
"first": "Cheng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mathias",
"middle": [],
"last": "Niepert",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3360--3370",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1373"
]
},
"num": null,
"urls": [],
"raw_text": "Cheng Wang, Mathias Niepert, and Hui Li. 2018a. LRMM: Learning to recommend with missing modalities. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3360-3370, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Realtime query completion via deep language models",
"authors": [
{
"first": "Po-Wei",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "eCOM@SIGIR, CEUR Workshop Proceedings",
"volume": "2319",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Po-Wei Wang et al. 2018b. Realtime query completion via deep language models. In eCOM@SIGIR, volume 2319 of CEUR Workshop Proceedings. CEUR-WS.org.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "A learning algorithm for continually running fully recurrent neural networks",
"authors": [
{
"first": "Ronald",
"middle": [
"J"
],
"last": "Williams",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Zipser",
"suffix": ""
}
],
"year": 1989,
"venue": "Neural Computation",
"volume": "1",
"issue": "2",
"pages": "270--280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronald J. Williams and David Zipser. 1989. A learning algorithm for continually running fully recurrent neural networks.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "An image is worth a thousand features: Scalable product representations for insession type-ahead personalization",
"authors": [
{
"first": "Bingqing",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Jacopo",
"middle": [],
"last": "Tagliabue",
"suffix": ""
},
{
"first": "Federico",
"middle": [],
"last": "Bianchi",
"suffix": ""
},
{
"first": "Ciro",
"middle": [],
"last": "Greco",
"suffix": ""
}
],
"year": 2020,
"venue": "Companion Proceedings of the Web Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3366424.3386198"
]
},
"num": null,
"urls": [],
"raw_text": "Bingqing Yu, Jacopo Tagliabue, Federico Bianchi, and Ciro Greco. 2020. An image is worth a thousand features: Scalable product representations for in-session type-ahead personalization. In Companion Proceedings of the Web Conference, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Neural architecture search with reinforcement learning",
"authors": [
{
"first": "Barret",
"middle": [],
"last": "Zoph",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2016,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barret Zoph and Quoc V. Le. 2016. Neural architecture search with reinforcement learning. ArXiv, abs/1611.01578.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Price re-ordering on Amazon.com, showing degrading relevance in the result set when querying for a console - \"nintendo switch\" - and then re-ranking based on price.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF1": {
"text": "A lightweight SessionPath functional integration: starting from a standard flow",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF1": {
"content": "<table/>",
"text": "Descriptive statistics for the dataset.",
"html": null,
"type_str": "table",
"num": null
},
"TABREF3": {
"content": "<table/>",
"text": "Accuracy scores for depth = 1, depth = 2, depth = last, divided by Shop 1 (top) and Shop 2 (bottom). We report the mean over 5 runs, with SD if SD \u2265 0.01.",
"html": null,
"type_str": "table",
"num": null
},
"TABREF4": {
"content": "<table/>",
"text": "Accuracy scores (D=last) when training on portions of the original dataset for Shop 1.",
"html": null,
"type_str": "table",
"num": null
},
"TABREF6": {
"content": "<table/>",
"text": "Precision and recall at different decision thresholds for Shop 1.",
"html": null,
"type_str": "table",
"num": null
}
}
}
}