{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:22:47.797369Z"
},
"title": "Query2Prod2Vec Grounded Word Embeddings for eCommerce",
"authors": [
{
"first": "Federico",
"middle": [],
"last": "Bianchi",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Jacopo",
"middle": [],
"last": "Tagliabue",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Bingqing",
"middle": [],
"last": "Yu",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present Query2Prod2Vec, a model that grounds lexical representations for product search in product embeddings: in our model, meaning is a mapping between words and a latent space of products in a digital shop. We leverage shopping sessions to learn the underlying space and use merchandising annotations to build lexical analogies for evaluation: our experiments show that our model is more accurate than known techniques from the NLP and IR literature. Finally, we stress the importance of data efficiency for product search outside of retail giants, and highlight how Query2Prod2Vec fits with practical constraints faced by most practitioners.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "We present Query2Prod2Vec, a model that grounds lexical representations for product search in product embeddings: in our model, meaning is a mapping between words and a latent space of products in a digital shop. We leverage shopping sessions to learn the underlying space and use merchandising annotations to build lexical analogies for evaluation: our experiments show that our model is more accurate than known techniques from the NLP and IR literature. Finally, we stress the importance of data efficiency for product search outside of retail giants, and highlight how Query2Prod2Vec fits with practical constraints faced by most practitioners.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The eCommerce market reached in recent years an unprecedented scale: in 2020, 3.9 trillion dollars were spent globally in online retail (Cramer-Flood, 2020) . While shoppers make significant use of search functionalities, improving their experience is a never-ending quest (Econsultancy, 2020) , as outside of few retail giants users complain about sub-optimal performances (Baymard Institute, 2020). As the technology behind the industry increases in sophistication, neural architectures are gradually becoming more common (Tsagkias et al., 2020) and, with them, the need for accurate word embeddings for Information Retrieval (IR) and downstream Natural Language Processing (NLP) tasks .",
"cite_spans": [
{
"start": 136,
"end": 156,
"text": "(Cramer-Flood, 2020)",
"ref_id": "BIBREF10"
},
{
"start": 273,
"end": 293,
"text": "(Econsultancy, 2020)",
"ref_id": "BIBREF13"
},
{
"start": 524,
"end": 547,
"text": "(Tsagkias et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unfortunately, the success of standard and contextual embeddings from the NLP literature (Mikolov et al., 2013a; Devlin et al., 2019) could not be immediately translated to the product search scenario, due to some peculiar challenges , such as short text, industry-specific jargon (Bai et al., 2018) , lowresource languages; moreover, specific embedding strategies have often been developed in the context of high-traffic websites (Grbovic et al., 2016) , which limit their applicability in many practical scenarios. In this work, we propose a sample efficient word embedding method for IR in eCommerce, and benchmark it against SOTA models over industry data provided by partnering shops. We summarize our contributions as follows:",
"cite_spans": [
{
"start": 89,
"end": 112,
"text": "(Mikolov et al., 2013a;",
"ref_id": "BIBREF21"
},
{
"start": 113,
"end": 133,
"text": "Devlin et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 281,
"end": 299,
"text": "(Bai et al., 2018)",
"ref_id": "BIBREF0"
},
{
"start": 431,
"end": 453,
"text": "(Grbovic et al., 2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. we propose a method to learn dense representations of words for eCommerce: we name our method Query2Prod2Vec, as the mapping between words and the latent space is mediated by the product domain;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. we evaluate the lexical representations learned by Query2Prod2Vec on an analogy task against SOTA models in NLP and IR; benchmarks are run on two independent shops, differing in traffic, industry and catalog size;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3. we detail a procedure to generate synthetic embeddings, which allow us to tackle the \"cold start\" challenge;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "4. we release our implementations, to help the community with the replication of our findings on other shops 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While perhaps not fundamental to its industry significance, it is important to remark that grounded lexical learning is well aligned with theoretical considerations on meaning in recent (and less recent) literature (Bender and Koller, 2020; Bisk et al., 2020; Montague, 1974) .",
"cite_spans": [
{
"start": 215,
"end": 240,
"text": "(Bender and Koller, 2020;",
"ref_id": "BIBREF3"
},
{
"start": 241,
"end": 259,
"text": "Bisk et al., 2020;",
"ref_id": "BIBREF7"
},
{
"start": 260,
"end": 275,
"text": "Montague, 1974)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In product search, when the shopper issues a query (e.g. \"sneakers\") on a shop, the shop search engine returns a list of K products matching the query intent and possibly some contextual factor -the shopper at that point may either leave the website, or click on n products to further explore the offering and eventually make a purchase. Unlike web search, which is exclusively performed at massive scale, product search is a problem that both big and small retailers have to solve: while word embeddings have revolutionized many areas of NLP (Mikolov et al., 2013a) , word embeddings for product queries are especially challenging to obtain at scale, when considering the huge variety of use cases in the overall eCommerce industry. In particular, based on industry data and first-hand experience with dozens of shops in our network, we identify four constraints for effective word embeddings in eCommerce:",
"cite_spans": [
{
"start": 543,
"end": 566,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Embeddings for Product Search: an Industry Perspective",
"sec_num": "2"
},
{
"text": "1. Short text. Most product queries are very short -60% of all queries in our dataset are one-word queries, > 80% are two words or less; the advantage of contextualized embeddings may therefore be limited, while lexical vectors are fundamental for downstream NLP tasks Bianchi et al., 2020a) . For this reason, the current work specifically addresses the quality of word embeddings 2 .",
"cite_spans": [
{
"start": 269,
"end": 291,
"text": "Bianchi et al., 2020a)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Embeddings for Product Search: an Industry Perspective",
"sec_num": "2"
},
{
"text": "2. Low-resource languages. Even shops that have the majority of their traffic on English domain typically have smaller shops in lowresource languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embeddings for Product Search: an Industry Perspective",
"sec_num": "2"
},
{
"text": "3. Data sparsity. In Shop X below, only 9% of all shopping sessions have a search interaction 3 . Search sparsity, coupled with verticalspecific jargon and the usual long tail of search queries, makes data-hungry models unlikely to succeed for most shops.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embeddings for Product Search: an Industry Perspective",
"sec_num": "2"
},
{
"text": "2 Irrespectively of how the lexical vectors are computed, query embeddings can be easily recovered with the usual techniques (e.g. sum or average word embeddings ): as we mention in the concluding remarks, investigating compositionality is an important part of our overall research agenda.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embeddings for Product Search: an Industry Perspective",
"sec_num": "2"
},
{
"text": "3 This is a common trait verified across industries and sizes: among dozens of shops in our network, 30% is the highest search vs no-search session ratio; Shop Y below is around 29%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embeddings for Product Search: an Industry Perspective",
"sec_num": "2"
},
{
"text": "4. Computational capacity. The majority of the market has the necessity to strike a good trade-off between quality of lexical representations and the cost of training and deploying models, both as hardware expenses and as additional maintenance/training costs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embeddings for Product Search: an Industry Perspective",
"sec_num": "2"
},
{
"text": "The embedding strategy we propose -Query2Prod2Vec -has been designed to allow efficient learning of word embeddings for product queries. Our findings are useful to a wide range of practitioners: large shops launching in new languages/countries, mid-and-small shops transitioning to dense IR architectures and the raising wave of multi-tenant players 4 : as A.I. providers grow by deploying their solutions on multiple shops, \"cold start\" scenarios are an important challenge to the viability of their business model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embeddings for Product Search: an Industry Perspective",
"sec_num": "2"
},
{
"text": "The literature on learning representations for lexical items in NLP is vast and growing fast; as an overview of classical methods, Baroni et al. (2014) benchmarks several count-based and neural techniques (Landauer and Dumais, 1997; Mikolov et al., 2013b) ; recently, context-aware embeddings (Peters et al., 2018; Devlin et al., 2019) have demonstrated state-of-the-art performances in several semantic tasks (Rogers et al., 2020; Nozza et al., 2020) , including document-based search (Nogueira et al., 2020) , in which target entities are long documents, instead of product (Craswell et al., 2020) . To address IR-specific challenges, other embedding strategies have been proposed: Search2Vec (Grbovic et al., 2016) uses interactions with ads and pages as context in the typical context-target setting of skip-gram models (Mikolov et al., 2013b) ; QueryNGram2Vec (Bai et al., 2018) additionally learns embeddings for ngrams of word appearing in queries to better cover the long tail. The idea of using vectors (from images) as an aid to query representation has also been suggested as a heuristic device by , in the context of personalized language models; this work is the first to our knowledge to benchmark embeddings on lexical semantics (not tuned for domain-specific tasks), and investigate sample efficiency for small-data contexts.",
"cite_spans": [
{
"start": 131,
"end": 151,
"text": "Baroni et al. (2014)",
"ref_id": "BIBREF1"
},
{
"start": 205,
"end": 232,
"text": "(Landauer and Dumais, 1997;",
"ref_id": "BIBREF18"
},
{
"start": 233,
"end": 255,
"text": "Mikolov et al., 2013b)",
"ref_id": "BIBREF22"
},
{
"start": 293,
"end": 314,
"text": "(Peters et al., 2018;",
"ref_id": "BIBREF27"
},
{
"start": 315,
"end": 335,
"text": "Devlin et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 410,
"end": 431,
"text": "(Rogers et al., 2020;",
"ref_id": "BIBREF28"
},
{
"start": 432,
"end": 451,
"text": "Nozza et al., 2020)",
"ref_id": "BIBREF26"
},
{
"start": 486,
"end": 509,
"text": "(Nogueira et al., 2020)",
"ref_id": "BIBREF25"
},
{
"start": 576,
"end": 599,
"text": "(Craswell et al., 2020)",
"ref_id": "BIBREF11"
},
{
"start": 695,
"end": 717,
"text": "(Grbovic et al., 2016)",
"ref_id": "BIBREF15"
},
{
"start": 824,
"end": 847,
"text": "(Mikolov et al., 2013b)",
"ref_id": "BIBREF22"
},
{
"start": 865,
"end": 883,
"text": "(Bai et al., 2018)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "In Query2Prod2Vec, the representation for a query q is built through the representation of the objects that q refers to. Consider a typical shopperengine interaction in the context of product search: the shopper issues a query, e.g. \"shoes\", the engine replies with a noisy set of potential referents, e.g. pairs of shoes from the shop inventory, among which the shopper may select relevant items. Hence, this dynamics is reminiscent of a cooperative language game (Lewis, 1969) , in which shoppers give noisy feedback to the search engine on the meaning of the queries. A full specification of Query2Prod2Vec therefore involves a representation of the target domain of reference (i.e. products in a digital shop) and a denotation function.",
"cite_spans": [
{
"start": 465,
"end": 478,
"text": "(Lewis, 1969)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Query2Prod2Vec",
"sec_num": "4"
},
{
"text": "We represent products in a target shop through a prod2vec model built with anonymized shopping sessions containing user-product interactions. Embeddings are trained by solving the same optimization problem as in classical word2vec (Mikolov et al., 2013a) : word2vec becomes prod2vec by substituting words in a sentence with products viewed in a shopping session (Mu et al., 2018) . The utility of prod2vec is independently justified (Grbovic et al., 2015; and, more importantly, the referential approach leverages the abundance of browsing-based interactions, as compared to search-based interactions: by learning product embeddings from abundant behavioral data first, we sidestep a major obstacle to reliable word representation in eCommerce. Hyperparameter optimization follows the guidelines in Bianchi et al. (2020a) , with a total of 26,057 (Shop X) and 84,575 (Shop Y) product embeddings available for downstream processing 5 .",
"cite_spans": [
{
"start": 231,
"end": 254,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF21"
},
{
"start": 362,
"end": 379,
"text": "(Mu et al., 2018)",
"ref_id": "BIBREF24"
},
{
"start": 433,
"end": 455,
"text": "(Grbovic et al., 2015;",
"ref_id": "BIBREF16"
},
{
"start": 799,
"end": 821,
"text": "Bianchi et al. (2020a)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Building a Target Domain",
"sec_num": "4.1"
},
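{
"text": "As an illustrative sketch (not the authors' released code), the prod2vec space described above can be trained with gensim's Word2Vec over anonymized sessions, treating each session as a 'sentence' of product IDs; the hyperparameters mirror the footnoted values (dimension = 50, win_size = 10, iterations = 30, ns_exponent = 0.75), while names such as sessions and load_sessions are hypothetical placeholders.

from gensim.models import Word2Vec

# Each session is a list of product IDs viewed by one anonymous shopper,
# e.g. ['sku_123', 'sku_456', ...]; load_sessions() is a hypothetical loader.
sessions = load_sessions()

prod2vec = Word2Vec(
    sentences=sessions,   # products play the role of words in a sentence
    vector_size=50,       # dimension = 50
    window=10,            # win_size = 10
    epochs=30,            # iterations = 30
    ns_exponent=0.75,     # negative-sampling exponent from the footnote
    min_count=2,          # assumption: prune extremely rare products
    workers=4,
)
product_vectors = prod2vec.wv  # one embedding per product, used in Sections 4.2-4.3
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building a Target Domain",
"sec_num": "4.1"
},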
{
"text": "The fundamental intuition of Query2Prod2Vec is treating clicks after q as a noisy feedback mapping q to a portion of the latent product space. In particular, we compute the embedding for q by averaging the product embeddings of all products clicked after it, using frequency as a weighting factor (i.e. products clicked often contribute more). The model has one free parameter, rank, which controls how many embeddings are used to build the representation for q: if rank=k, only the k most clicked products after q are used. The results in Table 1 are obtained with rank=5, as we leave to future work to investigate the role of this parameter.",
"cite_spans": [],
"ref_spans": [
{
"start": 540,
"end": 547,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Learning Embeddings",
"sec_num": "4.2"
},
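{
"text": "A minimal sketch of the weighted-average computation described above, assuming click_counts maps each query to per-product click counts mined from search logs and product_vectors is the prod2vec space of Section 4.1; names and signatures are placeholders, not the authors' implementation.

import numpy as np

def query_embedding(query, click_counts, product_vectors, rank=5):
    # click_counts[query] is a dict {product_id: number of clicks after the query}
    clicks = click_counts.get(query, {})
    # keep only the `rank` most clicked products (rank=5 in Table 1)
    top = sorted(clicks.items(), key=lambda kv: kv[1], reverse=True)[:rank]
    if not top:
        return None  # query never observed (see Section 4.3 for the synthetic fallback)
    vectors = np.array([product_vectors[pid] for pid, _ in top])
    weights = np.array([count for _, count in top], dtype=float)
    # frequency-weighted average: products clicked often contribute more
    return (vectors * weights[:, None]).sum(axis=0) / weights.sum()
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Embeddings",
"sec_num": "4.2"
},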
{
"text": "The lack of large-scale search logs in the case of new deployments is a severe issue for successful training. The referential nature of Query2Prod2Vec provides a fundamental competitive advantage over models building embeddings from past linguistic behavior only, as synthetic embeddings can be generated as long as cheap session data is available to obtain an initial prod2vec model. As detailed in the ensuing section, the process happens in two stages, event generation and embeddings creation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Embeddings",
"sec_num": "4.2"
},
{
"text": "The procedure to create synthetic embeddings is detailed in Algorithm 1: it takes as input a list of words, a pre-defined number of sampling iterations, a popularity distribution over products 6 , and it returns a list of synthetic search events, that is, a mapping between words and lists of products \"clicked\". Simulating the search event can be achieved through the existing search engine, as, from a practical standpoint, some IR system must already be in place given the use case under consideration. To avoid over-relying on the quality of IR and prove the robustness of the method, all the simulations below are not performed with the actual production API, but with a custom-built inverted index over product meta-data, with a simple TF-IDF weighting and Boolean search.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Creating Synthetic Embeddings",
"sec_num": "4.3"
},
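{
"text": "For concreteness, one way such a toy retrieval component could look is sketched below: a minimal inverted index over product metadata with TF-IDF weighting and Boolean AND retrieval. The actual index used in the experiments is not reproduced here, so every detail (tokenization, scoring) is an assumption for illustration.

import math
from collections import defaultdict

class TfIdfIndex:
    def __init__(self, products):
        # products: {product_id: 'title and description text'}
        self.postings = defaultdict(dict)  # term -> {product_id: term frequency}
        self.n_docs = len(products)
        for pid, text in products.items():
            for term in text.lower().split():
                self.postings[term][pid] = self.postings[term].get(pid, 0) + 1

    def search(self, query):
        terms = [t for t in query.lower().split() if t in self.postings]
        if not terms:
            return []
        # Boolean AND over the query terms present in the index
        candidates = set.intersection(*(set(self.postings[t]) for t in terms))
        scores = {}
        for t in terms:
            idf = math.log(self.n_docs / (1 + len(self.postings[t])))
            for pid in candidates:
                scores[pid] = scores.get(pid, 0.0) + self.postings[t].get(pid, 0) * idf
        # candidates ranked by summed TF-IDF weight of the query terms
        return sorted(scores, key=scores.get, reverse=True)
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Creating Synthetic Embeddings",
"sec_num": "4.3"
},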
{
"text": "For the second stage, we can treat the synthetic click events produced by Algorithm 1 as a dropin replacement for user-generated events -that is, for any query q, we calculate an embedding by averaging the product embeddings of the relevant products, weighted by frequency 7 . Putting the two stages together, Query2Prod2Vec can not only produce reliable query embeddings based on historical data, but also learn approximate embeddings for a large vocabulary before being exposed Algorithm 1: Generation of synthetic click events.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Creating Synthetic Embeddings",
"sec_num": "4.3"
},
{
"text": "Data: a list of words W , a pre-defined number N of simulations per word, a distribution D over products. Result: A dataset of synthetic clicked events: E E \u2190 empty mapping; foreach word w in W do product_list \u2190 Search(w); for i = 1 to N do p \u2190 Sample (product_list, D); append the entry (w, p) to E; end end return E to any search interaction: in Section 7 we report the performance of Query2Prod2Vec when using only synthetic embeddings 8 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Creating Synthetic Embeddings",
"sec_num": "4.3"
},
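{
"text": "A Python rendering of Algorithm 1 is sketched below, under the assumption that search is any retrieval function over the catalog (e.g. the toy TF-IDF index above, or a production engine) and popularity is the product popularity distribution D; all names are ours and illustrative only. The experiments use N = 500 simulated search events per query.

import random

def generate_synthetic_clicks(words, n_simulations, popularity, search):
    # Returns a mapping word -> list of products 'clicked' in simulated search events.
    events = {w: [] for w in words}
    for w in words:
        product_list = search(w)
        if not product_list:
            continue  # no candidate referents for this word
        weights = [popularity.get(p, 1e-9) for p in product_list]
        for _ in range(n_simulations):
            # sample one 'clicked' product according to the popularity distribution D
            p = random.choices(product_list, weights=weights, k=1)[0]
            events[w].append(p)
    return events

# The resulting events are a drop-in replacement for real search logs: the same
# frequency-weighted average of Section 4.2 turns them into synthetic query embeddings.
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Creating Synthetic Embeddings",
"sec_num": "4.3"
},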
{
"text": "Following best practices in the multi-tenant literature , we benchmark all models on different shops to test their robustness. In particular, we obtained catalog data, search logs and anonymized shopping sessions from two partnering shops, Shop X and Shop Y: Shop X is a sport apparel shop with Alexa ranking of approximately 200k, representing a prototypical shop in the middle of the long tail; Shop Y is a home improvement shop with Alexa ranking of approximately 10k, representing an intermediate size between Shop X and public companies in the space. Linguistic data is in Italian for both shops, and training is done on random sessions from the period June-October 2019: after sampling, removal of bot-like sessions and pre-processing, we are left with 722,479 sessions for Shop X, and 1,986,452 sessions for Shop Y.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "5.1"
},
{
"text": "We leverage the unique opportunity to join catalog data, search logs and shopping sessions to extensively benchmark Query2Prod2Vec against a variety of methods from NLP and IR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5.2"
},
{
"text": "\u2022 Word2Vec and FastText. We train a CBOW (Mikolov et al., 2013a ) and a FastText model (Bojanowski et al., 2017) over product descriptions in the catalog;",
"cite_spans": [
{
"start": 41,
"end": 63,
"text": "(Mikolov et al., 2013a",
"ref_id": "BIBREF21"
},
{
"start": 87,
"end": 112,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5.2"
},
{
"text": "\u2022 UmBERTo. We use RoBERTa trained on Italian data -UmBERTo 9 . The s embedding of the last layer of the architecture is the query embedding;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5.2"
},
{
"text": "\u2022 Search2Vec. We implement the skip-gram model from Grbovic et al. (2016) , by feeding the model with sessions composed of search queries and user clicks. Following the original model, we also train a time-sensitive variant, in which time between actions is used to weight query-click pairs differently;",
"cite_spans": [
{
"start": 52,
"end": 73,
"text": "Grbovic et al. (2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5.2"
},
{
"text": "\u2022 Query2Vec. We implement a different context-target model, inspired by Egg (2019): embeddings are learned by the model when it tries to predict a (purchased or clicked) item starting from a query;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5.2"
},
{
"text": "\u2022 QueryNGram2Vec. We implement the model from Bai et al. (2018) . Besides learning representations through a skip-gram model as in Grbovic et al. (2016) , the model learns the embeddings of unigrams to help cover the long tail for which no direct embedding is available.",
"cite_spans": [
{
"start": 46,
"end": 63,
"text": "Bai et al. (2018)",
"ref_id": "BIBREF0"
},
{
"start": 131,
"end": 152,
"text": "Grbovic et al. (2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5.2"
},
{
"text": "To guarantee a fair comparison, all models are trained on the same sessions. For all baselines, we follow the same hyperparameters found in the cited works: the dimension of query embedding vectors is set to 50, except that 768-dimensional vectors are used for UmBERTo, as provided by the pre-trained model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5.2"
},
{
"text": "As discussed in Section 1, a distinguishing feature of Query2Prod2Vec is grounding, that is, the relation between words and an external domainin this case, products. It is therefore interesting not only to assess a possible quantitative gap in the quality of the representations produced by the baseline models, but also to remark the qualitative difference at the core of the proposed method: if words are about something, pure co-occurrence patterns may be capturing only fragments of lexical meaning (Bianchi et al., 2021) .",
"cite_spans": [
{
"start": 503,
"end": 525,
"text": "(Bianchi et al., 2021)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5.2"
},
{
"text": "As discussed in Section 2, we consider evaluation tasks focused on word meaning, without using product-based similarity (as that would implicitly and unfairly favor referential embeddings). Analogy-based tasks (Mikolov et al., 2013a) are a popular choice to measure semantic accuracy of embeddings, where a model is asked to fill templates like man : king = woman : ?; however, preparing analogies for digital shops presents non trivial challenges for human annotators: these would in fact need to know both the language and the underlying space (\"air max\" is closer to \"nike\" than to \"adidas\"), with the additional complication that many candidates may not have \"determinate\" answers (e.g. if Adidas is to Gazelle, then Nike is to what exactly?). In building our testing framework, we keep the intuition that analogies are an effective way to test for lexical meaning and the assumption that human-level concepts should be our ground truth: in particular, we programmatically produce analogies by leveraging existing human labelling, as indirectly provided by the merchandisers who built product catalogs 10 .",
"cite_spans": [
{
"start": 210,
"end": 233,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Solving Analogies in eCommerce",
"sec_num": "6"
},
{
"text": "We extract words from the merchandising taxonomy of the target shops, focusing on three most frequent fields in query logs: product type, brand and sport activity for Shop X; product type, brand and part of the house for Shop Y. Our goal is to go from taxonomy to analogies, that is, showing how for each pair of taxonomy types (e.g. brand : sport), we can produce two pairs of tokens (Wilson : tennis, Cressi : scubadiving), and create two analogies: b1 : s1 = b2 : ? (target: s2) and b2: s2 = b1 : ? (target: s1) for testing purposes. For each type in a pair (e.g. brand : sport), we repeat the following for all possible values of brand (e.g. \"Wilson\", \"Nike\") -given a brand B:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test Set Preparation",
"sec_num": "6.1"
},
{
"text": "1. we loop over the catalog and record all values of sport, along with their frequency, for the products made by B. For example, for B = N ike, the distribution may be: {\"soccer\": 10, \"basketball\": 8, \"scubadiving\": 0 }; for B = W ilson, it may be: {\"tennis\": 8};",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test Set Preparation",
"sec_num": "6.1"
},
{
"text": "10 It is important to note that this categorization is done by product experts for navigation and inventory purposes: all product labels are produced independently from any NLP consideration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test Set Preparation",
"sec_num": "6.1"
},
{
"text": "2. we calculate the Gini coefficient (Catalano et al., 2009) over the distribution on the values of sport and choose a conservative Gini threshold, i.e. 75th percentile: the goal of this threshold is to avoid \"undetermined\" analogies, such as Adidas : Gazelle = Nike : ?. The intuition behind the use of a dispersion measure is that product analogies are harder if the relevant label is found across a variety of products 11 .",
"cite_spans": [
{
"start": 37,
"end": 60,
"text": "(Catalano et al., 2009)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Test Set Preparation",
"sec_num": "6.1"
},
{
"text": "With all the Gini coefficients and a chosen threshold, we are now ready to generate the analogies, by repeating the following for all values of brandgiven a brand B we can repeat the following sampling process K times (K = 10 for our experiments):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test Set Preparation",
"sec_num": "6.1"
},
{
"text": "1. if B's Gini value for its distribution of sport labels is below our chosen threshold, we skip B; if B's value is above, we associate to B its most frequent sport value, e.g. Wilson : tennis. This is the source pair of the analogy; to generate a target pair, we sample randomly a brand C with high Gini together with its most frequent value, e.g. Atomic : skiing;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test Set Preparation",
"sec_num": "6.1"
},
{
"text": "2. we add to the final test set two analogies: Wilson : tennis = Atomic : ?, and Atomic : skiing = Wilson : ?.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test Set Preparation",
"sec_num": "6.1"
},
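{
"text": "The sampling process above can be summarized with the following sketch, assuming brand_to_sports maps each brand to a collections.Counter of sport labels from the catalog and threshold is the 75th-percentile Gini value computed beforehand; function and variable names are ours, for illustration only.

import random
from collections import Counter

def gini(counts):
    # Gini coefficient of a frequency distribution (e.g. sport labels for one brand)
    values = sorted(counts.values())
    n, total = len(values), sum(values)
    if n == 0 or total == 0:
        return 0.0
    cum = sum(i * v for i, v in enumerate(values, start=1))
    return (2.0 * cum) / (n * total) - (n + 1.0) / n

def build_analogies(brand_to_sports, threshold, k=10):
    # keep only brands whose label distribution is concentrated enough (high Gini)
    eligible = {b: c for b, c in brand_to_sports.items() if c and gini(c) >= threshold}
    analogies = []
    for b, counts in eligible.items():
        s_b = counts.most_common(1)[0][0]  # most frequent sport for brand b
        others = [x for x in eligible if x != b]
        if not others:
            continue
        for _ in range(k):  # K = 10 in the experiments
            c = random.choice(others)
            s_c = eligible[c].most_common(1)[0][0]
            analogies.append((b, s_b, c, s_c))  # b : s_b = c : ?  (target s_c)
            analogies.append((c, s_c, b, s_b))  # c : s_c = b : ?  (target s_b)
    return analogies
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test Set Preparation",
"sec_num": "6.1"
},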
{
"text": "The procedure is designed to generate test examples conservatively, but of fairly high quality, as for example Garmin : watches = Arena : bathing cap (the analogy relates two brands which sell only one type of items), or racket : tennis = bathing cap : indoor swimming (the analogy relates \"tools\" that are needed in two activities). A total of 1208 and 606 test analogies are used for the analogy task (AT) for, respectively, Shop X and Shop Y: we benchmark all models by reporting Hit Rate at different cutoffs (Vasile et al., 2016) , and we also report how many analogies are covered by the lexicon learned by the models (coverage is the ratio of analogies for which all embeddings are available in the relevant space). Table 1 reports model performance for the chosen cutoffs. Query2Prod2Vec (as trained on real",
"cite_spans": [
{
"start": 513,
"end": 534,
"text": "(Vasile et al., 2016)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [
{
"start": 723,
"end": 730,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Test Set Preparation",
"sec_num": "6.1"
},
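{
"text": "Hit Rate at a cutoff can be computed with the standard vector-offset method for analogies; the sketch below only illustrates the metric (the exact scoring protocol is not reproduced here), and assumes emb is a dict from lower-cased tokens to numpy vectors, with each analogy being a tuple (a, b, c, target) read as a : b = c : target.

import numpy as np

def hit_rate_at_k(analogies, emb, k=5):
    vocab = list(emb.keys())
    matrix = np.array([emb[w] for w in vocab])
    matrix = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)  # unit-norm rows
    hits, covered = 0, 0
    for a, b, c, target in analogies:
        if any(w not in emb for w in (a, b, c, target)):
            continue  # analogy not covered by the model's lexicon
        covered += 1
        query = emb[b] - emb[a] + emb[c]          # 3CosAdd-style offset
        query = query / np.linalg.norm(query)
        scores = matrix @ query                   # cosine similarity to every word
        ranked = [vocab[i] for i in np.argsort(-scores) if vocab[i] not in (a, b, c)]
        hits += int(target in ranked[:k])
    hr = hits / covered if covered else 0.0
    coverage = covered / len(analogies) if analogies else 0.0
    return hr, coverage
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test Set Preparation",
"sec_num": "6.1"
},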
{
"text": "HR@5,10 for X HR@5,10 for Y CV (X/Y) Acc on ST data) has the best performance 12 , while maintaining a very competitive coverage. More importantly, following our considerations in Section 2, results confirm that producing competitive embeddings on shops with different constraints is a challenging task for existing techniques, as models tend to either rely on specific query distribution (e.g. Search2Vec (time)), or the availability of linguistic and catalog resources with good coverage (e.g. Word2Vec). Query2Prod2Vec is the only model performing with comparable quality in the two scenarios, further strengthening the methodological importance of running benchmarks on more than one shop if findings are to be trusted by a large group of practitioners.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "To investigate sample efficiency, we run two further experiments on Shop X: first, we run AT giving only 1/3 of the original data to Query2Prod2Vec (both for the prod2vec space, and for the denotation). The small-dataset version of Query2Prod2Vec still outperforms all other full-dataset models in Table 1 (HR@5,10 = 0.276 / 0.380). Second, we train a Query2Prod2Vec model only with simulated data produced as explained in Section 4 -that is, with zero data from real search logs. The entirely simulated Query2Prod2Vec shows performance competitive with the small-dataset version (HR@5,10 = 0.259 / 0.363) 13 , outperforming all baselines. As a further independent check, we supplement AT with a small semantic similarity task (ST)",
"cite_spans": [],
"ref_spans": [
{
"start": 298,
"end": 305,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sample Efficiency and User Studies",
"sec_num": "7.1"
},
{
"text": "12 HR@20 was also computed, but omitted for brevity as it confirmed the general trend.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sample Efficiency and User Studies",
"sec_num": "7.1"
},
{
"text": "13 A similar result was obtained on Shop Y, and it is omitted for brevity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sample Efficiency and User Studies",
"sec_num": "7.1"
},
{
"text": "on Shop X 14 : two native speakers are asked to solve a small set (46) of manually curated questions in the form: \"Given the word Nike, which is the most similar, Adidas or Wilson?\". ST is meant to (partially) capture how much the embedding spaces align with lexical intuitions of generic speakers, independently of the product search dynamics. Table 1 reports results treating human ratings as ground truth and using cosine similarity on the learned embeddings for all models 15 . Query2Prod2Vec outperforms all other methods, further suggesting that the representations learned through referential information capture some aspects of lexical knowledge.",
"cite_spans": [
{
"start": 477,
"end": 479,
"text": "15",
"ref_id": null
}
],
"ref_spans": [
{
"start": 345,
"end": 352,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sample Efficiency and User Studies",
"sec_num": "7.1"
},
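{
"text": "For the ST check, a model's answer can be read directly off the embedding space with cosine similarity; a minimal sketch, where emb is again a token-to-vector dictionary and the example tokens are placeholders.

import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def answer_similarity_question(anchor, option_a, option_b, emb):
    # 'Given the word <anchor>, which is the most similar, <option_a> or <option_b>?'
    sim_a = cosine(emb[anchor], emb[option_a])
    sim_b = cosine(emb[anchor], emb[option_b])
    return option_a if sim_a >= sim_b else option_b

# e.g. answer_similarity_question('nike', 'adidas', 'wilson', emb)
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sample Efficiency and User Studies",
"sec_num": "7.1"
},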
{
"text": "As stressed in Section 2, accuracy and resources form a natural trade-off for industry practitioners. Therefore, it is important to highlight that, our model is not just more accurate, but significantly more efficient to train: the best performing Query2Prod2Vec takes 30 minutes (CPU only) to be completed for the larger Shop Y, while other competitive models such as Search2Vec(time) and QueryNGram2Vec require 2 to 4 hours 16 . Being able to quickly generate many models allows for cost-effective analysis and optimization; moreover, infrastructure cost is heavily related to ethical and social issues on energy consumption in NLP (Strubell et al., 2019) .",
"cite_spans": [
{
"start": 634,
"end": 657,
"text": "(Strubell et al., 2019)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computational Requirements",
"sec_num": "7.2"
},
{
"text": "In this work, we learned reference-based word embeddings for product search: Query2Prod2Vec significantly outperforms other embedding strategies on lexical tasks, and consistently provides good performance in small-data and zero-data scenarios, with the help of synthetic embeddings. In future work, we will extend our analysis to i) specific IR tasks, within the recent paradigm of the dual encoder model (Karpukhin et al., 2020) , and ii) compositional tasks, trying a systematic replication of the practical success obtained by through image-based heuristics.",
"cite_spans": [
{
"start": 406,
"end": 430,
"text": "(Karpukhin et al., 2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "8"
},
{
"text": "When looking at models like Query2Prod2Vec in the larger industry landscape, we hope our methodology can help the field broaden its horizons: while retail giants indubitably played a major role in moving eCommerce use cases to the center of NLP research, finding solutions that address a larger portion of the market is not just practically important, but also an exciting agenda of its own 17 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "8"
},
{
"text": "Coveo collects anonymized user data when providing its business services in full compliance with existing legislation (e.g. GDPR). The training dataset used for all models employs anonymous UUIDs to label events and sessions and, as such, it does not contain any information that can be linked to shoppers or physical entities; in particular, data is ingested through a standardized client-side integration, as specified in our public protocol.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Considerations",
"sec_num": "9"
},
{
"text": "As an indication of the market opportunity, only in 2019 and only in the space of AI-powered search and recommendations, we witnessed Coveo(Techcrunch), Algolia(Techcrunch, 2019a) and Lucidworks(Techcrunch, 2019b) raising more than 100M USD each from venture funds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Final parameters for prod2vec are: dimension = 50, win_size = 10, iterations = 30, ns_exponent = 0.75.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Please note that data on product popularity can be easily obtained through marketing tools, such as Google Analytics.7 Please note that while this work focuses on lexical quality, the same strategy can be applied to complex queries in a \"cold start\" scenario.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "All the experiments are performed with N = 500 simulated search events per query.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://huggingface.co/Musixmatch/ umberto-commoncrawl-cased-v1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In other words, Wilson : tennis = Atomic : ? (skiing) is a better analogy than Adidas : Gazelle = Nike : ?.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Shop X is chosen since it is easier to find speakers familiar with sport apparel than DIY items.15 Inter-rater agreement was substantial, with Cohen Kappa Score=0.67(McHugh, 2012).16 Training is performed on a Tesla V100 16GB GPU. As a back of the envelope calculation, training QueryNGram2Vec on a AWS p3 large instance costs around 12 USD, while a standard CPU container for Query2Prod2Vec costs less than 1 USD.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Please note that a previous draft of this article appeared on arxivhttps://arxiv.org/abs/2104.02061after the review process, but before the camera-ready submission.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We wish to thank Nadia Labai, Patrick John Chia, Andrea Polonioli, Ciro Greco and three anonymous reviewers for helpful comments on previous versions of this article. The authors wish to thank Coveo for the support and the computational resources used for the project. Federico Bianchi is a member of the Bocconi Institute for Data Science and Analytics (BIDSA) and the Data and Marketing Insights (DMI) unit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Scalable query n-gram embedding for improving matching and relevance in sponsored search",
"authors": [
{
"first": "Xiao",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Ordentlich",
"suffix": ""
},
{
"first": "Yuanyuan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Adwait",
"middle": [],
"last": "Ratnaparkhi",
"suffix": ""
},
{
"first": "Reena",
"middle": [],
"last": "Somvanshi",
"suffix": ""
},
{
"first": "Aldi",
"middle": [],
"last": "Tjahjadi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '18",
"volume": "",
"issue": "",
"pages": "52--61",
"other_ids": {
"DOI": [
"10.1145/3219819.3219897"
]
},
"num": null,
"urls": [],
"raw_text": "Xiao Bai, Erik Ordentlich, Yuanyuan Zhang, Andy Feng, Adwait Ratnaparkhi, Reena Somvanshi, and Aldi Tjahjadi. 2018. Scalable query n-gram em- bedding for improving matching and relevance in sponsored search. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '18, pages 52-61, New York, NY, USA. ACM.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Don't count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Georgiana",
"middle": [],
"last": "Dinu",
"suffix": ""
},
{
"first": "Germ\u00e1n",
"middle": [],
"last": "Kruszewski",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "238--247",
"other_ids": {
"DOI": [
"10.3115/v1/P14-1023"
]
},
"num": null,
"urls": [],
"raw_text": "Marco Baroni, Georgiana Dinu, and Germ\u00e1n Kruszewski. 2014. Don't count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 238-247, Baltimore, Maryland. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Site Search for Ecommerce",
"authors": [
{
"first": "Baymard",
"middle": [],
"last": "Institute",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baymard Institute. 2020. Site Search for Ecommerce.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Climbing towards NLU: On meaning, form, and understanding in the age of data",
"authors": [
{
"first": "Emily",
"middle": [
"M"
],
"last": "Bender",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Koller",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5185--5198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily M. Bender and Alexander Koller. 2020. Climb- ing towards NLU: On meaning, form, and under- standing in the age of data. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 5185-5198, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Language in a (Search) Box: Grounding Language Learning in Real-World Human-Machine Interaction",
"authors": [
{
"first": "Federico",
"middle": [],
"last": "Bianchi",
"suffix": ""
},
{
"first": "Ciro",
"middle": [],
"last": "Greco",
"suffix": ""
},
{
"first": "Jacopo",
"middle": [],
"last": "Tagliabue",
"suffix": ""
}
],
"year": 2021,
"venue": "NAACL-HLT. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Federico Bianchi, Ciro Greco, and Jacopo Tagliabue. 2021. Language in a (Search) Box: Grounding Lan- guage Learning in Real-World Human-Machine In- teraction. In NAACL-HLT. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Fantastic embeddings and how to align them: Zero-shot inference in a multi-shop scenario",
"authors": [
{
"first": "Federico",
"middle": [],
"last": "Bianchi",
"suffix": ""
},
{
"first": "Jacopo",
"middle": [],
"last": "Tagliabue",
"suffix": ""
},
{
"first": "Bingqing",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Bigon",
"suffix": ""
},
{
"first": "Ciro",
"middle": [],
"last": "Greco",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the SIGIR 2020 eCom workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Federico Bianchi, Jacopo Tagliabue, Bingqing Yu, Luca Bigon, and Ciro Greco. 2020a. Fantastic em- beddings and how to align them: Zero-shot infer- ence in a multi-shop scenario. In Proceedings of the SIGIR 2020 eCom workshop.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Bert goes shopping: Comparing distributional models for product representations",
"authors": [
{
"first": "Federico",
"middle": [],
"last": "Bianchi",
"suffix": ""
},
{
"first": "Bingqing",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Jacopo",
"middle": [],
"last": "Tagliabue",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2012.09807"
]
},
"num": null,
"urls": [],
"raw_text": "Federico Bianchi, Bingqing Yu, and Jacopo Tagliabue. 2020b. Bert goes shopping: Comparing distribu- tional models for product representations. arXiv preprint arXiv:2012.09807.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Experience grounds language",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Bisk",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Holtzman",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Thomason",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Andreas",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Joyce",
"middle": [],
"last": "Chai",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
},
{
"first": "Angeliki",
"middle": [],
"last": "Lazaridou",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Aleksandr",
"middle": [],
"last": "Nisnevich",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Pinto",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Turian",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "8718--8735",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.703"
]
},
"num": null,
"urls": [],
"raw_text": "Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob Andreas, Yoshua Bengio, Joyce Chai, Mirella Lap- ata, Angeliki Lazaridou, Jonathan May, Aleksandr Nisnevich, Nicolas Pinto, and Joseph Turian. 2020. Experience grounds language. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8718-8735, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00051"
]
},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Measuring resource inequality: The gini coefficient",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Catalano",
"suffix": ""
},
{
"first": "Tanya",
"middle": [],
"last": "Leise",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Pfaff",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.5038/1936-4660.2.2.4"
]
},
"num": null,
"urls": [],
"raw_text": "Michael Catalano, Tanya Leise, and Thomas Pfaff. 2009. Measuring resource inequality: The gini co- efficient. Numeracy, 2.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Global Ecommerce 2020. Ecommerce Decelerates amid Global Retail Contraction but Remains a Bright Spot",
"authors": [
{
"first": "Ethan",
"middle": [],
"last": "Cramer-Flood",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ethan Cramer-Flood. 2020. Global Ecommerce 2020. Ecommerce Decelerates amid Global Retail Con- traction but Remains a Bright Spot.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Overview of the trec 2019 deep learning track",
"authors": [
{
"first": "Nick",
"middle": [],
"last": "Craswell",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Bhaskar Mitra",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"Fernando"
],
"last": "Yilmaz",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Campos",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Voorhees",
"suffix": ""
}
],
"year": 2020,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nick Craswell, Bhaskar Mitra, E. Yilmaz, Daniel Fer- nando Campos, and E. Voorhees. 2020. Overview of the trec 2019 deep learning track. ArXiv, abs/2003.07820.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Site search: retailers still have a lot to learn",
"authors": [
{
"first": "",
"middle": [],
"last": "Econsultancy",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Econsultancy. 2020. Site search: retailers still have a lot to learn.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Query2vec: Search query expansion with query embeddings",
"authors": [
{
"first": "Alex",
"middle": [
"Egg"
],
"last": "",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Egg. 2019. Query2vec: Search query expansion with query embeddings.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Scalable semantic matching of queries to ads in sponsored search advertising",
"authors": [
{
"first": "Mihajlo",
"middle": [],
"last": "Grbovic",
"suffix": ""
},
{
"first": "Nemanja",
"middle": [],
"last": "Djuric",
"suffix": ""
},
{
"first": "Vladan",
"middle": [],
"last": "Radosavljevic",
"suffix": ""
},
{
"first": "Fabrizio",
"middle": [],
"last": "Silvestri",
"suffix": ""
},
{
"first": "Ricardo",
"middle": [],
"last": "Baeza-Yates",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Ordentlich",
"suffix": ""
},
{
"first": "Lee",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Gavin",
"middle": [],
"last": "Owens",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '16",
"volume": "",
"issue": "",
"pages": "375--384",
"other_ids": {
"DOI": [
"10.1145/2911451.2911538"
]
},
"num": null,
"urls": [],
"raw_text": "Mihajlo Grbovic, Nemanja Djuric, Vladan Radosavl- jevic, Fabrizio Silvestri, Ricardo Baeza-Yates, An- drew Feng, Erik Ordentlich, Lee Yang, and Gavin Owens. 2016. Scalable semantic matching of queries to ads in sponsored search advertising. In Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Infor- mation Retrieval, SIGIR '16, page 375-384, New York, NY, USA. Association for Computing Machin- ery.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "E-commerce in your inbox: Product recommendations at scale",
"authors": [
{
"first": "Mihajlo",
"middle": [],
"last": "Grbovic",
"suffix": ""
},
{
"first": "Vladan",
"middle": [],
"last": "Radosavljevic",
"suffix": ""
},
{
"first": "Nemanja",
"middle": [],
"last": "Djuric",
"suffix": ""
},
{
"first": "Narayan",
"middle": [],
"last": "Bhamidipati",
"suffix": ""
},
{
"first": "Jaikit",
"middle": [],
"last": "Savla",
"suffix": ""
},
{
"first": "Varun",
"middle": [],
"last": "Bhagwan",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Sharp",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of KDD '15",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/2783258.2788627"
]
},
"num": null,
"urls": [],
"raw_text": "Mihajlo Grbovic, Vladan Radosavljevic, Nemanja Djuric, Narayan Bhamidipati, Jaikit Savla, Varun Bhagwan, and Doug Sharp. 2015. E-commerce in your inbox: Product recommendations at scale. In Proceedings of KDD '15.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Dense passage retrieval for open-domain question answering",
"authors": [
{
"first": "Vladimir",
"middle": [],
"last": "Karpukhin",
"suffix": ""
},
{
"first": "Barlas",
"middle": [],
"last": "Oguz",
"suffix": ""
},
{
"first": "Sewon",
"middle": [],
"last": "Min",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Ledell",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "6769--6781",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.550"
]
},
"num": null,
"urls": [],
"raw_text": "Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), pages 6769- 6781, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A solution to plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge",
"authors": [
{
"first": "K",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Susan",
"middle": [
"T"
],
"last": "Landauer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dumais",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas K. Landauer and Susan T. Dumais. 1997. A so- lution to plato's problem: The latent semantic anal- ysis theory of acquisition, induction, and representa- tion of knowledge.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Interrater reliability: The kappa statistic",
"authors": [
{
"first": "Mary",
"middle": [],
"last": "Mchugh",
"suffix": ""
}
],
"year": 2012,
"venue": "Biochemia medica :\u010dasopis Hrvatskoga dru\u0161tva medicinskih biokemi\u010dara / HDMB",
"volume": "22",
"issue": "",
"pages": "276--82",
"other_ids": {
"DOI": [
"10.11613/BM.2012.031"
]
},
"num": null,
"urls": [],
"raw_text": "Mary McHugh. 2012. Interrater reliability: The kappa statistic. Biochemia medica :\u010dasopis Hrvatskoga dru\u0161tva medicinskih biokemi\u010dara / HDMB, 22:276- 82.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Gregory",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 26th International Conference on Neural Information Processing Systems",
"volume": "2",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Cor- rado, and Jeffrey Dean. 2013b. Distributed represen- tations of words and phrases and their composition- ality. In Proceedings of the 26th International Con- ference on Neural Information Processing Systems -Volume 2, NIPS'13, page 3111-3119, Red Hook, NY, USA. Curran Associates Inc.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "English as a formal language",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Montague",
"suffix": ""
}
],
"year": 1974,
"venue": "Formal Philosophy: Selected Papers of Richard Montague",
"volume": "",
"issue": "",
"pages": "188--222",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Montague. 1974. English as a formal lan- guage. In Richmond H. Thomason, editor, Formal Philosophy: Selected Papers of Richard Montague, pages 188-222. Yale University Press, New Haven, London.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Revisiting skip-gram negative sampling model with regularization",
"authors": [
{
"first": "Cun",
"middle": [],
"last": "Mu",
"suffix": ""
},
{
"first": "Guang",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cun Mu, Guang Yang, and Zheng Yan. 2018. Revis- iting skip-gram negative sampling model with regu- larization. CoRR, abs/1804.00306.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Document ranking with a pretrained sequence-to-sequence model",
"authors": [
{
"first": "Rodrigo",
"middle": [],
"last": "Nogueira",
"suffix": ""
},
{
"first": "Zhiying",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2020,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rodrigo Nogueira, Zhiying Jiang, and Jimmy Lin. 2020. Document ranking with a pretrained sequence-to-sequence model. In EMNLP.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "What the [MASK]? making sense of language-specific BERT models",
"authors": [
{
"first": "Debora",
"middle": [],
"last": "Nozza",
"suffix": ""
},
{
"first": "Federico",
"middle": [],
"last": "Bianchi",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2003.02912"
]
},
"num": null,
"urls": [],
"raw_text": "Debora Nozza, Federico Bianchi, and Dirk Hovy. 2020. What the [MASK]? making sense of language-specific BERT models. arXiv preprint arXiv:2003.02912.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1202"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A primer in BERTology: What we know about how BERT works",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Rogers",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Kovaleva",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "842--866",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00349"
]
},
"num": null,
"urls": [],
"raw_text": "Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2020. A primer in BERTology: What we know about how BERT works. Transactions of the Associ- ation for Computational Linguistics, 8:842-866.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Energy and policy considerations for deep learning in nlp",
"authors": [
{
"first": "Emma",
"middle": [],
"last": "Strubell",
"suffix": ""
},
{
"first": "Ananya",
"middle": [],
"last": "Ganesh",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3645--3650",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emma Strubell, Ananya Ganesh, and Andrew McCal- lum. 2019. Energy and policy considerations for deep learning in nlp. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 3645-3650.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Shopping in the multiverse: A counterfactual approach to insession attribution",
"authors": [
{
"first": "Jacopo",
"middle": [],
"last": "Tagliabue",
"suffix": ""
},
{
"first": "Bingqing",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the SIGIR 2020 Workshop on eCommerce",
"volume": "20",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacopo Tagliabue and Bingqing Yu. 2020. Shopping in the multiverse: A counterfactual approach to in- session attribution. In Proceedings of the SIGIR 2020 Workshop on eCommerce (ECOM 20).",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "How to grow a (product) tree: Personalized category suggestions for eCommerce type-ahead",
"authors": [
{
"first": "Jacopo",
"middle": [],
"last": "Tagliabue",
"suffix": ""
},
{
"first": "Bingqing",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Marie",
"middle": [],
"last": "Beaulieu",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of The 3rd Workshop on e-Commerce and NLP",
"volume": "",
"issue": "",
"pages": "7--18",
"other_ids": {
"DOI": [
"10.18653/v1/2020.ecnlp-1.2"
]
},
"num": null,
"urls": [],
"raw_text": "Jacopo Tagliabue, Bingqing Yu, and Marie Beaulieu. 2020a. How to grow a (product) tree: Personalized category suggestions for eCommerce type-ahead. In Proceedings of The 3rd Workshop on e-Commerce and NLP, pages 7-18, Seattle, WA, USA. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "The embeddings that came in from the cold: Improving vectors for new and rare products with content-based inference",
"authors": [
{
"first": "Jacopo",
"middle": [],
"last": "Tagliabue",
"suffix": ""
},
{
"first": "Bingqing",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Federico",
"middle": [],
"last": "Bianchi",
"suffix": ""
}
],
"year": 2020,
"venue": "Fourteenth ACM Conference on Recommender Systems, RecSys '20",
"volume": "",
"issue": "",
"pages": "577--578",
"other_ids": {
"DOI": [
"10.1145/3383313.3411477"
]
},
"num": null,
"urls": [],
"raw_text": "Jacopo Tagliabue, Bingqing Yu, and Federico Bianchi. 2020b. The embeddings that came in from the cold: Improving vectors for new and rare products with content-based inference. In Fourteenth ACM Conference on Recommender Systems, RecSys '20, page 577-578, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "coveo-raises-227m-at-1b-valuation",
"authors": [
{
"first": "",
"middle": [],
"last": "Techcrunch",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Techcrunch. coveo-raises-227m-at-1b-valuation.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Algolia finds $110m from accel and salesforce",
"authors": [
{
"first": "",
"middle": [],
"last": "Techcrunch",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Techcrunch. 2019a. Algolia finds $110m from accel and salesforce.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Lucidworks raises $100m to expand in ai finds",
"authors": [
{
"first": "",
"middle": [],
"last": "Techcrunch",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Techcrunch. 2019b. Lucidworks raises $100m to ex- pand in ai finds.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Vanessa Murdock, and Maarten de Rijke. 2020. Challenges and research opportunities in ecommerce search and recommendations",
"authors": [
{
"first": "Manos",
"middle": [],
"last": "Tsagkias",
"suffix": ""
},
{
"first": "Tracy",
"middle": [
"Holloway"
],
"last": "King",
"suffix": ""
},
{
"first": "Surya",
"middle": [],
"last": "Kallumadi",
"suffix": ""
},
{
"first": "Vanessa",
"middle": [],
"last": "Murdock",
"suffix": ""
},
{
"first": "Maarten",
"middle": [],
"last": "de Rijke",
"suffix": ""
}
],
"year": 2020,
"venue": "SIGIR Forum",
"volume": "54",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manos Tsagkias, Tracy Holloway King, Surya Kallumadi, Vanessa Murdock, and Maarten de Ri- jke. 2020. Challenges and research opportunities in ecommerce search and recommendations. In SIGIR Forum, volume 54.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Meta-prod2vec: Product embeddings using side-information for recommendation",
"authors": [
{
"first": "Flavian",
"middle": [],
"last": "Vasile",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Smirnova",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th ACM Conference on Recommender Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Flavian Vasile, Elena Smirnova, and Alexis Conneau. 2016. Meta-prod2vec: Product embeddings using side-information for recommendation. Proceedings of the 10th ACM Conference on Recommender Sys- tems.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Blending search and discovery: Tag-based query refinement with contextual reinforcement learning",
"authors": [
{
"first": "Bingqing",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Jacopo",
"middle": [],
"last": "Tagliabue",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bingqing Yu and Jacopo Tagliabue. 2020. Blend- ing search and discovery: Tag-based query refine- ment with contextual reinforcement learning. In EComNLP.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "An image is worth a thousand features: Scalable product representations for in-session type-ahead personalization",
"authors": [
{
"first": "Bingqing",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Jacopo",
"middle": [],
"last": "Tagliabue",
"suffix": ""
},
{
"first": "Ciro",
"middle": [],
"last": "Greco",
"suffix": ""
},
{
"first": "Federico",
"middle": [],
"last": "Bianchi",
"suffix": ""
}
],
"year": 2020,
"venue": "Companion Proceedings of the Web Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bingqing Yu, Jacopo Tagliabue, Ciro Greco, and Fed- erico Bianchi. 2020. An image is worth a thou- sand features: Scalable product representations for in-session type-ahead personalization. Companion Proceedings of the Web Conference 2020.",
"links": null
}
},
"ref_entries": {}
}
}