{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:33:52.938506Z"
},
"title": "Campaign Keyword Augmentation via Generative Methods",
"authors": [
{
"first": "Haoran",
"middle": [],
"last": "Shi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Amazon.com Inc Seattle",
"location": {
"settlement": "Washington",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Zhibiao",
"middle": [],
"last": "Rao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Amazon.com Inc Seattle",
"location": {
"settlement": "Washington",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Yongning",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Amazon.com Inc Seattle",
"location": {
"settlement": "Washington",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Zuohua",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Amazon.com Inc Seattle",
"location": {
"settlement": "Washington",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Chu",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Amazon.com Inc Seattle",
"location": {
"settlement": "Washington",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Keyword augmentation is a fundamental problem for sponsored search modeling and business. Machine generated keywords can be recommended to advertisers for better campaign discoverability as well as used as features for sourcing and ranking models. Generating highquality keywords is difficult, especially for cold campaigns with limited or even no historical logs; and the industry trend of including multiple products in a single ad campaign is making the problem more challenging. In this paper, we propose a keyword augmentation method based on generative seq2seq model and triebased search mechanism, which is able to generate high-quality keywords for any products or product lists. We conduct human annotations, offline analysis, and online experiments to evaluate the performance of our method against benchmarks in terms of augmented keyword quality as well as lifted ad exposure. The experiment results demonstrate that our method is able to generate more valid keywords which can serve as an efficient addition to advertiser selected keywords.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Keyword augmentation is a fundamental problem for sponsored search modeling and business. Machine generated keywords can be recommended to advertisers for better campaign discoverability as well as used as features for sourcing and ranking models. Generating highquality keywords is difficult, especially for cold campaigns with limited or even no historical logs; and the industry trend of including multiple products in a single ad campaign is making the problem more challenging. In this paper, we propose a keyword augmentation method based on generative seq2seq model and triebased search mechanism, which is able to generate high-quality keywords for any products or product lists. We conduct human annotations, offline analysis, and online experiments to evaluate the performance of our method against benchmarks in terms of augmented keyword quality as well as lifted ad exposure. The experiment results demonstrate that our method is able to generate more valid keywords which can serve as an efficient addition to advertiser selected keywords.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Sponsored search has proved to be an efficient and inspiring way of connecting shoppers with interesting products. Advertisers have the freedom to provide a list of targeting keywords with associated bidding prices to the ad platform, so that their ad campaigns can match to shopper queries either lexically or semantically. The quantity and quality of targeting keywords are fundamental to the performance of the ad campaign: insufficient keywords can hardly get the campaigns with enough exposure; and low-quality ones will match shopper queries with irrelevant ads, leading to low conversion and damages to customer experiences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Efficient and optimal keyword selection is challenging and time consuming because it requires deep understanding of the ad industry as well as the sponsored search platform. Furthermore, an ad campaign used to be designed for a single product traditionally, but ads with richer information start to appear in the recent years. Nowadays, an ad campaign can contain multiple products, brand stores, or even rich media contents. Consequently, the keyword selection task becomes even more crucial and challenging for advertisers campaign creation and management.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present an end-to-end machine learning solution to generate keywords for ad campaigns. The method applies to single-product campaigns as well as campaigns with any number of products. It only relies on product information like product titles, hence efficient on newly created campaigns without any performance logs in the past. We conduct offline and online experiments on the proposed method and observe significant improvements over traditional statistical methods in terms of keyword quality. Specifically, we highlight our contributions as the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose an end-to-end solution for keyword generation. It can be applied to recommendation of high-quality keywords for advertisers as well as semantic augmentation for better ad exposure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 The keyword generation method relies on product metadata but not historical performance data of ad. Therefore, the method applies to tail or newly-created campaigns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Our method is able to handle single-productcampaign as well as multi-product-campaign by leveraging semantic meanings of each product in the latent space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 The quality and superiority of the generated keywords are validated by human audits, offline analysis as well as online experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Considerable research work has been devoted to keyword augmentation techniques because of its important applications in information retrieval, indexing, and digital library management. The majority of existing work focuses on processing documents with statistical information including term co-occurrence and frequency (Campos et al., 2020) . In particular, Rose et al. (2010) proposed RAKE to split the document into candidate phrases by word delimiters and calculate their scores with cooccurrence counts. Ravi et al. (2010) first applied statistical machine translation model for keyword candidate generation and ranking. With rapid development of deep learning models, neural machine translation has surpassed statistical translation in many benchmarks, where recurrent neural networks (RNNs) and gating mechanisms are popular building blocks to model sequence dependencies and alignments (Hochreiter and Schmidhuber, 1997; Cho et al., 2014) . However, extracting highquality and diverse keywords from short document like ad campaigns remains a difficult problem due to the lack of context. Query expansion for improved product or ad discovery, as an application of keyword augmentation, is crucial to e-commerce search engines and recommender systems. He et al. (2016) applies LSTM architecture to rewriting query into web document index space. However, the long tail distribution of the query space hinders the deployment of complicated generative models. It is well known that infrequent queries account for a large portion of the e-commerce daily queries. In Lian et al. (2019) , a lightweight neural network for infrequent queries is trained, incurring even more engineering burdens for deployment. It also proposed the method of using trie-based search to normalize the decoding in the constrained semantic space, which is further investigated in Chen et al. (2020) .",
"cite_spans": [
{
"start": 319,
"end": 340,
"text": "(Campos et al., 2020)",
"ref_id": null
},
{
"start": 358,
"end": 376,
"text": "Rose et al. (2010)",
"ref_id": "BIBREF11"
},
{
"start": 508,
"end": 526,
"text": "Ravi et al. (2010)",
"ref_id": "BIBREF10"
},
{
"start": 893,
"end": 927,
"text": "(Hochreiter and Schmidhuber, 1997;",
"ref_id": "BIBREF6"
},
{
"start": 928,
"end": 945,
"text": "Cho et al., 2014)",
"ref_id": "BIBREF4"
},
{
"start": 1257,
"end": 1273,
"text": "He et al. (2016)",
"ref_id": "BIBREF5"
},
{
"start": 1567,
"end": 1585,
"text": "Lian et al. (2019)",
"ref_id": "BIBREF7"
},
{
"start": 1857,
"end": 1875,
"text": "Chen et al. (2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Expanding advertiser bidding keywords is another growing research area. Qiao et al. (2017) applies keyword clustering and topic modeling to retrieve similar keywords and Zhou et al. (2019) conducts keywords expansion in the constrained domains through neural generative models. In addition, Zhang et al. (2014) formulates the keyword recommendation problem as a mixed integer optimization problem, where they collect candidate keywords whose relevance score to the ad group exceed a threshold and handle the keyword selection problem by maximizing revenue. Such methods rely on the quality of advertiser bidding keywords. Campaigns with sub-optimal or misused keywords may suffer significantly.",
"cite_spans": [
{
"start": 72,
"end": 90,
"text": "Qiao et al. (2017)",
"ref_id": "BIBREF8"
},
{
"start": 170,
"end": 188,
"text": "Zhou et al. (2019)",
"ref_id": "BIBREF13"
},
{
"start": 291,
"end": 310,
"text": "Zhang et al. (2014)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this section, we present our products-to-keyword framework and algorithm for campaign keyword augmentation. The framework is compatible with any seq2seq components with encoders and decoders. Given an ad campaign C including a set of products {p 1 , p 2 , . . . , p n }, our goal is to generate a list of relevant keywords {k 1 , k 2 , . . . , k m }. We will describe how we generate keywords for each product first and later generalize to ad campaigns with multiple products.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "We choose to use organic search click data for model training, which includes the pairs of queries and clicked products in search log. Compared to sponsored search data, it can guide the model to generate more keywords than existing ads system as shown in Lian et al. (2019) . We lowercase shopper queries and product titles, and then apply pretrained T5 tokenizer (Raffel et al., 2020) for tokenization. Note that the vocabulary space for shopper queries and product titles are ever-growing, but the subword encoding space is stable. Therefore, subword tokenization is an efficient method to handle the out-of-vocabulary issue which hurts the fluency of generated queries.",
"cite_spans": [
{
"start": 256,
"end": 274,
"text": "Lian et al. (2019)",
"ref_id": "BIBREF7"
},
{
"start": 365,
"end": 386,
"text": "(Raffel et al., 2020)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and Preprocessing",
"sec_num": "3.1"
},
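{
"text": "As a minimal sketch of this preprocessing step, the snippet below lowercases a title and a query and tokenizes both with a pretrained T5 tokenizer from the Hugging Face transformers library. The 't5-small' checkpoint and the example strings are illustrative assumptions; the paper only states that a pretrained T5 tokenizer is used.\n\nfrom transformers import T5Tokenizer\n\n# Checkpoint choice is an assumption for illustration.\ntokenizer = T5Tokenizer.from_pretrained('t5-small')\n\ntitle = 'Mens Adjustable Pullover Hoodie Sweatshirt'.lower()\nquery = 'men adjustable hoodie'.lower()\n\n# Subword pieces keep the encoding space stable even for unseen words.\ntitle_ids = tokenizer.encode(title)  # list of subword ids, ends with </s>\nquery_ids = tokenizer.encode(query)\nprint(tokenizer.convert_ids_to_tokens(title_ids))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and Preprocessing",
"sec_num": "3.1"
},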
{
"text": "In the following, we use X = [x 1 , x 2 , ...x L ] to denote tokenized product title whose length is L. Let \u03b8 be the trainable model parameters, and Q = [q 0 , q 1 , q 2 , . . . , q S ] as the padded tokenized target query, where q 0 is the special start token and q S the special end token. For training, we feed the model with the product title X and the first s query token",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling",
"sec_num": "3.2"
},
{
"text": "Q <s = [q 0 , q 1 , . . . , q s\u22121 ], to predict the next query token q s , where 1 \u2264 s \u2264 S.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling",
"sec_num": "3.2"
},
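{
"text": "As a small illustrative sketch (not the authors' code), one tokenized (title, query) pair can be expanded into next-token training examples (X, Q_{<s}, q_s) as follows; the start and end token ids follow the T5 convention (decoder start/pad id 0, end-of-sequence id 1), which is an assumption here.\n\ndef make_examples(title_ids, query_ids, start_id=0, end_id=1):\n    # Pad the query with the special start and end tokens: [q_0 .. q_S].\n    q = [start_id] + query_ids + [end_id]\n    # One example per position s: predict q_s from the title and Q_{<s}.\n    return [(title_ids, q[:s], q[s]) for s in range(1, len(q))]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling",
"sec_num": "3.2"
},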
{
"text": "We adopt the seq2seq model training with teacher forcing, where multi-layer Gated Recurrent Units (GRU) are used in the encoder and the decoder (Cho et al., 2014) . The encoder transforms the tokenized sequence into the latent space with an embedding layer and a GRU encoder. Then the decoder transforms the latent vector back to a predicted distribution over token vocabulary given all previously decoded tokens as inputs. The token embedding layer for the encoder and the decoder are shared. We use cross entropy loss to maximize the likelihood of the model generating the correct next token for each training data point (X, Q <s , q s ). The objective function is written as",
"cite_spans": [
{
"start": 144,
"end": 162,
"text": "(Cho et al., 2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling",
"sec_num": "3.2"
},
{
"text": "L(\u03b8) = \u2212 S s=1 log p(q s |X, Q <s ; \u03b8))). (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling",
"sec_num": "3.2"
},
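{
"text": "The following is a minimal PyTorch sketch of this architecture and of the loss in Eq. (1), written for illustration rather than as the authors' implementation; the vocabulary size and batch shapes are assumptions, while the 6-layer, 256-dimensional GRUs match the setup reported in Section 4.2.\n\nimport torch\nimport torch.nn as nn\n\nclass Seq2Seq(nn.Module):\n    def __init__(self, vocab_size, hidden=256, layers=6):\n        super().__init__()\n        # Token embedding shared by the encoder and the decoder.\n        self.embed = nn.Embedding(vocab_size, hidden)\n        self.encoder = nn.GRU(hidden, hidden, num_layers=layers, batch_first=True)\n        self.decoder = nn.GRU(hidden, hidden, num_layers=layers, batch_first=True)\n        self.out = nn.Linear(hidden, vocab_size)\n\n    def forward(self, title_ids, query_in):\n        # Encode the title; the final hidden state seeds the decoder.\n        _, h = self.encoder(self.embed(title_ids))\n        # Teacher forcing: the decoder consumes the gold tokens Q_{<s}.\n        dec_out, _ = self.decoder(self.embed(query_in), h)\n        return self.out(dec_out)  # per-step logits over the vocabulary\n\nmodel = Seq2Seq(vocab_size=32000)\nloss_fn = nn.CrossEntropyLoss()  # negative log-likelihood, as in Eq. (1)\n\ntitle_ids = torch.randint(0, 32000, (8, 20))  # batch of tokenized titles X\nquery = torch.randint(0, 32000, (8, 7))       # padded queries [q_0 .. q_S]\nlogits = model(title_ids, query[:, :-1])      # decoder inputs Q_{<s}\nloss = loss_fn(logits.reshape(-1, 32000), query[:, 1:].reshape(-1))  # targets q_s\nloss.backward()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling",
"sec_num": "3.2"
},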
{
"text": "Intuitively, the desired generated keywords should be diverse to accommodate different aspects of the products, and relevant to promote the products to right shoppers. In the model inference phase, the encoding is the same as in training, while in decoding process beam search is usually used for larger search space. However, standard beam search will generate similar sequences with minimal diversity. To resolve this issue, we build the trie T Q on all tokenized queries in our training dataset to normalize the decoding. Specifically in the i-th decoding step, the decoder outputs the probability of p(q s |X, Q <s over the vocabulary. Then we extract all children nodes of Q <s in the trie and keep those with highest probability in the candidate beam for future decoding. In this way, it is guaranteed that the generated sequence exists in the canonical query space as a path traversal in the trie ending with the special end token. We define such queries as valid queries since they reflect the word selection of shoppers. The prebuilt Trie and the inference workflow for one product title is illustrated in Figure 1 and 2 respectively. Now we discuss the handling of multiple products within one campaign. A naive solution is to generate keywords for each product, and then aggregate all generated keywords. Alternatively, we propose to encode each product title into the latent space, and apply the decoder to the averaged title encodings. These two methods are denoted as Generation by Keyword Aggregation (G-KA) and Generation by Hidden State Mixing (G-HSM). ",
"cite_spans": [],
"ref_spans": [
{
"start": 1115,
"end": 1123,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Keyword Generation",
"sec_num": "3.3"
},
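{
"text": "A sketch of one trie-constrained beam-search step is shown below. The trie maps a token prefix to its allowed continuations; the helper names and the decoder scoring interface (next_log_probs) are illustrative assumptions rather than the authors' exact implementation.\n\nimport math\n\ndef build_trie(tokenized_queries):\n    root = {}\n    for q in tokenized_queries:\n        node = root\n        for tok in q + ['</s>']:  # the end token marks a complete valid query\n            node = node.setdefault(tok, {})\n    return root\n\ndef beam_step(beams, trie, next_log_probs, beam_size=20):\n    # beams: list of (prefix_tokens, score); next_log_probs(prefix) is assumed\n    # to return a dict token -> log p(token | X, prefix) from the decoder.\n    candidates = []\n    for prefix, score in beams:\n        node = trie\n        for tok in prefix:  # walk the trie down to the node for Q_{<s}\n            node = node[tok]\n        log_p = next_log_probs(prefix)\n        for tok in node:  # only child tokens that exist in the trie survive\n            candidates.append((prefix + [tok], score + log_p.get(tok, -math.inf)))\n    candidates.sort(key=lambda c: c[1], reverse=True)\n    return candidates[:beam_size]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyword Generation",
"sec_num": "3.3"
},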
{
"text": "In this section, we compare the performance of the proposed methods with empirical study. In Section 4.1, we explain how we collect our experimental data including training, validation, and testing; then we introduce benchmarking methods and parameter setup in Section 4.2; evaluation metrics are explained in Section 4.3; and eventually in Section 4.4, we illustrate experimental results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We collect query-product pairs in search click logs from September 2020 to March 2021. To reduce the noise, we apply a series of filtering: 1) remove stop-words in queries and product titles; 2) remove tokens with non-alphanumeric characters; 3) remove pairs with empty query or title; 4) remove query-product pairs with less than 1024 clicks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "4.1"
},
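{
"text": "The sketch below implements the four filters; the stop-word list and the layout of the raw log records are assumptions made for illustration.\n\nimport re\n\nALNUM = re.compile(r'^[a-z0-9]+$')\n\ndef clean(text, stopwords):\n    # Filters 1) and 2): drop stop-words and non-alphanumeric tokens.\n    toks = [t for t in text.lower().split() if t not in stopwords]\n    return [t for t in toks if ALNUM.match(t)]\n\ndef filter_pairs(pairs, stopwords, min_clicks=1024):\n    # pairs: iterable of (query, title, clicks) records from the click log.\n    kept = []\n    for query, title, clicks in pairs:\n        q, t = clean(query, stopwords), clean(title, stopwords)\n        # Filters 3) and 4): drop empty pairs and low-click pairs.\n        if q and t and clicks >= min_clicks:\n            kept.append((q, t))\n    return kept",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "4.1"
},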
{
"text": "In total, we collect 6.2M pairs of queries and products, where more than 95% of the queries have less than or equal to 6 tokens. We split them into training set (5.2M) and validation set (1M). To prevent frequent queries dominating the result and encourage diversity, we normalize the weight of all pairs to the same for training stage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "4.1"
},
{
"text": "For testing, we use cold campaigns to benchmark the keyword augmentation model performance, which are campaigns with less than 100 impressions from January 2021 to March 2021. Since we use organic search log for training, there is no overlap between training and testing data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "4.1"
},
{
"text": "The benchmark methods include heuristics based on search log as well as trending keyword genera-tion methods. We use ADV to denote targeting keywords provided by advertisers, and OS to denote keywords generated by organic search logs heuristically. More specifically, we extract those queries which lead to the click of the campaign products in organic search, and collect those distinct queries as keywords for the campaign. We also include RAKE in our comparison which is a popular opensourced keyword extraction algorithm based on lexical co-occurrence statistics. To achieve better extraction performance, we run RAKE on the concatenation of all product titles in the campaign, and keep the keywords with length between 2 and 6. In addition, we compare the two variants of our proposed solutions, G-KA and G-HSM:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmarks and Parameters Setup",
"sec_num": "4.2"
},
{
"text": "\u2022 G-KA: We select top 8 generated queries with lowest perplexity from each product.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmarks and Parameters Setup",
"sec_num": "4.2"
},
{
"text": "\u2022 G-HSM: We select top 3 products in terms of sales in each campaign and averaged their latent encodings for decoding. We select top 8 generated queries for each campaign too.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmarks and Parameters Setup",
"sec_num": "4.2"
},
{
"text": "For both variants, the encoder and decoders are 6-layer GRUs with 256 hidden dimensions, and the beam search size is set as 20. We choose the model with the lowest loss on the validation dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmarks and Parameters Setup",
"sec_num": "4.2"
},
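{
"text": "The following sketch illustrates the G-HSM latent mixing on top of the Seq2Seq sketch from Section 3.2; the exact averaging granularity is an assumption, since the paper only states that the title encodings are averaged before decoding.\n\nimport torch\n\ndef encode_mixed(model, title_id_batches):\n    # title_id_batches: list of (1, L_i) LongTensors, one per selected product.\n    states = [model.encoder(model.embed(t))[1] for t in title_id_batches]\n    # Average the encoder hidden states in the latent space.\n    return torch.stack(states).mean(dim=0)\n\n# The averaged state then seeds the trie-constrained beam search exactly\n# as in the single-product case; G-KA instead decodes each product\n# separately and aggregates the generated keywords.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmarks and Parameters Setup",
"sec_num": "4.2"
},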
{
"text": "We sample 1500 keyword-campaign pairs from each method for human annotations. Each campaign will be associated with a landing page URL including all targeted products. Three different auditors are assigned to label each pair as exactly relevant, partially relevant, and irrelevant. We take the majority decision as the final label of each pair. For simplicity, we merge exactly relevant and partially relevant labels, and report the ratio of relevance for different methods. To evaluate whether the generated keywords are able to effectively promote ad exposures, we calculate the total traffics incurred by generated keywords as a metric, and report the median total traffics as the Exposure column of Table 1 . We also report the median value of the number of generated keywords for each method as the Count column, and use Exposure divided by Count to evaluate the traffic incurred by each individual keyword.",
"cite_spans": [],
"ref_spans": [
{
"start": 703,
"end": 710,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Evaluation Method",
"sec_num": "4.3"
},
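{
"text": "A small sketch of the evaluation aggregation follows: a majority vote over the three auditor labels, then median-based Exposure and Count metrics. The label names and record fields are assumptions for illustration.\n\nfrom collections import Counter\nfrom statistics import median\n\ndef majority_label(labels):\n    # labels: three strings from {'exact', 'partial', 'irrelevant'};\n    # ties are broken arbitrarily in this sketch.\n    winner, _ = Counter(labels).most_common(1)[0]\n    return winner\n\ndef relevance_ratio(label_triples):\n    votes = [majority_label(t) for t in label_triples]\n    # 'exact' and 'partial' are merged into a single relevant class.\n    return sum(v != 'irrelevant' for v in votes) / len(votes)\n\ndef exposure_metrics(campaigns):\n    # campaigns: list of dicts with 'traffic' and 'keywords' per campaign.\n    exposure = median(c['traffic'] for c in campaigns)\n    count = median(len(c['keywords']) for c in campaigns)\n    return exposure, count, exposure / count",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Method",
"sec_num": "4.3"
},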
{
"text": "In addition, we conduct online A/B testing by enriching the campaign keywords with generated results from G-KA for ad sourcing and comparing with the existing system in terms of total ad impressions. All other components in the system, including relevance and ranking logics, are consistent for control and treatment. Table 1 illustrates the performance of different methods in terms of the number of generated keywords, relevance ratio and exposure. For the testing campaigns without many impressions, advertisers bid on a few relevant keywords which lead to poor ad exposures. Such impression shortage issue is one of the motivations for our work, and we use this method as the baseline.",
"cite_spans": [],
"ref_spans": [
{
"start": 318,
"end": 325,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Evaluation Method",
"sec_num": "4.3"
},
{
"text": "RAKE is able to extract relevant keywords from the product titles, but their exposure is quite low. Such results indicate vocabulary gap exists a between product titles and shopper queries. Organic search connects the products to the relevant queries but the amount of queries are much fewer than the baseline. Intuitively, this is because advertisers are aware of historical queries related to their products.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4.4"
},
{
"text": "G-KA and G-HSM provide a moderate number of keywords with ads exposure much larger than baseline (+1665% and +2194%), though the relevance rate are lower than standard baseline. The boost of Exposure/Count also demonstrates the effectiveness of the proposed keyword generation methods with seq2seq learning framework and triebased decoding. In addition, the G-HSM shows superiority over G-KA in terms of keyword relevancy and validity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4.4"
},
{
"text": "In our online experiment, our model increases ad impressions by 5.3%, which demonstrates the contribution from the proposed keyword augmentation methods. Note that relevance and ranking logics are the same for both control and treatment groups. Only augmented keywords not covered by existing advertiser selected keywords with good quality are able to yield additional ad exposures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4.4"
},
{
"text": "In this paper, we formulate the sponsored search keyword augmentation task as a seq2seq learning problem in the constrained space. We present a general framework which incorporates seq2seq architecture and trie-based pruning for query generation from product titles. We compare the proposed method with baselines and other existing methods, and show that our method is able to generate relevant keywords which bring up the campaign exposure significantly. In the future, we would like to explore more structured decoding strategies combined with trie to improve the generation quality, and take more factors into account when generating keywords including long-tail keywords and keyword competitiveness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "We would like to thank to Hongyu Zhu, Weiming Wu, Barry Bai, Hirohisa Fujita for their help to set up the online A/B testing, and all the reviewers for their valuable suggestions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "Yake! keyword extraction from single documents using multiple local features",
"authors": [],
"year": null,
"venue": "Information Sciences",
"volume": "509",
"issue": "",
"pages": "257--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yake! keyword extraction from single documents using multiple local features. Information Sciences, 509:257-289.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Parallel sentence mining by constrained decoding",
"authors": [
{
"first": "Pinzhen",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Nikolay",
"middle": [],
"last": "Bogoychev",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
},
{
"first": "Faheem",
"middle": [],
"last": "Kirefu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pinzhen Chen, Nikolay Bogoychev, Kenneth Heafield, and Faheem Kirefu. 2020. Parallel sentence mining by constrained decoding. In Proceedings of the 58th",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Annual Meeting of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "1672--1678",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 1672-1678.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1406.1078"
]
},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Learning to rewrite queries",
"authors": [
{
"first": "Yunlong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jiliang",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Ouyang",
"suffix": ""
},
{
"first": "Changsung",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "Dawei",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 25th ACM International on Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "1443--1452",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yunlong He, Jiliang Tang, Hua Ouyang, Changsung Kang, Dawei Yin, and Yi Chang. 2016. Learning to rewrite queries. In Proceedings of the 25th ACM In- ternational on Conference on Information and Knowl- edge Management, pages 1443-1452.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735- 1780.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "An end-to-end generative retrieval method for sponsored search engine-decoding efficiently into a closed target domain",
"authors": [
{
"first": "Yijiang",
"middle": [],
"last": "Lian",
"suffix": ""
},
{
"first": "Zhijie",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jinlong",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Kefeng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chunwei",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Muchenxuan",
"middle": [],
"last": "Tong",
"suffix": ""
},
{
"first": "Wenying",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Hanju",
"middle": [],
"last": "Guan",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Cao",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1902.00592"
]
},
"num": null,
"urls": [],
"raw_text": "Yijiang Lian, Zhijie Chen, Jinlong Hu, Kefeng Zhang, Chunwei Yan, Muchenxuan Tong, Wenying Han, Hanju Guan, Ying Li, Ying Cao, et al. 2019. An end-to-end generative retrieval method for sponsored search engine-decoding efficiently into a closed tar- get domain. arXiv preprint arXiv:1902.00592.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Finding competitive keywords from query logs to enhance search engine advertising",
"authors": [
{
"first": "Dandan",
"middle": [],
"last": "Qiao",
"suffix": ""
},
{
"first": "Jin",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Guoqing",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2017,
"venue": "Information & Management",
"volume": "54",
"issue": "4",
"pages": "531--543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dandan Qiao, Jin Zhang, Qiang Wei, and Guoqing Chen. 2017. Finding competitive keywords from query logs to enhance search engine advertising. Information & Management, 54(4):531-543.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Exploring the limits of transfer learning with a unified text-to-text transformer",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of Machine Learning Research",
"volume": "21",
"issue": "140",
"pages": "1--67",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Kather- ine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Automatic generation of bid phrases for online advertising",
"authors": [
{
"first": "Sujith",
"middle": [],
"last": "Ravi",
"suffix": ""
},
{
"first": "Andrei",
"middle": [],
"last": "Broder",
"suffix": ""
},
{
"first": "Evgeniy",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
},
{
"first": "Vanja",
"middle": [],
"last": "Josifovski",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Pandey",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the third ACM international conference on Web search and data mining",
"volume": "",
"issue": "",
"pages": "341--350",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sujith Ravi, Andrei Broder, Evgeniy Gabrilovich, Vanja Josifovski, Sandeep Pandey, and Bo Pang. 2010. Au- tomatic generation of bid phrases for online advertis- ing. In Proceedings of the third ACM international conference on Web search and data mining, pages 341-350.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Automatic keyword extraction from individual documents",
"authors": [
{
"first": "Stuart",
"middle": [],
"last": "Rose",
"suffix": ""
},
{
"first": "Dave",
"middle": [],
"last": "Engel",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Cramer",
"suffix": ""
},
{
"first": "Wendy",
"middle": [],
"last": "Cowley",
"suffix": ""
}
],
"year": 2010,
"venue": "Text mining: applications and theory",
"volume": "1",
"issue": "",
"pages": "1--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stuart Rose, Dave Engel, Nick Cramer, and Wendy Cowley. 2010. Automatic keyword extraction from individual documents. Text mining: applications and theory, 1:1-20.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Bid keyword suggestion in sponsored search based on competitiveness and relevance. Information processing & management",
"authors": [
{
"first": "Ying",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Weinan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Xiaojie",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "50",
"issue": "",
"pages": "508--523",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ying Zhang, Weinan Zhang, Bin Gao, Xiaojie Yuan, and Tie-Yan Liu. 2014. Bid keyword suggestion in sponsored search based on competitiveness and relevance. Information processing & management, 50(4):508-523.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Domainconstrained advertising keyword generation",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Yishun",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Changlei",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Shu",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2019,
"venue": "The World Wide Web Conference",
"volume": "",
"issue": "",
"pages": "2448--2459",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Zhou, Minlie Huang, Yishun Mao, Changlei Zhu, Peng Shu, and Xiaoyan Zhu. 2019. Domain- constrained advertising keyword generation. In The World Wide Web Conference, pages 2448-2459.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "An illustration of the Trie built on queries",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "An illustration of the keyword generation process. Tokens in red color with strikethrough line are removed by beam search, and \"men adjustable hoodie\" is pruned by the query trie. Details of the encoder/decoder are omitted.",
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"type_str": "table",
"num": null,
"html": null,
"text": "Performance comparison.",
"content": "<table/>"
}
}
}
}