{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:55:49.686972Z"
},
"title": "Automating Template Creation for Ranking-Based Dialogue Models",
"authors": [
{
"first": "Jingxiang",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Heba",
"middle": [],
"last": "Elfardy",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Simi",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Andrea",
"middle": [],
"last": "Kahn",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Jared",
"middle": [],
"last": "Kramer",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Dialogue response generation models that use template ranking rather than direct sequence generation allow model developers to limit generated responses to pre-approved messages. However, manually creating templates is timeconsuming and requires domain expertise. To alleviate this problem, we explore automating the process of creating dialogue templates by using unsupervised methods to cluster historical utterances and selecting representative utterances from each cluster. Specifically, we propose an end-to-end model called Deep Sentence Encoder Clustering (DSEC) that uses an auto-encoder structure to jointly learn the utterance representation and construct template clusters. We compare this method to a random baseline that randomly assigns templates to clusters as well as a strong baseline that performs the sentence encoding and the utterance clustering sequentially. To evaluate the performance of the proposed method, we perform an automatic evaluation with two annotated customer service datasets to assess clustering effectiveness, and a human-in-the-loop experiment using a live customer service application to measure the acceptance rate of the generated templates. DSEC performs best in the automatic evaluation, beats both the sequential and random baselines on most metrics in the human-in-theloop experiment, and shows promising results when compared to gold/manually created templates.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Dialogue response generation models that use template ranking rather than direct sequence generation allow model developers to limit generated responses to pre-approved messages. However, manually creating templates is timeconsuming and requires domain expertise. To alleviate this problem, we explore automating the process of creating dialogue templates by using unsupervised methods to cluster historical utterances and selecting representative utterances from each cluster. Specifically, we propose an end-to-end model called Deep Sentence Encoder Clustering (DSEC) that uses an auto-encoder structure to jointly learn the utterance representation and construct template clusters. We compare this method to a random baseline that randomly assigns templates to clusters as well as a strong baseline that performs the sentence encoding and the utterance clustering sequentially. To evaluate the performance of the proposed method, we perform an automatic evaluation with two annotated customer service datasets to assess clustering effectiveness, and a human-in-the-loop experiment using a live customer service application to measure the acceptance rate of the generated templates. DSEC performs best in the automatic evaluation, beats both the sequential and random baselines on most metrics in the human-in-theloop experiment, and shows promising results when compared to gold/manually created templates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Dialogue response generation has been an active area of research in recent years. Response generation can be used in human-to-bot conversational systems (Qiu et al., 2017) or to generate quick replies in human-to-human conversational systems (Kannan et al., 2016; Pasternack et al., 2017) .",
"cite_spans": [
{
"start": 153,
"end": 171,
"text": "(Qiu et al., 2017)",
"ref_id": "BIBREF19"
},
{
"start": 242,
"end": 263,
"text": "(Kannan et al., 2016;",
"ref_id": "BIBREF8"
},
{
"start": 264,
"end": 288,
"text": "Pasternack et al., 2017)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Response generation approaches fall under two broad categories: (1) direct sequence generation using an encoder-decoder architecture (Vinyals and Le, 2015; Serban et al., 2016) or (2) response ranking, in which the model developer specifies a predefined template pool and an encoder model is used to score pairs of conversation history and candidate template response Zhou et al., 2018; Kannan et al., 2016) . Using template ranking rather than direct sequence generation allows model developers to limit generated responses to pre-approved messages, preventing the model from producing impolite or ungrammatical responses. In addition, sequence generation models have a tendency to favor safe, generic responses (Baheti et al., 2018; Shao et al., 2017; Zhang et al., 2018; Li et al., 2016) , and template ranking models can be used to ensure that the system generates information-rich responses that drive the conversation towards an end goal. However, manually creating templates is time-consuming and requires domain expertise. For certain use cases such as customer service, templates need to be continually updated to reflect policy changes, further adding to this cost. In addition, manually created templates may differ subtly from actual agent utterances in model training data and thus may not be selected by the ranking model.",
"cite_spans": [
{
"start": 133,
"end": 155,
"text": "(Vinyals and Le, 2015;",
"ref_id": "BIBREF23"
},
{
"start": 156,
"end": 176,
"text": "Serban et al., 2016)",
"ref_id": "BIBREF20"
},
{
"start": 368,
"end": 386,
"text": "Zhou et al., 2018;",
"ref_id": "BIBREF27"
},
{
"start": 387,
"end": 407,
"text": "Kannan et al., 2016)",
"ref_id": "BIBREF8"
},
{
"start": 713,
"end": 734,
"text": "(Baheti et al., 2018;",
"ref_id": "BIBREF1"
},
{
"start": 735,
"end": 753,
"text": "Shao et al., 2017;",
"ref_id": "BIBREF21"
},
{
"start": 754,
"end": 773,
"text": "Zhang et al., 2018;",
"ref_id": "BIBREF25"
},
{
"start": 774,
"end": 790,
"text": "Li et al., 2016)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we explore automating the creation of a template pool for a customer service chat application through clustering historical agent utterances and choosing representative utterances from each cluster. To the best of our knowledge, research on automatic template creation using utterance clustering has been limited.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The structure of this paper is as follows. In section 2, we describe the data and text preprocessing methods we used to extract template candidates from historical chat transcripts. In section 3, we describe our proposed approach for template generation: an end-to-end approach that uses an auto-encoder structure to jointly learn the utterance representation and construct template clusters. In addition, we describe a strong baseline that we propose for comparison: a sequential approach in which we first learn the utterance representation and then construct template clusters. In section 4, we describe the automatic and human-in-the-loop evaluations that we conducted and our findings, and in section 5 we draw conclusions and propose future research directions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We select template responses from a dataset of agent utterances extracted from historical chat transcripts. To construct this dataset, we collect anonymized transcripts of conversations between customers and customer service agents (CSAs) in two domains: (1) Cancel Membership (CM), and (2) Tracking shows delivered but order not received (DNR). In the anonymized transcripts, all unique customer identifiers (UCI) are replaced with a special token: \"GENERIC SLOT\". We further extract all agent utterances 1 in these transcripts and exclude those occurring only once in the data. The intuition behind this is that if an utterance only occurred once, it is not likely to be useful as a template. We end up with approximately 550K agent utterances in each domain. The DNR domain contains longer utterances than the CM domain (an average of 12 words per sentence vs. 11 for CM) and a larger vocabulary size (22.9K for DNR vs. 19.2K for CM).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
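{
"text": "For illustration, the frequency-based filtering step described above can be sketched in a few lines of Python. This is not the authors' code; the example utterances are hypothetical, and anonymization (GENERIC SLOT replacement) is assumed to happen upstream.

from collections import Counter

def build_template_candidates(agent_utterances):
    '''Keep only (already anonymized) agent utterances that occur more than once.'''
    counts = Counter(agent_utterances)
    # An utterance seen only once is unlikely to be useful as a template.
    return [utt for utt, freq in counts.items() if freq > 1]

candidates = build_template_candidates([
    'I will cancel your membership.',
    'I will cancel your membership.',
    'A one-off remark that never recurs.',
])
print(candidates)  # ['I will cancel your membership.']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},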
{
"text": "To create our evaluation data, we select a random sample of approximately 1,000 utterances from each domain and have it annotated for \"Cluster ID\". For the annotation task, we ask the annotators to come up with cluster IDs as they are annotating the utterances and then consolidate these clusters after they are done assigning all utterances to clusters. We have one annotator per domain and a gold annotator that further refines the clusters for both domains. For each domain we ask the annotator to do the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Guidelines",
"sec_num": "2.1"
},
{
"text": "1. Starting with the first utterance, define the first cluster to convey the semantic meaning of this utterance and give a descriptive name for the cluster.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Guidelines",
"sec_num": "2.1"
},
{
"text": "2. For each utterance in the list, either assign it to an existing cluster (if appropriate) or define a new cluster. 3. When assigning utterances to clusters, ignore the tense and utterance type (statement versus question). E.g., \"I canceled your membership\", \"I will cancel your membership\", and \"Should I cancel your membership?\" will all belong to the same cluster. 4. All noisy/unneeded utterances that are not related to the current domain or that do not contain information that can be useful for resolving the customer's issue should be excluded. 5. After finishing all of the utterances, go through the list of clusters to merge redundant ones and map the utterances to the new list of cluster IDs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Guidelines",
"sec_num": "2.1"
},
{
"text": "The annotation process resulted in 44 and 43 clusters for the CM and DNR domains respectively. Table 1 shows sample utterances from some clusters.",
"cite_spans": [],
"ref_spans": [
{
"start": 95,
"end": 102,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Annotation Guidelines",
"sec_num": "2.1"
},
{
"text": "We cluster agent utterances using a novel end-toend approach, Deep Sentence Encoder Clustering (DSEC), in which the utterance representation and the clustering model are jointly learned. We compare this against two baselines: (1) a weak baseline in which templates are sampled randomly from the dataset, and (2) a sequential baseline in which the utterance representation and the clustering model are learned sequentially. For the baseline system, we use dense features to represent each utterance and explore the use of different embedding types-GloVe (Pennington et al., 2014) , ELMo (Peters et al., 2018b,a) , and BERT (Devlin et al., 2018)as well as the effect of using in-domain data on the performance of the system.",
"cite_spans": [
{
"start": 553,
"end": 578,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF16"
},
{
"start": 586,
"end": 610,
"text": "(Peters et al., 2018b,a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "For both DSEC and the sequential baseline, after the clusters have been obtained, we create the template pool by selecting the highest-confidence utterance in each cluster. The confidence is either the probability that the utterance falls in the cluster (for DSEC), or the distance between the utterance and its cluster centroid (for the sequential baseline). We propose an end-to-end auto-encoder structure (Figure 1 ) that learns a sentence encoding layer that aims to achieve two goals simultaneously: (1) generate a feature representation from which the input utterance can be reconstructed as accurately as possible, and (2) construct template clusters by introducing a clustering-oriented loss. To achieve these two goals, we minimize a weighted (w) sum of reconstruction loss (L r ) and clustering loss (L c ).",
"cite_spans": [],
"ref_spans": [
{
"start": 408,
"end": 417,
"text": "(Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "L = L r + \u03c9 * L c",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "To build the auto-encoder structure, we utilize a deep bi-directional Long Short-Term Memory (LSTM) network (Hochreiter and Schmidhuber, 1997) . We first use a word embedding layer, and then train a multi-layer bi-directional LSTM as the encoder. We choose a bi-directional network since subsequent words can sometimes facilitate the prediction of previous words. For example, it is easy to infer that the previous word has a high probability of being \"I\" if we know that the current word is \"am\". The final output of the hidden layer is then used as the input to the decoder. Padding is used to normalize sentence length, and a softmax function is added on top of the decoder to reconstruct the input. It is intuitive that the vectors generated by the encoder are good representations of the sentences they encode if they contain enough information to reconstruct these sentences.",
"cite_spans": [
{
"start": 108,
"end": 142,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
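{
"text": "A minimal PyTorch sketch of the auto-encoder component described above. The layer sizes, the way the decoder is conditioned on the sentence vector, and the use of cross-entropy for the softmax reconstruction are illustrative assumptions, not the authors' exact architecture.

import torch
import torch.nn as nn

class SentenceAutoEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256, num_layers=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # Bi-directional encoder: later words can help reconstruct earlier ones.
        self.encoder = nn.LSTM(emb_dim, hidden_dim, num_layers=num_layers,
                               batch_first=True, bidirectional=True)
        self.z_dim = 2 * hidden_dim          # concatenated forward/backward states
        self.decoder = nn.LSTM(self.z_dim, hidden_dim, batch_first=True)
        self.output = nn.Linear(hidden_dim, vocab_size)   # softmax applied via the loss

    def encode(self, token_ids):
        _, (h, _) = self.encoder(self.embedding(token_ids))
        # h: (num_layers * 2, batch, hidden); keep both directions of the top layer.
        return torch.cat([h[-2], h[-1]], dim=-1)

    def forward(self, token_ids):
        z = self.encode(token_ids)                        # sentence vector
        dec_in = z.unsqueeze(1).repeat(1, token_ids.size(1), 1)
        dec_out, _ = self.decoder(dec_in)
        return self.output(dec_out), z                    # logits, sentence vector

model = SentenceAutoEncoder(vocab_size=1000)
batch = torch.randint(1, 1000, (4, 12))                   # padded token ids
logits, z = model(batch)
reconstruction_loss = nn.CrossEntropyLoss(ignore_index=0)(
    logits.reshape(-1, 1000), batch.reshape(-1))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},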
{
"text": "For clustering, we define the loss using a soft assignment between the sentence embedding and the cluster centroids, similar to Xie et al. (2016) . In particular, we first use the Student's t-distribution as a kernel to measure the similarity between the sentence encoder z i and each of the centroid points \u00b5 j :",
"cite_spans": [
{
"start": 128,
"end": 145,
"text": "Xie et al. (2016)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "q ij = (1 + z i \u2212 \u00b5 j 2 /\u03b1) \u2212(\u03b1+1)/2 j (1 + z i \u2212 \u00b5 j 2 /\u03b1) \u2212(\u03b1+1)/2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "where q ij indicates the probability of assigning sentence i to cluster j. The degree of freedom \u03b1 is set to be 1. The sentence clustering loss is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "L = KL(P Q) = i j p ij log p ij q ij",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "in which the soft target distribution P is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "p ij = q 2 ij / i q ij j (q 2 ij / i q ij )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
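{
"text": "The soft assignments, the target distribution, and the clustering loss above translate directly into a short PyTorch sketch. This is illustrative only: the random z and mu tensors and the choice to treat P as fixed targets (detach) are assumptions, not the authors' implementation. It also shows how a representative utterance per cluster can be read off q, as described earlier.

import torch

def soft_assignments(z, mu, alpha=1.0):
    '''Student's t kernel: q[i, j] is the probability of assigning sentence i to cluster j.'''
    sq_dist = torch.cdist(z, mu) ** 2                     # (n_sentences, n_clusters)
    q = (1.0 + sq_dist / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(dim=1, keepdim=True)

def target_distribution(q):
    '''p[i, j] = (q[i, j]^2 / f_j) normalized over clusters, with f_j = sum_i q[i, j].'''
    weight = q ** 2 / q.sum(dim=0, keepdim=True)
    return weight / weight.sum(dim=1, keepdim=True)

def clustering_loss(q):
    p = target_distribution(q).detach()                   # P treated as fixed targets
    return (p * (p / q).log()).sum()                      # KL(P || Q)

z = torch.randn(8, 16)        # sentence encodings from the auto-encoder
mu = torch.randn(3, 16)       # learnable cluster centroids
q = soft_assignments(z, mu)
loss_c = clustering_loss(q)

# For the template pool: for each cluster, the utterance with the highest q[i, j]
# is its most confident, representative candidate.
representative_idx = q.argmax(dim=0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},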
{
"text": "One potential deficiency of using the target distribution, as Guo et al. (2017) pointed out, is that such a loss emphasizes data points with large p ij (i.e. high confidence) hence is less impacted by mistakes for the points farther away from the centroid or ones that are close to the decision boundary hence can lead to underfitting if many such points exist. This problem can be more severe in sentence clustering than image clustering since image clustering usually has a more well defined objective whereas sentence clustering can be ambiguous and subjective. We find that different annotators often suggest different cluster labels for many of the sentences. To alleviate this issue, we suggest setting a threshold on the probability q ij to filter out utterances with weak cluster signals when tuning or evaluating the model. Note that our goal is to select representative utterances from each cluster to form a reliable template pool. In this way it is most important to maximize the quality of utterances with high estimated confidences.",
"cite_spans": [
{
"start": 62,
"end": 79,
"text": "Guo et al. (2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "In practice, we initialize the reconstruction coefficients by first training the auto-encoder separately, i.e. setting \u03c9 = 0. This \"warm-start\" approach helps accelerate the convergence rate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
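{
"text": "A sketch of this two-phase training schedule, reusing the SentenceAutoEncoder and the loss helpers from the sketches above. The optimizer choice, learning rate, epoch counts, and the joint-phase value of ω are illustrative assumptions.

import torch

def train_dsec(model, centroids, batches, omega=0.1,
               warm_start_epochs=5, joint_epochs=20):
    # `centroids` is a (n_clusters, z_dim) tensor created with requires_grad=True.
    optimizer = torch.optim.Adam(list(model.parameters()) + [centroids], lr=1e-3)
    for epoch in range(warm_start_epochs + joint_epochs):
        # Warm start: omega = 0, i.e. train the auto-encoder reconstruction alone.
        w = 0.0 if epoch < warm_start_epochs else omega
        for token_ids in batches:
            logits, z = model(token_ids)
            l_r = torch.nn.functional.cross_entropy(
                logits.reshape(-1, logits.size(-1)),
                token_ids.reshape(-1), ignore_index=0)
            l_c = clustering_loss(soft_assignments(z, centroids))
            loss = l_r + w * l_c
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},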
{
"text": "Our proposed method borrows the loss from Xie et al. (2016) but addresses a different problem. First, Xie et al. (2016) use a convolutional network to learn an image representation. We target sentence reconstruction along with clustering, and thus propose an LSTM structure to capture the time series aspect of the sequence. Second, we use a pre-trained model fit on our own customer service data to initialize the parameters, and thus our model does not have to be very deep, which makes it less computationally intensive to train.",
"cite_spans": [
{
"start": 42,
"end": 59,
"text": "Xie et al. (2016)",
"ref_id": "BIBREF24"
},
{
"start": 102,
"end": 119,
"text": "Xie et al. (2016)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "Since there is no prior research targeting the task of automating template creation for ranking-based dialogue models, we propose a strong baseline that embeds the utterances and clusters them sequentially. To ensure that the baseline we are comparing against is effective, we explore the use of publicly available/pretrained embedding models versus models that are trained on indomain customer service data. Additionally, we experiment with a traditional word embedding model, GloVe, in which the representation of each word in the vocabulary is the same regardless of the context it is appearing in, as well as contextual embeddings in which the representation depends on the entire context in which a word is used, namely ELMo and BERT. For in-domain data, we use approximately 118 million utterances to train a customer service (CS) GloVe model and an attention-based ELMo model. Once we obtain the representation for each utterance using a specific embedding model, we then use a pooling layer to obtain the utterance representation. For the pooling layer, we use weighted-mean pooling, in which each word is weighted by the \"Term Frequency Inverse Document Frequency\" (tf-idf) score (Aizawa, 2003) , with documents defined as utterances in this case. 2 Finally, we cluster the utterance representations. We experiment with K-means (MacQueen et al., 1967) , AffinityPropagation (Frey and Dueck, 2007) , spectral (Shi and Malik, 2000) , Ward's (Murtagh and Legendre, 2014) , Agglomerative (M\u00fcllner, 2011) and Birch (Zhang et al., 1997) clustering. For K-means, we use the centroid as the representation of the cluster, while for other algorithms, we take the mean pooling for all templates in the cluster as the centroid, compute the distance from each template to the centroid, and choose the template that is the shortest distance from the centroid. In the experiments described in Section 4, we select the clustering method with the best normalized mutual information score (NMI) as our baseline. We find that this is always achieved by either Ward's or Birch.",
"cite_spans": [
{
"start": 1189,
"end": 1203,
"text": "(Aizawa, 2003)",
"ref_id": "BIBREF0"
},
{
"start": 1257,
"end": 1258,
"text": "2",
"ref_id": null
},
{
"start": 1337,
"end": 1360,
"text": "(MacQueen et al., 1967)",
"ref_id": "BIBREF12"
},
{
"start": 1383,
"end": 1405,
"text": "(Frey and Dueck, 2007)",
"ref_id": "BIBREF4"
},
{
"start": 1417,
"end": 1438,
"text": "(Shi and Malik, 2000)",
"ref_id": "BIBREF22"
},
{
"start": 1448,
"end": 1476,
"text": "(Murtagh and Legendre, 2014)",
"ref_id": "BIBREF14"
},
{
"start": 1493,
"end": 1508,
"text": "(M\u00fcllner, 2011)",
"ref_id": "BIBREF13"
},
{
"start": 1519,
"end": 1539,
"text": "(Zhang et al., 1997)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Baseline",
"sec_num": "3.2"
},
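{
"text": "For concreteness, a compact sketch of this sequential baseline with scikit-learn, using random vectors as stand-ins for the GloVe/ELMo/BERT word embeddings. The tiny utterance list, the number of clusters, and the stand-in vectors are assumptions made purely for illustration.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

utterances = ['i will cancel your membership',
              'your membership is now cancelled',
              'the refund will be issued today',
              'i have issued the refund to your card']

# 1) tf-idf weights per word, treating each utterance as a document.
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(utterances).toarray()    # (n_utterances, vocab)
vocab = vectorizer.get_feature_names_out()
word_vectors = {w: np.random.randn(50) for w in vocab}    # stand-in embeddings

# 2) weighted-mean pooling: average word vectors weighted by their tf-idf scores.
def pool(tfidf_row):
    weights = tfidf_row / tfidf_row.sum()
    return sum(w * word_vectors[t] for t, w in zip(vocab, weights) if w > 0)

X = np.stack([pool(row) for row in tfidf])

# 3) cluster the utterance vectors (Ward linkage, one of the strongest baselines here).
labels = AgglomerativeClustering(n_clusters=2, linkage='ward').fit_predict(X)

# 4) per cluster, pick the utterance closest to the cluster mean as its template.
templates = []
for c in sorted(set(labels)):
    members = np.where(labels == c)[0]
    centroid = X[members].mean(axis=0)
    closest = members[np.argmin(np.linalg.norm(X[members] - centroid, axis=1))]
    templates.append(utterances[closest])
print(templates)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Baseline",
"sec_num": "3.2"
},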
{
"text": "We evaluate clustering performance using both automatic and human-in-the-loop evaluations. For all experiments, we fix the cluster number at 50 for all models to ensure that the template pool has good coverage of common situations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments & Results",
"sec_num": "4"
},
{
"text": "To evaluate the quality of the generated clusters, we compare the ground truth-from our gold labeled data-with predicted labels using normalized mutual information score (NMI), unsupervised clustering accuracy (ACC; Xie et al. (2016) ), and Rand index adjusted for chance (ARI; Hubert and Arabie (1985) ). We evaluate the performance of DSEC when compared to (1) the sequential baseline and (2) a weak baseline that randomly assigns each utterance to one of the clusters. Tables 2 and 3 show the results of the automatic evaluation on the labeled CM and DNR datasets. For DSEC, the validation accuracy of reconstruction is approximately 93% for both datasets, indicating that the auto-encoder vector extracts the sentence information well. On CM, DSEC achieves the best NMI and ACC, while the sequential method, with the ELMo-CS embedding and weighted mean pooling of tf-idf features, has the best ARI results overall. The models using in-domain embeddings outperform others with pretrained embeddings. Note that the metrics NMI, ACC, and ARI are not always consistent when compared across different methods. For example, Glove-CS has a high ARI score but under-performs with all the other automatic metrics.",
"cite_spans": [
{
"start": 216,
"end": 233,
"text": "Xie et al. (2016)",
"ref_id": "BIBREF24"
},
{
"start": 278,
"end": 302,
"text": "Hubert and Arabie (1985)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 472,
"end": 486,
"text": "Tables 2 and 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "4.1"
},
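{
"text": "All three metrics can be computed with standard tooling; the sketch below uses scikit-learn for NMI and ARI and the usual Hungarian-matching formulation of unsupervised clustering accuracy. The toy label vectors are assumptions for illustration, and the snippet is not the authors' evaluation code.

import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

def clustering_accuracy(y_true, y_pred):
    '''ACC under the best one-to-one mapping between predicted clusters and gold labels.'''
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    k = int(max(y_true.max(), y_pred.max())) + 1
    counts = np.zeros((k, k), dtype=int)
    for t, p in zip(y_true, y_pred):
        counts[p, t] += 1
    rows, cols = linear_sum_assignment(-counts)           # maximize matched points
    return counts[rows, cols].sum() / len(y_true)

gold = [0, 0, 1, 1, 2, 2]      # annotator cluster ids
pred = [1, 1, 0, 0, 2, 2]      # model cluster ids
print(normalized_mutual_info_score(gold, pred))           # NMI
print(clustering_accuracy(gold, pred))                    # ACC (1.0 for this toy case)
print(adjusted_rand_score(gold, pred))                    # ARI",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "4.1"
},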
{
"text": "In addition, clustering performs better on DNR dataset than on CM. This is potentially because the CM domain contains a broader range of customer issues corresponding to different membership types and hence is more challenging to represent using utterance clustering. For example, the templates can be quite different for canceling a free trial, a regular subscription, and certain memberships with an additional subscription attached.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "4.1"
},
{
"text": "Overall, none of the proposed methods achieve the accuracy of some image clustering work, such as Guo et al. (2017) . As discussed before, image clustering and text clustering are very different tasks, and sentence clustering can be quite subjective. Rephrasing or adding content to sentences can make such clustering challenging even for humans. For example, it is non-trivial to decide whether the following sentences should be clustered together: \"I will cancel your membership\", \"I'll cancel your membership and issue a refund\", and \"The membership will be canceled starting today and you will not be able to use the free subscription\". Note that the second and third sentences both contain additional information as opposed to the first sentence. In practice, we encourage annotators to define each cluster as precisely as possible, even if it results in a large number of clusters. This can increase the coverage of the generated template pool but decrease the performance of clustering in the automatic evaluation. To determine the true impact of clustering on our downstream task, response generation, we conduct a human-in-the-loop evaluation in which we use the generated template pool along with a neural response ranking model to recommend responses to CSAs handling customer service contacts.",
"cite_spans": [
{
"start": 98,
"end": 115,
"text": "Guo et al. (2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "4.1"
},
{
"text": "To evaluate the effectiveness of clustering for the downstream task of response generation, we use a human-in-the-loop research platform through which CSAs handle live customer contacts. Specifically, we train template-based neural response ranking models for CM and DNR similar to the model proposed by Lu et al. (2019) , and then use them to select responses from the template pools generated using the methods proposed above. Note that training the response ranking model is independent of template creation. We then test the resulting model and template pool using this platform. Instead of showing CSAs the standard chat box, the platform presents ten suggested responses chosen by the trained model from the pool generated by one of the clustering approaches. These 10 suggestions come from different clusters since we only send one template per cluster to the ranking model. They are based on the complete conversation history up to this point and are updated each time the customer or the agent sends a response. The CSA can pick any of the suggested templates as a response, or type their own text if none of the templates appears appropriate. An ideal template pool should minimize the chance that CSAs need to type their own text, and also have no overlapping templates in it.",
"cite_spans": [
{
"start": 304,
"end": 320,
"text": "Lu et al. (2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Human-in-the-Loop Evaluation",
"sec_num": "4.2"
},
{
"text": "We choose the following metrics for the humanin-the-loop evaluation, reported in In this experiment, we compare the performance of (1) the end-to-end approach (DSEC), (2) the sequential setup that performs best in the automatic evaluation (GloVe-CS with weighted-mean pooling and Ward's clustering), and (3) a random baseline in which we randomly select 50 utterances from the dataset to be used as templates. Additionally, we include a human/gold baseline for which the template pool is manually created and refined by collecting feedback from agents over the course of one month. The utterance acceptance rate indicates that DSEC outperforms both the random and the sequential baseline and performs only slightly worse than the human template pool. As expected, the \"all suggestions accepted\" rate is much lower for the CM dataset due to limited agent resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human-in-the-Loop Evaluation",
"sec_num": "4.2"
},
{
"text": "DSEC than for the gold/human pool, but better than for the other automated methods. We find that the sequential approach manages to minimize the length of the conversation (i.e. the number of CSA utterances). One possibility is that it results in a better coverage rate so that it can guide the agents to solve contacts more efficiently than the other methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human-in-the-Loop Evaluation",
"sec_num": "4.2"
},
{
"text": "We measure coverage by asking agents to report missing templates. Agents reported a few missing templates for all of the automatically generated pools. The variance in this metric is high because the experiment is only run for about 200 contacts for each experimental configuration. In this way, corner examples may not show up for all of the configurations, and a larger experiment is needed to determine exactly how many templates are missing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human-in-the-Loop Evaluation",
"sec_num": "4.2"
},
{
"text": "Lastly, the sequential baseline results in a higher depth of first rejection than the manual approach. A possible cause is that this approach leads to a larger proportion of shorter contacts: The sequential approach has 4% more contacts that have less than 10 CSA utterances than the manual one. This could indicate that automatically generated templates can increase the efficiency of contact handling by steering CSAs away from utterances that could lead to longer conversations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human-in-the-Loop Evaluation",
"sec_num": "4.2"
},
{
"text": "We present DSEC, an end-to-end sentence encoding and clustering approach that can help automate template creation for template-based conversational models. The purpose is to avoid the human effort required to manually create a template pool when training a response generation model for a conversational system. We evaluate the proposed approach on two customer service datasets and find that it outperforms both a strong sequential baseline and a random baseline in most cases. In addition,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion & Future Work",
"sec_num": "5"
},
{
"text": "\"Utterance\" is defined as all that is typed before sending the message to the customer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We also experimented with max and unweighted-mean pooling but achieved better results using weighted-mean pooling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For the human-in-the-loop experiment, we only include",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the three anonymous reviewers for their feedback and insights in improving the work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "An information-theoretic perspective of tf-idf measures",
"authors": [
{
"first": "Akiko",
"middle": [],
"last": "Aizawa",
"suffix": ""
}
],
"year": 2003,
"venue": "Information Processing & Management",
"volume": "39",
"issue": "1",
"pages": "45--65",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akiko Aizawa. 2003. An information-theoretic per- spective of tf-idf measures. Information Processing & Management, 39(1):45-65.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Generating more interesting responses in neural conversation models with distributional constraints",
"authors": [
{
"first": "Ashutosh",
"middle": [],
"last": "Baheti",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashutosh Baheti, Alan Ritter, Jiwei Li, and Bill Dolan. 2018. Generating more interesting responses in neural conversation models with distributional con- straints. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1406.1078"
]
},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Clustering by Passing Messages Between Data Points",
"authors": [
{
"first": "B",
"middle": [
"J"
],
"last": "Frey",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Dueck",
"suffix": ""
}
],
"year": 2007,
"venue": "Science",
"volume": "315",
"issue": "5814",
"pages": "972--976",
"other_ids": {
"DOI": [
"10.1126/science.1136800"
]
},
"num": null,
"urls": [],
"raw_text": "B. J. Frey and D. Dueck. 2007. Clustering by Passing Messages Between Data Points. Science, 315(5814):972-976.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Deep clustering with convolutional autoencoders",
"authors": [
{
"first": "Xifeng",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Xinwang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "En",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Jianping",
"middle": [],
"last": "Yin",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Neural Information Processing",
"volume": "",
"issue": "",
"pages": "373--382",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xifeng Guo, Xinwang Liu, En Zhu, and Jianping Yin. 2017. Deep clustering with convolutional autoen- coders. In International Conference on Neural In- formation Processing, pages 373-382. Springer.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Comparing partitions",
"authors": [
{
"first": "Lawrence",
"middle": [],
"last": "Hubert",
"suffix": ""
},
{
"first": "Phipps",
"middle": [],
"last": "Arabie",
"suffix": ""
}
],
"year": 1985,
"venue": "Journal of classification",
"volume": "2",
"issue": "1",
"pages": "193--218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lawrence Hubert and Phipps Arabie. 1985. Compar- ing partitions. Journal of classification, 2(1):193- 218.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Smart reply: Automated response suggestion for email",
"authors": [
{
"first": "Anjuli",
"middle": [],
"last": "Kannan",
"suffix": ""
},
{
"first": "Karol",
"middle": [],
"last": "Kurach",
"suffix": ""
},
{
"first": "Sujith",
"middle": [],
"last": "Ravi",
"suffix": ""
},
{
"first": "Tobias",
"middle": [],
"last": "Kaufmann",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Tomkins",
"suffix": ""
},
{
"first": "Balint",
"middle": [],
"last": "Miklos",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Laszlo",
"middle": [],
"last": "Lukacs",
"suffix": ""
},
{
"first": "Marina",
"middle": [],
"last": "Ganea",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "955--964",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anjuli Kannan, Karol Kurach, Sujith Ravi, Tobias Kaufmann, Andrew Tomkins, Balint Miklos, Greg Corrado, Laszlo Lukacs, Marina Ganea, Peter Young, et al. 2016. Smart reply: Automated re- sponse suggestion for email. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 955- 964. ACM.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A diversity-promoting objective function for neural conversation models",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "110--119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objec- tive function for neural conversation models. In Pro- ceedings of the 2016 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110-119.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Dialogue learning with human teaching and feedback in end-to-end trainable task-oriented dialogue systems",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Gokhan",
"middle": [],
"last": "Tur",
"suffix": ""
},
{
"first": "Dilek",
"middle": [],
"last": "Hakkani-Tur",
"suffix": ""
},
{
"first": "Pararth",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Larry",
"middle": [],
"last": "Heck",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2060--2069",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bing Liu, Gokhan Tur, Dilek Hakkani-Tur, Pararth Shah, and Larry Heck. 2018. Dialogue learning with human teaching and feedback in end-to-end train- able task-oriented dialogue systems. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Pa- pers), volume 1, pages 2060-2069.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Goal-oriented end-to-end conversational models with profile features in a real-world setting",
"authors": [
{
"first": "Yichao",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Manisha",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Jared",
"middle": [],
"last": "Kramer",
"suffix": ""
},
{
"first": "Heba",
"middle": [],
"last": "Elfardy",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Kahn",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "48--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yichao Lu, Manisha Srivastava, Jared Kramer, Heba El- fardy, Andrea Kahn, Song Wang, and Vikas Bhard- waj. 2019. Goal-oriented end-to-end conversational models with profile features in a real-world setting. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 2 (Industry Papers), pages 48-55.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Some methods for classification and analysis of multivariate observations",
"authors": [
{
"first": "James",
"middle": [],
"last": "Macqueen",
"suffix": ""
}
],
"year": 1967,
"venue": "Proceedings of the fifth Berkeley symposium on mathematical statistics and probability",
"volume": "1",
"issue": "",
"pages": "281--297",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James MacQueen et al. 1967. Some methods for clas- sification and analysis of multivariate observations. In Proceedings of the fifth Berkeley symposium on mathematical statistics and probability, volume 1, pages 281-297. Oakland, CA, USA.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Modern hierarchical, agglomerative clustering algorithms",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "M\u00fcllner",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1109.2378[cs,stat].ArXiv:1109.2378"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel M\u00fcllner. 2011. Modern hierarchical, agglom- erative clustering algorithms. arXiv:1109.2378 [cs, stat]. ArXiv: 1109.2378.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Wards Hierarchical Agglomerative Clustering Method: Which Algorithms Implement Wards Criterion",
"authors": [
{
"first": "Fionn",
"middle": [],
"last": "Murtagh",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Legendre",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Classification",
"volume": "31",
"issue": "3",
"pages": "274--295",
"other_ids": {
"DOI": [
"10.1007/s00357-014-9161-z"
]
},
"num": null,
"urls": [],
"raw_text": "Fionn Murtagh and Pierre Legendre. 2014. Wards Hier- archical Agglomerative Clustering Method: Which Algorithms Implement Wards Criterion? Journal of Classification, 31(3):274-295.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Building smart replies for member messages",
"authors": [
{
"first": "Jeff",
"middle": [],
"last": "Pasternack",
"suffix": ""
},
{
"first": "Nimesh",
"middle": [],
"last": "Chakravarthi",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Leon",
"suffix": ""
},
{
"first": "Nandeesh",
"middle": [],
"last": "Rajashekar",
"suffix": ""
},
{
"first": "Birjodh",
"middle": [],
"last": "Tiwana",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeff Pasternack, Nimesh Chakravarthi, Adam Leon, Nandeesh Rajashekar, Birjodh Tiwana, and Bing Zhao. 2017. Building smart replies for member mes- sages.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language process- ing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Dissecting contextual word embeddings: Architecture and representation",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1499--1509",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018a. Dissecting contextual word embeddings: Architecture and representation. In Proceedings of the 2018 Conference on Empiri- cal Methods in Natural Language Processing, pages 1499-1509.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "E",
"middle": [],
"last": "Matthew",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "2227--2237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018b. Deep contextualized word rep- resentations. In Proceedings of NAACL-HLT, pages 2227-2237.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Alime chat: A sequence to sequence and rerank based chatbot engine",
"authors": [
{
"first": "Minghui",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Feng-Lin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Siyu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xing",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Weipeng",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Haiqing",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Chu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "498--503",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minghui Qiu, Feng-Lin Li, Siyu Wang, Xing Gao, Yan Chen, Weipeng Zhao, Haiqing Chen, Jun Huang, and Wei Chu. 2017. Alime chat: A sequence to sequence and rerank based chatbot engine. In Pro- ceedings of the 55th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Papers), pages 498-503.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Building end-to-end dialogue systems using generative hierarchical neural network models",
"authors": [
{
"first": "Iulian",
"middle": [],
"last": "Vlad Serban",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Sordoni",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Aaron",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pineau",
"suffix": ""
}
],
"year": 2016,
"venue": "AAAI",
"volume": "16",
"issue": "",
"pages": "3776--3784",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iulian Vlad Serban, Alessandro Sordoni, Yoshua Ben- gio, Aaron C Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using gener- ative hierarchical neural network models. In AAAI, volume 16, pages 3776-3784.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Generating high-quality and informative conversation responses with sequence-to-sequence models",
"authors": [
{
"first": "Yuanlong",
"middle": [],
"last": "Shao",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Gouws",
"suffix": ""
},
{
"first": "Denny",
"middle": [],
"last": "Britz",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Goldie",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Strope",
"suffix": ""
},
{
"first": "Ray",
"middle": [],
"last": "Kurzweil",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuanlong Shao, Stephan Gouws, Denny Britz, Anna Goldie, Brian Strope, and Ray Kurzweil. 2017. Gen- erating high-quality and informative conversation re- sponses with sequence-to-sequence models. In Pro- ceedings of the 2017 Conference on Empirical Meth- ods in Natural Language Processing.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Normalized Cuts and Image Segmentation",
"authors": [
{
"first": "Jianbo",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Jitendra",
"middle": [],
"last": "Malik",
"suffix": ""
}
],
"year": 2000,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"volume": "22",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianbo Shi and Jitendra Malik. 2000. Normalized Cuts and Image Segmentation. IEEE Transactions on Pat- tern Analysis and Machine Intelligence, 22(8):18.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A neural conversational model",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1506.05869"
]
},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals and Quoc Le. 2015. A neural conversa- tional model. arXiv preprint arXiv:1506.05869.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Unsupervised deep embedding for clustering analysis",
"authors": [
{
"first": "Junyuan",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Ross",
"middle": [],
"last": "Girshick",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
}
],
"year": 2016,
"venue": "International conference on machine learning",
"volume": "",
"issue": "",
"pages": "478--487",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junyuan Xie, Ross Girshick, and Ali Farhadi. 2016. Unsupervised deep embedding for clustering analy- sis. In International conference on machine learn- ing, pages 478-487.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Learning to control the specificity in neural response generation",
"authors": [
{
"first": "Ruqing",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jiafeng",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Yixing",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Yanyan",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xueqi",
"middle": [],
"last": "Cheng",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruqing Zhang, Jiafeng Guo, Yixing Fan, Yanyan Lan, Jun Xu, and Xueqi Cheng. 2018. Learning to con- trol the specificity in neural response generation. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), volume 1.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "BIRCH: A New Data Clustering Algorithm and Its Applications",
"authors": [
{
"first": "Tian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Raghu",
"middle": [],
"last": "Ramakrishnan",
"suffix": ""
},
{
"first": "Miron",
"middle": [],
"last": "Livny",
"suffix": ""
}
],
"year": 1997,
"venue": "Data Mining and Knowledge Discovery",
"volume": "1",
"issue": "2",
"pages": "141--182",
"other_ids": {
"DOI": [
"10.1023/A:1009783824328"
]
},
"num": null,
"urls": [],
"raw_text": "Tian Zhang, Raghu Ramakrishnan, and Miron Livny. 1997. BIRCH: A New Data Clustering Algorithm and Its Applications. Data Mining and Knowledge Discovery, 1(2):141-182.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Multi-turn response selection for chatbots with deep attention matching network",
"authors": [
{
"first": "Xiangyang",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Daxiang",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Wayne",
"middle": [
"Xin"
],
"last": "Zhao",
"suffix": ""
},
{
"first": "Dianhai",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiangyang Zhou, Lu Li, Daxiang Dong, Yi Liu, Ying Chen, Wayne Xin Zhao, Dianhai Yu, and Hua Wu. 2018. Multi-turn response selection for chatbots with deep attention matching network. In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), volume 1.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"type_str": "table",
"html": null,
"text": "Your refund of GENERIC SLOT will be credited to your original payment method within 7 to 10 business days.CMConfirming refund request I see that you have used the membership benefits, and because of that I can offer GENERIC SLOT refund.",
"content": "<table><tr><td colspan=\"2\">Domain Cluster Description</td><td>Utterances</td></tr><tr><td>CM</td><td>Informing the customer</td><td/></tr><tr><td/><td>of confirmation e-mail</td><td/></tr><tr><td/><td/><td>Sounds good?</td></tr><tr><td>CM</td><td>Greeting</td><td>Good afternoon.</td></tr><tr><td>CM</td><td colspan=\"2\">Asking for confirmation Can you please confirm the last four digits and the expiration</td></tr><tr><td/><td/><td>date of the payment method that has been charged?</td></tr><tr><td>DNR</td><td>Confirming refund op-</td><td>Would you like the refund to be back on your gift or credit card?</td></tr><tr><td/><td>tions</td><td/></tr><tr><td>DNR</td><td>Apology</td><td>I do apologize for the inconvenience if it was tagged as delivered</td></tr><tr><td/><td/><td>but nowhere to be found.</td></tr><tr><td>DNR</td><td>Confirming order status</td><td>It seems that the package was already lost and mismarked as</td></tr><tr><td/><td/><td>delivered.</td></tr></table>",
"num": null
},
"TABREF1": {
"type_str": "table",
"html": null,
"text": "",
"content": "<table><tr><td>shows delivered</td></tr></table>",
"num": null
},
"TABREF2": {
"type_str": "table",
"html": null,
"text": "",
"content": "<table><tr><td>3 :</td></tr></table>",
"num": null
},
"TABREF3": {
"type_str": "table",
"html": null,
"text": "Results of Automatic Evaluation on CM Data",
"content": "<table><tr><td/><td colspan=\"7\">Rand-BL Glove GloVe-CS BERT ELMo ELMo-CS DSEC</td></tr><tr><td>NMI</td><td>0.23</td><td>0.49</td><td>0.62</td><td>0.39</td><td>0.40</td><td>0.61</td><td>0.63</td></tr><tr><td>ACC</td><td>0.10</td><td>0.31</td><td>0.47</td><td>0.2</td><td>0.24</td><td>0.41</td><td>0.51</td></tr><tr><td>ARI</td><td>0.00</td><td>0.15</td><td>0.32</td><td>0.12</td><td>0.09</td><td>0.26</td><td>0.34</td></tr></table>",
"num": null
},
"TABREF4": {
"type_str": "table",
"html": null,
"text": "",
"content": "<table><tr><td>: Results of Automatic Evaluation on DNR Data</td></tr><tr><td>1. Top-10 acceptance rate: The percentage of</td></tr><tr><td>utterances for which the CSA selects one of</td></tr><tr><td>the suggested responses.</td></tr><tr><td>2. Top-1 acceptance rate: The percentage of ut-</td></tr><tr><td>terances for which the CSA selects the first</td></tr><tr><td>suggested response.</td></tr><tr><td>3. All suggestions accepted: The percentage of</td></tr><tr><td>contacts that are handled using only suggested</td></tr><tr><td>utterances.</td></tr><tr><td>4. Average depth of first rejection: The percent-</td></tr><tr><td>age of utterances in the conversation that oc-</td></tr><tr><td>cur before the agent rejects all suggestions</td></tr><tr><td>and types their own text.</td></tr><tr><td>5. Unique rate: This measures the variation of</td></tr><tr><td>the template pool, calculated as one minus the</td></tr><tr><td>percentage of templates that can be removed</td></tr><tr><td>without reducing the coverage. Ideally, this</td></tr><tr><td>number would be 1.0.</td></tr><tr><td>6. Number of missing templates: The number</td></tr><tr><td>of utterances that are reported missing from</td></tr><tr><td>agents. Ideally, this number would be 0.</td></tr></table>",
"num": null
}
}
}
}