{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:43:25.001933Z"
},
"title": "Semantic Diversity for Natural Language Understanding Evaluation in Dialog Systems",
"authors": [
{
"first": "Enrico",
"middle": [],
"last": "Palumbo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Amazon Alexa Turin",
"location": {
"postCode": "10126",
"country": "Italy"
}
},
"email": "[email protected]"
},
{
"first": "Andrea",
"middle": [],
"last": "Mezzalira",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Amazon Alexa Turin",
"location": {
"postCode": "10126",
"country": "Italy"
}
},
"email": "[email protected]"
},
{
"first": "Cristina",
"middle": [],
"last": "Marco",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Amazon Alexa Turin",
"location": {
"postCode": "10126",
"country": "Italy"
}
},
"email": "[email protected]"
},
{
"first": "Alessandro",
"middle": [],
"last": "Manzotti",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Amazon Alexa Turin",
"location": {
"postCode": "10126",
"country": "Italy"
}
},
"email": "[email protected]"
},
{
"first": "Daniele",
"middle": [],
"last": "Amberti",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Amazon Alexa Turin",
"location": {
"postCode": "10126",
"country": "Italy"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The quality of Natural Language Understanding (NLU) models is typically evaluated using aggregated metrics on a large number of utterances. In a dialog system, though, the manual analysis of failures on specific utterances is a time-consuming and yet critical endeavor to guarantee a high-quality customer experience. A crucial question for this analysis is how to create a test set of utterances that covers a diversity of possible customer requests. In this paper, we introduce the task of generating a test set with high semantic diversity for NLU evaluation in dialog systems and we describe an approach to address it. The approach starts by extracting high-traffic utterance patterns. Then, for each pattern, it achieves high diversity selecting utterances from different regions of the utterance embedding space. We compare three selection strategies based on clustering of utterances in the embedding space, on solving the maximum distance optimization problem and on simple heuristics such as random uniform sampling and popularity. The evaluation shows that the highest semantic and lexicon diversity is obtained by a greedy maximum sum of distance solver in a comparable runtime with the clustering and the heuristics approaches.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "The quality of Natural Language Understanding (NLU) models is typically evaluated using aggregated metrics on a large number of utterances. In a dialog system, though, the manual analysis of failures on specific utterances is a time-consuming and yet critical endeavor to guarantee a high-quality customer experience. A crucial question for this analysis is how to create a test set of utterances that covers a diversity of possible customer requests. In this paper, we introduce the task of generating a test set with high semantic diversity for NLU evaluation in dialog systems and we describe an approach to address it. The approach starts by extracting high-traffic utterance patterns. Then, for each pattern, it achieves high diversity selecting utterances from different regions of the utterance embedding space. We compare three selection strategies based on clustering of utterances in the embedding space, on solving the maximum distance optimization problem and on simple heuristics such as random uniform sampling and popularity. The evaluation shows that the highest semantic and lexicon diversity is obtained by a greedy maximum sum of distance solver in a comparable runtime with the clustering and the heuristics approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In the past years, voice-first dialog systems have become ubiquitous in the market, with an ever increasing number of features, languages and customer requests. A crucial component of these systems is the Natural Language Understanding (NLU) model. The NLU model maps customer requests onto specific actions that the device has to perform. In practice, this means classifying an utterance into a domain, intent and slots (Su et al., 2018) . For instance, given the customer's utterance \"play madonna\", an NLU model returns: (Music, PlayMusicIntent, play ArtistName) where Music is the domain, PlayMusicIntent is the intent and the slot is ArtistName. When a new algorithm for NLU is proposed in a research environment, the evaluation is typically performed by aggregating metrics such as Slot Error Rate (SER) (Makhoul et al., 1999) and Semantic Error Rate (SemER) (Su et al., 2018 ) on a large test set of utterances. However, in a production environment, aggregated metrics alone are not sufficient, as they may hide failures on specific business critical utterances. Thus, whenever a change is introduced into an NLU model, failures need to be manually reviewed to determine whether they represent an issue for the customers. The manual review of failures is a crucial, and yet very time-consuming operation. Hence, the question: how to create a test set that makes the analysis more efficient including a diversity of patterns, utterances and possibile failure causes? The problem of maximizing semantic diversity in text is common in tasks such as text summarization (Zhu et al., 2007) , text generation (Xu et al., 2018) , keyphrase extraction (Bennani-Smires et al., 2018) , machine translation (Shu et al., 2019) , data augmentation in dialog systems (Hou et al., 2018; . However, to the best our knowledge, semantic diversity has never been used to create test sets for the evaluation of natural language understanding models in dialog systems.",
"cite_spans": [
{
"start": 421,
"end": 438,
"text": "(Su et al., 2018)",
"ref_id": "BIBREF18"
},
{
"start": 810,
"end": 832,
"text": "(Makhoul et al., 1999)",
"ref_id": "BIBREF11"
},
{
"start": 865,
"end": 881,
"text": "(Su et al., 2018",
"ref_id": "BIBREF18"
},
{
"start": 1572,
"end": 1590,
"text": "(Zhu et al., 2007)",
"ref_id": "BIBREF20"
},
{
"start": 1609,
"end": 1626,
"text": "(Xu et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 1650,
"end": 1679,
"text": "(Bennani-Smires et al., 2018)",
"ref_id": "BIBREF2"
},
{
"start": 1702,
"end": 1720,
"text": "(Shu et al., 2019)",
"ref_id": "BIBREF17"
},
{
"start": 1759,
"end": 1777,
"text": "(Hou et al., 2018;",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1"
},
{
"text": "In this paper, we introduce an approach to automate the creation of test sets with high semantic diversity for the evaluation of the NLU model in a dialog system. The approach works as follows. First, we filter the dataset extracting a set of high-traffic pattern. Then, for each pattern, we map utterances into an embedding space to represent the semantics of the different slot values. Finally, we create test sets comparing three selection algorithms based on partitioning the space in groups and selecting representatives or on directly solving a maximum sum of distance optimization problem to achieve high diversity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1"
},
{
"text": "The approach can be divided in three major steps: pattern extraction, encoding and selection (Fig. 1) . Figure 1 : A bird's eye view over the proposed approach. High-traffic patterns are extracted and, for a specific pattern, utterances are embedded into a vector space in the encoding stage. Then, the selection stage selects points that are far apart in the vector space to create a test set with high diversity (red points),",
"cite_spans": [],
"ref_spans": [
{
"start": 93,
"end": 101,
"text": "(Fig. 1)",
"ref_id": null
},
{
"start": 104,
"end": 112,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Approach",
"sec_num": "2"
},
{
"text": "As of today, pattern-based rules such as Finite State Transducers (FSTs) (Karttunen, 2000) still play a very important role in NLU models. FSTs work by mapping into domain, intent and slots utterances that exactly match structures such as \"play SongName\", \"play SongName please\", \"please can you play SongName\". We call these structures \"semantic frames\", and, together with domain and intent, they are a suitable definition of \"pattern\" that can break in an FST. Given a domain d \u2208 D, an intent i \u2208 I and a semantic frames c \u2208 C, we define a pattern p \u2208 P as:",
"cite_spans": [
{
"start": 73,
"end": 90,
"text": "(Karttunen, 2000)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pattern Extraction",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p = (d, i, c)",
"eq_num": "(1)"
}
],
"section": "Pattern Extraction",
"sec_num": "2.1"
},
{
"text": "such as \"Music, PlayMusicIntent, play SongName\" or \"Weather, GetWeatherForecastIntent, what is the weather like in CityName\". We use a dataset composed by~5M annotated utterances that contains 400 high-traffic patterns. Even within a specific pattern, though, the variability can be high and the selection strategy should be diversity-aware. Consider the example of \"play SongName\": a huge amount of possible songs are present in the dataset. The resulting test set should include a diversity of songs, both in terms of lexicon, that is different wordings, and also in terms of semantics, for instance different musical genres.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pattern Extraction",
"sec_num": "2.1"
},
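{
"text": "To make the extraction step concrete, the following minimal Python sketch counts (domain, intent, semantic frame) triples over an annotated dataset and keeps the highest-traffic ones. The input schema and the function name are illustrative assumptions, not a description of the production pipeline:\n\nfrom collections import Counter\n\ndef extract_patterns(annotated_utterances, top_k=400):\n    # Each annotated utterance is assumed to look like\n    # {'domain': 'Music', 'intent': 'PlayMusicIntent', 'frame': 'play SongName'}.\n    counts = Counter((u['domain'], u['intent'], u['frame']) for u in annotated_utterances)\n    # Keep the top_k highest-traffic patterns p = (d, i, c), as in Eq. 1.\n    return [p for p, _ in counts.most_common(top_k)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pattern Extraction",
"sec_num": "2.1"
},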
{
"text": "A crucial point for measuring diversity is finding an adequate vector representation of words and utterances where similarity metrics can be easily applied. word2vec embeddings (Mikolov et al., 2013) have shown the effectiveness of the Continuous Bag of Words and Skip-gram architectures to learn word representations, gaining tremendous popularity. FastText (Bojanowski et al., 2017) improves the word2vec model including subword information (character n-grams) into the skip-gram architecture. In this work, we use FastText to map utterances into embeddings. This means that the model is trained to predict, given a character n-gram as input, the surrounding character n-grams in a predefined window. Given an utterance s(p) of a pattern p and its K character n-grams k i (s(p)) we obtain the vector representation of the utterance\u015d(p):\u015d",
"cite_spans": [
{
"start": 177,
"end": 199,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF12"
},
{
"start": 359,
"end": 384,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Encoding",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(p) = 1 K K i=1 f asttext_pretrained_vector(k i (s(p)))",
"eq_num": "(2)"
}
],
"section": "Encoding",
"sec_num": "2.2"
},
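{
"text": "As a minimal sketch of Eq. 2, the function below averages the vectors of the character n-grams of an utterance. It assumes the pre-trained FastText n-gram table is available as a plain mapping from n-gram to vector; loading such a table (e.g. from the published pre-trained models) is outside the scope of the sketch:\n\nimport numpy as np\n\ndef char_ngrams(utterance, n=3):\n    # Character n-grams of the padded utterance, as in FastText.\n    padded = '<' + utterance + '>'\n    return [padded[i:i + n] for i in range(len(padded) - n + 1)]\n\ndef encode_utterance(utterance, ngram_vectors, dim=300):\n    # Eq. 2: average the pre-trained vectors of the K character n-grams;\n    # n-grams missing from the table are simply skipped here.\n    vecs = [ngram_vectors[g] for g in char_ngrams(utterance) if g in ngram_vectors]\n    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoding",
"sec_num": "2.2"
},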
{
"text": "Currently, popular models such as ELMo (Peters et al., 2018) or BERT (Devlin et al., 2019) further improve word representations by considering the context or embedding the whole sentence based on neighboring sentences (Kiros et al., 2015) . We choose FastText over more sophisticated embedding models because it is frugal (fast at retrieval times on CPU), and it provides pre-trained models for 157 different languages. The major drawback of averaging character n-grams embeddings in this way is that we lose information on how the sentence is structured, e.g. the order of the tokens. However, given that we perform the encoding in pattern-wise manner, the structure of the sentence is fixed as described in Sec.2.1 and the variations mostly come from the values that occur in the slots.",
"cite_spans": [
{
"start": 39,
"end": 60,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF14"
},
{
"start": 69,
"end": 90,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 218,
"end": 238,
"text": "(Kiros et al., 2015)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Encoding",
"sec_num": "2.2"
},
{
"text": "Definition 1 Given a pattern p, M = |p| is the total number of utterances in the pattern Definition 2 Given a pattern p, m \u2264 M is the total number of utterances to be selected for the pattern Definition 3 Given the vector representation of an utterance\u015d(p), d = |\u015d(p)| is the number of dimensions of the vector. Definition 4 X(p) = (\u015d 1 (p), ...,\u015d M (p)) is the matrix that contains the vector representations of all the utterances in a pattern p",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection",
"sec_num": "2.3"
},
{
"text": "We compare the following approaches to select points from the vector space:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection",
"sec_num": "2.3"
},
{
"text": "PSA The Part and Select Algorithm (PSA) (Salomon et al., 2013) has two steps: first, it partitions the space grouping similar points; then it selects a diverse subset by choosing one member for each of the groups. To partition the space in m subsets, PSA makes m \u2212 1 divisions of a single set into two subsets. Given the minimum and maximum values of a feature a j = min i (X ij ) and b j = max i (X ij ), the diameter of a subset is defined as A = max j (b j \u2212 a j ). The partitioning of the space works iteratively, searching among all the subsets the one that has the maximum diameter A, and splitting in half the subset along the feature j that maximizes the diameter. Then, for each of the m subset, the point that is closest to the center of the hyperretangle is selected. PSA has a runtime complexity that is O(M * m * d).",
"cite_spans": [
{
"start": 40,
"end": 62,
"text": "(Salomon et al., 2013)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Selection",
"sec_num": "2.3"
},
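{
"text": "The numpy sketch below follows the description of PSA above; it is an illustrative reimplementation under our reading of the algorithm, not the reference code of Salomon et al. (2013). It repeatedly splits the part with the largest diameter along its widest feature, then returns, for each part, the point closest to the centre of its bounding hyperrectangle:\n\nimport numpy as np\n\ndef psa_select(X, m):\n    # X is an (M, d) array of utterance embeddings; returns up to m row indices.\n    parts = [np.arange(len(X))]\n    while len(parts) < m:\n        # Pick the part whose widest feature range (diameter A) is largest.\n        ranges = [X[idx].max(axis=0) - X[idx].min(axis=0) for idx in parts]\n        p = int(np.argmax([r.max() for r in ranges]))\n        j = int(np.argmax(ranges[p]))\n        idx = parts.pop(p)\n        mid = (X[idx, j].max() + X[idx, j].min()) / 2.0\n        left, right = idx[X[idx, j] <= mid], idx[X[idx, j] > mid]\n        if len(left) == 0 or len(right) == 0:  # remaining points coincide\n            parts.append(idx)\n            break\n        parts.extend([left, right])\n    selected = []\n    for idx in parts:\n        # Point nearest to the centre of the part's bounding hyperrectangle.\n        centre = (X[idx].max(axis=0) + X[idx].min(axis=0)) / 2.0\n        selected.append(int(idx[np.argmin(np.linalg.norm(X[idx] - centre, axis=1))]))\n    return selected",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection",
"sec_num": "2.3"
},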
{
"text": "KMeans KMeans (Hartigan and Wong, 1979) is arguably the most popular clustering algorithm, it works by dividing the data in a predefined number of groups minimizing the within-cluster sum of squares. For each pattern with M utterances, we apply KMeans to obtain m clusters, and then we select the nearest point to the centroid to be part of the subset. KMeans has a runtime complexity of O(M * m * d).",
"cite_spans": [
{
"start": 7,
"end": 39,
"text": "KMeans (Hartigan and Wong, 1979)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Selection",
"sec_num": "2.3"
},
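{
"text": "A possible implementation with scikit-learn (our choice of library; the paper does not name one) clusters the embeddings and keeps, for each cluster, the utterance nearest to the centroid:\n\nfrom sklearn.cluster import KMeans\nfrom sklearn.metrics import pairwise_distances_argmin\n\ndef kmeans_select(X, m, seed=0):\n    # Cluster the (M, d) embedding matrix into m groups.\n    km = KMeans(n_clusters=m, n_init=10, random_state=seed).fit(X)\n    # For each centroid, the index of the closest utterance embedding.\n    return pairwise_distances_argmin(km.cluster_centers_, X)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection",
"sec_num": "2.3"
},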
{
"text": "MaxSum MaxSum (Ghosh, 1996) solves the optimization problem of finding a subset of points that have the maximum sum of distances among each other. Given that the problem is NP-hard, we use a greedy approach that iteratively selects points that maximize the objective and has a linear runtime complexity O(M * m * d).",
"cite_spans": [
{
"start": 14,
"end": 27,
"text": "(Ghosh, 1996)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Selection",
"sec_num": "2.3"
},
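{
"text": "A numpy sketch of the greedy MaxSum heuristic follows; the choice of the seed point and of the Euclidean distance are our assumptions, since the greedy scheme works from any starting point:\n\nimport numpy as np\n\ndef maxsum_select(X, m):\n    # Greedily add the point whose summed distance to the points chosen so\n    # far is largest; one pass over M points per step, hence O(M * m * d).\n    selected = [0]\n    summed = np.linalg.norm(X - X[0], axis=1)\n    for _ in range(m - 1):\n        summed[selected] = -np.inf  # never pick a point twice\n        nxt = int(np.argmax(summed))\n        selected.append(nxt)\n        summed = summed + np.linalg.norm(X - X[nxt], axis=1)\n    return selected",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection",
"sec_num": "2.3"
},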
{
"text": "As baselines, we also include the Random Sampler, which selects m points per pattern using a uniform distribution, and the Popularity Sampler, which selects the most frequently used m utterances for each pattern. For all selection algorithms, we set as the default percentage of utterances to select for each pattern f = 0.01. Given the number of utterances in a pattern M and f , we determine the number of points to select and set the number of clusters m = M f in the clustering algorithms. When using the proposed approach, we recommend to set the value of f depending on the desired size of the test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection",
"sec_num": "2.3"
},
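{
"text": "For completeness, a sketch of the two baselines and of the m = M \u00b7 f rule; flooring to at least one utterance per pattern is our assumption:\n\nimport random\nfrom collections import Counter\n\ndef num_to_select(M, f=0.01):\n    return max(1, int(M * f))  # m = M * f, at least one per pattern\n\ndef random_select(utterances, m, seed=0):\n    # Uniform sampling without replacement.\n    return random.Random(seed).sample(utterances, m)\n\ndef popularity_select(utterances, m):\n    # The m most frequent utterances in the pattern traffic.\n    return [u for u, _ in Counter(utterances).most_common(m)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection",
"sec_num": "2.3"
},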
{
"text": "We evaluate the inherent diversity of the test sets that the selection algorithms generate measuring how 'distant' two utterances are on average in the subsets that we generate using the following metrics:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3"
},
{
"text": "\u2022 SelfBLEU (Zhu et al., 2018) was recently introduced to measure the diversity of artifically generated text, it computes BLEU (Papineni et al., 2002) comparing a set of utterances with themselves rather than with a reference. We use it as follows:",
"cite_spans": [
{
"start": 11,
"end": 29,
"text": "(Zhu et al., 2018)",
"ref_id": "BIBREF21"
},
{
"start": 127,
"end": 150,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Self BLEU = 1 N N p=1 avg i,j (1 \u2212 BLEU (s(p) i , s(p) j ))",
"eq_num": "(3)"
}
],
"section": "Evaluation",
"sec_num": "3"
},
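{
"text": "A sketch of the inner term of Eq. 3 using NLTK's BLEU implementation; whitespace tokenization and the smoothing method are our choices, as the paper does not specify them. The per-pattern scores are then averaged over the N patterns:\n\nfrom itertools import combinations\nfrom nltk.translate.bleu_score import sentence_bleu, SmoothingFunction\n\ndef self_bleu_diversity(utterances, n=2):\n    # Average pairwise 1 - BLEU-n within one pattern (inner term of Eq. 3).\n    smooth = SmoothingFunction().method1\n    weights = tuple(1.0 / n for _ in range(n))\n    toks = [u.split() for u in utterances]\n    pairs = list(combinations(toks, 2))\n    return sum(1.0 - sentence_bleu([a], b, weights=weights, smoothing_function=smooth) for a, b in pairs) / len(pairs)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3"
},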
{
"text": "\u2022 Jaccard: average word overlap across test utterances",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Jaccard = 1 N N p=1 avg i,j (1 \u2212 word_overlap(s(p) i , s(p) j ))",
"eq_num": "(4)"
}
],
"section": "Evaluation",
"sec_num": "3"
},
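{
"text": "The inner term of Eq. 4, reading word_overlap as the Jaccard coefficient of whitespace-tokenized word sets (our interpretation):\n\nfrom itertools import combinations\n\ndef jaccard_diversity(utterances):\n    # Average pairwise 1 - |A \u2229 B| / |A \u222a B| (inner term of Eq. 4).\n    sets = [set(u.split()) for u in utterances]\n    pairs = list(combinations(sets, 2))\n    return sum(1.0 - len(a & b) / max(len(a | b), 1) for a, b in pairs) / len(pairs)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3"
},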
{
"text": "\u2022 Word Embedding Diversity (WED): similar to the Word Embedding Similarity (Agirre et al., 2016) , it is the average cosine distance between embeddings of vectors in the test set:",
"cite_spans": [
{
"start": 75,
"end": 96,
"text": "(Agirre et al., 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "W ED = 1 N N p=1 avg i,j (1 \u2212 cosine_similarity(\u015d(p) i ,\u015d(p) j ))",
"eq_num": "(5)"
}
],
"section": "Evaluation",
"sec_num": "3"
},
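{
"text": "The inner term of Eq. 5 as a vectorized numpy sketch over one pattern's (m, d) matrix of utterance embeddings:\n\nimport numpy as np\n\ndef wed_diversity(embeddings):\n    # Average pairwise cosine distance (inner term of Eq. 5).\n    E = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)\n    iu = np.triu_indices(len(E), k=1)  # indices of distinct pairs\n    return float(np.mean(1.0 - (E @ E.T)[iu]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3"
},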
{
"text": "Note that SelfBLEU and Jaccard only consider word and n-gram level similarities, whereas W ED can also take into account word semantics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3"
},
{
"text": "We compare the selection algorithms computing the relative percentage improvement with respect to random selection on the diversity metrics (Tab. 1). The results show that, in general, all diversity-aware algorithms achieve higher diversity with respect to Random and Popularity generates the lowest diversity. MaxSum solver obtains the best diversity both at the semantic (WED) and at the lexicon level (SelfBLEU, Jaccard). Interestingly, PSA performs better than KMeans for metrics that take into account words and n-grams overlaps, i.e. at a lexicon level, whereas KMeans works better for WED, which measures embedding distance at a semantic level. Random is the fastest algorithm, but the runtime is comparable for all the algorithms. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "In this paper, we have introduced the problem of creating a test set with high semantic diversity to evaluate the NLU model of a dialog system. We have described the problem motivation and we have introduced an approach to address it. The experimental comparison among different diversity-aware selection algorithms shows that the MaxSum sampler obtains the best diversity, both at the semantic (WED) and at the lexicon level (SelfBLEU, Jaccard). For all the diversity-aware approaches (PSA, KMeans, MaxSum), runtime is comparable to simple heuristics such as random and popularity selection. As a future work, we will create a ground truth to see how well our diversity metrics correlate with human judgement. The ground truth will also be key to exploring the effectiveness of hybrid approaches that combine diverisity and coverage, taking into the frequency of customer requests. We also plan to experiment with more encoding algorithms, such as frugal light-weight transformer-based approaches that have been recently proposed (Sanh et al., 2019) and have shown to better represent complex utterances.",
"cite_spans": [
{
"start": 1031,
"end": 1050,
"text": "(Sanh et al., 2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Semeval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Carmen",
"middle": [],
"last": "Banea",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Gonzalez Agirre",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "German",
"middle": [
"Rigau"
],
"last": "Claramunt",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2016,
"venue": "SemEval-2016. 10th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Carmen Banea, Daniel Cer, Mona Diab, Aitor Gonzalez Agirre, Rada Mihalcea, German Rigau Claramunt, and Janyce Wiebe. 2016. Semeval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation. In SemEval-2016. 10th International Workshop on Semantic Evaluation; 2016",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Simple unsupervised keyphrase extraction using sentence embeddings",
"authors": [
{
"first": "Kamil",
"middle": [],
"last": "Bennani-Smires",
"suffix": ""
},
{
"first": "Claudiu",
"middle": [],
"last": "Musat",
"suffix": ""
},
{
"first": "Andreea",
"middle": [],
"last": "Hossmann",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Baeriswyl",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Jaggi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 22nd Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "221--229",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kamil Bennani-Smires, Claudiu Musat, Andreea Hossmann, Michael Baeriswyl, and Martin Jaggi. 2018. Sim- ple unsupervised keyphrase extraction using sentence embeddings. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 221-229, Brussels, Belgium, October. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Efficient semi-supervised learning for natural language understanding by optimizing diversity",
"authors": [
{
"first": "Eunah",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "He",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "John",
"middle": [
"P"
],
"last": "Lalor",
"suffix": ""
},
{
"first": "Varun",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "William",
"middle": [
"M"
],
"last": "Campbell",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.04196"
]
},
"num": null,
"urls": [],
"raw_text": "Eunah Cho, He Xie, John P Lalor, Varun Kumar, and William M Campbell. 2019. Efficient semi-supervised learning for natural language understanding by optimizing diversity. arXiv preprint arXiv:1910.04196.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirec- tional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Computational aspects of the maximum diversity problem",
"authors": [
{
"first": "Jay",
"middle": [
"B"
],
"last": "Ghosh",
"suffix": ""
}
],
"year": 1996,
"venue": "Operations research letters",
"volume": "19",
"issue": "4",
"pages": "175--181",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jay B Ghosh. 1996. Computational aspects of the maximum diversity problem. Operations research letters, 19(4):175-181.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Algorithm as 136: A k-means clustering algorithm",
"authors": [
{
"first": "John",
"middle": [
"A"
],
"last": "Hartigan",
"suffix": ""
},
{
"first": "Manchek",
"middle": [
"A"
],
"last": "Wong",
"suffix": ""
}
],
"year": 1979,
"venue": "Journal of the Royal Statistical Society. Series C (Applied Statistics)",
"volume": "28",
"issue": "1",
"pages": "100--108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John A Hartigan and Manchek A Wong. 1979. Algorithm as 136: A k-means clustering algorithm. Journal of the Royal Statistical Society. Series C (Applied Statistics), 28(1):100-108.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Sequence-to-sequence data augmentation for dialogue language understanding",
"authors": [
{
"first": "Yutai",
"middle": [],
"last": "Hou",
"suffix": ""
},
{
"first": "Yijia",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1807.01554"
]
},
"num": null,
"urls": [],
"raw_text": "Yutai Hou, Yijia Liu, Wanxiang Che, and Ting Liu. 2018. Sequence-to-sequence data augmentation for dialogue language understanding. arXiv preprint arXiv:1807.01554.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Applications of finite-state transducers in natural language processing",
"authors": [
{
"first": "Lauri",
"middle": [],
"last": "Karttunen",
"suffix": ""
}
],
"year": 2000,
"venue": "International Conference on Implementation and Application of Automata",
"volume": "",
"issue": "",
"pages": "34--46",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lauri Karttunen. 2000. Applications of finite-state transducers in natural language processing. In International Conference on Implementation and Application of Automata, pages 34-46. Springer.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Skip-thought vectors",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Yukun",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Russ",
"middle": [
"R"
],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zemel",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Urtasun",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Torralba",
"suffix": ""
},
{
"first": "Sanja",
"middle": [],
"last": "Fidler",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3294--3302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Kiros, Yukun Zhu, Russ R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in neural information processing systems, pages 3294-3302.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Performance measures for information extraction",
"authors": [
{
"first": "John",
"middle": [],
"last": "Makhoul",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Kubala",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of DARPA broadcast news workshop",
"volume": "",
"issue": "",
"pages": "249--252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Makhoul, Francis Kubala, Richard Schwartz, Ralph Weischedel, et al. 1999. Performance measures for information extraction. In Proceedings of DARPA broadcast news workshop, pages 249-252. Herndon, VA.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111-3119.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting on association for computational linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311-318. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.05365"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettle- moyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Psa -a new scalable space partition based selection algorithm for moeas",
"authors": [
{
"first": "Shaul",
"middle": [],
"last": "Salomon",
"suffix": ""
},
{
"first": "Gideon",
"middle": [],
"last": "Avigad",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Goldvard",
"suffix": ""
},
{
"first": "Oliver",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2013,
"venue": "EVOLVE -A Bridge between Probability, Set Oriented Numerics, and Evolutionary Computation II",
"volume": "",
"issue": "",
"pages": "137--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shaul Salomon, Gideon Avigad, Alex Goldvard, and Oliver Sch\u00fctze. 2013. Psa -a new scalable space par- tition based selection algorithm for moeas. In Oliver Sch\u00fctze, Carlos A. Coello Coello, Alexandru-Adrian Tantar, Emilia Tantar, Pascal Bouvry, Pierre Del Moral, and Pierrick Legrand, editors, EVOLVE -A Bridge be- tween Probability, Set Oriented Numerics, and Evolutionary Computation II, pages 137-151, Berlin, Heidelberg. Springer Berlin Heidelberg.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.01108"
]
},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Generating diverse translations with sentence codes",
"authors": [
{
"first": "Raphael",
"middle": [],
"last": "Shu",
"suffix": ""
},
{
"first": "Hideki",
"middle": [],
"last": "Nakayama",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1823--1827",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raphael Shu, Hideki Nakayama, and Kyunghyun Cho. 2019. Generating diverse translations with sentence codes. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1823-1827.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A re-ranker scheme for integrating large scale nlu models",
"authors": [
{
"first": "Chengwei",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Rahul",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Shankar",
"middle": [],
"last": "Ananthakrishnan",
"suffix": ""
},
{
"first": "Spyros",
"middle": [],
"last": "Matsoukas",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 IEEE Spoken Language Technology Workshop (SLT)",
"volume": "",
"issue": "",
"pages": "670--676",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chengwei Su, Rahul Gupta, Shankar Ananthakrishnan, and Spyros Matsoukas. 2018. A re-ranker scheme for integrating large scale nlu models. In 2018 IEEE Spoken Language Technology Workshop (SLT), pages 670- 676. IEEE.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Diversity-promoting gan: A cross-entropy based generative adversarial network for diversified text generation",
"authors": [
{
"first": "Jingjing",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xuancheng",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Junyang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3940--3949",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jingjing Xu, Xuancheng Ren, Junyang Lin, and Xu Sun. 2018. Diversity-promoting gan: A cross-entropy based generative adversarial network for diversified text generation. In Proceedings of the 2018 Conference on Empir- ical Methods in Natural Language Processing, pages 3940-3949.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Improving diversity in ranking using absorbing random walks",
"authors": [
{
"first": "Xiaojin",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"B"
],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Jurgen",
"middle": [],
"last": "Van Gael",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Andrzejewski",
"suffix": ""
}
],
"year": 2007,
"venue": "Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference",
"volume": "",
"issue": "",
"pages": "97--104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaojin Zhu, Andrew B Goldberg, Jurgen Van Gael, and David Andrzejewski. 2007. Improving diversity in ranking using absorbing random walks. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 97-104.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Texygen: A benchmarking platform for text generation models",
"authors": [
{
"first": "Yaoming",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Sidi",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Jiaxian",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Weinan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2018,
"venue": "The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "1097--1100",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. 2018. Texygen: A bench- marking platform for text generation models. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pages 1097-1100.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Turn off the light in the living room Turn off the light In the bedroom Turn off the light in the room Turn off the light in the kitchen Turn off the light in the hall Turn off the light in the bathroom Turn off the light in the garage Turn off the plug in the kitchen Turn off the plug in the bedroom Turn off the tv in the hall Turn off the tv in the bedroom Turn off the tv in the kitchen Turn off the lamp in the bathroom Turn off the lamp in the kitchen Turn off the lamp in the bedroom Utterance embedding Coveragedriven subset extraction Turn off the light \u2026 Turn off the plug \u2026 Turn off the tv \u2026 Turn off the lamp\u2026 Turn off the tv in the living room Turn off the lamp in the bedroom Turn off the plug in the kitchen Turn off the light in the room Turn off the light in the kitchen Turn off the light in the hall Turn off the light in the bedroom Pattern extraction Encoding Selection"
},
"TABREF0": {
"content": "<table><tr><td colspan=\"7\">Algorithm WED SelfBLEU-2 SelfBLEU-3 SelfBLEU-4 Jaccard Runtime (s)</td></tr><tr><td>PSA</td><td>24.73</td><td>2.33</td><td>1.97</td><td>1.53</td><td>1.93</td><td>12178</td></tr><tr><td>KMeans</td><td>26.13</td><td>2.20</td><td>1.96</td><td>1.59</td><td>1.87</td><td>13679</td></tr><tr><td>MaxSum</td><td>38.7</td><td>6.96</td><td>6.13</td><td>4.9</td><td>5.31</td><td>12219</td></tr><tr><td>Random</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>11959</td></tr><tr><td colspan=\"2\">Popularity -12.98</td><td>-10.88</td><td>-9.66</td><td>-7.75</td><td>-7.75</td><td>12011</td></tr></table>",
"text": "Diversity comparison of the selectiong algorithms as a relative % change with respect to Random sampling. In SelfBLEU-n, n is the size of the n-gram used. Results are significant for all pairs of algorithms and for all metrics with a paired t-test with p<0.05.",
"num": null,
"type_str": "table",
"html": null
}
}
}
}