{
"paper_id": "K19-1005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:05:20.262505Z"
},
"title": "Large-scale, Diverse, Paraphrastic Bitexts via Sampling and Clustering",
"authors": [
{
"first": "J",
"middle": [
"Edward"
],
"last": "Hu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {}
},
"email": ""
},
{
"first": "Abhinav",
"middle": [],
"last": "Singh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Nils",
"middle": [],
"last": "Holzenberger",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {}
},
"email": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Producing diverse paraphrases of a sentence is a challenging task. Natural paraphrase corpora are scarce and limited, while existing large-scale resources are automatically generated via back-translation and rely on beam search, which tends to lack diversity. We describe PARABANK 2, a new resource that contains multiple diverse sentential paraphrases, produced from a bilingual corpus using negative constraints, inference sampling, and clustering. We show that PARABANK 2 significantly surpasses prior work in both lexical and syntactic diversity while being meaning-preserving, as measured by human judgments and standardized metrics. Further, we illustrate how such paraphrastic resources may be used to refine contextualized encoders, leading to improvements in downstream tasks.",
"pdf_parse": {
"paper_id": "K19-1005",
"_pdf_hash": "",
"abstract": [
{
"text": "Producing diverse paraphrases of a sentence is a challenging task. Natural paraphrase corpora are scarce and limited, while existing large-scale resources are automatically generated via back-translation and rely on beam search, which tends to lack diversity. We describe PARABANK 2, a new resource that contains multiple diverse sentential paraphrases, produced from a bilingual corpus using negative constraints, inference sampling, and clustering. We show that PARABANK 2 significantly surpasses prior work in both lexical and syntactic diversity while being meaning-preserving, as measured by human judgments and standardized metrics. Further, we illustrate how such paraphrastic resources may be used to refine contextualized encoders, leading to improvements in downstream tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The ability to understand and produce paraphrases is a basic competency, one that is often used as a teaching aid to validate whether a student understands a statement or a concept. Current deep learning systems struggle with this task, exhibiting brittleness in both understanding and producing paraphrastic expressions (Iyyer et al., 2018) .",
"cite_spans": [
{
"start": 321,
"end": 341,
"text": "(Iyyer et al., 2018)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One crucial factor behind this incompetence is the dearth of sentential paraphrastic data. Many works have sought to leverage the relative abundance of sub-sentential paraphrastic resources in paraphrase detection or generation (Napoles et al., 2016 ). Yet, they fail to capture contextualized word choices or syntactic variations, as word- or phrase-level resources cannot incorporate information from the whole input sentence.",
"cite_spans": [
{
"start": 228,
"end": 249,
"text": "(Napoles et al., 2016",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent works have focused on leveraging bilingual resources to create large sentence-level paraphrastic collections using translation-based methods (Hu et al., 2019) . However, these works are confined to using beam search in decoding, which tends not to produce diverse candidates. One approach to force diverse translations is the use of hard lexical constraints at inference time (Hu et al., 2019) . While effective in some cases, current approaches to the automatic selection of such constraints are based on heuristics and task-oriented trial-and-error.",
"cite_spans": [
{
"start": 148,
"end": 163,
"text": "Hu et al., 2019",
"ref_id": "BIBREF22"
},
{
"start": 378,
"end": 395,
"text": "(Hu et al., 2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We present a novel resource with accurate and collectively diverse paraphrases, generated using stochastic decoding and clustering. By collectively diverse, we mean that the paraphrases of a given sentence cover a wide lexical and syntactic spectrum. Given a bilingual input pair, our core idea is to sample a large space of outputs from a translation system, cluster the results according to a notion of token-sequence similarity, score them with two translation models (one in each direction), and then select the best item from each cluster. We believe that sampling from the word distribution at each decoder time-step better preserves the decoder's level of uncertainty, which is intrinsic to the goals of paraphrasing. We also sample ancillary lexical constraints to discourage, instead of explicitly prohibiting (Hu et al., 2019) , certain words from being used by the decoder. While our experiment produces a large-scale English resource, our approach depends only on the availability of large bitexts and so is language-agnostic. We chose to build an English resource from CzEng to enable a direct comparison with Wieting and Gimpel (2018) and Hu et al. (2019) .",
"cite_spans": [
{
"start": 820,
"end": 837,
"text": "(Hu et al., 2019)",
"ref_id": "BIBREF22"
},
{
"start": 1132,
"end": 1148,
"text": "Hu et al. (2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contributions include:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 A large, high-quality paraphrase collection 1 with up to 5 paraphrases per reference and close to 100 million pairs in total, which is more diverse than prior work in two distinct ways, as measured by standardized metrics;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 An evaluation of semantic similarity and of lexical and syntactic diversity, compared against prior work, along with results on the Semantic Textual Similarity (STS) Benchmark;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Experiments on how our resource can be leveraged to improve performance on a set of language tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Prior works in constructing sentential paraphrastic resources have worked from large collections of bitext, producing translations of the foreign-language sentence which, when paired with the target-language reference, constitute a set of paraphrases. Working from the very large CzEng parallel corpus, Wieting and Gimpel (2018) produced a single paraphrase for each English sentence by translating from the Czech source. Hu et al. (2019) expanded on this by translating the Czech sentence several times, using positive or negative constraints obtained from the English reference. In terms of producing diverse paraphrases, both approaches are limited because they rely on beam search. There are potentially billions of paraphrases of a sentence (Dreyer and Marcu, 2012 ), yet beam search with recurrent models can only explore a subset of constant size (the beam size). There are techniques for producing more diverse paraphrases, such as the use of positive and negative constraints (Hu et al., 2019) or syntactic fragments (Iyyer et al., 2018) , but these require the user to specify them manually, which can be cumbersome and unreliable.",
"cite_spans": [
{
"start": 422,
"end": 438,
"text": "Hu et al. (2019)",
"ref_id": "BIBREF22"
},
{
"start": 746,
"end": 769,
"text": "(Dreyer and Marcu, 2012",
"ref_id": "BIBREF12"
},
{
"start": 987,
"end": 1004,
"text": "(Hu et al., 2019)",
"ref_id": "BIBREF22"
},
{
"start": 1028,
"end": 1048,
"text": "(Iyyer et al., 2018)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Paraphrase generation pipeline",
"sec_num": "2"
},
{
"text": "We follow these prior works in using CzEng, a Czech-English dataset (Bojar et al., 2016b) , due to its size, diverse domain coverage, and rich syntactic variations, and to allow for a direct comparison of methodologies. However, we propose a new approach to paraphrase generation designed to increase paraphrastic diversity, using a multi-step process: the first part of the pipeline generates a large number of candidate paraphrases through a random process, and the second part whittles them down to a much shorter list. For each {source, target} input pair, we run the following pipeline:",
"cite_spans": [
{
"start": 79,
"end": 100,
"text": "(Bojar et al., 2016b)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Paraphrase generation pipeline",
"sec_num": "2"
},
{
"text": "1. Constrained sampling. We sample translations using a source\u2192target translation model with lexical constraints. We obtain negative constraints by randomly selecting a set of tokens from the \"source\", so that they are not allowed to appear in the translations. Then, we decode each translation by sampling from only the top-k most probable tokens at each time step, after excluding constrained tokens ( \u00a72.1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paraphrase generation pipeline",
"sec_num": "2"
},
{
"text": "2. Dual scoring. The set of samples is then scored against the original source input using a target\u2192source translation model. The scores from the forward and backward models are summed ( \u00a72.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paraphrase generation pipeline",
"sec_num": "2"
},
{
"text": "3. Clustering. The samples are then clustered. The best item from each cluster (according to the summed score) is then returned ( \u00a72.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paraphrase generation pipeline",
"sec_num": "2"
},
{
"text": "Sampling is a more effective way than beam search to explore a model's search space, particularly for auto-regressive models, which do not permit dynamic programming. We introduce two means by which we expand the hypothesis space and produce a more diverse set of paraphrases relative to straightforward beam search.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained sampling",
"sec_num": "2.1"
},
{
"text": "Top-k sampling In auto-regressive neural MT, the standard sampling approach would be to choose a word w_t at each decoder timestep t by sampling from the distribution P(w_t | w_1, ..., w_{t-1}). This approach has been found effective over 1-best beam search for generating source sentences in back-translation (Edunov et al., 2018) . However, for paraphrasing, this is not ideal, since words that are not semantically licensed by the source may be selected. Instead, we propose top-k sampling, in which we choose w_t from the top k most-probable tokens at each time step. This way, we allow the model to sample flexibly, vastly opening up the hypothesis space, without creating a large risk of producing nonsensical translations.",
"cite_spans": [
{
"start": 302,
"end": 323,
"text": "(Edunov et al., 2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained sampling",
"sec_num": "2.1"
},
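The top-k sampling step described above can be sketched as follows. This is a minimal stand-in, not the authors' Sockeye implementation: `probs` (a token-to-probability map for a single decoder timestep) and the `banned` set of negatively constrained tokens are illustrative assumptions.

```python
import random

def top_k_sample(probs, k, banned=frozenset(), rng=random):
    """Sample a token from the k most probable candidates, after removing
    banned (negatively constrained) tokens and renormalizing their mass.
    probs: dict mapping token -> probability at one decoder timestep."""
    allowed = {t: p for t, p in probs.items() if t not in banned}
    top = sorted(allowed.items(), key=lambda tp: tp[1], reverse=True)[:k]
    r = rng.random() * sum(p for _, p in top)
    acc = 0.0
    for tok, p in top:  # inverse-CDF draw over the truncated distribution
        acc += p
        if r <= acc:
            return tok
    return top[-1][0]
```

With k = 1 this reduces to greedy decoding; a larger k widens the hypothesis space while still excluding low-probability tokens that are not licensed by the source.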
{
"text": "Randomized negative constraints Negative constraints are tokens that are not permitted in the decoder output. They are not formally described in the literature, but an implementation was provided alongside the associated positive constraints (Post and Vilar, 2018) . Negative constraints can be provided as tokens or phrases; the decoder tracks the progress of generation through each constraint and adds an infinite cost to the final word of any constraint, precluding its selection in both sampling and beam search. In order to further increase sample diversity when generating the hypotheses ( \u00a72.1), we obtain negative constraints from the source by randomly choosing a subset of tokens. We do this independently multiple times for each input sentence. This provides new sets of constraints for the inputs, independent of the decoding.",
"cite_spans": [
{
"start": 237,
"end": 259,
"text": "(Post and Vilar, 2018)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained sampling",
"sec_num": "2.1"
},
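The randomized constraint sets could be drawn as below; `frac` and the default `n_sets` are illustrative parameters rather than the paper's exact settings (the paper uses 5 constraint sets per input, per Sec. 3.3).

```python
import random

def sample_negative_constraints(input_tokens, n_sets=5, frac=0.2, seed=0):
    """Draw several independent random subsets of the input's tokens; each
    subset serves as the negative-constraint list for one decoding run,
    i.e. tokens the decoder is discouraged from producing."""
    rng = random.Random(seed)
    k = max(1, int(len(input_tokens) * frac))
    return [sorted(rng.sample(input_tokens, k)) for _ in range(n_sets)]
```

Each returned subset yields an independent decoding run over the same input, so the constraint sets diversify the sample pool before any sampling randomness is applied.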
{
"text": "Note that we use subword regularization (Kudo, 2018) during training, causing a different subword segmentation to be applied to the training data each time a sentence is encountered and helping to build more robust models. We only constrain on the Viterbi segmentation, effectively discouraging negatively constrained words from appearing in the output instead of prohibiting them, since there are often ways for the model to produce a word by generating a different decomposition.",
"cite_spans": [
{
"start": 40,
"end": 52,
"text": "(Kudo, 2018)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained sampling",
"sec_num": "2.1"
},
{
"text": "Some semantic changes during paraphrasing, especially omission, are not well reflected by the (forward) probability p_generate from the generating model. However, a model running in the other direction can penalize this omission, as found by Goto and Tanaka (2017) . Thus, we obtain the back-translation probability p_back of each sampled candidate paraphrase, and define the final score for each candidate paraphrase as the joint probability p* = p_generate * p_back, which corresponds to the sum of the two negative log-likelihoods.",
"cite_spans": [
{
"start": 242,
"end": 264,
"text": "Goto and Tanaka (2017)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Back-translation likelihoods",
"sec_num": "2.2"
},
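In log space the dual score is simply the sum of the two models' negative log-likelihoods, so a candidate that omits content is penalized by the backward model even when the forward model scores it well. A toy sketch with made-up scores:

```python
def dual_score(nll_forward, nll_backward):
    """Combined score: adding negative log-likelihoods corresponds to
    multiplying the model probabilities, p* = p_generate * p_back.
    Lower is better."""
    return nll_forward + nll_backward

def best_candidate(candidates):
    """candidates: (text, forward NLL, backward NLL) triples; return the
    text whose combined score is lowest."""
    return min(candidates, key=lambda c: dual_score(c[1], c[2]))[0]
```

Here the second candidate is cheap to generate but expensive to back-translate, so the faithful paraphrase wins.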
{
"text": "The above process produces a large set of translations of the source sentence. Many of them will be minor variants of one another, but we expect that there will be a lot of variety in the large pool. The task now is to reduce this pool to a small set of collectively diverse paraphrastic candidates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edit-distance-based clustering",
"sec_num": "2.3"
},
{
"text": "We address this problem with k-means clustering via Levenshtein (or edit) distance (Miller et al., 2009) . We compute this on lowercased, segmented candidates, after stripping punctuation. Clusters are initialized with the k furthest candidates as measured by edit-distance. We also add the reference sentence as the centroid of an additional cluster and skip the re-centering for that cluster. This improves the chance that the k clusters gather candidates that differ from the reference in distinct ways. When the clustering has converged, we take the candidate with the best score from each cluster (except for the one containing the reference sentence), rank them by score, and take the best n as the final output.",
"cite_spans": [
{
"start": 83,
"end": 104,
"text": "(Miller et al., 2009)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Edit-distance-based clustering",
"sec_num": "2.3"
},
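The cluster-then-select step might be sketched as below, assuming whitespace-tokenized candidates and per-candidate dual scores (lower is better). It keeps the paper's farthest-point seeding and the extra reference-centered cluster, but performs a single assignment pass instead of iterating k-means to convergence, so it is an approximation of the described procedure.

```python
def levenshtein(a, b):
    """Edit distance between token sequences a and b (textbook DP)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]

def cluster_and_select(candidates, scores, reference, k):
    """Seed k clusters with the mutually furthest candidates plus one
    cluster centered on the reference; assign every candidate to its
    nearest centroid; return the best-scoring member of each
    non-reference cluster."""
    toks = [c.split() for c in candidates]
    seeds = [reference.split()]  # seed 0: the reference's own cluster
    while len(seeds) <= min(k, len(candidates)):  # farthest-point seeding
        far = max(range(len(candidates)),
                  key=lambda i: min(levenshtein(toks[i], s) for s in seeds))
        seeds.append(toks[far])
    members = {c: [] for c in range(len(seeds))}
    for i in range(len(candidates)):
        members[min(range(len(seeds)),
                    key=lambda c: levenshtein(toks[i], seeds[c]))].append(i)
    best = [min(m, key=lambda i: scores[i]) for c, m in members.items() if c and m]
    return sorted(candidates[i] for i in best)
```

Exact or near copies of the reference fall into the reference-centered cluster and are discarded; the survivors are one representative per remaining cluster, which is what makes the output collectively diverse.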
{
"text": "All of our experiments are based on the CzEng 1.7 corpus, a subset of CzEng 1.6 (Bojar et al., 2016b) that has been chosen for higher quality. Based on experience with data-quality issues in neural MT (Junczys-Dowmunt, 2018), we decided to further clean the corpus. First, we normalize Unicode punctuation, and keep only bilingual pairs whose English side can be encoded with latin-1 and whose Czech side can be encoded with latin-2. We then filter the data with dual cross-entropy filtering (Junczys-Dowmunt, 2018). We use Sockeye (Hieber et al., 2017) to train two NMT models, CS-EN and EN-CS, on a relatively clean subset of the data provided for WMT 2018 (Bojar et al., 2016a) : Europarl, Wiki titles, and news commentary. We use 4-layer Transformer models (Vaswani et al., 2017) trained to convergence, with held-out likelihood evaluated on a random 500-sentence subset of the WMT16 and WMT17 news test data. These models are then used to score all the remaining CzEng data after deduplication. We keep all sentences with a model score (negative log-likelihood) of less than 3.5. After applying these two filters, we keep 19,723,003 of the 57,065,358 pairs in CzEng 1.7.",
"cite_spans": [
{
"start": 512,
"end": 533,
"text": "(Hieber et al., 2017)",
"ref_id": "BIBREF20"
},
{
"start": 639,
"end": 660,
"text": "(Bojar et al., 2016a)",
"ref_id": "BIBREF4"
},
{
"start": 741,
"end": 763,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "We train two new translation models on the filtered data, the CS-EN generation model (for generating English candidates via sampling) and the EN-CS scoring model (for providing backwards scores of the candidates). Both are Transformer models built with AWS SOCKEYE. The generation model is a 12-layer Transformer with a model and embedding size of 768, 12 attention heads, and a feed-forward layer size of 3072. The scoring model has 6 layers, a model and embedding size of 512, 8 attention heads, and a feed-forward layer size of 2048.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation models",
"sec_num": "3.2"
},
{
"text": "All training data is pre-processed with subword sampling using SentencePiece 2 (Kudo, 2018) with a vocabulary size of 20k and character coverage of 0.9999. We used separate models for Czech and English. At inference time, we use the Viterbi segmentation of each input sentence, for both the generation and scoring models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation models",
"sec_num": "3.2"
},
{
"text": "There are a few parameters involved in the sample-score-cluster pipeline. For each Czech input sentence, we generate 5 sets of random constraints ( \u00a72.1), creating 5 variants of the input. From each of these inputs, we generate 30 samples using top-k sampling with k = 10 (i.e., at each timestep, the model randomly chooses from the top 10 most probable words, according to their scaled distribution, and excluding negatively constrained words). The resulting 150 sentences are scored, and anything with a combined score greater than 3.5 is thrown out. The remaining sentences are clustered into 8 clusters, one of them centered on the English reference. The reference cluster is thrown out, and a list of the best-scoring translation from the remaining 7 clusters is constructed. From this list, the top 5 translations are returned as hypotheses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameters",
"sec_num": "3.3"
},
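Putting these parameters together, the per-sentence bookkeeping looks roughly like this. `sample`, `score`, and `cluster` are caller-supplied stand-ins for the constrained NMT sampler, the dual scorer, and the edit-distance clusterer; handling of the reference-centered cluster is left to `cluster`, so only the bookkeeping here follows the paper.

```python
def paraphrase_pipeline(source, reference, sample, score, cluster,
                        n_constraint_sets=5, samples_per_set=30,
                        max_score=3.5, n_clusters=8, n_out=5):
    """Sample -> filter by combined score -> cluster -> pick the best
    member of each cluster -> return the n_out best overall."""
    pool = []
    for _ in range(n_constraint_sets):  # one decoding run per constraint set
        pool.extend(sample(source) for _ in range(samples_per_set))
    kept = [(c, score(c)) for c in pool]
    kept = [cs for cs in kept if cs[1] <= max_score]  # drop score > 3.5
    groups = cluster(kept, reference, n_clusters)
    best = sorted((min(g, key=lambda cs: cs[1]) for g in groups if g),
                  key=lambda cs: cs[1])
    return [c for c, _ in best[:n_out]]
```

With the defaults this draws 5 x 30 = 150 samples, discards those scoring above 3.5, and returns at most 5 paraphrases.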
{
"text": "We follow the evaluation framework of Hu et al. (2019) , which judged semantic similarity between paraphrases and their references through human evaluation, and lexical diversity via automatic metrics. We use the evaluation results made public by Hu et al. (2019) to enable a direct comparison. Rather than focusing on improving semantic similarity, which is limited by the quality of the bilingual resource, we seek to build a resource with both more lexical and more syntactic diversity.",
"cite_spans": [
{
"start": 38,
"end": 54,
"text": "Hu et al. (2019)",
"ref_id": "BIBREF22"
},
{
"start": 245,
"end": 261,
"text": "Hu et al. (2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "3.4"
},
{
"text": "We obtained the evaluation set from Hu et al. (2019) , which contains 400 English sentences from CzEng. Due to additional filtering, 24 of the 400 (6%) reference sentences are not in PARABANK 2 and are therefore excluded from this evaluation.",
"cite_spans": [
{
"start": 36,
"end": 52,
"text": "Hu et al. (2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "3.4"
},
{
"text": "We set the output size n = 5. After sorting the candidates by negative log-likelihood for each reference, we treat candidates at each rank as an individual system to investigate the expected quality of paraphrases under our approach. For references that produce fewer than 5 paraphrases, the paraphrase with the highest negative log-likelihood is duplicated to fill in ranks that otherwise would be empty. We also artificially pick the paraphrase with the maximum, minimum, and median human semantic similarity judgment under each reference as three additional oracle systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "3.4"
},
{
"text": "For a fair comparison, we used the evaluation setup released by Hu et al. (2019) , which uses the interface from EASL (Sakaguchi and Van Durme, 2018) to collect semantic similarity and grammaticality judgments. Each human annotator is presented with a reference sentence and five paraphrases from different sources. Annotators use a slider bar under each paraphrase to rate the semantic similarity from 0 (Opposite/Irrelevant) to 100 (Identical Meaning). Annotators are also asked to comment on whether the paraphrase is ungrammatical or nonsensical. The reference sentence is repeated next to the paraphrase for easier visual comparison.",
"cite_spans": [
{
"start": 64,
"end": 80,
"text": "Hu et al. (2019)",
"ref_id": "BIBREF22"
},
{
"start": 118,
"end": 149,
"text": "(Sakaguchi and Van Durme, 2018)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic similarity via human judgments",
"sec_num": "3.5"
},
{
"text": "Each paraphrase receives at least 3 independent judgments. Following Hu et al. (2019), we randomly add in the reference sentence as a paraphrase and filter out annotators who fail to score it 100 in more than 10% of such encounters. The results include only annotators who contributed at least 25 judgments and are shown in Tab. 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic similarity via human judgments",
"sec_num": "3.5"
},
{
"text": "BLEU has been a successful metric in evaluating MT systems. However, as noted earlier, monolingual paraphrasing has inherently different objectives than cross-lingual translation. BLEU, in tandem with human evaluation in semantic similarity, makes a good metric for paraphrastic diversity. Table 1 : Paraphrastic diversity measured by (1-BLEU)\u00d7100, bag-of-word intersection/union score\u00d7100, and Tree edit-distance. Systems from this work that receive the best human judgments, worst human judgments, and the median, are included in the table. A higher 1-BLEU suggests higher paraphrastic diversity; a higher Intersection/Union score suggests a higher lexical diversity; a higher Tree edit-distance suggests a higher syntactic diversity. Best in each column, excluding oracle systems, is in bold. * denotes best oracle systems.",
"cite_spans": [],
"ref_spans": [
{
"start": 290,
"end": 297,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Paraphrastic diversity",
"sec_num": "3.6"
},
{
"text": "Here, we use 1-BLEU to measure how different the paraphrases are from the references. We generate 5 paraphrases for each reference sentence using the approach outlined in this work. To account for randomness, we average over two independent runs in the results shown in Tab. 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paraphrastic diversity",
"sec_num": "3.6"
},
{
"text": "We consider two sources of paraphrastic diversity: 1) lexical diversity, the use of different words; and 2) syntactic diversity, the change of sentence or phrasal structure. We separately measure them using bag-of-word Intersection/Union scores and parse-tree edit-distances, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paraphrastic diversity",
"sec_num": "3.6"
},
{
"text": "Lexical diversity A sentence is lexically different from the reference when it uses lexical paraphrases (e.g., synonyms) to convey similar meanings. We calculate the case-insensitive piece Intersection/Union score after stripping punctuation and the SentencePiece white space symbol. All pieces are lowercased and collected into a set. The more pieces the two sentences share, the higher the score will be. The Intersection/Union scores between the reference and the paraphrases are shown in Tab. 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paraphrastic diversity",
"sec_num": "3.6"
},
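The lexical-diversity score might be computed as below. The paper applies it to SentencePiece pieces; this sketch uses whitespace tokens for illustration.

```python
import string

def lexical_iou(a, b):
    """Case-insensitive bag-of-words intersection/union score: the more
    tokens the two sentences share, the higher the score (1.0 means the
    two token sets are identical)."""
    strip = str.maketrans("", "", string.punctuation)
    sa = set(a.lower().translate(strip).split())
    sb = set(b.lower().translate(strip).split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0
```

A pair that differs only in casing and punctuation scores 1.0, while a heavily reworded paraphrase scores close to 0.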
{
"text": "We consider the edit-distance between the parse trees of the reference and the paraphrase as a metric of syntactic diversity. Parse-tree edit-distance is considered a useful feature in NLP tasks (Yao et al., 2013) . The more syntactic variation there is between two sentences, the larger the tree edit-distance will be. We consider only the top 3 levels of the parse trees, excluding any terminals. Sentences are parsed with Stanford CoreNLP (Manning et al., 2014) ; the tree edit-distance is calculated with the APTED (Pawlik and Augsten, 2015a,b) algorithm. The average tree edit-distance for each system is shown in Tab. 1. Hu et al. (2019) produced multiple paraphrases for each reference. While these were shown to be diverse compared to the reference, the authors did not investigate whether the paraphrases are trivial rewrites of one another, as is likely the case with beam search under a few lexical constraints. Our clustering step is specifically designed to retrieve collectively diverse paraphrases.",
"cite_spans": [
{
"start": 194,
"end": 212,
"text": "(Yao et al., 2013)",
"ref_id": "BIBREF51"
},
{
"start": 443,
"end": 465,
"text": "(Manning et al., 2014)",
"ref_id": "BIBREF30"
},
{
"start": 520,
"end": 549,
"text": "(Pawlik and Augsten, 2015a,b)",
"ref_id": null
},
{
"start": 628,
"end": 644,
"text": "Hu et al. (2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic diversity",
"sec_num": null
},
{
"text": "We use the same metrics to evaluate pairs of systems from our work and compare them against PARABANK (Hu et al., 2019) , as shown in Tab. 2. The max/min/median systems are oracle systems derived from human semantic similarity judgment scores. The human judgments from Tab. 1 show our paraphrases are of comparable quality to PARABANK, while maintaining a much higher degree of diversity among paraphrases of the same reference, as shown by automatic metrics.",
"cite_spans": [
{
"start": 101,
"end": 118,
"text": "(Hu et al., 2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Diversity among paraphrases",
"sec_num": null
},
{
"text": "In addition to evaluating via human judgments, we consider the same evaluation mechanism as PARANMT: the use of paraphrase corpora as training data for the Semantic Textual Similarity (STS) task. STS aims to measure the degree of equivalence in meaning or semantics between a pair of sentences. Notably, STS has been a part of the SemEval workshop (2012-2017) (Agirre et al., 2016) . The evaluation consists of human-annotated English sentence pairs, scored on a scale of 0 to 5 to quantify similarity of meaning, with 0 being the least and 5 the most similar. Prior work compared three encoding mechanisms: WORD, TRIGRAM, and LSTM. The WORD model (Wieting et al., 2016) averages the embedding for each word in the sentence into a fixed-length vector embedding for the sentence; the TRIGRAM model (Huang et al., 2013) averages over character trigrams; and the LSTM (Hochreiter and Schmidhuber, 1997) approach averages over the final hidden states to obtain the sentence embedding.",
"cite_spans": [
{
"start": 295,
"end": 315,
"text": "Agirre et al. (2016)",
"ref_id": "BIBREF0"
},
{
"start": 626,
"end": 648,
"text": "(Wieting et al., 2016)",
"ref_id": "BIBREF48"
},
{
"start": 775,
"end": 795,
"text": "(Huang et al., 2013)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic similarity on STS Benchmark",
"sec_num": "3.7"
},
{
"text": "Encoders are trained on paraphrase pairs (s, s') with a margin-based loss function l(s, s', t, t') =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic similarity on STS Benchmark",
"sec_num": "3.7"
},
{
"text": "max(0, \u03b4 \u2212 cos[g(s), g(s')] + cos[g(s), g(t)]) + max(0, \u03b4 \u2212 cos[g(s), g(s')] + cos[g(s'), g(t')])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic similarity on STS Benchmark",
"sec_num": "3.7"
},
{
"text": "where g is one of (WORD, TRIGRAM, LSTM) and (t, t') is a negative sample selected from a megabatch, an aggregation of m minibatches. 3 We evaluate the WORD model trained 4 on PARANMT, PARABANK, and PARABANK 2 (our work). We retrieved the paraphrases from PARABANK and our work that share the same references as PARANMT-5M. Our work is evaluated as 5 systems, based on the rank in the output; the last available paraphrase is used when lower ranks are empty. We also include a system that uses a pair of paraphrases, instead of a reference and a paraphrase. We keep PARABANK paraphrases that have a bag-of-word intersection/union score of 0.7 or less, and use the 1-best based on regression scores. In Tab. 3, we report Pearson's r and Spearman's r on the STS'16 test set. Sentence embeddings trained on our work exhibit higher correlation with human judgments, which reflects the superior paraphrastic diversity of the corpus. (Footnote 3: We confirmed with Wieting and Gimpel that this loss captures their open implementation, which we employ. Wieting and Gimpel (2018) described their loss as max(0, \u03b4 \u2212 cos(g(s), g(s')) + cos(g(s), g(t))), which is equivalent under their assumption that the paraphrases are equivalent.)",
"cite_spans": [
{
"start": 134,
"end": 135,
"text": "3",
"ref_id": null
},
{
"start": 261,
"end": 262,
"text": "3",
"ref_id": null
},
{
"start": 291,
"end": 318,
"text": "Wieting and Gimpel, that it",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic similarity on STS Benchmark",
"sec_num": "3.7"
},
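On precomputed sentence embeddings, the margin loss above can be written out directly. A minimal sketch on plain Python lists; `delta=0.4` is an illustrative margin, not necessarily the setting used in the cited work.

```python
import math

def cos_sim(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def margin_loss(gs, gs_p, gt, gt_p, delta=0.4):
    """l(s, s', t, t') = max(0, delta - cos[g(s), g(s')] + cos[g(s), g(t)])
                       + max(0, delta - cos[g(s), g(s')] + cos[g(s'), g(t')])
    where gs, gs_p embed a paraphrase pair and gt, gt_p embed the negative
    samples drawn from the megabatch."""
    sim = cos_sim(gs, gs_p)
    return (max(0.0, delta - sim + cos_sim(gs, gt))
            + max(0.0, delta - sim + cos_sim(gs_p, gt_p)))
```

The loss is zero whenever the paraphrase pair is more similar than each negative pair by at least the margin, which is exactly the condition the hinge terms encode.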
{
"text": "Paraphrastic data can be used to fine-tune contextualized encoders such as BERT (Devlin et al., 2018) . We frame the fine-tuning task as paraphrase identification (Das and Smith, 2009) : given a pair of sentences, the task is to classify them as paraphrases or non-paraphrases. We compute, for each sentence in PARANMT-5M, the sentence embeddings generated by the WORD model trained in \u00a73.7. For each sentence s, we then find the (approximate) nearest neighbour n which is not s', among all of the sentences. We thus obtain two pairs, where (s, s') is a paraphrase pair and (s, n) is a non-paraphrase pair. We use these to train a binary classifier with cross-entropy loss. We then use this BERT fine-tuned on paraphrases (henceforth pBERT) for fine-tuning on SQuAD 2.0 (Rajpurkar et al., 2018) and 4 NLP tasks present in the General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2019) : Quora Question Pairs (QQP) (Chen et al., 2017) , Multi-Genre Natural Language Inference (MNLI) (Williams et al., 2018) , the Semantic Textual Similarity Benchmark (STS-B) (Agirre et al., 2016) , and the Microsoft Research Paraphrase Corpus (MRPC) (Dolan et al., 2004) . Following the model formulation, hyper-parameter selection, and training procedure specified in Devlin et al. (2018) , we add a single task-specific, randomly initialized output layer for the classifier.",
"cite_spans": [
{
"start": 80,
"end": 101,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF9"
},
{
"start": 163,
"end": 184,
"text": "(Das and Smith, 2009)",
"ref_id": "BIBREF8"
},
{
"start": 756,
"end": 780,
"text": "(Rajpurkar et al., 2018)",
"ref_id": "BIBREF41"
},
{
"start": 871,
"end": 890,
"text": "(Wang et al., 2019)",
"ref_id": "BIBREF47"
},
{
"start": 920,
"end": 939,
"text": "(Chen et al., 2017)",
"ref_id": "BIBREF7"
},
{
"start": 988,
"end": 1011,
"text": "(Williams et al., 2018)",
"ref_id": "BIBREF50"
},
{
"start": 1064,
"end": 1085,
"text": "(Agirre et al., 2016)",
"ref_id": "BIBREF0"
},
{
"start": 1140,
"end": 1160,
"text": "(Dolan et al., 2004)",
"ref_id": "BIBREF10"
},
{
"start": 1258,
"end": 1278,
"text": "Devlin et al. (2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Improving contextualized encoders with paraphrastic data",
"sec_num": "3.8"
},
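The negative-mining step above, pairing each sentence with its most similar non-paraphrase neighbour, can be sketched with brute-force cosine search. The paper uses approximate nearest-neighbour search at corpus scale, so this NumPy version with invented helper names is an illustrative stand-in, not the actual pipeline.

```python
import numpy as np

def mine_negatives(emb, pair_of):
    """For each sentence i (row emb[i]) whose paraphrase partner is
    pair_of[i], return the index of the most similar *other* sentence,
    excluding i itself and its partner: a hard non-paraphrase example."""
    norm = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = norm @ norm.T                      # pairwise cosine similarities
    negatives = []
    for i in range(len(emb)):
        sim[i, i] = -np.inf                  # never select the sentence itself
        sim[i, pair_of[i]] = -np.inf         # ...or its paraphrase partner
        negatives.append(int(np.argmax(sim[i])))
    return negatives

# Toy corpus: sentences 0/1 are paraphrases of each other, as are 2/3.
emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
print(mine_negatives(emb, pair_of=[1, 0, 3, 2]))  # → [3, 3, 1, 1]
```

Each (s, pair_of[s]) pair then serves as a positive example and (s, negatives[s]) as a hard negative for the binary cross-entropy classifier.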
{
"text": "We present our results in Tab. 4 and Tab. 5. We observe gains for STS-B, MRPC and QQP, tasks strongly related to paraphrase identification. Fine-tuning on our paraphrase corpus also improves performance on SQuAD, a questionanswering task, while slightly degrading performance on MNLI. Overall, simple fine-tuning of BERT on our corpus leads to improvements on downstream tasks, in particular when the task is related to paraphrase detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Improving contextualized encoders with paraphrastic data",
"sec_num": "3.8"
},
{
"text": "Paraphrastic resources exist across different scopes (i.e., lexical, phrasal, sentential) and different creation strategies (i.e., manually curated, automatically generated). For a more comprehensive survey on data-driven approaches to paraphrasing, please refer to Madnani and Dorr (2010) .",
"cite_spans": [
{
"start": 266,
"end": 289,
"text": "Madnani and Dorr (2010)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related works 4.1 Paraphrastic resources",
"sec_num": "4"
},
{
"text": "Sub-sentential resources WordNet (Miller, 1995) , FrameNet (Baker et al., 1998) , and VerbNet (Schuler, 2006) can be used to extract paraphrastic expressions at lexical levels. They contain the grouping of words or phrases that share similar semantics and sometimes entailment relations.",
"cite_spans": [
{
"start": 33,
"end": 47,
"text": "(Miller, 1995)",
"ref_id": "BIBREF32"
},
{
"start": 59,
"end": 79,
"text": "(Baker et al., 1998)",
"ref_id": "BIBREF1"
},
{
"start": 94,
"end": 109,
"text": "(Schuler, 2006)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related works 4.1 Paraphrastic resources",
"sec_num": "4"
},
{
"text": "While FrameNet and VerbNet do have example sentences or frames where lexical units are put into contexts, there is no explicit paraphrastic relations among these examples. Also, these datasets tend to be small, as they were curated manually. There have been efforts to augment such resources with automatic methods (Snow et al., 2006; Pavlick et al., 2015b) , but they are still confined to lexical level and sometimes require the use of other paraphrastic resources (Pavlick et al., 2015b) .",
"cite_spans": [
{
"start": 315,
"end": 334,
"text": "(Snow et al., 2006;",
"ref_id": "BIBREF45"
},
{
"start": 335,
"end": 357,
"text": "Pavlick et al., 2015b)",
"ref_id": "BIBREF37"
},
{
"start": 467,
"end": 490,
"text": "(Pavlick et al., 2015b)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related works 4.1 Paraphrastic resources",
"sec_num": "4"
},
{
"text": "PPDB (Ganitkevitch et al., 2013; Pavlick et al., 2015a) automated the generation of lexical paraphrases via bilingual pivoting, taking advantage of the relative abundance of bilingual corpora. While significantly larger and more informative (e.g., ranking, entailment relations, etc.) than the above manually curated resources, PPDB suffers from ambiguity as words or phrases are removed from their sentential contexts.",
"cite_spans": [
{
"start": 5,
"end": 32,
"text": "(Ganitkevitch et al., 2013;",
"ref_id": "BIBREF18"
},
{
"start": 33,
"end": 55,
"text": "Pavlick et al., 2015a)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related works 4.1 Paraphrastic resources",
"sec_num": "4"
},
{
"text": "Sentential resources There exists multiple human translations in the same language for some classic readings. Barzilay and McKeown (2001) sought to extract lexical paraphrastic expression from such sources. Unfortunately such resources -along with those manually constructed for text generation research (Robin, 1995; Pang et al., 2003) -are small and limited in domain.",
"cite_spans": [
{
"start": 110,
"end": 137,
"text": "Barzilay and McKeown (2001)",
"ref_id": "BIBREF2"
},
{
"start": 304,
"end": 317,
"text": "(Robin, 1995;",
"ref_id": "BIBREF42"
},
{
"start": 318,
"end": 336,
"text": "Pang et al., 2003)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related works 4.1 Paraphrastic resources",
"sec_num": "4"
},
{
"text": "PARANMT and PARABANK are two much larger sentential paraphrastic resources created through back-translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related works 4.1 Paraphrastic resources",
"sec_num": "4"
},
{
"text": "Real life is sometimes thoughtless and mean. Hey, stop right there! PARANMT:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference:",
"sec_num": null
},
{
"text": "real life is sometimes reckless and cruel . hey , stop .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference:",
"sec_num": null
},
{
"text": "The real life is occasionally ruthless and cruel. Stay where you are! The real world is occasionally ruthless and cruel. The real life is sometimes reckless and cruel. Our work:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PARABANK:",
"sec_num": null
},
{
"text": "True life is sometimes ruthless and cruel. Hold your position! Actual life is sometimes ruthless and cruel.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PARABANK:",
"sec_num": null
},
{
"text": "Stay where you are! Sometimes real life is ruthless and cruel.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PARABANK:",
"sec_num": null
},
{
"text": "Stay in position! Real life can be inconsiderate, cruel sometimes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PARABANK:",
"sec_num": null
},
{
"text": "Remain where you are! Real living is a harsh and unscrupulous one, at times. Stay put! Table 6 : Selected examples from our work, compared to paraphrastic resources with prior approaches. Our work has paraphrases that are not only different from the reference, but also diverse among themselves.",
"cite_spans": [],
"ref_spans": [
{
"start": 87,
"end": 94,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "PARABANK:",
"sec_num": null
},
{
"text": "PARANMT is an automatically generated sentential paraphrastic resource through back-translating bilingual resources. It leveraged the imperfect ability of Neural Machine Translation (NMT) to recreate the translation target by conditioning on the source side of the bitext.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation-based Approaches",
"sec_num": "4.2"
},
{
"text": "PARABANK took a similar approach but with the inclusion of lexical constraints from the target side of the bitext. This step allows for multiple translations from one bilingual sentence pair and promotes lexical diversity. Their work, despite being larger and shown to be less noisy than PARANMT, relies on heuristics to produce hard constraints on the decoder, which often causes unintended changes in semantics or grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation-based Approaches",
"sec_num": "4.2"
},
{
"text": "Both works largely follow standard approaches in NMT, generating 1-best hypotheses given a source text and a set of constraints using beam search. Sentential paraphrasing, nevertheless, has fundamentally different objectives than MT. The latter strives to find the best elicitation that is both fluent and semantically close to the foreign text to convey information across languages. The former, on the other hand, seeks syntactically and lexically diverse expressions that convey the same meaning, with the goal of capturing the intrinsic flexibility and uncertainty of human communications. This work attempts to adapt the methodology to these objectives of monolingual paraphrasing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation-based Approaches",
"sec_num": "4.2"
},
{
"text": "In the context of semantic parsing, Berant and Liang (2014) use a paraphrase classification module to determine the match between a canonical utterance and a logical form, both using a phrase table and distributed representations. To improve question answering (QA), Duboue and Chu-Carroll (2006) generate paraphrases of a given question using back-translation, and optionally replace the original question with the most relevant paraphrase. Dong et al. (2017) tackle QA by marginalizing the probability of an answer over a set of paraphrases, generated using rule-based and NMT-based methods. Fader et al. (2013) use a corpus of questions with paraphrases, to construct a corpus of semantically equivalent queries.",
"cite_spans": [
{
"start": 36,
"end": 59,
"text": "Berant and Liang (2014)",
"ref_id": "BIBREF3"
},
{
"start": 267,
"end": 296,
"text": "Duboue and Chu-Carroll (2006)",
"ref_id": "BIBREF13"
},
{
"start": 442,
"end": 460,
"text": "Dong et al. (2017)",
"ref_id": "BIBREF11"
},
{
"start": 594,
"end": 613,
"text": "Fader et al. (2013)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Leveraging paraphrases in NLP",
"sec_num": "4.3"
},
{
"text": "The task of paraphrase identification, which we use as a fine-tuning objective, has been studied as a task in itself. Das and Smith (2009) use grammars to perform generative modeling of paraphrases. Madnani et al. (2012) identify paraphrases by relying only on MT metrics as features. Ferreira et al. (2018) feed sentence similarity measured with hand-crafted features to machine learning algorithms. Convolutional neural networks have been introduced by Yin and Sch\u00fctze (2015) and Chen et al. (2018) , and further augmented with LSTMs (Kubal and Nimkar, 2018) and attention mechanisms (Fan et al., 2018) .",
"cite_spans": [
{
"start": 118,
"end": 138,
"text": "Das and Smith (2009)",
"ref_id": "BIBREF8"
},
{
"start": 199,
"end": 220,
"text": "Madnani et al. (2012)",
"ref_id": "BIBREF29"
},
{
"start": 285,
"end": 307,
"text": "Ferreira et al. (2018)",
"ref_id": "BIBREF17"
},
{
"start": 482,
"end": 500,
"text": "Chen et al. (2018)",
"ref_id": "BIBREF6"
},
{
"start": 586,
"end": 604,
"text": "(Fan et al., 2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Leveraging paraphrases in NLP",
"sec_num": "4.3"
},
{
"text": "A presumed goal for building a sentential paraphrase resource is to capture different ways of expressing the same thing: diversity matters. Previous work on paraphrastic resource creation relied on decoding techniques from NMT using bilingual corpora, with limited success in promoting diverse expressions. We have presented a new community resource produced by sampling and clustering. We evaluated our method against prior works Hu et al., 2019) and found significant gains in both lexical and syntactic diversity. Further, we've shown how straightforward fine-tuning of a state-of-the-art contextual encoder on our resource can improve performance on a variety of language tasks.",
"cite_spans": [
{
"start": 431,
"end": 447,
"text": "Hu et al., 2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "5"
},
{
"text": "Available at http://nlp.jhu.edu/parabank2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/google/ sentencepiece",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported in part by a National Science Foundation collaborative grant (BCS-1748969/BCS-1749025) The MegaAttitude Project: Investigating selection and polysemy at the scale of the lexicon.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Semeval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Carmen",
"middle": [],
"last": "Banea",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"M"
],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [
"T"
],
"last": "Diab",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Gonzalez-Agirre",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "German",
"middle": [],
"last": "Rigau",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2016,
"venue": "The Association for Computer Linguistics",
"volume": "",
"issue": "",
"pages": "497--511",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Carmen Banea, Daniel M. Cer, Mona T. Diab, Aitor Gonzalez-Agirre, Rada Mihalcea, Ger- man Rigau, and Janyce Wiebe. 2016. Semeval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation. In SemEval@NAACL- HLT, pages 497-511. The Association for Computer Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The berkeley framenet project",
"authors": [
{
"first": "Collin",
"middle": [
"F"
],
"last": "Baker",
"suffix": ""
},
{
"first": "Charles",
"middle": [
"J"
],
"last": "Fillmore",
"suffix": ""
},
{
"first": "John",
"middle": [
"B"
],
"last": "Lowe",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of ACL/ICCL, ACL '98",
"volume": "",
"issue": "",
"pages": "86--90",
"other_ids": {
"DOI": [
"10.3115/980845.980860"
]
},
"num": null,
"urls": [],
"raw_text": "Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The berkeley framenet project. In Proceed- ings of ACL/ICCL, ACL '98, pages 86-90, Strouds- burg, PA, USA.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Extracting paraphrases from a parallel corpus",
"authors": [
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kathleen R Mckeown",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Regina Barzilay and Kathleen R McKeown. 2001. Ex- tracting paraphrases from a parallel corpus. In Pro- ceedings of ACL.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Semantic parsing via paraphrasing",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1415--1425",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Berant and Percy Liang. 2014. Semantic parsing via paraphrasing. In Proceedings of the 52nd Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), vol- ume 1, pages 1415-1425.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Findings of the 2016 conference on machine translation",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Rajen",
"middle": [],
"last": "Chatterjee",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "Antonio",
"middle": [
"Jimeno"
],
"last": "Yepes",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Varvara",
"middle": [],
"last": "Logacheva",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the First Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "131--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, An- tonio Jimeno Yepes, Philipp Koehn, Varvara Lo- gacheva, Christof Monz, et al. 2016a. Findings of the 2016 conference on machine translation. In Pro- ceedings of the First Conference on Machine Trans- lation: Volume 2, Shared Task Papers, volume 2, pages 131-198.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "CzEng 1.6: Enlarged Czech-English Parallel Corpus with Processing Tools Dockered",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Du\u0161ek",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Kocmi",
"suffix": ""
},
{
"first": "Jind\u0159ich",
"middle": [],
"last": "Libovick\u00fd",
"suffix": ""
},
{
"first": "Michal",
"middle": [],
"last": "Nov\u00e1k",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Popel",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Sudarikov",
"suffix": ""
},
{
"first": "Du\u0161an",
"middle": [],
"last": "Vari\u0161",
"suffix": ""
}
],
"year": 2016,
"venue": "Text, Speech, and Dialogue: 19th International Conference, TSD 2016, number 9924 in Lecture Notes in Computer Science",
"volume": "",
"issue": "",
"pages": "231--238",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Bojar, Ond\u0159ej Du\u0161ek, Tom Kocmi, Jind\u0159ich Li- bovick\u00fd, Michal Nov\u00e1k, Martin Popel, Roman Su- darikov, and Du\u0161an Vari\u0161. 2016b. CzEng 1.6: En- larged Czech-English Parallel Corpus with Process- ing Tools Dockered. In Text, Speech, and Dialogue: 19th International Conference, TSD 2016, number 9924 in Lecture Notes in Computer Science, pages 231-238, Cham / Heidelberg / New York / Dor- drecht / London. Masaryk University, Springer In- ternational Publishing.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Gated convolutional neural network for sentence matching. memory",
"authors": [
{
"first": "Peixin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Wu",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Zhi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Lanhua",
"middle": [],
"last": "You",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peixin Chen, Wu Guo, Zhi Chen, Jian Sun, and Lanhua You. 2018. Gated convolutional neural network for sentence matching. memory, 1:3.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "First quora dataset release: Question pairs",
"authors": [
{
"first": "Zihan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Hongbo",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xiaoji",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Leqi",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zihan Chen, Hongbo Zhang, Xiaoji Zhang, and Leqi Zhao. 2017. First quora dataset release: Question pairs.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Paraphrase identification as probabilistic quasi-synchronous recognition",
"authors": [
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Noah",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "1",
"issue": "",
"pages": "468--476",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dipanjan Das and Noah A Smith. 2009. Paraphrase identification as probabilistic quasi-synchronous recognition. In Proceedings of the Joint Confer- ence of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Lan- guage Processing of the AFNLP: Volume 1-Volume 1, pages 468-476. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language under- standing. CoRR, abs/1810.04805.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources",
"authors": [
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20th International Conference on Computational Linguistics, COLING '04",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.3115/1220355.1220406"
]
},
"num": null,
"urls": [],
"raw_text": "Bill Dolan, Chris Quirk, and Chris Brockett. 2004. Un- supervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In Pro- ceedings of the 20th International Conference on Computational Linguistics, COLING '04, Strouds- burg, PA, USA. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Learning to paraphrase for question answering",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Mallinson",
"suffix": ""
},
{
"first": "Siva",
"middle": [],
"last": "Reddy",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1708.06022"
]
},
"num": null,
"urls": [],
"raw_text": "Li Dong, Jonathan Mallinson, Siva Reddy, and Mirella Lapata. 2017. Learning to paraphrase for question answering. arXiv preprint arXiv:1708.06022.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Hyter: Meaning-equivalent semantics for translation evaluation",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Dreyer",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "162--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Markus Dreyer and Daniel Marcu. 2012. Hyter: Meaning-equivalent semantics for translation eval- uation. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 162-171. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Answering the question you wish they had asked: The impact of paraphrasing for question answering",
"authors": [
{
"first": "Pablo",
"middle": [],
"last": "Duboue",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Chu-Carroll",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Human Language Technology Conference of the NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pablo Duboue and Jennifer Chu-Carroll. 2006. An- swering the question you wish they had asked: The impact of paraphrasing for question answering. In Proceedings of the Human Language Technol- ogy Conference of the NAACL, Companion Volume: Short Papers.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Understanding back-translation at scale",
"authors": [
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "489--500",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, pages 489-500. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Paraphrase-driven learning for open question answering",
"authors": [
{
"first": "Anthony",
"middle": [],
"last": "Fader",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1608--1618",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2013. Paraphrase-driven learning for open question answering. In Proceedings of the 51st Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1608- 1618.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A globalization-semantic matching neural network for paraphrase identification",
"authors": [
{
"first": "Wutao",
"middle": [],
"last": "Miao Fan",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Mingming",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Ping",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th ACM International Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "2067--2075",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miao Fan, Wutao Lin, Yue Feng, Mingming Sun, and Ping Li. 2018. A globalization-semantic matching neural network for paraphrase identification. In Pro- ceedings of the 27th ACM International Conference on Information and Knowledge Management, pages 2067-2075. ACM.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Combining sentence similarities measures to identify paraphrases",
"authors": [
{
"first": "Rafael",
"middle": [],
"last": "Ferreira",
"suffix": ""
},
{
"first": "D",
"middle": [
"C"
],
"last": "George",
"suffix": ""
},
{
"first": "Fred",
"middle": [],
"last": "Cavalcanti",
"suffix": ""
},
{
"first": "Rafael",
"middle": [
"Dueire"
],
"last": "Freitas",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lins",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Steven",
"suffix": ""
},
{
"first": "Marcelo",
"middle": [],
"last": "Simske",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Riss",
"suffix": ""
}
],
"year": 2018,
"venue": "Computer Speech & Language",
"volume": "47",
"issue": "",
"pages": "59--73",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rafael Ferreira, George DC Cavalcanti, Fred Freitas, Rafael Dueire Lins, Steven J Simske, and Marcelo Riss. 2018. Combining sentence similarities mea- sures to identify paraphrases. Computer Speech & Language, 47:59-73.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Ppdb: The paraphrase database",
"authors": [
{
"first": "Juri",
"middle": [],
"last": "Ganitkevitch",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings NAACL-HLT 2013",
"volume": "",
"issue": "",
"pages": "758--764",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. Ppdb: The paraphrase database. In Proceedings NAACL-HLT 2013, pages 758-764.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Detecting untranslated content for neural machine translation",
"authors": [
{
"first": "Isao",
"middle": [],
"last": "Goto",
"suffix": ""
},
{
"first": "Hideki",
"middle": [],
"last": "Tanaka",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First Workshop on Neural Machine Translation",
"volume": "",
"issue": "",
"pages": "47--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Isao Goto and Hideki Tanaka. 2017. Detecting untrans- lated content for neural machine translation. In Pro- ceedings of the First Workshop on Neural Machine Translation, pages 47-55, Vancouver. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Sockeye: A toolkit for neural machine translation",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hieber",
"suffix": ""
},
{
"first": "Tobias",
"middle": [],
"last": "Domhan",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Denkowski",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vilar",
"suffix": ""
},
{
"first": "Artem",
"middle": [],
"last": "Sokolov",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Clifton",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Hieber, Tobias Domhan, Michael Denkowski, David Vilar, Artem Sokolov, Ann Clifton, and Matt Post. 2017. Sockeye: A toolkit for neural machine translation. CoRR, abs/1712.05690.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "PARABANK: Monolingual bitext generation and sentential paraphrasing via lexically-constrained neural machine translation",
"authors": [
{
"first": "J",
"middle": [
"Edward"
],
"last": "Hu",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Rudinger",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of AAAI 2019",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Edward Hu, Rachel Rudinger, Matt Post, and Ben- jamin Van Durme. 2019. PARABANK: Monolin- gual bitext generation and sentential paraphrasing via lexically-constrained neural machine translation. In Proceedings of AAAI 2019, Hawaii, USA. AAAI.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Learning deep structured semantic models for web search using clickthrough data",
"authors": [
{
"first": "Po-Sen",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2013,
"venue": "ACM International Conference on Information and Knowledge Management (CIKM)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Po-Sen Huang, , Jianfeng Gao, , and and. 2013. Learn- ing deep structured semantic models for web search using clickthrough data. ACM International Confer- ence on Information and Knowledge Management (CIKM).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Adversarial example generation with syntactically controlled paraphrase networks",
"authors": [
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1875--1885",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1875-1885, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Dual conditional cross-entropy filtering of noisy parallel corpora",
"authors": [
{
"first": "Marcin",
"middle": [],
"last": "Junczys-Dowmunt",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers",
"volume": "",
"issue": "",
"pages": "888--895",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcin Junczys-Dowmunt. 2018. Dual conditional cross-entropy filtering of noisy parallel corpora. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 888-895. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A hybrid deep learning architecture for paraphrase identification",
"authors": [
{
"first": "Divesh",
"middle": [
"R"
],
"last": "Kubal",
"suffix": ""
},
{
"first": "Anant",
"middle": [
"V"
],
"last": "Nimkar",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 9th International Conference on Computing, Communication and Networking Technologies (ICCCNT)",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Divesh R Kubal and Anant V Nimkar. 2018. A hy- brid deep learning architecture for paraphrase iden- tification. In 2018 9th International Conference on Computing, Communication and Networking Tech- nologies (ICCCNT), pages 1-6. IEEE.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Subword regularization: Improving neural network translation models with multiple subword candidates",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "66--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taku Kudo. 2018. Subword regularization: Improv- ing neural network translation models with multiple subword candidates. In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 66-75. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Generating phrasal and sentential paraphrases: A survey of data-driven methods",
"authors": [
{
"first": "Nitin",
"middle": [],
"last": "Madnani",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [
"J"
],
"last": "Dorr",
"suffix": ""
}
],
"year": 2010,
"venue": "Computational Linguistics",
"volume": "36",
"issue": "3",
"pages": "341--387",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitin Madnani and Bonnie J Dorr. 2010. Generat- ing phrasal and sentential paraphrases: A survey of data-driven methods. Computational Linguistics, 36(3):341-387.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Re-examining machine translation metrics for paraphrase identification",
"authors": [
{
"first": "Nitin",
"middle": [],
"last": "Madnani",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Tetreault",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Chodorow",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "182--190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitin Madnani, Joel Tetreault, and Martin Chodorow. 2012. Re-examining machine translation metrics for paraphrase identification. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, pages 182-190. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "The stanford corenlp natural language processing toolkit",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language pro- cessing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Lin- guistics: System Demonstrations, pages 55-60, Bal- timore, Maryland. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "String Metric, Damerau?Levenshtein Distance, Spell Checker, Hamming Distance",
"authors": [
{
"first": "Frederic",
"middle": [
"P"
],
"last": "Miller",
"suffix": ""
},
{
"first": "Agnes",
"middle": [
"F"
],
"last": "Vandome",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Mcbrewster",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frederic P. Miller, Agnes F. Vandome, and John McBrewster. 2009. Levenshtein Distance: Informa- tion Theory, Computer Science, String (Computer Science), String Metric, Damerau?Levenshtein Dis- tance, Spell Checker, Hamming Distance. Alpha Press.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Wordnet: a lexical database for english",
"authors": [
{
"first": "George",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39- 41.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Sentential paraphrasing as black-box machine translation",
"authors": [
{
"first": "Courtney",
"middle": [],
"last": "Napoles",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2016,
"venue": "Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "62--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Courtney Napoles, Chris Callison-Burch, and Matt Post. 2016. Sentential paraphrasing as black-box machine translation. In Proceedings of the NAACL 2016, pages 62-66, San Diego, California. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Analyzing uncertainty in neural machine translation",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 35th International Conference on Machine Learning",
"volume": "80",
"issue": "",
"pages": "3956--3965",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Michael Auli, David Grangier, and Marc'Aurelio Ranzato. 2018. Analyzing uncer- tainty in neural machine translation. In Proceed- ings of the 35th International Conference on Ma- chine Learning, volume 80 of Proceedings of Ma- chine Learning Research, pages 3956-3965, Stock- holmsmssan, Stockholm Sweden. PMLR.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Syntax-based alignment of multiple translations: Extracting paraphrases and generating new sentences",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of HLT/NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Pang, Kevin Knight, and Daniel Marcu. 2003. Syntax-based alignment of multiple translations: Extracting paraphrases and generating new sen- tences. In Proceedings of HLT/NAACL.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Ppdb 2.0: Better paraphrase ranking, finegrained entailment relations, word embeddings, and style classification",
"authors": [
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Pushpendre",
"middle": [],
"last": "Rastogi",
"suffix": ""
},
{
"first": "Juri",
"middle": [],
"last": "Ganitkevitch",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ACL/IJCNLP",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellie Pavlick, Pushpendre Rastogi, Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2015a. Ppdb 2.0: Better paraphrase ranking, fine- grained entailment relations, word embeddings, and style classification. In Proceedings of ACL/IJCNLP, volume 2.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Framenet+: Fast paraphrastic tripling of framenet",
"authors": [
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Travis",
"middle": [],
"last": "Wolfe",
"suffix": ""
},
{
"first": "Pushpendre",
"middle": [],
"last": "Rastogi",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the ACL/IJCNLP",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellie Pavlick, Travis Wolfe, Pushpendre Rastogi, Chris Callison-Burch, Mark Dredze, and Benjamin Van Durme. 2015b. Framenet+: Fast paraphras- tic tripling of framenet. In Proceedings of the ACL/IJCNLP, volume 2.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Efficient computation of the tree edit distance",
"authors": [
{
"first": "Mateusz",
"middle": [],
"last": "Pawlik",
"suffix": ""
},
{
"first": "Nikolaus",
"middle": [],
"last": "Augsten",
"suffix": ""
}
],
"year": 2015,
"venue": "ACM Trans. Database Syst",
"volume": "40",
"issue": "1",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/2699485"
]
},
"num": null,
"urls": [],
"raw_text": "Mateusz Pawlik and Nikolaus Augsten. 2015a. Effi- cient computation of the tree edit distance. ACM Trans. Database Syst., 40(1):3:1-3:40.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Tree edit distance: Robust and memory-efficient. Information Systems",
"authors": [
{
"first": "Mateusz",
"middle": [],
"last": "Pawlik",
"suffix": ""
},
{
"first": "Nikolaus",
"middle": [],
"last": "Augsten",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1016/j.is.2015.08.004"
]
},
"num": null,
"urls": [],
"raw_text": "Mateusz Pawlik and Nikolaus Augsten. 2015b. Tree edit distance: Robust and memory-efficient. Infor- mation Systems, 56.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Fast lexically constrained decoding with dynamic beam allocation for neural machine translation",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vilar",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1314--1324",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1119"
]
},
"num": null,
"urls": [],
"raw_text": "Matt Post and David Vilar. 2018. Fast lexically con- strained decoding with dynamic beam allocation for neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long Pa- pers), pages 1314-1324. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Know what you don't know: Unanswerable questions for squad",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable ques- tions for squad. CoRR, abs/1806.03822.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Revision-based Generation of Natural Language Summaries Providing Historical Background: Corpus-based Analysis, Design, Implementation and Evaluation",
"authors": [
{
"first": "Jacques",
"middle": [
"Pierre"
],
"last": "Robin",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "95--33653",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacques Pierre Robin. 1995. Revision-based Genera- tion of Natural Language Summaries Providing His- torical Background: Corpus-based Analysis, De- sign, Implementation and Evaluation. Ph.D. the- sis, New York, NY, USA. UMI Order No. GAX95- 33653.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Efficient online scalar annotation with bounded support",
"authors": [
{
"first": "Keisuke",
"middle": [],
"last": "Sakaguchi",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "208--218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Keisuke Sakaguchi and Benjamin Van Durme. 2018. Efficient online scalar annotation with bounded sup- port. In Proceedings of ACL, pages 208-218, Mel- bourne, Australia. ACL.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "VerbNet: A Broad-Coverage, Comprehensive Verb Lexicon",
"authors": [
{
"first": "Karin Kipper",
"middle": [],
"last": "Schuler",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karin Kipper Schuler. 2006. VerbNet: A Broad- Coverage, Comprehensive Verb Lexicon. Ph.D. the- sis, University of Pennsylvania.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Semantic taxonomy induction from heterogenous evidence",
"authors": [
{
"first": "Rion",
"middle": [],
"last": "Snow",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of ICCL/ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rion Snow, Daniel Jurafsky, and Andrew Y. Ng. 2006. Semantic taxonomy induction from heterogenous evidence. In Proceedings of ICCL/ACL.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 5998-6008. Curran As- sociates, Inc.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "the Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis plat- form for natural language understanding. In the Pro- ceedings of ICLR.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Towards universal paraphrastic sentence embeddings",
"authors": [
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Livescu",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016. Towards universal paraphrastic sen- tence embeddings. CoRR, abs/1511.08198.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "PARANMT-50M: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations",
"authors": [
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of ACL 2018",
"volume": "",
"issue": "",
"pages": "451--462",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Wieting and Kevin Gimpel. 2018. PARANMT- 50M: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations. In Proceedings of ACL 2018, pages 451-462. ACL.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "A broad-coverage challenge corpus for sentence understanding through inference",
"authors": [
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1112--1122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122. Association for Computational Linguistics.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Answer extraction as sequence tagging with tree edit distance",
"authors": [
{
"first": "Xuchen",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "858--867",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuchen Yao, Benjamin Van Durme, Chris Callison- Burch, and Peter Clark. 2013. Answer extraction as sequence tagging with tree edit distance. In Pro- ceedings of the 2013 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 858-867, Atlanta, Georgia. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Convolutional neural network for paraphrase identification",
"authors": [
{
"first": "Wenpeng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "901--911",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenpeng Yin and Hinrich Sch\u00fctze. 2015. Convolu- tional neural network for paraphrase identification. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 901-911.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Contrived example paraphrases from previous work (unconstrained and constrained-used with permission) and ours (clustered).",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF0": {
"type_str": "table",
"content": "<table><tr><td/><td/><td/><td>Unconstrained</td></tr><tr><td>I Source</td><td colspan=\"2\">Target (Reference)</td><td>Paraphrase</td></tr><tr><td/><td/><td>\u2296</td><td>Constrained</td></tr><tr><td/><td>\u2026</td><td/><td>\u2026</td></tr><tr><td colspan=\"4\">I took this by mistake. I took it by mistake. \u2295</td></tr><tr><td colspan=\"2\">\u2296</td><td>\u2296</td></tr><tr><td/><td/><td/><td>Clustered</td></tr><tr><td/><td/><td/><td>I took by accident.</td></tr><tr><td colspan=\"2\">I picked up accidentally.</td><td/><td>I took mistake.</td></tr><tr><td colspan=\"3\">I picked it accidentally.</td><td>I took it.</td></tr></table>",
"html": null,
"text": "). took this by mistake. I took it by mistake. vzal jsem ho omylem. I took this by mistake. I took it by accident. vzal jsem ho omylem.",
"num": null
},
"TABREF2": {
"type_str": "table",
"content": "<table><tr><td>Systems Compared</td><td>1-BLEU\u2191</td><td>\u2229/\u222a \u2193</td><td>Tree ED\u2191</td></tr><tr><td>PARABANK 17 /PARABANK 34</td><td>20.58</td><td>80.93</td><td>2.26</td></tr><tr><td>Our work 1</td><td/><td/><td/></tr><tr><td/><td/><td/><td>: the use</td></tr></table>",
"html": null,
"text": "/Our work 3 64.16\u00b1.21 52.77\u00b1.48 5.51\u00b1.01 Our work 3 /Our work 5 71.05\u00b1.22 45.00\u00b1.51 6.40\u00b1.19 Our work 1 /Our work 5 69.46\u00b1.27 46.79\u00b1.12 6.25\u00b1.18 Our work max /Our work min 66.03\u00b1.86 49.10\u00b1.16 5.84\u00b1.33",
"num": null
},
"TABREF3": {
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "Collective diversity within our work compared to PARABANK, as measured by (1-BLEU)\u00d7100, intersection/union score\u00d7100, and parse tree edit-distance.",
"num": null
},
"TABREF5": {
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "Pearson's r \u00d7 100 and Spearman's r \u00d7 100 computed on STS 2016 task. Our work 1/5 contains paraphrase pairs from system 1 paired with system 5 , while all other systems are paired with the reference sentence.",
"num": null
},
"TABREF7": {
"type_str": "table",
"content": "<table><tr><td/><td/><td/><td>Numbers re-</td></tr><tr><td>ported on Dev set</td><td/><td/></tr><tr><td/><td>Type</td><td colspan=\"2\">BERT pBERT</td></tr><tr><td>F1</td><td colspan=\"2\">HasAns 76.81</td><td>74.21</td></tr><tr><td/><td colspan=\"2\">NoAns 71.44</td><td>74.95</td></tr><tr><td/><td>Total</td><td>74.12</td><td>74.58</td></tr><tr><td colspan=\"3\">Exact Match HasAns 70.34</td><td>68.00</td></tr><tr><td/><td colspan=\"2\">NoAns 71.44</td><td>74.95</td></tr><tr><td/><td>Total</td><td>70.89</td><td>71.48</td></tr></table>",
"html": null,
"text": "F1 scores are reported for QQP and MRPC, Spearman correlations are reported for STS-B, and accuracy scores are reported for MNLI.",
"num": null
},
"TABREF8": {
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "SQuAD 2.0 results on dev set.",
"num": null
}
}
}
}