{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:13:25.139224Z"
},
"title": "Surface Realization Using Pretrained Language Models",
"authors": [
{
"first": "Farhood",
"middle": [],
"last": "Farahnak",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Concordia University Montreal",
"location": {
"country": "Canada"
}
},
"email": ""
},
{
"first": "Laya",
"middle": [],
"last": "Rafiee",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Concordia University Montreal",
"location": {
"country": "Canada"
}
},
"email": ""
},
{
"first": "Leila",
"middle": [],
"last": "Kosseim",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Concordia University Montreal",
"location": {
"country": "Canada"
}
},
"email": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Fevens",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Concordia University Montreal",
"location": {
"country": "Canada"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In the context of Natural Language Generation, surface realization is the task of generating the linear form of a text following a given grammar. Surface realization models usually consist of a cascade of complex sub-modules, either rule-based or neural network-based, each responsible for a specific sub-task. In this work, we show that a single encoder-decoder language model can be used in an end-to-end fashion for all sub-tasks of surface realization. The model is designed based on the BART language model that receives a linear representation of unordered and non-inflected tokens in a sentence along with their corresponding Universal Dependency information and produces the linear sequence of inflected tokens along with the missing words. The model was evaluated on the shallow and deep tracks of the 2020 Surface Realization Shared Task (SR'20) using both human and automatic evaluation. The results indicate that despite its simplicity, our model achieves competitive results among all participants in the shared task.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In the context of Natural Language Generation, surface realization is the task of generating the linear form of a text following a given grammar. Surface realization models usually consist of a cascade of complex sub-modules, either rule-based or neural network-based, each responsible for a specific sub-task. In this work, we show that a single encoder-decoder language model can be used in an end-to-end fashion for all sub-tasks of surface realization. The model is designed based on the BART language model that receives a linear representation of unordered and non-inflected tokens in a sentence along with their corresponding Universal Dependency information and produces the linear sequence of inflected tokens along with the missing words. The model was evaluated on the shallow and deep tracks of the 2020 Surface Realization Shared Task (SR'20) using both human and automatic evaluation. The results indicate that despite its simplicity, our model achieves competitive results among all participants in the shared task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Natural Language Generation (NLG) models aim to generate fluent human-like texts given structured data. This involves both content planning (selecting the content to communicate) and surface realization (selecting, ordering, and inflecting the actual words) (Hovy et al., 1997; Reiter and Dale, 2000) . This paper focuses on the second sub-task: Surface Realization (SR).",
"cite_spans": [
{
"start": 258,
"end": 277,
"text": "(Hovy et al., 1997;",
"ref_id": "BIBREF9"
},
{
"start": 278,
"end": 300,
"text": "Reiter and Dale, 2000)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unlike many tasks in Natural Language Processing (NLP), the performance of SR models are still below human performance. In order to fill this gap and encourage more research in this field, several shared tasks in NLG and SR have been proposed. In particular, since 2018, the Surface Realization Shared Tasks (Mille et al., 2018; Mille et al., 2019; Mille et al., 2020) were introduced to provide common-ground datasets for developing and evaluating NLG systems. This year, the task (Mille et al., 2020) proposed two tracks in several languages including English: 1) Track1: shallow track and 2) Track2: deep track. In the shallow track, unordered and lemmatized tokens with Universal Dependency (UD) structures (de Marneffe et al., 2014) were provided to participants and systems were required to reorder and inflect the tokens to produce final sentences. The deep track was similar to the shallow track but functional words and surface-oriented morphological information were not provided and had to be inferred by the systems. Therefor, in addition to determining the order and the inflection of tokens, systems participating in the deep track had to guess the omitted words.",
"cite_spans": [
{
"start": 308,
"end": 328,
"text": "(Mille et al., 2018;",
"ref_id": "BIBREF16"
},
{
"start": 329,
"end": 348,
"text": "Mille et al., 2019;",
"ref_id": "BIBREF17"
},
{
"start": 349,
"end": 368,
"text": "Mille et al., 2020)",
"ref_id": "BIBREF18"
},
{
"start": 482,
"end": 502,
"text": "(Mille et al., 2020)",
"ref_id": "BIBREF18"
},
{
"start": 711,
"end": 737,
"text": "(de Marneffe et al., 2014)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Considering that the input data is in the form of Universal Dependency (UD) structure, data-to-text models, and graph-to-text models in particular, seem to be the right choice for the task of surface realization. However, in this study, we take a different path to tackle the problem. Our proposed model is designed based on text-to-text approaches using a pretrained encoder-decoder language model. More specifically, a BART language model is used for the task of surface realization for both the shallow and the deep tracks. The proposed approach is an end-to-end model trained on a linearized representation of the graph of the sentences with their corresponding Universal Dependency information. The results on the English datasets demonstrate the potential of these models for surface realization tasks and in general data-to-text problems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the following sections, we first review recent and related approaches in surface realization and textto-text models, then in Section 3, our proposed model is explained. Section 4 describes the results of our model on the surface realization datasets. Finally, Section 5 discusses conclusion and future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Surface realization typically involves three tasks: syntactic realization, morphological realization, and orthographic realization (Reiter and Dale, 2000) . Syntactic realization tries to identify the proper ordering of the input data, whereas morphological and orthographic realization are responsible for word inflections, punctuation, and formatting.",
"cite_spans": [
{
"start": 131,
"end": 154,
"text": "(Reiter and Dale, 2000)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Several surface realization models presented at the previous Surface Realization Shared Task (Mille et al., 2019) used a cascade of pointer-based models for syntactic realization followed by another neural network module for morphological and orthographic realization (Du and Black, 2019; Farahnak et al., 2019; Mazzei and Basile, 2019) . For example, Du et al. (2019) utilized a graph attention network (GAT) (Veli\u010dkovi\u0107 et al., 2018) for encoding the input sentences and a pointer decoder (Vinyals et al., 2015) to select the next element from their graph. Whereas Yu et al. (2019) used a bidirectional Tree-LSTM (Zhou et al., 2016) as the encoder and an LSTM (Hochreiter and Schmidhuber, 1997) as the decoder and multiple LSTM modules for morphological and orthographic realization tasks. These two approaches achieved the highest performance among all participating systems at SR'19.",
"cite_spans": [
{
"start": 93,
"end": 113,
"text": "(Mille et al., 2019)",
"ref_id": "BIBREF17"
},
{
"start": 268,
"end": 288,
"text": "(Du and Black, 2019;",
"ref_id": "BIBREF1"
},
{
"start": 289,
"end": 311,
"text": "Farahnak et al., 2019;",
"ref_id": "BIBREF4"
},
{
"start": 312,
"end": 336,
"text": "Mazzei and Basile, 2019)",
"ref_id": "BIBREF15"
},
{
"start": 410,
"end": 435,
"text": "(Veli\u010dkovi\u0107 et al., 2018)",
"ref_id": "BIBREF23"
},
{
"start": 491,
"end": 513,
"text": "(Vinyals et al., 2015)",
"ref_id": "BIBREF24"
},
{
"start": 615,
"end": 634,
"text": "(Zhou et al., 2016)",
"ref_id": "BIBREF27"
},
{
"start": 662,
"end": 696,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "On the other hand, when the text to generate is conditioned on the content provided in the form of graphs, tables, etc then data-to-text generation models are utilized. As a family of data-to-text models, graph-to-text generation tries to generate natural text given its input graph. Graph-to-text generation models employ graph encoders to obtain a suitable representation from the input graph. Several applications such as text summarization (Duan et al., 2017) , question answering (Fan et al., 2019) , as well as surface realization (Du and Black, 2019) have used these types of generation models.",
"cite_spans": [
{
"start": 444,
"end": 463,
"text": "(Duan et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 485,
"end": 503,
"text": "(Fan et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 537,
"end": 557,
"text": "(Du and Black, 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Recently, several works have proposed to use language models in the form of text-to-text for what is inherently data-to-text and particularly graph-to-text tasks (Kale, 2020; Mager et al., 2020; Harkous et al., 2020) . Instead of modeling the node and edges of the graphs, they mapped the graph structure as sequences of words and let the language model encode the graph information. Kale (2020) takes advantage of T5 (Raffel et al., 2019) for data-to-text problems, whereas Mager et al. (2020) and Harkous et al. (2020) utilize GPT2 (Radford et al., 2019) .",
"cite_spans": [
{
"start": 162,
"end": 174,
"text": "(Kale, 2020;",
"ref_id": "BIBREF10"
},
{
"start": 175,
"end": 194,
"text": "Mager et al., 2020;",
"ref_id": "BIBREF14"
},
{
"start": 195,
"end": 216,
"text": "Harkous et al., 2020)",
"ref_id": "BIBREF7"
},
{
"start": 418,
"end": 439,
"text": "(Raffel et al., 2019)",
"ref_id": "BIBREF20"
},
{
"start": 534,
"end": 556,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Following this recent trend of text-to-text models for graph-based problems, we developed an end-toend approach based on a language model for the problem of data-to-text generation. The approach maps the given Universal Dependency structures to surface forms to tackle all the tasks required for the surface realization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Following the success of pretrained language models for data-to-text generation (Kale, 2020; Harkous et al., 2020) , we tackled the surface realization problem using a similar approach. BART is an encoder-decoder language model based on transformers (Vaswani et al., 2017) . Specifically, BART is a denoising autoencoder model trained on several denoising tasks making it applicable for a variety of downstream NLP tasks. The language model is trained to first encode the input, and then generate the text based on its input representation. The encoder-decoder architecture of BART makes it suitable for our task where the encoder module encodes the graph representation into an embedding space then the decoder generates the inflected form of the input one token at each decoding step. Hence, the model performs syntactic, morphological, and orthographic realization all at the same time.",
"cite_spans": [
{
"start": 80,
"end": 92,
"text": "(Kale, 2020;",
"ref_id": "BIBREF10"
},
{
"start": 93,
"end": 114,
"text": "Harkous et al., 2020)",
"ref_id": "BIBREF7"
},
{
"start": 250,
"end": 272,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
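{
"text": "To make the intended end-to-end use concrete, the following minimal sketch (an editorial illustration, not part of the original paper) shows how such a pretrained encoder-decoder model can be driven as plain text-to-text with the Huggingface transformers API; the checkpoint name, the toy linearized input and its exact delimiter layout are assumptions, and in practice a fine-tuned checkpoint would be used.\n\nfrom transformers import BartTokenizer, BartForConditionalGeneration\n\n# Pretrained encoder-decoder language model (a fine-tuned checkpoint is assumed).\ntokenizer = BartTokenizer.from_pretrained('facebook/bart-large')\nmodel = BartForConditionalGeneration.from_pretrained('facebook/bart-large')\n\n# Toy linearized representation of an unordered, non-inflected sentence\n# (one node per <s> ... </s> span; the linearization is described below).\nlinearized = '<s> dog < NOUN | Number=Sing | bark | nsubj > </s> <s> bark < VERB | Tense=Past | root | root > </s>'\n\ninputs = tokenizer(linearized, return_tensors='pt')\n# The encoder embeds the linearized graph; the decoder then emits the ordered,\n# inflected sentence one token per decoding step.\noutput_ids = model.generate(inputs.input_ids, max_length=64)\nprint(tokenizer.decode(output_ids[0], skip_special_tokens=True))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},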
{
"text": "In order to take advantage of a pretrained language model, BART in our case, we first need to map the graph structure of the input into a linear representation (i.e. plain text). For this task, given all the Universal Dependency (UD) information provided by the SR'20 organizers, we considered using only LEMMA, UPOS, FEATS, HEAD, and DEPREL for each node. Figure 1 shows the mapping structure with Figure 1 : A sample token with its linearized representation. The sample token with index 26 and its Universal Dependency (UD) information in (a) will be encoded using (b) to generate its linearized representation in (c).",
"cite_spans": [],
"ref_spans": [
{
"start": 357,
"end": 365,
"text": "Figure 1",
"ref_id": null
},
{
"start": 399,
"end": 407,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "an example of a linearized node. As Figure 1 shows, we concatenate each token with its corresponding features where special tokens <, >, | identify the boundary of each feature and special tokens such as the beginning of sentence <s> and end of sentence <\\s> determine the boundary of each node (i.e. token). In these mapped representations, instead of the index for the HEAD feature, the actual token of the HEAD is used.",
"cite_spans": [],
"ref_spans": [
{
"start": 36,
"end": 44,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
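{
"text": "As an editorial sketch only (not taken from the original paper), the following Python snippet shows one possible node linearization consistent with this description; the exact delimiter layout of Figure 1 is not reproduced here, and the helper names and toy UD fields are hypothetical.\n\ndef linearize_node(node, id2lemma):\n    # Replace the HEAD index by the head token itself, as described above.\n    head = id2lemma.get(node['head'], 'root')\n    fields = [node['upos'], node['feats'], head, node['deprel']]\n    # '<', '>' and '|' mark feature boundaries; <s> ... </s> mark the node.\n    return '<s> ' + node['lemma'] + ' < ' + ' | '.join(fields) + ' > </s>'\n\ndef linearize_sentence(nodes):\n    id2lemma = {n['id']: n['lemma'] for n in nodes}\n    return ' '.join(linearize_node(n, id2lemma) for n in nodes)\n\n# Toy example with two unordered, non-inflected UD nodes.\nnodes = [\n    {'id': 2, 'lemma': 'bark', 'upos': 'VERB', 'feats': 'Tense=Past', 'head': 0, 'deprel': 'root'},\n    {'id': 1, 'lemma': 'dog', 'upos': 'NOUN', 'feats': 'Number=Sing', 'head': 2, 'deprel': 'nsubj'},\n]\nprint(linearize_sentence(nodes))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},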
{
"text": "As opposed to previous work where tokens are selected among the input (Du and Black, 2019; Mazzei and Basile, 2019) , the BART decoder generates tokens at each decoding step. As a result, we have no control on the number of tokens that should be generated. This can lead the model to generate extra tokens or ignore some of the input tokens. To alleviate this issue, we considered the tokenized form of the target sentences as the targets of the model. This allows the model to learn to map each node representation in its input into a token in the output and achieve a 1:1 mapping between the number of nodes in the input and the number of output tokens. An alternative would be to train on the detokenized output form but for the sake of automatic evaluation of the shared task, we did not follow this option.",
"cite_spans": [
{
"start": 70,
"end": 90,
"text": "(Du and Black, 2019;",
"ref_id": "BIBREF1"
},
{
"start": 91,
"end": 115,
"text": "Mazzei and Basile, 2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "The proposed text-to-text surface realizer model was evaluated on the English datasets of the SR'20 (Mille et al., 2020) . The datasets were created using the Universal Dependency (UD) datasets (de Marneffe et al., 2014) where the tokens within each sentence were randomly shuffled and the inflections were removed. The training and development sets were accompanied by 7 features. Out of these, the FEATS feature contained the relative linear order with respect to the governor (Lin) in addition to more than 40 morphological sub-features from the universal feature inventory. Among the four English datasets provided for training, we only trained on en ewt-ud-train while the performance of the model was measured on all the eight test sets provided by the organizers (see Table 1 ).",
"cite_spans": [
{
"start": 100,
"end": 120,
"text": "(Mille et al., 2020)",
"ref_id": "BIBREF18"
},
{
"start": 194,
"end": 220,
"text": "(de Marneffe et al., 2014)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 775,
"end": 782,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "We used the pretrained BART-large model provided in the Huggingface library (Wolf et al., 2019) . This model has 12 layers in each encoder and decoder modules. For each task, we trained our models with the same cross-entropy loss function as suggested in the original paper of BART and AdamW algorithm (Loshchilov and Hutter, 2019) with batch size of 2 and the learning rate of 1e \u2212 5 for 20 epochs where these hyper-parameters are chosen based on the development set. For efficiency in training time, sentences longer than 35 tokens were removed from the training set. During inference, we used beam search with a size of 5.",
"cite_spans": [
{
"start": 76,
"end": 95,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF25"
},
{
"start": 302,
"end": 331,
"text": "(Loshchilov and Hutter, 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Configuration",
"sec_num": "4.2"
},
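{
"text": "A minimal sketch of the configuration described above (an editorial illustration, not the authors' code), assuming the Huggingface transformers and PyTorch APIs; data loading, batching, and the removal of sentences longer than 35 tokens are omitted, and all variable and function names are hypothetical.\n\nimport torch\nfrom transformers import BartTokenizer, BartForConditionalGeneration\n\ntokenizer = BartTokenizer.from_pretrained('facebook/bart-large')\nmodel = BartForConditionalGeneration.from_pretrained('facebook/bart-large')\noptimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)  # AdamW, learning rate 1e-5\n\ndef training_step(linearized_inputs, target_sentences):\n    # linearized_inputs: a batch (size 2) of linearized UD strings;\n    # target_sentences: the corresponding tokenized reference sentences.\n    batch = tokenizer(linearized_inputs, return_tensors='pt', padding=True)\n    labels = tokenizer(target_sentences, return_tensors='pt', padding=True).input_ids\n    labels[labels == tokenizer.pad_token_id] = -100  # ignore padding in the loss\n    outputs = model(input_ids=batch.input_ids, attention_mask=batch.attention_mask, labels=labels)\n    outputs.loss.backward()  # cross-entropy loss over the target tokens\n    optimizer.step()\n    optimizer.zero_grad()\n    return outputs.loss.item()\n\n# Inference with beam search of size 5.\ndef realize(linearized_input):\n    ids = tokenizer(linearized_input, return_tensors='pt').input_ids\n    out = model.generate(ids, num_beams=5, max_length=128)\n    return tokenizer.decode(out[0], skip_special_tokens=True)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Configuration",
"sec_num": "4.2"
},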
{
"text": "Our surface realization model was evaluated on all seven test sets from SR'19 and one new test set (en Wikipedia) for SR'20. Results with all test sets are presented in Table 1 where in-domain refers to set-ups where test sets and training sets are from the same domain, while out-of-domain refers to set-ups where test sets are not in the same domain as the training sets. Two types of test sets were provided: ud and pred. The ud test sets contain the original UD datasets; whereas the pred test sets contain automatic predicted parse trees. Since we only trained our model on en ewt-ud-train, all the other test sets should be considered out-of-domain for our model. Our models were evaluated both with automatic metrics and human evaluation. For the automatic metrics, as shown in Table 1 , each test set was evaluated using four different metrics: BLEU, NIST, DIST, and BERTScore. Figure 2 compares the BLEU score of our model with all other participants with the test sets en ewt-ud and en Wikipedia for both shallow (T1) and deep (T2) tracks. As Figure 2 shows, our model's performance falls close to the median of the other models in the shared task for the shallow track. However, in the deep track, our model performs best (Figure 2d ) or close to the best (Figure 2c ). These results seem to indicate that BART can better predict functional words and surface-oriented morphological information (required only in deep track) and this is mainly due to the denoising approaches used for training BART.",
"cite_spans": [],
"ref_spans": [
{
"start": 169,
"end": 176,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 785,
"end": 792,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 886,
"end": 894,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 1053,
"end": 1061,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 1233,
"end": 1243,
"text": "(Figure 2d",
"ref_id": "FIGREF0"
},
{
"start": 1267,
"end": 1277,
"text": "(Figure 2c",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "For human evaluation, SR'20 used Direct Assessment (Graham et al., 2017) where human assessors rated the output of systems based on meaning similarity (relative to a human-authored reference sentence) and readability (without reference sentences). Figure 3 shows the average score for the meaning similarity and readability on both shallow and deep tracks on two test sets: en ewt-ud nad en Wikipedia. In terms of readability, in the deep track, our model achieved the highest score after Human (see Figure 3a) . However, in the shallow track, our performance is at the median of all participants. On the other hand, the meaning similarity of our model is lower than the best performing model (see Figure 3b ). These comparisons indicate that although our model is able to generate well-written texts, their meaning are not close enough to the target sentences.",
"cite_spans": [
{
"start": 51,
"end": 72,
"text": "(Graham et al., 2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 248,
"end": 256,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 500,
"end": 510,
"text": "Figure 3a)",
"ref_id": "FIGREF2"
},
{
"start": 698,
"end": 707,
"text": "Figure 3b",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "In order to have a better understanding of the model's behaviour, we further analyzed the generated outputs of our model. Our decoder generates tokens from its encoder hidden space, which is in contrast with ordering models where the output is selected from their input. This characteristic can have two impacts: A) the model does not restrict the output to the words in the input and it has no control over the number tokens it should generate. As a result, the model can generate extra tokens or ignore tokens from the input. Example A in Table 2 highlights this issue where the model did not generate the token Ali. B) the model generates long tokens in more than a single decoding step which makes it too complex for the model to have control over the details of such tokens. This behaviour is shown in Table 2 Example B. In this example, even though the model was able to generate a very complex URL similar to its input, there are some missing parts such as 27 or even extra parts such as .html. It should be noted that in Example B, the entire URL is considered as a single token.",
"cite_spans": [],
"ref_spans": [
{
"start": 541,
"end": 548,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 807,
"end": 814,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "In this work, we used a language model (text-to-text) to tackle the graph-to-text surface realization problem. We showed that using a pretrained encoder-decoder language model such as BART, can allow the model to reconstruct the tokens' orders and inflections in an end-to-end fashion without any modification in the network architecture. This shows that the pretrained language model is able to encode the graph information and map this representation into a human readable text. The results of the deep track showed that our proposed model achieved better performance compared with the systems submitted to SR'20 (Mille et al., 2020) where the model has to guess the functional words and morphological information. This high performance directly comes from the pretrained language model characteristics of BART. As for future work, we are planning to experiment with different graph linearization schemes. Also, we would like to use a multilingual language model such as mBART on other languages as well as other language models. Finally, we would like to evaluate the incorporation of a copy mechanism (Gu et al., 2016) in generating long and complex tokens such as URLs.",
"cite_spans": [
{
"start": 615,
"end": 635,
"text": "(Mille et al., 2020)",
"ref_id": "BIBREF18"
},
{
"start": 1105,
"end": 1122,
"text": "(Gu et al., 2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "The authors would like to thank the anonymous reviewers for their feedback on a previous version of this paper. This work was financially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Universal Stanford dependencies: A cross-linguistic typology",
"authors": [
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Dozat",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Silveira",
"suffix": ""
},
{
"first": "Katri",
"middle": [],
"last": "Haverinen",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014)",
"volume": "",
"issue": "",
"pages": "4585--4592",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marie-Catherine de Marneffe, Timothy Dozat, Natalia Silveira, Katri Haverinen, Filip Ginter, Joakim Nivre, and Christopher D. Manning. 2014. Universal Stanford dependencies: A cross-linguistic typology. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014), pages 4585-4592, Reykjavik, Iceland, May. European Languages Resources Association (ELRA).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Learning to order graph elements with application to multilingual surface realization",
"authors": [
{
"first": "Wenchao",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2nd Workshop on Multilingual Surface Realisation (MSR 2019)",
"volume": "",
"issue": "",
"pages": "18--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenchao Du and Alan W Black. 2019. Learning to order graph elements with application to multilingual surface realization. In Proceedings of the 2nd Workshop on Multilingual Surface Realisation (MSR 2019), pages 18-24.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Question generation for question answering",
"authors": [
{
"first": "Nan",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Duyu",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "866--874",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nan Duan, Duyu Tang, Peng Chen, and Ming Zhou. 2017. Question generation for question answering. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 866-874.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Using local knowledge graph construction to scale Seq2Seq models to multi-document inputs",
"authors": [
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Gardent",
"suffix": ""
},
{
"first": "Chlo\u00e9",
"middle": [],
"last": "Braud",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP 2019)",
"volume": "",
"issue": "",
"pages": "4186--4196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Angela Fan, Claire Gardent, Chlo\u00e9 Braud, and Antoine Bordes. 2019. Using local knowledge graph construc- tion to scale Seq2Seq models to multi-document inputs. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP 2019), pages 4186-4196, Hong Kong, China, November. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The Concordia NLG Surface Realizer at SRST 2019",
"authors": [
{
"first": "Farhood",
"middle": [],
"last": "Farahnak",
"suffix": ""
},
{
"first": "Laya",
"middle": [],
"last": "Rafiee",
"suffix": ""
},
{
"first": "Leila",
"middle": [],
"last": "Kosseim",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Fevens",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2nd Workshop on Multilingual Surface Realisation (MSR 2019)",
"volume": "",
"issue": "",
"pages": "63--67",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Farhood Farahnak, Laya Rafiee, Leila Kosseim, and Thomas Fevens. 2019. The Concordia NLG Surface Realizer at SRST 2019. In Proceedings of the 2nd Workshop on Multilingual Surface Realisation (MSR 2019), pages 63-67.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Can machine translation systems be evaluated by the crowd alone",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Moffat",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Zobel",
"suffix": ""
}
],
"year": 2017,
"venue": "Nat. Lang. Eng",
"volume": "23",
"issue": "",
"pages": "3--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Graham, Timothy Baldwin, A. Moffat, and J. Zobel. 2017. Can machine translation systems be evaluated by the crowd alone. Nat. Lang. Eng., 23:3-30.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Incorporating copying mechanism in sequence-tosequence learning",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "O",
"middle": [
"K"
],
"last": "Victor",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1631--1640",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to- sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1631-1640, Berlin, Germany, August. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Have your text and use it too! End-to-end neural data-totext generation with semantic fidelity",
"authors": [
{
"first": "Hamza",
"middle": [],
"last": "Harkous",
"suffix": ""
},
{
"first": "Isabel",
"middle": [],
"last": "Groves",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Saffari",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.06577"
]
},
"num": null,
"urls": [],
"raw_text": "Hamza Harkous, Isabel Groves, and Amir Saffari. 2020. Have your text and use it too! End-to-end neural data-to- text generation with semantic fidelity. arXiv preprint arXiv:2004.06577.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Language generation. Survey of the State of the Art in Human Language Technology",
"authors": [
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Guenter",
"middle": [],
"last": "Gertjan Van Noord",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bateman",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "131--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eduard Hovy, Gertjan van Noord, Guenter Neumann, and John Bateman. 1997. Language generation. Survey of the State of the Art in Human Language Technology, pages 131-146.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Text-to-text pre-training for data-to-text tasks",
"authors": [
{
"first": "Mihir",
"middle": [],
"last": "Kale",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.10433"
]
},
"num": null,
"urls": [],
"raw_text": "Mihir Kale. 2020. Text-to-text pre-training for data-to-text tasks. arXiv preprint arXiv:2005.10433.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Abdelrahman",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7871--7880",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural lan- guage generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Multilingual denoising pre-training for neural machine translation",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Xian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2001.08210"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre-training for neural machine translation. arXiv preprint arXiv:2001.08210.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Decoupled weight decay regularization",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Loshchilov",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Hutter",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In International Conference on Learning Representations, Ernest N. Morial Convention Center, New Orleans, May.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "GPT-too: A language-model-first approach for AMR-to-text generation",
"authors": [
{
"first": "Manuel",
"middle": [],
"last": "Mager",
"suffix": ""
},
{
"first": "Ram\u00f3n",
"middle": [],
"last": "Fernandez Astudillo",
"suffix": ""
},
{
"first": "Tahira",
"middle": [],
"last": "Naseem",
"suffix": ""
},
{
"first": "Arafat",
"middle": [],
"last": "Md",
"suffix": ""
},
{
"first": "Young-Suk",
"middle": [],
"last": "Sultan",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Florian",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Roukos",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL-2020)",
"volume": "",
"issue": "",
"pages": "1846--1852",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manuel Mager, Ram\u00f3n Fernandez Astudillo, Tahira Naseem, Md Arafat Sultan, Young-Suk Lee, Radu Florian, and Salim Roukos. 2020. GPT-too: A language-model-first approach for AMR-to-text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL-2020), pages 1846-1852, Online, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The dipinfounito realizer at srst'19: Learning to rank and deep morphology prediction for multilingual surface realization",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Mazzei",
"suffix": ""
},
{
"first": "Valerio",
"middle": [],
"last": "Basile",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2nd Workshop on Multilingual Surface Realisation (MSR 2019)",
"volume": "",
"issue": "",
"pages": "81--87",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alessandro Mazzei and Valerio Basile. 2019. The dipinfounito realizer at srst'19: Learning to rank and deep morphology prediction for multilingual surface realization. In Proceedings of the 2nd Workshop on Multilingual Surface Realisation (MSR 2019), pages 81-87.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Underspecified universal dependency structures as inputs for multilingual surface realisation",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Mille",
"suffix": ""
},
{
"first": "Anja",
"middle": [],
"last": "Belz",
"suffix": ""
},
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
},
{
"first": "Leo",
"middle": [],
"last": "Wanner",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 11th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "199--209",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Mille, Anja Belz, Bernd Bohnet, and Leo Wanner. 2018. Underspecified universal dependency structures as inputs for multilingual surface realisation. In Proceedings of the 11th International Conference on Nat- ural Language Generation, pages 199-209, Tilburg University, The Netherlands, November. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The Second Multilingual Surface Realisation Shared Task (SR'19): Overview and Evaluation Results",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Mille",
"suffix": ""
},
{
"first": "Anja",
"middle": [],
"last": "Belz",
"suffix": ""
},
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Leo",
"middle": [],
"last": "Wanner",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2nd Workshop on Multilingual Surface Realisation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Mille, Anja Belz, Bernd Bohnet, Yvette Graham, and Leo Wanner. 2019. The Second Multilingual Surface Realisation Shared Task (SR'19): Overview and Evaluation Results. In Proceedings of the 2nd Workshop on Multilingual Surface Realisation (MSR 2019), 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP 2019), Hong Kong, China.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The third multilingual surface realisation shared task (SR'20): Overview and evaluation results",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Mille",
"suffix": ""
},
{
"first": "Anya",
"middle": [],
"last": "Belz",
"suffix": ""
},
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
},
{
"first": "Thiago",
"middle": [],
"last": "Castro Ferreira",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Leo",
"middle": [],
"last": "Wanner",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 3nd Workshop on Multilingual Surface Realisation (MSR 2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Mille, Anya Belz, Bernd Bohnet, Thiago Castro Ferreira, Yvette Graham, and Leo Wanner. 2020. The third multilingual surface realisation shared task (SR'20): Overview and evaluation results. In Proceedings of the 3nd Workshop on Multilingual Surface Realisation (MSR 2020), Dublin, Ireland, December. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "OpenAI Blog",
"volume": "",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8).",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Exploring the limits of transfer learning with a unified text-to-text transformer",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter J",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.10683"
]
},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Building Natural Language Generation Systems",
"authors": [
{
"first": "Ehud",
"middle": [],
"last": "Reiter",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Dale",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ehud Reiter and Robert Dale. 2000. Building Natural Language Generation Systems. Cambridge University Press, New York, NY, USA.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems (NIPS 2017)",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems (NIPS 2017), pages 5998-6008, Long Beach Convention Center, Long Beach.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Graph Attention Networks. International Conference on Learning Representations",
"authors": [
{
"first": "Petar",
"middle": [],
"last": "Veli\u010dkovi\u0107",
"suffix": ""
},
{
"first": "Guillem",
"middle": [],
"last": "Cucurull",
"suffix": ""
},
{
"first": "Arantxa",
"middle": [],
"last": "Casanova",
"suffix": ""
},
{
"first": "Adriana",
"middle": [],
"last": "Romero",
"suffix": ""
},
{
"first": "Pietro",
"middle": [],
"last": "Li\u00f2",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Petar Veli\u010dkovi\u0107, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li\u00f2, and Yoshua Bengio. 2018. Graph Attention Networks. International Conference on Learning Representations (ICLR 2018).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Pointer Networks",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Meire",
"middle": [],
"last": "Fortunato",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems 28 (NIPS 2015)",
"volume": "",
"issue": "",
"pages": "2692--2700",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer Networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28 (NIPS 2015), pages 2692-2700. Curran Associates, Inc.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Patrick Von Platen",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Xu",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Scao",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Lhoest",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cis- tac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2019. Huggingface's transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Imsurreal: Ims at the surface realization shared task 2019",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Agnieszka",
"middle": [],
"last": "Falenska",
"suffix": ""
},
{
"first": "Marina",
"middle": [],
"last": "Haid",
"suffix": ""
},
{
"first": "Ngoc",
"middle": [
"Thang"
],
"last": "Vu",
"suffix": ""
},
{
"first": "Jonas",
"middle": [],
"last": "Kuhn",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2nd Workshop on Multilingual Surface Realisation (MSR 2019) (EMNLP 2019)",
"volume": "",
"issue": "",
"pages": "50--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiang Yu, Agnieszka Falenska, Marina Haid, Ngoc Thang Vu, and Jonas Kuhn. 2019. Imsurreal: Ims at the surface realization shared task 2019. In Proceedings of the 2nd Workshop on Multilingual Surface Realisation (MSR 2019) (EMNLP 2019), pages 50-58, Hong Kong, China.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Modelling sentence pairs with tree-structured attentive encoder",
"authors": [
{
"first": "Yao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Cong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Pan",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "2912--2922",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yao Zhou, Cong Liu, and Yan Pan. 2016. Modelling sentence pairs with tree-structured attentive encoder. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2912-2922, Osaka, Japan, December. The COLING 2016 Organizing Committee.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "Comparison of all systems submitted to SR'20 on the en ewt-ud and en Wikipedia test sets for the shallow (T1) and the deep (T2) tracks evaluated based on their BLEU score.",
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"uris": null,
"text": "Human evaluation comparison of systems submitted in SR'20 on the en ewt-ud and en Wikipedia test sets for shallow (T1) and deep (T2) tracks evaluated based on their (a) readability and (b) meaning similarity.",
"type_str": "figure"
},
"TABREF1": {
"num": null,
"type_str": "table",
"text": "Results of our submission in the shallow (T1) and deep (T2) tracks of SR'20. Highlighted values indicate the highest score among all participants achieved by our model.",
"html": null,
"content": "<table/>"
},
"TABREF3": {
"num": null,
"type_str": "table",
"text": "Examples of unsuccessful outputs generated by our model on the shallow (T1) and deep (T2) tracks.",
"html": null,
"content": "<table/>"
}
}
}
}