|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:59:07.641878Z" |
|
}, |
|
"title": "Encodings of Source Syntax: Similarities in NMT Representations Across Target Languages", |
|
"authors": [ |
|
{ |
|
"first": "Tyler", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Chang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Carleton College Northfield", |
|
"location": { |
|
"region": "MN" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Rafferty", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Carleton College Northfield", |
|
"location": { |
|
"region": "MN" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We train neural machine translation (NMT) models from English to six target languages, using NMT encoder representations to predict ancestor constituent labels of source language words. We find that NMT encoders learn similar source syntax regardless of NMT target language, relying on explicit morphosyntactic cues to extract syntactic features from source sentences. Furthermore, the NMT encoders outperform RNNs trained directly on several of the constituent label prediction tasks, suggesting that NMT encoder representations can be used effectively for natural language tasks involving syntax. However, both the NMT encoders and the directly-trained RNNs learn substantially different syntactic information from a probabilistic context-free grammar (PCFG) parser. Despite lower overall accuracy scores, the PCFG often performs well on sentences for which the RNN-based models perform poorly, suggesting that RNN architectures are constrained in the types of syntax they can learn.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We train neural machine translation (NMT) models from English to six target languages, using NMT encoder representations to predict ancestor constituent labels of source language words. We find that NMT encoders learn similar source syntax regardless of NMT target language, relying on explicit morphosyntactic cues to extract syntactic features from source sentences. Furthermore, the NMT encoders outperform RNNs trained directly on several of the constituent label prediction tasks, suggesting that NMT encoder representations can be used effectively for natural language tasks involving syntax. However, both the NMT encoders and the directly-trained RNNs learn substantially different syntactic information from a probabilistic context-free grammar (PCFG) parser. Despite lower overall accuracy scores, the PCFG often performs well on sentences for which the RNN-based models perform poorly, suggesting that RNN architectures are constrained in the types of syntax they can learn.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Neural machine translation (NMT) encoder representations have been used successfully for crosstask and cross-lingual transfer learning in a variety of natural language contexts (Eriguchi et al., 2018; McCann et al., 2017; Neubig and Hu, 2018) . Previous work has investigated whether these representations encode syntactic information (Shi et al., 2016) , as syntactic information is useful in many natural language tasks (Chen et al., 2017; Punyakanok et al., 2008) . The deep recurrent neural network (RNN) architectures used by many NMT encoders can learn syntactic features, even without explicit supervision (Blevins et al., 2018; Futrell et al., 2019) ; NMT encoders specifically have been found to encode information about ancestor constituent labels for words (Blevins et al., 2018) and even full syntactic parses of source language sentences (Shi et al., 2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 177, |
|
"end": 200, |
|
"text": "(Eriguchi et al., 2018;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 201, |
|
"end": 221, |
|
"text": "McCann et al., 2017;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 222, |
|
"end": 242, |
|
"text": "Neubig and Hu, 2018)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 335, |
|
"end": 353, |
|
"text": "(Shi et al., 2016)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 422, |
|
"end": 441, |
|
"text": "(Chen et al., 2017;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 442, |
|
"end": 466, |
|
"text": "Punyakanok et al., 2008)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 613, |
|
"end": 635, |
|
"text": "(Blevins et al., 2018;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 636, |
|
"end": 657, |
|
"text": "Futrell et al., 2019)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 768, |
|
"end": 790, |
|
"text": "(Blevins et al., 2018)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 851, |
|
"end": 869, |
|
"text": "(Shi et al., 2016)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Cross-linguistically, there is mixed evidence for how target language impacts the encoding of information in NMT encoder representations. Kudugunta et al. (2019) found that representations clustered based on target language family when sentence representations were aligned in a shared space. However, Belinkov et al. (2017) found only small effects of target language on the ability of NMT encoder states to predict part-of-speech (POS) tags. Because POS tags are typically reliant only on local features within sentences, these contrasting results could suggest that (1) localized encoded information is independent of NMT target language, or (2) encoded syntactic information in general is independent of NMT target language. In this work, we address the second possibility.", |
|
"cite_spans": [ |
|
{ |
|
"start": 138, |
|
"end": 161, |
|
"text": "Kudugunta et al. (2019)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 302, |
|
"end": 324, |
|
"text": "Belinkov et al. (2017)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To evaluate more global syntactic information in NMT encoder representations, we assess the ability of NMT encoder states to predict ancestor constituent labels of words; this task is adopted from Blevins et al. (2018) . Extending Blevins et al. (2018) , we train NMT models towards multiple target languages and evaluate performance on individual constituent labels (e.g. noun phrases). We find that significant syntactic information is encoded regardless of target language, and target language has little impact on the syntactic information learned by NMT encoders. Furthermore, we find that NMT encoders rely on explicit morphosyntactic cues to extract syntactic information from sentences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 197, |
|
"end": 218, |
|
"text": "Blevins et al. (2018)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 221, |
|
"end": 252, |
|
"text": "Extending Blevins et al. (2018)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Finally, by training deep RNNs directly on the constituent label prediction task, we find that RNNs with explicit syntactic training data learn similar syntax to the NMT encoders. In contrast, a probabilistic context-free grammar (PCFG) parser performs significantly differently from both RNNbased models, suggesting that RNNs may be constrained by their reliance on explicit syntactic cues.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We trained NMT models from English to six different target languages, assessing the ability of NMT encoder states to predict POS, parent, grandparent, and great-grandparent constituent labels of words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "2" |
|
}, |
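
{

"text": "[Editor's sketch] To make the label-prediction task concrete, here is a minimal sketch (an illustration using NLTK, not the code used in this work) of reading a word's POS, parent, grandparent, and great-grandparent constituent labels off a treebank parse:\n\nfrom nltk.tree import Tree\n\ndef ancestor_labels(tree):\n    # For each word, collect its POS label and the labels of the\n    # parent/grandparent/great-grandparent constituents above it.\n    labels = []\n    for leaf_pos in tree.treepositions('leaves'):\n        pos_path = leaf_pos[:-1]  # position of the POS (preterminal) node\n        entry = {'word': tree[leaf_pos], 'pos': tree[pos_path].label()}\n        for depth, name in enumerate(['parent', 'grandparent', 'great-grandparent'], start=1):\n            entry[name] = tree[pos_path[:-depth]].label() if len(pos_path) >= depth else None\n        labels.append(entry)\n    return labels\n\nt = Tree.fromstring('(S (NP (DT The) (NN dog)) (VP (VBD barked)))')\nprint(ancestor_labels(t))\n# e.g. 'dog': pos=NN, parent=NP, grandparent=S, great-grandparent=None",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Methodology",

"sec_num": "2"

},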
|
{ |
|
"text": "NMT models were trained on the United Nations (UN) Parallel Corpus, using the fully aligned subcorpus of approximately 11 million sentences translated to all six UN official languages: English, Spanish, French, Russian, Arabic, and Chinese (Ziemski et al., 2016) . NMT models were trained from English to each target language using OpenNMT's PyTorch implementation (Klein et al., 2017) with byte pair encoding for subword tokenization in all languages (Sennrich et al., 2016) . Each NMT encoder and decoder was a unidirectional four-layer long short-term memory (LSTM; Hochreiter and Schmidhuber, 1997) network with 500 dimensions, using dot-product global attention in the decoder (Luong et al., 2015 ). Each NMT model was trained for 11 epochs (approximately 2,000,000 steps) using Adam optimization (Kingma and Ba, 2014 ). 1 The model with the best performance on the UN evaluation dataset for each language was used to generate encoder representations in the constituent label prediction task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 240, |
|
"end": 262, |
|
"text": "(Ziemski et al., 2016)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 365, |
|
"end": 385, |
|
"text": "(Klein et al., 2017)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 452, |
|
"end": 475, |
|
"text": "(Sennrich et al., 2016)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 569, |
|
"end": 602, |
|
"text": "Hochreiter and Schmidhuber, 1997)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 682, |
|
"end": 701, |
|
"text": "(Luong et al., 2015", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 802, |
|
"end": 822, |
|
"text": "(Kingma and Ba, 2014", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 826, |
|
"end": 827, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "NMT Models", |
|
"sec_num": "2.1" |
|
}, |
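
{

"text": "[Editor's sketch] For reference, a PyTorch sketch of the encoder shape described above; this is a simplified stand-in for the OpenNMT-py configuration, and the embedding size is an assumption:\n\nimport torch.nn as nn\n\nclass Encoder(nn.Module):\n    def __init__(self, vocab_size, dim=500, layers=4):\n        super().__init__()\n        self.embed = nn.Embedding(vocab_size, dim)\n        # Unidirectional by default; the probes below use the deepest\n        # layer's state after reading each source word.\n        self.lstm = nn.LSTM(dim, dim, num_layers=layers, batch_first=True)\n\n    def forward(self, src_tokens):\n        states, _ = self.lstm(self.embed(src_tokens))\n        return states  # (batch, seq_len, 500): top-layer state per word",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "NMT Models",

"sec_num": "2.1"

},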
|
{ |
|
"text": "Dataset Constituent label predictions used treeparsed sentences from the CoNLL-2012 dataset, containing sentences from English news and magazine articles, web data, and transcribed conversational speech (Pradhan et al., 2012) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 203, |
|
"end": 225, |
|
"text": "(Pradhan et al., 2012)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Constituent Label Predictions", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "As in Blevins et al. (2018) , constituent label models were trained on the CoNLL-2012 development dataset and tested on the test dataset. A subset of the CoNLL-2012 training dataset was used as an evaluation dataset; the training, evaluation, and test datasets each contained approximately 160,000 English words.", |
|
"cite_spans": [ |
|
{ |
|
"start": 6, |
|
"end": 27, |
|
"text": "Blevins et al. (2018)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Constituent Label Predictions", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "We trained simple feedforward neural networks to predict ancestor constituent labels (POS, parent, grandparent, and greatgrandparent) of words, given the NMT encoder state after reading the word. The NMT encoders were kept fixed during constituent label training. We used the deepest encoder layer as our encoder representation; deeper layers have been shown to perform better on constituent label prediction tasks (Blevins et al., 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 415, |
|
"end": 437, |
|
"text": "(Blevins et al., 2018)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Prediction models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Each feedforward network contained one 500dimensional hidden layer, and each model was trained until it completed 10 consecutive epochs with no improvement on the evaluation dataset. To account for variation between models based on random initialization of weights and shuffling of the training data, we trained 20 feedforward models for each combination of NMT encoder target language and constituent label (POS, parent, grandparent, or great-grandparent).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Prediction models", |
|
"sec_num": null |
|
}, |
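
{

"text": "[Editor's sketch] A minimal sketch of this probe setup, assuming precomputed encoder states; the ReLU nonlinearity, Adam optimizer, and label count are illustrative assumptions rather than reported details:\n\nimport torch\nimport torch.nn as nn\n\nclass LabelProbe(nn.Module):\n    def __init__(self, enc_dim=500, hidden_dim=500, num_labels=50):  # num_labels is hypothetical\n        super().__init__()\n        self.net = nn.Sequential(nn.Linear(enc_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, num_labels))\n\n    def forward(self, state):\n        return self.net(state)\n\nprobe = LabelProbe()\nopt = torch.optim.Adam(probe.parameters())\nloss_fn = nn.CrossEntropyLoss()\n\ndef train_step(states, labels):\n    # states: (batch, 500) encoder states, detached so no gradient reaches\n    # the (fixed) NMT encoder; labels: (batch,) gold constituent label ids.\n    opt.zero_grad()\n    loss = loss_fn(probe(states.detach()), labels)\n    loss.backward()\n    opt.step()\n    return loss.item()",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Prediction models",

"sec_num": null

},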
|
{ |
|
"text": "Baselines We computed a baseline accuracy for each constituent label prediction task by simply predicting the most frequent constituent label given the current input word (e.g. given the current input word \"dog,\" the most frequent POS tag would be NN for \"singular noun\"). This baseline accuracy is the maximum possible accuracy for a deterministic model that only knows the current input word.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Prediction models", |
|
"sec_num": null |
|
}, |
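
{

"text": "[Editor's sketch] A sketch of this most-frequent-label baseline (the fallback label for unseen test words is an assumption):\n\nfrom collections import Counter, defaultdict\n\ndef fit_baseline(train_pairs):\n    # train_pairs: list of (word, label); memorize each word's most\n    # frequent label in the training data.\n    counts = defaultdict(Counter)\n    for word, label in train_pairs:\n        counts[word][label] += 1\n    return {w: c.most_common(1)[0][0] for w, c in counts.items()}\n\ndef accuracy(model, test_pairs, fallback='NN'):  # fallback is an assumption\n    hits = sum(model.get(w, fallback) == lab for w, lab in test_pairs)\n    return hits / len(test_pairs)\n\nmodel = fit_baseline([('dog', 'NN'), ('dog', 'NN'), ('dog', 'VB')])\nassert model['dog'] == 'NN'",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Prediction models",

"sec_num": null

},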
|
{ |
|
"text": "NMT encoders learned syntax. As shown in Figure 1 , NMT encoder representations for all target languages except the autoencoder resulted in accuracy scores above the baseline for the parent, grandparent, and great-grandparent constituent label tasks (adjusted p < 0.001 for all comparisons, using one sample t-tests). The English autoencoder was the only target language without consistent performance above the baselines for these tasks; NMT autoencoders have been found to memorize sentences without learning syntactic information (Shi et al., 2016) . These results indicate that with the exception of autoencoders, NMT encoder representations contain syntactic information regardless of target language.", |
|
"cite_spans": [ |
|
{ |
|
"start": 533, |
|
"end": 551, |
|
"text": "(Shi et al., 2016)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 41, |
|
"end": 49, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Models performed poorly for POS tags. In contrast to Blevins et al. (2018) but in line with Belinkov et al. (2017) , all target languages performed slightly below the baseline for the POS prediction task (adjusted p < 0.001 for all comparisons, using one sample t-tests). This result may be because POS encodes less useful information than other features for machine translation tasks. For instance, Belinkov et al. (2017) found that models performed above the baseline if the task was modified to predict semantic tags.", |
|
"cite_spans": [ |
|
{ |
|
"start": 53, |
|
"end": 74, |
|
"text": "Blevins et al. (2018)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 92, |
|
"end": 114, |
|
"text": "Belinkov et al. (2017)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 400, |
|
"end": 422, |
|
"text": "Belinkov et al. (2017)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "While there were statistically significant differences in accuracy between target languages for all four constituent label tasks (one-way ANOVA, p < 0.001 for all tasks), these differences were quite small. The non-English target languages varied by less than 2% within each of the parent, grandparent, and great-grandparent constituent label tasks (see Figure 1 ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 354, |
|
"end": 362, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Similarities Across Target Languages", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "NMT encoders learned similar syntax. To further test the hypothesis of similar syntactic information across encoder representations, we assessed the performance of the NMT encoders on individual constituent labels (e.g. noun phrases). To do this, we considered the constituent label predictions as the results of a binary classification task for each individual label. For instance, when considering the noun POS tag, all POS tags were separated into two categories: noun and not noun. Then, we computed F1 scores for individual constituent labels for each NMT model, allowing us to quantify similarities between NMT encoders based on individual label performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Similarities Across Target Languages", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Individual constituent label F1 scores correlated extremely highly between non-English target languages (all pairwise Pearson correlations r > 0.93 for the POS task; r > 0.98 for the parent task; r > 0.99 for the grandparent and great-grandparent tasks). In other words, the models performed well or poorly on the same individual labels regardless of target language. Figure 2 shows individual constituent label F1 scores for each NMT target language, displaying the three most frequent labels for each constituent label task. Similar to the overall accuracy scores, raw differences in F1 scores were small between non-English target languages.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 368, |
|
"end": 376, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Similarities Across Target Languages", |
|
"sec_num": "3.1" |
|
}, |
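
{

"text": "[Editor's sketch] A sketch of this per-label scoring with dummy data (the labels and predictions below are hypothetical): each label is scored one-vs-rest, and the resulting F1 vectors from two models are then correlated.\n\nfrom sklearn.metrics import f1_score\nfrom scipy.stats import pearsonr\n\ndef per_label_f1(gold, pred, labels):\n    # One binary classification per label, e.g. 'NP' vs. 'not NP'.\n    return [f1_score([g == lab for g in gold], [p == lab for p in pred], zero_division=0)\n            for lab in labels]\n\ngold = ['NP', 'VP', 'NP', 'PP']\npred_a = ['NP', 'VP', 'PP', 'PP']  # e.g. probe over one target language\npred_b = ['NP', 'NP', 'PP', 'PP']  # e.g. probe over another target language\nlabels = ['NP', 'VP', 'PP']\nr, _ = pearsonr(per_label_f1(gold, pred_a, labels), per_label_f1(gold, pred_b, labels))\n# A high r means the two models succeed and fail on the same labels.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Similarities Across Target Languages",

"sec_num": "3.1"

},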
|
{ |
|
"text": "In particular, the similar F1 scores were not simply proportional to label frequencies. For instance, all target languages performed similarly well when identifying noun grandparent constituents (25% of grandparent labels, F1 scores 0.59-0.60) and question-sentence grandparent constituents (0.6% of grandparent labels, F1 scores 0.55-0.61), despite over a 20% difference in corresponding label frequencies. 2 Similar F1 scores across non-English target languages suggest that NMT encoders encode very similar syntactic information regardless of target language.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Similarities Across Target Languages", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Translation quality still varied. Despite similar syntactic information encoded across target languages, the NMT models exhibited a wide range of BLEU scores, as shown in Table 1 . This indicates that morphological and non-syntactic features have large impacts on translation performance. For instance, inflectional morphology (e.g. verb conjugation and noun pluralization) has been found Figure 2: Mean F1 scores (based on 20 feedforward models) for individual constituent label predictions, treating each prediction task as a binary classification task. Bars indicate two standard deviations from the mean. We display the three most frequent labels for each task, comparing across all target languages. Each label's frequency in the CoNLL-2012 test set is displayed on its corresponding plot.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 171, |
|
"end": 178, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Similarities Across Target Languages", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "to account for differences in performance between languages in language modeling tasks (Cotterell et al., 2018) , although these results vary depending on the metric used for morphological complexity (Mielke et al., 2019). Because differences in translation performance could not be easily explained using encoded syntactic information alone, it seems likely that the NMT models were either unable to extract more syntactic information from the training data or that the models did not find additional syntactic information to be useful.", |
|
"cite_spans": [ |
|
{ |
|
"start": 87, |
|
"end": 111, |
|
"text": "(Cotterell et al., 2018)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Similarities Across Target Languages", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "To gain a better understanding of how NMT encoders extract syntax, we conducted a qualitative analysis of sentences for which the constituent label prediction models exhibited high error rates.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Analysis of Errors", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We selected sentences based on the great-grandparent constituent label task because this task exhibited the highest accuracy scores above the baseline, indicating a large amount of learned syntax. There were high pairwise correlation scores for per-sentence greatgrandparent constituent label accuracies between all non-English target languages (all Pearson correlations r > 0.85), so we selected sentences simply based on their average constituent label accuracy across the five non-English target languages. We considered the 50 complete sentences with the highest average great-grandparent constituent accuracies and the 50 complete sentences with the lowest average great-grandparent constituent accuracies. 3 The top 50 sentences all had average great-grandparent accuracies above 90%, and the bottom 50 sentences all had accuracies below 35%. Linguistic patterns found in the top and bottom 50 sentences are compiled in Table 2. NMT encoders relied on explicit cues. The bottom 50 sentences contained a disproportionate number of null features. These features omit words or morphemes that would indicate syntactic structure in a sentence. For instance, null copulas omit forms of the verb \"to be,\" as in the sentence \"He pronounced the homework [was] finished.\" Appositives, where two noun phrases are placed one after another to describe the same entity (e.g. \"Grant, the star baker\"), serve as relative clauses with the usual explicit syntactic cues omitted (e.g. \"Grant, [who is] the star baker\"). Of the bottom 50 sentences, 16 contained at least one null copula or appositive; the \u2022 Head before 9 5 \u2022 Head after 0 10 top 50 sentences contained none of either feature. This suggests that when generating encoder representations, NMT models typically do not identify syntactic structures based on non-explicit cues. However, the models performed well on complex syntactic structures containing explicit morphosyntactic cues. They performed well on sentences containing infinitives (e.g. \"to eat\" or \"to pillage\") and negation (e.g. \"I did not eat\"), exhibiting far more of these features in the top 50 sentences than in the bottom 50 sentences (see Table 2 ). Both infinitives and negation have clear morphosyntactic cues indicating sentence structure. The \"to\" in each infinitive clearly introduces the infinitized verb, and the word \"not\" before a verb clearly indicates a negated clause. These results suggest that NMT encoders rely on explicit morphosyntactic cues to extract syntactic structure from sentences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 712, |
|
"end": 713, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 926, |
|
"end": 934, |
|
"text": "Table 2.", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 2158, |
|
"end": 2165, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Selection of sentences", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "NMT encoders recognized embedded sentences. In fact, the NMT encoders were able to use morphosyntactic cues to identify embedded sentences. An embedded sentence appears within another phrase (e.g. within the verb phrase \"said that [sentence]\"). The phrase head which introduces an embedded sentence can appear before or after the em-bedded sentence (e.g. \"Alex said [sentence]\" versus \"[sentence], said Alex\"). Because the NMT encoders were forward-directional RNNs, they could not be expected to recognize embedded sentences where the corresponding phrase head appeared after the embedded sentence. However, the models performed well on many sentences where the phrase head appeared before the embedded sentence, exhibiting nine such structures in the top 50 sentences (see Table 2 ). In many of these sentences, the head and complementizer (e.g. \"said that\" or \"dogs that\") clearly indicate the beginning of an embedded sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 775, |
|
"end": 782, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Selection of sentences", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Interestingly, the NMT encoders were often able to recognize embedded sentences even when there was a null complementizer introducing the embedded sentence, such as \"that\" omitted in \"The dog wished [that] he was taller.\" Of the nine embedded sentences in the top 50 sentences, six had a null complementizer. This result may partially be explained by verb bias, the tendency for certain verbs to be followed by particular types of phrases (Garnsey et al., 1997) . For instance, the verb \"prove\" is more often followed by a sentence complement (e.g. \"proved [that] the criminal was lying\") than a direct object (e.g. \"proved the theorem\"). People are more likely to omit complementizers when the head verb biases heavily towards a sentence complement (Ferreira and Schotter, 2013) ; in these cases, the verb itself serves as a syntactic cue for the upcoming embedded sentence. Of the six null complementizers in the top 50 sentences, five followed a sentence-complement-biased verb. Then, it appears that NMT encoders are able to recognize embedded sentences using a combination of verb bias and explicit complementizers.", |
|
"cite_spans": [ |
|
{ |
|
"start": 439, |
|
"end": 461, |
|
"text": "(Garnsey et al., 1997)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 750, |
|
"end": 779, |
|
"text": "(Ferreira and Schotter, 2013)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selection of sentences", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The similarity of syntactic information in NMT encoder representations across target languages could suggest that regardless of target language, a similar amount of syntactic information is helpful for translation. However, it is also possible that the structure of the constituent label task limited the syntactic information the encoders could represent, as predicting a label based only on a partial sentence is an inherently ambiguous task. A third alternative is that the RNN encoder architectures limited the information preserved in each representation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "NMT Syntax vs. Other Models", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "To further explore how well the NMT encoders extracted syntactic information from raw sentences, we compared their constituent label prediction performance to two alternative models: an RNN trained directly for the constituent label task, and a probabilistic context-free grammar (PCFG) parser. In contrast to the NMT encoders, the RNN can learn representations that are best suited for retaining syntax; like the NMT encoders, it sees one word at a time. The PCFG is trained with complete syntactic information for partial sentences, and its prediction task is an entire hierarchical structure, rather than a single type of label. These comparisons can show whether there are syntactic features that are predictable but systematically missed by the NMT encoder representations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "NMT Syntax vs. Other Models", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We trained unidirectional fourlayer LSTM models with 500 dimensions to directly predict constituent labels (POS, parent, grandparent, great-grandparent) when provided a sentence stopping at a given word. These RNNs were trained on the CoNLL-2012 development dataset (the same dataset as the feedforward models based on NMT encoder representations in Section 2.2). To account for variance in RNN training, we trained 10 RNNs for each constituent label task, and each RNN was trained until it completed 10 consecutive epochs without improvement on the evaluation dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RNN models", |
|
"sec_num": null |
|
}, |
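
{

"text": "[Editor's sketch] A sketch of this directly-trained architecture, mirroring the NMT encoder shape (names are illustrative):\n\nimport torch.nn as nn\n\nclass RNNTagger(nn.Module):\n    def __init__(self, vocab_size, num_labels, dim=500, layers=4):\n        super().__init__()\n        self.embed = nn.Embedding(vocab_size, dim)\n        self.lstm = nn.LSTM(dim, dim, num_layers=layers, batch_first=True)\n        self.out = nn.Linear(dim, num_labels)\n\n    def forward(self, tokens):\n        # Predict a constituent label from the hidden state after each word,\n        # using the same left-to-right partial-sentence view as the encoder.\n        states, _ = self.lstm(self.embed(tokens))\n        return self.out(states)  # (batch, seq_len, num_labels)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "RNN models",

"sec_num": null

},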
|
{ |
|
"text": "NMT representations outperformed the RNNs. Average accuracies for the RNN models in each constituent label task are shown in Figure 3 , compared with the feedforward models trained from NMT encoder representations. Surprisingly, the RNN models trained directly for the constituent label tasks performed worse than the NMT encoder representation models for the parent, grandparent, and great-grandparent constituent tasks. The NMT encoder representations' improvement over the other models increased consistently as the constituent labels moved higher in the syntax tree (i.e. the NMT encoders exhibited the greatest advantage in the great-grandparent constituent task).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 125, |
|
"end": 133, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "RNN models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Because the RNNs had the same architecture as the NMT encoders, it is likely that the directlytrained RNNs were limited by the amount of training data provided (about 160,000 examples). The NMT encoder representations would be able to rely more heavily on patterns learned during NMT training and thus would be able to make better use of the limited training data for the constituent label prediction tasks. It is also possible that the hyperparameters used for the NMT encoders were not optimal for the directly-trained RNNs. That said, the NMT encoder representations' high performance on the constituent label tasks supports existing literature finding that NMT encoder representations contain information useful for a variety of natural language tasks (Eriguchi et al., 2018; McCann et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 756, |
|
"end": 779, |
|
"text": "(Eriguchi et al., 2018;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 780, |
|
"end": 800, |
|
"text": "McCann et al., 2017)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RNN models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The RNNs and NMT encoded similar syntax. Next, to assess whether the directly-trained RNNs learned different syntactic information from the NMT encoders, we compared the RNN and the NMT encoder representations' performance on individual sentences. We primarily considered greatgrandparent constituent accuracies, the task for which all models performed most above the baseline.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RNN models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For each sentence of length at least three, we considered the mean great-grandparent constituent label accuracy, averaging across all non-English target languages for the NMT encoder accuracies. Figure 4 shows the correlation between persentence accuracies from the NMT encoder representation models and the directly-trained RNN models. There was a high degree of correlation between the two types of models (Pearson correla- tion r = 0.84), indicating that the directly-trained RNNs learned similar syntactic information to the NMT encoders.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 195, |
|
"end": 203, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "RNN models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "It may be that the directly-trained RNNs and the NMT encoders learned similar syntactic information because they both used the same RNN architecture. Therefore, we tested constituent label performance when using the probabilistic contextfree grammar (PCFG) syntactic parser provided by Stanford NLP (Klein and Manning, 2003) . We trained the PCFG on parse trees of partial sentences stopping at each word in the CoNLL-2012 development dataset, the same dataset used to train the RNN-based models. While the PCFG was not trained specifically for the constituent label prediction task, its explicit syntactic architecture (encoding a context-free grammar) provides a useful The PCFG encoded different syntax. The PCFG's constituent label accuracies are shown in Figure 3 , along with the RNN and NMT encoder representation accuracies. As expected, because the PCFG was not trained specifically for the constituent label prediction task, the PCFG had slightly lower accuracies than the RNN-based models. However, the PCFG exhibited interesting patterns when considering its performance on individual sentences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 299, |
|
"end": 324, |
|
"text": "(Klein and Manning, 2003)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 760, |
|
"end": 768, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "PCFG Parser", |
|
"sec_num": "4.2" |
|
}, |
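
{

"text": "[Editor's sketch] A hedged sketch of an analogous PCFG setup using NLTK rather than the Stanford parser (an approximation, not the pipeline used here); it reuses the ancestor_labels helper sketched in Section 2 and assumes 'trees' holds the partial-sentence training parses:\n\nfrom nltk.grammar import Nonterminal, induce_pcfg\nfrom nltk.parse import ViterbiParser\n\nproductions = [p for t in trees for p in t.productions()]  # 'trees' assumed loaded\ngrammar = induce_pcfg(Nonterminal('S'), productions)\nparser = ViterbiParser(grammar)\n\ndef predict_ancestors(words):\n    # Parse the partial sentence, then read constituent labels off the\n    # single best tree; inputs the grammar cannot cover are skipped.\n    try:\n        best = next(parser.parse(words))\n    except (ValueError, StopIteration):\n        return None\n    return ancestor_labels(best)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "PCFG Parser",

"sec_num": "4.2"

},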
|
{ |
|
"text": "As with the other models, the PCFG's mean great-grandparent constituent label accuracies were considered for each sentence of length at least three. Figure 5 (comparing the PCFG with the NMT encoder representations) can then be compared to Figure 4 (comparing the directly-trained RNNs with the NMT encoder representations). The two plots indicate that the PCFG performed substantially differently from the RNN-based models. Notably, there is a set of sentences for which the PCFG obtained perfect accuracy while the NMT encoders had substantially lower accuracies (demonstrated by the horizontal line of dots at the top of Figure 5 ). Both RNN-based models' accuracies correlated approximately the same amount with the baseline (most-frequent tag per word) model as with the PCFG; all correlations between models are shown in Table 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 149, |
|
"end": 157, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF3" |
|
}, |
|
{ |
|
"start": 240, |
|
"end": 248, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 624, |
|
"end": 633, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF3" |
|
}, |
|
{ |
|
"start": 828, |
|
"end": 835, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "PCFG Parser", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Furthermore, for the worst 50 sentences for the NMT encoder representations (the sentences found in Section 3.2), the PCFG performed 9% better than the NMT encoder representation models and 6% better than the directly-trained RNN models, despite an overall 7-9% lower accuracy than both RNN-based models. This suggests that PCFGs can perform well on specific sentences that RNNs perform poorly on; for instance, PCFGs may be less reliant on explicit morphosyntactic cues. The PCFG's high performance on these specific sentences explains results finding that explicit syntactic information provides improvements to NMT systems even though NMT systems already implicitly encode syntax (Chen et al., 2017; Chiang et al., 2009; Li et al., 2017; Wu et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 683, |
|
"end": 702, |
|
"text": "(Chen et al., 2017;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 703, |
|
"end": 723, |
|
"text": "Chiang et al., 2009;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 724, |
|
"end": 740, |
|
"text": "Li et al., 2017;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 741, |
|
"end": 757, |
|
"text": "Wu et al., 2017)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PCFG Parser", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "NMT syntax is independent of target language. We found that NMT encoders learn similar source syntactic information regardless of target language, consistently outperforming RNNs trained specifically for the constituent label prediction task. These results help explain the success of NMT encoder representations in cross-task transfer learning, and they open up further questions regarding the extent of similarity between NMT encoder representations across target languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "For instance, Schwenk and Douze (2017) found that multilingual NMT encoder representations cluster more based on semantic than syntactic similarity, indicating that semantic information may play a more prominent role than syntax in machine translation. Across target languages, Poliak et al. (2018) found inconsistencies for which target language's representations resulted in the best performance on semantic understanding tasks. This could suggest that semantic information in NMT encoder representations is also similar across target languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 14, |
|
"end": 38, |
|
"text": "Schwenk and Douze (2017)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 278, |
|
"end": 298, |
|
"text": "Poliak et al. (2018)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "RNNs learn limited syntax. Both the NMT encoders and the directly-trained RNNs relied on explicit morphosyntactic cues to extract syntactic information from sentences. This result aligns with findings that RNNs rely on syntax heuristics to obtain high performance on tasks (McCoy et al., 2019) , performing poorly on sentences requiring knowledge of complex syntactic structures (Linzen et al., 2016; Marvin and Linzen, 2018) . NMT encoders specifically have been found not to encode fine-grained syntactic information (Shi et al., 2016) . These limitations can be partially overcome by training an RNN model for a variety of different tasks (Enguehard et al., 2017) ; alternatively, we found that a PCFG syntactic parser encoded significantly different syntactic information from RNN-based models, performing well on many sentences for which RNNs performed poorly.", |
|
"cite_spans": [ |
|
{ |
|
"start": 273, |
|
"end": 293, |
|
"text": "(McCoy et al., 2019)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 379, |
|
"end": 400, |
|
"text": "(Linzen et al., 2016;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 401, |
|
"end": 425, |
|
"text": "Marvin and Linzen, 2018)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 519, |
|
"end": 537, |
|
"text": "(Shi et al., 2016)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 642, |
|
"end": 666, |
|
"text": "(Enguehard et al., 2017)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In some ways, the RNNs' reliance on explicit syntactic cues is similar to sentence processing in people. Many sentences are syntactically ambiguous before they are completed (notably gardenpath sentences such as \"The horse raced past the barn fell\"), and people generally re-evaluate upon reading the disambiguating feature (Frazier and Rayner, 1982; Qian et al., 2018) . Thus, it may be implausible for an online system to identify non-explicit syntactic features given only partial sentences. Compounding this problem, RNNs are unable to re-evaluate past inputs and hidden states upon reading disambiguating words. The successes of bidirectional and Transformer models (Devlin et al., 2019; Peters et al., 2018a; Vaswani et al., 2017 ) may be due partially to their ability to combine later information with representations of earlier words. Indeed, contextual word representations generated by these bidirectional models have been found to encode significant syntactic information (Peters et al., 2018b) ; future work could study whether bidirectional architectures are less reliant on explicit morphosyntactic cues.", |
|
"cite_spans": [ |
|
{ |
|
"start": 324, |
|
"end": 350, |
|
"text": "(Frazier and Rayner, 1982;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 351, |
|
"end": 369, |
|
"text": "Qian et al., 2018)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 671, |
|
"end": 692, |
|
"text": "(Devlin et al., 2019;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 693, |
|
"end": 714, |
|
"text": "Peters et al., 2018a;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 715, |
|
"end": 735, |
|
"text": "Vaswani et al., 2017", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 984, |
|
"end": 1006, |
|
"text": "(Peters et al., 2018b)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In this work, we found that NMT encoder representations across target languages encode similar source syntax, and this syntax is comparable to the syntax learned by RNNs trained directly on syntactic tasks. However, explicit syntactic architectures may be necessary for tasks requiring fine-tuned syntactic parses. Our results have many implications in transfer learning and multilingual sentence representations: a better understanding of the information contained in sentence representations provides necessary insight into the tasks these representations can be used for.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The first 10 epochs used learning rate 0.0002; the learning rate was halved every 30,000 steps during the final epoch.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "There was a loose correlation between F1 scores and label frequencies, but this correlation could not fully account for the similarity of F1 scores across target languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Sentences were marked as \"complete\" by a native English speaker. We considered only sentences from text sources (e.g. not transcribed conversational speech).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We would like to thank Cherlon Ussery for helpful linguistic perspectives on our results, and the Carleton College Cognitive Science Department for making this work possible.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Evaluating layers of representation in neural machine translation on part-of-speech and semantic tagging tasks", |
|
"authors": [ |
|
{ |
|
"first": "Yonatan", |
|
"middle": [], |
|
"last": "Belinkov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llu\u00eds", |
|
"middle": [], |
|
"last": "M\u00e0rquez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hassan", |
|
"middle": [], |
|
"last": "Sajjad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nadir", |
|
"middle": [], |
|
"last": "Durrani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fahim", |
|
"middle": [], |
|
"last": "Dalvi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Glass", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1--10", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yonatan Belinkov, Llu\u00eds M\u00e0rquez, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, and James Glass. 2017. Evaluating layers of representation in neural ma- chine translation on part-of-speech and semantic tagging tasks. In Proceedings of the Eighth In- ternational Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1-10, Taipei, Taiwan. Asian Federation of Natural Lan- guage Processing.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Deep RNNs encode soft hierarchical syntax", |
|
"authors": [ |
|
{ |
|
"first": "Terra", |
|
"middle": [], |
|
"last": "Blevins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "14--19", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P18-2003" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Terra Blevins, Omer Levy, and Luke Zettlemoyer. 2018. Deep RNNs encode soft hierarchical syntax. In Pro- ceedings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Papers), pages 14-19, Melbourne, Australia. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Improved neural machine translation with a syntax-aware encoder and decoder", |
|
"authors": [ |
|
{ |
|
"first": "Huadong", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shujian", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiajun", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1936--1945", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P17-1177" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Huadong Chen, Shujian Huang, David Chiang, and Ji- ajun Chen. 2017. Improved neural machine trans- lation with a syntax-aware encoder and decoder. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1936-1945, Vancouver, Canada. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "001 new features for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "11", |
|
"issue": "", |
|
"pages": "218--226", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Chiang, Kevin Knight, and Wei Wang. 2009. 11,001 new features for statistical machine transla- tion. In Proceedings of Human Language Technolo- gies: The 2009 Annual Conference of the North American Chapter of the Association for Compu- tational Linguistics, pages 218-226, Boulder, Col- orado. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Are all languages equally hard to language-model?", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Mielke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Roark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "536--541", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N18-2085" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan Cotterell, Sebastian J. Mielke, Jason Eisner, and Brian Roark. 2018. Are all languages equally hard to language-model? In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 536-541, New Orleans, Louisiana. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Exploring the syntactic abilities of RNNs with multi-task learning", |
|
"authors": [ |
|
{ |
|
"first": "Emile", |
|
"middle": [], |
|
"last": "Enguehard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tal", |
|
"middle": [], |
|
"last": "Linzen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 21st Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3--14", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/K17-1003" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emile Enguehard, Yoav Goldberg, and Tal Linzen. 2017. Exploring the syntactic abilities of RNNs with multi-task learning. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 3-14, Vancouver, Canada. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Zero-shot cross-lingual classification using multilingual neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Akiko", |
|
"middle": [], |
|
"last": "Eriguchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Melvin", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Orhan", |
|
"middle": [], |
|
"last": "Firat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hideto", |
|
"middle": [], |
|
"last": "Kazawa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wolfgang", |
|
"middle": [], |
|
"last": "Macherey", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "ArXiv", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Akiko Eriguchi, Melvin Johnson, Orhan Firat, Hideto Kazawa, and Wolfgang Macherey. 2018. Zero-shot cross-lingual classification using multilingual neural machine translation. ArXiv, abs/1809.04686.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Do verb bias effects on sentence production reflect sensitivity to comprehension or production factors?", |
|
"authors": [ |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Ferreira", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elizabeth", |
|
"middle": [], |
|
"last": "Schotter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "The Quarterly Journal of Experimental Psychology", |
|
"volume": "66", |
|
"issue": "8", |
|
"pages": "1548--1571", |
|
"other_ids": { |
|
"DOI": [ |
|
"http://www.tandfonline.com/doi/abs/10.1080/17470218.2012.753924" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Victor Ferreira and Elizabeth Schotter. 2013. Do verb bias effects on sentence production reflect sen- sitivity to comprehension or production factors? The Quarterly Journal of Experimental Psychology, 66(8):1548-1571.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Making and correcting errors during sentence comprehension: Eye movements in the analysis of structurally ambiguous sentences", |
|
"authors": [ |
|
{ |
|
"first": "Lyn", |
|
"middle": [], |
|
"last": "Frazier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Keith", |
|
"middle": [], |
|
"last": "Rayner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1982, |
|
"venue": "Cognitive Psychology", |
|
"volume": "14", |
|
"issue": "2", |
|
"pages": "178--210", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lyn Frazier and Keith Rayner. 1982. Making and cor- recting errors during sentence comprehension: Eye movements in the analysis of structurally ambiguous sentences. Cognitive Psychology, 14(2):178-210.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Neural language models as psycholinguistic subjects: Representations of syntactic state", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Futrell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ethan", |
|
"middle": [], |
|
"last": "Wilcox", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Takashi", |
|
"middle": [], |
|
"last": "Morita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peng", |
|
"middle": [], |
|
"last": "Qian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miguel", |
|
"middle": [], |
|
"last": "Ballesteros", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roger", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "32--42", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1004" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Futrell, Ethan Wilcox, Takashi Morita, Peng Qian, Miguel Ballesteros, and Roger Levy. 2019. Neural language models as psycholinguistic sub- jects: Representations of syntactic state. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 32-42, Minneapolis, Minnesota. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "The contributions of verb bias and plausibility to the comprehension of temporarily ambiguous sentences", |
|
"authors": [ |
|
{ |
|
"first": "Susan", |
|
"middle": [], |
|
"last": "Garnsey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Neal", |
|
"middle": [], |
|
"last": "Pearlmutter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elizabeth", |
|
"middle": [], |
|
"last": "Myers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Melanie", |
|
"middle": [], |
|
"last": "Lotocky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Journal of Memory and Language", |
|
"volume": "37", |
|
"issue": "1", |
|
"pages": "58--93", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1006/jmla.1997.2512" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Susan Garnsey, Neal Pearlmutter, Elizabeth Myers, and Melanie Lotocky. 1997. The contributions of verb bias and plausibility to the comprehension of tem- porarily ambiguous sentences. Journal of Memory and Language, 37(1):58-93.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Long short-term memory", |
|
"authors": [ |
|
{ |
|
"first": "Sepp", |
|
"middle": [], |
|
"last": "Hochreiter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00fcrgen", |
|
"middle": [], |
|
"last": "Schmidhuber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Neural computation", |
|
"volume": "9", |
|
"issue": "", |
|
"pages": "1735--80", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/neco.1997.9.8.1735" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9:1735- 80.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "Diederik", |
|
"middle": [], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Ba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. International Conference on Learning Representations.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Accurate unlexicalized parsing", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
 |
"first": "Christopher", |
 |
"middle": [ |
 |
"D" |
 |
], |
 |
"last": "Manning", |
 |
"suffix": "" |
 |
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "423--430", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/1075096.1075150" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Klein and Christopher D. Manning. 2003. Ac- curate unlexicalized parsing. In Proceedings of the 41st Annual Meeting of the Association for Compu- tational Linguistics, pages 423-430, Sapporo, Japan. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "OpenNMT: Opensource toolkit for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuntian", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jean", |
|
"middle": [], |
|
"last": "Senellart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Rush", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of ACL 2017, System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "67--72", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senel- lart, and Alexander Rush. 2017. OpenNMT: Open- source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, pages 67-72, Vancouver, Canada. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Investigating multilingual nmt representations at scale", |
|
"authors": [ |
|
{ |
|
"first": "Sneha", |
|
"middle": [], |
|
"last": "Kudugunta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ankur", |
|
"middle": [], |
|
"last": "Bapna", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Isaac", |
|
"middle": [], |
|
"last": "Caswell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Orhan", |
|
"middle": [], |
|
"last": "Firat", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1565--1575", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sneha Kudugunta, Ankur Bapna, Isaac Caswell, and Orhan Firat. 2019. Investigating multilingual nmt representations at scale. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1565-1575.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Modeling source syntax for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Junhui", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deyi", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhaopeng", |
|
"middle": [], |
|
"last": "Tu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Muhua", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Min", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guodong", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "688--697", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P17-1064" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Junhui Li, Deyi Xiong, Zhaopeng Tu, Muhua Zhu, Min Zhang, and Guodong Zhou. 2017. Modeling source syntax for neural machine translation. In Proceed- ings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 688-697, Vancouver, Canada. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Assessing the ability of LSTMs to learn syntax-sensitive dependencies", |
|
"authors": [ |
|
{ |
|
"first": "Tal", |
|
"middle": [], |
|
"last": "Linzen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emmanuel", |
|
"middle": [], |
|
"last": "Dupoux", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "521--535", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00115" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics, 4:521- 535.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Effective approaches to attention-based neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Thang", |
|
"middle": [], |
|
"last": "Luong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hieu", |
|
"middle": [], |
|
"last": "Pham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1412--1421", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D15-1166" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thang Luong, Hieu Pham, and Christopher D. Man- ning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natu- ral Language Processing, pages 1412-1421, Lis- bon, Portugal. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Targeted syntactic evaluation of language models", |
|
"authors": [ |
|
{ |
|
"first": "Rebecca", |
|
"middle": [], |
|
"last": "Marvin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tal", |
|
"middle": [], |
|
"last": "Linzen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1192--1202", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1151" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rebecca Marvin and Tal Linzen. 2018. Targeted syn- tactic evaluation of language models. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192-1202, Brussels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Learned in translation: Contextualized word vectors", |
|
"authors": [ |
|
{ |
|
"first": "Bryan", |
|
"middle": [], |
|
"last": "Mccann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Bradbury", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Caiming", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Con- textualized word vectors. In Advances in Neural In- formation Processing Systems 30.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference", |
|
"authors": [ |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Mccoy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ellie", |
|
"middle": [], |
|
"last": "Pavlick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tal", |
|
"middle": [], |
|
"last": "Linzen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3428--3448", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1334" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428-3448, Florence, Italy. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "What kind of language is hard to language-model?", |
|
"authors": [ |
|
{ |
 |
"first": "Sebastian", |
 |
"middle": [ |
 |
"J" |
 |
], |
 |
"last": "Mielke", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "Ryan", |
 |
"middle": [], |
 |
"last": "Cotterell", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "Kyle", |
 |
"middle": [], |
 |
"last": "Gorman", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "Brian", |
 |
"middle": [], |
 |
"last": "Roark", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "Jason", |
 |
"middle": [], |
 |
"last": "Eisner", |
 |
"suffix": "" |
 |
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4975--4989", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1491" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastian J. Mielke, Ryan Cotterell, Kyle Gorman, Brian Roark, and Jason Eisner. 2019. What kind of language is hard to language-model? In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4975- 4989, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Rapid adaptation of neural machine translation to new languages", |
|
"authors": [ |
|
{ |
|
"first": "Graham", |
|
"middle": [], |
|
"last": "Neubig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Junjie", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "875--880", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1103" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Graham Neubig and Junjie Hu. 2018. Rapid adapta- tion of neural machine translation to new languages. In Proceedings of the 2018 Conference on Empiri- cal Methods in Natural Language Processing, pages 875-880, Brussels, Belgium. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Deep contextualized word representations", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Neumann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Iyyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Gardner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "2227--2237", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N18-1202" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018a. Deep contextualized word rep- resentations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Dissecting contextual word embeddings: Architecture and representation", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Neumann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wen-Tau", |
|
"middle": [], |
|
"last": "Yih", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1499--1509", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1179" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018b. Dissecting contextual word embeddings: Architecture and representation. In Proceedings of the 2018 Conference on Em- pirical Methods in Natural Language Processing, pages 1499-1509, Brussels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "On the evaluation of semantic phenomena in neural machine translation using natural language inference", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Poliak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yonatan", |
|
"middle": [], |
|
"last": "Belinkov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Glass", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "513--523", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N18-2082" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adam Poliak, Yonatan Belinkov, James Glass, and Ben- jamin Van Durme. 2018. On the evaluation of se- mantic phenomena in neural machine translation us- ing natural language inference. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Pa- pers), pages 513-523, New Orleans, Louisiana. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes", |
|
"authors": [ |
|
{ |
 |
"first": "Sameer", |
 |
"middle": [], |
 |
"last": "Pradhan", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "Alessandro", |
 |
"middle": [], |
 |
"last": "Moschitti", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "Nianwen", |
 |
"middle": [], |
 |
"last": "Xue", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "Olga", |
 |
"middle": [], |
 |
"last": "Uryupina", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "Yuchen", |
 |
"middle": [], |
 |
"last": "Zhang", |
 |
"suffix": "" |
 |
} |
|
], |
|
"year": 2012, |
|
"venue": "Joint Conference on EMNLP and CoNLL -Shared Task", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--40", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL- 2012 shared task: Modeling multilingual unre- stricted coreference in OntoNotes. In Joint Confer- ence on EMNLP and CoNLL -Shared Task, pages 1-40, Jeju Island, Korea. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "The importance of syntactic parsing and inference in semantic role labeling", |
|
"authors": [ |
|
{ |
|
"first": "Vasin", |
|
"middle": [], |
|
"last": "Punyakanok", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wen-Tau", |
|
"middle": [], |
|
"last": "Yih", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Computational Linguistics", |
|
"volume": "34", |
|
"issue": "2", |
|
"pages": "257--287", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/coli.2008.34.2.257" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vasin Punyakanok, Dan Roth, and Wen-tau Yih. 2008. The importance of syntactic parsing and inference in semantic role labeling. Computational Linguistics, 34(2):257-287.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "A comparison of online and offline measures of good-enough processing in garden-path sentences. Language", |
|
"authors": [ |
|
{ |
|
"first": "Zhiying", |
|
"middle": [], |
|
"last": "Qian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Susan", |
|
"middle": [], |
|
"last": "Garnsey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kiel", |
|
"middle": [], |
|
"last": "Christianson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Cognition and Neuroscience", |
|
"volume": "33", |
|
"issue": "2", |
|
"pages": "227--254", |
|
"other_ids": { |
|
"DOI": [ |
|
"https://www.tandfonline.com/doi/abs/10.1080/23273798.2017.1379606" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhiying Qian, Susan Garnsey, and Kiel Christianson. 2018. A comparison of online and offline measures of good-enough processing in garden-path sentences. Language, Cognition and Neuroscience, 33(2):227- 254.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Learning joint multilingual sentence representations with neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Holger", |
|
"middle": [], |
|
"last": "Schwenk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthijs", |
|
"middle": [], |
|
"last": "Douze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2nd Workshop on Representation Learning for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "157--167", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W17-2619" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Holger Schwenk and Matthijs Douze. 2017. Learn- ing joint multilingual sentence representations with neural machine translation. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 157-167, Vancouver, Canada. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Neural machine translation of rare words with subword units", |
|
"authors": [ |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1715--1725", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P16-1162" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725, Berlin, Germany. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Does string-based neural MT learn source syntax?", |
|
"authors": [ |
|
{ |
|
"first": "Xing", |
|
"middle": [], |
|
"last": "Shi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Inkit", |
|
"middle": [], |
|
"last": "Padhi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1526--1534", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D16-1159" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xing Shi, Inkit Padhi, and Kevin Knight. 2016. Does string-based neural MT learn source syntax? In Pro- ceedings of the 2016 Conference on Empirical Meth- ods in Natural Language Processing, pages 1526- 1534, Austin, Texas. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5998--6008", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Improved neural machine translation with source syntax", |
|
"authors": [ |
|
{ |
|
"first": "Shuangzhi", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dongdong", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4179--4185", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.24963/ijcai.2017/584" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shuangzhi Wu, Ming Zhou, and Dongdong Zhang. 2017. Improved neural machine translation with source syntax. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelli- gence, IJCAI-17, pages 4179-4185.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "The united nations parallel corpus v1.0", |
|
"authors": [ |
|
{ |
|
"first": "Micha\u0142", |
|
"middle": [], |
|
"last": "Ziemski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcin", |
|
"middle": [], |
|
"last": "Junczys-Dowmunt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bruno", |
|
"middle": [], |
|
"last": "Pouliquen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3530--3534", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Micha\u0142 Ziemski, Marcin Junczys-Dowmunt, and Bruno Pouliquen. 2016. The united nations parallel cor- pus v1.0. In Proceedings of the Tenth Interna- tional Conference on Language Resources and Eval- uation (LREC 2016), pages 3530-3534, Portoro\u017e, Slovenia. European Language Resources Associa- tion (ELRA).", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Results for the constituent label prediction tasks, trained from NMT encoder representations. Dots indicate mean accuracies (based on 20 feedforward models), bars indicate two standard deviations from the mean, and dashed lines represent baseline accuracies.", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Average accuracies on the constituent label prediction tasks for all four types of model.", |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Mean great-grandparent constituent label accuracies per sentence for the NMT encoder-based models and the directly-trained RNNs. Each dot represents a sentence.", |
|
"num": null |
|
}, |
|
"FIGREF3": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Mean great-grandparent constituent label accuracies per sentence for the NMT encoder-based models and the PCFG parser.", |
|
"num": null |
|
}, |
|
"TABREF0": { |
|
"content": "<table><tr><td/><td/><td colspan=\"2\">Tokenized BLEU</td><td/><td/></tr><tr><td>AR</td><td>EN</td><td>ES</td><td>FR</td><td>RU</td><td>ZH</td></tr><tr><td colspan=\"6\">37.3 99.9 56.3 44.8 37.8 24.9</td></tr><tr><td/><td colspan=\"4\">Detokenized BLEU</td><td/></tr><tr><td>AR</td><td>EN</td><td>ES</td><td>FR</td><td>RU</td><td>ZH</td></tr><tr><td colspan=\"5\">38.0 100.0 56.3 44.5 37.4</td><td/></tr></table>", |
|
"text": "BLEU scores before and after detokenizing the NMT translations for the UN test set. The detokenized BLEU score was not computed for Chinese because words were generally not separated by spaces in the Chinese dataset.", |
|
"num": null, |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"content": "<table/>", |
|
"text": "", |
|
"num": null, |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"content": "<table/>", |
|
"text": "Pairwise Pearson correlations for per-sentence great-grandparent constituent label accuracies, computed between all four types of model. contrast to the RNN-based models.", |
|
"num": null, |
|
"html": null, |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |