|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T10:15:25.355369Z" |
|
}, |
|
"title": "An Attentive Recurrent Model for Incremental Prediction of Sentence-final Verbs", |
|
"authors": [ |
|
{ |
|
"first": "Wenyan", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Alvin", |
|
"middle": [], |
|
"last": "Grissom", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Haverford College", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Jordan", |
|
"middle": [], |
|
"last": "Boyd-Graber", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Maryland", |
|
"location": {} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Verb prediction is important for understanding human processing of verb-final languages, with practical applications to real-time simultaneous interpretation from verb-final to verbmedial languages. While previous approaches use classical statistical models, we introduce an attention-based neural model to incrementally predict final verbs on incomplete sentences in Japanese and German SOV sentences. To offer flexibility to the model, we further incorporate synonym awareness. Our approach both better predicts the final verbs in Japanese and German and provides more interpretable explanations of why those verbs are selected. 1 German is rich in both SOV and SVO sentences. It has been argued that its underlying structure is SOV (Bach, 1962; Koster, 1975), but this is not immediately relevant to our task. German Cazeneuve dankte dort den M\u00e4nnern und sagte, ohne deren k\u00fchlen Kopf h\u00e4tte es vielleicht ein \"furchtbares Drama\" gegeben. English Cazeneuve thanked the men there and said that without their cool heads there might have been a \"terrible drama\". Japanese \u307e\u305f\u5927\u548c\u56fd\u5948\u826f\u770c\u306e\u845b\u57ce\u5c71\u306b \u7bed\u308a\u5bc6\u6559\u306e\u5bbf\u66dc\u79d8\u6cd5\u3092\u7fd2\u5f97\u3057\u305f\u3068\u3082 \u8a00 \u8a00 \u8a00\u308f \u308f \u308f. English It also said that he was acquainted with a secret lodging accommodation in Katsuragiyama in Nara Prefecture of Yamato.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Verb prediction is important for understanding human processing of verb-final languages, with practical applications to real-time simultaneous interpretation from verb-final to verbmedial languages. While previous approaches use classical statistical models, we introduce an attention-based neural model to incrementally predict final verbs on incomplete sentences in Japanese and German SOV sentences. To offer flexibility to the model, we further incorporate synonym awareness. Our approach both better predicts the final verbs in Japanese and German and provides more interpretable explanations of why those verbs are selected. 1 German is rich in both SOV and SVO sentences. It has been argued that its underlying structure is SOV (Bach, 1962; Koster, 1975), but this is not immediately relevant to our task. German Cazeneuve dankte dort den M\u00e4nnern und sagte, ohne deren k\u00fchlen Kopf h\u00e4tte es vielleicht ein \"furchtbares Drama\" gegeben. English Cazeneuve thanked the men there and said that without their cool heads there might have been a \"terrible drama\". Japanese \u307e\u305f\u5927\u548c\u56fd\u5948\u826f\u770c\u306e\u845b\u57ce\u5c71\u306b \u7bed\u308a\u5bc6\u6559\u306e\u5bbf\u66dc\u79d8\u6cd5\u3092\u7fd2\u5f97\u3057\u305f\u3068\u3082 \u8a00 \u8a00 \u8a00\u308f \u308f \u308f. English It also said that he was acquainted with a secret lodging accommodation in Katsuragiyama in Nara Prefecture of Yamato.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Final verb prediction is fundamental to human language processing in languages with subjectobject-verb (SOV) word order, such as German 1 and Japanese, (Kamide et al., 2003; Momma et al., 2014; Chow et al., 2018) particularly for simultaneous interpretation, where an interpreter generates a translation in real time. Instead of waiting until the entire sentence is completed, simultaneous interpretation requires translation of the source text units while the interlocutor is speaking.", |
|
"cite_spans": [ |
|
{ |
|
"start": 136, |
|
"end": 137, |
|
"text": "1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 152, |
|
"end": 173, |
|
"text": "(Kamide et al., 2003;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 174, |
|
"end": 193, |
|
"text": "Momma et al., 2014;", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 194, |
|
"end": 212, |
|
"text": "Chow et al., 2018)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "When human simultaneous interpreters translate from an SOV language to an SVO one incrementally-without waiting for the final verb at the end of a sentence-they must use strategies to reduce the lag, or delay, between the time they hear the source words and the time they translate them (Wilss, 1978; He et al., 2016) . One strategy is final verb prediction: since the verb comes late in the source sentence but early in the target translation, if the verb is predicted in advance, it can be translated before it is heard, allowing for a more Figure 1 : An example of the verb position difference between SOV and SVO languages, where the final verb in German and Japanese is expected much earlier in their English translation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 287, |
|
"end": 300, |
|
"text": "(Wilss, 1978;", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 301, |
|
"end": 317, |
|
"text": "He et al., 2016)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 543, |
|
"end": 551, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\"simultaneous\" (or monotonic) translation (J\u00f6rg, 1997; Bevilacqua, 2009; He et al., 2015) . Furthermore, Chernov et al. (2004) argue that simultaneous interpreters' probabilty estimates and predictions of the verbal and semantic structure of preceeding messages facilitates simultaneity in human simultaneous interpretation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 42, |
|
"end": 54, |
|
"text": "(J\u00f6rg, 1997;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 55, |
|
"end": 72, |
|
"text": "Bevilacqua, 2009;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 73, |
|
"end": 89, |
|
"text": "He et al., 2015)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 105, |
|
"end": 126, |
|
"text": "Chernov et al. (2004)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Like for human translation, simultaneous machine translation (SMT), becomes more monotonic for SOV-SVO with better verb prediction (Grissom II et al., 2014; Gu et al., 2017; Alinejad et al., 2018) . Earlier work used pattern-matching rules (Matsubara et al., 2000) , n-gram language models (Grissom II et al., 2014) , or a logistic regression with linguistic features (Grissom II et al., 2016) . Recent neural simultaneous translation systems have integrated prediction into the encoder-decoder model or argued that these predictions, including verb predictions, are made implicitly by such models (Gu et al., 2017; Alinejad et al., 2018) , but they have not systmatically studied the late-occurring verb predictions themselves.", |
|
"cite_spans": [ |
|
{ |
|
"start": 131, |
|
"end": 156, |
|
"text": "(Grissom II et al., 2014;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 157, |
|
"end": 173, |
|
"text": "Gu et al., 2017;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 174, |
|
"end": 196, |
|
"text": "Alinejad et al., 2018)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 240, |
|
"end": 264, |
|
"text": "(Matsubara et al., 2000)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 290, |
|
"end": 315, |
|
"text": "(Grissom II et al., 2014)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 368, |
|
"end": 393, |
|
"text": "(Grissom II et al., 2016)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 598, |
|
"end": 615, |
|
"text": "(Gu et al., 2017;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 616, |
|
"end": 638, |
|
"text": "Alinejad et al., 2018)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Auch die deutschen Skispringer k\u00f6nnen sich Hoffnungen auf ihre erste Medaille bei den Winterspielen in Vancouver [machen, schaffen, tun] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 113, |
|
"end": 136, |
|
"text": "[machen, schaffen, tun]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "German", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The German ski jumpers can also hope for their first medal at the Winter Games in Vancouver. Figure 2 : An example of alternatives of final verbs (\"machen\", \"schaffen\", \"tun\") that preserve same general meaning in German and do not influence its translation in English.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 93, |
|
"end": 101, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "English", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "While neural models can identify complex patterns from feature-rich datasets (Goldberg, 2017) , less research has gone into problem of longdistance prediction, particularly for sentence-final verbs, where predictions must be made with incomplete information. We introduce a neural model, Attentive Neural Verb Inference for Incremental Language (ANVIIL) for verb prediction, which predicts verbs earlier and with higher accuracy. Moreover, we make ANVIIL's predictions more flexible by introducing synonym awareness. Self-attention also allows visualizes why a certain verb is selected and how it relates to specific tokens in the observed subsentence.", |
|
"cite_spans": [ |
|
{ |
|
"start": 77, |
|
"end": 93, |
|
"text": "(Goldberg, 2017)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "English", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Given an SOV sentence, we want to predict the final verb as soon as possible in an incremental setting. For example, in Figure 1 , the final verb, \"gegeben\", in German is expected to be translated together with \"h\u00e4tte es\" as \"there would have been\" in the middle of the English translation.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 120, |
|
"end": 128, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Problem of Verb Prediction", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Human interpreters will often predict a related verb rather than the exact verb in a reference translation, while preserving the same general meaning, since predicting the exact verb in a reference translation is difficult (J\u00f6rg, 1997) . For instance, in Figure 2 , besides \"machen\", verbs such as \"schaffen\" and \"tun\" also offen pair with \"Hoffnungen\" to express \"hope for\" in English. We therefore include two verb prediction tasks: first, we learn to predict the exact verb; second, we learn to predict verbs semantically similar to the exact reference verb. We describe these two tasks below.", |
|
"cite_spans": [ |
|
{ |
|
"start": 223, |
|
"end": 235, |
|
"text": "(J\u00f6rg, 1997)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 255, |
|
"end": 263, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Problem of Verb Prediction", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We follow Grissom II et al. (2016) , who formulate final verb prediction as sequential classification: a sentence is revealed to the classifier incrementally, and the classifier predicts the exact verb at each time step. While Grissom II et al. (2016) use logistic regression with engineered linguistic features, we use a recurrent neural model with self-attention, which learns embeddings 2 and a context representation that captures relations between tokens, regardless of the distance. Verbs are predicted by classifying on the learned representation of incomplete sentences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 34, |
|
"text": "Grissom II et al. (2016)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 227, |
|
"end": 251, |
|
"text": "Grissom II et al. (2016)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Exact Prediction", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "We also extend the idea in Section 2.1 to allow for synonym-aware predictions: for example, the verb synonym \"give\", used in place of \"provide\", preserves the intended meaning in most circumstances and can be considered a successful prediction. Instead of training the model to focus on one fixed verb for each input, we encourage the model to be confident about a set of verb candidates which are generally correct in the context.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Synonym-aware Prediction", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "This section describes ANVIIL's structure. Gated recurrent neural networks (RNNs), such as LSTMs (Hochreiter and Schmidhuber, 1997) and gated recurrent units (Cho et al., 2014, GRUs) , can capture long-range dependencies in text, which we need for effective verb prediction.", |
|
"cite_spans": [ |
|
{ |
|
"start": 97, |
|
"end": 131, |
|
"text": "(Hochreiter and Schmidhuber, 1997)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 158, |
|
"end": 182, |
|
"text": "(Cho et al., 2014, GRUs)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Neural Model for Verb Prediction", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We construct an RNN-based classifier with selfattention (Lin et al., 2017) for predicting sentencefinal verbs (Figure 3 ). This is a natural encoding of the problem, as it explicitly models how interpreters might receive information and update their verb predictions. The hidden states of the sequence model can be either at the word or character level.", |
|
"cite_spans": [ |
|
{ |
|
"start": 56, |
|
"end": 74, |
|
"text": "(Lin et al., 2017)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 110, |
|
"end": 119, |
|
"text": "(Figure 3", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A Neural Model for Verb Prediction", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Following Yang et al. (2016) , we encode input sequences using the bidirectional GRU (BiGRU). 3 Given an incomplete sentence prefix Token sequences at the input layer are mapped to embeddings, which go to the GRU. The dot product of attention weights and hidden states pass through a dense layer to predict the verb.", |
|
"cite_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 28, |
|
"text": "Yang et al. (2016)", |
|
"ref_id": "BIBREF41" |
|
}, |
|
{ |
|
"start": 94, |
|
"end": 95, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BiGRU Sequence Encoder", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "x = (x 1 , x 2 , \u2022 \u2022 \u2022 , x l ) of length l, BiGRU takes as input the embeddings (w 1 , w 2 , \u2022 \u2022 \u2022 , w l ), where w i is the d-dimensional embedding vector of x i . At time", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BiGRU Sequence Encoder", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "step t, the forward and backward hidden states are:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BiGRU Sequence Encoder", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2212 \u2192 h t = \u2212 \u2212 \u2192 GRU(w t , \u2212\u2212\u2192 h t\u22121 ) \u2190 \u2212 h t = \u2190 \u2212 \u2212 GRU(w t , \u2190\u2212\u2212 h t+1 ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BiGRU Sequence Encoder", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "( 1)These are concatenated as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BiGRU Sequence Encoder", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "h t = [ \u2212 \u2192 h t ; \u2190 \u2212 h t ]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BiGRU Sequence Encoder", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "and we represent the input sequence as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BiGRU Sequence Encoder", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "H = (h 1 , h 2 , \u2022 \u2022 \u2022 , h l ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BiGRU Sequence Encoder", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "( 2)As we only use a prefix of the sentence as input for prediction, we won't be able to see backward messages from unrevealed. However, once we see those words, later words in the prefix do change the internal representation of earlier words in H, creating a more powerful overall representation that uses more of the available context.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BiGRU Sequence Encoder", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Embedding vectors for the input can be word embeddings or character embeddings, yielding a word-based or a character-based model; we try both in Section 4.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BiGRU Sequence Encoder", |
|
"sec_num": "3.1" |
|
}, |
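
{

"text": "The following is a minimal PyTorch sketch of the encoder just described; the class name PrefixEncoder and the layer sizes are illustrative assumptions rather than the authors' released implementation.\n\nimport torch.nn as nn\n\nclass PrefixEncoder(nn.Module):\n    # Embeds a (possibly incomplete) token prefix and encodes it with a BiGRU (Section 3.1).\n    def __init__(self, vocab_size, emb_dim=300, hidden_dim=512):\n        super().__init__()\n        self.embed = nn.Embedding(vocab_size, emb_dim)\n        self.bigru = nn.GRU(emb_dim, hidden_dim, num_layers=2, batch_first=True, bidirectional=True)\n\n    def forward(self, token_ids):\n        # token_ids: (batch, l) indices of the revealed preverb tokens\n        w = self.embed(token_ids)   # (batch, l, emb_dim) embeddings w_1..w_l\n        H, _ = self.bigru(w)        # (batch, l, 2*hidden_dim): h_t = [forward h_t ; backward h_t]\n        return H",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "BiGRU Sequence Encoder",

"sec_num": "3.1"

},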
|
{ |
|
"text": "Following Lin et al. 2017, we apply self-attention with multiple views of the input sequence to obtain a weighted context vector v. By viewing the sequence multiple times, it allows different attentions to be assigned at each time. Using a two layer multilayer perceptron (MLP) without bias and a softmax function over the sequence length, we have an r-by-l attention matrix A, which includes r attention vectors extracted from r views of x:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structured Self-attention", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "A = softmax(W s 2 tanh(W s 1 H T ))", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Structured Self-attention", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We sum over all r attention vectors and normalize, yielding a single attention vector a with normalized weights (Figure 3 ). By assigning each hidden state its attention a t , we acquire an overall representation of the sequence:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 112, |
|
"end": 121, |
|
"text": "(Figure 3", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Structured Self-attention", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "v = l \u2211 t=1 a t h t .", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Structured Self-attention", |
|
"sec_num": "3.2" |
|
}, |
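
{

"text": "A minimal PyTorch sketch of Equations (3)-(4), assuming the batch-first hidden states H produced by the encoder sketched above; the module name and attention dimension are illustrative, not the authors' code.\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nclass StructuredSelfAttention(nn.Module):\n    def __init__(self, hidden_dim, attn_dim=128, r=5):\n        super().__init__()\n        self.Ws1 = nn.Linear(hidden_dim, attn_dim, bias=False)\n        self.Ws2 = nn.Linear(attn_dim, r, bias=False)\n\n    def forward(self, H):\n        # H: (batch, l, hidden_dim); A holds r attention views, softmaxed over the sequence length\n        A = F.softmax(self.Ws2(torch.tanh(self.Ws1(H))), dim=1)   # (batch, l, r)\n        a = A.sum(dim=2)                                           # sum the r views\n        a = a / a.sum(dim=1, keepdim=True)                         # renormalize to one attention vector\n        v = torch.bmm(a.unsqueeze(1), H).squeeze(1)                # v = sum_t a_t h_t, shape (batch, hidden_dim)\n        return v, a",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Structured Self-attention",

"sec_num": "3.2"

},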
|
{ |
|
"text": "For an incomplete input prefix x, the target verb is y \u2208 Y = {1, 2, . . . , K}. Based on the high-level representation v of the input sequence, we compute the probability of each verb k and select the one with the highest probability as the predicted verb:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Verb Predictor", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p(y | v) = e fy(v) \u2211 K k=1 e f k (v)", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Verb Predictor", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "where f k (v) is the logit from the dense layer.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Verb Predictor", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "As there is only one ground-truth verb y for the input, we maximize the log-likelihood of the correct verb with cross-entropy loss:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Exact Verb Prediction", |
|
"sec_num": "3.3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L = \u2212 K \u2211 k=1 q(k | v) log p(k | v)", |
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "Exact Verb Prediction", |
|
"sec_num": "3.3.1" |
|
}, |
|
{ |
|
"text": "where q(k | v) is the ground-truth distribution over the verbs, which equals 1 if k = y, or 0 otherwise.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Exact Verb Prediction", |
|
"sec_num": "3.3.1" |
|
}, |
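
{

"text": "A sketch of the dense prediction layer and the exact-verb objective of Equations (5)-(6); the variable names and the hidden size are assumptions for illustration, not the authors' implementation.\n\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nK = 100                          # number of candidate verbs\nclassifier = nn.Linear(1024, K)  # maps the context vector v to logits f_k(v)\n\ndef predict_and_loss(v, gold_verb):\n    # v: (batch, 1024) context vectors; gold_verb: (batch,) index y of the final verb\n    logits = classifier(v)\n    predicted = logits.argmax(dim=-1)          # verb with the highest probability\n    loss = F.cross_entropy(logits, gold_verb)  # equals -log p(y | v) under the softmax of Eq. (5)\n    return predicted, loss",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Exact Verb Prediction",

"sec_num": "3.3.1"

},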
|
{ |
|
"text": "In addition to the exact verb y, we add verbs that are of similar meaning to y in to a synonym set Y \u2032 \u2282 Y , creating a verb candidate pool for each input sample. Instead of maximizing the loglikelihood of the fixed verb y, we maximize the log-likelihood of the most probable verb candidate y \u2032 \u2208 Y \u2032 dynamically through training:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Synonym-aware Verb Prediction", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L = \u2212 K \u2211 k=1 q \u2032 (k | v) log p(k | v)", |
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "Synonym-aware Verb Prediction", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Synonym-aware Verb Prediction", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "q \u2032 (k | v) = \u23a7 \u23a8 \u23a9 1, if k = argmax k\u2208Y \u2032 p(k | v) 0, otherwise.", |
|
"eq_num": "(8)" |
|
} |
|
], |
|
"section": "Synonym-aware Verb Prediction", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "As the candidate can be different in each step, overall the likelihood of any verb candidate in the synonym set is maximized in the training process. Table 1 : Dataset for final-verb prediction. We extract sentences with the most frequent 100-300 verbs in German and Japanese verb final sentences. Using normalized Japanese verbs reduces the sparsity of the verbs and improves coverage of sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 150, |
|
"end": 157, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Synonym-aware Verb Prediction", |
|
"sec_num": "3.3.2" |
|
}, |
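
{

"text": "A sketch of the synonym-aware objective of Equations (7)-(8): at each training step the target is the currently most probable verb inside the synonym set Y'. The mask-based construction is an illustrative assumption.\n\nimport torch.nn.functional as F\n\ndef synonym_aware_loss(logits, synonym_mask):\n    # logits: (batch, K) scores f_k(v); synonym_mask: (batch, K) with 1 for the gold verb and its\n    # synonyms (the set Y') and 0 for all other verbs.\n    masked = logits.masked_fill(synonym_mask == 0, float('-inf'))\n    target = masked.argmax(dim=-1)     # argmax over Y' of p(k | v), recomputed at every step\n    return F.cross_entropy(logits, target)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Synonym-aware Verb Prediction",

"sec_num": "3.3.2"

},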
|
{ |
|
"text": "We first test exact prediction on both Japanese and German verb-final sentences with both word-based and character-based models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Exact Prediction Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We use German and Japanese verb-final sentences between ten and fifty tokens (Table 1 ) that end in the 100 to 300 most common verbs (Wolfel et al., 2008) . For each sentence, the extracted final verb becomes the label; the token sequence preceding it (the preverb) is the input. We split sentences into train (64%), evaluation (16%) and test (20%) sets. For Japanese, we use the Kyoto Free Translation Task (KFT) corpus of Wikipedia articles. Since Japanese is unsegmented, we use the morphological analyzer MeCab (Kudo, 2005) for tokenization. Like Grissom II et al. (2016) , we strip out post-verbal copulas and normalize verb forms to the dictionary ru (non-past tense) form. We also consider suru light verb constructions a single unit.", |
|
"cite_spans": [ |
|
{ |
|
"start": 133, |
|
"end": 154, |
|
"text": "(Wolfel et al., 2008)", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 515, |
|
"end": 527, |
|
"text": "(Kudo, 2005)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 551, |
|
"end": 575, |
|
"text": "Grissom II et al. (2016)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 77, |
|
"end": 85, |
|
"text": "(Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "4.1" |
|
}, |
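
{

"text": "A minimal sketch of the Japanese tokenization step, assuming the mecab-python3 bindings and an installed dictionary; the subsequent copula stripping, verb normalization to the dictionary ru form, and merging of suru constructions are not shown.\n\nimport MeCab\n\ntagger = MeCab.Tagger('-Owakati')   # '-Owakati' outputs space-separated surface tokens\n\ndef tokenize_japanese(sentence):\n    # Segment an unsegmented Japanese sentence into tokens before extracting the final verb.\n    return tagger.parse(sentence).strip().split(' ')",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Datasets",

"sec_num": "4.1"

},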
|
{ |
|
"text": "For German, we use the Wortschatz Leipzig news corpus from 1995 to 2015 (Goldhahn et al., 2012) . German sentences ending with a verb (we throw out verb medial sentences) are tokenized and POS-tagged with TreeTagger (Schmid, 1995) . Since German sentences may end with two verbsfor example, a verb followed by ist, we only predict the content verb, i.e., the first verb in the two-verb sequence. Unlike Japanese, we leave German verbs inflected, as there is less variation (usually past participle or infinitive form).", |
|
"cite_spans": [ |
|
{ |
|
"start": 72, |
|
"end": 95, |
|
"text": "(Goldhahn et al., 2012)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 216, |
|
"end": 230, |
|
"text": "(Schmid, 1995)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Because we predict from partial input, we train on incrementally longer preverb subsequences. Each subsequence is an independent input sample during training, and each preverb is truncated into five progressively longer subsentences: 30%, 50%, 70%, 90%, and 100%. 4", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Data Representation", |
|
"sec_num": "4.2" |
|
}, |
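
{

"text": "A sketch of how each preverb can be expanded into the five progressively longer training samples described above; the function name and rounding are illustrative assumptions.\n\ndef incremental_prefixes(preverb_tokens, fractions=(0.3, 0.5, 0.7, 0.9, 1.0)):\n    # Each truncated prefix becomes an independent training sample labeled with the same final verb.\n    samples = []\n    for f in fractions:\n        cut = max(1, int(round(f * len(preverb_tokens))))\n        samples.append(preverb_tokens[:cut])\n    return samples",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Training Data Representation",

"sec_num": "4.2"

},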
|
{ |
|
"text": "We train both word-and character-based models for German and Japanese verb prediction. We use the dev sets to manually tune hyperparameters for accuracy-word embedding size, hidden layer size, dropout rates and learning rate.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Details", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Character-based Model For input character sequences, we learn 64-dimensional embeddings and encode them with a two-layer BiGRU of 256 hidden units. The embeddings are randomly initialized with PyTorch defaults and updated during training jointly with other parameters. Mini-batch sizes are 256 for German but 128 for Japanese's smaller corpus. We use the evaluation set for tuning and set the embedding dropout rate as 0.6 and the RNN dropout rate as 0.2 while averaging from five views for attention vectors. We optimize with Adam (Kingma and Ba, 2015) with an initial learning rate of 10 \u22124 , decaying by 0.1 when loss increases. Training takes approximately two (Japanese) and four (German) hours on one 6GB GTX1060 GPU.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Details", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We use a vocabulary of 50,000 for German and Japanese; we use the <UNK> token for out-of-vocabulary tokens. The embedding size is 300. We encode the input embeddings with a two-layer BiGRU with 512 hidden units. Other hyperparameters are unchanged from the character-based model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word-based Model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We compare ANVIIL to the logistic regression model 5 in Grissom II et al. (2016) on the 100 most frequent verbs in the corpus (Figure 4) . For both languages, ANVIIL has higher accuracy than previous work (Figure 5 ), especially early in the sentence. While word-based models work best for German, character-based models work best for Japanese, perhaps because it is agglutinative. Figure 6 compares other encodings of preverbs (at a character level) in Japanese. In general, AN-VIIL has higher accuracy on verb prediction tasks. German (inflected) and Japanese (normalized) verb prediction. ANVIIL consistently has higher accuracy than LogReg from Grissom II et al. (2016) , and word-based prediction is slightly better for German but worse for Japanese. Sentence Revealed (%) Japanese Figure 5 : Accuracy when classifying among the most common 100, 200, and 300 verbs. ANVIIL consistently outperforms the best-performing model described in Grissom II et al. (2016) , especially early in the sentences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 56, |
|
"end": 80, |
|
"text": "Grissom II et al. (2016)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 530, |
|
"end": 548, |
|
"text": "German (inflected)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 649, |
|
"end": 673, |
|
"text": "Grissom II et al. (2016)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 942, |
|
"end": 966, |
|
"text": "Grissom II et al. (2016)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 126, |
|
"end": 136, |
|
"text": "(Figure 4)", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 205, |
|
"end": 214, |
|
"text": "(Figure 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 382, |
|
"end": 390, |
|
"text": "Figure 6", |
|
"ref_id": "FIGREF3" |
|
}, |
|
{ |
|
"start": 787, |
|
"end": 795, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "We now describe synonym-aware verb prediction (Section 4). We use 2,214,523 German sentences ending with 100 most frequent lemmatized verbs. For each sentence, we extract the preverb as in Section 4.1, but in this case, the target is not just a single verb. For each lemmatized verb, we extract its synonyms among the 100 verbs using Germanet synsets (Hamp and Feldweg, 1997; Henrich and Hinrichs, 2010) . If synonyms exist, we include them all in a list as candidate target verbs for the input as in Figure 2 . of the sentences in the dataset. Similarly, we train incrementally on subsequences of the preverb as in Section 4.3. We learn high-level representations of the preverb using word-level embeddings and use the same training parameters as in Section 4.3", |
|
"cite_spans": [ |
|
{ |
|
"start": 351, |
|
"end": 375, |
|
"text": "(Hamp and Feldweg, 1997;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 376, |
|
"end": 403, |
|
"text": "Henrich and Hinrichs, 2010)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 501, |
|
"end": 509, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Synonym-aware Prediction", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "During training, instead of maximizing the exact verb's log-likelihood, we maximize the loglikelihood of any verb in the synonym-set, encouraging the model to be confident about any verb that fits in the context.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Synonym-aware Prediction", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We compare accuracy for predicting exact and synonym-aware verbs with different objects in training. In synonym-aware prediction, we consider the prediction successful if it is one of the candidate verbs. Compared to predicting the exact verb, while being less focused on the fixed verb, synonym-aware prediction further improves the predication accuracy (Figure 7 ), but only slightly. ANVIIL clearly outperforms the feature engineering linear models on Japanese across the entire sentence, even when the number of verbs to choose from is larger; and on German, ANVIIL outperforms previous models when the number of verbs to choose from is the same (Figure 4 ). This is may be due to the long-range dependencies which are not captured in the logistic regression model. Syn Eval Figure 7 : Accuracy across time on exact/synonymaware match with exact/synonym-aware training. Accuracy increases slightly with the addition of the synonym-aware matching.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 355, |
|
"end": 364, |
|
"text": "(Figure 7", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 650, |
|
"end": 659, |
|
"text": "(Figure 4", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 779, |
|
"end": 787, |
|
"text": "Figure 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Verb Prediction Results", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We now analyze our model's predictions. While previous work (Grissom II et al., 2016) examines the contribution of features by examining the model itself, our approach does not rely on feature engineering. To examine our model, we instead use a heatmap to visualize the time course attention values in sentences, allowing us to see on what the model focuses when predicting.", |
|
"cite_spans": [ |
|
{ |
|
"start": 60, |
|
"end": 85, |
|
"text": "(Grissom II et al., 2016)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Visualization and Analysis", |
|
"sec_num": "6" |
|
}, |
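
{

"text": "A minimal matplotlib sketch of the kind of time-course attention heatmap used in this section; the data layout (one attention row per revealed prefix length, padded to the full sentence) and all names are illustrative assumptions.\n\nimport matplotlib.pyplot as plt\n\ndef plot_attention_heatmap(attn_rows, tokens, predicted_verbs):\n    # attn_rows: list of attention vectors, one per time step, each padded to len(tokens)\n    fig, ax = plt.subplots()\n    ax.imshow(attn_rows, aspect='auto', cmap='Blues')\n    ax.set_xticks(range(len(tokens)))\n    ax.set_xticklabels(tokens, rotation=90)\n    ax.set_yticks(range(len(predicted_verbs)))\n    ax.set_yticklabels(predicted_verbs)\n    ax.set_xlabel('revealed tokens')\n    ax.set_ylabel('prediction at each step')\n    fig.tight_layout()\n    plt.show()",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Visualization and Analysis",

"sec_num": "6"

},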
|
{ |
|
"text": "We visualize how our model makes its predictions in Figure 8 and Figure 9 . In both languages, the model not only focuses on the most recent revealed word, but also focuses attention to relevant longdistance dependencies. Predictions are, as expected, also more confident and accurate when approaching the end of the preverb. This is consistent with the verb prediction process for human interpreters (Wilss, 1978) and with previous work (Grissom II et al., 2016) . With increasing information, the number of possible alternatives gradually declines. Figure 10 visualizes how the model makes synonym-aware predictions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 401, |
|
"end": 414, |
|
"text": "(Wilss, 1978)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 438, |
|
"end": 463, |
|
"text": "(Grissom II et al., 2016)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 52, |
|
"end": 60, |
|
"text": "Figure 8", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 65, |
|
"end": 73, |
|
"text": "Figure 9", |
|
"ref_id": "FIGREF6" |
|
}, |
|
{ |
|
"start": 551, |
|
"end": 560, |
|
"text": "Figure 10", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Visualization of the Prediction Process", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "As described in Section 4.3, we implement both character-based and word-based models for verb prediction. For Japanese final-verb prediction, the Figure 8: Attention during German verb prediction. The model usually attends to the most recent word, but focuses on \"es\", which can be used as the subject of an existential phrase (Joseph, 2000) in combination with the verb \"geben\". Thus, it focuses on an interpretation of \"es\" as the subject, consistently attends to \"es\" throughout the sentence, and correctly predicts \"geben\" (for consistency with the Japanese examples, we show the model that predicts the normalized-infinitive-form of the verb). character-based model has higher prediction accuracy. Unlike the word-based model, it does not require use of a morphological analyzer and has a smaller vocabulary size. The word-based model, however, works better for German verb prediction and word-based heatmaps are more interpretable than character-based ones for German. We show word-based heatmaps for exact prediction in Figure 8 and Figure 11 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 327, |
|
"end": 341, |
|
"text": "(Joseph, 2000)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1027, |
|
"end": 1035, |
|
"text": "Figure 8", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1040, |
|
"end": 1049, |
|
"text": "Figure 11", |
|
"ref_id": "FIGREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Character-based versus Word-based", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "We show an example of how synonym-aware prediction can make the task easier in Figure 12 . By providing synonyms during training, the model makes an alternative prediction \"zeigen\" (present, show) for the original verb \"einsetzen\" (use).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 79, |
|
"end": 88, |
|
"text": "Figure 12", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Synonym-aware versus Exact Prediction", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "Previous work suggests that case markers play a key role in both human and machine verb prediction for Japanese (Grissom II et al., 2016) . Japanese has explicit postposition case markers which mark the roles of the words in a sentence. By examining the accuracy of predictions when the most recent token is a case marker, we can gain insight into their contributions to the predictions. Figure 13 considers the instances where the most recent token observed is the given case marker; in these situations, the accuracy of predicting one of the 100 most frequent verbs is much higher than in general. It is unsurprising that the quotative particles have higher accuracy at the end of the sentence, since the set of verbs that follow them is highly constrained-e.g., say, think, announce, etc. Quotative particles for the entire sentence occur immediately before to final verb. More general particles, such as ga (NOM) and wo (ACC) show a smaller increase in accuracy.", |
|
"cite_spans": [ |
|
{ |
|
"start": 112, |
|
"end": 137, |
|
"text": "(Grissom II et al., 2016)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 388, |
|
"end": 397, |
|
"text": "Figure 13", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Case Markers", |
|
"sec_num": "6.4" |
|
}, |
|
{ |
|
"text": "This section examines previous work on prediction in humans, simultaneous interpretation, and Figure 10 : Attention during German synonym-aware verb prediction. The model constantly focuses on \"skispringer\" (ski jumpers), which is the subject of the verb and predicts \"machen\" and \"schaffen\" from three of the verb candidates. Psycholinguistics has examined argument structure using verb-final b\u01ce-construction sentences in Chinese (Chow et al., 2015 (Chow et al., , 2018 . Kamide et al. (2003) find that case markers facilitate verb predictions for humans, likely because they provide clues about the semantic roles of the marked words in sentences. In sentence production, Momma et al. (2015) suggest that humans plan verbs after selecting a subject but before objects.", |
|
"cite_spans": [ |
|
{ |
|
"start": 431, |
|
"end": 449, |
|
"text": "(Chow et al., 2015", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 450, |
|
"end": 470, |
|
"text": "(Chow et al., , 2018", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 473, |
|
"end": 493, |
|
"text": "Kamide et al. (2003)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 674, |
|
"end": 693, |
|
"text": "Momma et al. (2015)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 94, |
|
"end": 103, |
|
"text": "Figure 10", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Empirical work on German verb prediction first investigated German-English simultaneous interpreters in J\u00f6rg (1997) : professional interpreters often predict verbs. Matsubara et al. (2000) introduce early verb prediction into Japanese-English SMT by predicting verbs in the target language. Grissom II et al. (2014) and Gu et al. (2017) use verb prediction in the source language and learn when to trust the predictions with reinforcement learning, while Oda et al. (2015) predict syntactic constituents and do the same. Grissom II et al. (2016) predict verbs with linear classifiers and compare the predictions to human performance. We extend that approach with a modern model that explains which cues the model uses to predict verbs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 104, |
|
"end": 115, |
|
"text": "J\u00f6rg (1997)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 165, |
|
"end": 188, |
|
"text": "Matsubara et al. (2000)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 291, |
|
"end": 315, |
|
"text": "Grissom II et al. (2014)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 320, |
|
"end": 336, |
|
"text": "Gu et al. (2017)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 455, |
|
"end": 472, |
|
"text": "Oda et al. (2015)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 521, |
|
"end": 545, |
|
"text": "Grissom II et al. (2016)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "In interactive translation (Peris et al., 2017) and simultaneous translation (Alinejad et al., 2018; Ma et al., 2019) systems, neural methods for next word prediction improve translation. BERT (Devlin et al., 2019) uses masked deep bidirectional language Figure 12 : Imperfect synonym-aware prediction process on a German sentence. The predicted synonym \"zeigen\" (show/appear) in context is not a perfect replacement for the correct verb \"einsetzen\" (put in place), but it better preserves the general meaning of the sentence: \"This money had been made available to the country for the process of EU membership and should now appear for refugee assistance.\" Figure 13 : Case markers correlate with improved verb prediction compared to overall verb prediction (Figure 4) . Some case markers, such as to, have large jumps in accuracy toward the end, while others, such as wo do not. We examine nominative (NOM), instructive (INS), accusative (ACC), dative (DAT), quotative (QUOT), and essive (ESS) markers. models and contextualized representations (Peters et al., 2018) for pretraining and gain improvements in word prediction and classification. We incorporate bidirectional encoding to verb prediction.", |
|
"cite_spans": [ |
|
{ |
|
"start": 77, |
|
"end": 100, |
|
"text": "(Alinejad et al., 2018;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 101, |
|
"end": 117, |
|
"text": "Ma et al., 2019)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 193, |
|
"end": 214, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 1047, |
|
"end": 1068, |
|
"text": "(Peters et al., 2018)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 255, |
|
"end": 264, |
|
"text": "Figure 12", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 658, |
|
"end": 667, |
|
"text": "Figure 13", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 759, |
|
"end": 769, |
|
"text": "(Figure 4)", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Existing neural attention models for sequential classification are commonly trained on complete input (Yang et al., 2016; Shen and Lee, 2016; Bahdanau et al., 2014) . Classification on incomplete sequences and long-distance sentence-final verb prediction remains difficult and under-explored.", |
|
"cite_spans": [ |
|
{ |
|
"start": 102, |
|
"end": 121, |
|
"text": "(Yang et al., 2016;", |
|
"ref_id": "BIBREF41" |
|
}, |
|
{ |
|
"start": 122, |
|
"end": 141, |
|
"text": "Shen and Lee, 2016;", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 142, |
|
"end": 164, |
|
"text": "Bahdanau et al., 2014)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "We present a synonym-aware neural model for incremental verb prediction using BiGRU with selfattention. It outperforms existing models in predicting the most frequent sentence-final verbs in both Japanese and German. As we predict the verbs incrementally, our method can be directly applied to solve real-time sequential classification or prediction problems. SMT systems for SOV to SVO simultaneous MT can also benefit from our work to reduce translation latency. We show that larger datasets always help with predicting the sentencefinal verbs, suggesting that larger corpora will further improve results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Character and word embeddings are learned from scratch, as pretrained embeddings(Bojanowski et al., 2017) did not improve prediction.3 While it may be initially counterintuitive to use a BiGRU for an incremental task, since we make predictions at each time step independently-i.e., without consulting prior predictions-there is no need to restrict ourselves to a unidirectional model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As input sequence lengths vary, we pad input samples with zeros and train in minibatches a la neural MT(Doetsch et al., 2017;Morishita et al., 2017).5 This model uses token unigrams and bigrams, case marker bigrams, and the last observed case marker as features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This material is based upon work supported by the National Science Foundation under Grant No. 1748663 (UMD). The views expressed in this paper are our own. We thank Graham Neubig and Hal Daum\u00e9 III for useful feedback.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Prediction improves simultaneous neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Ashkan", |
|
"middle": [], |
|
"last": "Alinejad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maryam", |
|
"middle": [], |
|
"last": "Siahbani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anoop", |
|
"middle": [], |
|
"last": "Sarkar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Conference of Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3022--3027", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1337" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashkan Alinejad, Maryam Siahbani, and Anoop Sarkar. 2018. Prediction improves simultaneous neural ma- chine translation. In Conference of Empirical Meth- ods in Natural Language Processing, pages 3022- 3027.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "The order of elements in a transformational grammar of German", |
|
"authors": [ |
|
{ |
|
"first": "Emmon", |
|
"middle": [], |
|
"last": "Bach", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1962, |
|
"venue": "Language", |
|
"volume": "38", |
|
"issue": "3", |
|
"pages": "263--269", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emmon Bach. 1962. The order of elements in a transformational grammar of German. Language, 38(3):263-269.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Neural machine translation by jointly learning to align and translate", |
|
"authors": [ |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv e-prints.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "The position of the verb in Germanic languages and simultaneous interpretation. The Interpreters' Newsletter", |
|
"authors": [ |
|
{ |
|
"first": "Lorenzo", |
|
"middle": [], |
|
"last": "Bevilacqua", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "14", |
|
"issue": "", |
|
"pages": "1--31", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lorenzo Bevilacqua. 2009. The position of the verb in Germanic languages and simultaneous interpreta- tion. The Interpreters' Newsletter, 14:1-31.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Enriching word vectors with subword information", |
|
"authors": [ |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Bojanowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Armand", |
|
"middle": [], |
|
"last": "Joulin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "135--146", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00051" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Inference and Anticipation in Simultaneous Interpreting: A Probability-prediction Model. Benjamins translation library", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Chernov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Setton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Hild", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "J", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "G.V. Chernov, R. Setton, and A. Hild. 2004. Infer- ence and Anticipation in Simultaneous Interpreting: A Probability-prediction Model. Benjamins transla- tion library. J. Benjamins Publishing Company.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bart", |
|
"middle": [], |
|
"last": "Van Merrienboer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u00c7aglar", |
|
"middle": [], |
|
"last": "G\u00fcl\u00e7ehre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fethi", |
|
"middle": [], |
|
"last": "Bougares", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Holger", |
|
"middle": [], |
|
"last": "Schwenk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Conference of Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/D14-1179" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kyunghyun Cho, Bart van Merrienboer, \u00c7aglar G\u00fcl\u00e7ehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representa- tions using RNN encoder-decoder for statistical ma- chine translation. In Conference of Empirical Meth- ods in Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Wait a second! delayed impact of argument roles on on-line verb prediction. Language", |
|
"authors": [ |
|
{ |
|
"first": "Ellen", |
|
"middle": [], |
|
"last": "Wing-Yee Chow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Suiping", |
|
"middle": [], |
|
"last": "Lau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Colin", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Phillips", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Cognition and Neuroscience", |
|
"volume": "33", |
|
"issue": "7", |
|
"pages": "803--828", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1080/23273798.2018.1427878" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wing-Yee Chow, Ellen Lau, Suiping Wang, and Colin Phillips. 2018. Wait a second! delayed impact of ar- gument roles on on-line verb prediction. Language, Cognition and Neuroscience, 33(7):803-828.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "A \"bag-of-arguments\" mechanism for initial verb predictions. Language", |
|
"authors": [ |
|
{ |
|
"first": "Cybelle", |
|
"middle": [], |
|
"last": "Wing-Yee Chow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ellen", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Colin", |
|
"middle": [], |
|
"last": "Lau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Phillips", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Cognition and Neuroscience", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--20", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wing-Yee Chow, Cybelle Smith, Ellen Lau, and Colin Phillips. 2015. A \"bag-of-arguments\" mechanism for initial verb predictions. Language, Cognition and Neuroscience, pages 1-20.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "A comprehensive study of batch construction strategies for recurrent neural networks in MXNet", |
|
"authors": [ |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Doetsch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pavel", |
|
"middle": [], |
|
"last": "Golik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hermann", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "IEEE International Conference on Acoustics, Speech, and Signal Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Patrick Doetsch, Pavel Golik, and Hermann Ney. 2017. A comprehensive study of batch construction strate- gies for recurrent neural networks in MXNet. IEEE International Conference on Acoustics, Speech, and Signal Processing.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Neural Network Methods for Natural Language Processing. Synthesis Lectures on Human Language Technologies", |
|
"authors": [ |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoav Goldberg. 2017. Neural Network Methods for Natural Language Processing. Synthesis Lectures on Human Language Technologies.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Building large monolingual dictionaries at the Leipzig corpora collection: From 100 to 200 languages", |
|
"authors": [ |
|
{ |
|
"first": "Dirk", |
|
"middle": [], |
|
"last": "Goldhahn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Eckart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Uwe", |
|
"middle": [], |
|
"last": "Quasthoff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "International Language Resources and Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dirk Goldhahn, Thomas Eckart, and Uwe Quasthoff. 2012. Building large monolingual dictionaries at the Leipzig corpora collection: From 100 to 200 lan- guages. In International Language Resources and Evaluation.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Incremental prediction of sentence-final verbs: Humans versus machines", |
|
"authors": [ |
|
{ |
|
"first": "Alvin", |
|
"middle": [], |
|
"last": "Grissom", |
|
"suffix": "II" |
|
}, |
|
{ |
|
"first": "Naho", |
|
"middle": [], |
|
"last": "Orita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jordan", |
|
"middle": [], |
|
"last": "Boyd-Graber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "95--104", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/K16-1010" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alvin Grissom II, Naho Orita, and Jordan Boyd-Graber. 2016. Incremental prediction of sentence-final verbs: Humans versus machines. In Conference on Computational Natural Language Learning, pages 95-104.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Don't until the final verb wait: Reinforcement learning for simultaneous machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Alvin", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Grissom", |
|
"suffix": "II" |
|
}, |
|
{ |
|
"first": "He", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jordan", |
|
"middle": [], |
|
"last": "Boyd-Graber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Morgan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hal", |
|
"middle": [], |
|
"last": "Daum\u00e9", |
|
"suffix": "III" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Conference of Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/D14-1140" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alvin C. Grissom II, He He, Jordan Boyd-Graber, John Morgan, and Hal Daum\u00e9 III. 2014. Don't until the final verb wait: Reinforcement learning for simulta- neous machine translation. In Conference of Empir- ical Methods in Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Learning to translate in real-time with neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Jiatao", |
|
"middle": [], |
|
"last": "Gu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graham", |
|
"middle": [], |
|
"last": "Neubig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [ |
|
"O", |
|
"K" |
|
], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiatao Gu, Graham Neubig, Kyunghyun Cho, and Vic- tor O.K. Li. 2017. Learning to translate in real-time with neural machine translation. European Chapter of the Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Automatic Information Extraction and Building of Lexical Semantic Resources for NLP Applications", |
|
"authors": [ |
|
{ |
|
"first": "Birgit", |
|
"middle": [], |
|
"last": "Hamp", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Helmut", |
|
"middle": [], |
|
"last": "Feldweg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Birgit Hamp and Helmut Feldweg. 1997. Germanet-a lexical-semantic net for german. Automatic Infor- mation Extraction and Building of Lexical Semantic Resources for NLP Applications.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Interpretese vs. translationese: The uniqueness of human strategies in simultaneous interpretation", |
|
"authors": [ |
|
{ |
|
"first": "He", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jordan", |
|
"middle": [], |
|
"last": "Boyd-Graber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hal", |
|
"middle": [], |
|
"last": "Daum\u00e9", |
|
"suffix": "III" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Conference of the North American Chapter", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N16-1111" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "He He, Jordan Boyd-Graber, and Hal Daum\u00e9 III. 2016. Interpretese vs. translationese: The uniqueness of human strategies in simultaneous interpretation. In Conference of the North American Chapter of the Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Syntax-based rewriting for simultaneous machine translation", |
|
"authors": [ |
|
{ |
|
"first": "He", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alvin", |
|
"middle": [], |
|
"last": "Grissom", |
|
"suffix": "II" |
|
}, |
|
{ |
|
"first": "Jordan", |
|
"middle": [], |
|
"last": "Boyd-Graber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hal", |
|
"middle": [], |
|
"last": "Daum\u00e9", |
|
"suffix": "III" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Conference of Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D15-1006" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "He He, Alvin Grissom II, Jordan Boyd-Graber, and Hal Daum\u00e9 III. 2015. Syntax-based rewriting for simul- taneous machine translation. In Conference of Em- pirical Methods in Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "GernEdiT-the GermaNet editing tool", |
|
"authors": [ |
|
{ |
|
"first": "Verena", |
|
"middle": [], |
|
"last": "Henrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Erhard", |
|
"middle": [], |
|
"last": "Hinrichs", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "ternational Language Resources and Evaluation. European Languages Resources Association (ELRA)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Verena Henrich and Erhard Hinrichs. 2010. GernEdiT-the GermaNet editing tool. In In- ternational Language Resources and Evaluation. European Languages Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Long short-term memory", |
|
"authors": [ |
|
{ |
|
"first": "Sepp", |
|
"middle": [], |
|
"last": "Hochreiter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00fcrgen", |
|
"middle": [], |
|
"last": "Schmidhuber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Neural computation", |
|
"volume": "9", |
|
"issue": "8", |
|
"pages": "1735--1780", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Bridging the gap: Verb anticipation in German-English simultaneous interpreting", |
|
"authors": [ |
|
{ |
|
"first": "Udo", |
|
"middle": [], |
|
"last": "J\u00f6rg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Translation as Intercultural Communication: Selected Papers from the EST Congress", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Udo J\u00f6rg. 1997. Bridging the gap: Verb anticipation in German-English simultaneous interpreting. In M. Snell-Hornby, Z. Jettmarov\u00e1, and K. Kaindl, ed- itors, Translation as Intercultural Communication: Selected Papers from the EST Congress, Prague 1995.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "What gives with es gibt?", |
|
"authors": [ |
|
{ |
|
"first": "Brian", |
|
"middle": [ |
|
"Joseph" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "American Journal of Germanic Linguistics and Literatures", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "243--265", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brian Joseph. 2000. What gives with es gibt? Amer- ican Journal of Germanic Linguistics and Litera- tures, 12:243-265.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "The time-course of prediction in incremental sentence processing: Evidence from anticipatory eye movements", |
|
"authors": [ |
|
{ |
|
"first": "Yuki", |
|
"middle": [], |
|
"last": "Kamide", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gerry", |
|
"middle": [], |
|
"last": "Altmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sarah", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Haywood", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Journal of Memory and Language", |
|
"volume": "49", |
|
"issue": "1", |
|
"pages": "133--156", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuki Kamide, Gerry Altmann, and Sarah L Haywood. 2003. The time-course of prediction in incremen- tal sentence processing: Evidence from anticipatory eye movements. Journal of Memory and Language, 49(1):133-156.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "Diederik", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Ba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Repre- sentations.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Dutch as an SOV language. Linguistic analysis", |
|
"authors": [], |
|
"year": 1975, |
|
"venue": "", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "111--136", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jan Koster. 1975. Dutch as an SOV language. Linguis- tic analysis, 1(2):111-136.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Mecab : Yet another partof-speech and morphological analyzer", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Kudo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "T. Kudo. 2005. Mecab : Yet another part- of-speech and morphological analyzer. http://mecab.sourceforge.net/.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "A structured self-attentive sentence embedding", |
|
"authors": [ |
|
{ |
|
"first": "Zhouhan", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Minwei", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cicero", |
|
"middle": [], |
|
"last": "Nogueira dos Santos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mo", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Xiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bowen", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhouhan Lin, Minwei Feng, Cicero Nogueira dos San- tos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. In Proceedings of the International Conference on Learning Representations.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "STACL: Simultaneous translation with implicit anticipation and controllable latency", |
|
"authors": [ |
|
{ |
|
"first": "Mingbo", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Liang", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Renjie", |
|
"middle": [], |
|
"last": "Zheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kaibo", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Baigong", |
|
"middle": [], |
|
"last": "Zheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chuanqiang", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhongjun", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hairong", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xing", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hua", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haifeng", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1289" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mingbo Ma, Liang Huang, Hao Xiong, Renjie Zheng, Kaibo Liu, Baigong Zheng, Chuanqiang Zhang, Zhongjun He, Hairong Liu, Xing Li, Hua Wu, and Haifeng Wang. 2019. STACL: Simultaneous trans- lation with implicit anticipation and controllable la- tency using prefix-to-prefix framework.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Simultaneous Japanese-English interpretation based on early predition of English verb", |
|
"authors": [ |
|
{ |
|
"first": "Shigeki", |
|
"middle": [], |
|
"last": "Matsubara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Keiichi", |
|
"middle": [], |
|
"last": "Iwashima", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nobuo", |
|
"middle": [], |
|
"last": "Kawaguchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katsuhiko", |
|
"middle": [], |
|
"last": "Toyama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yasuyoshi", |
|
"middle": [], |
|
"last": "Inagaki", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Symposium on Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shigeki Matsubara, Keiichi Iwashima, Nobuo Kawaguchi, Katsuhiko Toyama, and Yasuyoshi Inagaki. 2000. Simultaneous Japanese-English in- terpretation based on early predition of English verb. In Symposium on Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "The timing of verb selection in japanese sentence production", |
|
"authors": [ |
|
{ |
|
"first": "Shota", |
|
"middle": [], |
|
"last": "Momma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Slevc", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Colin", |
|
"middle": [], |
|
"last": "Phillips", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Journal of experimental psychology. Learning, memory, and cognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shota Momma, L Robert Slevc, and Colin Phillips. 2015. The timing of verb selection in japanese sen- tence production. Journal of experimental psychol- ogy. Learning, memory, and cognition.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "The timing of verb selection in english active and passive sentences", |
|
"authors": [ |
|
{ |
|
"first": "Shota", |
|
"middle": [], |
|
"last": "Momma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Slevc", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Colin", |
|
"middle": [], |
|
"last": "Phillips", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shota Momma, Robert Slevc, and Colin Phillips. 2014. The timing of verb selection in english active and passive sentences.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "An empirical study of mini-batch creation strategies for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Makoto", |
|
"middle": [], |
|
"last": "Morishita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yusuke", |
|
"middle": [], |
|
"last": "Oda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graham", |
|
"middle": [], |
|
"last": "Neubig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Koichiro", |
|
"middle": [], |
|
"last": "Yoshino", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katsuhito", |
|
"middle": [], |
|
"last": "Sudoh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Satoshi", |
|
"middle": [], |
|
"last": "Nakamura", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "The First Workshop on Neural Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W17-3208" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Makoto Morishita, Yusuke Oda, Graham Neubig, Koichiro Yoshino, Katsuhito Sudoh, and Satoshi Nakamura. 2017. An empirical study of mini-batch creation strategies for neural machine translation. In The First Workshop on Neural Machine Translation.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Syntax-based simultaneous translation through prediction of unseen syntactic constituents", |
|
"authors": [ |
|
{ |
|
"first": "Yusuke", |
|
"middle": [], |
|
"last": "Oda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graham", |
|
"middle": [], |
|
"last": "Neubig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sakriani", |
|
"middle": [], |
|
"last": "Sakti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomoki", |
|
"middle": [], |
|
"last": "Toda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Satoshi", |
|
"middle": [], |
|
"last": "Nakamura", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yusuke Oda, Graham Neubig, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2015. Syntax-based si- multaneous translation through prediction of unseen syntactic constituents. Proceedings of the Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Interactive neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "\u00c1lvaro", |
|
"middle": [], |
|
"last": "Peris", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miguel", |
|
"middle": [], |
|
"last": "Domingo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francisco", |
|
"middle": [], |
|
"last": "Casacuberta", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Computer Speech and Language", |
|
"volume": "45", |
|
"issue": "", |
|
"pages": "201--220", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "lvaro Peris, Miguel Domingo, and Francisco Casacu- berta. 2017. Interactive neural machine translation. Computer Speech and Language, 45:201-220.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Deep contextualized word representations", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Neumann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Iyyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Gardner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N18-1202" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of the North American Chapter of the Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Improvements in part-ofspeech tagging with an application to german", |
|
"authors": [ |
|
{ |
|
"first": "Helmut", |
|
"middle": [], |
|
"last": "Schmid", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Proceedings of the ACL SIGDAT-Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Helmut Schmid. 1995. Improvements in part-of- speech tagging with an application to german. In Proceedings of the ACL SIGDAT-Workshop.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Neural attention models for sequence classification: Analysis and application to key term extraction and dialogue act detection", |
|
"authors": [ |
|
{ |
|
"first": "Hung-Yi", |
|
"middle": [], |
|
"last": "Sheng-Syun Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Conference of the International Speech Communication Association", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sheng-syun Shen and Hung-yi Lee. 2016. Neural at- tention models for sequence classification: Analysis and application to key term extraction and dialogue act detection. In Conference of the International Speech Communication Association.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Syntactic anticipation in", |
|
"authors": [ |
|
{ |
|
"first": "Wolfram", |
|
"middle": [], |
|
"last": "Wilss", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1978, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wolfram Wilss. 1978. Syntactic anticipation in", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "English simultaneous interpreting", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "German", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Language Interpretation and Communication", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "German-English simultaneous interpreting. In Lan- guage Interpretation and Communication.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Simultaneous machine translation of German lectures into English: Insvestigating research challenges for the future", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Wolfel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Kolss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Kraft", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Niehues", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Paulik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Waibel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "IEEE Spoken Language Technology Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Wolfel, M. Kolss, F. Kraft, J. Niehues, M. Paulik, and A. Waibel. 2008. Simultaneous machine transla- tion of German lectures into English: Insvestigating research challenges for the future. In IEEE Spoken Language Technology Workshop.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Hierarchical attention networks for document classification", |
|
"authors": [ |
|
{ |
|
"first": "Zichao", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Diyi", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Smola", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N16-1174" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alexander J. Smola, and Eduard H. Hovy. 2016. Hi- erarchical attention networks for document classifi- cation. In Proceedings of the North American Chap- ter of the Association for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "ANVIIL." |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Comparing word and character representations for" |
|
}, |
|
"FIGREF3": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "ANVIIL's BiGRU with self-attention outperforms other most settings on predicting the 100 most common verbs in Japanese." |
|
}, |
|
"FIGREF6": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Attention during Japanese verb prediction. Attention and prediction transition through time on a Japanese sentence. The genitive case marker no, in bright yellow, has a high attention weight, as do the characters making in the noun before it. Case marker-adjacent nouns, including before the genitive no (twice) and the accusative wo have slightly less. Toward the end of the sentence, attention shifts to the quotative particle to, which significantly limits possible completions." |
|
}, |
|
"FIGREF7": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Progression of attention weights of a word-based model on a German sentence. The model successfully captures the passive voice in the sentence where \"wird erwartet\" is often translated together as \"is expected\". Full translation of the example is: Chancellor Merkel is expected to speak in London next week. simultaneous machine translation." |
|
} |
|
} |
|
} |
|
} |