|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:34:32.611847Z" |
|
}, |
|
"title": "CopyNext: Explicit Span Copying and Alignment in Sequence to Sequence Models", |
|
"authors": [ |
|
{ |
|
"first": "Abhinav", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Johns Hopkins University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Xia", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Johns Hopkins University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Guanghui", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Johns Hopkins University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Mahsa", |
|
"middle": [], |
|
"last": "Yarmohammadi", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Johns Hopkins University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [ |
|
"Van" |
|
], |
|
"last": "Durme", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Johns Hopkins University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Copy mechanisms are employed in sequence to sequence models (seq2seq) to generate reproductions of words from the input to the output. These frameworks, operating at the lexical type level, fail to provide an explicit alignment that records where each token was copied from. Further, they require contiguous token sequences from the input (spans) to be copied individually. We present a model with an explicit token-level copy operation and extend it to copying entire spans. Our model provides hard alignments between spans in the input and output, allowing for nontraditional applications of seq2seq, like information extraction. We demonstrate the approach on Nested Named Entity Recognition, achieving near state-of-the-art accuracy with an order of magnitude increase in decoding speed. 1", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Copy mechanisms are employed in sequence to sequence models (seq2seq) to generate reproductions of words from the input to the output. These frameworks, operating at the lexical type level, fail to provide an explicit alignment that records where each token was copied from. Further, they require contiguous token sequences from the input (spans) to be copied individually. We present a model with an explicit token-level copy operation and extend it to copying entire spans. Our model provides hard alignments between spans in the input and output, allowing for nontraditional applications of seq2seq, like information extraction. We demonstrate the approach on Nested Named Entity Recognition, achieving near state-of-the-art accuracy with an order of magnitude increase in decoding speed. 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Sequence transduction converts a sequence of input tokens to a sequence of output tokens. It is a dominant framework for generation tasks, such as machine translation, dialogue, and summarization. Seq2seq can also be used for Information Extraction (IE), where the target structure is decoded as a linear output based on an encoded (linear) representation of the input.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "As IE is traditionally considered a structured prediction task, it remains today that IE systems are assumed to produce an annotation on the input text. That is, predicting which specific tokens of an input string led to, e.g., the label of PERSON. This is in contrast to text generation which rarely, if ever, needs hard alignments between the input and the desired output. Our work explores a novel extension to seq2seq that provides such alignments. 1 Our source code:", |
|
"cite_spans": [ |
|
{ |
|
"start": 453, |
|
"end": 454, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "https://github.com/ abhinonymous/copynext Figure 1 : Sequence transduction outputs for nested named entities in an example sentence using: (a) seq2seq, (b) pointer network, (c) Copy-only, and (d) CopyNext model. The numbers are predictions of indices corresponding to the tokens in the input sequence. CN refers to the CopyNext symbol, our proposed method of denoting the operation that copies the next token from the input. In (d), the next token from token 4 would be 5.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 42, |
|
"end": 50, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
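
{

"text": "As an illustrative sketch (not part of the original paper), the CopyNext target in (d) can be produced from a gold span with a few lines of Python; the token indices and the PER label are assumptions chosen only to mirror the figure.\n\ndef copynext_target(start, end, label):\n    # start/end are inclusive token indices of a gold span; 'CN' is the CopyNext symbol\n    return [start] + ['CN'] * (end - start) + [label]\n\n# a span covering input tokens 4..6 with an assumed label PER -> [4, 'CN', 'CN', 'PER']\nexample = copynext_target(4, 6, 'PER')",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Introduction",

"sec_num": "1"

},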
|
{ |
|
"text": "Specifically, we extend pointer (or copy) networks. Unlike the algorithmic tasks originally targeted by Vinyals et al. (2015) , tasks in NLP tend to copy spans from the input rather than discontiguous tokens. This is prevalent for copying named entities in dialogue (Gu et al., 2016; Eric and Manning, 2017) , entire sentences in summarization (See et al., 2017; Song et al., 2018) , or even single words (if subtokenized). The need to efficiently copy spans motivates our introduction of an inductive bias that copies contiguous tokens. Like a pointer network, our model copies the first token of a span. However, for subsequent timesteps, our model generates a \"CopyNext\" symbol (CN) instead of copying another token from source. CopyNext represents the operation of copying the word following the last predicted word from the input sequence. Figure 1 highlights the difference between output sequences for several transductive models, including our CopyNext model. We apply our model for the Nested Named Entity Recognition (NNER) task (Ringland et al., 2019) . Unlike traditional named entity recognition, named entity mentions in NNER may be subsequences of Figure 1 ). We find that both explicit copying and CopyNext lead to a system faster than prior work and better than a simple seq2seq baseline. It is, however, outperformed by a much slower model that performs an exhaustive search over the space of potential labels, a solution that does not scale to large complex label sets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 104, |
|
"end": 125, |
|
"text": "Vinyals et al. (2015)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 266, |
|
"end": 283, |
|
"text": "(Gu et al., 2016;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 284, |
|
"end": 307, |
|
"text": "Eric and Manning, 2017)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 344, |
|
"end": 362, |
|
"text": "(See et al., 2017;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 363, |
|
"end": 381, |
|
"text": "Song et al., 2018)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 1039, |
|
"end": 1062, |
|
"text": "(Ringland et al., 2019)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 845, |
|
"end": 851, |
|
"text": "Figure", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1163, |
|
"end": 1171, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Pointer networks (Vinyals et al., 2015; Jia and Liang, 2016; Merity et al., 2016) are seq2seq models that employ a soft attention distribution (Bahdanau et al., 2014) to produce an output sequence consisting of values from the input sequence. Pointer-generator networks (Miao and Blunsom, 2016; Gulcehre et al., 2016, inter alia) extend the range of output types by combining the distribution from the pointer with a vocabulary distribution from a generator. Thus, these models operate on the type level. In contrast, our model operates at the token level. Instead of using soft attention distribution of the encoder states, we use hard attention, resulting in a single encoder state, or a single token, to feed to the decoder. This enables explicit copying of span offsets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 17, |
|
"end": 39, |
|
"text": "(Vinyals et al., 2015;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 40, |
|
"end": 60, |
|
"text": "Jia and Liang, 2016;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 61, |
|
"end": 81, |
|
"text": "Merity et al., 2016)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 143, |
|
"end": 166, |
|
"text": "(Bahdanau et al., 2014)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 270, |
|
"end": 294, |
|
"text": "(Miao and Blunsom, 2016;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 295, |
|
"end": 329, |
|
"text": "Gulcehre et al., 2016, inter alia)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Closest to our work, Zhou et al. (2018) and Panthaplackel et al. (2020) have tackled span copying by extending pointer-generator networks and predicting both start and end indices of entire spans that need to be copied. Using those offsets, they perform a forced decoding of the predicted tokens within the span. These works focus on text generation tasks, like sentence summarization, question generation, and editing. In contrast, we are concerned with information extraction tasks as transduction, where hard alignments to the input sentence are crucial and output sequences must represent a valid linearized structure. Specifically, we study nested named entity recognition (NNER).", |
|
"cite_spans": [ |
|
{ |
|
"start": 21, |
|
"end": 39, |
|
"text": "Zhou et al. (2018)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Prior work uses several approaches to model NNER: machine reading comprehension (Li et al., 2019) , transition-based methods , mention hypergraphs (Lu and Roth, 2015; Katiyar and Cardie, 2018) , and seq2seq models (Strakov\u00e1 et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 80, |
|
"end": 97, |
|
"text": "(Li et al., 2019)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 147, |
|
"end": 166, |
|
"text": "(Lu and Roth, 2015;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 167, |
|
"end": 192, |
|
"text": "Katiyar and Cardie, 2018)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 214, |
|
"end": 237, |
|
"text": "(Strakov\u00e1 et al., 2019)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We formulate the task as transforming the input sentence X to a linearized sequence Y which represents the gold structure: labeled spans. Specifically, Y contains input word indices, CopyNext symbols, and labels from a label set L.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Description", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "As described earlier, the model ( Figure 2 ) is reminiscent of pointer networks. We extend its capabilities by introducing the notion of a \"Copy Next\" operation where the network predicts to copy the word sequentially after the previous prediction.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 34, |
|
"end": 42, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model Description", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Embedding Layer This layer embeds a sequence of tokens X = hx 1 , x 2 , ..., x N 0 i into a sequence of vectors x = hx 1 , x 2 , ..., x N i by using (possibly contextualized) word embeddings. The gold labels are adjusted to account for tokenization.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Encoder", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Architecture The input embedding is further encoded by a stacked bidirectional LSTM (Hochre-iter and Schmidhuber, 1997) into encoder states e = he 1 , e 2 , ..., e N i where each state is a concatenation of the forward ( ! f ) and backward ( f ) outputs of the last layer of the LSTM and e i 2 R D :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Encoder", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "e j i = [ ! f j (e j 1 i , e j i 1 ); f j (e j 1 i , e j i+1 )], (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Encoder", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where e j i is the j-th layer encoder hidden state at timestep i and D is the hidden size of the LSTM.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Encoder", |
|
"sec_num": "3.1" |
|
}, |
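
{

"text": "A minimal sketch of this encoder in PyTorch (the framework and the sizes are assumptions, not the paper's configuration); each position of the output plays the role of $e_i$, and in this sketch its dimension is twice the LSTM hidden size because of the forward/backward concatenation.\n\nimport torch\nimport torch.nn as nn\n\nemb_dim, hidden_size, num_layers = 1024, 256, 2  # illustrative sizes\nencoder = nn.LSTM(emb_dim, hidden_size, num_layers=num_layers,\n                  bidirectional=True, batch_first=True)\nx = torch.randn(1, 10, emb_dim)  # (batch, N, emb_dim) word embeddings\ne, _ = encoder(x)                # (batch, N, 2 * hidden_size); e[:, i] corresponds to e_i",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Encoder",

"sec_num": "3.1"

},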
|
{ |
|
"text": "The target for the transducer is the linearized representation of the nested named entity spans and labels. We generate a decision y that either points to (a) a timestep in the encoder sequence, marking the starting index of a span, or (b) the CopyNext symbol, which operates by advancing the right boundary of the span to include the next (sub)word of the input sequence, or (c) a label l 2 L, signifying both the end of the span and classifying the span.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoder", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We learn D-dimensional embeddings for each label l 2 L. The vectors corresponding to the start index of a span and the Copy-Next operation are the encoder outputs e i where i is equal to the start index or index pointed to by CopyNext and are fed directly to the decoder. 2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Input Embeddings", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Architecture The decoder is a stacked LSTM taking as input either an encoder state e i or a label embedding and produces decoder state d t 2 R D .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Input Embeddings", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We predict scores for making a labeling decision, a CopyNext operation, or pointing to a token in the input. At each decoding step t, for labels, we train a linear layer W L 2 R D\u21e5|L| with input d t and output scores l t . Likewise, we do the same for the CopyNext symbol using a linear layer W C 2 R D\u21e51 with input d t and output score c t . The score of pointing to an index i in the input sequence is calculated by dot product:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decision Vector", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "s i t = e i \u2022 d t .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decision Vector", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The decision distribution y t is then:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decision Vector", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "y t = softmax([s t ; l t ; c t ]), y t 2 R N +|L|+1 . (2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decision Vector", |
|
"sec_num": null |
|
}, |
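
{

"text": "A sketch of Eq. (2) in PyTorch (an assumption about the implementation; sizes are illustrative): the pointer scores come from dot products with the encoder states, and the label and CopyNext scores from the linear layers $W_L$ and $W_C$.\n\nimport torch\nimport torch.nn as nn\n\nN, D, num_labels = 10, 512, 5\ne = torch.randn(N, D)           # encoder states e_1..e_N\nd_t = torch.randn(D)            # decoder state at step t\nW_L = nn.Linear(D, num_labels)  # label scores l_t\nW_C = nn.Linear(D, 1)           # CopyNext score c_t\n\ns_t = e @ d_t                   # pointer score s_t^i for every input index i\ny_t = torch.softmax(torch.cat([s_t, W_L(d_t), W_C(d_t)]), dim=-1)  # size N + |L| + 1",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Decision Vector",

"sec_num": null

},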
|
{ |
|
"text": "Our training objective is the cross-entropy loss:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training and Prediction", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "= X t X k y k t ,y ? t log(y k t )", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Training and Prediction", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "where y ? is the gold decision, k 2 [0, N + |L| + 1) (representing all three kinds of possible decisions: 2 We will use ei to refer to e", |
|
"cite_spans": [ |
|
{ |
|
"start": 106, |
|
"end": 107, |
|
"text": "2", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training and Prediction", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "( 1) i . t and 0 otherwise. The summation over index t covers the whole dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training and Prediction", |
|
"sec_num": "3.3" |
|
}, |
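
{

"text": "Concretely, each decoding step contributes the negative log-probability of the gold decision; a small sketch (the tensors below are stand-ins for the quantities in Eqs. (2) and (3)):\n\nimport torch\n\ny_t = torch.softmax(torch.randn(20), dim=-1)  # stand-in for the Eq. (2) distribution\ngold_t = 3                                    # index of the gold decision y*_t\nstep_loss = -torch.log(y_t[gold_t])           # summed over all steps t, this gives Eq. (3)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Training and Prediction",

"sec_num": "3.3"

},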
|
{ |
|
"text": "At prediction time we find the decision y t with the greatest probability (y t = arg max i (y i t )) at decoder step t. 3 The input to the decoder at t + 1 timestamp can be one of three things: (1) the output e i of the encoder when y t points to the index i of the input sequence, (2) the embedding of the label l predicted at t when y t points to the label l 2 L, or (3) the output e i+1 of the encoder where i was the input to the decoder at t when y t points to the CopyNext operation. The decoder halts when the hEOS i label is predicted or the maximum output sequence length is reached.", |
|
"cite_spans": [ |
|
{ |
|
"start": 120, |
|
"end": 121, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training and Prediction", |
|
"sec_num": "3.3" |
|
}, |
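
{

"text": "A schematic greedy decoding loop reflecting the three input cases above (a sketch, not the released implementation; decoder_step, score, and label_emb are stand-ins for the decoder LSTM, the Eq. (2) scorer, and the label embeddings, and the choice of initial decoder input is an assumption):\n\ndef greedy_decode(e, labels, label_emb, decoder_step, score, max_len=50):\n    # e: list of N encoder states; labels: label list containing '<EOS>'\n    # score(d) returns N pointer scores, then |L| label scores, then the CopyNext score\n    N = len(e)\n    out, inp, last_i = [], e[0], 0\n    for _ in range(max_len):\n        y = score(decoder_step(inp))\n        k = max(range(len(y)), key=y.__getitem__)  # greedy arg max decision\n        out.append(k)\n        if k < N:                      # case (1): pointer to input index k\n            inp, last_i = e[k], k\n        elif k == len(y) - 1:          # case (3): CopyNext, advance to the next input token\n            last_i += 1\n            inp = e[last_i]\n        else:                          # case (2): a label\n            if labels[k - N] == '<EOS>':\n                break\n            inp = label_emb(k - N)     # label embedding fed back to the decoder\n    return out",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Training and Prediction",

"sec_num": "3.3"

},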
|
{ |
|
"text": "To ensure well-formed target output sequences, we use a state machine (Figure 3 ) to mask parts of y t that would lead to an illegal sequence at t + 1.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 70, |
|
"end": 79, |
|
"text": "(Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Training and Prediction", |
|
"sec_num": "3.3" |
|
}, |
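
{

"text": "Figure 3 itself is not reproduced in this parse, so the mask below is only a plausible reading of the constraints (an assumption, not the paper's exact automaton): a span must open with an input index, CopyNext may only extend an already open span, and a label closes the span.\n\ndef legal_mask(prev, N, num_labels):\n    # prev is 'start', 'index', 'cn', or 'label'; returns a boolean mask over the N + |L| + 1 decisions\n    index_ok = prev in ('start', 'label')  # a new span may begin at the start or after a closing label\n    cn_ok = prev in ('index', 'cn')        # CopyNext extends an open span\n    label_ok = prev in ('index', 'cn')     # a label closes an open span\n    return [index_ok] * N + [label_ok] * num_labels + [cn_ok]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Training and Prediction",

"sec_num": "3.3"

},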
|
{ |
|
"text": "Our experiments analyze the effects of various choices in different components of the system. We use the NNE dataset and splits from Ringland et al. (2019) , resulting in 43,457, 1,989, and 3,762 sentences in the training, development, and test splits. Experiments for model development and analysis use the development set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 133, |
|
"end": 155, |
|
"text": "Ringland et al. (2019)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We first establish the best performing text embeddings which we fix for the Table 2 : NNER accuracy and speed on the test set for external baselines and our models. *Seq2seq is based on a reference implementation to ensure correctness, but not efficiency: it has the same asymptotics as the Copy and CopyNext models, and can be considered similar in speed.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 76, |
|
"end": 83, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Text Representation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "rest of the experiments. The rationale is that the embedding will provide an orthogonal boost in accuracy to the network with respect to the other changes in the network structure. We find in Table 1 that RoBERTa large (Liu et al., 2019) is best. 4", |
|
"cite_spans": [ |
|
{ |
|
"start": 219, |
|
"end": 237, |
|
"text": "(Liu et al., 2019)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 192, |
|
"end": 199, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Text Representation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Linearization Strategy Previous work (Zhang et al., 2019) has shown that linearization scheme affects model performance. We experiment with several variants of sorting spans in ascending order based on their start index. We also try sorting based on end index and copying the previous token instead. We find sorting based on end index performs poorly, while sorting by start all perform similarly. Our final linearization strategy sorts by start index, then span length (longer spans first). Additional ties (in span label) are broken randomly.", |
|
"cite_spans": [ |
|
{ |
|
"start": 37, |
|
"end": 57, |
|
"text": "(Zhang et al., 2019)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Text Representation", |
|
"sec_num": null |
|
}, |
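
{

"text": "The chosen ordering can be written as a simple sort key; a sketch (the (start, end, label) span format is an assumption):\n\ndef linearize_order(spans):\n    # sort by start index, then longer spans first; remaining ties are left to the sort's arbitrary order\n    return sorted(spans, key=lambda s: (s[0], -(s[1] - s[0])))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Text Representation",

"sec_num": null

},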
|
{ |
|
"text": "RoBERTa Embedding Layer Recent work suggests that NER information may be stored in the lower layers of an encoder (Hewitt and Manning, 2019; Tenney et al., 2019) . We found using the 15th layer of RoBERTa rather than the final one (24th), is slightly helpful (see Appendix A.1).", |
|
"cite_spans": [ |
|
{ |
|
"start": 114, |
|
"end": 140, |
|
"text": "(Hewitt and Manning, 2019;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 141, |
|
"end": 161, |
|
"text": "Tenney et al., 2019)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Text Representation", |
|
"sec_num": null |
|
}, |
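
{

"text": "A sketch of reading out an intermediate RoBERTa layer with the Hugging Face transformers library (the library and the example sentence are assumptions; the paper only reports that layer 15 of RoBERTa large was slightly better than the final layer):\n\nimport torch\nfrom transformers import AutoTokenizer, AutoModel\n\ntokenizer = AutoTokenizer.from_pretrained('roberta-large')\nmodel = AutoModel.from_pretrained('roberta-large', output_hidden_states=True)\n\ninputs = tokenizer('They met in New York City last year .', return_tensors='pt')\nwith torch.no_grad():\n    outputs = model(**inputs)\nlayer_15 = outputs.hidden_states[15]  # index 0 is the embedding layer, so this is layer 15 of 24",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Text Representation",

"sec_num": null

},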
|
{ |
|
"text": "In Table 2 , we evaluate our bestperforming (dev.) model on the test set. We compare our approach against the previous best approaches reported in Ringland et al. (2019) : hypergraph-based (Hypergraph, and transition-based (Transition, Wang et al., 2018 ) models proposed to recognize nested mentions. We also contrast the CopyNext model against a baseline seq2seq model and one with only a hard copy operation (see (a) and (c) in Figure 1 ). Prior work has given an analysis of the run-time of their approach. Based on their concern about asymptotic speed we also provide the following analysis and practical speed efficiency of the systems and their accuracies. 5 We find that Hypergraph outperforms the Copy-Next model by 4.7 F1 with most of the difference in recall. This is likely due to the exhaustive search used by Hypergraph, as our model is 16.7 times faster. An analysis of their code and algorithm reveals that their lower bound time complexity \u2326(mn) is higher than ours \u2326(n), n is length of input sequence and m is number of mention types. Since the average decoder length is low, the best case scenario often occurs. The Transition system has 6.3 times faster prediction speed compared to Hypergraph, however, it comes with 17.8% absolute drop in F1 accuracy. Our model is substantially faster than both. Furthermore, we show that both an explicit Copy and the CopyNext operation are useful, resulting in gains of 8.1 F1 and 11.3 F1 over a seq2seq baseline.", |
|
"cite_spans": [ |
|
{ |
|
"start": 147, |
|
"end": 169, |
|
"text": "Ringland et al. (2019)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 223, |
|
"end": 253, |
|
"text": "(Transition, Wang et al., 2018", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 664, |
|
"end": 665, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 431, |
|
"end": 439, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "NNER Results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The errors made by the model on the development set can be clustered broadly into four main types: (1) correct span detection but mislabeled, (2) correct label but incorrect span detection (either subset or superset of correct span), (3) both span and label were incorrectly predicted, and (4) missing spans entirely. Table 7 in Appendix A.1 provides examples.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 318, |
|
"end": 325, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "NNER Error Analysis", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We propose adopting pointer and copy networks with hard attention and extending these models with a CopyNext operation, enabling sequential copying of spans given just the start index of the span. On a traditionally structured prediction task of NNER, we use a sequence transduction model with the CopyNext operation, leading to a competitive model that provides a 16.7x speedup relative to current state of the art (which performs an exhaustive search), at a cost of 4.7% loss in F1, largely due to lower recall.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Our model is a step forward in structured prediction as sequence transduction. We have found in initial experiments on event extraction similar relative improvements to that discussed here: future work will investigate applications to richer transductive semantic parsing models (Zhang et al., 2019; Cai and Lam, 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 279, |
|
"end": 299, |
|
"text": "(Zhang et al., 2019;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 300, |
|
"end": 318, |
|
"text": "Cai and Lam, 2019)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Initial experiments with beam search suggest an expensive tradeoff between time and performance (Appendix A.1).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We experimented with mean pooling RoBERTa vectors for subwords(Zhang et al., 2019) to maintain the same span lengths as input. Pooling subword units led to poorer performance (see Appendix A.1Table 3).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Efficiency is measured using wall clock time for the entire test set, performed with Intel Xeon 2.10GHz CPU and a single GeForce GTX 1080 TI GPU.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work was supported in part by IARPA BETTER (#2019-19051600005), DARPA AIDA (FA8750-18-2-0015) and KAIROS (FA8750-19-2-0034). The views and conclusions contained in this work are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, or endorsements of DARPA, ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Neural machine translation by jointly learning to align and translate", |
|
"authors": [ |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1409.0473" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Core semantic first: A top-down approach for AMR parsing", |
|
"authors": [ |
|
{ |
|
"first": "Deng", |
|
"middle": [], |
|
"last": "Cai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wai", |
|
"middle": [], |
|
"last": "Lam", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3799--3809", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1393" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Deng Cai and Wai Lam. 2019. Core semantic first: A top-down approach for AMR parsing. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 3799-3809, Hong Kong, China. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A copyaugmented sequence-to-sequence architecture gives good performance on task-oriented dialogue", |
|
"authors": [ |
|
{ |
|
"first": "Mihail", |
|
"middle": [], |
|
"last": "Eric", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "468--473", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mihail Eric and Christopher Manning. 2017. A copy- augmented sequence-to-sequence architecture gives good performance on task-oriented dialogue. In Pro- ceedings of the 15th Conference of the European Chapter of the Association for Computational Lin- guistics: Volume 2, Short Papers, pages 468-473, Valencia, Spain. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Incorporating copying mechanism in sequence-to-sequence learning", |
|
"authors": [ |
|
{ |
|
"first": "Jiatao", |
|
"middle": [], |
|
"last": "Gu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhengdong", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hang", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Victor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1631--1640", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P16-1154" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 1631-1640, Berlin, Germany. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Pointing the unknown words", |
|
"authors": [ |
|
{ |
|
"first": "Caglar", |
|
"middle": [], |
|
"last": "Gulcehre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sungjin", |
|
"middle": [], |
|
"last": "Ahn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ramesh", |
|
"middle": [], |
|
"last": "Nallapati", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bowen", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "140--149", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P16-1014" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In Proceedings of the 54th Annual Meeting of the Association for Computa- tional Linguistics (Volume 1: Long Papers), pages 140-149, Berlin, Germany. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "A structural probe for finding syntax in word representations", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Hewitt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Christopher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4129--4138", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1419" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word repre- sentations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129-4138, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Long short-term memory", |
|
"authors": [ |
|
{ |
|
"first": "Sepp", |
|
"middle": [], |
|
"last": "Hochreiter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00fcrgen", |
|
"middle": [], |
|
"last": "Schmidhuber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Neural computation", |
|
"volume": "9", |
|
"issue": "8", |
|
"pages": "1735--1780", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Data recombination for neural semantic parsing", |
|
"authors": [ |
|
{ |
|
"first": "Robin", |
|
"middle": [], |
|
"last": "Jia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1606.03622" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robin Jia and Percy Liang. 2016. Data recombina- tion for neural semantic parsing. arXiv preprint arXiv:1606.03622.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Spanbert: Improving pre-training by representing and predicting spans", |
|
"authors": [ |
|
{ |
|
"first": "Mandar", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yinhan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Weld", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "8", |
|
"issue": "0", |
|
"pages": "64--77", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel Weld, Luke Zettlemoyer, and Omer Levy. 2020. Spanbert: Improving pre-training by representing and predict- ing spans. Transactions of the Association for Com- putational Linguistics, 8(0):64-77.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Nested named entity recognition revisited", |
|
"authors": [ |
|
{ |
|
"first": "Arzoo", |
|
"middle": [], |
|
"last": "Katiyar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "861--871", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Arzoo Katiyar and Claire Cardie. 2018. Nested named entity recognition revisited. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long Pa- pers), pages 861-871.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "A unified mrc framework for named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Xiaoya", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingrong", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuxian", |
|
"middle": [], |
|
"last": "Meng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qinghong", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fei", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiwei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1910.11476" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, and Jiwei Li. 2019. A unified mrc framework for named entity recognition. arXiv preprint arXiv:1910.11476.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Roberta: A robustly optimized BERT pretraining approach", |
|
"authors": [ |
|
{ |
|
"first": "Yinhan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingfei", |
|
"middle": [], |
|
"last": "Du", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mandar", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining ap- proach. CoRR, abs/1907.11692.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Joint mention extraction and classification with mention hypergraphs", |
|
"authors": [ |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "857--867", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wei Lu and Dan Roth. 2015. Joint mention extraction and classification with mention hypergraphs. In Pro- ceedings of the 2015 Conference on Empirical Meth- ods in Natural Language Processing, pages 857- 867.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Pointer sentinel mixture models", |
|
"authors": [ |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Merity", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Caiming", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Bradbury", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1609.07843" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture mod- els. arXiv preprint arXiv:1609.07843.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Language as a latent variable: Discrete generative models for sentence compression", |
|
"authors": [ |
|
{ |
|
"first": "Yishu", |
|
"middle": [], |
|
"last": "Miao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "319--328", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yishu Miao and Phil Blunsom. 2016. Language as a latent variable: Discrete generative models for sen- tence compression. In Proceedings of the 2016 Con- ference on Empirical Methods in Natural Language Processing, pages 319-328.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Copy that! editing sequences by copying spans", |
|
"authors": [ |
|
{ |
|
"first": "Sheena", |
|
"middle": [], |
|
"last": "Panthaplackel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miltiadis", |
|
"middle": [], |
|
"last": "Allamanis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc", |
|
"middle": [], |
|
"last": "Brockschmidt", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sheena Panthaplackel, Miltiadis Allamanis, and Marc Brockschmidt. 2020. Copy that! editing sequences by copying spans.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Glove: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word rep- resentation. In Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 1532-1543.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Deep contextualized word representations", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Neumann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Iyyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Gardner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "2227--2237", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N18-1202" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "NNE: A dataset for nested named entity recognition in English newswire", |
|
"authors": [ |
|
{ |
|
"first": "Nicky", |
|
"middle": [], |
|
"last": "Ringland", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiang", |
|
"middle": [], |
|
"last": "Dai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Hachey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sarvnaz", |
|
"middle": [], |
|
"last": "Karimi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cecile", |
|
"middle": [], |
|
"last": "Paris", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Curran", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5176--5181", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1510" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nicky Ringland, Xiang Dai, Ben Hachey, Sarvnaz Karimi, Cecile Paris, and James R. Curran. 2019. NNE: A dataset for nested named entity recognition in English newswire. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 5176-5181, Florence, Italy. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Get to the point: Summarization with pointergenerator networks", |
|
"authors": [ |
|
{ |
|
"first": "Abigail", |
|
"middle": [], |
|
"last": "See", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Peter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1073--1083", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P17-1099" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer- generator networks. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073- 1083, Vancouver, Canada. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Structureinfused copy mechanisms for abstractive summarization", |
|
"authors": [ |
|
{ |
|
"first": "Kaiqiang", |
|
"middle": [], |
|
"last": "Song", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lin", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fei", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 27th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1717--1729", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kaiqiang Song, Lin Zhao, and Fei Liu. 2018. Structure- infused copy mechanisms for abstractive summariza- tion. In Proceedings of the 27th International Con- ference on Computational Linguistics, pages 1717- 1729, Santa Fe, New Mexico, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Neural architectures for nested ner through linearization", |
|
"authors": [ |
|
{ |
|
"first": "Jana", |
|
"middle": [], |
|
"last": "Strakov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Milan", |
|
"middle": [], |
|
"last": "Straka", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1908.06926" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jana Strakov\u00e1, Milan Straka, and Jan Haji\u010d. 2019. Neu- ral architectures for nested ner through linearization. arXiv preprint arXiv:1908.06926.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Bert rediscovers the classical nlp pipeline", |
|
"authors": [ |
|
{ |
|
"first": "Ian", |
|
"middle": [], |
|
"last": "Tenney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dipanjan", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ellie", |
|
"middle": [], |
|
"last": "Pavlick", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1905.05950" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. Bert rediscovers the classical nlp pipeline. arXiv preprint arXiv:1905.05950.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Neural segmental hypergraphs for overlapping mention recognition", |
|
"authors": [ |
|
{ |
|
"first": "Bailin", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "204--214", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1019" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bailin Wang and Wei Lu. 2018. Neural segmental hy- pergraphs for overlapping mention recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 204-214, Brussels, Belgium. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "A neural transition-based model for nested mention recognition", |
|
"authors": [ |
|
{ |
|
"first": "Bailin", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hongxia", |
|
"middle": [], |
|
"last": "Jin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1011--1017", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1124" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bailin Wang, Wei Lu, Yu Wang, and Hongxia Jin. 2018. A neural transition-based model for nested mention recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, pages 1011-1017, Brussels, Belgium. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Xlnet: Generalized autoregressive pretraining for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Zhilin", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zihang", |
|
"middle": [], |
|
"last": "Dai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yiming", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaime", |
|
"middle": [], |
|
"last": "Carbonell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Russ", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc V", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "32", |
|
"issue": "", |
|
"pages": "5753--5763", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in Neural In- formation Processing Systems 32, pages 5753-5763. Curran Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "AMR parsing as sequence-tograph transduction", |
|
"authors": [ |
|
{ |
|
"first": "Sheng", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xutai", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Duh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "80--94", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1009" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sheng Zhang, Xutai Ma, Kevin Duh, and Benjamin Van Durme. 2019. AMR parsing as sequence-to- graph transduction. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 80-94, Florence, Italy. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Sequential copying networks", |
|
"authors": [ |
|
{ |
|
"first": "Qingyu", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nan", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Furu", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Thirty-Second AAAI Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Qingyu Zhou, Nan Yang, Furu Wei, and Ming Zhou. 2018. Sequential copying networks. In Thirty- Second AAAI Conference on Artificial Intelligence.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"text": "For each decoder timestep a decision vector chooses between labeling, a CopyNext operation, or pointing to an input token. The decoder input comes from either an encoder state or a label embedding. other named entity mentions (such as [[last] year] in", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"text": "Performance in terms of Accuracy (%F1) and Speed (relative to Hypergraph). The CopyNext model is nearly as accurate as Hypergraph while over 16 times faster.", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF1": { |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"2\">: A comparison of networks used for embed-</td></tr><tr><td colspan=\"2\">ding input tokens before feeding them into the encoder</td></tr><tr><td>LSTM. These are evaluated on NNER.</td><td/></tr><tr><td>index, label or CopyNext) and y k t ,y ? t y ?</td><td>is 1 if y k t =</td></tr></table>", |
|
"num": null, |
|
"html": null |
|
} |
|
} |
|
} |
|
} |