{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:27:36.094239Z"
},
"title": "Joint Modeling of Arguments for Event Understanding",
"authors": [
{
"first": "Yunmo",
"middle": [
"Chen"
],
"last": "Tongfei",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Chen",
"middle": [],
"last": "Benjamin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {}
},
"email": ""
},
{
"first": "Van",
"middle": [],
"last": "Durme",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We recognize the task of event argument linking in documents as similar to that of intent slot resolution in dialogue, providing a Transformer-based model that extends from a recently proposed solution to resolve references to slots. The approach allows for joint consideration of argument candidates given a detected event, which we illustrate leads to state-of-the-art performance in multi-sentence argument linking. 1",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We recognize the task of event argument linking in documents as similar to that of intent slot resolution in dialogue, providing a Transformer-based model that extends from a recently proposed solution to resolve references to slots. The approach allows for joint consideration of argument candidates given a detected event, which we illustrate leads to state-of-the-art performance in multi-sentence argument linking. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Given an event recognized in text, we are concerned with finding its associated arguments. Significant work has focused at the level of single sentence contexts, such as in semantic role labeling (SRL; Gildea and Jurafsky, 2000; Ouchi et al., 2018, inter alia) . Unfortunately even perfect performance in SRL will be limited by the existence of arguments outside the sentence boundary, leading to prior work Silberer and Frank, 2012; Ebner et al., 2020) on an alternative paradigm variously called implicit role resolution or argument linking, where an event trigger (e.g. \"attack\") evokes a set of roles (e.g. AT-TACKER, TARGET) to be filled, and they are linked to explicit argument mentions found in text. In argument linking, possible candidate arguments are first detected, then linked to specific roles of detected events. This bears similarity to coreference resolution, where document-level context can be aptly utilized. For an example, see Figure 1 .",
"cite_spans": [
{
"start": 202,
"end": 228,
"text": "Gildea and Jurafsky, 2000;",
"ref_id": "BIBREF8"
},
{
"start": 229,
"end": 260,
"text": "Ouchi et al., 2018, inter alia)",
"ref_id": null
},
{
"start": 408,
"end": 433,
"text": "Silberer and Frank, 2012;",
"ref_id": "BIBREF22"
},
{
"start": 434,
"end": 453,
"text": "Ebner et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 950,
"end": 958,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This formulation is similar to the resolution of referring expressions in conversational dialogues (\u00c7 elikyilmaz et al., 2014) , where a current utterance is considered to invoke an intent (e.g. BUY-BOOK) , accompanied by a number of slots (e.g. 1 Our code can be found at https://github.com/ wanmok/joint-arglinking. NAME, AUTHOR, PUBLISHER, etc.). Even more than in event argument linking, in dialogue systems the sentence-level (utterance-level) context often fails to contain all salient arguments (slots): slots from previous rounds of dialogue may often be relevant to the current intent. 2 We propose a novel model for joint modeling of potential arguments inspired by Chen et al. (2019) for slot-filling in dialogue systems, which proposed to jointly predict spans that are relevant to the intent of the current round of dialogue. Over detected arguments, a Transformer (Vaswani et al., 2017) encoder is placed upon the event trigger and potential arguments to jointly learn the relations between the event trigger and its arguments. The input to this Transformer is no longer tokens but spans: given the Transformer output of each span, a classification loss is utilized to perform argument role classification. We demonstrate this leads to state-of-theart performance on the RAMS argument linking dataset introduced by Ebner et al. (2020), 3 showing the benefits of joint modeling when linking arguments to roles of events.",
"cite_spans": [
{
"start": 99,
"end": 126,
"text": "(\u00c7 elikyilmaz et al., 2014)",
"ref_id": null
},
{
"start": 195,
"end": 204,
"text": "BUY-BOOK)",
"ref_id": null
},
{
"start": 595,
"end": 596,
"text": "2",
"ref_id": null
},
{
"start": 676,
"end": 694,
"text": "Chen et al. (2019)",
"ref_id": "BIBREF3"
},
{
"start": 878,
"end": 900,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF23"
},
{
"start": 1350,
"end": 1351,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Implicit role resolution Palmer et al. (1986) treated unfilled semantic roles as special cases of anaphora and coreference resolution. Starting from the SemEval 2010 Task 10: Linking Roles (Ruppenhofer et al., 2010) , there have been more recent modeling efforts on this task. approached this with their SRL system SE-MAFOR , casting the task as extended SRL by admitting constituents (potential arguments) from context larger than sentence boundaries. Silberer and Frank (2012) considered the problem as an anaphora resolution task within the discourse context. Ebner et al. (2020) similarly considered the task as related to anaphora resolution, and introduced a new dataset, RAMS, for exploring non-local argument linking. See O'Gorman 2019and Ebner et al. (2020) for further background.",
"cite_spans": [
{
"start": 25,
"end": 45,
"text": "Palmer et al. (1986)",
"ref_id": "BIBREF20"
},
{
"start": 189,
"end": 215,
"text": "(Ruppenhofer et al., 2010)",
"ref_id": "BIBREF21"
},
{
"start": 453,
"end": 478,
"text": "Silberer and Frank (2012)",
"ref_id": "BIBREF22"
},
{
"start": 747,
"end": 766,
"text": "Ebner et al. (2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "2 Background",
"sec_num": "97"
},
{
"text": "Event extraction In event extraction there are historically three subtasks: detecting event triggers, detecting entity mentions, and then argument role prediction, where relations between mentions and triggers are predicted in accordance to the event type's predefined set of roles under a closed ontology. Prior work has proposed pipeline system of the subtasks (Ji and Grishman, 2008; Li et al., 2013; Yang and Mitchell, 2016, inter alia) , or as a joint model over the three tasks (Nguyen and Nguyen, 2019; Lin et al., 2020, inter alia). Our work could be seen as a version of argument role prediction, but which operates beyond sentence boundaries.",
"cite_spans": [
{
"start": 363,
"end": 386,
"text": "(Ji and Grishman, 2008;",
"ref_id": "BIBREF11"
},
{
"start": 387,
"end": 403,
"text": "Li et al., 2013;",
"ref_id": "BIBREF14"
},
{
"start": 404,
"end": 440,
"text": "Yang and Mitchell, 2016, inter alia)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "2 Background",
"sec_num": "97"
},
{
"text": "Frame-based SLU In dialogue systems, semantic frame based spoken language understanding (SLU) is one of the most commonly applied SLU technologies for human-computer interaction. Such systems often output an interpretation of dialogues represented as intents and slots (Wang et al., 2011) . \u00c7 elikyilmaz et al. 2014and Bapna et al. (2017) proposed models to resolve references to slots in the dialogue, tracking conversation states across multiple dialogue turns. Dhingra et al. (2017) augmented such methods with external knowledge bases (KBs) to create a multi-turn dialogue agent which helps users search KBs. Chen et al. (2019) proposed joint models over potential slots in dialogue to output which contextual slots should be carried over to the most recent utterance. Our approach is inspired by this work, by drawing analogies between concepts in SLU (intents / slots) and those in IE (events / arguments) (see Table 1 ).",
"cite_spans": [
{
"start": 269,
"end": 288,
"text": "(Wang et al., 2011)",
"ref_id": "BIBREF25"
},
{
"start": 319,
"end": 338,
"text": "Bapna et al. (2017)",
"ref_id": "BIBREF0"
},
{
"start": 464,
"end": 485,
"text": "Dhingra et al. (2017)",
"ref_id": "BIBREF6"
},
{
"start": 613,
"end": 631,
"text": "Chen et al. (2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 917,
"end": 924,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "2 Background",
"sec_num": "97"
},
{
"text": "Following Ebner et al. (2020) we consider argument linking as the task of choosing amongst detected mention span candidates given detected event trigger spans. Given a document",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3"
},
{
"text": "= ( 1 , \u2022 \u2022 \u2022 , )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3"
},
{
"text": "where each is a word, entity mention set (candidate arguments) containing mentions = [ : ] \u2208 where and demarcates the left and right boundary (both inclusive), and a event trigger span = [ : ], an argument linking model predicts the role (or absence) of each mention with respect to the event.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3"
},
{
"text": "An event ontology can be formulated as a set of event types T , where each type \u2208 T is associated with a set of roles ( ), 4 while other roles are nonpermissible. We denote the union of all roles for all event types, plus an empty role (a dummy role denoting an argument is not part of the event structure) as R = \u2208T ( ) \u222a { }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3"
},
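{
"text": "To make the role-set formulation above concrete, here is a minimal sketch (ours, not the authors' released code; the ontology entry and role names are hypothetical stand-ins for an ontology such as ACE's) that realizes $\\mathcal{R}(\\tau)$ as a per-event-type boolean mask over the global role vocabulary $\\mathcal{R}$:

# Sketch: per-event-type role masks over a global role vocabulary.
# The ontology below is a hypothetical stand-in; ACE / RAMS define the real role sets.
import torch

ROLES = ['<none>', 'ATTACKER', 'TARGET', 'INSTRUMENT', 'TIME', 'PLACE']  # R, incl. the empty role
ONTOLOGY = {'ATTACK': {'ATTACKER', 'TARGET', 'INSTRUMENT', 'TIME', 'PLACE'}}

def role_mask(event_type: str) -> torch.Tensor:
    # The empty role (index 0) is always permissible; any other role only if it is in R(tau).
    mask = torch.zeros(len(ROLES), dtype=torch.bool)
    mask[0] = True
    for i, role in enumerate(ROLES):
        if role in ONTOLOGY.get(event_type, set()):
            mask[i] = True
    return mask",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3"
},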
{
"text": "Argument and trigger representation We compute a fixed-length vector with dimension for each argument and trigger span as their representations. To compute this, we first pass the document through a pre-trained contextualizing model (BERT (Devlin et al., 2019) here). 5 We split documents into sentences and feed each sentence to BERT for encoding. Each token might be split into more than 1 subword units-in this case we take the average of these subword representations so that each token has 1 vector representation w \u2208 R tok , following Zhang et al. (2019) .",
"cite_spans": [
{
"start": 268,
"end": 269,
"text": "5",
"ref_id": null
},
{
"start": 541,
"end": 560,
"text": "Zhang et al. (2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
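{
"text": "A minimal sketch of the subword-averaging step above, assuming a HuggingFace-style tokenizer and encoder (the specific library calls are our assumption; any BERT wrapper exposing per-subword hidden states works the same way):

# Sketch: average subword vectors so each token gets one d_tok vector (following Zhang et al., 2019).
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained('bert-base-cased')
bert = AutoModel.from_pretrained('bert-base-cased')

words = ['ambulances', 'arriving', 'at', 'the', 'hospital']      # one pre-split sentence
enc = tok(words, is_split_into_words=True, return_tensors='pt')
hidden = bert(**enc).last_hidden_state[0]                        # (num_subwords, 768)

wids = enc.word_ids()                                            # subword index -> word index
token_vecs = []
for widx in range(len(words)):
    pieces = [i for i, w in enumerate(wids) if w == widx]
    token_vecs.append(hidden[pieces].mean(dim=0))                # average the subword pieces
w = torch.stack(token_vecs)                                      # (num_tokens, d_tok)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},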
{
"text": "For an argument span = ( , \u2022 \u2022 \u2022 , ), we follow to generate a span embedding. 6 The span embedding m for mention span comprises of three parts, the representation of its left boundary, its right boundary, and a learned pooling over the tokens in the span. This learned pooling utilized a global attention query vector q \u2208 R tok , and computes the weighted sum of all tokens with respect to the attention scores derived from q:",
"cite_spans": [
{
"start": 78,
"end": 79,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "= exp q T w = exp q T w ; c = = \u2022 w ,",
"eq_num": "(1)"
}
],
"section": "Approach",
"sec_num": "4"
},
{
"text": "The Observatory and al -Halaby also reported an air raid on the village of Kaljibrin near Azaz . and pass that through a 2-layer feed-forward neural network to yield a fixed-length vector m \u2208 R span for each argument span :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "m = FFNN arg ( [w ; w ; c]) .",
"eq_num": "(2)"
}
],
"section": "Approach",
"sec_num": "4"
},
{
"text": "Similarly, for any trigger span = [ : ], we employ a different set of parameters:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "t = FFNN trig ( [w ; w ; c]) .",
"eq_num": "(3)"
}
],
"section": "Approach",
"sec_num": "4"
},
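{
"text": "A sketch of Equations (1)-(3) in PyTorch (our rendering; the dimensions follow Appendix A, everything else is an assumption). The same module is instantiated twice, once as FFNN_arg and once as FFNN_trig, since triggers and arguments use separate parameters:

# Sketch of Eqs. (1)-(3): boundary vectors plus an attention-pooled interior,
# concatenated and passed through a 2-layer FFNN.
import torch
import torch.nn as nn

class SpanEncoder(nn.Module):
    def __init__(self, d_tok=768, d_span=768):
        super().__init__()
        self.q = nn.Parameter(torch.randn(d_tok))      # global attention query vector q
        self.ffnn = nn.Sequential(                     # FFNN_arg or FFNN_trig
            nn.Linear(3 * d_tok, d_span), nn.GELU(), nn.Linear(d_span, d_span))

    def forward(self, w: torch.Tensor) -> torch.Tensor:
        # w: (span_len, d_tok), the token vectors of one span [i:j]
        alpha = torch.softmax(w @ self.q, dim=0)       # Eq. (1): attention weights
        c = (alpha.unsqueeze(-1) * w).sum(dim=0)       # Eq. (1): weighted sum of tokens
        return self.ffnn(torch.cat([w[0], w[-1], c]))  # Eq. (2)/(3): FFNN([w_i; w_j; c])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},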
{
"text": "Joint modeling of arguments We propose a joint model for all the arguments with respect to the given event trigger with event type (see Figure 1 ). We form a sequence (t, m 1 , m 2 , \u2022 \u2022 \u2022 , m ) with the trigger span encoding as the prefix, then followed by the representations of all the candidate mentions, then fed to a Transformer encoder (Vaswani et al., 2017) . A Transformer, by its self-attention mechanism, naturally models the relation between every trigger-argument and argument-argument pair. Note two major differences as compared to a Transformer that runs on tokens: (1) each input to the Transformer represents a span instead of a token, following Chen et al. (2019) ; (2) since the arguments do not take an explicit sequential order, we forgo the positional embeddings in Transformers, effectively modeling the input as a set of spans instead of a sequence (self-attention exhibits the property of permutation invariance without positional embeddings ). For each argument span input m , we pass the output from the Transformer encoderm to linear layer with the output size being the size of the role set R. Softmax is applied to the output of size |R|, with the non-permissible roles masked out, yielding a distribution over the set of roles designated by the given event type, plus the non-argument role:",
"cite_spans": [
{
"start": 343,
"end": 365,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF23"
},
{
"start": 664,
"end": 682,
"text": "Chen et al. (2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 136,
"end": 144,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( | , ) = exp w T m \u2208 ( )\u222a{ } exp w T m",
"eq_num": "(4)"
}
],
"section": "Approach",
"sec_num": "4"
},
{
"text": "The model could hence be trained using a crossentropy loss function to maximize such likelihood.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
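{
"text": "A sketch of the joint model and Equation (4) (our rendering, not the released implementation; the layer and head counts follow Appendix A). The key points are that no positional embeddings are added, so the Transformer sees a set of spans, and that non-permissible roles are masked out before the softmax:

# Sketch: position-free Transformer over (t, m_1, ..., m_n), then a role
# classifier restricted to R(tau) plus the empty role via masking (Eq. 4).
import torch
import torch.nn as nn

class JointArgLinker(nn.Module):
    def __init__(self, d_span=768, n_roles=100, n_layers=3, n_heads=64):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=d_span, nhead=n_heads, dim_feedforward=2048,
            dropout=0.2, activation='gelu')
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.classifier = nn.Linear(d_span, n_roles)

    def forward(self, t, mentions, mask):
        # t: (d_span,); mentions: (n, d_span); mask: (n_roles,) bool, True on R(tau) + empty role
        x = torch.cat([t.unsqueeze(0), mentions]).unsqueeze(1)  # (n+1, 1, d_span); no positions
        h = self.encoder(x).squeeze(1)[1:]                      # outputs for the n mentions
        logits = self.classifier(h)                             # (n, n_roles)
        return logits.masked_fill(~mask, float('-inf')).log_softmax(dim=-1)

Training then minimizes the negative log-likelihood of each mention's gold role, i.e., a cross-entropy loss over these masked distributions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},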
{
"text": "As we draw the connections between SLU in dialogue systems and argument linking in information extraction, we focus primarily on evaluating the model a discourse-level dataset, RAMS (Ebner et al., 2020) . First however we look at a more established dataset, ACE 2005 (Walker et al., 2006) 7 , to verify if our model can reasonable performance compared to prior work in event understanding. While ACE 2005 is annotated only at the sentencelevel, our model may still be applied in this setting. For detailed experimental setup, see Appendix A.",
"cite_spans": [
{
"start": 182,
"end": 202,
"text": "(Ebner et al., 2020)",
"ref_id": "BIBREF7"
},
{
"start": 267,
"end": 290,
"text": "(Walker et al., 2006) 7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Baseline Aside from joint modeling of arguments, we also include an independent model as a case in ablation studies (while our proposed method labeled as joint). The independent model removes the Transformer encoder (cf. Equation 4), but directly applies a feed-forward neural network atop of the trigger representation and each argument representation to classify the role (or absence) of the argument with respect to the event trigger. 8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "( | , ) = exp w T ind ( [t; m]) \u2208 ( )\u222a{ } exp w T ind ( [t; m])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
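{
"text": "For contrast, a sketch of the independent baseline's scorer (our rendering of the equation above; folding the role vectors $\\mathbf{w}_r$ into the final linear layer is an equivalent parameterization):

# Sketch: score each (trigger, mention) pair in isolation with FFNN_ind([t; m]),
# then mask and normalize over the permissible roles.
import torch
import torch.nn as nn

class IndependentScorer(nn.Module):
    def __init__(self, d_span=768, n_roles=100):
        super().__init__()
        self.ffnn = nn.Sequential(
            nn.Linear(2 * d_span, d_span), nn.GELU(),
            nn.Linear(d_span, n_roles))                # final layer plays the role of w_r

    def forward(self, t, m, mask):
        logits = self.ffnn(torch.cat([t, m]))          # one score per role
        return logits.masked_fill(~mask, float('-inf')).log_softmax(dim=-1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},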
{
"text": "The result from model would show the difference between the proposed joint argument modeling approach v.s. a simpler, independent model. Metrics We use precision, recall, and F 1 -score as metrics. A link between the trigger and an argument is considered correct, if and only if the predicted argument span offsets and role matches the gold reference. We report using micro-average among F 1 -scores across different roles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
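{
"text": "A sketch of the metric just described (the link-tuple representation is our assumption): a predicted link counts as a true positive only when both the span offsets and the role match gold, and precision, recall, and F1 are micro-averaged over all links.

# Sketch: micro-averaged precision / recall / F1 over role links.
def micro_prf(pred: set, gold: set):
    # pred / gold: sets of (trigger_span, arg_span, role) tuples
    tp = len(pred & gold)                      # correct iff offsets and role both match
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},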
{
"text": "We use ACE 2005 as a sanity check for our discourse-context model to verify its ability to perform sentence-context extraction. We follow Lin et al. (2020) 's pre-processing and dataset splits for event extraction task (statistics see Table 2 ). Table 3 reports the experimental results on ACE 2005. Although the results are not directly comparable since our model has access to gold trigger/argument spans (Lin et al. (2020) does not), we can observe similar levels of performance, suggesting our method may be competitive when applied to event understanding beyond sentence boundaries.",
"cite_spans": [
{
"start": 138,
"end": 155,
"text": "Lin et al. (2020)",
"ref_id": "BIBREF15"
},
{
"start": 407,
"end": 425,
"text": "(Lin et al. (2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 235,
"end": 242,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 246,
"end": 253,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "ACE 2005",
"sec_num": "5.1"
},
{
"text": "Roles Across Multiple Sentences (RAMS; Ebner et al., 2020) is an event extraction dataset that considers discourse-level, non-local arguments in document-level context. We follow the train/dev/test split provided in the dataset, with statistics shown in Table 2 . Experiments setup follow the configuration employed for ACE 2005. Table 4 ).",
"cite_spans": [
{
"start": 32,
"end": 38,
"text": "(RAMS;",
"ref_id": null
},
{
"start": 39,
"end": 58,
"text": "Ebner et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 254,
"end": 261,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 330,
"end": 337,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "RAMS",
"sec_num": "5.2"
},
{
"text": "RAMS. Following the same conditions as Ebner et al. (2020) , our joint model outperforms that work, and our independent baseline, by a substantial margin of 6.6%, illustrating the benefit of modeling potential arguments jointly. We analyze the performance of our model on non-local arguments, i.e., arguments that are not in the same sentence as the event trigger (Table 5) . Our model's performance on non-local arguments is on par with local arguments, demonstrating the ability to handle non-local argument linking.",
"cite_spans": [
{
"start": 39,
"end": 58,
"text": "Ebner et al. (2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 364,
"end": 373,
"text": "(Table 5)",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "RAMS",
"sec_num": "5.2"
},
{
"text": "Case study We here show one example where the joint model performs better than the independent model. The joint model correctly labeled all the roles, while the independent model failed on two. We hypothesize that joint modeling of the arguments will avoid these cases where multiple spans are labeled with the same role.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RAMS",
"sec_num": "5.2"
},
{
"text": "... Stratfor analyst Sim Tack:\" This was indeed an Islamic State attack, rather than an accidental explosion.\" New satellite imagery appears to reveal extensive damage to a strategically significant airbase in central Syria used by Russian forces ... We proposed a joint modeling approach for argument linking that considers the interdependent relationships among argument mentions conditioning on a specific event. Our approach extends from recent work in dialogue systems, viewing a document as essentially a single-side discourse, and where event arguments are recognized as similar to slots that potentially carryover across utterances. Experimental results show our approach achieves superior performance on a recently introduced dataset for modeling discourse-level contexts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RAMS",
"sec_num": "5.2"
},
{
"text": "E.g., fromChen et al. (2019): What's the weather in San Francisco? ... Any good Mexican restaurants there?3 https://nlp.jhu.edu/rams.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For example, in the ACE 2005 dataset, (ATTACK) = {ATTACKER, TARGET, INSTRUMENT, TIME, PLACE}.5 Documents are chuncked into max-length 512 segments while respecting sentence boundaries, and each is fed to BERT respectively.6 The width embeddings in are not used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://catalog.ldc.upenn.edu/ LDC2006T06.8 This scoring function for triples ( , , ) is similar to Ebner et al. (2020)'s model. However, their model is trained to maximize the posterior probability of the correct argument given a trigger and a role, whereas in our independent baseline here the probability of the correct role given a trigger and an argument candidate is maximized.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "According toChen et al. (2019), increasing the number of attention heads substantially improves the model performance, so we prefer more attention heads over more encoder layers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research was supported by the JHU HLTCOE, DARPA AIDA, and IARPA BETTER. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes. The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA or the U.S. Government.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "Experimental Details We use BERT (BERT-BASE-CASED here) as the encoder for text embedding. The models are setup with tok = span = 768, and are trained using AdamW optimizer (Loshchilov and Hutter, 2019) with learning rate of 3 \u00d7 10 \u22125 for 200 epochs, and the tolerance = 1 \u00d7 10 \u22128 . We employ gradient clipping to avoid exploding gradients with maximum gradient norm 5.0. We also use a linear learning rate scheduler to warmup models for the first 200 iterations.The Transformer encoder has 3 layers with 64 attention heads 9 , and its feed-forward neural networks (FFNNs) for computing the argument / trigger representations are set to have the dim of 2,048. For mention representations, we use two-layer FFNNs with hidden size of 768. Note there are two different sets of parameters for constructing trigger representations and argument representations. All non-linearities used in the paper are GELU (Hendrycks and Gimpel, 2016) . Dropout with rate 0.2 is applied in each levels in the feedforward neural network for argument / trigger representation computation, and also in each layer in the Transformer encoder.For model selection, we pick the best performing model on the dev set and then run it on the test set. Early stopping is used with patience = 10, i.e., if the performance on the dev set did not increase after epochs, stop training.In terms of hyperparameter sweep, we perform grid search over a combination of hyperparameters shown in Table 6 , and choose the set performed best on the dev set.Our models are trained on one Nvidia GTX 1080 Ti GPU. For the joint model, the training time is around 30 mins/epoch, and it takes 70 epochs (around 20 hours) to converge on average. For the independent model, it takes 15mins/epoch and converges in 5 epochs (around 50 mins) on average.",
"cite_spans": [
{
"start": 173,
"end": 202,
"text": "(Loshchilov and Hutter, 2019)",
"ref_id": "BIBREF16"
},
{
"start": 903,
"end": 931,
"text": "(Hendrycks and Gimpel, 2016)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 1452,
"end": 1459,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
},
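{
"text": "A sketch of the optimization setup described above (AdamW with learning rate 3e-5 and tolerance 1e-8, linear warmup over the first 200 iterations, gradient-norm clipping at 5.0); the exact scheduler shape beyond 'linear warmup' is our assumption:

# Sketch of the Appendix A training-loop pieces: AdamW, linear warmup, clipping.
import torch

model = torch.nn.Linear(768, 768)  # placeholder for the actual joint model
opt = torch.optim.AdamW(model.parameters(), lr=3e-5, eps=1e-8)
sched = torch.optim.lr_scheduler.LambdaLR(
    opt, lambda step: min(1.0, (step + 1) / 200))  # warm up for the first 200 iterations

def train_step(loss):
    opt.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)
    opt.step()
    sched.step()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
},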
{
"text": "Range # Encoder layers {1, 2, 3, 4, 5, 6} # Attention heads {12, 64, 128} Learning rate {1 \u00d7 10 \u22125 , 3 \u00d7 10 \u22125 , 5 \u00d7 10 \u22125 } Warmup steps {0, 100, 200, \u2022 \u2022 \u2022 , 500, 1000} ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperparameter",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Sequential dialogue context modeling for spoken language understanding",
"authors": [
{
"first": "Ankur",
"middle": [],
"last": "Bapna",
"suffix": ""
},
{
"first": "G\u00f6khan",
"middle": [],
"last": "T\u00fcr",
"suffix": ""
},
{
"first": "Dilek",
"middle": [],
"last": "Hakkani-T\u00fcr",
"suffix": ""
},
{
"first": "Larry",
"middle": [
"P"
],
"last": "Heck",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "103--114",
"other_ids": {
"DOI": [
"10.18653/v1/w17-5514"
]
},
"num": null,
"urls": [],
"raw_text": "Ankur Bapna, G\u00f6khan T\u00fcr, Dilek Hakkani-T\u00fcr, and Larry P. Heck. 2017. Sequential dialogue context modeling for spoken language understanding. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 103-114.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Resolving referring expressions in conversational dialogs for natural user interfaces",
"authors": [
{
"first": "Zhaleh",
"middle": [],
"last": "Asli \u00c7 Elikyilmaz",
"suffix": ""
},
{
"first": "Dilek",
"middle": [],
"last": "Feizollahi",
"suffix": ""
},
{
"first": "Ruhi",
"middle": [],
"last": "Hakkani-T\u00fcr",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sarikaya",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2094--2104",
"other_ids": {
"DOI": [
"10.3115/v1/d14-1223"
]
},
"num": null,
"urls": [],
"raw_text": "Asli \u00c7 elikyilmaz, Zhaleh Feizollahi, Dilek Hakkani- T\u00fcr, and Ruhi Sarikaya. 2014. Resolving refer- ring expressions in conversational dialogs for natu- ral user interfaces. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing, pages 2094-2104.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "SEMAFOR: frame argument resolution with log-linear models",
"authors": [
{
"first": "Desai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 5th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "264--267",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Desai Chen, Nathan Schneider, Dipanjan Das, and Noah A. Smith. 2010. SEMAFOR: frame argument resolution with log-linear models. In Proceedings of the 5th International Workshop on Semantic Evalua- tion, pages 264-267.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Improving long distance slot carryover in spoken dialogue systems",
"authors": [
{
"first": "Tongfei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chetan",
"middle": [],
"last": "Naik",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Pushpendre",
"middle": [],
"last": "Rastogi",
"suffix": ""
},
{
"first": "Lambert",
"middle": [],
"last": "Mathias",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the First Workshop on NLP for Conversational AI",
"volume": "",
"issue": "",
"pages": "96--105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tongfei Chen, Chetan Naik, Hua He, Pushpendre Ras- togi, and Lambert Mathias. 2019. Improving long distance slot carryover in spoken dialogue systems. In Proceedings of the First Workshop on NLP for Conversational AI, pages 96-105. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Probabilistic frame-semantic parsing",
"authors": [
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Desai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics, Proceedings",
"volume": "",
"issue": "",
"pages": "948--956",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dipanjan Das, Nathan Schneider, Desai Chen, and Noah A. Smith. 2010. Probabilistic frame-semantic parsing. In Human Language Technologies: Con- ference of the North American Chapter of the Asso- ciation of Computational Linguistics, Proceedings, pages 948-956.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/n19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 4171-4186.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Towards end-to-end reinforcement learning of dialogue agents for information access",
"authors": [
{
"first": "Bhuwan",
"middle": [],
"last": "Dhingra",
"suffix": ""
},
{
"first": "Lihong",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiujun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Yun-Nung",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Faisal",
"middle": [],
"last": "Ahmed",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "484--495",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1045"
]
},
"num": null,
"urls": [],
"raw_text": "Bhuwan Dhingra, Lihong Li, Xiujun Li, Jianfeng Gao, Yun-Nung Chen, Faisal Ahmed, and Li Deng. 2017. Towards end-to-end reinforcement learning of dia- logue agents for information access. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 484-495.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Multi-sentence argument linking",
"authors": [
{
"first": "Seth",
"middle": [],
"last": "Ebner",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Culkin",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Rawlins",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "2020",
"issue": "",
"pages": "8057--8077",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seth Ebner, Patrick Xia, Ryan Culkin, Kyle Rawlins, and Benjamin Van Durme. 2020. Multi-sentence ar- gument linking. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, ACL 2020, Online, July 5-10, 2020, pages 8057-8077.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Automatic labeling of semantic roles",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2000,
"venue": "38th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "512--520",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Gildea and Daniel Jurafsky. 2000. Automatic labeling of semantic roles. In 38th Annual Meet- ing of the Association for Computational Linguistics, pages 512-520.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Deep semantic role labeling: What works and what's next",
"authors": [
{
"first": "Luheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "473--483",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1044"
]
},
"num": null,
"urls": [],
"raw_text": "Luheng He, Kenton Lee, Mike Lewis, and Luke Zettle- moyer. 2017. Deep semantic role labeling: What works and what's next. In Proceedings of the 55th Annual Meeting of the Association for Computa- tional Linguistics, pages 473-483.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Gaussian error linear units (gelus). CoRR",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Hendrycks",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Hendrycks and Kevin Gimpel. 2016. Gaussian er- ror linear units (gelus). CoRR, abs/1606.08415.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Refining event extraction through cross-document inference",
"authors": [
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2008,
"venue": "ACL 2008, Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "254--262",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heng Ji and Ralph Grishman. 2008. Refining event extraction through cross-document inference. In ACL 2008, Proceedings of the 46th Annual Meet- ing of the Association for Computational Linguistics, pages 254-262.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Set transformer: A framework for attention-based permutation-invariant neural networks",
"authors": [
{
"first": "Juho",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Yoonho",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Jungtaek",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Adam",
"middle": [
"R"
],
"last": "Kosiorek",
"suffix": ""
},
{
"first": "Seungjin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Yee Whye",
"middle": [],
"last": "Teh",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 36th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "3744--3753",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juho Lee, Yoonho Lee, Jungtaek Kim, Adam R. Ko- siorek, Seungjin Choi, and Yee Whye Teh. 2019. Set transformer: A framework for attention-based permutation-invariant neural networks. In Proceed- ings of the 36th International Conference on Ma- chine Learning, pages 3744-3753.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "End-to-end neural coreference resolution",
"authors": [
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "188--197",
"other_ids": {
"DOI": [
"10.18653/v1/d17-1018"
]
},
"num": null,
"urls": [],
"raw_text": "Kenton Lee, Luheng He, Mike Lewis, and Luke Zettle- moyer. 2017. End-to-end neural coreference reso- lution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 188-197.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Joint event extraction via structured prediction with global features",
"authors": [
{
"first": "Qi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ji",
"middle": [],
"last": "Heng",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "73--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global fea- tures. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 73-82.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A joint neural model for information extraction with global features",
"authors": [
{
"first": "Ying",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Lingfei",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "2020",
"issue": "",
"pages": "7999--8009",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ying Lin, Heng Ji, Fei Huang, and Lingfei Wu. 2020. A joint neural model for information extraction with global features. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, ACL 2020, Online, July 5-10, 2020, pages 7999-8009.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Decoupled weight decay regularization",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Loshchilov",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Hutter",
"suffix": ""
}
],
"year": 2019,
"venue": "7th International Conference on Learning Representations. OpenReview.net",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In 7th International Conference on Learning Representations. OpenRe- view.net.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "One for all: Neural joint modeling of entities and events",
"authors": [
{
"first": "Trung",
"middle": [
"Minh"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Thien",
"middle": [
"Huu"
],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2019,
"venue": "The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019",
"volume": "",
"issue": "",
"pages": "6851--6858",
"other_ids": {
"DOI": [
"10.1609/aaai.v33i01.33016851"
]
},
"num": null,
"urls": [],
"raw_text": "Trung Minh Nguyen and Thien Huu Nguyen. 2019. One for all: Neural joint modeling of entities and events. In The Thirty-Third AAAI Conference on Ar- tificial Intelligence, AAAI 2019, pages 6851-6858.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Bringing Together Computational and Linguistic Models of Implicit Role Interpretation",
"authors": [
{
"first": "Timothy",
"middle": [
"J"
],
"last": "O'gorman",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy J. O'Gorman. 2019. Bringing Together Com- putational and Linguistic Models of Implicit Role In- terpretation. Ph.D. thesis, University of Colorado at Boulder.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A span selection model for semantic role labeling",
"authors": [
{
"first": "Hiroki",
"middle": [],
"last": "Ouchi",
"suffix": ""
},
{
"first": "Hiroyuki",
"middle": [],
"last": "Shindo",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1630--1642",
"other_ids": {
"DOI": [
"10.18653/v1/d18-1191"
]
},
"num": null,
"urls": [],
"raw_text": "Hiroki Ouchi, Hiroyuki Shindo, and Yuji Matsumoto. 2018. A span selection model for semantic role la- beling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1630-1642.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Recovering implicit information",
"authors": [
{
"first": "Martha",
"middle": [
"S"
],
"last": "Palmer",
"suffix": ""
},
{
"first": "Deborah",
"middle": [
"A"
],
"last": "Dahl",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [
"J"
],
"last": "Schiffman",
"suffix": ""
},
{
"first": "Lynette",
"middle": [],
"last": "Hirschman",
"suffix": ""
},
{
"first": "Marcia",
"middle": [],
"last": "Linebarger",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Dowding",
"suffix": ""
}
],
"year": 1986,
"venue": "24th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "10--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martha S. Palmer, Deborah A. Dahl, Rebecca J. Schiff- man, Lynette Hirschman, Marcia Linebarger, and John Dowding. 1986. Recovering implicit informa- tion. In 24th Annual Meeting of the Association for Computational Linguistics, pages 10-19.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Semeval-2010 task 10: Linking events and their participants in discourse",
"authors": [
{
"first": "Josef",
"middle": [],
"last": "Ruppenhofer",
"suffix": ""
},
{
"first": "Caroline",
"middle": [],
"last": "Sporleder",
"suffix": ""
},
{
"first": "Roser",
"middle": [],
"last": "Morante",
"suffix": ""
},
{
"first": "Collin",
"middle": [],
"last": "Baker",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 5th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "45--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Josef Ruppenhofer, Caroline Sporleder, Roser Morante, Collin Baker, and Martha Palmer. 2010. Semeval- 2010 task 10: Linking events and their participants in discourse. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 45-50.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Casting implicit role linking as an anaphora resolution task",
"authors": [
{
"first": "Carina",
"middle": [],
"last": "Silberer",
"suffix": ""
},
{
"first": "Anette",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carina Silberer and Anette Frank. 2012. Casting im- plicit role linking as an anaphora resolution task. In Proceedings of the First Joint Conference on Lexical and Computational Semantics, pages 1-10.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems 30: Annual Conference on Neural Information Processing Systems, pages 5998-6008.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "ACE 2005 multilingual training corpus (LDC2006T06). Philadelphia: Linguistic Data Consortium",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [],
"last": "Strassel",
"suffix": ""
},
{
"first": "Julie",
"middle": [],
"last": "Medero",
"suffix": ""
},
{
"first": "Kazuaki",
"middle": [],
"last": "Maeda",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2006. ACE 2005 multilingual training corpus (LDC2006T06). Philadelphia: Lin- guistic Data Consortium.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Semantic Frame Based Spoken Language Understanding",
"authors": [
{
"first": "Ye-Yi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Acero",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "35--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ye-Yi Wang, Li Deng, and Alex Acero. 2011. Seman- tic Frame Based Spoken Language Understanding, pages 35-80. Wiley.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Joint extraction of events and entities within a document context",
"authors": [
{
"first": "Bishan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Tom",
"middle": [
"M"
],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2016,
"venue": "The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "289--299",
"other_ids": {
"DOI": [
"10.18653/v1/n16-1033"
]
},
"num": null,
"urls": [],
"raw_text": "Bishan Yang and Tom M. Mitchell. 2016. Joint extrac- tion of events and entities within a document con- text. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 289-299.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "AMR parsing as sequence-tograph transduction",
"authors": [
{
"first": "Sheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xutai",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "80--94",
"other_ids": {
"DOI": [
"10.18653/v1/p19-1009"
]
},
"num": null,
"urls": [],
"raw_text": "Sheng Zhang, Xutai Ma, Kevin Duh, and Benjamin Van Durme. 2019. AMR parsing as sequence-to- graph transduction. In Proceedings of the 57th Con- ference of the Association for Computational Lin- guistics, pages 80-94.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "An example of our model running over a paragraph. Trigger and argument span representations are computed from BERT, then later fed to a Transformer for jointly modeling the spans to predict their roles.",
"num": null
},
"TABREF1": {
"content": "<table/>",
"text": "",
"html": null,
"type_str": "table",
"num": null
},
"TABREF3": {
"content": "<table><tr><td>Model</td><td>P</td><td>R</td><td>F 1</td></tr><tr><td>Lin et al. (2020)</td><td colspan=\"3\">48.8 53.9 56.8*</td></tr><tr><td>Lin et al. (2020) PoE</td><td>-</td><td>-</td><td>58.6*</td></tr><tr><td>Independent</td><td colspan=\"3\">48.0 76.7 59.0</td></tr><tr><td>Joint</td><td colspan=\"3\">56.0 79.2 65.6</td></tr></table>",
"text": "Dataset statistics.",
"html": null,
"type_str": "table",
"num": null
},
"TABREF4": {
"content": "<table><tr><td>: We verify our model achieves similar per-</td></tr><tr><td>formance to recent work on ACE 2005. PoE denotes</td></tr><tr><td>\"product of experts\", an ensemble model in Lin et al.</td></tr><tr><td>(2020). * Results not directly comparable as we are</td></tr><tr><td>exploring argument linking only.</td></tr></table>",
"text": "",
"html": null,
"type_str": "table",
"num": null
},
"TABREF5": {
"content": "<table><tr><td>shows the performance of our models on</td></tr></table>",
"text": "",
"html": null,
"type_str": "table",
"num": null
},
"TABREF6": {
"content": "<table><tr><td colspan=\"4\">Dist. # Gold args. RAMS-TCD Ours</td></tr><tr><td>\u22122</td><td>79</td><td>75.7</td><td>77.2</td></tr><tr><td>\u22121</td><td>164</td><td>73.7</td><td>74.4</td></tr><tr><td>0</td><td>1,811</td><td>75.0</td><td>79.6</td></tr><tr><td>+1</td><td>87</td><td>76.5</td><td>77.0</td></tr><tr><td>+2</td><td>47</td><td>79.1</td><td>78.7</td></tr></table>",
"text": "Experimental results on RAMS. TCD designates the use of ontology-aware type-constrained decoding, which is similar to our independent model.",
"html": null,
"type_str": "table",
"num": null
},
"TABREF7": {
"content": "<table><tr><td>: Breakdown of the models' performance across</td></tr><tr><td>sentence distances on the RAMS dev set. RAMS-TCD</td></tr><tr><td>refers to Ebner et al. (2020)'s type-constrained decod-</td></tr><tr><td>ing approach (see</td></tr></table>",
"text": "",
"html": null,
"type_str": "table",
"num": null
}
}
}
}