{
"paper_id": "S19-1025",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:45:56.787543Z"
},
"title": "An Argument-Marker Model for Syntax-Agnostic Proto-Role Labeling",
"authors": [
{
"first": "Juri",
"middle": [],
"last": "Opitz",
"suffix": "",
"affiliation": {
"laboratory": "Research Training Group AIPHES",
"institution": "",
"location": {
"postCode": "69120",
"settlement": "Heidelberg"
}
},
"email": "[email protected]"
},
{
"first": "Anette",
"middle": [],
"last": "Frank",
"suffix": "",
"affiliation": {
"laboratory": "Research Training Group AIPHES",
"institution": "",
"location": {
"postCode": "69120",
"settlement": "Heidelberg"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Semantic proto-role labeling (SPRL) is an alternative to semantic role labeling (SRL) that moves beyond a categorical definition of roles, following Dowty's feature-based view of proto-roles. This theory determines agenthood vs. patienthood based on a participant's instantiation of more or less typical agent vs. patient properties, such as, for example, volition in an event. To perform SPRL, we develop an ensemble of hierarchical models with selfattention and concurrently learned predicateargument-markers. Our method is competitive with the state-of-the art, overall outperforming previous work in two formulations of the task (multi-label and multi-variate Likert scale prediction). In contrast to previous work, our results do not depend on gold argument heads derived from supplementary gold tree banks.",
"pdf_parse": {
"paper_id": "S19-1025",
"_pdf_hash": "",
"abstract": [
{
"text": "Semantic proto-role labeling (SPRL) is an alternative to semantic role labeling (SRL) that moves beyond a categorical definition of roles, following Dowty's feature-based view of proto-roles. This theory determines agenthood vs. patienthood based on a participant's instantiation of more or less typical agent vs. patient properties, such as, for example, volition in an event. To perform SPRL, we develop an ensemble of hierarchical models with selfattention and concurrently learned predicateargument-markers. Our method is competitive with the state-of-the art, overall outperforming previous work in two formulations of the task (multi-label and multi-variate Likert scale prediction). In contrast to previous work, our results do not depend on gold argument heads derived from supplementary gold tree banks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Deciding on a linguistically sound, clearly defined and broadly applicable inventory of semantic roles is a long-standing issue in linguistic theory and natural language processing. To alleviate issues found with classical thematic role inventories, Dowty (1991) argued for replacing categorical roles with a feature-based, composite notion of semantic roles, introducing the theory of semantic proto-roles (SPR). At its core, it proposes two prominent, composite role types: proto-agent and proto-patient. Proto-roles represent multi-faceted, possibly graded notions of agenthood or patienthood. For example, consider the following sentence from Bram Stoker's Dracula (1897):",
"cite_spans": [
{
"start": 250,
"end": 262,
"text": "Dowty (1991)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) He opened it [the letter] and read it gravely. argument is considered an agent or patient follows from the proto-typical properties the argument exhibits: e.g., being manipulated is proto-typical for patient, while volition is proto-typical for an agent. Hence, in both events of (1) the count is determined as agent, and the letter as patient.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Only recently two SPR data sets have been published. Reisinger et al. (2015) developed a property-based proto-role annotation schema with 18 properties. One Amazon Mechanical Turk crowd worker (selected in a pilot annotation) answered questions such as how likely is it that the argument mentioned with the verb changes location? on a 5-point Likert or responded inapplicable. This dataset (news domain) will henceforth be denoted by SPR1. Based on the experiences from the SPR1 annotation process, White et al. (2016) published SPR2 which follows a similar annotation schema. However, in contrast to SPR1, the new data set contains doubly annotated data from the web domain for 14 refined properties.",
"cite_spans": [
{
"start": 53,
"end": 76,
"text": "Reisinger et al. (2015)",
"ref_id": "BIBREF17"
},
{
"start": 499,
"end": 518,
"text": "White et al. (2016)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our work makes the following contributions: In Section \u00a72, we provide an overview of previous SPRL work and outline a common weakness: reliance on gold syntax trees or gold argument heads derived from them. To alleviate this issue, we propose a span-based, hierarchical neural model ( \u00a73) which learns marker embeddings to highlight the predicate-argument structures of events. Our experiments ( \u00a74) show that our model, when combined in a simple voter ensemble, outperforms all previous works. A single model performs only slightly worse, albeit having weaker dependencies than previous methods. In our analysis, we (i) perform ablation experiments to analyze the contributions of different model components. (ii) we observe that the small SPR data size introduces a severe sensitivity to different random initializations of our neural model. We find that combining multiple models in a simple voter ensemble makes SPRL predictions not only slightly better but also significantly more robust. We share our code with the community and make it publicly available. 1 2 Related Work SPRL Teichert et al. (2017) formulate the semantic role labeling task as a multi-label problem and develop a conditional random field model (CRF). Given an argument phrase and a corresponding predicate, the model predicts which of the 18 properties hold. Compared with a simple feature-based linear model by Reisinger et al. (2015) , the CRF exhibits superior performance by more than 10 pp. macro F1. Incorporating features derived from additional gold syntax improves the CRF performance significantly. For treating the task as a multi-label problem, the Likert classes {1, 2, 3} and inapplicable are collapsed to \u2212 and Likert classes {4, 5} are mapped to +. Subsequent works, including ours, conform to this setup. Rudinger et al. (2018) are the first to treat SPRL as a multi-variate Likert scale regression problem. 
They develop a neural model whose predictions have good correlation with the values in the testing data of both SPR1 and SPR2. In the multi-label setting, their model compares favourably with Teichert et al. (2017) for most proto-role properties and establishes a new state-of-the-art. Pre-training the model in a machine translation setting helps on SPR1 but results in a performance drop on SPR2. The model takes a sentence as input to a Bi-LSTM (Hochreiter and Schmidhuber, 1997) to produce a sequence of hidden states. The prediction is based on the hidden state corresponding to the head of the argument phrase, which is determined by inspection of the gold syntax tree.",
"cite_spans": [
{
"start": 1080,
"end": 1107,
"text": "SPRL Teichert et al. (2017)",
"ref_id": null
},
{
"start": 1388,
"end": 1411,
"text": "Reisinger et al. (2015)",
"ref_id": "BIBREF17"
},
{
"start": 1798,
"end": 1820,
"text": "Rudinger et al. (2018)",
"ref_id": "BIBREF18"
},
{
"start": 2349,
"end": 2383,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
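The label collapse described above (Likert classes {1, 2, 3} and inapplicable mapped to −, classes {4, 5} mapped to +) can be sketched as a small helper. This is a minimal illustration of the mapping rule, not the authors' code; representing "inapplicable" as `None` is an assumption.

```python
def collapse_likert(rating):
    """Collapse a 5-point Likert rating (or None for 'inapplicable')
    to a binary label, following the multi-label setup described in
    the text: {1, 2, 3} and inapplicable -> '-', {4, 5} -> '+'."""
    if rating is None or rating <= 3:
        return "-"
    return "+"
```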
{
"text": "Recently, Tenney et al. (2019) have demonstrated the capacities of contextualized word embeddings across a wide variety of tasks, including SPRL. However, for SPRL labeling they proceed similar to Rudinger et al. (2018) in the sense that they extract the gold heads of arguments in their dependency-based SPRL approach. Instead of using an LSTM to convert the input sentence to a sequence of vectors they make use of large language models such as ELMo (Peters et al., 2018) or BERT (Devlin et al., 2018) . The contextual vectors corresponding to predicate and the (gold) argument head are processed by a projection layer, self-attention pooling and a two-layer feed forward neural network with sigmoid output activation functions. To compare with Rudinger et al. (2018) , our basic model uses standard GloVe embeddings. When our model is fed with contextual embeddings a further observable performance gain can be achieved.",
"cite_spans": [
{
"start": 10,
"end": 30,
"text": "Tenney et al. (2019)",
"ref_id": "BIBREF22"
},
{
"start": 197,
"end": 219,
"text": "Rudinger et al. (2018)",
"ref_id": "BIBREF18"
},
{
"start": 452,
"end": 473,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF15"
},
{
"start": 482,
"end": 503,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF2"
},
{
"start": 747,
"end": 769,
"text": "Rudinger et al. (2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To summarize, previous state-of-the-art SPRL systems suffer from a common problem: they are dependency-based and their results rely on gold argument heads. Our approach, in contrast, does not rely on any supplementary information from gold syntax trees. In fact, our marker model for SPRL is agnostic to any syntactic theory and acts solely on the basis of argument spans which we highlight with position markers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "SRL The task of automatically identifying predicate-argument structures and assigning roles to arguments was firstly investigated by Gildea and Jurafsky (2002) . Over the past years, SRL has witnessed a large surge in interest. Recently, very competitive end-to-end neural systems have been proposed (He et al., 2018a; Cai et al., 2018; He et al., 2018b) . Strubell et al. (2018) show that injection of syntax can help SRL models and Li et al. (2019) bridge the gap between span-based and dependency-based SRL, achieving new stateof-the-art results both on the span based CoNLL data (Carreras and M\u00e0rquez, 2005; Pradhan et al., 2013) and the dependency-based CoNLL data (Surdeanu et al., 2008; Haji\u010d et al., 2009) . A fully end-to-end S(P)RL system has to solve multiple sub-tasks: identification of predicate-argument structures, sense disambiguation of predicates and as the main and final step, labeling their argument phrases with roles. Up to the present, SPRL works (including ours) focus on the main task and assume the prior steps as complete.",
"cite_spans": [
{
"start": 133,
"end": 159,
"text": "Gildea and Jurafsky (2002)",
"ref_id": "BIBREF4"
},
{
"start": 300,
"end": 318,
"text": "(He et al., 2018a;",
"ref_id": "BIBREF6"
},
{
"start": 319,
"end": 336,
"text": "Cai et al., 2018;",
"ref_id": "BIBREF0"
},
{
"start": 337,
"end": 354,
"text": "He et al., 2018b)",
"ref_id": "BIBREF8"
},
{
"start": 357,
"end": 379,
"text": "Strubell et al. (2018)",
"ref_id": "BIBREF19"
},
{
"start": 434,
"end": 450,
"text": "Li et al. (2019)",
"ref_id": "BIBREF12"
},
{
"start": 583,
"end": 611,
"text": "(Carreras and M\u00e0rquez, 2005;",
"ref_id": "BIBREF1"
},
{
"start": 612,
"end": 633,
"text": "Pradhan et al., 2013)",
"ref_id": "BIBREF16"
},
{
"start": 670,
"end": 693,
"text": "(Surdeanu et al., 2008;",
"ref_id": "BIBREF20"
},
{
"start": 694,
"end": 713,
"text": "Haji\u010d et al., 2009)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Research into SPRL is still in its infancy, especially in comparison to SRL. One among many reasons may be the fact that, in contrast to semantic roles, semantic proto-roles are multidimensional. This introduces more complexity: Figure 2 : Model outline. Input: (i) a sequence of vectors representing the words and (ii) a sequence of vectors which serve to highlight predicate and argument. Processing: 1. element-wise multiplication of the two sequences ( ); 2. generation of hidden states with forward and backward Bi-LSTM reads ( & ); 3. self-attention mechanism builds a new sequence of hidden states by letting every hidden state attend to every other hidden state; 4. concatenation of the hidden states to generate a vector representation ([; ]). Output: (i) use vector representation to output Likert scale auxiliary predictions (FF ReLU ) and (ii) concatenate auxiliary predictions to the vector representation ([; ]) to finally (iii) compute the multi-label predictions at the top level (|P |\u2022FF Sof tmax ; P : set of proto role properties).",
"cite_spans": [],
"ref_spans": [
{
"start": 229,
"end": 237,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "given a predicate and an argument, the task is no more to predict a single label (as in SRL), but a list of multiple labels or even a multi-variate Likert scale. Another reason may be related to the available resources. The published SPR data sets comprise significantly fewer examples. The design of annotation guidelines and pilot studies with the aim of in-depth proto-role annotations is a hard task. In addition, the SPR data were created, at least to a certain extent, in an experimental manner: one of the goals of corpus creation was to explore possible SPR annotation protocols for humans. We hope that a side-effect of this paper is to spark more interest in SPR and SPRL.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Since the work of Teichert et al. (2017) , the SPRL problem has been phrased as follows: given a (sentence, predicate, argument) tuple, we need to predict for all possible argument properties from a given property-inventory whether they hold or not (regression: how likely are they to hold?).",
"cite_spans": [
{
"start": 18,
"end": 40,
"text": "Teichert et al. (2017)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Attentive Marker Model",
"sec_num": "3"
},
{
"text": "Following previous work (Rudinger et al., 2018) , the backbone of our model is a Bi-LSTM. To ensure further comparability, pretrained 300 dimensional GloVe embeddings (Pennington et al., 2014) are used for building the input sequence (e 1 , ..., e T ). In contrast to Rudinger et al. 2018, we multiply a sequence of marker embeddings (m 1 , ..., m T ) element-wise with the sequence of word vectors: (e 1 \u2022 m 1 , ..., e T \u2022 m T ) ( , Figure 2) . We distinguish three different marker embeddings that indicate the position of the argument in question (red, Figure 2 ), the predicate (green, Figure 2 ) and remaining parts of the sentence. This is to some extent similar to He et al. 2017who learn two predicate indicator embeddings which are concatenated to the input vectors and serve the purpose of showing the model whether a token is the predicate or not. However, in SPRL we are also provided with the argument phrase. We will see in the ablation experiments that it is paramount to learn a dedicated embedding. Embedding multiplication instead of concatenation has the advantage of fewer LSTM parameters (smaller input dimension). Besides, it provides the model with the option to learn large coefficients of the word vector dimensions of predicate and argument vectors. This should immediately draw the model's attentiveness to the argument and predicate phrases which now are accentuated.",
"cite_spans": [
{
"start": 24,
"end": 47,
"text": "(Rudinger et al., 2018)",
"ref_id": "BIBREF18"
},
{
"start": 167,
"end": 192,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 434,
"end": 443,
"text": "Figure 2)",
"ref_id": null
},
{
"start": 556,
"end": 564,
"text": "Figure 2",
"ref_id": null
},
{
"start": 590,
"end": 599,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Attentive Marker Model",
"sec_num": "3"
},
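The marker mechanism described above (three learned marker embeddings, multiplied element-wise with the word vectors to highlight argument and predicate) might look roughly as follows. This is a plain-Python sketch with toy dimensions; the function name, the dict keys, and the use of index sets for spans are illustrative assumptions, not the authors' implementation.

```python
def mark_sequence(embeddings, arg_idx, pred_idx, markers):
    """Element-wise multiply each word vector with one of three
    marker vectors: 'arg' for tokens in the argument span, 'pred'
    for tokens of the predicate, 'other' for the rest.

    embeddings: list of word vectors (lists of floats)
    arg_idx / pred_idx: sets of token positions
    markers: dict with 'arg', 'pred', 'other' vectors (same dim)."""
    marked = []
    for t, e in enumerate(embeddings):
        if t in arg_idx:
            m = markers["arg"]
        elif t in pred_idx:
            m = markers["pred"]
        else:
            m = markers["other"]
        marked.append([ei * mi for ei, mi in zip(e, m)])
    return marked
```

A marker entry larger than 1 accentuates those embedding dimensions, which is the "large coefficients" effect described in the text.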
{
"text": "The sequence of marked embeddings is further processed by a Bi-LSTM in order to obtain a sequence of hidden states S = (s 1 , ..., s T ). In Figure 2, forward and backward LSTM reads are indicated by and . From there, we take intuitions from Zheng et al. (2018) and compute the next sequence of vectors by letting every hidden state attend to every other hidden state, which is expressed by the following formulas:",
"cite_spans": [
{
"start": 242,
"end": 261,
"text": "Zheng et al. (2018)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 141,
"end": 147,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Attentive Marker Model",
"sec_num": "3"
},
{
"text": "h t,t = tanh(QS t + KS t + \u03b2) e t,t = \u03c3(v T h t,t + \u03b1) a t = sof tmax(e t ) z t = t a t,t \u2022 s t Q, K are weight matrices, \u03b2 is a bias vector,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attentive Marker Model",
"sec_num": "3"
},
{
"text": "\u03b1 is a bias scalar and v a weight vector. Letting every hidden state attend to every other hidden state gives the model freedom in computing the argument-predicate composition. This is desirable, since arguments and predicates frequently are in long-distance relationships. For example, in Figure 3 we see that in the SPR1 data predicates and arguments often lie more than 10 words apart and a non-negligible amount of cases consists of distances of more than 20 words. We proceed by concatenation, z = [z 1 ; ...; z T ], and compute intermediate outputs approximating the property-likelihood Likert scales. This is achieved with weight matrix A and ReLU activation functions (F F +ReLU , Figure 2 ):",
"cite_spans": [],
"ref_spans": [
{
"start": 290,
"end": 298,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 689,
"end": 697,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Attentive Marker Model",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "a = ReLU (Az).",
"eq_num": "(1)"
}
],
"section": "Attentive Marker Model",
"sec_num": "3"
},
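The self-attention step above (every hidden state attends to every other hidden state via h_{t,t'}, e_{t,t'}, a_t, z_t) can be sketched in plain Python. Q, K, v, β, α correspond to the learned parameters in the formulas; the toy dimensions, the helper name, and the list-based linear algebra are assumptions for illustration, not the paper's implementation.

```python
import math

def self_attend(S, Q, K, beta, v, alpha):
    """Sketch of the attention mechanism (following Zheng et al., 2018).
    S: hidden-state vectors; Q, K: weight matrices (lists of rows);
    beta: bias vector; v: weight vector; alpha: bias scalar."""
    def matvec(M, x):
        return [sum(m * xi for m, xi in zip(row, x)) for row in M]

    T, D = len(S), len(S[0])
    Z = []
    for t in range(T):
        # e_{t,t'} = sigma(v^T tanh(Q s_t + K s_{t'} + beta) + alpha)
        scores = []
        for tp in range(T):
            h = [math.tanh(q + k + b)
                 for q, k, b in zip(matvec(Q, S[t]), matvec(K, S[tp]), beta)]
            e = 1.0 / (1.0 + math.exp(-(sum(vi * hi for vi, hi in zip(v, h)) + alpha)))
            scores.append(e)
        # a_t = softmax(e_t); z_t = sum_{t'} a_{t,t'} * s_{t'}
        exps = [math.exp(s) for s in scores]
        a = [x / sum(exps) for x in exps]
        Z.append([sum(a[tp] * S[tp][d] for tp in range(T)) for d in range(D)])
    return Z
```

Because the attention weights a_t sum to one, each z_t is a convex combination of all hidden states, which is what lets the model bridge long predicate-argument distances.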
{
"text": "To perform multi-label prediction with |P | possible labels we use [a; z] for computing the final decisions with 2|P | output neurons and |P | separate weight matrices (|P | * F F +sof tmax , Figure 2 ), one for each property p \u2208 P :",
"cite_spans": [],
"ref_spans": [
{
"start": 192,
"end": 200,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Attentive Marker Model",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "o p = sof tmax(W p [a; z]).",
"eq_num": "(2)"
}
],
"section": "Attentive Marker Model",
"sec_num": "3"
},
{
"text": "For example, given the 18 proto-role properties contained in SPR1, we learn |P | = 18 weight matrices and use the Softmax functions to produce 18 vectors of dimension 2 as outputs. The first dimension o p,0 represents the predicted probability that property p does not apply (o p,1 : probability for p applies). For the regression task, we reduce the number of output neurons from 2|P | to |P | and use ReLU activation functions instead. We hypothesize that the hierarchical structure can support the model in making predictions on the top layer. E.g., if the argument is predicted to be most likely not sentient and very likely to be manipulated, the model may be less tempted to predict an awareness label at the top layer. The auxiliary loss for any datum is given as the mean square error over the auxiliary output neurons:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attentive Marker Model",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "= \u03bb |P | p\u2208P (a p \u2212 a p ) 2",
"eq_num": "(3)"
}
],
"section": "Attentive Marker Model",
"sec_num": "3"
},
{
"text": "In case of the multi-label formulation, our main loss for an example is the average cross entropy loss over every property:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attentive Marker Model",
"sec_num": "3"
},
{
"text": "= \u2212 \u03bb |P | p\u2208P (o p,1 log o p,1 + o p,2 log o p,2 ), (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attentive Marker Model",
"sec_num": "3"
},
{
"text": "where o p,0 = I(\u00acp) and o p,1 = I(p) i.e. the gold label indicator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attentive Marker Model",
"sec_num": "3"
},
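The top layer and the two losses described above (one 2-way softmax per property, Eq. 2; auxiliary mean squared error over the Likert outputs, Eq. 3; averaged cross entropy over the per-property softmax outputs, Eq. 4) can be sketched in plain Python. All names are illustrative and biases are omitted; this is a reading aid under those assumptions, not the authors' code.

```python
import math

def property_outputs(az, weight_mats):
    """One 2-way softmax per proto-role property (cf. Eq. 2).
    az: concatenated [a; z] feature vector; weight_mats: dict
    mapping property -> 2-row weight matrix (biases omitted).
    Returns per property (P(p does not apply), P(p applies))."""
    out = {}
    for p, W in weight_mats.items():
        logits = [sum(w * x for w, x in zip(row, az)) for row in W]
        m = max(logits)                      # stabilize the softmax
        exps = [math.exp(l - m) for l in logits]
        s = sum(exps)
        out[p] = (exps[0] / s, exps[1] / s)
    return out

def example_loss(aux_pred, aux_gold, probs, gold, lam=1.0):
    """Auxiliary MSE over Likert outputs (cf. Eq. 3) plus average
    cross entropy over the property softmaxes (cf. Eq. 4).
    aux_pred/aux_gold: predicted vs. gold Likert value per property;
    probs: dict p -> (P(not p), P(p)); gold: dict p -> 0/1 indicator."""
    P = list(probs)
    mse = lam / len(P) * sum((aux_pred[p] - aux_gold[p]) ** 2 for p in P)
    xent = -lam / len(P) * sum(math.log(probs[p][gold[p]]) for p in P)
    return mse + xent
```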
{
"text": "Data We use the same data setup and split as Teichert et al. 2017 Like previous works, we use macro F1 as the global performance metric in the multi-label scenario and macro-averaged Pearson's \u03c1 (arithmetic mean over the correlation coefficients for each property, details can be found in the Supplement A.1) We refer to the system results as reported by Rudinger et al. (2018) . The most recent work, which evaluates large language models on a variety of tasks including SPRL, is denoted by TEN'19 (Tenney et al., 2019) . In this case, we present the micro F1 results as reported in their paper.",
"cite_spans": [
{
"start": 355,
"end": 377,
"text": "Rudinger et al. (2018)",
"ref_id": "BIBREF18"
},
{
"start": 499,
"end": 520,
"text": "(Tenney et al., 2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
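The macro-averaged Pearson's ρ described above (the arithmetic mean of the per-property correlation coefficients) can be sketched as follows. This is a plain-Python sketch of the metric definition, not the authors' evaluation script; the dict-of-lists interface is an assumption.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def macro_pearson(pred_by_prop, gold_by_prop):
    """Macro-averaged rho: arithmetic mean of per-property correlations."""
    rhos = [pearson(pred_by_prop[p], gold_by_prop[p]) for p in pred_by_prop]
    return sum(rhos) / len(rhos)
```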
{
"text": "We introduce four main models: (i) Marker: our basic, span-based singlemodel system. For (ii) MarkerE, we fit an ensemble of 50 Markers with different random seeds. Computationally, training 50 neural models in this task is completely feasible since neither SPR1 nor SPR2 contain more than 10,000 training examples (parallelized training took approximately 2 hours). The ensemble predicts unseen testing data by combining the models' decisions in a simple majority vote when performing multi-label prediction or, when in the regression setup, by computing the mean of the output scores (for every property). We also introduce (iii) MarkerB, and (iv) Mark-erEB. These two systems differ in only one aspect from the previously mentioned models: instead of GloVe word vectors, we feed contextual vectors extracted from the BERT model (Devlin et al., 2018) . More precisely, we use the transformer model BERT-base-uncased 2 and sum the inferred activations over the last four layers. The resulting vectors are concatenated to GloVe vectors and then processed by the Bi-LSTM. We fit all models with gradient descent and apply early stopping on the development data (maximum average Pearson's \u03c1 for multi-variate Likert regression, maximum macro F1 for the multi-label task). Further hyper parameter choices and details about the training are listed in Appendix \u00a7A.2.",
"cite_spans": [
{
"start": 831,
"end": 852,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model instantiation",
"sec_num": null
},
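The ensembling rule described above (simple majority vote over binary labels for multi-label prediction, mean of the output scores for regression) can be sketched as follows. The function name, the `mode` switch, and the tie-breaking behaviour (a tie counts as the negative label) are assumptions; the text does not specify how ties among the 50 voters are resolved.

```python
def ensemble_predict(model_outputs, mode="vote"):
    """Combine one prediction per ensemble member for a single
    (example, property) pair: 0/1 labels under 'vote' (simple
    majority), Likert scores under 'mean'."""
    if mode == "vote":
        # strict majority; a tie falls back to the negative label
        return int(sum(model_outputs) > len(model_outputs) / 2)
    return sum(model_outputs) / len(model_outputs)
```

With 50 members (an even number), the tie case can actually occur, which is why the fallback has to be made explicit in any reimplementation.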
{
"text": "News texts The results on newspaper data (SPR1) are displayed in Table 1 (left-hand side). Our basic ensemble (MarkerE) improves massively in the property location 3 (+19.1 pp. F1). A significant loss is experienced in the property changes possession (-9.6). Overall, our ensemble method outperforms all prior works (REI'15: +17.7 pp. macro F1; TEI'17: +6.2, RUD'18: +1.0). Our ensemble method provided with additional contextual word embeddings (MarkerEB) yields another large performance gain. The old state-of-the-art, RUD'18, is surpassed by more than 6.0 pp. macro F1 (a relative improvement of 8.6%). With regard to some properties, the contextual embeddings provide massive performance gains over our basic MarkerE: stationary (+9.5 pp. F1), makes physical contact (+21.3), change of location (+14.1) and created (+17.3). The only loss is incurred for the property which asks if an argument is destroyed (-12.3 ). This specific property appears to be difficult to predict for all models. The best score in this property is achieved by MarkerE with only 26.6 F1.",
"cite_spans": [
{
"start": 911,
"end": 917,
"text": "(-12.3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 65,
"end": 72,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Multi-Label Prediction Results",
"sec_num": "4.1"
},
{
"text": "Web texts On the web texts (SPR2), due to less previous works, we also use three label selection strategies as baselines: a majority label baseline, a constant strategy which always selects the positive label and a random baseline which samples a pos-itive target label according to the occurrence ratio in the training data (maj, constant & ran, Table 2 , left-hand side).",
"cite_spans": [],
"ref_spans": [
{
"start": 347,
"end": 354,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Multi-Label Prediction Results",
"sec_num": "4.1"
},
{
"text": "Our basic MarkerE method yields massive improvements over both baselines (more than +10 pp. F1) in 4 out of 14 proto-role properties. For argument changes possession and awareness the improvement over both baselines is more than +25 pp. F1 and for sentient more than +40 pp. However, in the partitive property, the constant-label baseline remains unbeaten by a large margin (-21.7 pp.). Overall, all Marker models yield large performance increases over the baselines. For example, MarkerE yields significant improvements both over random (+27.7 pp. macro F1), constant (+9.5) and majority (+45.0). Intriguingly -while the contextual embeddings provide a massive performance boost on news texts (SPR1), -they appear not to be useful for our model on the web texts. The macro F1 score of MarkerEB is slightly worse (-1 pp.) than that of MarkerE and the micro F1 score is only marginally better (+0.9). The same holds true for the single-model instances: Marker performs better than MarkerB by 2.2 pp. macro F1 albeit marginally worse micro F1 wise by 0.5 pp.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Label Prediction Results",
"sec_num": "4.1"
},
{
"text": "Why exactly the contextual embeddings fail to provide any valuable information when labeling arguments in web texts, we cannot answer with certainty. A plausible cause could be overfitting problems stemming from the increased dimensionality of the input vectors. In fact, the contextual embeddings increase the number of word vector features by more than two times over the dimension of the GloVe embeddings. This inflates the number of parameters in the LSTM's weight matrices. As a consequence, the likelihood of overfitting is increased -an issue which is further aggravated by the fact that SPR2 data are significantly fewer than SPR1 data. SPR2 contains less than five thousand predicate-argument training examples, roughly half the size of SPR1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Label Prediction Results",
"sec_num": "4.1"
},
{
"text": "Another source of problems may be rooted in the target-label construction process for SPR2. This question does not arise when using SPR1 data since all annotations were performed by a single annotator. The SPR2 data, in contrast, contains for each predicate-argument pair, two annotations. In total, the data was annotated by many crowd workers -some of whom provided many and some provided few annotations. Perhaps, averaging Lik- ert scale annotations of two random annotators is not the right way to transform SPR2 to a multilabel task. Future work may investigate new transformation strategies. For example, we can envision a strategy which finds reliable annotators and weighs the choices of those annotators higher than those of less reliable annotators. This should result in an improved SPR2 gold standard for both multilabel and multi-variate Likert scale SPRL systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Label Prediction Results",
"sec_num": "4.1"
},
{
"text": "News texts MarkerE achieves large performance gains for the properties location and change of location (\u2206\u03c1: +0.136 & \u2206\u03c1: +0.066, Table 1 ). This is in accordance with the results for these two properties in the multi-label predic-tion setup. Our model is outperformed by RUD'18 in the property stationary (\u2206\u03c1: -0.045). All in all, MarkerE outperforms RUD'18 (\u2206 macro \u03c1: +0.005). When providing additional contextual word embeddings from the large language model, the correlations intensify for almost all role properties. Overall, the contextual embeddings in MarkerEB yield an observable improvement of +0.063 \u2206 macro \u03c1 over MarkerE (which solely uses GloVe embeddings).",
"cite_spans": [],
"ref_spans": [
{
"start": 129,
"end": 136,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Likert Scale Regression Results",
"sec_num": "4.2"
},
{
"text": "Web texts Our MarkerE model outperforms RUD'18 slightly by +0.001\u2206\u03c1 (Table 2) . However, if we compare with RUD'18's model setup which achieved the best score on the SPR1 testing data (pre-training with a supervised MT task, macro regression result SPR2: 0.521\u03c1), we Table 3 : Main results, system properties and requirements of SPRL systems. Overall best system is marked in bold, best system using GloVe is underlined, best single-model system is marked by \u2020. STL: supervised transfer-learning (e.g., RUD'18: pre-training on MT task). C-embeddings: contextual word embeddings (BERT-base). ML: multilabel prediction; LR: multi-variate Likert regression. BERT concat: last four BERT layers are concatenated. BERT lin. comb.: optimized linear combination of last four BERT layers (our BERT based models sum the last four layers). The Table is further discussed in the Section Discussion \u00a74.3.",
"cite_spans": [],
"ref_spans": [
{
"start": 68,
"end": 77,
"text": "(Table 2)",
"ref_id": "TABREF4"
},
{
"start": 267,
"end": 274,
"text": "Table 3",
"ref_id": null
},
{
"start": 833,
"end": 841,
"text": "Table is",
"ref_id": null
}
],
"eq_spans": [],
"section": "Likert Scale Regression Results",
"sec_num": "4.2"
},
{
"text": "achieve a significantly higher macro-average (\u2206\u03c1: +0.014). Yet again, when our model is provided with contextual word embeddings, a large performance boost is the outcome. In fact, Mark-erEB outperforms MarkerE by +0.045 \u2206 macro \u03c1 and RUD'18's best performing configuration by +0.046. This stands in contrast to the multilabeling results on this type of data, where the contextual embeddings did not appear to provide any performance increase. As previously discussed ( \u00a74.1), this discrepancy may be rooted in the task generation process of SPR2 which requires transforming two annotations per example to one ground truth (the two annotations per example stem from two out of, in total, 50 workers).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Likert Scale Regression Results",
"sec_num": "4.2"
},
{
"text": "Leaving the contextual word embedding inputs aside, the performance differences of our Marker models to RUD'18 may seem marginal for many properties. Albeit our MarkerE yields an observable improvement of 1.0 pp. macro F1 in the multi-label setup, in the regression setup the performance gains are very small (\u2206\u03c1 SPR1: +0.005, \u2206\u03c1 SPR2: +0.001, Table 1 & 3). In addition, our model as a single model instance (Marker) is outperformed by RUD'18's approach both in the regression and in the multi-label setup. However, it is important to note that the result of our system has substantially fewer dependencies (Table 3) . Firstly, our model does not rely on supplementary gold syntax -in fact, since it is span-based, our model is completely agnostic to any syntax. Besides our approach, only REI'15 does not depend on supplementary gold syntax for the dis-played results. However, all of our models outperform REI'15's feature-based linear classifier in every property (+17 pp. macro F1 in total) except for destroyed (where MarkerEB performed slighly worse by -2.8 pp. F1). Also, results of SRL systems on semantic role labeling data show that span-based SRL systems often lag behind a few points in accuracy (cf. Li et al. (2019) , Table 1). When provided with the syntactic head of the argument phrase, a model may immediately focus on what is important in the argument phrase. When solely fed the argument-span, which is potentially very long, the model has to find the most important parts of the phrase on its own and is more easily distracted. Additionally, identifying the head word of an argument may be more important than getting the boundaries of the span exactly right. In other words, span-based SPRL models may be more robust when confronted with slightly erroneous spans compared to dependencybased models which may be vulnerable to false positive heads. However, this hypothesis has to be confirmed or disproven experimentally in future work.",
"cite_spans": [
{
"start": 1213,
"end": 1229,
"text": "Li et al. (2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 344,
"end": 351,
"text": "Table 1",
"ref_id": "TABREF3"
},
{
"start": 607,
"end": 616,
"text": "(Table 3)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.3"
},
{
"text": "Further information about differences between the various SPRL approaches is displayed in Table 3. In sum, despite having significantly fewer dependencies on external resources, our approach proves competitive with all methods from prior work, including neural systems. When combined in a simple ensemble, our model outperforms previous systems, and when additional contextual word embeddings are fed in, the results improve further by a large margin. In the following section, we show that ensembling SPRL models has another advantage besides gains in predictive performance: it decreases the models' sensitivity to different random initializations. As a result, we find that a simple neural voter committee (ensemble) offers more robust SPRL predictions than a randomly selected committee member (single model).",
"cite_spans": [],
"ref_spans": [
{
"start": 81,
"end": 88,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.3"
},
{
"text": "Ensembling increases accuracy To investigate whether ensembling improves proto-role labeling results significantly, we conduct McNemar significance tests (McNemar, 1947) comparing the predictions of MarkerB and MarkerEB. The significance test results summarized in Table 4 are unambiguous: for many proto-role properties, ensembling improves performance significantly (SPR1: 14 of 18 cases; SPR2: 7 of 14 cases; significance level: p < 0.05). However, for a few cases, ensembling resulted in significantly worse predictions (SPR1: change of location; SPR2: change of location, instigated and partitive; significance level: p < 0.05). For the remaining properties, the differences in predictions are insignificant.",
"cite_spans": [
{
"start": 154,
"end": 169,
"text": "(McNemar, 1947)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 265,
"end": 272,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4.4"
},
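The McNemar test used above decides whether two classifiers differ significantly by looking only at their paired disagreements. A minimal sketch with the standard continuity correction follows; the function name and this exact test variant are our own illustrative choices, not taken from the paper:

```python
import math

def mcnemar(y_true, pred_a, pred_b):
    """McNemar test (chi-square with continuity correction) on paired binary predictions.

    b: items where model A is correct and model B is wrong;
    c: items where model B is correct and model A is wrong.
    Returns (test statistic, two-sided p-value)."""
    b = sum(1 for t, a, c in zip(y_true, pred_a, pred_b) if a == t and c != t)
    c = sum(1 for t, a, c in zip(y_true, pred_a, pred_b) if c == t and a != t)
    if b + c == 0:
        return 0.0, 1.0  # no disagreements: the models are indistinguishable
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    # survival function of chi^2 with 1 degree of freedom: P(X > stat) = erfc(sqrt(stat / 2))
    p_value = math.erfc(math.sqrt(stat / 2))
    return stat, p_value
```

For large disagreement counts this chi-square approximation is standard; an exact binomial version would be preferable when b + c is very small.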
{
"text": "Ensembling increases robustness Additionally, we find that ensembling increases the robustness of our neural models. Consider Figure 4 , where we display the performance difference between an n-voter ensemble and the same ensemble after one additional voter has joined (n+1-voter ensemble). The difference fluctuates wildly while the ensemble is still small, suggesting that a different random seed yields significantly different predictions. Hence, a single neural Marker model is very vulnerable to the quirks of random numbers. However, as more voters join, the predictions become notably more stable, an outcome which holds for both data sets and both ensemble model configurations (MarkerE and MarkerEB). We draw two conclusions from this experiment: (i) a single neural SPRL model is extremely sensitive to different random initializations, and (ii) a simple voter ensemble has the potential to alleviate this issue. In fact, as we add more voters, the ensemble converges towards stable predictions which are less influenced by the quirks of random numbers.",
"cite_spans": [],
"ref_spans": [
{
"start": 126,
"end": 134,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4.4"
},
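For concreteness, the voter committee described above can be sketched as a per-instance majority vote over the binary property predictions of the individual models. This is a hypothetical minimal version with names of our own choosing; the paper's actual combination scheme may differ in detail:

```python
from collections import Counter

def majority_vote(votes_per_model):
    """Combine per-model binary prediction lists (one list per voter) by
    elementwise majority vote; ties default to the negative label 0."""
    n_items = len(votes_per_model[0])
    combined = []
    for i in range(n_items):
        counts = Counter(model[i] for model in votes_per_model)
        combined.append(1 if counts[1] > counts[0] else 0)
    return combined
```

With an odd number of voters there are no ties, which is one practical reason to grow such committees in odd increments.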
{
"text": "Model Ablations All ablation experiments are conducted with MarkerEB in the multi-label formulation. We proceed by ablating different components in a leave-one-out fashion: (i) the self-attention components of the ensemble model are removed (SelfAtt in Table 5); (ii) we abstain from highlighting (a) both the arguments and predicates, (b) only the predicates and (c) only the arguments (mark., pred-mark. and arg-mark. in Table 5); finally, (iii) we remove the hierarchical structure and do not predict auxiliary outputs (hier. in Table 5). Of all ablated components, removing both predicate and argument markers simultaneously hurts the model the most (SPR1: -26.8 pp. macro F1; SPR2: -10.6). Ablating only the argument marker also causes a large performance loss (SPR1: -17.4, SPR2: -6.6). On the other hand, when only the predicate marker is ablated, the performance decreases only slightly (SPR1: -1.2, SPR2: -1.7). In other words, it appears to be of paramount importance to provide our model with indicators for the argument position in the sentence, while pointing at the predicate is of lesser importance. The self-attention component boosts the model's performance by up to +4.4 pp. F1 on SPR1 and +2.6 on SPR2. The hierarchical structure with intermediate auxiliary Likert scale outputs leads to gains of approximately +1 pp. macro F1 on both data sets. This indicates that the finer Likert scale annotations indeed provide valuable auxiliary information when predicting the labels at the top layer, albeit the performance difference is rather small.",
"cite_spans": [],
"ref_spans": [
{
"start": 252,
"end": 259,
"text": "Table 5",
"ref_id": "TABREF9"
},
{
"start": 417,
"end": 424,
"text": "Table 5",
"ref_id": "TABREF9"
},
{
"start": 526,
"end": 533,
"text": "Table 5",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4.4"
},
{
"text": "In our proposed SPRL ensemble model, predicate-argument constructs are highlighted with concurrently learned marker embeddings, and self-attention enables the model to capture long-distance relationships between arguments and predicates. The span-based method is competitive with the dependency-based state-of-the-art, which uses gold heads. When combined in a simple ensemble, the method overall outperforms the state-of-the-art on newspaper texts (multi-label prediction macro F1: +1.0 pp.). When fed contextual word embeddings extracted from a large language model, the method outperforms the state-of-the-art by 6.4 pp. macro F1. Our method is competitive with the state-of-the-art for Likert regression on texts from the web domain, and in the multi-label setting it outperforms all baselines by a large margin. Furthermore, we have shown that a simple Marker model voter ensemble is well suited for SPRL, for two reasons: (i) results for almost every proto-role property are significantly improved and (ii) considerably more robust SPRL predictions are obtained.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "We hope that our work sparks more research into semantic proto-role labeling and corpus creation. Dowty's feature-based view on roles allows us to analyze predicate-argument configurations in great detail, a capability which we believe lies at the very marrow of computational semantics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "https://gitlab.cl.uni-heidelberg.de/opitz/sprl",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/google-research/bert 3 i.e., does the argument describe the location of the event?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work has been supported by the German Research Foundation as part of the Research Training Group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES) under grant no. GRK 1994/1 and by the Leibniz ScienceCampus \"Empirical Linguistics and Computational Language Modeling\", supported by the Leibniz Association under grant no. SAS-2015-IDS-LWC and by the Ministry of Science, Research, and Art of Baden-W\u00fcrttemberg.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "A.1 Notes Calculation of macro F1 The global performance metric for multi-label SPRL is defined as 'macro F1'. To ensure full comparability of results, we use the same formula as prior work (Rudinger et al., 2018): macro F1 = 2 * P * R / (P + R), where P and R are the macro-averaged Precision and Recall, i.e., the unweighted means of these quantities computed over all proto-role properties. The above macro F1 metric, though not explicitly displayed in the prior papers, has been confirmed by the main authors (email). Calculation of Pearson's \u03c1 Pearson's \u03c1 quantifies the linear relationship between two random variables X and Y. Computed over a sample {(x_i, y_i)}_{i=1}^{n}, it is calculated as \u03c1 = \u03a3_i (x_i - mean(x)) (y_i - mean(y)) / sqrt(\u03a3_i (x_i - mean(x))^2 * \u03a3_i (y_i - mean(y))^2). Given |P| proto-role properties and corresponding correlation coefficients \u03c1_1, ..., \u03c1_{|P|}, the macro Pearson's \u03c1 is calculated as their unweighted mean. Data split of SPR1 (Teichert et al., 2017) reframed the SPRL task as a multi-label problem. Previously, the task was to answer, given a predicate and an argument, one specific proto-role question (binary label or single-output regression); now we need to predict all proto-role questions at once (multi-label or multi-output regression). To allow this formulation of the task, the authors needed to redefine the original train-dev-test split of SPR1 (recent works, including ours, all use the redefined split). Reported Numbers In the EMNLP publication of Rudinger et al. (2018) we found a few minor transcription errors in the result tables (confirmed by email communication with the main authors, who plan to upload an errata section). In case of transcription errors, we took the error-corrected numbers which were sent to us via email.",
"cite_spans": [
{
"start": 191,
"end": 214,
"text": "(Rudinger et al., 2018)",
"ref_id": "BIBREF18"
},
{
"start": 839,
"end": 862,
"text": "(Teichert et al., 2017)",
"ref_id": "BIBREF21"
},
{
"start": 1382,
"end": 1404,
"text": "Rudinger et al. (2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Supplemental Material",
"sec_num": null
},
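The two metrics described in this appendix can be sketched as follows. Variable and function names are ours; macro F1 is computed here as the harmonic mean of the macro-averaged precision and recall, matching the description above:

```python
import math

def macro_f1(precisions, recalls):
    """Macro F1 as described above: F1 of the unweighted means of
    per-property precision and recall."""
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    return 2 * p * r / (p + r) if p + r else 0.0

def pearson_rho(xs, ys):
    """Sample Pearson correlation coefficient for one property."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

def macro_rho(rhos):
    """Macro Pearson's rho: unweighted mean over per-property coefficients."""
    return sum(rhos) / len(rhos)
```

Note that this macro F1 is not the same number as averaging per-property F1 scores, which is why matching the exact formula of prior work matters for comparability.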
{
"text": "The hyperparameter configuration of our model is displayed in Table 6 . Sequence pruning: Let I = {i} be the index of the predicate and J = {j, ..., k} the indices corresponding to the argument. As long as the input sequence is longer than the maximum length (30, cf. Table 6), we first clip tokens on the left whose index m < min(I \u222a J); we then proceed to clip tokens on the right with m > max(I \u222a J); in the very rare cases where this is not sufficient, we proceed to clip tokens with m \u2209 I \u222a J (the marker sequences are adjusted accordingly). This clipping strategy ensures that the predicate and argument tokens are present in every input sequence. Sequences shorter than 30 words are pre-padded with zero vectors.",
"cite_spans": [],
"ref_spans": [
{
"start": 64,
"end": 71,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "A.2 Hyperparameters & Preprocessing",
"sec_num": null
}
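The clipping strategy can be sketched as follows. This is a simplified illustration with names of our own choosing; the adjustment of the parallel marker sequences is omitted:

```python
def clip_sequence(tokens, pred_idx, arg_idxs, max_len=30):
    """Clip a token sequence to at most max_len tokens while always keeping
    the predicate token and all argument tokens."""
    keep = {pred_idx} | set(arg_idxs)
    lo, hi = min(keep), max(keep)
    idxs = list(range(len(tokens)))
    # 1) drop tokens left of the predicate-argument span
    while len(idxs) > max_len and idxs[0] < lo:
        idxs.pop(0)
    # 2) drop tokens right of the span
    while len(idxs) > max_len and idxs[-1] > hi:
        idxs.pop()
    # 3) rare fallback: keep only predicate and argument tokens
    if len(idxs) > max_len:
        idxs = [i for i in idxs if i in keep]
    return [tokens[i] for i in idxs]
```

Because clipping stops at the span boundaries before resorting to the fallback, the predicate and argument tokens survive every pruning step, as the text above requires.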
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A full end-to-end semantic role labeler, syntacticagnostic over syntactic-aware?",
"authors": [
{
"first": "Jiaxun",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Shexia",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Zuchao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2753--2765",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiaxun Cai, Shexia He, Zuchao Li, and Hai Zhao. 2018. A full end-to-end semantic role labeler, syntactic- agnostic over syntactic-aware? In Proceedings of the 27th International Conference on Computational Linguistics, pages 2753-2765. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Introduction to the conll-2005 shared task: Semantic role labeling",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Carreras",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the ninth conference on computational natural language learning (CoNLL-2005)",
"volume": "",
"issue": "",
"pages": "152--164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xavier Carreras and Llu\u00eds M\u00e0rquez. 2005. Introduc- tion to the conll-2005 shared task: Semantic role la- beling. In Proceedings of the ninth conference on computational natural language learning (CoNLL- 2005), pages 152-164.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language under- standing. CoRR, abs/1810.04805.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Thematic proto-roles and argument selection. Language",
"authors": [
{
"first": "David",
"middle": [],
"last": "Dowty",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "67",
"issue": "",
"pages": "547--619",
"other_ids": {
"DOI": [
"10.2307/415037"
]
},
"num": null,
"urls": [],
"raw_text": "David Dowty. 1991. Thematic proto-roles and argu- ment selection. Language, 67:547-619.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Automatic labeling of semantic roles",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "28",
"issue": "",
"pages": "245--288",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Computational linguis- tics, 28(3):245-288.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The conll-2009 shared task: Syntactic and semantic dependencies in multiple languages",
"authors": [
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "Massimiliano",
"middle": [],
"last": "Ciaramita",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Johansson",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "Maria",
"middle": [
"Ant\u00f2nia"
],
"last": "Mart\u00ed",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Meyers",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jan\u0161t\u011bp\u00e1nek",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Thirteenth Conference on Computational Natural Language Learning: Shared Task",
"volume": "",
"issue": "",
"pages": "1--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jan Haji\u010d, Massimiliano Ciaramita, Richard Johans- son, Daisuke Kawahara, Maria Ant\u00f2nia Mart\u00ed, Llu\u00eds M\u00e0rquez, Adam Meyers, Joakim Nivre, Sebastian Pad\u00f3, Jan\u0160t\u011bp\u00e1nek, et al. 2009. The conll-2009 shared task: Syntactic and semantic dependencies in multiple languages. In Proceedings of the Thir- teenth Conference on Computational Natural Lan- guage Learning: Shared Task, pages 1-18. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Jointly predicting predicates and arguments in neural semantic role labeling",
"authors": [
{
"first": "Luheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "364--369",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luheng He, Kenton Lee, Omer Levy, and Luke Zettle- moyer. 2018a. Jointly predicting predicates and ar- guments in neural semantic role labeling. In Pro- ceedings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Papers), pages 364-369, Melbourne, Australia. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Deep semantic role labeling: What works and what's next",
"authors": [
{
"first": "Luheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "473--483",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1044"
]
},
"num": null,
"urls": [],
"raw_text": "Luheng He, Kenton Lee, Mike Lewis, and Luke Zettle- moyer. 2017. Deep semantic role labeling: What works and what's next. In Proceedings of the 55th Annual Meeting of the Association for Computa- tional Linguistics (Volume 1: Long Papers), pages 473-483, Vancouver, Canada. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Syntax for semantic role labeling, to be, or not to be",
"authors": [
{
"first": "Shexia",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Zuchao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Hongxiao",
"middle": [],
"last": "Bai",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2061--2071",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shexia He, Zuchao Li, Hai Zhao, and Hongxiao Bai. 2018b. Syntax for semantic role labeling, to be, or not to be. In Proceedings of the 56th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 2061- 2071.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "End-to-end neural coreference resolution",
"authors": [
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "188--197",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1018"
]
},
"num": null,
"urls": [],
"raw_text": "Kenton Lee, Luheng He, Mike Lewis, and Luke Zettle- moyer. 2017. End-to-end neural coreference reso- lution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Process- ing, pages 188-197, Copenhagen, Denmark. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Dependency or span, end-to-end uniform semantic role labeling",
"authors": [
{
"first": "Zuchao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Shexia",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Yiqing",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhuosheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zuchao Li, Shexia He, Hai Zhao, Yiqing Zhang, Zhu- osheng Zhang, Xi Zhou, and Xiang Zhou. 2019. De- pendency or span, end-to-end uniform semantic role labeling. CoRR, abs/1901.05280.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Note on the sampling error of the difference between correlated proportions or percentages",
"authors": [
{
"first": "Quinn",
"middle": [],
"last": "Mcnemar",
"suffix": ""
}
],
"year": 1947,
"venue": "Psychometrika",
"volume": "12",
"issue": "2",
"pages": "153--157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quinn McNemar. 1947. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika, 12(2):153-157.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Nat- ural Language Processing (EMNLP), pages 1532- 1543.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1202"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Towards robust linguistic analysis using ontonotes",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Sameer Pradhan",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Bj\u00f6rkelund",
"suffix": ""
},
{
"first": "Yuchen",
"middle": [],
"last": "Uryupina",
"suffix": ""
},
{
"first": "Zhi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zhong",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "143--152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Bj\u00f6rkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards ro- bust linguistic analysis using ontonotes. In Proceed- ings of the Seventeenth Conference on Computa- tional Natural Language Learning, pages 143-152.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Semantic proto-roles. Transactions of the Association for Computational Linguistics",
"authors": [
{
"first": "Drew",
"middle": [],
"last": "Reisinger",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Rudinger",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Ferraro",
"suffix": ""
},
{
"first": "Craig",
"middle": [],
"last": "Harman",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Rawlins",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "3",
"issue": "",
"pages": "475--488",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Drew Reisinger, Rachel Rudinger, Francis Ferraro, Craig Harman, Kyle Rawlins, and Benjamin Van Durme. 2015. Semantic proto-roles. Transac- tions of the Association for Computational Linguis- tics, 3:475-488.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Neuraldavidsonian semantic proto-role labeling",
"authors": [
{
"first": "Rachel",
"middle": [],
"last": "Rudinger",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Teichert",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Culkin",
"suffix": ""
},
{
"first": "Sheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "944--955",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rachel Rudinger, Adam Teichert, Ryan Culkin, Sheng Zhang, and Benjamin Van Durme. 2018. Neural- davidsonian semantic proto-role labeling. In Pro- ceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing, pages 944- 955. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Linguistically-informed self-attention for semantic role labeling",
"authors": [
{
"first": "Emma",
"middle": [],
"last": "Strubell",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Verga",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Andor",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "5027--5038",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, and Andrew McCallum. 2018. Linguistically-informed self-attention for semantic role labeling. In Proceedings of the 2018 Confer- ence on Empirical Methods in Natural Language Processing, pages 5027-5038.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The conll-2008 shared task on joint parsing of syntactic and semantic dependencies",
"authors": [
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Johansson",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Meyers",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Twelfth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "159--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihai Surdeanu, Richard Johansson, Adam Meyers, Llu\u00eds M\u00e0rquez, and Joakim Nivre. 2008. The conll- 2008 shared task on joint parsing of syntactic and se- mantic dependencies. In Proceedings of the Twelfth Conference on Computational Natural Language Learning, pages 159-177. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Semantic proto-role labeling",
"authors": [
{
"first": "Adam",
"middle": [
"R"
],
"last": "Teichert",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Poliak",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"R"
],
"last": "Gormley",
"suffix": ""
}
],
"year": 2017,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "4459--4466",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam R. Teichert, Adam Poliak, Benjamin Van Durme, and Matthew R. Gormley. 2017. Seman- tic proto-role labeling. In AAAI, pages 4459-4466. AAAI Press.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "What do you learn from context? probing for sentence structure in contextualized word representations",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Tenney",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Berlin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Poliak",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Mccoy",
"suffix": ""
},
{
"first": "Najoung",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, and Ellie Pavlick. 2019. What do you learn from context? probing for sentence structure in contextu- alized word representations. In International Con- ference on Learning Representations.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Universal decompositional semantics on universal dependencies",
"authors": [
{
"first": "Aaron",
"middle": [],
"last": "Steven White",
"suffix": ""
},
{
"first": "Drew",
"middle": [],
"last": "Reisinger",
"suffix": ""
},
{
"first": "Keisuke",
"middle": [],
"last": "Sakaguchi",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Vieira",
"suffix": ""
},
{
"first": "Sheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Rudinger",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Rawlins",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1713--1723",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aaron Steven White, Drew Reisinger, Keisuke Sak- aguchi, Tim Vieira, Sheng Zhang, Rachel Rudinger, Kyle Rawlins, and Benjamin Van Durme. 2016. Universal decompositional semantics on universal dependencies. In Proceedings of the 2016 Confer- ence on Empirical Methods in Natural Language Processing, pages 1713-1723.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Opentag: Open attribute value extraction from product profiles",
"authors": [
{
"first": "Guineng",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Subhabrata",
"middle": [],
"last": "Mukherjee",
"suffix": ""
},
{
"first": "Xin",
"middle": [
"Luna"
],
"last": "Dong",
"suffix": ""
},
{
"first": "Feifei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining",
"volume": "",
"issue": "",
"pages": "1049--1058",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guineng Zheng, Subhabrata Mukherjee, Xin Luna Dong, and Feifei Li. 2018. Opentag: Open attribute value extraction from product profiles. In Proceed- ings of the 24th ACM SIGKDD International Con- ference on Knowledge Discovery & Data Mining, pages 1049-1058. ACM.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "Davidsonian' analyses based on SR and SPR of the event open are displayed in Figure 1. The SPR analysis provides more detail about the event and the roles of the involved entities. Whether an SR: \u2203e open(e) \u2227 agent(e, c * ) \u2227 theme(e, l * ) SPR: \u2203e open(e) \u2227 volition(e, c * ) \u2227 aware(e, c * ) \u2227 sentient(e, c * ) \u2227 manipulated(e, l * ) \u2227 changesstate(e, l * ) \u2227 ...",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "Two different 'Davidsionian' event analyses of open. c * and l * refer to the count and letter entities.",
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"text": "Distribution of the number of words between argument and verb (distance relationship) and sentence lengths in the data sets SPR1 and SPR2.",
"uris": null
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"text": "Adding more voters leads to convergence in SPRL predictions. x-axis: number of voter models partaking in the ensemble. y-axis: F1 mean difference over all proto-roles from the ensemble with x voters compared to the ensemble with x \u2212 1 voters. Thin bars represent standard deviations.",
"uris": null
},
"TABREF1": {
"type_str": "table",
"html": null,
"num": null,
"text": "Baselines As baselines we present the results from previous systems: the state-of-the-art byRudinger et al. (2018) is denoted in our tables as RUD'18, the linear feature-based classifier byReisinger et al. (2015) as REI'15 and the CRF developed byTeichert et al. (2017) as TEI'17.",
"content": "<table><tr><td>; Rudinger et al. (2018); Ten-</td></tr><tr><td>ney et al. (2019). For determining the gold labels,</td></tr><tr><td>we also conform to prior works and (i) collapse</td></tr><tr><td>classes in the multi-label setup from {N A, 1, 2, 3}</td></tr><tr><td>and {4, 5} to classes '\u2212' and '+' and (ii) treat N A</td></tr><tr><td>as 1 in the Likert regression formulation. For dou-</td></tr><tr><td>bly annotated data (SPR2), the Likert scores are</td></tr><tr><td>averaged; in the multi-label setup we consider val-</td></tr><tr><td>ues \u2265 4 as '+' and map lesser scores to '\u2212'. More</td></tr><tr><td>data and pre-processing details are described in the</td></tr><tr><td>Supplement \u00a7A.1.</td></tr></table>"
},
"TABREF3": {
"type_str": "table",
"html": null,
"num": null,
"text": "SPR1 results. bold: better than all previous work; bold: overall best.",
"content": "<table><tr><td/><td/><td/><td/><td colspan=\"2\">multi-label (ML), F1 score</td><td/><td/><td/><td colspan=\"2\">regression (LR), \u03c1</td><td/><td/></tr><tr><td/><td/><td>baselines</td><td/><td/><td>ours</td><td/><td/><td/><td/><td>ours</td><td/><td/></tr><tr><td>property</td><td>maj</td><td colspan=\"6\">ran const MarkerE Marker MarkerEB MarkerB</td><td colspan=\"5\">RUD'18 MarkerE Marker MarkerEB MarkerB</td></tr><tr><td>awareness</td><td colspan=\"3\">0.0 48.9 67.1</td><td>92.7</td><td>92.3</td><td>94.0</td><td>91.1</td><td>0.879</td><td>0.882</td><td>0.868</td><td>0.902</td><td>0.878</td></tr><tr><td>chg location</td><td colspan=\"3\">0.0 12.0 21.7</td><td>28.6</td><td>35.1</td><td>38.0</td><td>18.2</td><td>0.492</td><td>0.517</td><td>0.476</td><td>0.563</td><td>0.507</td></tr><tr><td>chg possession</td><td>0.0</td><td>5.5</td><td>6.6</td><td>33.3</td><td>33.3</td><td>35.6</td><td>41.1</td><td>0.488</td><td>0.520</td><td>0.483</td><td>0.549</td><td>0.509</td></tr><tr><td>chg state</td><td colspan=\"3\">0.0 19.5 31.3</td><td>29.7</td><td>27.1</td><td>41.4</td><td>45.2</td><td>0.352</td><td>0.351</td><td>0.275</td><td>0.444</td><td>0.369</td></tr><tr><td>chg state continuous</td><td>0.0</td><td colspan=\"2\">9.2 21.7</td><td>25.3</td><td>19.8</td><td>26.8</td><td>30.4</td><td>0.352</td><td>0.396</td><td>0.321</td><td>0.483</td><td>0.423</td></tr><tr><td>existed after</td><td colspan=\"3\">94.1 86.1 94.1</td><td>94.0</td><td>92.4</td><td>94.0</td><td>94.5</td><td>0.478</td><td>0.469</td><td>0.403</td><td>0.507</td><td>0.476</td></tr><tr><td>existed before</td><td colspan=\"3\">89.5 80.0 89.5</td><td>91.0</td><td>90.5</td><td>92.0</td><td>89.8</td><td>0.616</td><td>0.645</td><td>0.605</td><td>0.690</td><td>0.664</td></tr><tr><td>existed during</td><td colspan=\"3\">98.0 96.2 97.0</td><td>98.0</td><td>97.8</td><td>98.1</td><td>98.1</td><td>0.358</td><td>0.374</td><td>0.280</td><td>0.354</td><td>0.301</td></tr><tr><td>instigated</td><td 
colspan=\"3\">0.0 48.9 70.5</td><td>77.9</td><td>78.0</td><td>78.9</td><td>78.7</td><td>0.590</td><td>0.582</td><td>0.540</td><td>0.603</td><td>0.599</td></tr><tr><td>partitive</td><td colspan=\"3\">0.0 10.4 24.2</td><td>2.5</td><td>16.5</td><td>9.2</td><td>2.4</td><td>0.359</td><td>0.283</td><td>0.213</td><td>0.374</td><td>0.330</td></tr><tr><td>sentient</td><td colspan=\"3\">0.0 47.6 44.3</td><td>91.9</td><td>91.6</td><td>93.7</td><td>92.0</td><td>0.880</td><td>0.874</td><td>0.859</td><td>0.892</td><td>0.872</td></tr><tr><td>volitional</td><td colspan=\"3\">0.0 39.1 61.8</td><td>88.1</td><td>87.2</td><td>89.7</td><td>88.5</td><td>0.841</td><td>0.839</td><td>0.825</td><td>0.870</td><td>0.854</td></tr><tr><td>was for benefit</td><td colspan=\"3\">0.0 31.6 48.8</td><td>61.1</td><td>59.2</td><td>60.2</td><td>63.4</td><td>0.578</td><td>0.580</td><td>0.525</td><td>0.598</td><td>0.569</td></tr><tr><td>was used</td><td colspan=\"3\">79.3 66.1 79.3</td><td>77.9</td><td>78.0</td><td>77.6</td><td>79.9</td><td>0.203</td><td>0.173</td><td>0.093</td><td>0.288</td><td>0.264</td></tr><tr><td>micro</td><td colspan=\"3\">65.0 62.9 61.4</td><td>84.0</td><td>83.4</td><td>84.9</td><td>83.9</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>macro</td><td colspan=\"3\">25.9 43.2 61.4</td><td>70.9</td><td>69.7</td><td>69.9</td><td>67.5</td><td>0.534</td><td>0.535</td><td>0.483</td><td>0.580</td><td>0.544</td></tr></table>"
},
"TABREF4": {
"type_str": "table",
"html": null,
"num": null,
"text": "SPR2 results. bold: better than previous work and/or baselines; bold: overall best.",
"content": "<table/>"
},
"TABREF7": {
"type_str": "table",
"html": null,
"num": null,
"text": "McNemar significance test results of Mark-erEB against MarkerB. Counts of properties for which a significance category applies (NS: #properties with insignificant difference).",
"content": "<table/>"
},
"TABREF8": {
"type_str": "table",
"html": null,
"num": null,
"text": "ablated componentDataMarkerEB SelfAtt mark. pred-mark. arg-mark. hier.",
"content": "<table><tr><td>SPR1</td><td>77.5</td><td>73.1</td><td>50.7</td><td>76.3</td><td>60.1 76.7</td></tr><tr><td>SPR2</td><td>69.9</td><td>67.3</td><td>59.3</td><td>68.2</td><td>63.3 68.9</td></tr></table>"
},
"TABREF9": {
"type_str": "table",
"html": null,
"num": null,
"text": "Multi-labeling F1 macro scores for different MarkerEB model configurations over SPR1 and SPR2.",
"content": "<table/>"
}
}
}
}