|
{ |
|
"paper_id": "S19-2003", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:45:58.151878Z" |
|
}, |
|
"title": "SemEval-2019 Task 2: Unsupervised Lexical Frame Induction", |
|
"authors": [ |
|
{ |
|
"first": "Behrang", |
|
"middle": [], |
|
"last": "Qasemizadeh", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Miriam", |
|
"middle": [ |
|
"R L" |
|
], |
|
"last": "Petruck", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Regina", |
|
"middle": [], |
|
"last": "Stodden", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
|
{ |
|
"first": "Laura", |
|
"middle": [], |
|
"last": "Kallmeyer", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Marie", |
|
"middle": [], |
|
"last": "Candito", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper presents Unsupervised Lexical Frame Induction, Task 2 of the International Workshop on Semantic Evaluation in 2019. Given a set of prespecified syntactic forms in context, the task requires that verbs and their arguments be clustered to resemble semantic frame structures. Results are useful in identifying polysemous words, i.e., those whose frame structures are not easily distinguished, as well as discerning semantic relations of the arguments. Evaluation of unsupervised frame induction methods fell into two tracks: Task A) Verb Clustering based on FrameNet 1.7; and B) Argument Clustering, with B.1) based on FrameNet's core frame elements, and B.2) on VerbNet 3.2 semantic roles. The shared task attracted nine teams, of whom three reported promising results. This paper describes the task and its data, reports on methods and resources that these systems used, and offers a comparison to human annotation.", |
|
"pdf_parse": { |
|
"paper_id": "S19-2003", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper presents Unsupervised Lexical Frame Induction, Task 2 of the International Workshop on Semantic Evaluation in 2019. Given a set of prespecified syntactic forms in context, the task requires that verbs and their arguments be clustered to resemble semantic frame structures. Results are useful in identifying polysemous words, i.e., those whose frame structures are not easily distinguished, as well as discerning semantic relations of the arguments. Evaluation of unsupervised frame induction methods fell into two tracks: Task A) Verb Clustering based on FrameNet 1.7; and B) Argument Clustering, with B.1) based on FrameNet's core frame elements, and B.2) on VerbNet 3.2 semantic roles. The shared task attracted nine teams, of whom three reported promising results. This paper describes the task and its data, reports on methods and resources that these systems used, and offers a comparison to human annotation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "SemEval 2019 Task 2 focused on the unsupervised semantic labeling of a set of prespecified (semantically) unlabeled structures (Figure 1 ). Unsupervised learning methods analyze these structures ( Figure 1a ) to augment them with semantic labels (Figure 1b) . The shape of the manually labeled input frames is constrained to an acyclic connected tree of lexical items (words and multi-word units) of maximum depth 1, where just one root governs several arguments. The task used Berkeley FrameNet (FN) (Ruppenhofer et al., 2016) and Q. Zadeh and Petruck (2019), guidelines for this task, to determine the arguments and label them with semantic information.", |
|
"cite_spans": [ |
|
{ |
|
"start": 501, |
|
"end": 527, |
|
"text": "(Ruppenhofer et al., 2016)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 127, |
|
"end": 136, |
|
"text": "(Figure 1", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 197, |
|
"end": 206, |
|
"text": "Figure 1a", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 246, |
|
"end": 257, |
|
"text": "(Figure 1b)", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We compared the proposed system results for unsupervised semantic tagging with that of human annotated (or, gold-standard) data in three different subtasks (Figure 2 ). To evaluate the systems, we computed distributional similarities between their generated unsupervised labeled data and human annotated reference data. For computing similarities we used general purpose numeral methods of text clustering, in particular BCUBED F-SCORE (Bagga and Baldwin, 1998) as the single figure of merit to rank the systems. The most important result of the shared task is the creation of a benchmark for a future complex task. This benchmark includes a moderately sized, manually annotated set of frames, where only the verbs of each were included, along with their core frame elements (which uniquely define a frame as Ruppenhofer et al. describe) . To complement FN's core frame elements that have highly specific meanings, the benchmark also includes the annotated argument structures of the verbs based on the generic semantic roles proposed for verb classes in VerbNet 3.2 (Kipper et al., 2000; Palmer et al., 2017) . The benchmark comes with simplified annotation guidelines and a modular annotation sys-tem with browsing and editing capabilities. 1 Complementing the benchmarking are several state-ofthe-art competing baselines, from the participants, that serve as a point of departure for improvements in the future. 2 The rest of this paper is organized as follows: Section 2 contextualizes this task; Section 3 offers a detailed task-description; Section 4 describes the data; Section 5 introduces the evaluation metrics and baselines; Section 6 characterizes the participating systems and unsupervised methods that participants used; Section 7 provides evaluation scores and additional insight about the data; and Section 8 presents concluding remarks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 436, |
|
"end": 461, |
|
"text": "(Bagga and Baldwin, 1998)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 809, |
|
"end": 837, |
|
"text": "Ruppenhofer et al. describe)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1067, |
|
"end": 1088, |
|
"text": "(Kipper et al., 2000;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1089, |
|
"end": 1109, |
|
"text": "Palmer et al., 2017)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 1415, |
|
"end": 1416, |
|
"text": "2", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 156, |
|
"end": 165, |
|
"text": "(Figure 2", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Frame Semantics (Fillmore, 1976) and other theories (Gamerschlag et al., 2014) that adopt typed feature structures for representing knowledge and linguistic structures have developed in parallel over several decades in theoretical linguistic studies about the syntax-semantics interface, as well as in empirical corpus-driven applications in natural language processing. Building repositories of (lexical) semantic frames is a core component in all of these efforts. In formal studies, lexical semantic frame knowledge bases instantiate foundational theories with tangible examples, e.g., to provide supporting evidence for the theory. Practically, frame semantic repositories play a pivotal role in natural language understanding and semantic parsing, both as inspiration for a representation format and for training data-driven machine learning systems, which is required for tasks such as information extraction, question-answering, text summarization, among others.", |
|
"cite_spans": [ |
|
{ |
|
"start": 16, |
|
"end": 32, |
|
"text": "(Fillmore, 1976)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 52, |
|
"end": 78, |
|
"text": "(Gamerschlag et al., 2014)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "However, manually developing frame semantic databases and annotating corpus-derived illustrative examples to support analyses of frames are resource-intensive tasks. The most well-known frame semantic (lexical) resource is FrameNet (Ruppenhofer et al., 2016) , which only covers a (relatively) small set of the vocabulary of contemporary English. While NLP research has integrated FrameNet data into semantic parsing, e.g., Swayamdipta et al. (2018) , these methods cannot extend beyond previously seen training labels, tagging out-of-domain semantics as unknown at best. This limitation does not hinder unsupervised methods, which will port and extend the coverage of semantic parsers, a common challenge in semantic parsing (Hartmann et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 232, |
|
"end": 258, |
|
"text": "(Ruppenhofer et al., 2016)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 424, |
|
"end": 449, |
|
"text": "Swayamdipta et al. (2018)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 726, |
|
"end": 749, |
|
"text": "(Hartmann et al., 2017)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Unsupervised frame induction methods can serve as an assistive semantic analytic tool, to build language resources and facilitate linguistic studies. Since the focus is usually to build language resources, most systems (Pennacchiotti et al. (2008) ; Green et al. (2004) ) have used a lexical semantic resource like WordNet (Miller, 1995) to extend coverage of a resource like FrameNet. Some methods, e.g., Modi et al. (2012) and Kallmeyer et al. (2018) , tried to extract FrameNetlike resources automatically without additional semantic information. Others (Ustalov et al. (2018) ; Materna (2012)) addressed frame induction only for verbs with two arguments.", |
|
"cite_spans": [ |
|
{ |
|
"start": 219, |
|
"end": 247, |
|
"text": "(Pennacchiotti et al. (2008)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 250, |
|
"end": 269, |
|
"text": "Green et al. (2004)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 323, |
|
"end": 337, |
|
"text": "(Miller, 1995)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 406, |
|
"end": 424, |
|
"text": "Modi et al. (2012)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 429, |
|
"end": 452, |
|
"text": "Kallmeyer et al. (2018)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 557, |
|
"end": 579, |
|
"text": "(Ustalov et al. (2018)", |
|
"ref_id": "BIBREF39" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Lastly, unsupervised frame induction methods can also facilitate linguistic investigations by capturing information about the reciprocal relationships between statistical features and linguistic or extra-linguistic observations (e.g., Reisinger et al. (2015) ). This task aimed to benchmark a class of such unsupervised frame induction methods. The ambitious goal of this task was the unsupervised induction of frame semantic structures from tokenized and morphosyntacally labeled text corpora. We sought to achieve this goal by building an evaluation benchmark for three tasks. Task A dealt with unsupervised labeling of verb lemmas with their frame meaning. Task B involved unsupervised argument role labeling, where B.1 benchmarked unsupervised labeling of frame-specific frame elements (FEs) based on FN, and B.2 benchmarked unsupervised role labeling of arguments in Case Grammar terms (Fillmore, 1968) and against a set of generic semantic roles, taken primarily from VerbNet.", |
|
"cite_spans": [ |
|
{ |
|
"start": 235, |
|
"end": 258, |
|
"text": "Reisinger et al. (2015)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 891, |
|
"end": 907, |
|
"text": "(Fillmore, 1968)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The task was unsupervised in that it forbade the use of any explicit semantic annotation (only permitting morphosyntactic annotation). Instead, we encouraged the use of unsupervised representation learning methods (e.g., word embeddings, brown clusters) to obtain semantic information. Hence, systems learn and assign semantic labels to test records without appealing to any explicit training labels. For development purposes, developers received a small labeled development set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task Description", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The goal of this task was to identify verbs that evoke the same frame. The task involved labeling verb uses in context to resemble their categorization based on Frame Semantics (Figure 2a ). Here, we used FN 1.7 as the reference for frame definitions. Hence, the task constituted the unsupervised induction of FN's lexical units, where a lexical unit (LU) is a pairing of a lemma and a frame. For example, we expected that the LUs auction.v, retail.v, sell.v, etc., which evoke the typed situation of COMMERCE SELL, be labeled with the same unsupervised tag. 3 The task resembles word sense induction in that it assigns a class (or sense) label to a verb. In word sense induction (WSI), labels are determined and evaluated on word forms (lemma + part-ofspeech e.g., sell.v or auction.n). WSI evaluations assume that the inventory of senses (set S i s) for different word forms f is devised independently. For instance, assuming f 1 is labeled with the set of senses S 1 and f 2 with S 2 , then S 1 \u2229 S 2 = \u03c6 only if f 1 = f 2 ; and, if f 1 = f 2 then S 1 \u2229 S 2 = \u03c6 (as in other SemEval benchmarks, including Agirre and Soroa (2007) ; Manandhar et al. (2010) ; Jurgens and Klapaftis (2013); Navigli and Vannella (2013)). For instance, in WSI evaluations based on OntoNotes (Hovy et al., 2006) , six different labels from S sell are assigned to the lemma sell.v, and one label s is assigned to auction.v, knowing that s / \u2208 S sell . Typically, lexical semantic relationships among members of S i s (e.g., synonymy, antonymy) are then analyzed independently of WSI (e.g., Lenci and Benotto (2012) ; Girju et al. (2007) ; McCarthy and Navigli (2007) ). In contrast, this task assumes that the sense inventory is defined independent of word forms.", |
|
"cite_spans": [ |
|
{ |
|
"start": 559, |
|
"end": 560, |
|
"text": "3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1108, |
|
"end": 1131, |
|
"text": "Agirre and Soroa (2007)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 1134, |
|
"end": 1157, |
|
"text": "Manandhar et al. (2010)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 1272, |
|
"end": 1291, |
|
"text": "(Hovy et al., 2006)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1569, |
|
"end": 1593, |
|
"text": "Lenci and Benotto (2012)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 1596, |
|
"end": 1615, |
|
"text": "Girju et al. (2007)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 1618, |
|
"end": 1645, |
|
"text": "McCarthy and Navigli (2007)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 177, |
|
"end": 187, |
|
"text": "(Figure 2a", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Task A: Clustering Verbs", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "This task involves uncovering mapping between word forms f and members of S such that different word forms (i.e., f i = f j ) can be mapped to the same meaning (label), and the same meaning (label) can be mapped to several word forms. We defined S with respect to FrameNet and assumed that its typed-situation frames are units of meaning. So, COMMERCE SELL captures the meaning associated with both sell.v and auction.v., as well as other selling-related words. Hence, in some sense, Task A goes beyond the ordinary WSI task as it also demands identifying (unspecified) lexical semantic relationships between verbs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task A: Clustering Verbs", |
|
"sec_num": "3.1" |
|
}, |
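{

"text": "The following minimal sketch makes the contrast with WSI concrete (the lexicon fragment is an assumption for exposition, not part of the task data; the frame names follow FN 1.7):\n\n# Hypothetical lexicon fragment: word form -> set of frame labels.\nLU_MAP = {\n    'sell.v': {'COMMERCE_SELL'},\n    'auction.v': {'COMMERCE_SELL'},\n    'retail.v': {'COMMERCE_SELL'},\n    'place.v': {'COMMERCE_SELL', 'PLACING'},  # a polysemous form\n}\n\n# Unlike in WSI, two distinct word forms may share a label ...\nassert LU_MAP['sell.v'] & LU_MAP['auction.v']\n# ... and one word form may map to more than one label.\nassert len(LU_MAP['place.v']) > 1",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Task A: Clustering Verbs",

"sec_num": "3.1"

},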
|
{ |
|
"text": "Taking the frames as primary and defining roles relative to each frame, the aim of Task B.1 was to cluster prespecified verb-headed argument structures according to the principles of Frame Semantics, where FrameNet served as the reference for evaluation. This task amounted to unsupervised labeling of frames and core FEs (Figure 2b ). Because FrameNet defines FEs frame-specifically, Task B.1 entails Task A. Given a set of semantically-unlabelled arguments as input (e.g., Figure 1a ), the root nodes (i.e., verbs) are clustered and assigned to a set of unsupervised frame labels \u03c0 i (1 \u2264 i \u2264 n, where n is the number of latent frames). Then, the arguments are labeled with semantic role labels (FEs) interpreted locally given the frame. That is, for any pair of \u03c0 x and \u03c0 y , the set of assigned roles R x to arguments under \u03c0 x are assumed to be independent from R y labels for \u03c0 y (R x \u2229 R y = \u03c6).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 322, |
|
"end": 332, |
|
"text": "(Figure 2b", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 475, |
|
"end": 484, |
|
"text": "Figure 1a", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Task B.1: Unsupervised Frame Semantic Argument Labeling", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We defined Subtask B.2 in parallel to Subtask B.1 and involved an idea from Case Grammar. The ar-guments of a verb in a set of prespecified subcategorization frames were clustered according to a common set of generic semantic roles ( Figure 2c ). Here, the task assumed that semantic roles are universal and generic (e.g., Agent, Patient). Their configuration determines the argument structure of verb-headed phrases. We evaluated this unsupervised labeling of arguments with semantic roles independently of the class, sense, and word form of a verb. We compared the role labels against a set of semantic roles from VerbNet 3.2 (Kipper et al., 2000) . Given a verb instance, no guarantee exists that input argument structures for B.2 and B.1 would be the same.", |
|
"cite_spans": [ |
|
{ |
|
"start": 628, |
|
"end": 649, |
|
"text": "(Kipper et al., 2000)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 234, |
|
"end": 243, |
|
"text": "Figure 2c", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Task B.2: Unsupervised Case Role Labeling", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The dataset consists of manual annotations for verb-headed frame structures anchored in tokenized sentences. These frame structures were manually annotated using the guidelines for this task (Q. Zadeh and Petruck, 2019). For example, as already illustrated, the verb come from.v is annotated in terms of FN's ORIGIN frame and its core FEs, as Example 1 shows.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Dataset", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "( 1)Criticism of futures COMES FROM Wall Street.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Dataset", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Also, using the set of 32 generic semantic role labels in VerbNet 3.2 and two additional roles, COG-NIZER and CONTENT, we annotated arguments of the verb as the following graphic shows.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ENTITY ORIGIN", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We assumed unique identifiers for sentences, e.g., #s1 for Example 1. The evaluation record for come from.v (Task A) appears below, where #s1 4 5 specifies the position of the verb in the sentence (Example 1). We stripped off the manually asserted labels from the records and passed them to systems for assigning unsupervised labels. Evidently, later a scorer program (Section 5) compared system-generated labels with the manually assigned labels.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Criticism come from Wall Street THEME SOURCE", |
|
"sec_num": null |
|
}, |
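{

"text": "A minimal sketch of reading such a stripped record (assuming, for illustration only, a whitespace-separated layout of sentence identifier, token start, and token end; the actual task format may differ):\n\ndef parse_record(line):\n    # '#s1 4 5' -> ('#s1', 4, 5): the verb spans tokens 4-5 of sentence #s1.\n    sent_id, start, end = line.split()\n    return sent_id, int(start), int(end)\n\nassert parse_record('#s1 4 5') == ('#s1', 4, 5)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Criticism come from Wall Street THEME SOURCE",

"sec_num": null

},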
|
{ |
|
"text": "We sampled data from the Wall Street Journal (WSJ) corpus of the Penn Treebank. Kallmeyer et al. (2018) provided frame annotations similar to those in this task for a portion of WSJ sentences, using SemLink and EngVallex (Cinkov\u00e1 et al., 2014) to generate frame semantic annotations semi-automatically. That work was based on FrameNet and the Prague Dependency Treebank (PSD) (Haji\u010d et al., 2012) from the Broad-coverage Semantic Dependency resource (Oepen et al., 2016) . We started by annotating a portion of the records in Kallmeyer et al. (2018) , and later deviated from this subset to create a more representative sample of the overall diversity and distribution of verbs in the WSJ corpus using a stratified random sampling method.", |
|
"cite_spans": [ |
|
{ |
|
"start": 80, |
|
"end": 103, |
|
"text": "Kallmeyer et al. (2018)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 221, |
|
"end": 243, |
|
"text": "(Cinkov\u00e1 et al., 2014)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 376, |
|
"end": 396, |
|
"text": "(Haji\u010d et al., 2012)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 450, |
|
"end": 470, |
|
"text": "(Oepen et al., 2016)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 526, |
|
"end": 549, |
|
"text": "Kallmeyer et al. (2018)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Sampling", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The annotation guidelines for this task were slightly different from those of FrameNet and various semantic dependency treebanks. In contrast to FN, which annotates a full span of text as an argument filler, or PropBank, which annotates syntactic constituents of arguments of verbs (Palmer et al., 2005) , we identified the text spans and only annotated a single word or a multi-word unit (MWU), i.e., the semantic head of the span, like annotations in Oepen et al. (2016) and Abstract Meaning Representation (Banarescu et al., 2013) . To illustrate, in Example 1, FN would annotate Criticism of futures as filling the FE ENTITY. We only annotated Criticism, understanding it as the LU that evokes JUDGMENT COMMUNICATION, which in turn represents the meaning of the whole text span. Thus, we assumed that another frame f a fills an argument of a frame. We annotated only the main content word(s) that evoke(s) f a ; these main words are the semantic heads. 4 Multi-word unit semantic heads (e.g., named entities, word form combinations) are annotated as if a single word form, such as Wall Street (# 1), excluding modifiers. In contrast to semantic depen-dency structures (e.g., DELPH-IN MRS-Derived Semantic Dependencies, Enju PredicateArgument Structures, and Tectogramatical Representation in PSD (Oepen et al., 2016 )), we did not commit to the underlying syntactic structure of the sentence since we were not obliged to relabel only syntactic structures. Rather, we annotated words and MWUs if the frame analysis permitted doing so. 5", |
|
"cite_spans": [ |
|
{ |
|
"start": 282, |
|
"end": 303, |
|
"text": "(Palmer et al., 2005)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 453, |
|
"end": 472, |
|
"text": "Oepen et al. (2016)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 509, |
|
"end": 533, |
|
"text": "(Banarescu et al., 2013)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 957, |
|
"end": 958, |
|
"text": "4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1300, |
|
"end": 1319, |
|
"text": "(Oepen et al., 2016", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Guidelines", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We annotated the data in a modular manner and in a semi-controlled environment using an annotation system developed for this purpose. The procedure consisted of four steps: 1) Reading and Comprehension; 2) Choosing a Frame; 3) Annotating Arguments; and 4) Rating, Commenting, or Revising. We tracked and logged all changes in the data as well as annotator interaction with the annotation system upon starting to annotate. The tool measured the time that annotators spent on each record and each annotation step, as well as how annotators moved between steps.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Procedure", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "In Step 1, annotators viewed a sentence with one highlighted verb, as in Example 2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Procedure", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "(2) Criticism of futures COMES from Wall Street.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Procedure", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "The goal of this step was understanding the meaning of the verb and its semantic function, and identifying semantic heads of arguments and their associated words or MWUs. To continue, an annotator must confirm the understanding of the verb's meaning of the verb, and can identify its semantic arguments. Without confirmation, an annotator would terminate the annotation process for that input sentence and go to the next one.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Procedure", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "If confirmed, Step 2 required the annotator to choose the frame that the verb evoked. This step may have included annotating multi-word phrasal verbs, e.g., COMES+FROM (Example 2). The annotation system assisted by providing a list of likely frames for the verb, including a LU lookup function (as in FN), an extended set of LUs derived via statistical methods, and previously logged annotations. After reviewing the definitions of the proposed frames, annotators chose one, or annotated the verb form with a different existing FN frame. Otherwise, the annotator terminated the process and the record moved to the list of \"skipped items\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Procedure", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "The annotation of arguments, Step 3, required", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Procedure", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "(3) Criticism of futures comes from Wall Street.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Procedure", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "The tool lists the core FEs and their definitions, and checks the integrity of record annotations to ensure that each core FE is annotated only once. In parallel, annotators add the verb's subcategorization frame and its semantic role. We did not annotate null instantiated FEs (but FN does). During step 3, annotators could go back to the previous step and change their choice of frame type.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Procedure", |
|
"sec_num": "4.3" |
|
}, |
|
|
{ |
|
"text": "Step 4, annotators rated their annotation, stating their opinion on how well the annotated instance fit FrameNet's definition and how it compared to other annotated instances. In a sense, annotators measured their confidence in the assigned labels. They did so by selecting a number on a scale from 1 to 5, with 1 not confident at all and 5 the most confident, i.e., the annotation fit perfectly to the chosen FrameNet frame, its definition, and examples. Annotators had the option to add free text comments on each record.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Procedure", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "The annotation procedure was rarely straightforward. Given the interdependence of Steps 2 and 3, annotators usually moved back and forth between them. In Step 2 an annotator might believe that a target verb did not belong in any existing FN frame. Likewise, annotators could terminate the annotation process even upon reaching the last step.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Procedure", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "At least two annotators verified all annotation used in the evaluation. A main annotator annotated all records in the dataset; two other annotators verified or disputed those annotations. If annotators could not reach an agreement, we removed the record from the SemEval dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Quality Control", |
|
"sec_num": "4.3.1" |
|
}, |
|
{ |
|
"text": "A full analysis of annotator disagreement goes beyond the scope of this work. While the source of annotator disagreement may seem trivial and simple (e.g., only one annotator understood the sentence correctly), we believe that some sentences may have more than one interpretation, all of which are plausible. Like the disagreement resulting from incorrect frame assignment, deciding what frame a verb evokes may be challenging; and resolving the dilemma is not always simple. Choosing between two related frames (e.g., BUILDING vs. INTENTIONALLY CREATE, related via Inheritance in FN), or identifying metaphorical and non-metaphorical uses of a verb requires subtle and sophisticated understanding of the semantics of the language, and of Frame Semantics. At times, disagreements pointed to more complex linguistic issues that remain in debate, e.g., choosing the semantic head of a syntactically complex argument, treating quantifiers, conjunctions, etc. Table 1 shows a statistical summary of the annotation task. The SemEval column reports the statistics for the final set of records, i.e., gold records with double-agreement between annotators, and which we used to evaluate the systems. Total reports the statistics of all analyzed records, from which we chose our SemEval data. Skipped and InProg show the statistics for discarded records and records without a final decision, respectively. Dev shows the statistics for the development set.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 956, |
|
"end": 963, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Quality Control", |
|
"sec_num": "4.3.1" |
|
}, |
|
{ |
|
"text": "Each of the rows reports a value of a component of the data or annotator interaction with the data. Records indicates the number of annotated verbs and their arguments. Sentences and Tokens indicate the size of the sub-corpus of the annotated records. VF is the number of distinct verb lemmas (273), mapped to the number of distinct frames that the Frames-Type row shows (149) (Figure 3 Confidence reports the average of annotatorassigned confidence scores for annotations per record. Although interpreting this measure demands more work, the averages appear to be as expected. Specifically, SemEval is higher in value than both InProg and Skipped, facts that we associate with double agreement and the choice reviewing process. Still, many records with high confidence scores remained as InProg given the lack of double agreement. Table 5 (Appendix A.1) lists the top 10 frames annotated with their respective highest and lowest confidence ratings averaged by their frequency in SemEval.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 377, |
|
"end": 386, |
|
"text": "(Figure 3", |
|
"ref_id": "FIGREF4" |
|
}, |
|
{ |
|
"start": 832, |
|
"end": 839, |
|
"text": "Table 5", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Summary statistics", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "The last two rows of Table 1 are meta-data on the annotation process. Time reports the total time annotators spent in active annotation, engaged in the steps described above (742 hours), excluding the reviewing process (Section 4.3.1) and including the time to annotate MWUs. Total-Move is the total number of logical moves for frame annotation between annotators and the annotation system, i.e., logged changes in the process of frame and core FE annotation. This number excludes annotation of verb subcategorization with generic semantic roles. 6 In SemEval, annotated frames had an average of 2.15 arguments, requiring a minimum of five logical moves to annotate (MWU-less sentences). However, on average, each SemEval record required 14.8 moves. This number is even higher for InProg (18.2); we believe that it indicates the complexity of the annotation task. Table 4 (Appendix A.1) further details annotator activity, with time spent and moves per annotation step. As expected, frame annotation of verbs (Step 2), was the most time consuming part of the task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 547, |
|
"end": 548, |
|
"text": "6", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 21, |
|
"end": 28, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 864, |
|
"end": 871, |
|
"text": "Table 4", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Summary statistics", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "Shared task participants received a development set consisting of 600 records from a total of 4,620 records, where Table 4 shows the statistics. The development set contained gold annotations for all three subtasks.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 115, |
|
"end": 122, |
|
"text": "Table 4", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Development Dataset", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "For all subtasks, as figure of merit, here we report the performance of participating systems with measures for evaluating text clustering techniques, including the classic measures of Purity (PU), inverse-Purity (IPU), and their harmonic mean (PIF) (Steinbach et al., 2000) , as well as the harmonic mean for BCubed precision and recall (i.e., BCP, BCR, and BCF, respectively) (Bagga and Baldwin, 1998) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 250, |
|
"end": 274, |
|
"text": "(Steinbach et al., 2000)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 378, |
|
"end": 403, |
|
"text": "(Bagga and Baldwin, 1998)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "To compute these measures for the pairing of reference-labeled data and unsupervised-labeled data (with each having an exact set of annotated items), we built a contingency table T with rows for gold labels and columns for unsupervised system labels. We filled the table with the number of intersecting items, as done in cross-tabulation of results in classification tasks to compute precision and recall. For Task A (Section 3), T tracks the unsupervised system labels and the gold reference labels assigned to verbs. For Task B.1, we labeled the rows and columns of T with tuples (l v , l a ), where l v labels the frame evoking verb and l a labels the FE filler. For Task B.2, the rows and columns in T track the unsupervised system labels and the gold reference labels (generic semantic roles) assigned to arguments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "5" |
|
}, |
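{

"text": "A minimal sketch (not the official scorer; function names are illustrative) of how PU, IPU, PIF, and the BCubed measures can be computed from parallel lists of system and gold labels, equivalent to reading them off such a contingency table:\n\nfrom collections import Counter\n\ndef purity(system, gold):\n    # PU: credit each system cluster with its majority gold class.\n    clusters = {}\n    for s, g in zip(system, gold):\n        clusters.setdefault(s, []).append(g)\n    return sum(max(Counter(c).values()) for c in clusters.values()) / len(gold)\n\ndef pif(system, gold):\n    # PIF: harmonic mean of purity (PU) and inverse purity (IPU).\n    pu, ipu = purity(system, gold), purity(gold, system)\n    return 2 * pu * ipu / (pu + ipu)\n\ndef bcubed(system, gold):\n    # Item-averaged BCubed precision, recall, and F (Bagga and Baldwin, 1998).\n    pairs = list(zip(system, gold))\n    n = len(pairs)\n    bcp = sum(sum(1 for s, g in pairs if s == si and g == gi) / sum(1 for s, _ in pairs if s == si) for si, gi in pairs) / n\n    bcr = sum(sum(1 for s, g in pairs if s == si and g == gi) / sum(1 for _, g in pairs if g == gi) for si, gi in pairs) / n\n    return bcp, bcr, 2 * bcp * bcr / (bcp + bcr)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Evaluation Metrics",

"sec_num": "5"

},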
|
{ |
|
"text": "These performance measures reflect a notion of similarity between the distribution of unsupervised labels and that of the gold reference labels, given certain criteria. Specifically, they define the notions of consistency and completeness of automatically generated clusters based on the evaluation data. Each method measures consistency and completeness in its own way, and alone may lack sufficient information for a clear understanding and analysis of system performance (Amig\u00f3 et al., 2009) . But, as the single metric for system ranking, we used the BCF measure, given its satisfactory behavior in certain situations. Note that we modeled the task and its evaluation as hard clustering, where a record receives only one label, without overlap in any generated category of items.", |
|
"cite_spans": [ |
|
{ |
|
"start": 474, |
|
"end": 494, |
|
"text": "(Amig\u00f3 et al., 2009)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Similar to other clustering tasks, we use baselines of random, all-in-one-cluster (AIN1), and one-cluster-per-instance (1CPI). Additionally, we adapted the baseline of the most frequent sense in WSI for these tasks by introducing the one-cluster-per-head (1CPH) baseline in Task A, and one-cluster-per-syntactic-category (1CPG) for verb argument clustering in Task B.2. 7 For Task B.1, we built a baseline, 1CPGH for labeling verbs with their lemmas (as in 1CPH) and FEs with grammatical relation to their heads (as in 1CPG). We included two more labels lcmpx and rcmpx for frame fillers with no direct syntactic relation to the head verb, if occurring left of or right of the verb, respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Both 1CPH and 1CPG (and their combination for Task B.1) are hard to beat because of the longtailed distribution of the frequency of our test data. E.g., most verbs frequently instantiate one particular frame and rarely other ones. Similarly, a particular role (FE) frequently is filled by words that have a particular grammatical relation to its governing verb; e.g., most subjects of most verb forms receive the agent label in their subcategorization frame (or, an agent-like element in their Frame Semantics representations). Evidently the chosen labels for grammatical relations influences 1CPG and 1CPHG scores. Values reported later (specifically, Tables 6 and 2) could be improved by employing heuristics, e.g., relabeling enhanced dependencies using a few rules.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "5.1" |
|
}, |
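{

"text": "Each of these baselines reduces to labeling every instance with a deterministic key. A minimal sketch (assuming each instance carries its verb lemma and the grammatical relation of the argument to its head; the field names are illustrative, not the task's data format):\n\ndef one_cluster_per_head(instances):\n    # 1CPH: all occurrences of the same verb lemma form one cluster.\n    return [inst['lemma'] for inst in instances]\n\ndef one_cluster_per_gram(instances):\n    # 1CPG: arguments are clustered by grammatical relation (subj, obj, ...),\n    # with lcmpx/rcmpx for fillers lacking a direct relation to the verb.\n    return [inst['gram_rel'] for inst in instances]\n\ndef one_cluster_per_hg(instances):\n    # 1CPHG: pair the verb lemma with the argument's grammatical relation.\n    return [(inst['lemma'], inst['gram_rel']) for inst in instances]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Baselines",

"sec_num": "5.1"

},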
|
{ |
|
"text": "We also employed one unsupervised and a second supervised system baselines. For the unsupervised one, we trained the system with data from Kallmeyer et al. (2018) . For the supervised one, we used OPEN-SESAME, a state-of-the-art supervised FrameNet tagger (Swayamdipta et al., 2018) . After converting its output to the format of the present task, we evaluated it similar to other systems. Both systems were trained out-of-thebox with no additional tuning.", |
|
"cite_spans": [ |
|
{ |
|
"start": 139, |
|
"end": 162, |
|
"text": "Kallmeyer et al. (2018)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 256, |
|
"end": 282, |
|
"text": "(Swayamdipta et al., 2018)", |
|
"ref_id": "BIBREF38" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We received submissions from nine teams (13 participants). Only three chose to submit system description papers. provided a solution for Task A and Task B.2, using both sets of these results to address Task B.1. Task A used language models and Hearst-like patterns to tune and obtain contextualized vector representations for the verbs in the test set. A hierarchical agglomerative clustering method followed, where hyperparameters were set with labeled and unlabeled records from the development and test sets. Task B.2 employed a logistic regression trained over the development set to identify only the most frequent labels. The classifier was based on features obtained from a language model and hand-crafted rules. Using logistic regression and training this algorithm with the development set remains an issue of concern, given the intended unsupervised scenario. While we objected to using the development set to train a supervised system for this subtask, we still report its scores. The differences between its results and those of the other systems may be informative. Still, we considered Arefyev et al.'s results for Task B only complementarily, not to rank the systems. Anwar et al. (2019) proposed a method that was similar to that of . Arefyev et al. used contextualized word embeddings from the BERT language modeling tool Devlin et al. (2018) , whereas Anwar et al. used pre-trained embeddings. They merged the outputs of Tasks A and B.2 for Task B.1. Task A used agglomerative clustering of vectors with concatenated verb representation vectors and vectors that represent usage context. Task B.2 employed hand crafted features, a method to encode syntactic information, and again an agglomerative clustering method. Ribeiro et al. (2019) also reported results for all subtasks using similar techniques to those reported in the other two submitted papers. Ribeiro et al. (2019) used the bidirectional neural language model BERT, which Arefyev et al. (2019) also used. Task A employed contextualized word representations proposed in (Ustalov et al., 2018) , and Biemann's clustering algorithm (Biemann, 2006) . Compared to the two other systems, Ribeiro et al. (2019) exploited input structures, weighted them, and used them elegantly in its algorithm. With the same method but different hyper-parameters for B.2 along with combining results from Task A, Ribeiro et al. (2019) offered a solution to B.1. Table 2 reports the BCF scores for system submissions along with a baseline for each task. 8 As the table shows, each system performs best only in one of the tasks. We report Arefyev et al.'s submission for Tasks B.1 and B.2 only to show the benefit of using a small amount of training data and a supervised method together with a clustering algorithm, provided that such training data is available. As readers know, finding the optimal (actual) number of clusters is an open research area. Participants knew the number of clusters: whereas Arefyev et al. and Anwar et al. used this information, Ribeiro et al. opted for a statistical method tuned with data that we provided.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1183, |
|
"end": 1202, |
|
"text": "Anwar et al. (2019)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 1339, |
|
"end": 1359, |
|
"text": "Devlin et al. (2018)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 1734, |
|
"end": 1755, |
|
"text": "Ribeiro et al. (2019)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 1873, |
|
"end": 1894, |
|
"text": "Ribeiro et al. (2019)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 2049, |
|
"end": 2071, |
|
"text": "(Ustalov et al., 2018)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 2109, |
|
"end": 2124, |
|
"text": "(Biemann, 2006)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 2162, |
|
"end": 2183, |
|
"text": "Ribeiro et al. (2019)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 2371, |
|
"end": 2392, |
|
"text": "Ribeiro et al. (2019)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 2961, |
|
"end": 3030, |
|
"text": "Arefyev et al. and Anwar et al. used this information, Ribeiro et al.", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 2420, |
|
"end": 2427, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "System Descriptions", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The baseline systems, the unsupervised method of Kallmeyer et al. (2018) Table 2 : Summary of Results. The BASELINE for Task A is 1CPH, and for B.1 and B.2 is 1CPHG. Best results appear in bold face; discarded results are crossed out. Table 6 lists all other baselines. of all systems regarding BCF. This result is not surprising since that work did not effectively handle MWUs in the test, where only the head of the MWU was kept. However, the output of Open-SESAME, and its low BCF was indeed surprising.", |
|
"cite_spans": [ |
|
{ |
|
"start": 49, |
|
"end": 72, |
|
"text": "Kallmeyer et al. (2018)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 73, |
|
"end": 80, |
|
"text": "Table 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 235, |
|
"end": 242, |
|
"text": "Table 6", |
|
"ref_id": "TABREF11" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Data Analysis", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "We fed Open-SESAME the sentences from the test set; it identified approximately 5k frames. However, the overlap with the test set was only 1,216 records (identification problem in Open-SESAME). These 1,216 records exhibit a mismatch between 536 of the arguments and their respective target verbs. We ignored the system's extra or incorrectly generated arguments, and replaced the missing items with those of the 1CPHG baseline records. We then used the resulting records for evaluation against the task's gold data as did the task's participants. As Table 3 shows, the unsupervised method outperforms the supervised system for all tasks by a wide margin, i.e.,the unsupervised label set can carry more information than does the supervised label set. We compared results for confidence measure that annotators assigned to records. First, we split the evaluation records according to their assigned confidence value into five subsets E i , 1 \u2264 i \u2264 5, such that subset E 1 contained only records with confidence value 1, E 2 contained only record with confidence value 2, etc.. Then we evaluated system outputs on each subset E i and logged that BCF. Later, we performed this evaluation cumulatively using subsets E i s by adding records from all E j s to E i where i < j. Interpreting the obtained values requires careful attention (e.g., changes in the prior probabilities of gold clusters and their cardinality must be taken into account), overall, we observed a similar trend for all systems: as expected, namely a positive correlation between the confidence value and BCF. Thus, what human annotators usually found hard to annotate, automatic systems also found hard to cluster. (The reverse relation does not hold). Or, pessimistically, the level of noise in annotation increases as their associated confidence decreases. (Table 7 in Appendix A.2 details the results.)", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 550, |
|
"end": 557, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 1825, |
|
"end": 1833, |
|
"text": "(Table 7", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Data Analysis", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Finally, we wanted to identify the frames that machines found difficult to cluster. To estimate difficulty we used the differences in BCF under the following conditions. We repeated the evaluation process 1 \u2264 i \u2264 n times (where n is the number of gold labels for a task) for each system. In each iteration i, we removed all data items of a gold category i. We measured and noted the resulting BCF in the given iteration; we deduced the score from the system performance over the entire gold set. To cancel frequency effects, we normalized the differences by the number of gold data instances. We removed all records annotated as COMMERCE SELL from the evaluation set E to form E . We computed the BCF of the systems over E (E \u2282 E), and measured d = E BCF \u2212 E BCF . We interpreted a positive difference as an easy to cluster gold category i, and a negative difference as a hard to cluster gold category i.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Data Analysis", |
|
"sec_num": "7" |
|
}, |
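{

"text": "A minimal sketch of this leave-one-category-out difficulty estimate (bcubed_f stands for any BCF scorer over parallel label lists; the names are illustrative):\n\ndef category_difficulty(system, gold, category, bcubed_f):\n    # BCF over the full evaluation set E.\n    full = bcubed_f(system, gold)\n    # BCF over E' = E minus all items of the given gold category.\n    keep = [i for i, g in enumerate(gold) if g != category]\n    reduced = bcubed_f([system[i] for i in keep], [gold[i] for i in keep])\n    # d = E_BCF - E'_BCF, normalized by the category's gold frequency;\n    # d > 0 marks an easy-to-cluster category, d < 0 a hard one.\n    freq = len(gold) - len(keep)\n    return (full - reduced) / freq",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Results and Data Analysis",

"sec_num": "7"

},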
|
{ |
|
"text": "The heat maps in Table 8 and Table 9 show a summary of the results for Task A and Task B.2, respectively. All systems performed similarly for approximately 30% of the gold classes. Comparing differences across systems and the baselines of 1CPH and 1CPG reveals (possibly) interesting information. Thus, for example, in Task A, most systems found COMMERCE SELL hard and COMMERCE BUY easy to cluster. Interestingly, a set of six verbs evokes each frame: buy, purchase, buy back, buy up, buy out, buy into for COMMERCE BUY; and sell, retail, auction, place, deal, resell for COMMERCE SELL. From these two sets of verbs, three are polysemous: buy in the former, and place and deal in the latter. Does the morphology of the verbs (e.g., buy-back, resell) make one easy to cluster? Alternatively, are other factors at play, such as the number of verb instances? How these factors might influence the proposed naive BCF-difference model is an open question.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 17, |
|
"end": 36, |
|
"text": "Table 8 and Table 9", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Data Analysis", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "We have presented the SemEval 2019 task on unsupervised lexical frame induction. We described the task in detail, provided a summary of methods that participants developed, and compared the results. Although much room for improvement of the task remains, we consider it a step forward. It employed a well-motivated typology of lexical frames to distinguish lexical frame induction tasks. The evaluation data derived from annotations of a well-known resource, namely a portion of WSJ sentences, perhaps the most annotated corpus of English. These features provide opportunities for future investigation, in particular in studies related to reciprocal relations between syntactic and lexical semantic frame structures.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Concluding Remarks", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "One reason to promote using unsupervised methods is their inherent flexibility to embrace unknown data. These methods have a high margin of tolerance for noise, and perform better than supervised method with insufficient training data. For unsupervised data, obtaining or generating training data is easier than doing so with supervised methods because they simply do not require annotation. For example, all participant systems could collect similar unlabeled training data from only syntactically annotated corpora to generate more unlabeled records. Ultimately, such methods can achieve respectable performance, and produce clusters which are both more informative than the unlabeled input and supervised categories (under certain situations). As shown, unsupervised methods can even outperform a state-of-the-art Frame Semantics parser by a wide margin (Section 7), while a very large gap remains for improvements in future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Concluding Remarks", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "A.1 Appendix I: Annotation Process", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Appendices", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Step Table 4 shows the amount of effort to develop the SemEval dataset in terms of time and moves that the annotation system recorded. (See Sections 4.3, 4.4).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 5, |
|
"end": 12, |
|
"text": "Table 4", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A.1.1 Time and Moves per Annotation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Time Table Table 6 extends Table 2 . Section 5 defines the abbreviations. A horizontal line separates participating systems and the baselines. Table 7 : Changes in BCF score of systems relative to changes in evaluation records based on assigned confidence measure.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 5, |
|
"end": 36, |
|
"text": "Table Table 6 extends Table 2", |
|
"ref_id": "TABREF11" |
|
}, |
|
{ |
|
"start": 145, |
|
"end": 152, |
|
"text": "Table 7", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Annotator Activity", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://sfa.phil.hhu.de/. 2 See https://competitions.codalab.org/ competitions/19159 for accessing the task's language resources, tools, and further technical details.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Dark red small caps indicate FN frames.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The annotation guidelines (Q. Zadeh and Petruck, 2019) discuss decisions about marking semantic heads and the complex situations resulting from it for argument annotation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Q. Zadeh and Petruck describe the issues in detail. that annotators label the core FEs of the chosen frame by first identifying their semantic head, which first may have required marking MWUs, e.g., Wall+Street in Example 3, below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "With the exception of a few verbs, annotators rarely changed the annotation system's rule-based suggestions of VerbNet semantic roles.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We use syntactic dependencies of the Enhanced Universal Dependencies formalism(Schuster and Manning, 2016).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The full list of baselines and performance measures appear inTable 6of the Appendix.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This research was funded by DFG -SFB991. We thank Timm Lichte, Rainer Oswald, Curt Anderson, and Kurt Erbach. We also thank the LDC for its generous support, and the NVIDIA Corporation for the Titan Xp GPU used in this work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Semeval-2007 task 02: Evaluating word sense induction and discrimination systems", |
|
"authors": [ |
|
{ |
|
"first": "Eneko", |
|
"middle": [], |
|
"last": "Agirre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aitor", |
|
"middle": [], |
|
"last": "Soroa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 4th International Workshop on Semantic Evaluation, Se-mEval '07", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7--12", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eneko Agirre and Aitor Soroa. 2007. Semeval-2007 task 02: Evaluating word sense induction and dis- crimination systems. In Proceedings of the 4th In- ternational Workshop on Semantic Evaluation, Se- mEval '07, pages 7-12, Stroudsburg, PA, USA. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "A comparison of extrinsic clustering evaluation metrics based on formal constraints", |
|
"authors": [ |
|
{ |
|
"first": "Enrique", |
|
"middle": [], |
|
"last": "Amig\u00f3", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julio", |
|
"middle": [], |
|
"last": "Gonzalo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Javier", |
|
"middle": [], |
|
"last": "Artiles", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Felisa", |
|
"middle": [], |
|
"last": "Verdejo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Inf. Retr", |
|
"volume": "12", |
|
"issue": "4", |
|
"pages": "461--486", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1007/s10791-008-9066-8" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Enrique Amig\u00f3, Julio Gonzalo, Javier Artiles, and Felisa Verdejo. 2009. A comparison of extrinsic clustering evaluation metrics based on formal con- straints. Inf. Retr., 12(4):461-486.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Hm 2 at semeval 2019 task 2: Unsupervised frame induction using contextualized and uncontextualized word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Saba", |
|
"middle": [], |
|
"last": "Anwar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dmitry", |
|
"middle": [], |
|
"last": "Ustalov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nikolay", |
|
"middle": [], |
|
"last": "Arefyev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simone", |
|
"middle": [ |
|
"Paolo" |
|
], |
|
"last": "Ponzetto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Biemann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Panchenko", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of The 13th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Saba Anwar, Dmitry Ustalov, Nikolay Arefyev, Si- mone Paolo Ponzetto, Chris Biemann, and Alexan- der Panchenko. 2019. Hm 2 at semeval 2019 task 2: Unsupervised frame induction using contextu- alized and uncontextualized word embeddings. In Proceedings of The 13th International Workshop on Semantic Evaluation.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Neural granny at semeval 2019 task 2: A combined approach for better modeling of semantic relationships in semantic frame induction", |
|
"authors": [ |
|
{ |
|
"first": "Nikolay", |
|
"middle": [], |
|
"last": "Arefyev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Boris", |
|
"middle": [], |
|
"last": "Sheludko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adis", |
|
"middle": [], |
|
"last": "Davletov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dmitry", |
|
"middle": [], |
|
"last": "Kharchev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Nevidomsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Panchenko", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of The 13th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nikolay Arefyev, Boris Sheludko, Adis Davletov, Dmitry Kharchev, Alex Nevidomsky, , and Alexan- der Panchenko. 2019. Neural granny at semeval 2019 task 2: A combined approach for better model- ing of semantic relationships in semantic frame in- duction. In Proceedings of The 13th International Workshop on Semantic Evaluation.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Entitybased cross-document coreferencing using the vector space model", |
|
"authors": [ |
|
{ |
|
"first": "Amit", |
|
"middle": [], |
|
"last": "Bagga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Breck", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proceedings of the 17th International Conference on Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "79--85", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/980451.980859" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Amit Bagga and Breck Baldwin. 1998. Entity- based cross-document coreferencing using the vec- tor space model. In Proceedings of the 17th Inter- national Conference on Computational Linguistics - Volume 1, COLING '98, pages 79-85, Stroudsburg, PA, USA. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Abstract meaning representation for sembanking", |
|
"authors": [ |
|
{ |
|
"first": "Laura", |
|
"middle": [], |
|
"last": "Banarescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Bonial", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shu", |
|
"middle": [], |
|
"last": "Cai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Madalina", |
|
"middle": [], |
|
"last": "Georgescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kira", |
|
"middle": [], |
|
"last": "Griffitt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ulf", |
|
"middle": [], |
|
"last": "Hermjakob", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nathan", |
|
"middle": [], |
|
"last": "Schneider", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "178--186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th Linguis- tic Annotation Workshop and Interoperability with Discourse, pages 178-186. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Chinese whispers -an efficient graph clustering algorithm and its application to natural language processing problems", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Biemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of TextGraphs: the First Workshop on Graph Based Methods for Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "73--80", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Biemann. 2006. Chinese whispers -an efficient graph clustering algorithm and its application to nat- ural language processing problems. In Proceedings of TextGraphs: the First Workshop on Graph Based Methods for Natural Language Processing, pages 73-80. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Renewing and revising semlink", |
|
"authors": [ |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Bonial", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Stowe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2nd Workshop on Linked Data in Linguistics (LDL-2013): Representing and linking lexicons, terminologies and other language data", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "9--17", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Claire Bonial, Kevin Stowe, and Martha Palmer. 2013. Renewing and revising semlink. In Proceedings of the 2nd Workshop on Linked Data in Linguistics (LDL-2013): Representing and linking lexicons, ter- minologies and other language data, pages 9 -17, Pisa, Italy. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "EngVallex -English valency lexicon. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "Silvie", |
|
"middle": [], |
|
"last": "Cinkov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eva", |
|
"middle": [], |
|
"last": "Fu\u010d\u00edkov\u00e1", |
|
"suffix": "" |
|
},

{

"first": "Jana",

"middle": [],

"last": "\u0160indlerov\u00e1",

"suffix": ""

},

{

"first": "Jan",

"middle": [],

"last": "Haji\u010d",

"suffix": ""

}

],
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Silvie Cinkov\u00e1, Eva Fu\u010d\u00edkov\u00e1, Jana\u0160indlerov\u00e1, and Jan Haji\u010d. 2014. EngVallex -English valency lex- icon. LINDAT/CLARIN digital library at the In- stitute of Formal and Applied Linguistics, Charles University.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1810.04805" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Frame Semantics and the Nature of Language", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Fillmore", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1976, |
|
"venue": "Origins and Evolution of Language and Speech", |
|
"volume": "280", |
|
"issue": "", |
|
"pages": "20--32", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1111/j.1749-6632.1976.tb25467.x" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. J. Fillmore. 1976. Frame Semantics and the Na- ture of Language. Annals of the New York Academy of Sciences, 280(Origins and Evolution of Language and Speech):20-32.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "The case for case", |
|
"authors": [ |
|
{ |
|
"first": "Charles", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Fillmore", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1968, |
|
"venue": "Universals in Linguistic Theory", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--88", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Charles J. Fillmore. 1968. The case for case. In Uni- versals in Linguistic Theory, pages 1-88. Holt Rine- hart and Winston, New York.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "General Introduction", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Gamerschlag", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Doris", |
|
"middle": [], |
|
"last": "Gerland", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rainer", |
|
"middle": [], |
|
"last": "Osswald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wiebke", |
|
"middle": [], |
|
"last": "Petersen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1007/978-3-319-01541-5_1" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Gamerschlag, Doris Gerland, Rainer Osswald, and Wiebke Petersen, editors. 2014. General Intro- duction. Springer International Publishing, Cham.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Semeval-2007 task 04: Classification of semantic relations between nominals", |
|
"authors": [ |
|
{ |
|
"first": "Roxana", |
|
"middle": [], |
|
"last": "Girju", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Preslav", |
|
"middle": [], |
|
"last": "Nakov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vivi", |
|
"middle": [], |
|
"last": "Nastase", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stan", |
|
"middle": [], |
|
"last": "Szpakowicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Turney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deniz", |
|
"middle": [], |
|
"last": "Yuret", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the Fourth International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "13--18", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roxana Girju, Preslav Nakov, Vivi Nastase, Stan Sz- pakowicz, Peter Turney, and Deniz Yuret. 2007. Semeval-2007 task 04: Classification of semantic relations between nominals. In Proceedings of the Fourth International Workshop on Semantic Evalu- ation (SemEval-2007), pages 13-18. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Inducing frame semantic verb classes from WordNet and LDOCE", |
|
"authors": [ |
|
{ |
|
"first": "Rebecca", |
|
"middle": [], |
|
"last": "Green", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bonnie", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Dorr", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Resnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the 42Nd Annual Meeting on Association for Computational Linguistics, ACL '04", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/1218955.1219003" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rebecca Green, Bonnie J. Dorr, and Philip Resnik. 2004. Inducing frame semantic verb classes from WordNet and LDOCE. In Proceedings of the 42Nd Annual Meeting on Association for Computational Linguistics, ACL '04, Stroudsburg, PA, USA. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Announcing prague czech-english dependency treebank 2.0", |
|
"authors": [ |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Haji\u010d", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eva", |
|
"middle": [], |
|
"last": "Haji\u010dov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jarmila", |
|
"middle": [], |
|
"last": "Panevov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Petr", |
|
"middle": [], |
|
"last": "Sgall", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ond\u0159ej", |
|
"middle": [], |
|
"last": "Bojar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Silvie", |
|
"middle": [], |
|
"last": "Cinkov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eva", |
|
"middle": [], |
|
"last": "Fu\u010d\u00edkov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marie", |
|
"middle": [], |
|
"last": "Mikulov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Petr", |
|
"middle": [], |
|
"last": "Pajas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Popelka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ji\u0159\u00ed", |
|
"middle": [], |
|
"last": "Semeck\u00fd", |
|
"suffix": "" |
|
}, |
|
{

"first": "Jana",

"middle": [],

"last": "\u0160indlerov\u00e1",

"suffix": ""

},

{

"first": "Jan",

"middle": [],

"last": "\u0160t\u011bp\u00e1nek",

"suffix": ""

},

{

"first": "Josef",

"middle": [],

"last": "Toman",

"suffix": ""

},

{

"first": "Zde\u0148ka",

"middle": [],

"last": "Ure\u0161ov\u00e1",

"suffix": ""

},

{

"first": "Zden\u011bk",

"middle": [],

"last": "\u017dabokrtsk\u00fd",

"suffix": ""

}
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jan Haji\u010d, Eva Haji\u010dov\u00e1, Jarmila Panevov\u00e1, Petr Sgall, Ond\u0159ej Bojar, Silvie Cinkov\u00e1, Eva Fu\u010d\u00edkov\u00e1, Marie Mikulov\u00e1, Petr Pajas, Jan Popelka, Ji\u0159\u00ed Se- meck\u00fd, Jana\u0160indlerov\u00e1, Jan\u0160t\u011bp\u00e1nek, Josef Toman, Zde\u0148ka Ure\u0161ov\u00e1, and Zden\u011bk\u017dabokrtsk\u00fd. 2012. Announcing prague czech-english dependency tree- bank 2.0. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012). European Language Resources Asso- ciation (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Out-of-domain framenet semantic role labeling", |
|
"authors": [ |
|
{ |
|
"first": "Silvana", |
|
"middle": [], |
|
"last": "Hartmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilia", |
|
"middle": [], |
|
"last": "Kuznetsov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Teresa", |
|
"middle": [], |
|
"last": "Martin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iryna", |
|
"middle": [], |
|
"last": "Gurevych", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "471--482", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Silvana Hartmann, Ilia Kuznetsov, Teresa Martin, and Iryna Gurevych. 2017. Out-of-domain framenet se- mantic role labeling. In Proceedings of the 15th Conference of the European Chapter of the Associa- tion for Computational Linguistics: Volume 1, Long Papers, volume 1, pages 471-482.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "OntoNotes: The 90% solution", |
|
"authors": [ |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mitchell", |
|
"middle": [], |
|
"last": "Marcus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lance", |
|
"middle": [], |
|
"last": "Ramshaw", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ralph", |
|
"middle": [], |
|
"last": "Weischedel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, NAACL-Short '06", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "57--60", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. OntoNotes: The 90% solution. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, NAACL-Short '06, pages 57-60, Stroudsburg, PA, USA. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Semeval-2013 task 13: Word sense induction for graded and non-graded senses", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Jurgens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ioannis", |
|
"middle": [], |
|
"last": "Klapaftis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Seventh International Workshop on Semantic Evaluation", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "290--299", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Jurgens and Ioannis Klapaftis. 2013. Semeval- 2013 task 13: Word sense induction for graded and non-graded senses. In Second Joint Conference on Lexical and Computational Semantics (* SEM), Vol- ume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), volume 2, pages 290-299.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Coarse lexical frame acquisition at the syntax-semantics interface using a latentvariable pcfg model", |
|
"authors": [ |
|
{ |
|
"first": "Laura", |
|
"middle": [], |
|
"last": "Kallmeyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Behrang", |
|
"middle": [], |
|
"last": "Qasemizadeh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jackie Chi Kit", |
|
"middle": [], |
|
"last": "Cheung", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "130--141", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Laura Kallmeyer, Behrang QasemiZadeh, and Jackie Chi Kit Cheung. 2018. Coarse lexical frame acquisi- tion at the syntax-semantics interface using a latent- variable pcfg model. In Proceedings of the Seventh Joint Conference on Lexical and Computational Se- mantics, pages 130-141, New Orleans, Louisiana. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Class-based construction of a verb lexicon", |
|
"authors": [ |
|
{ |
|
"first": "Karin", |
|
"middle": [], |
|
"last": "Kipper", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hoa", |
|
"middle": [ |
|
"Trang" |
|
], |
|
"last": "Dang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the Seventeenth National Conference on Artificial Intelligence and Twelfth Conference on Innovative Applications of Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "691--696", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Karin Kipper, Hoa Trang Dang, and Martha Palmer. 2000. Class-based construction of a verb lexicon. In Proceedings of the Seventeenth National Confer- ence on Artificial Intelligence and Twelfth Confer- ence on Innovative Applications of Artificial Intelli- gence, pages 691-696. AAAI Press.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Identifying hypernyms in distributional semantic spaces", |
|
"authors": [ |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Lenci", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Giulia", |
|
"middle": [], |
|
"last": "Benotto", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "75--79", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alessandro Lenci and Giulia Benotto. 2012. Identify- ing hypernyms in distributional semantic spaces. In SemEval 2012, pages 75-79. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Semeval-2010 task 14: Word sense induction & disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "Suresh", |
|
"middle": [], |
|
"last": "Manandhar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ioannis", |
|
"middle": [], |
|
"last": "Klapaftis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dmitriy", |
|
"middle": [], |
|
"last": "Dligach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sameer", |
|
"middle": [], |
|
"last": "Pradhan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 5th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "63--68", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Suresh Manandhar, Ioannis Klapaftis, Dmitriy Dligach, and Sameer Pradhan. 2010. Semeval-2010 task 14: Word sense induction & disambiguation. In Pro- ceedings of the 5th International Workshop on Se- mantic Evaluation, pages 63-68, Uppsala, Sweden. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Lda-frames: An unsupervised approach togenerating semantic frames", |
|
"authors": [ |
|
{ |
|
"first": "Ji\u0159\u00ed", |
|
"middle": [], |
|
"last": "Materna", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Computational Linguistics and Intelligent Text Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "376--387", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1007/978-3-642-28604-9_31" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ji\u0159\u00ed Materna. 2012. Lda-frames: An unsupervised ap- proach togenerating semantic frames. In Compu- tational Linguistics and Intelligent Text Processing, pages 376-387, Berlin, Heidelberg. Springer Berlin Heidelberg.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Semeval-2007 task 10: English lexical substitution task", |
|
"authors": [ |
|
{ |
|
"first": "Diana", |
|
"middle": [], |
|
"last": "Mccarthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Navigli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the Fourth International Workshop on Semantic Evaluation (SemEval-2007)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "48--53", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diana McCarthy and Roberto Navigli. 2007. Semeval- 2007 task 10: English lexical substitution task. In Proceedings of the Fourth International Workshop on Semantic Evaluation (SemEval-2007), pages 48- 53. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "WordNet: A lexical database for English", |
|
"authors": [ |
|
{

"first": "George",

"middle": [

"A"

],

"last": "Miller",

"suffix": ""

}
|
], |
|
"year": 1995, |
|
"venue": "Commun. ACM", |
|
"volume": "38", |
|
"issue": "11", |
|
"pages": "39--41", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/219717.219748" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "George A. Miller. 1995. WordNet: A lexical database for English. Commun. ACM, 38(11):39-41.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Unsupervised induction of frame-semantic representations", |
|
"authors": [ |
|
{ |
|
"first": "Ashutosh", |
|
"middle": [], |
|
"last": "Modi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Titov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Klementiev", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the NAACL-HLT Workshop on the Induction of Linguistic Structure", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--7", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashutosh Modi, Ivan Titov, and Alexandre Klementiev. 2012. Unsupervised induction of frame-semantic representations. In Proceedings of the NAACL-HLT Workshop on the Induction of Linguistic Structure, pages 1-7, Montr\u00e9al, Canada. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Semeval-2013 task 11: Word sense induction and disambiguation within an end-user application", |
|
"authors": [ |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Navigli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniele", |
|
"middle": [], |
|
"last": "Vannella", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Seventh International Workshop on Semantic Evaluation", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "193--201", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roberto Navigli and Daniele Vannella. 2013. Semeval- 2013 task 11: Word sense induction and disam- biguation within an end-user application. In Second Joint Conference on Lexical and Computational Se- mantics (*SEM), Volume 2: Proceedings of the Sev- enth International Workshop on Semantic Evalua- tion (SemEval 2013), pages 193-201, Atlanta, Geor- gia, USA. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Towards comparability of linguistic graph banks for semantic parsing", |
|
"authors": [ |
|
{ |
|
"first": "Stephan", |
|
"middle": [], |
|
"last": "Oepen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Kuhlmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yusuke", |
|
"middle": [], |
|
"last": "Miyao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Zeman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Silvie", |
|
"middle": [], |
|
"last": "Cinkova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Flickinger", |
|
"suffix": "" |
|
},

{

"first": "Jan",

"middle": [],

"last": "Hajic",

"suffix": ""

},

{

"first": "Angelina",

"middle": [],

"last": "Ivanova",

"suffix": ""

},

{

"first": "Zdenka",

"middle": [],

"last": "Uresova",

"suffix": ""

}

],
|
"year": 2016, |
|
"venue": "LREC 2016", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Silvie Cinkova, Dan Flickinger, Jan Hajic, Angelina Ivanova, and Zdenka Uresova. 2016. Towards comparability of linguistic graph banks for semantic parsing. In LREC 2016, Paris, France. ELRA.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Verbnet: Verbnet: Capturing english verb behavior, meaning, and usage", |
|
"authors": [ |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Bonial", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jena", |
|
"middle": [], |
|
"last": "Hwang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "The Oxford Handbook of Cognitive Science", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1093/oxfordhb/9780199842193.013.15" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Martha Palmer, Claire Bonial, and Jena Hwang. 2017. Verbnet: Verbnet: Capturing english verb behavior, meaning, and usage. In The Oxford Handbook of Cognitive Science. Oxford Press.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "The proposition bank: An annotated corpus of semantic roles", |
|
"authors": [ |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Gildea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Kingsbury", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Comput. Linguist", |
|
"volume": "31", |
|
"issue": "1", |
|
"pages": "71--106", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/0891201053630264" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Comput. Linguist., 31(1):71-106.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Automatic induction of framenet lexical units", |
|
"authors": [ |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Pennacchiotti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Diego", |
|
"middle": [ |
|
"De" |
|
], |
|
"last": "Cao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Basili", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danilo", |
|
"middle": [], |
|
"last": "Croce", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '08", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "457--465", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marco Pennacchiotti, Diego De Cao, Roberto Basili, Danilo Croce, and Michael Roth. 2008. Automatic induction of framenet lexical units. In Proceedings of the Conference on Empirical Methods in Natu- ral Language Processing, EMNLP '08, pages 457- 465, Stroudsburg, PA, USA. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Guidelines for the semantic frame annotation system. corpus annotation guidelines TR.9", |
|
"authors": [ |
|
{

"first": "Behrang",

"middle": [

"Q"

],

"last": "Zadeh",

"suffix": ""

},

{

"first": "Miriam",

"middle": [

"R L"

],

"last": "Petruck",

"suffix": ""

}

],

"year": 2019,
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Behrang Q. Zadeh and Miriam R. L. Petruck. 2019. Guidelines for the semantic frame annotation sys- tem. corpus annotation guidelines TR.9.2018, SFB991 -ICSI.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Semantic proto-roles", |
|
"authors": [ |
|
{ |
|
"first": "Drew", |
|
"middle": [], |
|
"last": "Reisinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rachel", |
|
"middle": [], |
|
"last": "Rudinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francis", |
|
"middle": [], |
|
"last": "Ferraro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Craig", |
|
"middle": [], |
|
"last": "Harman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyle", |
|
"middle": [], |
|
"last": "Rawlins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "475--488", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Drew Reisinger, Rachel Rudinger, Francis Ferraro, Craig Harman, Kyle Rawlins, and Benjamin Van Durme. 2015. Semantic proto-roles. Transactions of the Association for Computational Linguistics, 3:475-488.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "L2F/INESC-ID at SemEval-2019 Task 2: Unsupervised Lexical Semantic Frame Induction using Contextualized Word Representations", |
|
"authors": [ |
|
{ |
|
"first": "Eug\u00e9nio", |
|
"middle": [], |
|
"last": "Ribeiro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V\u00e2nia", |
|
"middle": [], |
|
"last": "Mendon\u00e7a", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ricardo", |
|
"middle": [], |
|
"last": "Ribeiro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Martins De Matos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alberto", |
|
"middle": [], |
|
"last": "Sardinha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ana", |
|
"middle": [ |
|
"L\u00facia" |
|
], |
|
"last": "Santos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lu\u00edsa", |
|
"middle": [], |
|
"last": "Coheur", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of The 13th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eug\u00e9nio Ribeiro, V\u00e2nia Mendon\u00e7a, Ricardo Ribeiro, David Martins de Matos, Alberto Sardinha, Ana L\u00facia Santos, and Lu\u00edsa Coheur. 2019. L2F/INESC-ID at SemEval-2019 Task 2: Un- supervised Lexical Semantic Frame Induction using Contextualized Word Representations. In Proceedings of The 13th International Workshop on Semantic Evaluation.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "FrameNet II: Extended Theory and Practice. ICSI", |
|
"authors": [ |
|
{ |
|
"first": "Josef", |
|
"middle": [], |
|
"last": "Ruppenhofer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Ellsworth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miriam", |
|
"middle": [ |
|
"R L" |
|
], |
|
"last": "Petruck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Collin", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Baker", |
|
"suffix": "" |
|
},

{

"first": "Jan",

"middle": [],

"last": "Scheffczyk",

"suffix": ""

}

],
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Josef Ruppenhofer, Michael Ellsworth, Miriam R. L. Petruck, Christopher R. Johnson, Collin F. Baker, and Jan Scheffczyk. 2016. FrameNet II: Extended Theory and Practice. ICSI, Berkeley.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Enhanced English Universal Dependencies: An improved representation for natural language understanding tasks", |
|
"authors": [ |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Schuster", |
|
"suffix": "" |
|
}, |
|
{

"first": "Christopher",

"middle": [

"D"

],

"last": "Manning",

"suffix": ""

}
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastian Schuster and Christopher D. Manning. 2016. Enhanced English Universal Dependencies: An im- proved representation for natural language under- standing tasks. In Proceedings of the Tenth Interna- tional Conference on Language Resources and Eval- uation (LREC 2016), Paris, France. European Lan- guage Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "A comparison of document clustering techniques", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Steinbach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Karypis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Kumar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "KDD Workshop on Text Mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Steinbach, G. Karypis, and V. Kumar. 2000. A com- parison of document clustering techniques. In KDD Workshop on Text Mining.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Syntactic scaffolds for semantic structures", |
|
"authors": [ |
|
{ |
|
"first": "Swabha", |
|
"middle": [], |
|
"last": "Swayamdipta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Thomson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3772--3782", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Swabha Swayamdipta, Sam Thomson, Kenton Lee, Luke Zettlemoyer, Chris Dyer, and Noah A. Smith. 2018. Syntactic scaffolds for semantic structures. In Proceedings of the 2018 Conference on Em- pirical Methods in Natural Language Processing, pages 3772-3782, Brussels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Unsupervised semantic frame induction using triclustering", |
|
"authors": [ |
|
{ |
|
"first": "Dmitry", |
|
"middle": [], |
|
"last": "Ustalov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Panchenko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrey", |
|
"middle": [], |
|
"last": "Kutuzov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Biemann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simone", |
|
"middle": [ |
|
"Paolo" |
|
], |
|
"last": "Ponzetto", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "55--62", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dmitry Ustalov, Alexander Panchenko, Andrey Kutu- zov, Chris Biemann, and Simone Paolo Ponzetto. 2018. Unsupervised semantic frame induction us- ing triclustering. In ACL, pages 55-62, Melbourne, Australia. ACL.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Output: Semantic Frame Tagging using labels learned by Unsupervised methods.", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Given semantically unlabeled structures (1a), annotate the input with semantic information learned via unsupervised methods (1b).", |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Subtasks of SemEval 2019 Task 2.", |
|
"num": null |
|
}, |
|
"FIGREF3": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "[ #s1 4 5 come from.ORIGIN] Similarly, for Task B.1 and Task B.2, respectively, the evaluation records are as follows here. B.1 [#s1 4 5 come from.ORIGIN Criticism-:-1-:-ENTITY Wall Street-:-6 7-:-ORIGIN] B.2 [#s1 4 5 come from.NA Criticism-:-1-:-THEME Wall Street-:-6 7-:-SOURCE]", |
|
"num": null |
|
}, |
|
"FIGREF4": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "plots the frequency distribution of the annotated frames in the gold data (SemEval).", |
|
"num": null |
|
}, |
|
"FIGREF5": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Frequency Distribution of Annotated FramesA.1.3 Some Frames and their AveragedConfidence", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"text": "in Appendix A.1 plots their frequency distribution.) FElements reports the number of annotated FEs categorized under the number of FE types shown in the FE-Type row. Sem-Arg shows the number of annotated verb arguments with VerbNet-like semantic roles, classified into 32 of 41 possible semantic role categories. Multiword lists the number of annotated MWUs", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table><tr><td/><td colspan=\"2\">SemEval Total</td><td colspan=\"3\">Skipped InProg Dev</td></tr><tr><td>Records</td><td>4,620</td><td>5,637</td><td>301</td><td>716</td><td>594</td></tr><tr><td>Sentences</td><td>3,346</td><td>3,803</td><td>294</td><td>675</td><td>582</td></tr><tr><td>Tokens</td><td>90,460</td><td colspan=\"2\">102,067 8,329</td><td>19,151</td><td>15198</td></tr><tr><td colspan=\"2\">Verb-Forms 273</td><td>373</td><td>93</td><td>210</td><td>35</td></tr><tr><td colspan=\"2\">Frame-Type 149</td><td>234</td><td>75</td><td>185</td><td>37</td></tr><tr><td>#FEs</td><td>9,510</td><td>11,269</td><td>373</td><td>1,386</td><td>1,128</td></tr><tr><td>FE-Type</td><td>198</td><td>270</td><td>64</td><td>197</td><td>62</td></tr><tr><td>Sem-Arg</td><td>9,466</td><td>11,215</td><td>370</td><td>1,379</td><td>1,079</td></tr><tr><td colspan=\"2\">Multi-word 2,366</td><td>2,773</td><td>61</td><td>346</td><td>368</td></tr><tr><td>Confidence</td><td>3.30</td><td>3.2</td><td>2.41</td><td>2.5</td><td>3.34</td></tr><tr><td>Time</td><td>539h</td><td>742h</td><td>25h</td><td>177h</td><td>19h</td></tr><tr><td>Total-Move</td><td>68,784</td><td>83,753</td><td>1,903</td><td>13,066</td><td>4,406</td></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF2": { |
|
"text": "", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null |
|
}, |
|
"TABREF3": { |
|
"text": "performed the worst", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table><tr><td>System</td><td>BCF</td><td>BCF</td><td>BCF</td></tr><tr><td colspan=\"2\">Arefyev et al. 70.70</td><td>63.12</td><td>64.09</td></tr><tr><td>Anwar et al.</td><td>68.10</td><td>49.49</td><td>42.1</td></tr><tr><td>Ribeiro et al.</td><td>65.32</td><td>42.75</td><td>45.65</td></tr><tr><td>BASELINE</td><td>65.35</td><td>45.79</td><td>39.03</td></tr><tr><td>Task A</td><td/><td>B.1</td><td>B.2</td></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF5": { |
|
"text": "Open-SESAME Performance", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null |
|
}, |
|
"TABREF7": { |
|
"text": "Total hours and number of moves for each annotation step for the 4,620 record dataset.", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table><tr><td>A.1.2 Plot of frequency of annotated frames</td></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF8": { |
|
"text": "FN frames annotated with the highest and lowest confidence.Table 4details hours spent to derive the evaluation data set. Section 4.3 discusses both tables. The full list of annotations in human readable form is available to browse and comment on at http://corpora.phil. hhu.de/fi/frames.html.", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table><tr><td>A.2 Appendix II: Statistical Summary of</td></tr><tr><td>Evaluation and System Submissions</td></tr><tr><td>A.2.1 Unabridged Results</td></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF9": { |
|
"text": "shows system BCF scores for confidence. The table shows changes in the BCF of systems when altering the evaluation set based on the assigned confidence for an annotated record. (See Section 7 for an explanation).", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table><tr><td>Frame Type</td><td colspan=\"3\">#VF #Rec Conf</td></tr><tr><td>DECIDING</td><td>1</td><td>13</td><td>4.31</td></tr><tr><td colspan=\"2\">AGREE OR REFUSE TO ACT 1</td><td>15</td><td>4.13</td></tr><tr><td>TAKE PLACE OF</td><td>1</td><td>11</td><td>4</td></tr><tr><td>BEING EMPLOYED</td><td>1</td><td>6</td><td>4</td></tr><tr><td>STATEMENT</td><td>8</td><td>149</td><td>3.97</td></tr><tr><td>TAKING SIDES</td><td>3</td><td>16</td><td>3.88</td></tr><tr><td>ACTIVITY STOP</td><td>4</td><td>16</td><td>3.88</td></tr><tr><td>COMMERCE SELL</td><td>6</td><td>168</td><td>3.82</td></tr><tr><td>BRINGING</td><td>1</td><td>5</td><td>3.8</td></tr><tr><td>GIVE IMPRESSION</td><td>4</td><td>39</td><td>3.79</td></tr><tr><td colspan=\"4\">(a) Frames with Highest Average Confidence</td></tr><tr><td>Frame Type</td><td colspan=\"3\">#VF #Rec Conf</td></tr><tr><td>BEING IN CONTROL</td><td>2</td><td>5</td><td>1.6</td></tr><tr><td>COMING TO BE</td><td>2</td><td>5</td><td>1.8</td></tr><tr><td>OPERATING A SYSTEM</td><td>2</td><td>10</td><td>1.8</td></tr><tr><td>AWARENESS</td><td>1</td><td>6</td><td>1.83</td></tr><tr><td>REMOVING</td><td>3</td><td>8</td><td>1.88</td></tr><tr><td colspan=\"2\">INTENTIONALLY CREATE 6</td><td>19</td><td>1.95</td></tr><tr><td>CERTAINTY</td><td>1</td><td>68</td><td>2.03</td></tr><tr><td>OPINION</td><td>2</td><td>91</td><td>2.1</td></tr><tr><td>THWARTING</td><td>2</td><td>22</td><td>2.32</td></tr><tr><td>FIRST RANK</td><td>1</td><td>21</td><td>2.38</td></tr><tr><td colspan=\"4\">(b) Frames with Lowest Average Confidence</td></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF10": { |
|
"text": "Frame types with the highest (5a) and the lowest (5b) confidence (Conf) by number of records (#Rec) with double annotator agreement. #VF reports the number of distinct verb forms that evoke a frame. 272 78.68 77.62 78.15 70.86 70.54 70.7 Anwar et al. 150 72.4 81.49 76.68 62.17 75.27 68.1 Ribeiro et al. 222 72.84 77.84 75.25 61.25 69.96 65.32 Kallmeyer et al. 776 72.47 72.16 72.31 62.73 63.51 63.12 Anwar et al. 338 55.74 67.79 61.18 43.22 57.9 49.49 Ribeiro et al. 518 52.29 57.56 54.8 39.43 46.69 42.75 Kallmeyer et al. 1023 72.24 49.12 58.48 62.71 37.51 46.94 77.49 56.25 74.46 64.09 Anwar et al. 2 50.43 80.47 62.00 29.58 73.00 42.1 Ribeiro et al. 7 58.25 71.4 64.16 36.88 59.91 45.65 Kallmeyer et al. 37 61.44 51.53 56.05 40.89 37.33 39.03 1CPG 37 61.44 51.53 56.05 40.89 37.33 39.03", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table><tr><td>System</td><td>#C</td><td>PU</td><td>IPU</td><td>PIF</td><td>BCP</td><td colspan=\"2\">BCR BCF</td></tr><tr><td>Arefyev et al.</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td colspan=\"7\">218 73.77 72.86 73.31 64.62 65.48 65.05</td></tr><tr><td>1CPI</td><td>4620</td><td>100</td><td>3.23</td><td>6.25</td><td>100</td><td>3.23</td><td>6.25</td></tr><tr><td>AIN1</td><td colspan=\"2\">1 13.87</td><td colspan=\"2\">100 24.37</td><td>3.78</td><td>100</td><td>7.28</td></tr><tr><td>1CPH</td><td colspan=\"7\">273 82.16 66.95 73.78 75.98 57.33 65.35</td></tr><tr><td>RANDOM</td><td colspan=\"2\">149 15.11</td><td>5.78</td><td>8.36</td><td>6.76</td><td>3.85</td><td>4.9</td></tr><tr><td/><td/><td/><td>Task A</td><td/><td/><td/><td/></tr><tr><td>System</td><td>#C</td><td>PU</td><td>IPU</td><td colspan=\"4\">PIF BCP BCR BCF</td></tr><tr><td>Arefyev et al.</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>1CPI</td><td>9510</td><td>100</td><td>4.58</td><td>8.77</td><td>100</td><td>4.58</td><td>8.77</td></tr><tr><td>AIN1</td><td>1</td><td>6.55</td><td>100</td><td>12.3</td><td>1.56</td><td>100</td><td>3.08</td></tr><tr><td>1CPHG</td><td colspan=\"7\">1203 78.46 45.99 57.99 71.11 33.77 45.79</td></tr><tr><td>RANDOM</td><td colspan=\"2\">436 11.34</td><td>6.04</td><td>7.88</td><td>6.03</td><td>4.81</td><td>5.35</td></tr><tr><td/><td/><td/><td>Task B.1</td><td/><td/><td/><td/></tr><tr><td>System</td><td>#C</td><td>PU</td><td>IPU</td><td colspan=\"4\">PIF BCP BCR BCF</td></tr><tr><td colspan=\"4\">Arefyev et al. 81.4 1CPI 14 73.94 9466 100 0.34</td><td>0.67</td><td>100</td><td>0.34</td><td>0.67</td></tr><tr><td>AIN1</td><td colspan=\"2\">1 34.34</td><td colspan=\"3\">100 51.13 21.66</td><td>100</td><td>35.6</td></tr><tr><td>RANDOM</td><td colspan=\"2\">32 34.65</td><td>4.75</td><td colspan=\"2\">8.36 21.89</td><td>3.45</td><td>5.96</td></tr><tr><td/><td/><td/><td>Task B.2</td><td/><td/><td/><td/></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF11": { |
|
"text": "Complete System Results and Baselines", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table><tr><td>Cnf</td><td>#I</td><td colspan=\"3\">Arefyev Anwar Ribeiro</td><td>Cnf</td><td>#I</td><td colspan=\"3\">Arefyev Anwar Ribeiro</td></tr><tr><td>1</td><td>4620</td><td>70.7</td><td>68.10</td><td>65.32</td><td>1</td><td>286</td><td>73.79</td><td>70.57</td><td>67.70</td></tr><tr><td>2</td><td>4334</td><td>71.87</td><td>69.28</td><td>66.57</td><td>2</td><td>677</td><td>66.45</td><td>63.80</td><td>60.46</td></tr><tr><td>3</td><td>3657</td><td>74.64</td><td>72.22</td><td>70.17</td><td>3</td><td>1,115</td><td>76.71</td><td>75.98</td><td>70.01</td></tr><tr><td>4</td><td>2542</td><td>76.46</td><td>73.82</td><td>73.43</td><td>4</td><td>2,458</td><td>76.65</td><td>74.05</td><td>73.45</td></tr><tr><td/><td>84</td><td>86.14</td><td>84.65</td><td>85.13</td><td>5</td><td>84</td><td>86.14</td><td>84.65</td><td>85.13</td></tr><tr><td/><td/><td>Task A</td><td/><td/><td/><td/><td>Task A</td><td/><td/></tr><tr><td>Cnf</td><td>#I</td><td colspan=\"3\">Arefyev Anwar Ribeiro</td><td>Cnf</td><td>#I</td><td colspan=\"3\">Arefyev Anwar Ribeiro</td></tr><tr><td>1</td><td>9,510</td><td>63.12</td><td>49.52</td><td>42.75</td><td>1</td><td>493</td><td>68.57</td><td>55.37</td><td>51.84</td></tr><tr><td>2</td><td>9017</td><td>64.20</td><td>50.44</td><td>43.61</td><td>2</td><td>1,411</td><td>59.86</td><td>49.08</td><td>42.16</td></tr><tr><td>3</td><td>7,606</td><td>67.18</td><td>53.40</td><td>46.42</td><td>3</td><td>2,250</td><td>70.67</td><td>57.97</td><td>47.60</td></tr><tr><td>4</td><td>5,356</td><td>68.70</td><td>55.99</td><td>49.20</td><td>4</td><td>5,187</td><td>68.70</td><td>56.01</td><td>49.24</td></tr><tr><td>5</td><td>169</td><td>85.16</td><td>81.85</td><td>65.60</td><td>5</td><td>169</td><td>85.16</td><td>81.85</td><td>65.60</td></tr><tr><td/><td/><td>Task B.1</td><td/><td/><td/><td/><td>Task B.1</td><td/><td/></tr><tr><td>Cnf</td><td>#I</td><td colspan=\"3\">Arefyev Anwar Ribeiro</td><td>Cnf</td><td>#I</td><td colspan=\"3\">Arefyev Anwar Ribeiro</td></tr><tr><td>1</td><td>9,466</td><td>64.09</td><td>42.12</td><td>45.65</td><td>1</td><td>553</td><td>52.69</td><td>39.82</td><td>38.21</td></tr><tr><td>2</td><td>8,911</td><td>64.98</td><td>42.32</td><td>46.27</td><td>2</td><td>1,385</td><td>58.36</td><td>40.99</td><td>41.55</td></tr><tr><td>3</td><td>7,528</td><td>66.47</td><td>42.67</td><td>47.52</td><td>3</td><td>2,236</td><td>69.01</td><td>48.07</td><td>49.4</td></tr><tr><td>4</td><td>5,292</td><td>65.71</td><td>40.67</td><td>46.95</td><td>4</td><td>5,125</td><td>65.44</td><td>40.37</td><td>46.72</td></tr><tr><td>5</td><td>167</td><td>77.19</td><td>55.18</td><td>56.58</td><td>5</td><td>167</td><td>77.19</td><td>55.18</td><td>56.58</td></tr><tr><td/><td/><td>Task B.2</td><td/><td/><td/><td/><td>Task B.2</td><td/><td/></tr><tr><td/><td/><td colspan=\"2\">Cumulative</td><td/><td/><td/><td/><td/><td/></tr></table>", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |