|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T10:44:08.818391Z" |
|
}, |
|
"title": "LOME: Large Ontology Multilingual Extraction", |
|
"authors": [ |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Xia", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Johns Hopkins University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Guanghui", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Johns Hopkins University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Siddharth", |
|
"middle": [], |
|
"last": "Vashishtha", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Rochester", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Yunmo", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Johns Hopkins University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Tongfei", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Chandler", |
|
"middle": [], |
|
"last": "May", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Johns Hopkins University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Craig", |
|
"middle": [], |
|
"last": "Harman", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Johns Hopkins University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Kyle", |
|
"middle": [], |
|
"last": "Rawlins", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Johns Hopkins University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Aaron", |
|
"middle": [ |
|
"Steven" |
|
], |
|
"last": "White", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Johns Hopkins University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [ |
|
"Van" |
|
], |
|
"last": "Durme", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Johns Hopkins University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We present LOME, a system for performing multilingual information extraction. Given a text document as input, our core system identifies spans of textual entity and event mentions with a FrameNet (Baker et al., 1998) parser. It subsequently performs coreference resolution, fine-grained entity typing, and temporal relation prediction between events. By doing so, the system constructs an event and entity focused knowledge graph. We can further apply third-party modules for other types of annotation, like relation extraction. Our (multilingual) first-party modules either outperform or are competitive with the (monolingual) state-of-the-art. We achieve this through the use of multilingual encoders like XLM-R (Conneau et al., 2020) and leveraging multilingual training data. LOME is available as a Docker container on Docker Hub. In addition, a lightweight version of the system is accessible as a web demo. * Equal Contribution 1 Information on using the Docker container, web demo, and demo video at https://nlp.jhu.edu/demos.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We present LOME, a system for performing multilingual information extraction. Given a text document as input, our core system identifies spans of textual entity and event mentions with a FrameNet (Baker et al., 1998) parser. It subsequently performs coreference resolution, fine-grained entity typing, and temporal relation prediction between events. By doing so, the system constructs an event and entity focused knowledge graph. We can further apply third-party modules for other types of annotation, like relation extraction. Our (multilingual) first-party modules either outperform or are competitive with the (monolingual) state-of-the-art. We achieve this through the use of multilingual encoders like XLM-R (Conneau et al., 2020) and leveraging multilingual training data. LOME is available as a Docker container on Docker Hub. In addition, a lightweight version of the system is accessible as a web demo. * Equal Contribution 1 Information on using the Docker container, web demo, and demo video at https://nlp.jhu.edu/demos.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "As information extraction capabilities continue to improve due to advances in modeling, encoders, and data collection, we can now look (back) toward making richer predictions at the documentlevel, with a large ontology, and across multiple languages. Recently, Li et al. (2020) noted that despite a growth of open-source NLP software in general, there is still a lack of available software for knowledge extraction. We wish to provide a starting point that allows others to build increasingly comprehensive document-level knowledge graphs of events and entities from text in many languages. 1 Therefore, we demonstrate LOME, a system for multilingual information extraction with large ontologies. Figure 1 shows the high-level pipeline by following a multilingual input example. A sentence-level parser identifies both INGESTION events and their arguments. To connect these events cross-sententially, the system clusters coreferent mentions and predicts the temporal relations between the events. LOME, which supports finegrained entity types, additionally labels entities like the rabbit with LIVING THING/ANIMAL.", |
|
"cite_spans": [ |
|
{ |
|
"start": 261, |
|
"end": 277, |
|
"text": "Li et al. (2020)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 591, |
|
"end": 592, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 697, |
|
"end": 705, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Several prior packages have also used advances in state-of-the-art models to build comprehensive information extraction systems. Li et al. (2019) present an event, relation, and entity extraction and coreference system for three languages: English, Russian, and Ukrainian. Li et al. (2020, GAIA) extend that work to support cross-media documents. However, both of these systems consist of languagespecific models that operate on monolingual documents after first identifying the language. On the other hand, work prioritizing coverage across tens or hundreds of languages is limited in their scope in extraction (Akbik and Li, 2016; Pan et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 129, |
|
"end": 145, |
|
"text": "Li et al. (2019)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 273, |
|
"end": 295, |
|
"text": "Li et al. (2020, GAIA)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 612, |
|
"end": 632, |
|
"text": "(Akbik and Li, 2016;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 633, |
|
"end": 650, |
|
"text": "Pan et al., 2017)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Like prior work, LOME is focused on extracting entities and events from raw text documents. However, LOME is language-agnostic; all components prioritize multilinguality. Using XLM-R (Conneau et al., 2020) as the underlying encoder paves the way for both training on multilingual data (where it exists) and inference in many languages. 2 Our pipeline includes a full FrameNet parser for events and their arguments, neural coreference resolution, an entity typing model over large ontologies, and temporal resolution between events.", |
|
"cite_spans": [ |
|
{ |
|
"start": 183, |
|
"end": 205, |
|
"text": "(Conneau et al., 2020)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 336, |
|
"end": 337, |
|
"text": "2", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our system is designed to be modular: each component is trained independently and tuned on task-specific data. To communicate between modules, we use CONCRETE (Ferraro et al., 2014) , a data schema used in other text processing systems (Peng et al., 2015) . One advantage of using a stan- Figure 1 : Architecture of LOME. The system processes text documents as input and first uses a FrameNet parser to detect entities and events. Then, a suite of models enrich the entities and events with additional predictions. Each individual model can be trained and tuned independently, ensuring modularity of the pipeline. Annotations between models are transferred using CONCRETE, a data schema for NLP.", |
|
"cite_spans": [ |
|
{ |
|
"start": 159, |
|
"end": 181, |
|
"text": "(Ferraro et al., 2014)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 236, |
|
"end": 255, |
|
"text": "(Peng et al., 2015)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 289, |
|
"end": 297, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "dardized data schema is that it enables modularization and extension. Unless there are annotation dependencies, individual modules can be inserted, replaced, merged, or bypassed depending on the application. We discuss two example applications of our CONCRETE-based modules, one of which further extracts relations and the other performs cross-sentence argument linking for events.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The overarching application of LOME is to extract an entity-and event-centric knowledge graph from a textual document. In particular, we are interested in using these graphs to support a multilingual schema learning task (KAIROS 3 ) for which data has been annotated by the LDC (Cieri et al., 2020) . As a result, some parts of LOME are designed for compatibility with the KAIROS event and entity ontology. Nonetheless, there is significant overlap with publicly available datasets, which we describe for those tasks. Figure 1 presents the architecture of our pipeline. Besides the FrameNet parser, which is run first, the remaining modules can be run in any order, if at all. In addition, our use of a standardized data schema for communication allows for the integration of third-party systems. In this section, we will go into further detail for each task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 278, |
|
"end": 298, |
|
"text": "(Cieri et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 518, |
|
"end": 526, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Tasks", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "FrameNet parsing is a semantic role labeling style task. The goal is to find all the frames and their roles, as well as the trigger spans associated with them in a sentence. Frames are concepts, such as events or entities, in a sentences. Every frame is associated with some roles, and both of them are triggered by spans in the sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "FrameNet Parsing", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Unlike most previous work (Yang and Mitchell, 2017; Peng et al., 2018; , our system is not conditioned on the trigger spans or frames. We perform \"full parsing\" , where the input is a raw sentence, and the output is the complete structure predictions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 26, |
|
"end": 51, |
|
"text": "(Yang and Mitchell, 2017;", |
|
"ref_id": "BIBREF52" |
|
}, |
|
{ |
|
"start": 52, |
|
"end": 70, |
|
"text": "Peng et al., 2018;", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "FrameNet Parsing", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "As the first model in the whole pipeline system, the trigger spans found by the FrameNet parser will be used as candidate spans for all other tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "FrameNet Parsing", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "In coreference resolution, the goal is to cluster spans in the text that refer to the same entity. Neural models for doing so typically encode the text first before identifying possible mentions (Lee et al., 2017; Joshi et al., 2019 Joshi et al., , 2020 . These spans are scored pairwise to determine whether two spans refer to each other. These scores then determine coreference clusters by decoding under a variety of strategies Xu and Choi, 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 195, |
|
"end": 213, |
|
"text": "(Lee et al., 2017;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 214, |
|
"end": 232, |
|
"text": "Joshi et al., 2019", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 233, |
|
"end": 253, |
|
"text": "Joshi et al., , 2020", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 431, |
|
"end": 449, |
|
"text": "Xu and Choi, 2020)", |
|
"ref_id": "BIBREF51" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entity Coreference Resolution", |
|
"sec_num": "2.2" |
|
}, |
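A minimal sketch of the mention-linking step described above, assuming pairwise antecedent scores are already available: each mention is greedily linked to its highest-scoring antecedent or starts a new cluster. This illustrates the general decoding idea, not the exact strategy of the cited systems; the function name and score format are assumptions.

```python
# Illustrative greedy best-antecedent decoding over precomputed pairwise scores.
from typing import Dict, List, Tuple

def greedy_antecedent_clustering(
    num_mentions: int,
    pair_scores: Dict[Tuple[int, int], float],  # (antecedent, mention) -> score
    threshold: float = 0.0,
) -> List[List[int]]:
    cluster_of = {}                      # mention index -> cluster index
    clusters: List[List[int]] = []
    for m in range(num_mentions):
        # only mentions appearing earlier in the document are candidate antecedents
        candidates = [(a, pair_scores.get((a, m), float("-inf"))) for a in range(m)]
        best_a, best_score = max(candidates, key=lambda x: x[1],
                                 default=(None, float("-inf")))
        if best_a is not None and best_score > threshold:
            cid = cluster_of[best_a]     # join the antecedent's cluster
        else:
            cid = len(clusters)          # start a new singleton cluster
            clusters.append([])
        cluster_of[m] = cid
        clusters[cid].append(m)
    return clusters
```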
|
{ |
|
"text": "In this work, we choose a constant-memory variant of that model which also achieves high per- formance (Xia et al., 2020) . The motivation here is robustness: we prioritize the ability to soundly run on all document lengths over slightly better performing but fragile systems. In addition, because this coreference resolution model is part of a broader entity-centric system, the module used in this system does not perform the mention detection step (which is left to the FrameNet parser). Instead, both training and inference assumes given mentions, and the task we are concerned about in this paper is mention linking.", |
|
"cite_spans": [ |
|
{ |
|
"start": 103, |
|
"end": 121, |
|
"text": "(Xia et al., 2020)", |
|
"ref_id": "BIBREF50" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entity Coreference Resolution", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Entity typing assigns a fine-grained semantic label to a span of text, where the span is a mention of some entity found by the FrameNet parser. Traditionally, labels include PER, GPE, ORG, etc., but recent work in fine-grained entity typing seek to classify spans into types defined by hierarchical type ontologies (e.g. BBN (Weischedel and Brunstein, 2005) , FIGER (Ling and Weld, 2012) , UltraFine 4 (Choi et al., 2018) , COLLIE (Allen et al., 2020)). Such ontologies refine coarse types like PER to fine-grained types such as /person/artist/singer that sits on a type hierarchy. A portion of the AIDA ontology (LDC2019E07) is illustrated in Figure 2 . To support fine-grained ontologies, we employ a recent coarse-to-fine-decoding entity typing model (Chen et al., 2020a) that is specifically designed to assign types that are defined by hierarchical ontologies. The use of a coarse-to-fine model also allows users to select between coarse-and finegrained types. We swap the underlying encoder from ELMo (Peters et al., 2018) to XLM-R to be able to assign types over mentions in different lan- 4 UltraFine is slightly different in that the types are bucketed into 3 categories of different granularity, but without explicit subtyping relations. guages using a single multilingual model, and to enable transfer between languages. The base typing model in Chen et al. (2020a) supports entity typing on entity mentions. We extend this model to gain the ability to perform entity typing on entities, i.e. clusters of entity mentions. Since our decoder is coarse-to-fine and predicts a type at each level of the type hierarchy, we employ Borda voting on each level. Specifically, given a coreference chain comprising mentions m 1,\u2022\u2022\u2022 ,n , and the score for mention m i being typed as type t as s i,t , we perform Borda counting to select the most confident type t * = arg max t i r(i, t) over all t's in a specific type level, where r(i, t) = 1/rank t (s i,t ) is the ranking relevance score used in Borda counting.", |
|
"cite_spans": [ |
|
{ |
|
"start": 325, |
|
"end": 357, |
|
"text": "(Weischedel and Brunstein, 2005)", |
|
"ref_id": "BIBREF47" |
|
}, |
|
{ |
|
"start": 366, |
|
"end": 387, |
|
"text": "(Ling and Weld, 2012)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 402, |
|
"end": 421, |
|
"text": "(Choi et al., 2018)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 1007, |
|
"end": 1028, |
|
"text": "(Peters et al., 2018)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 1097, |
|
"end": 1098, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 644, |
|
"end": 652, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Entity Typing", |
|
"sec_num": "2.3" |
|
}, |
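A minimal sketch of the Borda-count aggregation described above, applied at one level of the type hierarchy. The dictionary-based score format and function name are illustrative assumptions, not the authors' implementation.

```python
# Borda voting over a coreference chain: r(i, t) = 1 / rank_t(s_{i,t}),
# and the selected type is t* = argmax_t sum_i r(i, t) at a given level.
from typing import Dict, List

def borda_vote(mention_scores: List[Dict[str, float]]) -> str:
    totals: Dict[str, float] = {}
    for scores in mention_scores:                              # one score dict per mention m_i
        ranked = sorted(scores, key=scores.get, reverse=True)  # rank 1 = highest score
        for rank, t in enumerate(ranked, start=1):
            totals[t] = totals.get(t, 0.0) + 1.0 / rank
    return max(totals, key=totals.get)

# e.g. borda_vote([{"PER": 0.9, "ORG": 0.1}, {"PER": 0.6, "ORG": 0.4}]) == "PER"
```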
|
{ |
|
"text": "The task of temporal relation extraction focuses on finding the chronology of events (e.g., Before, After, Overlaps) in text. Extracting temporal relation is useful for various downstream tasks -curating structured clinical data (Savova et al., 2010; Soysal et al., 2018 ), text summarization (Glava\u0161 and\u0160najder, 2014; Kedzie et al., 2015) , questionanswering (Llorens et al., 2015; Zhou et al., 2019 ), etc. The task is most commonly viewed as a classification task where given a pair of events and its textual context, the temporal relation between them needs to be identified.", |
|
"cite_spans": [ |
|
{ |
|
"start": 229, |
|
"end": 250, |
|
"text": "(Savova et al., 2010;", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 251, |
|
"end": 270, |
|
"text": "Soysal et al., 2018", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 293, |
|
"end": 318, |
|
"text": "(Glava\u0161 and\u0160najder, 2014;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 319, |
|
"end": 339, |
|
"text": "Kedzie et al., 2015)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 360, |
|
"end": 382, |
|
"text": "(Llorens et al., 2015;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 383, |
|
"end": 400, |
|
"text": "Zhou et al., 2019", |
|
"ref_id": "BIBREF53" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Temporal Relation Extraction", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "The construction of the TimeBank corpus (Pustejovsky et al., 2003) largely spurred the research in temporal relation extraction. It included 14 temporal relation labels. Other corpora (Verhagen et al., 2007 (Verhagen et al., , 2010 Sun et al., 2013; Cassidy et al., 2014) reduced the number of labels to a smaller number owing to lower inter-annotator agreements and sparse annotations. Various types of models (Chambers et al., 2014; Cheng and Miyao, 2017; Leeuwenberg and Moens, 2017; Ning et al., 2017; Vashishtha et al., 2019; Zhou et al., 2021) have been used in the recent years to extract temporal relations from text.", |
|
"cite_spans": [ |
|
{ |
|
"start": 40, |
|
"end": 66, |
|
"text": "(Pustejovsky et al., 2003)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 184, |
|
"end": 206, |
|
"text": "(Verhagen et al., 2007", |
|
"ref_id": "BIBREF44" |
|
}, |
|
{ |
|
"start": 207, |
|
"end": 231, |
|
"text": "(Verhagen et al., , 2010", |
|
"ref_id": "BIBREF45" |
|
}, |
|
{ |
|
"start": 232, |
|
"end": 249, |
|
"text": "Sun et al., 2013;", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 250, |
|
"end": 271, |
|
"text": "Cassidy et al., 2014)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 458, |
|
"end": 486, |
|
"text": "Leeuwenberg and Moens, 2017;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 487, |
|
"end": 505, |
|
"text": "Ning et al., 2017;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 506, |
|
"end": 530, |
|
"text": "Vashishtha et al., 2019;", |
|
"ref_id": "BIBREF43" |
|
}, |
|
{ |
|
"start": 531, |
|
"end": 549, |
|
"text": "Zhou et al., 2021)", |
|
"ref_id": "BIBREF54" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Temporal Relation Extraction", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "In this work, we use Vashishtha et al. (2019)'s best model and retrain it using XLM-R. We evaluate their model using the transfer learning approach described in their work and retrain it on TimeBank-Dense (TBD) (Cassidy et al., 2014). TBD uses a reduced set of 5 temporal relation labels -before, after, includes, is included, and vague.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Temporal Relation Extraction", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "3 System Design", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Temporal Relation Extraction", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "Our system is modularized into separate models and libraries that communicate with each other using CONCRETE, a data format for richly annotating natural language documents (Ferraro et al., 2014) . Each component is independent of each other, which allows for both inserting additional modules or deleting those provided in the default pipeline. We choose this loosely-affiliated design to enable both faster and independent prototyping of individual components, as well as better compartmentalization of our models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 173, |
|
"end": 195, |
|
"text": "(Ferraro et al., 2014)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Modularization", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We emphasize that the system is a pipeline: while individual modules can be further improved, the system is not designed to be trained end-toend and benchmarking the richly-annotated output depends on the application and priorities. In this paper, we only benchmark individual components and describe a couple of applications.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Modularization", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The system can consume, as input, either tokenized or untokenized text, which is first tokenized either by whitespace or with a multilingual tokenizer, PolyGlot. 5 However, this tokenization is not necessarily used by all modules, which may choose to either operate on the raw text itself or on a Sentence-Piece (Kudo and Richardson, 2018) retokenization.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Inputs and Outputs", |
|
"sec_num": "3.2" |
|
}, |
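A minimal sketch of the two tokenization paths mentioned above, assuming a SentencePiece model is available; the model path is a hypothetical placeholder rather than a file shipped with LOME.

```python
# Whitespace tokenization vs. SentencePiece retokenization (illustrative only).
import sentencepiece as spm

def whitespace_tokenize(text):
    return text.split()

# "xlmr.model" is a hypothetical placeholder for a trained SentencePiece model file.
sp = spm.SentencePieceProcessor(model_file="xlmr.model")

def sentencepiece_tokenize(text):
    return sp.encode(text, out_type=str)
```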
|
{ |
|
"text": "The system outputs a CONCRETE communication file for each input document. This output file contains annotations including entities, events, coreference, entity types, and temporal relations. This schema used is entirely self-contained and the well-documented library also contains tools for visualizing and inspecting CONCRETE files. 6 For the web demo, the output is displayed in the browser.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Inputs and Outputs", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The FrameNet parser is comprised of an XLM-R encoder, a BIO tagger, and a typing module. It encodes the input sentences into a list of vectors, used by both the BIO tagger and the typing module. The goal of BIO tagger is to find trigger spans, which are then labeled by the typing module. To parse a sentence, we run the model to find all frames, and then find their roles conditioned on the frames.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "FrameNet Span Finding", |
|
"sec_num": "4.1" |
|
}, |
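To make the BIO-tagging step concrete, the helper below converts a predicted BIO tag sequence into trigger spans that the typing module can then label. It is an illustrative sketch of standard BIO decoding, not the authors' exact decoder.

```python
from typing import List, Tuple

def bio_to_spans(tags: List[str]) -> List[Tuple[int, int]]:
    """Convert BIO tags into half-open (start, end) token spans."""
    spans, start = [], None
    for i, tag in enumerate(tags):
        if tag == "B" or (tag == "I" and start is None):
            if start is not None:          # a new "B" closes any open span
                spans.append((start, i))
            start = i
        elif tag == "O" and start is not None:
            spans.append((start, i))
            start = None
    if start is not None:
        spans.append((start, len(tags)))
    return spans

# e.g. bio_to_spans(["O", "B", "I", "O", "B"]) == [(1, 3), (4, 5)]
```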
|
{ |
|
"text": "We train the FrameNet parser on the FrameNet v1.7 corpus following , with statistics in Table 1 . We evaluate the results with exact matching as our metric, 7 and get 56.34 labeled F1 or 66.41 unlabeled F1. Since we are not aware of previous work on both full parsing and a metric for its evaluation, we do not have a baseline. However, we can force the model to perform frame identification given the trigger span, like prior work. These results are shown in Table 2 ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 88, |
|
"end": 95, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 460, |
|
"end": 467, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "FrameNet Span Finding", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We retrain the model by Xia et al. (2020) with XLM-R (large) as the underlying encoder and with additional multilingual data. The model is a constantmemory variant of neural coreference resolution models. We refer the reader to Xia et al. (2020) for model and training details. Unlike that work, we operate under the assumption that we are provided gold spans. This is motivated by the location of coreference in LOME. In addition, while they use a frozen encoder, we found that finetuning improves performance. 8 Finally, we train on the full OntoNotes 5.0 (Weischedel et al., 2013; , a subset of SemEval 2010 Task 1 (Recasens et al., 2010) , and two additional sources of Russian data, RuCor (Toldova et al., 2014) and AnCor (Budnikov et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 24, |
|
"end": 41, |
|
"text": "Xia et al. (2020)", |
|
"ref_id": "BIBREF50" |
|
}, |
|
{ |
|
"start": 558, |
|
"end": 583, |
|
"text": "(Weischedel et al., 2013;", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 618, |
|
"end": 641, |
|
"text": "(Recasens et al., 2010)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 688, |
|
"end": 716, |
|
"text": "RuCor (Toldova et al., 2014)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 721, |
|
"end": 750, |
|
"text": "AnCor (Budnikov et al., 2019)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coreference Resolution", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We benchmark the performance of our model on each language. We report the average F1 of MUC (Vilain et al., 1995) , B 3 (Bagga and Baldwin, 1998), and CEAF \u03c6 4 (Luo, 2005) by language in Table 3 . We can compare the model's performance to monolingual gold-only baselines, where they exist. For English, we trained an identical model but instead use SpanBERT (Joshi et al., 2020) , an English-only encoder finetuned for English OntoNotes coreference. That model achieves 92.2 average (dev.) F1, compared to our 92.7. There is also a comparable system for Russian AnCor from Le et al. (2019) , which achieves 79.9 F1 using the model from and RuBERT (Kuratov and Arkhipov, 2019) . This shows that our single, multilingual model, can perform similarly to monolingual models, with the advantage that our model does not need to perform language ID. This finding mirrors prior findings showing multilingual encoders are strong cross-lingually (Wu and Dredze, 2019 ", |
|
"cite_spans": [ |
|
{ |
|
"start": 92, |
|
"end": 113, |
|
"text": "(Vilain et al., 1995)", |
|
"ref_id": "BIBREF46" |
|
}, |
|
{ |
|
"start": 160, |
|
"end": 171, |
|
"text": "(Luo, 2005)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 358, |
|
"end": 378, |
|
"text": "(Joshi et al., 2020)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 573, |
|
"end": 589, |
|
"text": "Le et al. (2019)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 647, |
|
"end": 675, |
|
"text": "(Kuratov and Arkhipov, 2019)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 936, |
|
"end": 956, |
|
"text": "(Wu and Dredze, 2019", |
|
"ref_id": "BIBREF49" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 187, |
|
"end": 194, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Coreference Resolution", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We retrain the coarse-to-fine entity typer by Chen et al. (2020a) with XLM-R as the underlying encoder, and using the AIDA ontology as the type label inventory. The dataset annotated from AIDA is relatively small. To make the model more robust, we pre-train the model using extra training data from GAIA (Li et al., 2020) , where they obtained YAGO fine-grained types (Suchanek et al., 2008) from the results of Freebase entity linking, and mapped these types to the AIDA ontology. After pre-training, we fine-tune the model using the AIDA M18 and M36 data with 3-fold crossvalidation, where each fold is distinct in the topics of these documents. The sizes of these datasets are shown in Table 4 . Our models perform well in these datasets. Using one third of the AIDA M36 data as dev, our method obtained 60.1% micro-F 1 score; 9 with pretraining using GAIA extra data, we get 76.5%.", |
|
"cite_spans": [ |
|
{ |
|
"start": 304, |
|
"end": 321, |
|
"text": "(Li et al., 2020)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 368, |
|
"end": 391, |
|
"text": "(Suchanek et al., 2008)", |
|
"ref_id": "BIBREF38" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 689, |
|
"end": 696, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Entity Typing", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Our system can also be extended to support other commonly used fine-grained entity type ontologies. We report the results in micro-F 1 in Table 5 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 138, |
|
"end": 145, |
|
"text": "Table 5", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Entity Typing", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Ontology Prior state-of-the-art Ours ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entity Typing", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We retrain Vashishtha et al. (2019) 's best finegrained temporal relation model on UDS-T (Vashishtha et al., 2019) using XLM-R (large). We then use their transfer learning approach and train an SVM model on event-event relations in TimeBank-Dense (TBD) to predict categorical temporal relation labels. With this approach, we see a micro-F1 score of 56 on the test set of TBD. 10 For better performance, we train the same model on additional TempEval3 (TE3) dataset (UzZaman et al., 2013) . Since TE3 and TBD use a different set of temporal relations, we consider only those instances that are labeled with 4 temporal relations from both TE3 and TBD for joint training -before, after, includes (container), and is included (contained). We retrain Vashishtha et al. (2019) 's transfer learning model on the combined TE3 and TBD dataset considering only these 4 relations and evaluate on their combined test set. 11 Results on the combined test set are reported in Table 6 . We use this model as the default temporal relation extraction model in LOME.", |
|
"cite_spans": [ |
|
{ |
|
"start": 11, |
|
"end": 35, |
|
"text": "Vashishtha et al. (2019)", |
|
"ref_id": "BIBREF43" |
|
}, |
|
{ |
|
"start": 89, |
|
"end": 114, |
|
"text": "(Vashishtha et al., 2019)", |
|
"ref_id": "BIBREF43" |
|
}, |
|
{ |
|
"start": 376, |
|
"end": 378, |
|
"text": "10", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 465, |
|
"end": 487, |
|
"text": "(UzZaman et al., 2013)", |
|
"ref_id": "BIBREF42" |
|
}, |
|
{ |
|
"start": 746, |
|
"end": 770, |
|
"text": "Vashishtha et al. (2019)", |
|
"ref_id": "BIBREF43" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 962, |
|
"end": 969, |
|
"text": "Table 6", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Temporal Relation Extraction", |
|
"sec_num": "4.4" |
|
}, |
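A hedged sketch of the transfer-learning step described above: an SVM classifier trained on fixed event-pair relation embeddings. The embedding matrix, label names, and function name are illustrative assumptions, not the exact setup of Vashishtha et al. (2019).

```python
import numpy as np
from sklearn.svm import SVC

def train_relation_classifier(relation_embeddings: np.ndarray, labels):
    """relation_embeddings: (n_pairs, dim) array, one row per event-event pair.
    labels: e.g. ["before", "after", "includes", "is included"], one per pair."""
    clf = SVC(kernel="linear")
    clf.fit(relation_embeddings, labels)
    return clf

# clf = train_relation_classifier(train_X, train_y)
# predicted_relations = clf.predict(test_X)
```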
|
{ |
|
"text": "We also test our default model on a Chinese temporal relation extraction dataset . 12 In the zero-shot setting, we get a micro F1 score of 52.6 on the provided dataset, as compared to a majority baseline of 37.5. 13 Similar to the default temporal system in LOME, we use the XLM-R version of Vashishtha et al. (2019) 's model obtaining relation embeddings for the Chinese dataset and train an SVM model using the transfer learning approach to get a micro F1 score of 64.4. 14", |
|
"cite_spans": [ |
|
{ |
|
"start": 83, |
|
"end": 85, |
|
"text": "12", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 292, |
|
"end": 316, |
|
"text": "Vashishtha et al. (2019)", |
|
"ref_id": "BIBREF43" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Temporal Relation Extraction", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "Precision Besides the core components described above, we also discuss the viability of including additional modules that may not fit directly in the core pipeline but can be included depending on the downstream application. For example, the system described above does not predict any relation information, which is needed for the motivating application of downstream schema inference. To do so, we wrote a CONCRETE and Docker wrapper around OneIE and attached it at the end of the pipeline. With our CONCRETE based design, the integration of any third-party module can be done via implementing the AnnotateCommu-nicationService service interface, which can ensure compatibility between LOME and external modules. The OneIE wrapper is one example of an external module.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Relation", |
|
"sec_num": null |
|
}, |
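A conceptual sketch of such a third-party wrapper. The real AnnotateCommunicationService is a Thrift service interface in the CONCRETE ecosystem; the class, method body, and field names below are assumptions for illustration, not the actual OneIE wrapper.

```python
class ExternalModuleWrapper:
    """Hypothetical wrapper that enriches a CONCRETE Communication with
    predictions from an external model (e.g., a relation extractor)."""

    def __init__(self, model):
        self.model = model  # a loaded third-party model (placeholder)

    def annotate(self, communication):
        # Read the document text, run the external model, and attach the
        # predictions to the Communication; field names are illustrative.
        predictions = self.model.predict(communication.text)
        communication.external_annotations = predictions
        return communication
```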
|
{ |
|
"text": "As another example application, we reconfigured our pipeline for the NIST SM-KBP 2020 Task 1 12 We remove the instances with unknown relation from the dataset and convert the predictions with includes and is included relations to the overlaps relation to match the label set of their dataset with our system. 13 The authors were able to provide only half of the dataset with 10,476 event-event pairs, from which we ignore instances with unknown relation, resulting into 9,362 instances.", |
|
"cite_spans": [ |
|
{ |
|
"start": 93, |
|
"end": 95, |
|
"text": "12", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 309, |
|
"end": 311, |
|
"text": "13", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mix and Match Modules: SM-KBP", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "14 The results are the average of the 5-fold cross validation splits provided by . evaluation, which aims to produce document-level knowledge graphs. 15 Each given document may be in English, Russian, or Spanish. On a development set consisting solely of text-only documents, 16 we started with initial predictions made by GAIA (Li et al., 2020) , for entity clusters, entity types, events and relations. Our goal was to recluster and relabel the a dataset for knowledge extraction.", |
|
"cite_spans": [ |
|
{ |
|
"start": 150, |
|
"end": 152, |
|
"text": "15", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 276, |
|
"end": 278, |
|
"text": "16", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 328, |
|
"end": 345, |
|
"text": "(Li et al., 2020)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mix and Match Modules: SM-KBP", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Our pipeline consisted of the multilingual coreference resolution (using the predetermined mention from GAIA) and hierarchical entity typing models discussed in this paper, followed by a separate state-of-the-art argument linking model (Chen et al., 2020b) . We found improved performance 17 with entity coreference (from 29.1 F1 to 33.3 F1), especially in Russian (from 26.2 F1 to 33.3 F1), likely due to our use of multilingual data and contextualized encoders. The improved entity clusters also led to downstream improvements in entity typing and argument linking. This example highlights the ability to pick out subcomponents of LOME and customize according to the downstream task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 236, |
|
"end": 256, |
|
"text": "(Chen et al., 2020b)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mix and Match Modules: SM-KBP", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We present two methods to interact with the pipeline. The first is a Docker container which contains the libraries, code, and trained models of our pipeline. This is intended to run on batches of documents. As a lighter demo of some of the system capabilities, we also have a web demo intended to interactively run on shorter documents. Docker Our Docker image 18 consists of the four core modules: FrameNet parser, coreference resolution, entity typing, and temporal resolution. Furthermore, there are two options for entity typing: a fine-grained hierarchical model (with the AIDA typing ontology) and a coarse-grained model (with the KAIROS typing ontology). The container and documentation is available on Docker Hub.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Usage", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "As some modules depend on GPU libraries, the image also requires NVIDIA-Docker support. Since there is a high start-up (time) cost for using Docker and loading models, we recommend using this container for batch processing of documents. Further instructions for running can be found on the LOME Docker Hub page.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Usage", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Web Demo We make a few changes for the web demo. 19 To reduce latency, we preload the models into memory and we do not write the CONCRETE communications to disk. At the cost of modularity, this makes the demo lightweight and fast, allowing us to run it on a single 16GB CPU-only server. To present the predictions, our front-end uses AllenNLP-demo. 20 In addition, the web demo is currently limited to FrameNet parsing and coreference resolution, as other models will increase latency and may impede usability. The web demo is intended to highlight only some of the system's capabilities, like its ability to process multilingual documents.", |
|
"cite_spans": [ |
|
{ |
|
"start": 49, |
|
"end": 51, |
|
"text": "19", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 349, |
|
"end": 351, |
|
"text": "20", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Usage", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "To facilitate increased interest in multilingual document-level knowledge extraction with large ontologies, we create and demonstrate LOME, a system for event and entity knowledge graph creation. Given input text documents, LOME runs a full FrameNet parser, coreference resolution, finegrained entity typing, and temporal relation prediction. Furthermore, each component uses XLM-R, allowing our system to support a broader set of languages than previous systems. The pipeline uses a standardized data schema, which invites extending the pipeline with additional modules. By releasing both a Docker image and presenting a lightweight web demo, we hope to enable the community to build on top of LOME for even more comprehensive information extraction. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "XLM-R itself is trained on CommonCrawl data spanning one hundred languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This goal is to develop a system that identifies, links, and temporally sequences complex events. More information at https://www.darpa.mil/program/knowledgedirected-artificial-intelligencereasoning-over-schemas.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/aboSamoor/polyglot 6 http://hltcoe.github.io/concrete/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A role is considered to be correctly predicted only when its frame is precisely predicted.8 We use AdamW and a learning rate of 5 \u00d7 10 \u22126 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The train and dev set of TBD has a total of 4,590 instances and the test set has 1,405 instances of event-event relations.11 We consider only event-event relations and the combined dataset has 5,987 (1,249) instances in the train (test) set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://tac.nist.gov/2020/KBP/SM-KBP/index.html 16 AIDA M36, LDC2020E29.17 This evaluation metric is specific to the NIST SM-KBP 2020 task. It takes entity types into account.18 https://hub.docker.com/r/hltcoe/lome", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://nlp.jhu.edu/demos/lome/ 20 https://github.com/allenai/allennlpdemo.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We thank Anton Belyy, Kenton Murray, Manling Li, Varun Iyer, and Zhuowan Li for helpful discussions and feedback. This work was supported in part by DARPA AIDA (FA8750-18-2-0015) and KAIROS (FA8750-19-2-0034). The views and conclusions contained in this work are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, or endorsements of DARPA or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "POLYGLOT: Multilingual semantic role labeling with unified labels", |
|
"authors": [ |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Akbik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yunyao", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of ACL-2016 System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--6", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P16-4001" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alan Akbik and Yunyao Li. 2016. POLYGLOT: Multi- lingual semantic role labeling with unified labels. In Proceedings of ACL-2016 System Demonstrations, pages 1-6, Berlin, Germany. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Meeting of the Association for Computational Linguistics", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "1--6", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Meeting of the Association for Computational Lin- guistics (Volume 2: Short Papers), pages 1-6, Van- couver, Canada. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Ultra-fine entity typing", |
|
"authors": [ |
|
{ |
|
"first": "Eunsol", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yejin", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "87--96", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P18-1009" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eunsol Choi, Omer Levy, Yejin Choi, and Luke Zettle- moyer. 2018. Ultra-fine entity typing. In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 87-96, Melbourne, Australia. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Denise DiPersio, and Mark Liberman. 2020. A progress report on activities at the Linguistic Data Consortium benefitting the LREC community", |
|
"authors": [ |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Cieri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Fiumara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephanie", |
|
"middle": [], |
|
"last": "Strassel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Wright", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3449--3456", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher Cieri, James Fiumara, Stephanie Strassel, Jonathan Wright, Denise DiPersio, and Mark Liber- man. 2020. A progress report on activities at the Linguistic Data Consortium benefitting the LREC community. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 3449- 3456, Marseille, France. European Language Re- sources Association.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Unsupervised cross-lingual representation learning at scale", |
|
"authors": [ |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kartikay", |
|
"middle": [], |
|
"last": "Khandelwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vishrav", |
|
"middle": [], |
|
"last": "Chaudhary", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Wenzek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francisco", |
|
"middle": [], |
|
"last": "Guzm\u00e1n", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "8440--8451", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.747" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451, Online. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Frame-semantic parsing", |
|
"authors": [ |
|
{ |
|
"first": "Dipanjan", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Desai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Andr\u00e9", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nathan", |
|
"middle": [], |
|
"last": "Martins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Schneider", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Computational Linguistics", |
|
"volume": "40", |
|
"issue": "1", |
|
"pages": "9--56", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/COLI_a_00163" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dipanjan Das, Desai Chen, Andr\u00e9 F. T. Martins, Nathan Schneider, and Noah A. Smith. 2014. Frame-semantic parsing. Computational Linguis- tics, 40(1):9-56.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Concretely annotated corpora", |
|
"authors": [ |
|
{ |
|
"first": "Francis", |
|
"middle": [], |
|
"last": "Ferraro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Max", |
|
"middle": [], |
|
"last": "Thomas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Gormley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Travis", |
|
"middle": [], |
|
"last": "Wolfe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Craig", |
|
"middle": [], |
|
"last": "Harman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "4th Workshop on Automated Knowledge Base Construction (AKBC)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Francis Ferraro, Max Thomas, Matthew R. Gormley, Travis Wolfe, Craig Harman, and Benjamin Van Durme. 2014. Concretely annotated corpora. In 4th Workshop on Automated Knowledge Base Construc- tion (AKBC).", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Event graphs for information retrieval and multi-document summarization", |
|
"authors": [ |
|
{ |
|
"first": "Goran", |
|
"middle": [], |
|
"last": "Glava\u0161", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan\u0161najder", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1016/j.eswa.2014.04.004" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Goran Glava\u0161 and Jan\u0160najder. 2014. Event graphs for information retrieval and multi-document sum- marization.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Semantic frame identification with distributed word representations", |
|
"authors": [ |
|
{ |
|
"first": "Karl", |
|
"middle": [], |
|
"last": "Moritz Hermann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dipanjan", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kuzman", |
|
"middle": [], |
|
"last": "Ganchev", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1448--1458", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/P14-1136" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Karl Moritz Hermann, Dipanjan Das, Jason Weston, and Kuzman Ganchev. 2014. Semantic frame iden- tification with distributed word representations. In Proceedings of the 52nd Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1448-1458, Baltimore, Mary- land. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "SpanBERT: Improving pre-training by representing and predicting spans", |
|
"authors": [ |
|
{ |
|
"first": "Mandar", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yinhan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Weld", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "8", |
|
"issue": "", |
|
"pages": "64--77", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00300" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Associa- tion for Computational Linguistics, 8:64-77.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "BERT for coreference resolution: Baselines and analysis", |
|
"authors": [ |
|
{ |
|
"first": "Mandar", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Weld", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5803--5808", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1588" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mandar Joshi, Omer Levy, Luke Zettlemoyer, and Daniel Weld. 2019. BERT for coreference reso- lution: Baselines and analysis. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5803-5808, Hong Kong, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Predicting salient updates for disaster summarization", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Kedzie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kathleen", |
|
"middle": [], |
|
"last": "Mckeown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernando", |
|
"middle": [], |
|
"last": "Diaz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1608--1617", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/P15-1155" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Kedzie, Kathleen McKeown, and Fernando Diaz. 2015. Predicting salient updates for disaster summa- rization. In Proceedings of the 53rd Annual Meet- ing of the Association for Computational Linguistics and the 7th International Joint Conference on Natu- ral Language Processing (Volume 1: Long Papers), pages 1608-1617, Beijing, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", |
|
"authors": [ |
|
{ |
|
"first": "Taku", |
|
"middle": [], |
|
"last": "Kudo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Richardson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "66--71", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-2012" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Adaptation of deep bidirectional multilingual transformers for russian language", |
|
"authors": [ |
|
{ |
|
"first": "Yuri", |
|
"middle": [], |
|
"last": "Kuratov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mikhail", |
|
"middle": [], |
|
"last": "Arkhipov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Computational Linguistics and Intellectual Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "333--339", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuri Kuratov and Mikhail Arkhipov. 2019. Adaptation of deep bidirectional multilingual transformers for russian language. In Computational Linguistics and Intellectual Technologies, pages 333-339.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Sentence level representation and language models in the task of coreference resolution for russian", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Petrov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Kuratov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Burtsev", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Computational Linguistics and Intellectual Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "364--373", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "T. A. Le, M. A. Petrov, Y. M. Kuratov, and M. S. Burt- sev. 2019. Sentence level representation and lan- guage models in the task of coreference resolution for russian. In Computational Linguistics and Intel- lectual Technologies, pages 364-373.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "End-to-end neural coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luheng", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "188--197", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D17-1018" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kenton Lee, Luheng He, Mike Lewis, and Luke Zettle- moyer. 2017. End-to-end neural coreference reso- lution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 188-197, Copenhagen, Denmark. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Higher-order coreference resolution with coarse-tofine inference", |
|
"authors": [ |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luheng", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "687--692", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N18-2108" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kenton Lee, Luheng He, and Luke Zettlemoyer. 2018. Higher-order coreference resolution with coarse-to- fine inference. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 2 (Short Papers), pages 687-692, New Orleans, Louisiana. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Structured learning for temporal relation extraction from clinical records", |
|
"authors": [ |
|
{ |
|
"first": "Artuur", |
|
"middle": [], |
|
"last": "Leeuwenberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marie-Francine", |
|
"middle": [], |
|
"last": "Moens", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1150--1158", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Artuur Leeuwenberg and Marie-Francine Moens. 2017. Structured learning for temporal relation extraction from clinical records. In Proceedings of the 15th Conference of the European Chapter of the Associa- tion for Computational Linguistics: Volume 1, Long Papers, pages 1150-1158, Valencia, Spain. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Multilingual entity, relation, event and human value extraction", |
|
"authors": [ |
|
{ |
|
"first": "Manling", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ying", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joseph", |
|
"middle": [], |
|
"last": "Hoover", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Spencer", |
|
"middle": [], |
|
"last": "Whitehead", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clare", |
|
"middle": [], |
|
"last": "Voss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Morteza", |
|
"middle": [], |
|
"last": "Dehghani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Heng", |
|
"middle": [], |
|
"last": "Ji", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "110--115", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-4019" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Manling Li, Ying Lin, Joseph Hoover, Spencer White- head, Clare Voss, Morteza Dehghani, and Heng Ji. 2019. Multilingual entity, relation, event and hu- man value extraction. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demon- strations), pages 110-115, Minneapolis, Minnesota. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "GAIA: A fine-grained multimedia knowledge extraction system", |
|
"authors": [ |
|
{ |
|
"first": "Manling", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alireza", |
|
"middle": [], |
|
"last": "Zareian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ying", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaoman", |
|
"middle": [], |
|
"last": "Pan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Spencer", |
|
"middle": [], |
|
"last": "Whitehead", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bo", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Heng", |
|
"middle": [], |
|
"last": "Ji", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shih-Fu", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clare", |
|
"middle": [], |
|
"last": "Voss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Napierski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marjorie", |
|
"middle": [], |
|
"last": "Freedman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "77--86", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-demos.11" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Manling Li, Alireza Zareian, Ying Lin, Xiaoman Pan, Spencer Whitehead, Brian Chen, Bo Wu, Heng Ji, Shih-Fu Chang, Clare Voss, Daniel Napierski, and Marjorie Freedman. 2020. GAIA: A fine-grained multimedia knowledge extraction system. In Pro- ceedings of the 58th Annual Meeting of the As- sociation for Computational Linguistics: System Demonstrations, pages 77-86, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Global inference to Chinese temporal relation extraction", |
|
"authors": [ |
|
{ |
|
"first": "Peifeng", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qiaoming", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guodong", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hongling", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1451--1460", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peifeng Li, Qiaoming Zhu, Guodong Zhou, and Hongling Wang. 2016. Global inference to Chi- nese temporal relation extraction. In Proceedings of COLING 2016, the 26th International Confer- ence on Computational Linguistics: Technical Pa- pers, pages 1451-1460, Osaka, Japan. The COLING 2016 Organizing Committee.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "An attentive fine-grained entity typing model with latent type representation", |
|
"authors": [ |
|
{ |
|
"first": "Ying", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Heng", |
|
"middle": [], |
|
"last": "Ji", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6197--6202", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1641" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ying Lin and Heng Ji. 2019. An attentive fine-grained entity typing model with latent type representation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 6197- 6202, Hong Kong, China. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "A joint neural model for information extraction with global features", |
|
"authors": [ |
|
{ |
|
"first": "Ying", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Heng", |
|
"middle": [], |
|
"last": "Ji", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fei", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lingfei", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7999--8009", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.713" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ying Lin, Heng Ji, Fei Huang, and Lingfei Wu. 2020. A joint neural model for information extraction with global features. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, pages 7999-8009, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Fine-grained entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Xiao", |
|
"middle": [], |
|
"last": "Ling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Weld", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "94--100", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiao Ling and Daniel S. Weld. 2012. Fine-grained en- tity recognition. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, July 22- 26, 2012, Toronto, Ontario, Canada., pages 94-100.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "SemEval-2015 task 5: QA Tem-pEval -evaluating temporal information understanding with question answering", |
|
"authors": [ |
|
{ |
|
"first": "Hector", |
|
"middle": [], |
|
"last": "Llorens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nathanael", |
|
"middle": [], |
|
"last": "Chambers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naushad", |
|
"middle": [], |
|
"last": "Uzzaman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nasrin", |
|
"middle": [], |
|
"last": "Mostafazadeh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Allen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Pustejovsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 9th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "792--800", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/S15-2134" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hector Llorens, Nathanael Chambers, Naushad UzZa- man, Nasrin Mostafazadeh, James Allen, and James Pustejovsky. 2015. SemEval-2015 task 5: QA Tem- pEval -evaluating temporal information understand- ing with question answering. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 792-800, Denver, Colorado. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "On coreference resolution performance metrics", |
|
"authors": [ |
|
{ |
|
"first": "Xiaoqiang", |
|
"middle": [], |
|
"last": "Luo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "25--32", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaoqiang Luo. 2005. On coreference resolution per- formance metrics. In Proceedings of Human Lan- guage Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 25-32, Vancouver, British Columbia, Canada. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "A structured learning approach to temporal relation extraction", |
|
"authors": [ |
|
{ |
|
"first": "Qiang", |
|
"middle": [], |
|
"last": "Ning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhili", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1027--1037", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D17-1108" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Qiang Ning, Zhili Feng, and Dan Roth. 2017. A struc- tured learning approach to temporal relation extrac- tion. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1027-1037, Copenhagen, Denmark. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Learning to denoise distantly-labeled data for entity typing", |
|
"authors": [ |
|
{ |
|
"first": "Yasumasa", |
|
"middle": [], |
|
"last": "Onoe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Durrett", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "2407--2417", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1250" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yasumasa Onoe and Greg Durrett. 2019. Learning to denoise distantly-labeled data for entity typing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Compu- tational Linguistics: Human Language Technolo- gies, Volume 1 (Long and Short Papers), pages 2407-2417, Minneapolis, Minnesota. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Crosslingual name tagging and linking for 282 languages", |
|
"authors": [ |
|
{ |
|
"first": "Xiaoman", |
|
"middle": [], |
|
"last": "Pan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Boliang", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "May", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joel", |
|
"middle": [], |
|
"last": "Nothman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Heng", |
|
"middle": [], |
|
"last": "Ji", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1946--1958", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P17-1178" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Nothman, Kevin Knight, and Heng Ji. 2017. Cross- lingual name tagging and linking for 282 languages. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1946-1958, Vancouver, Canada. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Learning joint semantic parsers from disjoint data", |
|
"authors": [ |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Thomson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Swabha", |
|
"middle": [], |
|
"last": "Swayamdipta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1492--1502", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N18-1135" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hao Peng, Sam Thomson, Swabha Swayamdipta, and Noah A. Smith. 2018. Learning joint semantic parsers from disjoint data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long Pa- pers), pages 1492-1502, New Orleans, Louisiana. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "A concrete Chinese NLP pipeline", |
|
"authors": [ |
|
{ |
|
"first": "Nanyun", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francis", |
|
"middle": [], |
|
"last": "Ferraro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mo", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicholas", |
|
"middle": [], |
|
"last": "Andrews", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jay", |
|
"middle": [], |
|
"last": "Deyoung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Max", |
|
"middle": [], |
|
"last": "Thomas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Gormley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Travis", |
|
"middle": [], |
|
"last": "Wolfe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Craig", |
|
"middle": [], |
|
"last": "Harman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Dredze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "86--90", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/N15-3018" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nanyun Peng, Francis Ferraro, Mo Yu, Nicholas An- drews, Jay DeYoung, Max Thomas, Matthew R. Gormley, Travis Wolfe, Craig Harman, Benjamin Van Durme, and Mark Dredze. 2015. A concrete Chinese NLP pipeline. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Demon- strations, pages 86-90, Denver, Colorado. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Deep contextualized word representations", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Neumann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Iyyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Gardner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "2227--2237", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N18-1202" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Towards robust linguistic analysis using OntoNotes", |
|
"authors": [ |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Sameer Pradhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nianwen", |
|
"middle": [], |
|
"last": "Moschitti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hwee Tou", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Olga", |
|
"middle": [], |
|
"last": "Bj\u00f6rkelund", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuchen", |
|
"middle": [], |
|
"last": "Uryupina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhi", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Zhong", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "143--152", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Bj\u00f6rkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards ro- bust linguistic analysis using OntoNotes. In Pro- ceedings of the Seventeenth Conference on Computa- tional Natural Language Learning, pages 143-152, Sofia, Bulgaria. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "The Timebank corpus", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Pustejovsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Hanks", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roser", |
|
"middle": [], |
|
"last": "Sauri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "See", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Gaizauskas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrea", |
|
"middle": [], |
|
"last": "Setzer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dragomir", |
|
"middle": [], |
|
"last": "Radev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Beth", |
|
"middle": [], |
|
"last": "Sundheim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Day", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lisa", |
|
"middle": [], |
|
"last": "Ferro", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Corpus linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James Pustejovsky, Patrick Hanks, Roser Sauri, An- drew See, Robert Gaizauskas, Andrea Setzer, Dragomir Radev, Beth Sundheim, David Day, Lisa Ferro, et al. 2003. The Timebank corpus. In Corpus linguistics, volume 2003, page 40. Lancaster, UK.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "SemEval-2010 task 1: Coreference resolution in multiple languages", |
|
"authors": [ |
|
{ |
|
"first": "Marta", |
|
"middle": [], |
|
"last": "Recasens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llu\u00eds", |
|
"middle": [], |
|
"last": "M\u00e0rquez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emili", |
|
"middle": [], |
|
"last": "Sapena", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"Ant\u00f2nia" |
|
], |
|
"last": "Mart\u00ed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mariona", |
|
"middle": [], |
|
"last": "Taul\u00e9", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V\u00e9ronique", |
|
"middle": [], |
|
"last": "Hoste", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Massimo", |
|
"middle": [], |
|
"last": "Poesio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yannick", |
|
"middle": [], |
|
"last": "Versley", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 5th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marta Recasens, Llu\u00eds M\u00e0rquez, Emili Sapena, M. Ant\u00f2nia Mart\u00ed, Mariona Taul\u00e9, V\u00e9ronique Hoste, Massimo Poesio, and Yannick Versley. 2010. SemEval-2010 task 1: Coreference resolution in multiple languages. In Proceedings of the 5th Inter- national Workshop on Semantic Evaluation, pages 1-8, Uppsala, Sweden. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Mayo clinical text analysis and knowledge extraction system (ctakes): architecture, component evaluation and applications", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Guergana", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Savova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Masanz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiaping", |
|
"middle": [], |
|
"last": "Philip V Ogren", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sunghwan", |
|
"middle": [], |
|
"last": "Zheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karin", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Sohn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Kipper-Schuler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Journal of the American Medical Informatics Association", |
|
"volume": "17", |
|
"issue": "5", |
|
"pages": "507--513", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1136/jamia.2009.001560" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guergana K Savova, James J Masanz, Philip V Ogren, Jiaping Zheng, Sunghwan Sohn, Karin C Kipper- Schuler, and Christopher G Chute. 2010. Mayo clin- ical text analysis and knowledge extraction system (ctakes): architecture, component evaluation and ap- plications. Journal of the American Medical Infor- matics Association, 17(5):507-513.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Clamp-a toolkit for efficiently building customized clinical natural language processing pipelines", |
|
"authors": [ |
|
{ |
|
"first": "Ergin", |
|
"middle": [], |
|
"last": "Soysal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingqi", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Min", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yonghui", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Serguei", |
|
"middle": [], |
|
"last": "Pakhomov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hongfang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hua", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Journal of the American Medical Informatics Association", |
|
"volume": "25", |
|
"issue": "3", |
|
"pages": "331--336", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1093/jamia/ocx132" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ergin Soysal, Jingqi Wang, Min Jiang, Yonghui Wu, Serguei Pakhomov, Hongfang Liu, and Hua Xu. 2018. Clamp-a toolkit for efficiently build- ing customized clinical natural language processing pipelines. Journal of the American Medical Infor- matics Association, 25(3):331-336.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "YAGO: A large ontology from wikipedia and wordnet", |
|
"authors": [ |
|
{ |
|
"first": "Fabian", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Suchanek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gjergji", |
|
"middle": [], |
|
"last": "Kasneci", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gerhard", |
|
"middle": [], |
|
"last": "Weikum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Journal of Web Semantics", |
|
"volume": "6", |
|
"issue": "3", |
|
"pages": "203--217", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1016/j.websem.2008.06.001" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2008. YAGO: A large ontology from wikipedia and wordnet. Journal of Web Semantics, 6(3):203-217.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Evaluating temporal relations in clinical text: 2012 i2b2 challenge", |
|
"authors": [ |
|
{ |
|
"first": "Weiyi", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Rumshisky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ozlem", |
|
"middle": [], |
|
"last": "Uzuner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Journal of the American Medical Informatics Association", |
|
"volume": "20", |
|
"issue": "5", |
|
"pages": "806--813", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1136/amiajnl-2013-001628" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Weiyi Sun, Anna Rumshisky, and Ozlem Uzuner. 2013. Evaluating temporal relations in clinical text: 2012 i2b2 challenge. Journal of the American Medical Informatics Association, 20(5):806-813.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Syntactic scaffolds for semantic structures", |
|
"authors": [ |
|
{ |
|
"first": "Swabha", |
|
"middle": [], |
|
"last": "Swayamdipta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Thomson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3772--3782", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1412" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Swabha Swayamdipta, Sam Thomson, Kenton Lee, Luke Zettlemoyer, Chris Dyer, and Noah A. Smith. 2018. Syntactic scaffolds for semantic structures. In Proceedings of the 2018 Conference on Em- pirical Methods in Natural Language Processing, pages 3772-3782, Brussels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Evaluating anaphora and coreference resolution for russian. Computational Linguistics and Intellectual Technologies", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "S Toldova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alina", |
|
"middle": [], |
|
"last": "Roytberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Ladygina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Vasilyeva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matvei", |
|
"middle": [], |
|
"last": "Azerkovich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Kurzukov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Sim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Gorshkov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Ivanova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Nedoluzhko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Grishina", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "681--694", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S Toldova, A. Roytberg, Alina Ladygina, Maria Vasi- lyeva, Ilya Azerkovich, Matvei Kurzukov, G. Sim, D.V. Gorshkov, A. Ivanova, Anna Nedoluzhko, and Y. Grishina. 2014. Ru-eval-2014: Evaluat- ing anaphora and coreference resolution for rus- sian. Computational Linguistics and Intellectual Technologies, pages 681-694.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "SemEval-2013 task 1: TempEval-3: Evaluating time expressions, events, and temporal relations", |
|
"authors": [ |
|
{ |
|
"first": "Naushad", |
|
"middle": [], |
|
"last": "Uzzaman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hector", |
|
"middle": [], |
|
"last": "Llorens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leon", |
|
"middle": [], |
|
"last": "Derczynski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Allen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc", |
|
"middle": [], |
|
"last": "Verhagen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Pustejovsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Seventh International Workshop on Semantic Evaluation", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "1--9", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Naushad UzZaman, Hector Llorens, Leon Derczyn- ski, James Allen, Marc Verhagen, and James Puste- jovsky. 2013. SemEval-2013 task 1: TempEval-3: Evaluating time expressions, events, and temporal relations. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 1- 9, Atlanta, Georgia, USA. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "Fine-grained temporal relation extraction", |
|
"authors": [ |
|
{ |
|
"first": "Siddharth", |
|
"middle": [], |
|
"last": "Vashishtha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aaron", |
|
"middle": [ |
|
"Steven" |
|
], |
|
"last": "White", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2906--2919", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1280" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Siddharth Vashishtha, Benjamin Van Durme, and Aaron Steven White. 2019. Fine-grained temporal relation extraction. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 2906-2919, Florence, Italy. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "SemEval-2007 task 15: TempEval temporal relation identification", |
|
"authors": [ |
|
{ |
|
"first": "Marc", |
|
"middle": [], |
|
"last": "Verhagen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Gaizauskas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Frank", |
|
"middle": [], |
|
"last": "Schilder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Hepple", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graham", |
|
"middle": [], |
|
"last": "Katz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Pustejovsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "75--80", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marc Verhagen, Robert Gaizauskas, Frank Schilder, Mark Hepple, Graham Katz, and James Pustejovsky. 2007. SemEval-2007 task 15: TempEval tempo- ral relation identification. In Proceedings of the Fourth International Workshop on Semantic Evalua- tions (SemEval-2007), pages 75-80, Prague, Czech Republic. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Semeval-2010 task 13: Tempeval-2", |
|
"authors": [ |
|
{ |
|
"first": "Marc", |
|
"middle": [], |
|
"last": "Verhagen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roser", |
|
"middle": [], |
|
"last": "Saur\u00ed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tommaso", |
|
"middle": [], |
|
"last": "Caselli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Pustejovsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 5th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "57--62", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marc Verhagen, Roser Saur\u00ed, Tommaso Caselli, and James Pustejovsky. 2010. Semeval-2010 task 13: Tempeval-2. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 57-62, Up- psala, Sweden. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "A modeltheoretic coreference scoring scheme", |
|
"authors": [ |
|
{ |
|
"first": "Marc", |
|
"middle": [], |
|
"last": "Vilain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Burger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Aberdeen", |
|
"suffix": "" |
|
}, |

{ |

"first": "Dennis", |

"middle": [], |

"last": "Connolly", |

"suffix": "" |

}, |

{ |

"first": "Lynette", |

"middle": [], |

"last": "Hirschman", |

"suffix": "" |

} |
|
], |
|
"year": 1995, |
|
"venue": "Sixth Message Understanding Conference (MUC-6): Proceedings of a Conference Held in", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marc Vilain, John Burger, John Aberdeen, Dennis Con- nolly, and Lynette Hirschman. 1995. A model- theoretic coreference scoring scheme. In Sixth Mes- sage Understanding Conference (MUC-6): Proceed- ings of a Conference Held in Columbia, Maryland, November 6-8, 1995.", |
|
"links": null |
|
}, |
|
"BIBREF47": { |
|
"ref_id": "b47", |
|
"title": "BBN pronoun coreference and entity type corpus. Philadelphia: Linguistic Data Consortium", |
|
"authors": [ |
|
{ |
|
"first": "Ralph", |
|
"middle": [], |
|
"last": "Weischedel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ada", |
|
"middle": [], |
|
"last": "Brunstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.35111/9fx9-gz10" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ralph Weischedel and Ada Brunstein. 2005. BBN pro- noun coreference and entity type corpus. Philadel- phia: Linguistic Data Consortium.", |
|
"links": null |
|
}, |
|
"BIBREF48": { |
|
"ref_id": "b48", |
|
"title": "OntoNotes release 5.0. Linguistic Data Consortium", |
|
"authors": [ |
|
{ |
|
"first": "Ralph", |
|
"middle": [], |
|
"last": "Weischedel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mitchell", |
|
"middle": [], |
|
"last": "Marcus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sameer", |
|
"middle": [], |
|
"last": "Pradhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lance", |
|
"middle": [], |
|
"last": "Ramshaw", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nianwen", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ann", |
|
"middle": [], |
|
"last": "Taylor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Kaufman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michelle", |
|
"middle": [], |
|
"last": "Franchini", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.35111/xmhb-2b84" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Ni- anwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, et al. 2013. OntoNotes release 5.0. Lin- guistic Data Consortium, Philadelphia, PA.", |
|
"links": null |
|
}, |
|
"BIBREF49": { |
|
"ref_id": "b49", |
|
"title": "Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT", |
|
"authors": [ |
|
{ |
|
"first": "Shijie", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Dredze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "833--844", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1077" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shijie Wu and Mark Dredze. 2019. Beto, bentz, be- cas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 833-844, Hong Kong, China. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF50": { |
|
"ref_id": "b50", |
|
"title": "Incremental neural coreference resolution in constant memory", |
|
"authors": [ |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Xia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jo\u00e3o", |
|
"middle": [], |
|
"last": "Sedoc", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "8617--8624", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.emnlp-main.695" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Patrick Xia, Jo\u00e3o Sedoc, and Benjamin Van Durme. 2020. Incremental neural coreference resolution in constant memory. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 8617-8624, Online. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF51": { |
|
"ref_id": "b51", |
|
"title": "Revealing the myth of higher-order inference in coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "Liyan", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Jinho", |

"middle": [ |

"D" |

], |

"last": "Choi", |

"suffix": "" |

} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "8527--8533", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.emnlp-main.686" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Liyan Xu and Jinho D. Choi. 2020. Revealing the myth of higher-order inference in coreference resolution. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8527-8533, Online. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF52": { |
|
"ref_id": "b52", |
|
"title": "A joint sequential and relational model for frame-semantic parsing", |
|
"authors": [ |
|
{ |
|
"first": "Bishan", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Mitchell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1247--1256", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D17-1128" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bishan Yang and Tom Mitchell. 2017. A joint sequen- tial and relational model for frame-semantic parsing. In Proceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing, pages 1247-1256, Copenhagen, Denmark. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF53": { |
|
"ref_id": "b53", |
|
"title": "Going on a vacation\" takes longer than \"going for a walk\": A study of temporal commonsense understanding", |
|
"authors": [ |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Khashabi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qiang", |
|
"middle": [], |
|
"last": "Ning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3363--3369", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1332" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ben Zhou, Daniel Khashabi, Qiang Ning, and Dan Roth. 2019. \"Going on a vacation\" takes longer than \"going for a walk\": A study of temporal com- monsense understanding. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3363-3369, Hong Kong, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF54": { |
|
"ref_id": "b54", |
|
"title": "Clinical temporal relation extraction with probabilistic soft logic regularization and global inference", |
|
"authors": [ |
|
{ |
|
"first": "Yichao", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Yan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rujun", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai-Wei", |
|
"middle": [], |
|
"last": "Harry Caufield", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yizhou", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peipei", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Ping", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of AAAI 2021", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yichao Zhou, Yu Yan, Rujun Han, J Harry Caufield, Kai-Wei Chang, Yizhou Sun, Peipei Ping, and Wei Wang. 2021. Clinical temporal relation extrac- tion with probabilistic soft logic regularization and global inference. In Proceedings of AAAI 2021.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF1": { |
|
"uris": null, |
|
"num": null, |
|
"text": "A portion of the AIDA entity type ontology.", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"num": null, |
|
"text": "UltraFine 40.1 (Onoe and Durrett, 2019) 41.5", |
|
"type_str": "figure" |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"content": "<table><tr><td>Model</td><td>Accuracy</td></tr><tr><td colspan=\"2\">Yang and Mitchell (2017) 88.2</td></tr><tr><td>Hermann et al. (2014)</td><td>88.4</td></tr><tr><td>Peng et al. (2018)</td><td>90.0</td></tr><tr><td>This work</td><td>91.3</td></tr></table>", |
|
"text": "Statistics of FrameNet v1.7", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"content": "<table/>", |
|
"text": "", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"content": "<table/>", |
|
"text": "", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF6": { |
|
"num": null, |
|
"content": "<table/>", |
|
"text": "Statistics of the datasets used for training our entity typing model.", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF7": { |
|
"num": null, |
|
"content": "<table/>", |
|
"text": "Performance of our hierarchical entity typing model across several typing ontologies.", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF9": { |
|
"num": null, |
|
"content": "<table><tr><td>: Result on the combined test set of TempEval3</td></tr><tr><td>and TimeBank-Dense when trained with just 4 tempo-</td></tr><tr><td>ral relation labels</td></tr><tr><td>5 Extensions</td></tr><tr><td>5.1 Incorporating third-party systems</td></tr></table>", |
|
"text": "", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF10": { |
|
"num": null, |
|
"content": "<table/>", |
|
"text": "James Allen, Hannah An, Ritwik Bose, Will de Beaumont, and Choh Man Teng. 2020. A broad-coverage deep semantic lexicon for verbs. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 3243-3251, Marseille, France. European Language Resources Association. Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In The First International Conference on Language Resources and Evaluation Workshop on Linguistics Coreference, pages 563-566. Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. In 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 1, pages 86-90, Montreal, Quebec, Canada. Association for Computational Linguistics. A. E. Budnikov, S Yu Toldova, D. S. Zvereva, D. M. Maximova, and M. I. Ionov. 2019. Ru-eval-2019: Evaluating anaphora and coreference resolution for russian. In Computational Linguistics and Intellectual Technologies -Supplementary Volume. Taylor Cassidy, Bill McDowell, Nathanael Chambers, and Steven Bethard. 2014. An annotation framework for dense event ordering. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 501-506, Baltimore, Maryland. Association for Computational Linguistics. Nathanael Chambers, Taylor Cassidy, Bill McDowell, and Steven Bethard. 2014. Dense event ordering with a multi-pass architecture. Transactions of the Association for Computational Linguistics, 2:273-284. Tongfei Chen, Yunmo Chen, and Benjamin Van Durme. 2020a. Hierarchical entity typing via multi-level learning to rank. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8465-8475, Online. Association for Computational Linguistics. Yunmo Chen, Tongfei Chen, and Benjamin Van Durme. 2020b. Joint modeling of arguments for event understanding. In Proceedings of the First Workshop on Computational Approaches to Discourse, pages 96-101, Online. Association for Computational Linguistics.", |
|
"html": null, |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |