{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:58:07.556962Z"
},
"title": "GENE: Global Event Network Embedding",
"authors": [
{
"first": "Qi",
"middle": [],
"last": "Zeng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois Urbana-Champaign",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Manling",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois Urbana-Champaign",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Tuan",
"middle": [],
"last": "Lai",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois Urbana-Champaign",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois Urbana-Champaign",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "UNC Chapel Hill",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Hanghang",
"middle": [],
"last": "Tong",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois Urbana-Champaign",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Current methods for event representation ignore related events in a corpus-level global context. For a deep and comprehensive understanding of complex events, we introduce a new task, Event Network Embedding, which aims to represent events by capturing the connections among events. We propose a novel framework, Global Event Network Embedding (GENE), that encodes the event network with a multi-view graph encoder while preserving the graph topology and node semantics. The graph encoder is trained by minimizing both structural and semantic losses. We develop a new series of structured probing tasks, and show that our approach effectively outperforms baseline models on node typing, argument role classification, and event coreference resolution. 1",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Current methods for event representation ignore related events in a corpus-level global context. For a deep and comprehensive understanding of complex events, we introduce a new task, Event Network Embedding, which aims to represent events by capturing the connections among events. We propose a novel framework, Global Event Network Embedding (GENE), that encodes the event network with a multi-view graph encoder while preserving the graph topology and node semantics. The graph encoder is trained by minimizing both structural and semantic losses. We develop a new series of structured probing tasks, and show that our approach effectively outperforms baseline models on node typing, argument role classification, and event coreference resolution. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Understanding events is a fundamental human activity. Our minds represent events at various granularity and abstraction levels, which allows us to quickly access and reason about related scenarios. A typical event mention includes an event trigger (the word or phrase that most clearly expresses an event occurrence) and its arguments (i.e., participants in events). The lexical embedding of a trigger is usually not sufficient, because the type of an event often depends on its arguments (Ritter and Rosen, 2000; Xu and Huang, 2013; Weber et al., 2018) . For example, the support verb \"get\" may indicate a Transfer.Ownership event (\"Ellison to spend $10.3 billion to get his company.\") or a Movement.Transport event (\"Airlines are getting flyers to destinations on time more often.\"). In Figure 1, the event type triggered by \"execution\" is Life.Die instead of project implementation. However, such kind of atomic event representation is still overly simplistic since it only captures local information and ignores related events in the global context. Real-world events are inter-connected, as illustrated in the example in Figure 1 . To have a comprehensive representation of the set fire event on an embassy, we need to incorporate its causes (e.g., the preceding execution event) and recent relevant events (e.g., the protests that happened before and after it). To capture these inter-event relations in a global context, we propose the following two assumptions.",
"cite_spans": [
{
"start": 489,
"end": 513,
"text": "(Ritter and Rosen, 2000;",
"ref_id": "BIBREF48"
},
{
"start": 514,
"end": 533,
"text": "Xu and Huang, 2013;",
"ref_id": "BIBREF57"
},
{
"start": 534,
"end": 553,
"text": "Weber et al., 2018)",
"ref_id": "BIBREF56"
}
],
"ref_spans": [
{
"start": 789,
"end": 795,
"text": "Figure",
"ref_id": null
},
{
"start": 1126,
"end": 1134,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Assumption 1. Two events can be connected through the entities involved. On schema or type level, two event types can be connected through multiple paths and form a coherent story (Li et al., 2020) . This observation is also valid on instance level. For the example in Figure 1 , one of the relations between the Set Fire event and the Execution event is the blue path Set Fire, target, Saudi Embassy, affiliation, Saudi Arabia, agent, Execution , which partially supports the fact that angry protesters revenge the death of Nimr al-Nimr against Saudi Arabia by attacking its embassy. This approximation for event-event relations lessens the problems of coarse classification granularity and low inter-annotation agreement (which may be as low as 20% as reported in (Hong et al., 2016) ). Hence, we propose to construct an Event Network, where each event node represents a unique instance labeled with its type, arguments, and attributes. These nodes are connected through multiple instantiated meta-paths (Sun et al., 2011) consisting of their entity arguments and the entity-entity relations. These entities can be co-referential (e.g., two protests on different dates that both occur in Tehran, Iran) or involved in the same semantic relations (both protests targeted the Saudi embassy, which is affiliated with the location entity \"Saudi Arabia\").",
"cite_spans": [
{
"start": 180,
"end": 197,
"text": "(Li et al., 2020)",
"ref_id": "BIBREF35"
},
{
"start": 766,
"end": 785,
"text": "(Hong et al., 2016)",
"ref_id": "BIBREF23"
},
{
"start": 1006,
"end": 1024,
"text": "(Sun et al., 2011)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [
{
"start": 269,
"end": 277,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Assumption 2. The representation of one event depends on its neighboring events in the Figure 1 : An example of Event Network constructed from one VOA news article, where events are connected through entities involved. Each node is an event or entity and each edge represents an argument role or entityentity relation. In this example, Execution event and Set Fire event are connected through two paths, which tell the story of angry protesters revenge the death of Nimral-Nimr against Saudi Arabia by attacking its embassy.",
"cite_spans": [],
"ref_spans": [
{
"start": 87,
"end": 95,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "event network. In Figure 1 , a good representation of the Set Fire event should involve the Execution event because the latter clarifies the grievance motivating the former. We further enrich event representations by introducing more context from the entire event network. Compared with other methods to connect events (e.g., with eventevent relations (Pustejovsky et al., 2003; Cassidy et al., 2014; Hong et al., 2016; Ikuta et al., 2014; ), our representation of each event grounded in an event network is semantically richer.",
"cite_spans": [
{
"start": 352,
"end": 378,
"text": "(Pustejovsky et al., 2003;",
"ref_id": "BIBREF46"
},
{
"start": 379,
"end": 400,
"text": "Cassidy et al., 2014;",
"ref_id": "BIBREF5"
},
{
"start": 401,
"end": 419,
"text": "Hong et al., 2016;",
"ref_id": "BIBREF23"
},
{
"start": 420,
"end": 439,
"text": "Ikuta et al., 2014;",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 18,
"end": 26,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Based on these two hypotheses, we introduce a new task of Event Network Embedding, aiming at representing events with low-dimensional and informative embeddings by incorporating neighboring events. We also propose a novel Global Event Network Embedding Learning (GENE) framework for this task. To capture network topology and preserve node attributes in the event representations, GENE trains a graph encoder by minimizing both structural and semantic losses. To promote relational message passing with focus on different parts of the graph, we propose an innovative multi-view graph encoding method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We design Event Network Structural Probes, an evaluation framework including a series of structural probing tasks, to check the model's capability to implicitly incorporate event network structures. In this work, the learned node embeddings are intrinsically evaluated with node typing and event argument role classification tasks, and applied to the downstream task of event coreference resolution. Experimental results on the augmented Automatic Content Extraction (ACE) dataset show that leveraging global context can significantly enrich the event representations. GENE and its variants significantly outperform the baseline methods on various tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In summary, our contributions are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We formalize the task of event network embedding and accordingly propose a novel unsupervised learning framework, which trains the multi-view graph encoder with topology and semantics learning losses. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "v i = a i , b i , s i , l i \u2208 V",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "represents an event or entity mention, where a i and b i are the start and end word indices in sentence s i , and l i is the node type label. Each edge e ij = i, j, l ij \u2208 E represents an event-entity or entity-entity relation, where i and j are indices of the involved nodes and l ij is the edge type label. In this work, we initialize the semantic representation of each node v i with an m-dimensional attribute vector x i derived from sentence context using a pretrained BERT model (Devlin et al., 2019) .",
"cite_spans": [
{
"start": 485,
"end": 506,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Semantic Proximity (Gao and Huang, 2018) . Given an event network G = {V, E}, the semantic proximity of node v i and node v j is determined by the similarity of node attribute vectors x i and x j . If two nodes are semantically similar in the original space, they should stay similar in the new space.",
"cite_spans": [
{
"start": 19,
"end": 40,
"text": "(Gao and Huang, 2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Local Neighborhood. Given G = {V, E}, the local (one-hop) neighborhood N i of node v i is defined as N i = {v j \u2208 V | e ij \u2208 E}. For example, the local neighborhood of one event is composed of its argument entities. Given event-entity node pairs, the task of argument role classification is to label the local neighborhood of events.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Global Neighborhood. Given G = {V, E}, node v j belongs to the global (k-hop with k \u2265 2) neighborhood of node v i , if node v i can walk to node v j in k hops. For example, two events are 3hop neighbors when there is a path from one event to the other through two entity nodes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
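The local and global neighborhood definitions above reduce to hop distances over the graph. The following sketch is our own Python illustration, not the authors' code; `hop_distances`, `global_neighbors`, and the undirected adjacency-list representation are all assumptions:

```python
from collections import deque

def hop_distances(adj, start):
    """BFS hop count from `start` to every node; -1 marks unreachable nodes.
    adj: dict mapping a node to the list of its neighbors (undirected)."""
    dist = {v: -1 for v in adj}
    dist[start] = 0
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if dist[v] == -1:          # first visit = shortest hop count
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def global_neighbors(adj, i, k):
    """Global (k-hop, k >= 2) neighborhood of node i: reachable in 2..k hops."""
    d = hop_distances(adj, i)
    return {v for v, h in d.items() if 2 <= h <= k}
```

On an event-entity-entity-event path, the two events come out as 3-hop neighbors of each other, matching the example in the text.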
{
"text": "Event Network Embedding. Given an event network G = {V, E} with n nodes, the task of event network embedding aims to learn a mapping function f : {V, E} \u2192 Y or f :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "R n\u00d7m \u00d7 R n\u00d7n \u2192 R n\u00d7d , where Y = [y i ] \u2208 R n\u00d7d is the node rep- resentation, d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "is the embedding dimension, and Y should preserve the Semantic Proximity, Local Neighborhood and Global Neighborhood.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Compared to other network embedding tasks, there are three challenges in event network embedding:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach Overview",
"sec_num": "3.1"
},
{
"text": "\u2022 Data Sparsity: We rely on supervised Information Extraction (IE) techniques to construct the event network, because they provide highquality knowledge elements. However, due to the limited number of types in pre-defined ontologies, the constructed event network tends to be sparse. \u2022 Relational Structure: The event network is heterogeneous with edges representing relations of different types. Relation types differ in semantics and will influence message passing. \u2022 Long-Distance Dependency: Global neighborhood preservation requires node embedding to capture the distant relations between two nodes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach Overview",
"sec_num": "3.1"
},
{
"text": "We first initiate the event network by event and entity extraction, event argument role labeling and entity-entity relation extraction. The nodes in the event network are events and entities. If entity coreference resolution results are available, we merge coreferential entity mentions and label the mention text with the first occurring mention. For each node v i , we derive its m-dimensional attribute vector x i with its mention text by averaging the corresponding contextual token embeddings from a pretrained bert-base model. The edges in the event network come from the event argument roles connecting event mentions and entities, and the entity-entity relations. In addition, to alleviate the data sparsity problem we enrich the event network with external Wikipedia entity-entity relations and event narrative orders as a data preprocessing step detailed in Section 5.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach Overview",
"sec_num": "3.1"
},
{
"text": "We propose an unsupervised Global Event Network Embedding (GENE) learning framework for this task ( Figure 2 ). We first encode the graph with a Relational Graph Convolutional Network (RGCN) (Schlichtkrull et al., 2018) based multiview graph encoder, in which the multi-view component puts focus on various perspectives of the graph. To capture both the semantic and topological contexts, i.e. the node attributes and graph structure, in event node representation, GENE trains the graph encoder by minimizing semantic reconstruction loss and relation discrimination loss.",
"cite_spans": [],
"ref_spans": [
{
"start": 100,
"end": 108,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Approach Overview",
"sec_num": "3.1"
},
{
"text": "Given an event network G = {V, E}, the graph encoder projects the nodes into a set of embeddings Y while preserving the graph structure and node attributes. As shown in Figure 2 , we first feed different views of G to the graph encoder, then integrate encoded node embeddings into Y .",
"cite_spans": [],
"ref_spans": [
{
"start": 169,
"end": 177,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multi-View Graph Encoder",
"sec_num": "3.2"
},
{
"text": "RGCN. Because of the relational structure of event network, we apply RGCN (Schlichtkrull et al., 2018), a relational variant of GCN (Kipf and Welling, 2017), as the graph encoder. RGCN induces the node embeddings based on the local neighborhood with operations on a heterogeneous graph. It differs from GCN in the type-specific weights in message propagation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-View Graph Encoder",
"sec_num": "3.2"
},
{
"text": "We stack two RGCN layers in the encoder. The hidden state of node v i in the first layer is initiated with node attribute x i . The output of the former layer serves as the input of the next layer. Formally, in each RGCN layer the hidden state h of node v i is updated through message propagation with the hidden states of neighbors (and itself) from the last layer and message aggregation with an addition Figure 2 : An overview of the proposed GENE framework. The event network is encoded by a relational graph convolutional network, which is trained with node reconstruction loss and relation discrimination loss.",
"cite_spans": [],
"ref_spans": [
{
"start": 407,
"end": 415,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multi-View Graph Encoder",
"sec_num": "3.2"
},
{
"text": "operation and an element-wise activation function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-View Graph Encoder",
"sec_num": "3.2"
},
{
"text": "h (l+1) i = \u03c3(W (l) 0 h (l) i + r\u2208R j\u2208N r i 1 c i,r W (l) r h (l) j ), where h (l) i \u2208 R d (l)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-View Graph Encoder",
"sec_num": "3.2"
},
{
"text": "is the hidden state of node v i at the l-th layer of RGCN, d (l) is the dimension of the hidden state at the l-th layer, R is the edge relation set, N r i is the neighborhood of node",
"cite_spans": [
{
"start": 53,
"end": 64,
"text": "RGCN, d (l)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-View Graph Encoder",
"sec_num": "3.2"
},
{
"text": "v i under relation type r \u2208 R, W (l)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-View Graph Encoder",
"sec_num": "3.2"
},
{
"text": "r is the trainable weight matrix of relation type r at the l-th layer, c i,r = |N r i | is a normalization constant, and \u03c3 is Leaky ReLU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-View Graph Encoder",
"sec_num": "3.2"
},
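The layer update above can be sketched in NumPy as follows. This is an illustrative sketch under our own naming, not the paper's implementation; dense per-relation 0/1 adjacency matrices and the `rgcn_layer` signature are assumptions:

```python
import numpy as np

def rgcn_layer(H, adj_by_rel, W0, W_rel, alpha=0.01):
    """One RGCN message-passing step.
    H: (n, d_in) hidden states; adj_by_rel: relation -> (n, n) 0/1 adjacency;
    W0: (d_in, d_out) self-loop weight; W_rel: relation -> (d_in, d_out)
    type-specific weight; alpha: Leaky ReLU slope."""
    out = H @ W0                                   # self-connection W_0 h_i
    for r, A in adj_by_rel.items():
        deg = A.sum(axis=1, keepdims=True)         # c_{i,r} = |N_i^r|
        norm = np.divide(1.0, deg, out=np.zeros_like(deg), where=deg > 0)
        out = out + norm * (A @ (H @ W_rel[r]))    # normalized neighbor messages
    return np.where(out > 0, out, alpha * out)     # Leaky ReLU
```

Stacking two such layers, with the first layer's input set to the node attribute vectors, mirrors the encoder described in the text.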
{
"text": "Weight Decomposition. In order to reduce the growing model parameter size and prevent the accompanying over-fitting problem, we follow (Schlichtkrull et al., 2018) and perform basis decomposition on relation weight matrix:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-View Graph Encoder",
"sec_num": "3.2"
},
{
"text": "W (l) r = B b=1 a (l) rb V (l) b ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-View Graph Encoder",
"sec_num": "3.2"
},
{
"text": "where the edge weight W",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-View Graph Encoder",
"sec_num": "3.2"
},
{
"text": "(l) r is a linear combination of basis transformations V (l) b \u2208 R d (l+1) \u00d7d (l)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-View Graph Encoder",
"sec_num": "3.2"
},
{
"text": "with coefficients a rb . This basis decomposition method reduces model parameters by using a much smaller base set B to compose relation set R and can be seen as a way of weight sharing between different relation types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-View Graph Encoder",
"sec_num": "3.2"
},
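The decomposition can be sketched as a single einsum. This is our illustration; `compose_relation_weights` and the (R, B) / (B, d_out, d_in) array layout are assumptions, not the paper's code:

```python
import numpy as np

def compose_relation_weights(coeffs, bases):
    """Basis decomposition W_r = sum_b a_rb * V_b.
    coeffs: (R, B) per-relation coefficients; bases: (B, d_out, d_in)
    shared basis matrices. Returns (R, d_out, d_in): one weight per relation."""
    return np.einsum('rb,boi->roi', coeffs, bases)
```

The parameter count drops from R x d_out x d_in to B x d_out x d_in + R x B, which is the weight-sharing effect described above.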
{
"text": "Multiple Views. The structure of event networks can be viewed in multiple different perspectives. For example, when entity-entity relations are masked out, an event network degenerates to pieces of isolated events and only local neighborhood will be observed. The advantage of separate modeling is that it enables the graph encoder to focus on different perspectives of the graph and lessens the over-smoothing problem (the tendency of indistinguishable encoded node embeddings). Therefore, we propose to encode the network G = {V, E} from the following views:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-View Graph Encoder",
"sec_num": "3.2"
},
{
"text": "(1) Complete View: We keep all nodes and all edges in this view.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-View Graph Encoder",
"sec_num": "3.2"
},
{
"text": "(2) Event-Entity View: We keep all nodes and only event-entity relations in this view. Events are isolated as single subgraphs, each of which only includes the corresponding event and its argument entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-View Graph Encoder",
"sec_num": "3.2"
},
{
"text": "(3) Entity-Only View: We only keep entity nodes and entity-entity relations in this view. Information is flowed only among entity nodes and will not be influenced by events.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-View Graph Encoder",
"sec_num": "3.2"
},
{
"text": "(4) Event-Only View: We only keep event nodes and event-event relations in this view. Similarly, events are isolated from entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-View Graph Encoder",
"sec_num": "3.2"
},
{
"text": "We feed the event network in different views as separate inputs to the graph encoder, and integrate the encoded results in three ways:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-View Graph Encoder",
"sec_num": "3.2"
},
{
"text": "Concatenation. Node embeddings of d v dimen- sions from v views are directly concatenated with y cat = [y 0 \u2022 y 1 \u2022 \u2022 \u2022 y v\u22121 ].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-View Graph Encoder",
"sec_num": "3.2"
},
{
"text": "Averaging. Node embeddings of d dimensions from v views are averaged with",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-View Graph Encoder",
"sec_num": "3.2"
},
{
"text": "y avg = 1 v v j=1 y j . Weighted Averaging. Node embeddings of d dimensions from v views are averaged with y wavg = 1 v v j=1 W j v y j , where W j v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-View Graph Encoder",
"sec_num": "3.2"
},
{
"text": "is a trainable matrix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-View Graph Encoder",
"sec_num": "3.2"
},
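The three integration strategies can be sketched as follows. This is our illustration; `integrate_views`, the mode strings, and the per-view (d, d) weight layout are assumptions:

```python
import numpy as np

def integrate_views(view_embs, mode="avg", weights=None):
    """Combine per-view node embeddings (each (n, d)) into one matrix.
    'cat'  -> (n, v*d): concatenate along the feature axis
    'avg'  -> (n, d):   plain average over views
    'wavg' -> (n, d):   average of W_j-transformed views (trainable W_j)"""
    if mode == "cat":
        return np.concatenate(view_embs, axis=1)
    if mode == "avg":
        return np.mean(view_embs, axis=0)
    if mode == "wavg":
        return np.mean([y @ W.T for y, W in zip(view_embs, weights)], axis=0)
    raise ValueError(f"unknown mode: {mode}")
```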
{
"text": "To capture neighborhood information, we train the graph encoder with relation discrimination loss to learn the graph topology.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topology Learning",
"sec_num": "3.3"
},
{
"text": "L T = i ( r\u2208R j\u2208N r i E[log D r (y i , y j )] + r\u2208R j / \u2208N r i E[log(1 \u2212 D r (y i , y j ))])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topology Learning",
"sec_num": "3.3"
},
{
"text": "The relation-specific discriminator D r determines the probability score for one node's being connected with another node in relation r:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topology Learning",
"sec_num": "3.3"
},
{
"text": "D r (y i , y j ) = \u03c3(y T i W r D y j ) where W r",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topology Learning",
"sec_num": "3.3"
},
{
"text": "D is a trainable bi-linear scoring matrix and \u03c3 is Sigmoid funtion. We choose binary discriminator over multi-class classifier to capture features required for independent classification decisions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topology Learning",
"sec_num": "3.3"
},
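A minimal sketch of the bilinear discriminator and the (negated, so it can be minimized) topology objective, with sampled negative pairs standing in for the j not in N_i^r terms. This is our illustration, not the authors' code; all names are hypothetical:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discriminate(y_i, W_r, y_j):
    """D_r(y_i, y_j) = sigma(y_i^T W_r y_j): bilinear probability that
    nodes i and j are linked under relation r."""
    return sigmoid(y_i @ W_r @ y_j)

def topology_loss(Y, rel_weights, pos_pairs, neg_pairs):
    """Binary cross-entropy pushing D_r toward 1 on linked pairs and
    toward 0 on sampled unlinked pairs.
    pos_pairs / neg_pairs: dict relation -> list of (i, j) index pairs."""
    loss = 0.0
    for r, W in rel_weights.items():
        for i, j in pos_pairs.get(r, []):
            loss -= np.log(discriminate(Y[i], W, Y[j]))
        for i, j in neg_pairs.get(r, []):
            loss -= np.log(1.0 - discriminate(Y[i], W, Y[j]))
    return loss
```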
{
"text": "To preserve the node semantics, we perform node attribute reconstruction with a two-layer feedforward neural network:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantics Learning",
"sec_num": "3.4"
},
{
"text": "L S = i x i \u2212 \u03c6(y i ) 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantics Learning",
"sec_num": "3.4"
},
{
"text": "where x i represents the attributes of node v i , y i represents the encoded embedding of node v i , and \u03c6 : R n\u00d7d \u2192 R n\u00d7m denotes the non-linear transformation function. L S loss evaluates how much information required to reconstruct node attributes is preserved in the encoded node embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantics Learning",
"sec_num": "3.4"
},
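The reconstruction term can be sketched directly from the definition. Illustrative only; `decode` stands in for the two-layer feed-forward network phi:

```python
import numpy as np

def semantic_loss(X, Y, decode):
    """L_S = sum_i ||x_i - decode(y_i)||^2: squared error of reconstructing
    node attributes X (n, m) from embeddings Y (n, d)."""
    return float(np.sum((X - decode(Y)) ** 2))
```

In training, this term is combined with the topology loss as L = L_T + lambda * L_S, as described in the following section.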
{
"text": "To encourage the graph encoder to learn both the graph topology and node semantics, we combine the structural loss and semantics loss as the final objective function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.5"
},
{
"text": "L = L T + \u03bbL S",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.5"
},
{
"text": "where \u03bb is a weight normalization hyper-parameter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.5"
},
{
"text": "As there is no existing work on comprehensive event representation evaluation, in this work we design an evaluation framework with a series of probing tasks to comprehensively evaluate the model's capability to capture network structures and preserve node attributes. Structural Probes are models trained to predict certain properties from inferred representations, and have been used to understand linguistic properties (Hewitt and Manning, 2019; Conneau et al., 2018) .",
"cite_spans": [
{
"start": 421,
"end": 447,
"text": "(Hewitt and Manning, 2019;",
"ref_id": "BIBREF21"
},
{
"start": 448,
"end": 469,
"text": "Conneau et al., 2018)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Structural Probes for Event Network",
"sec_num": "4"
},
{
"text": "The task of event network embedding requires the embedded distributional node representations to preserve semantic proximity, local neighborhood and global neighborhood. Accordingly, we intrinsically evaluate the semantics preservation with node typing and assess the local neighborhood preservation with event argument role classification. We also apply the node embeddings to a downstream task, event coreference resolution, to extrinsically evaluate the global neighborhood preservation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structural Probes for Event Network",
"sec_num": "4"
},
{
"text": "Node Typing and Event Argument Role Classification are conducted under the same evaluation setting: given the learned node embeddings, predict the labels with a multi-layer perceptron (MLP) based classifier. If the input of the classifier is of different dimension to the event network embeddings, it will be first projected into the same dimension. The classifier is a two-layer feed-forward neural network with a linear transformation layer, a nonlinear activation operation, a layer normalization, a dropout operation, and another linear transformation layer. The classifier is designed to be simple on purpose so that it will be limited in reasoning ability and thus the evidence for classification will be mainly derived from the node embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structural Probes for Event Network",
"sec_num": "4"
},
{
"text": "The event or entity type of each node can be inferred from the sentence context of its mentions. As the node attribute vector x i for node v i comes from the contextual word embeddings, x i naturally implies its node type. This characteristic is supposed to be preserved after the node has been further embedded and the embedding dimension has been reduced.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Node Typing",
"sec_num": "4.1"
},
{
"text": "We evaluate the node semantics preservation by checking whether the node types can be recovered from the node embeddings. Given one event or entity node, our evaluation model predicts its type out of 45 labels, which includes 7 coarse-grained entity types, 5 value types, and 33 event types as defined in the NIST Automatic Content Extraction (ACE) task. The performance on this task is compared in terms of multi-label classification Micro F1 score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Node Typing",
"sec_num": "4.1"
},
{
"text": "We detect local neighborhood preservation by evaluating whether the event-entity relation (event argument role) can be recovered from the node embeddings. Given one event node and one entity node, we predict the relation type between each pair of nodes out of 238 labels. Each label consists of an event type and an argument role type as defined in ACE. For example, the argument role label \"Justice:Arrest-Jail:Agent\" can only be correctly selected when the event node implies the type \"Justice:Arrest-Jail\" and the entity node implies its role being the \"Agent\". Compared to the traditional argument role labeling procedure, this setting skips the step of mention identification, which has been done in network construction process. The performance is reported with multi-label classification Micro F1 score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Event Argument Role Classification",
"sec_num": "4.2"
},
{
"text": "The goal of event coreference resolution is to determine which event mentions refer to the same real-world event. The features for similarity computation used in previous work are typically limited to event triggers, arguments and sentence-level contexts Sammons et al., 2015; Lu and Ng, 2016; Chen and Ng, 2016; Duncan et al., 2017; Lai et al., 2021) . However, event arguments are often distributed across the content of an article. Therefore a global event network can ground event mentions into a wider context with related events and help cluster coreferential mentions more accurately.",
"cite_spans": [
{
"start": 255,
"end": 276,
"text": "Sammons et al., 2015;",
"ref_id": "BIBREF49"
},
{
"start": 277,
"end": 293,
"text": "Lu and Ng, 2016;",
"ref_id": "BIBREF38"
},
{
"start": 294,
"end": 312,
"text": "Chen and Ng, 2016;",
"ref_id": "BIBREF8"
},
{
"start": 313,
"end": 333,
"text": "Duncan et al., 2017;",
"ref_id": "BIBREF17"
},
{
"start": 334,
"end": 351,
"text": "Lai et al., 2021)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Event Coreference Resolution",
"sec_num": "4.3"
},
{
"text": "In this task we evaluate the impact of applying event network embedding as additional features on enhancing event coreference resolution. We concatenate the event embeddings learned by the event network and by a fine-tuned SpanBERT model as the input for the scoring function. The training procedure is the same as that in (Joshi et al., 2019) .",
"cite_spans": [
{
"start": 323,
"end": 343,
"text": "(Joshi et al., 2019)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Event Coreference Resolution",
"sec_num": "4.3"
},
{
"text": "We report F1 scores in terms of B 3 (Bagga and Baldwin, 1998), MUC (Vilain et al., 1995) , CEAF e (Luo, 2005) , BLANC (Recasens and Hovy, 2011) metrics, and also their averaged results (AVG).",
"cite_spans": [
{
"start": 67,
"end": 88,
"text": "(Vilain et al., 1995)",
"ref_id": "BIBREF54"
},
{
"start": 98,
"end": 109,
"text": "(Luo, 2005)",
"ref_id": "BIBREF40"
},
{
"start": 118,
"end": 143,
"text": "(Recasens and Hovy, 2011)",
"ref_id": "BIBREF47"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Event Coreference Resolution",
"sec_num": "4.3"
},
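As an illustration of one of these metrics, a minimal B^3 computation can be written as follows. This is a didactic sketch under the assumption of disjoint clusters, not the official reference scorer.

```python
def b_cubed(predicted, gold):
    """B^3 precision, recall, and F1 for two clusterings, each given as a
    list of disjoint sets of mention ids. Per-mention precision (recall) is
    the overlap of the mention's predicted and gold clusters divided by the
    predicted (gold) cluster size, averaged over mentions."""
    def mention_to_cluster(clusters):
        return {m: c for c in clusters for m in c}
    pc, gc = mention_to_cluster(predicted), mention_to_cluster(gold)
    mentions = set(pc) & set(gc)
    p = sum(len(pc[m] & gc[m]) / len(pc[m]) for m in mentions) / len(mentions)
    r = sum(len(pc[m] & gc[m]) / len(gc[m]) for m in mentions) / len(mentions)
    f1 = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, f1
```

For example, predicting `[{1, 2}, {3}]` against gold `[{1, 2, 3}]` gives perfect precision but recall of 5/9, since mention 3 recovers only a third of its gold cluster.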
{
"text": "We construct corpus-level graphs for training, development, and test sets from the English subset of Automatic Content Extraction (ACE) 2005 dataset 2 . We follow the pre-processing steps in and show the dataset statistics in Table 1. We perform automatic entity linking (Pan et al., 2017) to link entities to Wikipedia. Entity nodes linked to the same Wikipedia entity are merged into one node. We further retrieve entity-entity relations from Wikidata and enrich the event network with these connections, such as the part-whole relation between Tehran and Iran in Figure 1 . We also add narrative event-event relations by connecting every pair of events within one document as edges in the graph.",
"cite_spans": [
{
"start": 271,
"end": 289,
"text": "(Pan et al., 2017)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [
{
"start": 226,
"end": 234,
"text": "Table 1.",
"ref_id": "TABREF3"
},
{
"start": 566,
"end": 574,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "5.1"
},
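The construction steps above (merging entities by Wikipedia link, adding event-entity edges, and connecting every event pair within a document) can be sketched as follows. The data shapes (`documents` mapping a document id to `(event, entity_args)` pairs, and `wiki_link` mapping an entity mention to a Wikipedia title or `None`) are our own illustrative assumptions, not the paper's released format.

```python
from itertools import combinations

def build_event_network(documents, wiki_link):
    """Build a corpus-level event network as (nodes, typed edges).

    Entity mentions linked to the same Wikipedia title collapse into one
    node; unlinked mentions keep their surface form. Every pair of events
    in the same document is connected by a narrative edge."""
    nodes, edges = set(), set()
    def entity_node(mention):
        return wiki_link.get(mention) or mention  # merge by Wikipedia title
    for doc_id, events in documents.items():
        for event, args in events:
            nodes.add(event)
            for arg in args:
                e = entity_node(arg)
                nodes.add(e)
                edges.add((event, e, "event-entity"))
        # narrative event-event edges within the document
        for e1, e2 in combinations([ev for ev, _ in events], 2):
            edges.add((e1, e2, "narrative"))
    return nodes, edges
```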
{
"text": "Non-Graph Event Representation Methods. Mention-based method represents events with contextual representations inferred by BERT (Devlin et al., 2019) . Tuple-based method uses the averaged contextual representations of event mentions and its arguments.",
"cite_spans": [
{
"start": 128,
"end": 149,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "5.2"
},
{
"text": "Graph Representation Methods. Skipgram (Mikolov et al., 2013) learns graph topology by increasing the predicted similarity of adjacent node embeddings and decreasing the similarity of irrelevant node embeddings with random negative sampling:",
"cite_spans": [
{
"start": 39,
"end": 61,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "5.2"
},
{
"text": "L G = i ( j\u2208N i log \u03c3(y T j y i )+ j / \u2208N i log \u03c3(\u2212y T j y i ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "5.2"
},
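The skip-gram objective L_G can be sketched in plain Python. As written in the paper it is an objective to be maximized (sums of log-sigmoids). The graph and embedding containers below are illustrative, not the paper's implementation; negatives are drawn with random sampling as described.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def skipgram_loss(embeddings, neighbors, num_neg=2, rng=None):
    """Negative-sampling skip-gram objective over a graph.

    embeddings: dict node -> vector; neighbors: dict node -> set of
    adjacent nodes. Rewards high similarity y_j^T y_i for neighbors j of i
    and low similarity for randomly sampled non-neighbors."""
    rng = rng or random.Random(0)
    nodes = list(embeddings)
    total = 0.0
    for i in nodes:
        for j in neighbors[i]:
            total += math.log(sigmoid(dot(embeddings[j], embeddings[i])))
        negatives = [n for n in nodes if n != i and n not in neighbors[i]]
        for j in rng.sample(negatives, min(num_neg, len(negatives))):
            total += math.log(sigmoid(-dot(embeddings[j], embeddings[i])))
    return total
```

Embeddings that align neighbors and separate non-neighbors score strictly higher under this objective than embeddings that do the opposite.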
{
"text": "Deep Graph Infomax (Velickovic et al., 2019) captures graph topology by maximizing the mutual information between patch representations and higher-level subgraph summary:",
"cite_spans": [
{
"start": 19,
"end": 44,
"text": "(Velickovic et al., 2019)",
"ref_id": "BIBREF53"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "5.2"
},
{
"text": "L D = i ( j\u2208N i E[log D(y i , s)] + j / \u2208N i E[log(1 \u2212 D(y j , s))])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "5.2"
},
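A minimal sketch of this objective follows, with the mean readout for s and the simplification D(y, s) = σ(yᵀs); the original DGI uses a learned bilinear discriminator, and the corrupted embeddings would come from shuffled node features.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def dgi_objective(real_embeddings, corrupted_embeddings):
    """DGI-style mutual-information objective: the summary s is the average
    of the real node embeddings; real nodes should score high against s,
    corrupted nodes should score low."""
    n = len(real_embeddings)
    s = [sum(col) / n for col in zip(*real_embeddings)]  # mean readout
    obj = sum(math.log(sigmoid(dot(y, s))) for y in real_embeddings)
    obj += sum(math.log(1.0 - sigmoid(dot(y, s)))
               for y in corrupted_embeddings)
    return obj
```

Corrupted embeddings that point away from the summary are easy for the discriminator to reject, so they yield a higher objective than corruptions indistinguishable from the real nodes.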
{
"text": "where the subgraph summary s is read out as the average of node embeddings and D is the discriminator deciding the probability score for node's being contained in the summary. For fair comparison, we train the same framework with the following graph representation learning methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "5.2"
},
{
"text": "Event Coreference Resolution. Besides existing methods (Bejan and Harabagiu, 2010b; Liu et al., 2014) we implement the model architecture (Lee et al., 2017) that has achieved the current state-of-the-art results in entity coreference resolution (Joshi et al., 2019) ACE train 521 4,353 3,688 7,888 6,856 7,040 70,992 912 dev 30 494 667 938 723 853 12,572 144 test 40 424 750 897 796 1,543 6,154 121 coreference resolution (Cattan et al., 2020) . We use SpanBERT for contextual embeddings. The detailed methods about the baseline event corefernece resolution framework are described in (Lai et al., 2021) . In this experiment, we compare the performance with and without our event network embeddings as additional features.",
"cite_spans": [
{
"start": 55,
"end": 83,
"text": "(Bejan and Harabagiu, 2010b;",
"ref_id": "BIBREF3"
},
{
"start": 84,
"end": 101,
"text": "Liu et al., 2014)",
"ref_id": "BIBREF37"
},
{
"start": 138,
"end": 156,
"text": "(Lee et al., 2017)",
"ref_id": "BIBREF33"
},
{
"start": 245,
"end": 265,
"text": "(Joshi et al., 2019)",
"ref_id": "BIBREF28"
},
{
"start": 450,
"end": 471,
"text": "(Cattan et al., 2020)",
"ref_id": "BIBREF6"
},
{
"start": 613,
"end": 631,
"text": "(Lai et al., 2021)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [
{
"start": 266,
"end": 426,
"text": "ACE train 521 4,353 3,688 7,888 6,856 7,040 70,992 912 dev 30 494 667 938 723 853 12,572 144 test 40 424 750 897 796 1,543 6,154 121",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Baseline",
"sec_num": "5.2"
},
{
"text": "All models are implemented with Deep Graph Library and Pytorch framework. We train each models for 10 epochs and apply an early stopping strategy with a patience of 3 epochs (if the model does not outperform its best checkpoint for 3 epochs on validation set we will stop the training process). The batch size is 64.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Details",
"sec_num": "5.3"
},
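The early-stopping rule described above (patience of 3 epochs on the validation score, at most 10 epochs) can be sketched as follows; the function only simulates the stopping decision over a precomputed list of per-epoch validation scores, which is an illustrative simplification of a real training loop.

```python
def early_stopping_epochs(epoch_scores, max_epochs=10, patience=3):
    """Return (epochs_run, best_epoch) under patience-based early stopping.

    Training stops once `patience` consecutive epochs fail to beat the
    best validation score seen so far."""
    best, best_epoch, ran = float("-inf"), -1, 0
    for epoch, score in enumerate(epoch_scores[:max_epochs]):
        ran = epoch + 1
        if score > best:
            best, best_epoch = score, epoch
        elif epoch - best_epoch >= patience:
            break  # no improvement for `patience` epochs
    return ran, best_epoch
```

With scores peaking at epoch 1 and then stagnating, training halts after epoch 4 (five epochs run) even if a later score would have been higher.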
{
"text": "The hyper-parameters are selected based on model performance on development set. The model is optimized with the Adam optimizer with a learning rate of 1e \u2212 5 and a dropout rate of 0.1. The embedding dimension is 256 and the hidden dimension is 512. The lambda in loss function is 1.0. On average it takes approximately four hours to train a model until converge with one Tesla V100 GPU with 16GB DRAM. To improve training efficiency, neighbor pre-sampling is performed for all topology learning losses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Details",
"sec_num": "5.3"
},
{
"text": "We conclude the results shown in Table 2 with the following observations: GENE preserves node semantics well with low-dimensional and informative embeddings. Though with only one third of embedding dimension (typically 256, comparing to 768 in other event representation baselines), our models have higher performance on Node Typing, which shows the node semantics has been well preserved.",
"cite_spans": [],
"ref_spans": [
{
"start": 33,
"end": 40,
"text": "Table 2",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5.4"
},
{
"text": "Topology learning loss is crucial to event neighborhood proximity preservation. We propose to use relation discrimination loss to learn the graph structure and exam it with argument role classification task. Methods without topology learning objectives (Event as Mention, Event as Tuple, and GENE w/ L T ) have a significant drop of performance on this task, while our proposed model has the best performance because of the similarity and transferability between argument role classification and argument role discrimination in L T . Another reason is that only L T is designed for heterogeneous graphs while SKG and DGI do not consider relation types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5.4"
},
{
"text": "In general Multi-view encoder is beneficial. Compared to the single-view variants, our multiview encoder has overall better performance. Keeping complete view has the most closed performance, while discarding event-entity relations yields significant drop on argument role classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5.4"
},
{
"text": "Averaging multi-view embeddings is better than Weighted Averaging. Intuitively weighted averaging captures the correlations among different embedding dimensions, promotes salient dimensions and/or teases out unimportant ones within the same view by performing a linear transformation within each view before averaging over views. However, results show that it is not comparable with averaging and concatenation multi-view encoders. One possible reason is that the distribution of embedding within each view is greatly restricted by the input embedding distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5.4"
},
{
"text": "GENE improves the performance on event coreference resolution by connecting events through related entities. SpanBERT model is a strong baseline with better performance compared with the former methods. We show that using our embeddings as additional features, SpanBERT can further improve all event coreference resolution scores. In the following example, SpanBERT model fails to detect the coreference link between event sell and event buy while GENE succeeds by discovering the relation between the entity arguments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5.4"
},
{
"text": "... The Times said Vivendi Universal was negotiating to sell its flagship theme parks to New York investment firm Blackstone Group as a the first step toward dismantlingits entertainment empire . Remaining Challenges. One of the unsolved challenges is to capture the long distance relation in the encoder in addition to the two encoder layers. Another challenge is the limited ability in entity coreference resolution. In some failing cases, GENE model does not link two events because some of their connecting arguments are expressed as pronouns. This limitation is inherited from the upstream event extraction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5.4"
},
{
"text": "Event Representation. Some previous efforts enrich event representations by introducing arguments (Levin, 1993; Goldberg, 1995; Ritter and Rosen, 2000; Huang and Ahrens, 2000; Iwata, 2005; Goldberg, 2006; Xu and Huang, 2013; Bies et al., 2016; Do et al., 2017; Kalm et al., 2019) , intent and sentiment (Ding et al., 2019) , and temporal information (Tong et al., 2008) . (Weber et al., 2018) proposes a tensor-based event composition approach to combine a trigger and arguments to represent each event. We extend the definition of scenario to multiple inter-connected events. (Modi, 2016) captures statistical dependencies between events but limits to script data sets where the events are naturally organized in sequential temporal order. Our approach captures a rich variety of explicit semantic connections among complex events. (Hong et al., 2018) learns distributed event representations using supervised multi-task learning, while our framework is based on unsupervised learning. Network Embedding. Our work falls into the scope of unsupervised learning for heterogeneous attributed network embeddings. Heterogeneous network embedding methods Dong et al., 2017; Wang et al., 2019) jointly model nodes and edges. Attributed network embedding approaches (Gao and Huang, 2018; Yang et al., 2015) on the other hand put focus on preserving node attributes when encoding the networks. Event Coreference Resolution. Most existing methods Bejan and Harabagiu, 2010a; Zhang et al., 2015; Peng et al., 2016; Lai et al., 2021) only exploit local features including trigger, argument and sentence context matching. To prevent error propagation, some models perform joint inference between event extraction and event coreference resolution (Lee et al., 2012; Araki and Mitamura, 2015; Lu and Ng, 2017) or incorporate document topic structures (Choubey and . To the best of our knowledge our method is the first to leverage the entire event networks to compute similarity features.",
"cite_spans": [
{
"start": 98,
"end": 111,
"text": "(Levin, 1993;",
"ref_id": "BIBREF34"
},
{
"start": 112,
"end": 127,
"text": "Goldberg, 1995;",
"ref_id": "BIBREF19"
},
{
"start": 128,
"end": 151,
"text": "Ritter and Rosen, 2000;",
"ref_id": "BIBREF48"
},
{
"start": 152,
"end": 175,
"text": "Huang and Ahrens, 2000;",
"ref_id": "BIBREF24"
},
{
"start": 176,
"end": 188,
"text": "Iwata, 2005;",
"ref_id": "BIBREF26"
},
{
"start": 189,
"end": 204,
"text": "Goldberg, 2006;",
"ref_id": "BIBREF20"
},
{
"start": 205,
"end": 224,
"text": "Xu and Huang, 2013;",
"ref_id": "BIBREF57"
},
{
"start": 225,
"end": 243,
"text": "Bies et al., 2016;",
"ref_id": "BIBREF4"
},
{
"start": 244,
"end": 260,
"text": "Do et al., 2017;",
"ref_id": "BIBREF15"
},
{
"start": 261,
"end": 279,
"text": "Kalm et al., 2019)",
"ref_id": "BIBREF29"
},
{
"start": 303,
"end": 322,
"text": "(Ding et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 350,
"end": 369,
"text": "(Tong et al., 2008)",
"ref_id": "BIBREF52"
},
{
"start": 372,
"end": 392,
"text": "(Weber et al., 2018)",
"ref_id": "BIBREF56"
},
{
"start": 577,
"end": 589,
"text": "(Modi, 2016)",
"ref_id": "BIBREF42"
},
{
"start": 833,
"end": 852,
"text": "(Hong et al., 2018)",
"ref_id": "BIBREF22"
},
{
"start": 1150,
"end": 1168,
"text": "Dong et al., 2017;",
"ref_id": "BIBREF16"
},
{
"start": 1169,
"end": 1187,
"text": "Wang et al., 2019)",
"ref_id": "BIBREF55"
},
{
"start": 1259,
"end": 1280,
"text": "(Gao and Huang, 2018;",
"ref_id": "BIBREF18"
},
{
"start": 1281,
"end": 1299,
"text": "Yang et al., 2015)",
"ref_id": "BIBREF58"
},
{
"start": 1438,
"end": 1465,
"text": "Bejan and Harabagiu, 2010a;",
"ref_id": "BIBREF2"
},
{
"start": 1466,
"end": 1485,
"text": "Zhang et al., 2015;",
"ref_id": "BIBREF59"
},
{
"start": 1486,
"end": 1504,
"text": "Peng et al., 2016;",
"ref_id": "BIBREF45"
},
{
"start": 1505,
"end": 1522,
"text": "Lai et al., 2021)",
"ref_id": "BIBREF31"
},
{
"start": 1734,
"end": 1752,
"text": "(Lee et al., 2012;",
"ref_id": "BIBREF32"
},
{
"start": 1753,
"end": 1778,
"text": "Araki and Mitamura, 2015;",
"ref_id": "BIBREF0"
},
{
"start": 1779,
"end": 1795,
"text": "Lu and Ng, 2017)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "We propose a novel continuous event representation called Event Network Embedding to capture the connections among events in a global context. This new representation provides a powerful framework for downstream applications such as event coreference resolution and event ordering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7"
},
{
"text": "In the future we aim to improve the ability to capture the long-distance relations in the graph encode by introducing event-event relation in the form of multiple meta-paths. The relations, or the event evolution patterns, extracted from large-scale corpora can guide event-related reasoning and act as shortcut linking event nodes. Another direction is to explore a unified automatic evaluation benchmark for event representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7"
},
{
"text": "Our code is released at https://github.com/ pkuzengqi/GENE",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.ldc.upenn.edu/collaborations/ past-projects/ace",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research is based upon work supported in part by U.S. DARPA KAIROS Program No. FA8750-19-2-1004, U.S. DARPA AIDA Program No. FA8750-18-2-0014, Air Force No. FA8650-17-C-7715. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Joint event trigger identification and event coreference resolution with structured perceptron",
"authors": [
{
"first": "Jun",
"middle": [],
"last": "Araki",
"suffix": ""
},
{
"first": "Teruko",
"middle": [],
"last": "Mitamura",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2074--2080",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1247"
]
},
"num": null,
"urls": [],
"raw_text": "Jun Araki and Teruko Mitamura. 2015. Joint event trig- ger identification and event coreference resolution with structured perceptron. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2074-2080, Lisbon, Portugal. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Algorithms for scoring coreference chains",
"authors": [
{
"first": "Amit",
"middle": [],
"last": "Bagga",
"suffix": ""
},
{
"first": "Breck",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 1998,
"venue": "The first international conference on language resources and evaluation workshop on linguistics coreference",
"volume": "1",
"issue": "",
"pages": "563--566",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In The first international conference on language resources and evaluation workshop on linguistics coreference, volume 1, pages 563-566. Citeseer.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Unsupervised event coreference resolution with rich linguistic features",
"authors": [
{
"first": "Cosmin",
"middle": [],
"last": "Bejan",
"suffix": ""
},
{
"first": "Sanda",
"middle": [],
"last": "Harabagiu",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1412--1422",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cosmin Bejan and Sanda Harabagiu. 2010a. Unsu- pervised event coreference resolution with rich lin- guistic features. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1412-1422, Uppsala, Sweden. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Unsupervised event coreference resolution with rich linguistic features",
"authors": [
{
"first": "Adrian",
"middle": [],
"last": "Cosmin",
"suffix": ""
},
{
"first": "Sanda",
"middle": [
"M"
],
"last": "Bejan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Harabagiu",
"suffix": ""
}
],
"year": 2010,
"venue": "ACL 2010, Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1412--1422",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cosmin Adrian Bejan and Sanda M. Harabagiu. 2010b. Unsupervised event coreference resolution with rich linguistic features. In ACL 2010, Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, July 11-16, 2010, Uppsala, Sweden, pages 1412-1422. The Associa- tion for Computer Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A comparison of event representations in DEFT",
"authors": [
{
"first": "Ann",
"middle": [],
"last": "Bies",
"suffix": ""
},
{
"first": "Zhiyi",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Getman",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Ellis",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Mott",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [],
"last": "Strassel",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Teruko",
"middle": [],
"last": "Mitamura",
"suffix": ""
},
{
"first": "Marjorie",
"middle": [],
"last": "Freedman",
"suffix": ""
},
{
"first": "Ji",
"middle": [],
"last": "Heng",
"suffix": ""
},
{
"first": "Tim O'",
"middle": [],
"last": "Gorman",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Fourth Workshop on Events",
"volume": "",
"issue": "",
"pages": "27--36",
"other_ids": {
"DOI": [
"10.18653/v1/W16-1004"
]
},
"num": null,
"urls": [],
"raw_text": "Ann Bies, Zhiyi Song, Jeremy Getman, Joe Ellis, Justin Mott, Stephanie Strassel, Martha Palmer, Teruko Mitamura, Marjorie Freedman, Heng Ji, and Tim O'Gorman. 2016. A comparison of event repre- sentations in DEFT. In Proceedings of the Fourth Workshop on Events, pages 27-36, San Diego, Cali- fornia. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "An annotation framework for dense event ordering",
"authors": [
{
"first": "Taylor",
"middle": [],
"last": "Cassidy",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Mcdowell",
"suffix": ""
},
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "501--506",
"other_ids": {
"DOI": [
"10.3115/v1/P14-2082"
]
},
"num": null,
"urls": [],
"raw_text": "Taylor Cassidy, Bill McDowell, Nathanael Chambers, and Steven Bethard. 2014. An annotation frame- work for dense event ordering. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 501-506, Baltimore, Maryland. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Streamlining crossdocument coreference resolution: Evaluation and modeling",
"authors": [
{
"first": "Arie",
"middle": [],
"last": "Cattan",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Eirew",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Stanovsky",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arie Cattan, Alon Eirew, Gabriel Stanovsky, Mandar Joshi, and Ido Dagan. 2020. Streamlining cross- document coreference resolution: Evaluation and modeling. CoRR, abs/2009.11032.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Heterogeneous network embedding via deep architectures",
"authors": [
{
"first": "Shiyu",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Jiliang",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Guo-Jun",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Charu",
"middle": [
"C"
],
"last": "Aggarwal",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"S"
],
"last": "Huang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "119--128",
"other_ids": {
"DOI": [
"10.1145/2783258.2783296"
]
},
"num": null,
"urls": [],
"raw_text": "Shiyu Chang, Wei Han, Jiliang Tang, Guo-Jun Qi, Charu C. Aggarwal, and Thomas S. Huang. 2015. Heterogeneous network embedding via deep archi- tectures. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Sydney, NSW, Australia, August 10-13, 2015, pages 119-128.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Joint inference over a lightly supervised information extraction pipeline: Towards event coreference resolution for resource-scarce languages",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI'16",
"volume": "",
"issue": "",
"pages": "2913--2920",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen Chen and Vincent Ng. 2016. Joint infer- ence over a lightly supervised information extrac- tion pipeline: Towards event coreference resolu- tion for resource-scarce languages. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI'16, page 2913-2920. AAAI Press.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Graph-based event coreference resolution",
"authors": [
{
"first": "Zheng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Workshop on Graph-based Methods for Natural Language Processing (TextGraphs-4)",
"volume": "",
"issue": "",
"pages": "54--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zheng Chen and Heng Ji. 2009. Graph-based event coreference resolution. In Proceedings of the 2009 Workshop on Graph-based Methods for Natural Language Processing (TextGraphs-4), pages 54-57, Suntec, Singapore. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A pairwise event coreference model, feature impact and evaluation for event coreference resolution",
"authors": [
{
"first": "Zheng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Ji",
"middle": [],
"last": "Heng",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Haralick",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Workshop on Events in Emerging Text Types",
"volume": "",
"issue": "",
"pages": "17--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zheng Chen, Heng Ji, and Robert Haralick. 2009. A pairwise event coreference model, feature impact and evaluation for event coreference resolution. In Proceedings of the Workshop on Events in Emerging Text Types, pages 17-22, Borovets, Bulgaria. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Improving event coreference resolution by modeling correlations between event coreference chains and document topic structures",
"authors": [
{
"first": "Prafulla",
"middle": [],
"last": "Kumar Choubey",
"suffix": ""
},
{
"first": "Ruihong",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "485--495",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1045"
]
},
"num": null,
"urls": [],
"raw_text": "Prafulla Kumar Choubey and Ruihong Huang. 2018. Improving event coreference resolution by model- ing correlations between event coreference chains and document topic structures. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 485-495, Melbourne, Australia. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "German",
"middle": [],
"last": "Kruszewski",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2126--2136",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1198"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, German Kruszewski, Guillaume Lam- ple, Lo\u00efc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Prob- ing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2126-2136, Melbourne, Australia. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Event representation learning enhanced with external commonsense knowledge",
"authors": [
{
"first": "Xiao",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Kuo",
"middle": [],
"last": "Liao",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Zhongyang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Junwen",
"middle": [],
"last": "Duan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "4894--4903",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1495"
]
},
"num": null,
"urls": [],
"raw_text": "Xiao Ding, Kuo Liao, Ting Liu, Zhongyang Li, and Junwen Duan. 2019. Event representation learn- ing enhanced with external commonsense knowl- edge. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4894-4903, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Improving implicit semantic role labeling by predicting semantic frame arguments",
"authors": [
{
"first": "Thi",
"middle": [],
"last": "Quynh Ngoc",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Do",
"suffix": ""
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "90--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quynh Ngoc Thi Do, Steven Bethard, and Marie- Francine Moens. 2017. Improving implicit seman- tic role labeling by predicting semantic frame argu- ments. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 90-99, Taipei, Tai- wan. Asian Federation of Natural Language Process- ing.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "metapath2vec: Scalable representation learning for heterogeneous networks",
"authors": [
{
"first": "Yuxiao",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Nitesh",
"suffix": ""
},
{
"first": "Ananthram",
"middle": [],
"last": "Chawla",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Swami",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "135--144",
"other_ids": {
"DOI": [
"10.1145/3097983.3098036"
]
},
"num": null,
"urls": [],
"raw_text": "Yuxiao Dong, Nitesh V. Chawla, and Ananthram Swami. 2017. metapath2vec: Scalable rep- resentation learning for heterogeneous networks. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada, August 13 - 17, 2017, pages 135-144.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "UI CCG TAC-KBP2017 submissions: Entity discovery and linking, and event nugget detection and coreference",
"authors": [
{
"first": "Chase",
"middle": [],
"last": "Duncan",
"suffix": ""
},
{
"first": "Liang-Wei",
"middle": [],
"last": "Chan",
"suffix": ""
},
{
"first": "Haoruo",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Shyam",
"middle": [],
"last": "Upadhyay",
"suffix": ""
},
{
"first": "Nitish",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Chen-Tse",
"middle": [],
"last": "Tsai",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Sammons",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Text Analysis Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chase Duncan, Liang-Wei Chan, Haoruo Peng, Hao Wu, Shyam Upadhyay, Nitish Gupta, Chen-Tse Tsai, Mark Sammons, and Dan Roth. 2017. UI CCG TAC-KBP2017 submissions: Entity discov- ery and linking, and event nugget detection and co- reference. In Proceedings of the 2017 Text Analysis Conference, TAC 2017, Gaithersburg, Maryland, USA, November 13-14, 2017. NIST.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Deep attributed network embedding",
"authors": [
{
"first": "Hongchang",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018",
"volume": "",
"issue": "",
"pages": "3364--3370",
"other_ids": {
"DOI": [
"10.24963/ijcai.2018/467"
]
},
"num": null,
"urls": [],
"raw_text": "Hongchang Gao and Heng Huang. 2018. Deep at- tributed network embedding. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, pages 3364-3370.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Constructions: A Construction Grammar Approach to Argument Structure",
"authors": [
{
"first": "Adele",
"middle": [
"E"
],
"last": "Goldberg",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adele E. Goldberg. 1995. Constructions: A Construction Grammar Approach to Argument Structure. Chicago: University of Chicago Press.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Constructions at Work: the Nature of Generalization in Language",
"authors": [
{
"first": "Adele",
"middle": [
"E"
],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adele E. Goldberg. 2006. Constructions at Work: the Nature of Generalization in Language. Oxford: Ox- ford University Press.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A structural probe for finding syntax in word representations",
"authors": [
{
"first": "John",
"middle": [],
"last": "Hewitt",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4129--4138",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1419"
]
},
"num": null,
"urls": [],
"raw_text": "John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word repre- sentations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129-4138, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Learning distributed event representations with a multi-task approach",
"authors": [
{
"first": "Xudong",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "Asad",
"middle": [],
"last": "Sayeed",
"suffix": ""
},
{
"first": "Vera",
"middle": [],
"last": "Demberg",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics",
"volume": "",
"issue": "",
"pages": "11--21",
"other_ids": {
"DOI": [
"10.18653/v1/S18-2002"
]
},
"num": null,
"urls": [],
"raw_text": "Xudong Hong, Asad Sayeed, and Vera Demberg. 2018. Learning distributed event representations with a multi-task approach. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 11-21, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Building a cross-document event-event relation corpus",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "Tongtao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "O'",
"middle": [],
"last": "Tim",
"suffix": ""
},
{
"first": "Sharone",
"middle": [],
"last": "Gorman",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Horowit-Hendler",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th Linguistic Annotation Workshop held in conjunction with ACL 2016",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {
"DOI": [
"10.18653/v1/W16-1701"
]
},
"num": null,
"urls": [],
"raw_text": "Yu Hong, Tongtao Zhang, Tim O'Gorman, Sharone Horowit-Hendler, Heng Ji, and Martha Palmer. 2016. Building a cross-document event-event rela- tion corpus. In Proceedings of the 10th Linguistic Annotation Workshop held in conjunction with ACL 2016 (LAW-X 2016), pages 1-6, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "The module-attribute representation of verbal semantics",
"authors": [
{
"first": "Chu-Ren",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Ahrens",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 14th Pacific Asia Conference on Language, Information and Computation",
"volume": "",
"issue": "",
"pages": "109--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chu-Ren Huang and Kathleen Ahrens. 2000. The module-attribute representation of verbal semantics. In Proceedings of the 14th Pacific Asia Conference on Language, Information and Computation, pages 109-120, Waseda University International Confer- ence Center, Tokyo, Japan. PACLIC 14 Organizing Committee.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Challenges of adding causation to richer event descriptions",
"authors": [
{
"first": "Rei",
"middle": [],
"last": "Ikuta",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Styler",
"suffix": ""
},
{
"first": "Mariah",
"middle": [],
"last": "Hamang",
"suffix": ""
},
{
"first": "O'",
"middle": [],
"last": "Tim",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Gorman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Second Workshop on EVENTS: Definition, Detection, Coreference, and Representation",
"volume": "",
"issue": "",
"pages": "12--20",
"other_ids": {
"DOI": [
"10.3115/v1/W14-2903"
]
},
"num": null,
"urls": [],
"raw_text": "Rei Ikuta, Will Styler, Mariah Hamang, Tim O'Gorman, and Martha Palmer. 2014. Chal- lenges of adding causation to richer event descrip- tions. In Proceedings of the Second Workshop on EVENTS: Definition, Detection, Coreference, and Representation, pages 12-20, Baltimore, Maryland, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Locative alternation and two levels of verb meaning",
"authors": [
{
"first": "Seizi",
"middle": [],
"last": "Iwata",
"suffix": ""
}
],
"year": 2005,
"venue": "Cognitive Linguistics",
"volume": "16",
"issue": "2",
"pages": "355--407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seizi Iwata. 2005. Locative alternation and two levels of verb meaning. Cognitive Linguistics, 16(2):355-407.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "SpanBERT: Improving pre-training by representing and predicting spans",
"authors": [
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "64--77",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00300"
]
},
"num": null,
"urls": [],
"raw_text": "Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving pre-training by represent- ing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64- 77.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "BERT for coreference resolution: Baselines and analysis",
"authors": [
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Weld",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "5803--5808",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1588"
]
},
"num": null,
"urls": [],
"raw_text": "Mandar Joshi, Omer Levy, Luke Zettlemoyer, and Daniel Weld. 2019. BERT for coreference resolu- tion: Baselines and analysis. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5803-5808, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Event structure representation: Between verbs and argument structure constructions",
"authors": [
{
"first": "Pavlina",
"middle": [],
"last": "Kalm",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Regan",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Croft",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the First International Workshop on Designing Meaning Representations",
"volume": "",
"issue": "",
"pages": "100--109",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3311"
]
},
"num": null,
"urls": [],
"raw_text": "Pavlina Kalm, Michael Regan, and William Croft. 2019. Event structure representation: Between verbs and argument structure constructions. In Proceedings of the First International Workshop on Designing Meaning Representations, pages 100- 109, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Semisupervised classification with graph convolutional networks",
"authors": [
{
"first": "N",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Kipf",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2017,
"venue": "5th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas N. Kipf and Max Welling. 2017. Semi- supervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A context-dependent gated module for incorporating symbolic semantics into event coreference resolution",
"authors": [
{
"first": "Tuan",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Trung",
"middle": [],
"last": "Bui",
"suffix": ""
},
{
"first": "Franck",
"middle": [],
"last": "Quan Hung Tran",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Dernoncourt",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2021,
"venue": "Proc. The 2021 Conference of the North American Chapter of the Association for Computational Linguistics -Human Language Technologies (NAACL-HLT2021)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tuan Lai, Heng Ji, Trung Bui, Quan Hung Tran, Franck Dernoncourt, and Walter Chang. 2021. A context-dependent gated module for incorpo- rating symbolic semantics into event coreference resolution. In Proc. The 2021 Conference of the North American Chapter of the Association for Computational Linguistics -Human Language Technologies (NAACL-HLT2021).",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Joint entity and event coreference resolution across documents",
"authors": [
{
"first": "Heeyoung",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Marta",
"middle": [],
"last": "Recasens",
"suffix": ""
},
{
"first": "Angel",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "489--500",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heeyoung Lee, Marta Recasens, Angel Chang, Mi- hai Surdeanu, and Dan Jurafsky. 2012. Joint entity and event coreference resolution across documents. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 489-500, Jeju Island, Korea. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "End-to-end neural coreference resolution",
"authors": [
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "188--197",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1018"
]
},
"num": null,
"urls": [],
"raw_text": "Kenton Lee, Luheng He, Mike Lewis, and Luke Zettle- moyer. 2017. End-to-end neural coreference reso- lution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 188-197, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "English Verb Classes and Alternations: a Preliminary Investigation",
"authors": [
{
"first": "Beth",
"middle": [],
"last": "Levin",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beth Levin. 1993. English Verb Classes and Alternations: a Preliminary Investigation. Chicago: University of Chicago Press.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Connecting the dots: Event graph schema induction with path language modeling",
"authors": [
{
"first": "Manling",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "Clare",
"middle": [],
"last": "Voss",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manling Li, Qi Zeng, Ying Lin, Kyunghyun Cho, Heng Ji, Jonathan May, Nathanael Chambers, and Clare Voss. 2020. Connecting the dots: Event graph schema induction with path language modeling. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "A joint neural model for information extraction with global features",
"authors": [
{
"first": "Ying",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Lingfei",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7999--8009",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.713"
]
},
"num": null,
"urls": [],
"raw_text": "Ying Lin, Heng Ji, Fei Huang, and Lingfei Wu. 2020. A joint neural model for information ex- traction with global features. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7999-8009, On- line. Association for Computational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Supervised withindocument event coreference using information propagation",
"authors": [
{
"first": "Zhengzhong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Araki",
"suffix": ""
},
{
"first": "Eduard",
"middle": [
"H"
],
"last": "Hovy",
"suffix": ""
},
{
"first": "Teruko",
"middle": [],
"last": "Mitamura",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "4539--4544",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhengzhong Liu, Jun Araki, Eduard H. Hovy, and Teruko Mitamura. 2014. Supervised within- document event coreference using information prop- agation. In Proceedings of the Ninth International Conference on Language Resources and Evaluation, LREC 2014, Reykjavik, Iceland, May 26-31, 2014, pages 4539-4544. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Event coreference resolution with multi-pass sieves",
"authors": [
{
"first": "Jing",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "3996--4003",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jing Lu and Vincent Ng. 2016. Event coreference resolution with multi-pass sieves. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 3996- 4003, Portoro\u017e, Slovenia. European Language Re- sources Association (ELRA).",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Joint learning for event coreference resolution",
"authors": [
{
"first": "Jing",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "90--101",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1009"
]
},
"num": null,
"urls": [],
"raw_text": "Jing Lu and Vincent Ng. 2017. Joint learning for event coreference resolution. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 90-101, Vancouver, Canada. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "On coreference resolution performance metrics",
"authors": [
{
"first": "Xiaoqiang",
"middle": [],
"last": "Luo",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "25--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaoqiang Luo. 2005. On coreference resolu- tion performance metrics. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 25-32.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "1st International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word rep- resentations in vector space. In 1st International Conference on Learning Representations, ICLR 2013, Scottsdale, Arizona, USA, May 2-4, 2013, Workshop Track Proceedings.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Event embeddings for semantic script modeling",
"authors": [
{
"first": "Ashutosh",
"middle": [],
"last": "Modi",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "75--83",
"other_ids": {
"DOI": [
"10.18653/v1/K16-1008"
]
},
"num": null,
"urls": [],
"raw_text": "Ashutosh Modi. 2016. Event embeddings for seman- tic script modeling. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 75-83, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Richer event description: Integrating event coreference with temporal, causal and bridging annotation",
"authors": [
{
"first": "Kristin",
"middle": [],
"last": "Tim O'gorman",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Wright-Bettner",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2nd Workshop on Computing News Storylines (CNS 2016)",
"volume": "",
"issue": "",
"pages": "47--56",
"other_ids": {
"DOI": [
"10.18653/v1/W16-5706"
]
},
"num": null,
"urls": [],
"raw_text": "Tim O'Gorman, Kristin Wright-Bettner, and Martha Palmer. 2016. Richer event description: Integrating event coreference with temporal, causal and bridg- ing annotation. In Proceedings of the 2nd Workshop on Computing News Storylines (CNS 2016), pages 47-56, Austin, Texas. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Crosslingual name tagging and linking for 282 languages",
"authors": [
{
"first": "Xiaoman",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Boliang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Nothman",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1946--1958",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1178"
]
},
"num": null,
"urls": [],
"raw_text": "Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Nothman, Kevin Knight, and Heng Ji. 2017. Cross- lingual name tagging and linking for 282 languages. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1946-1958, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Event detection and co-reference with minimal supervision",
"authors": [
{
"first": "Haoruo",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Yangqiu",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "392--402",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1038"
]
},
"num": null,
"urls": [],
"raw_text": "Haoruo Peng, Yangqiu Song, and Dan Roth. 2016. Event detection and co-reference with minimal supervision. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 392-402, Austin, Texas. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "The TIMEBANK corpus. Corpus Linguistics",
"authors": [
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Hanks",
"suffix": ""
},
{
"first": "Roser",
"middle": [],
"last": "Saur\u00ed",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Gaizauskas",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Setzer",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [],
"last": "Radev",
"suffix": ""
},
{
"first": "Beth",
"middle": [],
"last": "Sundheim",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Day",
"suffix": ""
},
{
"first": "Lisa",
"middle": [],
"last": "Ferro",
"suffix": ""
},
{
"first": "Marcia",
"middle": [],
"last": "Lazo",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Pustejovsky, Patrick Hanks, Roser Saur\u00ed, An- drew See, Robert Gaizauskas, Andrea Setzer, Dragomir Radev, Beth Sundheim, David Day, Lisa Ferro, and Marcia Lazo. 2003. The TIMEBANK corpus. Corpus Linguistics, 2003:40.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Blanc: Implementing the rand index for coreference evaluation",
"authors": [
{
"first": "Marta",
"middle": [],
"last": "Recasens",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2011,
"venue": "Natural Language Engineering",
"volume": "17",
"issue": "4",
"pages": "485--510",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marta Recasens and Eduard Hovy. 2011. Blanc: Imple- menting the rand index for coreference evaluation. Natural Language Engineering, 17(4):485-510.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Event structure and ergativity",
"authors": [
{
"first": "Elizabeth",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Sara",
"middle": [
"Thomas"
],
"last": "Rosen",
"suffix": ""
}
],
"year": 2000,
"venue": "Events as Grammatical Objects",
"volume": "",
"issue": "",
"pages": "187--238",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elizabeth Ritter and Sara Thomas Rosen. 2000. Event structure and ergativity. In Events as Grammatical Objects, pages 187-238.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Illinois CCG TAC 2015 event nugget, entity discovery and linking, and slot filler validation systems",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Sammons",
"suffix": ""
},
{
"first": "Haoruo",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Yangqiu",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Shyam",
"middle": [],
"last": "Upadhyay",
"suffix": ""
},
{
"first": "Chen-Tse",
"middle": [],
"last": "Tsai",
"suffix": ""
},
{
"first": "Pavankumar",
"middle": [],
"last": "Reddy",
"suffix": ""
},
{
"first": "Subhro",
"middle": [],
"last": "Roy",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Text Analysis Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Sammons, Haoruo Peng, Yangqiu Song, Shyam Upadhyay, Chen-Tse Tsai, Pavankumar Reddy, Subhro Roy, and Dan Roth. 2015. Illinois CCG TAC 2015 event nugget, entity discovery and linking, and slot filler validation systems. In Proceedings of the 2015 Text Analysis Conference, TAC 2015, Gaithersburg, Maryland, USA, November 16-17, 2015, 2015. NIST.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Modeling relational data with graph convolutional networks",
"authors": [
{
"first": "",
"middle": [],
"last": "Michael Sejr",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"N"
],
"last": "Schlichtkrull",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Kipf",
"suffix": ""
},
{
"first": "Rianne",
"middle": [],
"last": "Bloem",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Van Den",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Berg",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Titov",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2018,
"venue": "The Semantic Web -15th International Conference",
"volume": "",
"issue": "",
"pages": "593--607",
"other_ids": {
"DOI": [
"10.1007/978-3-319-93417-4_38"
]
},
"num": null,
"urls": [],
"raw_text": "Michael Sejr Schlichtkrull, Thomas N. Kipf, Pe- ter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolutional networks. In The Semantic Web -15th International Conference, ESWC 2018, Heraklion, Crete, Greece, June 3-7, 2018, Proceedings, pages 593-607.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Pathsim: Meta path-based top-k similarity search in heterogeneous information networks",
"authors": [
{
"first": "Yizhou",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Xifeng",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Philip",
"middle": [
"S"
],
"last": "Yu",
"suffix": ""
},
{
"first": "Tianyi",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. VLDB Endow",
"volume": "4",
"issue": "",
"pages": "992--1003",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yizhou Sun, Jiawei Han, Xifeng Yan, Philip S. Yu, and Tianyi Wu. 2011. Pathsim: Meta path-based top-k similarity search in heterogeneous information net- works. Proc. VLDB Endow., 4(11):992-1003.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Fast mining of complex time-stamped events",
"authors": [
{
"first": "Hanghang",
"middle": [],
"last": "Tong",
"suffix": ""
},
{
"first": "Yasushi",
"middle": [],
"last": "Sakurai",
"suffix": ""
},
{
"first": "Tina",
"middle": [],
"last": "Eliassi-Rad",
"suffix": ""
},
{
"first": "Christos",
"middle": [],
"last": "Faloutsos",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 17th ACM Conference on Information and Knowledge Management, CIKM 2008",
"volume": "",
"issue": "",
"pages": "759--768",
"other_ids": {
"DOI": [
"10.1145/1458082.1458184"
]
},
"num": null,
"urls": [],
"raw_text": "Hanghang Tong, Yasushi Sakurai, Tina Eliassi-Rad, and Christos Faloutsos. 2008. Fast mining of complex time-stamped events. In Proceedings of the 17th ACM Conference on Information and Knowledge Management, CIKM 2008, Napa Valley, California, USA, October 26-30, 2008, pages 759- 768.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Deep graph infomax",
"authors": [
{
"first": "Petar",
"middle": [],
"last": "Velickovic",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Fedus",
"suffix": ""
},
{
"first": "William",
"middle": [
"L"
],
"last": "Hamilton",
"suffix": ""
},
{
"first": "Pietro",
"middle": [],
"last": "Li\u00f2",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "R",
"middle": [
"Devon"
],
"last": "Hjelm",
"suffix": ""
}
],
"year": 2019,
"venue": "7th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Petar Velickovic, William Fedus, William L. Hamilton, Pietro Li\u00f2, Yoshua Bengio, and R. Devon Hjelm. 2019. Deep graph infomax. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "A model-theoretic coreference scoring scheme",
"authors": [
{
"first": "Marc",
"middle": [
"B"
],
"last": "Vilain",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Burger",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Aberdeen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Connolly",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Hirschman",
"suffix": ""
}
],
"year": 1995,
"venue": "MUC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marc B. Vilain, J. Burger, J. Aberdeen, D. Connolly, and L. Hirschman. 1995. A model-theoretic corefer- ence scoring scheme. In MUC.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Heterogeneous graph attention network",
"authors": [
{
"first": "Xiao",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Houye",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Chuan",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Bai",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yanfang",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Philip",
"middle": [
"S"
],
"last": "Yu",
"suffix": ""
}
],
"year": 2019,
"venue": "The World Wide Web Conference",
"volume": "",
"issue": "",
"pages": "2022--2032",
"other_ids": {
"DOI": [
"10.1145/3308558.3313562"
]
},
"num": null,
"urls": [],
"raw_text": "Xiao Wang, Houye Ji, Chuan Shi, Bai Wang, Yanfang Ye, Peng Cui, and Philip S. Yu. 2019. Heteroge- neous graph attention network. In The World Wide Web Conference, WWW 2019, San Francisco, CA, USA, May 13-17, 2019, pages 2022-2032.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Event representations with tensor-based compositions",
"authors": [
{
"first": "Noah",
"middle": [],
"last": "Weber",
"suffix": ""
},
{
"first": "Niranjan",
"middle": [],
"last": "Balasubramanian",
"suffix": ""
},
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18)",
"volume": "",
"issue": "",
"pages": "4946--4953",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noah Weber, Niranjan Balasubramanian, and Nathanael Chambers. 2018. Event representations with tensor-based compositions. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 4946-4953.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Primitives of events and the semantic representation",
"authors": [
{
"first": "Hongzhi",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Chu-Ren",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 6th International Conference on Generative Approaches to the Lexicon (GL2013)",
"volume": "",
"issue": "",
"pages": "54--61",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongzhi Xu and Chu-Ren Huang. 2013. Primitives of events and the semantic representation. In Proceedings of the 6th International Conference on Generative Approaches to the Lexicon (GL2013), pages 54-61, Pisa, Italy. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Network representation learning with rich text information",
"authors": [
{
"first": "Cheng",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Deli",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Edward",
"middle": [
"Y"
],
"last": "Chang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015",
"volume": "",
"issue": "",
"pages": "2111--2117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cheng Yang, Zhiyuan Liu, Deli Zhao, Maosong Sun, and Edward Y. Chang. 2015. Network representation learning with rich text information. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015, Buenos Aires, Argentina, July 25-31, 2015, pages 2111-2117.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Cross-document event coreference resolution based on cross-media features",
"authors": [
{
"first": "Tongtao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hongzhi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Shih-Fu",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "201--206",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1020"
]
},
"num": null,
"urls": [],
"raw_text": "Tongtao Zhang, Hongzhi Li, Heng Ji, and Shih- Fu Chang. 2015. Cross-document event corefer- ence resolution based on cross-media features. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 201-206, Lisbon, Portugal. Association for Compu- tational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF3": {
"content": "<table/>",
"text": "Statistics for the enhanced ACE 2005 dataset. Wiki and Narrative are enriched event-event relations.",
"num": null,
"type_str": "table",
"html": null
},
"TABREF4": {
"content": "<table><tr><td>Model</td><td colspan=\"3\">Node Typing Classification MUC Argument</td><td>B 3</td><td colspan=\"4\">Event Coreference CEAFe BLANC AVG</td></tr><tr><td>Event Mention</td><td>80.58</td><td>71.57</td><td>61.81</td><td colspan=\"2\">87.79</td><td>84.24</td><td>74.97</td><td>77.20</td></tr><tr><td>Event Tuple</td><td>68.40</td><td>72.13</td><td>63.10</td><td colspan=\"2\">89.06</td><td>85.23</td><td>77.6</td><td>78.75</td></tr><tr><td>Skip-gram(Mikolov et al., 2013)</td><td>75.55</td><td>93.42</td><td>59.81</td><td colspan=\"2\">88.09</td><td>83.40</td><td>77.30</td><td>77.15</td></tr><tr><td>Deep Graph Infomax(Velickovic et al., 2019)</td><td>74.96</td><td>95.32</td><td>59.36</td><td colspan=\"2\">87.05</td><td>82.19</td><td>73.41</td><td>75.50</td></tr><tr><td>HDP(Bejan and Harabagiu, 2010b)</td><td>-</td><td>-</td><td>-</td><td colspan=\"2\">83.8</td><td>76.7</td><td>-</td><td>-</td></tr><tr><td>(Liu et al., 2014)</td><td>-</td><td>-</td><td>50.98</td><td colspan=\"2\">89.38</td><td>86.47</td><td>70.43</td><td>74.32</td></tr><tr><td>SpanBERT(Joshi et al., 2020)</td><td>-</td><td>-</td><td>65.72</td><td colspan=\"2\">89.48</td><td>85.35</td><td>79.82</td><td>80.09</td></tr><tr><td>GENE</td><td>81.26</td><td>95.76</td><td>68.99</td><td colspan=\"2\">89.53</td><td>85.86</td><td>80.38</td><td>81.19</td></tr><tr><td>\u2022 w/o LT</td><td>78.78</td><td>79.04</td><td>70.63</td><td colspan=\"2\">89.03</td><td>84.88</td><td>81.13</td><td>81.42</td></tr><tr><td>\u2022 w/o LS</td><td>78.02</td><td>95.32</td><td>68.14</td><td colspan=\"2\">89.53</td><td>86.17</td><td>79.90</td><td>80.94</td></tr><tr><td>\u2022 w/ Event-Entity view</td><td>80.82</td><td>92.64</td><td>60.09</td><td colspan=\"2\">87.97</td><td>84.75</td><td>70.67</td><td>75.87</td></tr><tr><td>\u2022 w/ Event-only &amp; Entity-only views</td><td>74.79</td><td>72.02</td><td>60.86</td><td colspan=\"2\">88.46</td><td>85.15</td><td>75.64</td><td>77.53</td></tr><tr><td>\u2022 w/ Complete 
view</td><td>79.42</td><td>90.52</td><td>63.01</td><td colspan=\"2\">88.11</td><td>84.94</td><td>75.50</td><td>77.89</td></tr><tr><td>\u2022 w/ Concatenated integration</td><td>78.45</td><td>93.87</td><td>70.08</td><td colspan=\"2\">89.81</td><td>85.85</td><td>81.08</td><td>81.71</td></tr><tr><td>\u2022 w/ Weighted integration</td><td>74.53</td><td>94.31</td><td>66.36</td><td colspan=\"2\">88.99</td><td>85.81</td><td>76.97</td><td>79.53</td></tr></table>",
"text": "Vivendi Universal officials in the United States were not immediately available for comment on Friday . Under the reported plans , Blackstone Group would buy Vivendi ' s theme park division , including ...",
"num": null,
"type_str": "table",
"html": null
},
"TABREF5": {
"content": "<table/>",
"text": "Results on test set of ACE dataset. Node typing and argument role classification results are reported in micro F1 scores(%). Event Coreference are performed with our embeddings as additional features.",
"num": null,
"type_str": "table",
"html": null
}
}
}
}