{
"paper_id": "N16-1049",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:37:44.603872Z"
},
"title": "Joint Learning Templates and Slots for Event Schema Induction",
"authors": [
{
"first": "Lei",
"middle": [],
"last": "Sha",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Baobao",
"middle": [],
"last": "Chang",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Zhifang",
"middle": [],
"last": "Sui",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Automatic event schema induction (AESI) means to extract meta-event from raw text, in other words, to find out what types (templates) of event may exist in the raw text and what roles (slots) may exist in each event type. In this paper, we propose a joint entity-driven model to learn templates and slots simultaneously based on the constraints of templates and slots in the same sentence. In addition, the entities' semantic information is also considered for the inner connectivity of the entities. We borrow the normalized cut criteria in image segmentation to divide the entities into more accurate template clusters and slot clusters. The experiment shows that our model gains a relatively higher result than previous work.",
"pdf_parse": {
"paper_id": "N16-1049",
"_pdf_hash": "",
"abstract": [
{
"text": "Automatic event schema induction (AESI) means to extract meta-event from raw text, in other words, to find out what types (templates) of event may exist in the raw text and what roles (slots) may exist in each event type. In this paper, we propose a joint entity-driven model to learn templates and slots simultaneously based on the constraints of templates and slots in the same sentence. In addition, the entities' semantic information is also considered for the inner connectivity of the entities. We borrow the normalized cut criteria in image segmentation to divide the entities into more accurate template clusters and slot clusters. The experiment shows that our model gains a relatively higher result than previous work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Event schema is a high-level representation of a bunch of similar events. It is very useful for the traditional information extraction (IE) (Sagayam et al., 2012) task. An example of event schema is shown in Table 1 . Given the bombing schema, we only need to find proper words to fill the slots when extracting a bombing event.",
"cite_spans": [
{
"start": 140,
"end": 162,
"text": "(Sagayam et al., 2012)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 208,
"end": 215,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are two main approaches for AESI task. Both of them use the idea of clustering the potential event arguments to find the event schema. One of them is probabilistic graphical model (Chambers, 2013; Cheung, 2013) . By incorporating templates and slots as latent topics, probabilistic graphical models learns those templates and slots that best explains the text. However, the graphical models Bombing Template Perpetrator: person Victim:",
"cite_spans": [
{
"start": 186,
"end": 202,
"text": "(Chambers, 2013;",
"ref_id": "BIBREF9"
},
{
"start": 203,
"end": 216,
"text": "Cheung, 2013)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "person Target: public Instrument: bomb Table 1 : The event schema of bombing event in MUC-4, it has a bombing template and four main slots considers the entities independently and do not take the interrelationship between entities into account. Another method relies on ad-hoc clustering algorithms (Filatova et al., 2006; Sekine, 2006; Chambers and Jurafsky, 2011) . (Chambers and Jurafsky, 2011 ) is a pipelined approach. In the first step, it uses pointwise mutual information(PMI) between any two clauses in the same document to learn events, and then learns syntactic patterns as fillers. However, the pipelined approach suffers from the error propagation problem, which means the errors in the template clustering can lead to more errors in the slot clustering.",
"cite_spans": [
{
"start": 299,
"end": 322,
"text": "(Filatova et al., 2006;",
"ref_id": "BIBREF15"
},
{
"start": 323,
"end": 336,
"text": "Sekine, 2006;",
"ref_id": "BIBREF30"
},
{
"start": 337,
"end": 365,
"text": "Chambers and Jurafsky, 2011)",
"ref_id": "BIBREF8"
},
{
"start": 368,
"end": 396,
"text": "(Chambers and Jurafsky, 2011",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 39,
"end": 46,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper proposes an entity-driven model which jointly learns templates and slots for event schema induction. The main contribution of this paper are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 To better model the inner connectivity between entities, we borrow the normalized cut in image segmentation as the clustering criteria.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We use constraints between templates and between slots in one sentence to improve AESI result. Our ultimate goal is to assign two labels, a slot variable s and a template variable t, to each entity. After that, we can summarize all of them to get event schemas.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3 Automatic Event Schema Induction",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence",
"sec_num": null
},
{
"text": "We focus on two types of inner connectivity: (1) the likelihood of two entities to belong to the same template; (2) the likelihood of two entities to belong to the same slot;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inner Connectivity Between Entities",
"sec_num": "3.1"
},
{
"text": "It is easy to understand that entities occurred near each other are more likely to belong to the same template. Therefore, (Chambers and Jurafsky, 2011) uses PMI to measure the correlation of two words in the same document, but it cannot put two words from different documents together. In the Bayesian model of (Chambers, 2013) , p(predicate) is the key factor to decide the template, but it ignores the fact that entities occurring nearby should belong to the same template. In this paper, we try to put two measures together. That is, if two entities occurred nearby, they can belong to the same template; if they have similar meaning, they can also belong to the same template. We use PMI to measure the distance similarity and use word vector (Mikolov et al., 2013) to calculate the semantic similarity.",
"cite_spans": [
{
"start": 123,
"end": 152,
"text": "(Chambers and Jurafsky, 2011)",
"ref_id": "BIBREF8"
},
{
"start": 312,
"end": 328,
"text": "(Chambers, 2013)",
"ref_id": "BIBREF9"
},
{
"start": 748,
"end": 770,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Template Level Connectivity",
"sec_num": "3.1.1"
},
{
"text": "A word vector can well represent the meaning of a word. So we concatenate the word vector of the j-th entity's head word and its predicate, denoted as vec hp (i). We use the cosine distance cos hp (i, j) to measure the difference of two vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template Level Connectivity",
"sec_num": "3.1.1"
},
{
"text": "Then we can get the template level connectivity formula as shown in Eq 1. The P M I(i, j) is calculated by the head words of entity mention i and j.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template Level Connectivity",
"sec_num": "3.1.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "W_T(i, j) = PMI(i, j) + cos_hp(i, j)",
"eq_num": "(1)"
}
],
"section": "Template Level Connectivity",
"sec_num": "3.1.1"
},
{
"text": "3.1.2 Slot Level Connectivity If two entities can play similar role in an event, they are likely to fill the same slot. We know that if two entities can play similar role, their head words may have the same hypernyms. We only consider the direct hypernyms here. Also, their predicates may have similar meaning and the entities may have the same dependency path to their predicate. Therefore, we give the factors equal weights and add them together to get the slot level similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template Level Connectivity",
"sec_num": "3.1.1"
},
{
"text": "W S (i, j) = cos p (i, j) + \u03b4(depend i = depend j ) + \u03b4(hypernym i \u2229 hypernym j \u0338 = \u03d5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template Level Connectivity",
"sec_num": "3.1.1"
},
{
"text": "(2) Here, the \u03b4(\u2022) has value 1 when the inner expression is true and 0 otherwise. The \"hypernym\" is derived from Wordnet (Miller, 1995) , so it is a set of direct hypernyms. If two entities' head words have at least one common direct hypernym, then they may belong to the same slot. And again cos p (i, j) represents the cosine distance between the predicates' word vector of entity i and entity j.",
"cite_spans": [
{
"start": 121,
"end": 135,
"text": "(Miller, 1995)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Template Level Connectivity",
"sec_num": "3.1.1"
},
{
"text": "Normalized cut intend to maximize the intra-class similarity while minimize the inter class similarity, which deals well with the connectivity between entities. We represent each entity as a point in a highdimension space. The edge weight between two points is their template level similarity / slot level similarity. Then the larger the similarity value is, the more likely the two entities (point) belong to the same template / slot, which is also our basis intuition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template and Slot Clustering Using Normalized Cut",
"sec_num": "3.2"
},
{
"text": "For simplicity, denote the entity set as E = {e 1 , \u2022 \u2022 \u2022 , e |E| }, and the template set as T . We use the |E| \u00d7 |T | partition matrix X T to represent the template clustering result. Let",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template and Slot Clustering Using Normalized Cut",
"sec_num": "3.2"
},
{
"text": "X T = [X T 1 , \u2022 \u2022 \u2022 , X T |T | ], where X T l is a binary indicator for template l(T l ). X T (i, l) = { 1 e i \u2208 T l 0 otherwise (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template and Slot Clustering Using Normalized Cut",
"sec_num": "3.2"
},
{
"text": "Usually, we define the degree matrix D T as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template and Slot Clustering Using Normalized Cut",
"sec_num": "3.2"
},
{
"text": "D T (i, i) = \u2211 j\u2208E W T (i, j), i = 1, \u2022 \u2022 \u2022 , |E|. Obvi- ously, D",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template and Slot Clustering Using Normalized Cut",
"sec_num": "3.2"
},
{
"text": "T is a diagonal matrix. It contains information about the weight sum of edges attached to each vertex. Then we have the template clustering optimization as shown in Eq 4 according to (Shi and Malik, 2000) .",
"cite_spans": [
{
"start": 183,
"end": 204,
"text": "(Shi and Malik, 2000)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Template and Slot Clustering Using Normalized Cut",
"sec_num": "3.2"
},
{
"text": "max \u03b5 1 (X T ) = 1 |T | |T | \u2211 l=1 X T T l W T X T l X T T l D T X T l s.t. X T \u2208 {0, 1} |E|\u00d7|T | X T 1 |T | = 1 |E| (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template and Slot Clustering Using Normalized Cut",
"sec_num": "3.2"
},
{
"text": "where 1 |E| represents the |E| \u00d7 1 vector of all 1's.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template and Slot Clustering Using Normalized Cut",
"sec_num": "3.2"
},
{
"text": "For the slot clustering, we have a similar optimization as shown in Eq 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template and Slot Clustering Using Normalized Cut",
"sec_num": "3.2"
},
{
"text": "max \u03b5 2 (X S ) = 1 |S| |S| \u2211 l=1 X T S l W S X S l X T S l D S X S l s.t. X S \u2208 {0, 1} |E|\u00d7|S| X S 1 |S| = 1 |E| (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template and Slot Clustering Using Normalized Cut",
"sec_num": "3.2"
},
{
"text": "where S represents the slot set, X S is the slot clustering result with",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template and Slot Clustering Using Normalized Cut",
"sec_num": "3.2"
},
{
"text": "X S = [X S 1 , \u2022 \u2022 \u2022 , X S |S| ], where X S l is a binary indicator for slot l(S l ). X S (i, l) = { 1 e i \u2208 S l 0 otherwise (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template and Slot Clustering Using Normalized Cut",
"sec_num": "3.2"
},
{
"text": "For event schema induction, we find an important property and we name it \"Sentence constraint\". The entities in one sentence often belong to one template but different slots.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Model With Sentence Constraints",
"sec_num": "3.3"
},
{
"text": "The sentence constraint contains two types of constraint, \"template constraint\" and \"slot constraint\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Model With Sentence Constraints",
"sec_num": "3.3"
},
{
"text": "1. Template constraint: Entities in the same sentence are usually in the same template. Hence we should make the templates taken by a sentence as few as possible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Model With Sentence Constraints",
"sec_num": "3.3"
},
{
"text": "2. Slot constraint: Entities in the same sentence are usually in different slots. Hence we should make the slots taken by a sentence as many as possible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Model With Sentence Constraints",
"sec_num": "3.3"
},
{
"text": "Based on these consideration, we can add an extra item to the optimization object. Let N sentence be the number of sentences. Define N sentence \u00d7 |E| matrix J as the sentence constraint matrix, the entries of J is as following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Model With Sentence Constraints",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "J(i, j) = { 1 e i \u2208 Sentence j 0 otherwise",
"eq_num": "(7)"
}
],
"section": "Joint Model With Sentence Constraints",
"sec_num": "3.3"
},
{
"text": "Easy to show, the product G T = J T X T represents the relation between sentences and templates. In matrix G T , the (i, j)-th entry represents how many entities in sentence i are belong to T j . Using G T , we can construct our objective. To represent the two constraints, the best objective we have found is the trace value: tr(G T G T T ). Each entry on the diagonal of matrix G T G T T is the square sum of all the entries in the corresponding line in G T , and the larger the trace value is, the less templates the sentence would taken. Since tr(G T G T T ) is the sum of the diagonal elements, we only need to maximize the value tr(G T G T T ) to meet the template constraint. For the same reason, we need to minimize the value tr(G S G T S ) to meet the slot constraint. Generally, we have the following optimization objective:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Model With Sentence Constraints",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b5 3 (X T , X S ) = tr ( X T T JJ T X T ) tr ( X T S JJ T X S )",
"eq_num": "(8)"
}
],
"section": "Joint Model With Sentence Constraints",
"sec_num": "3.3"
},
{
"text": "The whole joint model is shown in Eq 9. The de-tailed derivation 1 is shown in the supplement file.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Model With Sentence Constraints",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "X T , X S = argmax X T ,X S \u03b5 1 (X T ) + \u03b5 2 (X S ) + \u03b5 3 (X T , X S ) s.t. X T \u2208 {0, 1} |E|\u00d7|T | X T 1 |T | = 1 |E| X S \u2208 {0, 1} |E|\u00d7|S| X S 1 |S| = 1 |E|",
"eq_num": "(9)"
}
],
"section": "Joint Model With Sentence Constraints",
"sec_num": "3.3"
},
{
"text": "4 Experiment",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Model With Sentence Constraints",
"sec_num": "3.3"
},
{
"text": "In this paper, we use MUC-4 (Sundheim, 1991) as our dataset, which is the same as previous works (Chambers and Jurafsky, 2011; Chambers, 2013 Table 2 : Slot-only mapping comparison to state-of-the-art unsupervised systems, \"-SC\" means without sentence constraint the same. We call this the slot-only mapping evaluation. The second approach is to map each template t to the best gold template g, and limit the slot mapping so that only the slots under t can map to slots under g. We call this the strict template mapping evaluation. The slot-only mapping can result in higher scores since it is not constrained to preserve schema structure in the mapping. We compare our results with four works (Chambers and Jurafsky, 2011; Cheung, 2013; Chambers, 2013; Nguyen et al., 2015) as is shown in Table 2 and Table 3 . Our model has outperformed all of the previous methods. The improvement of recall is due to the normalized cut criteria, which can better use the inner connectivity between entities. The sentence constraint improves the result one step further.",
"cite_spans": [
{
"start": 28,
"end": 44,
"text": "(Sundheim, 1991)",
"ref_id": "BIBREF34"
},
{
"start": 97,
"end": 126,
"text": "(Chambers and Jurafsky, 2011;",
"ref_id": "BIBREF8"
},
{
"start": 127,
"end": 141,
"text": "Chambers, 2013",
"ref_id": "BIBREF9"
},
{
"start": 724,
"end": 737,
"text": "Cheung, 2013;",
"ref_id": null
},
{
"start": 738,
"end": 753,
"text": "Chambers, 2013;",
"ref_id": "BIBREF9"
},
{
"start": 754,
"end": 774,
"text": "Nguyen et al., 2015)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 142,
"end": 149,
"text": "Table 2",
"ref_id": null
},
{
"start": 790,
"end": 797,
"text": "Table 2",
"ref_id": null
},
{
"start": 802,
"end": 809,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "Note that after adding the sentence constraint, the slot-only performance has increased a little, but the strict template mapping performance has increased a lot as is shown in Table 3 . This phenomenon can be explained by the following facts: We count the Prec Recall F1 Chambers (2013) 0.42 0.27 0.33 Our Model-SC 0.26 0.55 0.35 Our Model 0.33 0.50 0.40 Table 3 : strict template mapping comparison to state-of-the-art unsupervised systems, \"-SC\" means without sentence constraint amount of entities which has been assigned different templates or different slots in \"Our Model-SC\" and \"Our Model\". Of all the 11465 entities, 2305 entities has been assigned different templates in the two methods while only 108 entities has different slots. This fact illustrates that the sentence constraint can affect the assignment of templates much more than the slots. Therefore, the sentence constraint leads largely improvement to the strict mapping performance and very little increase to the slot-only performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 177,
"end": 184,
"text": "Table 3",
"ref_id": null
},
{
"start": 356,
"end": 363,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "The traditional information extraction task is to fill the event schema slots. Many slot filling algorithms requires the full information of the event schemas and the labeled corpus. Among them, there are rule-based method (Rau et al., 1992; Chinchor et al., 1993) , supervised learning method (Baker et al., 1998; Chieu et al., 2003; Bunescu and Mooney, 2004; Patwardhan and Riloff, 2009; Maslennikov and Chua, 2007) , bootstrapping method (Yangarber et al., 2000) and cross-document inference method (Ji and Grishman, 2008) . Also there are many semisupervised solutions, which begin with unlabeled, but clustered event-specific documents, and extract common word patterns as extractors (Riloff and Schmelzenbach, 1998; Sudo et al., 2003; Riloff et al., 2005; Patwardhan and Riloff, 2007; Filatova et al., 2006; Surdeanu et al., 2006) Other traditional information extraction task learns binary relations and atomic facts. Models can learn relations like \"Jenny is married to Bob\" with unlabeled data Etzioni et al., 2008; Yates et al., 2007; Fader et al., 2011) , or ontology induction (dog is an animal) and attribute extraction (dogs have tails) (Carlson et al., 2010a; Carlson et al., 2010b; Huang and Riloff, 2010; Van Durme and Pasca, 2008) , or rely on predefined patterns (Hearst, 1992) . Shinyama and Sekine (2006) proposed an approach to learn templates with unlabeled corpus. They use unrestricted relation discovery to discover relations in unlabeled corpus as well as extract their fillers. Their constraints are that they need redundant documents and their relations are binary over repeated named entities. (Chen et al., 2011) also extract binary relations using generative model. Kasch and Oates (2010) , Chambers and Jurafsky (2008) , Chambers and Jurafsky (2009) , Balasubramanian et al. 2013captures template-like knowledge from unlabeled text by large-scale learning of scripts and narrative schemas. 
However, their structures are limited to frequent topics in a large corpus. Chambers and Jurafsky (2011) uses their idea, and their goal is to characterize a specific domain with limited data using a three-stage clustering algorithm.",
"cite_spans": [
{
"start": 223,
"end": 241,
"text": "(Rau et al., 1992;",
"ref_id": "BIBREF26"
},
{
"start": 242,
"end": 264,
"text": "Chinchor et al., 1993)",
"ref_id": null
},
{
"start": 294,
"end": 314,
"text": "(Baker et al., 1998;",
"ref_id": "BIBREF0"
},
{
"start": 315,
"end": 334,
"text": "Chieu et al., 2003;",
"ref_id": "BIBREF12"
},
{
"start": 335,
"end": 360,
"text": "Bunescu and Mooney, 2004;",
"ref_id": "BIBREF3"
},
{
"start": 361,
"end": 389,
"text": "Patwardhan and Riloff, 2009;",
"ref_id": "BIBREF25"
},
{
"start": 390,
"end": 417,
"text": "Maslennikov and Chua, 2007)",
"ref_id": "BIBREF20"
},
{
"start": 441,
"end": 465,
"text": "(Yangarber et al., 2000)",
"ref_id": "BIBREF37"
},
{
"start": 502,
"end": 525,
"text": "(Ji and Grishman, 2008)",
"ref_id": "BIBREF18"
},
{
"start": 689,
"end": 721,
"text": "(Riloff and Schmelzenbach, 1998;",
"ref_id": "BIBREF27"
},
{
"start": 722,
"end": 740,
"text": "Sudo et al., 2003;",
"ref_id": "BIBREF33"
},
{
"start": 741,
"end": 761,
"text": "Riloff et al., 2005;",
"ref_id": "BIBREF28"
},
{
"start": 762,
"end": 790,
"text": "Patwardhan and Riloff, 2007;",
"ref_id": "BIBREF24"
},
{
"start": 791,
"end": 813,
"text": "Filatova et al., 2006;",
"ref_id": "BIBREF15"
},
{
"start": 814,
"end": 836,
"text": "Surdeanu et al., 2006)",
"ref_id": "BIBREF35"
},
{
"start": 1003,
"end": 1024,
"text": "Etzioni et al., 2008;",
"ref_id": "BIBREF13"
},
{
"start": 1025,
"end": 1044,
"text": "Yates et al., 2007;",
"ref_id": "BIBREF38"
},
{
"start": 1045,
"end": 1064,
"text": "Fader et al., 2011)",
"ref_id": "BIBREF14"
},
{
"start": 1151,
"end": 1174,
"text": "(Carlson et al., 2010a;",
"ref_id": "BIBREF4"
},
{
"start": 1175,
"end": 1197,
"text": "Carlson et al., 2010b;",
"ref_id": "BIBREF5"
},
{
"start": 1198,
"end": 1221,
"text": "Huang and Riloff, 2010;",
"ref_id": "BIBREF17"
},
{
"start": 1222,
"end": 1248,
"text": "Van Durme and Pasca, 2008)",
"ref_id": "BIBREF36"
},
{
"start": 1282,
"end": 1296,
"text": "(Hearst, 1992)",
"ref_id": "BIBREF16"
},
{
"start": 1299,
"end": 1325,
"text": "Shinyama and Sekine (2006)",
"ref_id": "BIBREF32"
},
{
"start": 1624,
"end": 1643,
"text": "(Chen et al., 2011)",
"ref_id": "BIBREF10"
},
{
"start": 1698,
"end": 1720,
"text": "Kasch and Oates (2010)",
"ref_id": "BIBREF19"
},
{
"start": 1723,
"end": 1751,
"text": "Chambers and Jurafsky (2008)",
"ref_id": "BIBREF6"
},
{
"start": 1754,
"end": 1782,
"text": "Chambers and Jurafsky (2009)",
"ref_id": "BIBREF7"
},
{
"start": 1999,
"end": 2027,
"text": "Chambers and Jurafsky (2011)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "5"
},
{
"text": "Also, there are some state-of-the-art works using probabilistic graphic model (Chambers, 2013; Cheung, 2013; Nguyen et al., 2015) . They use the Gibbs sampling and get good results.",
"cite_spans": [
{
"start": 78,
"end": 94,
"text": "(Chambers, 2013;",
"ref_id": "BIBREF9"
},
{
"start": 95,
"end": 108,
"text": "Cheung, 2013;",
"ref_id": null
},
{
"start": 109,
"end": 129,
"text": "Nguyen et al., 2015)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "5"
},
{
"text": "This paper presented a joint entity-driven model to induct event schemas automatically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "This model uses word embedding as well as PMI to measure the inner connection of entities and uses normalized cut for more accurate clustering. Finally, our model uses sentence constraint to extract templates and slots simultaneously. The experiment has proved the effectiveness of our model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "At https://github.com/shalei120/ESI 1 2 can the code be found.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research is supported by National Key Basic Research Program of China (No.2014CB340504) and National Natural Science Foundation of China (No.61375074,61273318) . The contact authors of this paper are Sujian Li and Baobao Chang.",
"cite_spans": [],
"ref_spans": [
{
"start": 142,
"end": 164,
"text": "(No.61375074,61273318)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The berkeley framenet project",
"authors": [
{
"first": "F",
"middle": [],
"last": "Collin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Baker",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Charles",
"suffix": ""
},
{
"first": "John B",
"middle": [],
"last": "Fillmore",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lowe",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "86--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Collin F Baker, Charles J Fillmore, and John B Lowe. 1998. The berkeley framenet project. In Proceed- ings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th Internation- al Conference on Computational Linguistics-Volume 1, pages 86-90. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Generating coherent event schemas at scale",
"authors": [
{
"first": "Niranjan",
"middle": [],
"last": "Balasubramanian",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Oren",
"middle": [
"Etzioni"
],
"last": "Mausam",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Niranjan Balasubramanian, Stephen Soderland, and Oren Etzioni Mausam. 2013. Generating coheren- t event schemas at scale. Proceedings of the Empirical Methods in Natural Language Processing. ACM.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Open information extraction from the web",
"authors": [
{
"first": "Michele",
"middle": [],
"last": "Banko",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Cafarella",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Broadhead",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 20th international joint conference on Artifical intelligence",
"volume": "",
"issue": "",
"pages": "2670--2676",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michele Banko, Michael J Cafarella, Stephen Soderland, Matt Broadhead, and Oren Etzioni. 2007. Open infor- mation extraction from the web. In Proceedings of the 20th international joint conference on Artifical intelli- gence, pages 2670-2676. Morgan Kaufmann Publish- ers Inc.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Collective information extraction with relational markov networks",
"authors": [
{
"first": "Razvan",
"middle": [],
"last": "Bunescu",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Raymond",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Razvan Bunescu and Raymond J Mooney. 2004. Collec- tive information extraction with relational markov net- works. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, page 438. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Toward an architecture for never-ending language learning",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Carlson",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Betteridge",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Kisiel",
"suffix": ""
},
{
"first": "Burr",
"middle": [],
"last": "Settles",
"suffix": ""
},
{
"first": "Tom M",
"middle": [],
"last": "Estevam R Hruschka",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2010,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R Hruschka Jr, and Tom M Mitchell. 2010a. Toward an architecture for never-ending lan- guage learning. In AAAI.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Coupled semi-supervised learning for information extraction",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Carlson",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Betteridge",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"C"
],
"last": "Wang",
"suffix": ""
},
{
"first": "Estevam",
"middle": [
"R"
],
"last": "Hruschka",
"suffix": "Jr"
},
{
"first": "Tom",
"middle": [
"M"
],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the third ACM international conference on Web search and data mining",
"volume": "",
"issue": "",
"pages": "101--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Carlson, Justin Betteridge, Richard C Wang, Es- tevam R Hruschka Jr, and Tom M Mitchell. 2010b. Coupled semi-supervised learning for information ex- traction. In Proceedings of the third ACM internation- al conference on Web search and data mining, pages 101-110. ACM.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Unsupervised learning of narrative event chains",
"authors": [
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2008,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "789--797",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathanael Chambers and Daniel Jurafsky. 2008. Unsu- pervised learning of narrative event chains. In ACL, pages 789-797.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Unsupervised learning of narrative schemas and their participants",
"authors": [
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "2",
"issue": "",
"pages": "602--610",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathanael Chambers and Dan Jurafsky. 2009. Unsu- pervised learning of narrative schemas and their par- ticipants. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th Inter- national Joint Conference on Natural Language Pro- cessing of the AFNLP: Volume 2-Volume 2, pages 602- 610. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Template-based information extraction without the templates",
"authors": [
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "976--986",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathanael Chambers and Dan Jurafsky. 2011. Template- based information extraction without the templates. pages 976-986.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Event schema induction with a probabilistic entity-driven model",
"authors": [
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
}
],
"year": 2013,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathanael Chambers. 2013. Event schema induction with a probabilistic entity-driven model. EMNLP.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "In-domain relation discovery with meta-constraints via posterior regularization",
"authors": [
{
"first": "Harr",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Benson",
"suffix": ""
},
{
"first": "Tahira",
"middle": [],
"last": "Naseem",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "530--540",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harr Chen, Edward Benson, Tahira Naseem, and Regi- na Barzilay. 2011. In-domain relation discovery with meta-constraints via posterior regularization. In Pro- ceedings of the 49th Annual Meeting of the Associa- tion for Computational Linguistics: Human Language Technologies-Volume 1, pages 530-540. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Closing the gap: Learning-based information extraction rivaling knowledge-engineering methods",
"authors": [
{
"first": "Hai Leong",
"middle": [],
"last": "Chieu",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Yoong Keok",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "216--223",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Leong Chieu, Hwee Tou Ng, and Yoong Keok Lee. 2003. Closing the gap: Learning-based information extraction rivaling knowledge-engineering methods. In Proceedings of the 41st Annual Meeting on Associ- ation for Computational Linguistics-Volume 1, pages 216-223. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Evaluating message understanding systems: an analysis of the third message understanding conference (muc-3)",
"authors": [
{
"first": "Nancy",
"middle": [],
"last": "Chinchor",
"suffix": ""
},
{
"first": "David",
"middle": [
"D"
],
"last": "Lewis",
"suffix": ""
},
{
"first": "Lynette",
"middle": [],
"last": "Hirschman",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "3",
"pages": "409--449",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nancy Chinchor, David D Lewis, and Lynette Hirschman. 1993. Evaluating message under- standing systems: an analysis of the third message understanding conference (muc-3). Computational linguistics, 19(3):409-449.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Open information extraction from the web",
"authors": [
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "Michele",
"middle": [],
"last": "Banko",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2008,
"venue": "Communications of the ACM",
"volume": "51",
"issue": "12",
"pages": "68--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oren Etzioni, Michele Banko, Stephen Soderland, and Daniel S Weld. 2008. Open information extrac- tion from the web. Communications of the ACM, 51(12):68-74.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Identifying relations for open information extraction",
"authors": [
{
"first": "Anthony",
"middle": [],
"last": "Fader",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1535--1545",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information ex- traction. In Proceedings of the Conference on Empiri- cal Methods in Natural Language Processing, pages 1535-1545. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Automatic creation of domain templates",
"authors": [
{
"first": "Elena",
"middle": [],
"last": "Filatova",
"suffix": ""
},
{
"first": "Vasileios",
"middle": [],
"last": "Hatzivassiloglou",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the COLING/ACL on Main conference poster sessions",
"volume": "",
"issue": "",
"pages": "207--214",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elena Filatova, Vasileios Hatzivassiloglou, and Kathleen McKeown. 2006. Automatic creation of domain tem- plates. In Proceedings of the COLING/ACL on Main conference poster sessions, pages 207-214. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Automatic acquisition of hyponyms from large text corpora",
"authors": [
{
"first": "Marti",
"middle": [
"A"
],
"last": "Hearst",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the 14th conference on Computational linguistics",
"volume": "2",
"issue": "",
"pages": "539--545",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marti A Hearst. 1992. Automatic acquisition of hy- ponyms from large text corpora. In Proceedings of the 14th conference on Computational linguistics-Volume 2, pages 539-545. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Inducing domain-specific semantic class taggers from (almost) nothing",
"authors": [
{
"first": "Ruihong",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "275--285",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruihong Huang and Ellen Riloff. 2010. Inducing domain-specific semantic class taggers from (almost) nothing. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 275-285. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Refining event extraction through cross-document inference",
"authors": [
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2008,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "254--262",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heng Ji and Ralph Grishman. 2008. Refining event ex- traction through cross-document inference. In ACL, pages 254-262.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Mining script-like structures from the web",
"authors": [
{
"first": "Niels",
"middle": [],
"last": "Kasch",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Oates",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the NAA-CL HLT 2010 First International Workshop on Formalisms and Methodology for Learning by Reading",
"volume": "",
"issue": "",
"pages": "34--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Niels Kasch and Tim Oates. 2010. Mining script-like structures from the web. In Proceedings of the NAA- CL HLT 2010 First International Workshop on For- malisms and Methodology for Learning by Reading, pages 34-42. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Automatic acquisition of domain knowledge for information extraction",
"authors": [
{
"first": "Mstislav",
"middle": [],
"last": "Maslennikov",
"suffix": ""
},
{
"first": "Tat-Seng",
"middle": [],
"last": "Chua",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Association of Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mstislav Maslennikov and Tat-Seng Chua. 2007. Auto- matic acquisition of domain knowledge for informa- tion extraction. In Proceedings of the Association of Computational Linguistics (ACL).",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representa- tions in vector space. arXiv preprint arXiv:1301.3781.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Wordnet: a lexical database for english",
"authors": [
{
"first": "George",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39-41.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Generative event schema induction with entity disambiguation",
"authors": [
{
"first": "Kiem-Hieu",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Tannier",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Ferret",
"suffix": ""
},
{
"first": "Romaric",
"middle": [],
"last": "Besan\u00e7on",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "188--197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kiem-Hieu Nguyen, Xavier Tannier, Olivier Ferret, and Romaric Besan\u00e7on. 2015. Generative event schema induction with entity disambiguation. In Proceed- ings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th Internation- al Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 188-197, Beijing, China, July. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Effective information extraction with semantic affinity patterns and relevant regions",
"authors": [
{
"first": "Siddharth",
"middle": [],
"last": "Patwardhan",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
}
],
"year": 2007,
"venue": "EMNLP-CoNLL",
"volume": "7",
"issue": "",
"pages": "717--727",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Siddharth Patwardhan and Ellen Riloff. 2007. Effective information extraction with semantic affinity patterns and relevant regions. In EMNLP-CoNLL, volume 7, pages 717-727.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A unified model of phrasal and sentential evidence for information extraction",
"authors": [
{
"first": "Siddharth",
"middle": [],
"last": "Patwardhan",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "151--160",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Siddharth Patwardhan and Ellen Riloff. 2009. A unified model of phrasal and sentential evidence for informa- tion extraction. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Process- ing: Volume 1-Volume 1, pages 151-160. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Ge nltoolset: Muc-4 test results and analysis",
"authors": [
{
"first": "Lisa",
"middle": [],
"last": "Rau",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Krupka",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Jacobs",
"suffix": ""
},
{
"first": "Ira",
"middle": [],
"last": "Sider",
"suffix": ""
},
{
"first": "Lois",
"middle": [],
"last": "Childs",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the 4th conference on Message understanding",
"volume": "",
"issue": "",
"pages": "94--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lisa Rau, George Krupka, Paul Jacobs, Ira Sider, and Lois Childs. 1992. Ge nltoolset: Muc-4 test results and analysis. In Proceedings of the 4th conference on Message understanding, pages 94-99. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "An empirical approach to conceptual case frame acquisition",
"authors": [
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Schmelzenbach",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the Sixth Workshop on Very Large Corpora",
"volume": "",
"issue": "",
"pages": "49--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellen Riloff and Mark Schmelzenbach. 1998. An em- pirical approach to conceptual case frame acquisition. In Proceedings of the Sixth Workshop on Very Large Corpora, pages 49-56.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Exploiting subjectivity classification to improve information extraction",
"authors": [
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Phillips",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the National Conference On Artificial Intelligence",
"volume": "20",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellen Riloff, Janyce Wiebe, and William Phillips. 2005. Exploiting subjectivity classification to improve in- formation extraction. In Proceedings of the Nation- al Conference On Artificial Intelligence, volume 20, page 1106. Menlo Park, CA; Cambridge, MA; Lon- don; AAAI Press; MIT Press; 1999.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A survey of text mining: Retrieval, extraction and indexing techniques",
"authors": [
{
"first": "R",
"middle": [],
"last": "Sagayam",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Srinivasan",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roshni",
"suffix": ""
}
],
"year": 2012,
"venue": "International Journal Of Computational Engineering Research",
"volume": "2",
"issue": "5",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R Sagayam, S Srinivasan, and S Roshni. 2012. A sur- vey of text mining: Retrieval, extraction and indexing techniques. International Journal Of Computational Engineering Research, 2(5).",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "On-demand information extraction",
"authors": [
{
"first": "Satoshi",
"middle": [],
"last": "Sekine",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the COLING/ACL on Main conference poster sessions",
"volume": "",
"issue": "",
"pages": "731--738",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Satoshi Sekine. 2006. On-demand information extrac- tion. In Proceedings of the COLING/ACL on Main conference poster sessions, pages 731-738. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Normalized cuts and image segmentation",
"authors": [
{
"first": "Jianbo",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Jitendra",
"middle": [],
"last": "Malik",
"suffix": ""
}
],
"year": 2000,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"volume": "22",
"issue": "8",
"pages": "888--905",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianbo Shi and Jitendra Malik. 2000. Normalized cuts and image segmentation. Pattern Analysis and Ma- chine Intelligence, IEEE Transactions on, 22(8):888- 905.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Preemptive information extraction using unrestricted relation discovery",
"authors": [
{
"first": "Yusuke",
"middle": [],
"last": "Shinyama",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Sekine",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "304--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yusuke Shinyama and Satoshi Sekine. 2006. Preemp- tive information extraction using unrestricted relation discovery. In Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computation- al Linguistics, pages 304-311. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "An improved extraction pattern representation model for automatic ie pattern acquisition",
"authors": [
{
"first": "Kiyoshi",
"middle": [],
"last": "Sudo",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Sekine",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "224--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kiyoshi Sudo, Satoshi Sekine, and Ralph Grishman. 2003. An improved extraction pattern representation model for automatic ie pattern acquisition. In Pro- ceedings of the 41st Annual Meeting on Association for Computational Linguistics-Volume 1, pages 224- 231. Association for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Third message understanding evaluation and conference (muc-3): Phase 1 status report",
"authors": [
{
"first": "Beth",
"middle": [],
"last": "Sundheim",
"suffix": ""
}
],
"year": 1991,
"venue": "HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beth Sundheim. 1991. Third message understanding e- valuation and conference (muc-3): Phase 1 status re- port. In HLT.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "A hybrid approach for the acquisition of information extraction patterns",
"authors": [
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Jordi",
"middle": [],
"last": "Turmo",
"suffix": ""
},
{
"first": "Alicia",
"middle": [],
"last": "Ageno",
"suffix": ""
}
],
"year": 2006,
"venue": "Adaptive Text Extraction and Mining (ATEM 2006)",
"volume": "",
"issue": "",
"pages": "48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihai Surdeanu, Jordi Turmo, and Alicia Ageno. 2006. A hybrid approach for the acquisition of information extraction patterns. Adaptive Text Extraction and Min- ing (ATEM 2006), page 48.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Finding cars, goddesses and enzymes: Parametrizable acquisition of labeled instances for open-domain information extraction",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Marius",
"middle": [],
"last": "Pasca",
"suffix": ""
}
],
"year": 2008,
"venue": "AAAI",
"volume": "8",
"issue": "",
"pages": "1243--1248",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin Van Durme and Marius Pasca. 2008. Finding cars, goddesses and enzymes: Parametrizable acquisi- tion of labeled instances for open-domain information extraction. In AAAI, volume 8, pages 1243-1248.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Automatic acquisition of domain knowledge for information extraction",
"authors": [
{
"first": "Roman",
"middle": [],
"last": "Yangarber",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
},
{
"first": "Pasi",
"middle": [],
"last": "Tapanainen",
"suffix": ""
},
{
"first": "Silja",
"middle": [],
"last": "Huttunen",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 18th conference on Computational linguistics",
"volume": "2",
"issue": "",
"pages": "940--946",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roman Yangarber, Ralph Grishman, Pasi Tapanainen, and Silja Huttunen. 2000. Automatic acquisition of domain knowledge for information extraction. In Proceedings of the 18th conference on Computational linguistics-Volume 2, pages 940-946. Association for Computational Linguistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Textrunner: open information extraction on the web",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Yates",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Cafarella",
"suffix": ""
},
{
"first": "Michele",
"middle": [],
"last": "Banko",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Broadhead",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of Human Language Technologies: The Annual Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations",
"volume": "",
"issue": "",
"pages": "25--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Yates, Michael Cafarella, Michele Banko, Oren Etzioni, Matthew Broadhead, and Stephen Soderland. 2007. Textrunner: open information ex- traction on the web. In Proceedings of Human Lan- guage Technologies: The Annual Conference of the North American Chapter of the Association for Com- putational Linguistics: Demonstrations, pages 25-26. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "An entity example. 2 Task Definition: Our model is an entity-driven model. This model represents a document d as a series of entities E_d = {e_i | i = 1, 2, ...}. Each entity is a quadruple e = (h, p, d, f). Here, h represents the head word of an entity, p represents its predicate, and d represents the dependency path between the predicate and the head word; f contains the features of the entity (such as the direct hypernyms of the head word), the id of the sentence where e occurred, and the id of the document where e occurred. A simple example is shown in Fig 1.",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF0": {
"type_str": "table",
"html": null,
"content": "<table><tr><td>entity 1 entity 1</td><td>entity 2</td></tr><tr><td>entity 3</td><td/></tr><tr><td colspan=\"2\">Entity Representation</td></tr><tr><td>Entity 1: h=bomb, p=explode, d=subject,</td><td/></tr><tr><td colspan=\"2\">f={hyper={explosive, weaponry...} sentence=5, passage=41}</td></tr><tr><td colspan=\"2\">Entity 2: h=residence, p=explode, d=prep_in_front_of,</td></tr><tr><td colspan=\"2\">f={hyper={diplomatic building...} sentence=5, passage=41}</td></tr><tr><td>Entity 3: h=capital, p=explode, d=prep_in,</td><td/></tr><tr><td colspan=\"2\">f={hyper={center, federal government...} sentence=5,</td></tr><tr><td>passage=41}</td><td/></tr></table>",
"text": "",
"num": null
}
}
}
}