{
"paper_id": "C10-1022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:59:23.802056Z"
},
"title": "A Twin-Candidate Based Approach for Event Pronoun Resolution using Composite Kernel",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Bin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National University of Singapore",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Su",
"middle": [],
"last": "Jian",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Tan",
"middle": [
"Chew"
],
"last": "Lim",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National University of Singapore",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Event Anaphora Resolution is an important task for cascaded event template extraction and other NLP study. In this paper, we provide a first systematic study of resolving pronouns to their event verb antecedents for general purpose. First, we explore various positional, lexical and syntactic features useful for the event pronoun resolution. We further explore tree kernel to model structural information embedded in syntactic parses. A composite kernel is then used to combine the above diverse information. In addition, we employed a twin-candidate based preferences learning model to capture the pair wise candidates' preference knowledge. Besides we also look into the incorporation of the negative training instances with anaphoric pronouns whose antecedents are not verbs. Although these negative training instances are not used in previous study on anaphora resolution, our study shows that they are very useful for the final resolution through random sampling strategy. Our experiments demonstrate that it's meaningful to keep certain training data as development data to help SVM select a more accurate hyper plane which provides significant improvement over the default setting with all training data.",
"pdf_parse": {
"paper_id": "C10-1022",
"_pdf_hash": "",
"abstract": [
{
"text": "Event Anaphora Resolution is an important task for cascaded event template extraction and other NLP study. In this paper, we provide a first systematic study of resolving pronouns to their event verb antecedents for general purpose. First, we explore various positional, lexical and syntactic features useful for the event pronoun resolution. We further explore tree kernel to model structural information embedded in syntactic parses. A composite kernel is then used to combine the above diverse information. In addition, we employed a twin-candidate based preferences learning model to capture the pair wise candidates' preference knowledge. Besides we also look into the incorporation of the negative training instances with anaphoric pronouns whose antecedents are not verbs. Although these negative training instances are not used in previous study on anaphora resolution, our study shows that they are very useful for the final resolution through random sampling strategy. Our experiments demonstrate that it's meaningful to keep certain training data as development data to help SVM select a more accurate hyper plane which provides significant improvement over the default setting with all training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Anaphora resolution, the task of resolving a given text expression to its referred expression in prior texts, is important for intelligent text processing systems. Most previous works on anaphora resolution mainly aims at object anaphora in which both the anaphor and its antecedent are mentions of the same real world objects",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In contrast, an event anaphora as first defined in (Asher, 1993) is an anaphoric reference to an event, fact, and proposition which is representative of eventuality and abstract entity. Consider the following example:",
"cite_spans": [
{
"start": 51,
"end": 64,
"text": "(Asher, 1993)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This was an all-white, all-Christian community that all the sudden was taken over --not taken over, that's a very bad choice of words, but [invaded] 1 by, perhaps different groups.",
"cite_spans": [
{
"start": 82,
"end": 148,
"text": "--not taken over, that's a very bad choice of words, but [invaded]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "[It] 2 began when a Hasidic Jewish family bought one of the town's two meat-packing plants 13 years ago. The anaphor [It] 2 in the above example refers back to an event, \"all-white and all-Christian city of Postville is diluted by different ethnic groups.\" Here, we take the main verb of the event, [invaded] 1 as the representation of this event and the antecedent for pronoun [It] 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "According to (Asher, 1993) , antecedents of event pronoun include both gerunds (e.g. destruction) and inflectional verbs (e.g. destroying). In our study, we focus on the inflectional verb representation, as the gerund representation is studied in the conventional anaphora resolution. For the rest of this paper, \"event pronouns\" are pronouns whose antecedents are event verbs while \"non-event anaphoric pronouns\" are those with antecedents other than event verbs.",
"cite_spans": [
{
"start": 13,
"end": 26,
"text": "(Asher, 1993)",
"ref_id": "BIBREF0"
},
{
"start": 85,
"end": 97,
"text": "destruction)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Entity anaphora resolution provides critical links for cascaded event template extraction. It also provides useful information for further inference needed in other natural language processing tasks such as discourse relation and entailment. Event anaphora (both pronouns and noun phrases) contributes a significant proportion in anaphora corpora, such as OntoNotes. 19.97% of its total number of entity chains contains event verb mentions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In (Asher, 1993) chapter 6, a method to resolve references to abstract entities using discourse representation theory is discussed. However, no computation system was proposed for entity anaphora resolution. (Byron, 2002) proposed semantic filtering as a complement to salience calculations to resolve event pronoun targeted by us. This knowledge deep approach only works for much focused domain like trains spoken dialogue with handcraft knowledge of relevant events for only limited number of verbs involved. Clearly, this approach is not suitable for general event pronoun resolution say in news articles. Besides, there's also no specific performance report on event pronoun resolution, thus it's not clear how effective their approach is. (M\u00fcller, 2007) proposed pronoun resolution system using a set of hand-crafted constraints such as \"argumenthood\" and \"right-frontier condition\" together with logistic regression model based on corpus counts. The event pronouns are resolved together with object pronouns. This explorative work produced an 11.94% F-score for event pronoun resolution which demonstrated the difficulty of event anaphora resolution. In (Pradhan, et.al, 2007) , a general anaphora resolution system is applied to OntoNotes corpus. However, their set of features is designed for object anaphora resolution. There is no specific performance reported on event anaphora. We suspect the event pronouns are not correctly resolved in general as most of these features are irrelevant to event pronoun resolution.",
"cite_spans": [
{
"start": 3,
"end": 16,
"text": "(Asher, 1993)",
"ref_id": "BIBREF0"
},
{
"start": 208,
"end": 221,
"text": "(Byron, 2002)",
"ref_id": "BIBREF5"
},
{
"start": 744,
"end": 758,
"text": "(M\u00fcller, 2007)",
"ref_id": "BIBREF21"
},
{
"start": 1160,
"end": 1182,
"text": "(Pradhan, et.al, 2007)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we provide the first systematic study on pronominal references to event antecedents. First, we explore various positional, lexical and syntactic features useful for event pronoun resolution, which turns out quite different from conventional pronoun resolution except sentence distance information. These have been used together with syntactic structural information using a composite kernel. Furthermore, we also consider candidates' preferences information using twin-candidate model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Besides we further look into the incorporation of negative instances from non-event anaphoric pronoun, although these instances are not used in previous study on co-reference or anaphora resolution as they make training instances extremely unbalanced. Our study shows that they can be very useful for the final resolution after random sampling strategy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We further demonstrate that it's meaningful to keep certain training data as development data to help SVM select a more accurate hyper-plane which provide significant improvement over the default setting with all training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this paper is organized as follows. Section 2 introduces the framework for event pronoun resolution, the considerations on training instance, the various features useful for event pronoun resolution and SVM classifier with adjustment of hyper-plane. Twin-candidate model is further introduced to capture the preferences among candidates. Section 3 presents in details the structural syntactic feature and the kernel functions to incorporate such a feature in the resolution. Section 4 presents the experiment results and some discussion. Section 5 concludes the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our event-anaphora resolution system adopts the common learning-based model for object anaphora resolution, as employed by (Soon et al., 2001) and (Ng and Cardie, 2002a) .",
"cite_spans": [
{
"start": 123,
"end": 142,
"text": "(Soon et al., 2001)",
"ref_id": "BIBREF4"
},
{
"start": 147,
"end": 169,
"text": "(Ng and Cardie, 2002a)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Resolution Framework",
"sec_num": "2"
},
{
"text": "In the learning framework, training or testing instance of the resolution system has a form of where is the i th candidate of the antecedent of anaphor . An instance is labeled as positive if is the antecedent of , or negative if is not the antecedent of . An instance is associated with a feature vector which records different properties and relations between and . The features used in our system will be discussed later in this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and Testing instance",
"sec_num": "2.1"
},
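{
"text": "For concreteness, the instance form can be sketched as a small data structure. This is our illustration only, not the authors' code; the feature extractor is left abstract:\nfrom dataclasses import dataclass\n\n@dataclass\nclass Instance:\n    candidate: str    # the i-th candidate verb of the anaphor\n    anaphor: str      # the pronoun being resolved\n    label: int        # +1 if the candidate is the antecedent, else -1\n    features: list    # flat feature vector over (candidate, anaphor)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and Testing instance",
"sec_num": "2.1"
},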
{
"text": "During training, for each event pronoun, we consider the preceding verbs in its current and previous two sentences as its antecedent candidates. A positive instance is formed by pairing an anaphor with its correct antecedent. And a set of negative instances is formed by pairing an anaphor with its candidates other than the correct antecedent. In addition, more negative instances are generated from non-event anaphoric pronouns. Such an instance is created by pairing up a non-event anaphoric pronoun with each of the verbs within the pronoun's sentence and previous two sentences. This set of instances from nonevent anaphoric pronouns is employed to provide extra power on ruling out non-event anaphoric pronouns during resolution. This is inspired by the fact that event pronouns are only 14.7% of all the pronouns in the OntoNotes corpus. Based on these generated training instances, we can train a binary classifier using any discriminative learning algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and Testing instance",
"sec_num": "2.1"
},
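{
"text": "A minimal sketch of this instance generation scheme (our illustration, not the authors' code; the candidate windowing is assumed to be provided by the caller):\ndef generate_instances(pronoun, antecedent, candidate_verbs):\n    # candidate_verbs: verbs from the current and previous two sentences\n    # antecedent: the correct verb for an event pronoun, or None for a\n    # non-event anaphoric pronoun (all of whose pairings are negative)\n    instances = []\n    for v in candidate_verbs:\n        label = 1 if v == antecedent else -1\n        instances.append((v, pronoun, label))\n    return instances",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and Testing instance",
"sec_num": "2.1"
},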
{
"text": "The natural distribution of textual data is often imbalanced. Classes with fewer examples are under-represented and classifiers often perform far below satisfactory. In our study, this becomes a significant issue as positive class (event anaphoric) is the minority class in pronoun resolution task. Thus we utilize a random down sampling method to reduce majority class samples to an equivalent level with the minority class samples which is described in (Kubat and Matwin, 1997) and (Estabrooks et al, 2004) . In (Ng and Cardie, 2002b) , they proposed a negative sample selection scheme which included only negative instances found in between an anaphor and its antecedent. However, in our event pronoun resolution, we are distinguishing the event-anaphoric from non-event anaphoric which is different from (Ng and Cardie, 2002b) .",
"cite_spans": [
{
"start": 455,
"end": 479,
"text": "(Kubat and Matwin, 1997)",
"ref_id": "BIBREF2"
},
{
"start": 484,
"end": 508,
"text": "(Estabrooks et al, 2004)",
"ref_id": "BIBREF14"
},
{
"start": 514,
"end": 536,
"text": "(Ng and Cardie, 2002b)",
"ref_id": "BIBREF8"
},
{
"start": 808,
"end": 830,
"text": "(Ng and Cardie, 2002b)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training and Testing instance",
"sec_num": "2.1"
},
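{
"text": "A minimal sketch of the random down-sampling step (our illustration; instances are assumed to be (features, label) pairs with labels in {-1, +1}):\nimport random\n\ndef downsample(instances, seed=0):\n    # Keep all minority (positive) instances and randomly sample the\n    # majority (negative) class down to the same size.\n    pos = [x for x in instances if x[1] == 1]\n    neg = [x for x in instances if x[1] == -1]\n    random.Random(seed).shuffle(neg)\n    return pos + neg[:len(pos)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and Testing instance",
"sec_num": "2.1"
},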
{
"text": "In a conventional pronoun resolution, a set of syntactic and semantic knowledge has been reported as in (Strube and M\u00fcller, 2003; Yang et al, 2004; 2005a; 2006) . These features include number agreement, gender agreement and many others. However, most of these features are not useful for our task, as our antecedents are inflectional verbs instead of noun phrases. Thus we have conducted a study on effectiveness of potential positional, lexical and syntactic features. The lexical knowledge is mainly collected from corpus statistics. The syntactic features are mainly from intuitions. These features are purposely engineered to be highly correlated with positive instances. Therefore such kind of features will contribute to a high precision classifier. \uf0b7 Sentence Distance This feature measures the sentence distance between an anaphor and its antecedent candidate under the assumptions that a candidate in the closer sentence to the anaphor is preferred to be the antecedent. \uf0b7 Word Distance This feature measures the word distance between an anaphor and its antecedent candidate. It is mainly to distinguish verbs from the same sentence. \uf0b7 Surrounding Words and POS Tags The intuition behind this set of features is to find potential surface words that occur most frequently with the positive instances. Since most of verbs occurred in front of pronoun, we have built a frequency table from the preceding 5 words of the verb to succeeding 5 surface words of the pronoun. After the frequency table is built, we select those words with confidence 1 > 70% as features. Similar to Surrounding Words, we have built a frequency table to select indicative surrounding POS tags which occurs most frequently with positive instances. \uf0b7 Co-occurrences of Surrounding Words The intuition behind this set of features is to capture potential surface patterns such as \"It caused\u2026\" and \"It leads to\". These patterns are associated with strong indication that pronoun \"it\" is an event pronoun. The range for the cooccurrences is from preceding 5 words to succeeding 5 words. All possible combinations of word positions are used for a co-occurrence words pattern. For example \"it leads to\" will generate a pattern as \"S1_S2_lead_to\" where S1 and S2 mean succeeding position 1 and 2. Similar to previous surrounding words, we will conduct corpus statistics analysis and select cooccurrence patterns with a confidence greater than 70%. Following the same process, we have examined co-occurrence patterns for surrounding POS tags. \uf0b7 Subject/Object Features This set of features aims to capture the relative position of the pronoun in a sentence. It denotes the preference of pronoun's position at the clause level. There are 4 features in this category as listed below.",
"cite_spans": [
{
"start": 104,
"end": 129,
"text": "(Strube and M\u00fcller, 2003;",
"ref_id": "BIBREF9"
},
{
"start": 130,
"end": 147,
"text": "Yang et al, 2004;",
"ref_id": "BIBREF15"
},
{
"start": 148,
"end": 154,
"text": "2005a;",
"ref_id": "BIBREF16"
},
{
"start": 155,
"end": 160,
"text": "2006)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Space",
"sec_num": "2.2"
},
{
"text": "This feature indicates whether a pronoun is at the subject position of a main clause.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subject of Main Clause",
"sec_num": null
},
{
"text": "This feature indicates whether a pronoun is at the subject position of a sub-clause.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subject of Sub-clause",
"sec_num": null
},
{
"text": "This feature indicates whether a pronoun is at the object position of a main clause.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Object of Main Clause",
"sec_num": null
},
{
"text": "This feature indicates whether a pronoun is at the object position of a sub-clause. \uf0b7 Verb of Main/Sub Clause Similar to the Subject/Object features of pronoun, the following two features capture the rela-1 tive position of a verb in a sentence. It encodes the preference of verb position between main verbs in main/sub clauses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Object of Sub-clause",
"sec_num": null
},
{
"text": "This feature indicates whether a verb is a main verb in a main clause.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Verb in Main Clause",
"sec_num": null
},
{
"text": "This feature indicates whether a verb is a main verb in a sub-clause.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Verb in Sub-clause",
"sec_num": null
},
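{
"text": "The confidence-based selection of surrounding-word, POS-tag and co-occurrence features described above can be sketched as follows. This is our reading of the procedure, not the authors' code, and the definition of confidence is an assumption (the footnote defining it is not recoverable from this copy); we take it to be the fraction of a pattern's occurrences that come from positive instances:\nfrom collections import Counter\n\ndef select_patterns(pos_patterns, all_patterns, min_conf=0.7):\n    # pos_patterns: patterns observed in positive training instances\n    # all_patterns: the same patterns observed over all instances\n    pos_freq = Counter(pos_patterns)\n    total_freq = Counter(all_patterns)\n    return [p for p in total_freq\n            if pos_freq[p] / total_freq[p] > min_conf]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Space",
"sec_num": "2.2"
},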
{
"text": "In theory, any discriminative learning algorithm is applicable to learn a classifier for pronoun resolution. In our study, we use Support Vector Machine (Vapnik, 1995) to allow the use of kernels to incorporate the structure feature. One advantage of SVM is that we can use tree kernel approach to capture syntactic parse tree information in a particular high-dimension space.",
"cite_spans": [
{
"start": 153,
"end": 167,
"text": "(Vapnik, 1995)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Support Vector Machine",
"sec_num": "2.3"
},
{
"text": "Suppose a training set consists of labeled vectors , where is the feature vector of a training instance and is its class label. The classifier learned by SVM is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Support Vector Machine",
"sec_num": "2.3"
},
{
"text": "where is the learned parameter for a support vector . An instance is classified as positive if . Otherwise, is negative. \uf0b7 Adjust Hyper-plane with Development Data Previous works on pronoun resolution such as (Yang et al, 2006) used the default setting for hyper-plane which sets . And an instance is positive if and negative otherwise. In our study, we look into a method of adjusting the hyper-plane's position using development data to improve the classifier's performance.",
"cite_spans": [
{
"start": 209,
"end": 227,
"text": "(Yang et al, 2006)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Support Vector Machine",
"sec_num": "2.3"
},
{
"text": "Considering a default model setting for SVM as shown in Figure 2 (for illustration purpose, we use a 2-D example). The objective of SVM learning process is to find a set of weight vector which maximizes the margin (defined as ) with constraints defined by support vectors. The separating hyper-plane is given by as bold line in the center. The margin is the region between the two dotted lines (bounded by and ). The margin is a space without any information from training instances. The actual hyper-plane may fall in any place within the margin. It does not necessarily occur in the. However, the hyper-plane is used to separate positive and negative instances during classification process without consideration of the margin. Thus if an instance falls in the margin, SVM can only decide class label from hyper-plane which may cause misclassification in the margin.",
"cite_spans": [],
"ref_spans": [
{
"start": 56,
"end": 64,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Support Vector Machine",
"sec_num": "2.3"
},
{
"text": "Based on the previous discussion, we propose an adjustment of the hyper-plane using development data. For simplicity, we adjust the hyperplane function value instead of modeling the function itself. The hyper-plane function value will be further referred as a threshold . The following is a modified version of a learned SVM classifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Support Vector Machine",
"sec_num": "2.3"
},
{
"text": "where is the threshold, is the learned parameter for a feature and is its class label. A set of development data is used to adjust the hyper-plane function threshold in order to maximize the accuracy of the learned SVM classifier on development data. The adjustment of hyperplane is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Support Vector Machine",
"sec_num": "2.3"
},
{
"text": "is an indicator function which output 1 if is same sign as and 0 otherwise. Thereafter, the learned threshold is applied to the testing set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Support Vector Machine",
"sec_num": "2.3"
},
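{
"text": "A minimal sketch of the threshold adjustment on development data (our illustration; it scans the observed decision values as candidate thresholds and keeps the most accurate one):\ndef tune_threshold(dev_scores, dev_labels):\n    # dev_scores: SVM decision values f(x) on the development data\n    # dev_labels: gold labels in {-1, +1}\n    best_t, best_acc = 0.0, -1.0\n    for t in sorted(dev_scores):\n        correct = sum(1 for s, y in zip(dev_scores, dev_labels)\n                      if (1 if s - t > 0 else -1) == y)\n        acc = correct / len(dev_labels)\n        if acc > best_acc:\n            best_t, best_acc = t, acc\n    return best_t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Support Vector Machine",
"sec_num": "2.3"
},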
{
"text": "A parse tree that covers a pronoun and its antecedent candidate could provide us much syntactic information related to the pair which is explicitly or implicitly represented in the tree. Therefore, by comparing the common sub-structures between two trees we can find out to what degree two trees contain similar syntactic information, which can be done using a convolution tree kernel. The value returned from tree kernel reflects similarity between two instances in syntax. Such syntactic similarity can be further combined with other knowledge to compute overall similarity between two instances, through a composite kernel. Normally, parsing is done at sentence level. However, in many cases a pronoun and its antecedent candidate do not occur in the same sentence. To present their syntactic properties and relations in a single tree structure, we construct a syntax tree for an entire text, by attaching the parse trees of all its sentences to an upper node.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating Structural Syntactic Information",
"sec_num": "3"
},
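{
"text": "A minimal sketch of the document-level tree construction (our illustration, using nltk.Tree as an assumed representation of the per-sentence parses):\nfrom nltk import Tree\n\ndef document_tree(sentence_trees):\n    # Attach every sentence parse under one pseudo 'TOP' node so that\n    # cross-sentence pronoun-candidate pairs share a single tree.\n    return Tree('TOP', list(sentence_trees))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating Structural Syntactic Information",
"sec_num": "3"
},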
{
"text": "Having obtained the parse tree of a text, we shall consider how to select the appropriate portion of the tree as the structured feature for a given instance. As each instance is related to a pronoun and a candidate, the structured feature at least should be able to cover both of these two expressions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating Structural Syntactic Information",
"sec_num": "3"
},
{
"text": "Generally, the more substructure of the tree is included, the more syntactic information would be provided, but at the same time the more noisy information that comes from parsing errors would likely be introduced. In our study, we examine three possible structured features that contain different substructures of the parse tree:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structural Syntactic Feature",
"sec_num": "3.1"
},
{
"text": "\uf0b7 Minimum Expansion Tree This feature records the minimal structure covering both pronoun and its candidate in parse tree. It only includes the nodes occurring in the shortest path connecting the pronoun and its candidate, via the nearest commonly commanding node. When the pronoun and candidate are from different sentences, we will find a path through pseudo \"TOP\" node which links all the parse trees. Considering the example given in section 1, This was an all-white, all-Christian community that all the sudden was taken over --not taken over, that's a very bad choice of words, but [invaded] 1 by, perhaps different groups.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structural Syntactic Feature",
"sec_num": "3.1"
},
{
"text": "[It] 2 began when a Hasidic Jewish family bought one of the town's two meat-packing plants 13 years ago. The minimum expansion structural feature of the instance {invaded, it} is annotated with bold lines and shaded nodes in figure 1. \uf0b7 Simple Expansion Tree Minimum-Expansion could, to some degree, describe the syntactic relationships between the candidate and pronoun. However, it is incapable of capturing the syntactic properties of the can-didate or the pronoun, because the tree structure surrounding the expression is not taken into consideration. To incorporate such information, feature Simple-Expansion not only contains all the nodes in Minimum-Expansion, but also includes the first-level children of these nodes 2 except the punctuations. The simple-expansion structural feature of instance {invaded, it} is annotated in figure 2. In the left sentence's tree, the node \"NP\" for \"perhaps different groups\" is terminated to provide a clue that we have a noun phrase at the object position of the candidate verb. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structural Syntactic Feature",
"sec_num": "3.1"
},
{
"text": "This feature focuses on the whole tree structure between the candidate and pronoun. It not only includes all the nodes in Simple-Expansion, but also the nodes (beneath the nearest commanding parent) that cover the words between the candi-date and the pronoun 3 . Such a feature keeps the most information related to the pronoun and candidate pair. Figure 3 shows the structure for feature full-expansion for instance {invaded, it}. As illustrated, the \"NP\" node for \"perhaps different groups\" is further expanded to the POS level. All its child nodes are included in the full-expansion tree except the surface words.",
"cite_spans": [],
"ref_spans": [
{
"start": 348,
"end": 356,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "\uf0b7 Full Expansion Tree",
"sec_num": null
},
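{
"text": "A minimal sketch of extracting the minimum-expansion structure (our illustration, not the authors' code; it collects the node labels on the shortest path between the two leaves through their nearest commonly commanding node, using nltk tree positions):\nfrom nltk import Tree\n\ndef min_expansion_path(tree, pos_a, pos_b):\n    # pos_a, pos_b: tree positions (tuples) of the candidate verb and\n    # the pronoun; their common prefix addresses the nearest commonly\n    # commanding node, and the path runs leaf -> that node -> leaf.\n    k = 0\n    while k < min(len(pos_a), len(pos_b)) and pos_a[k] == pos_b[k]:\n        k += 1\n    path = [pos_a[:i] for i in range(len(pos_a), k - 1, -1)]\n    path += [pos_b[:i] for i in range(k + 1, len(pos_b) + 1)]\n    return [tree[p].label() if isinstance(tree[p], Tree) else tree[p]\n            for p in path]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structural Syntactic Feature",
"sec_num": "3.1"
},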
{
"text": "To calculate the similarity between two structured features, we use the convolution tree kernel that is defined by Collins and Duffy (2002) and Moschitti (2004) . Given two trees, the kernel will enumerate all their sub-trees and use the number of common sub-trees as the measure of similarity between two trees. The above tree kernel only aims for the structured feature. We also need a composite kernel to combine the structured feature and the flat features from section 2.2. In our study we define the composite kernel as follows:",
"cite_spans": [
{
"start": 115,
"end": 139,
"text": "Collins and Duffy (2002)",
"ref_id": "BIBREF6"
},
{
"start": 144,
"end": 160,
"text": "Moschitti (2004)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Convolution Parse Tree Kernel and Composite Kernel",
"sec_num": "3.2"
},
{
"text": "where is the convolution tree kernel defined for the structured feature, and is the kernel applied on the flat features. Both kernels are divided by their respective length 4 for normalization. The new composite kernel , defined as the sum of normalized and , will return a value close to 1 only if both the structured features and the flat features have high similarity under their respective kernels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolution Parse Tree Kernel and Composite Kernel",
"sec_num": "3.2"
},
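{
"text": "The definition of a kernel's length in footnote 4 is truncated in this copy. A standard choice consistent with dividing each kernel by its respective length, and the one we assume here, is cosine normalization: \\hat{K}(x_1, x_2) = K(x_1, x_2) / \\sqrt{K(x_1, x_1) \\cdot K(x_2, x_2)}. Under this assumption each normalized kernel lies in [0, 1], so the composite kernel K_c = \\hat{K}_{tree} + \\hat{K}_{flat} is large only when both components indicate high similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolution Parse Tree Kernel and Composite Kernel",
"sec_num": "3.2"
},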
{
"text": "ing SVM Model In a ranking SVM kernel as described in (Moschitti et al, 2006) for Semantic Role Labeling, two argument annotations (as argument trees) are presented to the ranking SVM model to decide which one is better. In our case, we present two syntactic trees from two candidates to the ranking SVM model. The idea is inspired by (Yang, et.al, 2005b; 2008) . The intuition behind the twin-candidate model is to capture the information of how much one candidate is more pre- 3 We will not expand the nodes denoting the sentences other than where the pronoun and the candidate occur. 4 The length of a kernel is defined as ferred than another. The candidate wins most of the pair wise comparisons is selected as antecedent. The feature vector for each training instance has a form of . An instance is positive if is a better antecedent choice than . Otherwise, it is a negative instance. For each feature vector, both tree structural features and flat features are used. Thus each feature vector has a form of where and are trees of candidate i and j respectively, and are flat feature vectors of candidate i and j respectively.",
"cite_spans": [
{
"start": 54,
"end": 77,
"text": "(Moschitti et al, 2006)",
"ref_id": "BIBREF20"
},
{
"start": 335,
"end": 355,
"text": "(Yang, et.al, 2005b;",
"ref_id": null
},
{
"start": 356,
"end": 361,
"text": "2008)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Twin-Candidate Framework using Rank-",
"sec_num": "3.3"
},
{
"text": "In the training instances generation, we only generate those instances with one candidate is the correct antecedent. This follows the same strategy used in (Yang et al, 2008) for object anaphora resolution.",
"cite_spans": [
{
"start": 156,
"end": 174,
"text": "(Yang et al, 2008)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Twin-Candidate Framework using Rank-",
"sec_num": "3.3"
},
{
"text": "In the resolution process, a list of m candidates is extracted from a three sentences window. A total of instances are generated by pairingup the m candidates pair-wisely. We used a Round-Robin scoring scheme for antecedent selection. Suppose a SVM output for an instance is 1, we will give a score 1 for and -1 for and vice versa. At last, the candidate with the highest score is selected as antecedent. In order to handle a nonevent anaphoric pronoun, we have set a threshold to distinguish event anaphoric from non-event anaphoric. A pronoun is considered as event anaphoric if its score is above the threshold. In our experiments, we kept a set of development data to find out the threshold in an empirical way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Twin-Candidate Framework using Rank-",
"sec_num": "3.3"
},
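{
"text": "A minimal sketch of the round-robin antecedent selection (our illustration, not the authors' code; classify stands for the learned ranking model applied to an ordered candidate pair):\ndef select_antecedent(candidates, classify, threshold):\n    # classify(ci, cj) returns 1 if ci is preferred over cj, else -1\n    score = {c: 0 for c in candidates}\n    for i, ci in enumerate(candidates):\n        for cj in candidates[i + 1:]:\n            if classify(ci, cj) == 1:\n                score[ci] += 1\n                score[cj] -= 1\n            else:\n                score[ci] -= 1\n                score[cj] += 1\n    best = max(score, key=score.get)\n    # A score at or below the threshold marks the pronoun non-event anaphoric.\n    return best if score[best] > threshold else None",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Twin-Candidate Framework using Ranking SVM Model",
"sec_num": "3.3"
},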
{
"text": "OntoNotes Release 2.0 English corpus as in (Hovy et al, 2006) is used in our study, which contains 300k words of English newswire data (from the Wall Street Journal) and 200k words of English broadcast news data (from ABC, CNN, NBC, Public Radio International and Voice of America). Table 1 shows the distribution of various entities. We focused on the resolution of 502 event pronouns encountered in the corpus. The resolution system has to handle both the event pronoun identification and antecedent selection tasks. To illustrate the difficulty of event pronoun resolution, 14.7% of all pronoun mentions are event anaphoric and only 31.5% of event pronoun can be resolved using \"most recent verb\" heuristics. Therefore a most-recentverb baseline will yield an f-score 4.63%.",
"cite_spans": [
{
"start": 43,
"end": 61,
"text": "(Hovy et al, 2006)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 283,
"end": 290,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "To conduct event pronoun resolution, an input raw text was preprocessed automatically by a pipeline of NLP components. The noun phrase identification and the predicate-argument extraction were done based on Stanford Parser (Klein and Manning, 2003a; b) For each pronoun encountered during resolution, all the inflectional verbs within the current and previous two sentences are taken as candidates. For the current sentence, we take only those verbs in front of the pronoun. On average, each event pronoun has 6.93 candidates. Nonevent anaphoric pronouns will generate 7.3 negative instances on average.",
"cite_spans": [
{
"start": 223,
"end": 249,
"text": "(Klein and Manning, 2003a;",
"ref_id": "BIBREF10"
},
{
"start": 250,
"end": 252,
"text": "b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
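{
"text": "A minimal sketch of the candidate extraction step (our illustration; verbs_in_sentence is an assumed helper yielding (word_offset, verb) pairs over the parsed document):\ndef extract_candidates(doc, sent_idx, pronoun_offset, verbs_in_sentence):\n    # Inflectional verbs from the current and previous two sentences;\n    # in the current sentence, only verbs preceding the pronoun.\n    cands = []\n    for s in range(max(0, sent_idx - 2), sent_idx + 1):\n        for offset, verb in verbs_in_sentence(doc, s):\n            if s < sent_idx or offset < pronoun_offset:\n                cands.append(verb)\n    return cands",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},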
{
"text": "In this section, we will present our experimental results with discussions. The performance measures we used are precision, recall and F-score. All the experiments are done with a 10-folds cross validation. In each fold of experiments, the whole corpus is divided into 10 equal sized portions. One of them is selected as testing corpus while the remaining 9 are used for training. In experiments with development data, 1 of the 9 training portions is kept for development purpose. In case of statistical significance test for differences is needed, a two-tailed, paired-sample Student's t-Test is performed at 0.05 level of significance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results and Discussion",
"sec_num": "4.2"
},
{
"text": "In the first set of experiments, we are aiming to investigate the effectiveness of each single knowledge source. Table 2 reports the performance of each individual experiment. The flat feature set yields a baseline system with 40.6% fscore. By using each tree structure along, we can only achieve a performance of 44.4% f-score using the minimum-expansion tree. Therefore, we will further investigate the different ways of combining flat and syntactic structure knowledge to improve resolution performances. As table 3 shows, minimum-expansion gives highest precision in both experiment settings. Minimum-expansion emphasizes syntactic structures linking the anaphor and antecedent. Although using only the syntactic path may lose the contextual information, but it also prune out the potential noise within the contextual structures. In contrast, the full-expansion gives the highest recall. This is probably due to the widest knowledge coverage provides by the full-expansion syntactic tree. As a trade-off, the precision of full-expansion is the lowest in the experiments. One reason for this may be due to OntoNotes corpus is from broadcasting news domain. Its texts are less-formally structured. Another type of noise is that a narrator of news may read an abnormally long sentence. It should appear as several separate sentences in a news article. However, in broadcasting news, these sentences maybe simply joined by conjunction word \"and\". Thus a very nasty and noisy structure is created from it. Comparing the three knowledge source, simple-expansion achieves moderate precision and recall which results in the highest f-score. From this, we can draw a conclusion that simpleexpansion achieves a balance between the indicative structural information and introduced noises.",
"cite_spans": [],
"ref_spans": [
{
"start": 113,
"end": 120,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment Results and Discussion",
"sec_num": "4.2"
},
{
"text": "In the next set of experiments, we will compare different setting for training instances generation. A typical setting contains no negative instances generated from non-event anaphoric pronoun. This is not an issue for object pronoun resolution as majority of pronouns in an article is anaphoric. However in our case, the event pronoun consists of only 14.7% of the total pronouns in OntoNotes. Thus we incorporate the instances from non-event pronouns to improve the precision of the classifier. However, if we include all the negative instances from non-event anaphoric pronouns, the positive instances will be overwhelmed by the negative instances. A down sampling is applied to the training instances to create a more balanced class distribution. Table 4 reports various training settings using simple-expansion tree structure. In table 4, the first line is experiment without any negative instances from non-event pronouns. The second line is the performance with all negative instances from non-event pronouns. Third line is performance using a balanced training set using down sampling. The last line is experiment using hyper-plane adjustment. The first line gives the highest recall measure because it has no discriminative knowledge on non-event anaphoric pronoun. The second line yields the highest precision which complies with our claim that including negative instances from non-event pronouns will improve precision of the classifier because more discriminative power is given by non-event pronoun instances. The balanced training set achieves a better f-score comparing to models with no/all negative instances. This is because balanced training set provides a better weighted positive/negative instances which implies a balanced positive/negative knowledge representation. As a result of that, we achieve a better balanced f-score. In (Ng and Cardie, 2002b) , they concluded that only the negative instances in between the anaphor and antecedent are useful in the resolution. It is same as our strategy without negative instances from nonevent anaphoric pronouns. However, our study showed an improvement by adding in negative instances from non-event anaphoric pronouns as showed in table 4. This is probably due to our random sampling strategy over the negative instances near to the event anaphoric instances. It empowers the system with more discriminative power. The best performance is given by the hyper-plane adaptation model. Although the number of training instances is further reduced for development data, we can have an adjustment of the hyper-plane which is more fit to dataset.",
"cite_spans": [
{
"start": 1852,
"end": 1874,
"text": "(Ng and Cardie, 2002b)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 751,
"end": 758,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment Results and Discussion",
"sec_num": "4.2"
},
{
"text": "In the last set of experiments, we will present the performance from the twin-candidates based approach in table 5. The first line is the best performance from single candidate system with hyper-plane adaptation. The second line is performance using the twin-candidates approach. Comparing to the single candidate model, the recall is significantly improved with a small trade-off in precision. The difference in results is statistically significant using t-test at 5% level of significance. It reinforced our intuition that preferences between two candidates are contributive information sources in co-reference resolution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results and Discussion",
"sec_num": "4.2"
},
{
"text": "The purpose of this paper is to conduct a systematic study of the event pronoun resolution. We propose a resolution system utilizing a set of flat positional, lexical and syntactic feature and structural syntactic feature. The state-of-arts convolution tree kernel is used to extract indicative structural syntactic knowledge. A twincandidates preference learning based approach is incorporated to reinforce the resolution system with candidates' preferences knowledge. Last but not least, we also proposed a study of the various incorporations of negative training instances, specially using random sampling to handle the imbalanced data. Development data is also used to select more accurate hyper-plane in SVM for better determination.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "To further our research work, we plan to employ more semantic information into the system such as semantic role labels and verb frames.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "If the pronoun and the candidate are not in the same sentence, we will not include the nodes denoting the sentences before the candidate or after the pronoun.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Professor Massimo Poesio from University of Trento for the initial discussion of this work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgment",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Reference to Abstract Objects in Discourse",
"authors": [
{
"first": "N",
"middle": [],
"last": "Asher",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Asher. 1993. Reference to Abstract Objects in Dis- course. Kluwer Academic Publisher. 1993.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The Nature of Statistical Learning Theory",
"authors": [
{
"first": "V",
"middle": [],
"last": "Vapnik",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Vapnik. 1995. The Nature of Statistical Learning Theory. Springer.1995.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Addressing the curse of imbalanced data set: One sided sampling",
"authors": [
{
"first": "M",
"middle": [],
"last": "Kubat",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Matwin",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the Fourteenth International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "179--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Kubat and S. Matwin, 1997. Addressing the curse of imbalanced data set: One sided sampling. In Proceedings of the Fourteenth International Con- ference on Machine Learning,1997. pg179-186.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Making large-scale svm learning practical",
"authors": [
{
"first": "T",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 1999,
"venue": "Advances in Kernel Methods -Support Vector Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Joachims. 1999. Making large-scale svm learning practical. In Advances in Kernel Methods -Sup- port Vector Learning. MIT Press.1999.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A machine learning approach to coreference resolution of noun phrases",
"authors": [
{
"first": "W",
"middle": [],
"last": "Soon",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Lim",
"suffix": ""
}
],
"year": 2001,
"venue": "Computational Linguistics",
"volume": "27",
"issue": "",
"pages": "521--544",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Soon, H. Ng, and D. Lim. 2001. A machine learn- ing approach to coreference resolution of noun phrases. In Computational Linguistics, Vol:27(4), pg521-544.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Resolving Pronominal Reference to Abstract Entities",
"authors": [
{
"first": "D",
"middle": [],
"last": "Byron",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL'02)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Byron. 2002. Resolving Pronominal Reference to Abstract Entities, in Proceedings of the 40th An- nual Meeting of the Association for Computational Linguistics (ACL'02). July 2002. , USA",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "New ranking algorithms for parsing and tagging: Kernels over discrete structures, and the voted perceptron",
"authors": [
{
"first": "M",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Duffy",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL'02)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Collins and N. Duffy. 2002. New ranking algo- rithms for parsing and tagging: Kernels over dis- crete structures, and the voted perceptron. In Pro- ceedings of the 40th Annual Meeting of the Associ- ation for Computational Linguistics (ACL'02). July 2002. , USA",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Improving machine learning approaches to coreference resolution",
"authors": [
{
"first": "V",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL'02)",
"volume": "",
"issue": "",
"pages": "104--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Ng and C. Cardie. 2002a. Improving machine learning approaches to coreference resolution. In Proceedings of the 40th Annual Meeting of the As- sociation for Computational Linguistics (ACL'02). July 2002. , USA. pg104-111.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Identifying anaphoric and non-anaphoric noun phrases to improve coreference resolution",
"authors": [
{
"first": "V",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 19th International Conference on Computational Linguistics (COLING02)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Ng, and C. Cardie. 2002b. Identifying anaphoric and non-anaphoric noun phrases to improve core- ference resolution. In Proceedings of the 19th In- ternational Conference on Computational Linguis- tics (COLING02). (2002)",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A Machine Learning Approach to Pronoun Resolution in Spoken Dialogue",
"authors": [
{
"first": "M",
"middle": [],
"last": "Strube",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41 st Annual Meeting of the Association for Computational Linguistics (ACL'03)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Strube and C. M\u00fcller. 2003. A Machine Learning Approach to Pronoun Resolution in Spoken Dialo- gue. . In Proceedings of the 41 st Annual Meeting of the Association for Computational Linguistics (ACL'03), 2003",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Fast Exact Inference with a Factored Model for Natural Language Parsing",
"authors": [
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "Advances in Neural Information Processing Systems 15 (NIPS 2002)",
"volume": "",
"issue": "",
"pages": "3--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Klein and C. Manning. 2003a. Fast Exact Infe- rence with a Factored Model for Natural Language Parsing. In Advances in Neural Information Processing Systems 15 (NIPS 2002), Cambridge, MA: MIT Press, pp. 3-10.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Accurate Unlexicalized Parsing",
"authors": [
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41 st Annual Meeting of the Association for Computational Linguistics (ACL'03)",
"volume": "",
"issue": "",
"pages": "423--430",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Klein and C.Manning. 2003b. Accurate Unlexica- lized Parsing. In Proceedings of the 41 st Annual Meeting of the Association for Computational Lin- guistics (ACL'03), 2003. pg423-430.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Coreference Resolution Using Competition Learning Approach",
"authors": [
{
"first": "X",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Tan",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41 st Annual Meeting of the Association for Computational Linguistics (ACL'03)",
"volume": "",
"issue": "",
"pages": "176--183",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. Yang, G. Zhou, J. Su, and C.Tan. 2003. Corefe- rence Resolution Using Competition Learning Ap- proach. In Proceedings of the 41 st Annual Meeting of the Association for Computational Linguistics (ACL'03), 2003. pg176-183.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A study on convolution kernels for shallow semantic parsing",
"authors": [
{
"first": "A",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL'04)",
"volume": "",
"issue": "",
"pages": "335--342",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Moschitti. 2004. A study on convolution kernels for shallow semantic parsing. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL'04), pg335-342.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A multiple resampling method for learning from imbalanced data sets",
"authors": [
{
"first": "A",
"middle": [],
"last": "Estabrooks",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Jo",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Japkowicz",
"suffix": ""
}
],
"year": 2004,
"venue": "In Computational Intelligence",
"volume": "20",
"issue": "1",
"pages": "18--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Estabrooks, T. Jo, and N. Japkowicz. 2004. A mul- tiple resampling method for learning from imba- lanced data sets. In Computational Intelligence Vol:20(1). pg18-36.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Improving pronoun resolution by incorporating coreferential information of candidates",
"authors": [
{
"first": "X",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Tan",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of 42th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "127--134",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. Yang, J. Su, G. Zhou, and C. Tan. 2004. Improving pronoun resolution by incorporating coreferential information of candidates. In Proceedings of 42th Annual Meeting of the Association for Computa- tional Linguistics, 2004. pg127-134.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Improving Pronoun Resolution Using Statistics-Based Semantic Compatibility Information",
"authors": [
{
"first": "X",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Tan",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. Yang, J. Su and C.Tan. 2005a. Improving Pronoun Resolution Using Statistics-Based Semantic Com- patibility Information. In Proceedings of Proceed- ings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05). June 2005.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A Twin-Candidates Model for Coreference Resolution with Non-Anaphoric Identification Capability",
"authors": [
{
"first": "X",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Tan",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of IJCNLP-2005",
"volume": "",
"issue": "",
"pages": "719--730",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. Yang, J. Su and C.Tan. 2005b. A Twin-Candidates Model for Coreference Resolution with Non- Anaphoric Identification Capability. In Proceed- ings of IJCNLP-2005. Pp. 719-730, 2005",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "OntoNotes: The 90\\% Solution",
"authors": [
{
"first": "E",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Human Language Technology Conference of the NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Hovy, M. Marcus, M. Palmer, L. Ramshaw, and R. Weischedel. 2006. OntoNotes: The 90\\% Solution. In Proceedings of the Human Language Technol- ogy Conference of the NAACL, 2006",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Kernel-Based Pronoun Resolution with Structured Syntactic Knowledge",
"authors": [
{
"first": "X",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Tan",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 44th Annual Meeting of the Association for Computational Linguistics (ACL'06)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. Yang, J. Su and C.Tan. 2006. Kernel-Based Pro- noun Resolution with Structured Syntactic Know- ledge. In Proceedings of the 44th Annual Meeting of the Association for Computational Linguistics (ACL'06). July 2006. Australia.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Making tree kernels practical for natural language learning",
"authors": [
{
"first": "A",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings EACL 2006",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Moschitti, Making tree kernels practical for natural language learning. In Proceedings EACL 2006, Trento, Italy, 2006.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Resolving it, this, and that in unrestricted multi-party dialog",
"authors": [
{
"first": "C",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL'07)",
"volume": "",
"issue": "",
"pages": "816--823",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. M\u00fcller. 2007. Resolving it, this, and that in unre- stricted multi-party dialog. In Proceedings of the 45th Annual Meeting of the Association for Com- putational Linguistics (ACL'07). 2007. Czech Re- public. pg816-823.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A Twin-Candidates Model for Learning-Based Coreference Resolution",
"authors": [
{
"first": "X",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Tan",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "",
"issue": "",
"pages": "327--356",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. Yang, J. Su and C.Tan. 2008. A Twin-Candidates Model for Learning-Based Coreference Resolution. In Computational Linguistics, Vol:34(3). pg327- 356.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Unrestricted Coreference: Identifying Entities and Events in Onto-Notes",
"authors": [
{
"first": "S",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weischedel",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mac-Bride",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Micciulla",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the IEEE International Conference on Semantic Computing (ICSC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Pradhan, L. Ramshaw, R. Weischedel, J. Mac- Bride, and L. Micciulla. 2007. Unrestricted Corefe- rence: Identifying Entities and Events in Onto- Notes. In Proceedings of the IEEE International Conference on Semantic Computing (ICSC), Sep. 2007.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Figure 2: 2-D SVM Illustration",
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"num": null,
"text": "Figure 1: Minimum-Expansion Tree",
"type_str": "figure",
"uris": null
},
"TABREF0": {
"text": "with F-score of 86.32% on Penn Treebank corpus.",
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">Non-Event Anaphora:</td><td colspan=\"2\">4952 80.03%</td></tr><tr><td>Event</td><td>Event NP:</td><td colspan=\"2\">733 59.35%</td></tr><tr><td>Anaphora:</td><td>Event</td><td>It:</td><td>29.0%</td></tr><tr><td>1235</td><td>Pronoun:</td><td colspan=\"2\">This: 16.9%</td></tr><tr><td>19.97%</td><td>502 40.65%</td><td colspan=\"2\">That: 54.1%</td></tr><tr><td colspan=\"4\">Table 1: The distribution of various types of 6187</td></tr><tr><td colspan=\"3\">anaphora in OntoNotes 2.0</td><td/></tr></table>"
}
}
}
}