{
"paper_id": "S10-1017",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:27:34.328803Z"
},
"title": "RelaxCor: A Global Relaxation Labeling Approach to Coreference Resolution",
"authors": [
{
"first": "Emili",
"middle": [],
"last": "Sapena",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "TALP Research Center Universitat Polit\u00e8cnica de Catalunya Barcelona",
"location": {
"country": "Spain"
}
},
"email": "[email protected]"
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "Padr\u00f3",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "TALP Research Center Universitat Polit\u00e8cnica de Catalunya Barcelona",
"location": {
"country": "Spain"
}
},
"email": "[email protected]"
},
{
"first": "Jordi",
"middle": [],
"last": "Turmo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "TALP Research Center Universitat Polit\u00e8cnica de Catalunya Barcelona",
"location": {
"country": "Spain"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes the participation of RelaxCor in the Semeval-2010 task number 1: \"Coreference Resolution in Multiple Languages\". RelaxCor is a constraint-based graph partitioning approach to coreference resolution solved by relaxation labeling. The approach combines the strengths of groupwise classifiers and chain formation methods in one global method.",
"pdf_parse": {
"paper_id": "S10-1017",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes the participation of RelaxCor in the Semeval-2010 task number 1: \"Coreference Resolution in Multiple Languages\". RelaxCor is a constraint-based graph partitioning approach to coreference resolution solved by relaxation labeling. The approach combines the strengths of groupwise classifiers and chain formation methods in one global method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The Semeval-2010 task is concerned with intradocument coreference resolution for six different languages: Catalan, Dutch, English, German, Italian and Spanish. The core of the task is to identify which noun phrases (NPs) in a text refer to the same discourse entity (Recasens et al., 2010) .",
"cite_spans": [
{
"start": 266,
"end": 289,
"text": "(Recasens et al., 2010)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "RelaxCor (Sapena et al., 2010) represents the problem as a graph and solves it with a relaxation labeling process, reducing coreference resolution to a graph partitioning problem given a set of constraints. In this manner, decisions are taken considering the whole set of mentions, ensuring consistency and avoiding classification decisions that are taken independently of each other.",
"cite_spans": [
{
"start": 9,
"end": 30,
"text": "(Sapena et al., 2010)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper is organized as follows. Section 2 describes RelaxCor, the system used in the Semeval task. Next, Section 3 describes the tuning needed by the system to adapt it to different languages and other task issues. The same section also analyzes the obtained results. Finally, Section 4 concludes the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This section briefly describes RelaxCor. First, the graph representation is presented. Next, there is an explanation of the methodology used to learn constraints and train the system. Finally, the algorithm used for resolution is described.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "2"
},
{
"text": "Let G = G(V, E) be an undirected graph where V is a set of vertices and E a set of edges. Let m = (m 1 , ..., m n ) be the set of mentions of a document with n mentions to resolve. Each mention m i in the document is represented as a vertex v i \u2208 V in the graph. An edge e ij \u2208 E is added to the graph for pairs of vertices (v i , v j ) representing the possibility that both mentions corefer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Representation",
"sec_num": "2.1"
},
{
"text": "Let C be our set of constraints. Given a pair of mentions (m i , m j ), a subset of constraints C ij \u2286 C restrict the compatibility of both mentions. C ij is used to compute the weight value of the edge connecting v i and v j . Let w ij \u2208 W be the weight of the edge e ij :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Representation",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w ij = \u2211 k\u2208C ij \u03bb k f k (m i , m j )",
"eq_num": "(1)"
}
],
"section": "Problem Representation",
"sec_num": "2.1"
},
{
"text": "where f k (\u2022) is a function that evaluates constraint k and \u03bb k is the weight associated with the constraint. Note that \u03bb k and w ij can be negative. In our approach, each vertex v i in the graph is a variable for the algorithm. Let L i be the number of different values (labels) that are possible for v i . The possible labels of each variable are the partitions to which the vertex can be assigned. A vertex with index i can be in the first i partitions (i.e. L i = i).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Representation",
"sec_num": "2.1"
},
{
"text": "DIST: Distance between mi and mj in sentences: number\nDIST MEN: Distance between mi and mj in mentions: number\nAPPOSITIVE: One mention is in apposition with the other: y,n\nI/J IN QUOTES: m i/j is in quotes or inside a NP or a sentence in quotes: y,n\nI/J FIRST: m i/j is the first mention in the sentence: y,n\nLexical:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distance and position:",
"sec_num": null
},
{
"text": "I/J DEF NP: m i/j is a definite NP: y,n\nI/J DEM NP: m i/j is a demonstrative NP: y,n\nI/J INDEF NP: m i/j is an indefinite NP: y,n\nSTR MATCH: String matching of mi and mj : y,n\nPRO STR: Both are pronouns and their strings match: y,n\nPN STR: Both are proper names and their strings match: y,n\nNONPRO STR: String matching like in Soon et al. (2001) and mentions are not pronouns: y,n\nHEAD MATCH: String matching of NP heads: y,n\nMorphological:",
"cite_spans": [
{
"start": 327,
"end": 345,
"text": "Soon et al. (2001)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distance and position:",
"sec_num": null
},
{
"text": "NUMBER: The number of both mentions match: y,n,u\nGENDER: The gender of both mentions match: y,n,u\nAGREEMENT: Gender and number of both mentions match: y,n,u\nI/J THIRD PERSON: m i/j is 3rd person: y,n\nPROPER NAME: Both mentions are proper names: y,n,u\nI/J PERSON: m i/j is a person (pronoun or proper name in a list): y,n\nANIMACY: Animacy of both mentions match (persons, objects): y,n\nI/J REFLEXIVE: m i/j is a reflexive pronoun: y,n\nI/J TYPE: m i/j is a pronoun (p), entity (e) or nominal (n)\nSyntactic:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distance and position:",
"sec_num": null
},
{
"text": "NESTED: One mention is included in the other: y,n\nMAXIMALNP: Both mentions have the same NP parent or they are nested: y,n\nI/J MAXIMALNP: m i/j is not included in any other mention: y,n\nI/J EMBEDDED: m i/j is a noun and is not a maximal NP: y,n\nBINDING: Conditions B and C of binding theory: y,n\nSemantic:\nSEMCLASS: Semantic class of both mentions match: y,n,u (the same as (Soon et al., 2001 ))\nALIAS: One mention is an alias of the other: y,n,u (only entities, else unknown)\nI/J SRL ARG: Semantic role of m i/j : N,0,1,2,3,4,M,L\nSRL SAMEVERB: Both mentions have a semantic role for the same verb: y,n\nFigure 1 : Feature functions used.",
"cite_spans": [
{
"start": 374,
"end": 392,
"text": "(Soon et al., 2001",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 603,
"end": 611,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Distance and position:",
"sec_num": null
},
{
"text": "Each pair of mentions (m i , m j ) in a training document is evaluated by the set of feature functions shown in Figure 1 . The values returned by these functions form a positive example when the pair of mentions corefer, and a negative one otherwise. Three specialized models are constructed depending on the type of anaphor mention (m j ) of the pair: pronoun, named entity or nominal. A decision tree is generated for each specialized model and a set of rules is extracted with C4.5 rule-learning algorithm (Quinlan, 1993) . These rules are our set of constraints. The C4.5rules algorithm generates a set of rules for each path from the learned tree. It then checks if the rules can be generalized by dropping conditions.",
"cite_spans": [
{
"start": 509,
"end": 524,
"text": "(Quinlan, 1993)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 112,
"end": 120,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training Process",
"sec_num": "2.2"
},
{
"text": "Given the training corpus, the weight of a constraint C k is related to the number of examples where the constraint applies, A C k , and to how many of them corefer, C C k . We define \u03bb k as the weight of constraint C k , calculated as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Process",
"sec_num": "2.2"
},
{
"text": "\u03bb k = C C k / A C k \u2212 0.5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Process",
"sec_num": "2.2"
},
{
"text": "Relaxation labeling (Relax) is a generic name for a family of iterative algorithms which perform function optimization, based on local information (Hummel and Zucker, 1987) . The algorithm solves our weighted constraint satisfaction problem by dealing with the edge weights: each vertex is assigned to a partition satisfying as many constraints as possible. To do so, the algorithm assigns a probability to each possible label of each variable. Let H = (h 1 , h 2 , . . . , h n ) be the weighted labeling to optimize, where each h i is a vector containing the probability distribution of v i , that is:",
"cite_spans": [
{
"start": 147,
"end": 172,
"text": "(Hummel and Zucker, 1987)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Resolution Algorithm",
"sec_num": "2.3"
},
{
"text": "h i = (h i 1 , h i 2 , . . . , h i L i ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resolution Algorithm",
"sec_num": "2.3"
},
{
"text": "Given that the resolution process is iterative, the probability for label l of variable v i at time step t is h i l (t), or simply h i l when the time step is not relevant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resolution Algorithm",
"sec_num": "2.3"
},
{
"text": "Initialize:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resolution Algorithm",
"sec_num": "2.3"
},
{
"text": "H := H0\nMain loop:\nrepeat\n  For each variable vi\n    For each possible label l for vi\n      S il = \u2211 j\u2208A(v i ) wij \u00d7 h j l",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resolution Algorithm",
"sec_num": "2.3"
},
{
"text": "    End for\n    For each possible label l for vi",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resolution Algorithm",
"sec_num": "2.3"
},
{
"text": "      h i l (t + 1) = ( h i l (t) \u00d7 (1+S il ) ) / ( \u2211 k=1..L i h i k (t) \u00d7 (1+S ik ) )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resolution Algorithm",
"sec_num": "2.3"
},
{
"text": "      End for\n  End for\nUntil no more significant changes\n\nThe support for a pair variable-label (S il ) expresses how compatible the assignment of label l to variable v i is, taking into account the labels of adjacent variables and the edge weights. The support is defined as the sum of the edge weights that relate variable v i with each adjacent variable v j , each multiplied by the weight for the same label l of variable v j :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resolution Algorithm",
"sec_num": "2.3"
},
{
"text": "S il = \u2211 j\u2208A(v i ) w ij \u00d7 h j l",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resolution Algorithm",
"sec_num": "2.3"
},
{
"text": "where w ij is the edge weight obtained in Equation 1 and vertex v i has |A(v i )| adjacent vertices. In our version of the algorithm for coreference resolution, A(v i ) is the list of adjacent vertices of v i , restricted to those with an index k < i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resolution Algorithm",
"sec_num": "2.3"
},
{
"text": "The aim of the algorithm is to find a weighted labeling such that global consistency is maximized. Maximizing global consistency is defined as maximizing the average support for each variable. The final partitioning is directly obtained from the weighted labeling H by assigning to each variable the label with maximum probability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resolution Algorithm",
"sec_num": "2.3"
},
{
"text": "The pseudo-code of the relaxation algorithm can be found in Figure 2 . The process updates the weights of the labels in each step until convergence, i.e. when no more significant changes are made in an iteration. Finally, the assigned label for a variable is the one with the highest weight. Figure 3 shows an example of the process.",
"cite_spans": [],
"ref_spans": [
{
"start": 60,
"end": 68,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 292,
"end": 300,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Resolution Algorithm",
"sec_num": "2.3"
},
{
"text": "RelaxCor participated in the Semeval task for English, Catalan and Spanish. The system does not detect the mentions of the text by itself; thus, participation was restricted to the gold-standard evaluation, which includes the manually annotated information and also provides the mention boundaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semeval task participation",
"sec_num": "3"
},
{
"text": "All the knowledge required by the feature functions ( Figure 1) is obtained from the annotations of the corpora and no external resources have been used, with the exception of WordNet (Miller, 1995) for English. In this case, the system has been run twice for English: English-open, using WordNet, and English-closed, without WordNet.",
"cite_spans": [
{
"start": 184,
"end": 198,
"text": "(Miller, 1995)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 54,
"end": 63,
"text": "Figure 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semeval task participation",
"sec_num": "3"
},
{
"text": "The whole methodology of RelaxCor, including the resolution algorithm and the training process, is totally independent of the language of the document. The only parts that need a few adjustments are the preprocessing and the set of feature functions. In most cases, the modifications in the feature functions deal with the different data formats of the different languages rather than with specific language issues. Moreover, given that the task provides much information about the mentions of the documents, such as part of speech, syntactic dependency, head and semantic role, no preprocessing has been needed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language and format adaptation",
"sec_num": "3.1"
},
{
"text": "One of the problems we found when adapting the system to the task corpora was the large amount of available data. As described in Section 2.2, the training process generates a feature vector for each pair of mentions in a document, for all the documents of the training data set. However, the large number of training documents and their length overwhelmed the software that learns the constraints. In order to reduce the number of pair examples, we ran a clustering process that reduces the number of negative examples, using the positive examples as the centroids. Note that negative examples are nearly 94% of the training examples, and many of them are repeated. For each positive example (a corefering pair of mentions), only the negative examples at a distance of less than a threshold d are included in the final training data. The distance is computed as the number of differing values in the feature vector. After some experiments on development data, d was set to 3. Thus, a negative example was discarded when it differed in more than three features from every positive example.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language and format adaptation",
"sec_num": "3.1"
},
{
"text": "Our results for the development data set are shown in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 54,
"end": 61,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Language and format adaptation",
"sec_num": "3.1"
},
{
"text": "Results of RelaxCor for the test data set are shown in Table 2 . One of the characteristics of the system is that the resolution process always takes into account the whole set of mentions, avoiding any possible pair-linkage contradiction and enforcing transitivity. Therefore, the system favors precision, which results in high scores with the CEAF and B 3 metrics. However, the system is penalized by the metrics based on pair-linkage, especially MUC. Although RelaxCor has the highest precision scores even for MUC, the recall is low enough to yield low F 1 scores.",
"cite_spans": [],
"ref_spans": [
{
"start": 55,
"end": 62,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results analysis",
"sec_num": "3.2"
},
{
"text": "Comparing the test scores with those of the other participants (Recasens et al., 2010), RelaxCor obtains the best performance for Catalan, English (open: B 3 ) and Spanish (B 3 ). Moreover, RelaxCor is the most precise system for all the metrics in all the languages, except for CEAF in English-open and Spanish. This confirms the robustness of the results of RelaxCor, but also shows that more knowledge or information is needed to increase the recall of the system without losing this precision. The incorporation of WordNet in the English run is the only difference between English-open and English-closed. The scores are slightly higher when using WordNet, but the difference is not significant. Analyzing the MUC scores, note that recall improves while precision decreases slightly, which corresponds to the information and the noise that WordNet typically provides.",
"cite_spans": [
{
"start": 63,
"end": 86,
"text": "(Recasens et al., 2010)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results analysis",
"sec_num": "3.2"
},
{
"text": "The results for the test and development sets are very similar, as expected, except for Spanish (es). Recall falls considerably from development to test. This is clearly shown in the MUC recall and also indirectly affects the other scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results analysis",
"sec_num": "3.2"
},
{
"text": "The participation of RelaxCor in the Semeval coreference resolution task has been useful to evaluate the system on multiple languages using data never seen before. Many published systems typically use the same data sets (ACE and MUC), and it is easy to unintentionally adapt the system to the corpora and not just to the problem. Tasks of this kind favor comparisons between systems within the same framework and initial conditions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "The results obtained confirm the robustness of RelaxCor, and its performance is competitive with the state of the art. The system avoids contradictions in the results, which yields high precision. However, more knowledge about the mentions is needed in order to increase the recall without losing that precision. A further error analysis is needed, but one of the main problems is the lack of semantic information and world knowledge, especially for nominal mentions (mentions that are NPs but are neither named entities nor pronouns).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
}
],
"back_matter": [
{
"text": "The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement number 247762 (FAUST), and from the Spanish Science and Innovation Ministry, via the KNOW2 project (TIN2009-14715-C04-04).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "On the foundations of relaxation labeling processes",
"authors": [
{
"first": "R",
"middle": [
"A"
],
"last": "Hummel",
"suffix": ""
},
{
"first": "S",
"middle": [
"W"
],
"last": "Zucker",
"suffix": ""
}
],
"year": 1987,
"venue": "",
"volume": "",
"issue": "",
"pages": "585--605",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. A. Hummel and S. W. Zucker. 1987. On the foundations of relaxation labeling processes. pages 585-605.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "WordNet: a lexical database for English",
"authors": [
{
"first": "G",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G.A. Miller. 1995. WordNet: a lexical database for English.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "C4.5: Programs for Machine Learning",
"authors": [
{
"first": "J",
"middle": [
"R"
],
"last": "Quinlan",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J.R. Quinlan. 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "SemEval-2010 Task 1: Coreference resolution in multiple languages",
"authors": [
{
"first": "M",
"middle": [],
"last": "Recasens",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Sapena",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Mart\u00ed",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Taul\u00e9",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Hoste",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Poesio",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Versley",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 5th International Workshop on Semantic Evaluations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Recasens, L. M\u00e0rquez, E. Sapena, M.A. Mart\u00ed, M. Taul\u00e9, V. Hoste, M. Poesio, and Y. Versley. 2010. SemEval-2010 Task 1: Coreference resolution in multiple languages. In Proceedings of the 5th International Workshop on Seman- tic Evaluations (SemEval-2010), Uppsala, Sweden.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A Global Relaxation Labeling Approach to Coreference Resolution",
"authors": [
{
"first": "E",
"middle": [],
"last": "Sapena",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Padr\u00f3",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Turmo",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Sapena, L. Padr\u00f3, and J. Turmo. 2010. A Global Relax- ation Labeling Approach to Coreference Resolution. Sub- mitted.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A Machine Learning Approach to Coreference Resolution of Noun Phrases",
"authors": [
{
"first": "W",
"middle": [
"M"
],
"last": "Soon",
"suffix": ""
},
{
"first": "H",
"middle": [
"T"
],
"last": "Ng",
"suffix": ""
},
{
"first": "D",
"middle": [
"C Y"
],
"last": "Lim",
"suffix": ""
}
],
"year": 2001,
"venue": "Computational Linguistics",
"volume": "27",
"issue": "4",
"pages": "521--544",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W.M. Soon, H.T. Ng, and D.C.Y. Lim. 2001. A Machine Learning Approach to Coreference Resolution of Noun Phrases. Computational Linguistics, 27(4):521-544.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Relaxation labeling algorithm",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "Representation of Relax. The vertices representing mentions are connected by weighted edges eij. Each vertex has a vector h i of probabilities to belong to different partitions. The figure shows h 2 , h 3 and h 4 .",
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"num": null,
"content": "<table><tr><td>-</td><td/><td>CEAF</td><td/><td/><td>MUC</td><td/><td/><td>B 3</td><td/><td/><td>BLANC</td><td/></tr><tr><td>language</td><td>R</td><td>P</td><td>F1</td><td>R</td><td>P</td><td>F1</td><td>R</td><td>P</td><td>F1</td><td>R</td><td>P</td><td>Blanc</td></tr><tr><td colspan=\"4\">Information: closed Annotation: gold</td><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>ca</td><td>70.5</td><td>70.5</td><td>70.5</td><td>29.3</td><td>77.3</td><td>42.5</td><td>68.6</td><td>95.8</td><td>79.9</td><td>56.0</td><td>81.8</td><td>59.7</td></tr><tr><td>es</td><td>66.6</td><td>66.6</td><td>66.6</td><td>14.8</td><td>73.8</td><td>24.7</td><td>65.3</td><td>97.5</td><td>78.2</td><td>53.4</td><td>81.8</td><td>55.6</td></tr><tr><td>en</td><td>75.6</td><td>75.6</td><td>75.6</td><td>21.9</td><td>72.4</td><td>33.7</td><td>74.8</td><td>97.0</td><td>84.5</td><td>57.0</td><td>83.4</td><td>61.3</td></tr><tr><td colspan=\"4\">Information: open Annotation: gold</td><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>en</td><td>75.8</td><td>75.8</td><td>75.8</td><td>22.6</td><td>70.5</td><td>34.2</td><td>75.2</td><td>96.7</td><td>84.6</td><td>58.0</td><td>83.8</td><td>62.7</td></tr></table>",
"type_str": "table",
"text": "Results on the development data set",
"html": null
},
"TABREF2": {
"num": null,
"content": "<table><tr><td>: Results of the task</td></tr></table>",
"type_str": "table",
"text": "",
"html": null
}
}
}
}