{
"paper_id": "S07-1038",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:23:03.296604Z"
},
"title": "ILK2: Semantic Role Labelling for Catalan and Spanish using TiMBL",
"authors": [
{
"first": "Roser",
"middle": [],
"last": "Morante",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tilburg University",
"location": {
"postBox": "P.O.Box 90153",
"postCode": "NL-5000 LE",
"settlement": "Tilburg",
"country": "The Netherlands"
}
},
"email": "[email protected]"
},
{
"first": "Bertjan",
"middle": [],
"last": "Busser",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tilburg University",
"location": {
"postBox": "P.O.Box 90153",
"postCode": "NL-5000 LE",
"settlement": "Tilburg",
"country": "The Netherlands"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper we present a semantic role labeling system submitted to the task Multilevel Semantic Annotation of Catalan and Spanish in the context of SemEval-2007. The core of the system is a memory-based classifier that makes use of full syntactic information. Building on standard features, we train two classifiers to predict separately the semantic class of the verb and the semantic roles.",
"pdf_parse": {
"paper_id": "S07-1038",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper we present a semantic role labeling system submitted to the task Multilevel Semantic Annotation of Catalan and Spanish in the context of SemEval-2007. The core of the system is a memory-based classifier that makes use of full syntactic information. Building on standard features, we train two classifiers to predict separately the semantic class of the verb and the semantic roles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Semantic role labelling (SRL) has been addressed in the CoNLL-2004 and CoNLL-2005 Shared Tasks (Carreras and M\u00e0rquez, 2004; Carreras and M\u00e0rquez, 2005) for English. In the task Multilevel Semantic Annotation of Catalan and Spanish of the SemEval competition 2007, the target are two different languages. The general SRL task consists of two tasks: prediction of semantic roles (SR) and prediction of the semantic class of the verb (SC).",
"cite_spans": [
{
"start": 95,
"end": 123,
"text": "(Carreras and M\u00e0rquez, 2004;",
"ref_id": "BIBREF6"
},
{
"start": 124,
"end": 151,
"text": "Carreras and M\u00e0rquez, 2005)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The data provided in the task (M\u00e0rquez et al., 2007) are sentences annotated with lemma, POS tags, syntactic information, semantic roles, and the semantic classes of the verb. A training corpus for Catalan (ca.3LB) and another for Spanish (sp.3LB) are provided. Although the setting is similar to the CoNLL-Shared Task 2005, three relevant differences are that the corpora are significantly smaller, that the syntactic information is based on a manually corrected treebank, which contains also syntactic functions (i.e. direct object, indirect object, etc.), and that the set of semantic roles is larger, especially for core arguments.",
"cite_spans": [
{
"start": 30,
"end": 52,
"text": "(M\u00e0rquez et al., 2007)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our goal is to check whether simple individual systems could produce competitive results in both subtasks, and whether they would be robust enough when applied to two languages and to the held-out test sets provided.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We approach the SRL task as two classification problems: prediction of SR and prediction of SC. We hypothesize that the two problems can be solved in the same way for both languages. We build two very similar systems that differ only in some of the features used, as we explain below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System description",
"sec_num": "2"
},
{
"text": "The task is solved in three phases: 1) A preprocessing phase that is very similar to the sequentialization in . We call it focus selection. It consists of identifying the potential candidates to be assigned a semantic role or a semantic verb class. 2) The classification. 3) Some limited postprocessing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System description",
"sec_num": "2"
},
{
"text": "The system starts by finding the target verb (which is marked in the corpus as such). Then, it finds the complete form of the verb (that in the corpus is tagged as verb group, infinitive, gerund, etc.) and the clause boundaries in order to look for the siblings of the verb that are under the same clause. Our assumption is that all siblings of the verb are potential candidates for semantic roles. The focus selection process produces two groups of focus tokens: on the one hand, the verbs and, on the other, the siblings of the verbs. These tokens will be the instances in each training set. Table 1 ",
"cite_spans": [],
"ref_spans": [
{
"start": 594,
"end": 601,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Focus selection",
"sec_num": "2.1"
},
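{
"text": "A minimal sketch of the focus selection step in hypothetical Python, assuming a simplified clause representation (the field names are illustrative, not the actual task data format):\n\ndef select_focus_tokens(clause, target_verb):\n    # clause: dict with a 'constituents' list; target_verb is one of them.\n    # All siblings of the verb under the same clause are candidates for a\n    # semantic role; the verb itself is the candidate for a semantic class.\n    siblings = [c for c in clause['constituents'] if c is not target_verb]\n    return {'verb_focus': target_verb, 'role_candidates': siblings}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Focus selection",
"sec_num": "2.1"
},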
{
"text": "In both systems we approach the classification task in one step, predicting directly the SR and the SC class. This means that in the SR task we do not perform a previous classification to select the tokens that might be assigned a role. We assume that all verbs belong to a class. As for the SR, we assume that most siblings of the verb will have a class, except for those that have syntactic functions AO, ET, MOD, NEG, IMPERS, PASS, and VOC. The siblings that do not have a semantic role are assigned the NONE tag. Because the corpus is small and because the amount of instances with a NONE class is proportionally low, we do not consider it necessary to filter these cases. Regarding the learning algorithm, we use the IB1 classifier as implemented in TiMBL (version 5.1) (Daelemans et al., 2004) , a supervised inductive algorithm for learning classification tasks based on the k nearest neighbor (k-nn) algorithm. In IB1, similarity is defined by a feature-level distance metric between a test instance and a memorized training instance. The metric combines a per-feature valuebased distance metric with global feature weights that account for relative differences in importance of the features.",
"cite_spans": [
{
"start": 775,
"end": 799,
"text": "(Daelemans et al., 2004)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "2.2"
},
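{
"text": "As an illustration of the memory-based principle, the following is a simplified k-nn classifier with an overlap metric and global feature weights, written in hypothetical Python; it is a sketch of the general idea, not the actual TiMBL implementation:\n\nfrom collections import Counter\n\ndef knn_classify(test, memory, weights, k=11):\n    # memory: list of (feature_vector, label) training instances.\n    # Distance: weighted count of mismatching feature values.\n    def dist(a, b):\n        return sum(w for x, y, w in zip(a, b, weights) if x != y)\n    nearest = sorted(memory, key=lambda m: dist(test, m[0]))[:k]\n    # Majority vote over the k nearest memorized instances.\n    return Counter(label for _, label in nearest).most_common(1)[0][0]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "2.2"
},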
{
"text": "The TiMBL parameters used in the systems are the IB1 algorithm, the Jeffrey Divergence as feature metric, MVDM threshold at level 1, weighting using GainRatio, k=11, and weighting neighbors as function of their Inverse Linear Distance (for details we refer the reader to the TiMBL reference guide (Daelemans et al., 2004) ).",
"cite_spans": [
{
"start": 297,
"end": 321,
"text": "(Daelemans et al., 2004)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "2.2"
},
{
"text": "As for the features, we started by using the same feature set for both classifiers and then, after some experimentation, we decided to use slightly differ-ent feature sets for the two sub-tasks. Most of the features we designed are features that have become standard for the SRL task (Gildea and Jurafsky, 2002; Xue and Palmer, 2004; Carreras and M\u00e0rquez, 2004; Carreras and M\u00e0rquez, 2005 ). In our system, the features relate to the verb, the verb siblings, what we take to be the content word of the siblings, the clause, and the relation verb-arguments. Additionally, we added lexical features extracted from the verb lexicon provided for the task, and from Word-Net.",
"cite_spans": [
{
"start": 284,
"end": 311,
"text": "(Gildea and Jurafsky, 2002;",
"ref_id": "BIBREF10"
},
{
"start": 312,
"end": 333,
"text": "Xue and Palmer, 2004;",
"ref_id": "BIBREF14"
},
{
"start": 334,
"end": 361,
"text": "Carreras and M\u00e0rquez, 2004;",
"ref_id": "BIBREF6"
},
{
"start": 362,
"end": 388,
"text": "Carreras and M\u00e0rquez, 2005",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "2.2"
},
{
"text": "After experimenting with 323 features, we selected 98 for the SR task and 77 for the SC subclass. In order to select the features, we started with a basic system, the results of which were used as a baseline. Every new feature that was added to the basic system was evaluated in terms of average accuracy in 10fold cross-validation experiments; if it improved the performance on held-out data, it was added to the selection. One problem with this hill-climbing method is that the selection of features is determined by the order in which the features have been introduced. We also performed experiments applying the feature selection process reported in (Tjong Kim Sang et al., 2005) , a bi-directional hill climbing process. However, experiments with this advanced method did not produce a better selection of features.",
"cite_spans": [
{
"start": 665,
"end": 683,
"text": "Sang et al., 2005)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "2.2"
},
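{
"text": "The forward hill climbing can be sketched as follows, where evaluate_cv is a hypothetical function returning the average accuracy over the 10-fold cross-validation experiments:\n\ndef hill_climb(candidate_features, evaluate_cv):\n    # Greedily keep a feature only if it improves cross-validation\n    # accuracy; the outcome depends on the order in which the\n    # candidate features are tried.\n    selected, best = [], evaluate_cv([])\n    for f in candidate_features:\n        score = evaluate_cv(selected + [f])\n        if score > best:\n            selected, best = selected + [f], score\n    return selected",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "2.2"
},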
{
"text": "The features for the SR prediction subtask are the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "2.2"
},
{
"text": "\u2022 Features on the verb (6) . They are shared by all the instances that represent phrases belonging to the same clause:",
"cite_spans": [
{
"start": 23,
"end": 26,
"text": "(6)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "2.2"
},
{
"text": "VForm; VLemma; VCau: binary feature that indicate if the verb is in a causative construction with hacer, fer or if the main verb is causar; VPron, VImp, VPass: binary features that indicate if the verb is pronominal, impersonal, and in passive form respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "2.2"
},
{
"text": "\u2022 Features on the sibling in focus (12):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "2.2"
},
{
"text": "SibSynCat: syntactic category; SibSynFunc: syntactic function; SibPrep: preposition; SibLemW1, SibPOSW1, SibLemW2, SibPOSW2, SibLemW3, SibPOSW3: lemma and POS tag of the first, second and third words of the sibling; SibRelPos: position of the sibling in relation to the verb (PRE or POST); Sib+1RelPos: position of the sibling next to the current phrase in relation to the verb (PRE or POST); SibAbsPos: absolute position of the sibling in the clause.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "2.2"
},
{
"text": "\u2022 Features that describe the properties of the content word (CW) of the focus sibling (13): in the case of prepositional phrases the CW is the head of the first noun phrase; in cases of coordination, we only take the first element of the coordination.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "2.2"
},
{
"text": "CWord; CWLemma; CWPOS: we take only the first character of the POS tags provided; CWPOSType: the type of POS, second character of the POS tags provided; CWGender; CWne: binary feature that indicates if the CW is a named entity; CWtmp, CWloc: binary features that indicate if the CW is a temporal or a locative adverb respectively; CW+2POS, CW+3POS: POS of the second and third words after CW.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "2.2"
},
{
"text": "CWwnsc1, CWwnsc2, CWwnsc3: additionally, if the CW is a noun, we extract information from WordNet (Fellbaum, 1998) about the first, second, and third more frequent semantic classes of the CW in WordNet. We cannot decide on a single one because the corpus is not disambiguated. The semantic class corresponds to the lexicographer files in WN3.0. For nouns there are 25 file numbers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "2.2"
},
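{
"text": "The CWwnsc features can be approximated with NLTK's WordNet interface (an illustrative sketch; the paper does not state which tooling was used):\n\nfrom nltk.corpus import wordnet as wn\n\ndef top_semantic_classes(lemma, n=3):\n    # Synsets are ordered by frequency; the lexicographer file name\n    # encodes the semantic class (e.g. 'noun.time' is file number 28,\n    # 'noun.location' is file number 15).\n    return [s.lexname() for s in wn.synsets(lemma, pos=wn.NOUN)[:n]]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "2.2"
},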
{
"text": "CCtot: total number of siblings with function CC (circumstancial complement); SUJRelPos, CAGRelPos, CDRel-Pos, CIRelPos, ATRRelPos, CPREDRelPos, CREGRelPos: relative positions of siblings with functions SUJ, CAG, CD, CI,ATR, CPRED, and CREG in relation to verb (PRE or POST); SEsib: binary feature that indicates if the clause contains a verbal se; SIBtot: total number of verb siblings in the clause; SynFuncSib8, SynCatSib8, PrepSib8,W1Sib8, W2Sib8, W3Sib8, W4Sib8, SynFuncSib9, SynCatSib9, PrepSib9, W1Sib9, W2Sib9, W3Sib9, W4Sib9: syntactic function, syntactic category, preposition, and first to fourth word of siblings 8 and 9.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Features on the clause (24):",
"sec_num": null
},
{
"text": "\u2022 Features extracted from the lexicon of verbal frames (43) that the task organizers provided. We access the lexicon to check if it is possible for a verb to have a certain semantic role. We check it for all semantic role classes, except for ArgX-Ag, ArgX-Cau, ArgX-Pat, ArgX-Tem because they proved not to be informative. The features are binary. For the SC prediction task the features are similar, but not exactly the same. Both systems contain some features about all candidate arguments. We point out the differences:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Features on the clause (24):",
"sec_num": null
},
{
"text": "\u2022 Features that are in the SR system and that are not in the SC system: Verb form (VForm), verb lemma (VLemma), absolute position of the sibling in the clause (SibAbsPos), function of the sibling (SibSynFunc), preposition of the sibling (SibPrep), POS tag of the second and third words after CW (CW+2POS, CW+3POS), information about the WN classes of the CW (CWwnsc1, CWwnsc2, CWwnsc3) , feature about the CW being a named entity (CWne, SIBtot), syntactic function, syntactic category, preposition and first to fourth word of siblings 8 and 9 (SynFuncSib8, SynCatSib8, PrepSib8,W1Sib8, W2Sib8, W3Sib8, W4Sib8, SynFuncSib9, SynCatSib9, PrepSib9, W1Sib9, W2Sib9, W3Sib9, W4Sib9).",
"cite_spans": [],
"ref_spans": [
{
"start": 358,
"end": 385,
"text": "(CWwnsc1, CWwnsc2, CWwnsc3)",
"ref_id": null
}
],
"eq_spans": [],
"section": "\u2022 Features on the clause (24):",
"sec_num": null
},
{
"text": "\u2022 Features that are only in the SC system: AllCats: vector of the syntactic categories of the siblings in the order that they appear in the clause; AllFuncs: vector of the functions of the siblings in the order that they appear; AllFuncs-Bin vector with eight binary values that represent if a sibling with that function is present or not; Sib+1Prep, Sib+2Prep: prepositions of the two siblings after the verb.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Features on the clause (24):",
"sec_num": null
},
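{
"text": "A sketch of how the binary lexicon features mentioned above might be derived (the lexicon data structure here is an assumption, not the actual task format):\n\ndef lexicon_features(verb_lemma, lexicon, role_inventory):\n    # lexicon: dict mapping a verb lemma to the set of semantic roles\n    # licensed by its frames; one binary feature per role.\n    allowed = lexicon.get(verb_lemma, set())\n    return {'can_' + role: role in allowed for role in role_inventory}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Features on the clause (24):",
"sec_num": null
},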
{
"text": "As for the postprocessing phase, it consists of six simple rules to correct some basic errors in predicting some types of ArgM arguments. It only applies to the SR task. The rules are the following ones:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Postprocessing",
"sec_num": "2.3"
},
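{
"text": "Each rule translates directly into a conditional; a sketch of the first rule in hypothetical Python (feature names follow the definitions above; WNclasses stands for the set of CWwnsc values; WN class 28 corresponds to temporal nouns):\n\ndef apply_rule_1(pred, feats):\n    # Durative prepositions or a temporal WN class signal ArgM-TMP.\n    movable = ('ArgM-LOC', 'ArgM-MNR', 'ArgM-ADV')\n    durative = feats['SibPrep'] in ('durante', 'durant')\n    temporal_np = feats['SibSynCat'] == 'sn' and 28 in feats['WNclasses']\n    if pred in movable and (durative or temporal_np):\n        return 'ArgM-TMP'\n    return pred",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Postprocessing",
"sec_num": "2.3"
},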
{
"text": "We are aware of the fact that these are very simple rules and that more elaborate postprocessing techniques can be applied, like the ones used in (Tjong Kim Sang et al., 2005) in order to make sure that the same role was not predicted more than once in the same clause. Catalan; 'sp': Spanish).",
"cite_spans": [
{
"start": 157,
"end": 175,
"text": "Sang et al., 2005)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Postprocessing",
"sec_num": "2.3"
},
{
"text": "The overall official results of the system are shown in Table 2 . The SC system performs better (overall F 1 = 86.46) than the SR system (overall F 1 = 83.78). In global, the systems perform better for Catalan (overall F 1 = 85.24) than for Spanish (overall F 1 = 84.04), although the SC system performs better for Catalan (89.37 vs. 86.46) , and the SR system performs better for Spanish (84.14 vs 83.40).",
"cite_spans": [
{
"start": 323,
"end": 340,
"text": "(89.37 vs. 86.46)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 56,
"end": 63,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "Striking results are that the SR system gets significantly better results with the held-out test for Spanish, and that both of the complete SRL systems get significantly better results with the held-out test for Spanish. This might be due to differences in the process of gathering and annotation of the corpus. Table 3 shows detailed results on the Spanish CESS-ECE corpus for the SR task. Low scores are generally related to low frequency of the SR in the training corpus, and high scores are related to high frequency or to overt marking of the SR.",
"cite_spans": [],
"ref_spans": [
{
"start": 312,
"end": 319,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "We have presented two memory-based SRL systems that make use of full syntactic information and approach the tasks in three steps. Results show that rather simple individual systems can produce competitive results in both tasks, and that they are robust enough to be applied to two languages and to the held-out test sets provided. Improvements of the systems would consist in improving the focus selection step, and applying more elaborate techniques for feature selection and postprocessing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "4"
}
],
"back_matter": [
{
"text": "This research has been funded by the postdoctoral grant EX2005-1145 awarded by the Ministerio de Educaci\u00f3n y Ciencia of Spain to the project T\u00e9cnicas semiautom\u00e1ticas para el etiquetado de roles sem\u00e1nticos en corpus del espa\u00f1ol. We would like to thank Martin Reynaert, Caroline Sporleder, Antal van den Bosch, and the anonymous reviewers for their comments and suggestions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "ArgM-MNR or ArgM-ADV, and either {SibPrep = 'durante' or 'durant'}, or {SibSynCat = sn and one of the WN semantic classes = 28}",
"authors": [],
"year": null,
"venue": "If prediction = ArgM-LOC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "If prediction = ArgM-LOC, ArgM-MNR or ArgM-ADV, and either {SibPrep = 'durante' or 'durant'}, or {SibSynCat = sn and one of the WN semantic classes = 28}, then prediction = ArgM-TMP.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "If prediction = ArgM-LOC, ArgM-MNR or ArgM-ADV, and CWLemma is a temporal adverb, then prediction = ArgM-TMP",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "If prediction = ArgM-LOC, ArgM-MNR or ArgM-ADV, and CWLemma is a temporal adverb, then prediction = ArgM- TMP.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "If prediction = ArgM-TMP and one of the WN classes = 15, then prediction = ArgM-LOC",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "If prediction = ArgM-TMP and one of the WN classes = 15, then prediction = ArgM-LOC.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "If prediction = ArgM-TMP, ArgM-MNR or ArgM-ADV, and CWLemma = locative adverb, then prediction = ArgM-LOC",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "If prediction = ArgM-TMP, ArgM-MNR or ArgM-ADV, and CWLemma = locative adverb, then prediction = ArgM-LOC.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "If prediction = ArgM-TMP or ArgM-ADV, and CWwnsc1 = 15, and SibPrep = 'en' or 'desde' or 'hacia' or 'a' or 'des de' or",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "If prediction = ArgM-TMP or ArgM-ADV, and CWwnsc1 = 15, and SibPrep = 'en' or 'desde' or 'hacia' or 'a' or 'des de' or 'cap a', then prediction = ArgM-LOC.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "If prediction = ArgM-ADV and CWLemma = causal conjunction, then prediction = ArgM-CAU",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "If prediction = ArgM-ADV and CWLemma = causal con- junction, then prediction = ArgM-CAU.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Introduction to the CoNLL-2004 shared task: Semantic role labeling",
"authors": [
{
"first": "X",
"middle": [],
"last": "References",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Carreras",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ll",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of CoNLL-2004",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "References X. Carreras and Ll. M\u00e0rquez. 2004. Introduction to the CoNLL-2004 shared task: Semantic role labeling. In Pro- ceedings of CoNLL-2004, Boston MA, USA.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Introduction to the CoNLL-2005 shared task: Semantic role labeling",
"authors": [
{
"first": "X",
"middle": [],
"last": "Carreras",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ll",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of CoNLL-2005",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. Carreras and Ll. M\u00e0rquez. 2005. Introduction to the CoNLL-2005 shared task: Semantic role labeling. In Pro- ceedings of CoNLL-2005, Ann Arbor, Michigan, June.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "TiMBL: Tilburg memory based learner, version 5.1, reference guide",
"authors": [
{
"first": "W",
"middle": [],
"last": "Daelemans",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Zavrel",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Van Der Sloot",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Van Den",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bosch",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Daelemans, J. Zavrel, K. Van der Sloot, and A. Van den Bosch. 2004. TiMBL: Tilburg memory based learner, ver- sion 5.1, reference guide. Technical Report Series 04-02, ILK, Tilburg, The Netherlands.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "WordNet: An Electronic Database",
"authors": [],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christiane Fellbaum, editor. 1998. WordNet: An Electronic Database. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Automatic labeling of semantic roles",
"authors": [
{
"first": "D",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2002,
"venue": "Computational Linguistics",
"volume": "28",
"issue": "3",
"pages": "245--288",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Gildea and D. Jurafsky. 2002. Automatic labeling of seman- tic roles. Computational Linguistics, 28(3):245-288.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Semantic role labeling as sequential tagging",
"authors": [
{
"first": "",
"middle": [],
"last": "Ll",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Comas",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Gim\u00e9nez",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Catal\u00e0",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of CoNLL-2005",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "LL. M\u00e0rquez, P. Comas, J. Gim\u00e9nez, and N. Catal\u00e0. 2005. Se- mantic role labeling as sequential tagging. In Proceedings of CoNLL-2005, Ann Arbor, Michigan.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "SemEval-2007 Task 09: Multilevel semantic annotation of catalan and spanish",
"authors": [
{
"first": "",
"middle": [],
"last": "Ll",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "M\u00e0rquez",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Mart\u00ed",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Taul\u00e9",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Villarejo",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of SemEval-2007, the 4th Workshop on Semantic Evaluations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ll. M\u00e0rquez, M.A. Mart\u00ed, M. Taul\u00e9, and L. Villarejo. 2007. SemEval-2007 Task 09: Multilevel semantic annotation of catalan and spanish. In Proceedings of SemEval-2007, the 4th Workshop on Semantic Evaluations, Prague, Czech Re- public.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Applying spelling error correction techniques for improving semantic role labelling",
"authors": [
{
"first": "E. Tjong Kim",
"middle": [],
"last": "Sang",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Canisius",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Van Den",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Bosch",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bogers",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of CoNLL-2005",
"volume": "",
"issue": "",
"pages": "229--232",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Tjong Kim Sang, S. Canisius, A. van den Bosch, and T. Bogers. 2005. Applying spelling error correction tech- niques for improving semantic role labelling. In Proceed- ings of CoNLL-2005, pages 229-232, Ann Arbor, Michigan.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Calibrating features for semantic role labeling",
"authors": [
{
"first": "N",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of 2004 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Xue and M. Palmer. 2004. Calibrating features for semantic role labeling. In Proceedings of 2004 Conference on Empir- ical Methods in Natural Language Processing, Barcelona, Spain.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"num": null,
"html": null,
"content": "<table><tr><td/><td colspan=\"2\">Training 3LB</td><td colspan=\"2\">Test 3LB</td><td colspan=\"2\">Test CESS</td></tr><tr><td/><td>Ca.</td><td>Sp.</td><td>Ca.</td><td>Sp.</td><td>Ca.</td><td>Sp.</td></tr><tr><td colspan=\"7\">SR 23202 24668 1335 1451 1241 1186</td></tr><tr><td>SC</td><td>8932</td><td>9707</td><td>510</td><td>615</td><td>463</td><td>465</td></tr></table>",
"text": "shows the number of training and test instances for each subtask.",
"type_str": "table"
},
"TABREF1": {
"num": null,
"html": null,
"content": "<table><tr><td>Number of instances per corpus for each task ('Ca'</td></tr><tr><td>stands for Catalan, 'Sp' stands for Spanish).</td></tr></table>",
"text": "",
"type_str": "table"
},
"TABREF3": {
"num": null,
"html": null,
"content": "<table><tr><td>Overall results in the SR (above), SC (middle),</td></tr><tr><td>and general SRL tasks ('Perf.Props': perfect propositions; 'ca':</td></tr></table>",
"text": "",
"type_str": "table"
},
"TABREF5": {
"num": null,
"html": null,
"content": "<table/>",
"text": "Detailed results on the Spanish CESS-ECE test corpus for the SR subtask. F: frequency of the semantic roles in the training corpus, without counting V.",
"type_str": "table"
}
}
}
}