{
"paper_id": "S07-1045",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:23:17.720981Z"
},
"title": "LCC-SRN: LCC's SRN System for SemEval 2007 Task 4",
"authors": [
{
"first": "Adriana",
"middle": [],
"last": "Badulescu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Language Computer Corporation",
"location": {
"addrLine": "1701 N Collins Blvd",
"postCode": "2000, 75080",
"settlement": "Richardson",
"region": "TX"
}
},
"email": ""
},
{
"first": "Munirathnam",
"middle": [],
"last": "Srikanth",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Language Computer Corporation",
"location": {
"addrLine": "1701 N Collins Blvd #",
"postCode": "2000, 75080",
"settlement": "Richardson"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This document provides a description of the Language Computer Corporation (LCC) SRN System that participated in the SemEval 2007 Semantic Relation between Nominals task. The system combines the outputs of different binary and multi-class classifiers built using machine learning algorithms like Decision Trees, Semantic Scattering, Iterative Semantic Specialization, and Support Vector Machines.",
"pdf_parse": {
"paper_id": "S07-1045",
"_pdf_hash": "",
"abstract": [
{
"text": "This document provides a description of the Language Computer Corporation (LCC) SRN System that participated in the SemEval 2007 Semantic Relation between Nominals task. The system combines the outputs of different binary and multi-class classifiers built using machine learning algorithms like Decision Trees, Semantic Scattering, Iterative Semantic Specialization, and Support Vector Machines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The Semantic Relations between Nominals task from SemEval 2007 focuses on identifying the semantic relations that hold between two arguments manually annotated with word senses (Girju et al., 2007).",
"cite_spans": [
{
"start": 177,
"end": 197,
"text": "(Girju et al., 2007)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The previous work in identifying semantic relations between nominals focuses on finding one or more relations in text for specific syntactic patterns or constructions (like genitives and noun compounds) using semi-automated and automated systems. An overview of some of these methods can be found in (Badulescu, 2004). The LCC SRN system, developed during the SRN training period, was, for us, the beginning of a different approach to semantic relation detection: detecting semantic relations in text without using a syntactic pattern. Our existing work on semantic relation detection was on detecting semantic relations in text (one or more at a time) at different levels in the sentence using different syntactic patterns like genitives, noun compounds, verb-arguments, etc.",
"cite_spans": [
{
"start": 300,
"end": 317,
"text": "(Badulescu, 2004)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For SRN, we built a new system that combines the output of the pattern-dependent classifiers with the new pattern-independent classifiers for better results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of this paper is organized as follows: Section 2 describes our system, Section 3 details the experimental results, and Section 4 summarizes the conclusions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The system consists of two types of classifiers: classifiers that do not use the syntactic parse tree and that were built specifically for the SemEval 2007 Task 4 (SRN), and classifiers that use specific syntactic patterns to determine the semantic relations and that were previously developed at LCC and then adapted to the SRN task (SRNPAT).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System description",
"sec_num": "2"
},
{
"text": "The classifiers for each type were built from annotated examples using supervised machine learning algorithms like Decision Trees (DT)1, Support Vector Machines (SVM)2, Semantic Scattering (SS) (Moldovan and Badulescu, 2005), Iterative Semantic Specialization (ISS) (Girju, Badulescu, and Moldovan, 2006), Na\u00efve Bayes (NB)3, and Maximum Entropy (ME)4.",
"cite_spans": [
{
"start": 194,
"end": 224,
"text": "(Moldovan and Badulescu, 2005)",
"ref_id": "BIBREF1"
},
{
"start": 266,
"end": 304,
"text": "(Girju, Badulescu, and Moldovan, 2006)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System description",
"sec_num": "2"
},
{
"text": "The outputs of different classifiers (built using different types of machine learning algorithms) were combined and ranked using predefined rules. Figure 1 shows the architecture of our SRN system. Figure 1. The architecture of our SRN system.",
"cite_spans": [],
"ref_spans": [
{
"start": 147,
"end": 155,
"text": "Figure 1",
"ref_id": null
},
{
"start": 198,
"end": 206,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "System description",
"sec_num": "2"
},
{
"text": "[Figure 1 diagram labels: Sentences, Annotations -> Pattern Matching [Pattern, NPArg1, NPArg2] / Argument Detection [Arg1Arg2Pattern, Arg1, Arg2] -> Feature Extraction (FeatureSetsSRN [Arg1Arg2Pattern, Arg1, Arg2]; FeatureSetsSRNPAT [Pattern, NPArg1, NPArg2]) -> Relation Selection [Arg1, Arg2, Relation, Score]]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System description",
"sec_num": "2"
},
{
"text": "The sentences were processed using an in-house text tokenizer, Brill's part-of-speech tagger, an in-house WordNet-based concept detector, an in-house Named Entity Recognizer, and an in-house syntactic parser.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Preprocessing",
"sec_num": "2.1"
},
{
"text": "Then, the syntactic and semantic information obtained using these tools (concepts, parts of speech, named entities, etc.) or obtained from the sense keys for the arguments as provided by the Task 4 organizers (e.g. word senses, lemmas, etc.) was mapped onto the syntactic trees. If an argument corresponds to more than one tree node, the annotation was mapped to the phrase containing the two nodes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Preprocessing",
"sec_num": "2.1"
},
{
"text": "The core of our system is the learning and classification module.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning and Classification Methods",
"sec_num": "2.2"
},
{
"text": "We used two types of methods: pattern-dependent methods, which use the syntactic parse trees for extracting and assigning a label to the arguments, and pattern-independent methods, which create classifiers from all the examples regardless of the pattern in the tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning and Classification Methods",
"sec_num": "2.2"
},
{
"text": "Considering the limited number of examples for each pattern, we developed pattern-independent methods for classifying the semantic relations using the provided argument annotations and the context from the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pattern-independent Methods (SRN)",
"sec_num": "2.2.1"
},
{
"text": "We built two types of classifiers: binary methods that focus on building a classifier for a specific relation (SRNREL) and multi-class methods that build classifiers for all the SRN relations (SRN). Table 1 presents the accuracy of the classifiers built using different machine learning algorithms. The classifiers were built using lexical, semantic, and syntactic features of the arguments, their phrases, their clauses, their common phrase/clause, and their modifier or head phrase. The system uses WordNet, an in-house Named Entity Recognizer, and an in-house Syntactic Parser for determining the values of some of these features. Table 2 presents the list of features used by the SRN classifiers.",
"cite_spans": [],
"ref_spans": [
{
"start": 199,
"end": 206,
"text": "Table 1",
"ref_id": null
},
{
"start": 634,
"end": 641,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Pattern-independent Methods (SRN)",
"sec_num": "2.2.1"
},
{
"text": "Argument's lexical, semantic, and syntactic features: the surface form, the label (POS tag or phrase label), the named entity (human, group, location, etc.), the WordNet hierarchy (entity, group, abstraction, etc.), the Semantic Scattering class (e.g. object, substance, etc.), the grammatical role (subject or object of the clause), the syntactic parse structure, the POS pattern (the sequence of POS tags of the words from the argument), and the phrase pattern (the sequence of labels of the phrases and words from the argument); Argument's phrase features: surface form, label, grammatical role, named entity, POS pattern, and phrase pattern; Argument's modifier/head features: the label, surface form, named entity, and WordNet hierarchy for the first modifier, post-modifier, pre-modifier, and head; Arguments' common tree node features: label, named entity, grammatical role, POS pattern, phrase pattern, the tree path between arguments, and their order in the tree; Arguments' clause features: label, verb, voice, POS pattern, and phrase pattern.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pattern-independent Methods (SRN)",
"sec_num": "2.2.1"
},
{
"text": "The second type of methods we used was for particular patterns frequent in the training corpus. Table 3 shows the list of most frequent patterns in the training corpus. To obtain general patterns and to cover the arguments that correspond to more than one node in a tree, we considered as argument the noun phrase that contains the nominal instead of the node for the nominal. Manila radio station DZMM quoted survivors as saying that the <e1>fire</e1> started with an <e2>explosion</e2> in the cargo hold and spread across the ship within minutes. Table 3. The most frequent patterns found in the training corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 96,
"end": 103,
"text": "Table 3",
"ref_id": null
},
{
"start": 549,
"end": 556,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Pattern-dependent Methods (SRNPAT)",
"sec_num": "2.2.2"
},
{
"text": "For the pattern-dependent methods we adapted some of our existing binary and multi-class classifiers to work with the SRN relations. For the SRN system we used only one binary classifier built for the Part-Whole relation (relation 6) using the ISS learning algorithm and trained/tested on the examples used in (Girju, Badulescu, and Moldovan, 2006) and different multi-class classifiers for the first 4 patterns from Table 3 built using DT, SVM, SS, and NB learning algorithms trained on a corpus annotated with 40 semantic relations (extracted from Wall Street Journal articles from the TreeBank collection and LATimes articles from TREC 9 collection) that includes the 7 SRN relations (or equivalents). (Badulescu, 2004) gives more details on this list of relations (definitions, examples, distribution on corpus, etc.). Table 4 shows the accuracy of these classifiers on other WSJ and LAT articles for the 40 LCC relations and the Part-Whole relation, respectively, for the most frequent patterns from the SRN corpus (Table 3). Table 4. The accuracy of the SRNPAT classifiers for the list of 40 LCC relations and the Part-Whole Relation.",
"cite_spans": [
{
"start": 310,
"end": 348,
"text": "(Girju, Badulescu, and Moldovan, 2006)",
"ref_id": "BIBREF2"
},
{
"start": 705,
"end": 722,
"text": "(Badulescu, 2004)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 417,
"end": 424,
"text": "Table 3",
"ref_id": null
},
{
"start": 823,
"end": 830,
"text": "Table 4",
"ref_id": null
},
{
"start": 1020,
"end": 1029,
"text": "(Table 3)",
"ref_id": null
},
{
"start": 1031,
"end": 1038,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Pattern-dependent Methods (SRNPAT)",
"sec_num": "2.2.2"
},
{
"text": "Any of the SRN or SRNPAT classifiers can return a relation for a pair of arguments. The best relation is selected by weighting the candidate relations using the following predefined rules:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Selection",
"sec_num": "2.3"
},
{
"text": "The relations returned by the SRN classifiers carry more weight than the ones returned by the SRNPAT classifiers because the SRN classifiers were trained on the task's annotated examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Selection",
"sec_num": "2.3"
},
{
"text": "The relations returned by the binary classifiers carry more weight than the ones returned by the multi-class classifiers because they focus on one relation and are therefore more precise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Selection",
"sec_num": "2.3"
},
{
"text": "During the competition we performed several experiments to determine the combination of classifiers that leads to the best results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments on Testing",
"sec_num": "3.1"
},
{
"text": "The organizers provided 140 examples for each of the 7 relations. For testing the classifiers, we trained the system on the first 110 examples and tested it on the last 30.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments on Testing",
"sec_num": "3.1"
},
{
"text": "We performed different sets of experiments. Experiments with one type of classifier: these experiments showed that ME has the best performance (55.10), 10.05 points more than DT and 8.05 points more than SV. ME also got the highest score for Cause-Effect, while DT obtained the best score for Product-Producer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments on Testing",
"sec_num": "3.1"
},
{
"text": "Experiments with multiple classifiers: these experiments showed that DT+SV+SS+ISS has the best score (66.72), followed by DT+SS+ISS with 55.66. Also, by adding the SS and ISS classifiers, the DT score increased by 10.61, the SV score by 5.91, and the DT+SV score by 20.57.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments on Testing",
"sec_num": "3.1"
},
{
"text": "Experiments with types of methods: these experiments showed that the SRN methods (with a score of 44.31) are better than the SRNPAT methods (with a score of 41.15), which was expected since the SRN classifiers were trained on the provided examples. Table 5 shows the results of our SRN system when using specific classifiers or a combination of classifiers. Time did not permit us to do any experiments with the ME and NB classifiers.",
"cite_spans": [],
"ref_spans": [
{
"start": 249,
"end": 256,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Experiments on Testing",
"sec_num": "3.1"
},
{
"text": "We submitted the DT+SS+ISS version because of its closeness to the normal distribution rather than DT+SV+SS+ISS, which had a better F-measure but was closer to All-True. The evaluation results showed that the testing examples we used were representative and that DT+SV+SS+ISS produces better results. Table 6 shows the results obtained by our system on the evaluation corpus for the B4 case (using WordNet but not the query, and all the training examples). Table 6. The results of our system on the evaluation corpus. Table 7 shows a comparison of our results with the following baseline systems: All-True, a system that always returns true, Majority, a system that always returns the majority value from the training, and Prob-Match, a system that randomly generates a value. We obtained higher precision and accuracy than the All-True and the Prob-Match systems. However, we obtained a lower recall and therefore a lower F-measure.",
"cite_spans": [],
"ref_spans": [
{
"start": 301,
"end": 308,
"text": "Table 6",
"ref_id": null
},
{
"start": 457,
"end": 464,
"text": "Table 6",
"ref_id": null
},
{
"start": 518,
"end": 525,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Classifier Combination",
"sec_num": null
},
{
"text": "The results are promising. However, there is still room for improvement. The system was developed in a limited time, and therefore it could have benefited from more features, feature selection, more experiments, a more complex relation selection scheme (using learning), more patterns, and more types of machine learning algorithms (especially unsupervised ones).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "3.3"
},
{
"text": "We presented a system for classifying the semantic relations between nominals that combines the results of different methods (pattern-dependent or pattern-independent) and machine learning algorithms (decision trees, support vector machines, semantic scattering, maximum entropy, na\u00efve Bayes, etc.). The classifiers use lexical, semantic, and syntactic features and external resources like WordNet and an in-house Named Entity dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "1 C5.0, http://www.rulequest.com/see5-info.html 2 LIBSVM, www.csie.ntu.edu.tw/~cjlin/libsvm/ 3 jBNC, http://jbnc.sourceforge.net 4 http://homepages.inf.ed.ac.uk/s0450736/maxent_toolkit.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Classification of Semantic Relations between Nouns. PhD Dissertation. University of Texas at Dallas",
"authors": [
{
"first": "Adriana",
"middle": [],
"last": "Badulescu",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adriana Badulescu. 2004. Classification of Semantic Relations between Nouns. PhD Dissertation. University of Texas at Dallas.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A Semantic Scattering Model for the Automatic Interpretation of Genitives",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Moldovan",
"suffix": ""
},
{
"first": "Adriana",
"middle": [],
"last": "Badulescu",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of HLT/EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Moldovan and Adriana Badulescu. 2005. A Semantic Scattering Model for the Automatic Interpretation of Genitives. In Proceedings of HLT/EMNLP 2005.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatic Discovery of Part-Whole Relations",
"authors": [
{
"first": "Roxana",
"middle": [],
"last": "Girju",
"suffix": ""
},
{
"first": "Adriana",
"middle": [],
"last": "Badulescu",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Moldovan",
"suffix": ""
}
],
"year": 2006,
"venue": "Computational Linguistics",
"volume": "32",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roxana Girju, Adriana Badulescu, and Dan Moldovan. 2006. Automatic Discovery of Part-Whole Relations. Computational Linguistics, 32:1.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Classification of Semantic Relations between Nominals: Description of Task 4 in SemEval-1",
"authors": [
{
"first": "Roxana",
"middle": [],
"last": "Girju",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of ACL-2007",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roxana Girju et al. 2007. Classification of Semantic Relations between Nominals: Description of Task 4 in SemEval-1. In Proceedings of ACL-2007, SemEval-1 Workshop.",
"links": null
}
},
"ref_entries": {
"TABREF3": {
"text": "The list of features used for the SRN classifiers.",
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF6": {
"text": "The results of some of our experiments with the different classifiers on the testing corpus.",
"content": "<table><tr><td/><td>F-measure</td></tr><tr><td>DT</td><td>45.05</td></tr><tr><td>SV</td><td>47.05</td></tr><tr><td>ME</td><td>55.10</td></tr><tr><td>DT+SV</td><td>46.15</td></tr><tr><td>DT+SS+ISS</td><td>55.66</td></tr><tr><td>SV+SS+ISS</td><td>52.96</td></tr><tr><td>DT+SV+SS+ISS</td><td>66.72</td></tr><tr><td>SRN</td><td>44.31</td></tr><tr><td>SRNPAT</td><td>41.15</td></tr></table>",
"type_str": "table",
"html": null,
"num": null
}
}
}
}