{
"paper_id": "U08-1014",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:11:47.556772Z"
},
"title": "Investigating Features for Classifying Noun Relations",
"authors": [
{
"first": "Dominick",
"middle": [],
"last": "Ng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sydney",
"location": {
"postCode": "2006",
"region": "NSW",
"country": "Australia"
}
},
"email": ""
},
{
"first": "David",
"middle": [
"J"
],
"last": "Kedziora",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sydney",
"location": {
"postCode": "2006",
"region": "NSW",
"country": "Australia"
}
},
"email": ""
},
{
"first": "Terry",
"middle": [
"T W"
],
"last": "Miu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sydney",
"location": {
"postCode": "2006",
"region": "NSW",
"country": "Australia"
}
},
"email": ""
},
{
"first": "James",
"middle": [
"R"
],
"last": "Curran",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sydney",
"location": {
"postCode": "2006",
"region": "NSW",
"country": "Australia"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Automated recognition of the semantic relationship between two nouns in a sentence is useful for a wide variety of tasks in NLP. Previous approaches have used kernel methods with semantic and lexical evidence for classification. We present a system based on a maximum entropy classifier which also considers both the grammatical dependencies in a sentence and significance information based on the Google Web 1T dataset. We report results comparable with state of the art performance using limited data based on the SemEval 2007 shared task on nominal classification.",
"pdf_parse": {
"paper_id": "U08-1014",
"_pdf_hash": "",
"abstract": [
{
"text": "Automated recognition of the semantic relationship between two nouns in a sentence is useful for a wide variety of tasks in NLP. Previous approaches have used kernel methods with semantic and lexical evidence for classification. We present a system based on a maximum entropy classifier which also considers both the grammatical dependencies in a sentence and significance information based on the Google Web 1T dataset. We report results comparable with state of the art performance using limited data based on the SemEval 2007 shared task on nominal classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Analysis of the semantics of natural language is an area of research undergoing renewed interest, driven by the many applications which can directly benefit from such information, including question answering and text summarization. In particular, recognition of the relationship between words in a sentence is useful for clarifying ambiguities in tools that attempt to interpret and respond to natural language. Recent developments in the semantic web -a vision where information is comprehensible to machines as well as people -have also accelerated the need for tools which can automatically analyse nominal relationships.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We approach the task of relation classification using the relation set and training data provided by the SemEval 2007 shared task on nominal classification . The problem as defined by the task description is to discover the underlying relationship between the concepts expressed by two nominals, excluding named entities. The relationship is informed by the context of an English sentence, e.g. it is clear that the relationship between door and car differs in the fragments the car door and the car scraped the door of the garage. Resolving the relation between nominals in cases where ambiguities exist is useful for generalisation in NLP systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We created a system based upon a maximum entropy (ME) classifier developed by Clark and Curran (2004) . A separate binary classifier for each of the relations was trained over the corresponding training data, and the additional features used for each relation were selected by performing a seven-fold cross-validation over all combinations of features developed for this task. We report an overall accuracy of 71.9% macroaveraged over the seven relations and an overall F-measure of 70.7%, comparable with state of the art performance.",
"cite_spans": [
{
"start": 78,
"end": 101,
"text": "Clark and Curran (2004)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The task of relation classification is complicated by the lack of consensus on relation sets and algorithms. Previous research has studied areas as diverse as noun compound classification in the medical domain (Rosario and Hearst, 2001) , gene relations (Stephens et al., 2001) , verb-verb semantic relations (Chklovski and Pantel, 2004) , and noun-modifier relations (Nastase and Szpakowicz, 2003) . Many independent class hierarchies have been developed to suit each application domain, and it is difficult to transfer one of these hierarchies to another domain. The organisers of the SemEval task defined the classification problem in terms of seven semantic relations commonly mentioned by researchers, and a list of these along with some training examples is provided in Table 1 . An annotated dataset of 140 training examples and at least 70 test sentences was created for each relation by searching the web using wild-card search patterns satisfying the constraints of each relation, e.g. * holds * for the Content-Container relation. This method was used in order to provide near miss negative examples .",
"cite_spans": [
{
"start": 210,
"end": 236,
"text": "(Rosario and Hearst, 2001)",
"ref_id": "BIBREF16"
},
{
"start": 254,
"end": 277,
"text": "(Stephens et al., 2001)",
"ref_id": "BIBREF18"
},
{
"start": 309,
"end": 337,
"text": "(Chklovski and Pantel, 2004)",
"ref_id": "BIBREF4"
},
{
"start": 368,
"end": 398,
"text": "(Nastase and Szpakowicz, 2003)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 776,
"end": 783,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Fifteen systems split into four categories were submitted for the SemEval 2007 workshop. Almost all of the systems utilised extended feature sets that built upon the data provided by the task; most systems also implemented some form of statistical or kernel approach to develop binary classifiers for the relations (see Bedmar et al. (2007) , Hendrickx et al. (2007) , and Nulty (2007) for some previous approaches to the classification task explored in this paper). The best performing systems achieved F-measures in the range of 71.5% -72.4% by utilising the provided WordNet sense keys and adding more training examples to those supplied by the task; however, the majority of systems did not augment the provided data like this and reported F-measures and accuracies below 67.2% .",
"cite_spans": [
{
"start": 320,
"end": 340,
"text": "Bedmar et al. (2007)",
"ref_id": "BIBREF1"
},
{
"start": 343,
"end": 366,
"text": "Hendrickx et al. (2007)",
"ref_id": "BIBREF11"
},
{
"start": 373,
"end": 385,
"text": "Nulty (2007)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Each training example in the annotated dataset consists of a sentence, two nominals whose relationship is to be evaluated, WordNet 3.0 sense keys for each of the nominals, the wild-card query used to obtain the example, and comments on the choices made during the creation of the example. The evaluation drew distinction between systems that did and did not use the supplied WordNet and wild-card query information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "The entropy of a classifier is a measure of how predictable that classifier's decisions are. The lower the entropy, the more biased a classifier is, i.e. a relation classifier has zero entropy if it always assigns the same relation to any input. The theory underpinning ME modelling is that the distribution chosen to fit the specified constraints will eliminate biases by being as uniform as possible. Such models are useful in NLP applications because they can effectively incorporate diverse and overlapping features whilst also addressing statistical dependencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Entropy Modelling",
"sec_num": "3"
},
{
"text": "We used the ME implementation described in Clark and Curran (2004) . The ME models used have the following form:",
"cite_spans": [
{
"start": 43,
"end": 66,
"text": "Clark and Curran (2004)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Entropy Modelling",
"sec_num": "3"
},
{
"text": "p(y|x, \u03bb) = 1 Z(x|\u03bb) exp n k=1 \u03bb k f k (x, y)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Entropy Modelling",
"sec_num": "3"
},
{
"text": "where Z(x|\u03bb) is the normalisation function and the f k are features with associated weights \u03bb k . The system uses Gaussian smoothing on the parameters of the model. The features are binaryvalued functions which pair a relation y with various observations x from the context provided, e.g.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Entropy Modelling",
"sec_num": "3"
},
{
"text": "f j (x, y) = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 1 if word(x) = damage & y = Cause-Effect-True 0 otherwise",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Entropy Modelling",
"sec_num": "3"
},
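{
"text": "To make the scoring concrete, the following minimal sketch (our illustration, not the Clark and Curran implementation; the helper names are assumptions) computes p(y|x) by exponentiating the weighted sum of the active binary features and normalising over the candidate labels:

import math

def me_probability(active_features, weights, labels):
    # active_features(y) -> ids of the binary features firing for this context and label y
    # weights: feature id -> lambda_k
    scores = {y: math.exp(sum(weights.get(f, 0.0) for f in active_features(y)))
              for y in labels}
    z = sum(scores.values())  # the normalisation function Z(x|lambda)
    return {y: s / z for y, s in scores.items()}

# Example: the feature above fires when 'damage' occurs with Cause-Effect-True.
def features_for(words):
    def active(y):
        feats = set()
        if 'damage' in words and y == 'Cause-Effect-True':
            feats.add('word=damage&y=Cause-Effect-True')
        return feats
    return active",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Entropy Modelling",
"sec_num": "3"
},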
{
"text": "We focused our efforts on finding features which aggressively generalise the initial material over as broad a search space as possible. We investigated lexical, semantic, and statistical features sourced from a number of corpora as well as morphosyntactic features from a grammatical parse of each sentence. Features were evaluated using a seven-fold cross-validation performed over the training data for each relation over every possible combination of features -a process made possible by the small size of the corpora and the relatively small number of features experimented with. The speed of the training process for the ME implementation was also a factor in enabling the exhaustive search.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features and Methodology",
"sec_num": null
},
{
"text": "The features which resulted in the best performance for each relation in the cross-validation were then used to train seven binary classifiers for the final run over the supplied test data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features and Methodology",
"sec_num": null
},
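{
"text": "The exhaustive search can be sketched as follows (our illustration; train and evaluate stand in for fitting and scoring the ME classifier and are assumptions):

from itertools import combinations

def best_feature_set(feature_names, examples, train, evaluate, folds=7):
    # Score every non-empty feature subset by seven-fold cross-validation
    # and keep the best-scoring subset.
    fold_size = len(examples) // folds
    best_score, best_subset = -1.0, None
    for r in range(1, len(feature_names) + 1):
        for subset in combinations(feature_names, r):
            scores = []
            for i in range(folds):
                held_out = examples[i * fold_size:(i + 1) * fold_size]
                rest = examples[:i * fold_size] + examples[(i + 1) * fold_size:]
                scores.append(evaluate(train(rest, subset), held_out))
            mean = sum(scores) / folds
            if mean > best_score:
                best_score, best_subset = mean, subset
    return best_subset, best_score",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features and Methodology",
"sec_num": null
},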
{
"text": "Prior to feature generation our system extracted from the supplied data the sentences and marked nominals (termed as e 1 and e 2 ). While WordNet was used internally as features by our system, we did not use the specific sense keys provided by the data to query WordNet -we relied upon a more general word lookup that extracted all of the possible senses for the nominals. We also did not make use of the provided query, based on its ineffectiveness in previous studies .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "4.1"
},
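{
"text": "Assuming the SemEval convention of <e1>...</e1> and <e2>...</e2> tags around the nominals (the markup itself is not reproduced in this paper), the extraction step can be sketched with a regular expression:

import re

E_TAG = re.compile(r'<e([12])>(.*?)</e\1>')

def extract_nominals(sentence):
    # Return the untagged sentence together with the e1 and e2 strings.
    nominals = {m.group(1): m.group(2) for m in E_TAG.finditer(sentence)}
    plain = E_TAG.sub(lambda m: m.group(2), sentence)
    return plain, nominals.get('1'), nominals.get('2')

# extract_nominals('The <e1>car</e1> scraped the <e2>door</e2> of the garage.')
# -> ('The car scraped the door of the garage.', 'car', 'door')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "4.1"
},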
{
"text": "Close examination of the training data also revealed some negative training examples that were identified in comments as belonging to a different relation set. These examples were collected and added to the appropriate training file to further extend the original dataset. However, no new examples were created: only examples which had already been identified as belonging to a particular relation were added in this process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "4.1"
},
{
"text": "Lexical features are useful for capturing contextual information about the training example, and they are the most obvious features to incorporate. However, due to the limited amount of training data available for this task, lexical features encounter sparseness problems as there are few rel-evant collisions between words. We utilised the following lexical features:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Features",
"sec_num": "4.2"
},
{
"text": "\u2022 sen: The words of the sentence itself. This was used as the baseline for feature testing;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Features",
"sec_num": "4.2"
},
{
"text": "\u2022 red: A reduced version of the sentence with all words of length 2 or less removed;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Features",
"sec_num": "4.2"
},
{
"text": "\u2022 heads: The head words of the nominals in question, e.g. for [e 1 tumor shrinkage] after [e 2 radiation therapy] the relation actually holds between shrinkage and therapy. This feature is very specific, but allows for nominals which are commonly linked to certain relations to be identified;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Features",
"sec_num": "4.2"
},
{
"text": "\u2022 dir: The required direction of the relation (i.e. from e 1 to e 2 or vice versa) that is encoded in the data -useful as some relations are more likely to exist in a particular direction, e.g. the Part-Whole relation is most commonly found encoded in the direction [e 1 Part]-[e 2 Whole] (Beamer et al., 2007) .",
"cite_spans": [
{
"start": 289,
"end": 310,
"text": "(Beamer et al., 2007)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Features",
"sec_num": "4.2"
},
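{
"text": "A minimal sketch of how these four features might be emitted (our illustration; the feature-name prefixes are assumptions, not the system's actual encoding):

def lexical_features(words, e1_head, e2_head, direction):
    feats = ['sen=' + w for w in words]                 # baseline sentence words
    feats += ['red=' + w for w in words if len(w) > 2]  # reduced sentence
    feats += ['head1=' + e1_head, 'head2=' + e2_head]   # nominal head words
    feats.append('dir=' + direction)                    # e.g. 'e1-to-e2'
    return feats",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Features",
"sec_num": "4.2"
},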
{
"text": "WordNet (Fellbaum, 1998) is the most heavily used database of lexical semantics in NLP. Created at Princeton University, WordNet is based around groups of synonyms (synsets) and encodes a vast array of semantic properties and relationships between these synsets. The coverage of WordNet means that it is very useful for generalising features over a small corpus of data, and many previous approaches to classification tasks have utilised WordNet in some way -including most of the systems from the SemEval proceedings . However, unlike most of these systems, we did not use the supplied Word-Net sense keys as we believe that it is unrealistic to have such precise data in real-world applications. As a consequence, all of our WordNet features were extracted using the indicated nominals as query points.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WordNet Features",
"sec_num": "4.3"
},
{
"text": "\u2022 syn: Synonyms of the nominals. We extracted from WordNet all synonyms in all senses for each of the marked nouns;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WordNet Features",
"sec_num": "4.3"
},
{
"text": "\u2022 hyp1, hyp2: Hypernyms of the nominals, i.e. more general concepts which encompass the nominals. These features allow us to broaden the coverage given by the nominals over less specific entities. We exhaustively mined all hypernyms of the marked nouns to a height of two levels, and encoded the two levels as separate features;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WordNet Features",
"sec_num": "4.3"
},
{
"text": "\u2022 lex: Lexical file numbers, which correspond to a number of abstract semantic classes in WordNet, including noun.artifact, noun.event, and noun.process. This allows for nominal relations which do not make sense to be identified, e.g. a noun.process should not be able to contain a noun.event, but the process may cause the event (Bedmar et al., 2007) ;",
"cite_spans": [
{
"start": 330,
"end": 351,
"text": "(Bedmar et al., 2007)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "WordNet Features",
"sec_num": "4.3"
},
{
"text": "\u2022 cont: Container -a binary feature indicating whether the marked nouns are hyponyms (more specific concepts) of the container synset. This feature was included mainly for the benefit of the Content-Container relation; however, we hypothesised that their inclusion may also assist in classifying other relations; e.g. the 'effect' in Cause-Effect should not be a physical entity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WordNet Features",
"sec_num": "4.3"
},
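{
"text": "A sketch of the lookup using NLTK's WordNet interface (an assumption; the paper does not name its WordNet access library):

from nltk.corpus import wordnet as wn

CONTAINER = wn.synset('container.n.01')

def wordnet_features(noun):
    feats = set()
    for s in wn.synsets(noun, pos=wn.NOUN):   # all senses, no sense key required
        feats.update('syn=' + l.name() for l in s.lemmas())
        feats.add('lex=' + s.lexname())       # lexical file, e.g. noun.artifact
        for h1 in s.hypernyms():              # first- and second-level hypernyms
            feats.add('hyp1=' + h1.name())
            feats.update('hyp2=' + h2.name() for h2 in h1.hypernyms())
        if CONTAINER in set(s.closure(lambda x: x.hypernyms())):
            feats.add('cont=True')            # the noun is a hyponym of container
    return feats",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WordNet Features",
"sec_num": "4.3"
},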
{
"text": "Syntactic features representing the path between nominals are a useful complement for semantic and lexical features because they account for the way in which words are commonly used in text. Semantic relationships can often be associated with certain patterns of words, e.g. the pattern e 1 is inside e 2 is a strong indicator for the Content-Container relation for many general combinations of e 1 and e 2 . However, these patterns can be expressed in many different waysinside e 2 e 1 is or inside e 2 is e 1 are other ways of expressing a Content-Container relationship -and while the words are essentially the same between the examples the changed ordering creates difficulties in designing good features. This problem can be alleviated by considering syntactic dependencies in a sentence rather than a naive concatenation of words (Nicolae et al., 2007) . Grammatical relations (GRs) represent the syntactic dependencies that hold between a head and a dependent in text. Initially proposed by Carroll et al. (1998) (det man 1 A 0) (ncmod does 2 not 3) (aux talk 4 does 2) (ncsubj talk 4 man 1 ) (det woman 7 every 6) (ncsubj walks 8 woman 7 ) (conj or 5 walks 8) (conj or 5 does 2) Figure 1 : GRs output from the C&C parser for parsing accuracy, GRs are arranged in a hierarchy that allows for varying levels of exactness in parsing: a general dependent relation can be assigned to indicate that there is some doubt over the precise dependency that holds between two words. We postulated that a simple graph constructed from the dependencies (whereby words of the text are nodes and undirected edges are added between nodes if there is some grammatical relation that links them) could be used to find a path between the two nominals in each sentence. This path would compare favourably to a naive concatenation of the words between the nominals as it considers the actual dependencies in the sentence rather than just the positions of the words, although in many cases at least one of the words between the marked nominals in the sentence will be represented in the dependency path. Table 2 gives a list of GRs used in this process.",
"cite_spans": [
{
"start": 836,
"end": 858,
"text": "(Nicolae et al., 2007)",
"ref_id": "BIBREF14"
},
{
"start": 998,
"end": 1019,
"text": "Carroll et al. (1998)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 1187,
"end": 1195,
"text": "Figure 1",
"ref_id": null
},
{
"start": 2088,
"end": 2095,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Grammatical Relations Features",
"sec_num": "4.4"
},
{
"text": "To extract the grammatical relations from the provided data we parsed each training and test example with the C&C parser developed by Clark and Curran (2007) . Figure 1 gives an example of the GRs for the sentence A man does not talk or every woman walks. A dependency graph was generated from this output and the shortest path between the nominals found. In the example in Figure 1 , the path between man and woman is talk does or walks.",
"cite_spans": [
{
"start": 134,
"end": 157,
"text": "Clark and Curran (2007)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 160,
"end": 168,
"text": "Figure 1",
"ref_id": null
},
{
"start": 374,
"end": 382,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Grammatical Relations Features",
"sec_num": "4.4"
},
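{
"text": "The graph construction and shortest-path search reduce to a breadth-first search over the GR output; a sketch (our illustration, with word indices attached to tokens as in Figure 1; linking every pair of words a GR mentions is a simplification):

from collections import deque

def shortest_gr_path(grs, source, target):
    # grs: (relation, word, word, ...) tuples from the parser.
    # Build an undirected graph over the word tokens they mention.
    graph = {}
    for gr in grs:
        words = [w for w in gr[1:] if w]  # skip the relation label
        for a in words:
            for b in words:
                if a != b:
                    graph.setdefault(a, set()).add(b)
    # Breadth-first search returns a shortest undirected path.
    queue, seen = deque([[source]]), {source}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# For the Figure 1 GRs, shortest_gr_path(grs, 'man_1', 'woman_7') yields
# ['man_1', 'talk_4', 'does_2', 'or_5', 'walks_8', 'woman_7'],
# i.e. the intermediate words talk does or walks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical Relations Features",
"sec_num": "4.4"
},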
{
"text": "Features were extracted from this path output in two formats: a generalised version (labelled with a 'g' prefix), whereby the two nominals in question were replaced whenever they appeared with the marker tags e 1 and e 2 , and the actual version, where this extra generalisation step was not applied. We reasoned that the generalised output would be more useful as a classification feature as it removed the stipulation on the start and end of the path; however, we also felt that keeping the identity of the nominals would aid in classifying words often paired with prepositions that suggest some form of spatial or logical relationship, e.g. the fragment after hurricanes suggests some form of Cause-Effect relationship from the temporal indicator 'after'. Our path features included:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical Relations Features",
"sec_num": "4.4"
},
{
"text": "\u2022 path, gpath:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical Relations Features",
"sec_num": "4.4"
},
{
"text": "The path itself in a concatenated format, e.g.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical Relations Features",
"sec_num": "4.4"
},
{
"text": "or e 1 comes after e 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "damage comes after hurricanes",
"sec_num": null
},
{
"text": "These patterns were postulated to have some correlation with each relation;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "damage comes after hurricanes",
"sec_num": null
},
{
"text": "\u2022 strip, gstrip: The path with a length filter of 2 applied;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "damage comes after hurricanes",
"sec_num": null
},
{
"text": "\u2022 slice, gslice: The nominals with their immediate neighbour from the path, e.g. damage comes, after hurricanes or e 1 comes, after e 2 ;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "damage comes after hurricanes",
"sec_num": null
},
{
"text": "\u2022 pair, gpair: The bigrams in the path, e.g. e 1 comes, comes after, after e 2 ;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "damage comes after hurricanes",
"sec_num": null
},
{
"text": "\u2022 ptag, gptag: The underscore-concatenated POS tags of the path words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "damage comes after hurricanes",
"sec_num": null
},
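{
"text": "The variants above can be derived mechanically from the path; a sketch assuming the path as a token list with a parallel POS tag list (the gstrip, gslice, and gpair variants apply the same operations to the generalised tokens):

def path_features(path, tags, e1, e2):
    gen = ['e1' if w == e1 else 'e2' if w == e2 else w for w in path]
    return {
        'path':  '_'.join(path),
        'gpath': '_'.join(gen),
        'strip': '_'.join(w for w in path if len(w) > 2),    # length filter of 2
        'slice': [' '.join(path[:2]), ' '.join(path[-2:])],  # nominals + neighbours
        'pair':  [' '.join(b) for b in zip(path, path[1:])], # path bigrams
        'ptag':  '_'.join(tags),
    }

# path_features(['damage', 'comes', 'after', 'hurricanes'], ...)['pair']
# -> ['damage comes', 'comes after', 'after hurricanes']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical Relations Features",
"sec_num": "4.4"
},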
{
"text": "Web 1T (Brants and Franz, 2006 ) is a Googlereleased corpus containing English word ngrams Figure 2 : The notation and formula used for the significance testing.",
"cite_spans": [
{
"start": 7,
"end": 30,
"text": "(Brants and Franz, 2006",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 91,
"end": 99,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Web 1T Significance Features",
"sec_num": "4.5"
},
{
"text": "\u03c7 2 = N (O11O22\u2212O12O21) 2 (O11+O12)\u00d7(O11+O21)\u00d7(O12+O22)\u00d7(O21+O22)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Web 1T Significance Features",
"sec_num": "4.5"
},
{
"text": "and their observed frequency counts in a body of approximately 1 trillion word tokens of text from publicly accessible web pages. Web 1T counts the occurrences of unigrams, 2-, 3-, 4-, and 5-grams in the 1 trillion word tokens, discarding unigrams appearing less than 200 times in the tokens (1 in 5 billion) and n-grams appearing less than 40 times (1 in 25 billion) in the tokens. This resource captures many lexical patterns used in common English, though there are some inconsistencies due to the permissive nature of the web: some commonly misspelt words are included and some text in languages other than English are also present.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Web 1T Significance Features",
"sec_num": "4.5"
},
{
"text": "The idea of searching a large corpus for specific lexical patterns to indicate semantic relations of interest was first described by Hearst (1992) . As previously mentioned, we postulated that certain patterns of words would associate with certain relations, but a naive concatenation of words located between the nominals would be unhelpful with such a small data set. This problem can be avoided by examining the frequencies of lexical patterns within a much larger dataset such as Web 1T, where the problem of data sparseness is offset by the size of the corpus. This pattern information would complement the semantic and syntactic information already used by incorporating evidence regarding word use in real-world text.",
"cite_spans": [
{
"start": 133,
"end": 146,
"text": "Hearst (1992)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Web 1T Significance Features",
"sec_num": "4.5"
},
{
"text": "We chose to conduct statistical significance tests with the intention of observing if the presence of particular words between the nominals is meaningful, irrespective of whether or not they are in the sentences themselves. This allows us to collate all the words found to be significant when placed between the nominals and use them as ngram features. Our methodology is justified by the observation that patterns correlated with relations are likely to contain the same words regardless of the bounds, i.e. the pattern e 1 is inside Table 3 : The average improvement in F-measure (using the words of the sentence as a baseline) for each feature macro-averaged over all 7 relations e 2 is a strong indicator for the Content-Container relation for general combinations of e 1 and e 2 . The significance test was conducted as in Manning and Sch\u00fctze (2000) . The Web 1T data was searched for any 3-, 4-, and 5-grams that had the same bounds as the nominals of the sentence in question, i.e. patterns which match e 1 ... e 2 . Then, for every intermediate word in the pattern, a \u03c7squared value was calculated to measure the significance of the word in relation to the bounds. This process was repeated for each training example, and Figure 2 gives the equations used for this test. We conducted some brief experiments to find the range of \u03c7-squared values returned by this test; based on these we chose the \u03c7-squared value of 10 to indicate significance to the training example being analysed, and selected all words with a \u03c7-squared value above this level to add as ngram features.",
"cite_spans": [
{
"start": 848,
"end": 854,
"text": "(2000)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 535,
"end": 542,
"text": "Table 3",
"ref_id": null
},
{
"start": 1230,
"end": 1238,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Web 1T Significance Features",
"sec_num": "4.5"
},
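{
"text": "The per-word test reduces to the 2x2 \u03c7-squared statistic of Figure 2; a sketch of the computation (counts as defined there, gathered from the Web 1T n-grams whose bounds match the nominals):

def chi_squared(o11, o12, o21, o22):
    # o11: word and bound together; o12: bound without word;
    # o21: word without bound;     o22: neither bound nor word.
    n = o11 + o12 + o21 + o22
    den = (o11 + o12) * (o11 + o21) * (o12 + o22) * (o21 + o22)
    return n * (o11 * o22 - o12 * o21) ** 2 / den if den else 0.0

THRESHOLD = 10.0  # the significance cut-off chosen by the brief experiments above

def significant_words(counts):
    # counts: {word: (o11, o12, o21, o22)} for each intermediate word
    return [w for w, c in counts.items() if chi_squared(*c) > THRESHOLD]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Web 1T Significance Features",
"sec_num": "4.5"
},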
{
"text": "As an initial step, we used the sentence words of each example as a baseline to test the individual performance of each feature in a seven-fold cross- Table 4 : The best performing single features (using the words of the sentence as a baseline) and their mean improvement in F-measure for each relation validation over the training examples for each relation. We did this to compare the discrete improvement over the baseline that each feature offered and to allow a comparison as to how combining the features improves performance. Table 3 shows that most features offer small gains on average over the baseline sentence, but also exhibit varying degrees of performance over the relations as seen in the relatively large standard deviations. In particular, lexical file numbers and second-level hypernyms have the largest mean improvement in F-measure, but also the largest standard deviations -indicating a widespread distribution of positive and negative contributions. Table 4 shows that these two features improve the baseline F-measure of five of the relations, implying from the large standard deviations that they severely worsen the performance of the remaining two. This behaviour is explained by noting the wide generalisation that these features add, creating the most collisions between training and test data and hence affecting the decisions of the classifier the most.",
"cite_spans": [],
"ref_spans": [
{
"start": 151,
"end": 158,
"text": "Table 4",
"ref_id": null
},
{
"start": 533,
"end": 540,
"text": "Table 3",
"ref_id": null
},
{
"start": 973,
"end": 980,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Preliminary Feature Testing",
"sec_num": "5"
},
{
"text": "The features chosen to train the classifiers for the final system along with performance in the crossvalidation are given in Table 5 . All the relations performed best with a combination of lexical, semantic, and syntactic features, and three relations also used the statistical significance data obtained from Web 1T. The relatively even spread of feature types across relations implies that the classifier performs best when presented with a wide range of evidence that it can then combine into a model. However, the largest number of fea- tures used was 11 -considerably less than the 20 tested during cross-validation -and this supports the general conclusion that using too many features in a maximum entropy approach with a small amount of training data adversely affects classifying performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 125,
"end": 132,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "6"
},
{
"text": "The most commonly selected features were the Grammatical Relations bigram features (pair and g / slice). These features were used in all but one of the classifiers, indicating that bigram information provided very useful evidence for relation classification. Given that most path bigrams involve the nominals with a preposition that indicates a temporal or spatial relationship, we infer that the syntactic dependency between nominals and prepositions is an important feature for semantic relation classification. Other commonly selected features were the head words of the sentence and their lexical file numbersthese were present together in the Cause-Effect, Instrument-Agency, Theme-Tool, and Content-Container classifiers. This correlation is expected given that these relations usually exist between nominals that generally correspond with the semantic classes from WordNet. Table 5 shows that some relations were more challenging to classify than others. Origin-Entity in particular exhibited the worst performance, with a standard deviation of 12.92 around an average F-measure of 62.3% under seven-fold cross-validation. This poor performance was expected given that most attempts from the SemEval proceedings rated Origin-Entity as equal hardest to classify along with Theme-Tool . On the other hand, our cross-validation yielded good performance for Theme-Tool, with an F-measure of 71.2% -potentially showing that Table 6 : Final percentage precision, recall, F-measure, and accuracy results over the test data using the features listed in Table 5 maximum entropy methods are more effective at handling difficult relations than kernel approaches to the problem. Also notable are the strong performances of the Part-Whole and Product-Producer classifiers, with F-measures above 80% and accuracies above 72%. The other relations also performed well, with no other classifier exhibiting an F-measure or accuracy score below 70%. Table 6 gives the final classifying results over the supplied test data using all the training examples and features selected in the cross-validation step as training material. We established a new benchmark for classifying the Instrument-Agency relation: our F-measure of 78.1% exceeds the best result of 77.9% for the relation from the SemEval proceedings . However, as a general rule, system performance was weaker over the test data than during the cross-validation step, providing some evidence of overfitting to the training data. This was particularly demonstrated in the markedly poor performance of the Part-Whole classifier -from the cross-validation F-measure dropped by 18.9% to 62.7% and accuracy fell 12.4% from 77.1% to 65.3%. It should be noted however that our system performed better than most others in classifying the difficult relations as ranked in the SemEval task .",
"cite_spans": [],
"ref_spans": [
{
"start": 881,
"end": 888,
"text": "Table 5",
"ref_id": null
},
{
"start": 1426,
"end": 1433,
"text": "Table 6",
"ref_id": null
},
{
"start": 1552,
"end": 1559,
"text": "Table 5",
"ref_id": null
},
{
"start": 1938,
"end": 1945,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "6"
},
{
"text": "We recorded a final F-measure of 70.7% and accuracy of 71.9% macroaveraged over the seven relations, an improvement of 5.9% in both Fmeasure and accuracy over the best performing system using the same data (no WordNet sense keys or query) from SemEval 2007. Our system performed within an F-measure of 1.7% and accuracy of 4.4% of the top system from SemEval 2007, which incorporated a large number of extra training examples and WordNet sense keys (Beamer et al., 2007) . Our results are comparable with more recent approaches to the same classification task, utilising pattern clusters (Fmeasure 70.6%, accuracy 70.1%, in Davidov and Rappoport (2008) ) and distributional kernels (Fmeasure 68.8%, accuracy 71.4%, in S\u00e9aghdha and Copestake (2008) ).",
"cite_spans": [
{
"start": 449,
"end": 470,
"text": "(Beamer et al., 2007)",
"ref_id": "BIBREF0"
},
{
"start": 624,
"end": 652,
"text": "Davidov and Rappoport (2008)",
"ref_id": "BIBREF7"
},
{
"start": 718,
"end": 747,
"text": "S\u00e9aghdha and Copestake (2008)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "6"
},
{
"text": "Overall these results show that a maximum entropy approach with a range of informative features is a feasible and effective method of classifying nominal relations when presented with limited data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "6"
},
{
"text": "We have created a system built around a maximum entropy classifier that achieves results comparable with state-of-the-art with limited training data. We have also demonstrated that syntactic dependencies and frequency-based statistical features taken from large corpora provide useful evidence for classification, especially when combined with lexical and semantic information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We have also shown that a maximum entropy approach using informative features performs strongly in the task of relation classification, and that exact WordNet sense keys are not necessary for good performance. This is important since it is impractical in large scale classifying tasks to provide this annotation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "The corpora is extremely small, and it should be noted that the choice to select the dataset using a limited number of queries artificially limits the scope of this task. We feel that an effort to annotate a large amount of randomly selected text with several hundred positive examples would greatly benefit further research into relation classification and validate the results presented in this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Future improvements to the system could include incorporating more external resources (e.g. VerbNet), introducing Word-Sense Disambiguation as a replacement for WordNet sense keys, or by incorporating more relation-specific features, such as meronym (Has-Part) information from WordNet for the Part-Whole and Content-Container relations. More sophisticated analysis of the Web 1T data could also be undertaken, such as a generalised attempt to identify patterns underpinning semantic relationships, rather than just those corresponding to the provided sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We achieved a final overall F-measure of 70.7% and accuracy of 71.9%, establishing a new benchmark for performance over the SemEval data without sense keys. Our system is also competitive with approaches that use sense keys, and so we expect that it will provide useful semantic information for classification and retrieval problems in the future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
}
],
"back_matter": [
{
"text": "The authors would like to thank the three anonymous reviewers, whose comments greatly improved the quality of this paper. Dominick Ng was supported by a University of Sydney Merit Scholarship; Terry Miu was supported by a University of Sydney Outstanding Achievement Scholarship and a University of Sydney International Merit Scholarship.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "UIUC: A knowledge-rich approach to identifying semantic relations between nominals",
"authors": [
{
"first": "Brandon",
"middle": [],
"last": "Beamer",
"suffix": ""
},
{
"first": "Suma",
"middle": [],
"last": "Bhat",
"suffix": ""
},
{
"first": "Brant",
"middle": [],
"last": "Chee",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Fister",
"suffix": ""
},
{
"first": "Alla",
"middle": [],
"last": "Rozovskaya",
"suffix": ""
},
{
"first": "Roxana",
"middle": [],
"last": "Girju",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)",
"volume": "",
"issue": "",
"pages": "386--389",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brandon Beamer, Suma Bhat, Brant Chee, Andrew Fister, Alla Rozovskaya, and Roxana Girju. 2007. UIUC: A knowledge-rich approach to identifying semantic relations between nominals. In Proceed- ings of the Fourth International Workshop on Se- mantic Evaluations (SemEval-2007), pages 386- 389, Prague, Czech Republic.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "UC3M: Classification of semantic relations between nominals using sequential minimal optimization",
"authors": [
{
"first": "Isabel",
"middle": [],
"last": "Segura Bedmar",
"suffix": ""
},
{
"first": "Doaa",
"middle": [],
"last": "Samy",
"suffix": ""
},
{
"first": "Jose",
"middle": [
"L"
],
"last": "",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)",
"volume": "",
"issue": "",
"pages": "382--385",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Isabel Segura Bedmar, Doaa Samy, and Jose L. Mar- tinez. 2007. UC3M: Classification of semantic relations between nominals using sequential min- imal optimization. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 382-385, Prague, Czech Republic.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Web 1T 5-gram version 1. Linguistic Data Consortium",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Franz",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Brants and Alex Franz. 2006. Web 1T 5-gram version 1. Linguistic Data Consortium, Philadelphia. LDC2006T13.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Parser evaluation: a survey and a new proposal",
"authors": [
{
"first": "John",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Sanfilippo",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings, First International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "447--454",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Carroll, Ted Briscoe, and Antonio Sanfilippo. 1998. Parser evaluation: a survey and a new pro- posal. In Proceedings, First International Confer- ence on Language Resources and Evaluation, pages 447-454, Granada, Spain.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Verbocean: Mining the web for fine-grained semantic verb relations",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Chklovski",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 2004 Conference on Empirical Methods on Natural Language Processing (EMNLP-2004)",
"volume": "",
"issue": "",
"pages": "33--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Chklovski and Patrick Pantel. 2004. Ver- bocean: Mining the web for fine-grained seman- tic verb relations. In Proceedings of the 2004 Conference on Empirical Methods on Natural Lan- guage Processing (EMNLP-2004), pages 33-40, Barcelona, Spain.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Parsing the WSJ using CCG and log-linear models",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "James",
"middle": [
"R"
],
"last": "Curran",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "104--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Clark and James R. Curran. 2004. Parsing the WSJ using CCG and log-linear models. In Pro- ceedings of the 42nd Annual Meeting on Associa- tion for Computational Linguistics, pages 104-111, Barcelona, Spain.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Widecoverage efficient statistical parsing with CCG and log-linear models",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "James",
"middle": [
"R"
],
"last": "Curran",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "4",
"pages": "493--552",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Clark and James R. Curran. 2007. Wide- coverage efficient statistical parsing with CCG and log-linear models. Computational Linguistics, 33(4):493-552.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Classification of semantic relationships between nominals using pattern clusters",
"authors": [
{
"first": "Dmitry",
"middle": [],
"last": "Davidov",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-08: HLT)",
"volume": "",
"issue": "",
"pages": "227--235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dmitry Davidov and Ari Rappoport. 2008. Classifica- tion of semantic relationships between nominals us- ing pattern clusters. In Proceedings of the 46th An- nual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL- 08: HLT), pages 227-235, Ohio, USA.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "WordNet -An Electronic Lexical Database",
"authors": [],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christiane Fellbaum, editor. 1998. WordNet -An Electronic Lexical Database. MIT Press.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Semeval-2007 task 04: Classification of semantic relations between nominals",
"authors": [
{
"first": "Roxana",
"middle": [],
"last": "Girju",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Vivi",
"middle": [],
"last": "Nastase",
"suffix": ""
},
{
"first": "Stan",
"middle": [],
"last": "Szpakowicz",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Turney",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Fourth International Workshop on Semantic Evaluations",
"volume": "",
"issue": "",
"pages": "13--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roxana Girju, Preslav Nakov, Vivi Nastase, Stan Sz- pakowicz, Peter Turney, and Deniz Yuret. 2007. Semeval-2007 task 04: Classification of seman- tic relations between nominals. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 13-18, Prague, Czech Republic.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Automatic acquisition of hyponyms from large text corpora",
"authors": [
{
"first": "Marti",
"middle": [],
"last": "Hearst",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the 14th International Conference on Computational Linguistics (COLING-1992)",
"volume": "",
"issue": "",
"pages": "539--545",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marti Hearst. 1992. Automatic acquisition of hy- ponyms from large text corpora. In Proceedings of the 14th International Conference on Computa- tional Linguistics (COLING-1992), pages 539-545, Nantes, France.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "ILK: Machine learning of semantic relations with shallow features and almost no data",
"authors": [
{
"first": "Iris",
"middle": [],
"last": "Hendrickx",
"suffix": ""
},
{
"first": "Roser",
"middle": [],
"last": "Morante",
"suffix": ""
},
{
"first": "Caroline",
"middle": [],
"last": "Sporleder",
"suffix": ""
},
{
"first": "Antal",
"middle": [],
"last": "van den Bosch",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)",
"volume": "",
"issue": "",
"pages": "187--190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iris Hendrickx, Roser Morante, Caroline Sporleder, and Antal van den Bosch. 2007. ILK: Machine learning of semantic relations with shallow fea- tures and almost no data. In Proceedings of the Fourth International Workshop on Semantic Eval- uations (SemEval-2007), pages 187-190, Prague, Czech Republic.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Foundations of Statistical Natural Language Processing",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Manning and Hinrich Sch\u00fctze. 2000. Foun- dations of Statistical Natural Language Processing. MIT Press, Massachusetts, United States.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Exploring noun-modifier semantic relations",
"authors": [
{
"first": "Vivi",
"middle": [],
"last": "Nastase",
"suffix": ""
},
{
"first": "Stan",
"middle": [],
"last": "Szpakowicz",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 5th International Workshop on Computational Semantics",
"volume": "",
"issue": "",
"pages": "285--301",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vivi Nastase and Stan Szpakowicz. 2003. Explor- ing noun-modifier semantic relations. In Proceed- ings of the 5th International Workshop on Compu- tational Semantics, pages 285-301, Tilburg, The Netherlands.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "UTD-HLT-CG: Semantic architecture for metonymy resolution and classification of nominal relations",
"authors": [
{
"first": "Cristina",
"middle": [],
"last": "Nicolae",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Nicolae",
"suffix": ""
},
{
"first": "Sanda",
"middle": [],
"last": "Harabagiu",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)",
"volume": "",
"issue": "",
"pages": "454--459",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cristina Nicolae, Gabriel Nicolae, and Sanda Harabagiu. 2007. UTD-HLT-CG: Semantic archi- tecture for metonymy resolution and classification of nominal relations. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 454-459, Prague, Czech Republic.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "UCD-PN: Classification of semantic relations between nominals using wordnet and web counts",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Nulty",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)",
"volume": "",
"issue": "",
"pages": "374--377",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Nulty. 2007. UCD-PN: Classification of se- mantic relations between nominals using wordnet and web counts. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 374-377, Prague, Czech Republic.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Classifying the semantic relations in noun compounds via a domain-specific lexical hierarchy",
"authors": [
{
"first": "Barbara",
"middle": [],
"last": "Rosario",
"suffix": ""
},
{
"first": "Marti",
"middle": [],
"last": "Hearst",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 2001 Conference on Empirical Methods in Natural Language Processing (EMNLP-2001)",
"volume": "",
"issue": "",
"pages": "82--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barbara Rosario and Marti Hearst. 2001. Classify- ing the semantic relations in noun compounds via a domain-specific lexical hierarchy. In Proceed- ings of the 2001 Conference on Empirical Methods in Natural Language Processing (EMNLP-2001), pages 82-90, Pennsylvania, United States.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Semantic classification with distributional kernels",
"authors": [
{
"first": "Diarmuid\u00f3",
"middle": [],
"last": "S\u00e9aghdha",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Copestake",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 22nd International Conference on Computational Linguistics (COLING-2008)",
"volume": "",
"issue": "",
"pages": "649--655",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diarmuid\u00d3 S\u00e9aghdha and Ann Copestake. 2008. Semantic classification with distributional kernels. In Proceedings of the 22nd International Con- ference on Computational Linguistics (COLING- 2008), pages 649-655, Manchester, UK.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Detecting gene relations from MEDLINE abstracts",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Stephens",
"suffix": ""
},
{
"first": "Mathew",
"middle": [
"J"
],
"last": "Palakal",
"suffix": ""
},
{
"first": "Snehasis",
"middle": [],
"last": "Mukhopadhyay",
"suffix": ""
},
{
"first": "Rajeev",
"middle": [
"R"
],
"last": "Raje",
"suffix": ""
},
{
"first": "Javed",
"middle": [],
"last": "Mostafa",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Sixth Annual Pacific Symposium on Biocomputing",
"volume": "",
"issue": "",
"pages": "483--496",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Stephens, Mathew J. Palakal, Sneha- sis Mukhopadhyay, Rajeev R. Raje, and Javed Mostafa. 2001. Detecting gene relations from MEDLINE abstracts. In Proceedings of the Sixth Annual Pacific Symposium on Biocomputing, pages 483-496, Hawaii, United States.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"type_str": "table",
"content": "<table><tr><td>Relation</td><td>Training Example</td></tr><tr><td colspan=\"2\">Cause-Effect [e Theme-Tool The [e 1 submission] [e 2 deadline] is February, 2, 2007.</td></tr><tr><td>Part-Whole</td><td>Typically, an unglazed [e 1 clay ] [e 2 pot] is submerged for 15 to 30 minutes</td></tr><tr><td/><td>to absorb water.</td></tr><tr><td>Content-</td><td/></tr></table>",
"num": null,
"html": null,
"text": "Famine] following [e 2 drought] has hit the West African savannahs, where there have been other bad droughts. Instrument-Agency The [e 1 judge] hesitates, [e 2 gavel] poised. Product-Producer The [e 1 artist] made the [e 2 picture] when he was in fourth grade. Origin-Entity It's unfortunate you didn't try a [e 1 potato] [e 2 vodka]. Container The [e 1 kitchen] holds patient [e 2 drinks] and snacks."
},
"TABREF1": {
"type_str": "table",
"content": "<table/>",
"num": null,
"html": null,
"text": "Examples of the SemEval relations."
},
"TABREF3": {
"type_str": "table",
"content": "<table/>",
"num": null,
"html": null,
"text": ""
},
"TABREF4": {
"type_str": "table",
"content": "<table/>",
"num": null,
"html": null,
"text": "O 11 : freq count of word and bound together O 12 : freq count of bound without word O 21 : freq count of word without bound O 22 : freq count of neither bound or word N : total number of tokens"
},
"TABREF7": {
"type_str": "table",
"content": "<table><tr><td>Relation</td><td>Features selected</td><td>F\u00b1 std Acc\u00b1 std</td></tr><tr><td colspan=\"2\">Cause-Effect dir, headsOrigin-Entity dir, sen, hyp2, lex, gstrip</td><td>62.3\u00b112.92 70.7\u00b18.38</td></tr><tr><td>Theme-Tool</td><td>heads, lex, g/pair, gslice</td><td>71.2\u00b1 8.43 77.9\u00b14.88</td></tr><tr><td>Part-Whole</td><td>dir, red, hyp1, syn, g/pair, slice, ngram</td><td>81.6\u00b1 8.81 77.1\u00b18.09</td></tr><tr><td>Content-Container</td><td>heads, red, cont, hyp2, lex, pair, slice, ngram</td><td>71.1\u00b112.22 72.9\u00b16.36</td></tr><tr><td>Average</td><td>-</td><td>74.7\u00b1 6.88 74.5\u00b12.85</td></tr><tr><td colspan=\"3\">Table 5: The best performing features with F-measure and accuracy percentages from the cross-validation</td></tr></table>",
"num": null,
"html": null,
"text": ", cont, lex, pair, g/slice, g/strip, ngram 77.9\u00b1 7.06 73.6\u00b19.00 Instrument-Agency heads, cont, hyp2, lex, g/pair, gptag, gpath, g/slice, gstrip 78.2\u00b1 7.59 77.1\u00b19.94 red, cont, hyp1, hyp2, gpath, pair, slice, strip 80.6\u00b1 3.67 72.1\u00b13.93"
}
}
}
}