{
"paper_id": "D09-1047",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:40:05.756764Z"
},
"title": "Joint Learning of Preposition Senses and Semantic Roles of Prepositional Phrases",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Dahlmeier",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Hwee",
"middle": [
"Tou"
],
"last": "Ng",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Tanja",
"middle": [],
"last": "Schultz",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Karlsruhe",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The sense of a preposition is related to the semantics of its dominating prepositional phrase. Knowing the sense of a preposition could help to correctly classify the semantic role of the dominating prepositional phrase and vice versa. In this paper, we propose a joint probabilistic model for word sense disambiguation of prepositions and semantic role labeling of prepositional phrases. Our experiments on the PropBank corpus show that jointly learning the word sense and the semantic role leads to an improvement over state-of-theart individual classifier models on the two tasks.",
"pdf_parse": {
"paper_id": "D09-1047",
"_pdf_hash": "",
"abstract": [
{
"text": "The sense of a preposition is related to the semantics of its dominating prepositional phrase. Knowing the sense of a preposition could help to correctly classify the semantic role of the dominating prepositional phrase and vice versa. In this paper, we propose a joint probabilistic model for word sense disambiguation of prepositions and semantic role labeling of prepositional phrases. Our experiments on the PropBank corpus show that jointly learning the word sense and the semantic role leads to an improvement over state-of-theart individual classifier models on the two tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Word sense disambiguation (WSD) and semantic role labeling (SRL) are two key components in natural language processing to find a semantic representation for a sentence. Semantic role labeling is the task of determining the constituents of a sentence that represent semantic arguments with respect to a predicate and labeling each with a semantic role. Word sense disambiguation tries to determine the correct meaning of a word in a given context. Ambiguous words occur frequently in normal English text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One word class which is both frequent and highly ambiguous is preposition. The different senses of a preposition express different relations between the preposition complement and the rest of the sentence. Semantic roles and word senses offer two different inventories of \"meaning\" for prepositional phrases (PP): semantic roles distinguish between different verb complements while word senses intend to fully capture the preposition semantics at a more fine-grained level. In this paper, we use the semantic roles from the PropBank corpus and the preposition senses from the Preposition Project (TPP). Both corpora are explained in more detail in the following section. The relationship between the two inventories (PropBank semantic roles and TPP preposition senses) is not a simple one-to-one mapping, as we can see from the following examples:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 She now lives with relatives [in sense1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Alabama.] ARGM-LOC",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 The envelope arrives [in sense1 the mail.] ARG4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 [In sense5 separate statements] ARGM-LOC the two sides said they want to have \"further discussions.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the first two examples, the sense of the preposition in is annotated as sense 1 (\"surrounded by or enclosed in\"), following the definitions of the TPP, but the semantic roles are different. In the first example the semantic role is a locative adjunctive argument (ARGM-LOC), while in the second example it is ARG4 which denotes the \"end point or destination\" of the arriving action 1 . In the first and third example, the semantic roles are the same, but the preposition senses are different, i.e., sense 1 and sense 5 (\"inclusion or involvement\"). Preposition senses and semantic roles provide two different views on the semantics of PPs. Knowing the semantic role of the PP could be helpful to successfully disambiguate the sense of the preposition. Likewise, the preposition sense could provide valuable information to classify the semantic role of the PP. This is especially so for the semantic roles ARGM-LOC and ARGM-TMP, where we expect a strong correlation with spatial and temporal preposition senses respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a probabilistic model for joint inference on preposition senses and semantic roles. For each prepositional phrase that has been identified as an argument of the predicate, we jointly infer its semantic role and the sense of the preposition that is the lexical head of the prepositional phrase. That is, our model maximizes the joint probability of the semantic role and the preposition sense.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Previous research has shown the benefit of jointly learning semantic roles of multiple constituents (Toutanova et al., 2008; Koomen et al., 2005) . In contrast, our joint model makes predictions for a single constituent, but multiple tasks (WSD and SRL) .",
"cite_spans": [
{
"start": 100,
"end": 124,
"text": "(Toutanova et al., 2008;",
"ref_id": "BIBREF14"
},
{
"start": 125,
"end": 145,
"text": "Koomen et al., 2005)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our experiments show that adding the SRL information leads to statistically significant improvements over an independent, state-of-the-art WSD classifier. For the SRL task, we show statistically significant improvements of our joint model over an independent, state-of-the-art SRL classifier for locative and temporal adjunctive arguments, even though the overall improvement over all semantic roles is small. To the best of our knowledge, no previous research has attempted to perform preposition WSD and SRL of prepositional phrases in a joint learning approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of this paper is structured as follows: First, we give an introduction to the WSD and SRL task. Then, in Section 3, we describe the individual and joint classifier models. The details of the data set used in our experiments are given in Section 4. In Section 5, we present experiments and results. Section 6 summarizes related work, before we conclude in the final section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This section gives an introduction to preposition sense disambiguation and semantic role labeling of prepositional phrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Description",
"sec_num": "2"
},
{
"text": "The task of word sense disambiguation is to find the correct meaning of a word, given its context. Most prior research on word sense disambiguation has focused on disambiguating the senses of nouns, verbs, and adjectives, but not on prepositions. Word sense disambiguation can be framed as a classification task. For each preposition, a classifier is trained on a corpus of training examples annotated with preposition senses, and tested on a set of unseen test examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preposition Sense Disambiguation",
"sec_num": "2.1"
},
{
"text": "To perform WSD for prepositions, it is necessary to first find a set of suitable sense classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preposition Sense Disambiguation",
"sec_num": "2.1"
},
{
"text": "We adopt the sense inventory from the Preposition Project (TPP) (Litkowski and Hargraves, 2005) that was also used in the SemEval 2007 preposition WSD task (Litkowski and Hargraves, 2007) . TPP is an attempt to create a comprehensive lexical database of English prepositions that is suitable for use in computational linguistics research. For each of the over 300 prepositions and phrasal prepositions, the database contains a set of sense definitions, which are based on the Oxford Dictionary of English. Every preposition has a set of fine-grained senses, which are grouped together into a smaller number of coarse-grained senses. In our experiments, we only focus on coarse-grained senses since better inter-annotator agreement can be achieved on coarse-grained senses, which also results in higher accuracy of the trained WSD classifier.",
"cite_spans": [
{
"start": 64,
"end": 95,
"text": "(Litkowski and Hargraves, 2005)",
"ref_id": "BIBREF6"
},
{
"start": 156,
"end": 187,
"text": "(Litkowski and Hargraves, 2007)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preposition Sense Disambiguation",
"sec_num": "2.1"
},
{
"text": "The task of semantic role labeling in the context of PropBank is to label tree nodes with semantic roles in a syntactic parse tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Role Labeling",
"sec_num": "2.2"
},
{
"text": "The PropBank corpus adds a semantic layer to parse trees from the Wall Street Journal section of the Penn Treebank II corpus (Marcus et al., 1993) . There are two classes of semantic roles: core arguments and adjunctive arguments. Core arguments are verb sense specific, i.e., their meaning is defined relative to a specific verb sense. They are labeled with consecutive numbers ARG0, ARG1, etc. ARG0 usually denotes the AGENT and ARG1 the THEME of the event. Besides the core arguments, a verb can have a number of adjunctive arguments that express more general properties like time, location, or manner. They are labeled as ARGM plus a functional tag, e.g., LOC for locative or TMP for temporal modifiers. Prepositional phrases can appear as adjunctive arguments or core arguments.",
"cite_spans": [
{
"start": 125,
"end": 146,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Role Labeling",
"sec_num": "2.2"
},
{
"text": "The standard approach to semantic role labeling is to divide the task into two sequential sub-tasks: identification and classification. During the identification phase, the system separates the nodes that fill some semantic roles from the rest. During the classification phase, the system assigns the exact semantic roles for all nodes that are identified as arguments. In this paper, we focus on the classification phase. That is, we assume that prepositional phrases that are semantic arguments have been identified correctly and concentrate on the task of determining the semantic role of prepositional phrases. The reason is that argument identification mostly relies on syntactic features, like the path from the constituent to the predicate (Pradhan et al., 2005) . Consider, for example, the phrase in the dark in the sentence: \"We are in the dark\", he said. The phrase is clearly not an argument to the verb say. But if we alter the syntactic structure of the sentence appropriately (while the sense of the preposition in remains unchanged), the same phrase suddenly becomes an adjunctive argument: In the dark, he said \"We are\". On the other hand, we can easily find examples, where in has a different sense, but the phrase always fills some semantic role:",
"cite_spans": [
{
"start": 747,
"end": 769,
"text": "(Pradhan et al., 2005)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Role Labeling",
"sec_num": "2.2"
},
{
"text": "\u2022 In a separate manner, he said . . .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Role Labeling",
"sec_num": "2.2"
},
{
"text": "\u2022 In 1998, he said . . .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Role Labeling",
"sec_num": "2.2"
},
{
"text": "\u2022 In Washington, he said . . . This illustrates that the preposition sense is independent of whether the PP is an argument or not. Thus, a joint learning model for argument identification and preposition sense is unlikely to perform better than the independent models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Role Labeling",
"sec_num": "2.2"
},
{
"text": "This section describes the models for preposition sense disambiguation and semantic role labeling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3"
},
{
"text": "We compare three different models for each task: First, we implement an independent model that only uses task specific features from the literature. This serves as the baseline model. Second, we extend the baseline model by adding the most likely prediction of the other task as an additional feature. This is equivalent to a pipeline model of classifiers that feeds the prediction of one classification step into the next stage. Finally, we present a joint model to determine the preposition sense and semantic role that maximize the joint probability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3"
},
{
"text": "Our approach to building a preposition WSD classifier follows that of Lee and Ng (2002) , who evaluated a set of different knowledge sources and learning algorithms for WSD. However, in this paper we use maximum entropy models 2 (instead of support vector machines (SVM) reported in (Lee and Ng, 2002) ), because maximum entropy models output probability distributions, unlike SVM. This property is useful in the joint model, as we will see later. Maxent models have been successfully applied to various NLP tasks and achieve state-of-the-art performance. There are two training parameters that have to be adjusted for maxent models: the number of training iterations and the Gaussian smoothing parameter. We find optimal values for both parameters through 10-fold crossvalidation on the training set.",
"cite_spans": [
{
"start": 70,
"end": 87,
"text": "Lee and Ng (2002)",
"ref_id": "BIBREF5"
},
{
"start": 283,
"end": 301,
"text": "(Lee and Ng, 2002)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "WSD model",
"sec_num": "3.1"
},
{
"text": "For every preposition, a baseline maxent model is trained using a set of features reported in the state-of-the-art WSD system of Lee and Ng (2002) . These features encode three knowledge sources:",
"cite_spans": [
{
"start": 129,
"end": 146,
"text": "Lee and Ng (2002)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "WSD model",
"sec_num": "3.1"
},
{
"text": "\u2022 Part-of-speech (POS) of surrounding words",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WSD model",
"sec_num": "3.1"
},
{
"text": "\u2022 Single words in the surrounding context",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WSD model",
"sec_num": "3.1"
},
{
"text": "For part-of-speech features, we include the POS tags of surrounding tokens from the same sentence within a window of seven tokens around the target prepositions. All tokens (i.e., all words and punctuation symbols) are considered. We use the Penn Treebank II POS tag set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Local collocations",
"sec_num": null
},
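{
"text": "As a concrete illustration of this knowledge source, the following minimal sketch extracts position-specific POS features from a seven-token window around the target preposition. It is our own illustration, not the authors' code: NLTK's pos_tag (which requires the NLTK tagger models) stands in for the unspecified tagger, and mapping out-of-sentence positions to the empty token nil is an assumption borrowed from the collocation features below.

```python
# Sketch: position-specific POS features from a 7-token window
# (three tokens to each side of the target preposition).
import nltk

def pos_window_features(tokens, target_idx, window=3):
    tags = [tag for _, tag in nltk.pos_tag(tokens)]
    feats = []
    for offset in range(-window, window + 1):
        i = target_idx + offset
        tag = tags[i] if 0 <= i < len(tags) else "nil"
        feats.append(f"POS:{offset}:{tag}")  # position-specific feature
    return feats

tokens = "The envelope arrives in the mail .".split()
print(pos_window_features(tokens, tokens.index("in")))
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Local collocations",
"sec_num": null
},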
{
"text": "For the knowledge source single words in the surrounding context, we consider all words from the same sentence. The input sentence is tokenized and all tokens that do not contain at least one alphabetical character (such as punctuation symbols and numbers) and all words that appear on a stopword list are removed. The remaining words are converted to lower case and replaced by their morphological root form. Every unique morphological root word contributes one binary feature, indicating whether or not the word is present in the context. The position of a word in the sentence is ignored in this knowledge source.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Local collocations",
"sec_num": null
},
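{
"text": "A minimal sketch of this preprocessing follows. The paper does not name its stoplist or morphological analyzer, so NLTK's English stopword list and WordNet lemmatizer (both requiring the corresponding NLTK data) are assumptions.

```python
# Sketch: bag-of-words features from the surrounding sentence.
# Tokens without an alphabetic character and stopwords are dropped;
# the rest are lowercased and reduced to a morphological root.
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

STOPWORDS = set(stopwords.words("english"))
lemmatizer = WordNetLemmatizer()

def surrounding_word_features(tokens):
    feats = set()
    for tok in tokens:
        if not any(ch.isalpha() for ch in tok):
            continue                      # punctuation and numbers
        word = tok.lower()
        if word in STOPWORDS:
            continue
        feats.add("SW:" + lemmatizer.lemmatize(word))  # binary presence
    return feats
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Local collocations",
"sec_num": null
},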
{
"text": "The third knowledge source, local collocations, encodes position-specific information of words within a small window around the target preposition. For this knowledge source, we consider unigrams, bigrams, and trigrams from a window of seven tokens. The position of the target preposition inside the n-gram is marked with a special character ' '. Words are converted to lower case, but no stemming or removal of stopwords is performed. If a token falls outside the sentence, it is replaced by the empty token symbol nil.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Local collocations",
"sec_num": null
},
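{
"text": "The collocation extraction can be sketched as follows. This is our own illustration: the paper's marker character was lost in extraction, so the underscore used here is an assumption.

```python
# Sketch: local collocation features (unigrams to trigrams) from a
# 7-token window around the target preposition.  Out-of-sentence
# positions become the empty token "nil"; the target position is
# marked with "_" (an assumed stand-in for the paper's marker).
def collocation_features(tokens, target_idx, window=3):
    padded = ["nil"] * window + [t.lower() for t in tokens] + ["nil"] * window
    center = target_idx + window
    padded[center] = "_"                       # mark the preposition slot
    lo, hi = center - window, center + window  # window bounds, inclusive
    feats = []
    for n in (1, 2, 3):
        for start in range(lo, hi - n + 2):
            gram = " ".join(padded[start:start + n])
            feats.append(f"C{n}:{start - center}:{gram}")
    return feats

tokens = "The envelope arrives in the mail .".split()
print(collocation_features(tokens, tokens.index("in"))[:5])
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Local collocations",
"sec_num": null
},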
{
"text": "During testing, the maxent model computes the conditional probability of the sense, given the feature representation of the surrounding context c. The classifier outputs the sense that receives the highest probability:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Local collocations",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s = argmax s P (s|\u03a8(c))",
"eq_num": "(1)"
}
],
"section": "\u2022 Local collocations",
"sec_num": null
},
{
"text": "where \u03a8(\u2022) is a feature map from the surrounding context to the feature representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Local collocations",
"sec_num": null
},
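{
"text": "To make Equation (1) concrete, the following minimal sketch trains a maximum entropy classifier and returns the argmax sense. It is an illustration under assumptions, not the authors' implementation: scikit-learn's LogisticRegression plays the role of the maxent model (the paper uses Zhang Le's toolkit), and the feature map \u03a8 is reduced to binary bag-of-words features over toy contexts for the preposition in.

```python
# Sketch of the decision rule s_hat = argmax_s P(s | Psi(c)).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training contexts for "in", labeled with coarse TPP sense ids.
contexts = [
    "she now lives with relatives in Alabama",
    "the envelope arrives in the mail",
    "in separate statements the two sides said",
]
senses = ["1", "1", "5"]

psi = CountVectorizer(binary=True)           # Psi(c): binary word features
X = psi.fit_transform(contexts)
maxent = LogisticRegression(max_iter=1000)   # multinomial logistic = maxent
maxent.fit(X, senses)

def disambiguate(context):
    """Return the sense with the highest conditional probability."""
    probs = maxent.predict_proba(psi.transform([context]))[0]
    return maxent.classes_[probs.argmax()]

print(disambiguate("he kept the letter in an envelope"))
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Local collocations",
"sec_num": null
},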
{
"text": "To ensure that our model is competitive, we tested our system on the data set from the SemEval 2007 preposition WSD task (Litkowski and Hargraves, 2007) . Our baseline classifier achieved a coarse-grained accuracy of 70.7% (micro-average) on the official test set. This would have made our system the second best system in the competition, behind the MELB-YB system (Ye and Baldwin, 2007) .",
"cite_spans": [
{
"start": 121,
"end": 152,
"text": "(Litkowski and Hargraves, 2007)",
"ref_id": "BIBREF7"
},
{
"start": 366,
"end": 388,
"text": "(Ye and Baldwin, 2007)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Local collocations",
"sec_num": null
},
{
"text": "We also investigate the effect of the semantic role label by adding it as a feature to the baseline model. This pipeline model is inspired by the work of Dang and Palmer (2005) who investigated the role of SRL features in verb WSD. We add the semantic role of the prepositional phrase dominating the preposition as a feature to the WSD model. During training, the PropBank gold SRL label is used. During testing, we rely on the baseline SRL model (to be introduced in the next subsection) to predict the semantic role of the prepositional phrase. This is equivalent to first performing semantic role labeling and adding the output as a feature to the WSD classifier. In earlier experiments, we found that training on gold SRL labels gave better results than training on automatically predicted SRL labels (using crossvalidation). Note that our approach uses automatically assigned SRL labels during testing, while the system of Dang and Palmer (2005) only uses gold SRL labels.",
"cite_spans": [
{
"start": 154,
"end": 176,
"text": "Dang and Palmer (2005)",
"ref_id": "BIBREF1"
},
{
"start": 928,
"end": 950,
"text": "Dang and Palmer (2005)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Local collocations",
"sec_num": null
},
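{
"text": "The coupling itself is a one-line feature change, sketched below with hypothetical names (base_features stands for the POS, surrounding-word, and collocation features described above).

```python
# Sketch: the pipeline WSD model appends the semantic role of the
# dominating PP as one extra feature (gold at training time,
# predicted by the baseline SRL model at test time).
def pipeline_wsd_features(base_features, srl_label):
    return list(base_features) + ["SRL:" + srl_label]  # e.g. "SRL:ARGM-LOC"
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Local collocations",
"sec_num": null
},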
{
"text": "Our semantic role labeling classifier is also based on maxent models. It has been shown that maximum entropy models achieve state-of-the-art results on SRL (Xue and Palmer, 2004; Toutanova et al., 2008) . Again, we find optimal values for the training parameters through 10-fold crossvalidation on the training set.",
"cite_spans": [
{
"start": 156,
"end": 178,
"text": "(Xue and Palmer, 2004;",
"ref_id": "BIBREF15"
},
{
"start": 179,
"end": 202,
"text": "Toutanova et al., 2008)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SRL model",
"sec_num": "3.2"
},
{
"text": "By treating SRL as a classification problem, the choice of appropriate features becomes a key issue. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SRL model",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "a = argmax a P (a|t, p, v) (2) = argmax a P (a|\u03a6(t, p, v))",
"eq_num": "(3)"
}
],
"section": "SRL model",
"sec_num": "3.2"
},
{
"text": "where \u03a6(\u2022, \u2022, \u2022) is a feature map to an appropriate feature representation. For our baseline SRL model, we adopt the features used in other state-of-the-art SRL systems, which include the seven baseline features from the original work of Gildea and Jurafsky (2002) , additional features taken from Pradhan et al. 2005, and feature combinations which are inspired by the system in Xue and Palmer (2004) . Table 1 lists the features we use for easy reference.",
"cite_spans": [
{
"start": 238,
"end": 264,
"text": "Gildea and Jurafsky (2002)",
"ref_id": "BIBREF2"
},
{
"start": 380,
"end": 401,
"text": "Xue and Palmer (2004)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 404,
"end": 411,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "SRL model",
"sec_num": "3.2"
},
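{
"text": "A minimal sketch of what \u03a6(t, p, v) might look like for a few of the Table 1 features is given below; node and predicate are hypothetical parse-tree objects, and a real system would extract these values from the syntactic parse.

```python
# Sketch: a handful of Table 1 features (predicate lemma, phrase type,
# head word, relative position, voice) plus two feature combinations
# in the style of Xue and Palmer (2004).
def srl_features(node, predicate):
    feats = {
        "pred": predicate.lemma,
        "ptype": node.category,                        # e.g. "PP"
        "hw": node.head_word.lower(),
        "pos": "before" if node.start < predicate.start else "after",
        "voice": predicate.voice,                      # active / passive
    }
    feats["pred&hw"] = feats["pred"] + "|" + feats["hw"]
    feats["pred&ptype"] = feats["pred"] + "|" + feats["ptype"]
    return feats
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SRL model",
"sec_num": "3.2"
},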
{
"text": "In the pipeline model, we investigate the usefulness of the preposition sense as a feature for SRL by adding the preposition lemma concatenated with the sense number (e.g., on 1) as a feature. During training, the gold annotated preposition sense is used. During testing, the sense is automatically tagged by the baseline WSD model. This is equivalent to first running the WSD classifier for all prepositions, and adding the output preposition sense as a feature to our baseline SRL system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SRL model",
"sec_num": "3.2"
},
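{
"text": "Sketched below with a hypothetical dictionary layout:

```python
# Sketch: the pipeline SRL model adds one feature, the preposition
# lemma concatenated with its sense number (e.g. "on 1"); the sense
# is gold during training and predicted by the baseline WSD model
# during testing.
def add_sense_feature(srl_feats, prep_lemma, sense_num):
    feats = dict(srl_feats)                  # copy, don't mutate the input
    feats["prep_sense"] = f"{prep_lemma} {sense_num}"
    return feats
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SRL model",
"sec_num": "3.2"
},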
{
"text": "The two previous models seek to maximize the probability of the semantic role and the preposition sense individually, thus ignoring possible dependencies between the two. Instead of maximizing the individual probabilities, we would like to maximize the joint probability of the semantic role and the preposition sense, given the parse tree, predicate, constituent node, and surrounding context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Inference Model",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (a, s|t, p, v, c)",
"eq_num": "(4)"
}
],
"section": "Joint Inference Model",
"sec_num": "3.3"
},
{
"text": "We assume that the probability of the semantic role is already determined by the syntactic parse tree t, the predicate p, and the constituent node v, and is conditionally independent of the remaining surrounding context c given t, p, and v. Likewise, we assume that the probability of the preposition sense is conditionally independent of the parse tree t, predicate p, and constituent v, given the surrounding context c and the semantic role a. This assumption allows us to factor the joint probability into an SRL and a WSD component:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Inference Model",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(a, s) = argmax (a,s) P (a|t, p, v)\u00d7P (s|c, a)",
"eq_num": "(5)"
}
],
"section": "Joint Inference Model",
"sec_num": "3.3"
},
{
"text": "= argmax (a,s)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Inference Model",
"sec_num": "3.3"
},
{
"text": "P (a|\u03a6(t, p, v))\u00d7P (s|\u03a8(c, a))(6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Inference Model",
"sec_num": "3.3"
},
{
"text": "We observe that the first component in our joint model corresponds to the baseline SRL model and the second component corresponds to the WSD pipeline model. Because our maxent models output a complete probability distribution, we can combine both components by multiplying the probabilities. Theoretically, the joint probability could be factored in the other way, by first computing the probability of the preposition sense and then conditioning the SRL model on the predicted preposition sense. However, in our early experiments, we found that this approach gave lower classification accuracy. During testing, the classifier seeks to find the tuple of semantic role and preposition sense that maximizes the joint probability. For every semantic role, the classifier computes its probability given the SRL features, and multiplies it by the probability of the most likely preposition sense, given the context and the semantic role. The tuple that receives the highest joint probability is the final output of the joint classifier. Test ARG0 28 15 13 ARG1 374 208 166 ARG2 649 352 297 ARG3 111 67 44 ARG4 177 91 86 ARGM-ADV 141 101 40 ARGM-CAU 31 23 ",
"cite_spans": [],
"ref_spans": [
{
"start": 1032,
"end": 1176,
"text": "Test ARG0 28 15 13 ARG1 374 208 166 ARG2 649 352 297 ARG3 111 67 44 ARG4 177 91 86 ARGM-ADV 141 101 40 ARGM-CAU 31 23",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Joint Inference Model",
"sec_num": "3.3"
},
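{
"text": "The decoding step just described can be written down directly. The sketch below assumes scikit-learn-style models exposing predict_proba and classes_, and hypothetical feature builders for \u03a6 and \u03a8; it is an illustration of Equations (5)-(6), not the authors' code.

```python
# Sketch: joint decoding.  For every candidate role a, multiply
# P(a | Phi(t, p, v)) by the probability of the best sense under the
# role-conditioned WSD model P(s | Psi(c, a)), and return the
# highest-scoring (role, sense) pair.
import numpy as np

def joint_decode(srl_model, wsd_model, phi_feats, psi_feats_for_role):
    """phi_feats: a single-row feature matrix for Phi(t, p, v);
    psi_feats_for_role(a): builds Psi(c, a), the WSD features with
    the candidate role a included as a feature."""
    best_pair, best_prob = None, -1.0
    role_probs = srl_model.predict_proba(phi_feats)[0]
    for role, p_role in zip(srl_model.classes_, role_probs):
        sense_probs = wsd_model.predict_proba(psi_feats_for_role(role))[0]
        i = int(np.argmax(sense_probs))
        joint = p_role * sense_probs[i]
        if joint > best_prob:
            best_pair, best_prob = (role, wsd_model.classes_[i]), joint
    return best_pair, best_prob
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Inference Model",
"sec_num": "3.3"
},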
{
"text": "The joint model uses the probability of a preposition sense, given the semantic role of the dominating prepositional phrase. To estimate this probability, we need a corpus which is annotated with both preposition senses and semantic roles. Unfortunately, PropBank is not annotated with preposition senses. Instead, we manually annotated the seven most frequent prepositions in four sections of the PropBank corpus with their senses from the TPP dictionary. According to Jurafsky and Martin (2008) , the most frequent English prepositions are: of, in, for, to, with, on and at (in order of frequency). Our counts on Sections 2 to 21 of PropBank revealed that these top 7 prepositions account for about 65% of all prepositional phrases that are labeled with semantic roles. The annotation proceeds in the following way. First, we automatically extract all sentences which have one of the prepositions as the lexical head of a prepositional phrase. The position of the preposition is marked in the sentence. By only considering prepositional phrases, we automatically exclude occurrences of the word to before infinitives and instances of particle usage of prepositions, such as phrasal verbs. The extracted prepositions are manually tagged with their senses from the TPP dictionary. Idiomatic usage of prepositions like for example or in fact, and complex preposition constructions that involve more than one word (e.g., because of, instead of, etc.) are excluded by the annotators and compiled into a stoplist.",
"cite_spans": [
{
"start": 470,
"end": 496,
"text": "Jurafsky and Martin (2008)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Set",
"sec_num": "4"
},
{
"text": "We annotated 3854 instances of the top 7 prepo- Preposition Total Training Test at 404 260 144 for 478 307 171 in 1590 1083 507 of 97 51 46 on 408 246 162 to 532 304 228 with 345 211 134 Total 3854 2462 1392 Table 3 : Number of annotated prepositional phrases for each preposition sitions in Sections 2 to 4 and 23 of the PropBank corpus. The data shows a strong correlation between semantic roles and preposition senses that express a spatial or temporal meaning. For the preposition in, 90.8% of the instances that appear inside an ARGM-LOC are tagged with sense 1 (\"surrounded by or enclosed in\") or sense 5 (\"inclusion or involvement\"). 94.6% of the instances that appear inside an ARGM-TMP role are tagged with sense 2 (\"period of time\"). Our counts furthermore show that about one third of the annotated prepositional phrases fill core roles and that ARGM-LOC and ARGM-TMP are the most frequent roles. The detailed breakdown of semantic roles is shown in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 996,
"end": 1003,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Data Set",
"sec_num": "4"
},
{
"text": "To see how consistent humans can perform the annotation task, we computed the inter-annotator agreement between two annotators on Section 4 of the PropBank corpus. We found that the two annotators assigned the same sense in 86% of the cases. Although not directly comparable, it is interesting to note that this figure is similar to inter-annotator agreement for open-class words reported in previous work (Palmer et al., 2000) . In our final data set, all labels were tagged by the same annotator, which we believe makes our annotation reasonably consistent across different instances. Because we annotate running text, not all prepositions have the same number of annotated instances. The numbers for all seven prepositions are shown in Table 3 . In our experiments, we use Sections 2 to 4 to train the models, and Section 23 is kept for testing. Although our experiments are limited to three sections of training data, it still allows us to train competitive SRL models. Pradhan et al. (2005) have shown that the benefit of using more training data diminishes after a few thousand training instances. We found that the accuracy of our SRL baseline model, which is trained on the 5275 sentences of these three sections, is only an absolute 3.89% lower than the accuracy of the same model when it is trained on twenty sections (71.71% accuracy compared to 75.60% accuracy).",
"cite_spans": [
{
"start": 406,
"end": 427,
"text": "(Palmer et al., 2000)",
"ref_id": "BIBREF11"
},
{
"start": 974,
"end": 995,
"text": "Pradhan et al. (2005)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 739,
"end": 746,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Set",
"sec_num": "4"
},
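{
"text": "The 86% figure above is simple percent agreement, which can be computed as in the following sketch (toy labels shown).

```python
# Sketch: inter-annotator agreement as the proportion of instances
# where both annotators assigned the same sense.
def percent_agreement(labels_a, labels_b):
    assert len(labels_a) == len(labels_b)
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

print(percent_agreement(["1", "5", "2", "1"], ["1", "5", "1", "1"]))  # 0.75
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Set",
"sec_num": "4"
},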
{
"text": "We evaluate the performance of the joint model on the annotated prepositional phrases in test section 23 and compare the results with the performance of the baseline models and the pipeline models. Figure 1 shows the classification accuracy of the WSD models for each of the seven prepositions in the test section. The results show that the pipeline model and the joint model perform almost equally, with the joint model performing marginally better in the overall score. The detailed scores are given in Table 4 . Both models outperform the baseline classifier for three of the seven prepositions: at, for, and to. For the prepositions in, of, and on, the SRL feature did not affect the WSD classification accuracy significantly. For the preposition with, the classification accuracy even dropped by about 6%.",
"cite_spans": [],
"ref_spans": [
{
"start": 198,
"end": 206,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 505,
"end": 512,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
{
"text": "Performing the student's t-test, we found that the improvement for the prepositions at, for, and to is statistical significant (p < 0.05), as is the overall improvement. This confirms our hypothesis that the semantic role of the prepositional phrase is a strong hint for the preposition sense. However, our results also show that it is the SRL feature that brings the improvement, not the joint model, because the pipeline and joint model achieve about the same performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
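{
"text": "The significance check can be reproduced along the following lines. The paper does not spell out its exact test setup, so a paired t-test over per-instance correctness indicators is an assumption, and the 0/1 vectors below are toy data.

```python
# Sketch: paired Student's t-test comparing two classifiers on the
# same test instances (1 = correct, 0 = wrong).
from scipy import stats

baseline_correct = [1, 0, 1, 1, 0, 1, 0, 1]
joint_correct    = [1, 1, 1, 1, 0, 1, 1, 1]
t_stat, p_value = stats.ttest_rel(joint_correct, baseline_correct)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # significant if p < 0.05
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},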
{
"text": "For the SRL task, we report the classification accuracy over all annotated prepositional phrases in the test section and the F 1 measure for the semantic roles ARGM-LOC and ARGM-TMP. Fig ure 2 shows the results. The joint model shows a small performance increase of 0.43% over the baseline in the overall accuracy. Adding the preposition sense as a feature, on the other hand, significantly lowers the accuracy by over 2%. For ARGM-LOC and ARGM-TMP, the joint model improves the F 1 measure by about 1.3% each. The improvement of the joint model for these roles is statistically significant (p \u2264 0.05, student's ttest). Simply adding the preposition sense in the pipeline model again lowers the F 1 measure. The detailed results are listed in Table 5 . Table 5 : F 1 measure and accuracy of the baseline, pipeline, and joint model on the SRL task in test section 23, statistically significant improvements over the baseline are marked with an (*)",
"cite_spans": [],
"ref_spans": [
{
"start": 183,
"end": 186,
"text": "Fig",
"ref_id": null
},
{
"start": 743,
"end": 750,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
{
"text": "Our SRL experiments show that a pipeline model degrades the performance. The reason is the relatively high degree of noise in the WSD classification and that the pipeline model does not discriminate whether the previous classifier predicts the extra feature with high or low confidence. Instead, the model only passes on the 1best WSD prediction, which can cause the next classifier to make a wrong classification based on the erroneous prediction of the previous step. In principle, this problem can be mitigated by training the pipeline model on automatically predicted labels using cross-validation, but in our case we found that automatically predicted WSD labels decreased the performance of the pipeline model even more. In contrast, the joint model computes the full probability distribution over the semantic roles and preposition senses. If the noise level in the first classification step is low, the joint model and the pipeline model perform almost identically, as we have seen in the previous WSD experiments. But if the noise level is high, the joint model can still improve while the pipeline model drops in performance. Our experiments show that the joint model is more robust in the presence of noisy features than the pipeline model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
{
"text": "There is relatively less prior research on prepositions and prepositional phrases in the NLP community. O'Hara and Wiebe (2003) proposed a WSD system to disambiguate function tags of prepositional phrases. An extended version of their work was recently presented in (O'Hara and Wiebe, 2009) . Ye and Baldwin (2006) extended their work to a semantic role tagger specifically for prepositional phrases. Their system first classifies the semantic roles of all prepositional phrases and later merges the output with a general SRL system. Ye and Baldwin (2007) used semantic role tags from surrounding tokens as part of the MELB-YB preposition WSD system. They found that the SRL features did not significantly help their classifier, which is different from our findings. Dang and Palmer (2005) showed that semantic role features are helpful to disambiguate verb senses. Their approach is similar to our pipeline WSD model, but they do not present results with automatically predicted semantic roles. Toutanova et al. (2008) presented a re-ranking model to jointly learn the semantic roles of multiple constituents in the SRL task. Their work dealt with joint learning in SRL, but it is not directly comparable to ours. The difference is that Toutanova et al. attempt to jointly learn semantic role assignment of different constituents for one task (SRL), while we attempt to jointly learn two tasks (WSD and SRL) for one constituent. Because we only look at one constituent at a time, we do not have to restrict ourselves to a re-ranking approach like Toutanova et al., but can calculate the full joint probability distribution of both tasks. Andrew et al. (2004) propose a method to learn a joint generative inference model from partially labeled data and apply their method to the problems of word sense disambiguation for verbs and determination of verb subcategorization frames. Their motivation is similar to ours, but they focus on learning from partially labeled data and they investigate different tasks.",
"cite_spans": [
{
"start": 266,
"end": 290,
"text": "(O'Hara and Wiebe, 2009)",
"ref_id": "BIBREF10"
},
{
"start": 293,
"end": 314,
"text": "Ye and Baldwin (2006)",
"ref_id": "BIBREF16"
},
{
"start": 534,
"end": 555,
"text": "Ye and Baldwin (2007)",
"ref_id": "BIBREF17"
},
{
"start": 767,
"end": 789,
"text": "Dang and Palmer (2005)",
"ref_id": "BIBREF1"
},
{
"start": 996,
"end": 1019,
"text": "Toutanova et al. (2008)",
"ref_id": "BIBREF14"
},
{
"start": 1639,
"end": 1659,
"text": "Andrew et al. (2004)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "None of these systems attempted to jointly learn the semantics of the prepositional phrase and the preposition in a single model, which is the main contribution of our work reported in this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "We propose a probabilistic model to jointly classify the semantic role of a prepositional phrase and the sense of the associated preposition. We show that learning both tasks together leads to an improvement over competitive, individual models for both subtasks. For the WSD task, we show that the SRL information improves the classification accuracy, although joint learning does not significantly outperform a simpler pipeline model here. For the SRL task, we show that the joint model improves over both the baseline model and the pipeline model, especially for temporal and location arguments. As we only disambiguate the seven most frequent prepositions, potentially more improvement could be gained by including more prepositions into our data set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "http://verbs.colorado.edu/framesets/arrive-v.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Zhang Le's Maximum Entropy Modeling Toolkit, http://homepages.inf.ed.ac.uk/s0450736/maxent toolkit.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research was supported by a research grant R-252-000-225-112 from National University of Singapore Academic Research Fund.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Verb Sense and Subcategorization: Using Joint Inference to Improve Performance on Complementary Tasks",
"authors": [
{
"first": "Galen",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Trond",
"middle": [],
"last": "Grenager",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "150--157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Galen Andrew, Trond Grenager, and Christopher D. Manning. 2004. Verb Sense and Subcategorization: Using Joint Inference to Improve Performance on Complementary Tasks. In Proceedings of the 2004 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP 2004), pages 150-157.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The Role of Semantic Roles in Disambiguating Verb Senses",
"authors": [
{
"first": "Hoa",
"middle": [
"Trang"
],
"last": "Dang",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL-05)",
"volume": "",
"issue": "",
"pages": "42--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hoa Trang Dang and Martha Palmer. 2005. The Role of Semantic Roles in Disambiguating Verb Senses. In Proceedings of the 43rd Annual Meet- ing of the Association for Computational Linguistics (ACL-05), pages 42-49.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatic Labeling of Semantic Roles",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2002,
"venue": "Computational Linguistics",
"volume": "28",
"issue": "3",
"pages": "245--288",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Gildea and Daniel Jurafsky. 2002. Automatic Labeling of Semantic Roles. Computational Lin- guistics, 28(3):245-288.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Speech and Language Processing",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "James",
"middle": [
"H"
],
"last": "Martin",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Jurafsky and James H. Martin. 2008. Speech and Language Processing. Prentice-Hall, Inc. Up- per Saddle River, NJ, USA.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Generalized Inference with Multiple Semantic Role Labeling Systems",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Koomen",
"suffix": ""
},
{
"first": "Vasin",
"middle": [],
"last": "Punyakanok",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 9th Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "181--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Koomen, Vasin Punyakanok, Dan Roth, and Wen-tau Yih. 2005. Generalized Inference with Multiple Semantic Role Labeling Systems. In Pro- ceedings of the 9th Conference on Computational Natural Language Learning (CoNLL 2005), pages 181-184.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "An Empirical Evaluation of Knowledge Sources and Learning Algorithms for Word Sense Disambiguation",
"authors": [
{
"first": "Yoong Keok",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "41--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoong Keok Lee and Hwee Tou Ng. 2002. An Empir- ical Evaluation of Knowledge Sources and Learn- ing Algorithms for Word Sense Disambiguation. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP 2002), pages 41-48.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The Preposition Project",
"authors": [
{
"first": "Kenneth",
"middle": [
"C"
],
"last": "Litkowski",
"suffix": ""
},
{
"first": "Orin",
"middle": [],
"last": "Hargraves",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 2nd ACL-SIGSEM Workshop on The Linguistic Dimensions of Prepositions and Their Use in Computational Linguistic Formalisms and Applications",
"volume": "",
"issue": "",
"pages": "171--179",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth C. Litkowski and Orin Hargraves. 2005. The Preposition Project. In Proceedings of the 2nd ACL- SIGSEM Workshop on The Linguistic Dimensions of Prepositions and Their Use in Computational Lin- guistic Formalisms and Applications, pages 171- 179.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "SemEval-2007 Task 06: Word-Sense Disambiguation of Prepositions",
"authors": [
{
"first": "Kenneth",
"middle": [
"C"
],
"last": "Litkowski",
"suffix": ""
},
{
"first": "Orin",
"middle": [],
"last": "Hargraves",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 4th International Workshop on Semantic Evaluations",
"volume": "",
"issue": "",
"pages": "24--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth C. Litkowski and Orin Hargraves. 2007. SemEval-2007 Task 06: Word-Sense Disambigua- tion of Prepositions. In Proceedings of the 4th In- ternational Workshop on Semantic Evaluations (Se- mEval 2007), pages 24-29.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Building a Large Annotated Corpus of English: The Penn Treebank",
"authors": [
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"A"
],
"last": "Marcinkiewicz",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell P. Marcus, Mary A. Marcinkiewicz, and Beat- rice Santorini. 1993. Building a Large Annotated Corpus of English: The Penn Treebank. Computa- tional Linguistics, 19(2):313-330.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Preposition Semantic Classification via Penn Treebank and FrameNet",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "O'Hara",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 7th Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "79--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom O'Hara and Janyce Wiebe. 2003. Preposi- tion Semantic Classification via Penn Treebank and FrameNet. In Proceedings of the 7th Conference on Computational Natural Language Learning (CoNLL 2003), pages 79-86.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Exploiting Semantic Role Resources for Preposition Disambiguation",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "O'Hara",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2009,
"venue": "Computational Linguistics",
"volume": "35",
"issue": "2",
"pages": "151--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom O'Hara and Janyce Wiebe. 2009. Exploiting Se- mantic Role Resources for Preposition Disambigua- tion. Computational Linguistics, 35(2):151-184.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Sense Tagging the Penn Treebank",
"authors": [
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Hoa",
"middle": [
"Trang"
],
"last": "Dang",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Rosenzweig",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 2nd International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martha Palmer, Hoa Trang Dang, and Joseph Rosen- zweig. 2000. Sense Tagging the Penn Treebank. In Proceedings of the 2nd International Conference on Language Resources and Evaluation (LREC 2000).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The Proposition Bank: An Annotated Corpus of Semantic Roles",
"authors": [
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Kingsbury",
"suffix": ""
}
],
"year": 2005,
"venue": "Computational Linguistics",
"volume": "31",
"issue": "1",
"pages": "71--105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: An Annotated Cor- pus of Semantic Roles. Computational Linguistics, 31(1):71-105.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Support Vector Learning for Semantic Argument Classification",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Kadri",
"middle": [],
"last": "Hacioglu",
"suffix": ""
},
{
"first": "Valerie",
"middle": [],
"last": "Krugler",
"suffix": ""
},
{
"first": "Wayne",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "James",
"middle": [
"H"
],
"last": "Martin",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2005,
"venue": "Machine Learning",
"volume": "60",
"issue": "",
"pages": "11--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Kadri Hacioglu, Valerie Krugler, Wayne Ward, James H. Martin, and Daniel Juraf- sky. 2005. Support Vector Learning for Semantic Argument Classification. Machine Learning, 60(1- 3):11-39.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A Global Joint Model for Semantic Role Labeling",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "34",
"issue": "2",
"pages": "161--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova, Aria Haghighi, and Christopher D. Manning. 2008. A Global Joint Model for Se- mantic Role Labeling. Computational Linguistics, 34(2):161-191.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Calibrating Features for Semantic Role Labeling",
"authors": [
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "88--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nianwen Xue and Martha Palmer. 2004. Calibrating Features for Semantic Role Labeling. In Proceed- ings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP 2004), pages 88-94.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Semantic Role Labeling of Prepositional Phrases",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2006,
"venue": "ACM Transactions on Asian Language Information Processing (TALIP)",
"volume": "5",
"issue": "3",
"pages": "228--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Ye and Timothy Baldwin. 2006. Seman- tic Role Labeling of Prepositional Phrases. ACM Transactions on Asian Language Information Pro- cessing (TALIP), 5(3):228-244.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "MELB-YB: Preposition Sense Disambiguation Using Rich Semantic Features",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 4th International Workshop on Semantic Evaluations",
"volume": "",
"issue": "",
"pages": "241--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Ye and Timothy Baldwin. 2007. MELB-YB: Preposition Sense Disambiguation Using Rich Se- mantic Features. In Proceedings of the 4th Interna- tional Workshop on Semantic Evaluations (SemEval 2007), pages 241-244.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Classification accuracy of the WSD models for the seven most frequent prepositions in test section 23",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF1": {
"text": "F 1 measure of the SRL models for ARGM-LOC and ARGM-TMP, and overall accuracy on prepositional phrases in test section 23",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF0": {
"text": "Features are encoded as binary-valued functions. During testing, the maxent model computes",
"html": null,
"content": "<table><tr><td colspan=\"2\">Baseline Features (Gildea and Jurafsky, 2002)</td></tr><tr><td>pred</td><td>predicate lemma</td></tr><tr><td>path</td><td>path from constituent to predicate</td></tr><tr><td>ptype</td><td>syntactic category (NP, PP, etc.)</td></tr><tr><td>pos</td><td>relative position to the predicate</td></tr><tr><td>voice</td><td>active or passive voice</td></tr><tr><td>hw</td><td>syntactic head word of the phrase</td></tr><tr><td>sub-cat</td><td>rule expanding the predicate's parent</td></tr><tr><td colspan=\"2\">Advanced Features (Pradhan et al., 2005)</td></tr><tr><td>hw POS</td><td>POS of the syntactic head word</td></tr><tr><td>PP hw/POS</td><td>head word and POS of the rightmost</td></tr><tr><td/><td>NP child if the phrase is a PP</td></tr><tr><td>first/last word</td><td>first/last word and POS in the con-</td></tr><tr><td/><td>stituent</td></tr><tr><td>parent ptype</td><td>syntactic category of the parent node</td></tr><tr><td colspan=\"2\">parent hw/POS head word and POS of the parent</td></tr><tr><td>sister ptype</td><td>phrase type of left and right sister</td></tr><tr><td>sister hw/POS</td><td>head word and POS of left and right</td></tr><tr><td/><td>sister</td></tr><tr><td>temporal</td><td>temporal key words present</td></tr><tr><td>partPath</td><td>partial path predicate</td></tr><tr><td>proPath</td><td>projected path without directions</td></tr><tr><td colspan=\"2\">Feature Combinations (Xue and Palmer, 2004)</td></tr><tr><td>pred &amp; ptype</td><td>predicate and phrase type</td></tr><tr><td>pred &amp; hw</td><td>predicate and head word</td></tr><tr><td>pred &amp; path</td><td>predicate and path</td></tr><tr><td>pred &amp; pos</td><td>predicate and relative position</td></tr></table>",
"num": null,
"type_str": "table"
},
"TABREF1": {
"text": "",
"html": null,
"content": "<table/>",
"num": null,
"type_str": "table"
},
"TABREF3": {
"text": "",
"html": null,
"content": "<table><tr><td>: Number of annotated prepositional</td></tr><tr><td>phrases for each semantic role</td></tr></table>",
"num": null,
"type_str": "table"
},
"TABREF4": {
"text": "-",
"html": null,
"content": "<table><tr><td colspan=\"3\">Preposition Baseline Pipeline at 70.83 78.47 * for 41.52 49.12 *</td><td>Joint 78.47 * 49.12 *</td></tr><tr><td>in</td><td>62.33</td><td>61.74</td><td>61.93</td></tr><tr><td>of</td><td>43.48</td><td>43.48</td><td>43.48</td></tr><tr><td>on to</td><td>51.85 58.77</td><td>51.85 67.11 *</td><td>52.47 66.67 *</td></tr><tr><td>with Total</td><td>44.78 56.54</td><td>38.06 58.76 *</td><td>38.06 58.84 *</td></tr></table>",
"num": null,
"type_str": "table"
},
"TABREF5": {
"text": "",
"html": null,
"content": "<table><tr><td/><td/><td colspan=\"3\">: Classification accuracy of the baseline,</td></tr><tr><td colspan=\"5\">pipeline, and joint model on the WSD task in test</td></tr><tr><td colspan=\"5\">section 23, statistically significant improvements</td></tr><tr><td colspan=\"5\">over the baseline are marked with an (*)</td></tr><tr><td/><td>90%</td><td/><td/></tr><tr><td/><td/><td/><td>Baseline</td></tr><tr><td/><td/><td/><td>Pipeline</td></tr><tr><td/><td>85%</td><td/><td>Joint</td></tr><tr><td>f1\u2212measure</td><td>75% 80%</td><td/><td/></tr><tr><td/><td>70%</td><td/><td/></tr><tr><td/><td>65%</td><td>Argm\u2212LOC</td><td>Argm\u2212TMP</td><td>Overall</td></tr></table>",
"num": null,
"type_str": "table"
}
}
}
}