{
"paper_id": "H05-1035",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:34:13.258731Z"
},
"title": "PP-attachment disambiguation using large context",
"authors": [
{
"first": "Marian",
"middle": [],
"last": "Olteanu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Texas at Dallas Richardson",
"location": {
"postCode": "75080",
"region": "TX"
}
},
"email": "[email protected]"
},
{
"first": "Dan",
"middle": [],
"last": "Moldovan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Texas at Dallas Richardson",
"location": {
"postCode": "75080",
"region": "TX"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Prepositional Phrase-attachment is a common source of ambiguity in natural language. The previous approaches use limited information to solve the ambiguity-four lexical heads-although humans disambiguate much better when the full sentence is available. We propose to solve the PP-attachment ambiguity with a Support Vector Machines learning model that uses complex syntactic and semantic features as well as unsupervised information obtained from the World Wide Web. The system was tested on several datasets obtaining an accuracy of 93.62% on a Penn Treebank-II dataset; 91.79% on a FrameNet dataset when no manuallyannotated semantic information is provided and 92.85% when semantic information is provided.",
"pdf_parse": {
"paper_id": "H05-1035",
"_pdf_hash": "",
"abstract": [
{
"text": "Prepositional Phrase-attachment is a common source of ambiguity in natural language. The previous approaches use limited information to solve the ambiguity-four lexical heads-although humans disambiguate much better when the full sentence is available. We propose to solve the PP-attachment ambiguity with a Support Vector Machines learning model that uses complex syntactic and semantic features as well as unsupervised information obtained from the World Wide Web. The system was tested on several datasets obtaining an accuracy of 93.62% on a Penn Treebank-II dataset; 91.79% on a FrameNet dataset when no manuallyannotated semantic information is provided and 92.85% when semantic information is provided.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "1 Problem description",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Prepositional Phrase-attachment is a source of ambiguity in natural language that generates a significant number of errors in syntactic parsing. For example the sentence \"I saw yesterday the man in the park with a telescope\" has 5 different semantic interpretations based on the way the prepositional phrases \"in the park\" and \"with the telescope\" are attached: I saw yesterday [the man [in The problem can be viewed as a decision of attaching a prepositional phrase (PP) to one of the preceding head nouns or verbs. The ambiguity expressed by the number of potential parse trees generated by Context-Free Grammars increases exponentially with the number of PPs. For a PP that follows the object of a verb there are 2 parse trees, for a chain of 2, 3, 4 and 5 PPs there are respectively 5, 14, 42 and 132 parse trees. Usually the average number of consecutive PPs in a sentence increases linearly with the length of the sentence.",
"cite_spans": [
{
"start": 378,
"end": 390,
"text": "[the man [in",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PP-attachment ambiguity problem",
"sec_num": "1.1"
},
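{
"text": "The counts above (2, 5, 14, 42 and 132) follow the Catalan sequence: a chain of k consecutive PPs after the verb's object admits C_{k+1} bracketings. A minimal sketch (ours, not from the paper) that reproduces these numbers:\n\nfrom math import comb\n\ndef pp_attachment_parses(k: int) -> int:\n    # number of parse trees for k consecutive PPs following the object\n    # of a verb: the Catalan number C_{k+1}\n    n = k + 1\n    return comb(2 * n, n) // (n + 1)\n\nprint([pp_attachment_parses(k) for k in range(1, 6)])  # [2, 5, 14, 42, 132]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PP-attachment ambiguity problem",
"sec_num": "1.1"
},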
{
"text": "Lexical and syntactic information alone is not sufficient to resolve the PP-attachment problem; often semantic and/or contextual information is necessary. For example, in \"I ate a pizza with anchovies\", \"with anchovies\" attaches to the noun \"pizza\", where as in \"I ate a pizza with friends.\", \"with friends\" attaches to the verb \"eat\" -example found in (McLauchlan, 2001 ). There are instances of PP-attachment, like the one in \"I saw the car in the picture\" that can be disambiguated only by using contextual discourse information.",
"cite_spans": [
{
"start": 353,
"end": 370,
"text": "(McLauchlan, 2001",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PP-attachment ambiguity problem",
"sec_num": "1.1"
},
{
"text": "Usually, people don't have much trouble in finding the right way to attach PPs. But if one limits the information used for disambiguation of the PPattachment to include only the verb, the noun representing its object, the preposition and the main noun in the PP, the accuracy for human decision degrades from 93.2% to 88.2% (Ratnaparkhi et al., 1994 ) on a dataset extracted from Penn Treebank (Marcus et al., 1993) .",
"cite_spans": [
{
"start": 324,
"end": 349,
"text": "(Ratnaparkhi et al., 1994",
"ref_id": "BIBREF14"
},
{
"start": 394,
"end": 415,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PP-attachment ambiguity problem",
"sec_num": "1.1"
},
{
"text": "Syntactic parsing is essential for many natural language applications such as Machine Translation, Question Answering, Information Extraction, Information Retrieval, Automatic Speech Recognition. Since parsing occurs early in the chain of NLP processing steps it has a large impact on the overall system performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "1.2"
},
{
"text": "Our approach to solve the PP-attachment ambiguity is based on a Support Vector Machines learner (Cortes and Vapnik, 1995) . The feature set contains complex information extracted automatically from candidate syntax trees generated by parsing (Charniak, 2000) , trees that will be improved by more accurate PP-attachment decisions. Some of these features were proven efficient for semantic information labeling (Gildea and Jurafsky, 2002) . The feature set also includes unsupervised information obtained from a very large corpus (World Wide Web). Features containing manually annotated semantic information about the verb and about the objects of the verb have also been used. We adopted the standard approach to distinguish between verb and noun attachment; thus the classifier has to choose between two classes: V when the prepositional phrase is attached to the verb and N when the prepositional phrase is attached to the preceding head noun.",
"cite_spans": [
{
"start": 96,
"end": 121,
"text": "(Cortes and Vapnik, 1995)",
"ref_id": "BIBREF6"
},
{
"start": 242,
"end": 258,
"text": "(Charniak, 2000)",
"ref_id": "BIBREF4"
},
{
"start": 410,
"end": 437,
"text": "(Gildea and Jurafsky, 2002)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "2"
},
{
"text": "To be able to extract the required features from a dataset instance, one must identify the verb, the phrase identifying the object of the verb that precedes the prepositional phrase in question (np1) which usually is part of the predicate-argument structure of the verb, its head noun, the prepositional phrase (np2), its preposition and its head noun (the second most important word in the PP).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "We have adopted the notation from (Collins and Brooks, 1995) , where v is the verb, n 1 is the head noun of object phrase, p is the preposition and n 2 is the head noun of the prepositional phrase.",
"cite_spans": [
{
"start": 34,
"end": 60,
"text": "(Collins and Brooks, 1995)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "Compared to our datasets, Ratnaparkhi's dataset (Ratnaparkhi et al., 1994) contains only the lexical heads v, n 1 , p and n 2 . Thus, our methodology cannot be applied to Ratnaparkhi's dataset (RRR) .",
"cite_spans": [
{
"start": 48,
"end": 74,
"text": "(Ratnaparkhi et al., 1994)",
"ref_id": "BIBREF14"
},
{
"start": 171,
"end": 198,
"text": "Ratnaparkhi's dataset (RRR)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "In our experiments we used two datasets:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "\u2022 FN -extracted from FrameNet II 1.1 (Baker et al., 1998) \u2022 TB2 -extracted from Penn Treebank-II Table 1 presents the datasets 1 . The creation of the datasets is described in details in (Olteanu, 2004) .",
"cite_spans": [
{
"start": 37,
"end": 57,
"text": "(Baker et al., 1998)",
"ref_id": "BIBREF2"
},
{
"start": 187,
"end": 202,
"text": "(Olteanu, 2004)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 97,
"end": 104,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "The experiments described in this paper use a set of discrete (alphanumeric) and continuous (numeric) features. All features are fully deterministic, except the features count-ratio and pp-count that are based on information provided by an external resource -Google search engine (http://www.google. com).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
{
"text": "In describing the features, we will use the Penn Treebank-II parse tree associated with the sentence \"The Lorillard spokeswoman said asbestos was used in \"very modest amounts\" in making paper for the filters in the early 1950s and replaced with a different type of filter in 1956\". Table 2 describes the features and the origin of each feature. The preposition is the feature with the most discriminative power, because of preferences of particular prepositions to attach to verbs or nouns. Table 3 shows the distribution of top 10 most frequently used prepositions in the FN and TB2 datasets.",
"cite_spans": [],
"ref_spans": [
{
"start": 282,
"end": 289,
"text": "Table 2",
"ref_id": "TABREF4"
},
{
"start": 491,
"end": 498,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
{
"text": "The features were carefully designed so that, when they are extracted from gold parse trees, they don't provide more information useful for disambiguation than when they are automatically generated using a parser. This claim is validated by the experimental results that show a strong correlation between the results on the two datasets -one based on automatically generated parse trees (FN) and one based on gold parse trees (TB2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
{
"text": "Next, we describe in further detail the features presented in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 62,
"end": 69,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
{
"text": "v-frame represents the frame of the verb -the frame to which the verb belongs, as it is present in FrameNet (manually annotated). We used this feature because the frame of the verb describes very well the semantic behavior of the verb including the predicate-argument structure of the verb, which entails the affinity of the verb for certain prepositions. Hindle'93, ...] n1-surface: surface form of n1. May be morphologically processed [Hindle'93, ...] p: the preposition, lower-cased [Hindle'93, ...] n2-surface: surface form of n 2 . May be morphologically processed [Ratnaparkhi'94, Collins'95, ...] n1-mp/n1-mpf: morph. processing of n 1 [Collins'95] n2-mp/n2-mpf: morph. processing of n2 [Collins'95] v-lemma: lemma of the verb [Collins'95] path: path in the candidate parse tree between the verb and np1 [Gildea'02] subcategorization: subcategorization of the verb [modified from Pradhan'03] v-pos: part-of-speech of the verb v-voice: voice of the verb n1-pos: part-of-speech of n1 n1-lemma: lemma of n 1 . May be morphologically processed n2-pos: part-of-speech of n 2 n2-lemma: lemma of n 2 .",
"cite_spans": [
{
"start": 356,
"end": 366,
"text": "Hindle'93,",
"ref_id": null
},
{
"start": 367,
"end": 371,
"text": "...]",
"ref_id": null
},
{
"start": 437,
"end": 448,
"text": "[Hindle'93,",
"ref_id": null
},
{
"start": 449,
"end": 453,
"text": "...]",
"ref_id": null
},
{
"start": 486,
"end": 497,
"text": "[Hindle'93,",
"ref_id": null
},
{
"start": 498,
"end": 502,
"text": "...]",
"ref_id": null
},
{
"start": 570,
"end": 586,
"text": "[Ratnaparkhi'94,",
"ref_id": null
},
{
"start": 587,
"end": 598,
"text": "Collins'95,",
"ref_id": null
},
{
"start": 599,
"end": 603,
"text": "...]",
"ref_id": null
},
{
"start": 643,
"end": 655,
"text": "[Collins'95]",
"ref_id": null
},
{
"start": 694,
"end": 706,
"text": "[Collins'95]",
"ref_id": null
},
{
"start": 734,
"end": 746,
"text": "[Collins'95]",
"ref_id": null
},
{
"start": 811,
"end": 822,
"text": "[Gildea'02]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
{
"text": "May be morphologically processed position: position of np1 relative to the verb [ Table 3 : Distribution of the first 10 most-frequent prepositions in the FN and TB2 datasets n1-sr represents the semantic role of the object phrase np1 -the label attached to the Frame Element (manual semantic annotation that can be found in FrameNet). This feature was introduced because of the relation between the underlying meaning of np1 and its semantic role.",
"cite_spans": [
{
"start": 80,
"end": 81,
"text": "[",
"ref_id": null
}
],
"ref_spans": [
{
"start": 82,
"end": 89,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
{
"text": "n1-tr represents the thematic role of the object phrase np1 -a coarse-grained role based on the label attached to the Frame Element (manual semantic annotation that can be found in FrameNet). It was introduced to reduce data sparseness for the n1-sr feature. The conversion from fine-grained semantic role to coarse-grained semantic role is done automatically using a table that maps a pair of a framelevel semantic role (FE label) and a frame to a thematic role.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
{
"text": "subcategorization contains a semi-lexicalized description of the structure of the verb phrase. A subcategorization frame is closely related to the predicate argument structure and to the underlying meaning of the verb. It contains an ordered set of all the phrase labels that are siblings of the verb, plus a marker for the verb. If the child phrase of the verb is a PP, then the label will also contain the preposition (the headword of the PP). This feature is a modified form of the sub-categorization feature described in (Pradhan et al., 2003) : the differences in various part-of-speeches for the verb were ignored and the preposition that heads a prepositional phrase is also attached to the label. Therefore, for the sentence \"The stock declined in June by 4%\", the value for this feature is *-PPin-PPby.",
"cite_spans": [
{
"start": 512,
"end": 547,
"text": "described in (Pradhan et al., 2003)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
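{
"text": "To make the construction of this feature concrete, the following sketch derives the value for the example above (our code, using nltk's Tree as an assumed representation of the candidate parse; not the authors' implementation):\n\nfrom nltk.tree import Tree\n\ndef subcategorization(vp, verb_index):\n    # sibling labels of the verb inside its VP, with '*' marking the verb\n    # and PP labels suffixed with their heading preposition\n    parts = []\n    for i, child in enumerate(vp):\n        if i == verb_index:\n            parts.append('*')\n        elif child.label() == 'PP':\n            parts.append('PP' + child.leaves()[0].lower())\n        else:\n            parts.append(child.label())\n    return '-'.join(parts)\n\n# 'The stock declined in June by 4%'\nvp = Tree('VP', [Tree('VBD', ['declined']),\n                 Tree('PP', [Tree('IN', ['in']), Tree('NP', [Tree('NNP', ['June'])])]),\n                 Tree('PP', [Tree('IN', ['by']), Tree('NP', [Tree('CD', ['4']), Tree('NN', ['%'])])])])\nprint(subcategorization(vp, 0))  # *-PPin-PPby",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},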
{
"text": "In the TB2 dataset the parse trees are gold standard (contain the expected output value for PPambiguity resolution). In the case of a verb attachment, if the selected PP is a child of the selected VP, then by applying the algorithm, the value of the feature will contain the PP label plus the preposition. This clearly is a clue for the learner that the instance is a verb attachment. To overcome this problem for datasets based on gold-standard parse trees, when computing the value of the subcategorization feature the selected PP will not be used. Figure 1 shows the subcategorization for the phrase \"replaced with a different type of filter in 1956\". path expresses the syntactic relation between the verb v and the object phrase np1. Its purpose is to describe the syntactic relation of np1 to the rest of the clause by the syntactic relation of np1 with the head of the clause -v. We adopted this feature from (Gildea and Jurafsky, 2002) . path describes the chain of labels in the tree from v to np1, includ-ing the label of v and np1. Ascending movements and descending movements are depicted separately. We used two variants of this feature to determine the optimum version for our problem -one with full POS of the verb and one with POS reduced to \"VB\". The experiments proved that the second variant provides a better performance. Figure 2 depicts the path between \"replaced\" and \"a different type of filter\": VBN\u2191VP\u2193PP\u2193NP or VB\u2191VP\u2193PP\u2193NP. position indicates the position of the n 1 -p-n 2 construction relative to the verb, i.e. whether the prepositional phrase in question lies before the verb or after the verb in the sentence. Position is very important in deciding the type of attachment, considering the totally different distribution of PPs constructions preceding the verb and PPs constructions following the verb.",
"cite_spans": [
{
"start": 916,
"end": 943,
"text": "(Gildea and Jurafsky, 2002)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 551,
"end": 559,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 1342,
"end": 1350,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
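{
"text": "A sketch of how such a path can be computed (our code, under the assumption that the verb and np1 are addressed by nltk tree positions; not the authors' implementation):\n\nfrom nltk.tree import Tree\n\ndef path_feature(tree, v_pos, np1_pos, reduce_pos=True):\n    # labels from the verb up to the lowest common ancestor, then down\n    # to np1; ascending moves are marked \u2191 and descending moves \u2193\n    k = 0\n    while k < min(len(v_pos), len(np1_pos)) and v_pos[k] == np1_pos[k]:\n        k += 1\n    label = lambda pos: tree[pos].label()\n    up = [label(v_pos[:i]) for i in range(len(v_pos), k - 1, -1)]\n    down = [label(np1_pos[:i]) for i in range(k + 1, len(np1_pos) + 1)]\n    if reduce_pos and up[0].startswith('VB'):\n        up[0] = 'VB'  # the variant found to perform better\n    return up[0] + ''.join('\u2191' + l for l in up[1:]) + ''.join('\u2193' + l for l in down)\n\nvp = Tree('VP', [Tree('VBN', ['replaced']),\n                 Tree('PP', [Tree('IN', ['with']), Tree('NP', [Tree('NN', ['filter'])])])])\nprint(path_feature(vp, (0,), (1, 1)))  # VB\u2191VP\u2193PP\u2193NP",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},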
{
"text": "Morphological processing applied to n 1 and n 2 was inspired by the algorithm described in (Collins and Brooks, 1995) . We analyzed the impact of different levels of morphological processing by using two types: partial morphological processing (only numbers and years are converted) -identified by adding -mp as a suffix to the name of this featureand full morphological processing (numbers, years and capitalized names) -identified by adding -mpf as a suffix to the name of this feature. The purpose of morphological processing is data sparseness reduction by clustering similar values for this feature.",
"cite_spans": [
{
"start": 91,
"end": 117,
"text": "(Collins and Brooks, 1995)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
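{
"text": "A sketch of the two processing levels (the exact token patterns are our assumptions, not the authors' rules):\n\nimport re\n\ndef morph_process(token, full=False):\n    # partial processing collapses numbers and years into single symbols;\n    # full processing additionally collapses capitalized names\n    if re.fullmatch(r'(19|20)\\\\d\\\\d', token):\n        return '<year>'\n    if re.fullmatch(r'[\\\\d.,]+%?', token):\n        return '<number>'\n    if full and token[:1].isupper():\n        return '<name>'\n    return token\n\nprint(morph_process('1956'), morph_process('4'), morph_process('Lorillard', full=True))\n# <year> <number> <name>",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},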
{
"text": "n1-parent represents the phrase label of the parent of np1 and it cannot be used on gold parse trees (TB2 dataset) because it will provide a clue about the correct attachment type.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
{
"text": "n2-det is called the determination of the prepositional phrase np2. This novel feature tells if n 2 is preceded in np2 by a possessive pronoun or by a determiner. This is used to differentiate between \"buy books for children\" (which is probably a noun attachment) and \"buy books for her children\" (which very probably is a verb attachment).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
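{
"text": "A sketch of this feature (the token sets below are our assumptions):\n\nPOSSESSIVES = {'my', 'your', 'his', 'her', 'its', 'our', 'their'}\nDETERMINERS = {'a', 'an', 'the', 'this', 'that', 'these', 'those'}\n\ndef n2_det(np2_tokens, n2_index):\n    # is n2 preceded inside np2 by a possessive pronoun or a determiner?\n    before = {t.lower() for t in np2_tokens[:n2_index]}\n    if before & POSSESSIVES:\n        return 'possessive'\n    if before & DETERMINERS:\n        return 'determiner'\n    return 'none'\n\nprint(n2_det(['for', 'children'], 1))         # none\nprint(n2_det(['for', 'her', 'children'], 2))  # possessive",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},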
{
"text": "parser-vote feature represents the choice of the parser (Charniak's parser) in the PP-attachment resolution. It cannot be used with gold-standard parse trees because it will provide the right answer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
{
"text": "count-ratio represents the estimated ratio between the frequency of an unambiguous verb attachment construction based on v, p and n 2 and the frequency of a probably unambiguous noun attachment construction based on n 1 , p and n 2 in a very large corpus. A very large corpus is required to overcome the data sparseness inherent for complex constructions like those described above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
{
"text": "We chose the World Wide Web as a corpus and Google as a query interface (see (Olteanu, 2004) for details).",
"cite_spans": [
{
"start": 77,
"end": 92,
"text": "(Olteanu, 2004)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
{
"text": "Let's consider the estimated frequency of unambiguous verb-attachments and respectively nounattachments defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
{
"text": "f v = c v\u2212p\u2212n2 c v \u2022 c p\u2212n2 f n = c n1\u2212p\u2212n2 c n1 \u2022 c p\u2212n2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
{
"text": "where:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
{
"text": "\u2022 c v\u2212p\u2212n2 is the number of occurrences of the phrase \"v p n 2 \", \"v p * n 2 \" (where * symbolizes any word), \"v-lemma p n 2 \" or \"v-lemma p * n 2 \" in World Wide Web, as reported by Google",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
{
"text": "\u2022 c v is the number of occurrences of the word \"v\" or \"v-lemma\" in WWW",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
{
"text": "\u2022 c p\u2212n2 is the number of occurrences of the phrase \"p n 2 \" or \"p * n 2 \" in WWW",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
{
"text": "\u2022 c n1\u2212p\u2212n2 is the number of occurrences of the phrase \"n 1 p n 2 \" or \"v p * n 2 \" in WWW",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
{
"text": "\u2022 c n1 is the number of occurrences of the word \"n 1 \" in WWW",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
{
"text": "The value for this feature is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
{
"text": "count \u2212 ratio = log 10 f v f n = log 10 c v\u2212p\u2212n2 \u2022 c n1 c n1\u2212p\u2212n2 \u2022 c v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
{
"text": "We chose logarithmic values for this feature because experiments showed that logarithmic values provide a higher accuracy than linear values. Also, by experimentation we concluded that value bounding is helpful, and the feature was bounded to values between -3 and 3 on the logarithmic scale, unless specified otherwise in the experiment description.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
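{
"text": "Note that c_{p-n_2} cancels in the ratio, so the feature needs only four counts. A minimal sketch (the +1 smoothing against zero counts is our assumption):\n\nimport math\n\ndef count_ratio(c_v_p_n2, c_v, c_n1_p_n2, c_n1, bound=3.0):\n    # log10 of the estimated verb-attachment vs. noun-attachment frequency\n    # ratio, clipped to [-bound, bound] as described above\n    value = math.log10(((c_v_p_n2 + 1) * (c_n1 + 1)) / ((c_n1_p_n2 + 1) * (c_v + 1)))\n    return max(-bound, min(bound, value))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},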
{
"text": "This feature resembles the approach adopted in (Volk, 2001) .",
"cite_spans": [
{
"start": 47,
"end": 59,
"text": "(Volk, 2001)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
{
"text": "pp-count depicts the estimated count of occurrences in World Wide Web of the prepositional phrases based on p and n 2 . The count is estimated by c p\u2212n2 . Therefore pp-count = log 10 (c p\u2212n2 + c p\u2212 * \u2212n2 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
{
"text": "n1-p-distance depicts the distance (in tokens) between n 1 and p. Let d n1\u2212p be the distance between n 1 and p (d = 1 if there is no other token between n 1 and p). Thus n1-p-distance = log 10 (1 + log 10 d n1\u2212p ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
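{
"text": "Minimal sketches of the two formulas above (the +1 smoothing in pp_count is our assumption):\n\nimport math\n\ndef pp_count(c_p_n2, c_p_star_n2):\n    # pp-count = log10(c_{p-n2} + c_{p-*-n2})\n    return math.log10(c_p_n2 + c_p_star_n2 + 1)\n\ndef n1_p_distance(d):\n    # d = 1 when n1 and p are adjacent, making the feature value 0\n    return math.log10(1 + math.log10(d))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},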
{
"text": "We used in our experiments a Support Vector Machines learner with Radial Basis Function kernel as implemented in the LIBSVM toolkit (http://www.csie.ntu.edu.tw/ \u223c cjlin/ libsvm/).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning model and procedure",
"sec_num": "5"
},
{
"text": "We converted the feature tuples (containing discrete alphanumeric and continuous values) to multidimensional vectors using the following procedure:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning model and procedure",
"sec_num": "5"
},
{
"text": "\u2022 Discrete features: assign to each possible value of each feature a dimension in the vector space, and to each feature value in each training or test example put 1 in the dimension corresponding to the feature value and 0 in all other dimensions associated with that feature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning model and procedure",
"sec_num": "5"
},
{
"text": "\u2022 Continuous features: assign a dimension and put the scaled value in the multi-dimensional vector (all examples in training data will span between 0 and 1 for that particular dimension).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning model and procedure",
"sec_num": "5"
},
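{
"text": "A sketch of the conversion (the library choices are ours; the paper specifies the mapping, not an implementation):\n\nimport numpy as np\nfrom sklearn.feature_extraction import DictVectorizer\n\n# one dimension per observed value of each discrete feature (one-hot)\ndiscrete = [{'p': 'with', 'position': 'after'},\n            {'p': 'of', 'position': 'before'}]\nX_disc = DictVectorizer().fit_transform(discrete).toarray()\n\n# each continuous feature is scaled so the training data spans [0, 1]\ncount_ratio = np.array([[1.7], [-2.1]])\nlo, hi = count_ratio.min(axis=0), count_ratio.max(axis=0)\nX_cont = (count_ratio - lo) / (hi - lo)\n\nX = np.hstack([X_disc, X_cont])  # final multi-dimensional vectors",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning model and procedure",
"sec_num": "5"
},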
{
"text": "SVM training was preceded by finding the optimal \u03b3 and C parameters required for training using 2-fold cross validation, which was found to be superior in model accuracy and training time over higher folds cross-validations (Olteanu, 2004) .",
"cite_spans": [
{
"start": 224,
"end": 239,
"text": "(Olteanu, 2004)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning model and procedure",
"sec_num": "5"
},
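{
"text": "A sketch of this procedure (scikit-learn's SVC wraps LIBSVM; the grid values and the toy data are our assumptions):\n\nimport numpy as np\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn.svm import SVC\n\nX = np.random.rand(40, 8)      # stand-in for the scaled feature vectors\ny = np.array(['V', 'N'] * 20)  # V = verb attachment, N = noun attachment\n\n# 2-fold cross-validation over a (C, gamma) grid for an RBF-kernel SVM\nparam_grid = {'C': [2.0 ** k for k in range(-2, 9, 2)],\n              'gamma': [2.0 ** k for k in range(-9, 2, 2)]}\nsearch = GridSearchCV(SVC(kernel='rbf'), param_grid, cv=2, scoring='accuracy')\nsearch.fit(X, y)\nprint(search.best_params_, search.best_score_)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning model and procedure",
"sec_num": "5"
},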
{
"text": "The criterion for selecting the best set of features was the accuracy on the cross-validation. Thus, the development of the models was performed entirely on the training set, which acted also as a development set. We later computed the accuracy on the test set on some representative models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning model and procedure",
"sec_num": "5"
},
{
"text": "For each dataset, we conducted experiments to determine an efficient combination of features and the accuracy on test data for the best combination of features. We also run the experimental procedure on the original Ratnaparkhi's dataset in order to compare SVM with other machine learning techniques applied to PP-attachment problem. FN-basic-flw uses v-surface, n1-surface, p and n2-surface on examples that follow the verb. FNlex-syn-flw uses v-surface, v-pos, v-lemma, subcategorization, path (full POS), position, n1preposition, n1-surface, n1-pos, n1-lemma, n1parent, p, n2-surface, n2-pos, n2-lemma, n2det and parser-vote on examples that follow the verb. FN-best-no-sem uses v-surface, v-pos, vlemma, subcategorization, path (reduced POS) , position, n1-preposition, n1-surface, n1-pos, n1lemma-mpf, n1-parent, p, n2-surface, n2-pos, n2-lemma-mpf, n2-det, parser-vote, count-ratio and pp-count on all examples. FN-best-sem uses the same set of features as FN-best-no-sem plus vframe and n1-sr.",
"cite_spans": [
{
"start": 663,
"end": 746,
"text": "FN-best-no-sem uses v-surface, v-pos, vlemma, subcategorization, path (reduced POS)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments, results and analysis",
"sec_num": "6"
},
{
"text": "TB2-basic uses v-surface, n1-surface-mpf, p and n2-surface-mpf. TB2-best-no-www uses vsurface, v-pos, v-lemma, subcategorization, path (reduced POS), n1-preposition, n1-surface, n1mpf, n1-pos, n1-lemma, n1-np-label, p, n2surface, n2-mpf and n1-p-distance. TB2-best also uses count-ratio and pp-count.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments, results and analysis",
"sec_num": "6"
},
{
"text": "RRR-basic uses v-surface, n1-surface, p and n2-surface. RRR-basic-mpf uses v-surface, n1surface-mpf, p and n2-surface-mpf.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments, results and analysis",
"sec_num": "6"
},
{
"text": "On the FN dataset, all features except v-voice have a positive contribution to the system (n2-det, choice between semantic vs. thematic role and how should morphological processing be applied is questionable). The negative impact for the v-voice feature may be explained by the fact that the only situation in which it may potentially help is extremely rare: passive voice and the agent headed by \"by\" appears after another argument of the verb (i.e.: \"The painting was presented to the audience by its author.\"). Moreover the PP-attachment based on the preposition \"by\" is not highly ambiguous; as seen in Table 3 in the FrameNet dataset, 88% of the \"by\" ambiguity instances are verb-attachments.",
"cite_spans": [],
"ref_spans": [
{
"start": 607,
"end": 614,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments, results and analysis",
"sec_num": "6"
},
{
"text": "The experiment with the highest cross-validation accuracy has an accuracy of 92.85% on the test data. The equivalent experiment that doesn't include manually annotated semantic information has an accuracy of 91.79% on the test data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments, results and analysis",
"sec_num": "6"
},
{
"text": "On TB2 dataset, the results are close to the results obtained on the FrameNet corpus, although the distribution of noun and verb attachment differs considerably between the two data sets (70.28% are verbattachments in FN2 and 35.71% in TB2). The best accuracy in cross-validation is 92.92%, which leads to an accuracy on test set of 93.62%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments, results and analysis",
"sec_num": "6"
},
{
"text": "Because we couldn't use the standard dataset used in PP-attachment resolution (Ratnaparkhi's), we implemented back-off algorithm developed by Collins and Brooks (1995) and applied it to our TB2 dataset. Both RRR and TB2 datasets are extracted from Penn Treebank. This algorithm, trained on TB2 training set, obtains an accuracy on TB2 test set of 86.1% (85.8% when no morphological processing is applied). The same algorithm provides an accuracy on RRR dataset of 84.5% (84.1% without morphological processing). The difference in accuracy between the two datasets is 1.6% (1.7% without morphological processing when using Collins and Brooks's algorithm. The difference in accuracy between a SVM model applied to RRR dataset (RRR-basic experiment) and the same experiment applied to TB2 dataset (TB2- (Ratnaparkhi et al., 1994) 88.2 RRR Average human, whole sentence (Ratnaparkhi et al., 1994) 93.2 RRR Maximum Likelihood-based (Hindle and Rooth, 1993) 79.7 AP Maximum entropy, words (Ratnaparkhi et al., 1994) 77.7 RRR Maximum entropy, words & classes (Ratnaparkhi et al., 1994) 81.6 RRR Decision trees (Ratnaparkhi et al., 1994) 77.7 RRR Transformation-Based Learning (Brill and Resnik, 1994) 81.8 WordNet Maximum-Likelihood based (Collins and Brooks, 1995) 84.5 RRR Maximum-Likelihood based (Collins and Brooks, 1995) 86.1 TB2 Decision trees & WSD (Stetina and Nagao, 1997) 88.1 RRR WordNet Memory-based Learning (Zavrel et al., 1997) 84.4 RRR LexSpace Maximum entropy, unsupervised (Ratnaparkhi, 1998) 81.9 Maximum entropy, supervised (Ratnaparkhi, 1998) 83.7 RRR Neural Nets (Alegre et al., 1999) 86.0 RRR WordNet Boosting (Abney et al., 1999) 84.4 RRR Semi-probabilistic (Pantel and Lin, 2000) 84.31 RRR Maximum entropy, ensemble (McLauchlan, 2001) 85.5 RRR LSA SVM (Vanschoenwinkel and Manderick, 2003) 84.8 RRR Nearest-neighbor (Zhao and Lin, 2004) 86. basic experiment) is 2.9%. Also, the baseline -the most probable PP type for each preposition -is approximately the same for the two datasets (72.19% on RRR and 72.30% on TB2). One may hypothesize that the majority of the algorithms for PP-attachment disambiguation obtain no more than 4% increase in accuracy on the TB2 compared to the results on the RRR dataset. One important difference between the two datasets is the size -20,801 training examples in RRR vs. 54,629 training examples in TB2. We plan to implement more algorithms described in literature in order to verify this statement. Table 5 summarizes the results in PP-attachment ambiguity resolution found in literature along with our best results.",
"cite_spans": [
{
"start": 142,
"end": 167,
"text": "Collins and Brooks (1995)",
"ref_id": "BIBREF5"
},
{
"start": 622,
"end": 653,
"text": "Collins and Brooks's algorithm.",
"ref_id": null
},
{
"start": 800,
"end": 826,
"text": "(Ratnaparkhi et al., 1994)",
"ref_id": "BIBREF14"
},
{
"start": 866,
"end": 892,
"text": "(Ratnaparkhi et al., 1994)",
"ref_id": "BIBREF14"
},
{
"start": 927,
"end": 951,
"text": "(Hindle and Rooth, 1993)",
"ref_id": "BIBREF8"
},
{
"start": 983,
"end": 1009,
"text": "(Ratnaparkhi et al., 1994)",
"ref_id": "BIBREF14"
},
{
"start": 1052,
"end": 1078,
"text": "(Ratnaparkhi et al., 1994)",
"ref_id": "BIBREF14"
},
{
"start": 1103,
"end": 1129,
"text": "(Ratnaparkhi et al., 1994)",
"ref_id": "BIBREF14"
},
{
"start": 1169,
"end": 1193,
"text": "(Brill and Resnik, 1994)",
"ref_id": "BIBREF3"
},
{
"start": 1232,
"end": 1258,
"text": "(Collins and Brooks, 1995)",
"ref_id": "BIBREF5"
},
{
"start": 1293,
"end": 1319,
"text": "(Collins and Brooks, 1995)",
"ref_id": "BIBREF5"
},
{
"start": 1350,
"end": 1375,
"text": "(Stetina and Nagao, 1997)",
"ref_id": "BIBREF16"
},
{
"start": 1415,
"end": 1436,
"text": "(Zavrel et al., 1997)",
"ref_id": "BIBREF19"
},
{
"start": 1485,
"end": 1504,
"text": "(Ratnaparkhi, 1998)",
"ref_id": "BIBREF15"
},
{
"start": 1538,
"end": 1557,
"text": "(Ratnaparkhi, 1998)",
"ref_id": "BIBREF15"
},
{
"start": 1579,
"end": 1600,
"text": "(Alegre et al., 1999)",
"ref_id": "BIBREF1"
},
{
"start": 1627,
"end": 1647,
"text": "(Abney et al., 1999)",
"ref_id": "BIBREF0"
},
{
"start": 1676,
"end": 1698,
"text": "(Pantel and Lin, 2000)",
"ref_id": "BIBREF12"
},
{
"start": 1735,
"end": 1753,
"text": "(McLauchlan, 2001)",
"ref_id": "BIBREF10"
},
{
"start": 1771,
"end": 1808,
"text": "(Vanschoenwinkel and Manderick, 2003)",
"ref_id": "BIBREF17"
},
{
"start": 1835,
"end": 1855,
"text": "(Zhao and Lin, 2004)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 2453,
"end": 2460,
"text": "Table 5",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Comparison with previous work",
"sec_num": "7"
},
{
"text": "Other acronyms used in this table:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with previous work",
"sec_num": "7"
},
{
"text": "\u2022 AP -dataset of 13 million word sample of Associated Press news stories from 1999 (Hindle and Rooth, 1993) .",
"cite_spans": [
{
"start": 83,
"end": 107,
"text": "(Hindle and Rooth, 1993)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with previous work",
"sec_num": "7"
},
{
"text": "\u2022 LexSpace -Lexical Space -a method to measure the similarity of the words (Zavrel et al., 1997) .",
"cite_spans": [
{
"start": 75,
"end": 96,
"text": "(Zavrel et al., 1997)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with previous work",
"sec_num": "7"
},
{
"text": "\u2022 LSA -Latent Semantic Analysis -measure the lexical preferences between a preposition and a noun or a verb (McLauchlan, 2001) \u2022 DWS -Distributional Word Similarity. Words that tend to appear in the same contexts tend to have similar meanings (Zhao and Lin, 2004) \u2022 PR-WWW -the probability ratio between verb-preposition-noun and noun-prepositionnoun constructs measured using World Wide Web searching.",
"cite_spans": [
{
"start": 108,
"end": 126,
"text": "(McLauchlan, 2001)",
"ref_id": "BIBREF10"
},
{
"start": 243,
"end": 263,
"text": "(Zhao and Lin, 2004)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with previous work",
"sec_num": "7"
},
{
"text": "The Penn Treebank-II results indicate that the new features used for the disambiguation of PPattachment provide a very substantial improvement in accuracy over the base line (from 87.48% to 93.62%). This represents an absolute improvement of approximately 6.14%, equivalent to a 49% error drop. The performance of the system on Penn Treebank-II exceeds the reported human expert performance on Penn Treebank-I (Ratnaparkhi et al., 1994) by about 0.4%. A significant improvement comes from the unsupervised information collected from a very large corpus; this method proved to be efficient to overcome the data sparseness problem. By analyzing the results from the FrameNet dataset, we conclude that the contribution of the gold semantic features (frame and semantic role) is significant (1.05% difference in accuracy; 12.8% reduction in the error). We will further investigate this issue by replacing gold semantic information with automatically detected semantic information. Our additional lexico-syntactic features increase the accuracy of the system from 86.44% to 89.61% for PPs following the verb. This suggests that on the FrameNet dataset the proposed syntactic features have a considerable impact on the accuracy.",
"cite_spans": [
{
"start": 410,
"end": 436,
"text": "(Ratnaparkhi et al., 1994)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
},
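{
"text": "As a check on the arithmetic behind the reported error drop: the baseline error is 100% - 87.48% = 12.52%, the system error is 100% - 93.62% = 6.38%, and (12.52 - 6.38) / 12.52 = 0.49, i.e. the reported 49% relative error reduction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
},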
{
"text": "The best TB2 feature set is approximately the same as the best FN feature set in spite of the differences between the datasets (Parse trees: TB2gold standard; FN -automatically generated. PPattachment ambiguity identification: TB2 -parse trees; FN -a combination of trees and FE annotation. Data source: TB2 -WSJ articles; FN -BNC). This fact suggests that the selected feature sets do not exploit particularities of the datasets and that the features are relevant to the PP-attachment ambiguity problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
},
{
"text": "The datasets are available at http://www.utdallas. edu/ \u223c mgo031000/ppa/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Boosting applied to tagging and PP Attachment",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Abney",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"E"
],
"last": "Schapire",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of EMNLP/VLC-99",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Abney, Robert E. Schapire, and Yoram Singer. 1999. Boosting applied to tagging and PP Attachment. In Proceed- ings of EMNLP/VLC-99, pages 38-45.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Pp-attachment: A committee machine approach",
"authors": [
{
"first": "Martha",
"middle": [
"A"
],
"last": "Alegre",
"suffix": ""
},
{
"first": "Josep",
"middle": [
"M"
],
"last": "Sopena",
"suffix": ""
},
{
"first": "Agusti",
"middle": [],
"last": "Lloberas",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of EMNLP/VLC-99",
"volume": "",
"issue": "",
"pages": "231--238",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martha A. Alegre, Josep M. Sopena, and Agusti Lloberas. 1999. Pp-attachment: A committee machine approach. In Proceedings of EMNLP/VLC-99, pages 231-238.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The Berkeley FrameNet Project",
"authors": [
{
"first": "Collin",
"middle": [
"F"
],
"last": "Baker",
"suffix": ""
},
{
"first": "Charles",
"middle": [
"J"
],
"last": "Fillmore",
"suffix": ""
},
{
"first": "John",
"middle": [
"B"
],
"last": "Lowe",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 17th international conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "86--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet Project. In Proceedings of the 17th international conference on Computational Linguistics, pages 86-90.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A rule-based approach to prepositional phrase attachment disambiguation",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Brill",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 15th conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1198--1204",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Brill and Philip Resnik. 1994. A rule-based approach to prepositional phrase attachment disambiguation. In Pro- ceedings of the 15th conference on Computational Linguis- tics, pages 1198-1204.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A Maximum-Entropy-Inspired Parser",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of NAACL-2000",
"volume": "",
"issue": "",
"pages": "132--139",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Charniak. 2000. A Maximum-Entropy-Inspired Parser. In Proceedings of NAACL-2000, pages 132-139.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Prepositional Phrase Attachment through a Backed-Off Model",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Brooks",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the Thirds Workshop on Very Large Corpora",
"volume": "",
"issue": "",
"pages": "27--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins and James Brooks. 1995. Prepositional Phrase Attachment through a Backed-Off Model. In Proceedings of the Thirds Workshop on Very Large Corpora, pages 27-38.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Support-Vector Networks",
"authors": [
{
"first": "Corinna",
"middle": [],
"last": "Cortes",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Vapnik",
"suffix": ""
}
],
"year": 1995,
"venue": "Machine Learning",
"volume": "20",
"issue": "3",
"pages": "273--297",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Corinna Cortes and Vladimir Vapnik. 1995. Support-Vector Networks. Machine Learning, 20(3):273-297.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Automatic Labeling of Semantic Roles",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2002,
"venue": "Computational Linguistics",
"volume": "28",
"issue": "3",
"pages": "245--288",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Gildea and Daniel Jurafsky. 2002. Automatic Labeling of Semantic Roles. Computational Linguistics, 28(3):245- 288.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Structural Ambiguity and Lexical Relations",
"authors": [
{
"first": "Donald",
"middle": [],
"last": "Hindle",
"suffix": ""
},
{
"first": "Mats",
"middle": [],
"last": "Rooth",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "1",
"pages": "103--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Donald Hindle and Mats Rooth. 1993. Structural Ambi- guity and Lexical Relations. Computational Linguistics, 19(1):103-120.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Building a large annotated corpus of English: the Penn Treebank",
"authors": [
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated cor- pus of English: the Penn Treebank. Computational Linguistics, 19(2):313-330.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Maximum Entropy Models and Prepositional Phrase Ambiguity",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Mclauchlan",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark McLauchlan. 2001. Maximum Entropy Models and Prepositional Phrase Ambiguity. Master's thesis, University of Edinburgh.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Prepositional Phrase Attachment ambiguity resolution through a rich syntactic, lexical and semantic set of features applied in support vector machines learner",
"authors": [
{
"first": "Marian",
"middle": [
"G"
],
"last": "Olteanu",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marian G. Olteanu. 2004. Prepositional Phrase Attachment ambiguity resolution through a rich syntactic, lexical and semantic set of features applied in support vector machines learner. Master's thesis, University of Texas at Dallas.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "An unsupervised approach to Prepositional Phrase Attachment using contextually similar words",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
},
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 38th Meeting of the Association for Computational Linguistic",
"volume": "",
"issue": "",
"pages": "101--108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Pantel and Dekang Lin. 2000. An unsupervised ap- proach to Prepositional Phrase Attachment using contextu- ally similar words. In Proceedings of the 38th Meeting of the Association for Computational Linguistic, pages 101-108.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Semantic Role Parsing: Adding Semantic Structure to Unstructured Text",
"authors": [
{
"first": "Kadri",
"middle": [],
"last": "Sameer Pradhan",
"suffix": ""
},
{
"first": "Wayne",
"middle": [],
"last": "Hacioglu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "James",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the International Conference on Data Mining",
"volume": "",
"issue": "",
"pages": "629--632",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Kadri Hacioglu, Wayne Ward, James H. Mar- tin, and Daniel Jurafsky. 2003. Semantic Role Parsing: Adding Semantic Structure to Unstructured Text. In Pro- ceedings of the International Conference on Data Mining, pages 629-632.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A Maximum Entropy Model for Prepositional Phrase Attachment",
"authors": [
{
"first": "Adwait",
"middle": [],
"last": "Ratnaparkhi",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Reynar",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the Human Language Technology Workshop",
"volume": "",
"issue": "",
"pages": "250--255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adwait Ratnaparkhi, Jeff Reynar, and Salim Roukos. 1994. A Maximum Entropy Model for Prepositional Phrase Attach- ment. In Proceedings of the Human Language Technology Workshop, pages 250-255.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Statistical Models for Unsupervised Prepositional Phrase Attachment",
"authors": [
{
"first": "Adwait",
"middle": [],
"last": "Ratnaparkhi",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 36th conference on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1079--1085",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adwait Ratnaparkhi. 1998. Statistical Models for Unsuper- vised Prepositional Phrase Attachment. In Proceedings of the 36th conference on Association for Computational Lin- guistics, pages 1079-1085.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Corpus based PP attachment ambiguity resolution with a semantic dictionary",
"authors": [
{
"first": "Jiri",
"middle": [],
"last": "Stetina",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Nagao",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the Fifth Workshop on Very Large Corpora",
"volume": "",
"issue": "",
"pages": "66--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiri Stetina and Makoto Nagao. 1997. Corpus based PP attach- ment ambiguity resolution with a semantic dictionary. In Proceedings of the Fifth Workshop on Very Large Corpora, pages 66-80.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A weighted polynomial information gain kernel for resolving Prepositional Phrase attachment ambiguities with Support Vector Machines",
"authors": [
{
"first": "Bram",
"middle": [],
"last": "Vanschoenwinkel",
"suffix": ""
},
{
"first": "Bernard",
"middle": [],
"last": "Manderick",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "133--140",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bram Vanschoenwinkel and Bernard Manderick. 2003. A weighted polynomial information gain kernel for resolving Prepositional Phrase attachment ambiguities with Support Vector Machines. In Proceedings of the Eighteenth Inter- national Joint Conference on Artificial Intelligence, pages 133-140.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Exploiting the WWW as a corpus to resolve PP attachment ambiguities",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Volk",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of Corpus Linguistics",
"volume": "",
"issue": "",
"pages": "601--606",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Volk. 2001. Exploiting the WWW as a corpus to re- solve PP attachment ambiguities. In Proceedings of Corpus Linguistics, pages 601-606.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Resolving PP attachment Ambiguities with Memory-Based Learning",
"authors": [
{
"first": "Jakub",
"middle": [],
"last": "Zavrel",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Daelemans",
"suffix": ""
},
{
"first": "Jorn",
"middle": [],
"last": "Veenstra",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of CoNLL-97",
"volume": "",
"issue": "",
"pages": "136--144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jakub Zavrel, Walter Daelemans, and Jorn Veenstra. 1997. Resolving PP attachment Ambiguities with Memory-Based Learning. In Proceedings of CoNLL-97, pages 136-144.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A Nearest-Neighbor Method for Resolving PP-Attachment Ambiguity",
"authors": [
{
"first": "Shaojun",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of IJCNLP-04",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shaojun Zhao and Dekang Lin. 2004. A Nearest-Neighbor Method for Resolving PP-Attachment Ambiguity. In Pro- ceedings of IJCNLP-04.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Subcategorization feature: *-PPin-PPby",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF1": {
"text": "Example of a path feature",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF0": {
"text": "the park [with a telescope]]]; I saw yesterday [the man [in the park] [with a telescope]]; I saw yesterday [the man [in the park]] [with a telescope]; I saw yesterday [the man] [in the park [with a telescope]] and I saw yesterday [the man] [in the park] [with a telescope].",
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>"
},
"TABREF2": {
"text": "The datasets and their characteristics",
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>"
},
"TABREF3": {
"text": "new] v-frame: frame of the verb [new in PPA] n1-sr: semantic role of np1 [new in PPA] n1-tr: thematic role of np1 [new in PPA] n1-preposition: preposition that heads np1, if np1 is a PP [new] n1-parent: label of the parent of np1 in the candidate parse tree [new in PPA] n1-np-label: label of np1 in the candidate parse tree [new in PPA] n2-det: determination of np2 [new] parser-vote: choice of the automatic parser in attaching PP [new in PPA] count-ratio: WWW statistics about verb-attachment vs. noun-attachment for that particular instance [new] pp-count: WWW statistics about co-occurrence of v and n2 [new] n1-p-distance: the distance between n1 and p [new]",
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>"
},
"TABREF4": {
"text": "Features",
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td/><td>% of</td><td>% v-att</td><td>% of</td><td>% v-att</td></tr><tr><td>Prep.</td><td>FN</td><td>FN</td><td>TB2</td><td>TB2</td></tr><tr><td>of</td><td>13.47%</td><td>6.17%</td><td>30.14%</td><td>2.74%</td></tr><tr><td>to</td><td>13.27%</td><td>80.14%</td><td>9.55%</td><td>60.49%</td></tr><tr><td>in</td><td>12.42%</td><td>73.64%</td><td>16.94%</td><td>42.58%</td></tr><tr><td>for</td><td>6.87%</td><td>82.44%</td><td>8.95%</td><td>39.72%</td></tr><tr><td>on</td><td>6.21%</td><td>75.51%</td><td>5.16%</td><td>47.73%</td></tr><tr><td>with</td><td>6.17%</td><td>86.30%</td><td>3.79%</td><td>46.92%</td></tr><tr><td>from</td><td>5.37%</td><td>75.90%</td><td>5.76%</td><td>52.76%</td></tr><tr><td>at</td><td>4.09%</td><td>76.63%</td><td>3.21%</td><td>66.02%</td></tr><tr><td>as</td><td>3.95%</td><td>86.51%</td><td>2.49%</td><td>51.69%</td></tr><tr><td>by</td><td>3.53%</td><td>88.02%</td><td>3.27%</td><td>68.11%</td></tr></table>"
},
"TABREF5": {
"text": "",
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td>summa-</td></tr></table>"
},
"TABREF6": {
"text": "Results",
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>"
},
"TABREF9": {
"text": "Accuracy of PP-attachment ambiguity resolution (our results in bold)",
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>"
}
}
}
}