{
"paper_id": "E12-1038",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:36:41.516232Z"
},
"title": "Inferring Selectional Preferences from Part-Of-Speech N-grams",
"authors": [
{
"first": "Hyeju",
"middle": [],
"last": "Jang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University Pittsburgh",
"location": {
"postCode": "15213",
"region": "PA",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Jack",
"middle": [],
"last": "Mostow",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University Pittsburgh",
"location": {
"postCode": "15213",
"region": "PA",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present the PONG method to compute selectional preferences using part-of-speech (POS) N-grams. From a corpus labeled with grammatical dependencies, PONG learns the distribution of word relations for each POS N-gram. From the much larger but unlabeled Google N-grams corpus, PONG learns the distribution of POS N-grams for a given pair of words. We derive the probability that one word has a given grammatical relation to the other. PONG estimates this probability by combining both distributions, whether or not either word occurs in the labeled corpus. PONG achieves higher average precision on 16 relations than a state-of-the-art baseline in a pseudo-disambiguation task, but lower coverage and recall.",
"pdf_parse": {
"paper_id": "E12-1038",
"_pdf_hash": "",
"abstract": [
{
"text": "We present the PONG method to compute selectional preferences using part-of-speech (POS) N-grams. From a corpus labeled with grammatical dependencies, PONG learns the distribution of word relations for each POS N-gram. From the much larger but unlabeled Google N-grams corpus, PONG learns the distribution of POS N-grams for a given pair of words. We derive the probability that one word has a given grammatical relation to the other. PONG estimates this probability by combining both distributions, whether or not either word occurs in the labeled corpus. PONG achieves higher average precision on 16 relations than a state-of-the-art baseline in a pseudo-disambiguation task, but lower coverage and recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Selectional preferences specify plausible fillers for the arguments of a predicate, e.g., celebrate. Can you celebrate a birthday? Sure. Can you celebrate a pencil? Arguably yes: Today the Acme Pencil Factory celebrated its one-billionth pencil. However, such a contrived example is unnatural because unlike birthday, pencil lacks a strong association with celebrate. How can we compute the degree to which birthday or pencil is a plausible and typical object of celebrate?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Formally, we are interested in computing the probability Pr(r | t, R), where (as Table 1 specifies), t is a target word such as celebrate, r is a word possibly related to it, such as birthday or pencil, and R is a possible relation between them, whether a semantic role such as the agent of an action, or a grammatical dependency such as the object of a verb. We call t the \"target\" because originally it referred to a vocabulary word targeted for instruction, and r its \"relative.\"",
"cite_spans": [],
"ref_spans": [
{
"start": 81,
"end": 88,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "a relation between words t a target word r, r' possible relatives of t g a word N-gram g i and g j i th and j th words of g p the POS N-gram of g Table 1 : Notation used throughout this paper Previous work on selectional preferences has used them primarily for natural language analytic tasks such as word sense disambiguation (Resnik, 1997) , dependency parsing (Zhou et al., 2011) , and semantic role labeling (Gildea and Jurafsky, 2002) . However, selectional preferences can also apply to natural language generation tasks such as sentence generation and question generation. For generation tasks, choosing the right word to express a specified argument of a relation requires knowing its connotationsthat is, its selectional preferences. Therefore, it is useful to know selectional preferences for many different relations. Such knowledge could have many uses. In education, they could help teach word connotations. In machine learning they could help computers learn languages.",
"cite_spans": [
{
"start": 327,
"end": 341,
"text": "(Resnik, 1997)",
"ref_id": "BIBREF12"
},
{
"start": 363,
"end": 382,
"text": "(Zhou et al., 2011)",
"ref_id": "BIBREF18"
},
{
"start": 412,
"end": 439,
"text": "(Gildea and Jurafsky, 2002)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 146,
"end": 153,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Notation Description R",
"sec_num": null
},
{
"text": "In machine translation, they could help generate more natural wording.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation Description R",
"sec_num": null
},
{
"text": "This paper introduces a method named PONG (for Part-Of-Speech N-Grams) to compute selectional preferences for many different relations by combining part-of-speech information and Google N-grams. PONG achieves higher precision on a pseudo-disambiguation task than the best previous model (Erk et al., 2010) , but lower coverage.",
"cite_spans": [
{
"start": 287,
"end": 305,
"text": "(Erk et al., 2010)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Notation Description R",
"sec_num": null
},
{
"text": "The paper is organized as follows. Section 2 describes the relations for which we compute selectional preferences. Section 3 describes PONG. Section 4 evaluates PONG. Section 5 relates PONG to prior work. Section 6 concludes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation Description R",
"sec_num": null
},
{
"text": "Selectional preferences characterize constraints on the arguments of predicates. Selectional preferences for semantic roles (such as agent and patient) are generally more informative than for grammatical dependencies (such as subject and object).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relations Used",
"sec_num": "2"
},
{
"text": "For example, consider these semantically equivalent but grammatically distinct sentences:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relations Used",
"sec_num": "2"
},
{
"text": "Pat opened the door. The door was opened by Pat. In both sentences the agent of opened, namely Pat, must be capable of opening somethingan informative constraint on Pat. In contrast, knowing that the grammatical subject of opened is Pat in the first sentence and the door in the second sentence tells us only that they are nouns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relations Used",
"sec_num": "2"
},
{
"text": "Despite this limitation, selectional preferences for grammatical dependencies are still useful, for a number of reasons. First, in practice they approximate semantic role labels. For instance, typically the grammatical subject of opened is its agent. Second, grammatical dependencies can be extracted by parsers, which tend to be more accurate than current semantic role labelers. Third, the number of different grammatical dependencies is large enough to capture diverse relations, but not so large as to have sparse data for individual relations. Thus in this paper, we use grammatical dependencies as relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relations Used",
"sec_num": "2"
},
{
"text": "A parse tree determines the basic grammatical dependencies between the words in a sentence. For instance, in the parse of Pat opened the door, the verb opened has Pat as its subject and door as its object, and door has the as its determiner. Besides these basic dependencies, we use two additional types of dependencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relations Used",
"sec_num": "2"
},
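{
"text": "To make the extraction of basic dependencies concrete, here is a minimal sketch using spaCy (an assumption chosen for brevity; the paper itself uses the Stanford dependency parser):

```python
# Minimal sketch of extracting basic grammatical dependencies with spaCy,
# not the Stanford parser used in the paper (an assumption for illustration).
import spacy

nlp = spacy.load("en_core_web_sm")  # small English pipeline (must be installed)

doc = nlp("Pat opened the door.")
for token in doc:
    # Each token points at its syntactic head; dep_ names the relation.
    print(f"{token.dep_}({token.head.text}, {token.text})")
# Output includes nsubj(opened, Pat), dobj(opened, door), det(door, the).
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relations Used",
"sec_num": "2"
},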
{
"text": "Composing two basic dependencies yields a collapsed dependency (de Marneffe and Manning, 2008) . For example, consider this sentence:",
"cite_spans": [
{
"start": 63,
"end": 94,
"text": "(de Marneffe and Manning, 2008)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relations Used",
"sec_num": "2"
},
{
"text": "The airplane flies in the sky. Here sky is the prepositional object of in, which is the head of a prepositional phrase attached to flies. Composing these two dependencies yields the collapsed dependency prep_in between flies and sky, which captures an important semantic relation between these two content words: sky is the location where flies occurs. Other function words yield different collapsed dependencies. For example, consider these two sentences:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relations Used",
"sec_num": "2"
},
{
"text": "The airplane flies over the ocean. The airplane flies and lands. Collapsed dependencies for the first sentence include prep_over between flies and ocean, which characterizes their relative vertical position, and conj_and between flies and lands, which links two actions that an airplane can perform. As these examples illustrate, collapsing dependencies involving prepositions and conjunctions can yield informative dependencies between content words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relations Used",
"sec_num": "2"
},
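{
"text": "A hypothetical sketch of how such collapsing could be implemented, assuming basic (relation, head, dependent) triples as input (the function and data format below are illustrative, not the parser's actual API):

```python
# Sketch of collapsing a preposition into a single dependency, in the style
# of de Marneffe and Manning (2008). Input triples are assumed to come from
# a basic dependency parse.
def collapse_prepositions(deps):
    """deps: list of (relation, head, dependent) triples."""
    collapsed = []
    for rel, head, dep in deps:
        if rel == "prep":
            # Find the prepositional object of this preposition and compose.
            for rel2, head2, dep2 in deps:
                if rel2 == "pobj" and head2 == dep:
                    collapsed.append((f"prep_{dep}", head, dep2))
        elif rel != "pobj":
            collapsed.append((rel, head, dep))
    return collapsed

basic = [("nsubj", "flies", "airplane"), ("prep", "flies", "in"),
         ("pobj", "in", "sky")]
print(collapse_prepositions(basic))
# [('nsubj', 'flies', 'airplane'), ('prep_in', 'flies', 'sky')]
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relations Used",
"sec_num": "2"
},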
{
"text": "Besides collapsed dependencies, PONG infers inverse dependencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relations Used",
"sec_num": "2"
},
{
"text": "Inverse selectional preferences are selectional preferences of arguments for their predicates, such as a preference of a subject or object for its verb. They capture semantic regularities such as the set of verbs that an agent can perform, which tend to outnumber the possible agents for a verb (Erk et al., 2010) .",
"cite_spans": [
{
"start": 295,
"end": 313,
"text": "(Erk et al., 2010)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relations Used",
"sec_num": "2"
},
{
"text": "To compute selectional preferences, PONG combines information from a limited corpus labeled with the grammatical dependencies described in Section 2, and a much larger unlabeled corpus. The key idea is to abstract word sequences labeled with grammatical relations into POS N-grams, in order to learn a mapping from POS N-grams to those relations. For instance, PONG abstracts the parsed sentence Pat opened the door as NN VB DT NN, with the first and last NN as the subject and object of the VB. To estimate the distribution of POS N-grams containing particular target and relative words, PONG POS-tags Google Ngrams (Franz and Brants, 2006) .",
"cite_spans": [
{
"start": 617,
"end": 641,
"text": "(Franz and Brants, 2006)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "Section 3.1 derives PONG's probabilistic model for combining information from labeled and unlabeled corpora. Section 3.2 and Section 3.3 describe how PONG estimates probabilities from each corpus. Section 3.4 discusses a sparseness problem revealed during probability estimation, and how we address it in PONG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "We quantify the selectional preference for a relative r to instantiate a relation R of a target t as the probability Pr(r | t, R), estimated as follows. By the definition of conditional probability:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "Pr( , , ) Pr( | , ) Pr( , ) r t R r t R tR",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "We care only about the relative probability of different r for fixed t and R, so we rewrite it as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "Pr( , , ) r t R",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "We use the chain rule:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "Pr( | , ) Pr( | ) Pr( ) R r t r t t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "and notice that t is held constant:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "Pr( | , ) Pr( | ) R r t r t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "We estimate the second factor as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "Pr( , ) freq( , ) Pr( | ) Pr( ) freq( ) t r t r r t t t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "We calculate the denominator freq(t) as the number of N-grams in the Google N-gram corpus that contain t, and the numerator freq(t, r) as the number of N-grams containing both t and r.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
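{
"text": "A minimal sketch of this estimate, with a small in-memory Counter standing in for the Google N-gram corpus (an assumption; real code would stream the corpus files rather than hold them in memory):

```python
# Sketch of estimating Pr(r | t) ~ freq(t, r) / freq(t) from N-gram counts.
from collections import Counter

# Hypothetical stand-in for the Google N-grams corpus.
ngram_counts = Counter({
    ("they", "celebrate", "a", "birthday"): 120,
    ("celebrate", "the", "new", "year"): 300,
    ("a", "red", "pencil"): 80,
})

def freq(*words):
    """Count occurrences of N-grams containing all the given words."""
    return sum(c for g, c in ngram_counts.items()
               if all(w in g for w in words))

t, r = "celebrate", "birthday"
print(freq(t, r) / freq(t))  # relative estimate of Pr(r | t)
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},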
{
"text": "To estimate the factor Pr(R | r, t) directly from a corpus of text labeled with grammatical relations, it would be trivial to count how often a word r bears relation R to target word t. However, the results would be limited to the words in the corpus, and many relation frequencies would be estimated sparsely or missing altogether; t or r might not even occur.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "Instead, we abstract each word in the corpus as its part-of-speech (POS) label. Thus we abstract The big boy ate meat as DT JJ NN VB NN. We call this sequence of POS tags a POS N-gram. We use POS N-grams to predict word relations. For instance, we predict that in any word sequence with this POS N-gram, the JJ will modify (amod) the first NN, and the second NN will be the direct object (dobj) of the VB.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "This prediction is not 100% reliable. For example, the initial 5-gram of The big boy ate meat pie has the same POS 5-gram as before.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "However, the dobj of its VB (ate) is not the second NN (meat), but the subsequent NN (pie). Thus POS N-grams predict word relations only in a probabilistic sense.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "To transform Pr(R | r, t) into a form we can estimate, we first apply the definition of conditional probability:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "Pr( , , ) Pr( | , ) Pr( , ) R t r R t r t r",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "To estimate the numerator Pr(R, t, r), we first marginalize over the POS N-gram p:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "Pr( , , , ) Pr( , ) p R t r p t r",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "We expand the numerator using the chain rule:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "Pr( | , , ) Pr( | , ) Pr( , ) Pr( , ) p R t r p p t r t r t r",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "Cancelling the common factor yields:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "Pr( | , , ) Pr( | , ) p R p t r p t r",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "We approximate the first term Pr(R | p, t, r) as Pr(R | p), based on the simplifying assumption that R is conditionally independent of t and r, given p. In other words, we assume that given a POS N-gram, the target and relative words t and r give no additional information about the probability of a relation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "However, their respective positions i and j in the POS N-gram p matter, so we condition the probability on them:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "Pr( | , , ) Pr( | , , ) R p t r R p i j Summing over their possible positions, we get Pr( | , ) Pr( | , , ) Pr( | , ) i j p i j R r t R p i j p t g r g",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
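{
"text": "A sketch of this combination, assuming both component distributions have already been estimated (the dictionaries below are hypothetical stand-ins for the outputs of Sections 3.2 and 3.3):

```python
# Sketch of PONG's core estimate:
# Pr(R | r, t) ~ sum over (p, i, j) of Pr(R | p, i, j) * Pr(p | t = g_i, r = g_j).

# Pr(R | p, i, j), learned from the labeled corpus (Section 3.2).
rel_given_pos = {
    (("NN", "VB", "DT", "NN"), 1, 3): {"nsubj": 0.1, "dobj": 0.8},
}
# Pr(p | t = g_i, r = g_j), learned from Google N-grams (Section 3.3).
pos_given_words = {
    ("opened", "door"): {(("NN", "VB", "DT", "NN"), 1, 3): 0.6},
}

def score(relation, target, relative):
    total = 0.0
    for (p, i, j), prob in pos_given_words.get((target, relative), {}).items():
        total += rel_given_pos.get((p, i, j), {}).get(relation, 0.0) * prob
    return total  # a relative, not absolute, probability

print(score("dobj", "opened", "door"))  # 0.48 (= 0.8 * 0.6)
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},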
{
"text": "As Figure 1 shows, we estimate Pr(R | p, i, j) by abstracting the labeled corpus into POS N-grams. We estimate Pr(p | t = g i , r = g j ) based on the frequency of partially lexicalized POS N-grams like DT JJ:red NN:hat VB NN among Google Ngrams with t and r in the specified positions.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "Sections 3.2 and 3.3 describe how we estimate Pr(R | p, i, j) and Pr(p | t = g i , r = g j ), respectively. Note that PONG estimates relative rather than absolute probabilities. Therefore it cannot (and does not) compare them against a fixed threshold to make decisions about selectional preferences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic model",
"sec_num": "3.1"
},
{
"text": "To estimate Pr(R | p, i, j), we use the Penn Treebank Wall Street Journal (WSJ) corpus, which is labeled with grammatical relations using the Stanford dependency parser (Klein and Manning, 2003) .",
"cite_spans": [
{
"start": 169,
"end": 194,
"text": "(Klein and Manning, 2003)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping POS N-grams to relations",
"sec_num": "3.2"
},
{
"text": "To estimate the probability Pr(R | p, i, j) of a relation R between a target at position i and a relative at position j in a POS N-gram p, we compute what fraction of the word N-grams g with POS N-gram p have relation R between some target t and relative r at positions i and j:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping POS N-grams to relations",
"sec_num": "3.2"
},
{
"text": "Pr( | , , ) freq( . .POS( ) relation( , ) ) freq( . .POS( ) relation( , )) i j i j R p i j g s t g p g g R g s t g p g g",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping POS N-grams to relations",
"sec_num": "3.2"
},
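{
"text": "A sketch of this count-based estimate over a toy labeled corpus (the data format below is an assumption for illustration, not PONG's actual representation):

```python
# Sketch of estimating Pr(R | p, i, j) from a dependency-labeled corpus.
from collections import defaultdict

# Toy labeled corpus: POS N-grams with relations keyed by
# (target position i, relative position j).
labeled_ngrams = [
    {"pos": ("DT", "JJ", "NN", "VB", "NN"),
     "relations": {(3, 4): "dobj", (2, 1): "amod"}},
    {"pos": ("DT", "JJ", "NN", "VB", "NN"),
     "relations": {(3, 4): "dobj"}},
]

rel_counts = defaultdict(lambda: defaultdict(int))  # (p, i, j) -> R -> count
totals = defaultdict(int)                           # (p, i, j) -> count

for item in labeled_ngrams:
    for (i, j), rel in item["relations"].items():
        rel_counts[(item["pos"], i, j)][rel] += 1
        totals[(item["pos"], i, j)] += 1

def pr_rel(R, p, i, j):
    return rel_counts[(p, i, j)][R] / totals[(p, i, j)] if totals[(p, i, j)] else 0.0

print(pr_rel("dobj", ("DT", "JJ", "NN", "VB", "NN"), 3, 4))  # 1.0
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping POS N-grams to relations",
"sec_num": "3.2"
},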
{
"text": "Given a target and relative, we need to estimate their distribution of POS N-grams and positions. A labeled corpus is too sparse for this purpose, so we use the much larger unlabeled Google Ngrams corpus (Franz and Brants, 2006) .",
"cite_spans": [
{
"start": 204,
"end": 228,
"text": "(Franz and Brants, 2006)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating POS N-gram distributions",
"sec_num": "3.3"
},
{
"text": "The probability that an N-gram with target t at position i and relative r at position j will have the POS N-gram p is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating POS N-gram distributions",
"sec_num": "3.3"
},
{
"text": "Pr( | , ) freq( . .POS( ) , , )) freq( . . ) i j i j i j p t g r g g s t g p g t g r g s t g t g r",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating POS N-gram distributions",
"sec_num": "3.3"
},
{
"text": "To compute this ratio, we first use a wellindexed table to efficiently retrieve all N-grams with words t and r at the specified positions. We then obtain their POS N-grams from the Stanford POS tagger (Toutanova et al., 2003) , and count how many of them have the POS N-gram p.",
"cite_spans": [
{
"start": 201,
"end": 225,
"text": "(Toutanova et al., 2003)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating POS N-gram distributions",
"sec_num": "3.3"
},
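{
"text": "A sketch of this estimation step, with NLTK's tagger standing in for the Stanford POS tagger and a hard-coded list standing in for the indexed N-gram retrieval (both assumptions for brevity):

```python
# Sketch of estimating Pr(p | t = g_i, r = g_j): retrieve N-grams with t and r
# at fixed positions, POS-tag them, and count how often each POS N-gram occurs.
from collections import Counter
import nltk  # assumes the averaged_perceptron_tagger data are downloaded

# Hypothetical retrieval result: word 4-grams with t = "opened" at position 1
# and r = "door" at position 3, as returned by an indexed N-gram table.
matching_ngrams = [
    ("Pat", "opened", "the", "door"),
    ("she", "opened", "that", "door"),
]

pos_counts = Counter()
for g in matching_ngrams:
    pos_counts[tuple(tag for _, tag in nltk.pos_tag(list(g)))] += 1

total = sum(pos_counts.values())
for p, c in pos_counts.items():
    print(p, c / total)  # estimate of Pr(p | t = g_1, r = g_3)
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating POS N-gram distributions",
"sec_num": "3.3"
},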
{
"text": "We abstract word N-grams into POS N-grams to address the sparseness of the labeled corpus, but even the POS N-grams can be sparse. For n=5, the rarer ones occur too sparsely (if at all) in our labeled corpus to estimate their frequency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing POS N-gram sparseness",
"sec_num": "3.4"
},
{
"text": "To address this issue, we use a coarser POS tag set than the Penn Treebank POS tag set. As Table 2 shows, we merge tags for adjectives, nouns, adverbs, and verbs into four coarser tags. To gauge the impact of the coarser POS tags, we calculated Pr(r | t, R) for 76 test instances used in an earlier unpublished study by Liu Liu, a former Project LISTEN graduate student. Each instance consists of two randomly chosen words in the WSJ corpus labeled with a grammatical relation. Coarse POS tags increased coverage of this pilot setthat is, the fraction of instances for which PONG computes a probabilityfrom 69% to 92%.",
"cite_spans": [],
"ref_spans": [
{
"start": 91,
"end": 98,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Reducing POS N-gram sparseness",
"sec_num": "3.4"
},
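{
"text": "A sketch of such tag coarsening (the coarse tag names below are assumptions; Table 2 defines the actual mapping):

```python
# Sketch of coarsening Penn Treebank tags in the spirit of Table 2: adjective,
# noun, adverb, and verb subtags merge into four coarse tags.
COARSE = {}
COARSE.update({t: "J" for t in ("JJ", "JJR", "JJS")})                       # adjectives
COARSE.update({t: "N" for t in ("NN", "NNS", "NNP", "NNPS")})               # nouns
COARSE.update({t: "R" for t in ("RB", "RBR", "RBS")})                       # adverbs
COARSE.update({t: "V" for t in ("VB", "VBD", "VBG", "VBN", "VBP", "VBZ")})  # verbs

def coarsen(pos_ngram):
    """Map each Penn tag to its coarse tag; leave other tags unchanged."""
    return tuple(COARSE.get(tag, tag) for tag in pos_ngram)

print(coarsen(("DT", "JJS", "NNS", "VBD", "NN")))  # ('DT', 'J', 'N', 'V', 'N')
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing POS N-gram sparseness",
"sec_num": "3.4"
},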
{
"text": "Using the universal tag set (Petrov et al., 2011) as an even coarser tag set is an interesting future direction, especially for other languages. Its smaller size (12 tags vs. our 23) should reduce data sparseness, but increase the risk of overgeneralization.",
"cite_spans": [
{
"start": 28,
"end": 49,
"text": "(Petrov et al., 2011)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Coarse",
"sec_num": null
},
{
"text": "To evaluate PONG, we use a standard pseudodisambiguation task, detailed in Section 4.1. Section 4.2 describes our test set. Section 4.3 lists the metrics we evaluate on this test set. Section 4.4 describes the baselines we compare PONG against on these metrics, and Section 4.5 describes the relations we compare them on. Section 4.6 reports our results. Section 4.7 analyzes sources of error.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "The pseudo-disambiguation task (Gale et al., 1992; Schutze, 1992 ) is as follows: given a target word t, a relation R, a relative r, and a random distracter r', prefer either r or r', whichever is likelier to have relation R to word t.",
"cite_spans": [
{
"start": 31,
"end": 50,
"text": "(Gale et al., 1992;",
"ref_id": "BIBREF5"
},
{
"start": 51,
"end": 64,
"text": "Schutze, 1992",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation task",
"sec_num": "4.1"
},
{
"text": "This evaluation does not use a threshold: just prefer whichever word is likelier according to the model being evaluated. If the model assigns only one of the words a probability, prefer it, based on the assumption that the unknown probability of the other word is lower. If the model assigns the same probability to both words, or no probability to either word, do not prefer either word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation task",
"sec_num": "4.1"
},
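{
"text": "A sketch of this decision rule (the scoring function is a hypothetical stand-in for any of the evaluated models):

```python
# Sketch of the threshold-free pseudo-disambiguation decision rule described
# above. `score` returns a model's relative probability, or None if uncovered.
def prefer(score, R, t, r, r_prime):
    """Return the preferred word, or None if the model has no preference."""
    s1, s2 = score(R, t, r), score(R, t, r_prime)
    if s1 is None and s2 is None:
        return None                    # neither word covered
    if s2 is None or (s1 is not None and s1 > s2):
        return r                       # r scored higher, or only r was scored
    if s1 is None or s2 > s1:
        return r_prime
    return None                        # equal scores: no preference

# Toy scoring function with hypothetical values.
toy_score = lambda R, t, w: {"birthday": 0.9, "pencil": 0.1}.get(w)
print(prefer(toy_score, "dobj", "celebrate", "birthday", "pencil"))  # birthday
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation task",
"sec_num": "4.1"
},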
{
"text": "As a source of evaluation data, we used the British National Corpus (BNC). As a common test corpus for all the methods we evaluated, we selected one half of BNC by sorting filenames alphabetically and using the odd-numbered files. We used the other half of BNC as a training corpus for the baseline methods we compared PONG to.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test set",
"sec_num": "4.2"
},
{
"text": "A test set for the pseudo-disambiguation task task consists of tuples of the form (R, t, r, r') . To construct a test set, we adapted the process used by Rooth et al. (1999) and Erk et al. (2010) .",
"cite_spans": [
{
"start": 154,
"end": 173,
"text": "Rooth et al. (1999)",
"ref_id": "BIBREF15"
},
{
"start": 178,
"end": 195,
"text": "Erk et al. (2010)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 82,
"end": 95,
"text": "(R, t, r, r')",
"ref_id": null
}
],
"eq_spans": [],
"section": "Test set",
"sec_num": "4.2"
},
{
"text": "First, we chose 100 (R, t) pairs for each relation R at random from the test corpus. Rooth et al. (1999) and Erk et al. (2010) chose such pairs from a training corpus to ensure that it contained the target t. In contrast, choosing pairs from an unseen test corpus includes target words whether or not they occur in the training corpus.",
"cite_spans": [
{
"start": 85,
"end": 104,
"text": "Rooth et al. (1999)",
"ref_id": "BIBREF15"
},
{
"start": 109,
"end": 126,
"text": "Erk et al. (2010)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Test set",
"sec_num": "4.2"
},
{
"text": "To obtain a sample stratified by frequency, rather than skewed heavily toward highfrequency pairs, Erk et al. (2010) drew (R, t) pairs from each of five frequency bands in the entire British National Corpus (BNC): 50-100 occurrences; 101-200; 201-500; 500-1000; and more than 1000. However, we use only half of BNC as our test corpus, so to obtain a comparable test set, we drew 20 (R, t) pairs from each of the corresponding frequency bands in that half: 26-50 occurrences; 51-100; 101-250; 251-500; and more than 500.",
"cite_spans": [
{
"start": 99,
"end": 116,
"text": "Erk et al. (2010)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Test set",
"sec_num": "4.2"
},
{
"text": "For each chosen (R, t) pair, we drew a separate (R, t, r) triple from each of six frequency bands: 1-25 occurrences; 26-50; 51-100; 101-250; 251-500; and more than 500. We necessarily omitted frequency bands that contained no such triples. We filtered out triples where r did not have the most frequent part of speech for the relation R. For example, this filter would exclude the triple (dobj, celebrate, the) because a direct object is most frequently a noun, but the is a determiner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test set",
"sec_num": "4.2"
},
{
"text": "Then, like Erk et al. 2010, we paired the relative r in each (R, t, r) triple with a distracter r' with the same (most frequent) part of speech as the relative r, yielding the test tuple (R, t, r, r'). Rooth et al. (1999) restricted distracter candidates to words with between 30 and 3,000 occurrences in BNC; accordingly, we chose only distracters with between 15 and 1,500 occurrences in our test corpus. We selected r' from these candidates randomly, with probability proportional to their frequency in the test corpus. Like Rooth et al. (1999) , we excluded as distracters any actual relatives, i.e. candidates r' where the test corpus contained the triple (R, t, r'). Table 3 shows the resulting number of (R, t, r, r') test tuples for each relation. ",
"cite_spans": [
{
"start": 202,
"end": 221,
"text": "Rooth et al. (1999)",
"ref_id": "BIBREF15"
},
{
"start": 528,
"end": 547,
"text": "Rooth et al. (1999)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 673,
"end": 680,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Test set",
"sec_num": "4.2"
},
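{
"text": "A sketch of the distracter selection step, with hypothetical frequency counts standing in for the test corpus:

```python
# Sketch of distracter selection: sample r' proportionally to test-corpus
# frequency, within the frequency band, excluding any actual relatives.
import random

def draw_distracter(freqs, actual_relatives, lo=15, hi=1500, rng=random):
    words, weights = zip(*[(w, c) for w, c in freqs.items()
                           if lo <= c <= hi and w not in actual_relatives])
    return rng.choices(words, weights=weights, k=1)[0]

# Hypothetical frequencies: "the" falls outside the band, and "anniversary"
# is excluded because the corpus contains the triple (R, t, anniversary).
freqs = {"pencil": 80, "door": 900, "anniversary": 40, "the": 50000}
print(draw_distracter(freqs, actual_relatives={"anniversary"}))
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test set",
"sec_num": "4.2"
},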
{
"text": "We report four evaluation metrics: precision, coverage, recall, and F-score. Precision (called \"accuracy\" in some papers on selectional preferences) is the percentage of all covered tuples where the original relative r is preferred. Coverage is the percentage of tuples for which the model prefers r to r' or vice versa. Recall is the percentage of all tuples where the original relative is preferred, i.e., precision times coverage. F-score is the harmonic mean of precision and recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "4.3"
},
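{
"text": "For concreteness, a minimal sketch computing all four metrics from raw counts (the counts below are hypothetical):

```python
# Sketch of the four evaluation metrics as defined above.
def metrics(n_tuples, n_covered, n_correct):
    precision = n_correct / n_covered   # correct among covered tuples
    coverage = n_covered / n_tuples     # tuples where some preference exists
    recall = precision * coverage       # equivalently n_correct / n_tuples
    f_score = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, coverage, recall, f_score

print(metrics(n_tuples=200, n_covered=160, n_correct=136))
# (0.85, 0.8, 0.68, 0.7555...)
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "4.3"
},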
{
"text": "We compare PONG to two baseline methods. EPP is a state-of-the-art model for which Erk et al. (2010) reported better performance than both Resnik's (1996) WordNet model and Rooth's (1999) EM clustering model. EPP computes selectional preferences using distributional similarity, based on the assumption that relatives are likely to appear in the same contexts as relatives seen in the training corpus. EPP computes the similarity of a potential relative's vector space representation to relatives in the training corpus. EPP has various options for its vector space representation, similarity measure, weighting scheme, generalization space, and whether to use PCA. In re-implementing EPP, we chose the options that performed best according to Erk et al. (2010) , with one exception. To save work, we chose not to use PCA, which Erk et al. (2010) described as performing only slightly better in the dependency-based space. Table 5 : Coverage, Precision, Recall, and F-score for various relations; R T is the inverse of relation R. PONG uses POS N-grams, EPP uses distributional similarity, and DEP uses dependency parses.",
"cite_spans": [
{
"start": 139,
"end": 154,
"text": "Resnik's (1996)",
"ref_id": "BIBREF11"
},
{
"start": 173,
"end": 187,
"text": "Rooth's (1999)",
"ref_id": "BIBREF15"
},
{
"start": 744,
"end": 761,
"text": "Erk et al. (2010)",
"ref_id": "BIBREF2"
},
{
"start": 829,
"end": 846,
"text": "Erk et al. (2010)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 923,
"end": 930,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.4"
},
{
"text": "To score a potential relative r 0 , EPP uses this formula: (r 0 , r) is the nGCM similarity defined below between vector space representations of r 0 and a relative r seen in the training data: The weight function wt r,t (a) is analogous to inverse document frequency in Information Retrieval.",
"cite_spans": [],
"ref_spans": [
{
"start": 59,
"end": 68,
"text": "(r 0 , r)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Relative",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": ", , 0 0 arg ( , )",
"eq_num": ", () ("
}
],
"section": "Relative",
"sec_num": null
},
{
"text": "DEP, our second baseline method, runs the Stanford dependency parser to label the training corpus with grammatical relations, and uses their frequencies to predict selectional preferences. To do the pseudo-disambiguation task, DEP compares the frequencies of (R, t, r) and (R, t, r').",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relative",
"sec_num": null
},
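{
"text": "A minimal sketch of DEP's decision procedure, with hypothetical triple counts standing in for parser output:

```python
# Sketch of the DEP baseline: count labeled (R, t, r) triples in the training
# corpus and prefer whichever of r, r' co-occurs more often with (R, t).
from collections import Counter

# Hypothetical counts extracted from a dependency-parsed training corpus.
triple_counts = Counter({
    ("dobj", "celebrate", "birthday"): 42,
    ("dobj", "celebrate", "pencil"): 0,
})

def dep_prefer(R, t, r, r_prime):
    c1, c2 = triple_counts[(R, t, r)], triple_counts[(R, t, r_prime)]
    if c1 == c2:
        return None  # no preference (hurts coverage)
    return r if c1 > c2 else r_prime

print(dep_prefer("dobj", "celebrate", "birthday", "pencil"))  # birthday
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relative",
"sec_num": null
},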
{
"text": "To test PONG, EPP, and DEP, we chose the most frequent eight relations between content words in the WSJ corpus, which occur over 10,000 times and are described in Table 4 . We also tested their inverse relations. However, EPP does not compute selectional preferences for adjective and adverb as relatives. For this reason, we did not test EPP on advmod and amod relations with adverbs and adjectives as relatives. Table 5 displays results for all 16 relations. To compute statistical significance conservatively in comparing methods, we used paired t-tests with N = 16 relations.",
"cite_spans": [],
"ref_spans": [
{
"start": 163,
"end": 170,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 414,
"end": 421,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Relations tested",
"sec_num": "4.5"
},
{
"text": "PONG's precision was significantly better than EPP (p<0.001) but worse than DEP (p<0.0001).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "4.6"
},
{
"text": "Still, PONG's high precision validates its underlying assumption that POS Ngrams strongly predict grammatical dependencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "4.6"
},
{
"text": "On coverage and recall, EPP beat PONG, which beat DEP (p<0.0001). PONG's F-score was higher, but not significantly, than EPP's (p>0.5) or DEP's (p>0.02).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "4.6"
},
{
"text": "In the pseudo-disambiguation task of choosing which of two words is related to a target, PONG makes errors of coverage (preferring neither word) and precision (preferring the wrong word).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "4.7"
},
{
"text": "Coverage errors, which occurred 17.4% of the time on average, arose only when PONG failed to estimate a probability for either word. PONG fails to score a potential relative r of a target t with a specified relation R if the labeled corpus has no POS N-grams that (a) map to R, (b) contain the POS of t and r, and (c) match Google word N-grams with t and r at those positions. Every relation has at least one POS N-gram that maps to it, so condition (a) never fails. PONG uses the most frequent POS of t and r, and we believe that condition (b) never fails. However, condition (c) can and does fail when t and r do not co-occur in any Google N-grams, at least that match a POS N-gram that can map to relation R. For example, oversee and diet do not co-occur in any Google N-grams, so PONG cannot score diet as a potential dobj of oversee.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "4.7"
},
{
"text": "Precision errors, which occur 17% of the time on average, arose when (a) PONG scored the distracter but failed to score the true relative, or (b) scored them both but preferred the distracter. Case (a) accounted for 44.62% of the errors on the covered test tuples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "4.7"
},
{
"text": "One likely cause of errors in case (b) is overgeneralization when PONG abstracts a word Ngram labeled with a relation by mapping its POS N-gram to that relation. In particular, the coarse POS tag set may discard too much information. Another likely cause of errors is probabilities estimated poorly due to sparse data. The probability of a relation for a POS N-gram rare in the training corpus is likely to be inaccurate. So is the probability of a POS N-gram for rare cooccurrences of a target and relative in Google word N-grams. Using a smaller tag set may reduce the sparse data problem but increase the risk of over-generalization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "4.7"
},
{
"text": "In predicting selectional preferences, a key issue is generalization. Our DEP baseline simply counts co-occurrences of target and relative words in a corpus to predict selectional preferences, but only for words seen in the corpus. Prior work, summarized in Table 6 , has therefore tried to infer the similarity of unseen relatives to seen relatives. To illustrate, consider the problem of inducing that the direct objects of celebrate tend to be days or events. Resnik (1996) combined WordNet with a labeled corpus to model the probability that relatives of a predicate belong to a particular conceptual class. This method could notice, for example, that the direct objects of celebrate tend to belong to the conceptual class event. Thus it could prefer anniversary or occasion as the object of celebrate even if unseen in its training corpus. However, this method depends strongly on the WordNet taxonomy.",
"cite_spans": [
{
"start": 463,
"end": 476,
"text": "Resnik (1996)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 258,
"end": 265,
"text": "Table 6",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Relation to Prior Work",
"sec_num": "5"
},
{
"text": "Rather than use linguistic resources such as WordNet, Rooth et al. (1999) and Wald et al. (2008) induced semantically annotated subcategorization frames from unlabeled corpora. They modeled semantic classes as hidden variables, which they estimated using EM-based clustering. Ritter (2010) computed selectional preferences by using unsupervised topic models such as LinkLDA, which infers semantic classes of words automatically instead of requiring a predefined set of classes as input.",
"cite_spans": [
{
"start": 54,
"end": 73,
"text": "Rooth et al. (1999)",
"ref_id": "BIBREF15"
},
{
"start": 78,
"end": 96,
"text": "Wald et al. (2008)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relation to Prior Work",
"sec_num": "5"
},
{
"text": "The contexts in which a linguistic unit occurs provide information about its meaning. Erk (2007) and Erk et al. (2010) modeled the contexts of a word as the distribution of words that co-occur with it. They calculated the semantic similarity of two words as the similarity of their context distributions according to various measures. Erk et al. (2010) reported the state-ofthe-art method we used as our EPP baseline.",
"cite_spans": [
{
"start": 86,
"end": 96,
"text": "Erk (2007)",
"ref_id": "BIBREF1"
},
{
"start": 101,
"end": 118,
"text": "Erk et al. (2010)",
"ref_id": "BIBREF2"
},
{
"start": 335,
"end": 352,
"text": "Erk et al. (2010)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relation to Prior Work",
"sec_num": "5"
},
{
"text": "In contrast to prior work that explored various solutions to the generalization problem, we don't so much solve this problem as circumvent it. Instead of generalizing from a training corpus directly to unseen words, PONG abstracts a word N-gram to a POS N-gram and maps it to the relations that the word N-gram is labeled with. To compute selectional preferences, whether the words are in the training corpus or not, PONG applies these abstract mappings to word N-grams in the much larger Google N-grams corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation to Prior Work",
"sec_num": "5"
},
{
"text": "Some prior work on selectional preferences has used POS N-grams and a large unlabeled corpus. The most closely related work we found was by Gormley et al. (2011) . They used patterns in POS N-grams to generate test data for their selectional preferences model, but not to infer preferences. Zhou et al. (2011) (Fano, 1961) to check whether they co-occur more frequently in a large corpus than predicted by their unigram frequencies. However, their method did not distinguish among different relations.",
"cite_spans": [
{
"start": 140,
"end": 161,
"text": "Gormley et al. (2011)",
"ref_id": "BIBREF7"
},
{
"start": 291,
"end": 309,
"text": "Zhou et al. (2011)",
"ref_id": "BIBREF18"
},
{
"start": 310,
"end": 322,
"text": "(Fano, 1961)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relation to Prior Work",
"sec_num": "5"
},
{
"text": "This paper describes, derives, and evaluates PONG, a novel probabilistic model of selectional preferences. PONG uses a labeled corpus to map POS N-grams to grammatical relations. It combines this mapping with probabilities estimated from a much larger POS-tagged but unlabeled Google N-grams corpus. We tested PONG on the eight most common relations in the WSJ corpus, and their inversesmore relations than evaluated in prior work. Compared to the state-of-the-art EPP baseline (Erk et al., 2010) , PONG averaged higher precision but lower coverage and recall. Compared to the DEP baseline, PONG averaged lower precision but higher coverage and recall. All these differences were substantial (p < 0.001). Compared to both baselines, PONG's average Fscore was higher, though not significantly.",
"cite_spans": [
{
"start": 478,
"end": 496,
"text": "(Erk et al., 2010)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Some directions for future work include: First, improve PONG by incorporating models of lexical similarity explored in prior work. Second, use the universal tag set to extend PONG to other languages, or to perform better in English. Third, in place of grammatical relations, use rich, diverse semantic roles, while avoiding sparsity. Finally, use selectional preferences to teach word connotations by using various relations to generate example sentences or useful questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [
{
"text": "The research reported here was supported by the Institute of Education Sciences, U.S. Department of Education, through Grant R305A080157. The opinions expressed are those of the authors and do not necessarily represent the views of the Institute or the U.S. Department of Education. We thank the helpful reviewers and Katrin Erk for her generous assistance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "A Simple, Similarity-Based Model for Selectional Preferences",
"authors": [
{
"first": "K",
"middle": [],
"last": "Erk",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "216--223",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erk, K. 2007. A Simple, Similarity-Based Model for Selectional Preferences. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, Prague, Czech Republic, June, 2007, 216-223.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A Flexible, Corpus-Driven Model of Regular and Inverse Selectional Preferences",
"authors": [
{
"first": "K",
"middle": [],
"last": "Erk",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "U",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
}
],
"year": 2010,
"venue": "Computational Linguistics",
"volume": "36",
"issue": "4",
"pages": "723--763",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erk, K., Pad\u00f3, S. and Pad\u00f3, U. 2010. A Flexible, Corpus-Driven Model of Regular and Inverse Selectional Preferences. Computational Linguistics 36(4), 723-763.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Transmission O F Information: A Statistical Theory of Communications",
"authors": [
{
"first": "R",
"middle": [],
"last": "Fano",
"suffix": ""
}
],
"year": 1961,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fano, R. 1961. Transmission O F Information: A Statistical Theory of Communications. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "All Our N-Gram Are Belong to You",
"authors": [
{
"first": "A",
"middle": [],
"last": "Franz",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Brants",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz, A. and Brants, T. 2006. All Our N-Gram Are Belong to You.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Work on Statistical Methods for Word Sense Disambiguation",
"authors": [
{
"first": "W",
"middle": [
"A"
],
"last": "Gale",
"suffix": ""
},
{
"first": "K",
"middle": [
"W"
],
"last": "Church",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the AAAI Fall Symposium on Probabilistic Approaches to Natural Language",
"volume": "",
"issue": "",
"pages": "54--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gale, W.A., Church, K.W. and Yarowsky, D. 1992. Work on Statistical Methods for Word Sense Disambiguation. In Proceedings of the AAAI Fall Symposium on Probabilistic Approaches to Natural Language, Cambridge, MA, October 23-25, 1992, 54-60.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Automatic Labeling of Semantic Roles",
"authors": [
{
"first": "D",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2002,
"venue": "Computational Linguistics",
"volume": "28",
"issue": "3",
"pages": "245--288",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gildea, D. and Jurafsky, D. 2002. Automatic Labeling of Semantic Roles. Computational Linguistics 28(3), 245-288.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Shared Components Topic Models with Application to Selectional Preference",
"authors": [
{
"first": "M",
"middle": [
"R"
],
"last": "Gormley",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Dredze",
"suffix": ""
},
{
"first": "B",
"middle": [
"V"
],
"last": "Durme",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gormley, M.R., Dredze, M., Durme, B.V. and Eisner, J. 2011. Shared Components Topic Models with Application to Selectional Preference, NIPS Workshop on Learning Semantics Sierra Nevada, Spain.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Combining Em Training and the Mdl Principle for an Automatic Verb Classification Incorporating Selectional Preferences",
"authors": [
{
"first": "S",
"middle": [
"S"
],
"last": "Im Walde",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Hying",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Scheible",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Schmid",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "496--504",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "im Walde, S.S., Hying, C., Scheible, C. and Schmid, H. 2008. Combining Em Training and the Mdl Principle for an Automatic Verb Classification Incorporating Selectional Preferences. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, Columbus, OH, 2008, 496-504.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Accurate Unlexicalized Parsing",
"authors": [
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klein, D. and Manning, C.D. 2003. Accurate Unlexicalized Parsing. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, Sapporo, Japan, July 7- 12, 2003, E.W. HINRICHS and D. ROTH, Eds.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A Universal Part-of-Speech Tagset",
"authors": [
{
"first": "S",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "R",
"middle": [
"T"
],
"last": "Mcdonald",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Petrov, S., Das, D. and McDonald, R.T. 2011. A Universal Part-of-Speech Tagset. ArXiv 1104.2086.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Selectional Constraints: An Information-Theoretic Model and Its Computational Realization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 1996,
"venue": "Cognition",
"volume": "61",
"issue": "",
"pages": "127--159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Resnik, P. 1996. Selectional Constraints: An Information-Theoretic Model and Its Computational Realization. Cognition 61, 127-159.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Selectional Preference and Sense Disambiguation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 1997,
"venue": "ACL SIGLEX Workshop on",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Resnik, P. 1997. Selectional Preference and Sense Disambiguation. In ACL SIGLEX Workshop on",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A Latent Dirichlet Allocation Method for Selectional Preferences",
"authors": [
{
"first": "A",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mausam",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "424--434",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ritter, A., Mausam and Etzioni, O. 2010. A Latent Dirichlet Allocation Method for Selectional Preferences. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, Uppsala, Sweden, 2010, 424-434.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Inducing a Semantically Annotated Lexicon Via Em-Based Clustering",
"authors": [
{
"first": "M",
"middle": [],
"last": "Rooth",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Riezler",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Prescher",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Beil",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "104--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rooth, M., Riezler, S., Prescher, D., Carroll, G. and Beil, F. 1999. Inducing a Semantically Annotated Lexicon Via Em-Based Clustering. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics on Computational Linguistics, College Park, MD, 1999, Association for Computational Linguistics, 104-111.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Context Space",
"authors": [
{
"first": "H",
"middle": [],
"last": "Schutze",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the AAAI Fall Symposium on Intelligent Probabilistic Approaches to Natural Language",
"volume": "",
"issue": "",
"pages": "113--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schutze, H. 1992. Context Space. In Proceedings of the AAAI Fall Symposium on Intelligent Probabilistic Approaches to Natural Language, Cambridge, MA, 1992, 113-120.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Feature-Rich Part-of-Speech Tagging with a Cyclic Dependency Network",
"authors": [
{
"first": "K",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Human Language Technology Conference and Annual Meeting of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL)",
"volume": "",
"issue": "",
"pages": "252--259",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toutanova, K., Klein, D., Manning, C. and Singer, Y. 2003. Feature-Rich Part-of-Speech Tagging with a Cyclic Dependency Network. In Proceedings of the Human Language Technology Conference and Annual Meeting of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL), Edmonton, Canada, 2003, 252- 259.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Exploiting Web-Derived Selectional Preference to Improve Statistical Dependency Parsing",
"authors": [
{
"first": "G",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Cai",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1556--1565",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhou, G., Zhao, J., Liu, K. and Cai, L. 2011. Exploiting Web-Derived Selectional Preference to Improve Statistical Dependency Parsing. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, Portland, OR, 2011, 1556-1565.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Overview of PONG. From the labeled corpus, PONG extracts abstract mappings from POS N-grams to relations. From the unlabeled corpus, PONG estimates POS N-gram probability given a target and relative."
},
"TABREF1": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>",
"text": "Coarser POS tag set used in PONG"
},
"TABREF3": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>",
"text": "Test set size for each relation"
},
"TABREF5": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td>Relation</td><td colspan=\"12\">Precision (%) PONG EPP DEP PONG EPP DEP PONG EPP DEP PONG EPP DEP Coverage (%) Recall (%) F-score (%)</td></tr><tr><td>advmod</td><td>78.7</td><td>-</td><td>98.6</td><td>72.1</td><td>-</td><td>69.2</td><td>56.7</td><td>-</td><td>68.3</td><td>65.9</td><td>-</td><td>80.7</td></tr><tr><td>advmod T</td><td>89.0</td><td colspan=\"2\">71.0 97.4</td><td>69.5</td><td colspan=\"2\">100 59.5</td><td>61.8</td><td colspan=\"2\">71.0 58.0</td><td>73.0</td><td colspan=\"2\">71.0 72.7</td></tr><tr><td>amod</td><td>78.8</td><td>-</td><td>99.0</td><td>90.1</td><td>-</td><td>61.1</td><td>71.0</td><td>-</td><td>60.5</td><td>74.7</td><td>-</td><td>75.1</td></tr><tr><td>amod T</td><td>84.1</td><td colspan=\"2\">74.0 97.3</td><td>83.6</td><td colspan=\"2\">99.2 57.0</td><td>70.3</td><td colspan=\"2\">73.4 55.5</td><td>76.6</td><td colspan=\"2\">73.7 70.6</td></tr><tr><td>conj_and</td><td>77.2</td><td colspan=\"2\">74.2 100</td><td>73.6</td><td colspan=\"2\">100 52.3</td><td>56.8</td><td colspan=\"2\">74.2 52.3</td><td>65.4</td><td colspan=\"2\">74.2 68.6</td></tr><tr><td>conj_and T</td><td>80.5</td><td colspan=\"2\">70.2 97.3</td><td>74.8</td><td colspan=\"2\">100 49.7</td><td>60.3</td><td colspan=\"2\">70.2 48.3</td><td>68.9</td><td colspan=\"2\">70.2 64.6</td></tr><tr><td>dobj</td><td>87.2</td><td colspan=\"2\">80.0 97.7</td><td>80.7</td><td colspan=\"2\">100 60.0</td><td>70.3</td><td colspan=\"2\">80.0 58.6</td><td>77.9</td><td colspan=\"2\">80.0 73.3</td></tr><tr><td>dobj T</td><td>89.6</td><td colspan=\"2\">80.2 98.1</td><td>92.2</td><td colspan=\"2\">100 64.1</td><td>82.6</td><td colspan=\"2\">80.2 62.9</td><td>86.0</td><td colspan=\"2\">80.2 76.6</td></tr><tr><td>nn</td><td>86.7</td><td colspan=\"2\">73.8 97.2</td><td>95.3</td><td colspan=\"2\">99.4 63.0</td><td>82.7</td><td colspan=\"2\">73.4 61.3</td><td>84.6</td><td colspan=\"2\">73.6 75.2</td></tr><tr><td>nn T</td><td>83.8</td><td colspan=\"2\">79.7 99.0</td><td>93.7</td><td colspan=\"2\">100 60.8</td><td>78.5</td><td colspan=\"2\">79.7 60.1</td><td>81.0</td><td colspan=\"2\">79.7 74.8</td></tr><tr><td>nsubj</td><td>76.1</td><td colspan=\"2\">77.3 100</td><td>69.1</td><td colspan=\"2\">100 42.3</td><td>52.6</td><td colspan=\"2\">77.3 42.3</td><td>62.2</td><td colspan=\"2\">77.3 59.4</td></tr><tr><td>nsubj T</td><td>78.5</td><td colspan=\"2\">66.9 95.0</td><td>86.3</td><td colspan=\"2\">100 48.4</td><td>67.7</td><td colspan=\"2\">66.9 46.0</td><td>72.7</td><td colspan=\"2\">66.9 62.0</td></tr><tr><td>prep_of</td><td>88.4</td><td colspan=\"2\">77.8 98.4</td><td>84.0</td><td colspan=\"2\">100 44.4</td><td>74.3</td><td colspan=\"2\">77.8 43.8</td><td>80.3</td><td colspan=\"2\">77.8 60.6</td></tr><tr><td>prep_of T</td><td>79.2</td><td colspan=\"2\">76.5 97.4</td><td>81.7</td><td colspan=\"2\">100 50.3</td><td>64.7</td><td colspan=\"2\">76.5 49.0</td><td>71.2</td><td colspan=\"2\">76.5 65.2</td></tr><tr><td>xcomp</td><td>84.0</td><td colspan=\"2\">61.9 95.3</td><td>85.6</td><td colspan=\"2\">100 61.2</td><td>71.9</td><td colspan=\"2\">61.9 58.3</td><td>77.5</td><td colspan=\"2\">61.9 72.3</td></tr><tr><td>xcomp T</td><td>86.4</td><td colspan=\"2\">78.6 98.9</td><td>89.3</td><td colspan=\"2\">100 63.6</td><td>77.1</td><td colspan=\"2\">78.6 62.9</td><td>81.5</td><td colspan=\"2\">78.6 76.9</td></tr><tr><td>average</td><td>83.0</td><td colspan=\"2\">74.4 97.9</td><td>82.6</td><td colspan=\"2\">99.9 56.7</td><td>68.7</td><td colspan=\"2\">74.4 55.5</td><td>75.0</td><td colspan=\"2\">74.4 
70.5</td></tr></table>",
"text": "Relations tested in the pseudo-disambiguation experiment. Relation names and descriptions are from de Marneffe and Manning (2008) except for prep_of. Target and relative POS are the most frequent POS pairs for the relations in our labeled WSJ corpus."
},
"TABREF7": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>",
"text": "Comparison with prior methods to compute selectional preferences"
}
}
}
}