{
"paper_id": "S12-1026",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:23:47.869536Z"
},
"title": "Unsupervised Induction of a Syntax-Semantics Lexicon Using Iterative Refinement",
"authors": [
{
"first": "Hagen",
"middle": [],
"last": "F\u00fcrstenau",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CCLS Columbia University New York",
"location": {
"region": "NY",
"country": "USA"
}
},
"email": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CCLS Columbia University",
"location": {
"settlement": "New York",
"region": "NY",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a method for learning syntaxsemantics mappings for verbs from unannotated corpora. We learn linkings, i.e., mappings from the syntactic arguments and adjuncts of a verb to its semantic roles. By learning such linkings, we do not need to model individual semantic roles independently of one another, and we can exploit the relation between different mappings for the same verb, or between mappings for different verbs. We present an evaluation on a standard test set for semantic role labeling.",
"pdf_parse": {
"paper_id": "S12-1026",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a method for learning syntaxsemantics mappings for verbs from unannotated corpora. We learn linkings, i.e., mappings from the syntactic arguments and adjuncts of a verb to its semantic roles. By learning such linkings, we do not need to model individual semantic roles independently of one another, and we can exploit the relation between different mappings for the same verb, or between mappings for different verbs. We present an evaluation on a standard test set for semantic role labeling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A verb can have several ways of mapping its semantic arguments to syntax (\"diathesis alternations\"):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) a. We increased the response rate with SHK. b. SHK increased the response rate. c. The response rate increased.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The subject of increase can be the agent (1a), the instrument (1b), or the theme (what is being increased) (1c). Other verbs that show this pattern include break or melt. Much theoretical and lexicographic (descriptive) work has been devoted to determining how verbs map their lexical predicate-argument structure to syntactic arguments (Burzio, 1986; Levin, 1993) . The last decades have seen a surge in activity on the computational front, spurred in part by efforts to annotate large corpora for lexical semantics (Baker et al., 1998; Palmer et al., 2005) . Initially, we have seen computational efforts devoted to finding classes of verbs that share similar syntax-semantics mappings from annotated and unannotated corpora (Lapata and Brew, 1999; Merlo and Stevenson, 2001 ).",
"cite_spans": [
{
"start": 337,
"end": 351,
"text": "(Burzio, 1986;",
"ref_id": "BIBREF3"
},
{
"start": 352,
"end": 364,
"text": "Levin, 1993)",
"ref_id": "BIBREF9"
},
{
"start": 517,
"end": 537,
"text": "(Baker et al., 1998;",
"ref_id": "BIBREF1"
},
{
"start": 538,
"end": 558,
"text": "Palmer et al., 2005)",
"ref_id": "BIBREF12"
},
{
"start": 727,
"end": 750,
"text": "(Lapata and Brew, 1999;",
"ref_id": "BIBREF8"
},
{
"start": 751,
"end": 776,
"text": "Merlo and Stevenson, 2001",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "More recently, there has been an explosion of interest in semantic role labeling (with too many recent publications to cite).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we explore learning syntaxsemantics mappings for verbs from unannotated corpora. We are specifically interested in learning linkings. A linking is a mapping for one verb from its syntactic arguments and adjuncts to all of its semantic roles, so that individual semantic roles are not modeled independently of one another and so that we can exploit the relation between different mappings for the same verb (as in (1) above), or between mappings for different verbs. We therefore follow Grenager and Manning (2006) in treating linkings as first-class objects; however, we differ from their work in two important respects. First, we use semantic clustering of head words of arguments in an approach that resembles topic modeling, rather than directly modeling the subcategorization of verbs with a distribution over words. Second and most importantly, we do not make any assumptions about the linkings, as do Grenager and Manning (2006) . They list a small set of rules from which they derive all linkings possible in their model; in contrast, we are able to learn any linking observed in the data. Therefore, our approach is languageindependent. Grenager and Manning (2006) claim that their rules represent \"a weak form of Universal Grammar\", but their rules lack such common linking operations as the addition of an accusative reflexive for the unaccusative (Romance) or case marking (many languages), and they include a specific (English) preposition. We have no objection to using linguistic knowledge, but we do not feel that we have the empirical basis as of now to provide a set of Universal Grammar rules relevant for our task.",
"cite_spans": [
{
"start": 501,
"end": 528,
"text": "Grenager and Manning (2006)",
"ref_id": "BIBREF5"
},
{
"start": 922,
"end": 949,
"text": "Grenager and Manning (2006)",
"ref_id": "BIBREF5"
},
{
"start": 1160,
"end": 1187,
"text": "Grenager and Manning (2006)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A complete syntax-semantics lexicon describes how lexemes syntactically realize their semantic arguments, and provides selectional preferences on these dependents. Though rich lexical resources exist (such as the PropBank rolesets, the FrameNet lexicon, or VerbNet, which relates and extends these sources), none of them is complete, not even for English, on which most of the efforts have focused. However, if a complete syntax-semantics lexicon did exist, it would be an extremely useful resource: the task of shallow semantic parsing (semantic argument detection and semantic role labeling) could be reduced to determining the best analysis according to this lexicon. In fact, the learning model we present in this paper is itself a semantic role labeling model, since we can simply apply it to the data we want to label semantically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper is a step towards the unsupervised induction of a complete syntax-semantics lexicon. We present a unified procedure for associating verbs with linkings and for associating the discovered semantic roles with selectional preferences. As input, we assume a syntactic representation scheme and a parser which can produce syntactic representations of unseen sentences in the chosen scheme reasonably well, as well as unlabeled text. We do not assume a specific theory of lexical semantics, nor a specific set of semantic roles. We induce a set of linkings, which are mappings from semantic role symbols to syntactic functions. We also induce a lexicon, which associates a verb lemma with a distribution over the linkings, and which associates the sematic role symbols with verb-specific selectional preferences (which are distributions over distributions of words). We evaluate on the task of semantic role labeling using PropBank (Palmer et al., 2005) as a gold standard.",
"cite_spans": [
{
"start": 937,
"end": 958,
"text": "(Palmer et al., 2005)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We focus on semantic arguments, as they are defined specifically for each verb and thus have verbspecific mappings to syntactic arguments, which may further be subject to diathesis alternations. In contrast, semantic adjuncts (modifiers) apply (in principle) to all verbs, and do not participate in diathesis alternations. For this reason, the Prop-Bank lexicon includes arguments but not adjuncts in its framesets. The method we present in this paper is designed to find verb-specific arguments, and we therefore take the results on semantic arguments (Argn) as our primary result. On these, we achieve a 20% F-measure error reduction over a high syntactic baseline (which maps each syntactic relation to a single semantic argument).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As mentioned above, our approach is most similar to that of Grenager and Manning (2006) . However, since their model uses hand-crafted rules, they are able to predict and evaluate against actual PropBank role labels, whereas our approach has to be evaluated in terms of clustering quality.",
"cite_spans": [
{
"start": 60,
"end": 87,
"text": "Grenager and Manning (2006)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The problem of unsupervised semantic role labeling has recently attracted some attention (Lang and Lapata, 2011a; Lang and Lapata, 2011b; Titov and Klementiev, 2012) . While the present paper shares the general aim of inducing semantic role clusters in an unsupervised way, it differs in treating syntax-semantics linkings explicitly and modeling predicate-specific distributions over them. Abend et al. (2009) address the problem of unsupervised argument recognition, which we do not address in the present paper. For the purpose of building a complete unsupervised semantic parser, a method such as theirs would be complementary to our work.",
"cite_spans": [
{
"start": 89,
"end": 113,
"text": "(Lang and Lapata, 2011a;",
"ref_id": "BIBREF6"
},
{
"start": 114,
"end": 137,
"text": "Lang and Lapata, 2011b;",
"ref_id": "BIBREF7"
},
{
"start": 138,
"end": 165,
"text": "Titov and Klementiev, 2012)",
"ref_id": "BIBREF14"
},
{
"start": 391,
"end": 410,
"text": "Abend et al. (2009)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this section, we decribe a model that generates arguments for a given predicate instance. Specifically, this generative model describes the probability of a given set of argument head words and associated syntactic functions in terms of underlying semantic roles, which are modelled as latent variables. The semantic role labeling task is therefore framed as the induction of these latent variables from the observed data, which we assume to be preprocessed by a syntactic parser.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "The basic idea of our approach is to explicitly model linkings between the syntactic realizations and the underlying semantic roles of the arguments in a predicate-argument structure. Since our model of argument classification is completely unsupervised, we cannot assign familiar semantic role labels like Agent or Instrument, but rather aim at inducing role clusters, i.e., clusters of argument instances that share a semantic role. For example, each of the three instances of response rate in (1) should be assigned to the same cluster. We assume a fixed maximum number R of semantic roles per predicate and formulate argument classification as the task of assigning each argument in a predicate-argument structure to one of the numbered roles 1, . . . , R. Such an assignment can therefore be represented by an R-tuple, where each role position is either filled by one of the arguments or empty (denoted as ). We represent each argument by its head word and its syntactic function, i.e., the path of syntactic dependency relations leading to it from the predicate. In our example (1a), a possible assignment of arguments to semantic roles could therefore be represented by a head word tuple (we, rate, , SHK) and a corresponding tuple of syntactic functions (nsubj, dobj, , prep with), where for the sake of the example we have chosen R = 4 and the third semantic role slot is empty. Note that this ordered R-tuple thus represents a semantic labeling of the unordered set of arguments, which our model takes as input. While in the case of a single predicateargument structure the assignment of arguments to arbitrary semantic role numbers does not provide additional information, its value lies in the consistent assignment of arguments to specific roles across instances of the same predicate. For example, to be consistent with the assignment above, (1b) would have to be represented by ( , rate, , SHK) and ( , dobj, , nsubj).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
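{
"text": "To make this representation concrete, the following minimal Python sketch encodes the role assignment for example (1a) as an R-tuple with an empty slot; the names Argument and EMPTY are illustrative helpers, not part of the paper.

from typing import NamedTuple, Optional, Tuple

class Argument(NamedTuple):
    head: str  # head word of the argument
    func: str  # syntactic function (dependency path from the predicate)

EMPTY = None  # marks an unfilled role slot (written \u2205 in the text)
R = 4         # number of role slots chosen for this example

# Example (1a): 'We increased the response rate with SHK.'
assignment: Tuple[Optional[Argument], ...] = (
    Argument('we', 'nsubj'),       # role 1
    Argument('rate', 'dobj'),      # role 2
    EMPTY,                         # role 3 is empty
    Argument('SHK', 'prep_with'),  # role 4
)

heads = tuple(a.head if a else EMPTY for a in assignment)  # ('we', 'rate', None, 'SHK')
funcs = tuple(a.func if a else EMPTY for a in assignment)  # ('nsubj', 'dobj', None, 'prep_with')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},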
{
"text": "To formulate a generative model of argument tuples, we separately consider the tuple of argument head words and the tuple of syntactic functions. The following two subsections will address each of these in turn.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "The probability of an argument in a certain semantic role depends strongly on the selectional preferences of the predicate with respect to this role. In the context of our model, we therefore need to describe the probability P (w r |p, r) of an argument head word w r depending on the predicate p and the role r. Instead of directly modeling predicate-and role-specific distributions over head words, however, we model selectional preferences as distributions \u03c7 p,r (c) over semantic word classes c = 1, . . . , C (with C being a fixed model parameter), each of which is in turn as-sociated with a distribution \u03c8 c (w r ) over the vocabulary. They are thus similar to topics in semantic topic models. An advantage of this approach is that semantic word classes can be shared among different predicates, which facilitates their inference. Technically, the introduction of semantic word classes can be seen as a factorization of the probability of the argument head P",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selectional Preferences",
"sec_num": "3.1"
},
{
"text": "(w r |p, r) = C c=1 \u03c7 p,r (c)\u03c8 c (w r ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selectional Preferences",
"sec_num": "3.1"
},
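{
"text": "As an illustration of this factorization, the following minimal sketch computes $P(w_r \\mid p, r)$ for every vocabulary word by contracting $\\chi_{p,r}$ with the matrix of per-class word distributions; the dimensions and randomly drawn toy distributions are illustrative, not the paper's trained parameters.

import numpy as np

rng = np.random.default_rng(0)
C, V = 10, 5000  # number of word classes and vocabulary size (toy values)

chi_pr = rng.dirichlet(np.ones(C))        # chi_{p,r}(c) for one predicate/role pair
psi = rng.dirichlet(0.1 * np.ones(V), C)  # psi_c(w), shape (C, V); beta = 0.1 encourages sparsity

# P(w_r | p, r) = sum_c chi_{p,r}(c) * psi_c(w_r), computed for all words at once
p_word = chi_pr @ psi                     # shape (V,)
assert np.isclose(p_word.sum(), 1.0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selectional Preferences",
"sec_num": null
},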
{
"text": "Another important factor for the assignment of arguments to semantic roles are their syntactic functions. While in the preceding subsection we considered selectional preferences for each semantic role separately (assuming their independence), the interdependence between syntactic functions is crucial and cannot be ignored: The assignment of an argument does not depend solely on its own syntactic function, but on the whole subcategorization frame of the predicate-argument structure. We therefore have to model the probability of the whole tuple y = (y 1 , . . . , y R ) of syntactic functions. We assume that for each predicate there is a relatively small number of ways in which it realizes its arguments syntactically, i.e., in which semantic roles are linked to syntactic functions. These may correspond to alternations like those shown in (1). Instead of directly modeling the predicate-specific probability P (y|p), we consider predicate-specific distributions \u03c6 p (l) over linkings l = (x 1 , . . . , x R ). Such a linking then gives rise to the tuple y = (y 1 , . . . , y R ) by way of probability distributions P (y r |x r ) = \u03b7 xr (y r ). This allows us to keep the number of possible linkings l per predicate relatively small (by setting \u03c6 p (l) = 0 for most l), and generate a wide variety of syntactic function tuples y from them. Figure 1 presents our linking model. For each predicate-argument structure in the corpus, it contains observable variables for the predicate p and the unordered set s of arguments, and further shows latent variables for the linking l and (for each role r) the semantic word class c, the head word w, and the syntactic function y.",
"cite_spans": [],
"ref_spans": [
{
"start": 1347,
"end": 1355,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Linkings",
"sec_num": "3.2"
},
{
"text": "The distributions \u03c7 p,r (c) and \u03c8 c (w) are drawn from Dirichlet priors with symmetric parameters \u03b1 and \u03b2, respectively. In the case of the linking dis- tribution \u03c6 p (l), we are faced with an exponentially large space of possible linkings (considering a set G of syntactic functions, there are (|G| + 1) R possible linkings). This is both computationally problematic and counter-intuitive. We therefore maintain a global list L of permissible linkings and enforce \u03c6 p (l) = 0 for all l / \u2208 L. On the set L we then draw \u03c6 p (l) from a Dirichlet prior with symmetric parameter \u03b3. In Section 3.5, we will describe how the linking list L is iteratively induced from the data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structure of the Model",
"sec_num": "3.3"
},
{
"text": "We introduced the distribution \u03b7 x to allow for incidental changes when generating the tuple of syntactic functions out of the linking. If this process were allowed to arbitrarily change any syntactic function in the linking, the linkings would be too unconstrained and not reflect the syntactic functions in the corpus. We therefore parameterize \u03b7 x in such a way that the only allowed modifications are the addition or removal of syntactic functions from the linking, but no change from one syntactic function to another. We attain this by parameterizing \u03b7 x as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structure of the Model",
"sec_num": "3.3"
},
{
"text": "\u03b7 x (y) = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 \u03b7 if x = y = 1\u2212\u03b7 |G| if x = and y \u2208 G 1 \u2212 \u03b7 x if x \u2208 G and y = \u03b7 x if x = y \u2208 G 0 else",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structure of the Model",
"sec_num": "3.3"
},
{
"text": "Here, G again denotes the set of all syntactic functions. The parameter \u03b7 is drawn from a uniform prior on the interval [0.0, 1.0] and the |G| parameters \u03b7 x for x \u2208 G have uniform priors on [0.5, 1.0]. This has the effect that no syntactic function can change into another, that a syntactic function is never more probable to disappear than to stay, and that all syntactic functions are added with the same probability. This last property will be important for the iterative refinement process described in Section 3.5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structure of the Model",
"sec_num": "3.3"
},
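{
"text": "The parameterization of $\\eta_x$ can be written as a small function. In the sketch below the concrete values of $\\eta$ and $\\eta_x$ are illustrative; in the model they are sampled from their uniform priors.

EMPTY = None                                 # the empty slot symbol \u2205
G = ['nsubj', 'dobj', 'iobj', 'prep_with']   # toy set of syntactic functions
eta = 0.9                                    # P(an empty slot stays empty)
eta_g = {g: 0.95 for g in G}                 # P(function g is kept); must lie in [0.5, 1.0]

def eta_x(x, y):
    '''Probability of generating syntactic function y from linking slot x.'''
    if x is EMPTY and y is EMPTY:
        return eta
    if x is EMPTY and y in G:
        return (1.0 - eta) / len(G)  # every function is added with equal probability
    if x in G and y is EMPTY:
        return 1.0 - eta_g[x]        # deletion, never more likely than staying
    if x in G and y == x:
        return eta_g[x]              # function kept unchanged
    return 0.0                       # no change from one function to another",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structure of the Model",
"sec_num": null
},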
{
"text": "In this subsection, we describe how we train the model described so far, assuming that we are given a fixed linking list L. The following subsection will address the problem of infering this list. In Section 3.6, we will then describe how we apply the trained model to infer semantic role assignments for given predicate-argument structures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.4"
},
{
"text": "To train the linking model, we apply a Gibbs sampling procedure to the latent variables shown in Figure 1. In each sampling iteration, we first sample the values of the latent variables of each predicateargument structure based on the current distributions, and then the latent distributions based on counts obtained over the corpus. For each predicateargument structure, we begin with a blocked sampling step, simultaneously drawing values for w and y, while summing out c. This gives us",
"cite_spans": [],
"ref_spans": [
{
"start": 97,
"end": 103,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training",
"sec_num": "3.4"
},
{
"text": "P (w, y|p, l, s) \u221d R r=1 \u03b7 xr (y r ) C c=1 \u03c7 p,r (c)\u03c8 c (w r )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.4"
},
{
"text": "where we have omitted the factor P (s|w, y), which is uniform as long as we assume that w and y indeed represent permutations of the argument set s. To sample efficiently from this distribution, we precompute the inner sum (as a tensor contraction or, equivalently, R matrix multiplications). We then enumerate all permutations of the argument set and compute their probabilities, defaulting to an approximative beam search procedure in cases where the space of permutations is too large.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.4"
},
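{
"text": "A minimal sketch of this blocked sampling step, under simplifying assumptions (toy parameters and exhaustive enumeration of permutations instead of beam search; all names and values are illustrative, not the paper's implementation):

import itertools
import numpy as np

rng = np.random.default_rng(1)
R = 3                                        # role slots (toy value)
args = [('we', 'nsubj'), ('rate', 'dobj')]   # observed argument set s
linking = ['nsubj', 'dobj', None]            # current latent linking l

def eta_x(x, y):                             # toy stand-in for the eta_x distribution
    if x == y:
        return 0.9
    return 0.1 if (x is None) != (y is None) else 0.0

# inner[r][w] = sum_c chi_{p,r}(c) * psi_c(w), precomputed once per instance
inner = [{'we': 0.3, 'rate': 0.5}] * R

cands, weights = [], []
for slots in itertools.permutations(range(R), len(args)):
    a = [None] * R
    for arg, r in zip(args, slots):
        a[r] = arg                           # one permutation of s over the R slots
    p = 1.0
    for r in range(R):
        if a[r] is None:
            p *= eta_x(linking[r], None)
        else:
            w, y = a[r]
            p *= eta_x(linking[r], y) * inner[r][w]
    cands.append(a)
    weights.append(p)

probs = np.asarray(weights)
drawn = cands[rng.choice(len(cands), p=probs / probs.sum())]  # sampled (w, y)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": null
},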
{
"text": "Next, the linking l is sampled according to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.4"
},
{
"text": "P (l|p, y) \u221d P (l|p)P (y|l) = \u03c6 p (l) R r=1 \u03b7 xr (y r )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.4"
},
{
"text": "Since the space L of possible linkings is small, completely enumerating the values of this distribution is not a problem. After sampling the latent variables w, y, and l for each corpus instance, we go on to apply Gibbs sampling to the latent distributions. For example, for \u03c6 p we obtain",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.4"
},
{
"text": "P (\u03c6 p |p 1 , l 1 , . . . , p N , l N ) \u221d P (\u03c6 p ) N i=1 P (l i |p i ) \u221d Dir(\u03b3)(\u03c6 p ) \u2022 l\u2208L [\u03c6 p (l)] np(l) = Dir( n p + \u03b3)(\u03c6 p )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.4"
},
{
"text": "Here n p (l) is the number of corpus instances with predicate p and latent linking l, and n p is the vector of these counts for a fixed p, indexed by l. Hence, \u03c6 p is drawn from the Dirichlet distribution parameterized by this vector, smoothed in each component by \u03b3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.4"
},
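{
"text": "In code, this Gibbs update is a single draw from a count-parameterized Dirichlet; the counts below are illustrative, and $\\chi_{p,r}$ and $\\psi_c$ are updated in exactly the same way with $\\alpha$ and $\\beta$.

import numpy as np

rng = np.random.default_rng(2)
gamma = 1.0                       # symmetric Dirichlet hyperparameter

# n_p[l]: how many instances of predicate p currently carry latent linking l
n_p = np.array([12, 3, 0, 7, 1])  # illustrative counts over a linking list of size 5

phi_p = rng.dirichlet(n_p + gamma)  # phi_p ~ Dir(n_p + gamma)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": null
},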
{
"text": "In the same way, the sampling distributions for \u03c7 p,r and \u03c8 c are determined as Dir( n p,r + \u03b1) and Dir( n c + \u03b2), where each n p,r is a vector of counts 1 indexed by word classes c and each n c is a vector of counts indexed by head words w r . Similarly, we draw the parameter \u03b7 in the parameterization of \u03b7 x from Beta n( , ) + 1, x\u2208G n( , x) + 1 and approximate \u03b7 x by drawing \u03b7 x from Beta (n(x, x) + 1, n(x, ) + 1) and redrawing it uniformly from [0.5, 1.0], if it is smaller than 0.5. In this context, n(x, y) refers to the number of times the syntactic relation x is turned into y, counted over all corpus instances and semantic roles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.4"
},
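{
"text": "A sketch of the corresponding Beta draws, with illustrative counts $n(x, y)$ and None standing for the empty slot \u2205:

import numpy as np

rng = np.random.default_rng(3)
G = ['nsubj', 'dobj']

# n[(x, y)]: how often slot value x generated syntactic function y,
# accumulated over all corpus instances and role slots (toy counts)
n = {(None, None): 900, (None, 'nsubj'): 40, (None, 'dobj'): 60,
     ('nsubj', 'nsubj'): 500, ('nsubj', None): 25}

# eta ~ Beta(n(empty, empty) + 1, sum_x n(empty, x) + 1)
eta = rng.beta(n[(None, None)] + 1, sum(n.get((None, x), 0) for x in G) + 1)

def draw_eta_x(x):
    # eta_x ~ Beta(n(x, x) + 1, n(x, empty) + 1), redrawn uniformly
    # from [0.5, 1.0] if the draw falls below 0.5
    e = rng.beta(n.get((x, x), 0) + 1, n.get((x, None), 0) + 1)
    return e if e >= 0.5 else rng.uniform(0.5, 1.0)

eta_nsubj = draw_eta_x('nsubj')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": null
},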
{
"text": "To test for convergence of the sampling process, we monitor the log-likelihood of the data. For each predicate-argument structure with predicate p i and argument set s i , we have",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.4"
},
{
"text": "P (p i , s i ) \u221d l P (l|p i )P (s i |l) \u2248 P (s i |l i ) = w,y P (w, y, s i |l i ) = w,y\u21d2s i P (w, y|l i ) =: L i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.4"
},
{
"text": "The approximation is rather crude (replacing an expected value by a single sample from P (l|p i )), but we expect the errors to mostly cancel out over the instances of the corpus. The last sum ranges over all pairs (w, y) that represent permutations of the argument set s, and this can be computed as a by-product of the sampling process of w and y. We then compute",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.4"
},
{
"text": "L := log N i=1 L i = N i=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.4"
},
{
"text": "log L i , and terminate the sampling process if L does not increase by more than 0.1% over 5 iterations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.4"
},
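{
"text": "The convergence test amounts to comparing the current log-likelihood against its value a few iterations earlier; a minimal sketch:

def converged(log_liks, window=5, rel_tol=0.001):
    '''True once the log-likelihood has not increased by more than
    0.1% over the last `window` iterations.'''
    if len(log_liks) <= window:
        return False
    old, new = log_liks[-window - 1], log_liks[-1]
    return new - old <= rel_tol * abs(old)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": null
},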
{
"text": "In Section 3.3, we have addressed the problem of the exponentially large space of possible linkings by introducing a subset L \u2282 G R from which linkings may be drawn. We now need to clarify how this subset is determined. In contrast to Grenager and Manning (2006), we do not want to use any linguistic intuitions or manual rules to specify this subset, but rather automatically infer it from the data, so that the model stays agnostic to the language and paradigm of semantic roles. We therefore adopt a strategy of iterative refinement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Iterative Refinement of Possible Linkings",
"sec_num": "3.5"
},
{
"text": "We start with a very small set that only contains the trivial linking ( , . . . , ) and one linking for each of the R most frequent syntactic functions, placing the most frequent one in the first slot, the second one in the second slot etc. We then run Gibbs sampling. When it has converged in terms of log-likelihood, we add some new linkings to L. These new linkings are inferred by inspecting the action of the step from l to y in the generative model. Here, a syntactic function may be added to or deleted from a linking. If a particular syntactic function is frequently added to some linking, then a corresponding linking, i.e., one featuring this syntactic function and thus not requiring such a modification, seems to be missing from the set L. We therefore count for each linking l how often it is either reduced by the deletion of any syntactic function or expanded by the addition of a syntactic function. We then rank these modifications in descending order and for each of them determine the semantic role slot in which the modification (deletion or addition) occured most frequently. By applying the modification to this slot, each of the linkings gives rise to a new one. We add the first a of those, skipping new linkings if they are duplicates of those we already have in the linking set. We iterate this procedure, alternating between Gibbs sampling to convergence and the addition of a new linkings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Iterative Refinement of Possible Linkings",
"sec_num": "3.5"
},
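{
"text": "The refinement step can be sketched as follows; the bookkeeping structures mod_counts and slot_counts are illustrative names for the addition/deletion statistics gathered during sampling, not the paper's notation.

from collections import Counter

def refine(linkings, mod_counts, slot_counts, a=10):
    '''Add up to `a` new linkings derived from frequent modifications.

    mod_counts[(l, op, g)]  : how often function g was added to ('add')
                              or deleted from ('del') linking l
    slot_counts[(l, op, g)] : Counter over the role slots where this happened
    '''
    existing = set(linkings)
    added = 0
    for (l, op, g), _ in sorted(mod_counts.items(), key=lambda kv: -kv[1]):
        if added == a:
            break
        r = slot_counts[(l, op, g)].most_common(1)[0][0]  # most frequent slot
        new = list(l)
        new[r] = g if op == 'add' else None               # apply the modification there
        new = tuple(new)
        if new not in existing:                           # skip duplicates
            linkings.append(new)
            existing.add(new)
            added += 1
    return linkings",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Iterative Refinement of Possible Linkings",
"sec_num": null
},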
{
"text": "To predict semantic roles for a given predicate and argument set, we maximize P (l, w, y|p, s). If the space of permutations is too large for exhaustive enumeration, we apply a similar beam search procedure as the one employed in training to approximately maximize P (w, y|p, s, l) for each value of l. For efficiency, we do not marginalize over l. This has the potential of reducing prediction quality, as we do not predict the most likely role assignment, but rather the most likely combination of role assignment and latent linking.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.6"
},
{
"text": "In all experiments we averaged over 10 consecutive samples of the latent distributions, at the end of the sampling process (i.e., when convergence has been reached).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.6"
},
{
"text": "We train and evaluate our linking model on the data set produced for the CoNLL-08 Shared Task on Joint Parsing of Syntactic and Semantic Dependencies (Surdeanu et al., 2008) , which is based on the PropBank corpus (Palmer et al., 2005) . This data set includes part-of-speech tags, lemmatized tokens, and syntactic dependencies, which have been converted from the manual syntactic annotation of the underlying Penn Treebank (Marcus et al., 1993) .",
"cite_spans": [
{
"start": 150,
"end": 173,
"text": "(Surdeanu et al., 2008)",
"ref_id": "BIBREF13"
},
{
"start": 214,
"end": 235,
"text": "(Palmer et al., 2005)",
"ref_id": "BIBREF12"
},
{
"start": 424,
"end": 445,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "As input to our model, we decided not to use the syntactic representation in the CoNLL-08 data set, but instead to rely on Stanford Dependencies (de Marneffe et al., 2006) , which seem to facilitate semantic analysis. We thus used the Stanford Parser 2 to convert the underlying phrase structure trees of the Penn Tree Bank into Stanford Dependencies. In the resulting dependency analyses, the syntactic head word of a semantic role may differ from the syntactic head according to the provided syntax. We therefore mapped the semantic role annotation onto the Stanford Dependency trees by identifying the tree node that covers the same set of tokens as the one marked in the CoNLL-08 data set.",
"cite_spans": [
{
"start": 149,
"end": 171,
"text": "Marneffe et al., 2006)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Set",
"sec_num": "4.1"
},
{
"text": "The focus of the present work is on the linking behavior and classification of semantic arguments and not their identification. The latter is a substantially different task, and likely to be best addressed by other approaches, such as that of (Abend et al., edu/software/lex-parser.shtml 2009). We therefore use gold standard information of the CoNLL-08 data set for identifying argument sets as input to our model. The task of our model is then to classify these arguments into semantic roles. We train our model on a corpus consisting of the training and the test part of the CoNLL-08 data set, which is permissible since as a unsupervised system our model does not make any use of the annotated argument labels for training. We test the model performance against the gold argument classification on the test part. For development purposes (both designing the model and tuning the parameters as described in Section 4.4), we train on the training and development part and test on the development part.",
"cite_spans": [
{
"start": 243,
"end": 257,
"text": "(Abend et al.,",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Set",
"sec_num": "4.1"
},
{
"text": "As explained above, our model does not predict specific role labels, such as those annotated in Prop-Bank, but rather aims at clustering like argument instances together. Since the (numbered) labels of these clusters are arbitrary, we cannot evaluate the predictions of our model against the PropBank gold annotation directly. We follow Lang and Lapata (2011b) in measuring the quality of our clustering in terms of cluster purity and collocation instead.",
"cite_spans": [
{
"start": 337,
"end": 360,
"text": "Lang and Lapata (2011b)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "4.2"
},
{
"text": "Cluster purity is a measure of the degree to which the predicted clusters meet the goal of containing only instances with the same gold standard class label. Given predicted clusters C 1 , . . . , C n C and gold clusters G 1 , . . . , G n G over a set of n argument instances, it is defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "4.2"
},
{
"text": "Pu = 1 n n C i=1 max j=1,...,n G |C i \u2229 G j |",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "4.2"
},
{
"text": "Similarly, cluster collocation measures how well the clustering meets the goal of clustering all gold instances with the same label into a single predicted cluster, formally:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "4.2"
},
{
"text": "Co = 1 n n G j=1 max i=1,...,n C |C i \u2229 G j |",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "4.2"
},
{
"text": "We determine purity and collocation separately for each predicate type and then compute their microaverage, i.e., weighting each score by the number of argument instances of this precidate. Just as precision and recall, purity and collocation stand in tradeoff. In the next section, we therefore report their F 1 score, i.e., their harmonic mean 2\u2022P u\u2022Co P u+Co .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "4.2"
},
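{
"text": "Both measures and their harmonic mean follow directly from the contingency counts of predicted and gold labels; a minimal sketch for a single predicate:

from collections import Counter

def purity_collocation(pred, gold):
    '''pred, gold: cluster labels, one per argument instance of a predicate.'''
    n = len(pred)
    joint = Counter(zip(pred, gold))  # co-occurrence counts of (predicted, gold) labels
    pu = sum(max(joint[(c, g)] for g in set(gold)) for c in set(pred)) / n
    co = sum(max(joint[(c, g)] for c in set(pred)) for g in set(gold)) / n
    return pu, co, 2 * pu * co / (pu + co)

# e.g. purity_collocation(['1', '1', '2'], ['A0', 'A0', 'A1']) -> (1.0, 1.0, 1.0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": null
},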
{
"text": "We compare the performance of our model with a simple syntactic baseline that assumes that semantic roles are identical with syntactic functions. We follow Lang and Lapata (2011b) in clustering argument instances of each predicate by their syntactic functions. We do not restrict the number of clusters per predicate. In contrast, Lang and Lapata (2011b) restrict the number of clusters to 21, which is the number of clusters their system generates. We found that this reduces the baseline by 0.1% F 1 -score (Argn on the development set, c.f. Table 1 ). If we reduce the number of clusters in the baseline to the number of clusters in our system 7, the baseline is reduced by another 0.8% F 1 -score. These lower baselines are due to lower purity values. In general, we find that a smaller number of clusters results in lower F 1 measure for the baseline; the reported baseline therefore is the strictest possible.",
"cite_spans": [
{
"start": 156,
"end": 179,
"text": "Lang and Lapata (2011b)",
"ref_id": "BIBREF7"
},
{
"start": 331,
"end": 354,
"text": "Lang and Lapata (2011b)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 544,
"end": 551,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Syntactic Baseline",
"sec_num": "4.3"
},
{
"text": "For all experiments, we fixed the number of semantic roles at R = 7. This is the maximum size of the argument set over all instances of the data set and thus the lower limit for R. If R was set to a higher value, the model would be able to account for the possibility of a larger number of roles, out of which never more than 7 are expressed simultaneously. We leave such investigation to future work. We set the symmetric parameters for the Dirichlet distributions to \u03b1 = 1.0, \u03b2 = 0.1, and \u03b3 = 1.0. This corresponds to uninformative uniform priors for \u03c7 p,r and \u03c6 p , and a prior encouraging a sparse lexical distribution \u03c8 c , similar as in topic models such as LDA (Blei et al., 2003) .",
"cite_spans": [
{
"start": 668,
"end": 687,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameters and Tuning",
"sec_num": "4.4"
},
{
"text": "The number C of word classes, the number a of additional linkings in each refinement of the linking set L, and the number k of refinement steps were tuned on the development set. We first fixed a = 10 and trained models for C = 10, 20, . . . , 100, performing 50 refinement steps. The best F 1 score was obtained with C = 10 after k = 20 refinements (i.e., with 200 linkings). Next, we fixed these two parameters and trained models for a = 5, 10, 15, 20, 25. Here, we confirmed an optimal value of a = 10.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameters and Tuning",
"sec_num": "4.4"
},
{
"text": "In this section, we give quantitative results, comparing our system to the syntactic baseline in terms of cluster purity and collocation, and a qualitative discussion of some phenomena observed in the performance of the model. Table 1 shows the results of applying our models to the CoNLL-08 test with the parameter values tuned in Section 4.4. For comparison, we also show results on the development set. The table is divided into three parts, one only considering semantic arguments (Argn), one considering adjuncts (ArgM), and one aggregating results over both kinds of Prop-Bank roles (Arg*). It can be seen that our model consistently outperforms the syntactic baseline in terms of collocation (by 10% on Argn, 3% on ArgM, and 8.2% overall). In terms of purity, however, it falls short of the baseline. As mentioned above, there is a trade-off between purity and collocation. Compared to our model, which we run with a total of 7 semantic role slots, the baseline predicts a large number of small argument clusters for each predicate, whereas our model tends to group arguments together based on selectional preferences.",
"cite_spans": [],
"ref_spans": [
{
"start": 227,
"end": 234,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "In terms of F 1 score, our model outperforms the baseline by 3.6% on Argn, which translates into a relative error reduction by 20%. On adjuncts, on the other hand, our model falls short of the baseline by almost 10% F 1 score. This indicates that our approach based on explicit representations of linkings is most suited to the classification of arguments rather than adjuncts, which do not participate in diathesis alternations and do therefore not profit as much from our explicit induction of linkings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantitative Results",
"sec_num": "5.1"
},
{
"text": "Among the verbs with at least 10 test instances, include shows the largest gain in F 1 score over the baseline. In the test corpus, we find an interesting pair of sentences for this predicate:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Observations",
"sec_num": "5.2"
},
{
"text": "(2) a. Mr. Herscu proceeded to launch an ambitious, but ill-fated, $1 billion acquisition binge that included Bonwit Teller and B. The first of these two sentences is generated from the linking (nsubj, dobj, , , , , -rcmod), which does not need to be modified in any way to account for the subject that (coreferent with the head of the predicate in the modifying relative clause, binge) and the direct object Teller (head of the phrase Bonwit Teller and B. Altman & Co.). These are assigned to the first and second role slots, respectively. The second sentence, on the other hand, is generated out of the linking (prep in, nsubjpass, , , , , ) . Here, the passive subject Teller is assigned to the second role slot (which we may interpret as the Includee), while the first semantic role (the Includer) is labeled on bid, which is realized in a prepositional phrase headed by the preposition in. Note that this alternation is not the general passive alternation though, which would have led to Teller is not included by the bid. Instead, the model learned a specific alternation pattern for the predicate include. But even where a specific linking has not been learned, the model can often still infer a correct labeling by virtue of its selectional preference component. In our corpus, the predicate give occurs mostly with a direct and an indirect object as in CNN recently gave most employees raises of as much as 15%. The model therefore learns a linking (nsubj, dobj, , , , , iobj), but fails to learn that the Beneficient role can also be expressed with the preposition to as in",
"cite_spans": [],
"ref_spans": [
{
"start": 613,
"end": 643,
"text": "(prep in, nsubjpass, , , , , )",
"ref_id": null
}
],
"eq_spans": [],
"section": "Qualitative Observations",
"sec_num": "5.2"
},
{
"text": "(3) [...] only 25% give $2,500 or more to charity each year.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Observations",
"sec_num": "5.2"
},
{
"text": "However, when applying our model to this sentence, it nonetheless assigns charity to the last role slot (the same one previously occupied by the indirect object). This is due to the fact that charity is a good fit for the selectional preference of this role slot of the predicate give.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Observations",
"sec_num": "5.2"
},
{
"text": "We have presented a novel generative model of predicate-argument structures that incorporates selectional preferences of argument heads and explicitly describes linkings between semantic roles and syntactic functions. The model iteratively induces a lexicon of possible linkings from unlabeled data. The trained model can be used to cluster given argument instances according to their semantic roles, outperforming a competitive syntactic baseline. The approach is independent of any particular language or paradigm of semantic roles. However, in its present form the model assumes that each predicate has its own set of semantic roles. In formalisms such as Frame Semantics (Baker et al., 1998) , semantic roles generalize across semantically similar predicates belonging to the same frame. A natural extension of our approach would therefore consist in modeling predicate groups that share semantic roles and selectional preferences.",
"cite_spans": [
{
"start": 675,
"end": 695,
"text": "(Baker et al., 1998)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Since we do not sample c, we use pseudo-counts based on P (cr|p, r, wr) for each instance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "version 1.6.8, available at http://nlp.stanford.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Unsupervised argument identification for semantic role labeling",
"authors": [
{
"first": "Omri",
"middle": [],
"last": "Abend",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "",
"issue": "",
"pages": "28--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omri Abend, Roi Reichart, and Ari Rappoport. 2009. Unsupervised argument identification for semantic role labeling. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 28-36, Singapore.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The Berkeley FrameNet project",
"authors": [
{
"first": "Collin",
"middle": [
"F"
],
"last": "Baker",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Fillmore",
"suffix": ""
},
{
"first": "John",
"middle": [
"B"
],
"last": "Lowe",
"suffix": ""
}
],
"year": 1998,
"venue": "36th Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics (COLING-ACL'98)",
"volume": "",
"issue": "",
"pages": "86--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Collin F. Baker, J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. In 36th Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Lin- guistics (COLING-ACL'98), pages 86-90, Montr\u00e9al.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Latent dirichlet allocation",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learning Research, 3:993-1022.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Italian Syntax: A Government-Binding Approach",
"authors": [
{
"first": "Luigi",
"middle": [],
"last": "Burzio",
"suffix": ""
}
],
"year": 1986,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luigi Burzio. 1986. Italian Syntax: A Government- Binding Approach. Reidel, Dordrecht.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Generating typed dependency parses from phrase structure parses",
"authors": [
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Maccartney",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed de- pendency parses from phrase structure parses. In Pro- ceedings of LREC 2006.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Unsupervised discovery of a statistical verb lexicon",
"authors": [
{
"first": "Trond",
"middle": [],
"last": "Grenager",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D."
],
"last": "Manning",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Trond Grenager and Christopher D. Manning. 2006. Unsupervised discovery of a statistical verb lexicon. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 1-8, Sydney, Australia.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Unsupervised semantic role induction via split-merge clustering",
"authors": [
{
"first": "Joel",
"middle": [],
"last": "Lang",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1117--1126",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joel Lang and Mirella Lapata. 2011a. Unsupervised se- mantic role induction via split-merge clustering. In Proceedings of the 49th Annual Meeting of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 1117-1126, Portland, Ore- gon, USA.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Unsupervised semantic role induction with graph partitioning",
"authors": [
{
"first": "Joel",
"middle": [],
"last": "Lang",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1320--1331",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joel Lang and Mirella Lapata. 2011b. Unsupervised se- mantic role induction with graph partitioning. In Pro- ceedings of the 2011 Conference on Empirical Meth- ods in Natural Language Processing, pages 1320- 1331, Edinburgh, Scotland, UK.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Using subcategorization to resolve verb class ambiguity",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Lapata",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora",
"volume": "",
"issue": "",
"pages": "266--274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Lapata and Chris Brew. 1999. Using subcatego- rization to resolve verb class ambiguity. In In Proceed- ings of Joint SIGDAT Conference on Empirical Meth- ods in Natural Language Processing and Very Large Corpora, pages 266--274, College Park, MD.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "English Verb Classes and Alternations: A Preliminary Investigation",
"authors": [
{
"first": "Beth",
"middle": [],
"last": "Levin",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beth Levin. 1993. English Verb Classes and Alterna- tions: A Preliminary Investigation. The University of Chicago Press.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Building a Large Annotated Corpus of English: The Penn Treebank",
"authors": [
{
"first": "Mitchell",
"middle": [
"M"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell M. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a Large Annotated Corpus of English: The Penn Treebank. Computa- tional Linguistics, 19.2:313-330, June.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Automatic verb classification based on statistical distributions of argument structure",
"authors": [
{
"first": "Paola",
"middle": [],
"last": "Merlo",
"suffix": ""
},
{
"first": "Suzanne",
"middle": [],
"last": "Stevenson",
"suffix": ""
}
],
"year": 2001,
"venue": "Computational Linguistics",
"volume": "",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paola Merlo and Suzanne Stevenson. 2001. Automatic verb classification based on statistical distributions of argument structure. Computational Linguistics, 27(3).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The Proposition Bank: An annotated corpus of semantic roles",
"authors": [
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Kingsbury",
"suffix": ""
}
],
"year": 2005,
"venue": "Computational Linguistics",
"volume": "31",
"issue": "1",
"pages": "71--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71- 106.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The conll 2008 shared task on joint parsing of syntactic and semantic dependencies",
"authors": [
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Johansson",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Meyers",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2008,
"venue": "CoNLL 2008: Proceedings of the Twelfth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "159--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihai Surdeanu, Richard Johansson, Adam Meyers, Llu\u00eds M\u00e0rquez, and Joakim Nivre. 2008. The conll 2008 shared task on joint parsing of syntactic and se- mantic dependencies. In CoNLL 2008: Proceedings of the Twelfth Conference on Computational Natu- ral Language Learning, pages 159-177, Manchester, England.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A bayesian approach to unsupervised semantic role induction",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Klementiev",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Titov and Alexandre Klementiev. 2012. A bayesian approach to unsupervised semantic role induction. In Proceedings of the Conference of the European Chap- ter of the Association for Computational Linguistics, Avignon, France, April.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Representation of our linking model as a Bayesian network. The nodes p and s are observed for each of the N predicate-argument structures in the corpus. The latent variables c, w, l, and y are inferred from the data along with their distributions \u03c7, \u03c8, \u03c6, and \u03b7.",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF0": {
"text": "Syntactic Baseline 90.6 75.4 82.3 87.0 73.3 79.6 88.0 74.9 80.9 Linking Model 86.4 85.4 85.9 64.4 76.3 69.8 74.5 83.1 78.6 Syntactic Baseline 91.5 73.9 81.8 88.7 78.6 83.3 89.2 75.1 81.5",
"html": null,
"content": "<table><tr><td/><td/><td>Argn</td><td/><td/><td>ArgM</td><td/><td/><td>Arg*</td><td/></tr><tr><td>Test Set</td><td>Pu</td><td>Co</td><td>F 1</td><td>Pu</td><td>Co</td><td>F 1</td><td>Pu</td><td>Co</td><td>F 1</td></tr><tr><td>Development Set</td><td>Pu</td><td>Co</td><td>F 1</td><td>Pu</td><td>Co</td><td>F 1</td><td>Pu</td><td>Co</td><td>F 1</td></tr><tr><td>Linking Model</td><td colspan=\"9\">85.6 84.4 85.0 67.7 79.9 73.3 75.2 83.2 79.0</td></tr><tr><td colspan=\"10\">Table 1: Purity (Pu), collocation (Co), and F 1 scores of our model and the syntactic baseline in percent. Performance</td></tr><tr><td colspan=\"8\">on arguments (Argn), adjuncts (ArgM), and overall results (Arg*) are shown separately.</td><td/><td/></tr><tr><td colspan=\"4\">b. Not included in the bid are Bonwit Teller or</td><td/><td/><td/><td/><td/><td/></tr><tr><td>B. Altman &amp; Co. [...]</td><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td/><td/><td colspan=\"3\">Altman &amp; Co. [...]</td><td/><td/></tr></table>",
"num": null,
"type_str": "table"
}
}
}
}