{
"paper_id": "E12-1003",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:36:43.552522Z"
},
"title": "A Bayesian Approach to Unsupervised Semantic Role Induction",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Saarland University Saarbr\u00fccken",
"location": {
"country": "Germany"
}
},
"email": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Klementiev",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Saarland University Saarbr\u00fccken",
"location": {
"country": "Germany"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We introduce two Bayesian models for unsupervised semantic role labeling (SRL) task. The models treat SRL as clustering of syntactic signatures of arguments with clusters corresponding to semantic roles. The first model induces these clusterings independently for each predicate, exploiting the Chinese Restaurant Process (CRP) as a prior. In a more refined hierarchical model, we inject the intuition that the clusterings are similar across different predicates, even though they are not necessarily identical. This intuition is encoded as a distance-dependent CRP with a distance between two syntactic signatures indicating how likely they are to correspond to a single semantic role. These distances are automatically induced within the model and shared across predicates. Both models achieve state-of-the-art results when evaluated on PropBank, with the coupled model consistently outperforming the factored counterpart in all experimental setups .",
"pdf_parse": {
"paper_id": "E12-1003",
"_pdf_hash": "",
"abstract": [
{
"text": "We introduce two Bayesian models for unsupervised semantic role labeling (SRL) task. The models treat SRL as clustering of syntactic signatures of arguments with clusters corresponding to semantic roles. The first model induces these clusterings independently for each predicate, exploiting the Chinese Restaurant Process (CRP) as a prior. In a more refined hierarchical model, we inject the intuition that the clusterings are similar across different predicates, even though they are not necessarily identical. This intuition is encoded as a distance-dependent CRP with a distance between two syntactic signatures indicating how likely they are to correspond to a single semantic role. These distances are automatically induced within the model and shared across predicates. Both models achieve state-of-the-art results when evaluated on PropBank, with the coupled model consistently outperforming the factored counterpart in all experimental setups .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Semantic role labeling (SRL) (Gildea and Jurafsky, 2002) , a shallow semantic parsing task, has recently attracted a lot of attention in the computational linguistic community (Carreras and M\u00e0rquez, 2005; Surdeanu et al., 2008; Haji\u010d et al., 2009) . The task involves prediction of predicate argument structure, i.e. both identification of arguments as well as assignment of labels according to their underlying semantic role. For example, in the following sentences: Mary always takes an agent role (A0) for the predicate open, and door is always a patient (A1). SRL representations have many potential applications in natural language processing and have recently been shown to be beneficial in question answering (Shen and Lapata, 2007; Kaisser and Webber, 2007) , textual entailment (Sammons et al., 2009) , machine translation (Wu and Fung, 2009; Liu and Gildea, 2010; Wu et al., 2011; Gao and Vogel, 2011) , and dialogue systems (Basili et al., 2009; van der Plas et al., 2011) , among others. Though syntactic representations are often predictive of semantic roles (Levin, 1993) , the interface between syntactic and semantic representations is far from trivial. The lack of simple deterministic rules for mapping syntax to shallow semantics motivates the use of statistical methods.",
"cite_spans": [
{
"start": 29,
"end": 56,
"text": "(Gildea and Jurafsky, 2002)",
"ref_id": "BIBREF12"
},
{
"start": 176,
"end": 204,
"text": "(Carreras and M\u00e0rquez, 2005;",
"ref_id": "BIBREF5"
},
{
"start": 205,
"end": 227,
"text": "Surdeanu et al., 2008;",
"ref_id": "BIBREF37"
},
{
"start": 228,
"end": 247,
"text": "Haji\u010d et al., 2009)",
"ref_id": "BIBREF15"
},
{
"start": 716,
"end": 739,
"text": "(Shen and Lapata, 2007;",
"ref_id": "BIBREF35"
},
{
"start": 740,
"end": 765,
"text": "Kaisser and Webber, 2007)",
"ref_id": "BIBREF16"
},
{
"start": 787,
"end": 809,
"text": "(Sammons et al., 2009)",
"ref_id": "BIBREF34"
},
{
"start": 832,
"end": 851,
"text": "(Wu and Fung, 2009;",
"ref_id": "BIBREF43"
},
{
"start": 852,
"end": 873,
"text": "Liu and Gildea, 2010;",
"ref_id": "BIBREF26"
},
{
"start": 874,
"end": 890,
"text": "Wu et al., 2011;",
"ref_id": "BIBREF44"
},
{
"start": 891,
"end": 911,
"text": "Gao and Vogel, 2011)",
"ref_id": "BIBREF11"
},
{
"start": 935,
"end": 956,
"text": "(Basili et al., 2009;",
"ref_id": "BIBREF2"
},
{
"start": 957,
"end": 983,
"text": "van der Plas et al., 2011)",
"ref_id": null
},
{
"start": 1072,
"end": 1085,
"text": "(Levin, 1993)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Although current statistical approaches have been successful in predicting shallow semantic representations, they typically require large amounts of annotated data to estimate model parameters. These resources are scarce and expensive to create, and even the largest of them have low coverage (Palmer and Sporleder, 2010) . Moreover, these models are domain-specific, and their performance drops substantially when they are used in a new domain (Pradhan et al., 2008) . Such domain specificity is arguably unavoidable for a semantic analyzer, as even the definitions of semantic roles are typically predicate specific, and different domains can have radically different distributions of predicates (and their senses). The necessity for a large amounts of human-annotated data for every language and domain is one of the major obstacles to the wide-spread adoption of semantic role representations.",
"cite_spans": [
{
"start": 293,
"end": 321,
"text": "(Palmer and Sporleder, 2010)",
"ref_id": "BIBREF29"
},
{
"start": 445,
"end": 467,
"text": "(Pradhan et al., 2008)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "These challenges motivate the need for unsupervised methods which, instead of relying on labeled data, can exploit large amounts of unlabeled texts. In this paper, we propose simple and effi-cient hierarchical Bayesian models for this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "It is natural to split the SRL task into two stages: the identification of arguments (the identification stage) and the assignment of semantic roles (the labeling stage). In this and in much of the previous work on unsupervised techniques, the focus is on the labeling stage. Identification, though an important problem, can be tackled with heuristics (Lang and Lapata, 2011a; Grenager and Manning, 2006) or, potentially, by using a supervised classifier trained on a small amount of data. We follow (Lang and Lapata, 2011a) , and regard the labeling stage as clustering of syntactic signatures of argument realizations for every predicate. In our first model, as in most of the previous work on unsupervised SRL, we define an independent model for each predicate. We use the Chinese Restaurant Process (CRP) (Ferguson, 1973) as a prior for the clustering of syntactic signatures. The resulting model achieves state-of-the-art results, substantially outperforming previous methods evaluated in the same setting.",
"cite_spans": [
{
"start": 352,
"end": 376,
"text": "(Lang and Lapata, 2011a;",
"ref_id": "BIBREF20"
},
{
"start": 377,
"end": 404,
"text": "Grenager and Manning, 2006)",
"ref_id": "BIBREF14"
},
{
"start": 500,
"end": 524,
"text": "(Lang and Lapata, 2011a)",
"ref_id": "BIBREF20"
},
{
"start": 809,
"end": 825,
"text": "(Ferguson, 1973)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the first model, for each predicate we independently induce a linking between syntax and semantics, encoded as a clustering of syntactic signatures. The clustering implicitly defines the set of permissible alternations, or changes in the syntactic realization of the argument structure of the verb. Though different verbs admit different alternations, some alternations are shared across multiple verbs and are very frequent (e.g., passivization, example sentences (a) vs. (d), or dativization: John gave a book to Mary vs. John gave Mary a book) (Levin, 1993) . Therefore, it is natural to assume that the clusterings should be similar, though not identical, across verbs.",
"cite_spans": [
{
"start": 550,
"end": 563,
"text": "(Levin, 1993)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our second model encodes this intuition by replacing the CRP prior for each predicate with a distance-dependent CRP (dd-CRP) prior (Blei and Frazier, 2011) shared across predicates. The distance between two syntactic signatures encodes how likely they are to correspond to a single semantic role. Unlike most of the previous work exploiting distance-dependent CRPs (Blei and Frazier, 2011; Socher et al., 2011; Duan et al., 2007) , we do not encode prior or external knowledge in the distance function but rather induce it automatically within our Bayesian model. The coupled dd-CRP model consistently outperforms the factored CRP counterpart across all the experimental settings (with gold and predicted syntactic parses, and with gold and automatically identified arguments).",
"cite_spans": [
{
"start": 365,
"end": 389,
"text": "(Blei and Frazier, 2011;",
"ref_id": "BIBREF3"
},
{
"start": 390,
"end": 410,
"text": "Socher et al., 2011;",
"ref_id": "BIBREF36"
},
{
"start": 411,
"end": 429,
"text": "Duan et al., 2007)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Both models admit efficient inference: the estimation time on the Penn Treebank WSJ corpus does not exceed 30 minutes on a single processor and the inference algorithm is highly parallelizable, reducing inference time down to several minutes on multiple processors. This suggests that the models scale to much larger corpora, which is an important property for a successful unsupervised learning method, as unlabeled data is abundant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is structured as follows. Section 2 begins with a definition of the semantic role labeling task and discuss some specifics of the unsupervised setting. In Section 3, we describe CRPs and dd-CRPs, the key components of our models. In Sections 4 -6, we describe our factored and coupled models and the inference method. Section 7 provides both evaluation and analysis. Finally, additional related work is presented in Section 8.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, instead of assuming the availability of role annotated data, we rely only on automatically generated syntactic dependency graphs. While we cannot expect that syntactic structure can trivially map to a semantic representation (Palmer et al., 2005 ) 1 , we can use syntactic cues to help us in both stages of unsupervised SRL. Before defining our task, let us consider the two stages separately.",
"cite_spans": [
{
"start": 239,
"end": 259,
"text": "(Palmer et al., 2005",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "2"
},
{
"text": "In the argument identification stage, we implement a heuristic proposed in (Lang and Lapata, 2011a) comprised of a list of 8 rules, which use nonlexicalized properties of syntactic paths between a predicate and a candidate argument to iteratively discard non-arguments from the list of all words in a sentence. Note that inducing these rules for a new language would require some linguistic expertise. One alternative may be to annotate a small number of arguments and train a classifier with nonlexicalized features instead.",
"cite_spans": [
{
"start": 75,
"end": 99,
"text": "(Lang and Lapata, 2011a)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "2"
},
{
"text": "In the argument labeling stage, semantic roles are represented by clusters of arguments, and labeling a particular argument corresponds to deciding on its role cluster. However, instead of deal-ing with argument occurrences directly, we represent them as predicate specific syntactic signatures, and refer to them as argument keys. This representation aids our models in inducing high purity clusters (of argument keys) while reducing their granularity. We follow (Lang and Lapata, 2011a) and use the following syntactic features to form the argument key representation:",
"cite_spans": [
{
"start": 464,
"end": 488,
"text": "(Lang and Lapata, 2011a)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "2"
},
{
"text": "\u2022 Active or passive verb voice (ACT/PASS).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "2"
},
{
"text": "\u2022 Argument position relative to predicate (LEFT/RIGHT). \u2022 Syntactic relation to its governor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "2"
},
{
"text": "\u2022 Preposition used for argument realization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "2"
},
{
"text": "In the example sentences in Section 1, the argument keys for candidate arguments Mary for sentences (a) and (d) would be ACT:LEFT:SBJ and PASS:RIGHT:LGS->by, 2 respectively. While aiming to increase the purity of argument key clusters, this particular representation will not always produce a good match: e.g. the door in sentence (c) will have the same key as Mary in sentence (a). Increasing the expressiveness of the argument key representation by flagging intransitive constructions would distinguish that pair of arguments. However, we keep this particular representation, in part to compare with the previous work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "2"
},
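The following minimal Python sketch (ours, not code from the paper) shows how an argument key of this form could be assembled from the four features listed above; the function name and feature encoding are purely illustrative of the ACT:LEFT:SBJ / PASS:RIGHT:LGS->by convention.

```python
# Hypothetical helper illustrating the argument-key encoding described above;
# the function and argument names are ours, not part of the original model code.
def argument_key(voice, position, relation, preposition=None):
    """Return a signature such as 'ACT:LEFT:SBJ' or 'PASS:RIGHT:LGS->by'."""
    rel = relation if preposition is None else f"{relation}->{preposition}"
    return f"{voice}:{position}:{rel}"

print(argument_key("ACT", "LEFT", "SBJ"))          # Mary in an active clause
print(argument_key("PASS", "RIGHT", "LGS", "by"))  # Mary in the passive counterpart
```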
{
"text": "In this work, we treat the unsupervised semantic role labeling task as clustering of argument keys. Thus, argument occurrences in the corpus whose keys are clustered together are assigned the same semantic role. Note that some adjunct-like modifier arguments are already explicitly represented in syntax and thus do not need to be clustered (modifiers AM-TMP, AM-MNR, AM-LOC, and AM-DIR are encoded as 'syntactic' relations TMP, MNR, LOC, and DIR, respectively (Surdeanu et al., 2008) ); instead we directly use the syntactic labels as semantic roles.",
"cite_spans": [
{
"start": 461,
"end": 484,
"text": "(Surdeanu et al., 2008)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "2"
},
{
"text": "The central components of our non-parametric Bayesian models are the Chinese Restaurant Processes (CRPs) and the closely related Dirichlet Processes (DPs) (Ferguson, 1973) . CRPs define probability distributions over partitions of a set of objects. An intuitive metaphor for describing CRPs is assignment of tables to restaurant customers. Assume a restaurant with a sequence of tables, and customers who walk into the restaurant one at a time and choose a table to join. The first customer to enter is assigned the first table. Suppose that when a client number i enters the restaurant, i \u2212 1 customers are sitting at each of the k \u2208 (1, . . . , K) tables occupied so far. The new customer is then either seated at one of the K tables with probability N k i\u22121+\u03b1 , where N k is the number customers already sitting at table k, or assigned to a new table with the probability \u03b1 i\u22121+\u03b1 . The concentration parameter \u03b1 encodes the granularity of the drawn partitions: the larger \u03b1, the larger the expected number of occupied tables. Though it is convenient to describe CRP in a sequential manner, the probability of a seating arrangement is invariant of the order of customers' arrival, i.e. the process is exchangeable. In our factored model, we use CRPs as a prior for clustering argument keys, as we explain in Section 4.",
"cite_spans": [
{
"start": 155,
"end": 171,
"text": "(Ferguson, 1973)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Traditional and Distance-dependent CRPs",
"sec_num": "3"
},
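As a concrete illustration of the seating process just described, here is a minimal Python sketch (ours, not the authors' code) that samples a CRP partition sequentially using exactly the N_k/(i-1+alpha) and alpha/(i-1+alpha) probabilities.

```python
import random

def sample_crp_partition(num_customers, alpha, rng=random.Random(0)):
    """Seat customers one at a time: join table k with prob N_k / (i-1+alpha),
    open a new table with prob alpha / (i-1+alpha)."""
    tables = []        # tables[k] = number of customers currently at table k
    assignment = []    # assignment[i] = table index chosen by customer i+1
    for i in range(1, num_customers + 1):
        weights = tables + [alpha]            # existing tables, then a new table
        r = rng.uniform(0, (i - 1) + alpha)   # total mass is i-1+alpha
        k, acc = 0, 0.0
        for k, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        if k == len(tables):
            tables.append(0)                  # the customer opens a new table
        tables[k] += 1
        assignment.append(k)
    return assignment

print(sample_crp_partition(10, alpha=1.0))    # e.g. [0, 0, 1, 0, ...]
```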
{
"text": "Often CRP is used as a part of the Dirichlet Process mixture model where each subset in the partition (each table) selects a parameter (a meal) from some base distribution over parameters. This parameter is then used to generate all data points corresponding to customers assigned to the table. The Dirichlet processes (DP) are closely connected to CRPs: instead of choosing meals for customers through the described generative story, one can equivalently draw a distribution G over meals from DP and then draw a meal for every customer from G. We refer the reader to Teh (2010) for details on CRPs and DPs. In our method, we use DPs to model distributions of arguments for every role.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Traditional and Distance-dependent CRPs",
"sec_num": "3"
},
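To make the CRP/DP connection concrete, a truncated stick-breaking draw of G from DP(beta, H) can be sketched as below; this is the textbook construction with an illustrative finite base distribution and hyperparameter, not code or values from the paper.

```python
import random

def truncated_dp_draw(beta, base, truncation=50, rng=random.Random(0)):
    """Approximate G ~ DP(beta, H) by stick-breaking: v_k ~ Beta(1, beta),
    pi_k = v_k * prod_{j<k}(1 - v_j), atoms drawn i.i.d. from the base H.
    `base` maps atoms to base probabilities (a toy stand-in for H^(A))."""
    support, base_probs = zip(*base.items())
    weights, atoms, stick = [], [], 1.0
    for _ in range(truncation):
        v = rng.betavariate(1.0, beta)
        weights.append(stick * v)
        atoms.append(rng.choices(support, weights=base_probs, k=1)[0])
        stick *= 1.0 - v
    weights.append(stick)                               # leftover stick mass
    atoms.append(rng.choices(support, weights=base_probs, k=1)[0])
    return list(zip(atoms, weights))

# Toy base distribution over argument fillers (made-up frequencies).
G = truncated_dp_draw(beta=1.0, base={"door": 0.4, "box": 0.3, "book": 0.3})
```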
{
"text": "In order to clarify how similarities between customers can be integrated in the generative process, we start by reformulating the traditional CRP in an equivalent form so that distancedependent CRP (dd-CRP) can be seen as its generalization. Instead of selecting a table for each customer as described above, one can equivalently assume that a customer i chooses one of the previous customers c i as a partner with probability 1 i\u22121+\u03b1 and sits at the same table, or occupies a new table with the probability \u03b1 i\u22121+\u03b1 . The transitive closure of this seating-with relation determines the partition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Traditional and Distance-dependent CRPs",
"sec_num": "3"
},
{
"text": "A generalization of this view leads to the definition of the distance-dependent CRP. In dd-CRPs, a customer i chooses a partner c i = j with the probability proportional to some non-negative",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Traditional and Distance-dependent CRPs",
"sec_num": "3"
},
{
"text": "score d i,j (d i,j = d j,i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Traditional and Distance-dependent CRPs",
"sec_num": "3"
},
{
"text": "which encodes a similarity between the two customers. 3 More formally,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Traditional and Distance-dependent CRPs",
"sec_num": "3"
},
{
"text": "p(c i = j|D, \u03b1) \u221d d i,j , i = j \u03b1, i = j (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Traditional and Distance-dependent CRPs",
"sec_num": "3"
},
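Spelling out the normalization implicit in equation (1) for a fixed customer i (a restatement of the formula above, not additional material from the paper):

```latex
p(c_i = j \mid D, \alpha) = \frac{d_{i,j}}{\alpha + \sum_{j' \neq i} d_{i,j'}} \quad (j \neq i),
\qquad
p(c_i = i \mid D, \alpha) = \frac{\alpha}{\alpha + \sum_{j' \neq i} d_{i,j'}} .
```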
{
"text": "where D is the entire similarity graph. This process lacks the exchangeability property of the traditional CRP but efficient approximate inference with dd-CRP is possible with Gibbs sampling. For more details on inference with dd-CRPs, we refer the reader to Blei and Frazier (2011). Though in previous work dd-CRP was used either to encode prior knowledge (Blei and Frazier, 2011) or other external information (Socher et al., 2011) , we treat D as a latent variable drawn from some prior distribution over weighted graphs. This view provides a powerful approach for coupling a family of distinct but similar clusterings: the family of clusterings can be drawn by first choosing a similarity graph D for the entire family and then re-using D to generate each of the clusterings independently of each other as defined by equation 1. In Section 5, we explain how we use this formalism to encode relatedness between argument key clusterings for different predicates.",
"cite_spans": [
{
"start": 412,
"end": 433,
"text": "(Socher et al., 2011)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Traditional and Distance-dependent CRPs",
"sec_num": "3"
},
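The following Python sketch (our illustration, using a hand-set toy similarity graph rather than an induced one) samples partner links according to equation (1) and reads off the clustering as the connected components of the resulting sits-with graph.

```python
import random
from collections import defaultdict

def sample_ddcrp_clustering(keys, D, alpha, rng=random.Random(0)):
    """Draw c_i for every key i with p(c_i = j) proportional to d_{i,j} (j != i)
    and to alpha (j == i), as in equation (1); the transitive closure of the
    'sits with' links defines the partition."""
    partner = {}
    for i in keys:
        weights = [alpha if j == i else D[(i, j)] for j in keys]
        partner[i] = rng.choices(keys, weights=weights, k=1)[0]
    parent = {k: k for k in keys}               # union-find for the transitive closure
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in partner.items():
        parent[find(i)] = find(j)
    clusters = defaultdict(list)
    for k in keys:
        clusters[find(k)].append(k)
    return list(clusters.values())

# Toy similarity graph D over four argument keys; a larger d_{i,j} for the pair of
# keys that passivization maps onto each other (the numbers are made up).
keys = ["ACT:LEFT:SBJ", "PASS:RIGHT:LGS->by", "ACT:RIGHT:OBJ", "PASS:LEFT:SBJ"]
D = {(i, j): 0.2 for i in keys for j in keys if i != j}
D[("ACT:LEFT:SBJ", "PASS:RIGHT:LGS->by")] = D[("PASS:RIGHT:LGS->by", "ACT:LEFT:SBJ")] = 5.0
print(sample_ddcrp_clustering(keys, D, alpha=0.5))
```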
{
"text": "In this section we describe the factored method which models each predicate independently. In Section 2 we defined our task as clustering of argument keys, where each cluster corresponds to a semantic role. If an argument key k is assigned to a role r (k \u2208 r), all of its occurrences are labeled r.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Factored Model",
"sec_num": "4"
},
{
"text": "Our Bayesian model encodes two common assumptions about semantic roles. First, we enforce the selectional restriction assumption: we assume that the distribution over potential argument fillers is sparse for every role, implying that 'peaky' distributions of arguments for each role r are preferred to flat distributions. Second, each role normally appears at most once per predicate occurrence. Our inference will search for a clustering which meets the above requirements to the maximal extent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Factored Model",
"sec_num": "4"
},
{
"text": "Our model associates two distributions with each predicate: one governs the selection of argument fillers for each semantic role, and the other models (and penalizes) duplicate occurrence of roles. Each predicate occurrence is generated independently given these distributions. Let us describe the model by first defining how the set of model parameters and an argument key clustering are drawn, and then explaining the generation of individual predicate and argument instances. The generative story is formally presented in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 525,
"end": 533,
"text": "Figure 1",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Factored Model",
"sec_num": "4"
},
{
"text": "We start by generating a partition of argument keys B p with each subset r \u2208 B p representing a single semantic role. The partitions are drawn from CRP(\u03b1) (see the Factored model section of Figure 1 ) independently for each predicate. The crucial part of the model is the set of selectional preference parameters \u03b8 p,r , the distributions of arguments x for each role r of predicate p. We represent arguments by their syntactic heads, 4 or more specifically, by either their lemmas or word clusters assigned to the head by an external clustering algorithm, as we will discuss in more detail in Section 7. 5 For the agent role A0 of the predicate open, for example, this distribution would assign most of the probability mass to arguments denoting sentient beings, whereas the distribution for the patient role A1 would concentrate on arguments representing \"openable\" things (doors, boxes, books, etc).",
"cite_spans": [],
"ref_spans": [
{
"start": 190,
"end": 198,
"text": "Figure 1",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Factored Model",
"sec_num": "4"
},
{
"text": "In order to encode the assumption about sparseness of the distributions \u03b8 p,r , we draw them from the DP prior DP (\u03b2, H (A) ) with a small concentration parameter \u03b2, the base probability distribution H (A) is just the normalized frequencies of arguments in the corpus. The geometric distribution \u03c8 p,r is used to model the number of times a role r appears with a given predicate occurrence. The decision whether to generate at least one role r is drawn from the uniform Bernoulli distribution. If 0 is drawn then the semantic role is not realized for the given occurrence, otherwise the number of additional roles r is drawn from the geometric distribution Geom(\u03c8 p,r ). The Beta priors over \u03c8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Factored Model",
"sec_num": "4"
},
{
"text": "Factored model: for each predicate p = 1, 2, . . . : can indicate the preference towards generating at most one argument for each role. For example, it would express the preference that a predicate open typically appears with a single agent and a single patient arguments. Now, when parameters and argument key clusterings are chosen, we can summarize the remainder of the generative story as follows. We begin by independently drawing occurrences for each predicate. For each predicate role we independently decide on the number of role occurrences. Then we generate each of the arguments (see GenArgument) by generating an argument key k p,r uniformly from the set of argument keys assigned to the cluster r, and finally choosing its filler x p,r , where the filler is either a lemma or a word cluster corresponding to the syntactic head of the argument.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Clustering of argument keys:",
"sec_num": null
},
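A compact Python sketch of the generative story just summarized is given below. It is only illustrative: the clustering B_p is fixed by hand instead of being drawn from the CRP, the DP draw is realized as a finite Dirichlet over a toy vocabulary, and the hyperparameter values are chosen for readability rather than matching the tuned values reported in Section 7.

```python
import random
rng = random.Random(0)

# Toy inputs (assumptions for illustration only).
B_p = {"role0": ["ACT:LEFT:SBJ", "PASS:RIGHT:LGS->by"],     # would be drawn from CRP(alpha)
       "role1": ["ACT:RIGHT:OBJ", "PASS:LEFT:SBJ"]}
H = {"mary": 0.3, "john": 0.3, "door": 0.2, "book": 0.2}    # base distribution over fillers
beta, eta0, eta1 = 0.5, 1.0, 3.0                            # illustrative, not the tuned values

def dirichlet(params):
    """A DP with a finite base measure reduces to a Dirichlet with parameters beta*H."""
    g = [rng.gammavariate(a, 1.0) for a in params]
    s = sum(g)
    return [x / s for x in g]

fillers = list(H)
theta = {r: dirichlet([beta * H[a] for a in fillers]) for r in B_p}   # selectional preferences
psi = {r: rng.betavariate(eta0, eta1) for r in B_p}                   # duplicate-role parameters

def gen_argument(role):
    key = rng.choice(B_p[role])                                       # key: uniform within the cluster
    filler = rng.choices(fillers, weights=theta[role], k=1)[0]        # filler x ~ theta_{p,r}
    return key, filler

def gen_occurrence():
    args = []
    for role in B_p:
        if rng.random() < 0.5:                 # uniform Bernoulli: realize the role at all?
            extra = 0
            while rng.random() < psi[role]:    # Geom(psi): additional occurrences of the role
                extra += 1
            args += [gen_argument(role) for _ in range(1 + extra)]
    return args

print(gen_occurrence())
```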
{
"text": "Bp \u223c CRP (\u03b1) [",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Clustering of argument keys:",
"sec_num": null
},
{
"text": "As we argued in Section 1, clusterings of argument keys implicitly encode the pattern of alter-nations for a predicate. E.g., passivization can be roughly represented with the clustering of the key ACT:LEFT:SBJ with PASS:RIGHT:LGS->by and ACT:RIGHT:OBJ with PASS:LEFT:SBJ. The set of permissible alternations is predicatespecific, 6 but nevertheless they arguably represent a small subset of all clusterings of argument keys. Also, some alternations are more likely to be applicable to a verb than others: for example, passivization and dativization alternations are both fairly frequent, whereas, locativepreposition-drop alternation (Mary climbed up the mountain vs. Mary climbed the mountain) is less common and applicable only to several classes of predicates representing motion (Levin, 1993) . We represent this observation by quantifying how likely a pair of keys is to be clustered. These scores (d i,j for every pair of argument keys i and j) are induced automatically within the model, and treated as latent variables shared across predicates. Intuitively, if data for several predicates strongly suggests that two argument keys should be clustered (e.g., there is a large overlap between argument fillers for the two keys) then the posterior will indicate that d i,j is expected to be greater for the pair {i, j} than for some other pair {i , j } for which the evidence is less clear. Consequently, argument keys i and j will be clustered even for predicates without strong evidence for such a clustering, whereas i and j will not.",
"cite_spans": [
{
"start": 784,
"end": 797,
"text": "(Levin, 1993)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Coupled Model",
"sec_num": "5"
},
{
"text": "One argument against coupling predicates may stem from the fact that we are using unlabeled data and may be able to obtain sufficient amount of learning material even for less frequent predicates. This may be a valid observation, but another rationale for sharing this similarity structure is the hypothesis that alternations may be easier to detect for some predicates than for others. For example, argument key clustering of predicates with very restrictive selectional restrictions on argument fillers is presumably easier than clustering for predicates with less restrictive and overlapping selectional restriction, as compactness of selectional preferences is a central assumption driving unsupervised learning of semantic roles. E.g., predicates change and defrost belong to the same Levin class (change-of-state verbs) and therefore admit similar alternations. However, the set of potential patients of defrost is sufficiently restricted, whereas the selectional restrictions for the patient of change are far less specific and they overlap with selectional restrictions for the agent role, further complicating the clustering induction task. This observation suggests that sharing clustering preferences across verbs is likely to help even if the unlabeled data is plentiful for every predicate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coupled Model",
"sec_num": "5"
},
{
"text": "More formally, we generate scores d i,j , or equivalently, the full labeled graph D with vertices corresponding to argument keys and edges weighted with the similarity scores, from a prior. In our experiments we use a non-informative prior which factorizes over pairs (i.e. edges of the graph D), though more powerful alternatives can be considered. Then we use it, in a dd-CRP(\u03b1, D), to generate clusterings of argument keys for every predicate. The rest of the generative story is the same as for the factored model. The part relevant to this model is shown in the Coupled model section of Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 592,
"end": 600,
"text": "Figure 1",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Coupled Model",
"sec_num": "5"
},
{
"text": "Note that this approach does not assume that the frequencies of syntactic patterns corresponding to alternations are similar, and a large value for d i,j does not necessarily mean that the corresponding syntactic frames i and j are very frequent in a corpus. What it indicates is that a large number of different predicates undergo the corresponding alternation; the frequency of the alternation is a different matter. We believe that this is an important point, as we do not make a restricting assumption that an alternation has the same distributional properties for all verbs which undergo this alternation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coupled Model",
"sec_num": "5"
},
{
"text": "An inference algorithm for an unsupervised model should be efficient enough to handle vast amounts of unlabeled data, as it can easily be obtained and is likely to improve results. We use a simple approximate inference algorithm based on greedy MAP search. We start by discussing MAP search for argument key clustering with the factored model and then discuss its extension applicable to the coupled model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "6"
},
{
"text": "For the factored model, semantic roles for every predicate are induced independently. Nevertheless, search for a MAP clustering can be expensive, as even a move involving a single argument key implies some computations for all its occurrences in the corpus. Instead of more complex MAP search algorithms (see, e.g., (Daume III, 2007) ), we use a greedy procedure where we start with each argument key assigned to an individual cluster, and then iteratively try to merge clusters. Each move involves (1) choosing an argument key and (2) deciding on a cluster to reassign it to. This is done by considering all clusters (including creating a new one) and choosing the most probable one.",
"cite_spans": [
{
"start": 316,
"end": 333,
"text": "(Daume III, 2007)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Role Induction",
"sec_num": "6.1"
},
{
"text": "Instead of choosing argument keys randomly at the first stage, we order them by corpus frequency. This ordering is beneficial as getting clustering right for frequent argument keys is more important and the corresponding decisions should be made earlier. 7 We used a single iteration in our experiments, as we have not noticed any benefit from using multiple iterations.",
"cite_spans": [
{
"start": 255,
"end": 256,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Role Induction",
"sec_num": "6.1"
},
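A skeleton of this greedy search is sketched below (ours, not the authors' implementation): the scoring function, which in the paper is the model's posterior over clusterings, is passed in as a callback, and a tiny hand-made score is used only to exercise the control flow.

```python
from collections import Counter

def greedy_cluster_argument_keys(key_counts, score):
    """Greedy MAP search sketch: every argument key starts in its own cluster; keys are
    visited once, most frequent first, and each is reassigned to whichever existing
    cluster (or a fresh one) maximizes `score`, a caller-supplied stand-in for the
    model's log posterior over clusterings of the keys."""
    keys = [k for k, _ in Counter(key_counts).most_common()]
    assignment = {k: i for i, k in enumerate(keys)}          # singleton clusters to start

    def clusters_of(asg):
        out = {}
        for k, c in asg.items():
            out.setdefault(c, []).append(k)
        return list(out.values())

    new_id = len(keys)
    for key in keys:
        candidates = set(assignment.values()) | {new_id}     # every cluster plus a new one
        def try_move(c):
            trial = dict(assignment)
            trial[key] = c
            return score(clusters_of(trial))
        assignment[key] = max(candidates, key=try_move)
        new_id += 1
    return clusters_of(assignment)

# Toy usage: reward clusters that are pure w.r.t. a tiny hand-made gold mapping and
# mildly penalize the number of clusters (purely illustrative, not the model score).
gold = {"ACT:LEFT:SBJ": "A0", "PASS:RIGHT:LGS->by": "A0",
        "ACT:RIGHT:OBJ": "A1", "PASS:LEFT:SBJ": "A1"}
counts = {"ACT:LEFT:SBJ": 50, "ACT:RIGHT:OBJ": 40, "PASS:LEFT:SBJ": 20, "PASS:RIGHT:LGS->by": 10}
toy_score = lambda cs: sum(len(c) for c in cs if len({gold[k] for k in c}) == 1) - len(cs)
print(greedy_cluster_argument_keys(counts, toy_score))
```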
{
"text": "In the coupled model, clusterings for different predicates are statistically dependent, as the similarity structure D is latent and shared across predicates. Consequently, a more complex inference procedure is needed. For simplicity here and in our experiments, we use the non-informative prior distribution over D which assigns the same prior probability to every possible weight d i,j for every pair {i, j}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Graph Induction",
"sec_num": "6.2"
},
{
"text": "Recall that the dd-CRP prior is defined in terms of customers choosing other customers to sit with. For the moment, let us assume that this relation among argument keys is known, that is, every argument key k for predicate p has chosen an argument key c p,k to 'sit' with. We can compute the MAP estimate for all d i,j by maximizing the objective:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Graph Induction",
"sec_num": "6.2"
},
{
"text": "arg max d i,j , i =j p k\u2208Kp log d k,c p,k k \u2208Kp d k,k ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Graph Induction",
"sec_num": "6.2"
},
{
"text": "where K p is the set of all argument keys for the predicate p. We slightly abuse the notation by using d i,i to denote the concentration parameter \u03b1 in the previous expression. Note that we also assume that similarities are symmetric,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Graph Induction",
"sec_num": "6.2"
},
{
"text": "d i,j = d j,i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Graph Induction",
"sec_num": "6.2"
},
{
"text": "If the set of argument keys K p would be the same for every predicate, then the optimal d i,j would be proportional to the number of times either i selects j as a partner, or j chooses i as a partner. 8 This no longer holds if the sets are different, but the solution can be found efficiently using a numeric optimization strategy; we use the gradient descent algorithm. We do not learn the concentration parameter \u03b1, as it is used in our model to indicate the desired granularity of semantic roles, but instead only learn d i,j (i = j). However, just learning the concentration parameter would not be sufficient as the effective concentration can be reduced or increased arbitrarily by scaling all the similarities d i,j (i = j) at once, as follows from expression (1). Instead, we enforce the normalization constraint on the similarities d i,j . We ensure that the prior probability of choosing itself as a partner, averaged over predicates, is the same as it would be with uniform d i,j (d i,j = 1 for every key pair {i, j}, i = j). This roughly says that we want to preserve the same granularity of clustering as it was with the uniform similarities. We accomplish this normalization in a post-hoc fashion by dividing the weights after optimization by",
"cite_spans": [
{
"start": 201,
"end": 202,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Graph Induction",
"sec_num": "6.2"
},
{
"text": "p k,k \u2208Kp, k =k d k,k / p |K p |(|K p | \u2212 1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Graph Induction",
"sec_num": "6.2"
},
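The two quantities above can be written down directly; the Python sketch below (ours) evaluates the dd-CRP log probability of a given set of partner choices, which is the objective maximized over the d_{i,j}, and applies the post-hoc normalization. The dictionary-based data layout is an assumption made for illustration.

```python
import math

def ddcrp_log_prob(d, partners, alpha):
    """Objective from Section 6.2: sum over predicates p and keys k of
    log( d_{k, c_{p,k}} / sum_{k' in K_p} d_{k, k'} ), with d_{k,k} playing the role
    of alpha.  `partners[p]` maps each argument key of predicate p to its chosen
    partner c_{p,k}; `d[(i, j)]` holds the symmetric similarities."""
    total = 0.0
    for choices in partners.values():
        keys = list(choices)
        for k, c in choices.items():
            num = alpha if c == k else d[(k, c)]
            denom = sum(alpha if k2 == k else d[(k, k2)] for k2 in keys)
            total += math.log(num / denom)
    return total

def normalize(d, partners):
    """Rescale the weights so that, averaged over predicates, the prior probability of
    self-selection matches the uniform case d_{i,j} = 1 (the post-hoc step above)."""
    num = sum(d[(k, k2)] for choices in partners.values()
              for k in choices for k2 in choices if k2 != k)
    den = sum(len(choices) * (len(choices) - 1) for choices in partners.values())
    return {pair: w * den / num for pair, w in d.items()}
```

In the paper the similarities are then fit by gradient descent on this objective; the sketch only evaluates and rescales it.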
{
"text": "If D is fixed, partners for every predicate p and every k can be found using virtually the same algorithm as in Section 6.1: the only difference is that, instead of a cluster, each argument key iteratively chooses a partner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Graph Induction",
"sec_num": "6.2"
},
{
"text": "Though, in practice, both the choice of partners and the similarity graphs are latent, we can use an iterative approach to obtain a joint MAP estimate of c k (for every k) and the similarity graph D by alternating the two steps. 9 Notice that the resulting algorithm is again highly parallelizable: the graph induction stage is fast, and induction of the seat-with relation (i.e. clustering argument keys) is factorizable over predicates.",
"cite_spans": [
{
"start": 229,
"end": 230,
"text": "9",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Graph Induction",
"sec_num": "6.2"
},
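At the level of control flow, the alternation can be sketched as below; the two step functions are left as callbacks (the per-predicate greedy partner choice of Section 6.1 and the gradient-based refit of D), so this is only a skeleton of the procedure, not a complete implementation.

```python
def induce_roles_coupled(predicate_keys, init_d, alpha,
                         choose_partners, refit_similarities, iterations=2):
    """Alternating MAP sketch for the coupled model: given the current similarity graph D,
    choose a partner for every argument key of every predicate (independently per predicate,
    hence parallelizable), then re-estimate D from the chosen partners; repeat."""
    d = dict(init_d)
    partners = {}
    for _ in range(iterations):                       # two iterations sufficed in the paper
        for p, keys in predicate_keys.items():        # step 1: seating choices per predicate
            partners[p] = choose_partners(keys, d, alpha)
        d = refit_similarities(partners, d, alpha)    # step 2: MAP estimate of D
    return partners, d
```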
{
"text": "One shortcoming of this approach is typical for generative models with multiple 'features': when such a model predicts a latent variable, it tends to ignore the prior class distribution and relies solely on features. This behavior is due to the over-simplifying independence assumptions. It is well known, for instance, that the poste-rior with Naive Bayes tends to be overconfident due to violated conditional independence assumptions (Rennie, 2001) . The same behavior is observed here: the shared prior does not have sufficient effect on frequent predicates. 10 Though different techniques have been developed to discount the over-confidence (Kolcz and Chowdhury, 2005) , we use the most basic one: we raise the likelihood term in power 1",
"cite_spans": [
{
"start": 436,
"end": 450,
"text": "(Rennie, 2001)",
"ref_id": "BIBREF33"
},
{
"start": 645,
"end": 672,
"text": "(Kolcz and Chowdhury, 2005)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Graph Induction",
"sec_num": "6.2"
},
{
"text": "T , where the parameter T is chosen empirically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Graph Induction",
"sec_num": "6.2"
},
{
"text": "We keep the general setup of (Lang and Lapata, 2011a) , to evaluate our models and compare them to the current state of the art. We run all of our experiments on the standard CoNLL 2008 shared task (Surdeanu et al., 2008) version of Penn Treebank WSJ and PropBank. In addition to gold dependency analyses and gold PropBank annotations, it has dependency structures generated automatically by the MaltParser (Nivre et al., 2007) . We vary our experimental setup as follows:",
"cite_spans": [
{
"start": 29,
"end": 53,
"text": "(Lang and Lapata, 2011a)",
"ref_id": "BIBREF20"
},
{
"start": 198,
"end": 221,
"text": "(Surdeanu et al., 2008)",
"ref_id": "BIBREF37"
},
{
"start": 407,
"end": 427,
"text": "(Nivre et al., 2007)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Evaluation",
"sec_num": "7.1"
},
{
"text": "\u2022 We evaluate our models on gold and automatically generated parses, and use either gold PropBank annotations or the heuristic from Section 2 to identify arguments, resulting in four experimental regimes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Evaluation",
"sec_num": "7.1"
},
{
"text": "\u2022 In order to reduce the sparsity of predicate argument fillers we consider replacing lemmas of their syntactic heads with word clusters induced by a clustering algorithm as a preprocessing step. In particular, we use Brown (Br) clustering (Brown et al., 1992) induced over RCV1 corpus (Turian et al., 2010) . Although the clustering is hierarchical, we only use a cluster at the lowest level of the hierarchy for each word.",
"cite_spans": [
{
"start": 240,
"end": 260,
"text": "(Brown et al., 1992)",
"ref_id": "BIBREF4"
},
{
"start": 286,
"end": 307,
"text": "(Turian et al., 2010)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Evaluation",
"sec_num": "7.1"
},
{
"text": "We use the purity (PU) and collocation (CO) metrics as well as their harmonic mean (F1) to measure the quality of the resulting clusters. Purity measures the degree to which each cluster contains arguments sharing the same gold role:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Evaluation",
"sec_num": "7.1"
},
{
"text": "P U = 1 N i max j |G j \u2229 C i |",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Evaluation",
"sec_num": "7.1"
},
{
"text": "where if C i is the set of arguments in the i-th induced cluster, G j is the set of arguments in the jth gold cluster, and N is the total number of arguments. Collocation evaluates the degree to which arguments with the same gold roles are assigned to a single cluster. It is computed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Evaluation",
"sec_num": "7.1"
},
{
"text": "CO = 1 N j max i |G j \u2229 C i |",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Evaluation",
"sec_num": "7.1"
},
{
"text": "We compute the aggregate PU, CO, and F1 scores over all predicates in the same way as (Lang and Lapata, 2011a) by weighting the scores of each predicate by the number of its argument occurrences. Note that since our goal is to evaluate the clustering algorithms, we do not include incorrectly identified arguments (i.e. mistakes made by the heuristic defined in Section 2) when computing these metrics.",
"cite_spans": [
{
"start": 86,
"end": 110,
"text": "(Lang and Lapata, 2011a)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Evaluation",
"sec_num": "7.1"
},
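For a single predicate, the metrics just defined can be computed as in the following sketch (ours, not evaluation code from the paper); aggregation over predicates then weights each predicate's scores by its number of argument occurrences, as described above.

```python
from collections import Counter

def purity_collocation_f1(gold_labels, induced_labels):
    """Purity (PU), collocation (CO) and their harmonic mean (F1), computed over
    parallel lists of per-argument gold roles and induced cluster ids."""
    assert len(gold_labels) == len(induced_labels)
    n = len(gold_labels)
    pairs = Counter(zip(induced_labels, gold_labels))
    induced, gold = set(induced_labels), set(gold_labels)
    pu = sum(max(pairs[(c, g)] for g in gold) for c in induced) / n
    co = sum(max(pairs[(c, g)] for c in induced) for g in gold) / n
    f1 = 2 * pu * co / (pu + co)
    return pu, co, f1

# Toy check: two induced clusters against two gold roles.
gold = ["A0", "A0", "A1", "A1", "A1"]
pred = [0, 0, 0, 1, 1]
print(purity_collocation_f1(gold, pred))   # PU = 0.8, CO = 0.8, F1 = 0.8
```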
{
"text": "We evaluate both factored and coupled models proposed in this work with and without Brown word clustering of argument fillers (Factored, Coupled, Factored+Br, Coupled+Br). Our models are robust to parameter settings, they were tuned (to an order of magnitude) on the development set and were the same for all model variants: \u03b1 = 1.e-3, \u03b2 = 1.e-3, \u03b7 0 = 1.e-3, \u03b7 1 = 1.e-10, T = 5. Although they can be induced within the model, we set them by hand to indicate granularity preferences. We compare our results with the following alternative approaches. The syntactic function baseline (SyntF) simply clusters predicate arguments according to the dependency relation to their head. Following (Lang and Lapata, 2010) , we allocate a cluster for each of 20 most frequent relations in the CoNLL dataset and one cluster for all other relations. We also compare our performance with the Latent Logistic classification (Lang and Lapata, 2010) , Split-Merge clustering (Lang and Lapata, 2011a) , and Graph Partitioning (Lang and Lapata, 2011b) approaches (labeled LLogistic, SplitMerge, and GraphPart, respectively) which achieve the current best unsupervised SRL results in this setting.",
"cite_spans": [
{
"start": 689,
"end": 712,
"text": "(Lang and Lapata, 2010)",
"ref_id": "BIBREF19"
},
{
"start": 910,
"end": 933,
"text": "(Lang and Lapata, 2010)",
"ref_id": "BIBREF19"
},
{
"start": 959,
"end": 983,
"text": "(Lang and Lapata, 2011a)",
"ref_id": "BIBREF20"
},
{
"start": 1009,
"end": 1033,
"text": "(Lang and Lapata, 2011b)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Evaluation",
"sec_num": "7.1"
},
{
"text": "Experimental results are summarized in Table 1. We begin by comparing our models to the three existing clustering approaches on gold syntactic parses, and using gold PropBank annotations to identify predicate arguments. In this set of experiments we measure the relative performance of argument clustering, removing the identifica- tion stage, and minimize the noise due to automatic syntactic annotations. All four variants of the models we propose substantially outperform other models: the coupled model with Brown clustering of argument fillers (Coupled+Br) beats the previous best model SplitMerge by 2.9% F1 score. As mentioned in Section 2, our approach specifically does not cluster some of the modifier arguments. In order to verify that this and argument filler clustering were not the only aspects of our approach contributing to performance improvements, we also evaluated our coupled model without Brown clustering and treating modifiers as regular arguments. The model achieves 89.2% purity, 74.0% collocation, and 80.9% F1 scores, still substantially outperforming all of the alternative approaches. Replacing gold parses with MaltParser analyses we see a similar trend, where Coupled+Br outperforms the best alternative approach SplitMerge by 1.5%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gold Arguments",
"sec_num": "7.2.1"
},
{
"text": "Results are summarized in Table 2 . 11 The precision and recall of our re-implementation of the argument identification heuristic described in Section 2 on gold parses were 87.7% and 88.0%, respectively, and do not quite match 88.1% and 87.9% reported in (Lang and Lapata, 2011a ). Since we could not reproduce their argument identification stage exactly, we are omitting their results for the two regimes, instead including the results for our two best models Factored+Br and Coupled+Br. We see a similar trend, where the coupled system consistently outperforms its factored counterpart, achieving 85.8% and 83.9% F1 for gold and MaltParser analyses, respectively.",
"cite_spans": [
{
"start": 36,
"end": 38,
"text": "11",
"ref_id": null
},
{
"start": 255,
"end": 278,
"text": "(Lang and Lapata, 2011a",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 26,
"end": 33,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Automatic Arguments",
"sec_num": "7.2.2"
},
{
"text": "We observe that consistently through the four regimes, sharing of alternations between predicates captured by the coupled model outperforms the factored version, and that reducing the argument filler sparsity with clustering also has a substantial positive effect. Due to the space constraints we are not able to present detailed analysis of the induced similarity graph D, however, argument-key pairs with the highest induced similarity encode, among other things, passivization, benefactive alternations, near-interchangeability of some subordinating conjunctions and prepositions (e.g., if and whether), as well as, restoring some of the unnecessary splits introduced by the argument key definition (e.g., semantic roles for adverbials do not normally depend on whether the construction is passive or active).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Arguments",
"sec_num": "7.2.2"
},
{
"text": "Most of SRL research has focused on the supervised setting (Carreras and M\u00e0rquez, 2005; Surdeanu et al., 2008) , however, lack of annotated resources for most languages and insufficient coverage provided by the existing resources motivates the need for using unlabeled data or other forms of weak supervision. This work includes methods based on graph alignment between labeled and unlabeled data (F\u00fcrstenau and Lapata, 2009) , using unlabeled data to improve lexical generalization (Deschacht and Moens, 2009) , and projection of annotation across languages (Pado and Lapata, 2009; van der Plas et al., 2011) . Semi-supervised and weakly-supervised techniques have also been explored for other types of semantic representations but these studies have mostly focused on restricted domains (Kate and Mooney, 2007; Liang et al., 2009; Titov and Kozhevnikov, 2010; Goldwasser et al., 2011; Liang et al., 2011) .",
"cite_spans": [
{
"start": 59,
"end": 87,
"text": "(Carreras and M\u00e0rquez, 2005;",
"ref_id": "BIBREF5"
},
{
"start": 88,
"end": 110,
"text": "Surdeanu et al., 2008)",
"ref_id": "BIBREF37"
},
{
"start": 397,
"end": 425,
"text": "(F\u00fcrstenau and Lapata, 2009)",
"ref_id": "BIBREF10"
},
{
"start": 483,
"end": 510,
"text": "(Deschacht and Moens, 2009)",
"ref_id": "BIBREF7"
},
{
"start": 559,
"end": 582,
"text": "(Pado and Lapata, 2009;",
"ref_id": "BIBREF28"
},
{
"start": 583,
"end": 609,
"text": "van der Plas et al., 2011)",
"ref_id": null
},
{
"start": 789,
"end": 812,
"text": "(Kate and Mooney, 2007;",
"ref_id": "BIBREF17"
},
{
"start": 813,
"end": 832,
"text": "Liang et al., 2009;",
"ref_id": "BIBREF23"
},
{
"start": 833,
"end": 861,
"text": "Titov and Kozhevnikov, 2010;",
"ref_id": "BIBREF41"
},
{
"start": 862,
"end": 886,
"text": "Goldwasser et al., 2011;",
"ref_id": "BIBREF13"
},
{
"start": 887,
"end": 906,
"text": "Liang et al., 2011)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "Unsupervised learning has been one of the central paradigms for the closely-related area of relation extraction, where several techniques have been proposed to cluster semantically similar ver-balizations of relations (Lin and Pantel, 2001; Banko et al., 2007) . Early unsupervised approaches to the SRL problem include the work by Swier and Stevenson (2004) , where the Verb-Net verb lexicon was used to guide unsupervised learning, and a generative model of Grenager and Manning (2006) which exploits linguistic priors on syntactic-semantic interface.",
"cite_spans": [
{
"start": 218,
"end": 240,
"text": "(Lin and Pantel, 2001;",
"ref_id": "BIBREF25"
},
{
"start": 241,
"end": 260,
"text": "Banko et al., 2007)",
"ref_id": "BIBREF1"
},
{
"start": 332,
"end": 358,
"text": "Swier and Stevenson (2004)",
"ref_id": "BIBREF38"
},
{
"start": 460,
"end": 487,
"text": "Grenager and Manning (2006)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "More recently, the role induction problem has been studied in Lang and Lapata (2010) where it has been reformulated as a problem of detecting alterations and mapping non-standard linkings to the canonical ones. Later, Lang and Lapata (2011a) proposed an algorithmic approach to clustering argument signatures which achieves higher accuracy and outperforms the syntactic baseline. In Lang and Lapata (2011b) , the role induction problem is formulated as a graph partitioning problem: each vertex in the graph corresponds to a predicate occurrence and edges represent lexical and syntactic similarities between the occurrences. Unsupervised induction of semantics has also been studied in Poon and Domingos (2009) and Titov and Klementiev (2010) but the induced representations are not entirely compatible with the PropBank-style annotations and they have been evaluated only on a question answering task for the biomedical domain. Also, the related task of unsupervised argument identification was considered in Abend et al. (2009) .",
"cite_spans": [
{
"start": 62,
"end": 84,
"text": "Lang and Lapata (2010)",
"ref_id": "BIBREF19"
},
{
"start": 218,
"end": 241,
"text": "Lang and Lapata (2011a)",
"ref_id": "BIBREF20"
},
{
"start": 383,
"end": 406,
"text": "Lang and Lapata (2011b)",
"ref_id": "BIBREF21"
},
{
"start": 687,
"end": 711,
"text": "Poon and Domingos (2009)",
"ref_id": "BIBREF31"
},
{
"start": 716,
"end": 743,
"text": "Titov and Klementiev (2010)",
"ref_id": null
},
{
"start": 1011,
"end": 1030,
"text": "Abend et al. (2009)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "In this work we introduced two Bayesian models for unsupervised role induction. They treat the task as a family of related clustering problems, one for each predicate. The first factored model induces each clustering independently, whereas the second model couples them by exploiting a novel technique for sharing clustering preferences across a family of clusterings. Both methods achieve state-of-the-art results with the coupled model outperforming the factored counterpart in all regimes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "9"
},
{
"text": "Although it provides a strong baseline which is difficult to beat(Grenager and Manning, 2006;Lang and Lapata, 2010;Lang and Lapata, 2011a).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "LGS denotes a logical subject in a passive construction(Surdeanu et al., 2008).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "It may be more standard to use a decay function f : R \u2192 R and choose a partner with the probability proportional to f (\u2212di,j). However, the two forms are equivalent and using scores di,j directly is more convenient for our induction purposes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For prepositional phrases, we take as head the head noun of the object noun phrase as it encodes crucial lexical information. However, the preposition is not ignored but rather encoded in the corresponding argument key, as explained in Section 2.5 Alternatively, the clustering of arguments could be induced within the model, as done in(Titov and Klementiev, 2011).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Or, at least specific to a class of predicates(Levin, 1993).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This idea has been explored before for shallow semantic representations(Lang and Lapata, 2011a;Titov and Klementiev, 2011).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that weights di,j are invariant under rescaling when the rescaling is also applied to the concentration parameter \u03b1.9 In practice, two iterations were sufficient.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The coupled model without discounting still outperforms the factored counterpart in our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note, that the scores are computed on correctly identified arguments only, and tend to be higher in these experiments probably because the complex arguments get discarded by the heuristic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors acknowledge the support of the MMCI Cluster of Excellence, and thank Hagen F\u00fcrstenau, Mikhail Kozhevnikov, Alexis Palmer, Manfred Pinkal, Caroline Sporleder and the anonymous reviewers for their suggestions, and Joel Lang for answering questions about their methods and data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Unsupervised argument identification for semantic role labeling",
"authors": [
{
"first": "Omri",
"middle": [],
"last": "Abend",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2009,
"venue": "ACL-IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omri Abend, Roi Reichart, and Ari Rappoport. 2009. Unsupervised argument identification for semantic role labeling. In ACL-IJCNLP.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Open information extraction from the web",
"authors": [
{
"first": "Michele",
"middle": [],
"last": "Banko",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"J"
],
"last": "Cafarella",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Broadhead",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michele Banko, Michael J Cafarella, Stephen Soder- land, Matt Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In IJ- CAI.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Cross-language frame semantics transfer in bilingual corpora",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Basili",
"suffix": ""
},
{
"first": "Diego",
"middle": [
"De"
],
"last": "Cao",
"suffix": ""
},
{
"first": "Danilo",
"middle": [],
"last": "Croce",
"suffix": ""
},
{
"first": "Bonaventura",
"middle": [],
"last": "Coppola",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2009,
"venue": "CICLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roberto Basili, Diego De Cao, Danilo Croce, Bonaventura Coppola, and Alessandro Moschitti. 2009. Cross-language frame semantics transfer in bilingual corpora. In CICLING.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Distance dependent chinese restaurant processes",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Frazier",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2461--2488",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei and Peter Frazier. 2011. Distance de- pendent chinese restaurant processes. Journal of Machine Learning Research, 12:2461-2488.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Classbased n-gram models for natural language",
"authors": [
{
"first": "F",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"Della"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"V"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "Jenifer",
"middle": [
"C"
],
"last": "Desouza",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Lai",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": 1992,
"venue": "Computational Linguistics",
"volume": "18",
"issue": "4",
"pages": "467--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, Vincent Della Pietra, Peter V. deSouza, Jenifer C. Lai, and Robert L. Mercer. 1992. Class- based n-gram models for natural language. Compu- tational Linguistics, 18(4):467-479.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Introduction to the CoNLL-2005 Shared Task: Semantic Role Labeling",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Carreras",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xavier Carreras and Llu\u00eds M\u00e0rquez. 2005. Intro- duction to the CoNLL-2005 Shared Task: Semantic Role Labeling. In CoNLL.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Fast search for dirichlet process mixture models",
"authors": [
{
"first": "Hal",
"middle": [],
"last": "Daume",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2007,
"venue": "AISTATS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hal Daume III. 2007. Fast search for dirichlet process mixture models. In AISTATS.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Semi-supervised semantic role labeling using the Latent Words Language Model",
"authors": [
{
"first": "Koen",
"middle": [],
"last": "Deschacht",
"suffix": ""
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2009,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koen Deschacht and Marie-Francine Moens. 2009. Semi-supervised semantic role labeling using the Latent Words Language Model. In EMNLP.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Generalized spatial dirichlet process models",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Michele",
"middle": [],
"last": "Guindani",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Gelfand",
"suffix": ""
}
],
"year": 2007,
"venue": "Biometrika",
"volume": "94",
"issue": "",
"pages": "809--825",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Duan, Michele Guindani, and Alan Gelfand. 2007. Generalized spatial dirichlet process models. Biometrika, 94:809-825.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A Bayesian analysis of some nonparametric problems",
"authors": [
{
"first": "Thomas",
"middle": [
"S"
],
"last": "Ferguson",
"suffix": ""
}
],
"year": 1973,
"venue": "The Annals of Statistics",
"volume": "1",
"issue": "2",
"pages": "209--230",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas S. Ferguson. 1973. A Bayesian analysis of some nonparametric problems. The Annals of Statistics, 1(2):209-230.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Graph alignment for semi-supervised semantic role labeling",
"authors": [
{
"first": "Hagen",
"middle": [],
"last": "F\u00fcrstenau",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2009,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hagen F\u00fcrstenau and Mirella Lapata. 2009. Graph alignment for semi-supervised semantic role label- ing. In EMNLP.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Corpus expansion for statistical machine translation with semantic role label substitution rules",
"authors": [
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2011,
"venue": "ACL:HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qin Gao and Stephan Vogel. 2011. Corpus expansion for statistical machine translation with semantic role label substitution rules. In ACL:HLT.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Automatic labelling of semantic roles",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2002,
"venue": "Computational Linguistics",
"volume": "28",
"issue": "3",
"pages": "245--288",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Gildea and Daniel Jurafsky. 2002. Automatic labelling of semantic roles. Computational Linguis- tics, 28(3):245-288.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Confidence driven unsupervised semantic parsing",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Goldwasser",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Clarke",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2011,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Goldwasser, Roi Reichart, James Clarke, and Dan Roth. 2011. Confidence driven unsupervised se- mantic parsing. In ACL.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Unsupervised discovery of a statistical verb lexicon",
"authors": [
{
"first": "Trond",
"middle": [],
"last": "Grenager",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2006,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Trond Grenager and Christoph Manning. 2006. Unsu- pervised discovery of a statistical verb lexicon. In EMNLP.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The CoNLL-2009 shared task: Syntactic and semantic dependencies in multiple languages",
"authors": [
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "Massimiliano",
"middle": [],
"last": "Ciaramita",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Johansson",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "Maria",
"middle": [
"Ant\u00f2nia"
],
"last": "Mart\u00ed",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Meyers",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Jan\u0161t\u011bp\u00e1nek",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Stra\u0148\u00e1k",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 13th Conference on Computational Natural Language Learning (CoNLL-2009)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jan Haji\u010d, Massimiliano Ciaramita, Richard Johans- son, Daisuke Kawahara, Maria Ant\u00f2nia Mart\u00ed, Llu\u00eds M\u00e0rquez, Adam Meyers, Joakim Nivre, Sebastian Pad\u00f3, Jan\u0160t\u011bp\u00e1nek, Pavel Stra\u0148\u00e1k, Mihai Surdeanu, Nianwen Xue, and Yi Zhang. 2009. The CoNLL- 2009 shared task: Syntactic and semantic depen- dencies in multiple languages. In Proceedings of the 13th Conference on Computational Natural Language Learning (CoNLL-2009), June 4-5.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Question answering based on semantic roles",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Kaisser",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Webber",
"suffix": ""
}
],
"year": 2007,
"venue": "ACL Workshop on Deep Linguistic Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Kaisser and Bonnie Webber. 2007. Question answering based on semantic roles. In ACL Work- shop on Deep Linguistic Processing.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Learning language semantics from ambigous supervision",
"authors": [
{
"first": "J",
"middle": [],
"last": "Rohit",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Kate",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2007,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rohit J. Kate and Raymond J. Mooney. 2007. Learn- ing language semantics from ambigous supervision. In AAAI.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Discounting over-confidence of naive bayes in highrecall text classification",
"authors": [
{
"first": "Aleksander",
"middle": [],
"last": "Kolcz",
"suffix": ""
},
{
"first": "Abdur",
"middle": [],
"last": "Chowdhury",
"suffix": ""
}
],
"year": 2005,
"venue": "ECML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aleksander Kolcz and Abdur Chowdhury. 2005. Dis- counting over-confidence of naive bayes in high- recall text classification. In ECML.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Unsupervised induction of semantic roles",
"authors": [
{
"first": "Joel",
"middle": [],
"last": "Lang",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2010,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joel Lang and Mirella Lapata. 2010. Unsupervised induction of semantic roles. In ACL.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Unsupervised semantic role induction via split-merge clustering",
"authors": [
{
"first": "Joel",
"middle": [],
"last": "Lang",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2011,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joel Lang and Mirella Lapata. 2011a. Unsupervised semantic role induction via split-merge clustering. In ACL.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Unsupervised semantic role induction with graph partitioning",
"authors": [
{
"first": "Joel",
"middle": [],
"last": "Lang",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2011,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joel Lang and Mirella Lapata. 2011b. Unsupervised semantic role induction with graph partitioning. In EMNLP.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "English Verb Classes and Alternations: A Preliminary Investigation",
"authors": [
{
"first": "Beth",
"middle": [],
"last": "Levin",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beth Levin. 1993. English Verb Classes and Alter- nations: A Preliminary Investigation. University of Chicago Press.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Learning semantic correspondences with less supervision",
"authors": [
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2009,
"venue": "ACL-IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Percy Liang, Michael I. Jordan, and Dan Klein. 2009. Learning semantic correspondences with less super- vision. In ACL-IJCNLP.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Learning dependency-based compositional semantics",
"authors": [
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Jordan",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2011,
"venue": "ACL: HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Percy Liang, Michael Jordan, and Dan Klein. 2011. Learning dependency-based compositional seman- tics. In ACL: HLT.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "DIRT -discovery of inference rules from text",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2001,
"venue": "KDD",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin and Patrick Pantel. 2001. DIRT -discov- ery of inference rules from text. In KDD.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Semantic role features for machine translation",
"authors": [
{
"first": "Ding",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ding Liu and Daniel Gildea. 2010. Semantic role fea- tures for machine translation. In Coling.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "The CoNLL 2007 shared task on dependency parsing",
"authors": [
{
"first": "J",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "K\u00fcbler",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Nilsson",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Yuret",
"suffix": ""
}
],
"year": 2007,
"venue": "EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Nivre, J. Hall, S. K\u00fcbler, R. McDonald, J. Nilsson, S. Riedel, and D. Yuret. 2007. The CoNLL 2007 shared task on dependency parsing. In EMNLP- CoNLL.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Crosslingual annotation projection for semantic roles",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Pado",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2009,
"venue": "Journal of Artificial Intelligence Research",
"volume": "36",
"issue": "",
"pages": "307--340",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Pado and Mirella Lapata. 2009. Cross- lingual annotation projection for semantic roles. Journal of Artificial Intelligence Research, 36:307- 340.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Evaluating FrameNet-style semantic parsing: the role of coverage gaps in FrameNet",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Caroline",
"middle": [],
"last": "Sporleder",
"suffix": ""
}
],
"year": 2010,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Palmer and Caroline Sporleder. 2010. Evalu- ating FrameNet-style semantic parsing: the role of coverage gaps in FrameNet. In COLING.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "The proposition bank: An annotated corpus of semantic roles",
"authors": [
{
"first": "M",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Kingsbury",
"suffix": ""
}
],
"year": 2005,
"venue": "Computational Linguistics",
"volume": "31",
"issue": "1",
"pages": "71--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Palmer, D. Gildea, and P. Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71-106.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Unsupervised semantic parsing",
"authors": [
{
"first": "Hoifung",
"middle": [],
"last": "Poon",
"suffix": ""
},
{
"first": "Pedro",
"middle": [],
"last": "Domingos",
"suffix": ""
}
],
"year": 2009,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hoifung Poon and Pedro Domingos. 2009. Unsuper- vised semantic parsing. In EMNLP.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Towards robust semantic role labeling",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Wayne",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "James",
"middle": [
"H"
],
"last": "Martin",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "34",
"issue": "",
"pages": "289--310",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Wayne Ward, and James H. Martin. 2008. Towards robust semantic role labeling. Com- putational Linguistics, 34:289-310.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Improving multi-class text classification with Naive bayes",
"authors": [
{
"first": "Jason",
"middle": [
"Rennie"
],
"last": "",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Rennie. 2001. Improving multi-class text classification with Naive bayes. Technical Report AITR-2001-004, MIT.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Relation alignment for textual entailment recognition",
"authors": [
{
"first": "M",
"middle": [],
"last": "Sammons",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Vydiswaran",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Vieira",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Johri",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Goldwasser",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Srikumar",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Kundu",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Small",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Rule",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Do",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2009,
"venue": "Text Analysis Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Sammons, V. Vydiswaran, T. Vieira, N. Johri, M. Chang, D. Goldwasser, V. Srikumar, G. Kundu, Y. Tu, K. Small, J. Rule, Q. Do, and D. Roth. 2009. Relation alignment for textual entailment recogni- tion. In Text Analysis Conference (TAC).",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Using semantic roles to improve question answering",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2007,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Shen and Mirella Lapata. 2007. Using semantic roles to improve question answering. In EMNLP.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Spectral chinese restaurant processes: Nonparametric clustering based on similarities",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Maas",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2011,
"venue": "AISTATS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Andrew Maas, and Christopher Man- ning. 2011. Spectral chinese restaurant processes: Nonparametric clustering based on similarities. In AISTATS.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "The CoNLL-2008 shared task on joint parsing of syntactic and semantic dependencies",
"authors": [
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Meyers Richard Johansson",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2008,
"venue": "CoNLL 2008: Shared Task",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihai Surdeanu, Adam Meyers Richard Johansson, Llu\u00eds M\u00e0rquez, and Joakim Nivre. 2008. The CoNLL-2008 shared task on joint parsing of syn- tactic and semantic dependencies. In CoNLL 2008: Shared Task.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Unsupervised semantic role labelling",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Swier",
"suffix": ""
},
{
"first": "Suzanne",
"middle": [],
"last": "Stevenson",
"suffix": ""
}
],
"year": 2004,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Swier and Suzanne Stevenson. 2004. Unsu- pervised semantic role labelling. In EMNLP.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Dirichlet processes",
"authors": [
{
"first": "Yee Whye",
"middle": [],
"last": "Teh",
"suffix": ""
}
],
"year": 2010,
"venue": "Encyclopedia of Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yee Whye Teh. 2010. Dirichlet processes. In Ency- clopedia of Machine Learning. Springer.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "A Bayesian model for unsupervised semantic parsing",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Klementiev",
"suffix": ""
}
],
"year": 2011,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Titov and Alexandre Klementiev. 2011. A Bayesian model for unsupervised semantic parsing. In ACL.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Bootstrapping semantic analyzers from noncontradictory texts",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
},
{
"first": "Mikhail",
"middle": [],
"last": "Kozhevnikov",
"suffix": ""
}
],
"year": 2010,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Titov and Mikhail Kozhevnikov. 2010. Bootstrapping semantic analyzers from non- contradictory texts. In ACL.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Word representations: A simple and general method for semi-supervised learning",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Turian",
"suffix": ""
},
{
"first": "Lev",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2010,
"venue": "ACL. Lonneke van der Plas",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In ACL. Lonneke van der Plas, Paola Merlo, and James Hen- derson. 2011. Scaling up automatic cross-lingual semantic role annotation. In ACL.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Semantic roles for SMT: A hybrid two-pass model",
"authors": [
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 2009,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekai Wu and Pascale Fung. 2009. Semantic roles for SMT: A hybrid two-pass model. In NAACL.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Proc. of Fifth Workshop on Syntax, Semantics and Structure in Statistical Translation. ACL",
"authors": [
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Marianna",
"middle": [],
"last": "Apidianaki",
"suffix": ""
},
{
"first": "Marine",
"middle": [],
"last": "Carpuat",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekai Wu, Marianna Apidianaki, Marine Carpuat, and Lucia Specia, editors. 2011. Proc. of Fifth Work- shop on Syntax, Semantics and Structure in Statisti- cal Translation. ACL.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "(a) [ A0 Mary] opened [ A1 the door]. (b) [ A0 Mary] is expected to open [ A1 the door]."
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "(c) [ A1 The door] opened. (d) [ A1 The door] was opened [ A0 by Mary]."
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Generative stories for the factored and coupled models."
},
"TABREF0": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>Coupled model:</td><td/></tr><tr><td>D \u223c N onInf orm</td><td>[similarity graph]</td></tr><tr><td>for each predicate p = 1, 2, . . . :</td><td/></tr><tr><td>Bp \u223c dd-CRP (\u03b1, D)</td><td>[partition of arg keys]</td></tr><tr><td colspan=\"2\">Parameters:</td></tr><tr><td>for each predicate p = 1, 2, . . . :</td><td/></tr><tr><td>for each role r \u2208 Bp:</td><td/></tr><tr><td>\u03b8p,r \u223c DP (\u03b2, H (A) )</td><td>[distrib of arg fillers]</td></tr><tr><td>\u03c8p,r \u223c Beta(\u03b70, \u03b71)</td><td>[geom distr for dup roles]</td></tr><tr><td colspan=\"2\">Data Generation:</td></tr><tr><td>for each predicate p = 1, 2, . . . :</td><td/></tr><tr><td>for each occurrence l of p:</td><td/></tr><tr><td>for every role r \u2208 Bp:</td><td/></tr><tr><td colspan=\"2\">if [n \u223c U nif (0, 1)] = 1: [role appears at least once]</td></tr><tr><td>GenArgument(p, r)</td><td>[draw one arg]</td></tr><tr><td>while [n \u223c \u03c8p,r] = 1:</td><td>[continue generation]</td></tr><tr><td>GenArgument(p, r)</td><td>[draw more args]</td></tr></table>",
"html": null,
"text": "partition of arg keys]"
},
"TABREF2": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>gold parses</td><td>auto parses</td></tr><tr><td>PU CO F1</td><td>PU CO F1</td></tr></table>",
"html": null,
"text": "Factored+Br 87.8 82.9 85.3 85.8 81.1 83.4 Coupled+Br 89.2 82.6 85.8 87.4 80.7 83.9 SyntF 83.5 81.4 82.4 81.4 79.1 80.2 Table 2: Argument clustering performance with automatic argument identification."
}
}
}
}