{
"paper_id": "D14-1034",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:53:18.343530Z"
},
"title": "An Unsupervised Model for Instance Level Subcategorization Acquisition",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Baker",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Computer Laboratory University of Cambridge",
"location": {}
},
"email": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Cambridge",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Most existing systems for subcategorization frame (SCF) acquisition rely on supervised parsing and infer SCF distributions at type, rather than instance level. These systems suffer from poor portability across domains and their benefit for NLP tasks that involve sentence-level processing is limited. We propose a new unsupervised, Markov Random Field-based model for SCF acquisition which is designed to address these problems. The system relies on supervised POS tagging rather than parsing, and is capable of learning SCFs at instance level. We perform evaluation against gold standard data which shows that our system outperforms several supervised and type-level SCF baselines. We also conduct task-based evaluation in the context of verb similarity prediction, demonstrating that a vector space model based on our SCFs substantially outperforms a lexical model and a model based on a supervised parser 1 .",
"pdf_parse": {
"paper_id": "D14-1034",
"_pdf_hash": "",
"abstract": [
{
"text": "Most existing systems for subcategorization frame (SCF) acquisition rely on supervised parsing and infer SCF distributions at type, rather than instance level. These systems suffer from poor portability across domains and their benefit for NLP tasks that involve sentence-level processing is limited. We propose a new unsupervised, Markov Random Field-based model for SCF acquisition which is designed to address these problems. The system relies on supervised POS tagging rather than parsing, and is capable of learning SCFs at instance level. We perform evaluation against gold standard data which shows that our system outperforms several supervised and type-level SCF baselines. We also conduct task-based evaluation in the context of verb similarity prediction, demonstrating that a vector space model based on our SCFs substantially outperforms a lexical model and a model based on a supervised parser 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Subcategorization frame (SCF) acquisition involves identifying the arguments of a predicate and generalizing about its syntactic frames, where each frame specifies the syntactic type and number of arguments permitted by the predicate. For example, in sentences (1)-(3) the verb distinguish takes three different frames, the difference between which is not evident when considering the phrase structure categorization:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) Direct Transitive:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "[They]NP [distinguished] As SCFs describe the syntactic realization of the verbal predicate-argument structure, they are highly valuable for a variety of NLP tasks. For example, verb subcategorization information has proven useful for tasks such as parsing (Carroll and Fang, 2004; Arun and Keller, 2005; Cholakov and van Noord, 2010) , semantic role labeling (Bharati et al., 2005; Moschitti and Basili, 2005) , verb clustering, (Schulte im Walde, 2006; Sun and Korhonen, 2011) and machine translation (hye Han et al., 2000; Haji\u010d et al., 2002; Weller et al., 2013) .",
"cite_spans": [
{
"start": 9,
"end": 24,
"text": "[distinguished]",
"ref_id": null
},
{
"start": 257,
"end": 281,
"text": "(Carroll and Fang, 2004;",
"ref_id": "BIBREF6"
},
{
"start": 282,
"end": 304,
"text": "Arun and Keller, 2005;",
"ref_id": "BIBREF1"
},
{
"start": 305,
"end": 334,
"text": "Cholakov and van Noord, 2010)",
"ref_id": "BIBREF8"
},
{
"start": 360,
"end": 382,
"text": "(Bharati et al., 2005;",
"ref_id": "BIBREF3"
},
{
"start": 383,
"end": 410,
"text": "Moschitti and Basili, 2005)",
"ref_id": "BIBREF34"
},
{
"start": 442,
"end": 454,
"text": "Walde, 2006;",
"ref_id": "BIBREF47"
},
{
"start": 455,
"end": 478,
"text": "Sun and Korhonen, 2011)",
"ref_id": "BIBREF50"
},
{
"start": 503,
"end": 525,
"text": "(hye Han et al., 2000;",
"ref_id": "BIBREF15"
},
{
"start": 526,
"end": 545,
"text": "Haji\u010d et al., 2002;",
"ref_id": "BIBREF14"
},
{
"start": 546,
"end": 566,
"text": "Weller et al., 2013)",
"ref_id": "BIBREF56"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "SCF induction is challenging. The argumentadjunct distinction is difficult even for humans, and is further complicated by the fact that both arguments and adjuncts can appear frequently in potential argument head positions (Korhonen et al., 2000) . SCFs are also highly sensitive to domain variation so that both the frames themselves and their probabilities vary depending on the meaning and behavior of predicates in the domain in question (e.g. (Roland and Jurafsky, 1998; Lippincott et al., 2010; Rimell et al., 2013) , Section 4).",
"cite_spans": [
{
"start": 223,
"end": 246,
"text": "(Korhonen et al., 2000)",
"ref_id": "BIBREF21"
},
{
"start": 448,
"end": 475,
"text": "(Roland and Jurafsky, 1998;",
"ref_id": "BIBREF44"
},
{
"start": 476,
"end": 500,
"text": "Lippincott et al., 2010;",
"ref_id": "BIBREF28"
},
{
"start": 501,
"end": 521,
"text": "Rimell et al., 2013)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Because of the strong impact of domain variation, SCF information is best acquired automatically. Existing data-driven SCF induction systems, however, do not port well between domains. Most existing systems rely on handwritten rules (Briscoe and Carroll, 1997; Korhonen, 2002; Preiss et al., 2007) or simple cooccurrence statistics (O'Donovan et al., 2005; Chesley and Salmon-Alt, 2006; Ienco et al., 2008; Lenci et al., 2008 ; Altamirano and Alonso i Alemany, 2010; Kawahara and Kurohashi, 2010) applied to the grammatical dependency output of supervised statistical parsers. Even the handful of recent systems that use modern machine learning techniques (Debowski, 2009; Lippincott et al., 2012; Van de Cruys et al., 2012; Reichart and Korhonen, 2013) use supervised parsers to pre-process the data 2 .",
"cite_spans": [
{
"start": 233,
"end": 260,
"text": "(Briscoe and Carroll, 1997;",
"ref_id": "BIBREF4"
},
{
"start": 261,
"end": 276,
"text": "Korhonen, 2002;",
"ref_id": "BIBREF22"
},
{
"start": 277,
"end": 297,
"text": "Preiss et al., 2007)",
"ref_id": "BIBREF37"
},
{
"start": 332,
"end": 356,
"text": "(O'Donovan et al., 2005;",
"ref_id": "BIBREF35"
},
{
"start": 357,
"end": 386,
"text": "Chesley and Salmon-Alt, 2006;",
"ref_id": "BIBREF7"
},
{
"start": 387,
"end": 406,
"text": "Ienco et al., 2008;",
"ref_id": "BIBREF16"
},
{
"start": 407,
"end": 425,
"text": "Lenci et al., 2008",
"ref_id": "BIBREF26"
},
{
"start": 656,
"end": 672,
"text": "(Debowski, 2009;",
"ref_id": "BIBREF11"
},
{
"start": 673,
"end": 697,
"text": "Lippincott et al., 2012;",
"ref_id": "BIBREF29"
},
{
"start": 698,
"end": 724,
"text": "Van de Cruys et al., 2012;",
"ref_id": "BIBREF54"
},
{
"start": 725,
"end": 753,
"text": "Reichart and Korhonen, 2013)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Supervised parsers are notoriously sensitive to domain variation (Lease and Charniak, 2005) . As annotation of data for each new domain is unrealistic, current SCF systems suffer from poor portability. This problem is compounded for the many systems that employ manually developed SCF rules because rules are inherently ignorant to domain-specific preferences. The few SCF studies that focused on specific domains (e.g. biomedicine) have reported poor performance due to these reasons (Rimell et al., 2013) .",
"cite_spans": [
{
"start": 65,
"end": 91,
"text": "(Lease and Charniak, 2005)",
"ref_id": "BIBREF25"
},
{
"start": 485,
"end": 506,
"text": "(Rimell et al., 2013)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Another limitation of most current SCF systems is that they produce a type-level SCF lexicon (i.e. a lexicon which lists, for a given predicate, different SCF types with their relative frequencies). Such a lexicon provides a useful high-level profile of the syntactic behavior of the predicate in question, but is less useful for downstream NLP tasks (e.g. information extraction, parsing, machine translation) that involve sentence processing and can therefore benefit from SCF information at instance level. Sentences (1)-(3) demonstrate this limitation -a prior distribution over the possible syntactic frames of distinguish provides only a weak signal to a sentence level NLP application that needs to infer the verbal argument structure of its input sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose a new unsupervised model for SCF induction which addresses these problems with existing systems. Our model does not use a parser or hand-written rules, only a part-of-speech (POS) tagger is utilizes in order to produce features for machine learning. While POS taggers are also sensitive to domain variation, they can be adapted to domains more easily than parsers because they require much smaller amounts of annotated data (Lease and Charniak, 2005; Ringger et al., 2007) . However, as we demonstrate in our experiments, domain adaptation of POS tagging may not even be necessary to obtain good results on the SCF acquisition task.",
"cite_spans": [
{
"start": 435,
"end": 461,
"text": "(Lease and Charniak, 2005;",
"ref_id": "BIBREF25"
},
{
"start": 462,
"end": 483,
"text": "Ringger et al., 2007)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our model, based on the Markov Random Field (MRF) framework, performs instance-based SCF learning. It encodes syntactic similarities among verb instances across different verb types (derived from a lexical and POS-based feature representation of verb instances) as well as prior beliefs on the tendencies of specific instances of the same verb type to take the same SCF. We evaluate our model against corpora annotated with verb instance SCFs (Quochi et al., 2012) . In addition, following the Levin verb clustering tradition (Levin, 1993) which ties verb meanings with their syntactic properties, we evaluate the semantic predictive power of our clusters. In the former evaluation, our model outperforms a number of strong baselines, including supervised and type-level ones, achieving an accuracy of up to 69.2%. In the latter evaluation a vector space model that utilized our induced SCFs substantially outperforms the output of a type-level SCF system that uses the fully trained Stanford parser.",
"cite_spans": [
{
"start": 443,
"end": 464,
"text": "(Quochi et al., 2012)",
"ref_id": "BIBREF38"
},
{
"start": 526,
"end": 539,
"text": "(Levin, 1993)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Several SCF acquisition systems are available for English (O'Donovan et al., 2005; Preiss et al., 2007; Lippincott et al., 2012; Van de Cruys et al., 2012; Reichart and Korhonen, 2013) and other languages, including French (Messiant, 2008) , Italian (Lenci et al., 2008 ), Turkish (Uzun et al., 2008 , Japanese (Kawahara and Kurohashi, 2010) and Chinese (Han et al., 2008) . The prominent input to these systems are grammatical relations (GRs) which express binary dependencies between words (e.g. direct and indirect objects, various types of complements and conjunctions). These are generated by some parsers (e.g. (Briscoe et al., 2006) ) and can be extracted from the output of others (De-Marneffe et al., 2006) .",
"cite_spans": [
{
"start": 58,
"end": 82,
"text": "(O'Donovan et al., 2005;",
"ref_id": "BIBREF35"
},
{
"start": 83,
"end": 103,
"text": "Preiss et al., 2007;",
"ref_id": "BIBREF37"
},
{
"start": 104,
"end": 128,
"text": "Lippincott et al., 2012;",
"ref_id": "BIBREF29"
},
{
"start": 129,
"end": 155,
"text": "Van de Cruys et al., 2012;",
"ref_id": "BIBREF54"
},
{
"start": 156,
"end": 184,
"text": "Reichart and Korhonen, 2013)",
"ref_id": "BIBREF39"
},
{
"start": 223,
"end": 239,
"text": "(Messiant, 2008)",
"ref_id": "BIBREF32"
},
{
"start": 250,
"end": 269,
"text": "(Lenci et al., 2008",
"ref_id": "BIBREF26"
},
{
"start": 270,
"end": 299,
"text": "), Turkish (Uzun et al., 2008",
"ref_id": null
},
{
"start": 311,
"end": 341,
"text": "(Kawahara and Kurohashi, 2010)",
"ref_id": "BIBREF18"
},
{
"start": 346,
"end": 372,
"text": "Chinese (Han et al., 2008)",
"ref_id": null
},
{
"start": 617,
"end": 639,
"text": "(Briscoe et al., 2006)",
"ref_id": "BIBREF5"
},
{
"start": 689,
"end": 715,
"text": "(De-Marneffe et al., 2006)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "Two representative systems for English are the Cambridge system (Preiss et al., 2007) and the BioLexicon system which was used to acquire a substantial lexicon for biomedicine (Venturi et al., 2009) . These systems extract GRs at the verb instance level from the output of a parser: the RASP general-language unlexicalized parser 3 (Briscoe et al., 2006) and the lexicalized Enju parser tuned to the biomedical domain (Miyao and Tsujii, 2005) , respectively. They generate potential SCFs by mapping GRs to a predefined SCF inventory using a set of manually developed rules (the Cambridge system) or by simply considering the sets of GRs including verbs in question as potential SCFs (BioLexicon). Finally, a type level lexicon is built through noisy frame filtering (based on frequencies or on external resources and annotations), which aims to remove errors from parsing and argument-adjunct distinction. Clearly, these systems require extensive manual work: a-priori definition of an SCF inventory and rules, manually annotated sentences for training a supervised parser, SCF annotations for parser lexicalization, and manually developed resources for optimal filtering.",
"cite_spans": [
{
"start": 64,
"end": 85,
"text": "(Preiss et al., 2007)",
"ref_id": "BIBREF37"
},
{
"start": 176,
"end": 198,
"text": "(Venturi et al., 2009)",
"ref_id": "BIBREF55"
},
{
"start": 332,
"end": 354,
"text": "(Briscoe et al., 2006)",
"ref_id": "BIBREF5"
},
{
"start": 418,
"end": 442,
"text": "(Miyao and Tsujii, 2005)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "A number of recent works have applied modern machine learning techniques to SCF induction, including point-wise co-occurrence of arguments (Debowski, 2009) , a Bayesian network model (Lippincott et al., 2012) , multi-way tensor factorization (Van de Cruys et al., 2012) and Determinantal Point Processes (DPPs) -based clustering (Reichart and Korhonen, 2013) . However, all of these systems induce type-level SCF lexicons and, except from the system of (Lippincott et al., 2012) that is not capable of learning traditional SCFs, they all rely on supervised parsers.",
"cite_spans": [
{
"start": 139,
"end": 155,
"text": "(Debowski, 2009)",
"ref_id": "BIBREF11"
},
{
"start": 183,
"end": 208,
"text": "(Lippincott et al., 2012)",
"ref_id": "BIBREF29"
},
{
"start": 242,
"end": 269,
"text": "(Van de Cruys et al., 2012)",
"ref_id": "BIBREF54"
},
{
"start": 329,
"end": 358,
"text": "(Reichart and Korhonen, 2013)",
"ref_id": "BIBREF39"
},
{
"start": 453,
"end": 478,
"text": "(Lippincott et al., 2012)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "Our new system differs from previous ones in a number of respects. First, in contrast to most previous systems, our system provides SCF analysis for each verb instance in its sentential context, yielding more precise SCF information for systems benefiting from instance-based analysis. Secondly, it addresses SCF induction as an unsupervised clustering problem, avoiding the use of supervised parsing or any of the sources of manual supervision used in previous works. Our system relies on POS tags -however, we show that it is not necessary to train a tagger with in-domain data to obtain good performance on this task, and therefore our approach provides a more domainindependent solution to SCF acquisition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "We employ POS-tagging instead of unsupervised parsing for two main reasons. First, while a major progress has been made on unsupervised parsing (e.g. (Cohen and Smith, 2009; Berg-Kirkpatrick et al., 2010) ), the performance is still considerably behind that of supervised parsing. For example, the state-of-the-art discriminative model of (Berg-Kirkpatrick et al., 2010) achieves only 63% directed arc accuracy for WSJ sentences of up to 10 words, compared to more than 95% obtained with supervised parsers. Second, current unsupervised parsers produce unlabeled structures which are substantially less useful for SCF acquisition than labeled structures produced by super-vised parsers (e.g. grammatical relations).",
"cite_spans": [
{
"start": 150,
"end": 173,
"text": "(Cohen and Smith, 2009;",
"ref_id": "BIBREF9"
},
{
"start": 174,
"end": 204,
"text": "Berg-Kirkpatrick et al., 2010)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "Finally, a number of recent works addressed related tasks such as argument role clustering for SRL (Lang and Lapata, 2011a; Lang and Lapata, 2011b; Titvo and Klementiev, 2012) in an unsupervised manner. While these works differ from ours in the task (clustering arguments rather than verbs) and the level of supervision (applying a supervised parser), like us they analyze the verb argument structure at the instance level.",
"cite_spans": [
{
"start": 99,
"end": 123,
"text": "(Lang and Lapata, 2011a;",
"ref_id": "BIBREF23"
},
{
"start": 124,
"end": 147,
"text": "Lang and Lapata, 2011b;",
"ref_id": "BIBREF24"
},
{
"start": 148,
"end": 175,
"text": "Titvo and Klementiev, 2012)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "We address SCF induction as an unsupervised verb instance clustering problem. Given a set of plain sentences, our algorithm aims to cluster the verb instances in its input into syntactic clusters that strongly correlate with SCFs. In this section we introduce a Markov Random Field (MRF) model for this task: Section 3.1 describes our model's structure, components and objective; Section 3.2 describes the model potentials and the knowledge they encode; and Section 3.3 describes how clusters are induced from the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "We implement our model in the MRF framework (Koller and Friedman, 2009) . This enables us to encode the two main sources of information that govern SCF selection in verb instances: (1) At the sentential context, the verbal syntactic frame is encoded through syntactic features. Verb instances with similar feature representations should therefore take the same syntactic frame; and (2) At the global context, per verb type SCF distributions tend to be Zipfian (Korhonen et al., 2000) . Instances of the same verb type should therefore be biased to take the same syntactic frame.",
"cite_spans": [
{
"start": 44,
"end": 71,
"text": "(Koller and Friedman, 2009)",
"ref_id": "BIBREF20"
},
{
"start": 460,
"end": 483,
"text": "(Korhonen et al., 2000)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Structure",
"sec_num": "3.1"
},
{
"text": "Given a collection of plain input sentences, we denote the number of verb instances in the collection with n, and the number of data-dependent equivalence classes (ECs) with K (see below for their definition), and define an undirected graphical model (MRF), G = (V, E, L). We define the vertex set as V = X \u222a C, with X = {x 1 , . . . , x n } consisting of one vertex for every verb instance in the input collection, and C = {c 1 . . . c K } consisting of one vertex for each data-dependent EC. The set of labels used by the model, L, corresponds to the syntactic frames taken by the verbs in the input data. The edge set E is defined through the model's potentials that are described below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Structure",
"sec_num": "3.1"
},
{
"text": "We encode information in the model through three main sets of potentials: one set of singleton potentials -defined over individual model vertexes, and two sets of pairwise potentials -defined between pairs of vertexes. The first set consists of a singleton potential for each vertex in the model. Reflecting the Zipfian distribution of SCFs across the instances of the same verb type, these potentials encourage the model to assign such verb instances to the same frame (cluster). The information encoded in these potentials is induced via a pre-processing clustering step. The second set consists of a pairwise potential for each pair of vertexes x i , x j \u2208 X -that is, for each verb instance pair in the input, across verb types. These potentials encode the belief, computed as feature-based similarity (see below), that their verb instance arguments implement the same SCF.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Structure",
"sec_num": "3.1"
},
{
"text": "Finally, potentials from the last set bias the model to assign the same SCF to high cardinality sets of cross-type verb instances based on their syntactic context. While these are pairwise potentials defined between verb instance vertexes (X) and EC vertexes (C), they are designed so that they bias the assignment of all verb instance vertexes that are connected to the same EC vertex towards the same frame assignment (l \u2208 L). The two types of pairwise potentials complement each other by modeling syntactic similarities among verb instance pairs, as well as among higher cardinality verb instance sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Structure",
"sec_num": "3.1"
},
{
"text": "The resulted maximum aposteriori problem (MAP) takes the following form:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Structure",
"sec_num": "3.1"
},
{
"text": "M AP (V ) = arg max x,c\u2208V n i=1 \u03b8i(xi) + n i=1 n j=1 \u03b8i,j(xi, xj)+ n i=1 K j=1 \u03c6i,j(xi, cj) \u2022 I(xi \u2208 ECj) + K i=1 K j=1 \u03bei,j(ci, cj)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Structure",
"sec_num": "3.1"
},
{
"text": "where the predicate I(x i \u2208 EC j ) returns 1 if the i-th verb instance belongs the j-th equivalence class and 0 otherwise. The \u03be pairwise potentials defined between EC vertexes are very simple potentials designed to promise different assignments for each pair of EC vertexes. They do so by assigning a \u2212\u221e score to assignments where their argument vertexes take the same frame and a 0 otherwise. In the rest of this section we do not get back to this simple set of potentials.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Structure",
"sec_num": "3.1"
},
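To make the objective concrete, here is a minimal, illustrative Python sketch (not the authors' code) of scoring one joint label assignment under the four potential sums above. All sizes, potential values, and names here are toy placeholders under our reading of the text.

```python
import numpy as np

# Toy sizes: n verb instances, K equivalence classes, |L| candidate frames.
n, K, L = 5, 2, 3
rng = np.random.default_rng(0)

theta_single = rng.normal(size=(n, L))     # theta_i(x_i): singleton potentials
lam = rng.normal(size=(n, n))              # lambda(v_i, v_j): pairwise similarity scores
member = rng.integers(0, 2, size=(n, K))   # I(x_i in EC_j): EC membership indicators
U = 10.0                                   # phi strength (hyperparameter, Sec. 3.2)

def map_score(x, c):
    """Score of one joint assignment; x and c hold frame labels in range(L)."""
    if len(set(c)) < K:                    # xi potentials: -inf if two ECs share a frame
        return float("-inf")
    score = sum(theta_single[i, x[i]] for i in range(n))
    # theta_{i,j} contributes lambda(v_i, v_j) only when x_i == x_j
    score += sum(lam[i, j] for i in range(n) for j in range(n)
                 if i != j and x[i] == x[j])
    # phi_{i,j} rewards agreement between an instance and its EC vertex
    score += sum(U for i in range(n) for j in range(K)
                 if member[i, j] and x[i] == c[j])
    return score

print(map_score([0, 0, 1, 2, 1], [0, 1]))
```

The MAP problem is then to search for the assignment maximizing this score; Section 3.3 describes the LP relaxation used instead of exhaustive search.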
{
"text": "A graphical illustration of the model is given in Figure 1 . Note that we could have selected a richer model structure, for example, by defining a similarity potential over all verb instance vertexes that share an equivalence class. However, as the figure demonstrates, even the structure of the pruned version of our model (see Section 3.3) usually contains cycles, which makes inference NPhard (Shimony, 1994) . Our design choices aim to balance between the expressivity of the model and the complexity of inference. In Section 3.3 we describe the LP relaxation algorithm we use for inference.",
"cite_spans": [
{
"start": 396,
"end": 411,
"text": "(Shimony, 1994)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [
{
"start": 50,
"end": 58,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model Structure",
"sec_num": "3.1"
},
{
"text": "Figure 1: A graphical illustration of our model (after pruning, see Sec. 3.3) for twenty verb instances (|X| = 20), each represented with a black vertex, and two equivalence classes (ECs), each represented with a gray vertex (|C| = 2). Solid lines represent edges (and \u03b8 i,j pairwise potentials) between verb instance vertexes. Dashed lines represent edges between verb instance vertexes and EC vertexes (\u03c6 i,j pairwise potentials) or between EC vertexes (\u03be i,j pairwise potentials) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C1 C2",
"sec_num": null
},
{
"text": "Pairwise Syntactic Similarity Potentials. The pairwise syntactic similarity potentials are defined for each pair of verb instance vertexes, x i , x j \u2208 X. They are designed to encourage the model to assign verb instances with similar fine-grained feature representations to the same frame (l \u2208 L) and verb instances with dissimilar representations to different frames. For this aim, for every verb pair i, j with feature representation vectors v i , v j and verb instance vertexes x i , x j \u2208 X, we define the following potential function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Potentials and Encoded Knowledge",
"sec_num": "3.2"
},
{
"text": "\u03b8i,j(xi = l1, xj = l2) = \u03bb(vi, vj) if l1 = l2 0 otherwise",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Potentials and Encoded Knowledge",
"sec_num": "3.2"
},
{
"text": "Where l 1 , l 2 \u2208 L are label pairs and \u03bb is a verb instance similarity function. Below we describe the feature representation and the \u03bb function. The verb instance feature representation is defined through the following process. For each word instance in the input sentences we first build a basic feature representation (see below). Then, for each verb instance we construct a final feature representation defined to be the concatenation of that verb's basic feature representation with the basic representations of the words in a size 2 window around the represented verb. The final feature representation for the i-th verb instance in our dataset is therefore defined to be",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Potentials and Encoded Knowledge",
"sec_num": "3.2"
},
{
"text": "v i = [w \u22122 , w \u22121 , vb i , w +1 , w +2 ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Potentials and Encoded Knowledge",
"sec_num": "3.2"
},
{
"text": ", where w \u2212k and w +k are the basic feature representations of the words in distance \u2212k or +k from the i-th verb instance in its sentence, and vb i is the basic feature representation of that verb instance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Potentials and Encoded Knowledge",
"sec_num": "3.2"
},
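As an illustration of this windowed representation, the sketch below builds a sparse feature dict for one verb instance. The `basic_features` helper is a hypothetical simplification; the paper's actual basic representation is derived from the MST parser feature set described next.

```python
# A sketch, under our reading of the text, of v_i = [w_-2, w_-1, vb_i, w_+1, w_+2].
def basic_features(tokens, tags, i, prefix):
    """Hypothetical basic representation: word and POS tag of position i."""
    if i < 0 or i >= len(tokens):                 # out-of-sentence positions
        return {prefix + ":PAD": 1}
    return {prefix + ":w=" + tokens[i]: 1, prefix + ":t=" + tags[i]: 1}

def verb_instance_vector(tokens, tags, v):
    """Concatenate the verb's basic features with those of a +-2 word window."""
    feats = {}
    for off in (-2, -1, 1, 2):
        feats.update(basic_features(tokens, tags, v + off, "w%+d" % off))
    feats.update(basic_features(tokens, tags, v, "vb"))
    return feats

# Example: the verb "distinguished" at index 1.
print(verb_instance_vector(["They", "distinguished", "the", "cases"],
                           ["PRP", "VBD", "DT", "NNS"], 1))
```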
{
"text": "Our basic feature representation is inspired from the feature representation of the MST parser (McDonald et al., 2005) except that in the parser the features represent a directed edge in the complete directed graph defined over the words in a sentence that is to be parsed, while our features are generated for word n-grams. Particularly, our feature set is a concatenation of two sets derived from the MST set described in Table 1 of (McDonald et al., 2005) in the following way: (1) In both sets the parent word in the parser's set is replaced with the represented word;",
"cite_spans": [
{
"start": 95,
"end": 118,
"text": "(McDonald et al., 2005)",
"ref_id": "BIBREF30"
},
{
"start": 435,
"end": 458,
"text": "(McDonald et al., 2005)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [
{
"start": 424,
"end": 431,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Potentials and Encoded Knowledge",
"sec_num": "3.2"
},
{
"text": "(2) In one set every child word in the parser's set is replaced by the word to the left of the represented word and in the other set it is replaced by the word to its right. This choice of features allows us to take advantage of a provably useful syntactic feature representation without the application of any parse tree annotation or parser.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Potentials and Encoded Knowledge",
"sec_num": "3.2"
},
{
"text": "We compute the similarity between the syntactic environments of two verb instances, i, j, using the following equation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Potentials and Encoded Knowledge",
"sec_num": "3.2"
},
{
"text": "\u03bb(v i , v j ) = W \u2022 cos(v i , v j ) \u2212 S",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Potentials and Encoded Knowledge",
"sec_num": "3.2"
},
{
"text": "Where W is a hyperparameter designed to bias verb instances of the same verb type towards the same frame. Practically, W was tuned to be 3 for instances of the same type, and 1 otherwise 4 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Potentials and Encoded Knowledge",
"sec_num": "3.2"
},
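A minimal sketch of this similarity over sparse feature dicts like the ones above; the default S is the labour legislation value reported in footnote 5, and the function names are ours, not the authors'.

```python
import math

def cosine(u, v):
    """Cosine between two sparse feature dicts."""
    dot = sum(val * v.get(key, 0) for key, val in u.items())
    nu = math.sqrt(sum(val * val for val in u.values()))
    nv = math.sqrt(sum(val * val for val in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def lam(v_i, v_j, same_verb_type, S=0.43):
    """lambda(v_i, v_j) = W * cos(v_i, v_j) - S, with W biasing same-type pairs."""
    W = 3.0 if same_verb_type else 1.0
    return W * cosine(v_i, v_j) - S
```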
{
"text": "While the cosine function is the standard measure of similarity between two vectors, its values are in the [0, 1] range. In the MRF modeling framework, however, we must encode a negative pairwise potential value between two vertexes in order to encourage the model to assign different labels (frames) to them. We therefore added the positive hyperparameter S which was tuned, with-out access to gold standard manual annotations, so that there is an even number of negative and positive pairwise syntactic similarity potentials after the model is pruned (see Section 3.3) 5 .",
"cite_spans": [
{
"start": 571,
"end": 572,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Potentials and Encoded Knowledge",
"sec_num": "3.2"
},
{
"text": "Type Level Singleton Potentials. The goal of these potentials is to bias verb instances of the same type to be assigned to the same syntactic frame while still keeping the instance based nature of our algorithm. For this aim, we applied Algorithm 1 for pre-clustering of the verb instances and encoded the induced clusters into the local potentials of the corresponding x \u2208 X vertexes. For every x \u2208 X the singleton potential is therefore defined to be:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Potentials and Encoded Knowledge",
"sec_num": "3.2"
},
{
"text": "\u03b8i(xi = l) = F \u2022 max \u03bb if l is induced by Algorithm 1 0 otherwise",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Potentials and Encoded Knowledge",
"sec_num": "3.2"
},
{
"text": "where max \u03bb is the maximum \u03bb score across all verb instance pairs in the model and F = 0.2 is a hyperparamter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Potentials and Encoded Knowledge",
"sec_num": "3.2"
},
{
"text": "Algorithm 1 has two hyperparameters: T and M , the first is a similarity cut-off value used to determine the initial set of clusters, while the second is used to determine whether two clusters are similar enough to be merged. We tuned these hyperparameters, without manually annotated data, so that the number of clusters induced by this algorithm will be equal to the number of gold standard SCFs. T was tuned so that the first part of the algorithm generates an excessive number of clusters, and M was then tuned so that these clusters are merged to the desired number of clusters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Potentials and Encoded Knowledge",
"sec_num": "3.2"
},
{
"text": "The \u03bb function, used to measure the similarity between two verbs, is designed to bias the instances of the same verb type to have a higher similarity score. Algorithm 1 therefore tends to assign such instances to the same cluster. In our experiments that was always the case for this algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Potentials and Encoded Knowledge",
"sec_num": "3.2"
},
{
"text": "High Cardinality Verb Sets Potentials. This set of potentials aims to bias larger sets of verb instances to share the same SCF. It is inspired by (Rush et al., 2012) who demonstrated, that syntactic structures that appear at the same syntactic context, in terms of the surrounding POS tags, tend to manifest similar syntactic behavior. While they demonstrated the usefulness of their method for dependency parsing and POS tagging, we implement it for higher level SCFs.",
"cite_spans": [
{
"start": 146,
"end": 165,
"text": "(Rush et al., 2012)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Potentials and Encoded Knowledge",
"sec_num": "3.2"
},
{
"text": "We identified syntactic contexts that imply similar SCFs for verb instances appearing inside them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Potentials and Encoded Knowledge",
"sec_num": "3.2"
},
{
"text": "Algorithm 1 Verb instance pre-clustering algorithm.\u03bb is the average \u03bb score between the members of its cluster arguments. T and M are hyperparametes tuned without access to gold standard data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Potentials and Encoded Knowledge",
"sec_num": "3.2"
},
{
"text": "Require: K = \u2205 for all x \u2208 X do for all k \u2208 K do for all u \u2208 k do if \u03bb(vx, vu) > T then k = k \u222a {x} Go to next x end if end for end for k1 = {x} K = K \u222a k1 end for for all k1, k2 \u2208 K: k1 = k2 do if\u03bb(k1, k2) > M then Merge (k1, k2) end if end for",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Potentials and Encoded Knowledge",
"sec_num": "3.2"
},
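The sketch below is our reading of Algorithm 1 in plain Python. `sim` is any pairwise λ function over instance vectors; we iterate the merge pass to a fixed point, an assumption on our part, since the pseudocode leaves the pass order unspecified.

```python
def pre_cluster(vectors, sim, T, M):
    """Greedy pre-clustering: attach x to the first cluster holding a member u
    with sim(x, u) > T, then merge cluster pairs whose average sim exceeds M."""
    clusters = []
    for x in range(len(vectors)):
        target = next((k for k in clusters
                       if any(sim(vectors[x], vectors[u]) > T for u in k)), None)
        if target is not None:
            target.append(x)
        else:
            clusters.append([x])

    def avg_sim(k1, k2):  # lambda-bar between the members of two clusters
        pairs = [(a, b) for a in k1 for b in k2]
        return sum(sim(vectors[a], vectors[b]) for a, b in pairs) / len(pairs)

    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if avg_sim(clusters[i], clusters[j]) > M:
                    clusters[i] += clusters.pop(j)
                    merged = True
                    break
            if merged:
                break
    return clusters
```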
{
"text": "Contexts are characterized by the coarse POS tag to the left and to the right of the verb instance. While the number of context sets is bounded only by the number of frames our model is designed to induce, in practice we found that defining two equivalence sets led to the best performance gain, and the sets we used are presented in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 334,
"end": 341,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Potentials and Encoded Knowledge",
"sec_num": "3.2"
},
{
"text": "In order to encode this information into our MRF, each set of syntactic contexts is associated with an equivalence class (EC) vertex c \u2208 C and the verb instance vertexes of all verbs that appear in a context from that set are connected with an edge to c. The pairwise potential between a vertex x \u2208 X and its equivalence class is defined to be:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Potentials and Encoded Knowledge",
"sec_num": "3.2"
},
{
"text": "\u03c6 i,j (x i = l 1 , c j = l 2 ) = U if l 1 = l 2 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Potentials and Encoded Knowledge",
"sec_num": "3.2"
},
{
"text": "otherwise U = 10 is a hyperparameter that strongly biases x vertexes to get the same SCF as their EC vertex.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Potentials and Encoded Knowledge",
"sec_num": "3.2"
},
{
"text": "In this section we describe how we induce verb instance clusters from our model. This process is based on the following three steps: (1) Graph pruning;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Cluster Induction",
"sec_num": "3.3"
},
{
"text": "(2) Induction of an Ensemble of approximate MAP inference solutions in the resulted graphical model; and, (3) Induction of a final clustering solution based on the ensemble created at step 2. Below we explain the necessity of each of these steps and provide the algorithmic details. Table 1 : POS contexts indicative for the syntactic frame of the verb instance they surround. D: determiner, N: noun, V: verb, T: the preposition 'to' (which has its own POS tag in the WSJ POS tag set which we use), R: adverb. EC-1 and EC-2 stand for the first and second equivalence class respectively. In addition, the following contexts where associated with both ECs:",
"cite_spans": [],
"ref_spans": [
{
"start": 283,
"end": 290,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Verb Cluster Induction",
"sec_num": "3.3"
},
{
"text": "EC-1 EC-2 Left Right Left Right , D V T N D R T V . N D R D R N",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Cluster Induction",
"sec_num": "3.3"
},
{
"text": "(T, D), (T, N ), (N, N )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Cluster Induction",
"sec_num": "3.3"
},
{
"text": "and (V, I) where I stands for a preposition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Cluster Induction",
"sec_num": "3.3"
},
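The attachment of instances to EC vertexes by context can be sketched as below. The EC-specific context sets are illustrative placeholders only (we do not reproduce Table 1 with certainty); the shared contexts (T, D), (T, N), (N, N) and (V, I) are the ones the text lists for both ECs.

```python
# Sketch: map a verb instance's (left, right) coarse POS context to its EC(s).
SHARED = {("T", "D"), ("T", "N"), ("N", "N"), ("V", "I")}
EC_CONTEXTS = {
    "EC-1": {(",", "D"), ("N", "D")} | SHARED,   # EC-specific pairs are illustrative
    "EC-2": {("V", "T"), ("R", "T")} | SHARED,   # EC-specific pairs are illustrative
}

def ec_membership(left_tag, right_tag):
    """Return the ECs whose context set contains this (left, right) pair."""
    return [ec for ec, ctxs in EC_CONTEXTS.items()
            if (left_tag, right_tag) in ctxs]

print(ec_membership("T", "D"))   # -> ['EC-1', 'EC-2']
```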
{
"text": "Graph Pruning. The edge set of our model consists of an edge for every pair of verb instance vertexes and of the edges that connect verb instance vertexes and equivalence class vertexes. This results in a large tree-width graph which substantially complicates MRF inference. To alleviate this we prune all edges with a positive score lower than p + and all edges with a negative score higher than p \u2212 , where p + and p \u2212 are manually tuned hyperparametes 6 . MAP Inference. For most reasonable values of p + and p \u2212 our graph still contains cycles even after it is pruned, which makes inference NP-hard (Shimony, 1994 ). Yet, thanks to our choice of an edge-factorized model, there are various approximate inference algorithms suitable for our case.",
"cite_spans": [
{
"start": 455,
"end": 456,
"text": "6",
"ref_id": null
},
{
"start": 603,
"end": 617,
"text": "(Shimony, 1994",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Cluster Induction",
"sec_num": "3.3"
},
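The pruning rule itself is a one-liner; in this sketch `edges` is assumed to map vertex pairs to pairwise potential scores, and the defaults are the labour legislation values from footnote 6.

```python
def prune_edges(edges, p_plus=0.28, p_minus=-0.17):
    """Keep only edges whose potential is strongly positive (> p_plus) or
    strongly negative (< p_minus); scores in between are pruned away."""
    return {e: s for e, s in edges.items() if s > p_plus or s < p_minus}
```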
{
"text": "We applied the message passing algorithm for linear-programming (LP) relaxation of the MAP assignment (MPLP, (Sontag et al., 2008) ). LP relaxation algorithms for the MAP problem define an upper bound on the original objective which takes the form of a linear program. Consequently, a minimum of this upper bound can be found using standard LP solvers or, more efficiently, using specialized message passing algorithms (Yanover et al., 2006 ). The MPLP algorithm described in (Sontag et al., 2008) is appealing in that it iteratively computes tighter upper bounds on the MAP objective (for details see their paper).",
"cite_spans": [
{
"start": 109,
"end": 130,
"text": "(Sontag et al., 2008)",
"ref_id": "BIBREF49"
},
{
"start": 419,
"end": 440,
"text": "(Yanover et al., 2006",
"ref_id": "BIBREF57"
},
{
"start": 476,
"end": 497,
"text": "(Sontag et al., 2008)",
"ref_id": "BIBREF49"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Cluster Induction",
"sec_num": "3.3"
},
{
"text": "Cluster Ensemble Generation and a Final Solution. As our MAP objective is non-convex, the convergent point of an optimization algorithm applied to it is highly sensitive to its initialization. To avoid convergence to arbitrary local maxima which may be of poor quality, we turn to a perturbation protocol where we repeatedly introduce random noise to the MRF's potential functions and then compute the approximate MAP solution of the resulted model using the MPLP algorithm. Noising was done by adding an term to the lambda values described in section 3.2 7 . This protocol results in a set of cluster (label) assignments for the involved verb instances, which we treat as an ensemble of experts from which a final, high quality, solution is to be induced.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Cluster Induction",
"sec_num": "3.3"
},
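The protocol can be sketched as follows. `map_infer` is an abstract stand-in for the MPLP solver, the number of runs is our assumption (the paper does not state it here), and we scale the noise to 1% of the score where footnote 7 scales to 1% of the cosine.

```python
import random

def build_ensemble(lambdas, map_infer, runs=40, seed=0):
    """Perturb-and-solve: add small random noise to each lambda score, run
    approximate MAP inference, and collect one clustering per run."""
    rng = random.Random(seed)
    ensemble = []
    for _ in range(runs):
        # footnote 7: noise sampled in [0, 1], scaled to ~1% of the base value
        noisy = {e: s + rng.random() * 0.01 * abs(s) for e, s in lambdas.items()}
        ensemble.append(map_infer(noisy))  # one label assignment per run
    return ensemble
```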
{
"text": "The basic idea in ensemble learning is that if several experts independently cluster together two verb instances, our belief that these verbs belong in the same cluster should increase. implemented this idea through the kway normalized cut clustering algorithm (Yu and Shi, 2003) . Its input is an undirected graph\u011c = (V ,\u00ca,\u0174 ) whereV is the set of vertexes,\u00ca is the set of edges and\u0174 is a non-negative and symmetric edge weight matrix. To apply this model to our task, we construct the input graph\u011c from the labelings (frame assignments) contained in the ensemble. The graph vertexesV correspond to the verb instances and the (i, j)-th entry of the matrix W is the number of ensemble members that assign the same label to the i-th and j-th verb instances.",
"cite_spans": [
{
"start": 261,
"end": 279,
"text": "(Yu and Shi, 2003)",
"ref_id": "BIBREF58"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Cluster Induction",
"sec_num": "3.3"
},
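Building the agreement matrix from the ensemble is straightforward; a minimal sketch, with names of our choosing:

```python
import numpy as np

def agreement_matrix(ensemble, n):
    """W-hat(i, j): number of ensemble members assigning instances i and j to
    the same cluster. `ensemble` is a list of length-n label lists."""
    W = np.zeros((n, n), dtype=int)
    for labels in ensemble:
        for i in range(n):
            for j in range(n):
                if labels[i] == labels[j]:
                    W[i, j] += 1
    return W
```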
{
"text": "For A, B \u2286V define:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Cluster Induction",
"sec_num": "3.3"
},
{
"text": "links(A, B) = i\u2208A,j\u2208B\u0174 (i, j)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Cluster Induction",
"sec_num": "3.3"
},
{
"text": "Using this definition, the normalized link ratio of A and B is defined to be:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Cluster Induction",
"sec_num": "3.3"
},
{
"text": "N ormLinkRatio(A, B) = links(A, B) links(A,V )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Cluster Induction",
"sec_num": "3.3"
},
{
"text": "The k-way normalized cut problem is to minimize the links that leave a cluster relative to the total weight of the cluster. Denote the set of clusterings ofV that consist of k clusters by\u0108 = {\u0109 1 , . . .\u0109 t } and the j-th cluster of the i-th cluster-ing by\u0109 ij . Then",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Cluster Induction",
"sec_num": "3.3"
},
{
"text": "c * = argmin c i \u2208\u0108 k j=1 N ormLinkRatio(\u0109 ij ,V \u2212\u0109 ij )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Cluster Induction",
"sec_num": "3.3"
},
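For illustration, the objective can be evaluated for one candidate clustering as below; the actual minimization uses the solver of Yu and Shi (2003), which we do not reimplement here.

```python
import numpy as np

def links(W, A, B):
    """Sum of W-hat entries between vertex sets A and B."""
    return W[np.ix_(A, B)].sum()

def kway_ncut_cost(W, clusters):
    """k-way normalized cut objective: sum over clusters of the link ratio
    between the cluster and the rest of the graph."""
    everyone = list(range(W.shape[0]))
    cost = 0.0
    for c in clusters:
        rest = [v for v in everyone if v not in c]
        cost += links(W, c, rest) / links(W, c, everyone)
    return cost
```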
{
"text": "The algorithm of (Yu and Shi, 2003) solves this problem very efficiently as it avoids the heavy eigenvalues and eigenvectors computations required by traditional approaches.",
"cite_spans": [
{
"start": 17,
"end": 35,
"text": "(Yu and Shi, 2003)",
"ref_id": "BIBREF58"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Cluster Induction",
"sec_num": "3.3"
},
{
"text": "Our model is unique compared to existing systems in two respects. First, it does not utilize supervision in the form of either a supervised syntactic parser and/or manually crafted SCF rules. Consequently, it induces unnamed frames (clusters) that are not directly comparable to the named frames induced by previous systems. Second, it induces syntactic frames at the verb instance, rather than type, level. Evaluation, and especially comparison to previous work, is therefore challenging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
{
"text": "We therefore evaluate our system in two ways. First, we compare its output, as well as the output of a number of clustering baselines, to the gold standard annotation of corpora from two different domains (the only publicly available ones with instance level SCF annotation, to the best of our knowledge). Second, in order to compare the output of our system to a rule-based SCF system that utilizes a supervised syntactic parser, we turn to a task-based evaluation. We aim to predict the degree of similarity between verb pairs and, following (Pado and Lapata, 2007) , we do so using a syntactic-based vector space model (VSM). We construct three VSMs -(a) one that derives features from our clusters; (b) one whose features come from the output of a state-of-the-art verb type level, rule based, SCF system (Reichart and Korhonen, 2013) that uses a modern parser ; and (c) a standard lexical VSM. Below we show that our system compares favorably in both evaluations.",
"cite_spans": [
{
"start": 544,
"end": 567,
"text": "(Pado and Lapata, 2007)",
"ref_id": "BIBREF36"
},
{
"start": 809,
"end": 838,
"text": "(Reichart and Korhonen, 2013)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
{
"text": "Data. We experimented with two datasets taken from different domains: labor legislation and environment (Quochi et al., 2012) . These datasets were created through web crawling followed by domain filtering. Each sentence in both datasets may contain multiple verbs but only one target verb has been manually annotated with a SCF. The labour legislation domain dataset contains 4415 annotated verb instances (and hence also sentences) of 117 types, and the environmental domain dataset contains 4503 annotated verb instances of 116 types. In both datasets no verb type accounts for more than 4% of the instances and only up to 35 verb types account for 1% of the instances or more. The lexical difference between the corpora is substantial: they share only 42 annotated verb types in total, of which only 2 verb types (responsible for 4.1% and 5.2% of the instances in the environment and labor legislation domains respectively) belong to the 20 most frequent types (responsible for 37.9% and 46.85% of the verb instances in the respective domains) of each corpus.",
"cite_spans": [
{
"start": 104,
"end": 125,
"text": "(Quochi et al., 2012)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
{
"text": "The 29 members of the SCF inventory are detailed in (Quochi et al., 2012) . Table 2 , presenting the distribution of the 5 highest frequency frames in each corpus, demonstrates that, in addition to the significant lexical difference, the corpora differ to some extent in their syntactic properties. This is reflected by the substantially different frequencies of the \"dobj:iobj-prep:su\" and \"dobj:su\" frames.",
"cite_spans": [
{
"start": 52,
"end": 73,
"text": "(Quochi et al., 2012)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [
{
"start": 76,
"end": 83,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
{
"text": "As a pre-processing step we first POS tagged the datasets with the Stanford tagger (Toutanova et al., 2003) trained on the standard POS training sections of the WSJ PennTreebank corpus.",
"cite_spans": [
{
"start": 83,
"end": 107,
"text": "(Toutanova et al., 2003)",
"ref_id": "BIBREF52"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
{
"text": "Experimental Protocol The computational complexity of our algorithm does not allow us to run it on thousands of verb instances in a feasible time. We therefore repeatedly sampled 5% of the sentences from each dataset, ran our algorithm as well as the baselines (see below) and report the average performance of each method. The number of repetitions was 40 and samples were drawn from a uniform distribution while still promising that the distribution of gold standard SCFs in each sample is identical to their distribution in the entire dataset. Before running this protocol, 5% of each corpus was kept as held-out data on which hyperparameter tuning was performed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Against SCF Gold Standard",
"sec_num": "4.1"
},
{
"text": "Evaluation Measures and Baselines. We compare our system's output to instance-level gold standard annotation. We use standard measures for clustering evaluation, one measure from each of the two leading measure types: the V measure (Rosenberg and Hirschberg, 2007) , which is an information theoretic measure, and greedy many-toone accuracy, which is a mapping-based measure. For the latter, each induced cluster is first mapped to the gold SCF frame that annotates the highest number of verb instances this induced cluster also annotates and then a standard instance-level accuracy score is computed (see, e.g., (Reichart and Rappoport, 2009) ). Both measures scale from 100 (perfect match with gold standard) to 0 (no match).",
"cite_spans": [
{
"start": 232,
"end": 264,
"text": "(Rosenberg and Hirschberg, 2007)",
"ref_id": "BIBREF45"
},
{
"start": 613,
"end": 643,
"text": "(Reichart and Rappoport, 2009)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Against SCF Gold Standard",
"sec_num": "4.1"
},
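The mapping-based measure can be sketched in a few lines; a minimal version, assuming induced and gold labels are given as parallel lists:

```python
from collections import Counter

def many_to_one_accuracy(induced, gold):
    """Greedy many-to-one mapping: map each induced cluster to the gold SCF it
    overlaps most, then score instance-level accuracy on a 0-100 scale."""
    correct = 0
    for cluster in set(induced):
        overlap = Counter(g for i, g in zip(induced, gold) if i == cluster)
        correct += overlap.most_common(1)[0][1]
    return 100.0 * correct / len(gold)

print(many_to_one_accuracy([0, 0, 1, 1, 2], ["a", "a", "a", "b", "b"]))  # 80.0
```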
{
"text": "As mentioned above, comparing the performance of our system with respect to a gold standard to the performance of previous type-level systems that used hand-crafted rules and/or supervised syntactic parsers would be challenging. We therefore compare our model to the following baselines: (a) The most frequent class (MFC) baseline which assigns all verb instances with the SCF that is the most frequent one in the gold standard annotation of the data; (b) The Random baseline which simply assigns every verb instance with a randomly selected SCF; (c) Algorithm 1 of section 3.2 which generates unsupervised verb instance clustering such that verb instances of the same type are assigned to the same cluster; and (d) Finally, we also compare our model against versions where everything is kept fixed, except a subset of potentials which is omitted. This enables us to study the intricacies of our model and the relative importance of its components. For all models, the number of induced clusters is equal to the number of SCFs in the gold standard.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Against SCF Gold Standard",
"sec_num": "4.1"
},
{
"text": "Results Table 3 presents the results, demonstrating that our full model substantially outperforms all baselines. For the first two simple heuristic baselines (MFC and Random) the margin is higher than 20% for both the greedy M-1 mapping measure and the V measure. Note tat the V score of the MFC baseline is 0 by definition, as it assigns all items to the same cluster. The poor performance of these simple baselines is an indication of the difficulty of our task.",
"cite_spans": [],
"ref_spans": [
{
"start": 8,
"end": 15,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation Against SCF Gold Standard",
"sec_num": "4.1"
},
{
"text": "Recall that the type level clustering induced by Algorithm 1 is the main source of type level information our model utilizes (through its singleton potentials). The comparison to the output of this algorithm (the Type Pre-clustering baseline) therefore shows the quality of the instance level refinement our model provides. As seen in table 3, our model outperforms this baseline by 6.9% for the M-1 measure and 5.2% for the V measure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Against SCF Gold Standard",
"sec_num": "4.1"
},
{
"text": "In order to compare our model to its components we exclude either the EC potentials (\u03c6 and \u03be) only (Model -EC), or the EC and the singleton potentials (\u03b8 i , Model -EC -Type pre-clustering). The results show that our model gains much more 45.7 44.7 Table 3 : Results for our full model, the baselines (Type Pre-clustering: the pre-clustering algorithm (Algorithm 1 of section 3.2), MFC: the most frequent class (SCF) in the gold standard annotation and Random: random SCF assignment) and the model components. The full model outperforms all other models across measures and datasets.",
"cite_spans": [],
"ref_spans": [
{
"start": 249,
"end": 256,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation Against SCF Gold Standard",
"sec_num": "4.1"
},
{
"text": "from the type level information encoded through the singleton potentials than from the EC potentials. Yet, EC potentials do lead to an improvement of up to 1.5% in M-1 and up to 1.1% in V and are therefore responsible for up to 26.1% and 21.2% of the improvement over the type pre-clustering baseline in terms of M-1 and V, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Against SCF Gold Standard",
"sec_num": "4.1"
},
{
"text": "We next evaluate our model in the context of vector space modeling for verb similarity prediction (Turney and Pantel, 2010) . Since most previous word similarity works used noun datasets, we constructed a new verb pair dataset, following the protocol used in the collection of the wordSimilarity-353 dataset (Finkelstein et al., 2002) . Our dataset consists of 143 verb pairs, constructed from 122 unique verb lemma types. The participating verbs appear \u2265 10 times in the concatenation of the labour legislation and the environment datasets. Only pairs of verbs that were considered at least remotely similar by human judges (independent of those that provided the similarity scores) were included. A similarity score between 1 and 10 was assigned to each pair by 10 native English speaking annotators and were then averaged in order to get a unique pair score.",
"cite_spans": [
{
"start": 98,
"end": 123,
"text": "(Turney and Pantel, 2010)",
"ref_id": "BIBREF53"
},
{
"start": 308,
"end": 334,
"text": "(Finkelstein et al., 2002)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task Based Evaluation",
"sec_num": "4.2"
},
{
"text": "Our first baseline is a standard VSM based on lexical collocations. In this model features correspond to the number of collocations inside a size 2 window of the represented verb with each of the 5000 most frequent nouns in the Google n-gram corpus (Goldberg and Orwant, 2013) . Since our corpora are limited in size, we use the collocation counts from the Google corpus. We used our model to generate a vector representation of each verb in the following way. We run the model 5000 times, each time over a set of verbs consisting of one instance of each of the 122 verb types participating in the verb similarity set. The output of each such run is transformed to a binary vector for each participating verb, where all coordinates are assigned the value of 0, except from the one that corresponds to the cluster to which the verb was assigned which has the value of 1. The final vector representation is a concatenation of the 5000 binary vectors. Note that for this task we did not use the graph cut algorithm to generate a final clustering from the multiple MRF runs. Instead we concatenated the output of all these runs into one feature representation that facilitates similarity prediction. For our model we estimated the verb pair similarity using the Tanimato similarity score for binary vectors:",
"cite_spans": [
{
"start": 249,
"end": 276,
"text": "(Goldberg and Orwant, 2013)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task Based Evaluation",
"sec_num": "4.2"
},
{
"text": "T (X, Y ) = i X i \u2227 Y i i x i \u2228 Y i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Based Evaluation",
"sec_num": "4.2"
},
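On 0/1 indicator vectors this score reduces to intersection over union; a minimal sketch:

```python
def tanimoto(x, y):
    """Tanimoto score over equal-length 0/1 indicator vectors."""
    num = sum(a & b for a, b in zip(x, y))   # coordinates set in both vectors
    den = sum(a | b for a, b in zip(x, y))   # coordinates set in either vector
    return num / den if den else 0.0

print(tanimoto([1, 0, 1, 1], [1, 1, 0, 1]))  # 2/4 = 0.5
```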
{
"text": "For the baseline model, where the features are collocation counts, we used the standard cosine similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Based Evaluation",
"sec_num": "4.2"
},
{
"text": "Our second baseline is identical to our model, except that: (a) the data is parsed with the Stanford parser (version 3.3.0, ) which was trained with sections 2-21 of the WSJ corpus; (b) the phrase structure output of the parser is transformed to the CoNLL dependency format using the official CoNLL 2007 conversion script (Johansson and Nugues, 2007) ; and then (c) the SCF of each verb instance is inferred using the rule-based system used by (Reichart and Korhonen, 2013) . The vector space representation for each verb is then created using the process we described for our model and the same holds for vector comparison. This baseline allows direct comparison of frames induced by our SCF model with those derived from a supervised parser's output.",
"cite_spans": [
{
"start": 322,
"end": 350,
"text": "(Johansson and Nugues, 2007)",
"ref_id": "BIBREF17"
},
{
"start": 444,
"end": 473,
"text": "(Reichart and Korhonen, 2013)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task Based Evaluation",
"sec_num": "4.2"
},
{
"text": "We computed the Pearson correlation between the scores of each of the models and the human scores. The results demonstrate the superiority of our model in predicting verb similarity: the correlation of our model with the human scores is 0.642 while the correlation of the lexical collocation baseline is 0.522 and that of the supervised parser baseline is only 0.266. The results indicate that in addition to their good alignment with SCFs, our clusters are also highly useful for verb meaning representation. This is in line with the verb clustering theory of the Levin tradition (Levin, 1993) which ties verb meaning with their syntactic properties. We consider this an intriguing direction of future work.",
"cite_spans": [
{
"start": 581,
"end": 594,
"text": "(Levin, 1993)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task Based Evaluation",
"sec_num": "4.2"
},
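{
"text": "The evaluation statistic is straightforward to reproduce; a minimal sketch, assuming two aligned score lists over the verb pairs (the numbers below are invented for illustration):\n\nfrom scipy.stats import pearsonr\n\n# hypothetical aligned scores: one model score and one averaged human score per verb pair\nmodel_scores = [0.91, 0.15, 0.62, 0.40, 0.77]\nhuman_scores = [8.2, 2.1, 6.5, 4.3, 7.0]\n\nr, p_value = pearsonr(model_scores, human_scores)  # r is the reported correlation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Based Evaluation",
"sec_num": "4.2"
},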
{
"text": "We presented an MRF-based unsupervised model for SCF acquisition which produces verb instance level SCFs as output. As opposed to previous systems for the task, our model uses only a POS tagger, avoiding the need for a statistical parser or manually crafted rules. The model is particularly valuable for NLP tasks benefiting from SCFs that are applied across text domains, and for the many tasks that involve sentence-level processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "Our results show that the accuracy of the model is promising, both when compared against gold standard annotations and when evaluated in the context of a task. In the future we intend to improve our model by encoding additional information in it. We will also adapt it to a multilingual setup, aiming to model a wide range of languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "The verb similarity dataset used for the evaluation of our model is publicly available at ie.technion.ac.il/\u223croiri/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "(Lippincott et al., 2012) does not use a parser, but the syntactic frames induced by the system do not capture sets of arguments for verbs, so are not SCFs in a traditional sense.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "A so-called unlexicalized parser is a parser trained without explicit SCF annotations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "All hyperparameters that require gold-standard annotation for tuning, were tuned using held-out data (Section 4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The values in practice are S = 0.43 for labour legislation and S = 0.38 for environment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The values used in practice are p+ = 0.28, p\u2212 = \u22120.17 for the labour legislation dataset, and p+ = 0.25, p\u2212 = \u22120.20 for the environment set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "was accepted by first sampling a number in the [0, 1] range using the Java psuodorandom generator and then scaling it to 1% of cos(vi, vj). This value was tuned, without access to gold standard manual annotations, so that there is an even number of negative and positive pairwise syntactic similarity potentials after the model is pruned (Section 3.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The first author is supported by the Commonwealth Scholarship Commission (CSC) and the Cambridge Trust.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "IRASubcat, a highly customizable, language independent tool for the acquisition of verbal subcategorization information from corpus",
"authors": [],
"year": 2010,
"venue": "Proceedings of the NAACL 2010 Workshop on Computational Approaches to Languages of the Americas",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivana Romina Altamirano and Laura Alonso i Ale- many. 2010. IRASubcat, a highly customizable, language independent tool for the acquisition of ver- bal subcategorization information from corpus. In Proceedings of the NAACL 2010 Workshop on Com- putational Approaches to Languages of the Ameri- cas.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Lexicalization in crosslinguistic probabilistic parsing: The case of french",
"authors": [
{
"first": "Abhishek",
"middle": [],
"last": "Arun",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Keller",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ACL-05",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhishek Arun and Frank Keller. 2005. Lexicalization in crosslinguistic probabilistic parsing: The case of french. In Proceedings of ACL-05.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Painless unsupervised learning with features",
"authors": [
{
"first": "Taylor",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Bouchard-Cote",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Denero",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of NAACL-HLT-10",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taylor Berg-Kirkpatrick, Alexander Bouchard-Cote, John DeNero, and Dan Klein. 2010. Painless un- supervised learning with features. In Proceedings of NAACL-HLT-10.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Inferring semantic roles using subcategorization frames and maximum entropy model",
"authors": [
{
"first": "Akshar",
"middle": [],
"last": "Bharati",
"suffix": ""
},
{
"first": "Sriram",
"middle": [],
"last": "Venkatapathy",
"suffix": ""
},
{
"first": "Prashanth",
"middle": [],
"last": "Reddy",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of CoNLL-05",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akshar Bharati, Sriram Venkatapathy, and Prashanth Reddy. 2005. Inferring semantic roles using sub- categorization frames and maximum entropy model. In Proceedings of CoNLL-05.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Automatic extraction of subcategorization from corpora",
"authors": [
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Carroll",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of ANLP-97",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ted Briscoe and John Carroll. 1997. Automatic ex- traction of subcategorization from corpora. In Pro- ceedings of ANLP-97.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The second release of the rasp system",
"authors": [
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Watson",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of ACL-COLING-06",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ted Briscoe, John Carroll, and Rebecca Watson. 2006. The second release of the rasp system. In Proceed- ings of ACL-COLING-06.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The automatic acquisition of verb subcategorisations and their impact on the performance of an HPSG parser",
"authors": [
{
"first": "John",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Fang",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of IJCNLP-04",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Carroll and Alex Fang. 2004. The automatic ac- quisition of verb subcategorisations and their impact on the performance of an HPSG parser. In Proceed- ings of IJCNLP-04.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Automatic extraction of subcategorization frames for french",
"authors": [
{
"first": "Paula",
"middle": [],
"last": "Chesley",
"suffix": ""
},
{
"first": "Susanne",
"middle": [],
"last": "Salmon-Alt",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of LREC-06",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paula Chesley and Susanne Salmon-Alt. 2006. Au- tomatic extraction of subcategorization frames for french. In Proceedings of LREC-06.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Using unknown word techniques to learn known words",
"authors": [
{
"first": "Kostadin",
"middle": [],
"last": "Cholakov",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gertjan Van Noord",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of EMNLP-10",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kostadin Cholakov and Gertjan van Noord. 2010. Us- ing unknown word techniques to learn known words. In Proceedings of EMNLP-10.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Shared logistic normal distributions for soft parameter tying in unsupervised grammar induction",
"authors": [
{
"first": "Shay",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of NAACL-HLT-09",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shay Cohen and Noah Smith. 2009. Shared logistic normal distributions for soft parameter tying in un- supervised grammar induction. In Proceedings of NAACL-HLT-09.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Generating typed dependency parses from phrase structure parses",
"authors": [
{
"first": "Marie-Catherine",
"middle": [],
"last": "De-Marneffe",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Maccartney",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of LREC-06",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marie-Catherine De-Marneffe, Bill Maccartney, and Christopher Manning. 2006. Generating typed de- pendency parses from phrase structure parses. In Proceedings of LREC-06.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Valence extraction using EM selection and co-occurrence matrices",
"authors": [
{
"first": "Lukasz",
"middle": [],
"last": "Debowski",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedins of LREC-09",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lukasz Debowski. 2009. Valence extraction using EM selection and co-occurrence matrices. Proceedins of LREC-09.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Placing search in context: The concept revisited",
"authors": [
{
"first": "Lev",
"middle": [],
"last": "Finkelstein",
"suffix": ""
},
{
"first": "Evgeniy",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
},
{
"first": "Yossi",
"middle": [],
"last": "Matias",
"suffix": ""
},
{
"first": "Ehud",
"middle": [],
"last": "Rivlin",
"suffix": ""
},
{
"first": "Zach",
"middle": [],
"last": "Solan",
"suffix": ""
},
{
"first": "Gadi",
"middle": [],
"last": "Wolfman",
"suffix": ""
},
{
"first": "Eitan",
"middle": [],
"last": "Ruppin",
"suffix": ""
}
],
"year": 2002,
"venue": "ACM Transactions on Information Systems",
"volume": "20",
"issue": "",
"pages": "116--131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Ei- tan Ruppin. 2002. Placing search in context: The concept revisited. ACM Transactions on Informa- tion Systems, 20:116-131.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A dataset of syntactic-ngrams over time from a very large corpus of english books",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Orwant",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of (*SEM)-13. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg and Jon Orwant. 2013. A dataset of syntactic-ngrams over time from a very large corpus of english books. In Proceedings of (*SEM)-13. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Natural language generation in the context of machine translation",
"authors": [
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "\u010cmejrek",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Terry",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "Kristen",
"middle": [],
"last": "Parton",
"suffix": ""
},
{
"first": "Gerald",
"middle": [],
"last": "Penn",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [],
"last": "Radev",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
}
],
"year": 2002,
"venue": "Center for Language and Speech Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jan Haji\u010d, Martin mejrek, Bonnie Dorr, Yuan Ding, Ja- son Eisner, Daniel Gildea, Terry Koo, Kristen Par- ton, Gerald Penn, Dragomir Radev, and Owen Ram- bow. 2002. Natural language generation in the con- text of machine translation. Technical report, Cen- ter for Language and Speech Processing, Johns Hop- kins University, Baltimore. Summer Workshop Final Report.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Handling structural divergences and recovering dropped arguments in a korean/english machine translation system",
"authors": [
{
"first": "Chung-hye",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Benoit",
"middle": [],
"last": "Lavoie",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Kittredge",
"suffix": ""
},
{
"first": "Tanya",
"middle": [],
"last": "Korelsky",
"suffix": ""
},
{
"first": "Myunghee",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chung hye Han, Benoit Lavoie, Martha Palmer, Owen Rambow, Richard Kittredge, Tanya Korelsky, and Myunghee Kim. 2000. Handling structural diver- gences and recovering dropped arguments in a ko- rean/english machine translation system. In Pro- ceedings of the AMTA-00.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Automatic extraction of subcategorization frames for italian",
"authors": [
{
"first": "Dino",
"middle": [],
"last": "Ienco",
"suffix": ""
},
{
"first": "Serena",
"middle": [],
"last": "Villata",
"suffix": ""
},
{
"first": "Cristina",
"middle": [],
"last": "Bosco",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of LREC-08",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dino Ienco, Serena Villata, and Cristina Bosco. 2008. Automatic extraction of subcategorization frames for italian. In Proceedings of LREC-08.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Extended constituent-to-dependency conversion for english",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Johansson",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Nugues",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of NODALIDA-07",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Johansson and Pierre Nugues. 2007. Ex- tended constituent-to-dependency conversion for en- glish. In Proceedings of NODALIDA-07.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Acquiring reliable predicate-argument structures from raw corpora for case frame compilation",
"authors": [
{
"first": "Daisuke",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of LREC-10",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daisuke Kawahara and Sadao Kurohashi. 2010. Ac- quiring reliable predicate-argument structures from raw corpora for case frame compilation. In Proceed- ings of LREC-10.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Accurate unlexicalized parsing",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of ACL-03",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher Manning. 2003. Accurate unlexicalized parsing. In Proceedings of ACL-03.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Probabilistic graphical models: principles and techniques",
"authors": [
{
"first": "Daphne",
"middle": [],
"last": "Koller",
"suffix": ""
},
{
"first": "Nir",
"middle": [],
"last": "Friedman",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daphne Koller and Nir Friedman. 2009. Probabilistic graphical models: principles and techniques. The MIT Press.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Statistical filtering and subcategorization frame acquisition",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
},
{
"first": "Genevieve",
"middle": [],
"last": "Gorrell",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Mc-Carthy",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of EMNLP-00",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna Korhonen, Genevieve Gorrell, and Diana Mc- Carthy. 2000. Statistical filtering and subcate- gorization frame acquisition. In Proceedings of EMNLP-00.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Semantically motivated subcategorization acquisition",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the ACL-02 workshop on Unsupervised lexical acquisition",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna Korhonen. 2002. Semantically motivated sub- categorization acquisition. In Proceedings of the ACL-02 workshop on Unsupervised lexical acquisi- tion.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Unsupervised semantic role induction via split-merge clustering",
"authors": [
{
"first": "Joel",
"middle": [],
"last": "Lang",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ACL-11",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joel Lang and Mirella Lapata. 2011a. Unsupervised semantic role induction via split-merge clustering. In Proceedings of ACL-11.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Unsupervised semantic role induction with graph partitioning",
"authors": [
{
"first": "Joel",
"middle": [],
"last": "Lang",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of EMNLP-11",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joel Lang and Mirella Lapata. 2011b. Unsupervised semantic role induction with graph partitioning. In Proceedings of EMNLP-11.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Parsing biomedical literature",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Lease",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of IJCNLP-05",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Lease and Eugene Charniak. 2005. Parsing biomedical literature. In Proceedings of IJCNLP- 05.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Unsupervised acquisition of verb subcategorization frames from shallow-parsed corpora",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Lenci",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Mcgillivray",
"suffix": ""
},
{
"first": "Simonetta",
"middle": [],
"last": "Montemagni",
"suffix": ""
},
{
"first": "Vito",
"middle": [],
"last": "Pirrelli",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of LREC-08",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alessandro Lenci, Barbara Mcgillivray, Simonetta Montemagni, and Vito Pirrelli. 2008. Unsupervised acquisition of verb subcategorization frames from shallow-parsed corpora. In Proceedings of LREC- 08.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "English verb classes and alternations: A preliminary investigation",
"authors": [
{
"first": "Beth",
"middle": [],
"last": "Levin",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beth Levin. 1993. English verb classes and alterna- tions: A preliminary investigation. Chicago, IL.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Exploring subdomain variation in biomedical language",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Lippincott",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
},
{
"first": "Diarmuid",
"middle": [],
"last": "Oseaghdha",
"suffix": ""
}
],
"year": 2010,
"venue": "BMC Bioinformatics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Lippincott, Anna Korhonen, and Diarmuid Os- eaghdha. 2010. Exploring subdomain variation in biomedical language. BMC Bioinformatics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Learning syntactic verb frames using graphical models",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Lippincott",
"suffix": ""
},
{
"first": "Aanna",
"middle": [],
"last": "Korhonen",
"suffix": ""
},
{
"first": "Diarmuid",
"middle": [],
"last": "Oseaghdha",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of ACL-12",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Lippincott, Aanna Korhonen, and Diarmuid Os- eaghdha. 2012. Learning syntactic verb frames us- ing graphical models. In Proceedings of ACL-12.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Online large-margin training of dependency parsers",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Koby",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ACL-05",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of de- pendency parsers. In Proceedings of ACL-05.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "LexSchem: A large subcategorization lexicon for French verbs",
"authors": [
{
"first": "Cedric",
"middle": [],
"last": "Messiant",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
},
{
"first": "Thierry",
"middle": [],
"last": "Poibeau",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of LREC-08",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cedric Messiant, Anna Korhonen, and Thierry Poibeau. 2008. LexSchem: A large subcategoriza- tion lexicon for French verbs. In Proceedings of LREC-08.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A subcategorization acquistion system for french verbs",
"authors": [
{
"first": "Cedric",
"middle": [],
"last": "Messiant",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL08-SRW",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cedric Messiant. 2008. A subcategorization acquis- tion system for french verbs. In Proceedings of ACL08-SRW.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Probabilistic disambiguaton models for wide-coverage hpsg parsing",
"authors": [
{
"first": "Yusuke",
"middle": [],
"last": "Miyao",
"suffix": ""
},
{
"first": "Junichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ACL-05",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yusuke Miyao and Junichi Tsujii. 2005. Probabilistic disambiguaton models for wide-coverage hpsg pars- ing. In Proceedings of ACL-05.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Verb subcategorization kernels for automatic semantic labeling",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Basili",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the ACL-SIGLEX Workshop on Deep Lexical Acquisition",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alessandro Moschitti and Roberto Basili. 2005. Verb subcategorization kernels for automatic semantic la- beling. In Proceedings of the ACL-SIGLEX Work- shop on Deep Lexical Acquisition.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Large-scale induction and evaluation of lexical resources from the penn-ii and penn-iii treebanks",
"authors": [
{
"first": "Ruth",
"middle": [],
"last": "O'Donovan",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Burke",
"suffix": ""
},
{
"first": "Aoife",
"middle": [],
"last": "Cahill",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "van Genabith",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
}
],
"year": 2005,
"venue": "Computational Linguistics",
"volume": "31",
"issue": "",
"pages": "328--365",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruth O'Donovan, Michael Burke, Aoife Cahill, Josef van Genabith, and Andy Way. 2005. Large-scale induction and evaluation of lexical resources from the penn-ii and penn-iii treebanks. Computational Linguistics, 31:328-365.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Dependency-based construction of semantic space models",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Pado",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "",
"pages": "161--199",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Pado and Mirella Lapata. 2007. Dependency-based construction of semantic space models. Computational Linguistics, 33:161-199.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "A system for large-scale acquisition of verbal, nominal and adjectival subcategorization frames from corpora",
"authors": [
{
"first": "Judita",
"middle": [],
"last": "Preiss",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of ACL-07",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Judita Preiss, Ted Briscoe, and Anna Korhonen. 2007. A system for large-scale acquisition of verbal, nom- inal and adjectival subcategorization frames from corpora. In Proceedings of ACL-07.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Third evaluation report. evaluation of panacea v3 and produced resources",
"authors": [
{
"first": "Valeria",
"middle": [],
"last": "Quochi",
"suffix": ""
},
{
"first": "Francesca",
"middle": [],
"last": "Frontini",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Bartolini",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Hamon",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Poch",
"suffix": ""
},
{
"first": "Muntsa",
"middle": [],
"last": "Padr",
"suffix": ""
},
{
"first": "Nuria",
"middle": [],
"last": "Bel",
"suffix": ""
},
{
"first": "Gregor",
"middle": [],
"last": "Thurmair",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Toral",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Kamram",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Valeria Quochi, Francesca Frontini, Roberto Bartolini, Olivier Hamon, Marc Poch, Muntsa Padr, Nuria Bel, Gregor Thurmair, Antonio Toral, and Amir Kam- ram. 2012. Third evaluation report. evaluation of panacea v3 and produced resources. Technical re- port.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Improved lexical acquisition through dpp-based verb clustering",
"authors": [
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of ACL-13",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roi Reichart and Anna Korhonen. 2013. Improved lexical acquisition through dpp-based verb cluster- ing. In Proceedings of ACL-13.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "The nvi clustering evaluation measure",
"authors": [
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of CoNLL-09",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roi Reichart and Ari Rappoport. 2009. The nvi clustering evaluation measure. In Proceedings of CoNLL-09.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "A diverse dirichlet process ensemble for unsupervised induction of syntactic categories",
"authors": [
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Gal",
"middle": [],
"last": "Elidan",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of COLING-12",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roi Reichart, Gal Elidan, and Ari Rappoport. 2012. A diverse dirichlet process ensemble for unsupervised induction of syntactic categories. In Proceedings of COLING-12.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Acquisition and evaluation of verb subcategorization resources for biomedicine",
"authors": [
{
"first": "Laura",
"middle": [],
"last": "Rimell",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Lippincott",
"suffix": ""
},
{
"first": "Karin",
"middle": [],
"last": "Verspoor",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of Biomedical Informatics",
"volume": "46",
"issue": "",
"pages": "228--237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laura Rimell, Thomas Lippincott, Karin Verspoor, He- len Johnson, and Anna Korhonen. 2013. Acqui- sition and evaluation of verb subcategorization re- sources for biomedicine. Journal of Biomedical In- formatics, 46:228-237.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Active learning for part-of-speech tagging: Accelerating corpus annotation",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Ringger",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Mcclanahan",
"suffix": ""
},
{
"first": "Robbie",
"middle": [],
"last": "Haertel",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Busby",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Carmen",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Seppi",
"suffix": ""
},
{
"first": "Deryle",
"middle": [],
"last": "Lonsdale",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the ACL-07 Linguistic Annotation Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Ringger, Peter McClanahan, Robbie Haertel, George Busby, Marc Carmen, James Carroll, Kevin Seppi, and Deryle Lonsdale. 2007. Active learning for part-of-speech tagging: Accelerating corpus an- notation. In Proceedings of the ACL-07 Linguistic Annotation Workshop.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "subcategorization frequencies are affected by corpus choice",
"authors": [
{
"first": "Douglas",
"middle": [],
"last": "Roland",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of ACL-98",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douglas Roland and Daniel Jurafsky. 1998. subcate- gorization frequencies are affected by corpus choice. In Proceedings of ACL-98.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "V measure: a conditional entropybased external cluster evaluation measure",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Rosenberg",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hirschberg",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of EMNLP-07",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Rosenberg and Julia Hirschberg. 2007. V measure: a conditional entropybased external cluster evaluation measure. In Proceedings of EMNLP-07.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Improved parsing and pos tagging using inter-sentence consistency constraints",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Globerson",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of EMNLP-12",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Rush, Roi Reichart, Michael Collins, and Amir Globerson. 2012. Improved parsing and pos tagging using inter-sentence consistency constraints. In Proceedings of EMNLP-12.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Experiments on the automatic induction of german semantic verb classes",
"authors": [
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
}
],
"year": 2006,
"venue": "Computational Linguistics",
"volume": "32",
"issue": "2",
"pages": "159--194",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sabine Schulte im Walde. 2006. Experiments on the automatic induction of german semantic verb classes. Computational Linguistics, 32(2):159-194.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Finding the maps for belief networks is np-hard",
"authors": [
{
"first": "Solomon",
"middle": [],
"last": "Shimony",
"suffix": ""
}
],
"year": 1994,
"venue": "Artificial Intelligence",
"volume": "68",
"issue": "",
"pages": "399--310",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Solomon Shimony. 1994. Finding the maps for belief networks is np-hard. Artificial Intelligence, 68:399- 310.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Tightening lp relaxations for map using message passing",
"authors": [
{
"first": "David",
"middle": [],
"last": "Sontag",
"suffix": ""
},
{
"first": "Talya",
"middle": [],
"last": "Meltzer",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Globerson",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jaakkola",
"suffix": ""
},
{
"first": "Yair",
"middle": [],
"last": "Weiss",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of UAI-08",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Sontag, Talya Meltzer, Amir Globerson, Tommi Jaakkola, and Yair Weiss. 2008. Tightening lp re- laxations for map using message passing. In Pro- ceedings of UAI-08.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Hierarchical verb clustering using graph factorization",
"authors": [
{
"first": "Lin",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of EMNLP-11",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin Sun and Anna Korhonen. 2011. Hierarchical verb clustering using graph factorization. In Proceedings of EMNLP-11.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "A bayesian approach to unsupervised semantic role induction",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Titvo",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Klementiev",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of EMNLP-12",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Titvo and Alexandre Klementiev. 2012. A bayesian approach to unsupervised semantic role in- duction. In Proceedings of EMNLP-12.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Feature-rich part-ofspeech tagging with a cyclic dependency network",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of NAACL-03",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova, Dan Klein, Christopher Manning, and Yoram Singer. 2003. Feature-rich part-of- speech tagging with a cyclic dependency network. In Proceedings of NAACL-03.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "From frequency to meaning: Vector space models of semantics",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Turney",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of artificial intelligence research",
"volume": "37",
"issue": "",
"pages": "141--188",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Turney and Patrick Pantel. 2010. From fre- quency to meaning: Vector space models of se- mantics. Journal of artificial intelligence research, 37:141-188.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Multi-way tensor factorization for unsupervised lexical acquisition",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Van De Cruys",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Rimell",
"suffix": ""
},
{
"first": "Thierry",
"middle": [],
"last": "Poibeau",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of COLING-12",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Van de Cruys, Laura Rimell, Thierry Poibeau, and Anna Korhonen. 2012. Multi-way tensor factor- ization for unsupervised lexical acquisition. In Pro- ceedings of COLING-12.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Bootstrapping a verb lexicon for biomedical information extraction",
"authors": [
{
"first": "Giulia",
"middle": [],
"last": "Venturi",
"suffix": ""
},
{
"first": "Simonetta",
"middle": [],
"last": "Montemagni",
"suffix": ""
},
{
"first": "Simone",
"middle": [],
"last": "Marchi",
"suffix": ""
},
{
"first": "Yutaka",
"middle": [],
"last": "Sasaki",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Thompson",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Mc-Naught",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Ananiadou",
"suffix": ""
}
],
"year": 2009,
"venue": "Computational Linguistics and Intelligent Text Processing",
"volume": "5449",
"issue": "",
"pages": "137--148",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Giulia Venturi, Simonetta Montemagni, Simone Marchi, Yutaka Sasaki, Paul Thompson, John Mc- Naught, and Sophia Ananiadou. 2009. Bootstrap- ping a verb lexicon for biomedical information ex- traction. Computational Linguistics and Intelligent Text Processing, 5449:137-148.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Using subcategorization knowledge to improve case prediction for translation to german",
"authors": [
{
"first": "Marion",
"middle": [],
"last": "Weller",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Fraser",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of ACL-13",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marion Weller, Alexander Fraser, and Sabine Schulte im Walde. 2013. Using subcategorization knowl- edge to improve case prediction for translation to german. In Proceedings of ACL-13.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Linear programming relazations and belief propogataion an empitical study",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Yanover",
"suffix": ""
},
{
"first": "Talya",
"middle": [],
"last": "Meltzer",
"suffix": ""
},
{
"first": "Yair",
"middle": [],
"last": "Weiss",
"suffix": ""
}
],
"year": 2006,
"venue": "JMLR Special Issue on Machine Learning and Large Scale Optimization",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen Yanover, Talya Meltzer, and Yair Weiss. 2006. Linear programming relazations and belief pro- pogataion an empitical study. JMLR Special Issue on Machine Learning and Large Scale Optimization.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Multiclass spectral clustering",
"authors": [
{
"first": "Stella",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Jianbo",
"middle": [],
"last": "Shi",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of ICCV-13",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stella Yu and Jianbo Shi. 2003. Multiclass spectral clustering. In Proceedings of ICCV-13.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"num": null,
"content": "<table/>",
"type_str": "table",
"html": null,
"text": "VP [the mast]NP [of [ships on the horizon ]NP ]PP . (2) Indirect Transitive: [They]NP [distinguished]VP [between [me and you]ADVP ]PP . (3) Ditransitive: [They]NP [distinguished]VP [him]NP [from [the other boys]NP ]PP."
},
"TABREF2": {
"num": null,
"content": "<table><tr><td colspan=\"2\">Environment Labour Legislation</td></tr><tr><td>M-1 V</td><td>M-1 V</td></tr></table>",
"type_str": "table",
"html": null,
"text": "Top 5 most frequent SCFs for the Environment and Labour Legislation datasets used in our experiments."
}
}
}
}