{
"paper_id": "N15-1042",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:35:37.496962Z"
},
"title": "Data-driven sentence generation with non-isomorphic trees",
"authors": [
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": "",
"affiliation": {
"laboratory": "Natural Language Processing Group",
"institution": "Pompeu Fabra University",
"location": {
"settlement": "Barcelona",
"country": "Spain"
}
},
"email": ""
},
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Google Inc",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Simon",
"middle": [],
"last": "Mille",
"suffix": "",
"affiliation": {
"laboratory": "Natural Language Processing Group",
"institution": "Pompeu Fabra University",
"location": {
"settlement": "Barcelona",
"country": "Spain"
}
},
"email": ""
},
{
"first": "Leo",
"middle": [],
"last": "Wanner",
"suffix": "",
"affiliation": {
"laboratory": "Natural Language Processing Group",
"institution": "Pompeu Fabra University",
"location": {
"settlement": "Barcelona",
"country": "Spain"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "structures from which the generation naturally starts often do not contain any functional nodes, while surface-syntactic structures or a chain of tokens in a linearized tree contain all of them. Therefore, data-driven linguistic generation needs to be able to cope with the projection between non-isomorphic structures that differ in their topology and number of nodes. So far, such a projection has been a challenge in data-driven generation and was largely avoided. We present a fully stochastic generator that is able to cope with projection between non-isomorphic structures. The generator, which starts from PropBank-like structures, consists of a cascade of SVM-classifier based submodules that map in a series of transitions the input structures onto sentences. The generator has been evaluated for English on the Penn-Treebank and for Spanish on the multi-layered Ancora-UPF corpus.",
"pdf_parse": {
"paper_id": "N15-1042",
"_pdf_hash": "",
"abstract": [
{
"text": "structures from which the generation naturally starts often do not contain any functional nodes, while surface-syntactic structures or a chain of tokens in a linearized tree contain all of them. Therefore, data-driven linguistic generation needs to be able to cope with the projection between non-isomorphic structures that differ in their topology and number of nodes. So far, such a projection has been a challenge in data-driven generation and was largely avoided. We present a fully stochastic generator that is able to cope with projection between non-isomorphic structures. The generator, which starts from PropBank-like structures, consists of a cascade of SVM-classifier based submodules that map in a series of transitions the input structures onto sentences. The generator has been evaluated for English on the Penn-Treebank and for Spanish on the multi-layered Ancora-UPF corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Applications such as machine translation that inherently draw upon sentence generation increasingly deal with deep meaning representations; see, e.g., (Aue et al., 2004; Jones et al., 2012; Andreas et al., 2013) . Deep representations tend to differ in their topology and number of nodes from the corresponding surface structures since they do not contain, e.g., any functional nodes, while syntactic structures or chains of tokens in linearized trees do. This means that sentence generation needs to be able to cope with the projection between non-isomorphic structures. However, most of the recent work in data-driven sentence generation still avoids this challenge. Some systems focus on syntactic generation (Bangalore and Rambow, 2000; Langkilde-Geary, 2002; Filippova and Strube, 2008) or linearization and inflection (Filippova and Strube, 2007; He et al., 2009; Wan et al., 2009; Guo et al., 2011a) , and thus avoid the need to cope with this projection altogether; some use a rule-based module to handle the projection between non-isomorphic structures (Knight and Hatzivassiloglou, 1995; Langkilde and Knight, 1998; Bohnet et al., 2011) ; and some adapt the meaning structures to be isomorphic with syntactic structures (Bohnet et al., 2010) . However, it is obvious that a \"syntacticization\" of meaning structures can only be a temporary workaround and that a rule-based module raises the usual questions of coverage, maintenance and portability.",
"cite_spans": [
{
"start": 151,
"end": 169,
"text": "(Aue et al., 2004;",
"ref_id": "BIBREF1"
},
{
"start": 170,
"end": 189,
"text": "Jones et al., 2012;",
"ref_id": "BIBREF19"
},
{
"start": 190,
"end": 211,
"text": "Andreas et al., 2013)",
"ref_id": "BIBREF0"
},
{
"start": 712,
"end": 740,
"text": "(Bangalore and Rambow, 2000;",
"ref_id": "BIBREF5"
},
{
"start": 741,
"end": 763,
"text": "Langkilde-Geary, 2002;",
"ref_id": "BIBREF23"
},
{
"start": 764,
"end": 791,
"text": "Filippova and Strube, 2008)",
"ref_id": "BIBREF12"
},
{
"start": 824,
"end": 852,
"text": "(Filippova and Strube, 2007;",
"ref_id": "BIBREF11"
},
{
"start": 853,
"end": 869,
"text": "He et al., 2009;",
"ref_id": "BIBREF17"
},
{
"start": 870,
"end": 887,
"text": "Wan et al., 2009;",
"ref_id": "BIBREF35"
},
{
"start": 888,
"end": 906,
"text": "Guo et al., 2011a)",
"ref_id": "BIBREF13"
},
{
"start": 1063,
"end": 1098,
"text": "(Knight and Hatzivassiloglou, 1995;",
"ref_id": "BIBREF21"
},
{
"start": 1099,
"end": 1126,
"text": "Langkilde and Knight, 1998;",
"ref_id": "BIBREF22"
},
{
"start": 1127,
"end": 1147,
"text": "Bohnet et al., 2011)",
"ref_id": "BIBREF9"
},
{
"start": 1231,
"end": 1252,
"text": "(Bohnet et al., 2010)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present a fully stochastic generator that is able to cope with the projection between non-isomorphic structures. 1 Such a generator can be used as a stand-alone application and also, e.g., in text simplification (Klebanov et al., 2004) or deep machine translation (Jones et al., 2012 ) (where the transfer is done at a deep level). In abstractive summarization, it facilitates the generation of the summaries, and in extractive summarization a better sentence fusion. 2 The generator, which starts from elementary predicate-argument lexico-structural structures as used in sentence planning by Stent et al. (2004) , consists of a cascade of Support Vector Machines (SVM)-classifier based submodules that map the input structures onto sentences in a series of transitions. Following the idea presented in (Ballesteros et al., 2014b) , a separate SVM-classifier is defined for the mapping of each linguistic category. The generator has been tested on Spanish with the multi-layered Ancora-UPF corpus (Mille et al., 2013) and on English with an extended version of the dependency Penn TreeBank (Johansson and Nugues, 2007) .",
"cite_spans": [
{
"start": 230,
"end": 253,
"text": "(Klebanov et al., 2004)",
"ref_id": "BIBREF20"
},
{
"start": 282,
"end": 301,
"text": "(Jones et al., 2012",
"ref_id": "BIBREF19"
},
{
"start": 486,
"end": 487,
"text": "2",
"ref_id": null
},
{
"start": 612,
"end": 631,
"text": "Stent et al. (2004)",
"ref_id": "BIBREF34"
},
{
"start": 822,
"end": 849,
"text": "(Ballesteros et al., 2014b)",
"ref_id": "BIBREF4"
},
{
"start": 1016,
"end": 1036,
"text": "(Mille et al., 2013)",
"ref_id": "BIBREF28"
},
{
"start": 1109,
"end": 1137,
"text": "(Johansson and Nugues, 2007)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of the paper is structured as follows. In the next section, we briefly outline the fundamentals of sentence generation as we view it in our work, focusing in particular on its most challenging part: the transition between the non-isomorphic predicate-argument lexico-structural structures and surface-syntactic structures. Section 3 outlines the setup of our system. Section 4 discusses the experiments we carried out and the results we obtained. In Section 5, we briefly summarize related work, before Section 6 draws some conclusions and outlines future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Sentence generation realized in this paper is part of the sentence synthesis pipeline argued for by Mel'\u010duk (1988) . It consists of a sequence of two mappings:",
"cite_spans": [
{
"start": 100,
"end": 114,
"text": "Mel'\u010duk (1988)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Fundamentals",
"sec_num": "2"
},
{
"text": "1. Predicate-argument lexico-structural structure \u2192 Syntactic structure; 2. Syntactic structure \u2192 Linearized structure. Following the terminology in (Mel'\u010duk, 1988) , we refer to the predicate-argument lexico-structural structures as \"deep-syntactic structures\" (DSyntSs) and to the syntactic structures as \"surface-syntactic structures\" (SSyntSs).",
"cite_spans": [
{
"start": 149,
"end": 164,
"text": "(Mel'\u010duk, 1988)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Fundamentals",
"sec_num": "2"
},
{
"text": "While SSyntSs and linearized structures are isomorphic, the difference in the linguistic abstraction of the DSyntSs and SSyntSs leads to divergences that impede the isomorphy between the two and make the first mapping a challenge for statistical generation. Therefore, we focus in this section on the presentation of the DSyntSs and SSyntSs and the mapping between them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Fundamentals",
"sec_num": "2"
},
{
"text": "DSyntSs are very similar to the PropBank (Babko-Malaya, 2005 ) structures and the structures as used for the deep track of the First Surface Realization Shared Task (SRST, (Belz et al., 2011)) annotations. DSyntSs are connected trees that contain only meaning-bearing lexical items and both predicate-argument (indicated by Roman numerals: I, II, III, IV, . . . ) and lexico-structural, or deep-syntactic, (ATTR(ibutive), APPEND(itive) and COORD(inative)) relations. In other words, they do not contain any punctuation and functional nodes, i.e., governed elements, auxiliaries and determiners. Governed elements such as governed prepositions and subordinating conjunctions are dropped because they are imposed by sub-categorization restrictions of the predicative head and void of own meaning, as, for instance, to in give TO your friend or that in I know that you will come. 3 Auxiliaries do not appear as nodes in DSyntSs. Rather, the information they encode is captured in terms of tense, aspect and voice attributes of the corresponding full verbal nodes. Equally, determiners are substituted by attribute-value pairs of givenness they encode, assigned to their governors. See Figure 1 (a) for a sample DSyntS. 4",
"cite_spans": [
{
"start": 41,
"end": 60,
"text": "(Babko-Malaya, 2005",
"ref_id": "BIBREF2"
},
{
"start": 172,
"end": 192,
"text": "(Belz et al., 2011))",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 1182,
"end": 1190,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Input DSyntSs",
"sec_num": "2.1.1"
},
{
"text": "SSyntSs are connected dependency trees in which the nodes are labeled by open or closed class lexical items and the edges by grammatical function relations of the type 'subject', 'oblique object', 'adverbial', 'modifier', etc. A SSyntS is thus a typical dependency tree as used in data-driven syntactic parsing (Haji\u010d et al., 2009) and generation (Belz et al., 2011) . See Figure 1 (b). In order to project a DSyntS onto its corresponding SSyntS in the course of generation (where both DSyntSs and their corresponding SSyntSs are stored in the 14-column CoNLL'09 format), the following types of actions need to be performed: 5",
"cite_spans": [
{
"start": 311,
"end": 331,
"text": "(Haji\u010d et al., 2009)",
"ref_id": "BIBREF16"
},
{
"start": 347,
"end": 366,
"text": "(Belz et al., 2011)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 373,
"end": 381,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "SSyntSs",
"sec_num": "2.1.2"
},
{
"text": "1. Project each node in the DSyntS onto its SSyntS-correspondence. This correspondence can be a single node, as, e.g., job \u2192 [NN] (where NN is a noun), or a subtree (hypernode, known as syntagm in linguistics), as, e.g., time \u2192 [DT NN] (where DT is a determiner and NN a noun) or create \u2192 [VAUX VAUX VB IN] (where VAUX is an auxiliary, VB a full verb and IN a preposition). In formal terms, we assume any SSyntS-correspondence to be a hypernode with a cardinality \u2265 1. 2. Generate the correct lemma for the nodes in SSyntS that do not have a 1:1 correspondence with an origin DSyntS node (as DT and VAUX above). 6 3. Establish the dependencies within the individual SSyntS-hypernodes. 4. Establish the dependencies between the SSyntS-hypernodes (more precisely, between the nodes of different SSyntS-hypernodes) to obtain a connected SSyntS-tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SSyntSs",
"sec_num": "2.1.2"
},
{
"text": "For the validation of the performance of our generator on Spanish, we use the AnCora-UPF treebank, which contains only about 100,000 tokens, but which has been manually annotated and validated on the SSyntS-and DSyntS-layers, such that its quality is rather high. The deep annotation does not contain any functional prepositions since they have been removed for all predicates of the corpus, and the DSyntS-relations have been edited following annotation guidelines. AnCora-UPF SSyntSs are annotated with fine-grained dependencies organized in a hierarchical scheme (Mille et al., 2012) , in a similar fashion as the dependencies of the Stanford Scheme (de Marneffe et al., 2006). 7 Thus, it is possible to use the full set of labels or to reduce it according to our needs. We performed preliminary experiments in order to assess which tag granularity is better suited for generation and came up with the 31-label tagset.",
"cite_spans": [
{
"start": 566,
"end": 586,
"text": "(Mille et al., 2012)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Spanish Treebank",
"sec_num": "2.3.1"
},
{
"text": "For the validation of the generator on English, we use the dependency Penn TreeBank (about 1,000,000 tokens), which we extend by a DSynt layer defined by the same deep dependency relations, features and node correspondences as the Spanish DSynt layer. The Penn TreeBank DSynt layer is obtained by a rule-based graph transducer. The transducer removes definite and indefinite determiners, auxiliaries, THAT complementizers, TO infinitive markers, and a finite list of functional prepositions. The functional prepositions have been manually compiled from the description and examples of the roles in the PropBank and NomBank annotations of the 150 most frequent predicates of the corpus. A dictionary has been built, which contains for each of the 150 predicates the argument slots (roles) and the prepositions associated with it, such that given a predicate and a preposition, we know to which role it corresponds. Consider, for illustration, Figure 2 , which indicates that for the nominal predicate plan 01, a dependent introduced by the preposition to corresponds to the second argument of plan 01, while a dependent introduced by for is its third argument. For each possible surface dependency relation between a governor and a dependent, a default mapping is provided, which is applied if (i) The syntactic structure fulfills the conditions of the default mapping (e.g., 'subject' is by default mapped onto 'I' unless it is the subject of a passive verb, in which case it is mapped to the second argument 'II'), and (ii) The pair governor-dependent is not found in the dictionary; that is, if the dependent of the SSyntS dependency relation is a preposition found in the governor's entry in the dictionary, the information provided in the dictionary is used instead of the default mapping. 8",
"cite_spans": [],
"ref_spans": [
{
"start": 942,
"end": 950,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "English Treebank",
"sec_num": "2.3.2"
},
{
"text": "For instance, in the sentence Sony announced its plans to hire Mr. Guber, to is a dependent of plan with the SSyntS dependency NMOD. NMOD is by default mapped onto the deep relation ATTR, but since in the dictionary entry of plan it is stated that a dependent introduced by to is mapped to 'II' (cf. Figure 2) , II is the relation that appears in the DSyntS-annotation.",
"cite_spans": [],
"ref_spans": [
{
"start": 300,
"end": 309,
"text": "Figure 2)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "English Treebank",
"sec_num": "2.3.2"
},
{
"text": "The features definiteness, voice, tense, aspect in the FEATS column of the CoNLL format capture the information conveyed by determiners and auxiliaries. The conversion procedure maps surface dependency relations as found in the Penn TreeBank onto the restricted set of deep dependency relations as described in Section 2.1.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "English Treebank",
"sec_num": "2.3.2"
},
{
"text": "The nodes in the original (surface-oriented) and deep annotations are connected through their IDs. In the FEATS column of the output CoNLL file, id0 indicates the deep identifier of a word, while id1 indicates the ID of the surface node it corresponds to. There are fewer nodes in DSyntSs than in SSyntSs since SSyntSs contain all the words of a sentence. Hence, a DSynt-node can correspond to several SSyntS nodes. Multiple correspondences are indicated by the presence of the id2 (id3, id4, etc.) feature in the FEATS column.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "English Treebank",
"sec_num": "2.3.2"
},
{
"text": "Since no available data-driven generator uses DSyntSs as input, we developed as baselines two rule-based graph transducer generators, which produce the best possible SSyntSs for English and Spanish, respectively, using only the information contained in the starting DSyntS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3.1"
},
{
"text": "The two baseline generators are structured similarly: both contain around 50 graph transducer rules, separated into two clusters. The first cluster maps DSyntS-nodes onto SSyntS-nodes, while the second one handles the introduction of SSyntS dependency relations between the generated SSyntS-nodes. For instance, in English, one rule maps DSyntS-nodes that have a one-to-one correspondence in the SSyntS external and internal arguments, such that for some predicates the arguments are numbered starting from '0', and for others starting from '1'. This has been normalized in order to make all arguments start from '1' for all predicates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3.1"
},
{
"text": "if N1 is a Vfin and ((R1,2 == I and N1 is in active voice and N2 is not by) or (R1,2 == II and N1 is in passive voice)): if \u2203 one-to-one correspondence between NDi and NSi then introduce SBJ between NS",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3.1"
},
{
"text": "N \u2192 DET+NN, N \u2192 DET+NN+governed PREP, V \u2192 AUX+VV, V \u2192 that COMPL+AUX+VV+governed PREP, etc.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3.1"
},
{
"text": ", and 25 rules generate the dependency relations. 9 The transduction rules apply in two phases, see Figure 3 . During the first phase, all nodes and intra-hypernode dependencies are created in the output structure. During the second phase, all inter-hypernode dependencies are established. Since there are one-to-many DSyntS-SSyntS correspondences, the rules of the second phase have to ensure that the correct output nodes are targeted, i.e., that jobs in Figure 1(b) is made a dependent of have, and not of been or created, which all correspond to create in the input. Consider, for illustration of the complexity of the rule-based generator, the transduction rule in Figure 3 . The rule creates the SSynt dependency relation SBJ (subject) in a target SSyntS (with a governor node N D 1 and a dependent node N D 2 linked by a deep dependency relation R 1,2 in the input DSyntS and two nodes N S 1 and N S 2 which correspond to N D 1 and N D 2 respectively in the target SSyntS).",
"cite_spans": [],
"ref_spans": [
{
"start": 100,
"end": 108,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 457,
"end": 468,
"text": "Figure 1(b)",
"ref_id": null
},
{
"start": 670,
"end": 679,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3.1"
},
{
"text": "The evaluation shows that all straightforward mappings are performed correctly; English auxiliaries, that complementizers, infinitive markers and determiners are introduced, and so are Spanish auxiliaries, reflexive pronouns, and determiners. That is, the rules produce well-formed SSyntSs with all possible combinations of auxiliaries, conjunctions and/or prepositions for verbs, determiners and/or prepositions for nouns, adjectives and adverbs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3.1"
},
{
"text": "When there are several possible mappings, the baseline takes decisions by default. For example, when a governed preposition must be introduced, we always introduce the most common one (of in English, de 'of' in Spanish).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3.1"
},
{
"text": "The data-driven generator is defined as a tree transducer framework that consists of a cascade of six small data-driven tasks; cf. Figure 4 . The first four tasks capture the actions 1.-4. from Section 2.2; the fifth linearizes the obtained SSyntS. Figure 4 provides a sample input and output of each submodule. The system outputs a 14-column CoNLL'09 linearized format without morphological inflections or punctuation marks.",
"cite_spans": [],
"ref_spans": [
{
"start": 131,
"end": 139,
"text": "Figure 4",
"ref_id": "FIGREF4"
},
{
"start": 249,
"end": 257,
"text": "Figure 4",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Data-Driven Generator",
"sec_num": "3.2"
},
{
"text": "In the next sections, we discuss how these actions are realized and how they are embedded into the overall generation process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data-Driven Generator",
"sec_num": "3.2"
},
{
"text": "The intra- and inter-hypernode dependency determination works as an informed dependency parser that uses the DSyntS as input. The search space is thus completely pruned. Note also that for each step, the space of classes for the SVMs is based on linguistic facts extracted from the training corpus (for instance, for the preposition generation SVM, the classes are the possible prepositions; for the auxiliary generation SVM, the possible auxiliaries, etc.).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data-Driven Generator",
"sec_num": "3.2"
},
{
"text": "Given a node nd from the DSyntS, the system must find the shape of the surface hypernode that corresponds to nd in the SSyntS. The hypernode identification SVMs use the following features: In order to simplify the task, we define the shape of a surface hypernode as a list of surface PoS tags. This unordered list contains the PoS of each of the lemmas contained within the hypernode and a tag that encodes the original deep node; for instance: For each deep, i.e., DSyntS, PoS tag (which can be one of the following four: N (noun), V (verb), Adv (adverb), A (adjective)), a separate multi-class classifier is defined. 10 For instance, in the case of N, the N-classifier will use the above features to assign to a DSynt-node with PoS N the most appropriate (most likely) hypernode, in this case, [NN(deep), DT].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hypernode Identification",
"sec_num": "3.2.1"
},
{
"text": "Once the hypernodes of the SSyntS under construction have been produced, the functional nodes that have been newly introduced in the hypernodes must be assigned a lemma label. The lemma generation SVMs use the following features of the deep nodes nd in the hypernodes to select the most likely lemma:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lemma Generation",
"sec_num": "3.2.2"
},
{
"text": "verbal finiteness (finite, infinitive, gerund, participle) and aspect (perfective, progressive), degree of definiteness of nouns, PoS of nd, lemma of nd, PoS of the head of nd. Again, for each surface PoS tag, a separate classifier is defined. Thus, the DT-classifier would pick for the hypernode [NN(deep) , DT] the most likely lemma for the DT-node (optimally, a determiner).",
"cite_spans": [
{
"start": 297,
"end": 306,
"text": "[NN(deep)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lemma Generation",
"sec_num": "3.2.2"
},
{
"text": "10 As will be seen in the discussion of the results, the strategy proposed by Ballesteros et al. (2014b) to define a separate classifier for each linguistic category here and in the other stages largely pays off because it reduces the classification search space enormously and thus leads to a higher accuracy.",
"cite_spans": [
{
"start": 78,
"end": 104,
"text": "Ballesteros et al. (2014b)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lemma Generation",
"sec_num": "3.2.2"
},
{
"text": "Given a hypernode and its lemmas provided by the two previous stages, the dependencies (i.e., the dependency attachments and dependency labels) between the elements of the created SSyntS hypernodes must be determined (and thus also the governors of the hypernodes). For this task, the intra-hypernode dependency generation SVMs use the following features: lemmas included in the hypernode, PoS-tags of the lemmas in the hypernode, voice of the head h of the hypernode, deep dependency relation to h.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intra-hypernode Dependency Generation",
"sec_num": "3.2.3"
},
{
"text": "For each kind of hypernode, a separate classifier is generated dynamically. 11 In the case of the hypernode [NN(deep) , DT], the corresponding classifier will create a link between the determiner and the noun, with the noun as head and the determiner as dependent because it is the best link that it can find; cf. Figure 5 for illustration. We ensure that the output of the classifiers is a tree by controlling that every node (except the root) has one and only one governor. The DSynt input is a tree; in the case of hypernodes of cardinality one, the governor/dependent relation is maintained; in the case of hypernodes of higher cardinality, only one node receives an incoming arc and only one can govern another hypernode.",
"cite_spans": [
{
"start": 108,
"end": 117,
"text": "[NN(deep)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 314,
"end": 322,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Intra-hypernode Dependency Generation",
"sec_num": "3.2.3"
},
{
"text": "Figure 5: Internal dependency within a hypernode. Once the individual hypernodes have been converted into connected dependency subtrees, the hypernodes must be connected with each other, such that we obtain a complete SSyntS. The inter-hypernode dependency generation SVMs use the following features of a hypernode ss to determine its governor. For each hypernode with a distinct internal dependency pattern, a separate classifier is dynamically derived (for our treebanks, we obtained 114 different SVM classifiers because they also take into account hypernodes with just one token). The task faced by the inter-hypernode dependency classifiers is the same as that of a dependency parser, except that its search space is very small (which is favorably reflected in the accuracy figures).",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 8,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Inter-hypernode Dependency Generation",
"sec_num": "3.2.4"
},
{
"text": "Once we obtained a SSyntS, the linearizer must find the correct order of the words. There is already a body of work available on statistical linearization. Therefore, these tasks were not in the focus of our work. Rather, we adopt the most successful technique of the first SRST (Belz et al., 2011) , a bottomup tree linearizer that orders bottom-up each head and its children (Bohnet et al., 2011; Guo et al., 2011a) . This has the advantage that the linear order obtained previously can provide context features for ordering sub-trees higher up in the dependency tree. Each head and its children are ordered with a beam search.",
"cite_spans": [
{
"start": 279,
"end": 298,
"text": "(Belz et al., 2011)",
"ref_id": "BIBREF6"
},
{
"start": 377,
"end": 398,
"text": "(Bohnet et al., 2011;",
"ref_id": "BIBREF9"
},
{
"start": 399,
"end": 417,
"text": "Guo et al., 2011a)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linearization",
"sec_num": "3.3"
},
{
"text": "The beam is initialized with entries of single words that are expanded in the next step by the remaining words of the sub-tree, which results in a number of new entries for the next iteration. After the expansion step, the new beam entries are sorted and pruned. We keep the 30 best entries and continue with the expansion and pruning steps until no further nodes of the subtree are left. We take an SVM to obtain the scores for sorting the beam entries, using the same feature templates as in Guo et al. (2011b) and Bohnet et al. (2011) .",
"cite_spans": [
{
"start": 494,
"end": 512,
"text": "Guo et al. (2011b)",
"ref_id": "BIBREF14"
},
{
"start": 517,
"end": 537,
"text": "Bohnet et al. (2011)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linearization",
"sec_num": "3.3"
},
{
"text": "In our experiments, the Spanish treebank has been divided into: (i) a development set of 219 sentences, with 3,437 tokens in the DSyntS treebank and 4,799 tokens in the SSyntS treebank (with an average of 21.91 words per sentence in SSynt); (ii) a training set of 3,036 sentences, with 57,665 tokens in the DSyntS treebank and 84,668 tokens in the SSyntS treebank (with an average of 27.89 words per sentence in SSynt); and (iii) a held-out test set of 258 sentences for evaluation, with 5,878 tokens in the DSyntS treebank and 8,731 tokens in the SSyntS treebank (with an average of 33.84 words per sentence in SSynt).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
{
"text": "For the English treebank, we used a classical split of (i) a training set of 39,279 sentences, with 724,828 tokens in the DSynt treebank and 958,167 tokens in the SSynt treebank (with an average of 24.39 words per sentence in SSynt); and (ii) a test set of 2,399 sentences, with 43,245 tokens in the DSynt treebank and 57,676 tokens in the SSynt treebank (with an average of 24.04 words per sentence in SSynt).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
{
"text": "In what follows, we show the system performance on both treebanks. The Spanish treebank was used for development and testing, while the English treebank was only used for testing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
{
"text": "In this section, we present the performance of, first of all, the individual tasks of the data-driven DSyntS-SSyntS projection, since these have been the challenging tasks that we addressed. Table 1 shows similar results for all tasks on the development and test sets with gold-standard input, that is, the results of the classifiers as a stand-alone module, assuming that the previous module provides a perfect output. To have the entire generation pipeline in place, we carried out several linearization experiments, starting from: (i) the SSyntS gold standard, (ii) SSyntSs generated by the rule-based baselines, and (iii) SSyntSs generated by the data-driven deep generator; cf. surface gen., baseline deep gen, and deep gen. respectively in Tables 3 and 4 In general, Tables 1-4 show that the quality of the presented deep data-driven generator is rather good both during the individual stages of the DSyntS-SSyntS transition and as part of the DSyntSlinearized sentence pipeline. Two main problems impede an even better performance figures than those reflected in Tables 1 and 2 . First, the introduction of prepositions causes most errors in hypernode detection and lemma generation: when a preposition should be introduced or not and which preposition should be introduced depends exclusively on the subcategorization frame of the governor of the DSyntS node. A corpus of a limited size does not capture the subcategorization frames of ALL predicates. This is especially true for our Spanish treebank, which is particularly small. Second, the inter-hypernode dependency suffers from the fact that the SSyntS tagset is quite fine-grained, at least in the case of Spanish, which makes the task of the classifiers harder (e.g., there are nine different types of verbal objects). In spite of these problems, each set of classifiers achieves results above 88% on the test sets.",
"cite_spans": [],
"ref_spans": [
{
"start": 191,
"end": 198,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 746,
"end": 760,
"text": "Tables 3 and 4",
"ref_id": "TABREF5"
},
{
"start": 773,
"end": 783,
"text": "Tables 1-4",
"ref_id": "TABREF2"
},
{
"start": 1070,
"end": 1084,
"text": "Tables 1 and 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.1"
},
{
"text": "The results of deep generation in Tables 3 and 4 can be explained by the fact of error propagation: while (only) about 1 out of 10 hypernodes and about 1 out of 10 lemmas are not correct and very little information is lost in the stage of the intrahypernode dependencies determination, already almost 1.75 out of 10 inter-hypernode dependencies, and finally 1 out 10 linear orderings are incorrect for English and more than 2 out 10 for Spanish.",
"cite_spans": [],
"ref_spans": [
{
"start": 34,
"end": 49,
"text": "Tables 3 and 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.1"
},
{
"text": "As already mentioned above, the size of the training corpus strongly affects the results. Thus, for English, for which the size of the training dataset has been 10 times bigger than for Spanish, the datadriven generator provides, without any tuning, more than 0.2 BLEU points more that for Spanish. A bigger corpus also covers more linguistic phenomena (lexical features, subcategorization frames, syntactic sentential constructions, etc.)-which can be also exploited for rule-based generation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.1"
},
{
"text": "The linearizer also suffers from a small size of the training set. Thus, while the small Spanish training corpus leads to 0.754 BLEU and 0.762 BLEU for the development and test sets respectively, for English, we achieve 0.91 BLEU, which is a very competitive outcome compared to other English linearizers (Song et al., 2014) .",
"cite_spans": [
{
"start": 305,
"end": 324,
"text": "(Song et al., 2014)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.1"
},
{
"text": "We also found that the data-driven generator tends to output slightly shorter sentences, when compared to the rule-based baseline. It is always difficult to find the best evaluation metric for plain text sentences (Smith et al., 2014) . In our experiments, we used BLEU, NIST and the exact match metric. BLEU is the average of n-gram precisions and includes a brevity penalty, which reduces the score if the length of the output sentence is shorter than the gold. In other words, BLEU favors longer sentences. We believe that this is one of the reasons why the machine-learning based generator shows a bigger difference for the English test set and the Spanish development set than the rule-based baseline. Firstly, there are extremely long sentences in the Spanish test set (31 words per sentence, in the average; the longest being 165 words). Secondly, the English sentences and the Spanish development sentences are much shorter than the Spanish test sentences, such that the ML approach has the potential to perform better.",
"cite_spans": [
{
"start": 214,
"end": 234,
"text": "(Smith et al., 2014)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.1"
},
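{
"text": "To make the role of the brevity penalty concrete, here is a minimal sketch following the standard BLEU definition (it is not tied to the particular scoring tool used in our experiments): the penalty is 1 for output at least as long as the reference and exp(1 - r/c) otherwise, so a 9-word candidate scored against a 12-word reference is multiplied by exp(1 - 12/9), roughly 0.72:

```python
import math


def brevity_penalty(candidate_len, reference_len):
    # Standard BLEU brevity penalty: 1 if c >= r, else exp(1 - r/c).
    if candidate_len >= reference_len:
        return 1.0
    return math.exp(1.0 - reference_len / candidate_len)
```

Since the penalty only ever reduces the score of short output, a generator that produces slightly shorter sentences than the gold is systematically disadvantaged under BLEU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.1"
},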
{
"text": "There is an increasing amount of work on statistical sentence generation, although hardly any addresses the problem of deep generation from semantic structures that are not isomorphic with syntactic-structures as a purely data-driven problem (as we do). To the best of our knowledge, the only exception is our earlier work in (Ballesteros et al., 2014b) , where we discuss the principles of classifiers for data-driven generators. As already mentioned in Section 1, most of the state-of-the-art work focuses on syntactic generation; see, among others (Bangalore and Rambow, 2000; Langkilde-Geary, 2002; Filippova and Strube, 2008) , or only on linearization and inflection (Filippova and Strube, 2007; He et al., 2009; Wan et al., 2009; Guo et al., 2011a) . A number of proposals are hybrid in that they combine statistical machine learning-based generation with rule-based generation. Thus, some combine machine learning with pre-generated elements, as, e.g., (Marciniak and Strube, 2004; Wong and Mooney, 2007; Mairesse et al., 2010) , or with handcrafted rules, as, e.g., Belz, 2005) . Others derive automatically grammars for rule-based generation modules from annotated data, which can be used for surface generation, as, e.g., (Knight and Hatzivassiloglou, 1995; Langkilde and Knight, 1998; Oh and Rudnicky, 2002; Zhong and Stent, 2005; Bohnet et al., 2011; Rajkumar et al., 2011) or for generation from ontology triples, as, e.g., (Gyawali and Gardent, 2013) .",
"cite_spans": [
{
"start": 326,
"end": 353,
"text": "(Ballesteros et al., 2014b)",
"ref_id": "BIBREF4"
},
{
"start": 551,
"end": 579,
"text": "(Bangalore and Rambow, 2000;",
"ref_id": "BIBREF5"
},
{
"start": 580,
"end": 602,
"text": "Langkilde-Geary, 2002;",
"ref_id": "BIBREF23"
},
{
"start": 603,
"end": 630,
"text": "Filippova and Strube, 2008)",
"ref_id": "BIBREF12"
},
{
"start": 673,
"end": 701,
"text": "(Filippova and Strube, 2007;",
"ref_id": "BIBREF11"
},
{
"start": 702,
"end": 718,
"text": "He et al., 2009;",
"ref_id": "BIBREF17"
},
{
"start": 719,
"end": 736,
"text": "Wan et al., 2009;",
"ref_id": "BIBREF35"
},
{
"start": 737,
"end": 755,
"text": "Guo et al., 2011a)",
"ref_id": "BIBREF13"
},
{
"start": 961,
"end": 989,
"text": "(Marciniak and Strube, 2004;",
"ref_id": "BIBREF25"
},
{
"start": 990,
"end": 1012,
"text": "Wong and Mooney, 2007;",
"ref_id": "BIBREF36"
},
{
"start": 1013,
"end": 1035,
"text": "Mairesse et al., 2010)",
"ref_id": "BIBREF24"
},
{
"start": 1075,
"end": 1086,
"text": "Belz, 2005)",
"ref_id": "BIBREF7"
},
{
"start": 1233,
"end": 1268,
"text": "(Knight and Hatzivassiloglou, 1995;",
"ref_id": "BIBREF21"
},
{
"start": 1269,
"end": 1296,
"text": "Langkilde and Knight, 1998;",
"ref_id": "BIBREF22"
},
{
"start": 1297,
"end": 1319,
"text": "Oh and Rudnicky, 2002;",
"ref_id": "BIBREF29"
},
{
"start": 1320,
"end": 1342,
"text": "Zhong and Stent, 2005;",
"ref_id": "BIBREF37"
},
{
"start": 1343,
"end": 1363,
"text": "Bohnet et al., 2011;",
"ref_id": "BIBREF9"
},
{
"start": 1364,
"end": 1386,
"text": "Rajkumar et al., 2011)",
"ref_id": "BIBREF30"
},
{
"start": 1438,
"end": 1465,
"text": "(Gyawali and Gardent, 2013)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "5"
},
{
"text": "We presented a statistical deep sentence generator that successfully handles the non-isomorphism between meaning representations and syntactic structures in terms of a principled machine learning approach. This generator has been successfully tested on an English and a Spanish corpus, as a stand-alone DSyntS-SSyntS generator and as a part of the generation pipeline. We are currently about to apply it to other languages-including Chinese, French and German. Furthermore, resources are compiled to use it for generation of spoken discourse in Arabic, Polish and Turkish.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "We believe that our generator can be used not only in generation per se, but also, e.g., in machine translation (MT), since MT could profit from using meaning representations such as DSyntSs, which abstract away from the surface syntactic idiosyncrasies of each language, but are still linguistically motivated, as transfer representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "The data-driven sentence generator is available for public downloading at https://github.com/ talnsoftware/deepgenerator/wiki.2 For all of these applications, the deep representation can be obtained by a deep parser, such as, e.g.,(Ballesteros et al., 2014a).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In contrast, on in the bottle is on the table is not dropped because it is semantic.4 \"That\" is considered a kind of determiner (to be derived from the Information Structure). This is the reason to omit it in the deep structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For Spanish, we apply after the DSyntS-SSyntS transition in a postprocessing stage rules for the generation of relative pronouns that are implied by the the SSyntS. Since we cannot count on the annotation of coreference in the training data, we do not treat other types of referring expressions.6 The lemmas of nodes with 1:1 correspondence are the same in both structures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The main difference with the Stanford scheme is that in AnCora-UPF no distinction is explicitly made between argumental and non-argumental dependencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In the PropBank annotation, a distinction is made between",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "'N' stands for \"noun\", 'NN' for \"common noun\", 'DET' for \"determiner\", 'PREP' for \"preposition\", 'V' for \"verb\", 'AUX' for \"auxiliary verb\", 'VV' for \"main verb\", and 'COMPL' for \"complementizer\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This implies that the number of classifiers varies depending on the training set. For instance, during the intra-hypernode dependency creation for Spanish, 108 SVMs are generated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Following(Langkilde-Geary, 2002;Belz et al., 2011) and other works on statistical text generation, we access the quality of the linearization module via BLEU score, NIST and exactly matched sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Our work on deep stochastic sentence generation is partially supported by the European Commission under the contract numbers FP7-ICT-610411 (project MULTISENSOR) and H2020-RIA-645012 (project KRISTINA).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Semantic Parsing as Machine Translation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Andreas",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Vlachos",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of ACL '13",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Andreas, A. Vlachos, and S. Clark. 2013. Seman- tic Parsing as Machine Translation. In Proceedings of ACL '13.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Statistical Machine Translation Using Labeled Semantic Dependency Graphs",
"authors": [
{
"first": "A",
"middle": [],
"last": "Aue",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Menezes",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Moore",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Ringger",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of TMI '04",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Aue, A. Menezes, R. Moore, C. Quirk, and E. Ringger. 2004. Statistical Machine Translation Using Labeled Semantic Dependency Graphs. In Proceedings of TMI '04.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Propbank Annotation Guidelines",
"authors": [
{
"first": "Olga",
"middle": [],
"last": "Babko-Malaya",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olga Babko-Malaya, 2005. Propbank Annotation Guide- lines.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Deep-syntactic parsing",
"authors": [
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Mille",
"suffix": ""
},
{
"first": "Leo",
"middle": [],
"last": "Wanner",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING'14",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miguel Ballesteros, Bernd Bohnet, Simon Mille, and Leo Wanner. 2014a. Deep-syntactic parsing. In Proceed- ings of COLING'14, Dublin, Ireland.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Classifiers for Data-Driven Deep Sentence Generation",
"authors": [
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Mille",
"suffix": ""
},
{
"first": "Leo",
"middle": [],
"last": "Wanner",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the International Conference of Natural Language Generation (INLG)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miguel Ballesteros, Simon Mille, and Leo Wanner. 2014b. Classifiers for Data-Driven Deep Sentence Generation. In Proceedings of the International Con- ference of Natural Language Generation (INLG).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Exploiting a probabilistic hierarchical model for generation",
"authors": [
{
"first": "Srinivas",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 18th conference on Computational linguistics",
"volume": "1",
"issue": "",
"pages": "42--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Srinivas Bangalore and Owen Rambow. 2000. Exploit- ing a probabilistic hierarchical model for generation. In Proceedings of the 18th conference on Computa- tional linguistics-Volume 1, pages 42-48. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The first surface realisation shared task: Overview and evaluation results",
"authors": [
{
"first": "Anja",
"middle": [],
"last": "Belz",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "White",
"suffix": ""
},
{
"first": "Dominic",
"middle": [],
"last": "Espinosa",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Kow",
"suffix": ""
},
{
"first": "Deirdre",
"middle": [],
"last": "Hogan",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Stent",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Generation Challenges Session at the 13th European Workshop on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "217--226",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anja Belz, Mike White, Dominic Espinosa, Eric Kow, Deirdre Hogan, and Amanda Stent. 2011. The first surface realisation shared task: Overview and evalu- ation results. In Proceedings of the Generation Chal- lenges Session at the 13th European Workshop on Nat- ural Language Generation, pages 217-226.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Statistical generation: Three methods compared and evaluated",
"authors": [
{
"first": "Anja",
"middle": [],
"last": "Belz",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 10th European Workshop on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "15--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anja Belz. 2005. Statistical generation: Three meth- ods compared and evaluated. In Proceedings of the 10th European Workshop on Natural Language Gen- eration, pages 15-23.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Broad coverage multilingual deep sentence generation with a stochastic multi-level realizer",
"authors": [
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
},
{
"first": "Leo",
"middle": [],
"last": "Wanner",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Mille",
"suffix": ""
},
{
"first": "Alicia",
"middle": [],
"last": "Burga",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of COLING '10",
"volume": "",
"issue": "",
"pages": "98--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernd Bohnet, Leo Wanner, Simon Mille, and Alicia Burga. 2010. Broad coverage multilingual deep sen- tence generation with a stochastic multi-level realizer. In Proceedings of COLING '10, pages \"98-106\".",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "StuMaBa: From deep representation to surface",
"authors": [
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Mille",
"suffix": ""
},
{
"first": "Beno\u00eet",
"middle": [],
"last": "Favre",
"suffix": ""
},
{
"first": "Leo",
"middle": [],
"last": "Wanner",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ENLG 2011, Surface-Generation Shared Task",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernd Bohnet, Simon Mille, Beno\u00eet Favre, and Leo Wan- ner. 2011. StuMaBa: From deep representation to surface. In Proceedings of ENLG 2011, Surface- Generation Shared Task, Nancy, France.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Generating typed dependency parses from phrase structure parses",
"authors": [
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Maccartney",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC)",
"volume": "6",
"issue": "",
"pages": "449--454",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D Manning. 2006. Generating typed de- pendency parses from phrase structure parses. In Pro- ceedings of the 5th International Conference on Lan- guage Resources and Evaluation (LREC), volume 6, pages 449-454.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Generating constituent order in german clauses",
"authors": [
{
"first": "Katja",
"middle": [],
"last": "Filippova",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics",
"volume": "45",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katja Filippova and Michael Strube. 2007. Generating constituent order in german clauses. In Proceedings of the Annual Meeting of the Association for Computa- tional Linguistics, volume 45, page 320.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Sentence fusion via dependency graph compression",
"authors": [
{
"first": "Katja",
"middle": [],
"last": "Filippova",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '08",
"volume": "",
"issue": "",
"pages": "177--185",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katja Filippova and Michael Strube. 2008. Sentence fu- sion via dependency graph compression. In Proceed- ings of the Conference on Empirical Methods in Nat- ural Language Processing, EMNLP '08, pages 177- 185, Stroudsburg, PA, USA. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Dcu at generation challenges 2011 surface realisation track",
"authors": [
{
"first": "Yuqing",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Deirdre",
"middle": [],
"last": "Hogan",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Van Genabith",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Generation Challenges Session at the 13th European Workshop on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "227--229",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuqing Guo, Deirdre Hogan, and Josef van Genabith. 2011a. Dcu at generation challenges 2011 surface realisation track. In Proceedings of the Generation Challenges Session at the 13th European Workshop on Natural Language Generation, pages 227-229, Nancy, France, September. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Dependency-based n-gram models for general purpose sentence realisation",
"authors": [
{
"first": "Yuqing",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Van Genabith",
"suffix": ""
}
],
"year": 2011,
"venue": "Natural Language Engineering",
"volume": "17",
"issue": "04",
"pages": "455--483",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuqing Guo, Haifeng Wang, and Josef Van Genabith. 2011b. Dependency-based n-gram models for general purpose sentence realisation. Natural Language Engi- neering, 17(04):455-483.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "LOR-KBGEN, A Hybrid Approach To Generating from the KBGen Knowledge-Base",
"authors": [
{
"first": "B",
"middle": [],
"last": "Gyawali",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Gardent",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the KBGen Chal",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Gyawali and C. Gardent. 2013. LOR-KBGEN, A Hybrid Approach To Generating from the KBGen Knowledge-Base. In Proceedings of the KBGen Chal- lenge http://www.kbgen.org/papers/.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The CoNLL-2009 shared task: Syntactic and semantic dependencies in multiple languages",
"authors": [
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "Massimiliano",
"middle": [],
"last": "Ciaramita",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Johansson",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "Maria",
"middle": [
"Ant\u00f2nia"
],
"last": "Mart\u00ed",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Meyers",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "\u0160t\u011bp\u00e1nek",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Stra\u0148\u00e1k",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 13th Conference on Computational Natural Language Learning (CoNLL-2009)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jan Haji\u010d, Massimiliano Ciaramita, Richard Johans- son, Daisuke Kawahara, Maria Ant\u00f2nia Mart\u00ed, Llu\u00eds M\u00e0rquez, Adam Meyers, Joakim Nivre, Sebastian Pad\u00f3, Jan\u0160t\u011bp\u00e1nek, Pavel Stra\u0148\u00e1k, Mihai Surdeanu, Nianwen Xue, and Yi Zhang. 2009. The CoNLL- 2009 shared task: Syntactic and semantic depen- dencies in multiple languages. In Proceedings of the 13th Conference on Computational Natural Lan- guage Learning (CoNLL-2009), June 4-5, Boulder, Colorado, USA.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Dependency based chinese sentence realization",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yuqing",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "2",
"issue": "",
"pages": "809--816",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei He, Haifeng Wang, Yuqing Guo, and Ting Liu. 2009. Dependency based chinese sentence realization. In Proceedings of the Joint Conference of the 47th An- nual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2, pages 809-816. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Extended constituent-to-dependency conversion for English",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Johansson",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Nugues",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 16th Nordic Conference of Computational Linguistics (NODALIDA)",
"volume": "",
"issue": "",
"pages": "105--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Johansson and Pierre Nugues. 2007. Extended constituent-to-dependency conversion for English. In Proceedings of the 16th Nordic Conference of Com- putational Linguistics (NODALIDA), pages 105-112, Tartu, Estonia, May 25-26.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Semantics-Based Machine Translation with Hyperedge Replacement Grammars",
"authors": [
{
"first": "B",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Andreas",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "K",
"middle": [
"M"
],
"last": "Hermann",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of COLING '12",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Jones, J. Andreas, D. Bauer, K.M. Hermann, and K. Knight. 2012. Semantics-Based Machine Transla- tion with Hyperedge Replacement Grammars. In Pro- ceedings of COLING '12.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Text simplification for informationseeking applications",
"authors": [
{
"first": "Beata",
"middle": [
"Beigman"
],
"last": "Klebanov",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2004,
"venue": "On the Move to Meaningful Internet Systems",
"volume": "",
"issue": "",
"pages": "735--747",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beata Beigman Klebanov, Kevin Knight, and Daniel Marcu. 2004. Text simplification for information- seeking applications. In On the Move to Meaningful Internet Systems, Lecture Notes in Computer Science, pages 735-747. Springer Verlag.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Two-level, many-paths generation",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Vasileios",
"middle": [],
"last": "Hatzivassiloglou",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 33rd annual meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "252--260",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Knight and Vasileios Hatzivassiloglou. 1995. Two-level, many-paths generation. In Proceedings of the 33rd annual meeting on Association for Compu- tational Linguistics, pages 252-260. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Generation that exploits corpus-based statistical knowledge",
"authors": [
{
"first": "I",
"middle": [],
"last": "Langkilde",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the COLING/ACL",
"volume": "",
"issue": "",
"pages": "704--710",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Langkilde and K. Knight. 1998. Generation that ex- ploits corpus-based statistical knowledge. In Proceed- ings of the COLING/ACL, pages 704-710.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "An empirical verification of coverage and correctness for a general-purpose sentence generator",
"authors": [
{
"first": "Irene",
"middle": [],
"last": "Langkilde-Geary",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 12th International Natural Language Generation Workshop",
"volume": "",
"issue": "",
"pages": "17--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Irene Langkilde-Geary. 2002. An empirical verification of coverage and correctness for a general-purpose sen- tence generator. In Proceedings of the 12th Interna- tional Natural Language Generation Workshop, pages 17-24. Citeseer.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Phrase-based statistical language generation using graphical models and active learning",
"authors": [
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Mairesse",
"suffix": ""
},
{
"first": "Milica",
"middle": [],
"last": "Ga\u0161i\u0107",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Jur\u010d\u00ed\u010dek",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Keizer",
"suffix": ""
},
{
"first": "Blaise",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1552--1561",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fran\u00e7ois Mairesse, Milica Ga\u0161i\u0107, Filip Jur\u010d\u00ed\u010dek, Simon Keizer, Blaise Thomson, Kai Yu, and Steve Young. 2010. Phrase-based statistical language generation us- ing graphical models and active learning. In Proceed- ings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1552-1561. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Classification-based generation using TAG",
"authors": [
{
"first": "Tomasz",
"middle": [],
"last": "Marciniak",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2004,
"venue": "Natural Language Generation",
"volume": "",
"issue": "",
"pages": "100--109",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomasz Marciniak and Michael Strube. 2004. Classification-based generation using tag. In Natural Language Generation, pages 100-109. Springer.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Dependency Syntax: Theory and Practice",
"authors": [
{
"first": "Igor",
"middle": [],
"last": "Mel'\u010duk",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Igor Mel'\u010duk. 1988. Dependency Syntax: Theory and Practice. State University of New York Press, Albany.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "How does the granularity of an annotation scheme influence dependency parsing performance?",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Mille",
"suffix": ""
},
{
"first": "Alicia",
"middle": [],
"last": "Burga",
"suffix": ""
},
{
"first": "Gabriela",
"middle": [],
"last": "Ferraro",
"suffix": ""
},
{
"first": "Leo",
"middle": [],
"last": "Wanner",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of COLING 2012",
"volume": "",
"issue": "",
"pages": "839--852",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Mille, Alicia Burga, Gabriela Ferraro, and Leo Wanner. 2012. How does the granularity of an an- notation scheme influence dependency parsing perfor- mance? In Proceedings of COLING 2012, pages 839- 852, Mumbai, India.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "AnCora-UPF: A multi-level annotation of Spanish",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Mille",
"suffix": ""
},
{
"first": "Alicia",
"middle": [],
"last": "Burga",
"suffix": ""
},
{
"first": "Leo",
"middle": [],
"last": "Wanner",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Second International Conference on Dependency Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Mille, Alicia Burga, and Leo Wanner. 2013. Ancora-upf: A multi-level annotation of spanish. In Proceedings of the Second International Conference on Dependency Linguistics, Prague, Czech Republic.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Stochastic natural language generation for spoken dialog systems",
"authors": [
{
"first": "Alice",
"middle": [
"H"
],
"last": "Oh",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"I"
],
"last": "Rudnicky",
"suffix": ""
}
],
"year": 2002,
"venue": "Computer Speech & Language",
"volume": "16",
"issue": "3",
"pages": "387--407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alice H Oh and Alexander I Rudnicky. 2002. Stochastic natural language generation for spoken dialog systems. Computer Speech & Language, 16(3):387-407.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "The OSU system for surface realization at Generation Challenges 2011",
"authors": [
{
"first": "Rajakrishnan",
"middle": [],
"last": "Rajkumar",
"suffix": ""
},
{
"first": "Dominic",
"middle": [],
"last": "Espinosa",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "White",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 13th European workshop on natural language generation",
"volume": "",
"issue": "",
"pages": "236--238",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rajakrishnan Rajkumar, Dominic Espinosa, and Michael White. 2011. The osu system for surface realization at generation challenges 2011. In Proceedings of the 13th European workshop on natural language gener- ation, pages 236-238. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Linguistically informed statistical models of constituent structure for ordering in sentence realization",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Ringger",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Gamon",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"C"
],
"last": "Moore",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Rojas",
"suffix": ""
},
{
"first": "Martine",
"middle": [],
"last": "Smets",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Corston-Oliver",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20th international conference on Computational Linguistics, page 673. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Ringger, Michael Gamon, Robert C Moore, David Rojas, Martine Smets, and Simon Corston-Oliver. 2004. Linguistically informed statistical models of constituent structure for ordering in sentence realiza- tion. In Proceedings of the 20th international confer- ence on Computational Linguistics, page 673. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "BLEU is not the colour: How optimising BLEU reduces translation quality",
"authors": [
{
"first": "Aaron",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Hardmeier",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aaron Smith, Christian Hardmeier, and J\u00f6rg Tiedemann. 2014. Bleu is not the colour: How optimising bleu reduces translation quality.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Joint morphological generation and syntactic linearization",
"authors": [
{
"first": "Linfeng",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1522--1528",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Linfeng Song, Yue Zhang, Kai Song, and Qun Liu. 2014. Joint morphological generation and syntactic linearization. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, July 27 - 31, 2014, Qu\u00e9bec City, Qu\u00e9bec, Canada., pages 1522- 1528.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Trainable sentence planning for complex information presentation in spoken dialog systems",
"authors": [
{
"first": "A",
"middle": [],
"last": "Stent",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Walker",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the ACL '04",
"volume": "",
"issue": "",
"pages": "79--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Stent, R. Prasad, and M. Walker. 2004. Trainable sen- tence planning for complex information presentation in spoken dialog systems. In Proceedings of the ACL '04, pages 79-86.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Improving grammaticality in statistical sentence generation: Introducing a dependency spanning tree algorithm with an argument satisfaction model",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dras",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Dale",
"suffix": ""
},
{
"first": "C\u00e9cile",
"middle": [],
"last": "Paris",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "852--860",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Wan, Mark Dras, Robert Dale, and C\u00e9cile Paris. 2009. Improving grammaticality in statistical sentence generation: Introducing a dependency spanning tree algorithm with an argument satisfaction model. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguis- tics, pages 852-860. Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Generation by inverting a semantic parser that uses statistical machine translation",
"authors": [
{
"first": "Yuk",
"middle": [
"Wah"
],
"last": "Wong",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 2007,
"venue": "HLT-NAACL",
"volume": "",
"issue": "",
"pages": "172--179",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuk Wah Wong and Raymond J Mooney. 2007. Genera- tion by inverting a semantic parser that uses statistical machine translation. In HLT-NAACL, pages 172-179.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Building surface realizers automatically from corpora",
"authors": [
{
"first": "Huayan",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Stent",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of UCNLG",
"volume": "5",
"issue": "",
"pages": "49--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huayan Zhong and Amanda Stent. 2005. Building sur- face realizers automatically from corpora. Proceed- ings of UCNLG, 5:49-54.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "A DSyntS (a) and its corresponding SSyntS (b) for the sentence Almost 1.2 million jobs have been created by the state in that time",
"uris": null,
"num": null
},
"FIGREF1": {
"type_str": "figure",
"text": "A sample (partial) mapping dictionary entry",
"uris": null,
"num": null
},
"FIGREF2": {
"type_str": "figure",
"text": "Sample graph transducer rule",
"uris": null,
"num": null
},
"FIGREF3": {
"type_str": "figure",
"text": "PoS of n_d, PoS of n_d's head, verbal voice (active, passive) and aspect (perfective, progressive) of the current node, lemma of n_d, and n_d's dependencies.",
"uris": null,
"num": null
},
"FIGREF4": {
"type_str": "figure",
"text": "Workflow of the Data-Driven Generator.",
"uris": null,
"num": null
},
"FIGREF5": {
"type_str": "figure",
"text": "the internal dependencies of ss, the head of ss, the lemmas of ss, the PoS of the dependent of the head of ss in DSyntS. For instance, the classifier for the hypernode [JJ(deep)] is most likely to identify as its governor NN in the hypernode [NN(deep), DT]; cf. Figure 6",
"uris": null,
"num": null
},
"FIGREF6": {
"type_str": "figure",
"text": ".",
"uris": null,
"num": null
},
"FIGREF7": {
"type_str": "figure",
"text": "Surface dependencies between two hypernodes: [NN(deep), DT] and [JJ(deep)], linked by modif.",
"uris": null,
"num": null
},
"TABREF0": {
"html": null,
"text": "1 and NS 2 else if NS 2 is top node of the SSyntS hypernode and ((NS 1 is top node of the SSynt hypernode and is AUX) or (NS 1 is the bottom node of the SSynt hypernode and is V fin ) or (NS 1 is not top node or bottom node of the SSynt-hypernode and is AUX)) then introduce SBJ between NS 1 and NS 2 endif endif",
"num": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF2": {
"html": null,
"text": "Results of the evaluation of the SVMs for the non-isomorphic transition for the Spanish DSyntS development and test sets",
"num": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"3\">: Results of the evaluation of the SVMs for the</td></tr><tr><td colspan=\"3\">non-isomorphic transition for the Spanish DSyntS devel-</td></tr><tr><td>opment and test sets</td><td/><td/></tr><tr><td>English Test set</td><td>#</td><td>%</td></tr><tr><td>Hyper-node identification</td><td colspan=\"2\">42103/43245 97.36</td></tr><tr><td>Lemma generation</td><td>6726/7199</td><td>93.43</td></tr><tr><td>Intra-hypernode dep. generation</td><td>6754/7179</td><td>94.08</td></tr><tr><td colspan=\"3\">Inter-hypernode dep. generation 35922/40699 88.26</td></tr></table>"
},
"TABREF3": {
"html": null,
"text": "Results of the evaluation of the SVMs for the non-isomorphic transition for the English DSyntS test set",
"num": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF5": {
"html": null,
"text": "Overview of the results on the Spanish development and test sets excluding punctuation marks after the linearization",
"num": null,
"type_str": "table",
"content": "<table><tr><td>Test Set</td><td colspan=\"2\">BLEU NIST</td><td>Exact</td></tr><tr><td>surface gen.</td><td>0.91</td><td colspan=\"2\">15.26 56.02 %</td></tr><tr><td>baseline deep gen.</td><td>0.69</td><td colspan=\"2\">13.71 12.38 %</td></tr><tr><td>deep gen.</td><td>0.77</td><td colspan=\"2\">14.42 21.05 %</td></tr></table>"
},
"TABREF6": {
"html": null,
"text": "Overview of the results on the English test set excluding punctuation marks after the linearization",
"num": null,
"type_str": "table",
"content": "<table><tr><td>4.2 Discussion and Error Analysis</td></tr></table>"
}
}
}
}