{
"paper_id": "C12-1043",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:24:20.815643Z"
},
"title": "Towards efficient HPSG generation for German, a non-configurational language",
"authors": [],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we propose a rule-based method to improve efficiency in bottom-up chart generation with GG, an open-source reversible large-scale HPSG for German. Following an in-depth analysis of efficiency problems in the baseline system, we show that costly combinatorial explosion in brute-force bottom-up search can be largely avoided using information already contained implicitly in the input semantics: either (i) information is globally present, but needs to be made locally available to a particular elementary predication, or (ii) semantic configurations in the input have a clear translation to syntactic constraints, provided some knowledge of the grammar. We propose several performance features targeting inflection and extraction, as well as more language-specific features, relating to verb movement and discontinuous complex predicates. In a series of experiments on three different test suites we show that 7 out of 8 features are consistently effective in reducing generation times, both in isolation and in combination. Combining all efficiency measures, we observe a speedup factor of 4.5 for our less complex test suites, increasing to almost 28 for the more complex one: the fact that performance benefits drastically increase with input length suggests that our method scales up well in the sense that it effectively heads off the problem of exponential growth. The present approach of using a generator-internal transfer grammar has the added advantage that it locates performance-related issues close to the grammar, thereby keeping the external semantic interface as general as possible.",
"pdf_parse": {
"paper_id": "C12-1043",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we propose a rule-based method to improve efficiency in bottom-up chart generation with GG, an open-source reversible large-scale HPSG for German. Following an in-depth analysis of efficiency problems in the baseline system, we show that costly combinatorial explosion in brute-force bottom-up search can be largely avoided using information already contained implicitly in the input semantics: either (i) information is globally present, but needs to be made locally available to a particular elementary predication, or (ii) semantic configurations in the input have a clear translation to syntactic constraints, provided some knowledge of the grammar. We propose several performance features targeting inflection and extraction, as well as more language-specific features, relating to verb movement and discontinuous complex predicates. In a series of experiments on three different test suites we show that 7 out of 8 features are consistently effective in reducing generation times, both in isolation and in combination. Combining all efficiency measures, we observe a speedup factor of 4.5 for our less complex test suites, increasing to almost 28 for the more complex one: the fact that performance benefits drastically increase with input length suggests that our method scales up well in the sense that it effectively heads off the problem of exponential growth. The present approach of using a generator-internal transfer grammar has the added advantage that it locates performance-related issues close to the grammar, thereby keeping the external semantic interface as general as possible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recent advances in the efficiency of bottom-up chart generation with reversible HPSG grammars (Carroll and Oepen, 2005) , namely local ambiguity factoring under subsumption and index accessibility filtering, appear to have solved the most pressing efficiency problems associated with HPSG generation for English, turning reversible linguistically motivated grammars like the ERG (Copestake and Flickinger, 2000) into interesting resources for offline and online surface generation. While the efficiency measures implemented in the LKB and ACE generators (see section 1.2) are also effective for German, these measures appear to be insufficient to resolve generation performance issues for GG, a large-scale HPSG for German originally developed at DFKI (M\u00fcller and Kasper, 2000; Crysmann, 2003 Crysmann, , 2005 Crysmann, , 2007 . In fact, even on moderately complex inputs, the generator quickly runs into a combinatorial explosion, which has so far prevented the grammar from being usable for any serious real-time NLG tasks.",
"cite_spans": [
{
"start": 94,
"end": 119,
"text": "(Carroll and Oepen, 2005)",
"ref_id": "BIBREF2"
},
{
"start": 379,
"end": 411,
"text": "(Copestake and Flickinger, 2000)",
"ref_id": "BIBREF4"
},
{
"start": 752,
"end": 777,
"text": "(M\u00fcller and Kasper, 2000;",
"ref_id": "BIBREF19"
},
{
"start": 778,
"end": 792,
"text": "Crysmann, 2003",
"ref_id": "BIBREF6"
},
{
"start": 793,
"end": 809,
"text": "Crysmann, , 2005",
"ref_id": "BIBREF7"
},
{
"start": 810,
"end": 826,
"text": "Crysmann, , 2007",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "HPSG bottom-up generation and non-configurationality",
"sec_num": "1.1"
},
{
"text": "Upon closer inspection of the source of the inefficiency, it became quickly apparent that the observed performance problems are the result of a conspiracy of several factors, most of which can be subsumed under the notion of non-configurationality:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HPSG bottom-up generation and non-configurationality",
"sec_num": "1.1"
},
{
"text": "\u2022 Relatively free constituent order In contrast to English, constituent order in German clauses is relatively free, permitting permutation of complements, including the subject, as well as interspersal of modifiers in pretty much any position. As a result, chart size grows rather quickly, with ambiguity packing being ineffective until rather large, i.e. mostly clausal, structures are built.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HPSG bottom-up generation and non-configurationality",
"sec_num": "1.1"
},
{
"text": "German finite verbs display a placement alternation between clause-initial (V1/V2) and clause-final realisation, determined by clausal construction type. Under a bottom-up regime, both left-branching and right-branching structures must be explored. Furthermore, PPs and sentential complements easily extrapose across final verbs, thereby increasing the search space even more. In the case of particle verbs, initial placement of the verb leaves the particle in final position, giving rise to discontinuous lexical items, related by (simulated) head movement (Kiss and Wesche, 1991; M\u00fcller and Kasper, 2000; Crysmann, 2003, among others) .",
"cite_spans": [
{
"start": 558,
"end": 581,
"text": "(Kiss and Wesche, 1991;",
"ref_id": "BIBREF15"
},
{
"start": 582,
"end": 606,
"text": "M\u00fcller and Kasper, 2000;",
"ref_id": "BIBREF19"
},
{
"start": 607,
"end": 636,
"text": "Crysmann, 2003, among others)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Verb placement",
"sec_num": null
},
{
"text": "\u2022 Argument composition Auxiliaries, modals, raising verbs, and, optionally, control verbs form a verb cluster with their non-finite complements. Arguments of upstairs (=governing) and downstairs (=governed) verbs can be interleaved (e.g. ... weil ein Buch\u2082 er\u2081 ihm\u2082 zu kaufen\u2082 versprach\u2081 'because he\u2081 promised\u2081 to buy him\u2082 a book\u2082.'), making it necessary to compose arguments of the downstairs verb (kaufen 'buy') onto the valence lists of the upstairs verb (versprechen 'promise'), resulting in the creation of complex predicates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Verb placement",
"sec_num": null
},
{
"text": "Argument composition interacts with both free constituent order and verb placement. In particular, the latter means that some members of the composed valence list must be hypothesised before the initial verb has been encountered, leading to partially underspecified valence lists (e.g. Letzte Woche versprach\u2081 ein Buch\u2082 er\u2081 ihm\u2082 zu kaufen\u2082 'Last week, he\u2081 promised\u2081 to buy him\u2082 a book\u2082.'). Since the underspecified valencies are not constrained as to the identity of the argument (no semantic Skolem constants), any chart item (e.g., letzte Woche 'last week') that matches the underspecified syntactic description can sneak in (i.e., locally satisfy the hypothesised valency), thereby creating massive local ambiguity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Verb placement",
"sec_num": null
},
{
"text": "\u2022 Partial VP fronting Verb fronting in German may leave some (or all) arguments behind for realisation in the Mittelfeld (e.g. Kaufen\u2082 [soll\u2081 er\u2081,\u2082 ihm\u2082 das Buch\u2082 morgen]. / Das Buch\u2082 kaufen\u2082 [soll\u2081 er\u2081,\u2082 ihm\u2082 morgen]. / Ihm\u2082 das Buch\u2082 kaufen\u2082 [soll\u2081 er\u2081,\u2082 morgen]. 'He\u2081,\u2082 should\u2081 buy\u2082 him\u2082 the book\u2082 tomorrow.'). Since the core sentence has to be generated before it combines with the fronted element, construction of the core sentence needs to proceed without any access to valence information. As a result we experience a massive combinatorial explosion that can only be controlled very late, i.e. once the entire sentential structure has been built.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Verb placement",
"sec_num": null
},
{
"text": "While not a problem in itself, the fact that German NPs are inflected for case multiplies the existing performance issues, most specifically in the case of underspecified valence information, since irrelevant case inflection can only be detected quite late.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Rich inflection",
"sec_num": null
},
{
"text": "In the present paper we suggest a method that automatically enriches the input semantics in such a way as to derive local syntacto-semantic constraints from the global semantic configuration: as a consequence, we shall be able to eliminate globally unsuccessful generator hypotheses early on in bottom-up chart generation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Rich inflection",
"sec_num": null
},
{
"text": "The open source ACE platform (http://sweaglesw.org/linguistics/ace/) implements a natural language generator based primarily on chart generation (Kay, 1996) . The input to the generation system is a grammar and the semantics of the utterance to be generated (expressed in Minimal Recursion Semantics, or MRS (Copestake et al., 2005) ). The grammar is primarily a declarative formalism, in that it defines a bidirectional relationship between MRSes and strings. The generator's output is the list of all strings which are related to the input MRS by the grammar. To combat the exponential worst-case complexity of the chart generation algorithm, ACE deploys two key efficiency measures described by (Carroll and Oepen, 2005) , namely ambiguity packing under subsumption and index accessibility filtering. In these respects it is quite similar to the LKB parser-generator system (Copestake, 2002) . While ACE, just like the LKB, supports not only parsing and generation modes, but also MRS-based transfer, its main advantage resides in its processing efficiency: generation speed on the LOGON Rondane treebank (1169 items, avg. sentence length: 14.13) is 14.7 times better than that of the LKB, bringing average generation times down from 6.34s (LKB) to 0.43s (ACE).",
"cite_spans": [
{
"start": 145,
"end": 156,
"text": "(Kay, 1996)",
"ref_id": "BIBREF12"
},
{
"start": 308,
"end": 332,
"text": "(Copestake et al., 2005)",
"ref_id": "BIBREF5"
},
{
"start": 698,
"end": 723,
"text": "(Carroll and Oepen, 2005)",
"ref_id": "BIBREF2"
},
{
"start": 877,
"end": 894,
"text": "(Copestake, 2002)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The ACE generator",
"sec_num": "1.2"
},
{
"text": "The central mechanism by which we intend to address the generation efficiency problem is to automatically enrich the input semantics to the generator in such a way that global information implicitly present in the MRS representation will be made explicit and locally available on relevant elementary predications. By doing so, we will make them ultimately accessible to the grammar during generation. By means of an automated and quasi-deterministic rewrite step on the input MRS, we hope to strike a good balance between a maximally grammar-independent external semantic interface, suitable for application developers, and an enriched input to the generator that will hopefully reduce brute-force search by means of automatically derived syntacto-semantic constraints.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MRS term rewriting for generation efficiency",
"sec_num": "2"
},
{
"text": "Within the context of the LOGON MT project (Oepen et al., 2004 (Oepen et al., , 2007 , the LKB processing and development platform was extended with a term rewrite system for semantic representation using Minimal Recursion Semantics (Copestake et al., 2005) . In the LOGON system, a transfer grammar is a sequential, resource-sensitive set of rewrite rules which, when applied one after another in order, transform an MRS produced by a source-language grammar into an MRS suitable for NLG with the target-language grammar. We adopt the same formalism for a different purpose. A rewrite rule is a tuple of patterns for matching pieces of MRSes, consisting of an input pattern, a context pattern, a filter pattern and an output pattern. A rule < I, C, F, O > causes a part of the current MRS matching the input pattern I to be replaced with the output O, provided that the context C also matches the input MRS and the filter F does not match. The patterns can be interdependent, so that e.g. the output can copy information matched by the input and the context. Each of the four patterns I, C, F, O contains any number of descriptions of elementary predications 1 .",
"cite_spans": [
{
"start": 43,
"end": 62,
"text": "(Oepen et al., 2004",
"ref_id": "BIBREF23"
},
{
"start": 63,
"end": 84,
"text": "(Oepen et al., , 2007",
"ref_id": "BIBREF24"
},
{
"start": 233,
"end": 257,
"text": "(Copestake et al., 2005)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The LOGON MRS term rewrite system",
"sec_num": "2.1"
},
{
"text": "In an MRS (cf. figure 1), an elementary predication consists of a predicate name and any number of named roles, whose values are logical variables. A description of an elementary predication can use a regular expression to constrain which predications can match it. Several sorts of type constraints can be imposed on the values of individual roles, including unifiability with, subsumption by, or equality to a particular type (x,e,u,i,h). As for properties, i.e. features, of such variables, matching is restricted to mere unifiability. Finally, it is possible to specify that two or more variables matched in different elementary predication descriptions within the same rule have the same identity, by assigning a coreference tag. The same applies to variable properties.",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 24,
"text": "figure 1)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "The LOGON MRS term rewrite system",
"sec_num": "2.1"
},
{
"text": "Formally, the rewrite system has the computational power of a Turing machine. As such, it is not possible to give bounds on the time complexity of applying an arbitrary rule set. However, in practice the operation is tractable for the types of rule sets considered here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The LOGON MRS term rewrite system",
"sec_num": "2.1"
},
{
"text": "The ACE platform, similarly to the LKB, allows grammarians to interpose a step of term rewriting between the declarative portion of the grammar and the publicly displayed MRS. The purpose of this is to allow grammarians additional freedom in designing the MRS schema described by the grammar proper, while maintaining a semantic interface that is more stable between grammar revisions and also affording an opportunity to remove remnants of non-semantic information. Since the term rewrite system is not bidirectional, separate rule sets are used after parsing and before generation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation of term rewrite system in ACE",
"sec_num": "2.2"
},
{
"text": "The external MRS input to the generator is passed through the pre-generation rewrite system, resulting in a so-called internal MRS input (cf. figure 1) . It is this MRS that is used to identify the initial set of grammar entities that need to be added to the chart. The immediate result of chart generation is a set of strings together with the sequence of grammar rules and lexemes that licensed them, and the MRS corresponding to that analysis. The result MRSes are passed through the post-parsing rewrite rules, resulting in external MRSes. Only those strings whose corresponding external MRSes are subsumed 2 by the external MRS input are output.",
"cite_spans": [],
"ref_spans": [
{
"start": 142,
"end": 151,
"text": "figure 1)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Implementation of term rewrite system in ACE",
"sec_num": "2.2"
},
{
"text": "A single rule can match an MRS multiple ways. Due to resource sensitivity, the order in which the matches have the rule applied can in principle affect the outcome. When a rule matches the input in K different places, there are K! possible match orderings to try, which could each yield a different result, making the complexity of the operation worse than exponential. In practice, this issue can be so severe that the time spent in the rewriting process dominates the overall generation time. However, through careful rule-writing it is possible to ensure commutativity. ACE features a mode in which only one (arbitrary) match ordering is performed, rather than executing all K! orderings (only to determine that they all have identical results). We exploit this feature to reduce the time spent in rewriting to a fraction of the overall generation time. 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation of term rewrite system in ACE",
"sec_num": "2.2"
},
{
"text": "In this subsection, we shall present in some detail the individual efficiency measures our transfer grammar automatically derives from the generic external semantic representation. The efficiency measures we implemented can be largely classified into three groups: inflection-related measures, which mainly reduce the number of inflectional variants in the chart, German-specific measures related to verb placement and argument composition, and finally, extraction-related measures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule-based enrichment of input semantics",
"sec_num": "2.3"
},
{
"text": "Most of the enrichment was done by means of having the transfer grammar augment semantic variables with additional performance features. Values of these features are typically atomic types (cf. figure 1 for a sample MRS: performance features are prefixed with two dashes and rendered in blue). In one case, i.e. oind, the transfer grammar adds an additional role argument (individual variable) to relevant elementary predications. On the grammar side, rules were additionally constrained according to these efficiency features. Unless specialised to some value by the MRS rewrite grammar, the enriched grammar rules will apply just as before, enabling us to measure performance gains by simply activating or deactivating blocks of transfer rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule-based enrichment of input semantics",
"sec_num": "2.3"
},
{
"text": "In this subsection, we shall discuss each measure in turn, together with brief remarks on the implementation and an estimate of the expected benefits.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule-based enrichment of input semantics",
"sec_num": "2.3"
},
{
"text": "Case (cas) One of the most straightforward efficiency measures to come up with when confronted with a highly inflectional language such as German is to eliminate inflected forms from the chart that cannot possibly be part of a globally well-formed realisation. While some inflectional forms are readily filtered by the semantic input or the lexicon, namely predicate-inherent information such as TAM (tense/aspect/mood) and number/gender for nominal expressions, this is not the case for morphosyntactic case, which is determined by properties of the governing predicate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inflection",
"sec_num": "2.3.1"
},
{
"text": "In a configurational language, inflecting every NP for all possible cases, even irrelevant ones, may be suboptimal, but it is not really a matter for concern, since the NP will locally combine with its governing predicate, rendering NPs in irrelevant cases inert during further search. In a non-configurational language such as German, which features argument composition, heads combine with complements that are not their own arguments but rather those of a predicate they compose with, i.e., they need to cater for unknown raised arguments by means of underspecified valence lists. As a result, the identity of the inherited arguments is not known, so any XP present in the chart, however inflected, can sneak into these underspecified lists, to be ruled out, in the majority of cases, only when a significantly larger structure has been built. Unfortunately, languages with an articulate case system tend to be of the non-configurational, rather than the configurational type.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inflection",
"sec_num": "2.3.1"
},
{
"text": "In order to predict the case for NPs, we developed a set of 35 rules that derive case requirements from lexical and structural properties of the input semantics (cf. the --CAS feature in figure 1 ). While some cases are indeed trivial, e.g., predicting the case of obliques, or arguments of prepositions, others are not: first, since individuals can be arguments to more than one predicate, as witnessed in relative clause constructions, such individual variables must be exempted from case prediction. Second, raising and, in particular, voice alternation can change case assignment properties. Thus, the transfer grammar must carefully anticipate these properties in a case-by-case fashion.",
"cite_spans": [],
"ref_spans": [
{
"start": 187,
"end": 197,
"text": "figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Inflection",
"sec_num": "2.3.1"
},
{
"text": "Apart from an overall slight reduction in chart size, we expect this feature to be particularly useful in all constructions involving locally underspecified valence lists, including the quite common case of separable particle verbs and raising and control constructions, as well as more specific, yet quite expensive ones like partial VP fronting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inflection",
"sec_num": "2.3.1"
},
{
"text": "The implementation of punctuation in GG (Kilian, 2007) follows that of the English ERG in using inflectional rules. Even when limiting ourselves to basic sentence punctuation (commas, period, question mark, exclamation mark), almost every chart item can be inflected in 5 different ways, given that it cannot be known a priori which chart item will end up, e.g., at the right periphery of the entire sentence, where sentence mode (declarative/interrogative/imperative) is expressed. What is globally known, however, is sentence mode: all it takes is to distribute exactly this information onto every elementary predication (cf. --PUNCT in figure 1). Abstracting away from quotations, every sentence will be in only one of the three modes, so the number of punctuation variants for each chart item can be brought down to 3 (instead of 5).",
"cite_spans": [
{
"start": 40,
"end": 54,
"text": "(Kilian, 2007)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Punctuation (punct)",
"sec_num": null
},
{
"text": "This measure, while simple-minded and straightforwardly implemented (5 rules), is nevertheless expected to be highly effective, given that it indiscriminately targets almost every lexical edge (heads and dependents alike) and therefore has a significant impact on the overall size of the search space, given the bottom-up regime of the generator. The only edges that do not benefit from this (or any other measure discussed in this paper) are those corresponding to semantically empty lexical items, like, e.g., auxiliaries, relative pronouns etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Punctuation (punct)",
"sec_num": null
},
{
"text": "A peculiarity of German syntax that has quite strong repercussions on processing efficiency is verb placement: while non-finite verbs are placed in phrase-final position, finite verbs display an alternation between final and initial position: in relative clauses, embedded interrogatives, as well as subordinate clauses introduced by a complementiser or subordinating conjunction, the finite verb is realised in final position, otherwise initially, including in matrix clauses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verb movement and direction of branching (sub)",
"sec_num": "2.3.2"
},
{
"text": "The global construction type (\"matrix order\" vs. \"subordinate order\") can be calculated from properties of the input semantics, in particular, by taking into consideration sentence mode (declarative vs. interrogative), the kind of embedding (relativisation, complementation, type of conjunction), as well as the presence and nature of the topicalised element (embedded V2 vs. embedded wh vs. that-clause). The pre-generation transfer grammar uses this information to determine for each verb whether it is in a \"subordinate\" or \"non-subordinate\" context (cf.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verb movement and direction of branching (sub)",
"sec_num": "2.3.2"
},
{
"text": "--SUB in figure 1). In addition to predicting direction of branching for simple verbs, the main benefit of this feature is that we can decide when to hypothesise head movement of the verb.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verb movement and direction of branching (sub)",
"sec_num": "2.3.2"
},
{
"text": "Coherent vs. non-coherent constructions (coh) Probably one of the strongest factors responsible for generation inefficiency is due to discontinuous complex predicates leading to local underspecification of valence lists that permit sneaking in from any XP edge in the chart, triggering massive combinatorial explosion. Fortunately, whether some predicate permits argument composition or not is a lexical matter, with composition being restricted to auxiliaries, modals, raising and control verbs. Thus, the transfer grammar marks the arguments of nonfinite complements of modals etc., as to their potential of undergoing argument composition (coh +). Likewise, arguments of finite verbs that are expressed periphrastically (perfective, future, passives) are marked for composition. With arguments of all other predicates being marked with a negative value, underspecified valence lists can be protected to some degree against illicit intrusion of arguments (cf. --COH in figure 1 ). This feature is expected to be particularly helpful in those cases where we are confronted with entirely underspecified valence lists, as with separable particle verbs and partial VP fronting.",
"cite_spans": [],
"ref_spans": [
{
"start": 971,
"end": 979,
"text": "figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Discontinuous complex predicates",
"sec_num": "2.3.3"
},
{
"text": "Predicting upstairs objects (raising/control) As discussed in the introduction, discontinuous verb clusters may necessitate hypothesising valencies of the initial verbs to be realised in the Mittelfeld, in particular objects of initial raising and control verbs that intersperse with the arguments of the final verb or verb cluster. Since the subcategorisation requirements of the upstairs verb are not known during bottom-up construction of the Mittelfeld, additional arguments are hypothesised even in cases where the initial verb takes no complement at all. Furthermore, such hypothesised argument slots provide potential for illicit intrusion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discontinuous complex predicates",
"sec_num": "2.3.3"
},
{
"text": "On the basis of the predicate argument structure, however, it is quite straightforward to decide not only whether an argument should be hypothesised or not, but also to determine its identity. To this end, the transfer grammar redundantly encodes the upstairs verb's object as an additional argument role (oind) which is used by the grammar to restrict any additional argument slot.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discontinuous complex predicates",
"sec_num": "2.3.3"
},
{
"text": "Predicting properties of raised subjects (sind) The last feature relating to complex predicates targets modals and subject raising verbs, which agree with a subject that is not their own argument. In order to limit the number of inflected variants of potentially expensive items in the chart (they all trigger argument composition), the transfer derives the person-number information of the syntactic subject from the argument structure of their non-finite argument (cf. --SIND in figure 1) . Since the scope of this feature is limited, we did not have any a priori expectation as to whether the potential gains in certain constructions will be sufficient to offset the overhead incurred by the extended rule set.",
"cite_spans": [],
"ref_spans": [
{
"start": 481,
"end": 490,
"text": "figure 1)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Discontinuous complex predicates",
"sec_num": "2.3.3"
},
{
"text": "Long distance dependencies, like topicalisation, wh-fronting and relativisation are a notorious source of inefficiency in syntactic processing. In German, extraction is very common: even in ordinary declaratives, some constituent is extracted from the matrix or an embedded clause and placed into the sentence-initial Vorfeld, a kind of topic position. In the external MRS, the distinguished individual variable of the topicalised element is represented as an information-structural property of the proposition or question relation (TPC feature). Topicalised elements can be arguments, modifiers (scopal or intersective), as well as heads, in the case of (partial) VP fronting. Moreover, as a side-effect of widespread scrambling, there is no canonical position even for arguments, let alone adjuncts, so gap prediction is vital.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-local dependencies",
"sec_num": "2.3.4"
},
{
"text": "Local vs. non-local realisation (top) Predicting local vs. non-local realisation of arguments is expected to be both straightforward and effective: given that the individual variable of the topicalised element is already registered in the external MRS, it is almost sufficient to redundantly encode this fact as a property of the variable, thereby making it visible on the governing predicate as well, i.e., the context from which extraction proceeds, and mark all remaining arguments as local (cf. --TOP in figure 1). This basic scenario is, of course, slightly complicated by the fact that individual variables can be arguments of more than one predicate, which may or may not be a reason for concern: in the case of across-the-board extraction from coordinate structures, it can be harmless, whereas in the case of relativisation, we are confronted with the possibility of an individual which is realised locally with respect to the upstairs predicate, yet non-locally within the relative clause. In the transfer grammar, this is resolved by means of a three-valued system of types (+,\u2212,na), where Boolean values correspond to topicalised (+) and non-topicalised (\u2212) realisation, whereas na represents the neutral case (relativisation). Both local head-complement rules (na_or_\u2212) and complement extraction rules (na_or_+) are made sensitive to this distinction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-local dependencies",
"sec_num": "2.3.4"
},
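A minimal sketch of the top feature described above: each argument variable is marked "+" (topicalised), "-" (local), or "na" (neutral, e.g. shared by several governing predicates, as in relativisation). The data layout and names here are illustrative assumptions, not GG's actual transfer machinery.

```python
# Sketch of the "top" feature: mark each governed argument variable
# for topicalised (+), local (-), or neutral (na) realisation.
# EP encoding is a simplification for illustration.

def mark_arguments(eps, tpc_var):
    """eps: list of (predicate, list of governed argument variables);
    tpc_var: variable of the topicalised element from the external MRS."""
    counts = {}
    for _, args in eps:
        for v in args:
            counts[v] = counts.get(v, 0) + 1
    marks = {}
    for _, args in eps:
        for v in args:
            if counts[v] > 1:
                marks[v] = "na"   # shared: compatible with both rule types
            elif v == tpc_var:
                marks[v] = "+"    # only complement-extraction rules apply
            else:
                marks[v] = "-"    # only local head-complement rules apply
    return marks
```

For a single verb governing x4 with topicalised x8, `mark_arguments([("_sehen_v", ["x4", "x8"])], "x8")` yields `{"x4": "-", "x8": "+"}`; a variable governed by two predicates (relativisation) comes out "na", compatible with rules typed na_or_\u2212 and na_or_+.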
{
"text": "As a consequence of this feature, we expect some considerable reduction in chart size: with the exception of relativised arguments, all arguments will be marked as either local or non-local, thereby eliminating a great deal of non-determinism.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-local dependencies",
"sec_num": "2.3.4"
},
{
"text": "While prediction of argument extraction as sketched above can reduce some of the complexity incurred by long-distance dependencies, it is rather moot when it comes to items that are not represented on the argument structure of the local head at all, e.g. modifiers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predict extraction type and gap site (tpc)",
"sec_num": null
},
{
"text": "Taking into consideration the semantic relation that the topicalised elementary predication enters in with some other elementary predication, it is possible to detect, from the external MRS, both modifier status (scopal vs. intersective) and location of the gap site, i.e. the modified item. In a similar way, it is possible to identify cases of (partial) VP fronting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predict extraction type and gap site (tpc)",
"sec_num": null
},
{
"text": "Complementing the top feature, which marks extraction as a property of arguments, the transfer grammar introduces a feature tpc to identify the locus of the gap as a property of the head. Values of this feature serve to distinguish further between different types of extraction, e.g. intersective vs. scopal modification, verb fronting and plain argument extraction (cf. --TPC in figure 1 ). Extraction rules are made to be sensitive to properties of the head, accordingly. The ordered transfer rules first try to detect instances of modifier fronting and partial VP fronting and mark the event variable of the elementary predication corresponding to the head for the appropriate extraction type. All remaining verbal and predicative elementary predications are marked as a potential site for argument extraction, thereby ruling out modifier or verb fronting from these sites.",
"cite_spans": [],
"ref_spans": [
{
"start": 380,
"end": 388,
"text": "figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Predict extraction type and gap site (tpc)",
"sec_num": null
},
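The ordered rule application just described can be sketched as follows: first try to recognise modifier fronting from the relation the topicalised EP enters into with its target; otherwise fall back to marking verbal EPs as potential argument-extraction sites. The EP encoding and the predicate suffixes are assumptions for illustration only, not GG's actual predicate inventory.

```python
# Sketch of the "tpc" transfer rules: classify the fronting type and
# locate the gap site from the external MRS. Suffix-based predicate
# classification is a stand-in for the grammar's real type system.

def classify_fronting(eps, tpc_var, verbal_preds):
    """eps: list of (predicate, ARG0 variable, list of further args)."""
    for pred, arg0, args in eps:
        if arg0 == tpc_var and args:
            gap_site = args[0]                 # the modified item
            if pred.endswith("_a_isect"):
                return ("isect-mod-fronting", gap_site)
            if pred.endswith("_a_scopal"):
                return ("scopal-mod-fronting", gap_site)
    # Fallback: any verbal or predicative EP is a potential site for
    # plain argument extraction (modifier/verb fronting ruled out there).
    sites = [arg0 for pred, arg0, _ in eps if pred in verbal_preds]
    return ("arg-extraction", sites)
```

With a topicalised intersective adverb modifying event e2, the first rule fires and returns the gap site; with a topicalised argument variable, the fallback marks the verb's event variable instead.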
{
"text": "A priori, it is difficult to assess the efficiency of this feature. However, given the rather unconstrained nature of modification, and therefore modifier extraction, it is safe to expect some decent benefit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predict extraction type and gap site (tpc)",
"sec_num": null
},
{
"text": "In order to assess the impact of the proposed measures on generation efficiency, we carried out several experiments on three different regression test suites for German: the Babel test suite (M\u00fcller, 2004) , the TSNLP test suite (Lehmann et al., 1996) , and the German version of the CSLI MRS test suite. The test suites were parsed, and successfully analysed test items were subsequently disambiguated using the Redwoods treebank annotation tool (Oepen et al., 2002) . This left us with a total of 2,259 semantic input representations for the generator (Babel: 609, TSNLP: 1547, MRS: 103).",
"cite_spans": [
{
"start": 191,
"end": 205,
"text": "(M\u00fcller, 2004)",
"ref_id": "BIBREF18"
},
{
"start": 229,
"end": 251,
"text": "(Lehmann et al., 1996)",
"ref_id": null
},
{
"start": 447,
"end": 467,
"text": "(Oepen et al., 2002)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3"
},
{
"text": "None of the test suites used in the experiments here was specifically designed for the purposes of NLG. Rather, all three are general purpose, phenomenon-oriented regression test suites. However, there are some differences in the design of the individual test suites that we expect to affect the impact of our performance improvements: while the MRS and TSNLP test suites consist of rather short utterances (MRS: 4.44 words/item, TSNLP: 4.76 words/item), Babel is slightly more complex (6.76 words/item). Another important difference relates to the kind of phenomena included in the test suite: to give an example, TSNLP includes a fair amount of non-sentential items for testing NP-internal agreement, a phenomenon which should be entirely unaffected by most of the efficiency measures suggested here, which are all targeted at clausal syntax.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3"
},
{
"text": "All test runs were performed on a Linux (kernel 2.6.32) compute server with 12 Intel Xeon X5650 2.67GHz CPUs and 16GB RAM, running 4 processes in parallel (on an otherwise idle machine). The ACE generator was run in standard configuration, i.e. with a memory limit of 1.2 GB for forest creation plus another 300 MB for unpacking. The number of realisations per item was limited to 1000. Tests have been profiled using [incr tsdb] (Oepen, 2002) .",
"cite_spans": [
{
"start": 430,
"end": 443,
"text": "(Oepen, 2002)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3"
},
{
"text": "In addition to comparing the performance of the full pre-generation transfer grammar to that of the baseline, we conducted a number of additional test runs to evaluate the effectiveness of each performance feature on its own (+feature), as well as each feature's contribution to the combined performance (\u2212feature).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3"
},
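The +feature/\u2212feature design above can be expressed as a simple driver loop; `run_with` is a hypothetical stand-in for whatever loads the corresponding transfer grammar and profiles a test run.

```python
# Ablation driver for the experimental design described above:
# "base" = no features, "all" = every feature, "+f" = f in isolation,
# "-f" = leave-one-out. run_with is an assumed profiling callback.

def run_ablation(features, run_with):
    results = {"base": run_with(frozenset()),
               "all": run_with(frozenset(features))}
    for f in features:
        results["+" + f] = run_with(frozenset({f}))             # isolation
        results["-" + f] = run_with(frozenset(features) - {f})  # leave-one-out
    return results
```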
{
"text": "The main results are summarised in tables 1 and 2, giving crucial performance indicators (overall processing time and passive edges) for all three test suites. Comparing baseline performance (Base) to the combined effect of all features (All), we observe a speedup of around a factor of 4.5 for MRS and TSNLP test suites. On the more complex Babel test suite efficiency gains even go up to a factor of almost 28. 4 Similarly positive efficiency factors can be observed regarding space consumption (edges), although space savings typically fall short of time savings, given the fact that we are using a generator with ambiguity packing. 5 Investigating the impact of the individual features in more detail, we find that almost all of them are effective on at least two of the three test suites. The only exception is +sind, the feature which calculates subject agreement information for raising and modal verbs: not only do we not find any clear benefits in isolation; its inclusion also proves detrimental in combination with other features. Given that this measure is highly specific, its failure to give rise to positive effects is hardly surprising. All other features are effective not only in isolation (top half of the table), but we can observe from the runs in the lower half of each table (leave-one-out) that each feature still has an impact when used in combination.",
"cite_spans": [
{
"start": 413,
"end": 414,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.1"
},
{
"text": "Starting with the inflection-oriented features, i.e., cas and punct, we find that both have consistent impact on all test suites. However, the effect of controlling punctuation is clearly stronger than that of predicting case: in fact, punctuation is the second to third most effective feature of all features tested. We believe that this is due to the following factors: first, predicting morphosyntactic case can only ever have an effect on nominal expressions (nouns, determiners, attributive adjectives), whereas punctuation will affect every lexical item in the chart that corresponds to some elementary predication in the input, targeting nominal and non-nominal expressions alike. Furthermore, while case prediction only reduces the number of potential complements, predicting punctuation also has an effect on heads and modifiers. Second, sentential punctuation is a global feature, i.e., all elementary predicates will be specialised in the same way. Case assignment, however, is a local property, and must therefore cater for the situation where an individual variable is shared by two or more governing predicates, as, e.g., in the case of relativisation. As a net effect, individual variables must be exempt from case prediction in these cases. This explanation is further supported by the fact that the reduction factor on passive edges is very close to the theoretically maximal value for punctuation of 1.67 (5/3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.1"
},
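A quick check of the ceiling quoted above: assuming each lexical item enters the chart in 5 punctuation variants of which the prediction leaves 3 viable (the 5/3 reading of the figure in the text, an assumption on our part), the best possible passive-edge reduction from punctuation alone is:

```python
# Back-of-the-envelope reduction ceiling for punctuation prediction.
baseline_variants = 5   # assumed lexical punctuation variants without prediction
predicted_variants = 3  # assumed variants still viable after prediction
max_reduction = baseline_variants / predicted_variants
print(round(max_reduction, 2))  # 1.67
```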
{
"text": "The features related to verb placement and argument composition (sub,oind,coh) all lead to performance improvements, albeit to differing degrees, depending on the feature and the test suites: while prediction of verb placement, i.e., direction of branching and presence/absence of verb movement (sub), leads to consistently good effects on all three test suites, as does the prediction of the absence/identity of the initial verb's object (oind), the coh feature, which exclusively caters for argument composition, shows more variable behaviour: though beneficial otherwise, space savings on TSNLP are negligible, with processing times even going up slightly. This may not be too surprising: while prediction of verb placement and direction of branching affect every clause, the coh feature will only show an effect in constructions involving particular predicates or tenses, which is a situation that can vary depending on the concrete input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.1"
},
{
"text": "Finally, the two extraction features top and tpc show again consistent and highly effective performance improvements across all test suites, both in isolation and in combination. This confirms quite neatly our initial expectation that these two efficiency measures are largely independent, the former (top) targeting complement extraction, by virtue of their being represented on the head's argument structure, the latter (tpc) targeting the remaining cases, most notably gap prediction for adjunct extraction. Finally, the fact that long distance dependencies are not only costly, but also frequent and not specific to any particular construction, explains why good gap prediction gives rise to consistently high performance benefits across all test suites.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.1"
},
{
"text": "Before we close the presentation of the main results, we would like to briefly compare the efficiency of the measures proposed here to those suggested by Carroll and Oepen (2005) , namely index accessibility filtering and ambiguity packing. Disabling each of these previously established performance features in turn (cf. the last two rows in tables 1 and 2), we can show that the detrimental effect shown on our test data is comparable to that incurred by disabling one of the features investigated here. ",
"cite_spans": [
{
"start": 154,
"end": 178,
"text": "Carroll and Oepen (2005)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.1"
},
{
"text": "We have observed during the presentation of the main results that the majority of MRS-derived efficiency features show comparable speedup effects across the three different test suites, when used in isolation. Notable exceptions were the somewhat more construction and, therefore, input-specific features coh and oind which are highly dependent on the presence of complex predicates. However, we observed quite strong differences (a factor around 5.5) as to the cumulative effects between babel on the one hand and the less complex TSNLP and MRS test suites on the other. In order to better understand the significance of the experiments reported on here we shall investigate the differences and discuss what practical implications will ensue.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "3.2"
},
{
"text": "All 0 5000 10000 15000 20000 25000 30000 35000 40000 45000 The most obvious difference between Babel and the other two test suites is of course input length: by artificially reducing average input length on Babel to slightly above (4.85) that of TSNLP (by filtering out longer inputs), the cumulative speedup factor reduces to a factor around 9.5, compared to 27.5, which is still not fully comparable, yet much closer to the factor of 4.5 observed for MRS and TSNLP suites. 6 The impact of input length on relative generation speedup is also corroborated by the scatter plots of time and space (passive edge) consumption shown in figures 2 and 3: without any of the efficiency measures proposed in this paper, processing time begins to explode already at an average sentence length of 8 (see figure 2) , averaging at around 8.4s. With the efficiency measures, processing time never even comes close to that level, leading to massive performance gains on longer inputs. The comparison of passive edges in figure 3 confirms even more clearly how the current efficiency measures particularly counter the combinatorial explosion observable with the baseline. To summarise, while all test suites witness good reduction of average processing times, it is clear that the real benefit of the generation efficiency measures suggested here becomes apparent with longer (and therefore more complex) inputs. For practical purposes, in particular for online processing, taming of the worst case complexity for longer inputs is more important than speedup factors on the relatively short utterances characteristic of MRS and TSNLP test suites.",
"cite_spans": [
{
"start": 475,
"end": 476,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 793,
"end": 803,
"text": "figure 2)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "4 6 8 10 114 16",
"sec_num": "2"
},
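The length-bucketed view behind figures 2 and 3 can be sketched as below: mean per-item generation time, grouped by input length in words. The bucketing logic is the point; the sample numbers in the usage are invented for illustration.

```python
# Group per-item generation times by input length and average each bucket,
# as one would do to plot time consumption against sentence length.
from collections import defaultdict

def mean_time_by_length(items):
    """items: iterable of (input_length_in_words, seconds)."""
    buckets = defaultdict(list)
    for length, secs in items:
        buckets[length].append(secs)
    return {l: sum(ts) / len(ts) for l, ts in sorted(buckets.items())}
```

For example, `mean_time_by_length([(4, 0.2), (4, 0.4), (8, 8.0), (8, 8.8)])` averages the two 4-word items to 0.3s and the two 8-word items to 8.4s.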
{
"text": "Before we close, we should like to address the issue of how our method could be ported to grammars for languages other than German: while some of the concrete features we used are somewhat specific to German (or Dutch), others should be easily portable. The punctuation feature, as well as the the two extraction-related features top and tpc should be useful to improve generation efficiency in a wide range of languages: in a small experiment carried out with the ERG (Copestake and Flickinger, 2000) on the Rondane treebank (see section 1.2), we observed a 10.5% reduction in generation time for punctuation alone. We expect that other features, such as the cas feature will be useful for other less configurational languages, such as Slavic languages, given the fact that elaborate case systems and relatively free word order often go hand in hand.",
"cite_spans": [
{
"start": 469,
"end": 501,
"text": "(Copestake and Flickinger, 2000)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "4 6 8 10 114 16",
"sec_num": "2"
},
{
"text": "An alternative strand of approaches towards efficient processing of unification grammars builds on the idea of compiling these grammars into formalisms with better worst-case complexity than native unification-based processing, such as CFG or TAG: e.g., Kiefer and Krieger (2000) proposed a CFG superset approximation of HPSG for parsing English and Japanese. However, this method has so far never been successfully applied to German, let alone for generation. Furthermore, despite potentially better raw performance, CFGs are plagued by at least as severe locality issues as bottom-up HPSG generation. TAGs, by contrast, with their extended domain of locality, constitute a much more interesting target formalism for compiling an HPSG generation grammar. In the literature, two such approaches have been reported: Kasper et al. (1995) describe a method of compiling Klaus Netter's HPSG of German to TAG, but the compilation did not cover the full grammar but only a fragment, and, unfortunately, no performance measures are reported for either parsing or generation. Becker and Lopez (2000) specifically capitalise on the fact that TAG's extended domain of locality gives rise to an a priori expectation towards greater generation efficiency, and, building on Kasper et al. (1995) , they describe a compilation of the Verbmobil English and Japanese HPSG grammars into LTAG. Again, however, no performance tests are reported that could substantiate the claim of increased generation performance with the compiled grammar. Furthermore, to the best of our knowledge, no such compilation has ever been carried out for German.",
"cite_spans": [
{
"start": 254,
"end": 279,
"text": "Kiefer and Krieger (2000)",
"ref_id": "BIBREF13"
},
{
"start": 815,
"end": 835,
"text": "Kasper et al. (1995)",
"ref_id": "BIBREF11"
},
{
"start": 1068,
"end": 1091,
"text": "Becker and Lopez (2000)",
"ref_id": "BIBREF0"
},
{
"start": 1261,
"end": 1281,
"text": "Kasper et al. (1995)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "3.3"
},
{
"text": "In the context of native unification-based processing, Gardent and Kow (2007) suggest a method to enrich the semantic input to an FTAG of French with tree features that permit almost deterministic selection of generation paraphrases. Moreover, Gardent and Kow (2005) argue that such selection also leads to performance improvements, as they show on the basis of sample sentences. With respect to the German LFG (Rohrer and Forst, 2006; Cahill et al., 2007) , Zarrie\u00df and Kuhn (2010) propose a transfer approach to provide f-structure input for the XLE surface realiser from shallow semantic representations. The main motivation for this was that f-structures contain a high level of syntactic and even morphosyntactic detail that make them less suitable for paraphrasing and, more generally, for deployment in natural language generation systems. Zarrie\u00df and Kuhn (2010) also discuss the impact of grammatical function prediction from semantic roles for generator efficiency: depending on the complexity of the transfer rules, they observe considerable differences in average generation time, ranging from 246.14s for the \"naive rules\" to 36.2s for their \"informed rules\", which operate on configurations rather than individual roles. Nakanishi et al. (2005) propose a beam search approach to tackle generation efficiency for an English HPSG. We believe their approach to be complementary to ours, since MRS enrichment can prune the search space with certainty and without locality restrictions, such that a future system using both methods will be able to provide good results at smaller beam sizes, taking advantage of a division of labour between transfer-based treatment of non-local and probabilistic pruning of local dependencies.",
"cite_spans": [
{
"start": 55,
"end": 77,
"text": "Gardent and Kow (2007)",
"ref_id": "BIBREF10"
},
{
"start": 244,
"end": 266,
"text": "Gardent and Kow (2005)",
"ref_id": "BIBREF9"
},
{
"start": 411,
"end": 435,
"text": "(Rohrer and Forst, 2006;",
"ref_id": "BIBREF25"
},
{
"start": 436,
"end": 456,
"text": "Cahill et al., 2007)",
"ref_id": "BIBREF1"
},
{
"start": 459,
"end": 482,
"text": "Zarrie\u00df and Kuhn (2010)",
"ref_id": "BIBREF26"
},
{
"start": 847,
"end": 870,
"text": "Zarrie\u00df and Kuhn (2010)",
"ref_id": "BIBREF26"
},
{
"start": 1235,
"end": 1258,
"text": "Nakanishi et al. (2005)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "3.3"
},
{
"text": "We have proposed a method to improve generation efficiency with GG, a reversible HPSG of German. Using a term rewrite system integrated into the generator, we automatically enrich the purely semantic input representation with additional syntacto-semantic constraints, derived from the semantic configuration. Evaluating our method on three different regression test suites for German, we have shown that this approach is highly successful in taming combinatorial explosion in bottom-up chart generation, leading to significant speedup factors: while on less complex inputs, we achieved a speedup by a factor of around 4.5, performance gains increase considerably on more complex inputs, yielding a speedup factor of almost 28, which shows that our method scales up well to increasing input lengths.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": null
},
{
"text": "Descriptions involving handle constraints are also possible, though less common.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "By subsumption of MRSes, it is meant that every predicate in the input MRS must be realised in the output MRS, and the identity of the logical variables is the same (modulo renaming). It is considered permissible for the output MRS to be more specific than the input MRS, permitting, inter alia paraphrase generation by input underspecification.3 Average transfer processing times on the three test suites discussed in this paper are as follows: MRS: 13.2ms, TSNLP: 12.1ms, Babel: 29.4ms. Transfer times are already included in the overall processing time reported in table 1 below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
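A minimal sketch of the subsumption test described in this footnote: every EP of the input MRS must be matched by an EP of the output MRS, with logical variables identical modulo one consistent renaming. EPs are modelled as (predicate, tuple of variables); handle constraints and the output MRS's extra specificity are deliberately ignored in this simplification.

```python
# Backtracking check that input_eps subsume output_eps: each input EP
# must match a distinct output EP under one consistent variable renaming.

def subsumes(input_eps, output_eps):
    def match(remaining, renaming, used):
        if not remaining:
            return True
        pred, args = remaining[0]
        for j, (opred, oargs) in enumerate(output_eps):
            if j in used or opred != pred or len(oargs) != len(args):
                continue
            new = dict(renaming)  # tentative extension of the renaming
            if all(new.setdefault(a, b) == b for a, b in zip(args, oargs)):
                if match(remaining[1:], new, used | {j}):
                    return True
        return False
    return match(list(input_eps), {}, frozenset())
```

For instance, "_hund_n(x1), _bellen_v(e1, x1)" matches an output using e7/x3 under the renaming {x1: x3, e1: e7}, but not one where the two occurrences of x1 map to different output variables.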
{
"text": "Apparently, combination of features pays off much better in terms of time savings than mere multiplication of individual factors would suggest, an effect that has been previously noted in the context of chart generation(Carroll and Oepen, 2005).5 Passive edges reported in table 2 are packed edges: thus, any edge filtered by our performance features can lead to time savings at several points during generation, namely edge creation, packing, and unpacking.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Reducing average string length to slightly below that of the MRS test suite (4.37), results in a cumulative speedup factor of only 5.4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Adapting HPSG-to-TAG compilation to wide-coverage grammars",
"authors": [
{
"first": "T",
"middle": [],
"last": "Becker",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Lopez",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 5th International Workshop on Tree-Adjoining Grammars and Related Formalisms (TAG+5)",
"volume": "",
"issue": "",
"pages": "47--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Becker, T. and Lopez, P. (2000). Adapting HPSG-to-TAG compilation to wide-coverage gram- mars. In Proceedings of the 5th International Workshop on Tree-Adjoining Grammars and Related Formalisms (TAG+5), pages 47-54, Paris.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Stochastic realisation ranking for a free word order language",
"authors": [
{
"first": "A",
"middle": [],
"last": "Cahill",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Forst",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Rohrer",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Eleventh European Workshop on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "17--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cahill, A., Forst, M., and Rohrer, C. (2007). Stochastic realisation ranking for a free word order language. In Proceedings of the Eleventh European Workshop on Natural Language Generation, pages 17-24, Saarbr\u00fccken, Germany. DFKI GmbH. Document D-07-01.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "High efficiency realization for a wide-coverage unification grammar",
"authors": [
{
"first": "J",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Oepen",
"suffix": ""
}
],
"year": 2005,
"venue": "Natural Language Processing-IJCNLP 2005",
"volume": "",
"issue": "",
"pages": "165--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carroll, J. and Oepen, S. (2005). High efficiency realization for a wide-coverage unification grammar. Natural Language Processing-IJCNLP 2005, pages 165-176.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Implementing Typed Feature Structure Grammars",
"authors": [
{
"first": "A",
"middle": [],
"last": "Copestake",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Copestake, A. (2002). Implementing Typed Feature Structure Grammars. CSLI Publications, Stanford.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "An open-source grammar development environment and broad-coverage English grammar using HPSG",
"authors": [
{
"first": "A",
"middle": [],
"last": "Copestake",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Flickinger",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the Second conference on Language Resources and Evaluation (LREC-2000)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Copestake, A. and Flickinger, D. (2000). An open-source grammar development environment and broad-coverage English grammar using HPSG. In Proceedings of the Second conference on Language Resources and Evaluation (LREC-2000), Athens.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Minimal recursion semantics: an introduction",
"authors": [
{
"first": "A",
"middle": [],
"last": "Copestake",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Flickinger",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Pollard",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Sag",
"suffix": ""
}
],
"year": 2005,
"venue": "Research on Language and Computation",
"volume": "3",
"issue": "4",
"pages": "281--332",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Copestake, A., Flickinger, D., Pollard, C., and Sag, I. (2005). Minimal recursion semantics: an introduction. Research on Language and Computation, 3(4):281-332.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "On the efficient implementation of German verb placement in HPSG",
"authors": [
{
"first": "B",
"middle": [],
"last": "Crysmann",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of RANLP 2003",
"volume": "",
"issue": "",
"pages": "112--116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Crysmann, B. (2003). On the efficient implementation of German verb placement in HPSG. In Proceedings of RANLP 2003, pages 112-116, Borovets, Bulgaria.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Relative clause extraposition in German: An efficient and portable implementation",
"authors": [
{
"first": "B",
"middle": [],
"last": "Crysmann",
"suffix": ""
}
],
"year": 2005,
"venue": "Research on Language and Computation",
"volume": "3",
"issue": "1",
"pages": "61--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Crysmann, B. (2005). Relative clause extraposition in German: An efficient and portable implementation. Research on Language and Computation, 3(1):61-82.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Local ambiguity packing and discontinuity in German",
"authors": [
{
"first": "B",
"middle": [],
"last": "Crysmann",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the ACL 2007 Workshop on Deep Linguistic Processing",
"volume": "",
"issue": "",
"pages": "144--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Crysmann, B. (2007). Local ambiguity packing and discontinuity in German. In Baldwin, T., Dras, M., Hockenmaier, J., King, T. H., and van Noord, G., editors, Proceedings of the ACL 2007 Workshop on Deep Linguistic Processing, pages 144-151, Prague, Czech Republic. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Generating and selecting grammatical paraphrases",
"authors": [
{
"first": "C",
"middle": [],
"last": "Gardent",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Kow",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 10th European Workshop on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gardent, C. and Kow, E. (2005). Generating and selecting grammatical paraphrases. In Proceedings of the 10th European Workshop on Natural Language Generation.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A Symbolic Approach to Near-Deterministic Surface Realisation using Tree Adjoining Grammar",
"authors": [
{
"first": "C",
"middle": [],
"last": "Gardent",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Kow",
"suffix": ""
}
],
"year": 2007,
"venue": "45th Annual Meeting of the Association for Computational Linguistics -ACL 2007",
"volume": "",
"issue": "",
"pages": "328--335",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gardent, C. and Kow, E. (2007). A Symbolic Approach to Near-Deterministic Surface Realisation using Tree Adjoining Grammar. In 45th Annual Meeting of the Association for Computational Linguistics -ACL 2007, pages 328-335, Prague, Czech Republic. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Compilation of HPSG to TAG",
"authors": [
{
"first": "R",
"middle": [],
"last": "Kasper",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Kiefer",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Netter",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Vijay-Shanker",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 33rd annual meeting on Association for Computational Linguistics, ACL '95",
"volume": "",
"issue": "",
"pages": "92--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kasper, R., Kiefer, B., Netter, K., and Vijay-Shanker, K. (1995). Compilation of HPSG to TAG. In Proceedings of the 33rd annual meeting on Association for Computational Linguistics, ACL '95, pages 92-99, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Chart generation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Kay",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 34th annual meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "200--204",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kay, M. (1996). Chart generation. In Proceedings of the 34th annual meeting on Association for Computational Linguistics, pages 200-204. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A context-free approximation of Head-Driven Phrase Structure Grammar",
"authors": [
{
"first": "B",
"middle": [],
"last": "Kiefer",
"suffix": ""
},
{
"first": "H.-U",
"middle": [],
"last": "Krieger",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 6th International Workshop on Parsing Technologies (IWPT 2000)",
"volume": "",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kiefer, B. and Krieger, H.-U. (2000). A context-free approximation of Head-Driven Phrase Structure Grammar. In Proceedings of the 6th International Workshop on Parsing Technologies (IWPT 2000), pages 135-146.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Zum Punkt gekommen",
"authors": [
{
"first": "N",
"middle": [],
"last": "Kilian",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kilian, N. (2007). Zum Punkt gekommen. Master's thesis, Universit\u00e4t des Saarlandes, Saarbr\u00fccken.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Verb order and head movement",
"authors": [
{
"first": "T",
"middle": [],
"last": "Kiss",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Wesche",
"suffix": ""
}
],
"year": 1991,
"venue": "Text Understanding in LILOG, number 546 in Lecture Notes in Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "216--240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kiss, T. and Wesche, B. (1991). Verb order and head movement. In Herzog, O. and Rollinger, C.-R., editors, Text Understanding in LILOG, number 546 in Lecture Notes in Artificial Intelligence, pages 216-240. Springer-Verlag, Berlin.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "TSNLP -test suites for natural language processing",
"authors": [],
"year": 1996,
"venue": "The 16th International Conference on Computational Linguistics (COLING)",
"volume": "2",
"issue": "",
"pages": "711--716",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "TSNLP -test suites for natural language processing. In The 16th International Conference on Computational Linguistics (COLING), volume 2, pages 711-716, Copenhagen, Denmark. Center for Sprogteknologi.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Continuous or discontinuous constituents? A comparison between syntactic analyses for constituent order and their processing systems",
"authors": [
{
"first": "S",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
}
],
"year": 2004,
"venue": "Research on Language and Computation",
"volume": "2",
"issue": "2",
"pages": "209--257",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M\u00fcller, S. (2004). Continuous or discontinuous constituents? A comparison between syntactic analyses for constituent order and their processing systems. Research on Language and Computation, 2(2):209-257.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "HPSG analysis of German",
"authors": [
{
"first": "S",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Kasper",
"suffix": ""
}
],
"year": 2000,
"venue": "Verbmobil: Foundations of Speech-to-Speech Translation",
"volume": "",
"issue": "",
"pages": "238--253",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M\u00fcller, S. and Kasper, W. (2000). HPSG analysis of German. In Wahlster, W., editor, Verbmobil: Foundations of Speech-to-Speech Translation, pages 238-253. Springer, Berlin.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Probabilistic models for disambiguation of an HPSG-based chart generator",
"authors": [
{
"first": "H",
"middle": [],
"last": "Nakanishi",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Miyao",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Ninth International Workshop on Parsing Technology",
"volume": "",
"issue": "",
"pages": "93--102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nakanishi, H., Miyao, Y., and Tsujii, J. (2005). Probabilistic models for disambiguation of an HPSG-based chart generator. In Proceedings of the Ninth International Workshop on Parsing Technology, pages 93-102. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Competence and Performance Profiling for Constraint-based Grammars: A New Methodology, Toolkit, and Applications",
"authors": [
{
"first": "S",
"middle": [],
"last": "Oepen",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oepen, S. (2002). Competence and Performance Profiling for Constraint-based Grammars: A New Methodology, Toolkit, and Applications. PhD thesis, Saarland University.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "LinGO Redwoods: A rich and dynamic treebank for HPSG",
"authors": [
{
"first": "S",
"middle": [],
"last": "Oepen",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Callahan",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Flickinger",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2002,
"venue": "Beyond PARSEVAL. Workshop at the Third International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oepen, S., Callahan, E., Flickinger, D., Manning, C., and Toutanova, K. (2002). LinGO Redwoods: A rich and dynamic treebank for HPSG. In Beyond PARSEVAL. Workshop at the Third International Conference on Language Resources and Evaluation, LREC 2002, Las Palmas, Spain.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Som \u00e5 kapp-ete med trollet? Towards MRS-based Norwegian-English Machine Translation",
"authors": [
{
"first": "S",
"middle": [],
"last": "Oepen",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Dyvik",
"suffix": ""
},
{
"first": "J",
"middle": [
"T"
],
"last": "L\u00f8nning",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Velldal",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Beermann",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Flickinger",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Hellan",
"suffix": ""
},
{
"first": "J",
"middle": [
"B"
],
"last": "Johannessen",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Meurer",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Nordg\u00e5rd",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Ros\u00e9n",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 10th International Conference on Theoretical and Methodological Issues in Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oepen, S., Dyvik, H., L\u00f8nning, J. T., Velldal, E., Beermann, D., Carroll, J., Flickinger, D., Hellan, L., Johannessen, J. B., Meurer, P., Nordg\u00e5rd, T., and Ros\u00e9n, V. (2004). Som \u00e5 kapp-ete med trollet? Towards MRS-based Norwegian-English Machine Translation. In Proceedings of the 10th International Conference on Theoretical and Methodological Issues in Machine Translation, Baltimore, MD.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Towards hybrid quality-oriented machine translation. On linguistics and probabilities in MT",
"authors": [
{
"first": "S",
"middle": [],
"last": "Oepen",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Velldal",
"suffix": ""
},
{
"first": "J",
"middle": [
"T"
],
"last": "L\u00f8nning",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Meurer",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Ros\u00e9n",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Flickinger",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 11th International Conference on Theoretical and Methodological Issues in Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oepen, S., Velldal, E., L\u00f8nning, J. T., Meurer, P., Ros\u00e9n, V., and Flickinger, D. (2007). Towards hybrid quality-oriented machine translation. On linguistics and probabilities in MT. In Proceedings of the 11th International Conference on Theoretical and Methodological Issues in Machine Translation, Sk\u00f6vde, Sweden.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Improving coverage and parsing quality of a large-scale LFG for German",
"authors": [
{
"first": "C",
"middle": [],
"last": "Rohrer",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Forst",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of LREC-2006",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rohrer, C. and Forst, M. (2006). Improving coverage and parsing quality of a large-scale LFG for German. In Proceedings of LREC-2006, Genova.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Reversing f-structure rewriting for generation from meaning representations",
"authors": [
{
"first": "S",
"middle": [],
"last": "Zarrie\u00df",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Kuhn",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the LFG10 Conference",
"volume": "",
"issue": "",
"pages": "479--499",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zarrie\u00df, S. and Kuhn, J. (2010). Reversing f-structure rewriting for generation from meaning representations. In Butt, M. and King, T. H., editors, Proceedings of the LFG10 Conference, Ottawa, pages 479-499, Stanford. CSLI publications.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "prpstn_m_rel LBL: h0 MARG: h16 ARG0: e19 TPC: x17 PSV: u2 ] [ \"_pron_n_ppro_rel\" LBL: h3 ARG0: x17 [ --TOP: + --COH: + --PUNCT: prop-punct --CAS: n-list PNG.PN: 2s ] ] [ \"pronoun_q_rel\" LBL: h7 ARG0: x17 RSTR: h9 body: h10 ] [ \"_sollen_v_modal-haben_rel\" LBL: h18 ARG0: e19 [ --TOP: ---COH: ---SIND: 2s --PUNCT: prop-punct --TPC: tpc-non-event-non-mod --SUB: -TENSE: present MOOD: indicative PERFECTIVE: -] ARG1: h14 ] [ \"_schnarchen_v_n-haben_rel\" LBL: h12 ARG0: e15 [ --TOP: ---COH: ---PUNCT: prop-punct --TPC: tpc-non-event-non-mod --SUB: bool TENSE: untensed PERFECTIVE: -] ARG1: x17 ] > HCONS: < h18 qeq h16 h9 qeq h3 h14 qeq h12 > ]",
"num": null
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"text": "Enriched input MRS for Du sollst schnarchen 'You should snore'",
"num": null
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"text": "Processing time (in s) per string length (babel)",
"num": null
},
"FIGREF3": {
"uris": null,
"type_str": "figure",
"text": "Passive edges per string length (babel)",
"num": null
},
"TABREF0": {
"num": null,
"content": "<table><tr><td/><td>MRS</td><td>TSNLP</td><td>Babel</td><td/><td colspan=\"2\">MRS+TSNLP+Babel</td></tr><tr><td/><td colspan=\"3\">Time (s) Red. Time (s) Red. Time (s)</td><td>Red.</td><td>Time (s)</td><td>Red.</td></tr><tr><td>Base</td><td>0.257 1.00</td><td>0.273 1.00</td><td>7.213</td><td>1.00</td><td>2.143</td><td>1.00</td></tr><tr><td>+cas +punct +sub +coh +oind +sind +top +tpc</td><td>0.222 1.16 0.150 1.71 0.210 1.22 0.226 1.14 0.242 1.06 0.262 0.98 0.130 1.97 0.131 1.96</td><td>0.216 1.26 0.159 1.71 0.215 1.27 0.275 0.99 0.262 1.04 0.274 1.00 0.136 2.00 0.178 1.53</td><td>4.547 4.598 6.042 5.681 5.506 7.035 5.309 3.602</td><td>1.59 1.57 1.19 1.27 1.31 1.03 1.36 2.00</td><td>1.384 1.355 1.786 1.730 1.675 2.096 1.530 1.099</td><td>1.55 1.58 1.20 1.24 1.28 1.02 1.40 1.95</td></tr><tr><td>All</td><td>0.054 4.71</td><td>0.063 4.31</td><td>0.260</td><td>27.79</td><td>0.116</td><td>18.52</td></tr><tr><td>-cas</td><td>0.056 4.59</td><td>0.065 4.20</td><td>0.371</td><td>19.46</td><td>0.147</td><td>14.58</td></tr><tr><td>-punct</td><td>0.063 4.11</td><td>0.081 3.38</td><td>0.415</td><td>17.39</td><td>0.170</td><td>12.61</td></tr><tr><td>-sub</td><td>0.058 4.44</td><td>0.067 4.09</td><td>0.304</td><td>23.71</td><td>0.130</td><td>16.44</td></tr><tr><td>-coh</td><td>0.054 4.78</td><td>0.061 4.49</td><td>0.353</td><td>20.45</td><td>0.139</td><td>15.41</td></tr><tr><td>-oind</td><td>0.056 4.59</td><td>0.062 4.37</td><td>0.526</td><td>13.72</td><td>0.187</td><td>11.46</td></tr><tr><td>-sind</td><td>0.050 5.12</td><td>0.062 4.40</td><td colspan=\"2\">0.258 27.95</td><td>0.114</td><td>18.74</td></tr><tr><td>-top</td><td>0.076 3.37</td><td>0.081 3.37</td><td>0.381</td><td>18.92</td><td>0.162</td><td>13.25</td></tr><tr><td>-tpc</td><td>0.064 4.01</td><td>0.079 3.46</td><td>0.897</td><td>8.04</td><td>0.299</td><td>7.17</td></tr><tr><td>-index</td><td>0.049 5.24</td><td>0.069 3.97</td><td>0.433</td><td>16.67</td><td>0.166</td><td>12.91</td></tr><tr><td>-pack</td><td>0.079 3.23</td><td>0.087 3.14</td><td>0.563</td><td>12.82</td><td>0.215</td><td>9.98</td></tr></table>",
"type_str": "table",
"text": "As indicated in table 1, average speedup on all three test suites is at 18.5.",
"html": null
},
"TABREF1": {
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "Processing time and speedup factor",
"html": null
},
"TABREF2": {
"num": null,
"content": "<table><tr><td/><td/><td>MRS</td><td/><td>TSNLP</td><td/><td>Babel</td></tr><tr><td/><td colspan=\"2\">Cov Edges Red.</td><td colspan=\"2\">Cov Edges Red.</td><td colspan=\"2\">Cov Edges Red.</td></tr><tr><td>Base</td><td>100.0</td><td colspan=\"2\">703 1.00 100.0</td><td>693 1.00</td><td>96.7</td><td>4864 1.00</td></tr><tr><td colspan=\"2\">100.0 +punct 100.0 +cas 100.0 +sub 100.0 +coh 100.0 +oind 100.0 +sind 100.0 +top +tpc 100.0</td><td colspan=\"2\">663 1.06 100.0 440 1.60 100.0 555 1.27 100.0 645 1.09 100.0 675 1.04 100.0 706 1.00 100.0 392 1.79 100.0 410 1.71 100.0</td><td>630 1.10 458 1.51 567 1.22 692 1.00 676 1.02 692 1.00 441 1.57 452 1.53</td><td>97.9 97.5 96.9 98.0 98.2 96.9 98.0 98.0</td><td>3817 1.27 3364 1.45 4137 1.18 4460 1.09 3959 1.23 4859 1.00 3539 1.37 2931 1.66</td></tr><tr><td>All</td><td>100.0</td><td colspan=\"2\">116 6.07 100.0</td><td colspan=\"2\">172 4.03 100.0</td><td>554 8.78</td></tr><tr><td>-cas</td><td>100.0</td><td colspan=\"2\">133 5.28 100.0</td><td colspan=\"2\">194 3.57 100.0</td><td>693 7.02</td></tr><tr><td>-punct</td><td>100.0</td><td colspan=\"2\">179 3.92 100.0</td><td colspan=\"2\">242 2.87 100.0</td><td>860 5.65</td></tr><tr><td>-sub</td><td>100.0</td><td colspan=\"2\">141 4.99 100.0</td><td colspan=\"2\">195 3.55 100.0</td><td>673 7.23</td></tr><tr><td>-coh</td><td>100.0</td><td colspan=\"2\">124 5.67 100.0</td><td colspan=\"2\">175 3.96 100.0</td><td>657 7.41</td></tr><tr><td>-oind</td><td>100.0</td><td colspan=\"2\">121 5.82 100.0</td><td colspan=\"2\">174 3.99 100.0</td><td>841 5.78</td></tr><tr><td>-sind</td><td>100.0</td><td colspan=\"2\">117 6.00 100.0</td><td colspan=\"2\">172 4.03 100.0</td><td>558 8.72</td></tr><tr><td>-top</td><td>100.0</td><td colspan=\"2\">205 3.43 100.0</td><td colspan=\"2\">251 2.76 100.0</td><td>825 5.90</td></tr><tr><td>-tpc</td><td>100.0</td><td colspan=\"2\">150 4.68 100.0</td><td colspan=\"2\">228 3.04 100.0</td><td>1173 4.15</td></tr><tr><td>-index</td><td>100.0</td><td colspan=\"2\">126 5.58 100.0</td><td colspan=\"2\">198 3.50 100.0</td><td>682 7.13</td></tr><tr><td>-pack</td><td>100.0</td><td colspan=\"2\">215 3.27 100.0</td><td>292 2.37</td><td>99.7</td><td>1292 3.77</td></tr></table>",
"type_str": "table",
"text": "also details generation coverage achieved by each test run: on MRS and TSNLP test suites, coverage is 100% throughout. On Babel, we achieve full coverage once a sufficient number of efficiency measures is enabled (second half of table 2). Test runs for baseline performance, as well as those with only a single performance feature activated at a time (top half of table 2), occasionally run into memory exhaustion, accounting for reduced coverage. However, since coverage on all test runs is greater than or equal to that of the baseline, a potential floor effect benefits the baseline more than any other runs, thus leaving the significance of our results unaffected.",
"html": null
},
"TABREF3": {
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "Generation coverage and space consumption (passive edges)",
"html": null
}
}
}
}