{
"paper_id": "E14-1020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:39:09.700377Z"
},
"title": "Incremental Query Generation",
"authors": [
{
"first": "Laura",
"middle": [],
"last": "Perez-Beltrachini",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Free University of Bozen-Bolzano Bozen-Bolzano",
"location": {
"country": "Italy"
}
},
"email": ""
},
{
"first": "Claire",
"middle": [],
"last": "Gardent",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CNRS/LORIA",
"location": {
"settlement": "Nancy",
"country": "France"
}
},
"email": "[email protected]"
},
{
"first": "Enrico",
"middle": [],
"last": "Franconi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Bozen-Bolzano",
"location": {
"settlement": "Bozen-Bolzano",
"country": "Italy"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a natural language generation system which supports the incremental specification of ontology-based queries in natural language. Our contribution is two fold. First, we introduce a chart based surface realisation algorithm which supports the kind of incremental processing required by ontology-based querying. Crucially, this algorithm avoids confusing the end user by preserving a consistent ordering of the query elements throughout the incremental query formulation process. Second, we show that grammar based surface realisation better supports the generation of fluent, natural sounding queries than previous template-based approaches.",
"pdf_parse": {
"paper_id": "E14-1020",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a natural language generation system which supports the incremental specification of ontology-based queries in natural language. Our contribution is two fold. First, we introduce a chart based surface realisation algorithm which supports the kind of incremental processing required by ontology-based querying. Crucially, this algorithm avoids confusing the end user by preserving a consistent ordering of the query elements throughout the incremental query formulation process. Second, we show that grammar based surface realisation better supports the generation of fluent, natural sounding queries than previous template-based approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Previous research has shown that formal ontologies could be used as a means not only to provide a uniform and flexible approach to integrating and describing heterogeneous data sources, but also to support the final user in querying them, thus improving the usability of the integrated system. To support the wide access to these data sources, it is crucial to develop efficient and user-friendly ways to query them (Wache et al., 2001) .",
"cite_spans": [
{
"start": 416,
"end": 436,
"text": "(Wache et al., 2001)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present a Natural Language (NL) interface of an ontology-based query tool, called Quelo 1 , which allows the end user to formulate a query without any knowledge either of the formal languages used to specify ontologies, or of the content of the ontology being used. Following the conceptual authoring approach described in (Tennant et al., 1983; Hallett et al., 2007) , this interface masks the composition of a formal query as the composition of an English text describing the equivalent information needs using natural language generation techniques. The natural language generation system that we propose for Quelo's NL interface departs from similar work (Hallett et al., 2007; Franconi et al., 2010a; Franconi et al., 2011b; Franconi et al., 2010b; Franconi et al., 2011a) in that it makes use of standard grammar based surface realisation techniques. Our contribution is two fold. First, we introduce a chart based surface realisation algorithm which supports the kind of incremental processing required by ontology driven query formulation. Crucially, this algorithm avoids confusing the end user by preserving a consistent ordering of the query elements throughout the incremental query formulation process. Second, we show that grammar based surface realisation better supports the generation of fluent, natural sounding queries than previous template-based approaches.",
"cite_spans": [
{
"start": 341,
"end": 363,
"text": "(Tennant et al., 1983;",
"ref_id": "BIBREF16"
},
{
"start": 364,
"end": 385,
"text": "Hallett et al., 2007)",
"ref_id": "BIBREF10"
},
{
"start": 677,
"end": 699,
"text": "(Hallett et al., 2007;",
"ref_id": "BIBREF10"
},
{
"start": 700,
"end": 723,
"text": "Franconi et al., 2010a;",
"ref_id": "BIBREF3"
},
{
"start": 724,
"end": 747,
"text": "Franconi et al., 2011b;",
"ref_id": "BIBREF6"
},
{
"start": 748,
"end": 771,
"text": "Franconi et al., 2010b;",
"ref_id": "BIBREF4"
},
{
"start": 772,
"end": 795,
"text": "Franconi et al., 2011a)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper is structured as follows. Section 2 discusses related work and situates our approach. Section 3 describes the task being addressed namely, ontology driven query formulation. It introduces the input being handled, the constraints under which generation operates and the operations the user may perform to build her query. In Section 4, we present the generation algorithm used to support the verbalisation of possible queries. Section 5 reports on an evaluation of the system with respect to fluency, clarity, coverage and incrementality. Section 6 concludes with pointers for further research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our approach is related to two main strands of work: incremental generation and conceptual authoring.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Incremental Generation (Oh and Rudnicky, 2000) used an n-gram language model to stochas-tically generate system turns. The language model is trained on a dialog corpus manually annotated with word and utterance classes. The generation engine uses the appropriate language model for the utterance class and generates word sequences randomly according to the language model distribution. The generated word sequences are then ranked using a scoring mechanism and only the best-scored utterance is kept. The system is incremental is that each word class to be verbalised can yield a new set of utterance candidates. However it supports only addition not revisions. Moreover it requires domain specific training data and manual annotation while the approach we propose is unsupervised and generic to any ontology. (Dethlefs et al., 2013) use Conditional Random Fields to find the best surface realisation from a semantic tree. They show that the resulting system is able to modify generation results on the fly when new or updated input is provided by the dialog manager. While their approach is fast to execute, it is limited to a restricted set of domain specific attributes; requires a training corpus of example sentences to define the space of possible surface realisations; and is based on a large set (800 rules) of domain specific rules extracted semi-automatically from the training corpus. In contrast, we use a general, small size grammar (around 50 rules) and a lexicon which is automatically derived from the input ontologies. The resulting system requires no training and thus can be applied to any ontology with any given signature of concepts and relations. Another difference between the two approaches concerns revisions: while our approach supports revisions anywhere in the input, the CRF approach proposed by (Dethlefs et al., 2013) only supports revisions occurring at the end of the generated string.",
"cite_spans": [
{
"start": 23,
"end": 46,
"text": "(Oh and Rudnicky, 2000)",
"ref_id": "BIBREF12"
},
{
"start": 810,
"end": 833,
"text": "(Dethlefs et al., 2013)",
"ref_id": "BIBREF1"
},
{
"start": 1826,
"end": 1849,
"text": "(Dethlefs et al., 2013)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "There is also much work (Schlangen and Skantze, 2009; in the domain of spoken dialog systems geared at modelling the incremental nature of dialog and in particular, at developing dialog systems where processing starts before the input is complete. In these approaches, the focus is on developing efficient architectures which support the timely interleaving of parsing and generation. Instead, our aim is to develop a principled approach to the incremental generation of a user query which supports revision and additions at arbitrary points of the query being built; generates natural sounding text; and maxi-mally preserves the linear order of the query.",
"cite_spans": [
{
"start": 24,
"end": 53,
"text": "(Schlangen and Skantze, 2009;",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our proposal is closely related to the conceptual authoring approach described in (Hallett et al., 2007) . In this approach, a text generated from a knowledge base, describes in natural language the knowledge encoded so far, and the options for extending it. Starting with an initial very general query (e.g., all things), the user can formulate a query by choosing between these options. Similarly, (Franconi et al., 2010a; Franconi et al., 2011b; Franconi et al., 2010b; Franconi et al., 2011a ) describes a conceptual authoring approach to querying semantic data where in addition , logical inference is used to semantically constrain the possible completions/revisions displayed to the user.",
"cite_spans": [
{
"start": 82,
"end": 104,
"text": "(Hallett et al., 2007)",
"ref_id": "BIBREF10"
},
{
"start": 400,
"end": 424,
"text": "(Franconi et al., 2010a;",
"ref_id": "BIBREF3"
},
{
"start": 425,
"end": 448,
"text": "Franconi et al., 2011b;",
"ref_id": "BIBREF6"
},
{
"start": 449,
"end": 472,
"text": "Franconi et al., 2010b;",
"ref_id": "BIBREF4"
},
{
"start": 473,
"end": 495,
"text": "Franconi et al., 2011a",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conceptual authoring",
"sec_num": null
},
{
"text": "Our approach departs from this work in that it makes use of standard grammars and algorithms. While previous work was based on procedures and templates, we rely on a Feature-Based Tree Adjoining Grammar to capture the link between text and semantics required by conceptual authoring; and we adapt a chart based algorithm to support the addition, the revision and the substitution of input material. To avoid confusing the user, we additionally introduce a scoring function which helps preserve the linear order of the NL query. The generation system we present is in fact integrated in the Quelo interface developed by (Franconi et al., 2011a ) and compared with their previous template-based approach.",
"cite_spans": [
{
"start": 619,
"end": 642,
"text": "(Franconi et al., 2011a",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conceptual authoring",
"sec_num": null
},
{
"text": "The generation task we address is the following. Given a knowledge base K, some initial formal query q and a focus point p in that query, the reasoning services supported by Quelo's query logic framework (see (Guagliardo, 2009) ) will compute a set of new queries rev(q) formed by adding, deleting and revising the current query q at point p. The task of the generator is then to produce a natural language sentence for each new formal query q \u2032 \u2208 rev(q) which results from this revision process. In other words, each time the user refines a query q to produce a new query q \u2032 , the system computes all revisions rev(q) of q \u2032 that are compatible with the underlying knowledge base using a reasoner. Each of these possible revisions is then input to the generator and the resulting revised NL queries are displayed to the user. In what follows, we assume that formal queries are represented using Description Logics (Baader, 2003) . The following examples show a possible sequence of NL queries, their corresponding DL representation and the operations provided by Quelo that can be performed on a query (bold face is used to indicate the point in the query at which the next revision takes place). For instance, the query in (1c) results from adding the concept Y oung to the query underlying (1b) at the point highlighted by man.",
"cite_spans": [
{
"start": 209,
"end": 227,
"text": "(Guagliardo, 2009)",
"ref_id": "BIBREF9"
},
{
"start": 916,
"end": 930,
"text": "(Baader, 2003)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Incremental Generation of Candidate Query Extensions",
"sec_num": "3"
},
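The refine-and-verbalise cycle described above can be sketched as follows. This is a minimal illustration under our own assumptions, not Quelo's actual API: Query, compute_revisions and verbalise are hypothetical names standing in for the reasoning services and the generator described in the paper.

```python
# Minimal sketch of the refine-and-verbalise cycle (hypothetical names).
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Query:
    dl_formula: str   # tree-shaped conjunctive DL query, e.g. "Man ⊓ ∃marriedTo.(Person)"
    focus: int        # focus point p within the linearised query

def verbalise_revisions(q: Query,
                        compute_revisions: Callable[[Query], List[Query]],
                        verbalise: Callable[[Query], str]) -> List[str]:
    """Ask the reasoner for all revisions of q compatible with the KB at the
    focus point, then produce one NL sentence per candidate revision."""
    candidates = compute_revisions(q)   # additions, deletions, substitutions at q.focus
    return [verbalise(q_prime) for q_prime in candidates]
```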
{
"text": "( 1) ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incremental Generation of Candidate Query Extensions",
"sec_num": "3"
},
{
"text": "Generation of KB queries differs from standard natural language generation algorithms in two main ways. First it should support the revisions, deletions and additions required by incremental processing. Second, to avoid confusing the user, the revisions (modifications, extensions, deletions) performed by the user should have a minimal effect on the linear order of the NL query.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Queries",
"sec_num": "4"
},
{
"text": "That is the generator is not free to produce any NL variant verbalising the query but should produce a verbalisation that is linearly as close as possible, modulo the revision applied by the user, to the query before revisions. Thus for instance, given the DL query (2) and assuming a linearisation of that formula that matches the linear order it is presented in (see Section 4.2.1 below for a definition of the linearisation of DL formulae), sentence (2b) will be preferred over (2c).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Queries",
"sec_num": "4"
},
{
"text": "(2) a. Car \u2293 \u2203runOn.(Diesel) \u2293 \u2203equippedW ith.(AirCond) b. A car which runs on Diesel and is equipped with air conditioning c. A car which is equipped with air conditioning and runs on Diesel In what follows, we describe the generation algorithm used to verbalise possible extensions of user queries as proposed by the Quelo tool. We start by introducing and motivating the underlying formal language supported by Quelo and the input to the generator. We then describe the overall architecture of our generator. Finally, we present the incremental surface realisation algorithm supporting the verbalisation of the possible query extensions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Queries",
"sec_num": "4"
},
{
"text": "Following (Franconi et al., 2010a; Franconi et al., 2011b; Franconi et al., 2010b; Franconi et al., 2011a) we assume a formal language for queries that targets the querying of various knowledge and data bases independent of their specification language. To this end, it uses a minimal query language L that is shared by most knowledge representation languages and is supported by Description Logic (DL) reasoners namely, the language of tree shaped conjunctive DL queries. Let R be a set of relations and C be a set of concepts, then the language of tree-shaped conjunctive DL queries is defined as follows: S ::= C | \u2203R.(S) | S \u2293 S where R \u2208 R, C \u2208 C, \u2293 denotes conjunction and \u2203 is the existential quantifier.",
"cite_spans": [
{
"start": 10,
"end": 34,
"text": "(Franconi et al., 2010a;",
"ref_id": "BIBREF3"
},
{
"start": 35,
"end": 58,
"text": "Franconi et al., 2011b;",
"ref_id": "BIBREF6"
},
{
"start": 59,
"end": 82,
"text": "Franconi et al., 2010b;",
"ref_id": "BIBREF4"
},
{
"start": 83,
"end": 106,
"text": "Franconi et al., 2011a)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Input Language",
"sec_num": "4.1"
},
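The grammar S ::= C | ∃R.(S) | S ⊓ S lends itself to a simple recursive encoding. The sketch below is our own illustration, not part of Quelo; the class names are hypothetical.

```python
# A possible encoding of the query language S ::= C | ∃R.(S) | S ⊓ S (illustrative names).
from dataclasses import dataclass
from typing import Union

@dataclass
class Concept:            # C
    name: str

@dataclass
class Exists:             # ∃R.(S)
    relation: str
    filler: "QueryExpr"

@dataclass
class Conj:               # S ⊓ S
    left: "QueryExpr"
    right: "QueryExpr"

QueryExpr = Union[Concept, Exists, Conj]

# Example (2a): Car ⊓ ∃runOn.(Diesel) ⊓ ∃equippedWith.(AirCond)
example = Conj(Concept("Car"),
               Conj(Exists("runOn", Concept("Diesel")),
                    Exists("equippedWith", Concept("AirCond"))))
```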
{
"text": "A tree shaped conjunctive DL query can be represented as a tree where nodes are associated with a set of concept names (node labels) and edges are labelled with a relation name (edge labels). Figure 1 shows some example query trees.",
"cite_spans": [],
"ref_spans": [
{
"start": 192,
"end": 201,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "The Input Language",
"sec_num": "4.1"
},
{
"text": "Our generator takes as input two L formula: the formula representing the current query q and the formula representing a possible revision r (addition/deletion/modification) of q. Given this input, the system architecture follows a traditional pipeline sequencing a document planner which (i) linearises the input query and (ii) partition the input into sentence size chunks; a surface realiser mapping each sentence size L formula into a sentence; and a referring expression generator verbalising NPs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NLG architecture",
"sec_num": "4.2"
},
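A minimal sketch of this pipeline is given below; all parameter names are hypothetical stand-ins for the document planner, surface realiser and referring expression generator, whose actual interfaces are not given in the paper.

```python
# Sketch of the document planning -> surface realisation -> referring
# expression generation pipeline (hypothetical function names).
from typing import Callable

def generate_nl_query(current_query, revised_query,
                      linearise: Callable,
                      segment: Callable,
                      realise: Callable,
                      generate_refexps: Callable) -> str:
    ordered = linearise(revised_query)                    # strict total order over the query tree
    chunks = segment(ordered)                             # sentence-sized sub-formulae
    trees = [realise(chunk, current_query) for chunk in chunks]  # one phrase structure tree per chunk
    return generate_refexps(trees)                        # pronoun / definite / indefinite NPs, final text
```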
{
"text": "The document planning module linearises the input query and segments the resulting linearised Query Linearisation Among the different strategies investigated in (Dongilli, 2008) to find a good order for the content contained in a query tree the depth-first planning, i.e. depth-first traversal of the query tree, was found to be the most appropriate one. Partly because it is obtained straightforward from the query tree but mostly due to the fact that it minimizes the changes in the text plan that are required by incremental query modifications. Thus, (Franconi et al., 2010a) defines a query linearisation as a strict total order 2 on the query tree that satisfies the following conditions:",
"cite_spans": [
{
"start": 161,
"end": 177,
"text": "(Dongilli, 2008)",
"ref_id": "BIBREF2"
},
{
"start": 555,
"end": 579,
"text": "(Franconi et al., 2010a)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Document Planning",
"sec_num": "4.2.1"
},
{
"text": "\u2022 all labels associated with the edge's leaving node precede the edge label \u2022 the edge label is followed by at least one label associated with the edge's arriving node \u2022 between any two labels of a node there can only be (distinct) labels of the same node",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Planning",
"sec_num": "4.2.1"
},
{
"text": "The specific linearisation adopted in Quelo is defined by the depth-first traversal strategy of the query tree and a total order on the children which is based on the query operations. That is, the labels of a node are ordered according to the sequence applications of the add compatible concept operation. The children of a node are inversely ordered according to the sequence of applications of the add relation operation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Planning",
"sec_num": "4.2.1"
},
{
"text": "According to this linearisation definition, for the query tree (e) in Figure 1 the following linear order is produced:",
"cite_spans": [],
"ref_spans": [
{
"start": 70,
"end": 78,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Document Planning",
"sec_num": "4.2.1"
},
{
"text": "(3) a. M an marriedTo P erson livesIn House Beautif ul ownedBy RichP eron 2 A strict total order can be obtained by fixing an order in the children nodes and traversing the tree according to some tree traversal strategy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Planning",
"sec_num": "4.2.1"
},
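The depth-first linearisation can be illustrated with a small sketch that reproduces the order in (3a). We assume each node already stores its concept labels and outgoing edges in the order induced by the add compatible concept / add relation operations; the data structure names are ours.

```python
# Sketch of the depth-first linearisation of a query tree (illustrative only).
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Node:
    concepts: List[str]                                              # node labels
    edges: List[Tuple[str, "Node"]] = field(default_factory=list)    # (relation label, child)

def linearise(node: Node) -> List[str]:
    """Node labels first, then each edge label followed by the linearisation of its child."""
    order = list(node.concepts)
    for relation, child in node.edges:
        order.append(relation)
        order.extend(linearise(child))
    return order

# Query tree (e) of Figure 1, rebuilt by hand for illustration:
tree = Node(["Man"], [
    ("marriedTo", Node(["Person"])),
    ("livesIn", Node(["House", "Beautiful"], [("ownedBy", Node(["RichPerson"]))])),
])
assert linearise(tree) == ["Man", "marriedTo", "Person", "livesIn",
                           "House", "Beautiful", "ownedBy", "RichPerson"]
```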
{
"text": "Query Segmentation Given a linearised query q, the document planner uses some heuristics based on the number and the types of relations/concepts present in q to output a sequence of sub-formulae each of which will be verbalised as a sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Planning",
"sec_num": "4.2.1"
},
{
"text": "We now describe the main module of the generator namely the surface realiser which supports both the incremental refinement of a query and a minimal modification of the linear order between increments. This surface realiser is caracterised by the following three main features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incremental Surface Realisation and Linearisation Constraints",
"sec_num": "4.2.2"
},
{
"text": "We use a symbolic, grammarbased approach rather than a statistical one for two reasons. First, there is no training corpus available that would consist of knowledge base queries and their increments. Second, the approach must be portable and should apply to any knowledge base independent of the domain it covers and independent of the presence of a training corpus. By combining a lexicon automatically extracted from the ontology with a small grammar tailored to produce natural sounding queries, we provide a generator which can effectively apply to any ontology without requiring the construction of a training corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammar-Based",
"sec_num": null
},
{
"text": "Chart-Based A chart-based architecture enhances efficiency by avoiding the recomputation of intermediate structures while allowing for a natural implementation of the revisions (addition, deletion, substitution) operations required by the incremental formulation of user queries. We show how the chart can be used to implement these operations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammar-Based",
"sec_num": null
},
{
"text": "Beam search. As already mentioned, for ergonomic reasons, the linear order of the generated NL query should be minimally disturbed during query formulation. The generation system should also be sufficiently fast to support a timely Man/Machine interaction. We use beam search and a customised scoring function both to preserve linear order and to support efficiency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammar-Based",
"sec_num": null
},
{
"text": "We now introduce each of these components in more details.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammar-Based",
"sec_num": null
},
{
"text": "A tree adjoining grammar (TAG) is a tuple \u03a3, N, I, A, S with \u03a3 a set of terminals, N a set of non-terminals, I a finite set of initial trees, A a finite set of auxiliary trees, and S a distinguished non-terminal (S \u2208 N ). Initial trees are trees whose leaves are labeled with substitution nodes (marked with a down-arrow) or with terminal categories 3 . Auxiliary trees are distinguished by a foot node (marked with a star) whose category must be the same as that of the root node.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature-Based Tree Adjoining Grammar",
"sec_num": null
},
{
"text": "Two tree-composition operations are used to combine trees: substitution and adjunction. Substitution inserts a tree onto a substitution node of some other tree while adjunction inserts an auxiliary tree into a tree. In a Feature-Based Lexicalised TAG (FB-LTAG), tree nodes are furthermore decorated with two feature structures which are unified during derivation; and each tree is anchored with a lexical item. Figure 2 shows an example toy FB-LTAG with unification semantics. The dotted arrows indicate possible tree combinations (substitution for John, adjunction for often). As the trees are combined, the semantics is the union of their semantics modulo unification. Thus given the grammar and the derivation shown, the semantics of John often runs is as shown namely, named(j john), run(a,j), often(a). Chart-Based Surface Realisation Given an FB-LTAG G of the type described above, sentences can be generated from semantic formulae by (i) selecting all trees in G whose semantics subsumes part of the input formula and (ii) combining these trees using the FB-LTAG combining operations namely substitution and adjunction. Thus for instance, in Figure 2 , given the semantics l1:named(j john), lv:run(a,j), lv:often(a), the three trees shown are selected. When combined they produce a complete phrase structure tree whose yield (John runs often) is the generated sentence. Following (Gardent and Perez-Beltrachini, 2011), we implement an Earley style generation algorithm for FB-LTAG which makes use of the fact that the derivation trees of an FB-LTAG are context free and that an FB-LTAG can be converted to a a Feature-Based Regular Tree Grammar (FB-RTG) describing the derivation trees of this FB-LTAG 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 411,
"end": 419,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 1149,
"end": 1157,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Feature-Based Tree Adjoining Grammar",
"sec_num": null
},
{
"text": "NP j John l1:john(j) S b NP\u2193 c VP b a V a",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature-Based Tree Adjoining Grammar",
"sec_num": null
},
{
"text": "On the one hand, this Earley algorithm enhances efficiency in that (i) it avoids recomputing intermediate structures by storing them and (ii) it packs locally equivalent structures into a single representative (the most general one). Locally equivalent structures are taken to be partial derivation trees with identical semantic coverage and similar combinatorics (same number and type of substitution and adjunction requirements).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature-Based Tree Adjoining Grammar",
"sec_num": null
},
{
"text": "On the other hand, it naturally supports the range of revisions required for the incremental formulation of ontology-based queries. Let C be the current chart i.e., the chart built when generating a NL query from the formal query. Then additions, revisions and deletion can be handled as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature-Based Tree Adjoining Grammar",
"sec_num": null
},
{
"text": "\u2022 Add concept or property X: the grammar units selected by X are added to the agenda 5 and tried for combinations with the elements of C. \u2022 Substitute selection X with Y : all chart items derived from a grammar unit selected by an element of X are removed from the chart. Conversely, all chart items derived from a grammar unit selected by an element of Y are added to the agenda. All items in the agenda are then processed until generation halts. \u2022 Delete selection X: all chart items derived from a grammar unit selected by an element of X are removed from the chart. Intermediate structures that had previously used X are moved to the agenda and the agenda is processed until generation halts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature-Based Tree Adjoining Grammar",
"sec_num": null
},
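The chart/agenda bookkeeping behind these three operations can be sketched as follows. Item, select_units and the surrounding generation loop are simplified placeholders of our own; actual FB-RTG chart items carry feature structures and combination requirements not shown here.

```python
# Illustrative sketch of the add / substitute / delete chart operations.
from dataclasses import dataclass, field
from typing import Callable, List, Set

@dataclass
class Item:
    source_literals: Set[str]                            # input literals this (partial) derivation covers
    parts: List["Item"] = field(default_factory=list)    # sub-items it was built from

class IncrementalChart:
    def __init__(self, select_units: Callable[[str], List[Item]]):
        self.select_units = select_units   # maps a concept/relation to its grammar units
        self.chart: List[Item] = []        # stored intermediate structures
        self.agenda: List[Item] = []       # items still to be tried for combination

    def add(self, literal: str) -> None:
        """Add concept/relation X: push the grammar units selected by X."""
        self.agenda.extend(self.select_units(literal))

    def delete(self, literal: str) -> None:
        """Delete X: drop items built from X; re-agenda the X-free items
        that had been combined into them so they can recombine."""
        removed = [it for it in self.chart if literal in it.source_literals]
        self.chart = [it for it in self.chart if literal not in it.source_literals]
        for item in self.chart:
            if any(item in r.parts for r in removed):
                self.agenda.append(item)

    def substitute(self, old: str, new: str) -> None:
        """Substitute X with Y: delete X, then add Y, then resume generation."""
        self.delete(old)
        self.add(new)
```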
{
"text": "Beam Search To enhance efficiency and favor those structures which best preserve the word order while covering maximal input, we base our beam search on a scoring function combining linear order and semantic coverage information. This works as follows. First, we associate each literal in the input query with its positional information e.g., This positional information is copied over to each FB-LTAG tree selected by a given literal and is then used to compute a word order cost (C wo ) for each derived tree as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature-Based Tree Adjoining Grammar",
"sec_num": null
},
{
"text": "C wo (t i+j ) = C wo (t i ) + C wo (t j ) + C wo (t i + t j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature-Based Tree Adjoining Grammar",
"sec_num": null
},
{
"text": "That is the cost of a tree t i+j obtained by combining t i and t j is the sum of the cost of each of these trees plus the cost incurred by combining these two trees. We define this latter cost to be proportional to the distance separating the actual position (ap i ) of the tree (t i ) being substituted/adjoined in from its required position (rp i ). If t i is substituted/adjoined at position n to the right (left) of the anchor of a tree t j with position p j , then the actual position of t i is pj + n (pj \u2212 n) and the cost of combining",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature-Based Tree Adjoining Grammar",
"sec_num": null
},
{
"text": "t i with t j is | pj + n \u2212 rp i | /\u03b1 (| pj \u2212 n \u2212 rp i | /\u03b1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature-Based Tree Adjoining Grammar",
"sec_num": null
},
{
"text": "where we empirically determined \u03b1 to be 100 6 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature-Based Tree Adjoining Grammar",
"sec_num": null
},
{
"text": "Finally, the total score of a tree reflects the relation between the cost of the built tree, i.e. its word order cost, and its semantic coverage, i.e. nb. of literals from the input semantics:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature-Based Tree Adjoining Grammar",
"sec_num": null
},
{
"text": "S(t i ) = \u2212(|literals| \u2212 1) C wo (t i ) = 0 C wo (t i )/(|literals| \u2212 1) otherwise",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature-Based Tree Adjoining Grammar",
"sec_num": null
},
{
"text": "The total score is defined by cases. Those trees with C wo = 0 get a negative value according to their input coverage (i.e. those that cover a larger subset of the input semantics are favored as the trees in the agenda are ordered by increasing total score). Conversely, those trees with C wo > 0 get a score that is the word order cost proportional to the covered input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature-Based Tree Adjoining Grammar",
"sec_num": null
},
{
"text": "In effect, this scoring mechanism favors trees with low word order cost and large semantic coverage. The beam search will select those trees with lowest score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature-Based Tree Adjoining Grammar",
"sec_num": null
},
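The word order cost and total score defined above can be written down directly. The sketch below uses our own attribute names, with alpha = 100 and n = 1 as reported in the paper.

```python
# Sketch of the beam-search scoring, following the formulas above (illustrative names).
ALPHA = 100.0

def combination_cost(anchor_pos: int, offset: int, required_pos: int) -> float:
    """C_wo(t_i + t_j): distance between the actual position (anchor_pos + offset,
    with offset = +n or -n) and the required position rp_i, divided by alpha."""
    return abs(anchor_pos + offset - required_pos) / ALPHA

def word_order_cost(cost_i: float, cost_j: float, combine_cost: float) -> float:
    """C_wo(t_{i+j}) = C_wo(t_i) + C_wo(t_j) + C_wo(t_i + t_j)."""
    return cost_i + cost_j + combine_cost

def total_score(c_wo: float, covered_literals: int) -> float:
    """Trees with zero word-order cost are ranked by coverage (larger coverage gives a
    more negative, hence better, score); otherwise the cost is normalised by coverage.
    We assume a tree with c_wo > 0 covers at least two literals, since a nonzero cost
    only arises from combining two trees."""
    if c_wo == 0:
        return -(covered_literals - 1)
    return c_wo / (covered_literals - 1)
```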
{
"text": "The referring expression (RE) module takes as input the sequence of phrase structure trees output by the surface realiser and uses heuristics to decide for each NP whether it should be verbalised as a pronoun, a definite or an indefinite NP. These heuristics are based on the linear order and morpho-syntactic information contained in the phrase structure trees of the generated sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Referring Expression Generation",
"sec_num": "4.2.3"
},
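Purely for illustration, such heuristics might look like the sketch below. The paper does not spell out its RE rules, so the decisions here (first mention yields an indefinite NP, a re-mention within the same sentence a pronoun, a re-mention across sentences a definite NP) are our own assumptions, not Quelo's actual module.

```python
# Hypothetical NP-form heuristic based on linear order of mentions (not from the paper).
from typing import Set, Tuple

def np_form(entity: str, sentence_idx: int,
            mentioned: Set[Tuple[str, int]]) -> str:
    """Return 'indefinite', 'pronoun' or 'definite' for an NP mention."""
    if not any(e == entity for e, _ in mentioned):
        form = "indefinite"                 # first mention overall
    elif (entity, sentence_idx) in mentioned:
        form = "pronoun"                    # already mentioned in this sentence
    else:
        form = "definite"                   # mentioned in an earlier sentence
    mentioned.add((entity, sentence_idx))
    return form
```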
{
"text": "We conducted evaluation experiments designed to address the following questions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and evaluation",
"sec_num": "5"
},
{
"text": "\u2022 Does the scoring mechanism appropriately capture the ordering constraints on the generated queries ? That is, does it ensure that the generated queries respect the strict total order of the query tree linearisation ? \u2022 Does our grammar based approach produce more fluent and less ambiguous NL query than the initial template based approach currently used by Quelo ? \u2022 Does the automatic extraction of lexicons from ontology support generic coverage of arbitrary ontologies ? We start by describing the grammar used. We then report on the results obtained for each of these evaluation points.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and evaluation",
"sec_num": "5"
},
{
"text": "We specify an FB-LTAG with unification semantics which covers a set of basic constructions used to formulate queries namely, active and passive transitive verbs, adjectives, prepositional phrases, relative and elliptical clauses, gerund and participle modifiers. The resulting grammar consists of 53 FB-LTAG pairs of syntactic trees and semantic schema.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammar and Lexicon",
"sec_num": "5.1"
},
{
"text": "To ensure the appropriate syntax/semantic interface, we make explicit the arguments of a relation using the variables associated with the nodes of the query tree. Thus for instance, given the rightmost query tree shown in Figure 1 , the flat semantics input to surface realisation is",
"cite_spans": [],
"ref_spans": [
{
"start": 222,
"end": 231,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Grammar and Lexicon",
"sec_num": "5.1"
},
{
"text": "{Man(x), Person(y), House(w), Beautiful(w), RichPerson(z), marriedTo(x,y), livesIn(x,w), ownedBy(w,z)}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammar and Lexicon",
"sec_num": "5.1"
},
{
"text": "For each ontology, a lexicon mapping concepts and relations to FB-LTAG trees is automatically derived from the ontology using (Trevisan, 2010) 's approach. We specify for each experiment below, the size of the extracted lexicon.",
"cite_spans": [
{
"start": 126,
"end": 142,
"text": "(Trevisan, 2010)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Grammar and Lexicon",
"sec_num": "5.1"
},
{
"text": "In this first experiment, we manually examined whether the incremental algorithm we propose supports the generation of NL queries whose word order matches the linearisation of the input query tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linearisation",
"sec_num": "5.2"
},
{
"text": "We created four series of queries such that each serie is a sequence q 1 . . . q n where q i+1 is an increment of q i . That is, q i+1 is derived from q i by adding, removing or substituting to q i a concept or a relation. The series were devised so as to encompass the whole range of possible operations at different points of the preceding query (e.g., at the last node/edge or on some node/edge occurring further to the left of the previous query); and include 14 revisions on 4 initial queries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linearisation",
"sec_num": "5.2"
},
{
"text": "For all queries, the word order of the best NL query produced by the generator was found to match the linearisation of the DL query.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linearisation",
"sec_num": "5.2"
},
{
"text": "Following the so-called consensus model (Power and Third, 2010) , the current, template based version of Quelo generates one clause per relation 7 . Thus for instance, template-based Quelo will generate (5a) while our grammar based approach supports the generation of arguably more fluent sentences such as (5b).",
"cite_spans": [
{
"start": 40,
"end": 63,
"text": "(Power and Third, 2010)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fluency and Clarity",
"sec_num": "5.3"
},
{
"text": "(5) a. I am looking for a car. Its make should be a Land Rover. The body style of the car should be an off-road car. The exterior color of the car should be beige.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fluency and Clarity",
"sec_num": "5.3"
},
{
"text": "b. I am looking for car whose make is a Land Rover, whose body style is an off-road car and whose exterior color is beige.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fluency and Clarity",
"sec_num": "5.3"
},
{
"text": "We ran two experiments designed to assess how fluency impacts users. The first experiment aims to assess how Quelo template based queries are perceived by the users in terms of clarity and fluency, the second aims to compare these template based queries with the queries produced by our grammar-based approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fluency and Clarity",
"sec_num": "5.3"
},
{
"text": "Assessing Quelo template-based queries Using the Quelo interface, we generated a set of 41 queries chosen to capture different combinations of concepts and relations. Eight persons (four native speakers of English, four with C2 level of competence for foreign learners of English) were then asked to classify (a binary choice) each query in terms of clarity and fluency. Following (Kow and Belz, 2012) , we take Fluency to be a single quality criterion intended to capture language quality as distinct from its meaning, i.e. how well a piece of text reads. In contrast, Clarity/ambiguity refers to ease of understanding (Is the sentence easy to understand?). Taking the average of the majority vote, we found that the judges evaluated the queries as non fluent in 50% of the cases and as unclear in 10% of the cases. In other words, template based queries were found to be disfluent about half of the time and unclear to a lesser extent. The major observation made by most of the participants was that the generated text is too repetitive and lacks aggregation. Comparing template-and grammar-based queries In this second experiment, we asked 10 persons (all proficient in the English language) to compare pairs of NL queries where one query is produced using templates and the other using our grammar-based generation algorithm. The evaluation was done online using the LG-Eval toolkit (Kow and Belz, 2012) and geared to collect relative quality judgements using visual analogue scales. After logging in, judges were given a description of the task. The sentence pairs were displayed as shown in Figure 3 with one sentence to the left and the other to the right. The judges were instructed to move the slider to the left to favor the sentence shown on the left side of the screen; and to the right to favor the sentence appearing to the right. Not moving the slider means that both sentences rank equally. To avoid creating a bias, the sentences from both systems were equally distributed to both sides of the screen.",
"cite_spans": [
{
"start": 381,
"end": 401,
"text": "(Kow and Belz, 2012)",
"ref_id": "BIBREF11"
},
{
"start": 1387,
"end": 1407,
"text": "(Kow and Belz, 2012)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 1597,
"end": 1605,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Fluency and Clarity",
"sec_num": "5.3"
},
{
"text": "For this experiment, we used 14 queries built from two ontologies, an ontology on cars and the other on universities. The extracted lexicons for each of these ontology contained 465 and 297 entries respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fluency and Clarity",
"sec_num": "5.3"
},
{
"text": "The results indicate that the queries generated by the grammar based approach are perceived as more fluent than those produced by the template based approach (19.76 points in average for the grammar based approach against 7.20 for the template based approach). Furthermore, although the template based queries are perceived as clearer (8.57 for Quelo, 6.87 for our approach), the difference is not statistically significant (p < 0.5). Overall thus, the grammar based approach appears to produce verbalisations that are better accepted by the users. Concerning clarity, we observed that longer sentences let through by document planning were often deemed unclear. In future work, we plan to improve clarity by better integrating document planning and sentence realisation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fluency and Clarity",
"sec_num": "5.3"
},
{
"text": "One motivation for the symbolic based approach was the lack of training corpus and the need for portability: the query interface should be usable independently of the underlying ontology and of the existence of a training corpus. To support coverage, we combined the grammar based approach with a lexicon which is automatically extracted from the ontology using the methodology described in (Trevisan, 2010) . When tested on a corpus of 200 ontologies, this approach was shown to be able to provide appropriate verbalisation templates for about 85% of the relation identifiers present in these ontologies. 12 000 relation identifiers were extracted from the 200 ontologies and 13 syntactic templates were found to be sufficient to verbalise these relation identifiers (see (Trevisan, 2010) for more details on this evaluation).",
"cite_spans": [
{
"start": 391,
"end": 407,
"text": "(Trevisan, 2010)",
"ref_id": "BIBREF17"
},
{
"start": 773,
"end": 789,
"text": "(Trevisan, 2010)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Coverage",
"sec_num": "5.4"
},
{
"text": "That is, in general, the extracted lexicons permit covering about 85% of the ontological data. In addition, we evaluated the coverage of our approach by running the generator on 40 queries generated from five distinct ontologies. The domains observed are cinema, wines, human abilities, disabilities, and assistive devices, e-commerce on the Web, and a fishery database for observations about an aquatic resource. The extracted lexicons contained in average 453 lexical entries and the coverage (proportion of DL queries for which the generator produced a NL query) was 87%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coverage",
"sec_num": "5.4"
},
{
"text": "Fuller coverage could be obtained by manually adding lexical entries, or by developing new ways of inducing lexical entries from ontologies (c.f. e.g. (Walter et al., 2013) ).",
"cite_spans": [
{
"start": 151,
"end": 172,
"text": "(Walter et al., 2013)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Coverage",
"sec_num": "5.4"
},
{
"text": "Conceptual authoring (CA) allows the user to query a knowledge base without having any knowledge either of the formal representation language used to specify that knowledge base or of the content of the knowledge base. Although this approach builds on a tight integration between syntax and semantics and requires an efficient processing of revisions, existing CA tools predominantly make use of ad hoc generation algorithms and restricted computational grammars (e.g., Definite Clause Grammars or templates). In this paper, we have shown that FB-LTAG and chart based surface realisation provide a natural framework in which to implement conceptual authoring. In particular, we show that the chart based approach naturally supports the definition of an incremental algorithm for query verbalisation; and that the added fluency provided by the grammar based approach potentially provides for query interfaces that are better accepted by the human evaluators.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "In the future, we would like to investigate the interaction between context, document structuring and surface realisation. In our experiments we found out that this interaction strongly impacts fluency whereby for instance, a complex sentence might be perceived as more fluent than several clauses but a too long sentence will be perceived as difficult to read (non fluent). Using data that can now be collected using our grammar based approach to query verbalisation and generalising over FB-LTAG tree names rather than lemmas or POS tags, we plan to explore how e.g., Conditional Random Fields can be used to model these interactions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "krdbapp.inf.unibz.it:8080/quelo",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For a more detailed introduction to TAG and FB-LTAG, see(Vijay-Shanker and Joshi, 1988).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For more details on this algorithm, we refer the reader to(Gardent and Perez-Beltrachini, 2010).5 The agenda is a book keeping device which stores all items that needs to be processed i.e., which need to be tried for combination with elements in the chart.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In the current implementation we assume that n = 1. Furthermore, as ti might be a derived tree we also add to Cwo(ti + tj ) the cost computed on each tree t k used in the derivation of ti with respect to tj.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This is modulo aggregation of relations. Thus two subject sharing relations may be realised in the same clause.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Marco Trevisan, Paolo Guagliardo and Alexandre Denis for facilitating the access to the libraries they developed and to Natalia Korchagina and the judges who participated in the evaluation experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The description logic handbook: theory, implementation, and applications",
"authors": [
{
"first": "Franz",
"middle": [],
"last": "Baader",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Baader. 2003. The description logic handbook: theory, implementation, and applications. Cam- bridge university press.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Conditional Random Fields for Responsive Surface Realisation using Global Features",
"authors": [
{
"first": "Nina",
"middle": [],
"last": "Dethlefs",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Hastie",
"suffix": ""
},
{
"first": "Heriberto",
"middle": [],
"last": "Cuay\u00e1huitl",
"suffix": ""
},
{
"first": "Oliver",
"middle": [],
"last": "Lemon",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nina Dethlefs, Helen Hastie, Heriberto Cuay\u00e1huitl, and Oliver Lemon. 2013. Conditional Random Fields for Responsive Surface Realisation using Global Features. Proceedings of ACL, Sofia, Bulgaria.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Natural language rendering of a conjunctive query",
"authors": [
{
"first": "Paolo",
"middle": [],
"last": "Dongilli",
"suffix": ""
}
],
"year": 2008,
"venue": "KRDB Research Centre Technical Report",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paolo Dongilli. 2008. Natural language rendering of a conjunctive query. KRDB Research Centre Techni- cal Report No. KRDB08-3). Bozen, IT: Free Univer- sity of Bozen-Bolzano, 2:5.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "An intelligent query interface based on ontology navigation",
"authors": [
{
"first": "E",
"middle": [],
"last": "Franconi",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Guagliardo",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Trevisan",
"suffix": ""
}
],
"year": 2010,
"venue": "Workshop on Visual Interfaces to the Social and Semantic Web",
"volume": "10",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Franconi, P. Guagliardo, and M. Trevisan. 2010a. An intelligent query interface based on ontology navigation. In Workshop on Visual Interfaces to the Social and Semantic Web, VISSW, volume 10. Cite- seer.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Quelo: a NL-based intelligent query interface",
"authors": [
{
"first": "E",
"middle": [],
"last": "Franconi",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Guagliardo",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Trevisan",
"suffix": ""
}
],
"year": 2010,
"venue": "Pre-Proceedings of the Second Workshop on Controlled Natural Languages",
"volume": "622",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Franconi, P. Guagliardo, and M. Trevisan. 2010b. Quelo: a NL-based intelligent query interface. In Pre-Proceedings of the Second Workshop on Con- trolled Natural Languages, volume 622.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A natural language ontology-driven query interface",
"authors": [
{
"first": "E",
"middle": [],
"last": "Franconi",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Guagliardo",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Tessaris",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Trevisan",
"suffix": ""
}
],
"year": 2011,
"venue": "9th International Conference on Terminology and Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Franconi, P. Guagliardo, S. Tessaris, and M. Tre- visan. 2011a. A natural language ontology-driven query interface. In 9th International Conference on Terminology and Artificial Intelligence, page 43.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Quelo: an Ontology-Driven Query Interface",
"authors": [
{
"first": "E",
"middle": [],
"last": "Franconi",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Guagliardo",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Trevisan",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Tessaris",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Franconi, P. Guagliardo, M. Trevisan, and S. Tes- saris. 2011b. Quelo: an Ontology-Driven Query Interface. In Description Logics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "RTG based Surface Realisation for TAG",
"authors": [
{
"first": "C",
"middle": [],
"last": "Gardent",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Perez-Beltrachini",
"suffix": ""
}
],
"year": 2010,
"venue": "COLING'10, Beijing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Gardent and L. Perez-Beltrachini. 2010. RTG based Surface Realisation for TAG. In COLING'10, Bei- jing, China.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Using regular tree grammar to enhance surface realisation",
"authors": [
{
"first": "B",
"middle": [],
"last": "Gardent",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Perez-Beltrachini",
"suffix": ""
}
],
"year": 2011,
"venue": "Special Issue on Finite State Methods and Models in Natural Language Processing",
"volume": "17",
"issue": "",
"pages": "185--201",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Gottesman Gardent, C. and L. Perez-Beltrachini. 2011. Using regular tree grammar to enhance sur- face realisation. Natural Language Engineering, 17:185-201. Special Issue on Finite State Methods and Models in Natural Language Processing.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Theoretical foundations of an ontology-based visual tool for query formulation support",
"authors": [
{
"first": "Paolo",
"middle": [],
"last": "Guagliardo",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paolo Guagliardo. 2009. Theoretical foundations of an ontology-based visual tool for query formulation support. Technical report, KRDB Research Centre, Free University of Bozen-Bolzano, October.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Composing questions through conceptual authoring",
"authors": [
{
"first": "C",
"middle": [],
"last": "Hallett",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Scott",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Power",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "1",
"pages": "105--133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Hallett, D. Scott, and R. Power. 2007. Composing questions through conceptual authoring. Computa- tional Linguistics, 33(1):105-133.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "LG-Eval: A Toolkit for Creating Online Language Evaluation Experiments",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Kow",
"suffix": ""
},
{
"first": "Anja",
"middle": [],
"last": "Belz",
"suffix": ""
}
],
"year": 2012,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "4033--4037",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Kow and Anja Belz. 2012. LG-Eval: A Toolkit for Creating Online Language Evaluation Experi- ments. In LREC, pages 4033-4037.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Stochastic language generation for spoken dialogue systems",
"authors": [
{
"first": "H",
"middle": [],
"last": "Alice",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"I"
],
"last": "Oh",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rudnicky",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 2000 ANLP/NAACL Workshop on Conversational systems",
"volume": "3",
"issue": "",
"pages": "27--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alice H Oh and Alexander I Rudnicky. 2000. Stochas- tic language generation for spoken dialogue sys- tems. In Proceedings of the 2000 ANLP/NAACL Workshop on Conversational systems-Volume 3, pages 27-32. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Expressing owl axioms by english sentences: dubious in theory, feasible in practice",
"authors": [
{
"first": "R",
"middle": [],
"last": "Power",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Third",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics: Posters",
"volume": "",
"issue": "",
"pages": "1006--1013",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Power and A. Third. 2010. Expressing owl ax- ioms by english sentences: dubious in theory, fea- sible in practice. In Proceedings of the 23rd Inter- national Conference on Computational Linguistics: Posters, pages 1006-1013. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A general, abstract model of incremental dialogue processing",
"authors": [
{
"first": "David",
"middle": [],
"last": "Schlangen",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Skantze",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "710--718",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Schlangen and Gabriel Skantze. 2009. A gen- eral, abstract model of incremental dialogue pro- cessing. In Proceedings of the 12th Conference of the European Chapter of the Association for Com- putational Linguistics, pages 710-718. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Incremental reference resolution: The task, metrics for evaluation, and a bayesian filtering model that is sensitive to disfluencies",
"authors": [
{
"first": "David",
"middle": [],
"last": "Schlangen",
"suffix": ""
},
{
"first": "Timo",
"middle": [],
"last": "Baumann",
"suffix": ""
},
{
"first": "Michaela",
"middle": [],
"last": "Atterer",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the SIGDIAL 2009 Conference: The 10th Annual Meeting of the Special Interest Group on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "30--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Schlangen, Timo Baumann, and Michaela At- terer. 2009. Incremental reference resolution: The task, metrics for evaluation, and a bayesian filtering model that is sensitive to disfluencies. In Proceed- ings of the SIGDIAL 2009 Conference: The 10th An- nual Meeting of the Special Interest Group on Dis- course and Dialogue, pages 30-37. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Menu-based natural language understanding",
"authors": [
{
"first": "H",
"middle": [],
"last": "Tennant",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Ross",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Saenz",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Thompson",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1983,
"venue": "Proceedings of the 21st annual meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "151--158",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. R Tennant, K. M Ross, R. M Saenz, C. W Thomp- son, and J. R Miller. 1983. Menu-based natural lan- guage understanding. In Proceedings of the 21st an- nual meeting on Association for Computational Lin- guistics, pages 151-158. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A Portable Menuguided Natural Language Interface to Knowledge Bases for Querytool",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Trevisan",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Trevisan. 2010. A Portable Menuguided Nat- ural Language Interface to Knowledge Bases for Querytool. Ph.D. thesis, Masters thesis, Free Uni- versity of Bozen-Bolzano (Italy) and University of Groningen (Netherlands).",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Feature based tags",
"authors": [
{
"first": "K",
"middle": [],
"last": "Vijay-Shanker",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Joshi",
"suffix": ""
}
],
"year": 1988,
"venue": "Proceedings of the 12th International Conference of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "573--577",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Vijay-Shanker and A. Joshi. 1988. Feature based tags. In Proceedings of the 12th International Con- ference of the Association for Computational Lin- guistics, pages 573-577, Budapest.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Ontologybased integration of information-a survey of existing approaches",
"authors": [
{
"first": "Holger",
"middle": [],
"last": "Wache",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Voegele",
"suffix": ""
},
{
"first": "Ubbo",
"middle": [],
"last": "Visser",
"suffix": ""
},
{
"first": "Heiner",
"middle": [],
"last": "Stuckenschmidt",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "H\u00fcbner",
"suffix": ""
}
],
"year": 2001,
"venue": "IJCAI-01 workshop: ontologies and information sharing",
"volume": "2001",
"issue": "",
"pages": "108--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Holger Wache, Thomas Voegele, Ubbo Visser, Heiner Stuckenschmidt, Gerhard Schuster, Holger Neu- mann, and Sebastian H\u00fcbner. 2001. Ontology- based integration of information-a survey of existing approaches. In IJCAI-01 workshop: ontologies and information sharing, volume 2001, pages 108-117. Citeseer.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A corpus-based approach for the induction of ontology lexica",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Walter",
"suffix": ""
},
{
"first": "Christina",
"middle": [],
"last": "Unger",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Cimiano",
"suffix": ""
}
],
"year": 2013,
"venue": "Natural Language Processing and Information Systems",
"volume": "",
"issue": "",
"pages": "102--113",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Walter, Christina Unger, and Philipp Cimi- ano. 2013. A corpus-based approach for the induc- tion of ontology lexica. In Natural Language Pro- cessing and Information Systems, pages 102-113. Springer.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "Example of query tree and incremental query construction. query into sentence size chunks.",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "(j john), lv:run(a,j), lv:often(a)",
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"text": "Derivation and Semantics for \"John often runs\"",
"uris": null
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"text": "Online Evaluation.",
"uris": null
}
}
}
}