{
"paper_id": "Q14-1042",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:11:33.107550Z"
},
"title": "A New Corpus and Imitation Learning Framework for Context-Dependent Semantic Parsing",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University College London",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Cambridge",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Semantic parsing is the task of translating natural language utterances into a machineinterpretable meaning representation. Most approaches to this task have been evaluated on a small number of existing corpora which assume that all utterances must be interpreted according to a database and typically ignore context. In this paper we present a new, publicly available corpus for context-dependent semantic parsing. The MRL used for the annotation was designed to support a portable, interactive tourist information system. We develop a semantic parser for this corpus by adapting the imitation learning algorithm DAGGER without requiring alignment information during training. DAGGER improves upon independently trained classifiers by 9.0 and 4.8 points in F-score on the development and test sets respectively.",
"pdf_parse": {
"paper_id": "Q14-1042",
"_pdf_hash": "",
"abstract": [
{
"text": "Semantic parsing is the task of translating natural language utterances into a machineinterpretable meaning representation. Most approaches to this task have been evaluated on a small number of existing corpora which assume that all utterances must be interpreted according to a database and typically ignore context. In this paper we present a new, publicly available corpus for context-dependent semantic parsing. The MRL used for the annotation was designed to support a portable, interactive tourist information system. We develop a semantic parser for this corpus by adapting the imitation learning algorithm DAGGER without requiring alignment information during training. DAGGER improves upon independently trained classifiers by 9.0 and 4.8 points in F-score on the development and test sets respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Semantic parsing is the task of translating natural language utterances into a machine-interpretable meaning representation (MR). Progress in semantic parsing has been facilitated by the existence of corpora containing utterances annotated with MRs, the most commonly used being ATIS (Dahl et al., 1994) and GeoQuery (Zelle, 1995) . As these corpora cover rather narrow application domains, recent work has developed corpora to support natural language interfaces to the Freebase database (Cai and Yates, 2013) , as well as the development of MT systems (Banarescu et al., 2013) .",
"cite_spans": [
{
"start": 284,
"end": 303,
"text": "(Dahl et al., 1994)",
"ref_id": "BIBREF10"
},
{
"start": 317,
"end": 330,
"text": "(Zelle, 1995)",
"ref_id": "BIBREF38"
},
{
"start": 489,
"end": 510,
"text": "(Cai and Yates, 2013)",
"ref_id": "BIBREF7"
},
{
"start": 554,
"end": 578,
"text": "(Banarescu et al., 2013)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, these existing corpora have some important limitations. The MRs accompanying the utterances are typically restricted to some form of database query. Furthermore, in most cases each utterance is interpreted in isolation; thus utterances that use coreference or whose semantics are contextdependent are typically ignored. In this paper we present a new corpus for context-dependent semantic parsing to support the development of an interactive navigation and exploration system for tourismrelated activities. The new corpus was annotated with MRs that can handle dialog context such as coreference and can accommodate utterances that are not interpretable according to a database, e.g. repetition requests. The utterances were collected in experiments with human subjects, and contain phenomena such as ellipsis and disfluency. We developed guidelines and annotated 17 dialogs containing 2,374 utterances, with 82.9% exact match agreement between two annotators.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We also develop a semantic parser for this corpus. As the output MRs are rather complex, instead of adopting an approach that searches the output space exhaustively, we use the imitation learning algorithm DAGGER (Ross et al., 2011 ) that converts learning a structured prediction model into learning a set of classification models. We take advantage of its ability to learn with non-decomposable loss functions and extend it to handle the absence of alignment information during training by developing a randomized expert policy. Our approach improves upon independently trained classifiers by 9.0 and 4.8 F-score on the development and test sets.",
"cite_spans": [
{
"start": 213,
"end": 231,
"text": "(Ross et al., 2011",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our proposed MR language (MRL) was designed in the context of the portable, interactive naviga-tion and exploration system of Janarthanam et al. (2013) , through which users can obtain information about places and objects of interest, such as monuments and restaurants, as well as directions (see dialog in Fig. 1) . The system is aware of the position of the user (through the use of GPS technology) and is designed to be interactive; hence it can initiate the dialog by offering information on nearby points of interest and correcting the route taken by the user if needed. The MRs returned by the semantic parser must represent the user utterances adequately so that the system can generate the appropriate response. The system was developed in the context of the SPACEBOOK project. 1 The MRL uses a flat syntax composed of elementary predications, based loosely on minimal recursion semantics (Copestake et al., 2005) , but without an explicit treatment of scope. Each MR consists of a dialog act representing the overall function of the utterance, followed for some dialog acts by an unordered set of predicates. All predicates are implicitly conjoined and the names of their arguments specified to improve readability and to allow for some of the arguments to be optional. The argument values can be either constants from the controlled vocabulary, verbatim string extracts from the utterance (enclosed in quotes) or variables (Xno). Negation is denoted by a tilde (\u02dc) in front of predicates. The variables are used to bind together the arguments of different predicates within an utterance, as well as to denote coreference across utterances.",
"cite_spans": [
{
"start": 126,
"end": 151,
"text": "Janarthanam et al. (2013)",
"ref_id": "BIBREF18"
},
{
"start": 786,
"end": 787,
"text": "1",
"ref_id": null
},
{
"start": 897,
"end": 921,
"text": "(Copestake et al., 2005)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 307,
"end": 314,
"text": "Fig. 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Meaning Representation Language",
"sec_num": "2"
},
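{
"text": "To make the MRL syntax concrete, the following is a minimal Python sketch of one possible in-memory representation; the class and field names are ours and not part of the corpus release, and the duplicate location argument of distance is renamed location_2 only to keep dictionary keys unique.\n\nfrom dataclasses import dataclass, field\nfrom typing import Dict, List\n\n@dataclass\nclass Predicate:\n    name: str              # e.g. 'isA', 'hasProperty'\n    args: Dict[str, str]   # argument name -> constant, variable ('X1') or quoted string\n    negated: bool = False  # rendered as a leading tilde in the MRL\n    focus: bool = False    # rendered as a leading asterisk (focal point)\n\n@dataclass\nclass MR:\n    dialog_act: str        # e.g. 'set_question'; acts like 'acknowledge' carry no predicates\n    predicates: List[Predicate] = field(default_factory=list)\n\n# First utterance of Fig. 1: \"what's the nearest italian, em, for a meal?\"\nmr = MR('set_question', [\n    Predicate('isA', {'id': 'X1', 'type': 'restaurant'}, focus=True),\n    Predicate('def', {'id': 'X1'}),\n    Predicate('hasProperty', {'id': 'X1', 'property': 'cuisine', 'value': 'italian'}),\n    Predicate('distance', {'location': '@USER', 'location_2': 'X1', 'value': 'X2'}),\n    Predicate('argmin', {'argument': 'X1', 'value': 'X2'}),\n])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Meaning Representation Language",
"sec_num": "2"
},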
{
"text": "The goals in designing the MRL were to remain close to existing semantic formalisms, whilst at the same time producing an MRL that is particularly suited to the application at hand (Janarthanam et al., 2013) . We also wanted an MRL that could be computed with efficiently and accurately, given the nature of the NL input. Hence we developed an MRL that is able to express the relevant semantics for the majority of the utterances in our data, without moving to the full expressive power of, e.g., DRT.",
"cite_spans": [
{
"start": 181,
"end": 207,
"text": "(Janarthanam et al., 2013)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Meaning Representation Language",
"sec_num": "2"
},
{
"text": "The dialog acts are utterance-level labels which capture the overall function of the utterance in the dialog, for example whether an utterance is a question seeking a list as an answer, a statement of information, an acknowledgement, an instruction 1 www.spacebook-project.eu USER what's the nearest italian, em, for a meal? dialogAct(set_question) * isA(id:X1, type:restaurant) def(id:X1) hasProperty(id:X1, property:cuisine, value:\"italian\") distance(location:@USER, location:X1, value:X2) argmin(argument:X1, value:X2) WIZARD vapiano's. dialogAct(inform) isA(id:X4, type:restaurant) * isNamed(id:X4, name:\"vapiano's\") equivalent(id:X1, id:X4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialog acts",
"sec_num": null
},
{
"text": "Figure 1: Sample dialog annotated with MRs.\nUSER: what's the nearest italian, em, for a meal?\ndialogAct(set_question) * isA(id:X1, type:restaurant) def(id:X1) hasProperty(id:X1, property:cuisine, value:\"italian\") distance(location:@USER, location:X1, value:X2) argmin(argument:X1, value:X2)\nWIZARD: vapiano's.\ndialogAct(inform) isA(id:X4, type:restaurant) * isNamed(id:X4, name:\"vapiano's\") equivalent(id:X1, id:X4)\nUSER: take me to vapiano!\ndialogAct(set_question) * route(from_location:@USER, to_location:X4) isA(id:X4, type:restaurant) isNamed(id:X4, name:\"vapiano\")\nWIZARD: certainly.\ndialogAct(acknowledge)\nWIZARD: keep walking straight down clerk street.\ndialogAct(instruct) * walk(agent:@USER, along_location:X1, direction:forward) isA(id:X1, type:street) isNamed(id:X1, name:\"clerk street\")\nUSER: yes.\ndialogAct(acknowledge)\nUSER: what is this church?\ndialogAct(set_question) * isA(id:X2, type:church) index(id:X2)\nWIZARD: sorry, can you say this again?\ndialogAct(repeat)\nUSER: i said what is this church on my left!\ndialogAct(set_question) * isA(id:X2, type:church) index(id:X2) position(id:X2, ref:@USER, location:left)\nWIZARD: it is saint john's.\ndialogAct(inform) isA(id:X3, type:church) * isNamed(id:X3, name:\"saint john's\") equivalent(id:X2, id:X3)\nUSER: A sign here says it is saint mark's.\ndialogAct(inform) isA(id:X4, type:church) * isNamed(id:X4, name:\"saint mark's\") equivalent(id:X2, id:X4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialog acts",
"sec_num": null
},
{
"text": "Figure 1: Sample dialog annotated with MRs or a repetition request (set_question, inform, acknowledge, instruct and repeat in Figure 1) . The focal point together with the act provide similar information to the intent annotation in ATIS (Tur et al., 2010) . The acts defined in the proposed MRL follow the guidelines proposed by Allen and Core (1997) , Stolcke et al. (2000) and Bunt et al. (2012) . The dialog acts are divided into two categories. The first category contains those that are accompanied by a set of predicates to represent the semantics of the sentence, such as set_question and inform. For these acts we denote their focal points -for example the piece of information requested in a set_question -with an asterisk ( * ) in front of the relevant predicate. The second category contains dialog acts that are not accompanied by predicates, such as acknowledge and repeat.",
"cite_spans": [
{
"start": 237,
"end": 255,
"text": "(Tur et al., 2010)",
"ref_id": "BIBREF33"
},
{
"start": 329,
"end": 350,
"text": "Allen and Core (1997)",
"ref_id": "BIBREF0"
},
{
"start": 353,
"end": 374,
"text": "Stolcke et al. (2000)",
"ref_id": "BIBREF32"
},
{
"start": 379,
"end": 397,
"text": "Bunt et al. (2012)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 126,
"end": 135,
"text": "Figure 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "dialogAct(acknowledge)",
"sec_num": null
},
{
"text": "Predicates The MRL contains predicates to denote entities, properties and their relations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "dialogAct(acknowledge)",
"sec_num": null
},
{
"text": "\u2022 Predicates introducing entities and their properties: isA, isNamed and hasProperty.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "dialogAct(acknowledge)",
"sec_num": null
},
{
"text": "\u2022 Predicates describing user actions, such as walk and turn, with arguments such as direction and along_location.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "dialogAct(acknowledge)",
"sec_num": null
},
{
"text": "\u2022 Predicates describing geographic relations, such as distance, route and position, using ref to denote relative positioning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "dialogAct(acknowledge)",
"sec_num": null
},
{
"text": "\u2022 Predicates denoting whether an entity is introduced using a definite article (def), an indefinite (indef) or an indexical (index).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "dialogAct(acknowledge)",
"sec_num": null
},
{
"text": "\u2022 Predicates expressing numerical relations such as argmin and argmax.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "dialogAct(acknowledge)",
"sec_num": null
},
{
"text": "Coreference In order to model coreference we adopt the notion of discourse referents (DRs) and discourse entities (DEs) from Discourse Representation Theory (DRT) (Webber, 1978; Kamp and Reyle, 1993) . DRs are referential expressions appearing in utterances which denote DEs, which are mental entities in the speaker's model of discourse. Multiple DEs can refer to the same real-world entity; for example, in Fig. 1 \"vapiano's\" refers to a different DE from the restaurant in the previous sentence (\"the nearest italian\"), even though they are likely to be the same real-world entity. We con-sidered DEs instead of actual entities in the MRL because they allow us to capture the semantics of interactions such as the last exchange between the wizard and user. The MRL represents multiple DEs referring to the same real-world entity through the predicate equivalent.",
"cite_spans": [
{
"start": 163,
"end": 177,
"text": "(Webber, 1978;",
"ref_id": "BIBREF37"
},
{
"start": 178,
"end": 199,
"text": "Kamp and Reyle, 1993)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 409,
"end": 415,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "dialogAct(acknowledge)",
"sec_num": null
},
{
"text": "Coreference is indicated by using identical variables across predicate arguments within an utterance or across utterances. The main principle in determining whether DRs corefer is that it must be possible to infer this from the dialog context alone, without using world knowledge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "dialogAct(acknowledge)",
"sec_num": null
},
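{
"text": "Because coreference is expressed entirely through shared variables and the equivalent predicate, grouping the DEs that denote the same real-world entity reduces to computing connected components over variables. A small union-find sketch (our code, not the annotation tooling):\n\n# Union-find over MRL variables; each equivalent(id:Xi, id:Xj) merges two classes.\nparent = {}\n\ndef find(x):\n    parent.setdefault(x, x)\n    while parent[x] != x:\n        parent[x] = parent[parent[x]]  # path halving keeps chains short\n        x = parent[x]\n    return x\n\ndef union(x, y):\n    parent[find(x)] = find(y)\n\n# From Fig. 1: the indexical church X2 is later named twice.\nunion('X2', 'X3')  # equivalent(id:X2, id:X3), saint john's\nunion('X2', 'X4')  # equivalent(id:X2, id:X4), saint mark's\nassert find('X3') == find('X4')  # all three DEs form one cluster",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference",
"sec_num": null
},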
{
"text": "The NL utterances were collected using Wizard-of-Oz experiments (Kelley, 1983) with pairs of human subjects. In each experiment, one human pretended to be a tourist visiting Edinburgh (by physically walking around the city), while the other performed the role of the system responding through a suitable interface using a text-to-speech system.",
"cite_spans": [
{
"start": 64,
"end": 78,
"text": "(Kelley, 1983)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection and Annotation",
"sec_num": "3"
},
{
"text": "Each user-wizard pair was given one of two scenarios involving requests for directions to different points of interest. The first scenario involves seeking directions to the national museum of Scotland, then going to a nearby coffee shop, followed by a pub via a cash machine and finally looking for a park. The second scenario involves looking for a Japanese restaurant and the university gym, requesting information about the Flodden Wall monument, visiting the Scottish parliament and the Dynamic Earth science centre, and going to the Royal Mile and the Surgeon's Hall museum. Each experiment formed one dialog which was manually transcribed from recorded audio files. 17 dialogs were collected in total, 7 from the first scenario and 10 from the second. More details are reported in Hill et al. (2013) .",
"cite_spans": [
{
"start": 788,
"end": 806,
"text": "Hill et al. (2013)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection and Annotation",
"sec_num": "3"
},
{
"text": "Given the varied nature of the dialogs, some of the user requests were not within the scope of the system. Furthermore, the proposed MRL has its own limitations; for example it does not have predicates to express temporal relationships. Thus, it was necessary to filter the utterances collected and decide which ones to annotate with MRs. \u2022 Utterances that are not human-interpretable, e.g. utterances that were interrupted too early to be interpretable. In such cases, the system is likely to respond with a repetition request.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection and Annotation",
"sec_num": "3"
},
{
"text": "\u2022 Utterances that are human-interpretable but outside the scope of the system, e.g. questions about historical events which are not included in the database of the application considered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection and Annotation",
"sec_num": "3"
},
{
"text": "\u2022 Utterances that are within the scope of the system but too complex to be represented by the proposed MRL, e.g. an utterance requiring representation of time to be interpreted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection and Annotation",
"sec_num": "3"
},
{
"text": "Note that we still annotate an utterance if the core of its semantics can be captured by the MRL. For example, \"take me to vapiano now!\" would be annotated, even though the MRL cannot represent the meaning of \"now\". Broad information requests such as \"tell me more about this church\" are also annotated using the predicate extraInfo(id:Xno). We argue that determining which utterances should be translated into MRs, and which should be ignored, is an important subtask for real-world applications of semantic parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection and Annotation",
"sec_num": "3"
},
{
"text": "The annotation was performed by one of the authors and a freelance linguist with no experience in semantic parsing. As well as annotating the user utterances, we also annotated the wizard utterances with dialog acts and the entities mentioned, as they provide the necessary context to perform contextdependent interpretation. In practice, though, we expect this information to be used by a natural language generation system to produce the system's response and thus be available to the semantic parser.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection and Annotation",
"sec_num": "3"
},
{
"text": "The total number of user utterances annotated was 2374, out of which 1906 were annotated with MRs, the remaining not translated due to the reasons discussed earlier in this section. The number and types of the MRL vocabulary terms used appear in Tbl. 1. The annotated dialogs, the guidelines and the lists of the vocabulary terms are available from http://sites.google.com/ site/andreasvlachos/resources. In order to assess the quality of the guidelines and the annotation, we conducted an inter-annotator agreement study. For this purpose, the two annotators annotated one dialog consisting of 510 utterances. Exact match agreement at the utterance level, which requires that the MRs by the annotators agree on dialog act, predicates and within-utterance variable assignment, was 0.829, which is a strong result given the complexity of the annotation task, and which suggests that the proposed guidelines can be applied consistently. We also assessed the agreement on predicates using F-score, which was 0.914.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection and Annotation",
"sec_num": "3"
},
{
"text": "The most closely related corpus to the one presented in this paper (herein SPACEBOOK) is the airline travel information system (ATIS) corpus (Dahl et al., 1994) which consists of dialogs between a user and a flight booking system collected in Wizard-of-Oz experiments. Each utterance is annotated with the SQL statement that would return the requested piece of information from the flights database. The utterance interpretation is context-dependent. For example, when the user follows up an initial flight request -e.g. \"find me flights to Boston\" -with utterances containing additional preferences -e.g. \"on Monday\" -the interpretation of the additional preferences extends the MR for the initial request.",
"cite_spans": [
{
"start": 141,
"end": 160,
"text": "(Dahl et al., 1994)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison to Existing Corpora",
"sec_num": "4"
},
{
"text": "Compared to ATIS, the dialogs in the SPACE-BOOK corpus are substantially longer (8.8 vs. 139.7 utterances on average respectively) and cover a broader domain due to the longer scenarios used in data collection. Furthermore, allowing the wizards to answer in natural language instead of restricting them to responding via database queries as in ATIS led to more varied dialogs. Finally, our approach to annotating coreference avoids repeating the MR of previous utterances, thus resulting in shorter expressions that are closer to the semantics of the NL utterances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison to Existing Corpora",
"sec_num": "4"
},
{
"text": "The datasets developed in the recent dialog state tracking challenge (Henderson et al., 2014 ) also consist of dialogs between a user and a tourism information system. However the task is easier since only three entity types are considered (restaurant, coffeeshop and pub), a slot-filling MRL is used and the argument slots take values from fixed lists.",
"cite_spans": [
{
"start": 69,
"end": 92,
"text": "(Henderson et al., 2014",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison to Existing Corpora",
"sec_num": "4"
},
{
"text": "The abstract meaning representation (AMR) described by Banarescu et al. (2013) was developed to provide a semantic interpretation layer to improve machine translation (MT) systems. It has similar predicate argument structure to the MRL proposed here, including a lack of cover for temporal relations and scoping. However, due to the different application domains (MT vs. tourism-related activities), there are some differences. Since MT systems operate at the sentence-level, each sentence is interpreted in isolation in AMR, whilst our proposed MRL takes context into account. Also, AMR tries to account for all the words in a sentence, whilst our MRL only tries to capture the semantics of those words that are relevant to the application at hand.",
"cite_spans": [
{
"start": 55,
"end": 78,
"text": "Banarescu et al. (2013)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison to Existing Corpora",
"sec_num": "4"
},
{
"text": "Other popular semantic parsing corpora include GeoQuery (Zelle, 1995) and Free-917 (Cai and Yates, 2013) . Both consist exclusively of questions to be answered with a database query, the former considering a small American geography database and the latter the much wider Freebase database (Bollacker et al., 2008) . Unlike SPACEBOOK and ATIS, there is no notion of context in either of these corpora. Furthermore, the NL utterances in these corpora are compiled to be interpreted as database queries, which is equivalent to only one of the dialog acts (set_question)in the SPACEBOOK corpus. Thus the latter allows the exploration of the application of dialog act tagging as a first step in semantic parsing. Finally, MacMahon et al. 2006developed a corpus of natural language instructions paired with sequences of actions; however the domain is limited to simple navigation instructions and there is no notion of dialog in this corpus.",
"cite_spans": [
{
"start": 56,
"end": 69,
"text": "(Zelle, 1995)",
"ref_id": "BIBREF38"
},
{
"start": 83,
"end": 104,
"text": "(Cai and Yates, 2013)",
"ref_id": "BIBREF7"
},
{
"start": 290,
"end": 314,
"text": "(Bollacker et al., 2008)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison to Existing Corpora",
"sec_num": "4"
},
{
"text": "The MRL in Fig. 1 is readable and easy to annotate with. However, it is not ideal for experiments, as it is difficult to compare MR expressions beyond exact match. For these reasons, we converted the MR expressions into a node-argument form. In particular, all predicates introducing entities (isA) and most predicates introducing relations among entities (e.g. distance) become nodes, while all other predicates (e.g. isNamed, def) are converted into arguments. For example, the MR for the first utterance in Fig. 1 is converted into the form in Fig. 2g . Entities appearing in MR expressions without a type (e.g. X2 in the last utterance of Fig. 1 ) are denoted with a node of type empty. Each node has a unique id (e.g. X1) and each argument can take as value a constant (e.g. det), a node id, or a verbatim string extract from the utterance. Arguments that are absent (e.g. the name of restaurant) are set to the constant null. This conversion results in 16 utterance-level labels (15 dialog acts plus one for the non-interpretable utterances), 35 node types and 32 arguments.",
"cite_spans": [],
"ref_spans": [
{
"start": 11,
"end": 17,
"text": "Fig. 1",
"ref_id": null
},
{
"start": 510,
"end": 516,
"text": "Fig. 1",
"ref_id": null
},
{
"start": 547,
"end": 554,
"text": "Fig. 2g",
"ref_id": "FIGREF1"
},
{
"start": 643,
"end": 649,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semantic Parsing for the New Corpus",
"sec_num": "5"
},
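{
"text": "A sketch of this conversion, reusing the Predicate and MR classes from the sketch in Sec. 2; the set of node-introducing predicate names is illustrative and the helper is ours:\n\nNODE_PREDS = {'isA', 'distance', 'route', 'position', 'argmin', 'argmax', 'walk', 'turn'}\n\ndef to_node_argument_form(mr):\n    # Entity- and relation-introducing predicates become typed nodes; the rest\n    # (isNamed, def, indef, index, ...) become arguments of the node whose id\n    # they mention. Entities that never receive a type become 'empty' nodes.\n    nodes, rest = {}, []\n    for p in mr.predicates:\n        if p.name in NODE_PREDS:\n            nid = p.args.get('id', 'N%d' % len(nodes))\n            typ = p.args.get('type', p.name)  # isA nodes take their 'type' value\n            nodes[nid] = {'type': typ, 'focus': p.focus}\n            nodes[nid].update({k: v for k, v in p.args.items() if k not in ('id', 'type')})\n        else:\n            rest.append(p)\n    for p in rest:\n        tgt = p.args.get('id')\n        if tgt not in nodes:\n            nodes[tgt] = {'type': 'empty', 'focus': False}\n        nodes[tgt][p.name] = p.args.get('name') or p.args.get('value') or 'true'\n    return nodes  # absent arguments are read back as the constant 'null'\n\nprint(to_node_argument_form(mr))  # mr: the set_question example from Sec. 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Parsing for the New Corpus",
"sec_num": "5"
},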
{
"text": "The comparison between a predicted and a gold standard node-argument form is performed in three stages. First we map the ids of the predicted nodes to those of the gold standard. While ids do not carry any semantics, they are needed to differentiate between multiple nodes of the same type; e.g. if a second restaurant had been predicted in Fig. 2h then it would have a different id and would not be matched to a gold standard node. Second, we decompose the node-argument forms into a set of atomic predictions (Fig. 2h) . This decomposition allows the awarding of partial credit, e.g. when the node type is correct but some of the arguments are not. Using these atomic predictions we calculate precision, recall and F-score.",
"cite_spans": [],
"ref_spans": [
{
"start": 341,
"end": 348,
"text": "Fig. 2h",
"ref_id": "FIGREF1"
},
{
"start": 511,
"end": 520,
"text": "(Fig. 2h)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Semantic Parsing for the New Corpus",
"sec_num": "5"
},
{
"text": "The mapping between predicted and gold standard ids is performed by evaluating all mappings (with mappings between nodes of different types not allowed), and choosing the one resulting in the lowest sum of false positives and negatives. Fig. 2 shows the decomposition of the semantic parsing task in stages, which are described below.",
"cite_spans": [],
"ref_spans": [
{
"start": 237,
"end": 243,
"text": "Fig. 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Semantic Parsing for the New Corpus",
"sec_num": "5"
},
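{
"text": "At the scale of single utterances the whole evaluation fits in a short brute-force routine. The sketch below (our code) decomposes each node-argument form into atomic (id, argument, value) triples, tries every type-compatible mapping of predicted ids onto gold ids (padding with None so a predicted node may also stay unmatched), and scores the mapping with the lowest sum of false positives and false negatives:\n\nimport itertools\n\ndef atoms(nodes):\n    # one atomic prediction per node attribute, e.g. ('X1', 'type', 'restaurant')\n    return {(nid, k, str(v)) for nid, args in nodes.items() for k, v in args.items()}\n\ndef best_f_score(gold, pred):\n    gids, pids = list(gold), list(pred)\n    best = None\n    for perm in itertools.permutations(gids + [None] * len(pids), len(pids)):\n        if any(g is not None and gold[g]['type'] != pred[p]['type']\n               for p, g in zip(pids, perm)):\n            continue  # mappings between nodes of different types are not allowed\n        ren = {p: (g if g is not None else p + '_unmatched') for p, g in zip(pids, perm)}\n        pred_atoms = {(ren[n], k, ren.get(v, v)) for n, k, v in atoms(pred)}\n        tp = len(pred_atoms & atoms(gold))\n        fp, fn = len(pred_atoms) - tp, len(atoms(gold)) - tp\n        if best is None or fp + fn < best[0]:\n            best = (fp + fn, tp, fp, fn)\n    _, tp, fp, fn = best\n    p = tp / (tp + fp) if tp + fp else 1.0\n    r = tp / (tp + fn) if tp + fn else 1.0\n    return 2 * p * r / (p + r) if p + r else 0.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Parsing for the New Corpus",
"sec_num": "5"
},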
{
"text": "We first assign an utterance-level label using a classifier that exploits features based on the textual content of the utterance and on the utterance preceding it. bigrams and trigrams and the final punctuation mark. Unlike in typical text classification tasks, content words are not always helpful in dialog act tagging; e.g. the token \"meal\" in Fig. 2a is not indicative of set_question, while n-grams of words typically considered as stopwords, such as \"what 's the\", can be more helpful. If the dialog act predicted is to be accompanied by other predicates according to the guidelines (Sec. 2) we proceed to the following stages, otherwise stop.",
"cite_spans": [],
"ref_spans": [
{
"start": 347,
"end": 354,
"text": "Fig. 2a",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Dialog act prediction",
"sec_num": null
},
{
"text": "The features based on the preceding utterance indicate whether it was by the user or the wizard and, in the latter case, its dialog act. Such features are useful in determining the act of short, ambiguous utterances such as \"yes\", which is tagged as yes when following a prop_question utterance, but as acknowledge otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialog act prediction",
"sec_num": null
},
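{
"text": "A sketch of the resulting feature map (the feature-name strings are ours):\n\ndef dialog_act_features(tokens, prev_speaker, prev_act=None):\n    feats = {}\n    for i, tok in enumerate(tokens):\n        feats['token=' + tok] = 1.0\n        if i + 1 < len(tokens):\n            feats['bigram=' + tok + '_' + tokens[i + 1]] = 1.0\n        if i + 2 < len(tokens):\n            feats['trigram=' + '_'.join(tokens[i:i + 3])] = 1.0\n    if tokens:\n        feats['final_punct=' + tokens[-1]] = 1.0\n    feats['prev_speaker=' + prev_speaker] = 1.0  # 'user' or 'wizard'\n    if prev_act:  # known only when the wizard spoke last\n        feats['prev_act=' + prev_act] = 1.0\n    return feats\n\n# 'yes' after a wizard prop_question should be tagged yes, not acknowledge:\nprint(dialog_act_features(['yes', '.'], 'wizard', 'prop_question'))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialog act prediction",
"sec_num": null
},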
{
"text": "Node prediction In node prediction we use a classifier to predict whether each of the tokens in the utterance denotes a node of a particular type or empty (Fig. 2b) . The features used include the target token and its lemma, which are conjoined with the PoS tag, the previous and following tokens, as well as the lemmas of the tokens with which it has syntactic dependencies. Further features represent the dialog act (e.g. route is more likely to appear in a set question utterance), and the number and types of the nodes already predicted. Since the evaluation ignores the alignment between nodes and tokens, it would have been correct to predict the correct nodes from any token; e.g. restaurant could be predicted from \"italian\" instead. However, alignment does affect argument prediction, since it determines its feature extraction.",
"cite_spans": [],
"ref_spans": [
{
"start": 155,
"end": 164,
"text": "(Fig. 2b)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Dialog act prediction",
"sec_num": null
},
{
"text": "Constant argument prediction In this stage (Fig. 2c) we predict, for each argument of each node, whether its value is an MRL vocabulary term, a verbatim string extract, a node, or absent (special values STRING, NODE, null respectively). If the value predicted is STRING or NODE it is replaced by the predictions in subsequent stages. For each argument different values are possible; thus we use separate classifiers for each, resulting in 32 classifiers. The features used include the node type, the token that predicted the node, and the syntactic dependency paths from that token to all other tokens in the utterance. We also include as features the values predicted for other arguments of the node, the dialog act, and the other node types predicted.",
"cite_spans": [],
"ref_spans": [
{
"start": 43,
"end": 52,
"text": "(Fig. 2c)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Dialog act prediction",
"sec_num": null
},
{
"text": "String argument prediction For each argument predicted to be STRING (e.g. cuisine in Figure 2d) , we predict for each token in left-to-right order whether it should be part of the value for this argument or not (IN or OUT). Since the strings that are appropriate for each argument differ (e.g. the strings for cuisine are unlikely to be appropriate for name), we use separate classifiers for each of them, resulting in five classifiers. The features used include the target token and its lemma, its conjunction with the PoS tag, the previous and following tokens, and the lemmas of the tokens with which it has syntactic dependencies. We also added the label assigned to the previous token and the syntactic dependency path to the token that predicted the node.",
"cite_spans": [],
"ref_spans": [
{
"start": 85,
"end": 95,
"text": "Figure 2d)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Dialog act prediction",
"sec_num": null
},
{
"text": "Node argument prediction For each argument predicted to have NODE as its value, we predict for every other node whether it should be the value or not (e.g. argmin in Fig. 2e ). As with the string argument prediction, we use separate binary classifiers for each argument, resulting in 18 classifiers. The features extracted are similar to that stage, but we now consider the tokens that predicted each candidate argument node (e.g. \"meal\" for restaurant) instead of the tokens in the utterance.",
"cite_spans": [],
"ref_spans": [
{
"start": 166,
"end": 173,
"text": "Fig. 2e",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Dialog act prediction",
"sec_num": null
},
{
"text": "Focus/Negation prediction We predict whether each node should be focused or negated as two separate binary tasks. The features used include the token that predicted the target node, its lemma and PoS tag and the syntactic dependency paths to all other tokens in the utterance. Further features include the type of the node and its arguments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialog act prediction",
"sec_num": null
},
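{
"text": "Since every stage above has its own label set, the decomposition is naturally implemented as a dictionary of independent classifiers keyed by stage and argument (59 classifiers in total, Sec. 7). A minimal sketch, with scikit-learn standing in for the cost-sensitive learner actually used (Sec. 7):\n\nfrom sklearn.feature_extraction import DictVectorizer\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.pipeline import make_pipeline\n\nARG_NAMES = ['name', 'cuisine', 'direction']  # illustrative subset of the 32 arguments\nclf = {('const_arg', a): make_pipeline(DictVectorizer(), LogisticRegression())\n       for a in ARG_NAMES}\n\n# Each classifier is trained only on examples for its own argument:\nX = [{'node_type': 'restaurant', 'anchor': 'italian', 'dialog_act': 'set_question'},\n     {'node_type': 'restaurant', 'anchor': 'vapiano', 'dialog_act': 'inform'}]\ny = ['STRING', 'null']\nclf[('const_arg', 'cuisine')].fit(X, y)\nprint(clf[('const_arg', 'cuisine')].predict(X))  # predicted values for the two examples",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Parsing for the New Corpus",
"sec_num": null
},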
{
"text": "In order to learn the classifiers for the task decomposition described, two challenges must be addressed. The first is the complexity of the structure to be predicted. The task involves many interdependent predictions made by a variety of classifiers, and thus cannot be tackled by approaches that assume a particular type of graph structure, or restrict structure feature extraction in order to perform efficient dynamic programming. The second challenge is the lack of alignment information during training. Imitation learning algorithms such as SEARN (Daum\u00e9 III et al., 2009) and DAGGER (Ross et al., 2011) have been applied successfully to a variety of structured prediction tasks including summarization, biomedical event extraction and dynamic feature selection (Daum\u00e9 III et al., 2009; Vlachos, 2012; He et al., 2013) thanks to their ability to handle complex output spaces without exhaustive search and their flexibility in incorporating features based on the structured output. In this work we focus on DAGGER and extend it to handle the missing alignments.",
"cite_spans": [
{
"start": 554,
"end": 578,
"text": "(Daum\u00e9 III et al., 2009)",
"ref_id": "BIBREF11"
},
{
"start": 590,
"end": 609,
"text": "(Ross et al., 2011)",
"ref_id": "BIBREF31"
},
{
"start": 768,
"end": 792,
"text": "(Daum\u00e9 III et al., 2009;",
"ref_id": "BIBREF11"
},
{
"start": 793,
"end": 807,
"text": "Vlachos, 2012;",
"ref_id": "BIBREF34"
},
{
"start": 808,
"end": 824,
"text": "He et al., 2013)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Imitation Learning",
"sec_num": "6"
},
{
"text": "The dataset aggregation (DAGGER) algorithm (Ross et al., 2011) forms the prediction of an instance s as a sequence of T actions\u0177 1:T predicted by a learned policy which consists of one or more classifiers.",
"cite_spans": [
{
"start": 43,
"end": 62,
"text": "(Ross et al., 2011)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Structured prediction with DAGGER",
"sec_num": "6.1"
},
{
"text": "Algorithm 1: Imitation learning with DAGGER\nInput: training instances S, expert policy \u03c0*, loss function \u2113, learning rate \u03b2, CSC learner CSCL\nOutput: learned policy H_N\n1: CSC examples E = \u2205\n2: for i = 1 to N do\n3:   p = (1 \u2212 \u03b2)^{i\u22121}\n4:   current policy \u03c0 = p \u03c0* + (1 \u2212 p) H_{i\u22121}\n5:   for s in S do\n6:     Predict \u03c0(s) = \u0177_{1:T}\n7:     for \u0177_t in \u03c0(s) do\n8:       Extract features \u03a6_t = f(s, \u0177_{1:t\u22121})\n9:       foreach possible action y_t^j do\n10:        Predict y_{t+1:T} = \u03c0(s; \u0177_{1:t\u22121}, y_t^j)\n11:        Assess c_t^j = \u2113(\u0177_{1:t\u22121}, y_t^j, y_{t+1:T})\n12:      Add (\u03a6_t, c_t) to E\n13:  Learn H_i = CSCL(E)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structured prediction with DAGGER",
"sec_num": "6.1"
},
{
"text": "These actions are taken in a greedy fashion, i.e. once an action has been taken it cannot be changed. During training, it converts the problem of learning how to predict these sequences of actions into cost sensitive classification (CSC) learning. In CSC learning each training example has a vector of misclassification costs associated with it, thus rendering some mistakes on some examples to be more expensive than others (Domingos, 1999) . Algorithm 1 presents the training procedure. DAGGER requires a set of labeled training instances S and a loss function that compares complete outputs for instances in S against the gold standard. In addition, an expert policy \u03c0 must be specified which is an oracle that returns the optimal action for the instances in S, akin to an expert demonstrating the task. \u03c0 is typically derived from the gold standard; e.g. in part of speech tagging \u03c0 would return the correct tag for each token. In addition, the learning rate \u03b2 and a CSC learner (CSCL) must be provided. The algorithm outputs a learned policy H N that, unlike \u03c0 , can generalize to unseen data.",
"cite_spans": [
{
"start": 425,
"end": 441,
"text": "(Domingos, 1999)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Structured prediction with DAGGER",
"sec_num": "6.1"
},
{
"text": "Each training iteration begins by setting the probability p (line 3) of using \u03c0 in the current policy \u03c0. In the first iteration, only \u03c0 is used but, in later iterations, \u03c0 becomes stochastic and, for each action, \u03c0 is used with probability p, and the learned policy from the previous iteration H i\u22121 with probability 1 \u2212 p (line 4). Then \u03c0 is used to predict each training instance s (line 6). For each action\u0177 t , a CSC example is generated (lines 7-12). The features \u03a6 t are extracted from s and all previous actions\u0177 1:t\u22121 (line 8). The cost for each possible action y j t is estimated by predicting the remaining actions y t+1:T for s using \u03c0 (line 10) and calculating the loss incurred given y j t w.r.t. the gold standard for s using (line 11). As \u03c0 is stochastic, it is common to use multiple samples of y t+1:T to assess the cost of each action y j t by repeating lines 10-11. The features, together with the costs for each possible action, form a CSC example (\u03a6 t , c t ) (line 12). At the end of each iteration, the CSC examples obtained from all iterations are used by the CSC learning algorithm to learn the classifier(s) for H i (line 13).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structured prediction with DAGGER",
"sec_num": "6.1"
},
{
"text": "When predicting the training instances (line 6), and when estimating the costs for each possible action (lines 10-11), the policy learned in the previous iteration H i\u22121 is used as part of \u03c0 after the first iteration. Thus the CSC examples generated to learn H i depend on the predictions of H i\u22121 and, by gradually increasing the use of H i\u22121 and ignoring \u03c0 in \u03c0, the learned policies are adjusted to their own predictions, thus learning the dependencies among the actions and how to predict them in order to minimize the loss. The learning rate \u03b2 determines how fast \u03c0 moves away from \u03c0 . The use of H i\u22121 in predicting the training instances (line 6) also has the effect of exploring sub-optimal actions so that the learned policies are adjusted to recover from their mistakes. Finally, note that if only one training iteration is performed, the learned policy is equivalent to a set of independently trained classifiers since no training against the predictions of the previously learned policy takes place.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structured prediction with DAGGER",
"sec_num": "6.1"
},
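{
"text": "As a whole, Algorithm 1 is only a few lines of code. The following self-contained toy sketch (our code, not the parser) reproduces its structure on an artificial task: each instance is a pair (x, gold) and the T actions spell out gold one position at a time; a cost-averaging table stands in for the CSC learner, and Hamming loss over complete sequences stands in for \u2113 (note that costs are only ever computed from complete outputs, as DAGGER requires):\n\nimport random\n\ndef dagger(S, actions, n_iters=10, beta=0.3):\n    E = []   # aggregated CSC examples: (feature, {action: cost})\n    H = {}   # learned policy: feature (x, t) -> action\n    def loss(gold, seq):               # compares complete outputs only\n        return sum(a != g for a, g in zip(seq, gold))\n    def policy(x, gold, t, p):         # expert with probability p, else learned policy\n        if random.random() < p:\n            return gold[t]             # the expert's optimal action\n        return H.get((x, t), random.choice(actions))\n    for i in range(n_iters):\n        p = (1 - beta) ** i                                          # line 3\n        for x, gold in S:\n            seq = [policy(x, gold, t, p) for t in range(len(gold))]  # line 6\n            for t in range(len(gold)):                               # lines 7-12\n                costs = {}\n                for a in actions:      # roll out the remaining actions after forcing a\n                    tail = [policy(x, gold, u, p) for u in range(t + 1, len(gold))]\n                    costs[a] = loss(gold, seq[:t] + [a] + tail)\n                E.append(((x, t), costs))\n        sums = {}                                                    # line 13\n        for f, costs in E:\n            for a, c in costs.items():\n                s, n = sums.get((f, a), (0.0, 0))\n                sums[(f, a)] = (s + c, n + 1)\n        H = {}\n        for (f, a), (s, n) in sums.items():\n            if f not in H or s / n < sums[(f, H[f])][0] / sums[(f, H[f])][1]:\n                H[f] = a\n    return H\n\nH = dagger([('ab', 'ba'), ('ba', 'ab')], actions=['a', 'b'])\nprint(H[('ab', 0)], H[('ab', 1)])  # typically 'b a', the gold actions for x = 'ab'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structured prediction with DAGGER",
"sec_num": "6.1"
},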
{
"text": "The loss function in DAGGER is only used to compare complete outputs against the gold standard. Therefore, when generating a CSC training example in DAGGER (lines 7-12), we do not need to know whether an action y j t is correct or not, we only evaluate what the effect of y j t is on the loss incurred by the complete action sequence. Thus, it does not need to decompose over the actions taken to evaluate them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training with missing alignments",
"sec_num": "6.2"
},
{
"text": "The ability to train against non-decomposable loss functions is useful when the training data has missing labels, as is the case with semantic parsing. Following Sec. 5, is defined as the sum of the false positive and false negative atomic predictions used to calculate precision and recall and, since it ignores the alignment between tokens and nodes, it cannot assess node prediction actions. However, we can use it under DAGGER to learn a node prediction classifier together with the classifiers of the other stages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training with missing alignments",
"sec_num": "6.2"
},
{
"text": "The only component of DAGGER which assumes knowledge of the correct actions for training is the expert policy \u03c0 . Since these are not available for the node prediction stage, we replace \u03c0 with a randomized expert policy \u03c0 rand , in which actions that are not specified by the annotation are chosen randomly from a set of equally optimal ones. For example, in Fig. 2b when predicting the action for each token, \u03c0 rand chooses randomly among null, distance, and restaurant, so that by the end of the stage the correct nodes have been predicted. Randomizing this choice helps explore the actions available. In our experiments we placed a uniform distribution over the available actions, i.e. all optimal actions are equally likely to be chosen. The actions returned by \u03c0 rand will often result in alignments that do not incur any loss but are nonsensical, e.g. predicting restaurant from \"what\". However, since \u03c0 rand is progressively ignored, the effect of such actions is reduced.",
"cite_spans": [],
"ref_spans": [
{
"start": 359,
"end": 366,
"text": "Fig. 2b",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Training with missing alignments",
"sec_num": "6.2"
},
{
"text": "While being able to learn a semantic parser without alignment information is useful, it would help to use some supervision, e.g. that \"street\" commonly predicts the node street. We incorporate such an alignment dictionary in \u03c0 rand as follows: if the target token is mapped to a node type in the dictionary, and if a node of this type needs to be predicted for the utterance, then this type is returned. Otherwise, the prediction is made with \u03c0 rand . Finally, like \u03c0 rand itself, the dictionary is progressively ignored and neither constrains the training process, nor is used during testing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training with missing alignments",
"sec_num": "6.2"
},
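{
"text": "A sketch of \u03c0_rand for the node prediction stage (our code): it tracks the multiset of gold node types still to be predicted for the utterance, consults the alignment dictionary first when one is supplied, and otherwise samples uniformly; null remains available only while enough tokens are left for the outstanding nodes:\n\nimport random\n\ndef pi_rand(token, remaining, tokens_left, align_dict=None):\n    # remaining: gold node types not yet predicted for this utterance\n    if align_dict and align_dict.get(token) in remaining:\n        return align_dict[token]         # dictionary-backed alignment, when usable\n    options = list(set(remaining))\n    if tokens_left > len(remaining):     # choosing null still leaves room for the rest\n        options.append('null')\n    return random.choice(options) if options else 'null'\n\n# 'what's the nearest italian ...': distance and restaurant still pending\nprint(pi_rand('italian', ['distance', 'restaurant'], tokens_left=4,\n              align_dict={'italian': 'restaurant', 'street': 'street'}))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training with missing alignments",
"sec_num": "6.2"
},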
{
"text": "We split the annotated dialogs into training and test sets. The former consists of four dialogs from the first scenario and seven from the second, and the lat-ter of three dialogs from each scenario. All development and feature engineering was conducted using cross-validation on the training set, at the dialog level rather than the utterance level (therefore resulting in as many folds as dialogs in the training set), to ensure that each fold contains utterances from all parts of the scenario from which the dialog is taken.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7"
},
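{
"text": "The dialog-level folds can be reproduced with scikit-learn's LeaveOneGroupOut, shown here purely as an illustration of the setup (not the original tooling):\n\nfrom sklearn.model_selection import LeaveOneGroupOut\n\n# one fold per dialog: all utterances of a dialog are held out together\nutterances = ['what is this church ?', 'yes .', 'take me to vapiano !', 'keep walking .']\ndialog_ids = [0, 0, 1, 1]\nfor train_idx, test_idx in LeaveOneGroupOut().split(utterances, groups=dialog_ids):\n    print(train_idx, test_idx)  # e.g. [2 3] [0 1]: dialog 0 held out",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7"
},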
{
"text": "To perform cost-sensitive classification learning we used the adaptive regularization of weight vectors (AROW) algorithm (Crammer et al., 2009) . AROW is an online algorithm for linear predictors that adjusts the per-feature learning rates so that popular features do not overshadow rare but useful ones. Given the task decomposition, each learned hypothesis consists of 59 classifiers. We restricted the prediction of nodes to content words since function words are unlikely to provide useful alignments. All preprocessing was performed using the Stanford CoreNLP toolkit (Manning et al., 2014) . The implementation of the semantic parser is available from http://sites.google.com/ site/andreasvlachos/resources. The DAGGER parameters were set to 12 training iterations, \u03b2 = 0.3 and 3 samples for action cost assessment. We compared our DAGGER-based imitation learning approach (henceforth Imit) against independently trained classifiers using the same classification learner and features (henceforth Indep).",
"cite_spans": [
{
"start": 121,
"end": 143,
"text": "(Crammer et al., 2009)",
"ref_id": "BIBREF9"
},
{
"start": 573,
"end": 595,
"text": "(Manning et al., 2014)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7"
},
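{
"text": "For intuition about AROW's per-feature learning rates, here is a minimal diagonal-covariance binary variant (our simplification of Crammer et al. (2009); the parser uses a cost-sensitive multiclass form). Frequently updated features accumulate low variance and therefore move less than rare but useful ones:\n\nimport numpy as np\n\nclass DiagonalAROW:\n    def __init__(self, dim, r=1.0):\n        self.mu = np.zeros(dim)    # weight means\n        self.sigma = np.ones(dim)  # per-feature variances\n        self.r = r                 # regularization constant\n    def update(self, x, y):        # x: feature vector, y: +1 or -1\n        margin = y * self.mu.dot(x)\n        if margin >= 1.0:\n            return                 # large-margin example: no update\n        beta = 1.0 / ((self.sigma * x * x).sum() + self.r)\n        alpha = (1.0 - margin) * beta\n        self.mu += alpha * y * self.sigma * x        # confident features move less\n        self.sigma -= beta * (self.sigma * x) ** 2   # variance shrinks on seen features\n    def predict(self, x):\n        return 1 if self.mu.dot(x) >= 0 else -1\n\nclf = DiagonalAROW(dim=3)\nfor _ in range(5):\n    clf.update(np.array([1.0, 0.0, 1.0]), +1)\n    clf.update(np.array([0.0, 1.0, 1.0]), -1)\nprint(clf.predict(np.array([1.0, 0.0, 0.0])))  # -> 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7"
},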
{
"text": "For both systems we incorporated an alignment dictionary (+align versions) as described in Sec. 6.2, in order to improve node prediction performance. The dictionary was extracted from the training data and contains 96 tokens that commonly predict a particular node type.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7"
},
{
"text": "The results from the cross-validation experiments are reported in Tbl. 2. Overall performance evaluated as described in Sec. 5 was 53.6 points in Fscore for Imit, 5.7 points higher than Indep and the difference is greater for the +align versions. These results demonstrate the advantages of training classifiers using imitation learning versus independently trained classifiers. Isolating the performance for node and argument prediction stages, we observe that the main bottleneck is the former, which in the case of Imit is 60.9 points in F-score compared to 78.8 for the latter. Accuracy for dialog acts is 78.9%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7"
},
{
"text": "As shown in Tbl. 2, the alignment dictionary improved not only node prediction performance by 6 Table 2 : Performances using 11-fold cross-validation on the training set.",
"cite_spans": [],
"ref_spans": [
{
"start": 96,
"end": 103,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7"
},
{
"text": "points in F-score, but also argument prediction by 2.5 points, thus demonstrating the benefits of learning the alignments together with the other components of the semantic parser. The overall performance improved by 5.5 points in F-score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7"
},
{
"text": "Finally, we ran an experiment with oracle node prediction and found that the overall performance using cross-validation on the training data improved to 88.2 and 79.9 points in F-score for the Imit+align Indep+align systems. This is in agreement with the results presented by Flanigan et al. (2014) on developing a semantic parsing parser for the AMR formalism who also argue that node prediction is the main performance bottleneck.",
"cite_spans": [
{
"start": 276,
"end": 298,
"text": "Flanigan et al. (2014)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7"
},
{
"text": "Tbl. 3 gives results on the test set. The overall performance for Imit is 48.4 F-score and 47.9% for exact match. As in the cross-validation results on the training data, training with imitation learning improved upon independently trained classifiers. The performance was improved further using the alignment dictionary, reaching 53.5 points in F-score and 49.1% exact match accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7"
},
{
"text": "In the experimental setup above, dialogs from the same scenarios appear in both training and testing. While this is a reasonable evaluation approach also followed in ATIS evaluations, it is likely to be relatively forgiving; in practice, semantic parsers are likely to encounter entities, activities, etc. unseen in training. Hence we conducted a second evaluation in which dialogs from one scenario are used to train a parser evaluated on the other (still respecting the train/test split from before). When testing on the dialogs from the first scenario and training on the dialogs from the second, the overall performance using Imit+align was 36.9 points in F-score, while in the reverse experiment it was 41.7. Note that direct comparisons against the performances in Tbl. 3 are not meaningful since fewer dialogs are being used for training and testing in the cross-scenario setup.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7"
},
{
"text": "Previous work on semantic parsing handled the lack of alignments during training in a variety of ways. Zettlemoyer and Collins (2009) manually engineered a CCG lexicon for the ATIS corpus. Kwiatkowski et al. (2011) used a dedicated algorithm to infer a similar dictionary and used alignments from Giza++ (Och and Ney, 2000) to initialize the relevant features. Most recent work on Geo-Query uses an alignment dictionary that includes for each geographical entity all noun phrases referring to it (Jones et al., 2012) . More recently, Flanigan et al. (2014) developed a dedicated alignment model on top of which they learned a semantic parser for the AMR formalism. In our approach, we learn the alignments together with the semantic parser without requiring a dictionary.",
"cite_spans": [
{
"start": 103,
"end": 133,
"text": "Zettlemoyer and Collins (2009)",
"ref_id": "BIBREF40"
},
{
"start": 189,
"end": 214,
"text": "Kwiatkowski et al. (2011)",
"ref_id": "BIBREF24"
},
{
"start": 297,
"end": 323,
"text": "Giza++ (Och and Ney, 2000)",
"ref_id": null
},
{
"start": 496,
"end": 516,
"text": "(Jones et al., 2012)",
"ref_id": "BIBREF20"
},
{
"start": 534,
"end": 556,
"text": "Flanigan et al. (2014)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Related Work",
"sec_num": "8"
},
{
"text": "In terms of structured prediction frameworks, most previous work uses hidden variable linear (Zettlemoyer and Collins, 2007) or log-linear (Liang et al., 2011 ) models with beam search. In terms of direct comparisons with existing work, the goal of this paper is to introduce the new corpus and provide a competitive first attempt at the new semantic parsing task. However, we believe it is non-trivial to apply existing approaches to the new task, since, assuming a decomposition similar to that of Sec. 5.1, exhaustive search would be too expensive, and applying vanilla beam search would be difficult since different predictions result in beams of (sometimes radically) different lengths that are not comparable.",
"cite_spans": [
{
"start": 93,
"end": 124,
"text": "(Zettlemoyer and Collins, 2007)",
"ref_id": "BIBREF39"
},
{
"start": 139,
"end": 158,
"text": "(Liang et al., 2011",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Related Work",
"sec_num": "8"
},
{
"text": "We have attempted applying the MT-based semantic parsing approach proposed by Andreas et al. (2013) Table 3 : Performances on the test set.",
"cite_spans": [
{
"start": 78,
"end": 99,
"text": "Andreas et al. (2013)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 100,
"end": 107,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison with Related Work",
"sec_num": "8"
},
{
"text": "performance was poor. The main reason for this is that, unlike GeoQuery, the proposed MRL does not align well with English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Related Work",
"sec_num": "8"
},
{
"text": "The expert policy in DAGGER is a generalization of the dynamic oracle of Goldberg and Nivre (2013) for shift-reduce dependency parsing to any structured prediction task decomposed into a sequence of actions. The randomized expert policy proposed extends DAGGER to learn not only how to avoid error propagation, but also how to infer latent variables.",
"cite_spans": [
{
"start": 73,
"end": 98,
"text": "Goldberg and Nivre (2013)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Related Work",
"sec_num": "8"
},
{
"text": "The main bottleneck is training data sparsity. Some node types appear only a few times in relatively long utterances, and thus it is difficult to infer appropriate alignments for them. Unlike machine translation between natural languages, it is unrealistic to expect large quantities of utterances to be annotated with MR expressions. An appealing alternative would be to use response-based learning, i.e. use the response from the system instead of MR expressions as training signal (Liang et al., 2011; Kwiatkowski et al., 2013; Berant and Liang, 2014) . However such an approach would not be straightforward to implement in our application, since the response from the system is not always the result of a database query but, e.g., a navigation instruction that is context-dependent and thus difficult to assess its correctness. Furthermore, it would require the development of a user simulator (Keizer et al., 2012) , a non-trivial task which is beyond the scope of this work. A different approach is to use dialogs between a system and its users as proposed by Artzi and Zettlemoyer (2011) using the DARPA communicator corpus (Walker et al., 2002) . However, in that work utterances were selected to be shorter than 6 words and to include one noun phrase present in the lexicon used during learning while ignoring short but common phrases such as \"yes\" and \"no\"; thus it is unclear whether it would be applicable to our dataset.",
"cite_spans": [
{
"start": 484,
"end": 504,
"text": "(Liang et al., 2011;",
"ref_id": "BIBREF26"
},
{
"start": 505,
"end": 530,
"text": "Kwiatkowski et al., 2013;",
"ref_id": "BIBREF25"
},
{
"start": 531,
"end": 554,
"text": "Berant and Liang, 2014)",
"ref_id": "BIBREF4"
},
{
"start": 898,
"end": 919,
"text": "(Keizer et al., 2012)",
"ref_id": "BIBREF22"
},
{
"start": 1066,
"end": 1094,
"text": "Artzi and Zettlemoyer (2011)",
"ref_id": "BIBREF2"
},
{
"start": 1131,
"end": 1152,
"text": "(Walker et al., 2002)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Related Work",
"sec_num": "8"
},
{
"text": "Finally, dialog context is only taken into account in predicting the dialog act for each utterance. Even though our corpus contains coreference information, we did not attempt this task as it is difficult to evaluate and our performance on node prediction on which it relies is relatively low. We leave coreference resolution on the new corpus as an interesting and challenging task for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Related Work",
"sec_num": "8"
},
{
"text": "In this paper we presented a new corpus for contextdependent semantic parsing in the context of a portable, interactive navigation and exploration system for tourism-related activities. The MRL used for the annotation can handle dialog context such as coreference and can accommodate utterances that are not interpretable according to a database. We conducted an inter-annotator agreement study and found 0.829 exact match agreement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "9"
},
{
"text": "We also developed a semantic parser for the SPACEBOOK corpus using the imitation learning algorithm DAGGER that, unlike previous approaches, can infer the missing alignments in the training data using a randomized expert policy. In experiments using the new corpus we found that training with imitation learning substantially improves performance compared to independently trained classifiers. Finally, we showed how to improve performance further by incorporating an alignment dictionary. der grant agreement no. 270019 (SPACEBOOK",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "9"
},
{
"text": "A similar filtering process was used for GeoQuery (Section 7.5.1 inZelle (1995)) and ATIS (principles of interpretation document (/atis3/doc/pofi.doc) in the NIST CDs).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The research reported was conducted while the first author was at the University of Cambridge and funded by the European Community's Seventh Framework Programme (FP7/2007-2013) un-project www.spacebook-project.eu). The authors would like to acknowledge the work of Diane Nicholls in the annotation; the efforts of Robin Hill in collecting the dialogs from Wizard-of-Oz experiments; and Tim Vieira for helpful comments on an earlier version of this manuscript.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Dialogue act markup in several layers",
"authors": [
{
"first": "James",
"middle": [],
"last": "Allen",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Core",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Allen and Mark Core. 1997. Dialogue act markup in several layers. Technical report, University of Rochester.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Semantic parsing as machine translation",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Andreas",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Andreas, Andreas Vlachos, and Stephen Clark. 2013. Semantic parsing as machine translation. In Proceedings of the 51st Annual Meeting of the Associ- ation for Computational Linguistics (short papers).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Bootstrapping semantic parsers from conversations",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "421--432",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Artzi and Luke Zettlemoyer. 2011. Bootstrapping semantic parsers from conversations. In Proceedings of the 2011 Conference on Empirical Methods in Natu- ral Language Processing, pages 421-432, Edinburgh, UK.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Abstract meaning representation for sembanking",
"authors": [
{
"first": "Laura",
"middle": [],
"last": "Banarescu",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Bonial",
"suffix": ""
},
{
"first": "Shu",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Madalina",
"middle": [],
"last": "Georgescu",
"suffix": ""
},
{
"first": "Kira",
"middle": [],
"last": "Griffitt",
"suffix": ""
},
{
"first": "Ulf",
"middle": [],
"last": "Hermjakob",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse",
"volume": "",
"issue": "",
"pages": "178--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Dis- course, pages 178-186, Sofia, Bulgaria, August. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Semantic parsing via paraphrasing",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Berant and Percy Liang. 2014. Semantic pars- ing via paraphrasing. In Proceedings of the 52nd An- nual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Freebase: a collaboratively created graph database for structuring human knowledge",
"authors": [
{
"first": "Kurt",
"middle": [],
"last": "Bollacker",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Evans",
"suffix": ""
},
{
"first": "Praveen",
"middle": [],
"last": "Paritosh",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Sturge",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Taylor",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data",
"volume": "",
"issue": "",
"pages": "1247--1250",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a col- laboratively created graph database for structuring hu- man knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, pages 1247-1250.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Iso 24617-2: A semantically-based standard for dialogue annotation",
"authors": [
{
"first": "Harry",
"middle": [],
"last": "Bunt",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Alexandersson",
"suffix": ""
},
{
"first": "Jae-Woong",
"middle": [],
"last": "Choe",
"suffix": ""
},
{
"first": "Alex",
"middle": [
"Chengyu"
],
"last": "Fang",
"suffix": ""
},
{
"first": "Koiti",
"middle": [],
"last": "Hasida",
"suffix": ""
},
{
"first": "Volha",
"middle": [],
"last": "Petukhova",
"suffix": ""
},
{
"first": "Andrei",
"middle": [],
"last": "Popescu-Belis",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Traum",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Eight International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harry Bunt, Jan Alexandersson, Jae-Woong Choe, Alex Chengyu Fang, Koiti Hasida, Volha Petukhova, Andrei Popescu-Belis, and David Traum. 2012. Iso 24617-2: A semantically-based standard for dialogue annotation. In Proceedings of the Eight International Conference on Language Resources and Evaluation, Istanbul, Turkey.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Large-scale Semantic Parsing via Schema Matching and Lexicon Extension",
"authors": [
{
"first": "Qingqing",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Yates",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qingqing Cai and Alexander Yates. 2013. Large-scale Semantic Parsing via Schema Matching and Lexicon Extension. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Minimal recursion semantics: An introduction",
"authors": [
{
"first": "Ann",
"middle": [],
"last": "Copestake",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Flickinger",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Sag",
"suffix": ""
},
{
"first": "Carl",
"middle": [],
"last": "Pollard",
"suffix": ""
}
],
"year": 2005,
"venue": "Research in Language and Computation",
"volume": "3",
"issue": "2-3",
"pages": "281--332",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ann Copestake, Dan Flickinger, Ivan Sag, and Carl Pol- lard. 2005. Minimal recursion semantics: An in- troduction. Research in Language and Computation, 3(2-3):281-332.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Adaptive regularization of weight vectors",
"authors": [
{
"first": "Koby",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Kulesza",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2009,
"venue": "Advances in Neural Information Processing Systems",
"volume": "22",
"issue": "",
"pages": "414--422",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koby Crammer, Alex Kulesza, and Mark Dredze. 2009. Adaptive regularization of weight vectors. In Ad- vances in Neural Information Processing Systems 22, pages 414-422.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Expanding the scope of the ATIS task: the ATIS-3 corpus",
"authors": [
{
"first": "Deborah",
"middle": [
"A"
],
"last": "Dahl",
"suffix": ""
},
{
"first": "Madeleine",
"middle": [],
"last": "Bates",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Fisher",
"suffix": ""
},
{
"first": "Kate",
"middle": [],
"last": "Hunicke-Smith",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Pallett",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Pao",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rudnicky",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Shriberg",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the Workshop on Human Language Technology",
"volume": "",
"issue": "",
"pages": "43--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deborah A. Dahl, Madeleine Bates, Michael Brown, William Fisher, Kate Hunicke-Smith, David Pallett, Christine Pao, Alexander Rudnicky, and Elizabeth Shriberg. 1994. Expanding the scope of the ATIS task: the ATIS-3 corpus. In Proceedings of the Work- shop on Human Language Technology, pages 43-48, Plainsboro, New Jersey.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Search-based structured prediction",
"authors": [
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": "III"
},
{
"first": "John",
"middle": [],
"last": "Langford",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2009,
"venue": "Machine Learning",
"volume": "75",
"issue": "",
"pages": "297--325",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hal Daum\u00e9 III, John Langford, and Daniel Marcu. 2009. Search-based structured prediction. Machine Learn- ing, 75:297-325.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Metacost: a general method for making classifiers cost-sensitive",
"authors": [
{
"first": "Pedro",
"middle": [],
"last": "Domingos",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 5th International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "155--164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pedro Domingos. 1999. Metacost: a general method for making classifiers cost-sensitive. In Proceedings of the 5th International Conference on Knowledge Dis- covery and Data Mining, pages 155-164. Association for Computing Machinery.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A discriminative graph-based parser for the abstract meaning representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Flanigan",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1426--1436",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Flanigan, Sam Thomson, Jaime Carbonell, Chris Dyer, and Noah A. Smith. 2014. A discriminative graph-based parser for the abstract meaning represen- tation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 1426-1436, Baltimore, Maryland, June. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Training deterministic parsers with non-deterministic oracles",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2013,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "3",
"issue": "1",
"pages": "403--414",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg and Joakim Nivre. 2013. Training deterministic parsers with non-deterministic oracles. Transactions of the Association for Computational Linguistics, 3(1):403-414, October.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Dynamic feature selection for dependency parsing",
"authors": [
{
"first": "He",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": "III"
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1455--1464",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "He He, Hal Daum\u00e9 III, and Jason Eisner. 2013. Dynamic feature selection for dependency parsing. In Proceed- ings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1455-1464, Seattle, October.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The Third Dialog State Tracking Challenge",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Henderson",
"suffix": ""
},
{
"first": "Blaise",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Williams",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of IEEE Spoken Language Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Henderson, Blaise Thomson, and Jason Williams. 2014. The Third Dialog State Tracking Challenge. In Proceedings of IEEE Spoken Language Technology.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "SpaceBook Project: Final Data Release, Wizard-of-Oz (WoZ) experiments",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Jana",
"middle": [],
"last": "G\u00f6tze",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Webber",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robin Hill, Jana G\u00f6tze, and Bonnie Webber. 2013. SpaceBook Project: Final Data Release, Wizard-of- Oz (WoZ) experiments. Technical report, University of Edinburgh.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Evaluating a city exploration dialogue system with integrated question-answering and pedestrian navigation",
"authors": [
{
"first": "Srinivasan",
"middle": [],
"last": "Janarthanam",
"suffix": ""
},
{
"first": "Oliver",
"middle": [],
"last": "Lemon",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Bartie",
"suffix": ""
},
{
"first": "Tiphaine",
"middle": [],
"last": "Dalmas",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Dickinson",
"suffix": ""
},
{
"first": "Xingkun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Mackaness",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Webber",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Srinivasan Janarthanam, Oliver Lemon, Phil Bartie, Tiphaine Dalmas, Anna Dickinson, Xingkun Liu, William Mackaness, and Bonnie Webber. 2013. Eval- uating a city exploration dialogue system with inte- grated question-answering and pedestrian navigation.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "1",
"issue": "",
"pages": "1660--1668",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "In Proceedings of the 51st Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1660-1668, Sofia, Bulgaria, Au- gust. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Semantic parsing with Bayesian tree transducers",
"authors": [
{
"first": "Bevan",
"middle": [
"Keeley"
],
"last": "Jones",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "488--496",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bevan Keeley Jones, Mark Johnson, and Sharon Goldwa- ter. 2012. Semantic parsing with Bayesian tree trans- ducers. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 488-496.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "From Discourse to Logic. Introduction to Modeltheoretic Semantics of Natural Language, Formal Logic and Discourse Representation Theory",
"authors": [
{
"first": "Hans",
"middle": [],
"last": "Kamp",
"suffix": ""
},
{
"first": "Uwe",
"middle": [],
"last": "Reyle",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hans Kamp and Uwe Reyle. 1993. From Discourse to Logic. Introduction to Modeltheoretic Semantics of Natural Language, Formal Logic and Discourse Rep- resentation Theory. Kluwer, Dordrecht.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "User simulation in the development of statistical spoken dialogue systems",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Keizer",
"suffix": ""
},
{
"first": "Stphane",
"middle": [],
"last": "Rossignol",
"suffix": ""
},
{
"first": "Senthilkumar",
"middle": [],
"last": "Chandramohan",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Pietquin",
"suffix": ""
}
],
"year": 2012,
"venue": "Oliver Lemon and Olivier Pietquin, editors, Data-Driven Methods for Adaptive Spoken Dialogue Systems",
"volume": "",
"issue": "",
"pages": "39--73",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Keizer, Stphane Rossignol, Senthilkumar Chan- dramohan, and Olivier Pietquin. 2012. User simula- tion in the development of statistical spoken dialogue systems. In Oliver Lemon and Olivier Pietquin, edi- tors, Data-Driven Methods for Adaptive Spoken Dia- logue Systems, pages 39-73. Springer New York.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "An empirical methodology for writing user-friendly natural language computer applications",
"authors": [
{
"first": "John",
"middle": [
"F."
],
"last": "Kelley",
"suffix": ""
}
],
"year": 1983,
"venue": "Proceedings of the SIGCHI Conference on Human Factors in Computing Systems",
"volume": "",
"issue": "",
"pages": "193--196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John F. Kelley. 1983. An empirical methodology for writing user-friendly natural language computer appli- cations. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 193- 196.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Lexical generalization in CCG grammar induction for semantic parsing",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1512--1523",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwa- ter, and Mark Steedman. 2011. Lexical generaliza- tion in CCG grammar induction for semantic parsing. In Proceedings of the 2011 Conference on Empiri- cal Methods in Natural Language Processing, pages 1512-1523, Edinburgh, UK.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Scaling semantic parsers with on-the-fly ontology matching",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Eunsol",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1545--1556",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1545-1556, Seattle, WA.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Learning dependency-based compositional semantics",
"authors": [
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Jordan",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "590--599",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Percy Liang, Michael Jordan, and Dan Klein. 2011. Learning dependency-based compositional semantics. In Proceedings of the 49th Annual Meeting of the As- sociation for Computational Linguistics: Human Lan- guage Technologies, pages 590-599, Portland, Ore- gon.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Walk the talk: connecting language, knowledge, and action in route instructions",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Macmahon",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Stankiewicz",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Kuipers",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st National Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1475--1482",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt MacMahon, Brian Stankiewicz, and Benjamin Kuipers. 2006. Walk the talk: connecting language, knowledge, and action in route instructions. In Pro- ceedings of the 21st National Conference on Artificial Intelligence, pages 1475-1482. AAAI Press.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "The Stanford CoreNLP natural language processing toolkit",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"J"
],
"last": "Bethard",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language pro- cessing toolkit. In Proceedings of 52nd Annual Meet- ing of the Association for Computational Linguistics: System Demonstrations, pages 55-60.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Improved statistical alignment models",
"authors": [
{
"first": "Franz",
"middle": [
"Josef"
],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 38th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney. 2000. Improved sta- tistical alignment models. In Proceedings of the 38th",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Annual Meeting of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "440--447",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 440-447, Hong Kong, China.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A reduction of imitation learning and structured prediction to no-regret online learning",
"authors": [
{
"first": "St\u00e9phane",
"middle": [],
"last": "Ross",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"J"
],
"last": "Gordon",
"suffix": ""
},
{
"first": "Drew",
"middle": [],
"last": "Bagnell",
"suffix": ""
}
],
"year": 2011,
"venue": "14th International Conference on Artificial Intelligence and Statistics",
"volume": "",
"issue": "",
"pages": "627--635",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "St\u00e9phane Ross, Geoffrey J. Gordon, and Drew Bagnell. 2011. A reduction of imitation learning and structured prediction to no-regret online learning. In 14th In- ternational Conference on Artificial Intelligence and Statistics, pages 627-635.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Dialogue act modeling for automatic tagging and recognition of conversational speech",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Ries",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Coccaro",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Shriberg",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Bates",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Taylor",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Carol",
"middle": [],
"last": "Van Ess-Dykema",
"suffix": ""
},
{
"first": "Marie",
"middle": [],
"last": "Meteer",
"suffix": ""
}
],
"year": 2000,
"venue": "Computational linguistics",
"volume": "26",
"issue": "3",
"pages": "339--373",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Stolcke, Klaus Ries, Noah Coccaro, Eliza- beth Shriberg, Rebecca Bates, Daniel Jurafsky, Paul Taylor, Rachel Martin, Carol Van Ess-Dykema, and Marie Meteer. 2000. Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational linguistics, 26(3):339-373.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "What's left to be understood in ATIS?",
"authors": [
{
"first": "Gokhan",
"middle": [],
"last": "Tur",
"suffix": ""
},
{
"first": "Dilek",
"middle": [],
"last": "Hakkani-T\u00fcr",
"suffix": ""
},
{
"first": "Larry",
"middle": [],
"last": "Heck",
"suffix": ""
}
],
"year": 2010,
"venue": "IEEE Workshop on Spoken Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gokhan Tur, Dilek Hakkani-T\u00fcr, and Larry Heck. 2010. What's left to be understood in ATIS? In IEEE Work- shop on Spoken Language Technologies.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "An investigation of imitation learning algorithms for structured prediction",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 10th European Workshop on Reinforcement Learning",
"volume": "24",
"issue": "",
"pages": "143--154",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Vlachos. 2012. An investigation of imitation learning algorithms for structured prediction. Journal of Machine Learning Research Workshop and Confer- ence Proceedings, Proceedings of the 10th European Workshop on Reinforcement Learning, 24:143-154.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "DARPA communicator: cross-system results for the 2001 evaluation",
"authors": [
{
"first": "",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Bryan",
"middle": [
"L."
],
"last": "Pellom",
"suffix": ""
},
{
"first": "Alexandros",
"middle": [],
"last": "Potamianos",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [
"J."
],
"last": "Passonneau",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Gregory",
"middle": [
"A."
],
"last": "Sanders",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [],
"last": "Seneff",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Stallard",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 7th International Conference on Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Le, Bryan L. Pellom, Alexandros Potamianos, Re- becca J. Passonneau, Salim Roukos, Gregory A. Sanders, Stephanie Seneff, and David Stallard. 2002. DARPA communicator: cross-system results for the 2001 evaluation. In Proceedings of the 7th Interna- tional Conference on Spoken Language Processing.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "A Formal Approach to Discourse Anaphora",
"authors": [
{
"first": "Bonnie",
"middle": [
"Lynn"
],
"last": "Webber",
"suffix": ""
}
],
"year": 1978,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bonnie Lynn Webber. 1978. A Formal Approach to Dis- course Anaphora. Ph.D. thesis, Harvard University.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Using Inductive Logic Programming to Automate the Construction of Natural Language Parsers",
"authors": [
{
"first": "John",
"middle": [
"M"
],
"last": "Zelle",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John M. Zelle. 1995. Using Inductive Logic Program- ming to Automate the Construction of Natural Lan- guage Parsers. Ph.D. thesis, Department of Computer Sciences, The University of Texas at Austin.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Online learning of relaxed CCG grammars for parsing to logical form",
"authors": [
{
"first": "Luke",
"middle": [
"S"
],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "678--687",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luke S. Zettlemoyer and Michael Collins. 2007. Online learning of relaxed CCG grammars for parsing to logi- cal form. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Process- ing and Computational Natural Language Learning, pages 678-687.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Learning context-dependent mappings from sentences to logical form",
"authors": [
{
"first": "Luke",
"middle": [
"S"
],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint conference of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing",
"volume": "",
"issue": "",
"pages": "976--984",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luke S. Zettlemoyer and Michael Collins. 2009. Learn- ing context-dependent mappings from sentences to logical form. In Proceedings of the Joint conference of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, pages 976-984, Singapore.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "The features extracted from the utterance are all unigrams, what 's the nearest italian for a meal ? (num:singular, det:def, cuisine:\"italian\") X2:distance(location:USER, location:X1, argmin:X1) focus:X1(g) Node-argument form dialogAct:SET QUESTION X1:restaurant X1:restaurant(num:singular) X1:restaurant(det:def ) X1:restaurant(cuisine:\"italian\") X2:distance X2:distance(location:USER) X2:distance(location:X1-restaurant (num:singular, det:def )) X2:distance(argmin:X1) focus:X1-restaurant(num:singular, det:def ) (h) Atomic predictions",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "Semantic parsing decomposition.",
"type_str": "figure"
},
"TABREF2": {
"html": null,
"num": null,
"type_str": "table",
"text": "to our dataset but in initial experiments the Prec/F) 73.9 73.7 73.8 76.8 77.3 77.1 69.5 61.3 65.1 77.3 63.6 69.8 focus (Rec/Prec/F) 87.1 80.7 83.8 86 81.2 83.6 81.6 73.4 77.3 90.6 76.8 83.1 overall (Rec/Prec/F) 56.6 42.3 48.4 63.5 46.2 53.5 41.2 47.8 44.3 50 47.4 48.7",
"content": "<table><tr><td/><td>Imit</td><td>Imit+align</td><td>Indep</td><td colspan=\"2\">Indep+align</td></tr><tr><td>exact match (accuracy)</td><td>47.9%</td><td>49.1%</td><td>47.6%</td><td/><td>46.1%</td></tr><tr><td>dialog act (accuracy)</td><td>77%</td><td>80.5%</td><td>79.8%</td><td/><td>79.5%</td></tr><tr><td>nodes (Rec/Prec/F) arguments (Rec/</td><td colspan=\"3\">68.7 45.7 54.8 75.5 51.7 61.4 41.9 61.1 49.7</td><td>54</td><td>64.9 58.9</td></tr></table>"
}
}
}
}