{
"paper_id": "U19-1013",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:07:43.917328Z"
},
"title": "A Pointer Network Architecture for Context-Dependent Semantic Parsing",
"authors": [
{
"first": "Xuanli",
"middle": [],
"last": "He",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Monash University",
"location": {
"country": "Australia"
}
},
"email": "[email protected]"
},
{
"first": "Quan",
"middle": [
"Hung"
],
"last": "Tran",
"suffix": "",
"affiliation": {
"laboratory": "Adobe Research",
"institution": "",
"location": {
"settlement": "San Jose",
"region": "CA"
}
},
"email": "[email protected]"
},
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Monash University",
"location": {
"country": "Australia"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Semantic parsing targets at mapping human utterances into structured meaning representations, such as logical forms, programming snippets, SQL queries etc. In this work, we focus on logical form generation, which is extracted from an automated email assistant system. Since this task is dialogue-oriented, information across utterances must be well handled. Furthermore, certain inputs from users are used as arguments for the logical form, which requires a parser to distinguish the functional words and content words. Hence, an intelligent parser should be able to switch between generation mode and copy mode. In order to address the aforementioned issues, we equip the vanilla seq2seq model with a pointer network and a context-dependent architecture to generate more accurate logical forms. Our model achieves state-of-the-art performance on the email assistant task.",
"pdf_parse": {
"paper_id": "U19-1013",
"_pdf_hash": "",
"abstract": [
{
"text": "Semantic parsing targets at mapping human utterances into structured meaning representations, such as logical forms, programming snippets, SQL queries etc. In this work, we focus on logical form generation, which is extracted from an automated email assistant system. Since this task is dialogue-oriented, information across utterances must be well handled. Furthermore, certain inputs from users are used as arguments for the logical form, which requires a parser to distinguish the functional words and content words. Hence, an intelligent parser should be able to switch between generation mode and copy mode. In order to address the aforementioned issues, we equip the vanilla seq2seq model with a pointer network and a context-dependent architecture to generate more accurate logical forms. Our model achieves state-of-the-art performance on the email assistant task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recently, due to the breakthrough of the deep learning, numerous and various tasks within the filed of natural language processing (NLP) have made impressive achievements (Vaswani et al., 2017; Devlin et al., 2018; Edunov et al., 2018) . However, most these achievements are assessed by automatic metrics, which are relatively superficial and brittle, and can be easily tricked (Paulus et al., 2017; Jia and Liang, 2017; L\u00e4ubli et al., 2018) . Hence, understanding the underlying meaning of natural language sentences is crucial to NLP tasks.",
"cite_spans": [
{
"start": 171,
"end": 193,
"text": "(Vaswani et al., 2017;",
"ref_id": "BIBREF15"
},
{
"start": 194,
"end": 214,
"text": "Devlin et al., 2018;",
"ref_id": "BIBREF1"
},
{
"start": 215,
"end": 235,
"text": "Edunov et al., 2018)",
"ref_id": "BIBREF4"
},
{
"start": 378,
"end": 399,
"text": "(Paulus et al., 2017;",
"ref_id": "BIBREF9"
},
{
"start": 400,
"end": 420,
"text": "Jia and Liang, 2017;",
"ref_id": "BIBREF5"
},
{
"start": 421,
"end": 441,
"text": "L\u00e4ubli et al., 2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As an appealing direction in natural language understanding, semantic parsing has been widely studied in the NLP community (Ling et al., 2016; Dong and Lapata, 2016; Jia and Liang, 2017) . Semantic parsing aims at converting human utterances to machine executable representations. Most existing work focuses on parsing individual utterances independently, even they have an access to the contextual information. In spite of several pioneering efforts (Zettlemoyer and Collins, 2009; Srivastava et al., 2017) , these pre-neural models suffer from complicated hand-crafted feature engineering, compared to their neural counterparts (Dong and Lapata, 2018; Rabinovich et al., 2017) . One notable exception is the work of Suhr et al. (2018) , who incorporate context into ATIS data with a neural approach.",
"cite_spans": [
{
"start": 123,
"end": 142,
"text": "(Ling et al., 2016;",
"ref_id": "BIBREF8"
},
{
"start": 143,
"end": 165,
"text": "Dong and Lapata, 2016;",
"ref_id": "BIBREF2"
},
{
"start": 166,
"end": 186,
"text": "Jia and Liang, 2017)",
"ref_id": "BIBREF5"
},
{
"start": 451,
"end": 482,
"text": "(Zettlemoyer and Collins, 2009;",
"ref_id": "BIBREF17"
},
{
"start": 483,
"end": 507,
"text": "Srivastava et al., 2017)",
"ref_id": "BIBREF12"
},
{
"start": 630,
"end": 653,
"text": "(Dong and Lapata, 2018;",
"ref_id": "BIBREF3"
},
{
"start": 654,
"end": 678,
"text": "Rabinovich et al., 2017)",
"ref_id": "BIBREF10"
},
{
"start": 718,
"end": 736,
"text": "Suhr et al. (2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we propose a neural semantic parser for email assistant task which incorporates the conversation context as well as a copy mechanism to fill-in the arguments of the logical forms from the input sentence. Our model achieves stateof-the-art (SOTA) performance. We further provide details analysis about where these improvements come from.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To build our models, we follow a process of errordriven design. We first start with a simple seq2seq model, then we closely examine the errors, group them, and then propose a solution to each of these error groups. From our examination, we identify two main sources of errors of a seq2seq model: i) the overly strong influence of the language model component, and ii) the lack of contextual information. Thus we design our model to incorporate the Pointer Mechanism and Context-dependent Mechanism to solve these problems. From this point, we refer to the errors caused by the first source (language model) as Copy-related errors, and the ones caused by the second source (lack of context) as Context-related errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "2"
},
{
"text": "With the basic seq2seq architecture, the model's generation is heavily influenced by the language Figure 1 : A example of semantic parsing on the email assistant system. model aspect. Thus, it tends to use the strings it has seen in the training dataset (see Table 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 98,
"end": 106,
"text": "Figure 1",
"ref_id": null
},
{
"start": 259,
"end": 266,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Word Copy using the Pointer Mechanism",
"sec_num": "2.1"
},
{
"text": "current utterance: set body to blue logical form reference: (setFieldFromString ( getProbMutableField-ByFieldName body ) ( stringValue \" blue \" ) ) seq2seq: (setFieldFromString ( getProbMutableFieldBy-FieldName body ) ( stringValue \" charlie is on his way \" ) ) From this analysis, we realize that it would be crucial for the model to learn when to copy from the source sentence, and when to generate a new token. Thus, we incorporate the pointer mechanism into our base seq2seq approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Copy using the Pointer Mechanism",
"sec_num": "2.1"
},
{
"text": "As shown in Figure 1 , for an email assistant system, users inputs are usually comprised of a functional part and a content part. A semantic parser should be able to distinguish and handle them in a different way. Specifically, the parser must generate a series of lambda-like functions for the functional part, while the content part should be copied to the argument slot.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Word Copy using the Pointer Mechanism",
"sec_num": "2.1"
},
{
"text": "Our pointer network is inspired by that of See et al. (2017) designed for the summarisation task. Given an utterance x and a logical form y, at each time step t, we have a soft switch which determines the contributions of the token generator and the copier which uses a pointer over the words of the input utterance:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Copy using the Pointer Mechanism",
"sec_num": "2.1"
},
{
"text": "P (y t ) = p gen P vocab (y t ) + (1 \u2212 p gen ) i:x i =y t \u03b1 t i where \u03b1 t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Copy using the Pointer Mechanism",
"sec_num": "2.1"
},
{
"text": "i is the attention score over the position i in the t-th generation step, and P vocab is a probability distribution over the vocabulary. p gen \u2208 [0, 1] is the generation probability, modelled as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Copy using the Pointer Mechanism",
"sec_num": "2.1"
},
{
"text": "p gen = \u03c3(w T c c t + w T s s t + w T x x t + b)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Copy using the Pointer Mechanism",
"sec_num": "2.1"
},
{
"text": "where c t and s t are the context vector and the decoder state respectively, while w T c , w T s , w T x and b are learnable parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Copy using the Pointer Mechanism",
"sec_num": "2.1"
},
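{
"text": "To make the soft switch concrete, the following is a minimal PyTorch-style sketch of the pointer-generator mixture described above. It is an illustrative sketch only: the module name PointerSwitch, the shared hidden size for c_t, s_t, and x_t, and the fused gate layer are our own assumptions, not a released implementation.\n\nimport torch\nimport torch.nn as nn\n\nclass PointerSwitch(nn.Module):\n    # Mixes the generator distribution and the copy distribution with a soft switch.\n    def __init__(self, hidden_size):\n        super().__init__()\n        # w_c, w_s, w_x and b from the p_gen equation, folded into a single linear layer\n        self.gate = nn.Linear(3 * hidden_size, 1)\n\n    def forward(self, context, state, dec_input, vocab_dist, attn_scores, src_ids):\n        # p_gen = sigmoid(w_c^T c_t + w_s^T s_t + w_x^T x_t + b)\n        p_gen = torch.sigmoid(self.gate(torch.cat([context, state, dec_input], dim=-1)))\n        # copy distribution: scatter the attention mass alpha_i^t onto the source token ids\n        copy_dist = torch.zeros_like(vocab_dist).scatter_add(-1, src_ids, attn_scores)\n        # P(y_t) = p_gen * P_vocab(y_t) + (1 - p_gen) * sum over i with x_i = y_t of alpha_i^t\n        return p_gen * vocab_dist + (1 - p_gen) * copy_dist",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Copy using the Pointer Mechanism",
"sec_num": "2.1"
},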
{
"text": "Understanding conversations between a user and the system requires the comprehension of the flow of the discourse among sequence of utterances. Processing utterances independently within a conversation leads to misinterpreting users inputs, which will result in incorrect logical form generation (see Table 2 ). Therefore, we incorporate the context when processing the current utterance for a better generation. Basically, a conversation consists of a sequence of user utterances:{x 1 , ..., x T } paired with a list of logical forms: {y 1 , ..., y T }. For a given utterance sequence Suhr et al. (2018) , we introduce a hierarchical architecture to model both utterance-level and conversation-level information; see Figure 2 . At the utterance level, we use an attentional seq2seq model to establish the mapping from an utterance x i to its corresponding logical form y i :",
"cite_spans": [
{
"start": 586,
"end": 604,
"text": "Suhr et al. (2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 301,
"end": 308,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 718,
"end": 726,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Conditioning on Conversation Context",
"sec_num": "2.2"
},
{
"text": "x i = {x i 1 , ..., x i m }, a semantic parser should predict its associated logical form y i = {y i 1 , ..., y i n }. Inspired by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditioning on Conversation Context",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h i 1:m = Encoder(x i 1 , ..., x i m ),",
"eq_num": "(1)"
}
],
"section": "Conditioning on Conversation Context",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c i t = Attention(h i 1:m , s i t\u22121 ), (2) y i t , s i t = Decoder(y i t\u22121 , s i t\u22121 , c i t )",
"eq_num": "(3)"
}
],
"section": "Conditioning on Conversation Context",
"sec_num": "2.2"
},
{
"text": "As the seq2seq model, we investigate the use of RNN-based and Transformer-based architectures. Furthermore, we make use of a conversation-level RNN to capture the wider conversational context:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditioning on Conversation Context",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "g i = RNN(h i m , g i\u22121 )",
"eq_num": "(4)"
}
],
"section": "Conditioning on Conversation Context",
"sec_num": "2.2"
},
{
"text": "where h i m is the last hidden state of the ith utterance, and g is the conversational hidden state.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditioning on Conversation Context",
"sec_num": "2.2"
},
{
"text": "In order to incorporate the conversational information into our model, we modify the Equ. 1 by injecting g i\u22121 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditioning on Conversation Context",
"sec_num": "2.2"
},
{
"text": "h i 1:m = Encoder([x i 1 : g i\u22121 ], ..., [x i m : g i\u22121 ])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditioning on Conversation Context",
"sec_num": "2.2"
},
{
"text": "where [:] denotes a concatenation operation. Similar to memory networks (Sukhbaatar et al., 2015) , it is essential to give the decoder a direct access to the last k utterances, if we want to leverage the discourse information effectively. Hence, we concatenate the previous k utterance {x i\u2212k , .., x i\u22121 } with the current utterance. Now Equ. 2 is rewritten as:",
"cite_spans": [
{
"start": 72,
"end": 97,
"text": "(Sukhbaatar et al., 2015)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conditioning on Conversation Context",
"sec_num": "2.2"
},
{
"text": "c i t = Attention(h i\u2212k 1:m , .., h i\u22121 1:m , h i 1:m , s i t\u22121 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditioning on Conversation Context",
"sec_num": "2.2"
},
{
"text": "In addition, since the importance of the concatenated utterances is different, it is significant to differentiate these utterances to reduce confusion. Therefore, as suggested by Suhr et al. (2018) , we add relative position embeddings E pos [\u2022] to the utterances when we compute attention scores. Depending on their distances from the current utterance, we append E pos [0], .., E pos [k] to the previous utterances respectively.",
"cite_spans": [
{
"start": 179,
"end": 197,
"text": "Suhr et al. (2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conditioning on Conversation Context",
"sec_num": "2.2"
},
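{
"text": "As a hedged illustration of the context-dependent encoding above, the sketch below shows how the previous conversational state g_{i-1} can be concatenated to every word embedding before the utterance-level encoder, and how the conversation-level recurrence of Equ. 4 is updated from the last hidden state h^i_m. The class and variable names (ContextEncoder, utt_rnn, turn_rnn) are hypothetical, the conversation-level recurrence is shown with a GRU cell for brevity whereas our implementation uses LSTM cells, and the relative position embeddings are omitted.\n\nimport torch\nimport torch.nn as nn\n\nclass ContextEncoder(nn.Module):\n    def __init__(self, emb_size, hidden_size):\n        super().__init__()\n        # utterance-level encoder over [x^i_j : g_{i-1}] (Equ. 1 with the context injected)\n        self.utt_rnn = nn.GRU(emb_size + hidden_size, hidden_size, batch_first=True)\n        # conversation-level RNN over the last hidden state of each utterance (Equ. 4)\n        self.turn_rnn = nn.GRUCell(hidden_size, hidden_size)\n\n    def encode_turn(self, word_embs, g_prev):\n        # word_embs: (batch, m, emb_size); g_prev: (batch, hidden_size)\n        g_rep = g_prev.unsqueeze(1).expand(-1, word_embs.size(1), -1)\n        # h^i_{1:m} = Encoder([x^i_1 : g_{i-1}], ..., [x^i_m : g_{i-1}])\n        h, _ = self.utt_rnn(torch.cat([word_embs, g_rep], dim=-1))\n        # g^i = RNN(h^i_m, g^{i-1})\n        g_next = self.turn_rnn(h[:, -1], g_prev)\n        return h, g_next",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditioning on Conversation Context",
"sec_num": "2.2"
},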
{
"text": "Dataset Semantic paring is crucial to dialogue systems, especially for multi-turn conversations. Additionally, understanding users' intentions and extracting salient requirements play an important role in the dialogue-related semantic parsing. We use a dataset created by Srivastava et al. (2017) as a case study to explore the performance of semantic parsing in dialogue systems. This dataset is collected from an email assistant, which can help users to manage their emails. As shown in Table 3 Users can type some human sentences from the interface. Then the email assistant can automatically convert the natural sentences to the machineunderstandable logical forms.",
"cite_spans": [
{
"start": 272,
"end": 296,
"text": "Srivastava et al. (2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 489,
"end": 496,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "dialog history ... user: Define the concept \" contact \" user: add field \" email \" to concept \" contact \" user: create contact \" Mom \" ... logical form ... (defineConcept ( stringNoun \" contact \" ) ) (addFieldToConcept contact ( stringNoun \" email \" ) ) (createInstanceByFullNames contact ( stringNoun \" mom \" ) ) ... 2017, we partition the dataset into a training fold (93 conversations) and a test fold (20 conversations) as well. However, this partition might be different from Srivastava et al. 2017, as they only release the raw Email Assistant dataset. The total number of user utterances is 4759, the number of sessions is 113, and the mean/max of the number of utterances per interactive session is 42/273.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "Prior to this work, Srivastava et al. (2017) also incorporate the conversational context into a CCG parser (Zettlemoyer and Collins, 2007) . CCG requires extensive hand-feature engineering to construct text-based features. However, neural semantic parsers have been demonstrating impressive improvement over various and numerous dataset (Suhr et al., 2018; Dong and Lapata, 2018) . Hence, we explore both RNN-based (Bahdanau et al., 2014) and transformer-based (Vaswani et al., 2017) architectures for our attentional seq2seq model, denoted as RNNS2S and Transformer respectively. Hyperparameters, architecture details, and other experimental choices are detailed in the supplementary material. Unless otherwise mentioned, we use 3 previous utterances as the history. Since there is no validation set, we use 10fold cross validation over the training set to find the best parameters. Table 4 demonstrates the accuracy of different models. Our RNNS2S baseline already surpasses the previous SOTA result with a large margin. However, since we use our own partition, this comparison should not be as a reference. Both pointer network and conversational architecture dramatically advance the accuracy. Finally, our transformer model combining these two techniques obtains a new SOTA result.",
"cite_spans": [
{
"start": 20,
"end": 44,
"text": "Srivastava et al. (2017)",
"ref_id": "BIBREF12"
},
{
"start": 107,
"end": 138,
"text": "(Zettlemoyer and Collins, 2007)",
"ref_id": "BIBREF16"
},
{
"start": 337,
"end": 356,
"text": "(Suhr et al., 2018;",
"ref_id": "BIBREF13"
},
{
"start": 357,
"end": 379,
"text": "Dong and Lapata, 2018)",
"ref_id": "BIBREF3"
},
{
"start": 415,
"end": 438,
"text": "(Bahdanau et al., 2014)",
"ref_id": "BIBREF0"
},
{
"start": 461,
"end": 483,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 884,
"end": 891,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Main Results",
"sec_num": "3.1"
},
{
"text": "Seq2seq (Srivastava et al., 2017) 52.3 SPCon (Srivastava et al., 2017) 62. ",
"cite_spans": [
{
"start": 8,
"end": 33,
"text": "(Srivastava et al., 2017)",
"ref_id": "BIBREF12"
},
{
"start": 45,
"end": 70,
"text": "(Srivastava et al., 2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous methods",
"sec_num": null
},
{
"text": "In this section we provide some deep analysis on our models. Since we see the same trend in both RNNS2S and Transformer, we only report the analysis of RNNS2S. The supplementary material reports the analysis of Transformer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "3.2"
},
{
"text": "We analyze the test data, and count the number of errors that can be rectified by introducing the pointer network for both vanilla and context-dependent seq2seq models. In the test set, we identify that a total of 37 errors made by the seq2seq model and 36 errors made by the seq2seq+context model can be rectified by the copy mechanism. According to Figure 3 , our pointer network fixes at least half of the incorrect instances. Clearly, the pointer mechanism cannot solve all copy-related errors. After scrutinizing the system-generated results, we realize that the pointer network tends to retain the copy mode once it is triggered. This phenomenon is consistent with the observations by See et al. (2017) . Consequently, the extra copies impinge on the accuracy of the system. The effects of the context-dependent mechanism. In the experiments, our context-dependent mechanism is shown to be able to address contextrelated errors, especially when user's input implies a complex and compositional command. These complex commands usually involve a series of complicated actions, as shown in Table 5 . According to Table 6 , our context-dependent model rectifies 27 out of 68 context-related errors. Since we notice that previous utterances can also obfuscate the model, we conduct an ablation study over the size of history. As shown in Figure 4 , incorporating 3 previous utterances reach the best performance. According to Figure 5 , we believe that incorporating 3 previous utterances covers sufficient contextual information. Less than this number, the system cannot better utilize context, while the salient information is contaminated by the extra history. The same behavior is observed in Transformer model. We argue that the size of the effective history would be dependent utterance: Set recipient to Mom's email . Set subject to hello and send the email logical form: ( doSeq ( setFieldFromFieldVal ( getProbMutableFieldByFieldName body ) ( evalField ( getProbFieldByInstanceName-AndFieldName inbox body ) ) ) ( doSeq ( setFieldFromFieldVal ( getProbMutableFieldByFieldName recipient list ) ( eval-Field ( getProbFieldByInstanceNameAndFieldName inbox sender ) ) ) ( send email ) ) ) ",
"cite_spans": [
{
"start": 691,
"end": 708,
"text": "See et al. (2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 351,
"end": 359,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 1093,
"end": 1100,
"text": "Table 5",
"ref_id": "TABREF6"
},
{
"start": 1116,
"end": 1123,
"text": "Table 6",
"ref_id": "TABREF7"
},
{
"start": 1339,
"end": 1347,
"text": "Figure 4",
"ref_id": "FIGREF3"
},
{
"start": 1427,
"end": 1435,
"text": "Figure 5",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "The effects of the copy mechanism",
"sec_num": null
},
{
"text": "In this work, we explore a neural semantic parser architecture that incorporates conversational context and copy mechanism. These modelling improvements are solidly grounded by our analysis, and they significantly boost the performance of the base model. As a result, our best architecture establish a new state-of-the-art on the Email Assistant dataset. In the future, we would explore other architectural innovations for the system, for example, the neural denoising mechanisms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "4"
},
{
"text": "We would like to thank three anonymous reviewers for their valuable comments and suggestions. This work was supported by the Multimodal Australian ScienceS Imaging and Visualisation Environment (MASSIVE). 1 This work is partly supported by the ARC Future Fellowship FT190100039 to G.H.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": "5"
},
{
"text": "In RNNS2S model, at the utterance level, a onelayer bidiretional RNNs for the encoder, while the decoder is a two-layer RNNs. We use a one-layer RNNs to represent the conversational information flow. All RNNs use LSTM cells, with a hidden size of 128. The sizes of word embeddings and position embeddings are 128 and 50 respectively. We train our models for 10 epochs by Adam optimizer (Kingma and Ba, 2014) with an initial learning rate of 0.001. The batch size of non-context training is 16, while the context variant is 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Implementation Details",
"sec_num": null
},
{
"text": "For Transformer model, we use 3 identical transformer blocks for both encoder and decoder. Within each block, the size of the embeddings is 256, while the feed forward network has 512 neurons. We set the size of heads to 4. The conversational encoder is a one-layer RNNs with the size of 256. The optimizer and training schedule is same as Vaswani et al. (2017) , except warmup steps = 500. Due to the warmup steps, We train this model for 14 epochs. The batch size is same as that of RNNS2S.",
"cite_spans": [
{
"start": 340,
"end": 361,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Implementation Details",
"sec_num": null
},
{
"text": "The effects of the pointer mechanism According to Figure 6 , half of the incorrect instances are fixed by the pointer mechanism. Transformer 35 Transformer + context 24 context dependency Transformer 25 Transformer + context 16 Table 7 :",
"cite_spans": [],
"ref_spans": [
{
"start": 50,
"end": 58,
"text": "Figure 6",
"ref_id": "FIGREF5"
},
{
"start": 129,
"end": 243,
"text": "Transformer 35 Transformer + context 24 context dependency Transformer 25 Transformer + context 16 Table 7",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "B Analysis of Transformer",
"sec_num": null
},
{
"text": "Incorrect instances of Transformer and context-dependent Transformer models in terms of complex commands and context dependency. The effects of the context-dependent mechanism. Similarly, incorporating the contextual information is able to address the context-oriented issues by a larger margin (see Table 7) .",
"cite_spans": [],
"ref_spans": [
{
"start": 300,
"end": 308,
"text": "Table 7)",
"ref_id": null
}
],
"eq_spans": [],
"section": "#incorrect complex command",
"sec_num": null
},
{
"text": "Finally, as observed in the main paper, having an access to the 3 previous utterances achieves the best performance. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "#incorrect complex command",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.0473"
]
},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Bert: Pre-training of deep 1",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep 1 https://www.massive.org.au/ bidirectional transformers for language understand- ing.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Language to logical form with neural attention",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1601.01280"
]
},
"num": null,
"urls": [],
"raw_text": "Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. arXiv preprint arXiv:1601.01280.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Coarse-to-fine decoding for neural semantic parsing",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1805.04793"
]
},
"num": null,
"urls": [],
"raw_text": "Li Dong and Mirella Lapata. 2018. Coarse-to-fine de- coding for neural semantic parsing. arXiv preprint arXiv:1805.04793.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Understanding back-translation at scale",
"authors": [
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1808.09381"
]
},
"num": null,
"urls": [],
"raw_text": "Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. arXiv preprint arXiv:1808.09381.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Adversarial examples for evaluating reading comprehension systems",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1707.07328"
]
},
"num": null,
"urls": [],
"raw_text": "Robin Jia and Percy Liang. 2017. Adversarial exam- ples for evaluating reading comprehension systems. arXiv preprint arXiv:1707.07328.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Has machine translation achieved human parity? a case for document-level evaluation",
"authors": [
{
"first": "Samuel",
"middle": [],
"last": "L\u00e4ubli",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Volk",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1808.07048"
]
},
"num": null,
"urls": [],
"raw_text": "Samuel L\u00e4ubli, Rico Sennrich, and Martin Volk. 2018. Has machine translation achieved human parity? a case for document-level evaluation. arXiv preprint arXiv:1808.07048.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Latent predictor networks for code generation",
"authors": [
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Karl",
"middle": [
"Moritz"
],
"last": "Hermann",
"suffix": ""
},
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Ko\u010disk\u1ef3",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Senior",
"suffix": ""
},
{
"first": "Fumin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1603.06744"
]
},
"num": null,
"urls": [],
"raw_text": "Wang Ling, Edward Grefenstette, Karl Moritz Her- mann, Tom\u00e1\u0161 Ko\u010disk\u1ef3, Andrew Senior, Fumin Wang, and Phil Blunsom. 2016. Latent predic- tor networks for code generation. arXiv preprint arXiv:1603.06744.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A deep reinforced model for abstractive summarization",
"authors": [
{
"first": "Romain",
"middle": [],
"last": "Paulus",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1705.04304"
]
},
"num": null,
"urls": [],
"raw_text": "Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive sum- marization. arXiv preprint arXiv:1705.04304.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Abstract syntax networks for code generation and semantic parsing",
"authors": [
{
"first": "Maxim",
"middle": [],
"last": "Rabinovich",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Stern",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.07535"
]
},
"num": null,
"urls": [],
"raw_text": "Maxim Rabinovich, Mitchell Stern, and Dan Klein. 2017. Abstract syntax networks for code gen- eration and semantic parsing. arXiv preprint arXiv:1704.07535.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Get to the point: Summarization with pointer-generator networks",
"authors": [
{
"first": "Abigail",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.04368"
]
},
"num": null,
"urls": [],
"raw_text": "Abigail See, Peter J Liu, and Christopher D Man- ning. 2017. Get to the point: Summarization with pointer-generator networks. arXiv preprint arXiv:1704.04368.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Parsing natural language conversations using contextual cues",
"authors": [
{
"first": "Shashank",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Amos",
"middle": [],
"last": "Azaria",
"suffix": ""
},
{
"first": "Tom M",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2017,
"venue": "IJCAI",
"volume": "",
"issue": "",
"pages": "4089--4095",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shashank Srivastava, Amos Azaria, and Tom M Mitchell. 2017. Parsing natural language conversa- tions using contextual cues. In IJCAI, pages 4089- 4095.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Learning to map context-dependent sentences to executable formal queries",
"authors": [
{
"first": "Alane",
"middle": [],
"last": "Suhr",
"suffix": ""
},
{
"first": "Srinivasan",
"middle": [],
"last": "Iyer",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.06868"
]
},
"num": null,
"urls": [],
"raw_text": "Alane Suhr, Srinivasan Iyer, and Yoav Artzi. 2018. Learning to map context-dependent sentences to executable formal queries. arXiv preprint arXiv:1804.06868.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "End-to-end memory networks",
"authors": [
{
"first": "Sainbayar",
"middle": [],
"last": "Sukhbaatar",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Fergus",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "2440--2448",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in neural information processing systems, pages 2440-2448.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, pages 5998-6008.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Online learning of relaxed ccg grammars for parsing to logical form",
"authors": [
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)",
"volume": "",
"issue": "",
"pages": "678--687",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luke Zettlemoyer and Michael Collins. 2007. Online learning of relaxed ccg grammars for parsing to log- ical form. In Proceedings of the 2007 Joint Con- ference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 678-687.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Learning context-dependent mappings from sentences to logical form",
"authors": [
{
"first": "S",
"middle": [],
"last": "Luke",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "2",
"issue": "",
"pages": "976--984",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luke S Zettlemoyer and Michael Collins. 2009. Learn- ing context-dependent mappings from sentences to logical form. In Proceedings of the Joint Confer- ence of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Lan- guage Processing of the AFNLP: Volume 2-Volume 2, pages 976-984. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "dialog",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF1": {
"text": "Overall architecture of our semantic parser. We omit the pointer network due to lack of space.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF2": {
"text": "Number of copy-related incorrect instances that can be corrected by a pointer network.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF3": {
"text": "Accuracy of different size of history.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF4": {
"text": "Heat map of different size of history.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF5": {
"text": "Number of copy-related incorrect instances that can be corrected by a pointer network.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF6": {
"text": "Accuracy of different size of history.",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF0": {
"num": null,
"text": "An error made by the base seq2seq model. Copy mechanism can fix it.",
"type_str": "table",
"content": "<table/>",
"html": null
},
"TABREF2": {
"num": null,
"text": "An error made by the base seq2seq model. It is clear that without the context information, the model cannot infer the correct logical form.",
"type_str": "table",
"content": "<table/>",
"html": null
},
"TABREF3": {
"num": null,
"text": "",
"type_str": "table",
"content": "<table/>",
"html": null
},
"TABREF5": {
"num": null,
"text": "Test accuracy on Email Assistant dataset. Bold indicates the best result. SPCon is the best CCG parser with contextual information in Srivastava et al.",
"type_str": "table",
"content": "<table><tr><td>(2017)</td></tr></table>",
"html": null
},
"TABREF6": {
"num": null,
"text": "An example of complex and compositional commands.",
"type_str": "table",
"content": "<table><tr><td/><td>#incorrect</td></tr><tr><td>complex command</td><td/></tr><tr><td>RNNS2S</td><td>39</td></tr><tr><td colspan=\"2\">RNNS2S + context 20</td></tr><tr><td>context dependency</td><td/></tr><tr><td>RNNS2S</td><td>29</td></tr><tr><td colspan=\"2\">RNNS2S + context 21</td></tr></table>",
"html": null
},
"TABREF7": {
"num": null,
"text": "",
"type_str": "table",
"content": "<table><tr><td>: Incorrect instances of RNNS2S and context-</td></tr><tr><td>dependent RNNS2S models in terms of complex com-</td></tr><tr><td>mands and context dependency.</td></tr><tr><td>on different datasets, but they will demonstrate the</td></tr><tr><td>same trend.</td></tr></table>",
"html": null
}
}
}
}