{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:34:46.000977Z"
},
"title": "A Globally Normalized Neural Model for Semantic Parsing",
"authors": [
{
"first": "Chenyang",
"middle": [],
"last": "Huang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Alberta",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Wei",
"middle": [],
"last": "Yang",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Yanshuai",
"middle": [],
"last": "Cao",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Osmar",
"middle": [],
"last": "Za\u00efane",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Alberta",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Lili",
"middle": [],
"last": "Mou",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Alberta",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we propose a globally normalized model for context-free grammar (CFG)based semantic parsing. Instead of predicting a probability, our model predicts a real-valued score at each step and does not suffer from the label bias problem. Experiments show that our approach outperforms locally normalized models on small datasets, but it does not yield improvement on a large dataset.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we propose a globally normalized model for context-free grammar (CFG)based semantic parsing. Instead of predicting a probability, our model predicts a real-valued score at each step and does not suffer from the label bias problem. Experiments show that our approach outperforms locally normalized models on small datasets, but it does not yield improvement on a large dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Semantic parsing has received much interest in the NLP community (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Jia and Liang, 2016; Guo et al., 2020) . The task is to map a natural language utterance to executable code, such as \u03bb-expressions, SQL queries, and Python programs.",
"cite_spans": [
{
"start": 65,
"end": 89,
"text": "(Zelle and Mooney, 1996;",
"ref_id": "BIBREF29"
},
{
"start": 90,
"end": 120,
"text": "Zettlemoyer and Collins, 2005;",
"ref_id": "BIBREF31"
},
{
"start": 121,
"end": 141,
"text": "Jia and Liang, 2016;",
"ref_id": "BIBREF9"
},
{
"start": 142,
"end": 159,
"text": "Guo et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent work integrates the context-free grammar (CFG) of the target code into the generation process. Instead of generating tokens of the code (Dong and Lapata, 2016) , CFG-based semantic parsing predicts the grammar rules in the abstract syntax tree (AST). This guarantees the generated code complies with the CFG, and thus it has been widely adopted Guo et al., 2019; Bogin et al., 2019; Sun et al., 2019 Sun et al., , 2020 .",
"cite_spans": [
{
"start": 143,
"end": 166,
"text": "(Dong and Lapata, 2016)",
"ref_id": "BIBREF5"
},
{
"start": 352,
"end": 369,
"text": "Guo et al., 2019;",
"ref_id": "BIBREF8"
},
{
"start": 370,
"end": 389,
"text": "Bogin et al., 2019;",
"ref_id": "BIBREF1"
},
{
"start": 390,
"end": 406,
"text": "Sun et al., 2019",
"ref_id": "BIBREF18"
},
{
"start": 407,
"end": 425,
"text": "Sun et al., , 2020",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Typically, the neural semantic parsing models are trained by maximum likelihood estimation (MLE). The models predict the probability of the next rules in an autoregressive fashion, known as a locally normalized model. However, local normalization is often criticized for the label bias problem (Lafferty et al., 2001; Andor et al., 2016; Wiseman and Rush, 2016; Stanojevi\u0107 and Steedman, 2020) . In semantic parsing, for example, grammar rules that generate identifiers (e.g., variable names) have much lower probability than other grammar rules. Thus, the model will be biased towards such rules that can avoid predicting identifiers. More generally, the locally normalized model will prefer such early-step predictions that can lead to low entropy in future steps.",
"cite_spans": [
{
"start": 294,
"end": 317,
"text": "(Lafferty et al., 2001;",
"ref_id": "BIBREF12"
},
{
"start": 318,
"end": 337,
"text": "Andor et al., 2016;",
"ref_id": "BIBREF0"
},
{
"start": 338,
"end": 361,
"text": "Wiseman and Rush, 2016;",
"ref_id": "BIBREF21"
},
{
"start": 362,
"end": 392,
"text": "Stanojevi\u0107 and Steedman, 2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we propose to apply global normalization to neural semantic parsing. Our model scores every grammar rule with an unbounded real value, instead of a probability, so that the model does not have to avoid high-entropy predictions and does not suffer from label bias. Specifically, we use max-margin loss for training, where the ground truth is treated as the positive sample and beam search results are negative samples. In addition, we accelerate training by initializing the globally normalized model with the parameters from a pretrained locally normalized model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We conduct experiments on three datasets: ATIS (Dahl et al., 1994) , CoNaLa , and Spider (Yu et al., 2018) . Compared with local normalization, our globally normalized model is able to achieve higher performance on the small ATIS and CoNaLa datasets with the long short-term memory (LSTM) architecture, but does not yield improvement on the massive Spider dataset when using a BERT-based pretrained language model.",
"cite_spans": [
{
"start": 47,
"end": 66,
"text": "(Dahl et al., 1994)",
"ref_id": "BIBREF2"
},
{
"start": 89,
"end": 106,
"text": "(Yu et al., 2018)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Early approaches to semantic parsing mainly rely on predefined templates, and are domain-specific (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Kwiatkowksi et al., 2010) . Later, researchers apply sequence-to-sequence models to semantic parsing. Dong and Lapata (2016) propose to generate tokens along the syntax tree of a program. Yin and Neubig (2017) generate a program by predicting the grammar rules; our work uses the TranX tool with this framework.",
"cite_spans": [
{
"start": 98,
"end": 122,
"text": "(Zelle and Mooney, 1996;",
"ref_id": "BIBREF29"
},
{
"start": 123,
"end": 153,
"text": "Zettlemoyer and Collins, 2005;",
"ref_id": "BIBREF31"
},
{
"start": 154,
"end": 179,
"text": "Kwiatkowksi et al., 2010)",
"ref_id": "BIBREF11"
},
{
"start": 256,
"end": 278,
"text": "Dong and Lapata (2016)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Globally normalized models, such as the conditional random field (CRF, Lafferty et al., 2001) , are able to mitigate the label bias problem. How-ever, their training is generally difficult due to the global normalization process. To tackle this challenge, Daum\u00e9 and Marcu (2005) propose learning as search optimization (LaSO), and Wiseman and Rush (2016) extend it to the neural network regime as beam search optimization (BSO). Specifically, they obtain negative partial samples whenever the ground truth falls out of the beam during the search, and \"restart\" the beam search with the ground truth partial sequence teacher-forced.",
"cite_spans": [
{
"start": 71,
"end": 93,
"text": "Lafferty et al., 2001)",
"ref_id": "BIBREF12"
},
{
"start": 256,
"end": 278,
"text": "Daum\u00e9 and Marcu (2005)",
"ref_id": "BIBREF3"
},
{
"start": 331,
"end": 354,
"text": "Wiseman and Rush (2016)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our work is similar to BSO. However, we search for an entire output, and do not train with partial negative samples. This is because our decoder is tree-structured, and different partial trees cannot be implemented in batch efficiently. We instead perform locally normalized pretraining to ease the training of our globally normalized model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this section, we first introduce the neural semantic parser TranX, which servers as the locally normalized base model in our work. We then elaborate how to construct its globally normalized version.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "TranX is a context-free grammar (CFG)-based neural semantic parsing system . TranX first encodes a natural language input X with a neural network encoder. Then, the model generates a program by predicting the grammar rules (also known as actions) along the abstract syntax tree (AST) of the program.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The TranX Framework",
"sec_num": "3.1"
},
{
"text": "In Figure 1 , for example, the rules generating the desired program include ApplyConstr(Expr.), ApplyConstr(Call), ApplyConstr(Attr.), and GenToken(sorted).",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "The TranX Framework",
"sec_num": "3.1"
},
{
"text": "In TranX, these actions are predicted in an autoregressive way based on the input X and the partially generated tree, given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The TranX Framework",
"sec_num": "3.1"
},
{
"text": "P L (a t |a <t , X; \u03b8 L ) = exp{o(a t |a <t , X; \u03b8 L )} a t \u2208At(a<t) exp{o(a t |a <t , X; \u03b8 L )} (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The TranX Framework",
"sec_num": "3.1"
},
{
"text": "where \u03b8 L denotes the parameters of the neural network model, and the subscript L emphasizes that the probability is locally normalized. o(\u2022) denotes the logit at this step, and a t is an action (i.e., grammar rule) among all possible actions at this step A t (\u2022), which is based on previous predicted rules a <t .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The TranX Framework",
"sec_num": "3.1"
},
{
"text": "In other words, the prediction probability is normalized at every step, and the training objective is to maximize",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The TranX Framework",
"sec_num": "3.1"
},
{
"text": "P L (a 1:n |X; \u03b8 L ) = n t=1 P L (a t |a <t , X; \u03b8 L ) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The TranX Framework",
"sec_num": "3.1"
},
{
"text": "where n is the total number of steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The TranX Framework",
"sec_num": "3.1"
},
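{
"text": "For concreteness, the following is a minimal sketch of the locally normalized prediction in (1) and the training objective (2); it is ours for illustration only, and the function names and PyTorch-style tensors are assumptions rather than the actual TranX code.\n\nimport torch.nn.functional as F\n\ndef local_step_distribution(logits, valid_action_ids):\n    # logits: tensor of shape (num_all_actions,) with the step logits o(a_t | a_{<t}, X)\n    # valid_action_ids: indices of the candidate action set A_t(a_{<t})\n    # Eq. (1): softmax restricted to the valid actions only\n    return F.softmax(logits[valid_action_ids], dim=-1)\n\ndef sequence_log_likelihood(step_logits, step_valid_ids, gold_positions):\n    # Eq. (2): sum of the per-step log-probabilities of the gold actions\n    total = 0.0\n    for logits, valid_ids, gold in zip(step_logits, step_valid_ids, gold_positions):\n        # gold is the position of the ground-truth action within valid_ids\n        log_probs = F.log_softmax(logits[valid_ids], dim=-1)\n        total = total + log_probs[gold]\n    return total  # MLE maximizes this quantity",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The TranX Framework",
"sec_num": "3.1"
},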
{
"text": "A locally normalized model may suffer from the label bias problem (Lafferty et al., 2001 ). This is because such a model normalizes the probability to 1 at every step. However, the candidate action set A t (a <t ) may have different sizes, and the actions from a smaller A t (a <t ) typically have higher probabilities. Thus, the model would prefer such actions a <t that will yield smaller A t (a <t ) in future steps. 1 We propose to adapt TranX to a global normalized model to alleviate label bias. Instead of predicting a probability P (a t |a <t , X) as in (2), our globally normalized model predicts a positive score at a step as",
"cite_spans": [
{
"start": 66,
"end": 88,
"text": "(Lafferty et al., 2001",
"ref_id": "BIBREF12"
},
{
"start": 420,
"end": 421,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Globally Normalized Training",
"sec_num": "3.2"
},
{
"text": "s(a t |a <t , X; \u03b8 G ) = exp{o(a t |a <t , X; \u03b8 G )} (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Globally Normalized Training",
"sec_num": "3.2"
},
{
"text": "where o(\u2022) is the same logit as (1), and \u03b8 G is the parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Globally Normalized Training",
"sec_num": "3.2"
},
{
"text": "The probability of the sequence a 1:n is normalized only once in a global manner, given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Globally Normalized Training",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P G (a 1:n , X; \u03b8 G ) = 1 Z G n t=1 s(a t |a <t , X; \u03b8 G )",
"eq_num": "(4)"
}
],
"section": "Globally Normalized Training",
"sec_num": "3.2"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Globally Normalized Training",
"sec_num": "3.2"
},
{
"text": "Z G = a 1:n n t=1 s(a t |a <t ; \u03b8 G ) is the par- tition function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Globally Normalized Training",
"sec_num": "3.2"
},
{
"text": "A globally normalized model alleviates the label bias problem, because it does not normalize the probability at every prediction step, as seen from (4). Thus, it is not biased by the size of A t (a <t ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Globally Normalized Training",
"sec_num": "3.2"
},
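{
"text": "As an illustration of (3) and (4), the unnormalized score of an action sequence is simply the exponential of its summed step logits; a minimal sketch (ours, assuming the step logits are available as a list of floats):\n\nimport math\n\ndef unnormalized_sequence_score(step_logits):\n    # prod_t s(a_t | a_{<t}, X) = prod_t exp(o_t) = exp(sum_t o_t)\n    return math.exp(sum(step_logits))\n\nSince P_G differs from this score only by the constant factor 1/Z_G, comparing candidate programs does not require computing Z_G.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Globally Normalized Training",
"sec_num": "3.2"
},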
{
"text": "The training objective is still to maximize the likelihood, albeit normalized in a global way. However, computing the partition function Z G requires enumerating all combinations of actions a 1:n in the partition function of (4), which is generally intractable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Globally Normalized Training",
"sec_num": "3.2"
},
{
"text": "In practice, the maximum likelihood training is approximated by max-margin loss between a positive sample a 1:n and a negative sample a \u2212 1:n , given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Globally Normalized Training",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L(a \u2212 1:n , a 1:n ) = max{0, o(a \u2212 1:n |X) \u2212 o(a 1:n |X) + \u2206}",
"eq_num": "(5)"
}
],
"section": "Globally Normalized Training",
"sec_num": "3.2"
},
{
"text": "where o(a_{1:n} | X) = \\frac{1}{n} \\sum_{t=1}^{n} o(a_t | a_{<t}) is the average of the step logits, and \u2206 is a positive constant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Globally Normalized Training",
"sec_num": "3.2"
},
{
"text": "The positive sample is simply the ground truth actions, whereas the negative samples are obtained by beam search. In other words, we perform beam search inference during training, and the sequences in the beam (other than the ground truth) serve as the negative samples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Globally Normalized Training",
"sec_num": "3.2"
},
{
"text": "Similar to MLE training for (4), the max-margin loss increases the logits of the ground truth sample, while decreasing the logits for others. It is noted that the quality of negative samples will largely affect the max-margin training, as only a few samples are used to approximate Z G .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Globally Normalized Training",
"sec_num": "3.2"
},
{
"text": "To address this issue, we initialize the parameters of the globally normalized model \u03b8 G with \u03b8 L in a pretrained locally normalized model. Thus, our negative samples are of higher quality, so that the max-margin training is easier and more stable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Globally Normalized Training",
"sec_num": "3.2"
},
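{
"text": "A single training step can be sketched as follows (a minimal illustration under our assumptions: the model exposes the list of step logits o(a_t | a_{<t}, X) for the ground truth and for every beam candidate, and the hinge loss (5) is averaged over all beam negatives; the helper names are ours):\n\nimport torch\n\ndef average_logit(step_logits):\n    # o(a_{1:n} | X): mean of the per-step logits of one action sequence\n    return torch.stack(step_logits).mean()\n\ndef max_margin_loss(gold_step_logits, beam_negatives, margin=0.1):\n    # Eq. (5): hinge loss between the ground truth and each beam-search negative\n    gold_score = average_logit(gold_step_logits)\n    losses = []\n    for neg_step_logits in beam_negatives:  # negatives obtained by beam search\n        neg_score = average_logit(neg_step_logits)\n        losses.append(torch.clamp(neg_score - gold_score + margin, min=0.0))\n    return torch.stack(losses).mean()\n\n# Initialization from the pretrained locally normalized model, e.g.:\n# global_model.load_state_dict(local_model.state_dict())",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Globally Normalized Training",
"sec_num": "3.2"
},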
{
"text": "TranX has a copy mechanism (Gu et al., 2016) as an important component for predicting the terminal nodes of the AST, as the target program largely overlaps with the source utterance, especially for entities (e.g., \"file.csv\" in Figure 1 ). In the locally normalized TranX, the copy mechanism marginalizes the probability of generating a token in the vocabulary and copying it from the source:",
"cite_spans": [
{
"start": 27,
"end": 44,
"text": "(Gu et al., 2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 228,
"end": 236,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Handling the Copy Mechanism",
"sec_num": "3.3"
},
{
"text": "P L (a t = GenToken[v] | a <t , X) = P (gen | a <t , X)P (v | gen, a <t , X) + P (copy | a <t , X)P (v | copy, a <t , X)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Handling the Copy Mechanism",
"sec_num": "3.3"
},
{
"text": "where GenToken[\u2022] denotes generating a terminal token v. P (copy|\u2022) is the predicted probability of copying the token v from the source utterance, and P (gen|\u2022) = 1 \u2212 P (copy|\u2022) is the probability of generating v from the vocabulary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Handling the Copy Mechanism",
"sec_num": "3.3"
},
{
"text": "However, the copy mechanism cannot be directly combined with global normalization, because we use unbounded, real-valued logits instead of probabilities. This would not make much sense when both logits are negative, whereas their product is positive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Handling the Copy Mechanism",
"sec_num": "3.3"
},
{
"text": "Therefore, we propose a variant of copy mechanisms in the globally normalized setting. Specifically, we keep the probabilities P (copy|\u2022) and P (gen|\u2022), and use them to weight the logits of generating and copying a token v, given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Handling the Copy Mechanism",
"sec_num": "3.3"
},
{
"text": "o(a t = GenToken[v] | a <t , X) = P (gen | a <t , X)o(v | gen, a <t , X) + P (copy | a <t , X)o(v | copy, a <t , X) Here, o(a t = GenToken[v] | \u2022)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Handling the Copy Mechanism",
"sec_num": "3.3"
},
{
"text": "is a linear interpolation of two logits, and thus fits the max-margin loss (5) naturally.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Handling the Copy Mechanism",
"sec_num": "3.3"
},
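{
"text": "A minimal sketch of this copy-aware logit (ours, for illustration; the argument names are assumptions):\n\ndef gentoken_logit(p_copy, gen_logit_v, copy_logit_v):\n    # p_copy = P(copy | a_{<t}, X), and P(gen | a_{<t}, X) = 1 - p_copy\n    # gen_logit_v = o(v | gen, a_{<t}, X), the logit of generating v from the vocabulary\n    # copy_logit_v = o(v | copy, a_{<t}, X), the logit of copying v from the source utterance\n    return (1.0 - p_copy) * gen_logit_v + p_copy * copy_logit_v\n\nThe interpolated value remains an unbounded real-valued logit, so it can be plugged directly into the sequence score (4) and the max-margin loss (5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Handling the Copy Mechanism",
"sec_num": "3.3"
},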
{
"text": "Datasets. We conduct experiments on three benchmark parsing datasets: ATIS (Zettlemoyer and Collins, 2007) , CoNaLa , and Spider (Yu et al., 2018) , which contain 4473, 2379, and 8695 training samples, respectively.",
"cite_spans": [
{
"start": 75,
"end": 106,
"text": "(Zettlemoyer and Collins, 2007)",
"ref_id": "BIBREF30"
},
{
"start": 129,
"end": 146,
"text": "(Yu et al., 2018)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "It should be pointed out that much work adopts data anonymization techniques to replace entities with placeholders (Dong and Lapata, 2016; Neubig, 2017, 2019; Sun et al., 2020) . This unfortunately causes a large number of duplicate samples between training and test. This is recently realized in Guo et al. (2020) , and thus, in our work, we only compare the models using the original, correct ATIS dataset.",
"cite_spans": [
{
"start": 115,
"end": 138,
"text": "(Dong and Lapata, 2016;",
"ref_id": "BIBREF5"
},
{
"start": 139,
"end": 158,
"text": "Neubig, 2017, 2019;",
"ref_id": null
},
{
"start": 159,
"end": 176,
"text": "Sun et al., 2020)",
"ref_id": "BIBREF19"
},
{
"start": 297,
"end": 314,
"text": "Guo et al. (2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Settings. Our globally normalized semantic parser is developed upon the open-sourced TranX 2 . We adopt the CFG grammars provided by TranX to convert lambda calculus and Python programs into ASTs and sequence of grammar rules (actions). For ATIS and CoNaLa datasets, we use long LSTM models as both the encoder and the decoder. Their dimensions are set to 256. For the Spider dataset, we use a pretrained BERT model 3 (Devlin et al., Dev Test Jia and Liang (2016) No copy N/A 69.90% Copy N/A 76.30% Copy + data recombination N/A 83.30% Guo et al. (2020) 2019) and the relation-aware Transformer (Wang et al., 2020) as the encoder and an LSTM as the decoder. The architecture generally follows the work by Xu et al. (2021) . The beam size is set to 20 to search for negative samples, and is set to 5 for inference. The margin \u2206 in (5) is set to 0.1. We use the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 5e-4 for training.",
"cite_spans": [
{
"start": 418,
"end": 463,
"text": "(Devlin et al., Dev Test Jia and Liang (2016)",
"ref_id": null
},
{
"start": 536,
"end": 553,
"text": "Guo et al. (2020)",
"ref_id": "BIBREF7"
},
{
"start": 595,
"end": 614,
"text": "(Wang et al., 2020)",
"ref_id": "BIBREF20"
},
{
"start": 705,
"end": 721,
"text": "Xu et al. (2021)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "For both ATIS and CoNaLa datasets, we report the best results on the development sets and the corresponding results on the test set. For the Spider dataset, we only report the results on the development set as the ground truth of the test set is not publicly available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "ATIS dataset. Following Yin and Neubig (2017); Sun et al. (2020), we report the exact match accuracy for ATIS. We first replicate locally normalized models with and without the copy mechanism and achieve similar results to Jia and Liang (2016) and Guo et al. (2020) , shown in Table 1 . This verifies that we have a fair implementation and are ready for the study of global normalization.",
"cite_spans": [
{
"start": 223,
"end": 243,
"text": "Jia and Liang (2016)",
"ref_id": "BIBREF9"
},
{
"start": 248,
"end": 265,
"text": "Guo et al. (2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 277,
"end": 284,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "We observe that the copy mechanism largely affects the accuracy on the test set, although it has little effect on the development set. This is because the training and validation distributions closely resemble each other, whereas the test distribution differs largely. Therefore, the copy mechanism is important for handling unseen entities in the test set, and our proposed copy variant in Section 3.3 is also essential to globally normalized models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "We then train our model with the max-margin loss. Our globally normalized model consistently improves the accuracy on both the development and test sets, compared with its locally normalized counterpart. This shows the effectiveness of our approach. Table 3: Exact match accuracy on the Spider dataset (Dev Acc.) -- Rubin and Berant (2020): 73.4%; Yu et al. (2021): 74.7%; Ours (local): 73.79%; + Global: 73.69%. Test performance requires submission to the official website; we report validation performance instead.",
"cite_spans": [
{
"start": 148,
"end": 171,
"text": "Rubin and Berant (2020)",
"ref_id": "BIBREF16"
},
{
"start": 178,
"end": 194,
"text": "Yu et al. (2021)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 237,
"end": 244,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "In addition, we notice that a large number of entities in ATIS have a form like \"ap:denvor\" (Denver Airport). We thus use the combination of characterlevel ELMo embeddings (Peters et al., 2018) and word-level GloVe embeddings (Pennington et al., 2014) . This further improves the accuracy, which outperforms the previous methods by \u223c1.9% in the setting without data augmentation.",
"cite_spans": [
{
"start": 172,
"end": 193,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF15"
},
{
"start": 226,
"end": 251,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "CoNaLa dataset. For CoNaLa, BLEU is treated as the main metric in previous work (Yin and Neubig, 2019) , because accuracy is generally very low (<3%) on this dataset. From Table 2 , we observe that our globally normalized model improves the BLEU scores on both the development and test sets compared with the locally normalized baseline. Such improvement is consistent with that on ATIS.",
"cite_spans": [
{
"start": 80,
"end": 102,
"text": "(Yin and Neubig, 2019)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 172,
"end": 179,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "We further compare our model with Yin and Neubig (2019) , which reranks beam search results by heuristics. Our method is outperformed by the reranking approach. Note that reranking can be considered as alleviating label bias with postprocessing, as the locally normalized model fails to assign the correct sequence with the highest joint probability. However, the reranking method requires training several reranking scorers, combined with an ad hoc feature (namely, length). By contrast, our global normalization does not rely on ad hoc human engineering.",
"cite_spans": [
{
"start": 34,
"end": 55,
"text": "Yin and Neubig (2019)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Spider dataset. Table 3 lists the results on the Spider dataset. Here, our locally normalized model uses BERT as the encoder, and its performance is on par with that from the recent state-of-the-art approaches (Rubin and Berant, 2020; Yu et al., 2021) . However, our global normalization does not improve the performance. It is noted that BERT is a more powerful model than LSTM, and Spider has a much larger training set than CoNaLa and ATIS. We conjuncture that BERT learns the step-by-step local prediction probability very well, which in turn yields a satisfying joint probability and largely mitigates label bias by itself. Therefore, the globally normalized model does not exhibit its superiority on the Spider dataset.",
"cite_spans": [
{
"start": 210,
"end": 234,
"text": "(Rubin and Berant, 2020;",
"ref_id": "BIBREF16"
},
{
"start": 235,
"end": 251,
"text": "Yu et al., 2021)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 16,
"end": 23,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "In this work, we propose to apply global normalization for neural semantic parsing. Our approach predicts the score of different grammar rules at an autoregressive step, and thus it does not suffer from the label bias problem. We observe that our proposed method is able to improve performance on small datasets with LSTM-based encoders. However, global normalization becomes less effective on the large dataset with a BERT architecture.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Or more generally, the model prefers At(a<t) with a smaller entropy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/pcyin/tranX 3 Specifically, we use the RoBERTa-base model as we find it performs better than the original BERTbase model(Devlin et al., 2019).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We acknowledge the support of the Mitacs Accelerate Program (Ref: IT16065) ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Globally normalized transition-based neural networks",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Andor",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Aliaksei",
"middle": [],
"last": "Severyn",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Presta",
"suffix": ""
},
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2442--2452",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normal- ized transition-based neural networks. In Proceed- ings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2442-2452.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Representing schema structure with graph neural networks for text-to-SQL parsing",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Bogin",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4560--4565",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ben Bogin, Jonathan Berant, and Matt Gardner. 2019. Representing schema structure with graph neural networks for text-to-SQL parsing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4560-4565.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Expanding the scope of the ATIS task: The ATIS-3 corpus",
"authors": [
{
"first": "Deborah",
"middle": [
"A"
],
"last": "Dahl",
"suffix": ""
},
{
"first": "Madeleine",
"middle": [],
"last": "Bates",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Fisher",
"suffix": ""
},
{
"first": "Kate",
"middle": [],
"last": "Hunicke-Smith",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Pallett",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Pao",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rudnicky",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Shriberg",
"suffix": ""
}
],
"year": 1994,
"venue": "Human Language Technology: Proceedings of a Workshop held at Plainsboro",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deborah A. Dahl, Madeleine Bates, Michael Brown, William Fisher, Kate Hunicke-Smith, David Pallett, Christine Pao, Alexander Rudnicky, and Elizabeth Shriberg. 1994. Expanding the scope of the ATIS task: The ATIS-3 corpus. In Human Language Tech- nology: Proceedings of a Workshop held at Plains- boro, New Jersey, March 8-11, 1994.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Learning as search optimization: Approximate large margin methods for structured prediction",
"authors": [
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 22nd International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "169--176",
"other_ids": {
"DOI": [
"10.1145/1102351.1102373"
]
},
"num": null,
"urls": [],
"raw_text": "Hal Daum\u00e9 and Daniel Marcu. 2005. Learning as search optimization: Approximate large margin methods for structured prediction. In Proceedings of the 22nd International Conference on Machine Learning, pages 169-176.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 4171-4186.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Language to logical form with neural attention",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "33--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li Dong and Mirella Lapata. 2016. Language to logi- cal form with neural attention. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics, pages 33-43.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Incorporating copying mechanism in sequence-to-sequence learning",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Victor",
"middle": [
"O",
"K"
],
"last": "Li",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1631--1640",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Com- putational Linguistics, pages 1631-1640.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Benchmarking meaning representations in neural semantic parsing",
"authors": [
{
"first": "Jiaqi",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Qian",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jian-Guang",
"middle": [],
"last": "Lou",
"suffix": ""
},
{
"first": "Zhenwen",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xueqing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1520--1540",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiaqi Guo, Qian Liu, Jian-Guang Lou, Zhenwen Li, Xueqing Liu, Tao Xie, and Ting Liu. 2020. Bench- marking meaning representations in neural seman- tic parsing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1520-1540.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Towards complex text-to-SQL in crossdomain database with intermediate representation",
"authors": [
{
"first": "Jiaqi",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Zecheng",
"middle": [],
"last": "Zhan",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Jian-Guang",
"middle": [],
"last": "Lou",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Dongmei",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4524--4535",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiaqi Guo, Zecheng Zhan, Yan Gao, Yan Xiao, Jian-Guang Lou, Ting Liu, and Dongmei Zhang. 2019. Towards complex text-to-SQL in cross- domain database with intermediate representation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4524-4535.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Data recombination for neural semantic parsing",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "12--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics, pages 12-22.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Inducing probabilistic CCG grammars from logical form with higherorder unification",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowksi",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1223--1233",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Kwiatkowksi, Luke Zettlemoyer, Sharon Goldwa- ter, and Mark Steedman. 2010. Inducing probabilis- tic CCG grammars from logical form with higher- order unification. In Proceedings of the 2010 Con- ference on Empirical Methods in Natural Language Processing, pages 1223-1233.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "John",
"middle": [
"D"
],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Fernando",
"middle": [
"C N"
],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Eighteenth International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "282--289",
"other_ids": {
"DOI": [
"https://dl.acm.org/doi/10.5555/645530.655813"
]
},
"num": null,
"urls": [],
"raw_text": "John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling se- quence data. In Proceedings of the Eighteenth In- ternational Conference on Machine Learning, pages 282-289.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "RoBERTa: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Confer- ence on Empirical Methods in Natural Language Processing, pages 1532-1543.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "2227--2237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2227-2237.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "SmBoP: Semiautoregressive bottom-up semantic parsing",
"authors": [
{
"first": "Ohad",
"middle": [],
"last": "Rubin",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.12412"
]
},
"num": null,
"urls": [],
"raw_text": "Ohad Rubin and Jonathan Berant. 2020. SmBoP: Semi- autoregressive bottom-up semantic parsing. arXiv preprint arXiv:2010.12412.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Maxmargin incremental CCG parsing",
"authors": [
{
"first": "Milo\u0161",
"middle": [],
"last": "Stanojevi\u0107",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4111--4122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Milo\u0161 Stanojevi\u0107 and Mark Steedman. 2020. Max- margin incremental CCG parsing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4111-4122.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A grammar-based structural cnn decoder for code generation",
"authors": [
{
"first": "Zeyu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Qihao",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Lili",
"middle": [],
"last": "Mou",
"suffix": ""
},
{
"first": "Yingfei",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Ge",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "7055--7062",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeyu Sun, Qihao Zhu, Lili Mou, Yingfei Xiong, Ge Li, and Lu Zhang. 2019. A grammar-based structural cnn decoder for code generation. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 7055-7062.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "TreeGen: A tree-based transformer architecture for code generation",
"authors": [
{
"first": "Zeyu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Qihao",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Yingfei",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Yican",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Lili",
"middle": [],
"last": "Mou",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "8984--8991",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeyu Sun, Qihao Zhu, Yingfei Xiong, Yican Sun, Lili Mou, and Lu Zhang. 2020. TreeGen: A tree-based transformer architecture for code generation. In Pro- ceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, pages 8984-8991.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "RAT-SQL: Relation-aware schema encoding and linking for text-to-SQL parsers",
"authors": [
{
"first": "Bailin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Shin",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Oleksandr",
"middle": [],
"last": "Polozov",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Richardson",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7567--7578",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bailin Wang, Richard Shin, Xiaodong Liu, Oleksandr Polozov, and Matthew Richardson. 2020. RAT-SQL: Relation-aware schema encoding and linking for text-to-SQL parsers. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 7567-7578.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Sequence-to-sequence learning as beam-search optimization",
"authors": [
{
"first": "Sam",
"middle": [],
"last": "Wiseman",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1296--1306",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sam Wiseman and Alexander M. Rush. 2016. Sequence-to-sequence learning as beam-search opti- mization. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1296-1306.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Optimizing deeper transformers on small datasets",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Zi",
"suffix": ""
},
{
"first": "Keyi",
"middle": [],
"last": "Tang",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2012.15355"
]
},
"num": null,
"urls": [],
"raw_text": "Peng Xu, Dhruv Kumar, Wei Yang, Wenjie Zi, Keyi Tang, Chenyang Huang, Jackie Chi Kit Cheung, Si- mon JD Prince, and Yanshuai Cao. 2021. Optimiz- ing deeper transformers on small datasets. arXiv preprint arXiv:2012.15355.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Learning to mine aligned code and natural language pairs from stack overflow",
"authors": [
{
"first": "Pengcheng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Edgar",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Bogdan",
"middle": [],
"last": "Vasilescu",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the International Conference on Mining Software Repositories",
"volume": "",
"issue": "",
"pages": "476--486",
"other_ids": {
"DOI": [
"https://dl.acm.org/doi/10.1145/3196398.3196408"
]
},
"num": null,
"urls": [],
"raw_text": "Pengcheng Yin, Bowen Deng, Edgar Chen, Bogdan Vasilescu, and Graham Neubig. 2018. Learning to mine aligned code and natural language pairs from stack overflow. In Proceedings of the International Conference on Mining Software Repositories, pages 476-486.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A syntactic neural model for general-purpose code generation",
"authors": [
{
"first": "Pengcheng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "440--450",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pengcheng Yin and Graham Neubig. 2017. A syntactic neural model for general-purpose code generation. In Proceedings of the 55th Annual Meeting of the As- sociation for Computational Linguistics, pages 440- 450.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "TRANX: A transition-based neural abstract syntax parser for semantic parsing and code generation",
"authors": [
{
"first": "Pengcheng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "7--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pengcheng Yin and Graham Neubig. 2018. TRANX: A transition-based neural abstract syntax parser for se- mantic parsing and code generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstra- tions, pages 7-12.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Reranking for neural semantic parsing",
"authors": [
{
"first": "Pengcheng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4553--4559",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pengcheng Yin and Graham Neubig. 2019. Reranking for neural semantic parsing. In Proceedings of the 57th Annual Meeting of the Association for Compu- tational Linguistics, pages 4553-4559.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "GraPPa: Grammar-augmented pre-training for table semantic parsing",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Chien-Sheng",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Xi",
"middle": [],
"last": "Victoria Lin",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Chern Tan",
"suffix": ""
},
{
"first": "Xinyi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [],
"last": "Radev",
"suffix": ""
},
{
"first": "Socher",
"middle": [],
"last": "Richard",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
}
],
"year": 2021,
"venue": "The International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tao Yu, Chien-Sheng Wu, Xi Victoria Lin, bailin wang, Yi Chern Tan, Xinyi Yang, Dragomir Radev, richard socher, and Caiming Xiong. 2021. GraPPa: Grammar-augmented pre-training for table semantic parsing. In The International Conference on Learn- ing Representations.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Spider: A largescale human-labeled dataset for complex and crossdomain semantic parsing and text-to-SQL task",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Michihiro",
"middle": [],
"last": "Yasunaga",
"suffix": ""
},
{
"first": "Dongxu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zifan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Irene",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Qingning",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Shanelle",
"middle": [],
"last": "Roman",
"suffix": ""
},
{
"first": "Zilin",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [],
"last": "Radev",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3911--3921",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir Radev. 2018. Spider: A large- scale human-labeled dataset for complex and cross- domain semantic parsing and text-to-SQL task. In Proceedings of the Conference on Empirical Meth- ods in Natural Language Processing, pages 3911- 3921.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Learning to parse database queries using inductive logic programming",
"authors": [
{
"first": "John",
"middle": [
"M"
],
"last": "Zelle",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the Thirteenth National Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1050--1055",
"other_ids": {
"DOI": [
"https://dl.acm.org/doi/10.5555/1864519.1864543"
]
},
"num": null,
"urls": [],
"raw_text": "John M. Zelle and Raymond J. Mooney. 1996. Learn- ing to parse database queries using inductive logic programming. In Proceedings of the Thirteenth Na- tional Conference on Artificial Intelligence, pages 1050-1055.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Online learning of relaxed CCG grammars for parsing to logical form",
"authors": [
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "678--687",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luke Zettlemoyer and Michael Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In Proceedings of the Joint Conference on Empirical Methods in Natural Language Process- ing and Computational Natural Language Learning, pages 678-687.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars",
"authors": [
{
"first": "Luke",
"middle": [
"S"
],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "658--666",
"other_ids": {
"DOI": [
"https://dl.acm.org/doi/10.5555/3020336.3020416"
]
},
"num": null,
"urls": [],
"raw_text": "Luke S. Zettlemoyer and Michael Collins. 2005. Learn- ing to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence, pages 658- 666.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "An example of generating a Python program with TranX.",
"num": null,
"type_str": "figure"
},
"TABREF3": {
"type_str": "table",
"content": "<table/>",
"text": "BLEU score on the CoNaLa dataset.",
"html": null,
"num": null
}
}
}
}