{
"paper_id": "C18-1008",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:09:05.836372Z"
},
"title": "Neural Transition-based String Transduction for Limited-Resource Setting in Morphology",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Makarov",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Zurich",
"location": {
"country": "Switzerland"
}
},
"email": "[email protected]"
},
{
"first": "Simon",
"middle": [],
"last": "Clematide",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Zurich",
"location": {
"country": "Switzerland"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a neural transition-based model that uses a simple set of edit actions (copy, delete, insert) for morphological transduction tasks such as inflection generation, lemmatization, and reinflection. In a large-scale evaluation on four datasets and dozens of languages, our approach consistently outperforms state-of-the-art systems on low and medium training-set sizes and is competitive in the high-resource setting. Learning to apply a generic copy action enables our approach to generalize quickly from a few data points. We successfully leverage minimum risk training to compensate for the weaknesses of MLE parameter learning and neutralize the negative effects of training a pipeline with a separate character aligner.",
"pdf_parse": {
"paper_id": "C18-1008",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a neural transition-based model that uses a simple set of edit actions (copy, delete, insert) for morphological transduction tasks such as inflection generation, lemmatization, and reinflection. In a large-scale evaluation on four datasets and dozens of languages, our approach consistently outperforms state-of-the-art systems on low and medium training-set sizes and is competitive in the high-resource setting. Learning to apply a generic copy action enables our approach to generalize quickly from a few data points. We successfully leverage minimum risk training to compensate for the weaknesses of MLE parameter learning and neutralize the negative effects of training a pipeline with a separate character aligner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Morphological string transduction involves mapping one word form into another, possibly given a feature specification for the mapping, and comprises such inflectional morphology tasks as reinflection and lemmatization ( Figure 1) , and related problems such as normalization of historical texts. Traditionally, this task has been solved with weighted finite state transducers (Mohri, 2004; Eisner, 2002, WFST) . Recently, it has been approached with neural sequence-to-sequence (seq2seq) methods (Faruqui et al., 2016; Kann and Sch\u00fctze, 2016) , inspired by the advances in neural machine translation (Sutskever et al., 2014; Bahdanau et al., 2014) . Albeit offering a general solution to a special case of the string-tostring mapping problem, seq2seq models are highly data-intensive. The long tradition of modeling for morphology offers insights into the specifics of the task, suggesting models that would exploit inductive biases and thereby attain lower sample complexity. Recent works in seq2seq morphology model full input string context and unbounded dependencies in the output, but also propose conditioning generation on the context-enriched representation of one input character at a time (Aharoni and Goldberg, 2017; Yu et al., 2016) . This and constraining character alignment to be monotonic bring this line of work close to traditional WFST approaches, which monotonically modify a string by performing local changes. Having as our starting point the hard monotonic attention model of Aharoni and Goldberg (2017, HA) , our goal is to improve seq2seq morphological processing by explicitly modeling local string edits commonly studied in traditional approaches. Our contributions are as follows:",
"cite_spans": [
{
"start": 376,
"end": 389,
"text": "(Mohri, 2004;",
"ref_id": "BIBREF22"
},
{
"start": 390,
"end": 409,
"text": "Eisner, 2002, WFST)",
"ref_id": null
},
{
"start": 496,
"end": 518,
"text": "(Faruqui et al., 2016;",
"ref_id": "BIBREF13"
},
{
"start": 519,
"end": 542,
"text": "Kann and Sch\u00fctze, 2016)",
"ref_id": "BIBREF19"
},
{
"start": 600,
"end": 624,
"text": "(Sutskever et al., 2014;",
"ref_id": "BIBREF32"
},
{
"start": 625,
"end": 647,
"text": "Bahdanau et al., 2014)",
"ref_id": "BIBREF4"
},
{
"start": 1199,
"end": 1227,
"text": "(Aharoni and Goldberg, 2017;",
"ref_id": "BIBREF0"
},
{
"start": 1228,
"end": 1244,
"text": "Yu et al., 2016)",
"ref_id": "BIBREF35"
},
{
"start": 1499,
"end": 1530,
"text": "Aharoni and Goldberg (2017, HA)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 220,
"end": 229,
"text": "Figure 1)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 First, we explain HA as a neural transition-based system over edit actions. Alternative models are then available, differing in the choice of edit actions. We argue that extending HA with the COPY edit action is crucial and supported by the nature of the problem, accounting for large performance gains especially in the low-resource setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Second, trained with the original MLE procedure, HA relies on gold action sequences computed by a separate character aligner. As a result, the overall approach is a pipeline. We propose enabling exploration at training (e.g. via expected risk minimization (MRT) or reinforcement learning-style training), thereby allowing the model to prefer alternative actions that also lead to the correct output sequence and neutralizing negative effects of the pipelined architecture. Additionally, this approach benefits from directly optimizing a sequence-level performance metric.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Third, we conduct extensive experiments on the morphological inflection generation, reinflection and lemmatization tasks, showing that our approaches come near to or improve on the state-of-theart results. We make our code and model predictions publicly available. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In our approach, we seek the most probable sequence of edit actions for a given input string and an optional feature specification for the transduction. Unlike traditional WFST approaches to this problem, we abandon the explicit modeling of all possible edit sequences via latent alignments in favor of a greedy, representationally rich RNN-powered transition-based architecture. When training with the MLE criterion following Aharoni and Goldberg (2017) , our overall set-up is a pipeline of a character aligner followed by a greedy neural string transducer. Character alignments generated by the aligner are mapped to gold action sequences, whose conditional likelihood the neural transducer then learns to maximize. Under training with exploration, the neural transducer no longer relies on gold action sequences. Instead, the parameters are adjusted to directly maximize the model's accuracy of producing training-set output sequences.",
"cite_spans": [
{
"start": 427,
"end": 454,
"text": "Aharoni and Goldberg (2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Description",
"sec_num": "2"
},
{
"text": "Let \u03a3 x , \u03a3 y , and \u03a3 a be alphabets of input characters, output characters, and edit actions, respectively. Let x = x 1 , . . . , x n , x i \u2208 \u03a3 x denote an input sequence, y = y 1 , . . . , y p , y j \u2208 \u03a3 y an output sequence, and a = a 1 , . . . , a m , a t \u2208 \u03a3 a an action sequence. Let {f h } H h=1 be the set of morpho-syntactic features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Description",
"sec_num": "2"
},
{
"text": "seq2seq state-transition system We build a greedy transition-based string transducer that uses a seq2seq neural network to model arbitrary dependencies in the input sequence, the unbounded action history, and the non-deterministic choice of the next action. The system operates a buffer filled with RNN-encoded input characters, and a decoder RNN, which implements a push-only stack. The configuration of the system is given by the decoder state. Transitions are scored based on the output of the decoder, which takes as input the encoded character from the top of the buffer. Here, we elaborate on the model architecture. We encode input sequence x with a bidirectional LSTM (Graves and Schmidhuber, 2005) ",
"cite_spans": [
{
"start": 676,
"end": 706,
"text": "(Graves and Schmidhuber, 2005)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Description",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h 1 , . . . , h n = BiLSTM(E(x 1 ), . . . , E(x n )),",
"eq_num": "(1)"
}
],
"section": "Model Description",
"sec_num": "2"
},
{
"text": "where E returns the embedding for x i . Vector h i is thus the representation of x i in the context of the entire sequence x. We push h 1 , . . . , h n in reversed order into the buffer. Transduction begins with the full buffer and the empty decoder state. The decoder LSTM keeps track of the past actions and-through conditioning at each step on h iknows of character x i at the top of the buffer and the full contents of the buffer. From the latest state of the decoder c t\u22121 , we compute the configuration of the system:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Description",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s t = LSTM(c t\u22121 , [A(a t\u22121 ) ; h i ; f ]),",
"eq_num": "(2)"
}
],
"section": "Model Description",
"sec_num": "2"
},
{
"text": "where the input is the concatenation of (i) the embedding of the previous action (given by A), (ii) h i from the top of the buffer indicating the current position in x, and-optionally-(iii) feature vector f , which is the concatenation of the embedded morpho-syntactic features \u03c6 \u2286 {f h } H h=1 associated with this transduction:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Description",
"sec_num": "2"
},
{
"text": "h 7 h 6 h 5 h 4 h 3 g 1 g 2 g 3 Buffer Stack f s 3 ... COPY COPY i e g e n h 8 eos g 0 INS[bos] C O P Y D E L E T E I N S [ i ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Description",
"sec_num": "2"
},
{
"text": "f = [F (f 1 ) ; \u2022 \u2022 \u2022 ; F (f H )] and F (f h ) = F (0) if f h \u2208 \u03c6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Description",
"sec_num": "2"
},
{
"text": "To compute probabilities of transitions a t , we feed s t through a softmax classifier:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Description",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (a t = k | a <t , x, \u0398) = softmax k (W \u2022 s t + b)",
"eq_num": "(3)"
}
],
"section": "Model Description",
"sec_num": "2"
},
{
"text": "Model parameters \u0398 include softmax classifier parameters W and b, the embedding parameters, and the parameters of the encoder and decoder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Description",
"sec_num": "2"
},
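{
"text": "As a minimal sketch of Equations (1)-(3), the following module encodes x with a BiLSTM, advances the decoder LSTM on [A(a_{t-1}) ; h_i ; f], and scores the next action with a softmax classifier. The PyTorch framework, the class name TransducerScorer, and the dimension defaults are illustrative assumptions of this sketch only; the models in this paper are implemented in DyNet (Section 3).",
"code_sketch": [
"import torch",
"import torch.nn as nn",
"",
"class TransducerScorer(nn.Module):",
"    def __init__(self, n_chars, n_actions, feat_dim, emb=100, hid=200):",
"        super().__init__()",
"        self.E = nn.Embedding(n_chars, emb)    # character embeddings E",
"        self.A = nn.Embedding(n_actions, emb)  # action embeddings A",
"        self.encoder = nn.LSTM(emb, hid, bidirectional=True, batch_first=True)",
"        # decoder input: previous action embedding ; h_i ; feature vector f",
"        self.decoder = nn.LSTMCell(emb + 2 * hid + feat_dim, hid)",
"        self.W = nn.Linear(hid, n_actions)     # softmax classifier (W, b)",
"",
"    def encode(self, x_ids):",
"        # Eq. (1): h_1, ..., h_n = BiLSTM(E(x_1), ..., E(x_n)); reversed so that",
"        # list.pop() yields the representation at the top of the buffer.",
"        h, _ = self.encoder(self.E(x_ids).unsqueeze(0))",
"        return list(reversed(list(h.squeeze(0))))",
"",
"    def step(self, state, prev_action, h_i, f):",
"        # prev_action: scalar LongTensor; h_i, f: 1-D tensors",
"        # Eq. (2): s_t = LSTM(c_{t-1}, [A(a_{t-1}) ; h_i ; f])",
"        inp = torch.cat([self.A(prev_action), h_i, f], dim=-1).unsqueeze(0)",
"        hx, cx = self.decoder(inp, state)",
"        # Eq. (3): log P(a_t = k | a_<t, x) = log softmax_k(W s_t + b)",
"        return (hx, cx), torch.log_softmax(self.W(hx), dim=-1).squeeze(0)"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Description",
"sec_num": "2"
},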
{
"text": "Edit actions Traditional transducers edit input sequence x into output sequence y by a sequence of single-character edit actions from the following set (Cotterell et al., 2014) : Let INSERTS y be the set of all insertions with respect to \u03a3 y . We consider the following two action alphabets:",
"cite_spans": [
{
"start": 152,
"end": 176,
"text": "(Cotterell et al., 2014)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Description",
"sec_num": "2"
},
{
"text": "\u2022",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Description",
"sec_num": "2"
},
{
"text": "\u03a3 HA a = INSERTS y \u222a {DELETE} and \u03a3 CA a = \u03a3 HA a \u222a {COPY}. Alphabet \u03a3 HA a",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Description",
"sec_num": "2"
},
{
"text": "is from Aharoni and Goldberg (2017) and includes only the INSERT and DELETE actions. Both substitution and copying of c are expressed as an INSERT(c) followed by a DELETE.",
"cite_spans": [
{
"start": 8,
"end": 35,
"text": "Aharoni and Goldberg (2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Description",
"sec_num": "2"
},
{
"text": "Alphabet \u03a3 CA a adds a designated COPY action to \u03a3 HA a . Thus, copying x i to the output sequence can be executed by one single action. This results in shorter and simpler action sequences dominated by COPY actions, following the observation that inflectional changes are typically small and most of x is preserved in y. 2 Action execution Operationally, reading x i corresponds to popping its representation h i from the top of the buffer. The transducer terminates when the buffer is empty and the latest action a t is INSERT(EOS), where EOS is the end-of-sequence character. If we constrain the number of successive insertions to at most q, the transducer runs in O(n) time, where n is the length of input x. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Description",
"sec_num": "2"
},
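{
"text": "A sketch of the greedy action-execution loop described above: COPY writes the input character at the top of the buffer and pops it, DELETE pops without writing, INSERT(c) writes c, and the transducer stops once the buffer is empty and INSERT(EOS) is chosen. For readability, the buffer holds raw characters instead of their encodings h_i; the callback next_action stands in for the trained scorer and is assumed to propose only legal actions (e.g. no COPY or DELETE on an empty buffer). The cap of 150 actions follows footnote 3.",
"code_sketch": [
"EOS = '<eos>'",
"",
"def transduce(x, next_action, max_actions=150):",
"    buffer = list(reversed(x))       # top of the buffer = end of the list",
"    output, actions = [], []",
"    while len(actions) < max_actions:",
"        act = next_action(buffer, actions)",
"        actions.append(act)",
"        if act == ('INSERT', EOS):",
"            if not buffer:           # terminate: buffer empty and INSERT(EOS)",
"                break",
"            continue",
"        kind, arg = act if isinstance(act, tuple) else (act, None)",
"        if kind == 'COPY':",
"            output.append(buffer.pop())   # write x_i to the output and pop it",
"        elif kind == 'DELETE':",
"            buffer.pop()                  # pop x_i without writing",
"        elif kind == 'INSERT':",
"            output.append(arg)            # write the inserted character",
"    return ''.join(output), actions"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Description",
"sec_num": "2"
},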
{
"text": "The model is trained to maximize the conditional log-likelihood of the data",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MLE training",
"sec_num": null
},
{
"text": "D = {(x (l) , a (l) )} N l=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MLE training",
"sec_num": null
},
{
"text": ", which is an everywhere differentiable function of parameters \u0398:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MLE training",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L(D, \u0398) = N l=1 m t=1 log P (a (l) t | a (l) <t , x (l) , \u0398)",
"eq_num": "(4)"
}
],
"section": "MLE training",
"sec_num": null
},
{
"text": "The gold action sequences a (l) are computed by a deterministic algorithm from some character alignment: ",
"cite_spans": [
{
"start": 28,
"end": 31,
"text": "(l)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MLE training",
"sec_num": null
},
{
"text": "a (l) = C Align \u03a3a (x (l) , y (l) ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MLE training",
"sec_num": null
},
{
"text": "b k \u2208 \u03a3 x \u222a { } and c k \u2208 \u03a3 y \u222a { } but not b k = c k = : d(b, c) = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 COPY, if b = c, DELETE, if c = , INSERT(c), if b = , DELETE, INSERT(c) otherwise % substitution",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MLE training",
"sec_num": null
},
{
"text": "Applying this procedure to e.g. the CRP alignment from Figure 3 , we obtain the following gold action sequence: COPY, COPY, DELETE, INSERT(o), DELETE, COPY, DELETE, DELETE.",
"cite_spans": [],
"ref_spans": [
{
"start": 55,
"end": 63,
"text": "Figure 3",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "MLE training",
"sec_num": null
},
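{
"text": "The conversion d(b, c) can be written directly as code. In the sketch below, EPS marks the empty string in an alignment pair; the example pairs are reconstructed from the gold action sequence quoted above (here illustrated with fliegen to flog) and are only one alignment consistent with it.",
"code_sketch": [
"EPS = ''",
"",
"def alignment_to_actions(pairs):",
"    actions = []",
"    for b, c in pairs:                        # b from x or EPS, c from y or EPS",
"        if b == c and b != EPS:",
"            actions.append('COPY')",
"        elif c == EPS:",
"            actions.append('DELETE')",
"        elif b == EPS:",
"            actions.append(('INSERT', c))",
"        else:                                 # substitution: DELETE then INSERT(c)",
"            actions.extend(['DELETE', ('INSERT', c)])",
"    return actions",
"",
"pairs = [('f', 'f'), ('l', 'l'), ('i', EPS), (EPS, 'o'),",
"         ('e', EPS), ('g', 'g'), ('e', EPS), ('n', EPS)]",
"assert alignment_to_actions(pairs) == ['COPY', 'COPY', 'DELETE', ('INSERT', 'o'),",
"                                       'DELETE', 'COPY', 'DELETE', 'DELETE']"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MLE training",
"sec_num": null
},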
{
"text": "Learning with exploration Training with MLE comes with a number of limitations. First, the model is not exposed to its own errors at training time: It makes predictions conditioned on gold-action histories, which is at odds with test time when the model has to condition on predicted actions. Second, MLE training increases the model's per-action likelihood, although at test time, the model's performance is assessed with sequence-level accuracy or edit distance. Both constitute well-known MLE training biases-the exposure bias and the loss-evaluation mismatch (Wiseman and Rush, 2016) . Finally, we would like the model to be less dependent on the gold actions generated by the aligner, which is uninformed of the downstream task, and that at training, the model can choose alternative action sequences leading to correct predictions, if that helps it generalize.",
"cite_spans": [
{
"start": 563,
"end": 587,
"text": "(Wiseman and Rush, 2016)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MLE training",
"sec_num": null
},
{
"text": "To address all these issues at once, we train the model by minimizing the expected risk (Och, 2003; Smith and Eisner, 2006) of the actual training data T = {(x (l) , y (l) )} N l=1 :",
"cite_spans": [
{
"start": 88,
"end": 99,
"text": "(Och, 2003;",
"ref_id": "BIBREF24"
},
{
"start": 100,
"end": 123,
"text": "Smith and Eisner, 2006)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MLE training",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R(T, \u0398) = N l=1 E a|x (l) ; \u0398 \u2206(y, y (l) ) ,",
"eq_num": "(5)"
}
],
"section": "MLE training",
"sec_num": null
},
{
"text": "where y is computed from a and x, and the risk is given by a combination of normalized Levenshtein distance (NLD) and accuracy:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MLE training",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2206(y, y (l) ) = NLD(y, y (l) ) \u2212 1{y = y (l) }",
"eq_num": "(6)"
}
],
"section": "MLE training",
"sec_num": null
},
{
"text": "Thus, an action sequence a attains the lowest risk of \u22121 if its corresponding output sequence y is identical to y (l) of the training sample and the highest risk of +1 if the number of edits from y to y (l) equals the maximum of their lengths. Figure 4 : Accuracy as a function of dataset size (left) and the ratio of dataset size to the number of unique transformations (right) for selected experiments. A log scale is used for the X axis. CLX50/CLX300=average scores on CELEX with 50/300 samples ( Figure 5 ), SGM16 =SIGMORPHON2016, SGM17L/SGM17M=SIGMORPHON2017-low/medium, LEM=average scores on lemmatization, LEMGA/LEMTL=lemmatization Irish/Tagalog.",
"cite_spans": [],
"ref_spans": [
{
"start": 244,
"end": 252,
"text": "Figure 4",
"ref_id": null
},
{
"start": 500,
"end": 508,
"text": "Figure 5",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "MLE training",
"sec_num": null
},
{
"text": "Following Shen et al. (2016) , we approximate the expectation under the posterior distribution P (a | x (l) ; \u0398) with ancestral sampling from the model and re-normalize the sampled probability scores to get a new distribution Q:",
"cite_spans": [
{
"start": 10,
"end": 28,
"text": "Shen et al. (2016)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MLE training",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R(D, \u0398) \u2248 N l=1 a\u2208S(x (l) ) Q(a | x (l) ; \u0398, \u03b1) \u2206(y, y (l) ) (7) Q(a | x (l) ; \u0398, \u03b1) = P (a | x (l) ; \u0398) \u03b1 a \u2208S(x (l) ) P (a | x (l) ; \u0398) \u03b1",
"eq_num": "(8)"
}
],
"section": "MLE training",
"sec_num": null
},
{
"text": "Here, S(x (l) ) denotes the set of samples from P (a | x (l) ; \u0398) and \u03b1 \u2208 R is a hyper-parameter that controls for the peakedness of the new distribution Q.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MLE training",
"sec_num": null
},
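{
"text": "For concreteness, a pure-Python sketch of the sampled risk of Equations (5)-(8). The helper names nld, risk, and expected_risk are assumptions of this sketch; during training the same quantities are computed with the model's differentiable action probabilities, which is what makes the objective trainable by gradient descent.",
"code_sketch": [
"import math",
"",
"def nld(y, gold):",
"    # normalized Levenshtein distance between two strings",
"    n, m = len(y), len(gold)",
"    d = [[i + j if i * j == 0 else 0 for j in range(m + 1)] for i in range(n + 1)]",
"    for i in range(1, n + 1):",
"        for j in range(1, m + 1):",
"            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,",
"                          d[i - 1][j - 1] + (y[i - 1] != gold[j - 1]))",
"    return d[n][m] / max(n, m, 1)",
"",
"def risk(y, gold):",
"    # Eq. (6): -1 for an exact match, up to +1 for maximal edit distance",
"    return nld(y, gold) - (1.0 if y == gold else 0.0)",
"",
"def expected_risk(samples, gold, alpha=1.0):",
"    # samples: list of (output string, log-probability of its action sequence)",
"    z = sum(math.exp(alpha * lp) for _, lp in samples)     # Eq. (8) normalizer",
"    return sum(math.exp(alpha * lp) / z * risk(y, gold)    # Eqs. (7)-(8)",
"               for y, lp in samples)"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MLE training",
"sec_num": null
},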
{
"text": "In the following experiments, we evaluate the performance of our model with an explicit copy action (referred to as CA) and show how it further improves with exploration training (-MRT). Unless stated otherwise, our MLE models are trained on gold actions computed using Mans Hulden's Chinese Restaurant Process string-pair aligner (indicated as CRP) 4 and decoded with beam search. On some problems, we find a simple strategy, which heuristically maximizes the number of COPY actions, to work surprisingly well: The Longest Common Substring aligner (LCS) first aligns the longest common substring of x and y and then pads both strings to the same length.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
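{
"text": "One reading of the LCS aligner as a sketch: find the longest common substring of x and y, align it character by character, and pad the remaining prefixes and suffixes with the empty string. The left-aligned padding of the remainders is an assumption of this sketch; the original aligner's exact padding scheme is not specified here.",
"code_sketch": [
"EPS = ''",
"",
"def lcs_align(x, y):",
"    # longest common substring by dynamic programming",
"    best_len, best_i, best_j = 0, 0, 0",
"    prev = [0] * (len(y) + 1)",
"    for i in range(1, len(x) + 1):",
"        cur = [0] * (len(y) + 1)",
"        for j in range(1, len(y) + 1):",
"            if x[i - 1] == y[j - 1]:",
"                cur[j] = prev[j - 1] + 1",
"                if cur[j] > best_len:",
"                    best_len, best_i, best_j = cur[j], i, j",
"        prev = cur",
"    def pad_zip(a, b):",
"        n = max(len(a), len(b))",
"        return list(zip(list(a) + [EPS] * (n - len(a)),",
"                        list(b) + [EPS] * (n - len(b))))",
"    pre = pad_zip(x[:best_i - best_len], y[:best_j - best_len])",
"    mid = list(zip(x[best_i - best_len:best_i], y[best_j - best_len:best_j]))",
"    post = pad_zip(x[best_i:], y[best_j:])",
"    return pre + mid + post   # list of (input char or EPS, output char or EPS)"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},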
{
"text": "MRT models are initialized with the corresponding MLE models and decoded with beam search. We found the best value of \u03b1 = 1 from {1, 0.1, 0.05} on the CELEX-ALL task ( \u00a7 3.1) and used that for all other datasets as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "We use the same embedding parameters for characters and insertion actions (i.e. A(INSERT(b)) = E(b)) to match closely the set-up of Aharoni and Goldberg (2017) . In all our systems, the dimension of the character and action embeddings is 100, LSTM hidden layers are of size 200, and all LSTMs are singlelayer. We use LSTMs with peephole connections and coupled input and forget gates (Greff et al., 2016) . We optimize with ADADELTA (Zeiler, 2012) and update parameters at a single training sample (=batch size 1) during MLE training. For MRT, we build sets S(x (l) ) by drawing twenty samples per training example. We mini-batch using these sets as batches. We include the gold action sequence (generated for MLE training) into the batch. We implement our models using DyNet (Neubig et al., 2017) .",
"cite_spans": [
{
"start": 132,
"end": 159,
"text": "Aharoni and Goldberg (2017)",
"ref_id": "BIBREF0"
},
{
"start": 384,
"end": 404,
"text": "(Greff et al., 2016)",
"ref_id": "BIBREF16"
},
{
"start": 776,
"end": 797,
"text": "(Neubig et al., 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "All experiments report exact accuracies. They are mean accuracies over single runs with different initializations, unless the model is an ensemble (marked with an -E suffix). The ensembles are built with majority voting over differently initialized runs of the same model. We evaluate our approaches on four standard morphological datasets and compare to the following published systems: (HA) the ensemble of five MLE models over \u03a3 HA a of Aharoni and Goldberg (2017) as well as our re-implementation of a single model marked as HA * ; (MED) the ensemble of five softattentional models of Kann and Sch\u00fctze (2016) and an alternative implementation of the soft-attention approach, SOFT, by Aharoni and Goldberg (2017) ; (NWFST) the neural WFST model of Rastogi et al. (2016) and (LAT) the non-neural WFST with latent variables of Dreyer et al. (2008) .",
"cite_spans": [
{
"start": 440,
"end": 467,
"text": "Aharoni and Goldberg (2017)",
"ref_id": "BIBREF0"
},
{
"start": 589,
"end": 612,
"text": "Kann and Sch\u00fctze (2016)",
"ref_id": "BIBREF19"
},
{
"start": 688,
"end": 715,
"text": "Aharoni and Goldberg (2017)",
"ref_id": "BIBREF0"
},
{
"start": 751,
"end": 772,
"text": "Rastogi et al. (2016)",
"ref_id": "BIBREF26"
},
{
"start": 828,
"end": 848,
"text": "Dreyer et al. (2008)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
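{
"text": "A tiny sketch of the majority-vote ensembling used for the -E models: for each test item, the prediction chosen by most runs wins. Breaking ties in favour of the earliest run is an assumption of this sketch.",
"code_sketch": [
"from collections import Counter",
"",
"def majority_vote(predictions_per_run):",
"    # predictions_per_run: one list of output strings per differently initialized run",
"    ensembled = []",
"    for outputs in zip(*predictions_per_run):",
"        counts = Counter(outputs)",
"        top = max(counts.values())",
"        ensembled.append(next(o for o in outputs if counts[o] == top))",
"    return ensembled"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},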
{
"text": "The task is to map an inflected form x into another form y of that word given a feature specification \u03c6 for this transformation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological reinflection",
"sec_num": "3.1"
},
{
"text": "CELEX This dataset of German verbal morphology transformations was compiled by Dreyer et al. (2008) from the CELEX database (Baayen et al., 1993) . It comprises four transformations (13SIA \u219213SKE, 2PIE \u219213PKE, 2PKE \u2192z, rP \u2192pA), 5 featuring such morphological phenomena as circumfixation, infixation, and irregular stem changes. The data are split into five folds, each with 500 training samples per transformation. We conduct two types of evaluation on these data. In the original experiment, which we call CELEX-BY-TASK, models are trained on each transformation separately, and scores are averaged over the folds. In the second experiment, CELEX-ALL, five single models are trained on all the 2,000 samples of one fold and then ensembled. Again, scores are averaged over the folds. As part of CELEX-BY-TASK, we additionally evaluate how our models perform on even fewer-50, 100, and 300-training samples on two tasks, 2PKE and 13SIA. CELEX could be considered a relatively simple dataset as the ratio of the number of training samples to the number of unique transformations is high, even though the overall training-data size is modest. On the other hand, most CELEX tasks require learning complex lexical properties such as the distinction between strong and weak verbs or prefix types.",
"cite_spans": [
{
"start": 79,
"end": 99,
"text": "Dreyer et al. (2008)",
"ref_id": "BIBREF8"
},
{
"start": 124,
"end": 145,
"text": "(Baayen et al., 1993)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological reinflection",
"sec_num": "3.1"
},
{
"text": "Given a feature specification \u03c6 and a base form x, the task is to generate the corresponding inflected form y.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological inflection generation",
"sec_num": "3.2"
},
{
"text": "Sigmorphon 2017 The low (100 training samples) and medium (1,000 training samples) settings of the SIGMORPHON 2017 shared task data (Cotterell et al., 2017) feature fifty-two languages. The datasets contain extremely diverse language material and morphological transformations. Unlike CELEX, input x is always a dictionary form, however morphological changes are unrestricted. The low setting constitutes a very hard learning problem, with the ratio of training samples to unique transformations being 2.8 on average (SD = 2.9). In the medium setting, the mean number of unique transformations rises to 19.8 (SD = 29.3), with a minimum of 1.4 observed for Basque and a maximum of 200 for English. For this dataset, we also show the results for the official baseline, a ruled-based system that is particularly strong in the low setting, 6 and the best systems of the shared task (Makarov et al., 2017) .",
"cite_spans": [
{
"start": 132,
"end": 156,
"text": "(Cotterell et al., 2017)",
"ref_id": "BIBREF7"
},
{
"start": 878,
"end": 900,
"text": "(Makarov et al., 2017)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological inflection generation",
"sec_num": "3.2"
},
{
"text": "Sigmorphon 2016 The SIGMORPHON 2016 shared task dataset is the largest dataset. It comprises ten languages with about 12,800 training examples on average. The number of samples per transformation varies from 6 for Maltese to 198 for Hungarian, being 112 samples per transformation on average (SD = 51.3). In both SIGMORPHONS, we train five single models for each language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological inflection generation",
"sec_num": "3.2"
},
{
"text": "Given an inflected word form x (without any feature specification), the task is to predict the correct dictionary form y. Following Dreyer (2011) and Rastogi et al. (2016) , we evaluate our approach on a subset of the dataset by Wicentowski (2002) . The data, split into ten folds, comprise four languages, with per-fold training sizes ranging on average from 1,100 for Irish to 7,635 for Tagalog. For each language, we train a separate model for each fold and then average the scores over the folds.",
"cite_spans": [
{
"start": 150,
"end": 171,
"text": "Rastogi et al. (2016)",
"ref_id": "BIBREF26"
},
{
"start": 229,
"end": 247,
"text": "Wicentowski (2002)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lemmatization",
"sec_num": "3.3"
},
{
"text": "Generally, comparing the performance of CA and HA (or HA * ), we observe that CA achieves great performance gains on small-sized problems while matching HA in the higher-resource setting (Figure 4 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 187,
"end": 196,
"text": "(Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "CA is a very competitive model on both CELEX-BY-TASK and CELEX-ALL, and adding exploration (CA-MRT) results in the strongest performance in both evaluations (Table 1) . In contrast to HA * , in very low settings ( Figure 5 ), CA performs not much worse than the only non-neural model, LAT. HA * and NWFST need around 300 training examples to start catching up, and the extremely low-resource conditions (50, 100) on 13SIA are especially troublesome for HA * . On CELEX-ALL, even with more training data, soft-attentional ensemble MED is typically much weaker, including tasks with infixation (2PKE) and circumfixation (rP).",
"cite_spans": [],
"ref_spans": [
{
"start": 157,
"end": 166,
"text": "(Table 1)",
"ref_id": "TABREF2"
},
{
"start": 214,
"end": 222,
"text": "Figure 5",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Morphological reinflection",
"sec_num": "4.1"
},
{
"text": "Advancing further on most CELEX tasks is difficult due to morphological irregularities. As an example, examining the predictions of CA-MRT on one fold of the rP task reveals that the system largely fails to predict strong-verb participles (71% of the errors), conjugating 67% of them as if they were regular. Table 3 : Results on the SIGMORPHON 2016 dataset: ru=Russian, de=German, es=Spanish, ka=Georgian, fi=Finnish, tr=Turkish, hu=Hungarian, nv=Navaho, ar=Arabic, mt=Maltese. Table 2 summarizes the results on the SIGMORPHON 2017 dataset. In the low setting, CA easily beats the baseline system, whereas HA * fails to do so. Our simple majority-vote ensemble CA-MRT-E over five models comes very close to the complex 15-strong ensemble HA[EC]M-E15 of Makarov et al. (2017) , the best system of the shared task. Under a paired permutation test, the latter system is statistically significantly better (p < 0.05) on only twenty one languages.",
"cite_spans": [
{
"start": 754,
"end": 775,
"text": "Makarov et al. (2017)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 309,
"end": 316,
"text": "Table 3",
"ref_id": null
},
{
"start": 479,
"end": 486,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Morphological reinflection",
"sec_num": "4.1"
},
{
"text": "In the medium setting, CA maintains the advantage, although the performance gap from HA * is much smaller. CA-MRT-E even outperforms the shared task's best system, although the gain is statistically significant for only ten languages. In both settings, MRT consistently improves the performance of both the HA * and CA models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological inflection generation",
"sec_num": "4.2"
},
{
"text": "In the high-resource scenario of SIGMORPHON 2016 (Table 3) , HA * and CA attain virtually identical results, occasionally outperforming the soft-attentional ensembles. Unlike HA, we use the same set of hyper-parameters (the dimension of embeddings, the number of hidden LSTM layers, etc.) for all of our experiments, which might explain that both our reimplementation HA * and CA perform less strongly here. Due to computational restrictions, we could not apply MRT to this dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 49,
"end": 58,
"text": "(Table 3)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Morphological inflection generation",
"sec_num": "4.2"
},
{
"text": "On the lemmatization task (Table 4) , CA strongly outperforms WFST models LAT and NWFST on average. Yet, the HA * reimplementation consistently delivers the best results on every language. The error analysis for English in Rastogi et al. (2016) mentions the tendency of their system, NWFST, to simply copy the inflected word over, which accounts for 25% of English-language errors. Given that CA also has a dedicated copy action, one might suspect that the inferior performance of CA compared to HA * for English and Basque would be due to excessive copying. An inspection of the incorrectly predicted lemmas reveals that both systems produce virtually the same number of copy errors. The difference in error counts is actually due to cases where the system modifies the inflected word form. For English, errors typically occur in strong verbs and verbs with graphemic alternations, as e.g. \"oozing\" gets incorrectly lemmatized as \"ooz\". The scores of over 97% on every language and the kind of unsolved cases, likely requiring external resources, suggest that this task should be considered solved.",
"cite_spans": [
{
"start": 223,
"end": 244,
"text": "Rastogi et al. (2016)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 26,
"end": 35,
"text": "(Table 4)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Lemmatization",
"sec_num": "4.3"
},
{
"text": "As a final remark, we note that with the datasets at hand, performance attribution is often hampered by the lack of explicit characterization of morphological phenomena or lexical properties at the example level (we have derived some of these meta-data for the CELEX rP task). Given the difficulties interpreting neural models, computational morphology could arguably profit from challenge sets that have recently been gaining popularity in machine translation (Sennrich, 2017; Avramidis et al., 2018) .",
"cite_spans": [
{
"start": 461,
"end": 477,
"text": "(Sennrich, 2017;",
"ref_id": "BIBREF29"
},
{
"start": 478,
"end": 501,
"text": "Avramidis et al., 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lemmatization",
"sec_num": "4.3"
},
{
"text": "Traditional models for morphological string transduction are discriminatively trained WFSTs (Cotterell et al., 2014; Dreyer et al., 2008; Eisner, 2002) . The transducer defines eligible edit sequences for x (each implying a different monotonic character alignment), and its weights are expressed in terms of handcrafted features. Rastogi et al. (2016) Table 4 : Results on the lemmatization dataset.",
"cite_spans": [
{
"start": 92,
"end": 116,
"text": "(Cotterell et al., 2014;",
"ref_id": "BIBREF5"
},
{
"start": 117,
"end": 137,
"text": "Dreyer et al., 2008;",
"ref_id": "BIBREF8"
},
{
"start": 138,
"end": 151,
"text": "Eisner, 2002)",
"ref_id": "BIBREF12"
},
{
"start": 330,
"end": 351,
"text": "Rastogi et al. (2016)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 352,
"end": 359,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "WFST, thereby conditioning on global context. The powerful approach of Dreyer et al. (2008) adds latent variables to a globally normalized log-linear WFST to learn task-specific properties: a word's paradigm class and approximate morphological segmentation. Enabling soft character alignment via a deterministic function of inputs (Kann and Sch\u00fctze, 2016) has proven crucial to the success of seq2seq models first proposed for this task in Faruqui et al. (2016) . In line with the traditional simplification of the task, other neural-network approaches treat hard monotone character alignment as a latent variable that the model marginalizes out using dynamic programming, while enabling unbounded dependencies in the output and permitting online generation (Yu et al., 2016; Graves, 2012) . An appealing alternative to latent alignment is to learn from supervised alignment, an idea explored to train soft-attention models (Mi et al., 2016) . For hard-attention models (Aharoni and Goldberg, 2017) , training with an observed alignment is particularly simple as it results in learning from a single gold action sequence.",
"cite_spans": [
{
"start": 71,
"end": 91,
"text": "Dreyer et al. (2008)",
"ref_id": "BIBREF8"
},
{
"start": 331,
"end": 355,
"text": "(Kann and Sch\u00fctze, 2016)",
"ref_id": "BIBREF19"
},
{
"start": 440,
"end": 461,
"text": "Faruqui et al. (2016)",
"ref_id": "BIBREF13"
},
{
"start": 758,
"end": 775,
"text": "(Yu et al., 2016;",
"ref_id": "BIBREF35"
},
{
"start": 776,
"end": 789,
"text": "Graves, 2012)",
"ref_id": "BIBREF15"
},
{
"start": 924,
"end": 941,
"text": "(Mi et al., 2016)",
"ref_id": "BIBREF21"
},
{
"start": 970,
"end": 998,
"text": "(Aharoni and Goldberg, 2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "A state-transition system is an elegant, linear-time model for morphological string transduction, in which eligible monotonic edit sequences are implied by the semantics of the actions. As demonstrated on other tasks (Dyer et al., 2015; Andor et al., 2016) , when provided with global context via RNNs, the model overcomes the limitations of a locally normalized conditional distribution, while retaining computational efficiency.",
"cite_spans": [
{
"start": 217,
"end": 236,
"text": "(Dyer et al., 2015;",
"ref_id": "BIBREF10"
},
{
"start": 237,
"end": 256,
"text": "Andor et al., 2016)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Using a single designated copy action in not new in morphological string transduction, e.g. the SIG-MORPHON 2016 feature-based state-transition baseline uses COPY[n] , where n is the number of characters to copy. Biasing towards copy edits is crucial to the performance of the model of Rastogi et al. (2016) . An alternative to the copy action is to introduce a binary latent variable that signals whether y i is copied from x i or generated (Gu et al., 2016; Gulcehre et al., 2016; See et al., 2017) . Extending models with alignment variables with such a copying mechanism is simple as the the choice of which x i has to be copied need not be modeled (Makarov et al., 2017) : The copy variable points to the x i that y j is aligned with. This alternative requires learning additional model parameters, which could explain its somewhat worse performance on smaller-sized problems.",
"cite_spans": [
{
"start": 158,
"end": 165,
"text": "COPY[n]",
"ref_id": null
},
{
"start": 286,
"end": 307,
"text": "Rastogi et al. (2016)",
"ref_id": "BIBREF26"
},
{
"start": 442,
"end": 459,
"text": "(Gu et al., 2016;",
"ref_id": "BIBREF17"
},
{
"start": 460,
"end": 482,
"text": "Gulcehre et al., 2016;",
"ref_id": "BIBREF18"
},
{
"start": 483,
"end": 500,
"text": "See et al., 2017)",
"ref_id": "BIBREF28"
},
{
"start": 653,
"end": 675,
"text": "(Makarov et al., 2017)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Minimum risk training (Smith and Eisner, 2006; Och, 2003) is one simple solution enabling exploration and addressing the loss-evaluation mismatch. The approach of Shen et al. (2016) closely relates to classical policy gradient methods in reinforcement learning (Edunov et al., 2018) . A number of alternative methods have recently been proposed to address the MLE training biases in the context of seq2seq models (Andor et al., 2016; Wiseman and Rush, 2016; Ranzato et al., 2016; Rennie et al., 2017) .",
"cite_spans": [
{
"start": 22,
"end": 46,
"text": "(Smith and Eisner, 2006;",
"ref_id": "BIBREF31"
},
{
"start": 47,
"end": 57,
"text": "Och, 2003)",
"ref_id": "BIBREF24"
},
{
"start": 163,
"end": 181,
"text": "Shen et al. (2016)",
"ref_id": "BIBREF30"
},
{
"start": 261,
"end": 282,
"text": "(Edunov et al., 2018)",
"ref_id": "BIBREF11"
},
{
"start": 413,
"end": 433,
"text": "(Andor et al., 2016;",
"ref_id": "BIBREF1"
},
{
"start": 434,
"end": 457,
"text": "Wiseman and Rush, 2016;",
"ref_id": "BIBREF34"
},
{
"start": 458,
"end": 479,
"text": "Ranzato et al., 2016;",
"ref_id": "BIBREF25"
},
{
"start": 480,
"end": 500,
"text": "Rennie et al., 2017)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "In a large-scale evaluation on different morphological tasks and languages, we show that a neural transition-based system over edit actions consistently outperforms state-of-the-art systems on morphological string transduction tasks in low-and medium-resource settings and is competitive on large training sets. Crucially, adding a designated action to copy the input character over to the output string helps the transition model generalize quickly from very few data points. Using a training procedure that enables exploration of the action space (e.g. minimum risk training) consistently improves the performance of our models as they are exposed to action sequences other than those proposed by the character aligner underlying the static oracle in the MLE training procedure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://github.com/ZurichNLP/coling2018-neural-transition-based-morphology",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We also experimented with extending \u03a3 HA a and \u03a3 CA a with actions for character substitutions. The resulting models perform similarly to models without substitutions, and so we do not report them here.3 In practice, we simply cap the number of actions at 150.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/ryancotterell/sigmorphon2016/blob/master/src/baseline/align.c",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Glossary: 13SIA=1st/3rd person singular indicative past; 13SKE=1st/3rd person singular subjunctive present; 2PIE=2nd person plural indicative present; 13PKE=1st/3rd plural subjunctive present; 2PKE=2nd person plural subjunctive present; z=\"zu\" infinitive; rP=plural imperative; pA=past participle.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/sigmorphon/conll2017/tree/master/baseline",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Tatyana Ruzsics, Tanja Samard\u017ei\u0107, Mathias M\u00fcller, Roee Aharoni, Pushpendre Rastogi, and the anonymous reviewers. Peter Makarov has been supported by European Research Council Grant No. 338875.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Morphological inflection generation with hard monotonic attention",
"authors": [
{
"first": "Roee",
"middle": [],
"last": "Aharoni",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2017,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roee Aharoni and Yoav Goldberg. 2017. Morphological inflection generation with hard monotonic attention. In ACL.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Globally normalized transition-based neural networks",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Andor",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Aliaksei",
"middle": [],
"last": "Severyn",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Presta",
"suffix": ""
},
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
}
],
"year": 2016,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. In ACL.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Fine-grained evaluation of quality estimation for machine translation based on a linguistically-motivated test suite",
"authors": [
{
"first": "Eleftherios",
"middle": [],
"last": "Avramidis",
"suffix": ""
},
{
"first": "Vivien",
"middle": [],
"last": "Macketanz",
"suffix": ""
},
{
"first": "Arle",
"middle": [],
"last": "Lommel",
"suffix": ""
},
{
"first": "Hans",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 1st Workshop on Translation Quality Estimation and Automatic Post-Editing. AMTA",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eleftherios Avramidis, Vivien Macketanz, Arle Lommel, and Hans Uszkoreit. 2018. Fine-grained evaluation of quality estimation for machine translation based on a linguistically-motivated test suite. In Proceedings of the 1st Workshop on Translation Quality Estimation and Automatic Post-Editing. AMTA.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The CELEX lexical database",
"authors": [
{
"first": "",
"middle": [],
"last": "Rh Baayen",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Piepenbrock",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Van Rijn",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "RH Baayen, R Piepenbrock, and H Van Rijn. 1993. The CELEX lexical database. LDC.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Stochastic contextual edit distance and probabilistic FSTs",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2014,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Cotterell, Nanyun Peng, and Jason Eisner. 2014. Stochastic contextual edit distance and probabilistic FSTs. In ACL.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The SIGMORPHON 2016 Shared Task-Morphological Reinflection",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Christo",
"middle": [],
"last": "Kirov",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Sylak-Glassman",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
},
{
"first": "Mans",
"middle": [],
"last": "Hulden",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Meeting of SIGMORPHON",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016. The SIGMORPHON 2016 Shared Task-Morphological Reinflection. In Proceedings of the 2016 Meeting of SIGMORPHON.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The CoNLL-SIGMORPHON 2017 shared task: Universal morphological reinflection in 52 languages",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Christo",
"middle": [],
"last": "Kirov",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Sylak-Glassman",
"suffix": ""
},
{
"first": "G\u00e9raldine",
"middle": [],
"last": "Walther",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Vylomova",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Sandra",
"middle": [],
"last": "K\u00fcbler",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the CoNLL-SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Cotterell, Christo Kirov, John Sylak-Glassman, G\u00e9raldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sandra K\u00fcbler, David Yarowsky, Jason Eisner, and Mans Hulden. 2017. The CoNLL-SIGMORPHON 2017 shared task: Universal morphological reinflection in 52 languages. In Proceedings of the CoNLL- SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection. ACL.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Latent-variable modeling of string transductions with finite-state methods",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Dreyer",
"suffix": ""
},
{
"first": "Jason",
"middle": [
"R"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2008,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Markus Dreyer, Jason R Smith, and Jason Eisner. 2008. Latent-variable modeling of string transductions with finite-state methods. In ACL.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A non-parametric model for the discovery of inflectional paradigms from plain text using graphical models over strings",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Dreyer",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Markus Dreyer. 2011. A non-parametric model for the discovery of inflectional paradigms from plain text using graphical models over strings. Ph.D. thesis, The Johns Hopkins University.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Transition-based dependency parsing with stack long short-term memory",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Austin",
"middle": [],
"last": "Matthews",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A Smith. 2015. Transition-based depen- dency parsing with stack long short-term memory. In ACL.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Classical structured prediction losses for sequence to sequence learning",
"authors": [
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2018,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sergey Edunov, Myle Ott, Michael Auli, David Grangier, and Marc'Aurelio Ranzato. 2018. Classical structured prediction losses for sequence to sequence learning. In NAACL-HLT.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Parameter estimation for probabilistic finite-state transducers",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2002,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Eisner. 2002. Parameter estimation for probabilistic finite-state transducers. In ACL.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Morphological inflection generation using character sequence to sequence learning",
"authors": [
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2016,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manaal Faruqui, Yulia Tsvetkov, Graham Neubig, and Chris Dyer. 2016. Morphological inflection generation using character sequence to sequence learning. In NAACL-HLT.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Framewise phoneme classification with bidirectional LSTM and other neural network architectures",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 2005,
"venue": "Neural Networks",
"volume": "",
"issue": "5",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Graves and J\u00fcrgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, 18(5).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Sequence transduction with recurrent neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1211.3711"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Graves. 2012. Sequence transduction with recurrent neural networks. arXiv preprint arXiv:1211.3711.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "LSTM: A search space odyssey",
"authors": [
{
"first": "K",
"middle": [],
"last": "Greff",
"suffix": ""
},
{
"first": "R",
"middle": [
"K"
],
"last": "Srivastava",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Koutnk",
"suffix": ""
},
{
"first": "B",
"middle": [
"R"
],
"last": "Steunebrink",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 2016,
"venue": "IEEE Transactions on Neural Networks and Learning Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Greff, R. K. Srivastava, J. Koutnk, B. R. Steunebrink, and J. Schmidhuber. 2016. LSTM: A search space odyssey. IEEE Transactions on Neural Networks and Learning Systems, PP(99).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Incorporating copying mechanism in sequence-tosequence learning",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "O",
"middle": [
"K"
],
"last": "Victor",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O. K. Li. 2016. Incorporating copying mechanism in sequence-to- sequence learning. CoRR.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Pointing the unknown words",
"authors": [
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Sungjin",
"middle": [],
"last": "Ahn",
"suffix": ""
},
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2016,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In ACL.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Single-model encoder-decoder with explicit morphological representation for reinflection",
"authors": [
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2016,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katharina Kann and Hinrich Sch\u00fctze. 2016. Single-model encoder-decoder with explicit morphological represen- tation for reinflection. In ACL.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Align and copy: UZH at SIGMORPHON 2017 shared task for morphological reinflection",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Makarov",
"suffix": ""
},
{
"first": "Tatiana",
"middle": [],
"last": "Ruzsics",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Clematide",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Makarov, Tatiana Ruzsics, and Simon Clematide. 2017. Align and copy: UZH at SIGMORPHON 2017 shared task for morphological reinflection. In Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection. ACL.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Supervised attentions for neural machine translation",
"authors": [
{
"first": "Haitao",
"middle": [],
"last": "Mi",
"suffix": ""
},
{
"first": "Zhiguo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Abe",
"middle": [],
"last": "Ittycheriah",
"suffix": ""
}
],
"year": 2016,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haitao Mi, Zhiguo Wang, and Abe Ittycheriah. 2016. Supervised attentions for neural machine translation. In EMNLP.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Weighted finite-state transducer algorithms. an overview. In Formal Languages and Applications, volume 148 of Studies in Fuzziness and Soft Computing",
"authors": [
{
"first": "Mehryar",
"middle": [],
"last": "Mohri",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mehryar Mohri. 2004. Weighted finite-state transducer algorithms. an overview. In Formal Languages and Applications, volume 148 of Studies in Fuzziness and Soft Computing. Springer Berlin Heidelberg.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Swabha Swayamdipta, and Pengcheng Yin. 2017. Dynet: The dynamic neural network toolkit",
"authors": [
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Austin",
"middle": [],
"last": "Matthews",
"suffix": ""
},
{
"first": "Waleed",
"middle": [],
"last": "Ammar",
"suffix": ""
},
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Clothiaux",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Cynthia",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Garrette",
"suffix": ""
},
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Lingpeng",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Adhiguna",
"middle": [],
"last": "Kuncoro",
"suffix": ""
},
{
"first": "Gaurav",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Chaitanya",
"middle": [],
"last": "Malaviya",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Oda",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Richardson",
"suffix": ""
},
{
"first": "Naomi",
"middle": [],
"last": "Saphra",
"suffix": ""
},
{
"first": "Swabha",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Pengcheng",
"middle": [],
"last": "Yin",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1701.03980"
]
},
"num": null,
"urls": [],
"raw_text": "Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro, Gaurav Kumar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta, and Pengcheng Yin. 2017. Dynet: The dynamic neural network toolkit. arXiv preprint arXiv:1701.03980.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Minimum error rate training in statistical machine translation",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In ACL.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Sequence level training with recurrent neural networks",
"authors": [
{
"first": "Marc'Aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Zaremba",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In ICLR.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Weighting finite-state transductions with neural context",
"authors": [
{
"first": "Pushpendre",
"middle": [],
"last": "Rastogi",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pushpendre Rastogi, Ryan Cotterell, and Jason Eisner. 2016. Weighting finite-state transductions with neural context. In NAACL-HLT.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Self-critical sequence training for image captioning",
"authors": [
{
"first": "Steven",
"middle": [
"J"
],
"last": "Rennie",
"suffix": ""
},
{
"first": "Etienne",
"middle": [],
"last": "Marcheret",
"suffix": ""
},
{
"first": "Youssef",
"middle": [],
"last": "Mroueh",
"suffix": ""
},
{
"first": "Jarret",
"middle": [],
"last": "Ross",
"suffix": ""
},
{
"first": "Vaibhava",
"middle": [],
"last": "Goel",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven J. Rennie, Etienne Marcheret, Youssef Mroueh, Jarret Ross, and Vaibhava Goel. 2017. Self-critical se- quence training for image captioning. In CVPR 2017.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Get To The Point: Summarization with Pointer-Generator Networks",
"authors": [
{
"first": "Abigail",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get To The Point: Summarization with Pointer- Generator Networks. In ACL.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "How grammatical is character-level neural machine translation? assessing mt quality with contrastive translation pairs",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
}
],
"year": 2017,
"venue": "EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich. 2017. How grammatical is character-level neural machine translation? assessing mt quality with contrastive translation pairs. In EACL.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Minimum risk training for neural machine translation",
"authors": [
{
"first": "Shiqi",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Zhongjun",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2016,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum risk training for neural machine translation. In ACL.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Minimum risk annealing for training log-linear models",
"authors": [
{
"first": "David",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2006,
"venue": "COLING/ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David A Smith and Jason Eisner. 2006. Minimum risk annealing for training log-linear models. In COLING/ACL.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In NIPS.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Modeling and learning multilingual inflectional morphology in a minimally supervised framework",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Wicentowski",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Wicentowski. 2002. Modeling and learning multilingual inflectional morphology in a minimally super- vised framework. Ph.D. thesis, Johns Hopkins University.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Sequence-to-sequence learning as beam-search optimization",
"authors": [
{
"first": "Sam",
"middle": [],
"last": "Wiseman",
"suffix": ""
},
{
"first": "Alexander M",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2016,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sam Wiseman and Alexander M Rush. 2016. Sequence-to-sequence learning as beam-search optimization. In EMNLP.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Online segment to segment neural transduction",
"authors": [
{
"first": "Lei",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Buys",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lei Yu, Jan Buys, and Phil Blunsom. 2016. Online segment to segment neural transduction. In EMNLP.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "ADADELTA: an adaptive learning rate method",
"authors": [
{
"first": "Matthew",
"middle": [
"D"
],
"last": "Zeiler",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1212.5701"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew D Zeiler. 2012. ADADELTA: an adaptive learning rate method. arXiv:1212.5701.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"num": null,
"text": "Morphological inflection generation in German (left). Lemmatization in Irish (right).",
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"text": "Transduction of \"fliegen\" to \"flog\". (Above) Visualization of the system as it chooses a 3 = DELETE. (Below) Full transition sequence. Action a 0 is always fixed to INSERT(BOS).",
"uris": null,
"type_str": "figure"
},
"FIGREF3": {
"num": null,
"text": "DELETE: Read x i and write nothing. \u2022 SUBST(c) for c \u2208 \u03a3 y : Read x i and write c. \u2022 INSERT(c) for c \u2208 \u03a3 y : Write c and read nothing. \u2022 COPY: Read x i and write x i .",
"uris": null,
"type_str": "figure"
},
"FIGREF4": {
"num": null,
"text": "Longest Common Substring (LCS, left) and Chinese Restaurant Process (CRP, right) character alignments for the same x and y. Input sequence x is at the top, output sequence y at the bottom. A CRP aligner recovers this alignment given sufficient training data and number of iterations.",
"uris": null,
"type_str": "figure"
},
"FIGREF5": {
"num": null,
"text": "Figure 3illustrates different character alignment algorithms that we use in our experiments. A simple procedure for the generation of gold actions from alphabet \u03a3 CA a would call the following subroutine d on each pair of character alignment (b 1 , c 1 ), . . . , (b r , c r ) between input x and output y, where",
"uris": null,
"type_str": "figure"
},
"FIGREF6": {
"num": null,
"text": "Learning curves on the CELEX dataset.",
"uris": null,
"type_str": "figure"
},
"TABREF2": {
"text": "Results on the CELEX dataset.",
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF4": {
"text": "Results on the SIGMORPHON 2017 dataset.",
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF5": {
"text": "91.32 95.91 98.63 97.69 94.75 96.99 98.44 90.57 93.93 85.28 94.35 CA 90.81 95.97 98.75 97.97 95.59 97.11 98.64 89.74 93.59 85.77 94.39 ensembles MED 91.46 95.80 98.84 98.50 95.47 98.93 96.80 91.48 99.30 88.99 95.56 SOFT 92.18 96.51 98.88 98.88 96.99 99.37 97.01 95.41 99.30 88.86 96.34 HA 92.21 96.58 98.92 98.12 95.91 97.99 96.25 93.01 98.77 88.32 95.61 HA * -E 91.95 96.28 98.85 97.90 95.78 97.55 98.77 92.14 95.08 87.82 95.21 CA-E 91.87 96.36 98.84 98.35 96.50 97.74 98.90 92.14 94.63 87.66 95.30",
"html": null,
"num": null,
"content": "<table><tr><td>Model</td><td>RU</td><td>DE</td><td>ES</td><td>KA</td><td>FI</td><td>TR</td><td>HU</td><td>NV</td><td>AR</td><td>MT</td><td>Avg.</td></tr></table>",
"type_str": "table"
},
"TABREF6": {
"text": "employ RNNs to parametrize the weights of a globally normalized",
"html": null,
"num": null,
"content": "<table><tr><td>Model</td><td/><td colspan=\"5\">basque english irish tagalog Avg.</td></tr><tr><td>Size</td><td/><td>4.7K</td><td>3.9K</td><td>1.1K</td><td>7.6K</td><td>4.3K</td></tr><tr><td>LAT</td><td/><td>93.6</td><td>96.9</td><td>97.9</td><td>88.6</td><td>94.2</td></tr><tr><td colspan=\"2\">NWFST</td><td>91.5</td><td>94.5</td><td>97.9</td><td>97.4</td><td>95.3</td></tr><tr><td colspan=\"2\">HA * lcs</td><td>97.0</td><td>97.5</td><td>97.9</td><td>98.3</td><td>97.7</td></tr><tr><td>CA</td><td>lcs</td><td>96.3</td><td>96.9</td><td>97.7</td><td>98.3</td><td>97.3</td></tr><tr><td colspan=\"2\">HA * crp</td><td>96.2</td><td>97.7</td><td>97.3</td><td>97.9</td><td>97.3</td></tr><tr><td>CA</td><td>crp</td><td>96.1</td><td>96.7</td><td>96.8</td><td>97.6</td><td>96.8</td></tr></table>",
"type_str": "table"
}
}
}
}