|
{ |
|
"paper_id": "2022", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:34:34.156579Z" |
|
}, |
|
"title": "Neural String Edit Distance", |
|
"authors": [ |
|
{ |
|
"first": "Jind\u0159ich", |
|
"middle": [], |
|
"last": "Libovick\u00fd", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Charles University", |
|
"location": { |
|
"settlement": "Prague", |
|
"country": "Czech Republic" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Fraser", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "LMU Munich", |
|
"location": { |
|
"country": "Germany" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We propose the neural string edit distance model for string-pair matching and string transduction based on learnable string edit distance. We modify the original expectationmaximization learned edit distance algorithm into a differentiable loss function, allowing us to integrate it into a neural network providing a contextual representation of the input. We evaluate on cognate detection, transliteration, and grapheme-to-phoneme conversion, and show that we can trade off between performance and interpretability in a single framework. Using contextual representations, which are difficult to interpret, we match the performance of state-of-the-art string-pair matching models. Using static embeddings and a slightly different loss function, we force interpretability, at the expense of an accuracy drop.", |
|
"pdf_parse": { |
|
"paper_id": "2022", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We propose the neural string edit distance model for string-pair matching and string transduction based on learnable string edit distance. We modify the original expectationmaximization learned edit distance algorithm into a differentiable loss function, allowing us to integrate it into a neural network providing a contextual representation of the input. We evaluate on cognate detection, transliteration, and grapheme-to-phoneme conversion, and show that we can trade off between performance and interpretability in a single framework. Using contextual representations, which are difficult to interpret, we match the performance of state-of-the-art string-pair matching models. Using static embeddings and a slightly different loss function, we force interpretability, at the expense of an accuracy drop.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "State-of-the-art models for string-pair classification and string transduction employ powerful neural architectures that lack interpretability. For example, BERT (Devlin et al., 2019) compares all input symbols with each other via 96 attention heads, whose functions are difficult to interpret. Moreover, attention itself can be hard to interpret (Jain and Wallace, 2019; Wiegreffe and Pinter, 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 162, |
|
"end": 183, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 347, |
|
"end": 371, |
|
"text": "(Jain and Wallace, 2019;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 372, |
|
"end": 399, |
|
"text": "Wiegreffe and Pinter, 2019)", |
|
"ref_id": "BIBREF47" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In many tasks, such as in transliteration, a relation between two strings can be interpreted more simply as edit operations (Levenshtein, 1966) . The edit operations define the alignment between the strings and provide an interpretation of how one string is transcribed into another. Learnable edit distance (Ristad and Yianilos, 1998) allows learning the weights of edit operations from data using the expectation-maximization (EM) algorithm. Unlike post-hoc analysis of black-box models, which depends on human qualitative judgment (Adadi and Berrada, 2018; Hoover et al., 2020; Lipton, 2018) , the restricted set of edit operations allows direct Figure 1 : An example of applying the dynamic programming algorithm used to compute the edit probability score. It gradually fills the table of probabilities that prefixes of the word \"equal\" transcribe into prefixes of phoneme sequence \"IY K W AH L\". The probability (gray circles) depends on the probabilities of the prefixes and probabilities of plausible edit operations: insert (blue arrows), substitute (green arrows) and delete (red arrows).", |
|
"cite_spans": [ |
|
{ |
|
"start": 124, |
|
"end": 143, |
|
"text": "(Levenshtein, 1966)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 308, |
|
"end": 335, |
|
"text": "(Ristad and Yianilos, 1998)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 534, |
|
"end": 559, |
|
"text": "(Adadi and Berrada, 2018;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 560, |
|
"end": 580, |
|
"text": "Hoover et al., 2020;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 581, |
|
"end": 594, |
|
"text": "Lipton, 2018)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 649, |
|
"end": 657, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "interpretation. Unlike hard attention (Mnih et al., 2014; Indurthi et al., 2019) which also provides a discrete alignment between input and output, edit distance explicitly says how the input symbols are processed. Also, unlike models like Levenshtein Transformer (Gu et al., 2019) , which does not explicitly align source and target uses edit operations to model intermediate generation steps only within the target string, learnable edit distance considers both source and target symbols to be a subject of the edit operations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 38, |
|
"end": 57, |
|
"text": "(Mnih et al., 2014;", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 58, |
|
"end": 80, |
|
"text": "Indurthi et al., 2019)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 264, |
|
"end": 281, |
|
"text": "(Gu et al., 2019)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We reformulate the EM training used to train learnable edit distance as a differentiable loss function that can be used in a neural network. We propose two variants of models based on neural string edit distance: a bidirectional model for string-pair matching and a conditional model for string transduction. We evaluate on cognate detection, transliteration, and grapheme-to-phoneme (G2P) conver-sion. The model jointly learns to perform the task and to generate a latent sequence of edit operations explaining the output. Our approach can flexibly trade off performance and intepretability by using input representations with various degrees of contextualization and outperforms methods that offer a similar degree of interpretability (Tam et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 737, |
|
"end": 755, |
|
"text": "(Tam et al., 2019)", |
|
"ref_id": "BIBREF41" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Edit distance (Levenshtein, 1966) formalizes transcription of a string s = (s 1 , . . . , s n ) of n symbols from alphabet S into a string t = (t 1 , . . . , t m ) of m symbols from alphabet T as a sequence of operations: delete, insert and substitute, which have different costs. Ristad and Yianilos (1998) reformulated operations as random events drawn from a distribution of all possible operations: deleting any s \u2208 S, inserting any t \u2208 T , and substituting any pair of symbols from S \u00d7 T . The probability P(s, t) = \u03b1 n,m of t being edited from s can be expressed recursively:", |
|
"cite_spans": [ |
|
{ |
|
"start": 14, |
|
"end": 33, |
|
"text": "(Levenshtein, 1966)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 281, |
|
"end": 307, |
|
"text": "Ristad and Yianilos (1998)", |
|
"ref_id": "BIBREF37" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learnable Edit Distance", |
|
"sec_num": "2" |
|
}, |
|
{

"text": "\u03b1_{n,m} = \u03b1_{n,m\u22121} \u2022 P_ins(t_m) + \u03b1_{n\u22121,m} \u2022 P_del(s_n) + \u03b1_{n\u22121,m\u22121} \u2022 P_subs(s_n, t_m)    (1)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Learnable Edit Distance",

"sec_num": "2"

},
|
{ |
|
"text": "This can be computed using the dynamic programming algorithm of Wagner and Fischer (1974) , which also computes values of \u03b1 i,j for all prefixes s :i and t :j . The operation probabilities only depend on the individual pairs of symbols at positions i, j, so the same dynamic programming algorithm is used for computing the suffix-pair transcription probabilities \u03b2 i,j (the backward probabilities).", |
|
"cite_spans": [ |
|
{ |
|
"start": 64, |
|
"end": 89, |
|
"text": "Wagner and Fischer (1974)", |
|
"ref_id": "BIBREF44" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learnable Edit Distance", |
|
"sec_num": "2" |
|
}, |
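
{

"text": "To make the recursion concrete, the following is a minimal Python sketch of the forward computation in Equation 1 (an illustration only, not code from the released implementation), assuming p_ins, p_del and p_subs are dictionaries holding the learned operation probabilities:\n\ndef forward_probability(s, t, p_ins, p_del, p_subs):\n    # alpha[i][j]: probability that the prefix s[:i] transcribes into t[:j].\n    n, m = len(s), len(t)\n    alpha = [[0.0] * (m + 1) for _ in range(n + 1)]\n    alpha[0][0] = 1.0\n    for i in range(n + 1):\n        for j in range(m + 1):\n            if j > 0:  # insert t[j-1]\n                alpha[i][j] += alpha[i][j - 1] * p_ins[t[j - 1]]\n            if i > 0:  # delete s[i-1]\n                alpha[i][j] += alpha[i - 1][j] * p_del[s[i - 1]]\n            if i > 0 and j > 0:  # substitute s[i-1] by t[j-1]\n                alpha[i][j] += alpha[i - 1][j - 1] * p_subs[(s[i - 1], t[j - 1])]\n    return alpha  # alpha[n][m] is the transcription probability P(s, t)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Learnable Edit Distance",

"sec_num": "2"

},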
|
{ |
|
"text": "With a training corpus of pairs of matching strings, the operation probabilities can be estimated using the EM algorithm. In the expectation step, expected counts of all edit operations are estimated for the current parameters using the training data. Each pair of symbols s i and t j contribute to the expected counts of the operations:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learnable Edit Distance", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "E subs (s i , t j ) += \u03b1 i\u22121,j\u22121 P subs (s i , t j )\u03b2 i,j /\u03b1 n,m", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learnable Edit Distance", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "(2) and analogically for the delete and insert operations. In the maximization step, operation probabilities are estimated by normalizing the expected counts. See Algorithms 1-5 in Ristad and Yianilos (1998) for more details.", |
|
"cite_spans": [ |
|
{ |
|
"start": 181, |
|
"end": 207, |
|
"text": "Ristad and Yianilos (1998)", |
|
"ref_id": "BIBREF37" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learnable Edit Distance", |
|
"sec_num": "2" |
|
}, |
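
{

"text": "The expectation step of Equation 2 can be sketched as follows (again a simplified illustration of Algorithms 1-5 in Ristad and Yianilos (1998), not their exact code); alpha and beta are the forward and backward tables, and e_ins, e_del and e_subs are assumed count dictionaries that the maximization step later normalizes into probabilities:\n\ndef expectation_step(s, t, alpha, beta, p_ins, p_del, p_subs, e_ins, e_del, e_subs):\n    # beta[i][j]: probability that the suffix s[i:] transcribes into t[j:].\n    n, m = len(s), len(t)\n    z = alpha[n][m]  # total probability P(s, t) of the training pair\n    for i in range(n + 1):\n        for j in range(m + 1):\n            if j > 0:\n                e_ins[t[j - 1]] += alpha[i][j - 1] * p_ins[t[j - 1]] * beta[i][j] / z\n            if i > 0:\n                e_del[s[i - 1]] += alpha[i - 1][j] * p_del[s[i - 1]] * beta[i][j] / z\n            if i > 0 and j > 0:\n                pair = (s[i - 1], t[j - 1])\n                e_subs[pair] += alpha[i - 1][j - 1] * p_subs[pair] * beta[i][j] / z",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Learnable Edit Distance",

"sec_num": "2"

},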
|
{ |
|
"text": "In our model, we replace the discrete table of operation probabilities with a probability estimation based on a continuous representation of the input, which brings in the challenge of changing the EM training into a differentiable loss function that can be back-propagated into the representation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural String Edit Distance Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Computation of the transcription probability is shown in Figure 1 . We use the same dynamic programming algorithm (Equation 1 and Algorithm 2 in Appendix A) that gradually fills a table of probabilities row by row. The input symbols are represented by learned, possibly contextual embeddings (yellow and blue boxes in Figure 1 ) which are used to compute a representation of symbol pairs with a small feed-forward network. The symbol pair representation is used to estimate the probabilities of insert, delete and substitute operations (blue, red and green arrows in Figure 1 ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 57, |
|
"end": 65, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 318, |
|
"end": 326, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 567, |
|
"end": 575, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Neural String Edit Distance Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Formally, we embed the source sequence s of length n into a matrix h s \u2208 R n\u00d7d and analogically t into h t \u2208 R m\u00d7d (yellow and blue boxes in Figure 1) . We represent the symbol-pair contexts as a function of the respective symbol representations (small gray rectangles in Figure 1 ) as a function of repspective symbol representation c i,j = f (h s i , h t j ) depending on the task.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 141, |
|
"end": 150, |
|
"text": "Figure 1)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 272, |
|
"end": 280, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Neural String Edit Distance Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The logits (i.e., the probability scores before normalization) for the edit operations are obtained by concatenation of the following vectors (corresponds to red, green and blue arrows in Figure 1 ):", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 188, |
|
"end": 196, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Neural String Edit Distance Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2022 z i,j del = Linear(c i\u22121,j ) \u2208 R d del , \u2022 z i,j ins = Linear(c i,j\u22121 ) \u2208 R d ins , \u2022 z i,j subs = Linear(c i\u22121,j\u22121 ) \u2208 R d subs ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural String Edit Distance Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "where Linear(x) = Wx + b where W and b are trainable parameters of a linear projection and d del , d ins and d subs are the numbers of possible delete, insert and substitute operations given the vocabularies. The distribution P i,j \u2208 R d del +d ins +d subs over operations that lead to prefix pair s :i and t :j in a single derivation step is", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural String Edit Distance Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P i,j = softmax(z i,j del \u2295 z i,j ins \u2295 z i,j subs ).i, j", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Neural String Edit Distance Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The probabilities P i,j del , P i,j ins and P i,j subs are obtained by taking the respective values from the distribution corresponding to the logits. 1 Note that P i,j only depends on (possibly contextual) input embeddings h s i , h s i\u22121 , h t j , and h t j\u22121 , but not on the derivation of prefix t :j from s :i .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural String Edit Distance Model", |
|
"sec_num": "3" |
|
}, |
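
{

"text": "For illustration, a PyTorch-style sketch of how such a scorer can be wired together; the module and argument names are our own and need not match the released implementation:\n\nimport torch\nimport torch.nn as nn\n\nclass EditOperationScorer(nn.Module):\n    def __init__(self, dim, n_del, n_ins, n_subs):\n        super().__init__()\n        # Symbol-pair context: LayerNorm(ReLU(Linear([h^s_i ; h^t_j]))), cf. Equation 4.\n        self.pair_ff = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.LayerNorm(dim))\n        self.del_proj = nn.Linear(dim, n_del)\n        self.ins_proj = nn.Linear(dim, n_ins)\n        self.subs_proj = nn.Linear(dim, n_subs)\n\n    def forward(self, h_src, h_tgt, i, j):\n        # h_src[k], h_tgt[k] are (possibly contextual) symbol embeddings; index 0 is a\n        # start-of-sequence state, so position i corresponds to the prefix s_{:i}.\n        def context(a, b):\n            return self.pair_ff(torch.cat([h_src[a], h_tgt[b]], dim=-1))\n        logits = torch.cat([\n            self.del_proj(context(i - 1, j)),       # logits for deleting s_i\n            self.ins_proj(context(i, j - 1)),       # logits for inserting t_j\n            self.subs_proj(context(i - 1, j - 1)),  # logits for substituting s_i by t_j\n        ], dim=-1)\n        return torch.softmax(logits, dim=-1)  # P_{i,j} over all operations\n\nIn the string-pair matching variant described in Section 3.1, n_del, n_ins and n_subs can shrink to a single class each, plus one extra non-match class.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Neural String Edit Distance Model",

"sec_num": "3"

},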
|
{

"text": "1: L_EM \u2190 0\n2: for i = 1 . . . n do\n3:  for j = 1 . . . m do\n4:    plausible \u2190 0  \u25b7 Indicator vector,\n5:      \u25b7 i.e., operations that can be used given s_i and t_j\n6:    if j > 1 then  \u25b7 Insertion is plausible\n7:      plausible += 1(insert t_j)\n8:      E^ins_{i,j} \u2190 \u03b1_{i,j\u22121} \u2022 P_ins(\u2022|c_{i,j\u22121}) \u2022 \u03b2_{i,j}\n9:    if i > 1 then  \u25b7 Deletion is plausible\n10:     plausible += 1(delete s_i)\n11:     E^del_{i,j} \u2190 \u03b1_{i\u22121,j} \u2022 P_del(\u2022|c_{i\u22121,j}) \u2022 \u03b2_{i,j}\n12:    if i > 1 and j > 1 then  \u25b7 Substitution is plausible\n13:     plausible += 1(substitute s_i \u2192 t_j)\n14:     E^subs_{i,j} \u2190 \u03b1_{i\u22121,j\u22121} \u2022 P_subs(\u2022|c_{i\u22121,j\u22121}) \u2022 \u03b2_{i,j}\n15:    expected \u2190 normalize(plausible \u2299\n16:      (E^ins_{i,j} \u2295 E^del_{i,j} \u2295 E^subs_{i,j}))\n17:      \u25b7 Expected distr. can only contain plausible ops.\n18:    L_EM += KL(P_{i,j} || expected)\n19: return L_EM",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Algorithm 1 Expectation-Maximization Loss",

"sec_num": null

},

{

"text": "The transduction probability \u03b1_{i,j}, i.e., the probability that s_{:i} transcribes to t_{:j} (gray circles in Figure 1), is computed in the same way as in Equation 1.",

"cite_spans": [],

"ref_spans": [

{

"start": 111,

"end": 119,

"text": "Figure 1",

"ref_id": null

}

],

"eq_spans": [],

"section": "Neural String Edit Distance Model",

"sec_num": "3"

},
|
{ |
|
"text": "The same algorithm with the reversed order of iteration can be used to compute probabilities \u03b2 i,j , the probability that suffix s i: transcribes to t j: . The complete transduction probability is the same, i.e., \u03b2 1,1 = \u03b1 n,m . Tables \u03b1 and \u03b2 are used to compute the EM training loss L EM (Algorithm 1) which is then optimized using gradient-based optimization. Symbol \u2022 in the probability stands for all possible operations (the operations that the model can assign a probability score to), \"normalize\"' means scale the values such that they sum up to one. Unlike the statistical model that uses a single discrete multinomial distribution and stores the probabilities in a table, in our neural model the operation probabilities are conditioned on continuous vectors. For each operation type, we compute the expected distribution given the \u03b1 and \u03b2 tables (line 6-14). From this distribution, we only select operations that are plausible given the context (line 15), i.e., we zero out the probability of all operations that do not involve symbols s i and t j . Finally (line 18), we measure the KL divergence of the predicted operation distribution P i,j (Equation 3) from the expected distribution, which is the loss function", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "18:", |
|
"sec_num": null |
|
}, |
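
{

"text": "The loss can be sketched in PyTorch roughly as follows; this is a simplified illustration (no batching, no log-space computation, a fixed [delete, insert, substitute] layout of P_{i,j}, and glossing over whether the expected distribution is treated as a constant target), so the released implementation may differ in details:\n\nimport torch\nimport torch.nn.functional as F\n\ndef em_loss(alpha, beta, op_distribution):\n    # alpha[i, j]: probability that prefix s_{:i} transcribes into t_{:j}.\n    # beta[i, j]:  probability that suffix s_{i:} transcribes into t_{j:}.\n    # op_distribution(i, j): predicted P_{i,j} = [P_del, P_ins, P_subs].\n    n, m = alpha.size(0) - 1, alpha.size(1) - 1\n    loss = torch.zeros(())\n    for i in range(n + 1):\n        for j in range(m + 1):\n            if i == 0 and j == 0:\n                continue\n            p = op_distribution(i, j)\n            zero = torch.zeros(())\n            # Expected mass of the operations leading to cell (i, j);\n            # implausible operations keep zero mass.\n            e_del = alpha[i - 1, j] * p[0] * beta[i, j] if i > 0 else zero\n            e_ins = alpha[i, j - 1] * p[1] * beta[i, j] if j > 0 else zero\n            e_sub = alpha[i - 1, j - 1] * p[2] * beta[i, j] if i > 0 and j > 0 else zero\n            expected = torch.stack([e_del, e_ins, e_sub])\n            expected = expected / expected.sum().clamp(min=1e-9)\n            # KL divergence between the expected and the predicted distribution\n            # (cf. line 18 of Algorithm 1).\n            loss = loss + F.kl_div(p.log(), expected, reduction='sum')\n    return loss",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Neural String Edit Distance Model",

"sec_num": "3"

},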
|
{ |
|
"text": "With a trained model, we can estimate the probability of t being a good transcription of s. Also, by replacing the summation in Equation 1 by the max operation, we can obtain the most probable operation sequence of operation transcribing s to t using the Viterbi (1967) algorithm.", |
|
"cite_spans": [ |
|
{ |
|
"start": 255, |
|
"end": 269, |
|
"text": "Viterbi (1967)", |
|
"ref_id": "BIBREF43" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "18:", |
|
"sec_num": null |
|
}, |
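
{

"text": "A sketch of that Viterbi decoding (an illustration under our own conventions; p_del, p_ins and p_subs are assumed callables returning the probability of the operation leading to cell (i, j)):\n\ndef viterbi_edit_path(n, m, p_del, p_ins, p_subs):\n    best = [[0.0] * (m + 1) for _ in range(n + 1)]\n    back = [[None] * (m + 1) for _ in range(n + 1)]\n    best[0][0] = 1.0\n    for i in range(n + 1):\n        for j in range(m + 1):\n            if i == 0 and j == 0:\n                continue\n            candidates = []\n            if j > 0:\n                candidates.append((best[i][j - 1] * p_ins(i, j), (i, j - 1, 'insert')))\n            if i > 0:\n                candidates.append((best[i - 1][j] * p_del(i, j), (i - 1, j, 'delete')))\n            if i > 0 and j > 0:\n                candidates.append((best[i - 1][j - 1] * p_subs(i, j), (i - 1, j - 1, 'substitute')))\n            best[i][j], back[i][j] = max(candidates, key=lambda c: c[0])\n    # Follow the back-pointers from (n, m) to recover the operation sequence.\n    path, i, j = [], n, m\n    while (i, j) != (0, 0):\n        pi, pj, op = back[i][j]\n        path.append(op)\n        i, j = pi, pj\n    return list(reversed(path))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Neural String Edit Distance Model",

"sec_num": "3"

},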
|
{ |
|
"text": "Note that the interpretability of our model depends on how contextualized the input representations h s and h t are. The degree of contextualization spans from static symbol embeddings with the same strong interpretability as statistical models, to Transformers with richly contextualized representations, which, however, makes our model more similar to standard black-box models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "18:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Here, our goal is to train a binary classifier deciding if strings t and s match. We consider strings matching if t can be obtained by editing s, with the probability P(s, t) = \u03b1 n,m higher than a threshold. The model needs to learn to assign a high probability to derivations of matching the source string to the target string and low probability to derivations matching different target strings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "String-Pair Matching", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The symbol-pair context c i,j is computed as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "String-Pair Matching", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "LN ReLU Linear(h s i \u2295 h t j ) \u2208 R d ,", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "String-Pair Matching", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where LN stands for layer normalization and \u2295 means concatenation. The statistical model assumes a single multinomial table over edit operations. A non-matching string pair gets little probability because all derivations (i.e., sequence of edit operations) of nonmatching string pairs consist of low-probability operations and high probability is assigned to operations that are not plausible. In the neural model, the same information can be kept in model parameters and we can thus simplify the output space of the model (see Appendix B for thought experiments justifying the design choices).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "String-Pair Matching", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We no longer need to explicitly model the probability of implausible operations and can only use a single class for each type of edit operation (insert, delete, substitute) and one additional non-match option that stands for the case when the inputs strings do not match and none of the plausible edit operations is probable (corresponding to the sum of probabilities of the implausible operations in the statistical model).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "String-Pair Matching", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The value of P(s, t) = \u03b1 m,n serves as a classification threshold for the binary classification. As additional training signal, we also explicitly optimize the probability using the binary cross-entropy as an auxiliary loss, pushing the value towards 1 for positive examples and towards 0 for negative examples. We set the classification threshold dynamically to maximize the validation F 1 -score.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "String-Pair Matching", |
|
"sec_num": "3.1" |
|
}, |
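
{

"text": "The dynamic threshold selection can be as simple as the following sketch (ours; the actual experiment code may sweep candidate thresholds differently):\n\ndef best_threshold(scores, labels):\n    # scores: P(s, t) for validation pairs; labels: 1 for matching pairs, 0 otherwise.\n    def f1_at(threshold):\n        tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)\n        fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)\n        fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)\n        if tp == 0:\n            return 0.0\n        precision = tp / (tp + fp)\n        recall = tp / (tp + fn)\n        return 2 * precision * recall / (precision + recall)\n    # Try every observed score as a candidate threshold and keep the best one.\n    return max(set(scores), key=f1_at)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "String-Pair Matching",

"sec_num": "3.1"

},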
|
{ |
|
"text": "In the second use case, we use neural string edit distance as a string transduction model: given a source string, edit operations are applied to generate a target string. Unlike classification, we model the transcription process with vocabulary-specificoperations, but still use only a single class for deletion. For the insertion and substitution operation, we use |T | classes corresponding to the target string alphabet. Unlike classification, we do not add the non-match class. To better contextualize the generation, we add attention to the symbol-pair representation c i,j :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "String Transduction", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "LN ReLU Linear(h s i \u2295 h t j ) \u2295 Att h t j , h s (5) of dimension 2d, where Att(q, v)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "String Transduction", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "is a multihead attention with queries q and keys and values v.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "String Transduction", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "While generating the string left-to-right, the only way a symbol can be generated is either by inserting it or by substituting a source symbol. Therefore, we estimate the probability of inserting symbol t j+1 given a target prefix t :j from the probabilities of inserting a symbol after t j or substituting any s i by t j+1 (i.e., averaging over a row in Figure 1 ):", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 355, |
|
"end": 363, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "String Transduction", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P (t j+1 |t :j , s) = |S| j=1 \u03b1 i,j P ins (t j+1 |c i,j ) + |S| j=2 \u03b1 i,j P subs (s i , t j+1 |c i,j ).", |
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "String Transduction", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Probabilities P ins and P subs are respective parts of the distribution P i,j (Equation 3). Probablity P del is unkown at this point because computing it would be computed based on state c i,j+1 which is impossible without what the (j + 1)-th target symbol is, where logits for P ins and P subs use c i,j and c i\u22121,j . Therefore, we approximate Equation 3 a\u015d", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "String Transduction", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P i,j = softmax z i,j ins \u2295 z i,j subs .", |
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "String Transduction", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "At inference time, we decide the next symbolt j based onP i,j . Knowing the symbol allows computing the P i,j distribution and values \u03b1 \u2022,j that are used in the next step of inference. The inference can be done using the beam search algorithm as is done with sequence-to-sequence (S2S) models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "String Transduction", |
|
"sec_num": "3.2" |
|
}, |
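
{

"text": "One greedy decoding step based on Equations 6-7 could look as follows; this is a sketch under our own tensor-layout assumptions, not the decoder from the released code:\n\nimport torch\n\ndef greedy_next_symbol(alpha_row, ins_probs, subs_probs):\n    # alpha_row:  tensor (n+1,), alpha_{i,j} for the current target prefix t_{:j}.\n    # ins_probs:  tensor (n+1, V), ins_probs[i, v] = P_ins(v | c_{i,j}).\n    # subs_probs: tensor (n+1, V), subs_probs[i, v] = P_subs(s_i, v | c_{i,j});\n    #             row 0 should be zeros because there is no source symbol to substitute.\n    scores = (alpha_row.unsqueeze(1) * (ins_probs + subs_probs)).sum(dim=0)\n    return int(torch.argmax(scores))  # index of the most probable next target symbol\n\nAfter the symbol is chosen, the full distribution P_{i,j} and the next column of alpha values can be computed and the step repeated, or the same scores can be used to expand hypotheses in beam search.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "String Transduction",

"sec_num": "3.2"

},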
|
{ |
|
"text": "We also use the probability distributionP to define an additional training objective which is the negative log-likelihood of the ground truth output with respect to this distribution, analogically to training S2S models,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "String Transduction", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L NLL = \u2212 |t| j=0 log |s| i=0P i,j /|s|.", |
|
"eq_num": "(8)" |
|
} |
|
], |
|
"section": "String Transduction", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "In our preliminary experiments with Viterbi decoding, we noticed that the model tends to avoid the substitute operation and chose an order of insert and delete operations that is not interpretable. To prevent this behavior, we introduce an additional regularization loss. To decrease the values of \u03b1 that are further from the diagonal, we add the term", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Interpretability Loss", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "n i=1 m j=1 |i \u2212 j| \u2022 \u03b1 i,j", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Interpretability Loss", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "to the loss function. Note that this formulation assumes that the source and target sequence have similar lengths. For tasks where the sequence lengths vary significantly, we would need to consider the sequence length in the loss function.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Interpretability Loss", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "In the string transduction model, optimization of this term can lead to a degenerate solution by flattening all distributions and thus lowering all values in table \u03b1. We thus compensate for this loss by adding the \u2212 log \u03b1 n,m term to the loss function which enforces increasing the \u03b1 values.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Interpretability Loss", |
|
"sec_num": "3.3" |
|
}, |
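
{

"text": "Put together, the two terms can be sketched as follows (our formulation; how the terms are weighted against the task losses is omitted):\n\nimport torch\n\ndef interpretability_loss(alpha):\n    # alpha: (n+1) x (m+1) tensor of prefix-pair probabilities alpha_{i,j}.\n    n, m = alpha.size(0) - 1, alpha.size(1) - 1\n    i = torch.arange(n + 1).unsqueeze(1)  # column vector of source positions\n    j = torch.arange(m + 1).unsqueeze(0)  # row vector of target positions\n    distance = (i - j).abs().float()\n    off_diagonal = (distance[1:, 1:] * alpha[1:, 1:]).sum()\n    # The -log alpha_{n,m} term compensates for the incentive to shrink all alpha values.\n    return off_diagonal - torch.log(alpha[n, m])",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Interpretability Loss",

"sec_num": "3.3"

},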
|
{ |
|
"text": "We evaluate the string-pair matching model on cognate detection, and the string transduction model on Arabic-to-English transliteration and English grapheme-to-phoneme conversion.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In all tasks, we study four ways of representing the input symbols with different degrees of contextualization. The interpretable context-free (unigram) encoder uses symbol embeddings summed with learned position embeddings. We use a 1-D convolutional neural network (CNN) for locally contexualized representation where hidden states correspond to consecutive input n-grams. We use bidirectional recurrent networks (RNNs) and Transformers (Vaswani et al., 2017) for fully contextualized input representations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 439, |
|
"end": 461, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF42" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
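
{

"text": "As an example of the most interpretable configuration (a sketch with made-up sizes, not the exact configuration used in the experiments), the context-free unigram encoder is just symbol embeddings plus learned position embeddings:\n\nimport torch\nimport torch.nn as nn\n\nclass UnigramEncoder(nn.Module):\n    def __init__(self, vocab_size, dim, max_len=64):\n        super().__init__()\n        self.symbols = nn.Embedding(vocab_size, dim)\n        self.positions = nn.Embedding(max_len, dim)\n\n    def forward(self, token_ids):\n        # token_ids: LongTensor of shape (length,)\n        pos = torch.arange(token_ids.size(0))\n        return self.symbols(token_ids) + self.positions(pos)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiments",

"sec_num": "4"

},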
|
{ |
|
"text": "Architectural details and hyperparameters are listed in Appendix C. All hyperparameters are set manually based on preliminary experiments. Further hyperparameter tuning can likely lead to better accuracy of both baselines and our model. Table 1 : F 1 and training time for cognate detection. F 1 on validation is in Table 6 in the Appendix.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 237, |
|
"end": 244, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 316, |
|
"end": 323, |
|
"text": "Table 6", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "However, preliminary experiments showed that increasing the model size only has a small effect on model accuracy. We run every experiment 5 times and report the mean performance and the standard deviation to control for training stability. The source code for the experiments is available at https://github.com/jlibovicky/ neural-string-edit-distance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Cognate Detection. Cognate detection is the task of detecting if words in different languages have the same origin. We experiment with Austro-Asiatic languages (Sidwell, 2015) and Indo-European languages (Dunn, 2012) normalized into the international phonetic alphabet as provided by Rama et al. (2018) . 2 For Indo-European languages, we have 9,855 words (after excluding singleton-class words) from 43 languages forming 2,158 cognate classes. For Austro-Asiatic languages, the dataset contains 11,828 words of 59 languages, forming only 98 cognate classes without singletons. We generate classification pairs from these datasets by randomly sampling 10 negative examples for each true cognate pair. We use 20k pairs for validation and testing, leaving 1.5M training examples for Indo-European and 80M for Austro-Asiatic languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 160, |
|
"end": 175, |
|
"text": "(Sidwell, 2015)", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 204, |
|
"end": 216, |
|
"text": "(Dunn, 2012)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 284, |
|
"end": 302, |
|
"text": "Rama et al. (2018)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Many cognate detection methods are unsupervised and are evaluated by comparison of a clustering from the method with true cognate classes. We train a supervised classifier, so we use F 1 -score on our splits of the dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Because the input and the output are from the same alphabet, we share the parameters of the encoders of the source and target sequences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "As a baseline we use the original statistical learn-able edit distance (Ristad and Yianilos, 1998) . The well-performing black-box model used as another baseline for comparison with our model is a Transformer processing a concatenation of the two input strings. Similar to BERT (Devlin et al., 2019) , we use the representation of the first technical symbol as an input to a linear classifier. We also compare our results with the STANCE model (Tam et al., 2019 ), a neural model utilizing optimaltransport-based alignment over input text representation which makes similar claims about interpretability as we do. Similar to our model, we experiment with various degrees of representation contextualization.", |
|
"cite_spans": [ |
|
{ |
|
"start": 71, |
|
"end": 98, |
|
"text": "(Ristad and Yianilos, 1998)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 278, |
|
"end": 299, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 444, |
|
"end": 461, |
|
"text": "(Tam et al., 2019", |
|
"ref_id": "BIBREF41" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Transliteration and G2P Conversion. For string transduction, we test our model on two tasks: Arabic-to-English transliteration (Rosca and Breuel, 2016) 3 and English G2P conversion using the CMUDict dataset (Weide, 2017) 4 . The Arabic-to-English transliteration dataset consists of 12,877 pairs for training, 1,431 for validation, and 1,590 for testing. The source-side alphabet uses 47 different symbols; the target side uses 39. The CMUDict dataset contains 108,952 training, 5,447 validation, and 12,855 test examples, 10,999 unique. The dataset uses 27 different graphemes and 39 phonemes.", |
|
"cite_spans": [ |
|
{ |
|
"start": 127, |
|
"end": 151, |
|
"text": "(Rosca and Breuel, 2016)", |
|
"ref_id": "BIBREF38" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We evaluate the output strings using Character Error Rate (CER): the standard edit distance between the generated hypotheses and the ground truth string divided by the ground-truth string length; and Word Error Rate (WER): the proportion of words that were transcribed incorrectly. The CMUDict dataset contains multiple transcriptions for some words, as is usually done we select the transcription with the lowest CER as a reference.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
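
{

"text": "For clarity, the metrics can be computed as in the following sketch (ours, not the exact evaluation script); with multiple references, the lowest-CER one is selected:\n\ndef edit_distance(a, b):\n    # Classic Wagner-Fischer dynamic program with unit costs.\n    prev = list(range(len(b) + 1))\n    for i, ca in enumerate(a, start=1):\n        cur = [i]\n        for j, cb in enumerate(b, start=1):\n            cur.append(min(prev[j] + 1,                # delete\n                           cur[j - 1] + 1,             # insert\n                           prev[j - 1] + (ca != cb)))  # substitute\n        prev = cur\n    return prev[-1]\n\ndef cer(hypothesis, references):\n    # Character Error Rate against the closest reference transcription.\n    return min(edit_distance(hypothesis, r) / len(r) for r in references)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiments",

"sec_num": "4"

},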
|
{ |
|
"text": "Unlike the string-matching task, the future target symbols are unknown. Therefore, when using the contextual representations, we encode the target string using a single-direction RNN and using a masked Transformer, respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "To evaluate our model under low-resource conditions, we conduct two sets of additional experiments with the transliteration of Arabic. We compare our unigram and RNN-based models with the RNN-based S2S model trained on smaller subsets of training data (6k, 3k, 1.5k, 750, 360, 180, and 60 training examples) and different embedding and hidden state size (8, 16, . . . , 512).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "For the G2P task, where the source and target symbols can be approximately aligned, we further quantitatively assess the model's interpretability by measuring how well it captures alignment between the source and target string. We consider the substitutions in the Viterbi decoding to be aligned symbols. We compare this alignment with statistical word alignment and report the F 1 score. We obtain the source-target strings alignment using Efmaral (\u00d6stling and Tiedemann, 2016), a state-ofthe-art word aligner, by running the aligner on the entire CMUDict dataset. We use grow-diagonal for alignment symmetrization.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The baseline models are RNN-based (Bahdanau et al., 2015) and Transformer-based (Vaswani et al., 2017) S2S models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 34, |
|
"end": 57, |
|
"text": "(Bahdanau et al., 2015)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 80, |
|
"end": 102, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF42" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Cognate Detection. The results of cognate detection are presented in Table 1 (learning curves are in Figure 5 in Appendix). In cognate detection, our model significantly outperforms both the statistical baseline and the STANCE model. The F 1 -score achieved by the unigram model is worse than the Transformer classifier by a large margin. Local representation contextualization with CNN reaches similar performance as the black-box Transformer classifier while retaining a similar strong interpretability to the static embeddings. Models with RNN encoders outperform the baseline classifier, whereas the Transformer encoder yields slightly worse results. Detecting cognates seems to be more difficult in Austro-Asiatic languages than in Indo-European languages. The training usually converges before finishing a single epoch of the training data. An example of how the \u03b1 captures the prefix-pair probabilities is shown in Figure 2 . The interpretability loss only has a negligible (although mostly slightly negative) influence on the accuracy, within the variance of training runs. The ablation study on loss functions (Table 2) shows that the binary cross-entropy plays a more important role. The EM loss alone works remarkably well given that it was trained on positive examples only.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 69, |
|
"end": 76, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 101, |
|
"end": 109, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 922, |
|
"end": 930, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 1119, |
|
"end": 1128, |
|
"text": "(Table 2)", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Transliteration and G2P Conversion. The results for the two transduction tasks are presented in Table 3 (learning curves are in Figure 5 in Appendix). Our transliteration baseline slightly outperforms the baseline presented with the dataset (Rosca and Breuel, 2016, 22 .4% CER, 77.1% WER). Our baselines for the G2P conversion perform slightly worse than the best models by Table 3 : Model error rates for Arabic-to-English transliteration and English G2P generation and respective training times. For the second data set, we also report the alignment F 1 scores (Align.). Our best models are in bold. The error rates on the validation data are in Table 7 Yolchuyeva et al. 2019, which had 5.4% CER and 22.1% WER with a twice as large model, and 6.5% CER and 23.9% WER with a similarly sized one. The transliteration of Arabic appears to be a simpler problem than G2P conversion. The performance matches S2S, has fast training times, and there is a smaller gap between the error rates of the context-free and contextualized models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 241, |
|
"end": 268, |
|
"text": "(Rosca and Breuel, 2016, 22", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 96, |
|
"end": 103, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 128, |
|
"end": 136, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 374, |
|
"end": 381, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 648, |
|
"end": 655, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The training time of our transduction models is 2-3\u00d7 higher than with the baseline S2S models because the baseline models use builtin PyTorch functions, whereas our model is implemented using loops using TorchScript 5 (15% faster than plain Python). The performance under low data conditions and with small model capacity is in Figure 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 328, |
|
"end": 336, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Models that use static symbol embeddings as the input perform worse than the black-box S2S models in both tasks. Local contextualization with CNN improves the performance over static symbol embeddings. Using the fully contextualized input representation narrows the performance gap between S2S models and neural string edit distance models at the expense of decreased interpretability because all input states can, in theory, contain information about the entire input sequence. The ability to preserve source-target alignment is highest when the input is represented by embeddings only. RNN models not only have the best accuracy, but also 5 https://pytorch.org/docs/stable/jit.html capture quite well the source-target alignment. We hypothesize that RNNs work well because of their inductive bias towards sequence processing, which might be hard to learn from position embeddings given the task dataset sizes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Including the interpretability loss usually slightly improves the accuracy and improves the alignment between the source and target strings. It manifests both qualitatively (Table 5 ) and quantitatively in the increased alignment accuracy.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 173, |
|
"end": 181, |
|
"text": "(Table 5", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Compared to S2S models, beam search decoding leads to much higher accuracy gains, with beam search 5 reaching around 2\u00d7 error reduction compared to greedy decoding. For all input representations except the static embeddings, length normalization does not improve decoding. Unlike machine translation models, accuracy doesn't degrade with increasing beam size. See Figure 4 in Appendix.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 364, |
|
"end": 372, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The ablation study on loss functions (Table 4) shows that all loss functions contribute to the final accuracy. The EM loss is most important, direct optimization of the likelihood is second.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 37, |
|
"end": 46, |
|
"text": "(Table 4)", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Weighted finite-state transducers. Rastogi et al. (2016) use a weighted-finite state transducer (WFST) with neural scoring function to model sequence transduction. As in our model, they back-propagate the error via a dynamic program. Our model is stronger because, in the WFST, the output symbol generation only depends on the contextualized source symbol embedding, disregarding the string generated so far. Lin et al. (2019) extend the model by including contextualized target string representation and edit operation history. This makes their model more powerful than ours, but the loss function cannot be exactly computed by dynamic programming and graphemes phonemes edit operations Neural sequence matching. Several neural sequence-matching methods utilize a scoring function similar to symbol-pair representation. Cuturi and Blondel (2017) propose integrating alignment between two sequences into a loss function that eventually leads to finding alignment between the sequences. The STANCE model (Tam et al., 2019 ), which we compare results with, first computes the alignment as an optimal transfer problem between the source and target representation. In the second step, they assign a score using a convolutional neural network applied to a soft-alignment matrix.", |
|
"cite_spans": [ |
|
{ |
|
"start": 35, |
|
"end": 56, |
|
"text": "Rastogi et al. (2016)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 409, |
|
"end": 426, |
|
"text": "Lin et al. (2019)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 821, |
|
"end": 846, |
|
"text": "Cuturi and Blondel (2017)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 1003, |
|
"end": 1020, |
|
"text": "(Tam et al., 2019", |
|
"ref_id": "BIBREF41" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "GOELLER G OW L ER G )G -O -E -L L )OW +L -E R )ER G )G O )OW -E -L L )L -E R )ER VOGAN V OW G AH N V )V -O G )OW +G +AH -A N )N V )V +OW -O G )G -A N )N FLAGSHIPS F L AE G SH IH P S F )F L )L -A -G S )AE +G -H +SH -I P )IH +P +S F )F L )L +AE -A G )G -S H )SH +IH -I P )P S )S ENDLER EH N D L ER +EH -E N )N D )D L )L -E R )ER E )EH N )N D )D L )L -E R )ER SWOOPED S W UW P T S )S W )W +UW -O -O P )P -E D )T S )S W )W -O O )UW P )P -E D )T", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We showed that our model reaches better accuracy with the same input representation. Similar to our model, these approaches provide interpretability via alignment. They allow many-to-many alignments, but cannot enforce a monotonic sequence of operations unlike WFSTs and our model. McCallum et al. (2005) used trainable edit distance in combination with CRFs for string matching. Recently, Riley and Gildea (2020) integrated the statistical learnable edit distance within a pipeline for unsupervised bilingual lexicon induction. As far as we know, our work is the first using neural networks directly in dynamic programming for edit distance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 282, |
|
"end": 304, |
|
"text": "McCallum et al. (2005)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 390, |
|
"end": 413, |
|
"text": "Riley and Gildea (2020)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Edit distance in deep learning. LaserTagger (Malmi et al., 2019) and EditNTS (Dong et al., 2019) formulate sequence generation as tagging of the source text with edit operations. They use standard edit distance to pre-process the data (so, unlike our model cannot work with different alphabets) and then learn to predict the edit operations. Levenshtein Transformer (Gu et al., 2019 ) is a partially non-autoregressive S2S model generating the output iteratively via insert and delete operations. It delivers a good trade-off of decoding speed and translation quality, but is not interpretable.", |
|
"cite_spans": [ |
|
{ |
|
"start": 44, |
|
"end": 64, |
|
"text": "(Malmi et al., 2019)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 77, |
|
"end": 96, |
|
"text": "(Dong et al., 2019)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 366, |
|
"end": 382, |
|
"text": "(Gu et al., 2019", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Dynamic programming in deep learning.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Combining dynamic programming and neuralnetwork-based estimators is a common technique, especially in sequence modeling. Connectionist Temporal Classification (CTC; Graves et al., 2006) uses the forward-backward algorithm to estimate the loss of assigning labels to a sequence with implicit alignment. The loss function of a linear-chain conditional random field propagated into a neural network (Do and Artieres, 2010) became the state-of-the-art for tasks like named entity recognition (Lample et al., 2016) . Loss functions based on dynamic programming are also used in non-autoregressive neural machine translation (Libovick\u00fd and Helcl, 2018; Saharia et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 165, |
|
"end": 185, |
|
"text": "Graves et al., 2006)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 396, |
|
"end": 419, |
|
"text": "(Do and Artieres, 2010)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 488, |
|
"end": 509, |
|
"text": "(Lample et al., 2016)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 619, |
|
"end": 646, |
|
"text": "(Libovick\u00fd and Helcl, 2018;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 647, |
|
"end": 668, |
|
"text": "Saharia et al., 2020)", |
|
"ref_id": "BIBREF39" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Cognate detection. Due to the limited amount of annotated data, cognate detection is usually approached using unsupervised methods. Strings are compared using measures such as pointwise mutual information (J\u00e4ger, 2014) or LexStat similarity (List, 2012) , which are used as an input to a distance-based clustering algorithm (List et al., 2016) . J\u00e4ger et al. (2017) used a supervised SVM classifier trained on one language family using features that were previously used for clustering and applied the classifier to other language families.", |
|
"cite_spans": [ |
|
{ |
|
"start": 205, |
|
"end": 218, |
|
"text": "(J\u00e4ger, 2014)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 241, |
|
"end": 253, |
|
"text": "(List, 2012)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 324, |
|
"end": 343, |
|
"text": "(List et al., 2016)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 346, |
|
"end": 365, |
|
"text": "J\u00e4ger et al. (2017)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Transliteration. Standard S2S models (Bahdanau et al., 2015; Gehring et al., 2017; Vaswani et al., 2017) or CTC-based sequence-labeling (Graves et al., 2006) are the state of the art for both transliteration (Rosca and Breuel, 2016; Kundu et al., 2018) and G2P conversion (Yao and Zweig, 2015; Peters et al., 2017; Yolchuyeva et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 37, |
|
"end": 60, |
|
"text": "(Bahdanau et al., 2015;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 61, |
|
"end": 82, |
|
"text": "Gehring et al., 2017;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 83, |
|
"end": 104, |
|
"text": "Vaswani et al., 2017)", |
|
"ref_id": "BIBREF42" |
|
}, |
|
{ |
|
"start": 136, |
|
"end": 157, |
|
"text": "(Graves et al., 2006)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 208, |
|
"end": 232, |
|
"text": "(Rosca and Breuel, 2016;", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 233, |
|
"end": 252, |
|
"text": "Kundu et al., 2018)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 272, |
|
"end": 293, |
|
"text": "(Yao and Zweig, 2015;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 294, |
|
"end": 314, |
|
"text": "Peters et al., 2017;", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 315, |
|
"end": 339, |
|
"text": "Yolchuyeva et al., 2019)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We introduced neural string edit distance, a neural model of string transduction based on string edit distance. Our novel formulation of neural string edit distance critically depends on a differentiable loss. When used with context-free representations, it offers a direct interpretability via insert, delete and substitute operations, unlike widely used S2S models. Using input representations with differing amounts of contextualization, we can trade off interpretability for better performance. Our experimental results on cognate detection, Arabic-to-English transliteration and grapheme-to-phoneme conversion show that with contextualized input representations, the proposed model is able to match the performance of standard black-box models. We hope that our approach will help motivate more work on this type of interpretable model and that our framework will be useful in such future work. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Algorithm 2 is a procedural implementation of Equation 1. In the Viterbi decoding used for obtaining the alignment, the summation on line 6, 8 and 10 is replaced by taking the maximum.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Inference algorithm", |
|
"sec_num": null |
|
}, |
|
{ |

"text": "Algorithm 2 Forward evaluation\n1: \u03b1 \u2208 R^{n\u00d7m} \u2190 0\n2: \u03b1_{0,0} \u2190 1\n3: for i = 0 . . . n do\n4:   for j = 0 . . . m do\n5:     if j > 0 then\n6:       \u03b1_{i,j} += P_ins(t_j | c_{i,j\u22121}) \u2022 \u03b1_{i,j\u22121}\n7:     if i > 0 then\n8:       \u03b1_{i,j} += P_del(s_i | c_{i\u22121,j}) \u2022 \u03b1_{i\u22121,j}\n9:     if i > 0 and j > 0 then\n10:      \u03b1_{i,j} += P_subs(s_i \u2192 t_j | c_{i\u22121,j\u22121}) \u2022 \u03b1_{i\u22121,j\u22121}", |

"cite_spans": [], |

"ref_spans": [], |

"eq_spans": [], |

"section": "A Inference algorithm", |

"sec_num": null |

}, |
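
{ |

"text": "A minimal NumPy sketch of the forward evaluation above (illustrative only, not the authors' implementation): p_ins, p_del and p_subs are hypothetical callables standing in for the model's context-conditioned operation probabilities P_ins, P_del and P_subs. The loops include i = 0 and j = 0 so that the first row and column of \u03b1 are filled, and replacing the three additions with maximum operations yields the Viterbi variant used for obtaining the alignment.\n\nimport numpy as np\n\ndef forward_evaluation(s, t, p_ins, p_del, p_subs):\n    # s, t: source and target symbol sequences of lengths n and m\n    n, m = len(s), len(t)\n    alpha = np.zeros((n + 1, m + 1))\n    alpha[0, 0] = 1.0\n    for i in range(n + 1):\n        for j in range(m + 1):\n            if j > 0:  # insert t_j\n                alpha[i, j] += p_ins(t[j - 1], i, j - 1) * alpha[i, j - 1]\n            if i > 0:  # delete s_i\n                alpha[i, j] += p_del(s[i - 1], i - 1, j) * alpha[i - 1, j]\n            if i > 0 and j > 0:  # substitute s_i with t_j\n                alpha[i, j] += p_subs(s[i - 1], t[j - 1], i - 1, j - 1) * alpha[i - 1, j - 1]\n    return alpha  # alpha[n, m] is the probability of the whole string pair", |

"cite_spans": [], |

"ref_spans": [], |

"eq_spans": [], |

"section": "A Inference algorithm", |

"sec_num": null |

}, |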
|
{ |
|
"text": "Let us assume a toy example transliteration. The source alphabet is {A, B, C}, the target alphabet is {a, b, c}, the transcription rules are:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Inference algorithm", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "1. If B is at the beginning of the string, delete it.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Inference algorithm", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As rewrite to a single a.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multiple", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "3. Rewrite B to b and C to c.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multiple", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "The statistical learnable edit distance would not be capable of properly learning rules 1 and 2 because it would not know that B was at the beginning of the string and if an occurrence of A is the first A. This problem gets resolved by introducing a contextualized representation of the input.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multiple", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "The original statistical EM algorithm only needs positive examples to learn the operation distribution. For instance, rewriting B to c will end up as improbable due to the inherent limitation of a single sharing static probability table. Using a single table regardless of the context means that if some operations become more probable, the others must become less probable. A neural network does not have such limitations. A neural model can in theory find solutions that maximize the probability of the training data, however, do not correspond to the original set of rules by finding a highly probable sequence of operations for any string pair. For instance, it can learn to count the positions in the string:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multiple", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "1 . Whatever symbols at the same position i (s i and t i ) are, substitute s i with t j with the probability of 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multiple", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "2 . If i < j, assign probability of 1 to deleting s i .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multiple", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "3 . If i > j, assign probability of 1 to inserting t j .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multiple", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "For this reason, we introduce the binary crossentropy as an additional loss function. This should steer the model away from degenerate solutions assigning a high probability score to any input string pair.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multiple", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "But our ablation study in Table 2 showed that even without the binary cross-entropy loss, the model converges to a good non-degenerate solution.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 26, |
|
"end": 33, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Multiple", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "This thought experiment shows keeping the full table of possible model outcomes is no longer crucial for the modeling strength. Let us assume that the output distribution of the neural model contains all possible edit operations as they are in the static probability tables of the statistical model. The model can learn to rely on the position information only and select the correct symbols in the output probability distribution ignoring the actual content of the symbols, using their embeddings as a key to identify the correct item from the output distribution. The model can thus learn to ignore the function the full probability table had in the statistical model. Also, given the inputs, it is always clear what the plausible operations are, it is easy for the model not to assign any probability to the implausible operations (unlike the statistical model).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multiple", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "These thoughts lead us to the conclusion that there is no need to keep the full output distribution and we only can use four target classes: one for insertion, one for deletion, one for substitution, and one special class that would get the part of probability mass that would be assigned to implausible operations in the statistical model. We call the last one the non-match option.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multiple", |
|
"sec_num": "2." |
|
}, |
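
{ |

"text": "A deliberately simplified sketch of such a four-class output head (an illustrative assumption, not the paper's exact parameterization): the contextual vectors of s_i and t_j are concatenated and projected to scores for delete, insert, substitute, and non-match.\n\nimport torch\nimport torch.nn as nn\n\nDELETE, INSERT, SUBSTITUTE, NON_MATCH = range(4)\n\nclass OperationClassifier(nn.Module):\n    def __init__(self, dim=256):\n        super().__init__()\n        self.proj = nn.Linear(2 * dim, 4)\n\n    def forward(self, source_state, target_state):\n        # source_state, target_state: contextual representations of s_i and t_j\n        pair = torch.cat([source_state, target_state], dim=-1)\n        return self.proj(pair).softmax(dim=-1)  # distribution over the 4 classes", |

"cite_spans": [], |

"ref_spans": [], |

"eq_spans": [], |

"section": "B Motivation for design choices in the string-matching model", |

"sec_num": null |

}, |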
|
{ |
|
"text": "Following Gehring et al. (2017) , the CNN uses gated linear units as non-linearity , layer normalization (Ba et al., 2016) and residual connections (He et al., 2016) . The symbol embeddings are summed with learnable position embeddings before the convolution.", |
|
"cite_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 31, |
|
"text": "Gehring et al. (2017)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 105, |
|
"end": 122, |
|
"text": "(Ba et al., 2016)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 148, |
|
"end": 165, |
|
"text": "(He et al., 2016)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C Model Hyperparameters", |
|
"sec_num": null |
|
}, |
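
{ |

"text": "A rough PyTorch sketch of one such convolutional encoder layer (illustrative only, not the authors' code): learnable position embeddings are added to the symbol embeddings, a 1D convolution with doubled channels feeds a gated linear unit, and the result is combined with the input through a residual connection and layer normalization.\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nclass ConvGLUEncoder(nn.Module):\n    def __init__(self, vocab_size, dim=256, kernel_size=3, max_len=512):\n        super().__init__()\n        self.embed = nn.Embedding(vocab_size, dim)\n        self.pos_embed = nn.Embedding(max_len, dim)\n        # twice the output channels so that the GLU can split them into value and gate\n        self.conv = nn.Conv1d(dim, 2 * dim, kernel_size, padding=kernel_size // 2)\n        self.norm = nn.LayerNorm(dim)\n\n    def forward(self, symbol_ids):\n        positions = torch.arange(symbol_ids.size(1), device=symbol_ids.device)\n        x = self.embed(symbol_ids) + self.pos_embed(positions)  # (batch, length, dim)\n        h = F.glu(self.conv(x.transpose(1, 2)), dim=1).transpose(1, 2)\n        return self.norm(x + h)  # residual connection and layer normalization", |

"cite_spans": [], |

"ref_spans": [], |

"eq_spans": [], |

"section": "C Model Hyperparameters", |

"sec_num": null |

}, |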
|
{ |
|
"text": "The RNN uses gated recurrent units (Cho et al., 2014) and follows the scheme of Chen et al. (2018), which includes residual connections (He et al., 2016 ), layer normalization (Ba et al., 2016 , and multi-headed scaled dot-product attention (Vaswani et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 35, |
|
"end": 53, |
|
"text": "(Cho et al., 2014)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 136, |
|
"end": 152, |
|
"text": "(He et al., 2016", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 153, |
|
"end": 192, |
|
"text": "), layer normalization (Ba et al., 2016", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 241, |
|
"end": 263, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF42" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C Model Hyperparameters", |
|
"sec_num": null |
|
}, |
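
{ |

"text": "A similarly rough sketch of one recurrent decoder layer (again an illustrative reimplementation; the exact composition in Chen et al. (2018) differs in details): a GRU with a residual connection and layer normalization, followed by multi-headed scaled dot-product attention over the encoder states.\n\nimport torch.nn as nn\n\nclass ResidualGRULayer(nn.Module):\n    def __init__(self, dim=256, heads=4):\n        super().__init__()\n        self.gru = nn.GRU(dim, dim, batch_first=True)\n        self.norm = nn.LayerNorm(dim)\n        self.attention = nn.MultiheadAttention(dim, heads, batch_first=True)\n        self.attn_norm = nn.LayerNorm(dim)\n\n    def forward(self, x, encoder_states):\n        h, _ = self.gru(x)\n        x = self.norm(x + h)  # residual connection and layer normalization\n        a, _ = self.attention(x, encoder_states, encoder_states)\n        return self.attn_norm(x + a)", |

"cite_spans": [], |

"ref_spans": [], |

"eq_spans": [], |

"section": "C Model Hyperparameters", |

"sec_num": null |

}, |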
|
{ |
|
"text": "The Transformers follow the architecture decisions of BERT (Devlin et al., 2019) as implemented in the Transformers library (Wolf et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 59, |
|
"end": 80, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 124, |
|
"end": 143, |
|
"text": "(Wolf et al., 2020)", |
|
"ref_id": "BIBREF48" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C Model Hyperparameters", |
|
"sec_num": null |
|
}, |
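
{ |

"text": "For the Transformer variants, a configuration matching the hyperparameters in the next paragraph can be instantiated directly from the Transformers library; this is only a sketch, and vocab_size, intermediate_size and max_position_embeddings are illustrative placeholders.\n\nfrom transformers import BertConfig, BertModel\n\nconfig = BertConfig(\n    vocab_size=500,               # placeholder: size of the symbol vocabulary\n    hidden_size=256,\n    num_hidden_layers=2,\n    num_attention_heads=4,\n    intermediate_size=1024,       # placeholder feed-forward size\n    max_position_embeddings=512,  # placeholder maximum string length\n)\nmodel = BertModel(config)", |

"cite_spans": [], |

"ref_spans": [], |

"eq_spans": [], |

"section": "C Model Hyperparameters", |

"sec_num": null |

}, |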
|
{ |
|
"text": "All hyperparameters are set manually based on preliminary experiments. For all experiments, we use embedding size of 256. The CNN encoder uses a single layer with kernel size 3 and ReLU non-linearity. For both the RNN and Transformer models, we use 2 layers with 256 hidden units. The Transformer uses 4 attention heads of dimension 64 in the self-attention. The same configuration is used for the encoder-decoder attention for both RNN and Transformer. We use the same hyperparameters also for the baselines.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C Model Hyperparameters", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We include all main loss functions with weight 1.0, i.e., for string-pair matching: the EM loss, non-matching negative log-likelihood and binary cross-entropy; for string transduction: the EM loss and next symbol negative log-likelihood. We test each model with and without the interpretability loss, which is included with weight 0.1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C Model Hyperparameters", |
|
"sec_num": null |
|
}, |
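
{ |

"text": "Schematically, the overall training objective for string-pair matching combines the terms above as follows (a sketch with per-batch loss tensors as placeholders; for string transduction, the non-match and binary cross-entropy terms are replaced by the next-symbol negative log-likelihood):\n\ndef total_loss(em_loss, nonmatch_nll, bce_loss, interpretability_loss=None):\n    # all main losses enter with weight 1.0, the interpretability loss with 0.1\n    loss = em_loss + nonmatch_nll + bce_loss\n    if interpretability_loss is not None:\n        loss = loss + 0.1 * interpretability_loss\n    return loss", |

"cite_spans": [], |

"ref_spans": [], |

"eq_spans": [], |

"section": "C Model Hyperparameters", |

"sec_num": null |

}, |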
|
{ |
|
"text": "We optimize the models using the Adam optimizer (Kingma and Ba, 2015) with an initial learning rate of 10 \u22124 , and batch size of 512. We validate the models every 50 training steps. We decrease the learning rate by a factor of 0.7 if the validation performance does not increase in two consecutive validations. We stop the training after the learning rate decreases 10 times.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C Model Hyperparameters", |
|
"sec_num": null |
|
}, |
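
{ |

"text": "A sketch of this optimization setup with standard PyTorch components (model, train_batches, compute_loss and validation_metric are placeholders):\n\nimport torch\n\noptimizer = torch.optim.Adam(model.parameters(), lr=1e-4)\n# reduce the learning rate by a factor of 0.7 after two validations without improvement\nscheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(\n    optimizer, mode='max', factor=0.7, patience=2)\n\ndecreases = 0\nfor step, batch in enumerate(train_batches, start=1):  # batch size 512\n    optimizer.zero_grad()\n    loss = compute_loss(batch)  # placeholder for the combined loss above\n    loss.backward()\n    optimizer.step()\n    if step % 50 == 0:  # validate every 50 training steps\n        previous_lr = optimizer.param_groups[0]['lr']\n        scheduler.step(validation_metric())\n        if optimizer.param_groups[0]['lr'] < previous_lr:\n            decreases += 1\n        if decreases >= 10:  # stop after the 10th decrease\n            break", |

"cite_spans": [], |

"ref_spans": [], |

"eq_spans": [], |

"section": "C Model Hyperparameters", |

"sec_num": null |

}, |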
|
{ |
|
"text": "The training times were measured on machines with GeForce GTX 1080 Ti GPUs and with Intel Xeon E5-2630v4 CPUs (2.20GHz). We report average wall time of training including data preprocessing, validation and testing. The measured time might be influenced by other processes running on the machines.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "D Notes on Reproducibility", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Validation scores are provided in Tables 6 and 7 Table 7 : Model error-rates for Arabic-to-English transliteration and English G2P generation on validation data.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 34, |
|
"end": 48, |
|
"text": "Tables 6 and 7", |
|
"ref_id": "TABREF9" |
|
}, |
|
{ |
|
"start": 49, |
|
"end": 56, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "D Notes on Reproducibility", |
|
"sec_num": null |
|
}, |
|
|
{ |
|
"text": "Using Python-like notation P i,j del = Pi,j[ : ddel], P i,j ins = Pi,j[ddel : ddel+dins], P i,j subs = Pi,j[ddel+dins : ].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://www.aclweb.org/anthology/attachments/ N18-2063.Datasets.zip", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/google/transliteration 4 https://github.com/microsoft/CNTK/tree/master/ Examples/SequenceToSequence/CMUDict/Data", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The work at LMU Munich was supported by was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (No. 640550) and by the German Research Foundation (DFG; grant FR 2829/4-1). The work at CUNI was supported by the European Commission via its Horizon 2020 research and innovation programme (No. 870930).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Peeking inside the black-box: A survey on explainable artificial intelligence (xai)", |
|
"authors": [ |
|
{ |
|
"first": "Amina", |
|
"middle": [], |
|
"last": "Adadi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohammed", |
|
"middle": [], |
|
"last": "Berrada", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "IEEE Access", |
|
"volume": "6", |
|
"issue": "", |
|
"pages": "52138--52160", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Amina Adadi and Mohammed Berrada. 2018. Peek- ing inside the black-box: A survey on explainable artificial intelligence (xai). IEEE Access, 6:52138- 52160.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Neural machine translation by jointly learning to align and translate", |
|
"authors": [ |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "3rd International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "The best of both worlds: Combining recent advances in neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Mia", |
|
"middle": [], |
|
"last": "Xu Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Orhan", |
|
"middle": [], |
|
"last": "Firat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ankur", |
|
"middle": [], |
|
"last": "Bapna", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Melvin", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wolfgang", |
|
"middle": [], |
|
"last": "Macherey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "George", |
|
"middle": [], |
|
"last": "Foster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Schuster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhifeng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yonghui", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Macduff", |
|
"middle": [], |
|
"last": "Hughes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "76--86", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P18-1008" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mia Xu Chen, Orhan Firat, Ankur Bapna, Melvin Johnson, Wolfgang Macherey, George Foster, Llion Jones, Mike Schuster, Noam Shazeer, Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Zhifeng Chen, Yonghui Wu, and Macduff Hughes. 2018. The best of both worlds: Combining recent advances in neural machine translation. In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 76-86, Melbourne, Australia. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bart", |
|
"middle": [], |
|
"last": "Van Merri\u00ebnboer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Caglar", |
|
"middle": [], |
|
"last": "Gulcehre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fethi", |
|
"middle": [], |
|
"last": "Bougares", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Holger", |
|
"middle": [], |
|
"last": "Schwenk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1724--1734", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/D14-1179" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kyunghyun Cho, Bart van Merri\u00ebnboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), pages 1724- 1734, Doha, Qatar. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Soft-DTW: a differentiable loss function for time-series", |
|
"authors": [ |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Cuturi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mathieu", |
|
"middle": [], |
|
"last": "Blondel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of Machine Learning Research", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "894--903", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marco Cuturi and Mathieu Blondel. 2017. Soft-DTW: a differentiable loss function for time-series. vol- ume 70 of Proceedings of Machine Learning Re- search, pages 894-903, International Convention Centre, Sydney, Australia. PMLR.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Language modeling with gated convolutional networks", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Yann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Angela", |
|
"middle": [], |
|
"last": "Dauphin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Fan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Auli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Grangier", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 34th International Conference on Machine Learning", |
|
"volume": "70", |
|
"issue": "", |
|
"pages": "933--941", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yann N. Dauphin, Angela Fan, Michael Auli, and David Grangier. 2017. Language modeling with gated convolutional networks. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learn- ing Research, pages 933-941. PMLR.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Neural conditional random fields", |
|
"authors": [ |
|
{ |
|
"first": "Trinh-Minh-Tri", |
|
"middle": [], |
|
"last": "Do", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thierry", |
|
"middle": [], |
|
"last": "Artieres", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics", |
|
"volume": "9", |
|
"issue": "", |
|
"pages": "177--184", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Trinh-Minh-Tri Do and Thierry Artieres. 2010. Neu- ral conditional random fields. In Proceedings of the Thirteenth International Conference on Artificial In- telligence and Statistics, volume 9 of Proceedings of Machine Learning Research, pages 177-184, Chia Laguna Resort, Sardinia, Italy. PMLR.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "EditNTS: An neural programmer-interpreter model for sentence simplification through explicit editing", |
|
"authors": [ |
|
{ |
|
"first": "Yue", |
|
"middle": [], |
|
"last": "Dong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zichao", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mehdi", |
|
"middle": [], |
|
"last": "Rezagholizadeh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jackie Chi Kit", |
|
"middle": [], |
|
"last": "Cheung", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3393--3402", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1331" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yue Dong, Zichao Li, Mehdi Rezagholizadeh, and Jackie Chi Kit Cheung. 2019. EditNTS: An neural programmer-interpreter model for sentence simplifi- cation through explicit editing. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 3393-3402, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Indo-European lexical cognacy database (IELex)", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Dunn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Dunn. 2012. Indo-European lexical cognacy database (IELex). Nijmegen, The Netherlands. Max Planck Institute for Psycholinguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Convolutional sequence to sequence learning", |
|
"authors": [ |
|
{ |
|
"first": "Jonas", |
|
"middle": [], |
|
"last": "Gehring", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Auli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Grangier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Denis", |
|
"middle": [], |
|
"last": "Yarats", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yann", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Dauphin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 34th International Conference on Machine Learning", |
|
"volume": "70", |
|
"issue": "", |
|
"pages": "1243--1252", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonas Gehring, Michael Auli, David Grangier, De- nis Yarats, and Yann N. Dauphin. 2017. Convolu- tional sequence to sequence learning. In Proceed- ings of the 34th International Conference on Ma- chine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Ma- chine Learning Research, pages 1243-1252. PMLR.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Graves", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Santiago", |
|
"middle": [], |
|
"last": "Fern\u00e1ndez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Faustino", |
|
"middle": [], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00fcrgen", |
|
"middle": [], |
|
"last": "Schmidhuber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 23rd international conference on Machine learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "369--376", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alex Graves, Santiago Fern\u00e1ndez, Faustino Gomez, and J\u00fcrgen Schmidhuber. 2006. Connectionist temporal classification: labelling unsegmented se- quence data with recurrent neural networks. In Pro- ceedings of the 23rd international conference on Ma- chine learning, pages 369-376. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Levenshtein transformer", |
|
"authors": [ |
|
{ |
|
"first": "Jiatao", |
|
"middle": [], |
|
"last": "Gu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Changhan", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Junbo", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "11179--11189", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiatao Gu, Changhan Wang, and Junbo Zhao. 2019. Levenshtein transformer. In Advances in Neural In- formation Processing Systems, pages 11179-11189.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Deep residual learning for image recognition", |
|
"authors": [ |
|
{ |
|
"first": "Kaiming", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiangyu", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shaoqing", |
|
"middle": [], |
|
"last": "Ren", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jian", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "2016 IEEE Conference on Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "770--778", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/CVPR.2016.90" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In 2016 IEEE Conference on Computer Vi- sion and Pattern Recognition, CVPR 2016, Las Ve- gas, NV, USA, June 27-30, 2016, pages 770-778. IEEE Computer Society.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "2020. exBERT: A Visual Analysis Tool to Explore Learned Representations in Transformer Models", |
|
"authors": [ |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Hoover", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hendrik", |
|
"middle": [], |
|
"last": "Strobelt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Gehrmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "187--196", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-demos.22" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Benjamin Hoover, Hendrik Strobelt, and Sebastian Gehrmann. 2020. exBERT: A Visual Analysis Tool to Explore Learned Representations in Transformer Models. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics: System Demonstrations, pages 187-196, Online. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Look harder: A neural machine translation model with hard attention", |
|
"authors": [ |
|
{ |
|
"first": "Insoo", |
|
"middle": [], |
|
"last": "Sathish Reddy Indurthi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sangha", |
|
"middle": [], |
|
"last": "Chung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3037--3043", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1290" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sathish Reddy Indurthi, Insoo Chung, and Sangha Kim. 2019. Look harder: A neural machine translation model with hard attention. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 3037-3043, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Phylogenetic inference from word lists using weighted alignment with empirically determined weights", |
|
"authors": [ |
|
{ |
|
"first": "Gerhard", |
|
"middle": [], |
|
"last": "J\u00e4ger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Quantifying Language Dynamics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "155--204", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gerhard J\u00e4ger. 2014. Phylogenetic inference from word lists using weighted alignment with empiri- cally determined weights. In Quantifying Language Dynamics, pages 155-204. Brill.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Using support vector machines and state-ofthe-art algorithms for phonetic alignment to identify cognates in multi-lingual wordlists", |
|
"authors": [ |
|
{ |
|
"first": "Gerhard", |
|
"middle": [], |
|
"last": "J\u00e4ger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johann-Mattis", |
|
"middle": [], |
|
"last": "List", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pavel", |
|
"middle": [], |
|
"last": "Sofroniev", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1205--1216", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gerhard J\u00e4ger, Johann-Mattis List, and Pavel Sofroniev. 2017. Using support vector machines and state-of- the-art algorithms for phonetic alignment to identify cognates in multi-lingual wordlists. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Vol- ume 1, Long Papers, pages 1205-1216, Valencia, Spain. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Attention is not Explanation", |
|
"authors": [ |
|
{ |
|
"first": "Sarthak", |
|
"middle": [], |
|
"last": "Jain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Byron", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Wallace", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "3543--3556", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1357" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sarthak Jain and Byron C. Wallace. 2019. Attention is not Explanation. In Proceedings of the 2019 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long and Short Pa- pers), pages 3543-3556, Minneapolis, Minnesota. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Diederik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "3rd International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "A deep learning based approach to transliteration", |
|
"authors": [ |
|
{ |
|
"first": "Soumyadeep", |
|
"middle": [], |
|
"last": "Kundu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sayantan", |
|
"middle": [], |
|
"last": "Paul", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Santanu", |
|
"middle": [], |
|
"last": "Pal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Seventh Named Entities Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "79--83", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-2411" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Soumyadeep Kundu, Sayantan Paul, and Santanu Pal. 2018. A deep learning based approach to translitera- tion. In Proceedings of the Seventh Named Entities Workshop, pages 79-83, Melbourne, Australia. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Neural architectures for named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miguel", |
|
"middle": [], |
|
"last": "Ballesteros", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sandeep", |
|
"middle": [], |
|
"last": "Subramanian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kazuya", |
|
"middle": [], |
|
"last": "Kawakami", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "260--270", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N16-1030" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Sub- ramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 260-270, San Diego, California. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Binary codes capable of correcting deletions, insertions, and reversals", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Vladimir I Levenshtein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1966, |
|
"venue": "Soviet physics doklady", |
|
"volume": "10", |
|
"issue": "", |
|
"pages": "707--710", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vladimir I Levenshtein. 1966. Binary codes capable of correcting deletions, insertions, and reversals. In Soviet physics doklady, volume 10, pages 707-710.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "End-toend non-autoregressive neural machine translation with connectionist temporal classification", |
|
"authors": [ |
|
{ |
|
"first": "Jind\u0159ich", |
|
"middle": [], |
|
"last": "Libovick\u00fd", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jind\u0159ich", |
|
"middle": [], |
|
"last": "Helcl", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3016--3021", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1336" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jind\u0159ich Libovick\u00fd and Jind\u0159ich Helcl. 2018. End-to- end non-autoregressive neural machine translation with connectionist temporal classification. In Pro- ceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing, pages 3016- 3021, Brussels, Belgium. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Neural finite-state transducers: Beyond rational relations", |
|
"authors": [ |
|
{ |
|
"first": "Chu-Cheng", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Gormley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "272--283", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1024" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chu-Cheng Lin, Hao Zhu, Matthew R. Gormley, and Jason Eisner. 2019. Neural finite-state transducers: Beyond rational relations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long and Short Papers), pages 272-283, Minneapolis, Min- nesota. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "The mythos of model interpretability. Queue", |
|
"authors": [ |
|
{ |
|
"first": "Zachary", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Lipton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "16", |
|
"issue": "", |
|
"pages": "31--57", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/3236386.3241340" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zachary C. Lipton. 2018. The mythos of model inter- pretability. Queue, 16(3):31-57.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "LexStat: Automatic detection of cognates in multilingual wordlists", |
|
"authors": [ |
|
{ |
|
"first": "Johann-Mattis", |
|
"middle": [], |
|
"last": "List", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the EACL 2012 Joint Workshop of LINGVIS & UNCLH", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "117--125", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Johann-Mattis List. 2012. LexStat: Automatic de- tection of cognates in multilingual wordlists. In Proceedings of the EACL 2012 Joint Workshop of LINGVIS & UNCLH, pages 117-125, Avignon, France. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Using sequence similarity networks to identify partial cognates in multilingual wordlists", |
|
"authors": [ |
|
{ |
|
"first": "Johann-Mattis", |
|
"middle": [], |
|
"last": "List", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philippe", |
|
"middle": [], |
|
"last": "Lopez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Bapteste", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "599--605", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P16-2097" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Johann-Mattis List, Philippe Lopez, and Eric Bapteste. 2016. Using sequence similarity networks to iden- tify partial cognates in multilingual wordlists. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Volume 2: Short Papers), pages 599-605, Berlin, Germany. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Encode, tag, realize: High-precision text editing", |
|
"authors": [ |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Malmi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Krause", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sascha", |
|
"middle": [], |
|
"last": "Rothe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniil", |
|
"middle": [], |
|
"last": "Mirylenka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aliaksei", |
|
"middle": [], |
|
"last": "Severyn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5054--5065", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1510" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eric Malmi, Sebastian Krause, Sascha Rothe, Daniil Mirylenka, and Aliaksei Severyn. 2019. Encode, tag, realize: High-precision text editing. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 5054-5065, Hong Kong, China. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "A conditional random field for discriminatively-trained finite-state string edit distance", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kedar", |
|
"middle": [], |
|
"last": "Bellare", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernando", |
|
"middle": [ |
|
"C N" |
|
], |
|
"last": "Pereira", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "UAI '05, Proceedings of the 21st Conference in Uncertainty in Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "388--395", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew McCallum, Kedar Bellare, and Fernando C. N. Pereira. 2005. A conditional random field for discriminatively-trained finite-state string edit dis- tance. In UAI '05, Proceedings of the 21st Confer- ence in Uncertainty in Artificial Intelligence, Edin- burgh, Scotland, July 26-29, 2005, pages 388-395. AUAI Press.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Recurrent models of visual attention", |
|
"authors": [ |
|
{ |
|
"first": "Volodymyr", |
|
"middle": [], |
|
"last": "Mnih", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicolas", |
|
"middle": [], |
|
"last": "Heess", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Graves", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Koray", |
|
"middle": [], |
|
"last": "Kavukcuoglu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2204--2212", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Volodymyr Mnih, Nicolas Heess, Alex Graves, and Ko- ray Kavukcuoglu. 2014. Recurrent models of visual attention. In Advances in Neural Information Pro- cessing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8- 13 2014, Montreal, Quebec, Canada, pages 2204- 2212.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Efficient word alignment with Markov Chain Monte Carlo", |
|
"authors": [ |
|
{ |
|
"first": "J\u00f6rg", |
|
"middle": [], |
|
"last": "Robert\u00f6stling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Prague Bulletin of Mathematical Linguistics", |
|
"volume": "106", |
|
"issue": "", |
|
"pages": "125--146", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robert\u00d6stling and J\u00f6rg Tiedemann. 2016. Effi- cient word alignment with Markov Chain Monte Carlo. Prague Bulletin of Mathematical Linguistics, 106:125-146.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Massively multilingual neural grapheme-tophoneme conversion", |
|
"authors": [ |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jon", |
|
"middle": [], |
|
"last": "Dehdari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Josef", |
|
"middle": [], |
|
"last": "Van Genabith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the First Workshop on Building Linguistically Generalizable NLP Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "19--26", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W17-5403" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ben Peters, Jon Dehdari, and Josef van Genabith. 2017. Massively multilingual neural grapheme-to- phoneme conversion. In Proceedings of the First Workshop on Building Linguistically Generalizable NLP Systems, pages 19-26, Copenhagen, Denmark. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Are automatic methods for cognate detection good enough for phylogenetic reconstruction in historical linguistics?", |
|
"authors": [ |
|
{ |
|
"first": "Taraka", |
|
"middle": [], |
|
"last": "Rama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johann-Mattis", |
|
"middle": [], |
|
"last": "List", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johannes", |
|
"middle": [], |
|
"last": "Wahle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gerhard", |
|
"middle": [], |
|
"last": "J\u00e4ger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "393--400", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N18-2063" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Taraka Rama, Johann-Mattis List, Johannes Wahle, and Gerhard J\u00e4ger. 2018. Are automatic methods for cognate detection good enough for phylogenetic re- construction in historical linguistics? In Proceed- ings of the 2018 Conference of the North Ameri- can Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol- ume 2 (Short Papers), pages 393-400, New Orleans, Louisiana. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Weighting finite-state transductions with neural context", |
|
"authors": [ |
|
{ |
|
"first": "Pushpendre", |
|
"middle": [], |
|
"last": "Rastogi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "623--633", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N16-1076" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pushpendre Rastogi, Ryan Cotterell, and Jason Eisner. 2016. Weighting finite-state transductions with neu- ral context. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 623-633, San Diego, California. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Unsupervised bilingual lexicon induction across writing systems", |
|
"authors": [ |
|
{ |
|
"first": "Parker", |
|
"middle": [], |
|
"last": "Riley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Gildea", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "CoRR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Parker Riley and Daniel Gildea. 2020. Unsupervised bilingual lexicon induction across writing systems. CoRR, abs/2002.00037.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Learning string-edit distance", |
|
"authors": [ |
|
{ |
|
"first": "Eric", |
|
"middle": [ |
|
"Sven" |
|
], |
|
"last": "Ristad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Peter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Yianilos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "IEEE Trans. Pattern Anal. Mach. Intell", |
|
"volume": "20", |
|
"issue": "5", |
|
"pages": "522--532", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/34.682181" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eric Sven Ristad and Peter N. Yianilos. 1998. Learn- ing string-edit distance. IEEE Trans. Pattern Anal. Mach. Intell., 20(5):522-532.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Sequenceto-sequence neural network models for transliteration", |
|
"authors": [ |
|
{ |
|
"first": "Mihaela", |
|
"middle": [], |
|
"last": "Rosca", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Breuel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mihaela Rosca and Thomas Breuel. 2016. Sequence- to-sequence neural network models for translitera- tion. CoRR, abs/1610.09565.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Non-autoregressive machine translation with latent alignments", |
|
"authors": [ |
|
{ |
|
"first": "Chitwan", |
|
"middle": [], |
|
"last": "Saharia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Chan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Saurabh", |
|
"middle": [], |
|
"last": "Saxena", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohammad", |
|
"middle": [], |
|
"last": "Norouzi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1098--1108", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.emnlp-main.83" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chitwan Saharia, William Chan, Saurabh Saxena, and Mohammad Norouzi. 2020. Non-autoregressive ma- chine translation with latent alignments. In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1098-1108, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Austroasiatic dataset for phylogenetic analysis: 2015 version", |
|
"authors": [ |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Sidwell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Mon-Khmer Studies (Notes, Reviews, Data-Papers)", |
|
"volume": "44", |
|
"issue": "", |
|
"pages": "lxviii--ccclvii", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paul Sidwell. 2015. Austroasiatic dataset for phyloge- netic analysis: 2015 version. Mon-Khmer Studies (Notes, Reviews, Data-Papers), 44:lxviii-ccclvii.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Optimal transport-based alignment of learned character representations for string similarity", |
|
"authors": [ |
|
{ |
|
"first": "Derek", |
|
"middle": [], |
|
"last": "Tam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicholas", |
|
"middle": [], |
|
"last": "Monath", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ari", |
|
"middle": [], |
|
"last": "Kobren", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aaron", |
|
"middle": [], |
|
"last": "Traylor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rajarshi", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5907--5917", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1592" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Derek Tam, Nicholas Monath, Ari Kobren, Aaron Tray- lor, Rajarshi Das, and Andrew McCallum. 2019. Op- timal transport-based alignment of learned character representations for string similarity. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5907-5917, Flo- rence, Italy. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5998--6008", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 Decem- ber 2017, Long Beach, CA, USA, pages 5998-6008.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "Error bounds for convolutional codes and an asymptotically optimum decoding algorithm", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Viterbi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1967, |
|
"venue": "IEEE Trans. Inf. Theory", |
|
"volume": "13", |
|
"issue": "2", |
|
"pages": "260--269", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/TIT.1967.1054010" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew J. Viterbi. 1967. Error bounds for convolu- tional codes and an asymptotically optimum decod- ing algorithm. IEEE Trans. Inf. Theory, 13(2):260- 269.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "The string-to-string correction problem", |
|
"authors": [ |
|
{ |
|
"first": "Robert", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Wagner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Fischer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1974, |
|
"venue": "J. ACM", |
|
"volume": "21", |
|
"issue": "1", |
|
"pages": "168--173", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/321796.321811" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robert A. Wagner and Michael J. Fischer. 1974. The string-to-string correction problem. J. ACM, 21(1):168-173.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "The Carnegie-Mellon pronouncing dictionary", |
|
"authors": [ |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Weide", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robert Weide. 2017. The Carnegie-Mellon pronounc- ing dictionary [cmudict. 0.7].", |
|
"links": null |
|
}, |
|
"BIBREF47": { |
|
"ref_id": "b47", |
|
"title": "Attention is not not explanation", |
|
"authors": [ |
|
{ |
|
"first": "Sarah", |
|
"middle": [], |
|
"last": "Wiegreffe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuval", |
|
"middle": [], |
|
"last": "Pinter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "11--20", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1002" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 11-20, Hong Kong, China. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF48": { |
|
"ref_id": "b48", |
|
"title": "Transformers: State-of-the-art natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lysandre", |
|
"middle": [], |
|
"last": "Debut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Sanh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Chaumond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clement", |
|
"middle": [], |
|
"last": "Delangue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Moi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierric", |
|
"middle": [], |
|
"last": "Cistac", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Rault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Remi", |
|
"middle": [], |
|
"last": "Louf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Morgan", |
|
"middle": [], |
|
"last": "Funtowicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joe", |
|
"middle": [], |
|
"last": "Davison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Shleifer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [ |
|
"von" |
|
], |
|
"last": "Platen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clara", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yacine", |
|
"middle": [], |
|
"last": "Jernite", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Plu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Canwen", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Teven", |
|
"middle": [ |
|
"Le" |
|
], |
|
"last": "Scao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sylvain", |
|
"middle": [], |
|
"last": "Gugger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mariama", |
|
"middle": [], |
|
"last": "Drame", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quentin", |
|
"middle": [], |
|
"last": "Lhoest", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Rush", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "38--45", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.emnlp-demos.6" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language process- ing. In Proceedings of the 2020 Conference on Em- pirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF1": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Visualization of the \u03b1 table (0 is dark blue, 1 is yellow) for cognate detection using a unigram model. Left: A cognate pair, Right: a non-cognate pair" |
|
}, |
|
"TABREF2": { |
|
"text": "Ablation study for loss function on Cognate classification with a model with RNN contextualizer.", |
|
"html": null, |
|
"content": "<table><tr><td/><td>.8</td><td colspan=\"2\">S2S unigram RNN 1.00 1.25</td><td>S2S unigram RNN</td></tr><tr><td>CER</td><td>.6</td><td/><td>0.75</td></tr><tr><td/><td>.4</td><td/><td>0.50</td></tr><tr><td/><td>.2</td><td/><td>0.25</td></tr><tr><td/><td>200</td><td>10 3</td><td>10 4</td><td>8 16 32 64 128 256 512</td></tr><tr><td/><td/><td>Training data size</td><td/><td>Symbol representation size</td></tr><tr><td colspan=\"5\">Figure 3: Character Error Rate for Arabic translitera-tion into English for various training data sizes (left) and various representation sizes (right).</td></tr></table>", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"text": "\u00b11.8 85.2 \u00b10.9 31.2 \u00b11.4 85.0 \u00b10.5 36m 20.9 \u00b10.3 67.5 \u00b11.0 -gram) 1.1M 24.6 \u00b10.6 80.5 \u00b10.3 24.5 \u00b10.9 80.1 \u00b10.9 41m 12.8 \u00b11.0 48.4\u00b13.1", |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"2\">Method</td><td># Param.</td><td>CER</td><td>Plain</td><td colspan=\"3\">Arabic \u2192 English + Interpret. loss WER CER WER</td><td>Time</td><td>CER</td><td>Plain WER</td><td>Align.</td><td>CMUDict + Interpret. loss CER WER</td><td>Align.</td><td>Time</td></tr><tr><td colspan=\"2\">RNN Seq2seq Transformer</td><td colspan=\"4\">3.3M 22.0 \u00b10.2 75.8 \u00b10.6 3.1M 22.9 \u00b10.2 78.5 \u00b10.4</td><td>--</td><td>--</td><td>12m 11m</td><td colspan=\"2\">5.8 \u00b10.1 23.6 \u00b10.9 6.5 \u00b10.1 26.6 \u00b10.3</td><td>24.5 33.2</td><td>--</td><td>--</td><td>--</td><td>1.8h 1.1h</td></tr><tr><td>ours</td><td colspan=\"11\">unigram CNN (335.4 0.7M 31.7 55.7 Deep CNN 3.0M 24.4 \u00b10.5 80.0 \u00b10.7 23.8 \u00b10.3 79.3 \u00b10.1 52m 10.8 \u00b10.5 41.4 \u00b11.9 23.3 RNN 2.9M 24.1 \u00b10.2 77.0 \u00b12.0 22.0 \u00b10.3 77.4 \u00b10.8 60m 7.8 \u00b10.3 31.9 \u00b11.3 44.7 Transformer 3.2M 24.3 \u00b10.9 79.0 \u00b10.7 23.9 \u00b11.6 78.6 \u00b11.3 1.2h 10.7 \u00b11.0 41.8 \u00b13.1 33.3</td><td>20.6 \u00b10.3 66.3 \u00b10.2 12.8 \u00b10.2 48.4 \u00b10.6 10.8 \u00b10.5 42.1 \u00b11.6 7.3 \u00b10.4 33.3 \u00b11.5 10.2 \u00b11.1 43.6 \u00b13.2</td><td>59.5 38.1 28.8 48.9 37.9</td><td>2.4h 2.5h 2.5h 2.3h 2.3h</td></tr></table>", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"text": "in the Appendix.", |
|
"html": null, |
|
"content": "<table><tr><td>Loss functions</td><td>CER</td><td>WER</td></tr><tr><td colspan=\"3\">Complete loss -expectation maximization 68.2 \u00b17.4 93.5 \u00b11.0 22.5 \u00b10.3 77.4 \u00b10.8 -next symbol NLL 27.2 \u00b11.4 81.1 \u00b12.2 -\u03b1m,n maximization 23.5 \u00b11.3 79.2 \u00b12.5</td></tr></table>", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"text": "", |
|
"html": null, |
|
"content": "<table/>", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF6": { |
|
"text": "", |
|
"html": null, |
|
"content": "<table><tr><td>: Edit operations predicted by RNN-based model for grapheme (blue) to phoneme (green) conversion with and without the interpretability loss (when provided ground-truth target). Green boxes are insertions, blue boxes deletions, yellow boxes substitutions.</td></tr><tr><td>requires sampling possible operation sequences.</td></tr><tr><td>Segment to Segment Neural Transduction. Yu et al. (2016) use two operation algorithm (shift and</td></tr><tr><td>emit) for string transduction. Unlike our model</td></tr><tr><td>directly, it models independently the operation type</td></tr><tr><td>and target symbols and lacks the concept of symbol</td></tr><tr><td>substitution.</td></tr></table>", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF7": { |
|
"text": "Kaisheng Yao and Geoffrey Zweig. 2015. Sequenceto-sequence neural net models for grapheme-tophoneme conversion. In INTERSPEECH 2015, 16th Annual Conference of the International Speech Communication Association, Dresden, Germany, September 6-10, 2015, pages 3330-3334. ISCA.", |
|
"html": null, |
|
"content": "<table><tr><td>Sevinj Yolchuyeva, G\u00e9za N\u00e9meth, and B\u00e1lint Gyires-T\u00f3th. 2019. Transformer based grapheme-to-phoneme conversion. In Interspeech 2019, 20th Annual Conference of the International Speech Communication Association, Graz, Austria, 15-19 September 2019, pages 2095-2099. ISCA.</td></tr><tr><td>Lei Yu, Jan Buys, and Phil Blunsom. 2016. Online seg-ment to segment neural transduction. In Proceed-ings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1307-1316, Austin, Texas. Association for Computational Lin-guistics.</td></tr></table>", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF8": { |
|
"text": "Effect of beam search on test data for grapheme-to-phoneme conversion. Learning curves for Cognate classification for Indo-European languages (left) and for grapheme-tophoneme conversion (right).", |
|
"html": null, |
|
"content": "<table><tr><td/><td>RNN Sequence-to-sequence</td><td/><td>Ours w/ static embeddings</td><td/><td>Ours w/ shallow CNN</td></tr><tr><td/><td/><td/><td>0.235</td><td/><td/></tr><tr><td/><td>6.82\u00d7</td><td/><td>0.230</td><td/><td>0.130</td></tr><tr><td>CER</td><td>6.81\u00d7</td><td>CER</td><td>0.225</td><td>CER</td><td>0.128</td></tr><tr><td colspan=\"6\">1 1 0 0 Figure 5: Method 10 20 30 Beam size 6.80\u00d7 10 \u22122 Ours w/ deep CNN 10 20 30 Beam size 0.112 0.114 0.116 0.118 0.120 CER Lenght normalization: Figure 4: Cognate detection on IELEX 40 0.215 0.220 40 0.084 0.086 0.088 0.090 CER 0.0 0.2 1 1 2000 4000 6000 8000 Training steps 0.0 0.1 0.2 0.3 0.4 0.5 Training loss ours: embeddings Ours w/ RNN 10 20 Beam size 10 20 Beam size 0.4 0.6 10000 ours: CNN ours: RNN ours: Transformer 2000 4000 6000 8000 10000 Training steps 0.2 0.4 0.6 0.8 1.0 Validation F 1 score Transformer STANCE RNN ours: embeddings ours: CNN ours: RNN ours: Transformer 0.0 0.5 1.0 1.5 Training loss 0.2 0.4 0.6 Validation CER Indo-European Base + Int. loss 0 30 30 0.8 Grapheme-to-phoneme conversion 40 1 10 20 Beam size 0.124 0.126 40 Ours w/ Transformer 30 1 10 20 30 Beam size 0.102 0.104 0.106 0.108 0.110 CER 1.0 1.2 1.4 1.6 10000 20000 30000 Training steps ours: embeddings 40000 40 40 ours: CNN ours: RNN ours: Transformer 0 10000 20000 30000 40000 Training steps S2S RNN S2S Transformer ours: embeddings ours: RNN ours: CNN ours: Transformer Austro-Asiatic Base + Int. loss Transformer [CLS] 91.4 \u00b12.8 -78.8 \u00b10.8 -STANCE unigram 46.5 \u00b14.7 -16.5 \u00b10.4 -RNN 80.4 \u00b11.6 -16.5 \u00b10.1 -Transformer 76.8 \u00b11.3 -17.2 \u00b10.2 -unigram 81.2 \u00b11.0 82.0 \u00b10.5 52.6 \u00b10.8 53.9 \u00b10.6 CNN (3-gram) 95.2 \u00b10.6 94.9 \u00b10.7 78.9 \u00b10.8 78.1 \u00b11.7 ours RNN 97.2 \u00b10.2 88.8 \u00b11.1 82.8 \u00b10.6 83.1 \u00b10.7 Transformer 88.8 \u00b11.6 88.7 \u00b11.1 71.5 \u00b11.1 71.5 \u00b11.1</td><td>.</td></tr><tr><td/><td/><td/><td>64</td><td/><td/></tr></table>", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF9": { |
|
"text": "F 1 -score for cognate detection on the validation data. \u00b10.7 84.1 \u00b10.8 28.3 \u00b10.5 84.3 \u00b10.7 21.2 \u00b11.0 66.4 \u00b11.9 21.5 \u00b10.8 68.0 \u00b12.1 CNN (3-gram) 34.4 \u00b11.1 86.5 \u00b10.8 32.2 \u00b11.1 86.5 \u00b10.8 36.0 \u00b15.7 80.9 \u00b13.2 33.8 \u00b13.5 79.0 \u00b12.8 RNN 42.4 \u00b19.0 90.9 \u00b15.4 45.2 \u00b12.6 90.9 \u00b11.8 59.1 \u00b12.5 96.2 \u00b10.7 43.6 \u00b15.6 80.5 \u00b15.6 Transformer 41.2 \u00b19.1 91.7 \u00b14.4 47.7 \u00b13.6 92.5 \u00b12.4 24.6 \u00b14.3 73.8 \u00b16.1 43.5 \u00b13.6 84.9 \u00b12.5", |
|
"html": null, |
|
"content": "<table><tr><td>Method</td><td>Base</td><td colspan=\"3\">Arabic \u2192 English + Int. loss</td><td>Base</td><td colspan=\"2\">CMUDict</td><td>+ Int. loss</td></tr><tr><td/><td>CER</td><td>WER</td><td>CER</td><td>WER</td><td>CER</td><td>WER</td><td colspan=\"2\">CER</td><td>WER</td></tr><tr><td>RNN Seq2seq Transformer</td><td colspan=\"2\">21.7 \u00b10.1 75.0 \u00b10.6 22.8 \u00b10.2 77.7 \u00b10.6</td><td>--</td><td>--</td><td colspan=\"2\">7.4 \u00b10.0 31.5 \u00b10.1 7.8 \u00b10.1 32.7 \u00b10.3</td><td colspan=\"2\">--</td><td>--</td></tr><tr><td>unigram</td><td>28.4</td><td/><td/><td/><td/><td/><td/></tr><tr><td>ours</td><td/><td/><td/><td/><td/><td/><td/></tr></table>", |
|
"num": null, |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |