{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:39:58.378679Z"
},
"title": "Probing RNN Encoder-Decoder Generalization of Subregular Functions using Reduplication",
"authors": [
{
"first": "Max",
"middle": [],
"last": "Nelson",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Massachusetts Amherst",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Hossep",
"middle": [],
"last": "Dolatian",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Jonathan",
"middle": [],
"last": "Rawski",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Brandon",
"middle": [],
"last": "Prickett",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Massachusetts",
"location": {
"settlement": "Amherst"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper examines the generalization abilities of encoder-decoder networks on a class of subregular functions characteristic of natural language reduplication. We find that, for the simulations we run, attention is a necessary and sufficient mechanism for learning generalizable reduplication. We examine attention alignment to connect RNN computation to a class of 2-way transducers.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper examines the generalization abilities of encoder-decoder networks on a class of subregular functions characteristic of natural language reduplication. We find that, for the simulations we run, attention is a necessary and sufficient mechanism for learning generalizable reduplication. We examine attention alignment to connect RNN computation to a class of 2-way transducers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Reduplication is a cross-linguistically common morphological process (Moravcsik, 1978; Rubino, 2005) . It is estimated that total reduplication and partial reduplication occur in 85% and 75% of the world's languages, respectively (Rubino, 2013) . Total reduplication places no bound on the size of the reduplicant while partial does.",
"cite_spans": [
{
"start": 69,
"end": 86,
"text": "(Moravcsik, 1978;",
"ref_id": "BIBREF34"
},
{
"start": 87,
"end": 100,
"text": "Rubino, 2005)",
"ref_id": "BIBREF43"
},
{
"start": 230,
"end": 244,
"text": "(Rubino, 2013)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. (a) wanita ! wanita\u21e0wanita (Indonesian) 'woman ! women' (b) guyon ! gu\u21e0guyon (Sundanese) 'to jest ! to jest repeatedly' Morphological and phonological processes are sufficiently characterized by the regular class of languages and functions, and effectively computed by finite-state transducers (FSTs) (Johnson, 1972; Kaplan and Kay, 1994; Koskenniemi, 1984; Roark and Sproat, 2007) . In finite-state calculus, an FST can process the input string either once in one direction (1-way FST), or multiple times by going back and forth (2-way FST). 1way FSTs compute rational functions, while 2way FSTs are more expressive, computing regular functions (Engelfriet and Hoogeboom, 2001; Filiot and Reynier, 2016) . 1 Most morphological and phonological processes are in fact restricted to subclasses of rational functions and their corresponding 1-way FSTs (Chandlee, 2014 (Chandlee, , 2017 Chandlee and Heinz, 2018) . The exception is total reduplication, which is uncomputable by 1way FSTs due to its unboundedness (Culy, 1985; Sproat, 1992) . It needs the power of 2-way FSTs, and requires subclasses of the regular functions (Dolatian and Heinz, 2018b) .",
"cite_spans": [
{
"start": 30,
"end": 42,
"text": "(Indonesian)",
"ref_id": null
},
{
"start": 304,
"end": 319,
"text": "(Johnson, 1972;",
"ref_id": "BIBREF27"
},
{
"start": 320,
"end": 341,
"text": "Kaplan and Kay, 1994;",
"ref_id": "BIBREF28"
},
{
"start": 342,
"end": 360,
"text": "Koskenniemi, 1984;",
"ref_id": "BIBREF30"
},
{
"start": 361,
"end": 384,
"text": "Roark and Sproat, 2007)",
"ref_id": "BIBREF41"
},
{
"start": 649,
"end": 681,
"text": "(Engelfriet and Hoogeboom, 2001;",
"ref_id": "BIBREF21"
},
{
"start": 682,
"end": 707,
"text": "Filiot and Reynier, 2016)",
"ref_id": "BIBREF22"
},
{
"start": 710,
"end": 711,
"text": "1",
"ref_id": null
},
{
"start": 852,
"end": 867,
"text": "(Chandlee, 2014",
"ref_id": "BIBREF7"
},
{
"start": 868,
"end": 885,
"text": "(Chandlee, , 2017",
"ref_id": "BIBREF8"
},
{
"start": 886,
"end": 911,
"text": "Chandlee and Heinz, 2018)",
"ref_id": "BIBREF12"
},
{
"start": 1012,
"end": 1024,
"text": "(Culy, 1985;",
"ref_id": "BIBREF16"
},
{
"start": 1025,
"end": 1038,
"text": "Sproat, 1992)",
"ref_id": "BIBREF48"
},
{
"start": 1124,
"end": 1151,
"text": "(Dolatian and Heinz, 2018b)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper uses these subregular functions that characterize reduplication to probe the learning and generalization capacities of Recurrent Neural Network (RNN) architectures. While given infinite computational power, RNNs can simulate Turing machines (Siegelmann, 2012), many RNN classes and their gating mechanisms are actually expressively equivalent to weighted finitestate acceptors (Rabusseau et al., 2019; Peng et al., 2018) . Furthermore, growing evidence suggests that RNNs and other sequential networks practically function as subregular automata (Merrill, 2019; Weiss et al., 2018) .",
"cite_spans": [
{
"start": 388,
"end": 412,
"text": "(Rabusseau et al., 2019;",
"ref_id": "BIBREF38"
},
{
"start": 413,
"end": 431,
"text": "Peng et al., 2018)",
"ref_id": "BIBREF36"
},
{
"start": 557,
"end": 572,
"text": "(Merrill, 2019;",
"ref_id": "BIBREF33"
},
{
"start": 573,
"end": 592,
"text": "Weiss et al., 2018)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We extend these subregular characterizations to 1 In the French literature on formal language theory, 1-way FSTs compute rational functions. In contrast, most work in American computer science calls this class the regular functions. We follow French conventions because we also discuss 2-way FSTs which compute regular functions in their system. test encoder-decoder (ED; Sutskever et al., 2014) networks. We use a typology of reduplication patterns computed by subregular 2-way FSTs (Dolatian and Heinz, 2019) to probe the ability of the networks to learn patterns of varying complexity. Our results suggest that when adding attention to these models, not only do they successfully learn and generalize all of the attested reduplication patterns that we test, but the attention acts in an alignment suggestive of the subregular 2-way FSTs. In contrast, lack of attention prohibits learning of the functions, and the generalization is suggestive of 1-way FSTs. This provides a principled glimpse into the interpretability of these networks on well-understood computational grounds, motivated by linguistic insight (Rawski and Heinz, 2019) .",
"cite_spans": [
{
"start": 48,
"end": 49,
"text": "1",
"ref_id": null
},
{
"start": 367,
"end": 371,
"text": "(ED;",
"ref_id": null
},
{
"start": 372,
"end": 395,
"text": "Sutskever et al., 2014)",
"ref_id": "BIBREF49"
},
{
"start": 484,
"end": 510,
"text": "(Dolatian and Heinz, 2019)",
"ref_id": "BIBREF19"
},
{
"start": 1114,
"end": 1138,
"text": "(Rawski and Heinz, 2019)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper proceeds as follows. \u00a72 overviews the computation and learnability of reduplication. Methods, results, and discussion are in \u00a73, \u00a74, \u00a75, respectively. Conclusions are in \u00a76.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As stated, reduplication is characterized by different subclasses of regular functions and computed by their corresponding FSTs, forming the hierarchy shown in Figure 1 . 1-way FSTs compute rational functions. They are widely used in computational linguistics and NLP (Roche and Schabes, 1997; Beesley and Karttunen, 2003; Roark and Sproat, 2007) . 2-way FSTs are more powerful. They exactly compute regular functions, which mathematically correspond to string-tostring transductions using Monadic Second Order logic (Engelfriet and Hoogeboom, 2001) , making them the functional counterpart of the regular languages (B\u00fcchi, 1960) . They have mostly been used outside of NLP (Alur and\u010cern\u00fd, 2011) . When defined over a 1-way FST, all partial reduplicative functions are computable by Subsequential (Seq) functions Chandlee, 2017) , which are computed by deterministic 1-way FSTs. Total reduplication is uncomputable by 1-way FSTs because there is no bound on the size of the reduplicant (Culy, 1985) , so its output language is at least Mildly Context-Sensitive (Seki et al., 1991 (Seki et al., , 1993 .",
"cite_spans": [
{
"start": 268,
"end": 293,
"text": "(Roche and Schabes, 1997;",
"ref_id": "BIBREF42"
},
{
"start": 294,
"end": 322,
"text": "Beesley and Karttunen, 2003;",
"ref_id": "BIBREF3"
},
{
"start": 323,
"end": 346,
"text": "Roark and Sproat, 2007)",
"ref_id": "BIBREF41"
},
{
"start": 517,
"end": 549,
"text": "(Engelfriet and Hoogeboom, 2001)",
"ref_id": "BIBREF21"
},
{
"start": 616,
"end": 629,
"text": "(B\u00fcchi, 1960)",
"ref_id": "BIBREF6"
},
{
"start": 674,
"end": 695,
"text": "(Alur and\u010cern\u00fd, 2011)",
"ref_id": null
},
{
"start": 813,
"end": 828,
"text": "Chandlee, 2017)",
"ref_id": "BIBREF8"
},
{
"start": 986,
"end": 998,
"text": "(Culy, 1985)",
"ref_id": "BIBREF16"
},
{
"start": 1061,
"end": 1079,
"text": "(Seki et al., 1991",
"ref_id": "BIBREF45"
},
{
"start": 1080,
"end": 1100,
"text": "(Seki et al., , 1993",
"ref_id": "BIBREF46"
}
],
"ref_spans": [
{
"start": 160,
"end": 168,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Computing reduplication",
"sec_num": "2.1"
},
{
"text": "Over 2-way FSTs, both partial and total reduplication can be alternatively computed by a concatenation of subclasses of regular functions that are analogous to 1-way FST subclasses. 2 Almost all reduplicative processes, including total reduplication, are computed by Concatenated-Sequential (C-Seq) functions, which are concatenations of Seq functions (Dolatian and Heinz, 2018a,b) . Most reduplication processes are sufficiently characterized by C-Seq functions because they can almost always be decomposed into two concatenated Seq functions: one to produce the reduplicant via truncation Trunc(x), and one to produce an identical copy of the base ID(x). Seq functions as 1-way FSTs and C-Seq functions as 2-way FSTs both compute partial reduplication, but differ in their origin semantics (Dolatian and Heinz, 2018b), the finite-state analog to alignment (Boja\u0144czyk, 2014) . Consider a function f , an FST T which computes f , and an inputoutput pair (x, y) such that f (x) = y. Given some substring y j in y, the origin information of y j with respect to T is the position x i in x such that the Finite-state transducer Origin information 1-way a.i a.ii FST's input-read head is in position x i of the input x when the FST outputs the substring y j . To illustrate, consider initial-CV copying: f (pat) = papat. This function is computable by either the 1-way FST in Figure 3 .a.i or the 2way FST in Figure 3 .b.i. The input is flanked by the end boundaries o,n. The 1-way FST implicitly advances from left-to-right on the input string. The 2-way FST advances left-to-right via the explicit +1 direction parameter until it produces the first CV string (=the reduplicant). After that, it moves right-to-left via the -1 direction parameter and reaches the start boundary o. It then advances left-to-right and outputs the base. 4 For the inputoutput pair (pat, papat), the 1-way FST generates an 'alignment' or origin information such that the entire second copy 'pa' is associated or generated from the vowel 'a' in the input (Figure 3 .a.ii). In contrast, the 2-way FST generates the alignment in Figure 3 .b.ii where the second output 'p' is associated with the input consonant 'p'. The role of origin semantics and alignment acts as a diagnostic for understanding whether the neural networks we probe behave more like a 1-way or 2-way FST.",
"cite_spans": [
{
"start": 352,
"end": 381,
"text": "(Dolatian and Heinz, 2018a,b)",
"ref_id": null
},
{
"start": 858,
"end": 875,
"text": "(Boja\u0144czyk, 2014)",
"ref_id": "BIBREF5"
},
{
"start": 1829,
"end": 1830,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1371,
"end": 1379,
"text": "Figure 3",
"ref_id": "FIGREF3"
},
{
"start": 1404,
"end": 1412,
"text": "Figure 3",
"ref_id": "FIGREF3"
},
{
"start": 2028,
"end": 2037,
"text": "(Figure 3",
"ref_id": "FIGREF3"
},
{
"start": 2100,
"end": 2108,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Computing reduplication",
"sec_num": "2.1"
},
{
"text": "q 0 start q 1 q 2 q 3 q 4 q f (o:o) (t:t) (p:p) (a:a\u21e0ta) (a:a\u21e0pa) (\u2303 : \u2303) (n:n) p a t p a p a t 2-way b.i b.ii q 0 start q 1 q 2 q 3 q 4 q f",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing reduplication",
"sec_num": "2.1"
},
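{
"text": "The alignment contrast just described can be made concrete with a small worked example. The following Python sketch is illustrative only (it is not code from the paper, and helper names such as one_way_cv_copy are placeholders): it computes initial-CV copying f(pat) = papat in two ways and records origin information as (input position, output substring) pairs. The single left-to-right pass emits the copied material together with the first vowel, mimicking the 1-way FST alignment, while the re-reading pass walks the CV prefix and then the whole base, mimicking the 2-way FST alignment.\n\ndef one_way_cv_copy(w, vowels='aeiou'):\n    # Single left-to-right pass: when the first vowel is read, emit it together\n    # with the copy of the prefix, so the whole second copy originates from the vowel.\n    out, origins = [], []\n    copied = False\n    for i, seg in enumerate(w):\n        if not copied and seg in vowels:\n            piece = seg + w[:i + 1]   # e.g. 'a' plus the copy 'pa' in a single step\n            copied = True\n        else:\n            piece = seg\n        out.append(piece)\n        origins.append((i, piece))\n    return ''.join(out), origins\n\ndef two_way_cv_copy(w, vowels='aeiou'):\n    # Two passes over the input: every output segment originates from the identical\n    # input segment, as in the 2-way FST.\n    end = next(i for i, seg in enumerate(w) if seg in vowels) + 1\n    origins = [(i, w[i]) for i in range(end)] + [(i, w[i]) for i in range(len(w))]\n    return w[:end] + w, origins\n\nprint(one_way_cv_copy('pat'))  # ('papat', [(0, 'p'), (1, 'apa'), (2, 't')])\nprint(two_way_cv_copy('pat'))  # ('papat', [(0, 'p'), (1, 'a'), (0, 'p'), (1, 'a'), (2, 't')])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing reduplication",
"sec_num": "2.1"
},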
{
"text": "Chandlee et al. (2015) and Dolatian and Heinz (2018a) respectively show that ISL (Seq) and C-OSL (C-Seq) reduplicative processes are provably learnable by inducing their corresponding 1-way or 2-way FSTs in polynomial time and data. For Dolatian and Heinz (2018a) , their proof relies on making the training data 'boundary enriched' with the reduplicative boundary symbol \u21e0, e.g. the training data for initial-CV reduplication is {(pat, pa\u21e0pat), (mara, ma\u21e0mara), etc.}. They hypothesize that learning without the boundary \u21e0 is tantamount to learning morpheme segmentation. Gasser (1993) used simple RNNs to model reduplication and copying functions, finding that they could not properly learn reduplicative patterns. However, Prickett et al. (2018) found that ED networks, a class of RNNs that have performed well on a number of other morphological tasks (Cotterell et al., 2016; Kirov and Cotterell, 2018) could learn simple reduplicative patterns. These patterns used training data that did not represent a realistic language learning scenario, since all words had the same length and syllables were limited to a CV structure. We test the extent to which ED networks are capable of learning more realistic reduplicative functions. We find that vanilla EDs, like Prickett et al.'s, struggle to scale to realistic data, while EDs augmented with an attention mechanism easily acquire complex, natural-language-based reduplication patterns.",
"cite_spans": [
{
"start": 27,
"end": 53,
"text": "Dolatian and Heinz (2018a)",
"ref_id": "BIBREF17"
},
{
"start": 237,
"end": 263,
"text": "Dolatian and Heinz (2018a)",
"ref_id": "BIBREF17"
},
{
"start": 573,
"end": 586,
"text": "Gasser (1993)",
"ref_id": "BIBREF23"
},
{
"start": 726,
"end": 748,
"text": "Prickett et al. (2018)",
"ref_id": "BIBREF37"
},
{
"start": 855,
"end": 879,
"text": "(Cotterell et al., 2016;",
"ref_id": "BIBREF15"
},
{
"start": 880,
"end": 906,
"text": "Kirov and Cotterell, 2018)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning reduplication",
"sec_num": "2.2"
},
{
"text": "We use a library of C-Seq transducers derived from the typology of natural language reduplication patterns (Dolatian and Heinz, 2019) to generate sets of input-output mappings which we use to query several ED architectures.",
"cite_spans": [
{
"start": 107,
"end": 133,
"text": "(Dolatian and Heinz, 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "The typology exhibits multiple parameters and distinctions. Already mentioned was the distinction between partial and total reduplication: copying a bounded substring of the input gu\u21e0guyon (1b) vs. copying the entire potentially unbounded input wanita ! wanita\u21e0wanita (1a).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "For partial reduplication, one subparameter is whether the reduplicant has a fixed size or a variable size that is still smaller than some fixed natural number. Fixed-sized partial reduplication is the most common pattern, e.g. initial CVcopying: gu\u21e0guyon (1b) (Moravcsik, 1978; Rubino, 2005) . One instantiation of variable-length partial reduplication is copying the initial foot (2(a)i) (Marantz, 1982) , or syllable (2(b)i) (Haugen, 2005) , which used to be unattested (Moravcsik, 1978) . Another subparameter is whether the reduplicant is adjacent to the segments it copied (1b) or non-adjacent, i.e. wrong-sided (2c). Wrong-sided reduplication is controversial (Nelson, 2003 ) but attested (Riggle, 2004) .",
"cite_spans": [
{
"start": 261,
"end": 278,
"text": "(Moravcsik, 1978;",
"ref_id": "BIBREF34"
},
{
"start": 279,
"end": 292,
"text": "Rubino, 2005)",
"ref_id": "BIBREF43"
},
{
"start": 390,
"end": 405,
"text": "(Marantz, 1982)",
"ref_id": "BIBREF32"
},
{
"start": 428,
"end": 442,
"text": "(Haugen, 2005)",
"ref_id": "BIBREF24"
},
{
"start": 473,
"end": 490,
"text": "(Moravcsik, 1978)",
"ref_id": "BIBREF34"
},
{
"start": 667,
"end": 680,
"text": "(Nelson, 2003",
"ref_id": "BIBREF35"
},
{
"start": 696,
"end": 710,
"text": "(Riggle, 2004)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "2. (a) i. (dimu)rU ! dimu\u21e0dimurU (Yidin) 'house' ! 'houses' ii. (gindal)ba ! gindal\u21e0gindalba 'lizard sp.' ! 'lizards' (b) i. vu.sa ! vu\u21e0vusa (Yaqui) 'awaken' ! 'awaken (habitual)' ii. vam.se ! vam\u21e0vamse 'hurry' ! 'hurry (habitual)' (c) qanga ! qanga\u21e0qan (Koryak) 'fire' ! 'fire (absolute)'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "Over 1-way FSTs, adjacent partial reduplication and foot/syllable copying are ISL while wrongsided reduplication is Seq. Over 2-way FSTs, total reduplication and all the above partial reduplication functions are C-OSL, a subclass of C-Seq. 5 We tested multiple patterns, including partial initial and wrong-sided reduplication of the first two syllables, total reduplication, and partial initial reduplication of the first two segments. For each pattern, the models are given base strings as input and trained to reproduce the base string along with its reduplicant (i.e. a right or left concatenated fully or partially copied form). For all patterns, 10,000 input-output pairs are generated, 7,000 of which are used to train the models while the remaining 3,000 are held out to test model generalization. For clarity the \u21e0 symbol is used throughout this paper to denote the boundary between a base and its reduplicant, however no such boundary is present in the model's training data.",
"cite_spans": [
{
"start": 240,
"end": 241,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
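{
"text": "As a concrete illustration of the data setup just described (this is not the authors' released code; the phoneme inventory, the random base generator, and the helper names are our own assumptions), the following Python sketch builds 10,000 base/reduplicated pairs for two-segment initial reduplication and splits them into 7,000 training and 3,000 held-out pairs. No boundary symbol is written into the targets, mirroring the description above.\n\nimport random\n\nrandom.seed(0)\nALPHABET = list('ptkbdgmnszrlhaeiou')  # placeholder phoneme inventory\n\ndef random_base(min_len=3, max_len=9):\n    length = random.randint(min_len, max_len)\n    return ''.join(random.choice(ALPHABET) for _ in range(length))\n\ndef initial_two_segment_redup(base):\n    # Reduplicant = first two segments, concatenated to the left of the base;\n    # no boundary symbol appears in the target.\n    return base[:2] + base\n\npairs = [(b, initial_two_segment_redup(b)) for b in (random_base() for _ in range(10000))]\ntrain, test = pairs[:7000], pairs[7000:]  # 7,000 training / 3,000 generalization pairs\nprint(train[0])  # inspect one (base, target) training pair",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},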
{
"text": "Many ED networks were built and trained on the datasets described above. EDs are composed of a recurrent encoder, which sequentially processes an input string to yield a vector representation of the sequence in R n , and a recurrent decoder which takes the encoded representation of the input as a starting state and continues producing outputs until it produces a target stop symbol or reaches an experimenter-defined maximum length. The use of recurrent layers in both in the encoder and decoder allows EDs to map variable-length input sequences to variable-length output sequences, with no necessary relationship between the length of the input and target output (Sutskever et al., 2014) .",
"cite_spans": [
{
"start": 666,
"end": 690,
"text": "(Sutskever et al., 2014)",
"ref_id": "BIBREF49"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3.2"
},
{
"text": "Simple (SRNN) and gated (GRU) recurrence relations were tested as the encoder and decoder recurrent layers. 6 In SRNN layers the network's state at any timepoint, h t , is dependent only on the input at that timepoint and the network's state at the previous timepoint (Elman, 1990) .",
"cite_spans": [
{
"start": 108,
"end": 109,
"text": "6",
"ref_id": null
},
{
"start": 268,
"end": 281,
"text": "(Elman, 1990)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3.2"
},
{
"text": "h t = tanh(W x x t + b ih + W h h t 1 + b hh ) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3.2"
},
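{
"text": "A minimal NumPy sketch of the recurrence in (1) follows; it is our own illustration, and the dimensionalities, initialization, and input sequence are placeholders rather than the paper's hyperparameters.\n\nimport numpy as np\n\ndef srnn_step(x_t, h_prev, W_x, W_h, b_ih, b_hh):\n    # Equation (1): the new state depends only on the current input and the previous state.\n    return np.tanh(W_x @ x_t + b_ih + W_h @ h_prev + b_hh)\n\nd_in, d_hid = 20, 64\nrng = np.random.default_rng(0)\nW_x = rng.normal(size=(d_hid, d_in)) * 0.1\nW_h = rng.normal(size=(d_hid, d_hid)) * 0.1\nb_ih, b_hh = np.zeros(d_hid), np.zeros(d_hid)\n\nh = np.zeros(d_hid)\nfor x_t in rng.normal(size=(5, d_in)):  # run the layer over a 5-step input sequence\n    h = srnn_step(x_t, h, W_x, W_h, b_ih, b_hh)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3.2"
},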
{
"text": "Consequently, in an SRNN there is only one path for the forward and backward propogation of information. This leads to potential problems for SRNNs in representing long-distance dependencies (Bengio et al., 1994) and problems with the backward flow of information during training (Hochreiter et al., 2001) . GRU layers have a series of gates, called the reset r t , update z t , and new n t gates, which create an alternative path of information flow , as shown in (2).",
"cite_spans": [
{
"start": 191,
"end": 212,
"text": "(Bengio et al., 1994)",
"ref_id": "BIBREF4"
},
{
"start": 280,
"end": 305,
"text": "(Hochreiter et al., 2001)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3.2"
},
{
"text": "r t = (W ir x t + b ir + W hr h t 1 + b hr ) z t = (W iz x t + b iz + W hz h t 1 + b hz ) n t =tanh(W in x t + b in + r t (W hn h t 1 + b hn )) h t =(1 z t ) n t + z t h t 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3.2"
},
{
"text": "(2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3.2"
},
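{
"text": "A corresponding NumPy sketch of one GRU step in (2) is given below (again our own illustration with placeholder parameters); the reset gate modulates how much of the previous state enters the candidate state n_t, and the update gate interpolates between n_t and the previous state, providing the alternative path of information flow described above.\n\nimport numpy as np\n\ndef sigmoid(z):\n    return 1.0 / (1.0 + np.exp(-z))\n\ndef gru_step(x_t, h_prev, W, b):\n    # Equation (2): reset (r), update (z), and new (n) gates; W and b hold the six weight matrices and biases.\n    r = sigmoid(W['ir'] @ x_t + b['ir'] + W['hr'] @ h_prev + b['hr'])\n    z = sigmoid(W['iz'] @ x_t + b['iz'] + W['hz'] @ h_prev + b['hz'])\n    n = np.tanh(W['in'] @ x_t + b['in'] + r * (W['hn'] @ h_prev + b['hn']))\n    return (1.0 - z) * n + z * h_prev  # interpolate between the candidate and the previous state\n\nd_in, d_hid = 20, 64\nrng = np.random.default_rng(0)\nW = {k: rng.normal(size=(d_hid, d_in if k[0] == 'i' else d_hid)) * 0.1\n     for k in ('ir', 'iz', 'in', 'hr', 'hz', 'hn')}\nb = {k: np.zeros(d_hid) for k in W}\nh_t = gru_step(rng.normal(size=d_in), np.zeros(d_hid), W, b)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3.2"
},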
{
"text": "In a classic ED architecture, the encoded representation of the input is the only piece of infor-mation that is passed from the encoder to the decoder. This forces all necessary information in the input to be stored in this vector and preserved throughout the decoding process. In all experiments presented below, the target outputs consist of a concatenated reduplicant and base. Because the model must reproduce the base. it must preserve the identity of all phonemes in the input sequence. In order to test the ability of the model to learn the reduplicative function independent of its ability to store segment identities over arbitrarily long spans, a global weighted attention mechanism was incorporated into some of the models. This is a key point of departure from previous attempts to model reduplication with ED networks. Attention allows the decoder to selectively attend to the hidden states of the encoder by learning a set of weights, W att , which map the decoder's current state to a set of weights over timesteps in the input, and then concatenating the current decoder hidden state, h t , the weighted combination of all encoder hidden states to yield a new current decoder state, h tt Luong et al., 2015) . This is illustrated in Equation 3, where E is a matrix of size input length \u21e5 hidden dimensionality such that the ith row contains the encoder hidden state at timepoint i.",
"cite_spans": [
{
"start": 1204,
"end": 1223,
"text": "Luong et al., 2015)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h tt = CAT(h t , (W att h t ) T E)",
"eq_num": "(3)"
}
],
"section": "Models",
"sec_num": "3.2"
},
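{
"text": "A small NumPy sketch of the attention step in Equation 3 follows (our own illustration; the dimensions are placeholders, and the softmax over the scores is an addition that is standard in global attention models such as Luong et al. (2015) but is not written explicitly in Equation 3).\n\nimport numpy as np\n\ndef softmax(z):\n    e = np.exp(z - z.max())\n    return e / e.sum()\n\ndef attention_step(h_t, E, W_att, normalize=True):\n    # Equation (3): scores over input timesteps from the decoder state, a weighted\n    # combination of encoder states, concatenated onto the decoder state.\n    scores = W_att @ h_t                    # one score per input timestep\n    if normalize:\n        scores = softmax(scores)            # standard normalization of the attention weights\n    context = scores @ E                    # weighted sum of encoder hidden states (rows of E)\n    return np.concatenate([h_t, context])   # the new decoder state h_tt\n\nd_hid, t_in = 64, 7                         # hidden size and input length (placeholders)\nrng = np.random.default_rng(0)\nE = rng.normal(size=(t_in, d_hid))          # encoder hidden states, one row per input timestep\nW_att = rng.normal(size=(t_in, d_hid))      # maps the decoder state to weights over input timesteps\nh_tt = attention_step(rng.normal(size=d_hid), E, W_att)\nprint(h_tt.shape)                           # (128,): h_t concatenated with the context vector",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3.2"
},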
{
"text": "In this way, the decoder can pull information directly from the encoder by learning an alignment between the output and input representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3.2"
},
{
"text": "The next section presents the results of training networks with either SRNN or GRU recurrent layers with and without an attention mechanism and then testing their ability to generalize the target pattern. All networks are trained to minimize phoneme level cross-entropy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3.2"
},
{
"text": "In this section, we test ED networks on their ability to learn partial ( \u00a74.1,4.3) and total reduplication (4.2). Within partial reduplication, we test if they can learn adjacent reduplication vs. wrongsided reduplication, and fixed-size vs. variablelength reduplication.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "One simplifying assumption of previous work is that the reduplicant is a fixed-length substring of the base. This section tests the extent to which ED networks are able to learn reduplicative functions that copy a variably sized substring of the base in a way that is sensitive to linguistic structure which is not explicitly encoded in the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Partial reduplication",
"sec_num": "4.1"
},
{
"text": "Models were trained on initial and wrong-sided reduplication in which the reduplicant consisted of the first two-syllables in the word. Syllables were defined to be as onset-maximizing as possible and complex onsets and codas were included in the training data. This means that, for words with more than two syllables, the target reduplicant included everything between the left edge of the word and the right edge of the second vowel (initial: tasgatri!tasga\u21e0tasgatri, wrongsided: tasgatri!tasgatri\u21e0tasgat). For words with only one or two vowels the reduplicant was the entire word (tasgat!tasgat\u21e0tasgat). Due to the variable presence of onsets and codas, both simple and complex, reduplicants in these test cases vary in length between 2 and 10 phonemes, and may contain either 1 or 2 vowels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Partial reduplication",
"sec_num": "4.1"
},
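{
"text": "For concreteness, the target construction just described can be sketched as follows (our own illustration; the vowel inventory and function names are assumptions, and wrong-sided targets simply concatenate the copy on the right instead of the left). The reduplicant runs from the left edge of the word through the second vowel, falling back to the whole word when there are fewer than three vowels.\n\nVOWELS = set('aeiou')  # assumed vowel inventory\n\ndef two_syllable_reduplicant(base):\n    vowel_positions = [i for i, seg in enumerate(base) if seg in VOWELS]\n    if len(vowel_positions) <= 2:\n        return base                          # one- and two-vowel words copy the whole base\n    return base[:vowel_positions[1] + 1]     # prefix through the right edge of the second vowel\n\ndef initial_redup(base):\n    return two_syllable_reduplicant(base) + base\n\nprint(initial_redup('tasgatri'))  # tasgatasgatri (reduplicant tasga)\nprint(initial_redup('tasgat'))    # tasgattasgat (whole word copied)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Partial reduplication",
"sec_num": "4.1"
},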
{
"text": "In order for the model to learn this pattern, it must learn to identify which phonemes are consonants and which are vowels, must learn the syllabification rules, and must learn to handle the onesyllable exceptional case. Table (1) shows the generalization accuracy for the tested network architectures on datasets instantiating this pattern. As will be discussed in \u00a74.3, the success of networks without attention is partially dependent on characteristics of the target language, namely the size of the language's segment inventory and permitted string lengths. To highlight these effects, results are reported from a representative small language, which has 10 unique phonemes and permits bases of between 3 and 9 segments, and a large language, which has 26 unique phonemes and permits bases of between 3 and 15 segments. The results suggest that the attention-based models are able to learn and generalize both initial and wrong-sided two-syllable reduplication patterns in a way that is robust to recurrence rela-tion and language size. Non-attention GRU models show mild success in the small language, but seem heavily affected by language size, a result that will be explored thoroughly in \u00a74.3. Nonattention RNN models are unable to learn the patterns in any of the simulations we ran.",
"cite_spans": [],
"ref_spans": [
{
"start": 221,
"end": 230,
"text": "Table (1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Partial reduplication",
"sec_num": "4.1"
},
{
"text": "The attention-based models are able to learn an alignment between the input and output that allows them to pull information directly from the input during decoding, sidestepping a potential information bottleneck at the encoded representation. To illustrate the alignment functions, an SRNN trained on two-syllable initial reduplication was used to make predictions about novel forms and the attention weights were stored. Figure (4) plots the attention weights for this model at every step in decoding for the three-syllable word pastapo and the two-syllable word spaftof ('<' and '>' represent start-of-sequence and end-ofsequence tokens, respectively). The attention weights confirm that the model learned an alignment between corresponding phonemes in the input and output. A single phoneme in the input has an output correspondent in both the base and reduplicant. These examples also illustrate the model's ability to i) identify the cut-off point for the reduplicant even when it is not explicitly marked and to ii) identify exceptional cases where the word is only two syllables and thus the reduplicant consists of material past the second vowel. In pastapo the model cuts off the reduplicant after the second vowel and in spaftof the model correctly includes the coda consonant because the word consists of only two syllables. This section showed that attention-based models can learn initial and wrong-sided reduplication even when the pattern is complicated by sensitivity to linguistic structure that results in variablelength reduplicants. Once the network has learned enough structure to perform syllabification, the two-syllable partial reduplicative function is C-Seq. The next section examines the extent to which these networks learn unbounded copying, i.e. total reduplication.",
"cite_spans": [],
"ref_spans": [
{
"start": 423,
"end": 433,
"text": "Figure (4)",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Partial reduplication",
"sec_num": "4.1"
},
{
"text": "We test the ability of ED networks to learn and generalize total reduplication: wanita ! wanita\u21e0wanita (1a). As mentioned, total reduplication is not a rational function and is uncomputable with a 1-way FST, since there is no upper bound on the size of the copied string. However, it is a C-Seq function and computable by the corresponding 2-way FST. Total reduplication is thus a crucial test case for the RNN behavior.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Total reduplication",
"sec_num": "4.2"
},
{
"text": "As in \u00a74.1, SRNN and GRU models with and without attention are trained on large and small languages where small languages have 10 phonemes and base lengths between 3 and 9 segments, and large languages have 26 phonemes and base lengths between 3 and 15 segments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Total reduplication",
"sec_num": "4.2"
},
{
"text": "Attention Small Large Small Large SRNN 0.046 0.0 0.999 0.985 GRU 0.705 0.211 0.999 0.995 Table 2 : Generalization accuracy by network type on both the large and small total reduplication patterns. Table 2 shows the generalization accuracy for all network configurations. The results are nearly identical to those for the partial reduplication patterns in \u00a74.1. Attention models can robustly learn the pattern, with negligible effects of recurrence relation or language size. Without attention, no model fully succeeds in generalizing the total reduplication pattern, with the best performance coming from the GRU on the small language.",
"cite_spans": [],
"ref_spans": [
{
"start": 89,
"end": 96,
"text": "Table 2",
"ref_id": null
},
{
"start": 197,
"end": 204,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Non-attention",
"sec_num": null
},
{
"text": "These results show that attention-based models can learn a generalizable total reduplication function as well as they can learn partial reduplication functions. This means that attention-based ED network generalization does not distinguish between total and partial reduplication, despite glaring functional and automata-theoretic differences in the functions themselves. This clearly suggests that an RNN architecture that can learn both functions necessarily computes a C-Seq function, which properly includes both processes. Furthermore, as discussed in \u00a75, the interpretability of the corresponding FST characterization (2-way vs 1-way) and its origin semantics provides a direct computational link to the attention mechanism of these RNN architectures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-attention",
"sec_num": null
},
{
"text": "As shown so far, network architecture is not the only factor that influences a network's ability to learn a target reduplicative function. The composition of the target language, in terms of the number of segments in the language and the number of permitted string lengths, can have a dramatic effect on model behavior.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alphabet size and string length effects",
"sec_num": "4.3"
},
{
"text": "The effect of model architecture and language composition was investigated by testing the extent to which all network configurations could learn simple reduplication pattern while systematically varying the size of the segment inventory and permitted base lengths in the data. The reduplicative function chosen for these tests copied a fixedwindow of two segments for initial reduplication: guyon!gu\u21e0guyon. This was chosen because it is typologically well-attested (Moravcsik, 1978; Rubino, 2005 Rubino, , 2013 and also predicted to be the simplest reduplication pattern for the network to learn (since it is insensitive to linguistic structure and has a fixed-length reduplicant).",
"cite_spans": [
{
"start": 465,
"end": 482,
"text": "(Moravcsik, 1978;",
"ref_id": "BIBREF34"
},
{
"start": 483,
"end": 495,
"text": "Rubino, 2005",
"ref_id": "BIBREF43"
},
{
"start": 496,
"end": 510,
"text": "Rubino, , 2013",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Alphabet size and string length effects",
"sec_num": "4.3"
},
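{
"text": "The fixed-window pattern used for these tests is simple to state, which makes it convenient for sweeping language properties. The sketch below is our own illustration (the letter inventory is a placeholder): the target prepends the first two segments of the base, and the experimental manipulation only changes the number of phonemes and the range of permitted base lengths.\n\nimport random\n\ndef fixed_window_redup(base, window=2):\n    return base[:window] + base  # guyon -> guguyon\n\ndef make_language(n_phonemes, max_len, n_pairs=10000, min_len=3, seed=0):\n    rng = random.Random(seed)\n    alphabet = 'abcdefghijklmnopqrstuvwxyz'[:n_phonemes]  # placeholder inventory\n    bases = (''.join(rng.choice(alphabet) for _ in range(rng.randint(min_len, max_len)))\n             for _ in range(n_pairs))\n    return [(b, fixed_window_redup(b)) for b in bases]\n\n# Sweep alphabet size and maximum base length, as in the experiments described above.\nconditions = [(n, m) for n in (10, 18, 26) for m in range(5, 11)]\ndatasets = {cond: make_language(*cond) for cond in conditions}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alphabet size and string length effects",
"sec_num": "4.3"
},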
{
"text": "Data that followed this pattern was generated for languages with 10, 18, and 26 unique phonemes in their inventory and which permit bases to vary from 3 to between 5 and 10 segments. These results are shown in Figure (5) . 7 The top panel shows the effect of alphabet size; string lengths are fixed between 3 and 8. The bottom panel, which shows the effect of string lengths; alphabet size is fixed at 26. The lines paralleling 1.0 in the top panel show that the ability of attentionbased models to learn the target function is robust to alphabet size. The lines paralleling 1.0 in the bottom panel illustrate that attention-based models are similarly robust to string length variation.",
"cite_spans": [
{
"start": 223,
"end": 224,
"text": "7",
"ref_id": null
}
],
"ref_spans": [
{
"start": 210,
"end": 220,
"text": "Figure (5)",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Alphabet size and string length effects",
"sec_num": "4.3"
},
{
"text": "In contrast, the non-attention models show large effects of alphabet size and string length. The non- attention SRNN shows very limited success. It is able to generalize with a very limited number of string lengths; but when maximum string length exceeds 7, it is no longer able to learn the target function at all. Consequently, the accuracy of the SRNN in the top panel, where maximum string length is fixed at 9, is stuck at 0.0 across all alphabet sizes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alphabet size and string length effects",
"sec_num": "4.3"
},
{
"text": "The effects of both string length and alphabet size are also visible for the non-attention GRU. In the top panel, where maximum string length is fixed at 9, a decrease in generalization accuracy as a function of alphabet size is observed. The effect of maximum string length on the nonattention GRU is less dramatic than on the SRNN, but the GRU still displays a decrease from near ceiling accuracy with lengths between 3 and 5, to \u21e0 0.60 when lengths range between 3 and 10.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alphabet size and string length effects",
"sec_num": "4.3"
},
{
"text": "The sensitivity of non-attention SRNN and GRU models to alphabet size and string length are likely a result of the fact that these models are unable to directly reference the input during decoding and must pass all information through the encoder bottleneck. This hypothesis is strengthened by the fact that, without attention, the GRU performs much better than the SRNN. The GRU has extra gates between timepoints which aid in the long-distance preservation of information, mitigating the bottleneck problem to an extent. How-ever, while this assists the GRU network, it is not enough to make alphabet size and word length non-issues. The non-attention GRU is similar in architecture to the LSTM model of Prickett et al. (2018) , with a slightly different training objective, suggesting that their model would similarly have difficulty scaling up.",
"cite_spans": [
{
"start": 706,
"end": 728,
"text": "Prickett et al. (2018)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Alphabet size and string length effects",
"sec_num": "4.3"
},
{
"text": "The lack of a difference between the attentionbased GRU and SRNN corroborates the idea that when this information bottleneck is not an issue both architectures are capable of learning generalizable reduplication.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alphabet size and string length effects",
"sec_num": "4.3"
},
{
"text": "As explained in \u00a72.1, partial reduplication can be computed as a function with either 1-way or 2-way FSTs. However, the two finite-state algorithms differ in their origin semantics or alignment. The alignment difference is simulated by the attentionbased RNNs. The alignments learned by attentionbased models for partial reduplication in \u00a74.1 and \u00a74.3 are analogous to the origin semantics computed by the 2-way FST. We illustrate in Figure 6 . While both Seq and C-Seq functions sufficiently characterize partial reduplication, this 2-way-like alignment suggests that the RNNs are generalizing C-Seq functions (see Fig. 4 for other examples). This extends to total reduplication ( \u00a74.2) whose alignment when learned by the attentionbased RNNs suggests the same origin information as 2-way FSTs. These results hint at the expressivity of the ED models, explicitly connecting their computations to the 2-way automata characterizing this subregular class.",
"cite_spans": [],
"ref_spans": [
{
"start": 434,
"end": 442,
"text": "Figure 6",
"ref_id": "FIGREF6"
},
{
"start": 616,
"end": 622,
"text": "Fig. 4",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Origin semantics and alignment",
"sec_num": "5.1"
},
{
"text": "The results suggest that the same general-purpose mechanism can be used to model both partial and total reduplication. The attention-based RNNs learned both processes with near-equal ease and generalizability and the same tools. This learning result fits well with reduplicative typology and theory. Partial and total reduplication are typologically and diachronically linked. If a language has partial reduplication, then it almost always has total reduplication, often because the former developed from the latter (Rubino, 2013) . Because of this dependence, certain linguistic theories use the same mechanisms to generate both processes (Inkelas and Zoll, 2005) .",
"cite_spans": [
{
"start": 516,
"end": 530,
"text": "(Rubino, 2013)",
"ref_id": "BIBREF44"
},
{
"start": 640,
"end": 664,
"text": "(Inkelas and Zoll, 2005)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generality of copying mechanisms",
"sec_num": "5.2"
},
{
"text": "Computationally, our result fits with the characterization of reduplication over 2-way FSTs (Dolatian and Heinz, 2018b) but not over 1-way FSTs . Because total reduplication cannot be modeled by a 1-way FSTs, some suggest that total and partial reduplication are ontologically different and should be computed with separate mechanisms (Roark and Sproat, 2007; Chandlee, 2017) . In contrast, when computed over 2-way FSTs, both reduplicative processes fall under the same subclass of C-Seq functions.",
"cite_spans": [
{
"start": 335,
"end": 359,
"text": "(Roark and Sproat, 2007;",
"ref_id": "BIBREF41"
},
{
"start": 360,
"end": 375,
"text": "Chandlee, 2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generality of copying mechanisms",
"sec_num": "5.2"
},
{
"text": "The results from \u00a74.3 shows that attention-based RNNs could equally well learn a partial reduplication function regardless of alphabet size input size. In contrast, attention-less RNNs suffer. For an attention-less RNN, learning initial-CV copying with a small alphabet over smaller words is significantly easier then learning it with a larger alphabet over larger words. Their scaling difficulty is reminiscent of 1-way FST treatments of partial reduplication. To compute partial reduplication, 1-way FSTs can suffer a significant state explosion as alphabet size or reduplicant size increases. This is why some call 1-way FSTs 'burdensome models' for partial reduplication (Roark and Sproat, 2007, 54) . 2-way FSTs do not suffer from state explosion (Dolatian and Heinz, 2018b) .",
"cite_spans": [
{
"start": 675,
"end": 703,
"text": "(Roark and Sproat, 2007, 54)",
"ref_id": null
},
{
"start": 752,
"end": 779,
"text": "(Dolatian and Heinz, 2018b)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Scaling problems",
"sec_num": "5.3"
},
{
"text": "We showed that RNN encoder-decoder networks with attention can learn partial and total redupli-cation patterns. Non-attention models exhibited mixed success in learning generalizable reduplication functions in a way that was dependent on alphabet size and string length, suggesting that their failure is attributable to the information bottleneck between encoder and decoder rather than an inability to learn the target function. This corroborates the finding by Weiss et al. (2018) that recurrent networks' expressive power is restricted in practice, and shows the fruitfulness of using wellunderstood subregular classes to probe this expressivity.",
"cite_spans": [
{
"start": 463,
"end": 482,
"text": "Weiss et al. (2018)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "The definition and illustration for 2-way FSTs are taken from Dolatian and Heinz (2018b) . We use o,nas the start and end boundaries.",
"cite_spans": [
{
"start": 62,
"end": 88,
"text": "Dolatian and Heinz (2018b)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
},
{
"text": "3) Definition: A 2-way, deterministic FST is a six-tuple (Q, \u2303 n , , q 0 , F, ) such that:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
},
{
"text": "Q is a finite set of states, \u2303 n = \u2303 [ {o, n} is the input alphabet, is the output alphabet, q 0 2 Q is the initial state, F \u2713 Q is the set of final states,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
},
{
"text": "\u03b4: Q \u00d7 \u03a3_{o,n} \u2192 Q \u00d7 \u0393* \u00d7 D is the transition function, where D = {-1, 0, +1} is the set of directions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
},
{
"text": "For a survey on legitimate configurations in a 2way FSTs, its computational properties, and complexity diagnostics, please see Dolatian and Heinz (2018b) .",
"cite_spans": [
{
"start": 127,
"end": 153,
"text": "Dolatian and Heinz (2018b)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
},
{
"text": "To illustrate 2-way FSTs, Figure 7 shows a 2way FST for total reduplication. The 2-way operates by:",
"cite_spans": [],
"ref_spans": [
{
"start": 26,
"end": 34,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
},
{
"text": "1. reading the input tape once from left to right in order to output the first copy, 2. going back to the start of the input tape by moving left until the start boundary o is reached, 3. reading the input tape once more from left to right in order to output the second copy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
},
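{
"text": "These three steps can be simulated directly. The following Python sketch is our own illustration (it is not the transducer library behind the paper); the transition function returns a (next state, output string, direction) triple, the symbols < and > stand in for the boundaries written o and n above, and ~ stands in for the optional boundary symbol in the output.\n\nSIGMA = set('abcdefghijklmnopqrstuvwxyz')\nL_BOUND, R_BOUND = '<', '>'  # stand-ins for the start and end boundaries o, n\n\ndef delta(state, symbol):\n    # (state, symbol) -> (next state, output string, direction), following the three steps above.\n    if state == 'q0' and symbol == L_BOUND:  return ('q1', '', +1)\n    if state == 'q1' and symbol in SIGMA:    return ('q1', symbol, +1)  # step 1: output the first copy\n    if state == 'q1' and symbol == R_BOUND:  return ('q2', '~', -1)     # reach the end, emit the boundary\n    if state == 'q2' and symbol in SIGMA:    return ('q2', '', -1)      # step 2: rewind to the start boundary\n    if state == 'q2' and symbol == L_BOUND:  return ('q3', '', +1)\n    if state == 'q3' and symbol in SIGMA:    return ('q3', symbol, +1)  # step 3: output the second copy\n    if state == 'q3' and symbol == R_BOUND:  return ('qf', '', +1)      # accept\n    raise ValueError((state, symbol))\n\ndef run(word):\n    tape = L_BOUND + word + R_BOUND\n    state, pos, out = 'q0', 0, []\n    while state != 'qf':\n        state, piece, move = delta(state, tape[pos])\n        out.append(piece)\n        pos += move\n    return ''.join(out)\n\nprint(run('wanita'))  # wanita~wanita",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
},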
{
"text": "Specifically, this figure is interpreted as follows. The symbol \u2303 stands for any segment in the alphabet except for {o, n}. The arrow from q 1 to itself means this 2-way FST reads \u2303, writes \u2303, and advances the read head one step to the right on the input tape. The boundary symbol \u21e0 is a symbol in the output alphabet , and is not necessary. We include it only for illustration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
},
{
"text": "We show an example derivation in Figure 8 for the input-output pair (wanita, wanita\u21e0wanita) (1a using the 2-way FST in Figure 7 . The derivation shows the configurations of the computation for the input wanita and is step by step. Each tuple consists of four parts: input string, output string, current state, transition. In the input string, we underline the input symbol which FST will read next. The output string is what the 2-way FST has outputted up to that point. The symbol marks the empty string. The current state is what state the FST is currently in. The transition represents the used transition arc from input to output. In the first tuple, there is no transition arc used (N/A). But for other tuples, the form of the arc is: ",
"cite_spans": [],
"ref_spans": [
{
"start": 33,
"end": 41,
"text": "Figure 8",
"ref_id": "FIGREF7"
},
{
"start": 119,
"end": 127,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
},
{
"text": "See Alur et al. (2014) on the use of concatenation as a function combinator.3Chandlee (2017) andDolatian and Heinz (2018a)'s results are actually stronger. Over 1-way FSTs, most partial reduplicative processes are Input-Strictly Local (ISL) functions, a subclass of Seq functions. Over 2-way FSTs, most reduplicative processes are the concatenation of Output-Strictly Local (C-OSL) functions, a subclass of C-Seq.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "See the appendix for more details on 2-way FSTs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Foot and syllable copying are C-OSL if the input is marked by syllable/foot boundaries; otherwise they're C-Seq.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "GRU layers have been shown to behave comparably to LSTMs, despite having fewer parameters(Chung et al., 2014). One difference between GRU and LSTM comes from(Weiss et al., 2018), who suggests that LSTMs are able to learn arbitrary a n b n patterns while GRUs are not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The reported results are from initial reduplication with a window size of two segments, however, wrong-sided reduplication and a larger window size of three were also tested with nearly identical results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Regular combinators for string transformations",
"authors": [
{
"first": "Rajeev",
"middle": [],
"last": "Alur",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Freilich",
"suffix": ""
},
{
"first": "Mukund",
"middle": [],
"last": "Raghothaman",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Joint Meeting of the Twenty-Third EACSL Annual Conference on Computer Science Logic (CSL) and the Twenty-Ninth Annual ACM/IEEE Symposium on Logic in Computer Science (LICS), CSL-LICS '14",
"volume": "9",
"issue": "",
"pages": "1--9",
"other_ids": {
"DOI": [
"10.1145/2603088.2603151"
]
},
"num": null,
"urls": [],
"raw_text": "Rajeev Alur, Adam Freilich, and Mukund Raghothaman. 2014. Regular combinators for string transformations. In Proceedings of the Joint Meeting of the Twenty-Third EACSL Annual Conference on Computer Science Logic (CSL) and the Twenty-Ninth Annual ACM/IEEE Symposium on Logic in Computer Science (LICS), CSL-LICS '14, pages 9:1-9:10, New York, NY, USA. ACM.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Streaming transducers for algorithmic verification of single-pass list-processing programs",
"authors": [
{
"first": "Rajeev",
"middle": [],
"last": "Alur",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pavol\u010dern\u00fd",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 38th Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL '11",
"volume": "",
"issue": "",
"pages": "599--610",
"other_ids": {
"DOI": [
"10.1145/1926385.1926454"
]
},
"num": null,
"urls": [],
"raw_text": "Rajeev Alur and Pavol\u010cern\u00fd. 2011. Streaming trans- ducers for algorithmic verification of single-pass list-processing programs. In Proceedings of the 38th Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL '11, pages 599-610, New York, NY, USA. ACM.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.0473"
]
},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Finite-state morphology: Xerox tools and techniques",
"authors": [
{
"first": "Kenneth",
"middle": [],
"last": "Beesley",
"suffix": ""
},
{
"first": "Lauri",
"middle": [],
"last": "Karttunen",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Beesley and Lauri Karttunen. 2003. Finite-state morphology: Xerox tools and techniques. CSLI Publications.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning long-term dependencies with gradient descent is difficult",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Patrice",
"middle": [],
"last": "Simard",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Frasconi",
"suffix": ""
}
],
"year": 1994,
"venue": "IEEE transactions on neural networks",
"volume": "5",
"issue": "2",
"pages": "157--166",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, Patrice Simard, Paolo Frasconi, et al. 1994. Learning long-term dependencies with gradi- ent descent is difficult. IEEE transactions on neural networks, 5(2):157-166.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Transducers with origin information",
"authors": [
{
"first": "Miko\u0142aj",
"middle": [],
"last": "Boja\u0144czyk",
"suffix": ""
}
],
"year": 2014,
"venue": "Automata, Languages, and Programming",
"volume": "",
"issue": "",
"pages": "26--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miko\u0142aj Boja\u0144czyk. 2014. Transducers with ori- gin information. In Automata, Languages, and Programming, pages 26-37, Berlin, Heidelberg. Springer.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Weak second-order arithmetic and finite automata",
"authors": [
{
"first": "J",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "B\u00fcchi",
"suffix": ""
}
],
"year": 1960,
"venue": "Mathematical Logic Quarterly",
"volume": "6",
"issue": "1-6",
"pages": "66--92",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Richard B\u00fcchi. 1960. Weak second-order arithmetic and finite automata. Mathematical Logic Quarterly, 6(1-6):66-92.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Strictly Local Phonological Processes",
"authors": [
{
"first": "Jane",
"middle": [],
"last": "Chandlee",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jane Chandlee. 2014. Strictly Local Phonological Processes. Ph.D. thesis, University of Delaware, Newark, DE.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Computational locality in morphological maps. Morphology",
"authors": [
{
"first": "Jane",
"middle": [],
"last": "Chandlee",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jane Chandlee. 2017. Computational locality in mor- phological maps. Morphology, pages 1-43.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Evidence for classifying metathesis patterns as subsequential",
"authors": [
{
"first": "Jane",
"middle": [],
"last": "Chandlee",
"suffix": ""
},
{
"first": "Angeliki",
"middle": [],
"last": "Athanasopoulou",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Heinz",
"suffix": ""
}
],
"year": 2012,
"venue": "The Proceedings of the 29th West Coast Conference on Formal Linguistics",
"volume": "",
"issue": "",
"pages": "303--309",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jane Chandlee, Angeliki Athanasopoulou, and Jeffrey Heinz. 2012. Evidence for classifying metathesis patterns as subsequential. In The Proceedings of the 29th West Coast Conference on Formal Linguistics, pages 303-309, Somerville, MA. Cascillida Press.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Output strictly local functions",
"authors": [
{
"first": "Jane",
"middle": [],
"last": "Chandlee",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Eyraud",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Heinz",
"suffix": ""
}
],
"year": 2015,
"venue": "14th Meeting on the Mathematics of Language",
"volume": "",
"issue": "",
"pages": "112--125",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jane Chandlee, R\u00e9mi Eyraud, and Jeffrey Heinz. 2015. Output strictly local functions. In 14th Meeting on the Mathematics of Language, pages 112-125.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Bounded copying is subsequential: Implications for metathesis and reduplication",
"authors": [
{
"first": "Jane",
"middle": [],
"last": "Chandlee",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Heinz",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 12th Meeting of the ACL Special Interest Group on Computational Morphology and Phonology, SIG-MORPHON '12",
"volume": "",
"issue": "",
"pages": "42--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jane Chandlee and Jeffrey Heinz. 2012. Bounded copying is subsequential: Implications for metathe- sis and reduplication. In Proceedings of the 12th Meeting of the ACL Special Interest Group on Computational Morphology and Phonology, SIG- MORPHON '12, pages 42-51, Montreal, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Strict locality and phonological maps",
"authors": [
{
"first": "Jane",
"middle": [],
"last": "Chandlee",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Heinz",
"suffix": ""
}
],
"year": 2018,
"venue": "Linguistic Inquiry",
"volume": "49",
"issue": "1",
"pages": "23--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jane Chandlee and Jeffrey Heinz. 2018. Strict lo- cality and phonological maps. Linguistic Inquiry, 49(1):23-60.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "On the properties of neural machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.1259"
]
},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder ap- proaches. arXiv preprint arXiv:1409.1259.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Empirical evaluation of gated recurrent neural networks on sequence modeling",
"authors": [
{
"first": "Junyoung",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Aglar G\u00fcl\u00e7ehre",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junyoung Chung, \u00c7 aglar G\u00fcl\u00e7ehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence model- ing. CoRR, abs/1412.3555.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The sigmorphon 2016 shared taskmorphological reinflection",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Christo",
"middle": [],
"last": "Kirov",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Sylak-Glassman",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
},
{
"first": "Mans",
"middle": [],
"last": "Hulden",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"volume": "",
"issue": "",
"pages": "10--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016. The sigmorphon 2016 shared taskmor- phological reinflection. In Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 10-22.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The complexity of the vocabulary of Bambara",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Culy",
"suffix": ""
}
],
"year": 1985,
"venue": "Linguistics and philosophy",
"volume": "8",
"issue": "",
"pages": "345--351",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Culy. 1985. The complexity of the vo- cabulary of Bambara. Linguistics and philosophy, 8:345-351.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Learning reduplication with 2-way finite-state transducers",
"authors": [
{
"first": "Hossep",
"middle": [],
"last": "Dolatian",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Heinz",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of Machine Learning Research: International Conference on Grammatical Inference",
"volume": "93",
"issue": "",
"pages": "67--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hossep Dolatian and Jeffrey Heinz. 2018a. Learn- ing reduplication with 2-way finite-state transduc- ers. In Proceedings of Machine Learning Research: International Conference on Grammatical Inference, volume 93 of Proceedings of Machine Learning Research, pages 67-80, Wroclaw, Poland.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Modeling reduplication with 2-way finite-state transducers",
"authors": [
{
"first": "Hossep",
"middle": [],
"last": "Dolatian",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Heinz",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 15th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hossep Dolatian and Jeffrey Heinz. 2018b. Model- ing reduplication with 2-way finite-state transduc- ers. In Proceedings of the 15th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, Brussells, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Redtyp: A database of reduplication with computational models",
"authors": [
{
"first": "Hossep",
"middle": [],
"last": "Dolatian",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Heinz",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Society for Computation in Linguistics",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hossep Dolatian and Jeffrey Heinz. 2019. Redtyp: A database of reduplication with computational mod- els. In Proceedings of the Society for Computation in Linguistics, volume 2. Article 3.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Finding structure in time",
"authors": [
{
"first": "",
"middle": [],
"last": "Jeffrey L Elman",
"suffix": ""
}
],
"year": 1990,
"venue": "Cognitive science",
"volume": "14",
"issue": "2",
"pages": "179--211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey L Elman. 1990. Finding structure in time. Cognitive science, 14(2):179-211.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "MSO definable string transductions and two-way finite-state transducers",
"authors": [
{
"first": "Joost",
"middle": [],
"last": "Engelfriet",
"suffix": ""
},
{
"first": "Hendrik",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2001,
"venue": "ACM Trans. Comput. Logic",
"volume": "2",
"issue": "2",
"pages": "216--254",
"other_ids": {
"DOI": [
"10.1145/371316.371512"
]
},
"num": null,
"urls": [],
"raw_text": "Joost Engelfriet and Hendrik Jan Hoogeboom. 2001. MSO definable string transductions and two-way finite-state transducers. ACM Trans. Comput. Logic, 2(2):216-254.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Transducers, logic and algebra for functions of finite words",
"authors": [
{
"first": "Emmanuel",
"middle": [],
"last": "Filiot",
"suffix": ""
},
{
"first": "Pierre-Alain",
"middle": [],
"last": "Reynier",
"suffix": ""
}
],
"year": 2016,
"venue": "ACM SIGLOG News",
"volume": "3",
"issue": "3",
"pages": "4--19",
"other_ids": {
"DOI": [
"10.1145/2984450.2984453"
]
},
"num": null,
"urls": [],
"raw_text": "Emmanuel Filiot and Pierre-Alain Reynier. 2016. Transducers, logic and algebra for functions of finite words. ACM SIGLOG News, 3(3):4-19.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Learning words in time: Towards a modular connectionist account of the acquisition of receptive morphology. Indiana University",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Gasser",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Gasser. 1993. Learning words in time: Towards a modular connectionist account of the acquisition of receptive morphology. Indiana Uni- versity, Department of Computer Science.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Reduplicative allomorphy and language prehistory in Uto-Aztecan",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Haugen",
"suffix": ""
}
],
"year": 2005,
"venue": "Studies on reduplication",
"volume": "28",
"issue": "",
"pages": "315--350",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Haugen. 2005. Reduplicative allomorphy and language prehistory in Uto-Aztecan. In Bernhard Hurch, editor, Studies on reduplication, 28, pages 315-350. Walter de Gruyter, Berlin.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Gradient flow in recurrent nets: the difficulty of learning long-term dependencies",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Frasconi",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 2001,
"venue": "John F Kolen and Stefan C Kremer, editors, A field guide to dynamical recurrent networks",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, and J\u00fcrgen Schmidhuber. 2001. Gradient flow in recur- rent nets: the difficulty of learning long-term depen- dencies. In John F Kolen and Stefan C Kremer, edi- tors, A field guide to dynamical recurrent networks. John Wiley & Sons.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Reduplication: Doubling in Morphology",
"authors": [
{
"first": "Sharon",
"middle": [],
"last": "Inkelas",
"suffix": ""
},
{
"first": "Cheryl",
"middle": [],
"last": "Zoll",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sharon Inkelas and Cheryl Zoll. 2005. Reduplication: Doubling in Morphology. Cambridge University Press, Cambridge.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Formal aspects of phonological description",
"authors": [
{
"first": "C",
"middle": [],
"last": "Douglas",
"suffix": ""
},
{
"first": "Johnson",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1972,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C Douglas Johnson. 1972. Formal aspects of phonological description. Mouton The Hague.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Regular models of phonological rule systems",
"authors": [
{
"first": "M",
"middle": [],
"last": "Ronald",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Kaplan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kay",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational linguistics",
"volume": "20",
"issue": "3",
"pages": "331--378",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronald M Kaplan and Martin Kay. 1994. Regular mod- els of phonological rule systems. Computational linguistics, 20(3):331-378.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Recurrent neural networks in linguistic theory: Revisiting pinker and prince (1988) and the past tense debate",
"authors": [
{
"first": "Christo",
"middle": [],
"last": "Kirov",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "651--665",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christo Kirov and Ryan Cotterell. 2018. Recurrent neural networks in linguistic theory: Revisiting pinker and prince (1988) and the past tense debate. Transactions of the Association for Computational Linguistics, 6:651-665.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A general computational model for word-form recognition and production",
"authors": [
{
"first": "Kimmo",
"middle": [],
"last": "Koskenniemi",
"suffix": ""
}
],
"year": 1984,
"venue": "Proceedings of the 10th international conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "178--181",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kimmo Koskenniemi. 1984. A general computational model for word-form recognition and production. In Proceedings of the 10th international conference on Computational Linguistics, pages 178-181. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Effective approaches to attention-based neural machine translation",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Hieu Pham, and Christo- pher D. Manning. 2015. Effective approaches to attention-based neural machine translation. CoRR, abs/1508.04025.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Re reduplication",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Marantz",
"suffix": ""
}
],
"year": 1982,
"venue": "Linguistic inquiry",
"volume": "13",
"issue": "3",
"pages": "435--482",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Marantz. 1982. Re reduplication. Linguistic inquiry, 13(3):435-482.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Sequential neural networks as automata",
"authors": [
{
"first": "William",
"middle": [],
"last": "Merrill",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Deep Learning and Formal Languages workshop at ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Merrill. 2019. Sequential neural networks as automata. In Proceedings of the Deep Learning and Formal Languages workshop at ACL 2019.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Reduplicative constructions",
"authors": [
{
"first": "Edith",
"middle": [],
"last": "Moravcsik",
"suffix": ""
}
],
"year": 1978,
"venue": "",
"volume": "1",
"issue": "",
"pages": "297--334",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edith Moravcsik. 1978. Reduplicative constructions. In Joseph Greenberg, editor, Universals of Human Language, volume 1, pages 297-334. Stanford Uni- versity Press, Stanford, California.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Asymmetric anchoring",
"authors": [
{
"first": "Nicole",
"middle": [
"Alice"
],
"last": "",
"suffix": ""
},
{
"first": "Nelson",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicole Alice Nelson. 2003. Asymmetric anchoring. Ph.D. thesis, Rutgers University, New Brunswick, NJ.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Rational recurrences",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1203--1214",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Peng, Roy Schwartz, Sam Thomson, and Noah A Smith. 2018. Rational recurrences. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1203-1214.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Seq2seq models with dropout can learn generalizable reduplication",
"authors": [
{
"first": "Brandon",
"middle": [],
"last": "Prickett",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Traylor",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Pater",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Fifteenth Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"volume": "",
"issue": "",
"pages": "93--100",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brandon Prickett, Aaron Traylor, and Joe Pater. 2018. Seq2seq models with dropout can learn generaliz- able reduplication. In Proceedings of the Fifteenth Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 93-100.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Connecting weighted automata and recurrent neural networks through spectral learning",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Rabusseau",
"suffix": ""
},
{
"first": "Tianyu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Doina",
"middle": [],
"last": "Precup",
"suffix": ""
}
],
"year": 2019,
"venue": "AISTATS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Rabusseau, Tianyu Li, and Doina Precup. 2019. Connecting weighted automata and recur- rent neural networks through spectral learning. In AISTATS.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "No free lunch in linguistics or machine learning: Response to pater",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Rawski",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Heinz",
"suffix": ""
}
],
"year": 2019,
"venue": "Language",
"volume": "94",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Rawski and Jeffrey Heinz. 2019. No free lunch in linguistics or machine learning: Response to pater. Language, 94:1.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Nonlocal reduplication",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Riggle",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 34th meeting of the North Eastern Einguistics Society. Graduate Linguistic Student Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Riggle. 2004. Nonlocal reduplication. In Proceedings of the 34th meeting of the North Eastern Einguistics Society. Graduate Linguistic Student Association, University of Massachusetts.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Computational Approaches to Morphology and Syntax",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Roark",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Sproat",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian Roark and Richard Sproat. 2007. Computational Approaches to Morphology and Syntax. Oxford University Press, Oxford.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Finite-state language processing",
"authors": [
{
"first": "Emmanuel",
"middle": [],
"last": "Roche",
"suffix": ""
},
{
"first": "Yves",
"middle": [],
"last": "Schabes",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emmanuel Roche and Yves Schabes. 1997. Finite-state language processing. MIT press.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Reduplication: Form, function and distribution",
"authors": [
{
"first": "Carl",
"middle": [],
"last": "Rubino",
"suffix": ""
}
],
"year": 2005,
"venue": "Studies on reduplication",
"volume": "",
"issue": "",
"pages": "11--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carl Rubino. 2005. Reduplication: Form, function and distribution. In Studies on reduplication, pages 11- 29. Mouton de Gruyter, Berlin.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Reduplication. Max Planck Institute for Evolutionary Anthropology",
"authors": [
{
"first": "Carl",
"middle": [],
"last": "Rubino",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carl Rubino. 2013. Reduplication. Max Planck Insti- tute for Evolutionary Anthropology, Leipzig.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "On multiple contextfree grammars",
"authors": [
{
"first": "Hiroyuki",
"middle": [],
"last": "Seki",
"suffix": ""
},
{
"first": "Takashi",
"middle": [],
"last": "Matsumura",
"suffix": ""
},
{
"first": "Mamoru",
"middle": [],
"last": "Fujii",
"suffix": ""
},
{
"first": "Tadao",
"middle": [],
"last": "Kasami",
"suffix": ""
}
],
"year": 1991,
"venue": "Theoretical Computer Science",
"volume": "88",
"issue": "2",
"pages": "191--229",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiroyuki Seki, Takashi Matsumura, Mamoru Fujii, and Tadao Kasami. 1991. On multiple context- free grammars. Theoretical Computer Science, 88(2):191-229.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Parallel multiple context-free grammars, finite-state translation systems, and polynomial-time recognizable subclasses of lexical-functional grammars",
"authors": [
{
"first": "Hiroyuki",
"middle": [],
"last": "Seki",
"suffix": ""
},
{
"first": "Ryuichi",
"middle": [],
"last": "Nakanishi",
"suffix": ""
},
{
"first": "Yuichi",
"middle": [],
"last": "Kaji",
"suffix": ""
},
{
"first": "Sachiko",
"middle": [],
"last": "Ando",
"suffix": ""
},
{
"first": "Tadao",
"middle": [],
"last": "Kasami",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the 31st annual meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "130--139",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiroyuki Seki, Ryuichi Nakanishi, Yuichi Kaji, Sachiko Ando, and Tadao Kasami. 1993. Par- allel multiple context-free grammars, finite-state translation systems, and polynomial-time recog- nizable subclasses of lexical-functional grammars. In Proceedings of the 31st annual meeting on Association for Computational Linguistics, pages 130-139. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Neural networks and analog computation: beyond the Turing limit",
"authors": [
{
"first": "T",
"middle": [],
"last": "Hava",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Siegelmann",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hava T Siegelmann. 2012. Neural networks and analog computation: beyond the Turing limit. Springer Sci- ence & Business Media.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Morphology and computation",
"authors": [
{
"first": "Richard",
"middle": [
"William"
],
"last": "Sproat",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard William Sproat. 1992. Morphology and computation. MIT press, Cambridge:MA.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural net- works. CoRR, abs/1409.3215.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "On the practical computational power of finite precision rnns for language recognition",
"authors": [
{
"first": "Gail",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Eran",
"middle": [],
"last": "Yahav",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "740--745",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gail Weiss, Yoav Goldberg, and Eran Yahav. 2018. On the practical computational power of finite preci- sion rnns for language recognition. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 740-745.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "Hierarchy of subregular functions",
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"uris": null,
"text": "Figure 2shows such a division of a reduplicated word gu\u21e0guyon (1b). 3Figure 2shows this division of a reduplicated word gu\u21e0guyon (1b).",
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"uris": null,
"text": "Initial-CV reduplication as a concatenation of subsequential functions.",
"type_str": "figure"
},
"FIGREF3": {
"num": null,
"uris": null,
"text": "FSTs and origin information for initial-CV reduplication",
"type_str": "figure"
},
"FIGREF4": {
"num": null,
"uris": null,
"text": "Attention weights over input (horizontal) at each time step of correct decoding of reduplicated form (vertical) for two-syllable initial reduplication of the words pastapo and spaftof. Darker squares indicate a lower weight on the alignment between two timesteps.",
"type_str": "figure"
},
"FIGREF5": {
"num": null,
"uris": null,
"text": "Effect on varying alphabet size and maximum string length, with minumum string length fixed at 3, on generalization accuracy.",
"type_str": "figure"
},
"FIGREF6": {
"num": null,
"uris": null,
"text": "(left): Attention weights over input (horizontal) at each time step of correct decoding of reduplicated form (vertical) for the mapping pat!pa\u21e0pat. Darker squares indicate a lower weight on the alignment between two timesteps. (right): Origin semantics of 2-way FST fromFigure 3b.ii.",
"type_str": "figure"
},
"FIGREF7": {
"num": null,
"uris": null,
"text": "Derivation of wanita!wanita\u21e0wanita.",
"type_str": "figure"
}
}
}
}