{
"paper_id": "W18-0309",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:31:04.926489Z"
},
"title": "Modeling the Complexity and Descriptive Adequacy of Construction Grammars",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Dunn",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper uses the Minimum Description Length paradigm to model the complexity of CxGs (operationalized as the encoding size of a grammar) alongside their descriptive adequacy (operationalized as the encoding size of a corpus given a grammar). These two quantities are combined to measure the quality of potential CxGs against unannotated corpora, supporting discovery-device CxGs for English, Spanish, French, German, and Italian. The results show (i) that these grammars provide significant generalizations as measured using compression and (ii) that more complex CxGs with access to multiple levels of representation provide greater generalizations than single-representation CxGs.",
"pdf_parse": {
"paper_id": "W18-0309",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper uses the Minimum Description Length paradigm to model the complexity of CxGs (operationalized as the encoding size of a grammar) alongside their descriptive adequacy (operationalized as the encoding size of a corpus given a grammar). These two quantities are combined to measure the quality of potential CxGs against unannotated corpora, supporting discovery-device CxGs for English, Spanish, French, German, and Italian. The results show (i) that these grammars provide significant generalizations as measured using compression and (ii) that more complex CxGs with access to multiple levels of representation provide greater generalizations than single-representation CxGs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Construction Grammars (CxGs; Goldberg, 2006; Langacker, 2008) operate at multiple levels of representation (lexical, syntactic, and semantic) making them potentially much more complex than purely syntactic grammars. This paper models both (i) the computational complexity of CxGs and (ii) their descriptive adequacy against unannotated corpora using Minimum Description Length (MDL). These two measures, complexity and descriptive adequacy, can be used together as an objective function for measuring the quality of CxGs: the optimum grammar balances higher descriptive adequacy against lower complexity. Once we can measure the quality of a particular grammar in reference to a corpus of observed language use, we can search until we find the optimum grammar for that corpus. This paper uses measures of complexity and descriptive adequacy to learn CxGs for English, Spanish, French, German, and Italian.",
"cite_spans": [
{
"start": 29,
"end": 44,
"text": "Goldberg, 2006;",
"ref_id": "BIBREF11"
},
{
"start": 45,
"end": 61,
"text": "Langacker, 2008)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity and Descriptive Adequacy",
"sec_num": "1"
},
{
"text": "The goal is not to examine the representational capacity of CxGs in general because CxG is a fundamentally usage-based paradigm (Hopper, 1987; Kay & Fillmore, 1999; Bybee, 2006) . This means that the general capacity of its grammars must be weighted by their actual content: how can we model the complexity of a specific CxG used to describe a specific language, where that language is represented by a specific observable corpus?",
"cite_spans": [
{
"start": 128,
"end": 142,
"text": "(Hopper, 1987;",
"ref_id": "BIBREF15"
},
{
"start": 143,
"end": 164,
"text": "Kay & Fillmore, 1999;",
"ref_id": "BIBREF16"
},
{
"start": 165,
"end": 177,
"text": "Bybee, 2006)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity and Descriptive Adequacy",
"sec_num": "1"
},
{
"text": "Previous computational work on CxG (Steels, 2004; Bryant, 2004; Chang, et al., 2012; Steels, 2012) has relied on introspection-based representations that require a linguist to determine the optimum constructions by intuition. From a linguistic perspective, these representations are neither replicable nor falsifiable and are unable to test hypotheses about the mechanisms of emergence that map from observed usage to learned generalizations. From a computational perspective, these representations are not scalable across domains and languages and are subject to all the constraints of knowledge-based systems. Other data-driven approaches (Wible & Tsao, 2010; Forsberg, et al., 2014) generate potential constructions but do not evaluate the quality of competing CxGs as collections of constructions.",
"cite_spans": [
{
"start": 35,
"end": 49,
"text": "(Steels, 2004;",
"ref_id": "BIBREF24"
},
{
"start": 50,
"end": 63,
"text": "Bryant, 2004;",
"ref_id": "BIBREF2"
},
{
"start": 64,
"end": 84,
"text": "Chang, et al., 2012;",
"ref_id": "BIBREF4"
},
{
"start": 85,
"end": 98,
"text": "Steels, 2012)",
"ref_id": "BIBREF25"
},
{
"start": 641,
"end": 661,
"text": "(Wible & Tsao, 2010;",
"ref_id": "BIBREF26"
},
{
"start": 662,
"end": 685,
"text": "Forsberg, et al., 2014)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity and Descriptive Adequacy",
"sec_num": "1"
},
{
"text": "Section 2 discusses how CxGs are represented and Section 3 considers interactions between different levels of representation. Section 4 presents Minimum Description Length as a joint measure of complexity and descriptive adequacy suitable for measuring grammar quality while Section 5 operationalizes CxG encoding. Section 6 describes the search algorithm for optimizing grammar quality. Section 7 describes a multi-lingual experiment in measuring CxG complexity and descriptive adequacy and Appendix A discusses constructions learned from the corpus of English. (1a) [SLOT 1 -SLOT 2 -SLOT 3 -SLOT 4] (1b) [NOUN -\"gave\" -(animate) -\"a hand\"] (1c) \"Bill gave Peter a hand.\" (1d) [NOUN -(transf er) -(animate) -NOUN] (1e) \"Bill sent Peter a package.\" ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity and Descriptive Adequacy",
"sec_num": "1"
},
{
"text": "This section introduces the symbolic notation used to represent CxGs and describes how these representations are implemented. The algorithm recognizes three distinct types of representation as atomic units in its descriptions: Lexical representation consists of tokenized word-forms (in lowercase). Syntactic representation consists of partof-speech categories (defined using the Universal POS tagset, Petrov, et al., 2012, and computed using RDRPosTagger, Nguyen, et al., 2016) . Semantic representation consists of clusters of distributionally similar words that represent semantic domains and are computed using GenSim's implementation of word2vec (Rehurek & Sojka, 2010 ). The embedding model is trained using 1 billion words from web-crawled corpora for each language (from the WAC corpora: Baroni, et al., 2009 ; and Aranea corpora: Benko, 2014) using skip-grams with 500 dimensions. These embeddings are segmented into categorical domains using k-means clustering (k = 100). The idea behind these three types of representation is that a particular slot in a construction can be defined or constrained at the lexical, syntactic, or semantic level. These representations thus form the basic alphabet of the algorithm.",
"cite_spans": [
{
"start": 402,
"end": 427,
"text": "Petrov, et al., 2012, and",
"ref_id": "BIBREF20"
},
{
"start": 428,
"end": 478,
"text": "computed using RDRPosTagger, Nguyen, et al., 2016)",
"ref_id": null
},
{
"start": 651,
"end": 673,
"text": "(Rehurek & Sojka, 2010",
"ref_id": "BIBREF21"
},
{
"start": 796,
"end": 816,
"text": "Baroni, et al., 2009",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Representing CxGs",
"sec_num": "2"
},
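The semantic domains used above can be approximated with off-the-shelf tools. A minimal sketch (not the paper's released code) assuming gensim and scikit-learn, a hypothetical tokenized corpus file, and the stated settings (skip-grams, 500 dimensions, k = 100); the window size and frequency cut-off are assumptions:

```python
# Minimal sketch: induce categorical semantic domains from word embeddings.
# Assumes gensim >= 4.x and scikit-learn; "corpus.txt" is a hypothetical file
# with one tokenized, lowercased sentence per line.
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence
from sklearn.cluster import KMeans

sentences = LineSentence("corpus.txt")

# Skip-gram embeddings with 500 dimensions, as described in the text.
model = Word2Vec(sentences, vector_size=500, sg=1, window=5, min_count=10, workers=4)

# Segment the vocabulary into k = 100 semantic domains via k-means.
words = model.wv.index_to_key
domains = KMeans(n_clusters=100, n_init=10, random_state=0).fit_predict(model.wv[words])

# Map each word-form to its numeric domain identifier (the "(animate)"-style labels).
word_to_domain = dict(zip(words, domains.tolist()))
```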
{
"text": "Constructions are sequences containing a certain number of slots, as in (1a) with four individual slots. Each construction is surrounded by brackets and each slot within a construction is separated by a dash (\"-\"). Each slot in a construction is represented or defined by constraints that govern which units can occupy that slot. Lexical constraints are indicated using single quotes (e.g.,\"gave\" in 1b). Syntactic constraints are indicated using part-ofspeech tags in uppercase (e.g., NOUN in 1b). Semantic constraints are indicated within parentheses with the identifier for the semantic domain (e.g., animate in 1b). Thus, the construction in (1b) describes the utterance in (1c) but not the utterance in (1e); the construction in (1d) describes the utterances in both (1c) and (1e).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representing CxGs",
"sec_num": "2"
},
{
"text": "This provides a good example of the complexity problem: CxGs potentially have multiple overlapping representations for any given sentence. The sentence in (1e), for example, can be represented by the construction in (1d), in which slots are defined by both syntactic constraints (i.e., NOUN) and semantic constraints (i.e., animate). CxGs can distinguish between (1e) and its more idiomatic counterpart (1c) using representations such as (1d) and (1b). The question, however, is how many of these item-specific or idiomatic representations are needed in the grammar: each item-specific construction increases grammar complexity.",
"cite_spans": [
{
"start": 286,
"end": 291,
"text": "NOUN)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Representing CxGs",
"sec_num": "2"
},
{
"text": "In this paper, the term construction refers to the grammatical description (e.g., 1b) and the term construct refers to a member of the set of utterances which that construction represents (e.g.,1c). For a given grammar, the set of constructions is closed but the set of constructs is open. A construct or utterance can be represented by multiple constructions: representations like (1b) that are more item-specific alongside representations like (1d) that are more schematic. This leads to relationships between constructions: an inheritance hierarchy in which (1b) is a child of (1d). The current implementation has three limitations in respect to the ideal CxG: First, constraints are limited to a single type of representation per slot. For example, if a slot is constrained to the semantic domain animate, any syntactic category could be used to fill that slot. Second, although constituents are able to fill construction slots (i.e., \"a hand\" can occupy a single slot as a single NOUN), larger constructions such as (1d) cannot fill slots in other constructions. Third, no relations are learned between constructions in the grammar (i.e., the inheritance hierarchy is not modeled).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representing CxGs",
"sec_num": "2"
},
{
"text": "In computational terms, each construction is an array of slots. Each slot is defined as a tuple that contains two pointers: first, a pointer to the alphabet constraining that slot (i.e., lexical or syntactic units) and, second, a pointer to a particular unit within that alphabet (i.e., \"a hand\" or NOUN). Constituents are allowed to fill slots. This is accomplished using a context-free phrase structure grammar containing rules such as DETERMINER -NOUN \u2192 NOUN that is learned during a syntax-only iteration described in the next section. The syntactic alphabet, then, also supports pointers to complex sequences through this CFG: the construction points to a NOUN and the CFG allows larger constituents to be labeled as a single NOUN. The current implementation produces a context-free CxG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representing CxGs",
"sec_num": "2"
},
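To make the slot-and-pointer representation concrete, here is a minimal sketch (hypothetical data structures, not the released implementation) of a construction as an array of (representation type, unit) tuples, with a naive matcher over a pre-analyzed, constituent-folded token sequence:

```python
# Minimal sketch of the slot representation: each slot is a tuple of
# (representation type, unit within that alphabet). Hypothetical example only.
LEX, SYN, SEM = "lex", "syn", "sem"

# (1b): [NOUN - "gave" - (animate) - "a hand"]
construction_1b = [(SYN, "NOUN"), (LEX, "gave"), (SEM, "animate"), (LEX, "a hand")]

def slot_matches(slot, token):
    """token is a dict holding the word-form, POS tag, and semantic domain."""
    rep_type, unit = slot
    if rep_type == LEX:
        return token["form"] == unit
    if rep_type == SYN:
        return token["pos"] == unit
    return token["domain"] == unit  # SEM

def describes(construction, tokens):
    """True if the construction describes this (already constituent-folded) sequence."""
    if len(construction) != len(tokens):
        return False
    return all(slot_matches(s, t) for s, t in zip(construction, tokens))

# "Bill gave Peter a hand." with "a hand" folded into a single NOUN constituent.
utterance_1c = [
    {"form": "bill", "pos": "NOUN", "domain": "animate"},
    {"form": "gave", "pos": "VERB", "domain": "transfer"},
    {"form": "peter", "pos": "NOUN", "domain": "animate"},
    {"form": "a hand", "pos": "NOUN", "domain": "body"},
]
assert describes(construction_1b, utterance_1c)
```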
{
"text": "In the experiments that follow, each language is represented by a large web-crawled corpus in that language. Its grammar is learned by searching across potential grammars, each of which is evaluated against the corpus until the optimum grammar is found (using a measure defined in Section 4). The search for the optimum grammar is conducting using a tabu search (Glover 1989 (Glover , 1990a with multi-unit association measures (Dunn, 2017) used to sample potential constructions. The main focus of this paper is on defining the objective function: how can we know that one grammar is better than another without evaluating them against gold-standard annotations?",
"cite_spans": [
{
"start": 362,
"end": 374,
"text": "(Glover 1989",
"ref_id": null
},
{
"start": 375,
"end": 390,
"text": "(Glover , 1990a",
"ref_id": "BIBREF9"
},
{
"start": 428,
"end": 440,
"text": "(Dunn, 2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Finding CxGs",
"sec_num": "3"
},
{
"text": "Three levels of CxGs are learned: The first pass operates on only lexical representations, CxG LEX . This identifies purely lexical constructions: sequences of lexical items that have been fused together so that their internal structure can be ignored. For example, \"could be\" and \"will be\" are identified as single units when the algorithm is applied to English. Later passes view these lexical constructions as a single lexical item with a single syntactic type.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finding CxGs",
"sec_num": "3"
},
{
"text": "The second pass operates on only syntactic representations, CxG SY N . Syntactic constructions are later used as phrase structure rules. For example, when applied to English the sequence",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finding CxGs",
"sec_num": "3"
},
{
"text": "[VERB \u2212 NOUN]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finding CxGs",
"sec_num": "3"
},
{
"text": "is identified as a purely syntactic construction. In later passes, these sequences are converted into constituents that can be treated as a single unit. The third pass operates on all levels of representation, CxG F U LL . The grammar accumulates struc-ture across these iterations in the sense that constructions output from a previous pass become atomic units in the current pass. This set-up allows us to examine complexity and descriptive adequacy across CxGs with access to different levels of representation: do we actually benefit from more complex multi-level grammars?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finding CxGs",
"sec_num": "3"
},
{
"text": "This approach depends on the central insight of MDL (Rissanen, 1978 (Rissanen, , 1986 Gr\u00fcnwald & Rissanen, 2007) : a grammar is a method for encoding observed linguistic utterances and the learner is searching for the smallest adequate encoding method. Explanation here is a matter of prediction: can the grammar produce the utterances observed in held-out test-sets? The optimum grammar balances model complexity (the number and type of constructions in the grammar) and the amount of compression achieved when the model is used to encode a test corpus (c.f., Goldsmith, 2001; . The complexity of the grammar is balanced against its descriptive adequacy on a held-out corpus. This is formalized in MDL as",
"cite_spans": [
{
"start": 52,
"end": 67,
"text": "(Rissanen, 1978",
"ref_id": "BIBREF22"
},
{
"start": 68,
"end": 85,
"text": "(Rissanen, , 1986",
"ref_id": "BIBREF23"
},
{
"start": 86,
"end": 112,
"text": "Gr\u00fcnwald & Rissanen, 2007)",
"ref_id": "BIBREF14"
},
{
"start": 561,
"end": 577,
"text": "Goldsmith, 2001;",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Grammar Quality",
"sec_num": "4"
},
{
"text": "M DL = min G {L 1 (G) + L 2 (D | G)}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Grammar Quality",
"sec_num": "4"
},
{
"text": "This defines the optimum grammar as the one which minimizes the model complexity, represented by the encoding size of the grammar, plus the size of the dataset encoded by means of the grammar. Encoding size in MDL (here based on the natural log) is further defined as L C (X n ) = \u2212log e P (X n ) Methods for calculating the encoding size of CxGs are discussed below in Section 5. An additional term, L 3 (G), is sometimes used (Gr\u00fcnwald & Rissanen, 2007: 409) to control for the size of the encoding required for the universal code used to determine the size of G. This term is often not included in the MDL metric (it is not necessary when evaluating models against one another). It will be necessary here, however, when measuring grammar quality against the baseline of an unencoded test set. We are using the MDL principle as a metric for model selection. One aspect of model selection is confidence: to what degree is G A better than G B ? This is given by Higher values indicate a more significant difference between G A and G B (c.f., Gr\u00fcnwald & Rissanen, 2007: 411) . This measure of confidence will be useful for evaluating the quality of grammars against the baseline of an unencoded dataset. We can refine this measure of confidence further",
"cite_spans": [
{
"start": 428,
"end": 460,
"text": "(Gr\u00fcnwald & Rissanen, 2007: 409)",
"ref_id": null
},
{
"start": 1042,
"end": 1073,
"text": "Gr\u00fcnwald & Rissanen, 2007: 411)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Grammar Quality",
"sec_num": "4"
},
{
"text": "|M DL(G A ) \u2212 M DL(G B )|",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Grammar Quality",
"sec_num": "4"
},
{
"text": "1 \u2212 M DL(G A ) M DL(U )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Grammar Quality",
"sec_num": "4"
},
{
"text": "This is the relative degree of compression adjusted so that values close to 1 represent higher compression (U represents the size of the data without the grammar). Thus, if the MDL of G A is 256 and the unencoded MDL, U , is 927, this gives a compression of 0.7239 over the unencoded baseline as a measure of grammar quality. Negative values indicate that the grammar makes the MDL metric worse, an unlikely but possible occurrence. This ratio measure is important because, without it, the MDL metric and the significance of the metric are both dependent on the encoding size of a specific test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Grammar Quality",
"sec_num": "4"
},
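A minimal sketch of these model-selection quantities (hypothetical helper names; the component encoding sizes are computed as described in Section 5), reproducing the worked example of an MDL of 256 nats against an unencoded baseline of 927 nats:

```python
def mdl(l1_grammar, l2_data_given_grammar, l3_universal_code=0.0):
    """MDL metric in nats: grammar complexity plus the corpus encoding size,
    plus the universal-code term when comparing against an unencoded baseline."""
    return l1_grammar + l2_data_given_grammar + l3_universal_code

def confidence(mdl_a, mdl_b):
    """Absolute difference between two MDL scores; larger is more significant."""
    return abs(mdl_a - mdl_b)

def compression_ratio(mdl_grammar, mdl_unencoded):
    """Relative compression over the unencoded baseline U; values near 1 are better,
    negative values mean the grammar makes the encoding worse."""
    return 1.0 - (mdl_grammar / mdl_unencoded)

# Worked example from the text: MDL(G_A) = 256 nats, unencoded baseline U = 927 nats.
print(round(compression_ratio(256.0, 927.0), 3))  # ~0.724, as in the text
```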
{
"text": "We are searching for the grammar with the lowest MDL metric on a held-out test set, but we also need to measure the amount of variation across restarts. This provides a measure of stability: a restart is a search technique that restarts the search for the optimum grammar from scratch on a different portion of the corpus in order to determine if similar grammars are discovered. Let G k be the optimum grammar across restarts and G i...n be the set of all output grammars across restarts regardless of whether they are optimal. The agreement between the two grammars is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Grammar Quality",
"sec_num": "4"
},
{
"text": "A k i = (k \u2229 i) (k \u222a i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Grammar Quality",
"sec_num": "4"
},
{
"text": "The significance of the difference between the encoding quality of k and i relative to the encoding quality of the optimum grammar is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Grammar Quality",
"sec_num": "4"
},
{
"text": "M k i = 1 \u2212 |M DL(k) \u2212 M DL(i)| M DL(k)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Grammar Quality",
"sec_num": "4"
},
{
"text": "This is adjusted to make large differences closer to 0 and small differences closer to 1. The stability measure, ST A(k), is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Grammar Quality",
"sec_num": "4"
},
{
"text": "n i=0 A k i (M k i ) n",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Grammar Quality",
"sec_num": "4"
},
{
"text": "This is the mean agreement between the current grammar and the optimum grammar for all restarts n with each weighted so that more significant differences in the MDL metric lower the agreement. This is a joint measure of stability in grammar content and grammar quality, with higher scores (toward 1) indicating stable grammars and lower scores (toward 0) indicating unstable grammars.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Grammar Quality",
"sec_num": "4"
},
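A minimal sketch of the stability computation, treating each grammar as a set of constructions paired with its MDL score (hypothetical data structures; the real constructions are slot arrays rather than strings):

```python
def agreement(k, i):
    """A_k^i: Jaccard overlap between two grammars, each a set of constructions."""
    return len(k & i) / len(k | i)

def mdl_similarity(mdl_k, mdl_i):
    """M_k^i: 1 minus the relative MDL difference; near 1 means similar quality."""
    return 1.0 - abs(mdl_k - mdl_i) / mdl_k

def stability(optimum, restarts):
    """STA(k): mean MDL-weighted agreement between the optimum grammar and the
    restart grammars. `optimum` and each restart is a (grammar_set, mdl) pair."""
    k, mdl_k = optimum
    scores = [agreement(k, g) * mdl_similarity(mdl_k, mdl_g) for g, mdl_g in restarts]
    return sum(scores) / len(scores)

# Toy example with constructions named by strings (hypothetical).
g_best = ({"c1", "c2", "c3"}, 250.0)
others = [({"c1", "c2", "c3"}, 252.0), ({"c1", "c2", "c4"}, 265.0)]
print(round(stability(g_best, others), 3))
```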
{
"text": "This section has used the Minimum Description Length paradigm to develop measures of grammar complexity (i.e, L 1 ) and descriptive adequacy (i.e., L 2 ) that do not rely on gold-standard annotations. This is important for two reasons: First, we do not necessarily have gold-standard annotations for every language and language variety we are interested in (i.e., CxGs are also subject to variation).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Grammar Quality",
"sec_num": "4"
},
{
"text": "Second, simply relying on gold-standard annotations ignores the question we are most interested in: how do we know empirically that one grammar is better than another?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Grammar Quality",
"sec_num": "4"
},
{
"text": "The MDL paradigm depends on the concept of encoding size to measure complexity and descriptive adequacy: how do we calculate this for CxGs? The MDL metric contains three terms: L 1 , or the encoding size of the grammar; L 2 , or the encoding size of the corpus given the grammar; and L 3 , or the encoding size of the universal code necessary for encoding L 1 . Additionally, we need to determine the uncompressed encoding size of the corpus to serve as a baseline for measuring the overall rate of compression of competing grammars.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring the Encoding Size of CxGs",
"sec_num": "5"
},
{
"text": "The basic encoding model, shown in Figure 1 , has two top level categories composing its alphabet: Constructions (representations within the current CxG), and Regret (units not described by known constructions). Each of these top-level categories is assigned the same probability, 0.5, and thus, because encoding size is equivalent to \u2212log e P (X n ), each comes with an initial encoding size of 0.693 nats (where a nat is a bit based on the natural logarithm).",
"cite_spans": [],
"ref_spans": [
{
"start": 35,
"end": 43,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Measuring the Encoding Size of CxGs",
"sec_num": "5"
},
{
"text": "The reason for separating these top-level categories is that each has a different number of units, each of which is again assigned equal probability. For example, if there are 1,000 constructions in the grammar, then each usage of a construction costs 0.693 nats (for indicating a construction) and 6.907 nats (for pointing to a specific construction). Rather than assume that each construction in a given CxG is equally probable, an alternate approach is to assign probabilities to individual constructions and use these to determine the cost of encoding constructions on an individual basis. This problem is left for future work. Here, constructions are distinguished from one another only using (i) their relative complexity and (ii) the productivity of the particular grammar they belong to.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring the Encoding Size of CxGs",
"sec_num": "5"
},
{
"text": "The Regret category holds units in the corpus that are not described by a construction in the current grammar. Each occurrence of a nonconstruction unit is encoded on-the-fly: as the number of undescribed units increases, the cost in nats of encoding each occurrence also increases. For example, if there are 1,000 undescribed units the cost per unit is 0.693 nats plus 6.907 nats; if there are 10,000 undescribed units the cost per unit is 0.693 nats plus 9.210 nats. This cost is specific to a given dataset, not to a given model, because the cost per undescribed unit depends on the total number of undescribed units. It is important to note that each instance of a unit not described by the grammar is stored in the Regret category independently: this is a measure of model error.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring the Encoding Size of CxGs",
"sec_num": "5"
},
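The per-occurrence costs quoted above follow from equal-probability codes over the two top-level categories and their members; a small sketch (hypothetical function name) that reproduces the quoted figures:

```python
import math

def pointer_cost(n_items):
    """Cost in nats of indicating the top-level category (2 equiprobable options,
    0.693 nats) plus pointing to one of n_items equiprobable items within it."""
    return -math.log(0.5) - math.log(1.0 / n_items)

# Each use of a construction when the grammar holds 1,000 constructions:
print(round(pointer_cost(1_000), 3))   # 7.601  (0.693 + 6.907)

# Each occurrence of an undescribed (Regret) unit grows with the Regret inventory:
print(round(pointer_cost(10_000), 3))  # 9.903  (0.693 + 9.210)
```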
{
"text": "If the encoded dataset were transmitted, the model itself would need to be encoded and transmitted at the same time in order to decode the dataset; this is the information-theory rationale behind L 1 , the encoding size of the grammar. In linguistic terms, grammars with larger encoding sizes are more complex. The Regret category has already been encoded with unique pointers for each undescribed unit; thus, it does not incur an additional model cost. The cost of encoding the model, then, consists entirely of the cost of encoding each construction it contains: the sum of all unit-encoding costs for each slot-filler representation in the construction,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring the Encoding Size of CxGs",
"sec_num": "5"
},
{
"text": "N SLOT S i \u2212 log e ( 1 N R i ) + \u2212log e ( 1 T R )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring the Encoding Size of CxGs",
"sec_num": "5"
},
{
"text": "N SLOT S here is the number of slots in the construction being encoded, N (R i ) is the number of units available for a given representation type, and T R is the number of representation types total for the current grammar. This is the total cost of encoding both (i) which representation type (alphabet) fills the slot and (ii) which unit of that alphabet fills the slot.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring the Encoding Size of CxGs",
"sec_num": "5"
},
{
"text": "The full CxG has three representation types so that, for this grammar type, the encoding size for each slot is 1.098 nats (the cost of encoding a three-way distinction) plus log e (1/N ) where N is the total vocabulary of that unit type. Thus, if there are 20,000 lexical items in the vocabulary, the cost of encoding a construction with three lexicallyfilled slots is 11.001 nats per slot or 33.003 nats total. This is a one-time encoding cost: each occurrence of a construction is a pointer that incurs the encoding cost described above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring the Encoding Size of CxGs",
"sec_num": "5"
},
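A minimal sketch of this one-time grammar cost (hypothetical names; the alphabet sizes are illustrative, with the 14 syntactic unit types and k = 100 semantic domains taken from earlier sections), reproducing the three-slot lexical example:

```python
import math

# Illustrative alphabet sizes for CxG_FULL: lexical, syntactic, semantic units.
ALPHABET_SIZES = {"lex": 20_000, "syn": 14, "sem": 100}

def slot_cost(rep_type, alphabet_sizes=ALPHABET_SIZES):
    """Nats to encode one slot: which representation type (1 of 3), then which
    unit of that alphabet fills the slot."""
    type_cost = -math.log(1.0 / len(alphabet_sizes))      # ~1.098 nats
    unit_cost = -math.log(1.0 / alphabet_sizes[rep_type])
    return type_cost + unit_cost

def construction_cost(slots):
    """L_1 contribution of one construction: the sum of its slot costs."""
    return sum(slot_cost(rep) for rep, _ in slots)

# Three lexically-filled slots over a 20,000-word vocabulary: ~11.0 nats per
# slot, ~33.0 nats for the whole construction.
example = [("lex", "bill"), ("lex", "gave"), ("lex", "a hand")]
print(round(construction_cost(example), 3))
```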
{
"text": "The Regret category more properly belongs as an added term in L 2 , the size of the dataset as encoded by the grammar. However, in this case it clarifies the discussion of grammar complexity to show the impact that each unencoded unit has on the MDL metric as a whole. Note that the complexity cost includes L 3 or the cost of encoding the In other words, in order for each construction to be encoded we also have to encode the lexicon of lexical, syntactic, and semantic units used in construction descriptions. This is included as part of the cost of each construction in the grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring the Encoding Size of CxGs",
"sec_num": "5"
},
{
"text": "The data consists of atomic units from three types of representation. In order to maintain the grammar as a lossless encoding of the corpus, we define the task for CxG F U LL as encoding one of these representations for each unit. Importantly, this means that each unit needs to be represented by only one type of representation in the decoded version of the dataset; part of the learning task for CxG F U LL is to choose the optimum type of representation for each slot. What lossless encoding means, in practice, depends on the type of CxG. For CxG LEX lossless encoding means to return the same word-forms but for CxG SY N , it means to return the original sequence of parts-of-speech. It is important that CxG SY N is evaluated while not in Chomsky normal form in order to correctly encode the complexity of the grammar. Consider the individual phrase structure rules in (2a) through (2c) which map from a particular sequence of part-of-speech tags to a single constituent type. For CxG SY N internally each sequence is a construction (i.e., phrase structure rules are not typed). In the same way, for CxG LEX internally, each sequence of wordforms is not typed (i.e., assigned to a part-ofspeech category). Constructions from each of these passes need to be typed before filling slots in later passes. The part-of-speech tagger is used to assign lexical constructions to a single partof-speech. An additional algorithm (outside the scope of this paper but available in the external resources) converts CxG SY N sequences into phrase structure rules to support the CFG that allows longer sequences to fill individual slots.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maintaining Lossless Encoding",
"sec_num": "5.1"
},
{
"text": "(2a) DET -NOUN \u2192 NOUN (2b) NOUN \u2192 NOUN (2c) NOUN -NOUN \u2192 NOUN",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maintaining Lossless Encoding",
"sec_num": "5.1"
},
{
"text": "The point is that, while the representations in (2a) through (2c) do not provide a lossless encoding of the observed utterances, the MDL metric is not applied to these representations but to their untyped forms (e.g., [DET -NOUN] ). The CxG encoding system consists of AtomicU nits located within Constructions. As the level of abstraction increases (i.e., as we go through multiple iterations), members of the Construction repository for the current pass become members of the AtomicU nits repository for the next pass. Thus, lexical constructions are considered part of Constructions in CxG LEX but part of AtomicU nits in CxG SY N . The effect of this is to maintain lossless encoding at each level of abstraction while incorporating previously learned representations into the next level of abstraction.",
"cite_spans": [
{
"start": 218,
"end": 229,
"text": "[DET -NOUN]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Maintaining Lossless Encoding",
"sec_num": "5.1"
},
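A toy sketch of how syntactic constructions act as phrase structure rules that fold longer sequences into single constituents (assumed rule format; the released implementation handles this inside the encoder):

```python
# Minimal sketch of constituent folding with learned phrase structure rules,
# e.g. (2a) DET NOUN -> NOUN. The rule format here is a hypothetical stand-in.
RULES = [
    (("DET", "NOUN"), "NOUN"),   # (2a)
    (("NOUN", "NOUN"), "NOUN"),  # (2c)
]

def fold_constituents(pos_sequence, rules=RULES):
    """Greedily rewrite the POS sequence left-to-right until no rule applies."""
    seq = list(pos_sequence)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            n = len(lhs)
            for i in range(len(seq) - n + 1):
                if tuple(seq[i:i + n]) == lhs:
                    seq[i:i + n] = [rhs]   # the matched span becomes one constituent
                    changed = True
                    break
            if changed:
                break
    return seq

# "the dog barked": DET NOUN VERB folds to NOUN VERB.
print(fold_constituents(["DET", "NOUN", "VERB"]))  # ['NOUN', 'VERB']
```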
{
"text": "This means that grammar complexity is not directly comparable across iterations because each iteration is encoding a different level of abstraction. For example, the task for CxG SY N is to provide a lossless encoding of sequences of syntactic units (out of an inventory of 14 unit types). A relatively small number of syntactic sequences will be able to form phrase structure rules that, taken together, provide a high rate of compression. A full CxG, however, must do much more than predict sequences of syntactic units because it also incorporates lexical and semantic representations. On the other hand, though, the full MDL metric is comparable across iterations because it balances complexity and descriptive adequacy: does the more complex CxG F U LL provide enough descriptive adequacy to justify incorporating multiple types of representation?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maintaining Lossless Encoding",
"sec_num": "5.1"
},
{
"text": "6 Searching Over Potential CxGs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maintaining Lossless Encoding",
"sec_num": "5.1"
},
{
"text": "The search algorithm has three components: (i) randomly initializing the starting state, or what set of constructions belongs in the initial grammar; (ii) an indirect tabu search (Glover, 1989 (Glover, , 1990a to move toward the optimum grammar by updating construction sampling parameters; and (iii) a direct tabu search across constructions to determine if small changes in the inventory of constructions improves the quality of the current CxG. The first tabu search takes a randomly initialized starting state and searches for improved grammars by exploring different sampling parameters. These parameters take a number of association measures (c.f., Dunn, 2017) and use them to determine which constructions belong in the grammar. The essential idea of tabu search is (i) to define the set of possible moves from the current grammar state to a new grammar state and (ii) to combine tabu restrictions and aspiration criteria to move the search toward promising areas of the overall search space that are not directly reachable from the current state. We divide the parameter space into n discrete values for each of the 30 direction-specific association measures, with the maximum and minimum values defined empirically. This provides a finite set of possible moves from any given state.",
"cite_spans": [
{
"start": 179,
"end": 192,
"text": "(Glover, 1989",
"ref_id": null
},
{
"start": 193,
"end": 209,
"text": "(Glover, , 1990a",
"ref_id": "BIBREF9"
},
{
"start": 655,
"end": 666,
"text": "Dunn, 2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Maintaining Lossless Encoding",
"sec_num": "5.1"
},
{
"text": "For each turn, the algorithm generates a set of possible moves and, after evaluating each, takes the best available move even if it reduces the over-all grammar quality. Best is defined using the MDL-metric: the best move is that which provides the smallest MDL-metric of all possible moves. Available is defined as a move that is either (i) not on the tabu list or (ii) satisfies an aspiration criteria that overrules the tabu list. The tabu list is a shortterm memory item that contains the last n moves, each represented using the association measures that have been changed. For practical reasons, n is set at 7 (c.f., Glover, 1990b) ; this means that for any given turn the best move cannot involve a sampling parameter that has been changed over the last 7 turns. This prevents the algorithm from cycling between local optima in an endless loop. The aspiration criteria used is that the grammar produced by a move is not only the best available grammar but also the best observed grammar: a new global minimization of the MDL metric. Thus, the tabu against altering a recently changed sampling parameter can be overruled if that change creates a new best grammar. The use of such an aspiration criteria makes intuitive sense: the tabu search is designed to prevent cycling between previously visited states, but a grammar which reaches a new global minimum has not previously been visited.",
"cite_spans": [
{
"start": 623,
"end": 637,
"text": "Glover, 1990b)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Maintaining Lossless Encoding",
"sec_num": "5.1"
},
{
"text": "Three types of moves are available at each turn: First, a parameter can be removed from the current sampler (i.e., OFF); this allows the tabu search to eliminate sampling parameters that reduce grammar quality. Second, a parameter can be grouped with n randomly chosen changes to other parameters (i.e., AND); this allows the tabu search to explore states similar to the current grammar. Third, a parameter can be allowed to overrule all other parameters (i.e., OR); this allows the tabu search to move toward better but more distant states.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maintaining Lossless Encoding",
"sec_num": "5.1"
},
{
"text": "Potential moves for each turn are generated as follows: for each association measure, one move is added with that measure removed from the sampler (OFF); two OR moves above and two below the feature's current threshold, serving as escape hatches; and 25 AND moves that include the current feature and 1...k other features (k = 5). The stopping criteria is that a new best grammar has not been observed for 14 turns, twice the size of the tabu list. This stopping criteria is an intermediate memory item that monitors the general direction of the search. The intuition is that, if a new optimum grammar has not been reached within two complete cycles of the short-term tabu list, such a grammar is unlikely to exist. It is important to keep in mind that each turn evaluates a wide range of possible moves. This means that a large number of potential grammars are evaluated in determining each move. Given the size of the space reachable from any given state and the number of states visited during the tabu search, it is unlikely at this point that a significantly better grammar exists.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maintaining Lossless Encoding",
"sec_num": "5.1"
},
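The outer loop of the parameter-level tabu search can be sketched as follows; generate_moves, apply_move, and mdl_metric are hypothetical stand-ins for the move generation and grammar evaluation described above, and AND moves are simplified to a single changed parameter:

```python
from collections import deque

TABU_SIZE = 7        # the last 7 changed parameters are tabu
PATIENCE = 14        # stop after 14 turns without a new best grammar

def tabu_search(initial_state, generate_moves, apply_move, mdl_metric):
    """Minimize the MDL metric over sampling-parameter states."""
    current = initial_state
    best_state, best_score = current, mdl_metric(current)
    tabu = deque(maxlen=TABU_SIZE)   # short-term memory of changed parameters
    stalled = 0

    while stalled < PATIENCE:
        candidates = []
        for move in generate_moves(current):          # OFF / AND / OR moves
            state = apply_move(current, move)
            score = mdl_metric(state)
            aspires = score < best_score              # a new global best overrules tabu
            if move.parameter not in tabu or aspires:
                candidates.append((score, move, state))
        if not candidates:
            break
        score, move, state = min(candidates, key=lambda c: c[0])
        current = state                               # take the best available move,
        tabu.append(move.parameter)                   # even if it is worse
        if score < best_score:
            best_state, best_score = state, score
            stalled = 0
        else:
            stalled += 1
    return best_state
```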
{
"text": "The evaluation uses web-crawled corpora (from the WaC and Aranea projects) for English, Spanish, French, German, and Italian. The same data segmentation shown in Table 3 is used for each language. Each grammar type is evaluated using cross-validation with two folds; the trainingtesting split is randomly assigned. The search stage uses two restarts, each with a unique segment of the training data. This means that the learning algorithm makes four passes per iteration (two folds with two restarts) over which we can measure stability. Figure 2 , is the compression achieved over the unencoded dataset on held-out testing data. Values close to 1 represent a large amount of compression while values close to 0 represent very little compression. We see across languages that lexical constructions (i.e., \"because of\") do not provide much compression. In part, this is because few such constructions are selected: an average of 22 per language.",
"cite_spans": [],
"ref_spans": [
{
"start": 162,
"end": 169,
"text": "Table 3",
"ref_id": null
},
{
"start": 538,
"end": 547,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "7"
},
{
"text": "Purely syntactic constructions, however, do provide compression (with an average of 120 identified per language). For all languages except English CxG F U LL has the highest rate of compression, with each grammar containing between 4k and 5k constructions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "7"
},
{
"text": "This measure shows us three things: First, there is a balance between complexity and descriptive adequacy that is forced by MDL. In other words, the descriptive power of the purely lexical constructions are only able to justify 22 constructions as opposed to 4k -5k for CxG F U LL . CxGs with multiple types of representations are allowed to produce more complex grammars because they produce better descriptions of the corpus. Second, the learned grammars provide meaningful generalizations. In other words, this compression metric shows that not only does the algorithm find the optimum grammar with respect to competing grammars but that it also finds grammars that offer above-the-baseline compression. Full compression is, of course, impossible and these results provide a benchmark for future work. Third, these results show that the addition of semantic representations provide improved descriptive adequacy. A representative sample of the output of CxG F U LL for English is shown in Appendix A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "7"
},
{
"text": "How consistent are the grammars learned across different sub-sets of the corpora? This is shown in Figure 3 using the stability metric introduced in Section 4 over the grammars produced from different sub-sets of the corpora. We see that more complicated grammars are less stable. Thus, CxG LEX has low compression but almost perfect stability because the same small number of lexical constructions are consistently identified. CxG F U LL , on the other hand, has a much larger number of constructions that provide much higher compression; but the inventory of these constructions is subject to more variation.",
"cite_spans": [],
"ref_spans": [
{
"start": 99,
"end": 107,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "7"
},
{
"text": "Lack of stability here is not necessarily caused by error: grammars are subject to variation. Some amount of this variation results from errors: tagging errors, parsing errors, and learning errors in which the search algorithm does not converge on the best grammar. Some amount of this variation, however, comes from differences in usage across different portions of the corpus: these large corpora contain many varieties, dialects, domains, and speakers, each introducing variant constructions. To what degree do these variations represent error and to what degree do they represent actual grammatical differences across the corpora? That is a question for future work because it requires testing grammars over data explicitly drawn from different varieties of a language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "7"
},
{
"text": "This paper has shown that the MDL paradigm can be used to jointly model the complexity and descriptive adequacy of CxGs against unannotated corpora. This is important because methods that rely on gold-standard annotations to evaluate grammar quality ultimately depend on the introspections behind those annotations. How valid are these CxGs using external measures? One application-specific evaluation of a learned gram-mar is its ability to model dialectal variations. Separate work using these learned CxGs for dialectometry (Dunn, Forthcoming) shows that these grammars are able to model regional varieties with a high degree of accuracy.",
"cite_spans": [
{
"start": 527,
"end": 546,
"text": "(Dunn, Forthcoming)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "7"
},
{
"text": "Resources. Code and models for this work are available at jdunn.name and github.com/jonathandunn/c2xg",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "7"
},
{
"text": "Proceedings of the Society for Computation in Linguistics (SCiL) 2018, pages 81-90. Salt Lake City, Utah, January 4-7, 2018",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Acknowledgements. This research was supported in part by an appointment to the Visiting Scientist Fellowship at the National Geospatial-Intelligence Agency administered by the Oak Ridge Institute for Science and Education through an interagency agreement between the U.S. Department of Energy and NGA. The views expressed in this presentation are the author's and do not imply endorsement by the DoD or the NGA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
},
{
"text": "Appendix A: Representative Examples [ADVERB -\"about\"] Modified Adverbs \"at about\" This simple construction modifies adverbs \"how about\"to include information about vagueness. \"only about\" \"on about\" [\"provide\" -25 -25] Verb-Specific Direct Object \"provide added value\"This verb-specific construction constrains \"provide an opportunity\" the object of \"provide\" to members of an \"provide general advice\" unlabeled semantic domain. \"provide information about\" [25 -\"to\" -14]Complex Verb Phrase \"designed to ensure\"This construction represents a complex event \"want to improve\" phrase that contains both a main verb, \"want,\" \"made to ensure\" as well as an infinitive verb, \"improve.\" \"able to understand\" [VERB -\"to\" -25 -ADVERB]Evaluative Verb Phrase \"need to consider how\" This construction describes a basic verb phrase \"wish to consider how\" embedded within an evaluative verb describing \"want to be here\" how the speaker perceives the event. \"like to find out\" [DETERMINER -NOUN -ADPOSITION -14] Complex Noun Phrase \"some experience in research\" This construction encodes a noun phrase that \"a need for research\" contains a modifying prepositional phrase. \"the process of planning\" \"a number of activities\"Subordinated Noun Phrase \"whether small independent companies\" This construction provides sub-ordinated \"that the international community\" noun phrases that attach to main clause verbs \"because the current version\"and then act as the subject for additional \"while the other party\" modifying material that remains unspecified.Partial Main Clause \"you should continue to receive\" This construction represents the largest \"i was told to make\" representations that are identified by the \"they were going to have\" algorithm; it specifies most of a main clause \"this was going to be\" with a pronominal subject.",
"cite_spans": [
{
"start": 36,
"end": 53,
"text": "[ADVERB -\"about\"]",
"ref_id": null
},
{
"start": 199,
"end": 218,
"text": "[\"provide\" -25 -25]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The WaCky Wide Web: A Collection of Very Large Linguistically Processed Web-crawled Corpora. Language Resources and Evaluation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bernardini",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ferraresi",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Zanchetta",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "43",
"issue": "",
"pages": "209--226",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baroni, M., Bernardini, S., Ferraresi, A., and Zanchetta, E. 2009. The WaCky Wide Web: A Collection of Very Large Linguistically Processed Web-crawled Corpora. Lan- guage Resources and Evaluation, 43: 209-226.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Aranea: Yet Another Family of (Comparable) Web Corpora",
"authors": [
{
"first": "V",
"middle": [],
"last": "Benko",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of Text, Speech and Dialogue. 17th International Conference",
"volume": "",
"issue": "",
"pages": "257--264",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benko, V. 2014. Aranea: Yet Another Family of (Compa- rable) Web Corpora. In Proceedings of Text, Speech and Dialogue. 17th International Conference. 257-264.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Scalable Construction-based Parsing and Semantic Analysis",
"authors": [
{
"first": "J",
"middle": [],
"last": "Bryant",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Workshop on Scalable Natural Language Understanding",
"volume": "",
"issue": "",
"pages": "33--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bryant, J. 2004. Scalable Construction-based Parsing and Se- mantic Analysis. In Proceedings of the Workshop on Scal- able Natural Language Understanding (HLT-NAACL): 33-40.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "From Usage to Grammar: The Mind's Response to Repetition",
"authors": [
{
"first": "J",
"middle": [],
"last": "Bybee",
"suffix": ""
}
],
"year": 2006,
"venue": "Language",
"volume": "82",
"issue": "4",
"pages": "711--733",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bybee, J. 2006. From Usage to Grammar: The Mind's Re- sponse to Repetition. Language, 82(4): 711-733.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Computational construction grammar: Comparing ECG and FCG",
"authors": [
{
"first": "N",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "De Beule",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Micelli",
"suffix": ""
}
],
"year": 2012,
"venue": "Computational Issues in Fluid Construction Grammar",
"volume": "",
"issue": "",
"pages": "259--288",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chang, N.; De Beule, J.; and Micelli, V. 2012. Computa- tional construction grammar: Comparing ECG and FCG. In Steels, L. (ed.), Computational Issues in Fluid Con- struction Grammar. Berlin: Springer. 259-288.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Computational Learning of Construction Grammars",
"authors": [
{
"first": "J",
"middle": [],
"last": "Dunn",
"suffix": ""
}
],
"year": 2017,
"venue": "Language & Cognition",
"volume": "9",
"issue": "2",
"pages": "254--292",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dunn, J. 2017. Computational Learning of Construction Grammars. Language & Cognition, 9(2): 254-292.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Finding Variants for Construction-Based Dialectometry: A Corpus-Based Approach to Regional CxGs",
"authors": [
{
"first": "J",
"middle": [],
"last": "Dunn",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Forthcoming",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dunn, J. Forthcoming. Finding Variants for Construction- Based Dialectometry: A Corpus-Based Approach to Re- gional CxGs. Cognitive Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "From Construction Candidates to Constructicon Entries: An experiment using semi-automatic methods for identifying constructions in corpora",
"authors": [
{
"first": "M",
"middle": [],
"last": "Forsberg",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Johansson",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Bckstrm",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Borin",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Lyngfelt",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Olofsson",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Prentice",
"suffix": ""
}
],
"year": 2014,
"venue": "Constructions and Frames",
"volume": "6",
"issue": "1",
"pages": "114--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Forsberg, M.; Johansson, R.; Bckstrm, L.; Borin, L.; Lyn- gfelt, B.; Olofsson, J.; and Prentice, J. 2014. From Con- struction Candidates to Constructicon Entries: An exper- iment using semi-automatic methods for identifying con- structions in corpora. Constructions and Frames, 6(1): 114-135.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Tabu Search, Part 2",
"authors": [
{
"first": "",
"middle": [
"F"
],
"last": "Glover",
"suffix": ""
}
],
"year": 1990,
"venue": "ORSA Journal on Computing",
"volume": "2",
"issue": "1",
"pages": "4--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Glover. F. 1990a. Tabu Search, Part 2. ORSA Journal on Computing, 2(1): 4-32.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Tabu Search: A Tutorial. Interfaces",
"authors": [
{
"first": "",
"middle": [
"F"
],
"last": "Glover",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "20",
"issue": "",
"pages": "74--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Glover. F. 1990b. Tabu Search: A Tutorial. Interfaces, 20(4): 74-94.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Constructions at Work: The Nature of Generalization in Language",
"authors": [
{
"first": "A",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Goldberg, A. 2006. Constructions at Work: The Nature of Generalization in Language. Oxford: Oxford University Press.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Unsupervised Learning of the Morphology of a Natural Language",
"authors": [
{
"first": "J",
"middle": [],
"last": "Goldsmith",
"suffix": ""
}
],
"year": 2001,
"venue": "Computational Linguistics",
"volume": "27",
"issue": "2",
"pages": "153--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Goldsmith, J. 2001. Unsupervised Learning of the Morphol- ogy of a Natural Language. Computational Linguistics, 27(2): 153-198.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "An Algorithm for the Unsupervised Learning of Morphology",
"authors": [
{
"first": "J",
"middle": [],
"last": "Goldsmith",
"suffix": ""
}
],
"year": 2006,
"venue": "Natural Language Engineering",
"volume": "12",
"issue": "4",
"pages": "353--371",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Goldsmith, J. 2006. An Algorithm for the Unsupervised Learning of Morphology. Natural Language Engineering, 12(4): 353-371.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The Minimum Description Length Principle",
"authors": [
{
"first": "P",
"middle": [],
"last": "Gr\u00fcnwald",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Rissanen",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gr\u00fcnwald, P. and Rissanen, J. 2007. The Minimum Descrip- tion Length Principle. Cambridge, MA: The MIT Press.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Emergent Grammar",
"authors": [
{
"first": "P",
"middle": [],
"last": "Hopper",
"suffix": ""
}
],
"year": 1987,
"venue": "Proceedings of the 13th Annual Meeting of the Berkeley Linguistics Society",
"volume": "",
"issue": "",
"pages": "139--157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hopper, P. 1987. Emergent Grammar. In Proceedings of the 13th Annual Meeting of the Berkeley Linguistics Society, 139-157.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Grammatical Constructions and Linguistic Generalizations: The Whats X Doing Y? Construction. Language",
"authors": [
{
"first": "P",
"middle": [],
"last": "Kay",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Fillmore",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "75",
"issue": "",
"pages": "1--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kay, P. and Fillmore, C. 1999. Grammatical Constructions and Linguistic Generalizations: The Whats X Doing Y? Construction. Language, 75(1): 1-33.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Cognitive Grammar: A Basic Introduction",
"authors": [
{
"first": "R",
"middle": [],
"last": "Langacker",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Langacker, R. 2008. Cognitive Grammar: A Basic Introduc- tion. Oxford: Oxford University Press.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A Robust Transformation-Based Learning Approach Using Ripple Down Rules for Part-Of-Speech Tagging",
"authors": [
{
"first": "Son",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bao",
"suffix": ""
}
],
"year": 2016,
"venue": "AI Communications",
"volume": "29",
"issue": "3",
"pages": "409--422",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "and Pham, Son Bao. 2016. A Robust Transformation- Based Learning Approach Using Ripple Down Rules for Part-Of-Speech Tagging. AI Communications, 29(3): 409-422.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A Universal Part-of-Speech Tagset",
"authors": [
{
"first": "S",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mcdonald",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Eight International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Petrov, S.; Das, D.; and McDonald, R. 2012. A Universal Part-of-Speech Tagset. In Proceedings of the Eight Inter- national Conference on Language Resources and Evalua- tion.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Software Framework for Topic Modelling with Large Corpora",
"authors": [
{
"first": "R",
"middle": [],
"last": "Rehurek",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Sojka",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rehurek, R. and Sojka, P. 2010. Software Framework for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Modeling by the Shortest Data Description",
"authors": [
{
"first": "J",
"middle": [],
"last": "Rissanen",
"suffix": ""
}
],
"year": 1978,
"venue": "Automatica",
"volume": "14",
"issue": "",
"pages": "465--471",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rissanen, J. 1978. Modeling by the Shortest Data Descrip- tion. Automatica, 14: 465-471.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Stochastic Complexity and Modeling. Annals of Statistics",
"authors": [
{
"first": "J",
"middle": [],
"last": "Rissanen",
"suffix": ""
}
],
"year": 1986,
"venue": "",
"volume": "14",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rissanen, J. 1986. Stochastic Complexity and Modeling. An- nals of Statistics, 14: 1,080-1,100.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Constructivist development of grounded construction grammar",
"authors": [
{
"first": "L",
"middle": [],
"last": "Steels",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "9--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steels, L. 2004. Constructivist development of grounded construction grammar. In Proceedings of the 42nd Meet- ing of the Association for Computational Linguistics: 9- 16.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Computational Issues in Fluid Construction Grammar",
"authors": [
{
"first": "L",
"middle": [],
"last": "Steels",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "3--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steels, L. 2012. Design methods for fluid construction gram- mar. In Steels, L. (ed), Computational Issues in Fluid Construction Grammar. Berlin: Springer. 3-36.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "StringNet as a Computational Resource for Discovering and Investigating Linguistic Constructions",
"authors": [
{
"first": "D",
"middle": [],
"last": "Wible",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Tsao",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Workshop on Extracting and Using Constructions in Computational Linguistics",
"volume": "",
"issue": "",
"pages": "25--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wible, D. and Tsao, N. 2010. StringNet as a Computa- tional Resource for Discovering and Investigating Lin- guistic Constructions. In Proceedings of the Workshop on Extracting and Using Constructions in Computational Linguistics (NAACL-HTL): 25-31.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Figure 1: Encoding Model",
"num": null,
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"text": "Compression Rates Across Grammar Types encoding size of the grammar.",
"num": null,
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"text": "Stability Over Folds Across Grammar Types",
"num": null,
"uris": null
},
"TABREF0": {
"num": null,
"text": "Construction Notation and Examples",
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF3": {
"num": null,
"text": "Data Segmentation Per FoldThe first measure of grammar quality, in",
"html": null,
"type_str": "table",
"content": "<table/>"
}
}
}
}