|
{ |
|
"paper_id": "W13-0110", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T04:54:36.180817Z" |
|
}, |
|
"title": "Probabilistic induction for an incremental semantic grammar *", |
|
"authors": [ |
|
{ |
|
"first": "Arash", |
|
"middle": [], |
|
"last": "Eshghi", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Cognitive Science Research Group", |
|
"institution": "Queen Mary University of London", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Purver", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Cognitive Science Research Group", |
|
"institution": "Queen Mary University of London", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Julian", |
|
"middle": [], |
|
"last": "Hough", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Cognitive Science Research Group", |
|
"institution": "Queen Mary University of London", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We describe a method for learning an incremental semantic grammar from a corpus in which sentences are paired with logical forms as predicate-argument structure trees. Working in the framework of Dynamic Syntax, and assuming a set of generally available compositional mechanisms, we show how lexical entries can be learned as probabilistic procedures for the incremental projection of semantic structure, providing a grammar suitable for use in an incremental probabilistic parser. By inducing these from a corpus generated using an existing grammar, we demonstrate that this results in both good coverage and compatibility with the original entries, without requiring annotation at the word level. We show that this semantic approach to grammar induction has the novel ability to learn the syntactic and semantic constraints on pronouns. * We would like to thank Ruth Kempson and Yo Sato for helpful comments and discussion.", |
|
"pdf_parse": { |
|
"paper_id": "W13-0110", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We describe a method for learning an incremental semantic grammar from a corpus in which sentences are paired with logical forms as predicate-argument structure trees. Working in the framework of Dynamic Syntax, and assuming a set of generally available compositional mechanisms, we show how lexical entries can be learned as probabilistic procedures for the incremental projection of semantic structure, providing a grammar suitable for use in an incremental probabilistic parser. By inducing these from a corpus generated using an existing grammar, we demonstrate that this results in both good coverage and compatibility with the original entries, without requiring annotation at the word level. We show that this semantic approach to grammar induction has the novel ability to learn the syntactic and semantic constraints on pronouns. * We would like to thank Ruth Kempson and Yo Sato for helpful comments and discussion.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Dynamic Syntax (DS) is an inherently incremental semantic grammar formalism (Kempson et al., 2001; Cann et al., 2005) in which semantic representations are projected on a word-by-word basis. It recognises no intermediate layer of syntax (see below), but instead reflects grammatical constraints via constraints on the incremental construction of partial logical forms (LFs). Given this, and its definition of parsing and generation in terms of the same incremental processes, it is in principle capable of modelling and providing semantic interpretations for phenomena such as unfinished utterances, co-constructions and interruptions, beyond the remit of standard grammar formalisms but important for dialogue systems.", |
|
"cite_spans": [ |
|
{ |
|
"start": 76, |
|
"end": 98, |
|
"text": "(Kempson et al., 2001;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 99, |
|
"end": 117, |
|
"text": "Cann et al., 2005)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "However, its definition in terms of semantics (rather than the more familiar syntactic phrase structure) makes it hard to define or extend broad-coverage grammars: expert linguists are required. Here, we present a method for automatically inducing DS grammars, by learning lexical entries from sentences paired with complete, compositionally structured, propositional LFs. By assuming only the availability of a small set of general compositional semantic operations, reflecting the properties of the lambda calculus and semantic conjunction, we ensure that the lexical entries learnt include the grammatical constraints and corresponding compositional semantic structure of the language; by additionally assuming a general semantic copying operation, we can also learn the syntactic and semantic properties of pronouns.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Existing grammar induction methods can be divided into two major categories: supervised and unsupervised. Fully supervised methods use a parsed corpus as the training data, pairing sentences with syntactic trees and words with their syntactic categories, and generalise over the phrase structure rules to learn a grammar which can be applied to a new set of data. By estimating probabilities for production rules that share the same LHS category, this produces a grammar suitable for probabilistic parsing and disambiguation (e.g. PCFGs, Charniak, 1996) . Such methods have shown great success, but presuppose detailed prior linguistic information (and are thus not adequate as human grammar learning models). Unsupervised methods, on the other hand, proceed from unannotated raw data; they are thus closer to the human language acquisition setting, but have seen less success. In its pure form -positive data only, without bias-unsupervised learning has been demonstrated to be computationally too complex ('unlearnable') in the worst case (Gold, 1967) . Successful approaches involve some prior learning or bias, e.g. a fixed set of known lexical categories, a probability distribution bias (Klein and Manning, 2005) or a hybrid, semi-supervised method with shallower (e.g. POS-tagging) annotation (Pereira and Schabes, 1992) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 538, |
|
"end": 553, |
|
"text": "Charniak, 1996)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 1007, |
|
"end": 1022, |
|
"text": "('unlearnable')", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1041, |
|
"end": 1053, |
|
"text": "(Gold, 1967)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 1193, |
|
"end": 1218, |
|
"text": "(Klein and Manning, 2005)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 1300, |
|
"end": 1327, |
|
"text": "(Pereira and Schabes, 1992)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous work on grammar induction", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "More recently, another interesting line of work has emerged: lightly supervised learning guided by semantic rather than syntactic annotation, using sentence-level propositional logical form rather than detailed word-level annotation (more justifiably arguable to be 'available' to a human learner in a realworld situation, with some idea of what a string in an unknown language could mean). This has been successfully applied in Combinatorial Categorial Grammar (Steedman, 2000) , as it tightly couples compositional semantics with syntax (Zettlemoyer and Collins, 2007; Kwiatkowski et al., 2010 Kwiatkowski et al., , 2012 ; as CCG is a lexicalist framework, grammar learning involves inducing a lexicon assigning to each word its syntactic and semantic contribution. Moreover, the grammar is learnt ground-up in an 'incremental' fashion, in the sense that the learner collects data over time and does the learning sentence by sentence.", |
|
"cite_spans": [ |
|
{ |
|
"start": 462, |
|
"end": 478, |
|
"text": "(Steedman, 2000)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 539, |
|
"end": 570, |
|
"text": "(Zettlemoyer and Collins, 2007;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 571, |
|
"end": 595, |
|
"text": "Kwiatkowski et al., 2010", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 596, |
|
"end": 622, |
|
"text": "Kwiatkowski et al., , 2012", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous work on grammar induction", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Here we follow this spirit, inducing grammar from a propositional meaning representation and building a lexicon which specifies what each word contributes to the target semantics. However, taking advantage of the DS formalism, we make two novel contributions: first, we bring an added dimension of incrementality: not only is learning sentence-by-sentence incremental, but the grammar learned is word-by-word incremental, commensurate with psycholinguistic results showing incrementality to be a fundamental feature of human parsing and production Lombardo and Sturt (1997) ; Ferreira and Swets (2002) . While incremental parsing algorithms for standard grammar formalisms have seen much research (Hale, 2001; Collins and Roark, 2004; Clark and Curran, 2007) , to the best of our knowledge, a learning system for an explicitly incremental grammar is yet to be presented. Second, by using a grammar in which syntax and parsing context are defined in terms of the growth of semantic structures, we can learn lexical entries for items such as pronouns the constraints on which depend on semantic context.", |
|
"cite_spans": [ |
|
{ |
|
"start": 548, |
|
"end": 573, |
|
"text": "Lombardo and Sturt (1997)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 576, |
|
"end": 601, |
|
"text": "Ferreira and Swets (2002)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 697, |
|
"end": 709, |
|
"text": "(Hale, 2001;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 710, |
|
"end": 734, |
|
"text": "Collins and Roark, 2004;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 735, |
|
"end": 758, |
|
"text": "Clark and Curran, 2007)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous work on grammar induction", |
|
"sec_num": "2" |
|
}, |
|
{

"text": "Figure 1: Incremental parsing in DS producing semantic trees: \"John upset Mary\". [Figure: the sequence of partial semantic trees built word by word for \"john\", \"upset\" and \"mary\"; tree diagrams not reproduced.]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dynamic Syntax",

"sec_num": null

},
|
{ |
|
"text": "Dynamic Syntax is a parsing-directed grammar formalism, which models the word-by-word incremental processing of linguistic input. Unlike many other formalisms, DS models the incremental building up of interpretations without presupposing or indeed recognising an independent level of syntactic processing. Thus, the output for any given string of words is a purely semantic tree representing its predicate-argument structure; tree nodes correspond to terms in the lambda calculus, decorated with la-bels expressing their semantic type (e.g. T y(e)) and formula, with beta-reduction determining the type and formula at a mother node from those at its daughters ( Figure 1 ). These trees can be partial, containing unsatisfied requirements for node labels (e.g. ?T y(e) is a requirement for future development to T y(e)), and contain a pointer \u2666 labelling the node currently under development. Grammaticality is defined as parsability: the successful incremental construction of a tree with no outstanding requirements (a complete tree) using all information given by the words in a sentence. The input to our induction task here is therefore sentences paired with such complete, semantic trees, and what we learn are constrained lexical procedures for the incremental construction of such trees. Note that in these trees, leaf nodes do not necessarily correspond to words, and may not be in linear sentence order (see Figure 1) ; and syntactic structure is not explicitly represented, only the structure of semantic predicate-argument combination.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 662, |
|
"end": 670, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1417, |
|
"end": 1426, |
|
"text": "Figure 1)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dynamic Syntax", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The parsing process is defined in terms of conditional actions: procedural specifications for monotonic tree growth. These take the form both of general structure-building principles (computational actions), putatively independent of any particular natural language, and of language-specific actions induced by parsing particular lexical items (lexical actions). The latter are what we here try to learn from data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Actions in DS", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Computational actions These form a small, fixed set, and we assume them as given here. Some merely encode the properties of the lambda calculus and the logical tree formalism itself (LoFT Blackburn and Meyer-Viol, 1994 ) -these we term inferential actions. Examples include THINNING (removal of satisfied requirements) and ELIMINATION (beta-reduction of daughter nodes at the mother). These actions are entirely language-general, cause no ambiguity, and add no new information to the tree; as such, they apply non-optionally whenever their preconditions are met.", |
|
"cite_spans": [ |
|
{ |
|
"start": 188, |
|
"end": 218, |
|
"text": "Blackburn and Meyer-Viol, 1994", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Actions in DS", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Other computational actions reflect the fundamental predictivity and dynamics of the DS framework. For example, *ADJUNCTION introduces a single unfixed node with underspecified tree position (replacing feature-passing concepts for e.g. long-distance dependency); and LINK-ADJUNCTION builds a paired (\"linked\") tree corresponding to semantic conjunction (licensing relative clauses, apposition and more). These actions represent possible parsing strategies and can apply optionally at any stage of a parse if their preconditions are met. While largely language-independent, some are specific to language type (e.g. INTRODUCTION-PREDICTION in the form used here applies only to SVO languages).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Actions in DS", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The lexicon associates words with lexical actions; like computational actions, these are sequences of tree-update actions in an IF..THEN..ELSE format, and composed of explicitly procedural atomic tree-building actions such as make, go, put. make creates a new daughter node, go moves the pointer, and put decorates the pointed node with a label. Figure 2 shows an example for a proper noun, John. The action checks whether the pointed node (marked as \u2666) has a requirement for type e; if so, it decorates it with type e (thus satisfying the requirement), formula John \u2032 and the bottom restriction \u2193 \u22a5 (meaning that the node cannot have any daughters). Otherwise (if no requirement ?T y(e)), the action aborts, meaning that the word 'John' cannot be parsed in the context of the current tree.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 346, |
|
"end": 354, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Lexical actions", |
|
"sec_num": null |
|
}, |
|
{

"text": "Figure 2: Lexical action for the word 'John'. [Figure: the Action column reads IF ?T y(e) THEN put(T y(e)), put(F o(John \u2032 )), put( \u2193 \u22a5) ELSE ABORT; the input and output trees are not reproduced.]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Lexical actions",

"sec_num": null

},
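{

"text": "To make the IF..THEN..ELSE format concrete, the following is a minimal illustrative sketch in Python (ours, not the authors' implementation or the DS system's API): the Node and LexicalAction classes, the label strings, and the omission of pointer movement (go) are all simplifying assumptions.\n\nfrom dataclasses import dataclass, field\n\n@dataclass\nclass Node:\n    labels: set = field(default_factory=set)       # e.g. '?Ty(e)' requirements, 'Ty(e)', 'Fo(john)'\n    daughters: dict = field(default_factory=dict)  # daughter address -> Node\n\n@dataclass\nclass LexicalAction:\n    word: str\n    trigger: str   # IF clause: a requirement that must hold at the pointed node\n    updates: list  # THEN clause: atomic tree-update actions\n\n    def apply(self, pointed: Node) -> bool:\n        if self.trigger not in pointed.labels:\n            return False                           # ELSE ABORT\n        for op, arg in self.updates:\n            if op == 'put':                        # decorate the pointed node\n                pointed.labels.add(arg)\n            elif op == 'make':                     # build a new daughter node\n                pointed.daughters.setdefault(arg, Node())\n            # 'go' (pointer movement) is omitted in this single-node sketch\n        return True\n\njohn = LexicalAction('john', '?Ty(e)',\n                     [('put', 'Ty(e)'), ('put', 'Fo(john)'), ('put', 'bottom-restriction')])\nnode = Node(labels={'?Ty(e)'})\nassert john.apply(node) and 'Fo(john)' in node.labels",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Lexical actions",

"sec_num": null

},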
|
{ |
|
"text": "These actions define the parsing process. Given a sequence of words (w 1 , w 2 , ..., w n ), the parser starts from the axiom tree T 0 (a requirement ?T y(t) to construct a complete tree of propositional type), and applies the corresponding lexical actions (a 1 , a 2 , . . . , a n ), optionally interspersing computational actionssee Figure 1 . Sato (2011) shows how this parsing process can be modelled on a Directed Acyclic Graph (DAG), rooted at T 0 , with partial trees as nodes, and computational and lexical actions as edges (i.e. transitions between trees):", |
|
"cite_spans": [ |
|
{ |
|
"start": 346, |
|
"end": 357, |
|
"text": "Sato (2011)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 335, |
|
"end": 343, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Graph Representation of DS Parsing", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "T 0 T 1 intro T 2 pred T 3 'john' T 1 \u2032 *Adj T 2 \u2032 'john' T 3 \u2032 intro T 4 \u2032 pred T 5 \u2032", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Graph Representation of DS Parsing", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "In this DAG, intro, pred and *Adj correspond to the computational actions INTRODUCTION, PRE-DICTION and *-ADJUNCTION respectively; and 'john' is a lexical action. Different paths through the DAG represent different parsing strategies, which may succeed or fail depending on how the utterance is continued. Here, the path T 0 \u2212 T 3 will succeed if 'John' is the subject of an upcoming verb (\"John upset Mary\"); T 0 \u2212 T 4 will succeed if 'John' turns out to be a left-dislocated object (\"John, Mary upset\").", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Graph Representation of DS Parsing", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "This DAG makes up the parse state at any point, and contains all information available to the parser. This includes semantic tree and tree-transition information taken to make up the linguistic context for ellipsis and pronominal construal (Purver et al., 2011) . It also provides us with a basis for probabilistic parsing (see Sato, 2011) : given a conditional probability distribution P (a|w, T ) over possible actions a given a word w and (some set of features of) the current partial tree T , the DAG can then be incrementally constructed and traversed in a best-first, breadth-first or beam parsing manner.", |
|
"cite_spans": [ |
|
{ |
|
"start": 240, |
|
"end": 261, |
|
"text": "(Purver et al., 2011)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 328, |
|
"end": 339, |
|
"text": "Sato, 2011)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Graph Representation of DS Parsing", |
|
"sec_num": "3.2" |
|
}, |
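{

"text": "As an illustration of how such a conditional distribution could drive traversal of the parse DAG, here is a small beam-expansion sketch in Python (our simplification, not part of any DS implementation); actions_for and prob are assumed stand-ins for the lexical/computational action inventory and for P (a|w, T ), and an action is modelled as a callable returning the extended state or None on ABORT.\n\nimport heapq, math\n\ndef parse_beam(words, initial_state, actions_for, prob, beam_width=5):\n    # States are partial trees (DAG nodes); actions are DAG edges.\n    beam = [(0.0, 0, initial_state)]              # (negative log probability, tiebreak, state)\n    for w in words:\n        frontier = []\n        for neglogp, _, state in beam:\n            for a in actions_for(state, w):\n                new_state = a(state)              # monotonic tree extension\n                if new_state is not None:         # ABORT -> dead DAG edge\n                    score = neglogp - math.log(prob(a, w, state))\n                    heapq.heappush(frontier, (score, id(new_state), new_state))\n        beam = heapq.nsmallest(beam_width, frontier)\n    return beam",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Graph Representation of DS Parsing",

"sec_num": "3.2"

},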
|
{ |
|
"text": "Our task here is data-driven, probabilistic learning of lexical actions for all the words occurring in the corpus. Throughout, we will assume that the (language-independent) computational actions are known. We also assume that the supervision information is structured: i.e. our dataset pairs sentences with complete DS trees encoding their predicate-argument structures, rather than just a flat logical form (LF) as in e.g. Zettlemoyer and Collins (2007) . DS trees provide more information than LFs in that they disambiguate between different possible predicate-argument decompositions of the corresponding LF; note however that this provides no extra information on the mapping from words to meaning. The input to the induction procedure is now as follows:", |
|
"cite_spans": [ |
|
{ |
|
"start": 425, |
|
"end": 455, |
|
"text": "Zettlemoyer and Collins (2007)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning lexical actions 4.1 Problem Statement", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 the set of computational actions in Dynamic Syntax, G.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning lexical actions 4.1 Problem Statement", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 a set of training examples of the form S i , T i , where S i = w 1 . . . w n is a sentence of the language and T i -henceforth referred to as the target tree -is the complete semantic tree representing the compositional structure of the meaning of S i .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning lexical actions 4.1 Problem Statement", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The output is a grammar specifying the possible lexical actions for each word in the corpus. Given our data-driven approach, we take a probabilistic view: we take this grammar as associating each word w with a probability distribution \u03b8 w over lexical actions. In principle, for use in parsing, this distribution should specify the posterior probability p(a|w, T ) of using a particular action a to parse a word w in the context of a particular partial tree T . However, here we make the simplifying assumption that actions are conditioned solely on one feature of a tree, the semantic type T y of the currently pointed node; and that actions apply exclusively to one such type (i.e. ambiguity of type leads to multiple actions). This effectively simplifies our problem to specifying the probability p(a|w).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning lexical actions 4.1 Problem Statement", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In traditional DS terms, this is equivalent to assuming that all lexical actions have a simple IF clause of the form IF ?T y(X); this is true of most lexical actions in existing DS grammars (see examples above), but not all. This assumption will lead to some over-generation -inducing actions which can parse some ungrammatical strings -we must rely on the probabilities learned to make such parses unlikely, and evalute this in Section 5. Given this, the focus of what we learn here is effectively the THEN clause of lexical actions: a sequence of DS atomic actions such as go, make, and put (see Fig. 2 ), but now with an attendant posterior probability. We will henceforth refer to these sequences as lexical hypotheses. We first describe our method for constructing lexical hypotheses with a single training example (a sentencetree pair). We then discuss how to generalise over these outputs, while updating the corresponding probability distributions incrementally as we process more training examples.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 598, |
|
"end": 604, |
|
"text": "Fig. 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Learning lexical actions 4.1 Problem Statement", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "DS is strictly monotonic: actions can only extend the tree under construction, deleting nothing except satisfied requirements. Thus, hypothesising lexical actions consists in an incremental search through the space of all monotonic, and well-formed extensions of the current tree, T cur , that subsume (i.e. can be extended to) the target tree T t . This gives a bounded space which can be described by a DAG equivalent to the parsing DAG of section 3.2: nodes are trees, starting with T cur and ending with T t , and edges are possible extensions. These extensions may be either DS's basic computational actions (already known) or new lexical hypotheses.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hypothesis Construction", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "This space is further constrained by the fact that not all possible trees and tree extensions are wellformed (meaningful) in DS, due to the properties of the lambda-calculus and those of the modal tree logic LoFT. Mother nodes must be compatible with the semantic type and formula of their daughters, as would be derived by beta-reduction; formula decorations cannot apply without type decorations; and so on. We also prevent arbitrary type-raising by restricting the types allowed, taking the standard DS assumption that noun phrases have semantic type e (rather than a higher type as in Generalized Quantifier theory) and common nouns their own type cn (see Cann et al., 2005 , chapter 3 for details).", |
|
"cite_spans": [ |
|
{ |
|
"start": 660, |
|
"end": 677, |
|
"text": "Cann et al., 2005", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hypothesis Construction", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We implement these constraints by packaging together permitted sequences of tree updates as macros (sequences of DS atomic actions such as make, go, and put), and hypothesising possible DAG paths based on these macros. We can divide these into two classes of lexical hypothesis macros: (1) treebuilding hypotheses, independent of the target tree, and in charge of building appropriately typed daughters for the current node; and (2) content decoration hypotheses in charge of the semantic decoration of the leaves of the current tree (T cur ), with formulae taken from the leaves of the target tree (T t ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hypothesis Construction", |
|
"sec_num": "4.2" |
|
}, |
|
{

"text": "Figure 3: Target-independent tree-building hypotheses. [Figure: two IF..THEN..ELSE tree-building macros, one introducing daughters ?T y(e) and ?T y(e \u2192 X) below a node with requirement ?T y(X), the other introducing daughters ?T y(cn) and ?T y(cn \u2192 e) below a node with requirement ?T y(e); the full make/go/put bodies are not reproduced.]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Hypothesis Construction",

"sec_num": "4.2"

},

{

"text": "Figure 3 shows example tree-building hypotheses which extend a mother node with a type requirement to have two daughter nodes which would (once themselves developed) combine to satisfy that requirement. On the left is a general rule by which a currently pointed node of some type X can be hypothesised to be formed of types e and e \u2192 X (e.g. if X = e \u2192 t, the daughters will have types e and e \u2192 (e \u2192 t)). This reflects only the fact that DS trees correspond to lambda calculus terms, with e being a possible type. The other is more specific, suitable only for a type e node, allowing it to be composed of nodes of type cn and cn \u2192 e (where cn \u2192 e turns out to be the type of determiners), but again reflects only general semantic properties which would apply in any language.",

"cite_spans": [],

"ref_spans": [

{

"start": 0,

"end": 8,

"text": "Figure 3",

"ref_id": null

}

],

"eq_spans": [],

"section": "Hypothesis Construction",

"sec_num": "4.2"

},
|
{ |
|
"text": "Content decoration hypotheses on the other hand depend on the target tree: they posit possible addition of semantic content, via sequences of put operations (e.g. content-dec: put(Ty(e)); put(Fo(john)) which develop the pointed node on T cur towards the corresponding leaf node on T t . They are constrained to apply only to leaf nodes (i.e. nodes in T cur whose counterparts on T t are leaf nodes), other nodes being assumed to receive their content via beta-reduction of their daughters.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "IF", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Hypothesis construction therefore produces, for each training sentence w 1 . . . w n , all possible sequences of actions that lead from the axiom tree T 0 to the target tree T t (henceforth, the complete sequences); where these sequences contain both lexical hypotheses and general computational actions. To form discrete lexical entries, we must split each such sequence into n sub-sequences, cs 1 . . . cs n , with each candidate subsequence cs i , corresponding to a word w i , by hypothesising a set of word boundaries.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hypothesis Splitting", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "This splitting process is subject to two constraints. Firstly, each candidate sequence cs i must contain exactly one content decoration lexical hypothesis (see above); this ensures both that every word has some contribution to the sentence's semantic content, and that no word decorates the leaves of the tree with semantic content more than once. Secondly, candidate subsequences cs i are computationally maximal on the left: cs i may begin with (possibly multiple) computational actions, but must end with a lexical hypothesis. This reduces the splitting hypothesis space, and aids lexical generalisation (see below).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hypothesis Splitting", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Each such possible set of boundaries corresponds to a candidate sequence tuple cs 1 . . . cs n . Importantly, this means that these cs i are not independent, e.g. when processing \"John arrives\", a hypothesis for 'John' is only compatible with certain hypotheses for 'arrives'. This is reflected below in how probabilities are assigned to the word hypotheses.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hypothesis Splitting", |
|
"sec_num": "4.3" |
|
}, |
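{

"text": "A minimal sketch of this splitting step in Python (our simplification): a complete sequence is assumed to be a list of (kind, action) pairs, with kind marking computational actions versus the two lexical hypothesis macros, and the sequence is assumed to end with a lexical hypothesis.\n\nfrom itertools import combinations\n\nCOMP, BUILD, CONTENT = 'comp', 'build', 'content'   # computational action / tree-building / content decoration\n\ndef split_candidates(seq, n_words):\n    # Word boundaries may only fall directly after a lexical hypothesis, so each\n    # candidate subsequence cs_i ends with one (computational maximality on the left).\n    ends = [i + 1 for i, (kind, _) in enumerate(seq) if kind != COMP]\n    for cut in combinations(ends, n_words):\n        if cut[-1] != len(seq):                     # the whole sequence must be consumed\n            continue\n        bounds = [0] + list(cut)\n        parts = [seq[bounds[i]:bounds[i + 1]] for i in range(n_words)]\n        # Each cs_i must contain exactly one content decoration hypothesis.\n        if all(sum(1 for kind, _ in p if kind == CONTENT) == 1 for p in parts):\n            yield tuple(tuple(p) for p in parts)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Hypothesis Splitting",

"sec_num": "4.3"

},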
|
{ |
|
"text": "DS's general computational actions can apply at any point before the application of a lexical action, thus providing strategies for adjusting the syntactic context in which a word is parsed. Removing computational actions on the left of a candidate sequence will leave a more general albeit equivalent hypothesis: one which will apply successfully in more syntactic contexts. However, if a computational subsequence seems to occur whenever a word is observed, we would like to lexicalise it, including it within the lexical entry for a more efficient and constrained grammar. We therefore want to generalise over our candidate sequence tuples to partition them into portions which seem to be achieved lexically, and portions which are better achieved by computational actions alone. We therefore group the candidate sequence tuples produced by splitting, storing them as members of equivalence classes which form our final word hypotheses. Two tuples belong to the same equivalence class if they can be made identical by removing only computational actions from the beginning of either one. We implement this via a single packed data-structure which is again a DAG, as shown in Fig. 4 ; this represents the full set of candidate sequences by their intersection (the solid central common path) and differences (the dotted diverging paths at beginning). Nodes here therefore no longer represent single trees, but sets of trees. Figure 4 shows this process over three training examples containing the unknown word 'John' in different syntactic positions. The 'S' and 'F' nodes mark the start and finish of the intersection -initially the entire sequence. As new candidate sequences arrive, the intersection -the maximal common path -is reduced as appropriate. Word hypotheses thus remain as general as possible.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1178, |
|
"end": 1184, |
|
"text": "Fig. 4", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 1426, |
|
"end": 1434, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Hypothesis Generalisation", |
|
"sec_num": "4.4" |
|
}, |
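{

"text": "The grouping into equivalence classes can be sketched as follows in Python (our simplification, assuming candidate sequences are tuples of hashable (kind, action) pairs as above): two sequences share a word hypothesis iff they are identical once leading computational actions are stripped, and the stripped prefixes are kept as the diverging paths of the packed DAG.\n\ndef strip_leading_computational(cs):\n    # Split a candidate sequence into its leading computational actions and the\n    # remaining core, which is the more general, lexically-achieved part.\n    i = 0\n    while i < len(cs) and cs[i][0] == 'comp':\n        i += 1\n    return tuple(cs[i:]), tuple(cs[:i])\n\ndef add_candidate(word_hypotheses, cs):\n    # word_hypotheses: dict core -> list of computational prefixes observed with it.\n    # The core plays the role of the intersection; the prefixes are the differences.\n    core, prefix = strip_leading_computational(cs)\n    word_hypotheses.setdefault(core, []).append(prefix)\n    return word_hypotheses",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Hypothesis Generalisation",

"sec_num": "4.4"

},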
|
{ |
|
"text": "In our probabilistic framework, these DAGs themselves are our lexical entries, with associated probabilities (see below). If desired, we can form traditional DS lexical actions: the DAG intersection corresponds to the THEN clause, with the IF clauses being a type requirement obtained from the pointed node on all partial trees in the initial 'S' node. As lexical hypotheses within the intersection are identical, and were constrained when formed to add type information before formula information (see Section 4.2), any type information must be common across these partial trees. In Figure 4 for 'john', this is ?T y(e), i.e. a requirement for type e, common to all three training examples.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 584, |
|
"end": 592, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Hypothesis Generalisation", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "The set of possible word hypotheses induced as above can of course span a very large space: we must therefore infer a probability distribution over this space to produce a useful grammar. This can be estimated from the observed distribution of hypotheses, as these are constrained to be compatible with the target tree for each sentence; and the estimates can be incrementally updated as we process each training example. For this process of probability estimation, the input is the output of the splitting and generalisation procedure above, i.e. for the current training sentence S = w 1 . . . w n a set HT of Hypothesis Tuples (sequences of word hypotheses), each of the form HT j = h j 1 . . . h j n , where h j i is the word hypothesis for w i in HT j . The desired output is a probability distribution \u03b8 w over hypotheses for each word w, where \u03b8 w (h) is the posterior probability p(h|w) of a given word hypothesis h being used to parse w.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probability Estimation", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "Re-estimation Given some prior estimate of \u03b8 \u2032 w , we can use a new training example to produce an updated estimate \u03b8 \u2032\u2032 w directly. We assign each hypothesis tuple HT j a probability based on \u03b8 \u2032 w ; the probability of a sequence h j 1 . . . h j n is the product of the probabilities of the h i 's within it (by the Bayes chain rule):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probability Estimation", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p(HT j |S) = n i=1 p(h j i |w i ) = n i=1 \u03b8 \u2032 w i (h j i )", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Probability Estimation", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "Now, for any word w and possible hypothesis h, we can re-estimate the probability p(h|w) as the normalised sum of the probabilities of all observed tuples HT j which contain h, that is the set of tuples, HT h = {HT j |h \u2208 HT j }:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probability Estimation", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03b8 \u2032\u2032 w (h) = p(h|w) = 1 Z HT j \u2208HT h p(HT j |S) = 1 Z HT j \u2208HT h n i=1 \u03b8 \u2032 w i (h j i )", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Probability Estimation", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "where Z, the normalising constant, is the sum of the probabilities of all the HT j 's:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probability Estimation", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "Z = HT j \u2208HT n i=1 \u03b8 \u2032 w i (h j i )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probability Estimation", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "Incremental update Our procedure is now to update our overall esimate \u03b8 w incrementally: after the N th example, our new estimate \u03b8 N w is a weighted average of the previous estimate \u03b8 N \u22121 w and the new value from the current example \u03b8 \u2032\u2032 w from equation (2), with weights reflecting the amount of evidence on which these estimates are based:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probability Estimation", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03b8 N w (h) = N \u2212 1 N \u03b8 N \u22121 w (h) + 1 N \u03b8 \u2032\u2032 w (h)", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Probability Estimation", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "Note that for training example 1, the first term's numerator is zero, so \u03b8 N \u22121 w is not required and the new estimates are equal to \u03b8 \u2032\u2032 w . However, to produce \u03b8 \u2032\u2032 w we need some prior estimate \u03b8 \u2032 w ; in the absence of any information, we simply assume uniform distributions \u03b8 \u2032 w = \u03b8 0 w over the lexical hypotheses observed in the first training example.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probability Estimation", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "In subsequent training examples, there will arise new hypotheses h not seen in previous examples, and for which the prior estimate \u03b8 \u2032 w gives no information. We incorporate these hypotheses into \u03b8 \u2032 w by discounting the probabilities assigned to known hypotheses, reserving some probability mass which we then assume to be evenly distributed over the new unseen hypotheses. For this we use the same weight as in equation 3:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probability Estimation", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03b8 \u2032 w (h) = \uf8f1 \uf8f2 \uf8f3 N \u22121 N \u03b8 N \u22121 w (h) if h in \u03b8 N \u22121 w 1 Nu h\u2208\u03b8 N\u22121 w 1 N \u03b8 N \u22121 w (h) otherwise", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Probability Estimation", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "where N u here is number of new unseen hypotheses in example N . Given (4), we can now more accurately specify the update procedure in (3) to be:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probability Estimation", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03b8 N w (h) = \u03b8 \u2032 w (h) + 1 N \u03b8 \u2032\u2032 w (h)", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Probability Estimation", |
|
"sec_num": "4.5" |
|
}, |
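{

"text": "The re-estimation and update can be sketched as follows in Python (our simplification, not the authors' code): word hypotheses are assumed to be hashable identifiers, each word is assumed to occur once per sentence, and we renormalise after applying equation (5) so that each \u03b8 w remains a probability distribution, a step the equations leave implicit.\n\nfrom collections import defaultdict\n\ndef tuple_probability(theta, words, ht):\n    p = 1.0                                     # equation (1): product over the words\n    for w, h in zip(words, ht):\n        p *= theta[w][h]\n    return p\n\ndef update_theta(theta, n_seen, words, hyp_tuples):\n    # theta: dict word -> {hypothesis: probability}; n_seen: dict word -> examples seen so far.\n    for w in set(words):                        # equation (4): discount, share 1/N over new hypotheses\n        N = n_seen.get(w, 0) + 1\n        prior = theta.setdefault(w, {})\n        observed = {h for ht in hyp_tuples for wi, h in zip(words, ht) if wi == w}\n        new = observed - prior.keys()\n        if not prior:\n            prior.update({h: 1.0 / len(observed) for h in observed})   # uniform on first sight\n        elif new:\n            for h in prior:\n                prior[h] *= (N - 1) / N\n            prior.update({h: (1.0 / N) / len(new) for h in new})\n    probs = [tuple_probability(theta, words, ht) for ht in hyp_tuples]\n    Z = sum(probs)\n    posterior = {w: defaultdict(float) for w in set(words)}\n    for ht, p in zip(hyp_tuples, probs):        # equation (2): normalised sum over tuples using h\n        for w, h in zip(words, ht):\n            posterior[w][h] += p / Z\n    for w in set(words):                        # equations (3)/(5): weighted incremental update\n        N = n_seen.get(w, 0) + 1\n        if N == 1:\n            theta[w] = dict(posterior[w])       # first example: the new estimate equals the posterior\n        else:\n            for h, q in posterior[w].items():\n                theta[w][h] = theta[w].get(h, 0.0) + q / N\n            total = sum(theta[w].values())\n            theta[w] = {h: p / total for h, p in theta[w].items()}\n        n_seen[w] = N\n    return theta",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Probability Estimation",

"sec_num": "4.5"

},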
|
{ |
|
"text": "Non-incremental estimation Using this incremental procedure, we use the estimates from previous sentences to assign prior probabilities to each hypothesis tuple (i.e. each possible path through the hypothesised parse DAG), and then derive updated posterior estimates given the observed distributions. Such a procedure could similarly be applied non-incrementally at each point, by repeatedly reestimating and using the new estimates to re-calculate tuple probabilities in a version of the Expectation-Maximisation algorithm (Dempster et al., 1977) . However, this would require us to keep all HT sets from every training example; this would be not only computationally demanding but seems psycholinguistically implausible (requiring memory for all lexical and syntactic dependencies for each sentence). Instead, we restrict ourselves here to assuming that this detailed information is only kept in memory for one sentence; intermediate versions would be possible.", |
|
"cite_spans": [ |
|
{ |
|
"start": 524, |
|
"end": 547, |
|
"text": "(Dempster et al., 1977)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probability Estimation", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "Standard approaches to grammar induction treat pronouns simply as entries of a particular syntactic category. Here, as we learn from semantic annotations, we can learn not only their anaphoric nature, but syntactic and semantic constraints on their resolution. To achieve this, we assume one further general strategy for lexical hypothesis formation: a copying operation from context whereby the semantic content (formula and type decorations) can be copied from any existing type-compatible and complete node on T cur (possibly more than one) accessible from the current pointed node via some finite tree modality. This assumption therefore provides the general concept of anaphoricity, but nothing more: it can be used in hypothesis formation for any word, and we rely on observed probabilities of its providing a successful parse to rule it out for words other than pronouns. By requiring access via some tree modality (\u2191 0 , \u2193 * etc), we restrict it to intrasentential anaphora here, but the method could be applied to intersentential cases where suitable LFs are available. This modal relation describes the relative position of the antecedent; by storing this as part of the hypothesis DAG, and subjecting it to a generalisation procedure similar to that used for computational actions in Section 4.4, the system learns constraints on these modal relations. The lexical entries resulting can therefore express constraints on the possible antecedents, and grammatical constraints on their presence, akin to Principles A and B of Government and Binding theory (see Cann et al. (2005) , chapter 2); in this paper, we evaluate the case of relative pronouns only (see below).", |
|
"cite_spans": [ |
|
{ |
|
"start": 1569, |
|
"end": 1587, |
|
"text": "Cann et al. (2005)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pronouns", |
|
"sec_num": "4.6" |
|
}, |
|
{ |
|
"text": "This induction method has been implemented and tested over a 200-sentence artificial corpus. The corpus was generated using a manually defined DS grammar, with words randomly chosen to follow the Table 1 . 90% of the sentences were used as training data to induce a grammar, and the remaining 10% used to test it. We evaluate the results in terms of both parse coverage and semantic accuracy, via comparison with the logical forms derived using the original, hand-crafted grammar. The induced hypotheses for each word were ranked according to their probability; three separate grammars were formed using the top one, top two and top three hypotheses and were then used independently to parse the test set. Table 2 shows the results, discounting sentences containing words not encountered in training at all (for which no parse is possible). We give the percentage of test sentences for which a complete parse was obtained; and the percentage of those for which one of the top 3 parses resulted in a logical form identical to the correct one.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 196, |
|
"end": 203, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 706, |
|
"end": 713, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Parse coverage", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "As Table 2 shows, when the top three hypotheses are retained for each word, we obtain 80% formula derivation accuracy. Manual inspection of the individual actions learned revealed that the words which have incorrect lexical entries at rank one were those which were sparse in the corpus -we did not control for the exact frequency of occurrence of each word. The required frequency of occurrence varied across different categories; while transitive verbs require about four occurrences, intransitive verbs require just one. Count nouns were particularly sparse (see type/token ratios in Table 1 ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 587, |
|
"end": 594, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Parse coverage", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "As we have not yet evaluated our method on a real corpus, the results obtained are difficult to compare directly with other baselines such as that of Kwiatkowski et al. (2012) who achieve state-of-the-art results; cross-validation of this method on the CHILDES corpus is work in progress, which will allow direct comparison with Kwiatkowski et al. (2012) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 150, |
|
"end": 175, |
|
"text": "Kwiatkowski et al. (2012)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 329, |
|
"end": 354, |
|
"text": "Kwiatkowski et al. (2012)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parse coverage", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We introduced lexically ambiguous words into the corpus to test the ability of the system to learn and distinguish between their different senses; 10% of word types were ambiguous between 2 or 3 different senses with different syntactic category. Inspection of the induced actions for these words shows that, given appropriately balanced frequencies of occurrence of each separate word sense in the corpus, the system is able to learn and distinguish between them. 57% of the ambiguous words had lexical entries with both senses among the top three hypotheses, although in only one case were the two senses ranked one and two. This was the verb 'tramped' with transitive and intransitive readings, with 4 and 21 occurrences in the corpus respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lexical Ambiguity", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "For pronouns, we wish to learn both their anaphoric nature (resolution from context) and appropriate syntactic constraints. Here, we tested on relative pronouns such as 'who' in \"John likes Mary, who runs\": the most general lexical action hypothesis learned for these is identical to hand-crafted versions of the action (see Cann et al. (2005) , chapter 3): who", |
|
"cite_spans": [ |
|
{ |
|
"start": 325, |
|
"end": 343, |
|
"text": "Cann et al. (2005)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pronouns", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "?T y(e) \u2191 * \u2191L F o(X) THEN put(T y(e)) put(F o(X)) put( \u2193 \u22a5) ELSE ABORT This action instructs the parser to copy a semantic type and formula from a type T y(e) node at the modality \u2191 * \u2191 L , relative to the pointed node. The system has therefore learnt that pronouns involve resolution from context (note that many other hypotheses are possible, as pronouns are paired with different LFs in different sentences). It also expresses a syntactic constraint on relative pronouns, that is, the relative position of their antecedents \u2191 * \u2191 L (the first node above a dominating LINK tree relationi.e. the head of the containing NP). Of course, relative pronouns are a special case: the modality from which their antecedents are copied is relatively fixed. Equivalent constraints could be learned for other pronouns, given generalisation over several modal relations; e.g. locality of antecedents for reflexives is specified in DS via a constraint \u2191 0 \u2191 * 1 \u2193 0 requiring the antecedent to be in some local argument position. In learning reflexives, this modal relation can come from generalisation over several different modalities obtained from different training examples; this will require larger corpora.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "IF", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this paper we have outlined a novel method for the probabilistic induction of new lexical entries in an inherently incremental and semantic grammar formalism, Dynamic Syntax, with no independent level of syntactic phrase structure. Our method learns from sentences paired with semantic trees representing the sentences' predicate-argument structures, assuming only very general compositional mechanisms. While the method still requires evaluation on real data, evaluation on an artificial but statistically representative corpus demonstrates that the method achieves good coverage. A further bonus of using a semantic grammar is that it has the potential to learn both semantic and syntactic constraints on pronouns: our evaluation demonstrates this for relative pronouns, but this can be extended to other pronoun types.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Our research now focusses on evaluating this method on real data (the CHILDES corpus), and on reducing the level of supervision by adapting the method to learn from sentences paired not with trees but with less structured LFs, using Type Theory with Records Cooper (2005) and/or the lambda calculus. Other work planned includes the integration of the actions learned into a probabilistic parser.", |
|
"cite_spans": [ |
|
{ |
|
"start": 258, |
|
"end": 271, |
|
"text": "Cooper (2005)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future work", |
|
"sec_num": "6" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Linguistics, logic and finite trees", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Blackburn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Meyer-Viol", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Logic Journal of the Interest Group of Pure and Applied Logics", |
|
"volume": "2", |
|
"issue": "1", |
|
"pages": "3--29", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Blackburn, P. and W. Meyer-Viol (1994). Linguistics, logic and finite trees. Logic Journal of the Interest Group of Pure and Applied Logics 2(1), 3-29.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "The Dynamics of Language", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Cann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Kempson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Marten", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cann, R., R. Kempson, and L. Marten (2005). The Dynamics of Language. Oxford: Elsevier.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Statistical Language Learning", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Charniak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Charniak, E. (1996). Statistical Language Learning. MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Wide-coverage efficient statistical parsing with CCG and log-linear models", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Curran", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Computational Linguistics", |
|
"volume": "33", |
|
"issue": "4", |
|
"pages": "493--552", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Clark, S. and J. Curran (2007). Wide-coverage efficient statistical parsing with CCG and log-linear models. Computational Linguistics 33(4), 493-552.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Incremental parsing with the perceptron algorithm", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Roark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the 42nd Meeting of the ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "111--118", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Collins, M. and B. Roark (2004). Incremental parsing with the perceptron algorithm. In Proceedings of the 42nd Meeting of the ACL, Barcelona, pp. 111-118.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Records and record types in semantic theory", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Cooper", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Journal of Logic and Computation", |
|
"volume": "15", |
|
"issue": "2", |
|
"pages": "99--112", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cooper, R. (2005). Records and record types in semantic theory. Journal of Logic and Computa- tion 15(2), 99-112.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Maximum likelihood from incomplete data via the EM algorithm", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Dempster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Laird", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Rubin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1977, |
|
"venue": "Journal of the Royal Statistical Society. Series B (Methodological)", |
|
"volume": "39", |
|
"issue": "1", |
|
"pages": "1--38", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dempster, A., N. Laird, and D. B. Rubin (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society. Series B (Methodological) 39(1), 1-38.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "How incremental is language production? evidence from the production of utterances requiring the computation of arithmetic sums", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Ferreira", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Swets", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Journal of Memory and Language", |
|
"volume": "46", |
|
"issue": "", |
|
"pages": "57--84", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ferreira, F. and B. Swets (2002). How incremental is language production? evidence from the production of utterances requiring the computation of arithmetic sums. Journal of Memory and Language 46, 57- 84.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Language identification in the limit", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Gold", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1967, |
|
"venue": "Information and Control", |
|
"volume": "10", |
|
"issue": "5", |
|
"pages": "447--474", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gold, E. M. (1967). Language identification in the limit. Information and Control 10(5), 447-474.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "A probabilistic Earley parser as a psycholinguistic model", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Hale", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the 2nd Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hale, J. (2001). A probabilistic Earley parser as a psycholinguistic model. In Proceedings of the 2nd Conference of the North American Chapter of the Association for Computational Linguistics, Pitts- burgh, PA.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Dynamic Syntax: The Flow of Language Understanding", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Kempson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Meyer-Viol", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Gabbay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kempson, R., W. Meyer-Viol, and D. Gabbay (2001). Dynamic Syntax: The Flow of Language Under- standing. Blackwell.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Natural language grammar induction with a generative constituentcontext mode", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Pattern Recognition", |
|
"volume": "38", |
|
"issue": "9", |
|
"pages": "1407--1419", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Klein, D. and C. D. Manning (2005). Natural language grammar induction with a generative constituent- context mode. Pattern Recognition 38(9), 1407-1419.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "A probabilistic model of syntactic and semantic acquisition from child-directed utterances and their meanings", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Kwiatkowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Goldwater", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Steedman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics (EACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kwiatkowski, T., S. Goldwater, L. Zettlemoyer, and M. Steedman (2012). A probabilistic model of syntactic and semantic acquisition from child-directed utterances and their meanings. In Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics (EACL).", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Inducing probabilistic CCG grammars from logical form with higher-order unification", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Kwiatkowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Goldwater", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Steedman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1223--1233", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kwiatkowski, T., L. Zettlemoyer, S. Goldwater, and M. Steedman (2010, October). Inducing proba- bilistic CCG grammars from logical form with higher-order unification. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, Cambridge, MA, pp. 1223-1233. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Incremental processing and infinite local ambiguity", |
|
"authors": [ |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Lombardo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Sturt", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proceedings of the 1997 Cognitive Science Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lombardo, V. and P. Sturt (1997). Incremental processing and infinite local ambiguity. In Proceedings of the 1997 Cognitive Science Conference.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "The CHILDES Project: Tools for Analyzing Talk", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Macwhinney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "MacWhinney, B. (2000). The CHILDES Project: Tools for Analyzing Talk (Third ed.). Mahwah, New Jersey: Lawrence Erlbaum Associates.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Inside-outside reestimation from partially bracketed corpora", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Schabes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proceedings of the 30th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "128--135", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pereira, F. and Y. Schabes (1992, June). Inside-outside reestimation from partially bracketed corpora. In Proceedings of the 30th Annual Meeting of the Association for Computational Linguistics, Newark, Delaware, USA, pp. 128-135. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Incremental semantic construction in a dialogue system", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Purver", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Eshghi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Hough", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 9th International Conference on Computational Semantics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "365--369", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Purver, M., A. Eshghi, and J. Hough (2011, January). Incremental semantic construction in a dialogue system. In J. Bos and S. Pulman (Eds.), Proceedings of the 9th International Conference on Compu- tational Semantics, Oxford, UK, pp. 365-369.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Local ambiguity, search strategies and parsing in Dynamic Syntax", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Sato", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "The Dynamics of Lexical Interfaces. CSLI Publications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sato, Y. (2011). Local ambiguity, search strategies and parsing in Dynamic Syntax. In E. Gre- goromichelaki, R. Kempson, and C. Howes (Eds.), The Dynamics of Lexical Interfaces. CSLI Pub- lications.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "The Syntactic Process", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Steedman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Steedman, M. (2000). The Syntactic Process. Cambridge, MA: MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Online learning of relaxed CCG grammars for parsing to logical form", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zettlemoyer, L. and M. Collins (2007). Online learning of relaxed CCG grammars for parsing to logical form. In Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL).", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "Training Example: 'john' in fixed object position; Sequence intersected: LH : content-dec : put(T y(e)); put(F o(John \u2032 )) :S FLH:content-dec:put(Ty(e));put(Fo(John'))Second Training Example: 'john' in subject position; Sequence intersected: CA : intro, CA : predict, LH : content-dec : put(T y(e)); put(F o(John \u2032 )) : 'john' on unfixed node, i.e. left-dislocated object; Sequence intersected: CA : star-adj, LH : content-dec : put(T y(e)); put(F o(John \u2032 ))", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"text": "Incremental intersection of candidate sequences; CA=Computational Action, LH=Lexical Hypothesis", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "Test parse results: showing percentage parsability, and percentage of parses deriving the correct semantic content for the whole sentence distributions of the relevant POS types and tokens in the CHILDES maternal speech data (MacWhinney, 2000) -see", |
|
"type_str": "table", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |