|
{ |
|
"paper_id": "W91-0103", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T04:42:50.167621Z" |
|
}, |
|
"title": "Towards Uniform Processing of Constraint-based Categorial Grammars", |
|
"authors": [ |
|
{ |
|
"first": "Gertjan", |
|
"middle": [], |
|
"last": "Van Noord", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Universit~t", |
|
"location": { |
|
"addrLine": "des Saarlandes Im Stadtwald 15", |
|
"postCode": "D-6600", |
|
"settlement": "Saarbrficken 11", |
|
"region": "FRG" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "A class of constraint-based categorial grammars is proposed in which the construction of both logical forms and strings is specified completely lexically. Such grammars allow the construction of a uniform algorithm for both parsing and generation. Termination of the algorithm can be guaranteed if lexical entries adhere to a constraint, that can be seen as a computationally motivated version of GB's projection principle.", |
|
"pdf_parse": { |
|
"paper_id": "W91-0103", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "A class of constraint-based categorial grammars is proposed in which the construction of both logical forms and strings is specified completely lexically. Such grammars allow the construction of a uniform algorithm for both parsing and generation. Termination of the algorithm can be guaranteed if lexical entries adhere to a constraint, that can be seen as a computationally motivated version of GB's projection principle.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "In constraint-based approaches to grammar the semantic interpretation of phrases is often defined in the lexical entries. These lexical entries specify their semantic interpretation, taking into account the semantics of the arguments they subcategorize for (specified in their subcat list). The grammar rules simply percolate the semantics upwards; by the selection of the arguments, this semantic formula then gets further instantiated (Moore, 1989) . Hence in such approaches it can be said that all semantic formulas are 'projected from the lexicon' (Zeevat et al., 1987) . Such an organization of a grammar is the starting point of a class of generation algorithms that have become popular recently (Calder et al., 1989; Shieber el al., 1990) . These semantic-head-driven algorithms are both geared towards the input semantic representation and the information contained in lexical entries. If the above sketched approach to semantic interpretation is followed systematically, it is possible to show that such a semantic-head-driven gen-eration algorithm terminates (Dymetman et al., 1990) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 437, |
|
"end": 450, |
|
"text": "(Moore, 1989)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 553, |
|
"end": 574, |
|
"text": "(Zeevat et al., 1987)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 703, |
|
"end": 724, |
|
"text": "(Calder et al., 1989;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 725, |
|
"end": 746, |
|
"text": "Shieber el al., 1990)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1070, |
|
"end": 1093, |
|
"text": "(Dymetman et al., 1990)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivations", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In van Noord (1991) I define a head-driven parser (based on Kay (1989) ) for a class of constraint-based grammars in which the construction of strings may use more complex operations that simple context-free concatenation. Again, this algorithm is geared towards the input (string) and the information found in lexical entries. In this paper I investigate an approach where the construction of strings is defined lexically. Grammar rules simply percolate strings upwards. Such an approach seems feasible if we allow for powerful constraints to be defined. The head-corner parser knows about strings and performs operations on them; in the types of grammars defined here these operations are replaced by general constraint-solving techniques (HShfeld and Smolka, 1988; Tuda et al., 1989; Damas et al., 1991) . Therefore, it becomes possible to view both the head-driven generator and the head-driven parser as one and the same algorithm.", |
|
"cite_spans": [ |
|
{ |
|
"start": 60, |
|
"end": 70, |
|
"text": "Kay (1989)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 741, |
|
"end": 767, |
|
"text": "(HShfeld and Smolka, 1988;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 768, |
|
"end": 786, |
|
"text": "Tuda et al., 1989;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 787, |
|
"end": 806, |
|
"text": "Damas et al., 1991)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivations", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "For this uniform algorithm to terminate, we generalize the constraint proposed by Dymetman et ai. (1990) to both semantic interpretations and strings. That is, for each lexical entry we require that its string and its semantics is larger than the string and the semantics associated with each of its arguments. The following picture then emerges. The depth of a derivation tree is determined by the subcat list of the ultimate head of the tree. Furthermore, the string and the semantic representation of each of the non heads in the derivation tree is determined by the subcat list as well. A specific condition on the relation between elements in the subcat list and their se-L mantics and string representation ensures termination. This condition on lexical entries can be seen as a lexicalized !and computationally motivated version of GB's projection principle.", |
|
"cite_spans": [ |
|
{ |
|
"start": 98, |
|
"end": 104, |
|
"text": "(1990)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivations", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Word-order domains. The string associated with a linguistic object (sign) is defined in terms of its word-order domain (Reape, 1989; Reape, 1990a ). I take a word=order domain as a sequence of signs. Each of the \u00a7e signs is associated with a word-order domain recursively, or with a sequence of words. A word-order domain is thus a tree. Linear precedence rules are defined that constrain possible orderings of signs in such a word-order domain. Surface strings are a direct function of word-order domains.' In the lexicon, the wordorder domain of a lexical entry is defined by sharing parts of this domain with the arguments it subcategorizes for. Word-order domains are percolated upward. Hence word-order domains are constructed in a derivation by gradual instantiations (hence strings are constructued in a derivation by gradual instantiation as well). Note that this implies that an unsaturated sign is not associated with one string, but merely with a set of possible strings (this is similar to the semantic interpretation of unsaturated signs (Moore, 1989) where Xt are variables, c is a constant, I, l' are attributes. I also use some more powerful constraints that are written as atoms. This formalism is used to define what possible 'signs' are, by the definition of the unary predicate s:i.gn/1. There is only one nonunit clause for this predicate. The idea is that unit clauses for sign/1 are lexical entries, and the one nonunit clause defines the (binary) application rule. I assume that lexical entries are specified for their arguments in their 'subcat list' (sc). In the application rule a head selects the first (f) element from its subcat list, and the tail (r) of the subcat list is the subcat list of the mother; the semantics (sere) and strings (phon) are shared between the head and the mother. I write such rules using matrix notation as follows; string(X) represents the value Y, where string(X,\u00a5).", |
|
"cite_spans": [ |
|
{ |
|
"start": 119, |
|
"end": 132, |
|
"text": "(Reape, 1989;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 133, |
|
"end": 145, |
|
"text": "Reape, 1990a", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1051, |
|
"end": 1064, |
|
"text": "(Moore, 1989)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivations", |
|
"sec_num": "1" |
|
}, |
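To make the rule format concrete, the following is a minimal Prolog sketch of the application rule. It uses a deliberately simplified encoding introduced only for illustration (a sign as a term sign(Phon, Sem, Sc), with Sc the subcat list); the paper itself states the rule with path equations over feature graphs, as in the figure 'sign(X0) :- sign(X1), sign(X2), ...', so this is an approximation rather than the author's formalism.

```prolog
% Toy encoding (assumption, not the paper's formalism): a sign is a term
% sign(Phon, Sem, Sc) with Sc the subcat list.  Unit clauses of sign/1 are
% lexical entries; the single non-unit clause is the application rule.
sign(sign([schlaeft], schlafen(S), [sign(_, S, [])])).   % toy entry 'schläft'
sign(sign([jan], jan, [])).                              % toy entry 'jan'

% Application rule: the mother shares phon and sem with the head daughter;
% the head's subcat list is the selected argument followed by the mother's
% subcat list.
sign(sign(Phon, Sem, Sc)) :-
    sign(sign(Phon, Sem, [Arg|Sc])),   % the head daughter
    sign(Arg).                         % the first (f) element of its sc list

% Example query: ?- sign(sign(Phon, schlafen(jan), [])).  gives Phon = [schlaeft].
% Enumerating further solutions does not terminate under plain SLD resolution,
% which is one motivation for the head-driven interpreter defined later.
```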
|
{

"text": "[AVM matrix for the application rule, garbled in extraction: the mother X0 shares its phon and synsem sem values with the head daughter X1, and the argument X2 is the first (f) element of X1's subcat list; the full rule is given in the figure 'sign(X0) :- sign(X1), sign(X2), ...'.]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "synsem : Xo :",

"sec_num": null

},
|
{ |
|
"text": "The grammar also consists of a number of lexical entries. Each of these lexical entries is specified for its subcat list, and for each subcat element the semantics and word-order domain is specified, such that they satisfy a termination condition to be defined in the following section. For example, this condition is satisfied if the semantics of each element in the subcat list is a proper subpart of the semantics of the entry, and each element of the subcat list is a proper subpart of the word-order domain of the entry. The phonology of a sign is defined with respect to the word-order domain with the predicate 'string'. This predicate simply defines a left-to-right depth-first traversel of a word-order domain and picks up all the strings at the terminals. It should be noted that the way strings are computed from the wordorder domains implies that the string of a node not necessarily is the concatenation of the strings of its daughter nodes. In fact, the relation between the strings of nodes is defined indirectly via the word-order domains.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "synsem : Xo :", |
|
"sec_num": null |
|
}, |
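The 'string' predicate is only described in prose; the fragment below is a sketch of such a left-to-right depth-first traversal under an assumed list representation of word-order domains (node/1 for embedded signs, leaf/1 for terminals), which is mine and not the paper's.

```prolog
% string(+Domain, -Words): collect the words at the terminals of a
% word-order domain by a left-to-right depth-first traversal.
% Hypothetical representation: a domain is a list whose elements are
% node(SubDomain) (an embedded sign with its own domain) or leaf(Words).
string([], []).
string([Element|Rest], Words) :-
    element_string(Element, W0),
    string(Rest, W1),
    append(W0, W1, Words).

element_string(node(SubDomain), Words) :-   % an embedded sign: recurse
    string(SubDomain, Words).
element_string(leaf(Words), Words).         % a terminal: its own words

% Example: ?- string([leaf([jan]), node([leaf([schlaeft])])], S).
% S = [jan, schlaeft].
```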
|
{ |
|
"text": "The word-order domains are sequences of signs. One of these signs is the sign corresponding to the lexical entry itself. However, the domain of this sign is empty, but other values can be shared. Hence the entry for an intransitive German verb such as 'schl~ft' (sleeps) is defined as in figure 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "synsem : Xo :", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "I Note that in this entry we merely stipulate that the verb preceded by the subject constitutes the word-order domain of the entire phrase. However, we may also use more complex constraints to define word-order constraints. In particular, as already stated above, LP constraints are defined which holds for word-order domains. I use the sequence-union predicate (abbreviated su) defined by Reape as a possible constraint as well. This predicate is motivated by clause union and scrambling phenomena in German. A linguistically motivated example of the use of this constraint can be found in section 4. The predicate su(A, B, C) is true in case the elements of the list C is the multi set union of the elements of the lists A and B; moreover, a < b in either A or B iff a < b in C. I also use the notation X U 0 Y to denote the value Seq, where su(X,Y,$eq). For example, su ([a, d, e] , [b, c, f] , [a, b, c, d, e, In the following I (implicitly) assume that for each lexical entry the following holds:", |
|
"cite_spans": [ |
|
{ |
|
"start": 873, |
|
"end": 883, |
|
"text": "([a, d, e]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 886, |
|
"end": 895, |
|
"text": "[b, c, f]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 898, |
|
"end": 913, |
|
"text": "[a, b, c, d, e,", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "synsem : Xo :", |
|
"sec_num": null |
|
}, |
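For illustration, the sequence-union relation over plain Prolog lists can be sketched as an order-preserving interleaving ('shuffle') predicate; this only mirrors the declarative description in the text, not the constraint-solving version assumed by the grammar.

```prolog
% su(?A, ?B, ?C): C is an interleaving of the lists A and B that preserves
% the relative order of the elements within A and within B.
su([], B, B).
su([X|A], B, [X|C]) :-
    su(A, B, C).
su([X|A], [Y|B], [Y|C]) :-
    su([X|A], B, C).

% Example: ?- su([a, d, e], [b, c, f], C).
% enumerates, among others, C = [a, b, c, d, e, f], as in the text.
```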
|
{ |
|
"text": "dora: [] phon : string(lp(D) ] 3", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "synsem : Xo :", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In van Noord (1991) I define a parsing strategy, called 'head-corner parsing' for a class of I i grammars allowing more complex constraints on strings than context-free concatenation. Reape defines generalizations of the shift-reduce parser and the CYK parser (Reape, 1990b) , for the same class of grammars. For generation headdriven generators can be used (van Noord, 1989; Calder et al., 1989; Shieber et al., 1990) . Alternatively I propose a generalization of these headdriven parsing-and generation algorithms. The generalized algorithm can be used both for parsing and generation. Hence we obtain a uniform algorithm for both processes. Shieber (1988) argues for a uniform architecture for parsing in generation. In his proposal, both processes are (different) instantiations of a parameterized algorithm. The algoritthm I define is not parameterized in this sense, but really uses the same code in both directions. Some of the specific properties of the head-driven generator on the one hand, and the head-driven parser on the other hand, follow from general constraint-solving techniques. We thus obtain a uniform algorithm that is suitable for linguistic processing. This result should be compared with other uniform scheme's such as SLD-resolution or some implementations of type inference (Zajae, 1991, this volume) which clearly are also uniform but facessevere problems in the case of lexicalist grammars, as such scheme's do not take into account the specific nature of lexicalist grammars (Shieber et al., 1990).", |
|
"cite_spans": [ |
|
{ |
|
"start": 260, |
|
"end": 274, |
|
"text": "(Reape, 1990b)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 358, |
|
"end": 375, |
|
"text": "(van Noord, 1989;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 376, |
|
"end": 396, |
|
"text": "Calder et al., 1989;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 397, |
|
"end": 418, |
|
"text": "Shieber et al., 1990)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Uniform Processing", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Algorithm. The algorithm is written in the same formalism as the grammar and thus constitutes a meta-interpreter. The definite clauses of the object-grammar are represented as The associated interpreter is a Prolog like topdown backtrack interpreter where term unification is replaced by more general constraintsolving techniques~, (HShfeld and Smolka, 1988; Tuda et aL, 1989; Damas et al., 1991) . The meta-interpreter defines a head-driven bottom-up strategy with top-down prediction (figure 2), and is a generalization of the head-driven generator (van Noord, 1989; Calder et al., 1989; van Noord, 1990a ) and the head-corner parser (Kay, 1989; van Noord, 1991) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 332, |
|
"end": 358, |
|
"text": "(HShfeld and Smolka, 1988;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 359, |
|
"end": 376, |
|
"text": "Tuda et aL, 1989;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 377, |
|
"end": 396, |
|
"text": "Damas et al., 1991)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 551, |
|
"end": 568, |
|
"text": "(van Noord, 1989;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 569, |
|
"end": 589, |
|
"text": "Calder et al., 1989;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 590, |
|
"end": 606, |
|
"text": "van Noord, 1990a", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 636, |
|
"end": 647, |
|
"text": "(Kay, 1989;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 648, |
|
"end": 664, |
|
"text": "van Noord, 1991)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Uniform Processing", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "prove(T) :- lexical_entry( L ), connect(L, T), (T phon) ~ (L phon),", |
|
"eq_num": "(T synsem" |
|
} |
|
], |
|
"section": "Uniform Processing", |
|
"sec_num": null |
|
}, |
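In Prolog notation, the interpreter of figure 2 can be sketched as follows. The prove/1 and connect/2 clauses follow the equation above and the connect clauses quoted later in the text; head_link/2, rule/3 and the toy lexical_entry/1 facts are assumed glue for the simplified sign(Phon, Sem, Sc) encoding used in the earlier sketches, standing in for the paper's constraint-solving machinery.

```prolog
% A minimal sketch of the uniform head-driven interpreter.
prove(Goal) :-
    lexical_entry(Lex),        % select a lexical entry as the 'seed'
    head_link(Goal, Lex),      % top-down prediction: seed and goal share
                               % their phon and sem values
    connect(Lex, Goal).        % connect the seed to the goal bottom-up

connect(Goal, Goal).           % the current node already is the goal
connect(Small, Goal) :-
    rule(Small, Mother, Arg),  % Small is the head daughter of Mother
    prove(Arg),                % recursively prove the selected argument
    connect(Mother, Goal).     % and continue upwards towards the goal

% Assumed glue for the toy sign/3 encoding used earlier (illustration only):
head_link(sign(Phon, Sem, _), sign(Phon, Sem, _)).
rule(sign(Phon, Sem, [Arg|Sc]), sign(Phon, Sem, Sc), Arg).
lexical_entry(sign([schlaeft], schlafen(S), [sign(_, S, [])])).
lexical_entry(sign([jan], jan, [])).

% Example (generation direction): ?- prove(sign(Phon, schlafen(jan), [])).
% binds Phon = [schlaeft].
```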
|
{ |
|
"text": "sere) =\" (L synsem sem). In the formalism defined in the preceding section there are two possible ways where nontermination may come in, in the constraints or in the definite relations over these constraints. In this paper I am only concerned with the second type of non-termination, that is, I simply assume that the constraint language is decidable (HShfeld and Smolka, 1988) . 1 For the grammar sketched in the foregoing section we can define a very natural condition on lexical entries that guarantees us termination of both parsing and generation, provided the constraint language we use is decidable.", |
|
"cite_spans": [ |
|
{ |
|
"start": 351, |
|
"end": 377, |
|
"text": "(HShfeld and Smolka, 1988)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Uniform Processing", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The basic idea is that for a given semantic representation or (string constraining a) word-order domain, the derivation tree that derives these representations has a finite depth. Lexical entries are specified for (at least) ae, phon and nero. The constraint merely states that the values of these attributes are dependent. It is not possible for one value to 'grow' unless the values of the other attributes grow as well. Therefore the constraint we propose can be compared with GB's projection principle if we regard each of the attributes to define a 'level of description'. Termination can then be guaranteed because derivation trees are restricted in depth by the value of the se attribute.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "connect(T, T). connect(S, T) :rule(S, M, A), prove(A), connect ( M, T).", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In order to define a condition to guarantee termination we need to be specific about the inter-1This is the case if we only have PATH equations; but probably not if we use t.J(), string/2smd lp/2 unlimited. pretation of a lexical entry. Following Shieber (1989) I assume that the interpretation of a set of path equations is defined in terms of directed graphs; the interpretation of a lexical entry is a set of such graphs. The 'size' of a graph simply is defined as the number of nodes the graph consists of. We require that for each graph in the interpretation of a lexical entry, the size of the subgraph at sere is strictly larger than each of the sizes of the sere part of the (subgraphs corresponding to the) elements of the subcat list. I require that for each graph in the interpretation of a lexicM entry, the size of phon is strictly larger than each of the sizes of (subgraphs corresponding to) the phon parts of the elements of the subcat lists. Summarizing, all lexical entries should satisfy the following condition: The most straightforward way to satisfy this condition is for an element of a subcat list to share its semantics with a proper part of the semantics of the lexical entry, and to include the elements of the subcat list in its word-order domain.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "connect(T, T). connect(S, T) :rule(S, M, A), prove(A), connect ( M, T).", |
|
"sec_num": null |
|
}, |
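Under the same toy encoding, the size-based condition can be illustrated with a small check; the paper states the condition over the feature graphs denoted by a lexical entry, so the term-counting below is only an analogue (and valid_entry/1 is a hypothetical name).

```prolog
% size(+Term, -N): number of nodes of a ground term, standing in for the
% number of nodes of the corresponding feature graph.
size(Term, 1) :-
    atomic(Term).
size(Term, N) :-
    compound(Term),
    Term =.. [_Functor|Args],
    sizes(Args, SubSize),
    N is SubSize + 1.

sizes([], 0).
sizes([A|As], N) :-
    size(A, NA),
    sizes(As, NRest),
    N is NA + NRest.

% valid_entry(+Entry): over a ground instantiation of the toy sign/3 encoding,
% every element of the subcat list has strictly smaller sem and phon values
% than the entry itself.  (forall/2 as in SWI-Prolog.)
valid_entry(sign(Phon, Sem, Sc)) :-
    size(Sem, SemSize),
    size(Phon, PhonSize),
    forall(member(sign(APhon, ASem, _), Sc),
           ( size(ASem, ASemSize),   ASemSize  < SemSize,
             size(APhon, APhonSize), APhonSize < PhonSize )).
```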
|
{ |
|
"text": "Possible inputs. In order to prove termination of the algorithm we need to make some assumptions about possible inputs. For a discussion cf. van Noord (1990b) and also Thompson (1991, this volume) . The input to parsing and generation is specified as the goal ?--sign(Xo), \u00a2.", |
|
"cite_spans": [ |
|
{ |
|
"start": 145, |
|
"end": 158, |
|
"text": "Noord (1990b)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 168, |
|
"end": 196, |
|
"text": "Thompson (1991, this volume)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "connect(T, T). connect(S, T) :rule(S, M, A), prove(A), connect ( M, T).", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where \u00a2 restricts the variable X0.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "connect(T, T). connect(S, T) :rule(S, M, A), prove(A), connect ( M, T).", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We require that for each interpretation of X0 there is a maximum for parsing of size[{Xo phonl] , and that there is a maximum for generation of size[ (Xo synsem sem) ].", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 150, |
|
"end": 165, |
|
"text": "(Xo synsem sem)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "connect(T, T). connect(S, T) :rule(S, M, A), prove(A), connect ( M, T).", |
|
"sec_num": null |
|
}, |
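With the toy encoding of the earlier sketches, the two instantiation patterns correspond to queries like the following (illustration only; in the paper the restriction φ is a constraint rather than a partially instantiated term, and phonology is mediated by word-order domains, which the toy encoding omits).

```prolog
% Generation: the semantics of X0 is fixed, the phonology is enumerated.
% ?- prove(sign(Phon, schlafen(jan), [])).
%    Phon = [schlaeft].
%
% Parsing: the phonology of X0 is fixed, the semantics is solved for.
% ?- prove(sign([schlaeft], Sem, [])).
%    Sem = schlafen(jan).
```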
|
{ |
|
"text": "If the input has a maximum size for either semantics or phonology, then the uniform algorithm terminates (assuming the constraint language is decidable), because each recursive call to 'prove' will necessarily be a 'smaller' problem, and as the order on semantics and word-order domains is well-founded, there is a 'smallest' problem. As a lexical entry specifies the length of its subcat list, there is only a finite number of embeddings of the 'connect' clause possible.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "connect(T, T). connect(S, T) :rule(S, M, A), prove(A), connect ( M, T).", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Verb raising. First I show how Reape's analysis of Dutch and German verb raising constructions can be incorporated in the current grammar (Reape, 1989; Reape, 1990a) . For a linguistic discussion of verb-raising constructions the reader is referred to Reape's papers. A verb raiser such as the German verb 'versprechen' (promise) selects three arguments, a vp, an object np and a subject np. The word-order domain of the vp is unioned into the word order domain of versprechen. This is necessary because in German the arguments of the embedded vp can in fact occur left from the other arguments of versprechen, as in:", |
|
"cite_spans": [ |
|
{ |
|
"start": 138, |
|
"end": 151, |
|
"text": "(Reape, 1989;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 152, |
|
"end": 165, |
|
"text": "Reape, 1990a)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Some examples", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "esi ihmj jemandk zu leseni versprochenj hatk (it him someone to read promised had i.e. someome had promised him to read it.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Some examples", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Hence, the lexical entry for the raising verb 'versprechen' is defined as in figure 3. The word-order domain of 'versprechen' simply is the sequence union of the word-order domain of its vp object, with the np object, the subject, and ver~prechen itself. This allows any of the permuations (allowed by the LP constraints) of the np object, versprechen, the subject, and the elements of the domain of the vp object (which may contain signs that have been unioned in recursively).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Some examples", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Seperable prefixes. The current framework offers an interesting account of seperable prefix verbs in German and Dutch. For an overview of alternative accounts of such verbs, see Uszkoreit (1987) [chapter 4] . At first sight, such verbs may seem problematic for the current approach because their prefixes seem not to have any semantic content. However, in my analysis a seperable prefix is lexically specified as part of the wordorder domain of the verb. Hence a particle is not identified as an element of the subcat list. Figure 4 might be the encoding of the German verb 'anrufen' (call up). Note that this analysis conforms to the condition of the foregoing section, because the particle is not on the subcat list. The advantages of this analysis can be summarized as follows.", |
|
"cite_spans": [ |
|
{ |
|
"start": 178, |
|
"end": 194, |
|
"text": "Uszkoreit (1987)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 195, |
|
"end": 206, |
|
"text": "[chapter 4]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 524, |
|
"end": 530, |
|
"text": "Figure", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Some examples", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Firstly, there is no need for a feature system to link verbs with the correct prefixes, as eg. in Uszkoreit's proposal. Instead, the correspondence is directly stated in the lexical entry of the particle verb which seems to me a very desirable :1.6 : synsem : sc : (I sc : (NP4) dora : E] do : (<< >>)uO E]uo (DUo 0 ", |
|
"cite_spans": [ |
|
{ |
|
"start": 265, |
|
"end": 278, |
|
"text": "(I sc : (NP4)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Some examples", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Secondly, the analysis predicts that particles can 'move away' from the verb in case the verb is sequence-unioned into a larger word-order domain. This prediction is correct. The clearest examples are possibly from Dutch. In Dutch, the particle of a verb can be placed (nearly) anywhere in the verb cluster, as long as it precedes its matrix verb: *dat jan marie piet heefft willen zien bellen op dat jan marie piet heeft willen zien op bellen dat jan marie pier heeft willen op zien bellen dat jan marie piet heeft op willen zien bellen dat jan marie piet op heeft willen zien bellen that john mary pete up has want see call (i.e. john wanted to see mary call up pete) The fact that the particle is not allowed to follow its head word is easily explained by the (independently motivated) LP constraint that arguments of a verb precede the verb. Hence these curious facts follow immediately in our analysis (the analysis makes the same prediction for German, but because of the different order of German verbs, this prediction can not be tested).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HPSG Markers", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Thirdly, Uszkoreit argues that a theory of seperable prefixes should also account for the 'systematic orthog!aphic insecurity felt by native speakers' i.e. whether or not they should write the prefix and the verb as one word. The current approach can be seen as one such explanation: in the lexical entry for a seperable prefix verb the verb and prefix are already there, on the other hand each of the words is in a different part of the word-order domain.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HPSG Markers", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In newer versions of HPSG (Pollard and Sag, 1991) a special 'marker' category is assumed for which our projection principle does not seem to work. For example, complementizers are analyzed as markers. They are not taken to be the head of a phrase, but merely 'mark' a sentence for some features. On the other hand, a special principle is assumed such that markers do in fact select for certain type of constituents. In the present framework a simple approach would be to analyze such markers as functors, i.e. heads, that have one element in their subcat list: However, the termination condition defined in the third section can not always be satisfied because these markers usually do not have much semantic content (as in the preceding example). Furthermore these markers may also be phonetically empty, for example in the HPSG-2 analysis of infinite vp's that occur independently such an empty marker is assumed. Such an entry would look presumably as follows, where it is assumed that the empty marker constitutes no element of its own domain:", |
|
"cite_spans": [ |
|
{ |
|
"start": 26, |
|
"end": 49, |
|
"text": "(Pollard and Sag, 1991)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HPSG Markers", |
|
"sec_num": "5" |
|
}, |
|
{

"text": "[AVM fragment of the empty marker entry, garbled in extraction: its synsem and sem values are shared with the single VP-infinitive element on its subcat list, and its own word-order domain is empty.]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "HPSG Markers",

"sec_num": null

},
|
{ |
|
"text": "It seems, then, that analyses that rely on such marker categories can not be defined in the current framework. On the other hand, however, such markers have a very restricted distribution, and are never recursive. Therefore, a slight mod- ification of the termination condition can be defined that take into account such marker categories. To make this feasible we need a constraint that markers can not apply arbitrarily. In HPSG-2 the distribution of the English complementizer 'that' is limited by the introduction of a special binary feature whose single purpose is to disallow sentences such as 'john said that that that mary loves pete'. It is possible to generalize this to disallow any marker to be repeatedly applied in some domain. The 'seed' of a lexical entry is this entry itself; the seed of a rule is the seed of the head of this rule unless this head is a marker in which case the seed is defined as the seed of the argument. In a derivation tree, no marker may be applied more than once to the same seed. This 'don't stutter' principle then subsumes the feature machinery introduced in HPSG-2, and parsing and generation terminates for the resulting system. Given such a system for marker categories, we need to adapt our algorithm. I assume lexical entries are divided (eg. using some userdefined predicate) into markers and not markers; markers are defined with the predicate marker (Sign,Name) where Name is a unique identifier. Other lexical entries are encoded as before, marktypes(L) is the list of all marker identifiers. The idea simply is that markers are applied top-down, keeping track of the markers that have already been used. The revised algorithm is given in figure 5.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1402, |
|
"end": 1413, |
|
"text": "(Sign,Name)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "sc : (L~J V P_I N F1)", |
|
"sec_num": null |
|
} |
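Figure 5 itself is not recoverable from this parse, so the fragment below is only a guess at its general shape: markers are applied top-down while threading the set of marker identifiers still allowed for the current seed. marker/2, marktypes/1, lexical_entry/1 and the predicates from the earlier interpreter sketch are assumed; marked_argument/2 is a hypothetical accessor for the constituent a marker selects.

```prolog
% prove/1 resets the allowed markers, as in the clause quoted in the paper:
% prove(T) :- marktypes(M), prove(T, M).
prove(Goal) :-
    marktypes(Markers),
    prove(Goal, Markers).

% Markers are applied top-down; each marker may be used at most once per seed.
prove(Goal, Allowed) :-
    marker(Goal, Name),              % Goal can be headed by marker Name ...
    select(Name, Allowed, Rest),     % ... provided Name has not been used yet
    marked_argument(Goal, Arg),      % the constituent the marker selects
    prove(Arg, Rest).                % same seed, so Name is now excluded
prove(Goal, _Allowed) :-
    lexical_entry(Lex),              % otherwise proceed as before; arguments
    head_link(Goal, Lex),            % proved inside connect/2 are new seeds
    connect(Lex, Goal).              % and get a fresh marker set via prove/1
```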
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "I am supported by SFB 314, Project N3 BiLD.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": ". ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "prove(T):marktypes( M), prove(T, M)", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "An algorithm for generation in unification categorial grammar", |
|
"authors": [ |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Calder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Reape", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Henk", |
|
"middle": [], |
|
"last": "Zeevat", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "Fourth Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "233--240", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathan Calder, Mike Reape, and Henk Zeevat. An algorithm for generation in unification cat- egorial grammar. In Fourth Conference of the European Chapter of the Association for Com- putational Linguistics, pages 233-240, Manch- ester, 1989.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "The formal and processing models of CLG", |
|
"authors": [ |
|
{ |
|
"first": "Luis", |
|
"middle": [], |
|
"last": "Damas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nelma", |
|
"middle": [], |
|
"last": "Moreira", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Giovanni", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Varile", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Fifth Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Luis Damas, Nelma Moreira, and Giovanni B. Varile. The formal and processing models of CLG. In Fifth Conference of the European Chapter of the Association for Computational Linguistics, Berlin, 1991.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A symmetrical approach to parsing and generation", |
|
"authors": [ |
|
{ |
|
"first": "Marc", |
|
"middle": [], |
|
"last": "Dymetman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierre", |
|
"middle": [], |
|
"last": "Isabelle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francois", |
|
"middle": [], |
|
"last": "Perrault", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Proceedings of the 13th Ini ternational Conference on Computational Linguistics (COLING)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marc Dymetman, Pierre Isabelle, and Francois Perrault. A symmetrical approach to parsing and generation. In Proceedings of the 13th In- i ternational Conference on Computational Lin- guistics (COLING), Helsinki, 1990.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Definite relations over constraint languages", |
|
"authors": [ |
|
{ |
|
"first": "Markus", |
|
"middle": [], |
|
"last": "Hshfeld", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gert", |
|
"middle": [], |
|
"last": "Smolka", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1988, |
|
"venue": "LILOG Report", |
|
"volume": "53", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Markus HShfeld and Gert Smolka. Definite rela- tions over constraint languages. Technical re- port, 1988. LILOG Report 53; to appear in Journal of Logic Programming.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Head driven parsing", |
|
"authors": [ |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Kay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "Proceedings of Workshop on Parsing Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Martin Kay. Head driven parsing. In Proceedings of Workshop on Parsing Technologies, Pitts- burgh, 1989.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Unification-based semantic interpretation", |
|
"authors": [ |
|
{ |
|
"first": "Robert", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Moore", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "27th Annual Meeting of the Associationi for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robert C. Moore. Unification-based semantic interpretation. In 27th Annual Meeting of the Associationi for Computational Linguistics, Vancouver, 1989.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Center for the Study of Language and Information Stanford", |
|
"authors": [ |
|
{ |
|
"first": "Carl", |
|
"middle": [], |
|
"last": "Pollard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Sag", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Carl Pollard and ilvan Sag. Information Based Syntax and Semantics, Volume 2. Center for the Study of Language and Information Stan- ford, 1991. to appear.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "A ilogical treatment of semi-free word order and bounded discontinuous constituency", |
|
"authors": [ |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Reape", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mike Reape. A ilogical treatment of semi-free word order and bounded discontinuous con- stituency. In Fourth Conference of the Euro- pean Chapter o[ the Association for Computa- tional Linguistics, UMIST Manchester, 1989.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Getting things in order", |
|
"authors": [ |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Reape", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Proceedings of the Symposium on Discontinuous Constituency, ITK Tilburg", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mike Reape. Getting things in order. In Proceed- ings of the Symposium on Discontinuous Con- stituency, ITK Tilburg, 1990.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Parsing bounded discontinous constituents: Generalisations of the shift-reduce and CKY algorithms, 1990. Paper presented at the first CLIN meeting", |
|
"authors": [ |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Reape", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mike Reape. Parsing bounded discontinous con- stituents: Generalisations of the shift-reduce and CKY algorithms, 1990. Paper presented at the first CLIN meeting, October 26, OTS Utrecht.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "A semantic-head-driven generation algorithm for unification based formalisms", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Stuart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Shieber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Gertjan Van Noord", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernando", |
|
"middle": [ |
|
"C N" |
|
], |
|
"last": "Moore", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "27th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stuart M. Shieber, Gertjan van Noord, Robert C. Moore, and Fernando C.N. Pereira. A semantic-head-driven generation algorithm for unification based formalisms. In 27th Annual Meeting of the Association for Computational Linguistics, Vancouver, 1989. b", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Semantichead-driven gefieration", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Stuart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Shieber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Gertjan Van Noord", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernando", |
|
"middle": [ |
|
"C N" |
|
], |
|
"last": "Moore", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Computational Linguistics", |
|
"volume": "16", |
|
"issue": "1", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stuart M. Shieber, Gertjan van Noord, Robert C. Moore, and Fernando C.N. Pereira. Semantic- head-driven gefieration. Computational Lin- guistics, 16(1), 1990.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "A uniform architecture for I parsing and generation", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Stuart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Shieber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1988, |
|
"venue": "Proceedings of the 12th International Conference on Computational Linguistics (COLING)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stuart M. Shieber. A uniform architecture for I parsing and generation. In Proceedings of the 12th International Conference on Computa- tional Linguistics (COLING), Budapest, 1988.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Parsing and Type Inference for Natural and Computer Languages", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Stuart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Shieber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stuart M. Shieber. Parsing and Type Inference for Natural and Computer Languages. PhD thesis, Menlo Park, 1989. Technical note 460.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Generation and translation -towards a formalism-independent characterization", |
|
"authors": [ |
|
{ |
|
"first": "Henry", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Thompson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Proceedings of ACL workshop Reversible Grammar in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Henry S. Thompson. Generation and transla- tion -towards a formalism-independent char- acterization. In Proceedings of ACL workshop Reversible Grammar in Natural Language Pro- cessing, Berkeley, 1991.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "JPSG parser on constraint logic programming", |
|
"authors": [ |
|
{ |
|
"first": "Hirosi", |
|
"middle": [], |
|
"last": "Tuda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K6iti", |
|
"middle": [], |
|
"last": "Hasida", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hidetosi", |
|
"middle": [], |
|
"last": "Sirai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "Fourth Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hirosi Tuda, K6iti Hasida, and Hidetosi Sirai. JPSG parser on constraint logic programming. In Fourth Conference of the European Chapter of the Association for Computational Linguis- tics, Manchester, 1989.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Word Order and Constituent Structure in German", |
|
"authors": [ |
|
{ |
|
"first": "Hans", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1987, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hans Uszkoreit. Word Order and Constituent Structure in German. CSLI Stanford, 1987.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "BUG: A directed bottomup generator for unification based formalisms", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Gertjan Van Noord", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "Current Research in Natural Language Generation", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gertjan van Noord. BUG: A directed bottom- up generator for unification based formalisms. Working Papers in Natural Language Process- ing, Katholieke Universiteit Leuven, Stichting Taaitechnologie Utrecht, 4, 1989. Gertjan van Noord. An overview of head- driven bottom-up generation. In Robert Dale, Chris Mellish, and Michael Zock, editors, Cur- rent Research in Natural Language Generation. Academic Press, 1990.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Gertjan van Noord. Head corner parsing for discontinuous constituency", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Gertjan Van Noord", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Proceedings of the 13th International Conference on Computational Linguistics (COLING)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gertjan van Noord. Reversible unification-based machine translation. In Proceedings of the 13th International Conference on Computa- tional Linguistics (COLING), Helsinki, 1990. Gertjan van Noord. Head corner parsing for dis- continuous constituency. In 29th Annual Meet- ing of the Association for Computational Lin- guistics, Berkeley, 1991.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "A uniform architecture for parsing, generation and transfer", |
|
"authors": [ |
|
{ |
|
"first": "R~!mi", |
|
"middle": [], |
|
"last": "Zajac", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Proceedings of ACL workshop Reversible Grammar in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R~!mi Zajac. A uniform architecture for parsing, generation and transfer. In Proceedings of ACL workshop Reversible Grammar in Natural Lan- guage Processing, Berkeley, 1991.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "In Nicholas Haddock, Ewan Klein, and Glyn Morrill, editors, Categorial Grammar, Unification Grammar and Parsing", |
|
"authors": [ |
|
{ |
|
"first": "Ilenk", |
|
"middle": [], |
|
"last": "Zeevat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ewan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jo", |
|
"middle": [], |
|
"last": "Calder", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1987, |
|
"venue": "of Working Papers in Cognitive Science", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ilenk Zeevat, Ewan Klein, and Jo Calder. Unifi- cation categorial grammar. In Nicholas Had- dock, Ewan Klein, and Glyn Morrill, edi- tors, Categorial Grammar, Unification Gram- mar and Parsing. Centre for Cognitive Science, University of Edinburgh, 1987. Volume 1 of Working Papers in Cognitive Science.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "sign(Xo) :-sign(X1), sign(X2), (Xo synsem sern) --\" (X1 synsem sere), (Xo phon) ~ (X1 phon), (Xo synsem sc) \u00b1 iX, synsem sc r), (21 synsem sc f) =\" (X2).", |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"text": "The German verb 'schlKft'", |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"text": "f]); [a, e] o 0 [b] stands for [a, c, b],[a, b, c] or [b, a, c]. In fact, I assume that this predicate is also used in the simple cases, in order to be able to spel out generalizations in the linear precedence constraints. Hence the entry for 'schlafen' is defined as follows, where I write lp(X) to indicate that the lp constraints should be satisfied for X. I have nothing to say about the definition of these constraints. synsem : sere: schla/en(E]) sc : ([~]N Pi) D \u00b0m: (D u0 (<< schla/t >>) phon: string(tp([ [))", |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF3": { |
|
"text": ":-sign(H), sign(A), oh.", |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF4": { |
|
"text": "The uniform algorithm", |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF5": { |
|
"text": "Termination condition. For each interpretation L of a lexical entry, if E is an element of L's subcat list (i.e. (L synsem sc r* f) ~ E), then: size[(E phon)] < size[(L phon)] size[(E synsem sere)] < size[(L synsem sere)]", |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF6": { |
|
"text": "The German verb 'versprechen' result.", |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF7": { |
|
"text": "dass >>, [~])", |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF8": { |
|
"text": "Tile verb 'anrufen'", |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"TABREF0": { |
|
"num": null, |
|
"text": "", |
|
"html": null, |
|
"content": "<table><tr><td>).</td></tr><tr><td>In lexical entries, word order domains are defined</td></tr><tr><td>using Reape's sequence union operation (R.eape,</td></tr><tr><td>1990a). Hence the grammars are not only based</td></tr><tr><td>on context-free string concatenation.</td></tr><tr><td>The formalism I assume consists of definite clau-</td></tr><tr><td>ses over constraint languages in the manner of</td></tr><tr><td>HShfeld and Smolka (1988). The constraint lan-</td></tr><tr><td>guage at least consists of the path equations</td></tr><tr><td>known from PATR II (Shieber, 1989), augmented</td></tr><tr><td>with variables. I write such a definite clause as:</td></tr><tr><td>P :-ql ...qn,\u00a2.</td></tr><tr><td>where p, qi are atoms and \u00a2 is a (conjunction of)</td></tr><tr><td>constraint(s). The path equations are written as</td></tr><tr><td>in PATR II, but each I path starts with a variable:</td></tr><tr><td>(Xi il...l,) =\" c</td></tr><tr><td>or</td></tr><tr><td>(x, i,...t.) = (xj tl... i\u00a3)</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"text": "Furthermore, in lexical entries the s~sem part is shared with the synsem part of an element of the word order domain, that is furthermore specified for the empty domain and some string. I will write: << string >> in a lexical entry to stand for the sign whose synsem value is shared with the synsem of the lexical entry itself; its dora value is 0 and its phon value is string.", |
|
"html": null, |
|
"content": "<table><tr><td>The foregoing</td></tr><tr><td>entry is abreviated as:</td></tr><tr><td>synsern : sere : sehla/en([T])</td></tr><tr><td>: ([]N P, )</td></tr><tr><td>dora: []([], << s hla/t >>)</td></tr><tr><td>phon : string(D</td></tr></table>", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |