|
{ |
|
"paper_id": "W18-0306", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T05:31:13.154819Z" |
|
}, |
|
"title": "A bidirectional mapping between English and CNF-based reasoners", |
|
"authors": [ |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Abney", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Michigan", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "If language is a transduction between sound and meaning, the target of semantic interpretation should be the meaning representation expected by general cognition. Automated reasoners provide the best available fully-explicit proxies for general cognition, and they commonly expect Clause Normal Form (CNF) as input. There is a well-known algorithm for converting from unrestricted predicate calculus to CNF, but it is not invertible, leaving us without a means to transduce CNF back to English. I present a solution, with possible repercussions for the overall framework of semantic interpretation.", |
|
"pdf_parse": { |
|
"paper_id": "W18-0306", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "If language is a transduction between sound and meaning, the target of semantic interpretation should be the meaning representation expected by general cognition. Automated reasoners provide the best available fully-explicit proxies for general cognition, and they commonly expect Clause Normal Form (CNF) as input. There is a well-known algorithm for converting from unrestricted predicate calculus to CNF, but it is not invertible, leaving us without a means to transduce CNF back to English. I present a solution, with possible repercussions for the overall framework of semantic interpretation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "1 Overview\n1.1 The problem\nI would like to address a problem that illustrates how considerations of the place of semantic interpretation in the larger cognitive system, even very schematic considerations, can have consequences for the manner and target of interpretation.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Let us take seriously the idea that language is a mapping between sound and meaning-which is to say, essentially, an input-output device for general cognition-and let us provisionally accept current automated reasoners as the best available fully explicit models of general cognition. Then an important goal for a semantics of English is to define an invertible transduction between English sentences and a meaning representation that is suitable for use with an automated reasoner. Model-theoretic interpretation is good and useful, but it does not provide us with a transducer.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Standard accounts are easily recast as defining a mapping f from English sentences to predicate calculus. However, the mapping does not appear to be invertible. For one thing, not every predicate-calculus expression is in the range of f . If general cognition produces an arbitrary predicate-calculus expression \u03c6 to render into English, we must find a logically equivalent expression \u03c6\u2032 such that f \u22121 (\u03c6\u2032) is defined, but logical equivalence is undecidable, a problem pointed out by Shieber (1993) . Even if f \u22121 (\u03c6) is defined, it is unclear how to compute it.",
|
"cite_spans": [ |
|
{ |
|
"start": 484, |
|
"end": 498, |
|
"text": "Shieber (1993)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In addition, the most common choice of meaning representation for automated reasoners is not general predicate calculus, but a normal form known as Clause Normal Form (CNF). Reasoners that require CNF input include systems based on resolution (McCune, 2003b) , some model-building algorithms (McCune, 2003a) , probabilistic reasoners using weighted model-counting (Gogate and Domingos, 2011), and more general cognitive architectures that incorporate such reasoners.", |
|
"cite_spans": [ |
|
{ |
|
"start": 243, |
|
"end": 258, |
|
"text": "(McCune, 2003b)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 292, |
|
"end": 307, |
|
"text": "(McCune, 2003a)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "CNF is a genuine normal form, in the sense that for every expression of first-order predicate calculus (FOPC), there is a unique logically-equivalent CNF expression. Fortuitously, by mapping to CNF, we eliminate a significant part of the variation that leads to Shieber's logical-equivalence problem. But there is a catch. There is a well-known algorithm that converts FOPC expressions to CNF, but it is not invertible. That is the problem: once we have interpreted a sentence, converted the meaning to CNF, and passed it to an automated reasoner, we do not have a way of taking the CNF expressions that the reasoner produces as output and mapping them to English.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Proceedings of the Society for Computation in Linguistics (SCiL) 2018, pages 55-63.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "55", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Salt Lake City, Utah, January 4-7, 2018.\nFigure 1: A tree that serves simultaneously as English LF and parse tree for the CNF translation \u2212S(x) \u2228 \u2212C(x) \u2228 F (k, x).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "55", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The solution I propose can be stated briefly as follows.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A solution", |
|
"sec_num": "1.2" |
|
}, |
|
{ |
|
"text": "(1) Use CNF as the target of semantic translation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A solution", |
|
"sec_num": "1.2" |
|
}, |
|
{ |
|
"text": "(2) Instead of assembling the translation in a bottom-up pass through the parse tree, creating larger and larger partial translations at each step, let us label selected nodes in the parse tree with CNF operators. In other words, take the English parse tree to be the CNF parse tree, albeit with some extraneous nodes and labels. The resulting tree is symmetric between English and CNF (e.g., Figure 1 ). In particular, the leaf nodes are labeled symmetrically with English words and CNF literals. Define a standard grammar with features to generate such trees.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 393, |
|
"end": 401, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A solution", |
|
"sec_num": "1.2" |
|
}, |
|
{ |
|
"text": "(3) Given a CNF expression as input from general cognition, use the grammar to parse the sequence of literals, constructing an English/CNF parse tree, and read off the English sentence. Figure 1 provides an illustration of a combined English/CNF parse tree. The English portion is the LF for the sentence Kim feeds every stray cat, and the CNF portion represents the translation \u2212S(x) \u2228 \u2212C(x) \u2228 F (k, x). Each node has a label pair \u03b1:\u03b2. For the purposes of the grammar, the label pairs are simply complex categories; we construct a single grammar that generates the pairs. When parsing English, the input consists of the English labels of leaf nodes (Kim feeds every stray cat) and when parsing CNF, the input consists of the CNF labels \u2212S(x), \u2212C(x), F (k, x).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 186, |
|
"end": 194, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A solution", |
|
"sec_num": "1.2" |
|
}, |
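The paired labels can be sketched as a small data structure (a minimal sketch; all class and function names here are hypothetical, not from the paper). The point is that one tree carries both label sets, so either the English words or the CNF literals can be read off the same leaves:

```python
# Sketch (hypothetical names): one tree whose leaves carry paired labels
# english:cnf, so the same structure yields either surface form.

class Node:
    def __init__(self, english=None, cnf=None, children=()):
        self.english = english   # English label (word, at leaves)
        self.cnf = cnf           # CNF label (literal, at leaves)
        self.children = list(children)

    def leaves(self):
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

def english_yield(tree):
    # Read off the English labels of the leaves (LF order, not surface order).
    return " ".join(l.english for l in tree.leaves() if l.english)

def cnf_yield(tree):
    # Read off the CNF literals of the leaves.
    return [l.cnf for l in tree.leaves() if l.cnf]

# A heavily flattened stand-in for the tree of Figure 1.
tree = Node(cnf="\u2228", children=[
    Node(english="every"),
    Node(english="stray", cnf="-S(x)"),
    Node(english="cat",   cnf="-C(x)"),
    Node(english="Kim"),
    Node(english="feeds", cnf="F(k,x)"),
])

print(english_yield(tree))  # every stray cat Kim feeds
print(cnf_yield(tree))      # ['-S(x)', '-C(x)', 'F(k,x)']
```

Parsing English consults only the `english` side of the pairs; parsing CNF consults only the `cnf` side, exactly as the text describes.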
|
{ |
|
"text": "A few complications must be addressed, but they have known solutions. We must convert the tree from LF to SS before parsing English, creating a necessity for two different versions of the grammar, one for LF and one for SS. However, both versions generate the same labels, and the two-step process of parsing and converting to LF is standard and familiar. In the CNF-to-English direction, the CNF input will actually be partially parsed input: for example,",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A solution", |
|
"sec_num": "1.2" |
|
}, |
|
{ |
|
"text": "[ \u2228 \u2212S(x), \u2212C(x), F (k, x)].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A solution", |
|
"sec_num": "1.2" |
|
}, |
|
{ |
|
"text": "We do not pass the nonterminal nodes directly as input to the parser, but rather use them to constrain the operation of the parser. In our example, the constraint prevents the parser from constructing a node whose right label is not \u2228. Further, CNF is a \"free word order\" language. Unordered inputs make for less efficient parsing, but they are manageable, and the partialparse constraints actually ameliorate the problem. There are also more empty nodes in the CNF-to-English direction than in the other direction-for example, in Figure 1 , the nodes labeled \"every:,\" \"Kim:,\" and \"t 1 :\" are all empty nodes in the CNF-to-English direction-but parsers routinely deal with empty nodes, and dealing with many of them is no harder than dealing with a few. In short, handling these issues requires some care in implementation but no novel parsing techniques.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 531, |
|
"end": 539, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A solution", |
|
"sec_num": "1.2" |
|
}, |
|
{ |
|
"text": "The main question I will address in the rest of the paper is how we systematically design the grammar, that is, how we determine what the CNF labels should be.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A solution", |
|
"sec_num": "1.2" |
|
}, |
|
{ |
|
"text": "2 Direct translation constrained by feature propagation", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A solution", |
|
"sec_num": "1.2" |
|
}, |
|
{ |
|
"text": "To assign CNF operators to LF nodes, I propose (at least conceptually) that we first label the tree with the usual FOPC translation, and then apply the standard conversion to CNF. I adopt the particularly direct form of translation sketched in the previous section. A key desideratum is that the assignment of semantic operators and atomic formulae to parse-tree nodes should be constrained by local feature constraints of the usual sort. The full power of feature grammars will not be required; features with atomic values will suffice. Let us construct the LF tree for the sentence Kim feeds every stray cat and annotate it with the obvious FOPC translation (Figure 2 ). I have made one unusual assumption in the LF tree: the determiner every has been raised to become head of the quantifier-raising structure. This is not essential, but it will simplify the statement of certain constraints in what follows. How do we specify this labeling within the grammar? Constraining the occurrence of semantic operators and predicates is generally straightforward and local. For our example, we may state as a general rule that an NP representing adjectival modification translates as \u2227, that an S headed by a (raised) universal determiner translates as \u2200 (we return shortly to the question of the variable), and that an S that is sibling of a universal determiner translates as \u2192. In the leaf-node translations, the predicates obviously represent the lexical translations of the corresponding English words (S for stray, C for cat, F for feed). It is less obvious how to constrain the choice of variables.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 660, |
|
"end": 669, |
|
"text": "(Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Assigning FOPC labels", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The standard approach relies heavily on lambda expressions to specify which variable goes in which position. This is a major cause of difficulty in inversion: the inverse of beta-reduction is infinitely ambiguous. Instead of using lambda expressions, let us replace numeric syntactic indices with the semantic variables themselves and propagate them through the tree by syntactic feature-passing. We can then use the syntactic indices to determine the choice of variables in the atomic formulae.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Index propagation", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "In our example, let us use the variable k for the subject DP and x for the raised object DP. Let us propagate the DP index throughout the DP, and use it to determine the variables in the atomic formulae S(x) and C(x), as in Figure 3 . (I give a more rigorous characterization of the spreading in Constraint 2 below.)", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 224, |
|
"end": 232, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Index propagation", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "As for the variables in F (k, x), let us assume a form of syntactic concord in which the subject's index k is shared with the VP and then, because V is the head of VP, with the V. Let us also impose an object-agreement constraint on the verb, requiring its second subscript to match the object. Only two minor items remain: traces obtain their indices from their antecedents in the usual way, and the variable associated with \u2200 is now written as an index. Figure 3 shows the final result. Henceforth I omit colons when the semantic label is empty. I also usually omit preterminal nodes-Adj, N, V-to save space, but I include them when needed for clarity.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 456, |
|
"end": 464, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Index propagation", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "[Tree rendering of Figure 3: the LF for Kim feeds every stray cat with propagated indices; leaf translations stray:S(x), cat:C(x), feeds:F (k, x).]",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Index propagation", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Let us consider some more examples (adapted from Heim and Kratzer (1997) ). These will motivate additions to the index propagation rules, and will illustrate at least a small range of cases in which index propagation can be used in lieu of lambda expressions. The tree in Figure 4 illustrates negation and disjunction, and the trees in Figure 5 illustrate the handling of \"case-marking\" versus \"lexical\" prepositions. Relative clauses and multiple quantifiers are illustrated in later trees.", |
|
"cite_spans": [ |
|
{ |
|
"start": 49, |
|
"end": 72, |
|
"text": "Heim and Kratzer (1997)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 272, |
|
"end": 280, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 336, |
|
"end": 344, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Index propagation", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Note that Figure 4 includes an extension of the index propagation rules: the auxiliary (namely, does) shares its index with its complement (the disjunctive VP). In Figure 5 we have extended the objectagreement rule to apply to the two-place adjective fond, and we have treated of like an auxiliary in the sense that it shares its index with its complement. In the right-hand tree, in is treated like a transitive verb, sharing its first index with its parent and sharing its second index with its object.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 18, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 164, |
|
"end": 172, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Index propagation", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Based on the examples we have considered, we may hazard a general statement of the index propagation rules. Indices are present only as required by the following two constraints.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Constraints on indices", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Constraint 1 (Intrinsic Indices) Every DP has an index (excluding pleonastics). A leaf node labeled with atomic formula \u03c0(\u03b1 1 , . . . , \u03b1 n ) must be a child of a preterminal with syntactic indices \u03b1 1 , . . . , \u03b1 n .",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Constraints on indices", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Constraint 2 (Index Propagation) Syntactic indices are propagated as necessary to satisfy the following requirements.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Constraints on indices", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "1. A trace has the same index as its antecedent.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Constraints on indices", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "2. A modifier has the same index as the node it modifies.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Constraints on indices", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "3. A function word (e.g., auxiliary, case marker) has the same index as its complement.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Constraints on indices", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "4. An argument-taker's last index is the same as the argument's index. In this case and this case only, the index is discharged.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Constraints on indices", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "A parent inherits its head's undischarged indices.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "5.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\"Head\" includes X heads, as well as the head in an adjunction structure, all heads in a coordination structure, and the relative pronoun in a relative clause.\nFigure 6: Local propagation of node polarity substitutes for negation lowering.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 159, |
|
"end": 167, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "5.", |
|
"sec_num": null |
|
}, |
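Clauses 4 and 5 of Constraint 2 amount to a small discharge operation. The sketch below uses a hypothetical list-of-indices encoding (the paper does not specify an implementation): an argument-taker's last index is matched against the argument's index and discharged, and the parent keeps the head's remaining indices.

```python
# Sketch of Constraint 2, clauses 4-5 (hypothetical encoding): indices of a
# head are a list; combining with an argument discharges the last index.

def combine(head_indices, argument_index):
    """Discharge the head's last index against the argument's index; the
    parent inherits the head's remaining (undischarged) indices."""
    if not head_indices or head_indices[-1] != argument_index:
        raise ValueError("index mismatch: %r vs %r" % (head_indices, argument_index))
    return head_indices[:-1]

# Kim feeds every stray cat: V carries (k, x) via subject concord and object
# agreement.  Combining V with the object trace t_x discharges x; combining
# the VP with the subject DP_k discharges k, leaving no indices at S.
v = ["k", "x"]
vp = combine(v, "x")   # -> ["k"]
s = combine(vp, "k")   # -> []
print(vp, s)
```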
|
{ |
|
"text": "[Tree rendering of Figure 6: the parse tree of \u00ac(P \u2228 (Q \u2227 \u00acR)), with node polarities + at \u00ac, \u2212 at \u2228, \u2212P, \u2212 at \u2227, \u2212Q, \u2212 at \u00ac, and +R.]",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "5.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The standard conversion from unrestricted FOPC to CNF involves a sequence of tree transformations:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conversion to CNF", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "(1) rewriting conditionals, (2) lowering negation,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conversion to CNF", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "(3) Skolemization, and (4) distribution (of disjunction over conjunction). We would like to consider how to implement the conversion via feature constraints, without altering the basic structure of our LF trees.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conversion to CNF", |
|
"sec_num": "3" |
|
}, |
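For the quantifier-free case the four steps can be sketched directly (a minimal sketch with a hypothetical tuple encoding of formulas; Skolemization is vacuous without quantifiers, so only steps 1, 2, and 4 appear):

```python
# Propositional sketch of the standard FOPC-to-CNF conversion.  Formulas are
# atom strings, ('not', p), or ('and'|'or'|'imp', p, q).

def impl_free(f):
    """Step 1: rewrite p -> q as (not p) or q."""
    if isinstance(f, str):
        return f
    if f[0] == 'imp':
        return ('or', ('not', impl_free(f[1])), impl_free(f[2]))
    return (f[0],) + tuple(impl_free(a) for a in f[1:])

def nnf(f):
    """Step 2: lower negation onto atoms (negation normal form)."""
    if isinstance(f, str):
        return f
    if f[0] == 'not':
        g = f[1]
        if isinstance(g, str):
            return ('not', g)
        if g[0] == 'not':
            return nnf(g[1])                       # double negation
        flip = {'and': 'or', 'or': 'and'}[g[0]]    # De Morgan
        return (flip, nnf(('not', g[1])), nnf(('not', g[2])))
    return (f[0], nnf(f[1]), nnf(f[2]))

def distribute(f):
    """Step 4: distribute disjunction over conjunction."""
    if isinstance(f, str) or f[0] == 'not':
        return f
    p, q = distribute(f[1]), distribute(f[2])
    if f[0] == 'or':
        if not isinstance(p, str) and p[0] == 'and':
            return ('and', distribute(('or', p[1], q)), distribute(('or', p[2], q)))
        if not isinstance(q, str) and q[0] == 'and':
            return ('and', distribute(('or', p, q[1])), distribute(('or', p, q[2])))
    return (f[0], p, q)

def to_cnf(f):
    return distribute(nnf(impl_free(f)))

# P or (Q and R)  ==>  (P or Q) and (P or R)
print(to_cnf(('or', 'P', ('and', 'Q', 'R'))))
```

Note that this pipeline rewrites the formula destructively at each step, which is exactly why it is not invertible; the feature-based approach developed below keeps the LF tree intact instead.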
|
{ |
|
"text": "Let us begin with negation lowering. Its effect is to eliminate the negation operator in favor of literals, consisting of an atomic formula and a sign (positive or negative). By adding polarity as an attribute of all nodes, not just terminal nodes, we can give a succinct characterization of negation lowering in the form of a local constraint that is readily implemented in a feature grammar.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Negation lowering", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Constraint 3 (Polarity) (a) The polarity of the root node is positive. (b) The polarity of a child node is the same as the polarity of its parent, unless the parent node is labeled with an operator that is polarity-reversing for the child in question, in which case the child's polarity is the opposite of the parent's.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Negation lowering", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "For our purposes there are two polarity-reversing operators: negation and the conditional \u2192, which is polarity-reversing for its first child.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Negation lowering", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "As an example, the FOPC expression \u00ac(P \u2228 (Q \u2227 \u00acR)) has the parse tree in Figure 6 . Polarities have been added in accordance with Constraint 3. In particular, \u00ac has inverse polarity to that of its child, but otherwise parent and child always have the same polarity. We achieve the effect of negation lowering by interpreting signed operators as specified in Table 1 . Replacing the signed operators with their unsigned equivalents for readability, Figure 6 corresponds to the expression \u2212P \u2227 (\u2212Q \u2228 R), which is indeed logically equivalent to \u00ac(P \u2228 (Q \u2227 \u00acR)).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 73, |
|
"end": 81, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 358, |
|
"end": 365, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
}, |
|
{ |
|
"start": 448, |
|
"end": 456, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Negation lowering", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "+\u00ac =\n\u2212\u00ac =\n+\u2227 = \u2227\n\u2212\u2227 = \u2228\n+\u2228 = \u2228\n\u2212\u2228 = \u2227\n+\u2192 = \u2228\n\u2212\u2192 = \u2227\n+\u2200 =\n\u2212\u2200 =\n+\u2203 =\n\u2212\u2203 =",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Negation lowering", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Note that an empty right-hand side indicates deletion of the operator.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Negation lowering", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In the standard conversion to CNF, rewriting conditionals precedes negation lowering. We can deal with conditionals as follows. We define the signed operator + \u2192 to be a synonym for \u2228 and \u2212 \u2192 to be a synonym for \u2227. However, unlike \u2228 or \u2227, \u00b1 \u2192 reverses the polarity of its first child. Consider the example of Figure 7 . The \u2192 operator inverts the polarity of its first child, but otherwise polarities are passed unchanged from parent to child. Accordingly, Figure 7 is equivalent to \u2212D(a) \u2228 \u2212S(a) \u2228 +H(b), which is indeed equivalent to D(a) \u2227 S(a) \u2192 H(b), the natural translation for if Ann dances and sings, Betty is happy. Note that Table 1 is used in the process of \"reading off\" the CNF expression for input to the reasoner; it is not used to eliminate signed operators from the LF tree. The signed operators do serve a purpose beyond the truth function they represent. For one thing, even though + \u2192 is equivalent to \u2228, the former reverses its first child's polarity whereas the latter does not. The signed operators also permit us to use local constraints to define the assignment of translations. An example of such a local constraint is the following: a node has semantic operator \u2192 if it is headed by a CP headed by \"if.\" Such a statement remains valid whether the polarity of the node is positive or negative, though in the former case the signed operator + \u2192 is interpreted as \u2228 and in the latter case the signed operator \u2212 \u2192 is interpreted as \u2227.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 309, |
|
"end": 317, |
|
"text": "Figure 7", |
|
"ref_id": "FIGREF5" |
|
}, |
|
{ |
|
"start": 457, |
|
"end": 465, |
|
"text": "Figure 7", |
|
"ref_id": "FIGREF5" |
|
}, |
|
{ |
|
"start": 635, |
|
"end": 642, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Rewriting conditionals", |
|
"sec_num": "3.2" |
|
}, |
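The polarity constraint and the Table 1 read-off can be sketched together (a minimal sketch under a hypothetical tuple encoding; 'imp' stands for the signed conditional, which reverses its first child's polarity):

```python
# Sketch of Constraint 3 plus the Table-1 read-off: polarities are passed
# top-down (negation and the conditional's first child reverse them), then
# each signed operator is interpreted; negation nodes disappear, leaving
# signed literals.

FLIP = {'+': '-', '-': '+'}
TABLE1 = {('+', 'and'): 'and', ('-', 'and'): 'or',
          ('+', 'or'):  'or',  ('-', 'or'):  'and',
          ('+', 'imp'): 'or',  ('-', 'imp'): 'and'}

def read_off(f, pol='+'):
    """f is an atom string, ('not', g), or ('and'|'or'|'imp', g, h)."""
    if isinstance(f, str):
        return pol + f                     # atom becomes a signed literal
    if f[0] == 'not':
        return read_off(f[1], FLIP[pol])   # node deleted, polarity reversed
    op = TABLE1[(pol, f[0])]
    left_pol = FLIP[pol] if f[0] == 'imp' else pol   # -> flips its 1st child
    return (op, read_off(f[1], left_pol), read_off(f[2], pol))

# not (P or (Q and not R))  reads off as  -P and (-Q or +R)
print(read_off(('not', ('or', 'P', ('and', 'Q', ('not', 'R'))))))

# D(a) and S(a) -> H(b)  reads off as  (-D(a) or -S(a)) or +H(b),  as in Fig. 7
print(read_off(('imp', ('and', 'D(a)', 'S(a)'), 'H(b)')))
```

Because polarity is computed as a node attribute rather than by rewriting, the tree itself is never transformed, which is the property the inversion depends on.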
|
{ |
|
"text": "The third step of the conversion to CNF is Skolemization. As usually formulated, one replaces existentially bound variables with Skolem terms consisting of a Skolem function applied to the list of outscoping universal variables, then one deletes all quantifiers. The deletion is already reflected in Table 1 -though, as already mentioned, the signed operators remain in the LF tree and are not actually deleted until we read off the CNF expression for input to the reasoner.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 300, |
|
"end": 307, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Skolemization", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "I will use the term variable to refer loosely to both universal variables (that is, implicitly universally-bound variables) and Skolem terms. I write Skolem functions with a dot, e.g., \u1e8b, to make it easy to distinguish them from universal variables. Whether a variable should be a universal variable or a Skolem term is determined by the signed operator at the variable's home, which I define to be the node labeled with the quantifier that originally bound it. Specifically:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Skolemization", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Constraint 4 (Variable type determination) (a) The syntactic index of a node whose signed operator is + \u2200 or \u2212 \u2203 must be a universal variable, and (b) the syntactic index of a node whose signed operator is + \u2203 or \u2212 \u2200 must be a Skolem term.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Skolemization", |
|
"sec_num": "3.3" |
|
}, |
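Constraint 4 reduces to a two-way decision on the signed quantifier at the variable's home node, sketched here with hypothetical names:

```python
# Sketch of Constraint 4: the quantifier and the polarity at a variable's
# home node jointly determine universal variable vs. Skolem term.

def variable_type(quantifier, polarity):
    assert quantifier in ('forall', 'exists') and polarity in ('+', '-')
    if (quantifier == 'forall') == (polarity == '+'):
        return 'universal'   # +forall or -exists
    return 'skolem'          # +exists or -forall

print(variable_type('forall', '+'))  # universal
print(variable_type('forall', '-'))  # skolem (cf. the lower "every" in Fig. 8)
```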
|
{ |
|
"text": "This constraint determines the type of variable, and the variable is then propagated to other nodes by Constraint 2. See Figure 8 for an example.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 121, |
|
"end": 129, |
|
"text": "Figure 8", |
|
"ref_id": "FIGREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Skolemization", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The variable \u1e8f in Figure 8 is a shorthand for the Skolem term \u1e8f(x). To avoid clutter, I have suppressed the argument list, but it does need to be computed in a complete implementation. One may use a feature ouv whose value for a given node \u03bd is the list of outscoping universal variables, that is, the list of universal variables whose home position dominates \u03bd. It is straightforward but tedious to write out the feature constraints that determine the correct value for ouv; I omit the details. Figure 8 provides an example with two quantifiers. Note that there are two polarity reversals, both occurring at the first child of a node with operator \u2192. The boxed nodes are the homes of the two quantifiers. Because the upper one has signed operator + \u2200, the variable is a universal, and because the lower one has signed operator \u2212 \u2200, the variable is a Skolem term.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 17, |
|
"end": 25, |
|
"text": "Figure 8", |
|
"ref_id": "FIGREF6" |
|
}, |
|
{ |
|
"start": 494, |
|
"end": 502, |
|
"text": "Figure 8", |
|
"ref_id": "FIGREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Skolemization", |
|
"sec_num": "3.3" |
|
}, |
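The ouv computation that the text leaves implicit can be sketched as a top-down walk (a minimal sketch under a hypothetical encoding in which each node records at most one quantifier home): universal homes extend the outscoping list, and a Skolem variable at its home picks up the current list as its argument list.

```python
# Sketch of the ouv feature: each node is (home, children), where home is
# None or (variable_name, 'universal'|'skolem').  Returns the Skolem
# argument lists, e.g. {'y': ['x']} meaning the Skolem term y(x).

def assign_skolem_args(node, ouv=()):
    home, children = node
    out = {}
    if home is not None:
        var, kind = home
        if kind == 'universal':
            ouv = ouv + (var,)       # extend the list of outscoping universals
        else:
            out[var] = list(ouv)     # the Skolem term's argument list
    for child in children:
        out.update(assign_skolem_args(child, ouv))
    return out

# every dog that avoids every cat is happy (Figure 8): x is universal at the
# top; y's home lies below it with negative polarity, so y is a Skolem term
# and receives argument list [x], i.e. y(x).
tree = (('x', 'universal'), [(('y', 'skolem'), []), (None, [])])
print(assign_skolem_args(tree))  # {'y': ['x']}
```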
|
{ |
|
"text": "[Tree rendering of Figure 8: the English/CNF LF for every dog that avoids every cat is happy, with polarities and leaf literals dog:\u2212D(x), cat:+C(\u1e8f), avoids:\u2212A(x,\u1e8f), happy:+H(x).]",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Skolemization", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Reading off the CNF, we obtain the following. For readability, I have again replaced the signed operators with their more familiar unsigned equivalents; I also indicate the Skolem argument lists explicitly.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Skolemization", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "\u2212D(x) \u2228 [C(\u1e8f(x)) \u2227 \u2212A(x,\u1e8f(x))] \u2228 H(x).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Skolemization", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "In words: either x is not a dog, or else x's \u1e8f is a cat that x fails to avoid, or else x is happy. That is inferentially equivalent to the original sentence every dog that avoids every cat is happy.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Skolemization", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "A pleasant side effect of the lack of explicit quantifiers in CNF is that donkey anaphora becomes available without any stipulations. The structure of every farmer that owns a donkey beats it is essentially the same as that in Figure 8 except for the choice of lower quantifier: see Figure 9 . I assume that the pronoun it picks up the index of its antecedent a donkey. The resulting CNF translation", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 227, |
|
"end": 235, |
|
"text": "Figure 8", |
|
"ref_id": "FIGREF6" |
|
}, |
|
{ |
|
"start": 283, |
|
"end": 291, |
|
"text": "Figure 9", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Donkey anaphora", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "is \u2212F (x) \u2228 \u2212D(y) \u2228 \u2212O(x, y) \u2228 B(x, y), which\n[Tree rendering of Figure 9: the LF for every farmer that owns a donkey beats it, with leaf literals farmer:\u2212F (x), donkey:\u2212D(y), owns:\u2212O(x, y), beats:B(x, y).]",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Donkey anaphora", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Figure 9 : Donkey anaphora is covered without stipulation.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 28, |
|
"end": 36, |
|
"text": "Figure 9", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Donkey anaphora", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "correctly captures the strong reading. 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Donkey anaphora", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "The final step in the conversion to CNF is distribution of disjunction over conjunction. Distribution unavoidably involves a structural transformation of the tree, so we will not attempt to incorporate it into the LF structure. Although distribution is not uniquely invertible, the degree of ambiguity that arises in inversion is limited. When translating from CNF to English, let us assume the inverse of distribution, which we may call consolidation, as a preprocessing step. To fix ideas, let us consider the following example:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Distribution", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "\u03c6 = P \u2228 (Q \u2227 R)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Distribution", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "The result of distribution is \u03c8:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Distribution", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "\u03c8 = (P \u2228 Q) \u2227 (P \u2228 R)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Distribution", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "In general, whenever distribution is non-trivial, it has the effect of introducing copies of existing atomic formulae, as with P in \u03c8. Hence inverting distribution (consolidation) involves recombining copies. Consolidation is ambiguous. For example, \u03c6 is not the only undistributed expression that may give rise to \u03c8: \u03c8 itself might have been the source. More generally, every way of combining copies produces a form that constitutes a possible undistributed source. On the other hand, the amount of ambiguity is limited by the number of duplicates in the input (going from CNF to English). Moreover, each possible result of consolidation does give rise to a valid English sentence. The choice among them is not one of well-formedness but of stylistic preference. Since each duplicate atomic formula gives rise to duplicated words, it seems natural to prefer to do as much consolidation as possible, and we may adopt that as a heuristic.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Distribution", |
|
"sec_num": "3.5" |
|
}, |
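The distribution step can be sketched directly. The nested-tuple encoding and the function name below are assumptions for illustration, not the paper's implementation; input formulae are taken to contain only conjunction and disjunction over atoms (negation already pushed inward):

```python
from itertools import product

# Formulae are nested tuples: ("atom", name), ("or", [children]),
# or ("and", [children]).

def distribute(f):
    """Return an equivalent ("and", [clauses]) form, each clause ("or", [atoms])."""
    kind = f[0]
    if kind == "atom":
        return ("and", [("or", [f])])
    if kind == "and":
        clauses = []
        for child in f[1]:
            clauses.extend(distribute(child)[1])
        return ("and", clauses)
    # kind == "or": take one clause from each disjunct's CNF and merge them,
    # duplicating shared material -- this is where the copies of P arise.
    child_cnfs = [distribute(child)[1] for child in f[1]]
    clauses = []
    for combo in product(*child_cnfs):
        merged = []
        for clause in combo:
            merged.extend(clause[1])
        clauses.append(("or", merged))
    return ("and", clauses)

# phi = P v (Q & R) distributes to psi = (P v Q) & (P v R)
phi = ("or", [("atom", "P"), ("and", [("atom", "Q"), ("atom", "R")])])
psi = distribute(phi)
```

The duplicated `("atom", "P")` in `psi` is exactly the copy that consolidation must later recombine.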
|
{ |
|
"text": "In most cases, there is a unique most-consolidated form, though it is possible to construct examples with multiple distinct maximally-consolidated forms. For example, given the CNF expression (P \u2228 Q) \u2227 (Q \u2228 R) \u2227 (R \u2228 P ), we may eliminate any one of the three duplicate pairs, but only one, leaving us with three different maximally-consolidated forms. This is not likely to be a major problem in practice.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Distribution", |
|
"sec_num": "3.5" |
|
}, |
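The three-way ambiguity of the triangle example can be counted mechanically. This is a small sketch, treating each clause as a set of atoms; the representation is an assumption for illustration:

```python
from itertools import combinations

# The CNF expression (P v Q) & (Q v R) & (R v P), one set per clause.
clauses = [frozenset({"P", "Q"}), frozenset({"Q", "R"}), frozenset({"R", "P"})]

# A candidate consolidation site is a pair of clauses sharing an atom.
sites = [(a, b) for a, b in combinations(clauses, 2) if a & b]

# Consolidating one pair consumes both of its clauses, so the three sites
# are mutually exclusive: three distinct maximally-consolidated forms.
```

Each of the three pairs shares exactly one atom (Q, P, or R), and using any one site removes both of its clauses from further consolidation.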
|
{ |
|
"text": "Important questions remain. Perhaps the most urgent is how generalized quantifiers are to be accommodated in the proposed approach. Generalized quantifiers are relations between sets, so the question can be rephrased as one of accommodating phrases (namely, NPs) that define sets.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generalized quantifiers", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "One can include sets in a first-order account by reification. That is, introduce a membership predicate M and define a set such as \u1e61 = \u03bbx . \u03c6[x] as:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generalized quantifiers", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "M (x,\u1e61) \u2194 \u03c6[x].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generalized quantifiers", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The problem is that converting an LF tree that contains \u2194 to CNF involves a substantial structural change. For example, M (x,\u1e61) \u2194 F (x) \u2227 G(x), converted to CNF, expands out as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generalized quantifiers", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "[\u2212M (x,\u1e61) \u2228 F (x)] \u2227 [\u2212M (x,\u1e61) \u2228 G(x)] \u2227 [\u2212F (x) \u2228 \u2212G(x) \u2228 M (x,\u1e61)].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generalized quantifiers", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "(1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generalized quantifiers", |
|
"sec_num": "4.1" |
|
}, |
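That the three clauses in (1) really are the CNF of the biconditional can be checked by brute force over its propositional skeleton. A minimal sketch (the function names are illustrative, not from the paper):

```python
from itertools import product

def biconditional(m, f, g):
    # M <-> F & G, read propositionally.
    return m == (f and g)

def cnf_clauses(m, f, g):
    # The three clauses of (1): [-M v F] & [-M v G] & [-F v -G v M].
    return ((not m or f) and
            (not m or g) and
            (not f or not g or m))

# The two definitions agree on all eight truth assignments.
agree = all(biconditional(m, f, g) == cnf_clauses(m, f, g)
            for m, f, g in product([False, True], repeat=3))
```

The equivalence holds, but the point in the text stands: the expansion triples the clause count and bears no structural resemblance to the source formula.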
|
{ |
|
"text": "My proposal relies crucially on the conversion to CNF being structure-preserving, but this is a case in which it emphatically does not preserve structure.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generalized quantifiers", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "One possibility is to permit \u2194 in the LF tree as a primitive operator, and to handle it much as we handled distribution. Going from the LF tree to the reasoner, an expression containing \u2194 is expanded out as in (1). In the opposite direction, as a preprocessing step one seeks instances of the pattern illustrated in (1) and replaces them with M (x,\u1e61) \u2194 F (x) \u2227 G(x), much as we recognize the repetitions that may be consolidated. A more ambitious alternative is to incorporate the replacement into the reasoner as an inference rule, much as reasoners often include primitive support for an equality predicate and substitution of equals. I leave this as a question for future research.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generalized quantifiers", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "There have been proposals in the literature for reversible grammars, which support both interpretation and generation (Appelt, 1987; de Kok et al, 2011; Copestake et al, 1996; Melamed, 2003; Shieber, 1988; Shieber and Schabes, 1990; Strzalkowski, 1991; Strzalkowski, 1994) . Reversibility was indeed one of the original motivations for unification grammars (Kay, 1975; Kay, 1996) , though the translational target was predicate-argument structure rather than FOPC. The present paper can be seen as extending that work to map bidirectionally between English and CNF using a feature grammar.", |
|
"cite_spans": [ |
|
{ |
|
"start": 118, |
|
"end": 132, |
|
"text": "(Appelt, 1987;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 133, |
|
"end": 152, |
|
"text": "de Kok et al, 2011;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 153, |
|
"end": 175, |
|
"text": "Copestake et al, 1996;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 176, |
|
"end": 190, |
|
"text": "Melamed, 2003;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 191, |
|
"end": 205, |
|
"text": "Shieber, 1988;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 206, |
|
"end": 232, |
|
"text": "Shieber and Schabes, 1990;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 233, |
|
"end": 252, |
|
"text": "Strzalkowski, 1991;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 253, |
|
"end": 272, |
|
"text": "Strzalkowski, 1994)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 357, |
|
"end": 368, |
|
"text": "(Kay, 1975;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 369, |
|
"end": 379, |
|
"text": "Kay, 1996)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "For general unification grammars, it was proposed that one define an interpretation relation I(s, \u03c6) in Prolog: to parse, provide the sentence s and solve for the meaning \u03c6, and to generate, provide \u03c6 and solve for s (Shieber, 1988). Unfortunately, solving for s proved to be beyond Prolog's abilities, and much work went into elaborate methods for helping Prolog along (Strzalkowski, 1991; Strzalkowski, 1994). In addition, the usual unification grammars were susceptible to the logical-equivalence problem that Shieber pointed out: the range of \u03c6 in I(s, \u03c6) is typically not the entire space of FOPC expressions but only a subset of the space, and given an arbitrary input \u03c8 one must seek a logically equivalent \u03c8\u2032 for which I(s, \u03c8\u2032) is defined; but logical equivalence is not decidable (Shieber, 1993).",
|
"cite_spans": [ |
|
{ |
|
"start": 217, |
|
"end": 232, |
|
"text": "(Shieber, 1988;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 371, |
|
"end": 391, |
|
"text": "(Strzalkowski, 1991;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 392, |
|
"end": 411, |
|
"text": "Strzalkowski, 1994)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 792, |
|
"end": 807, |
|
"text": "(Shieber, 1993)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "These difficulties led to an interest in flat semantic languages, which, it was hoped, would reduce the number of logically equivalent expressions corresponding to a given semantic input (Whitelock, 1992; Trujillo, 1995). Perhaps the best-known current approach is Minimal Recursion Semantics (MRS) (Copestake et al, 2005). However, MRS expressions are not \"flat\" in the right way (an MRS expression is actually a meta-logical description of a standard FOPC parse tree), and the use of MRS does not ameliorate the logical equivalence problem. The main attraction of MRS is not that it addresses the problems of interest here, but that it supports a transparent and compact representation of certain ambiguities, particularly quantifier-scope ambiguities.",
|
"cite_spans": [ |
|
{ |
|
"start": 178, |
|
"end": 195, |
|
"text": "(Whitelock, 1992;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 196, |
|
"end": 211, |
|
"text": "Trujillo, 1995)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 291, |
|
"end": 314, |
|
"text": "(Copestake et al, 2005)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "When genuinely flat semantic languages have been proposed (Whitelock, 1992; Trujillo, 1995) , they usually have severely limited expressivity, permitting only conjunctions of ground clauses, and excluding disjunction and quantification. By contrast, CNF represents a flat semantic language with the full expressive power of FOPC. It is flat in the sense that no CNF expression has depth greater than three: the most complex CNF expression is a conjunction of clauses, each clause being a disjunction of literals. There are no explicit quantifiers, but their expressive capacity is preserved via the distinction between universal variables and Skolem terms. The use of CNF for semantic translations does ameliorate the logical equivalence problem. A CNF expression is the normal form for an (infinite) equivalence class of unrestricted FOPC expressions, and the commonest sorts of logically equivalent pairs fall together when we map to CNF.", |
|
"cite_spans": [ |
|
{ |
|
"start": 58, |
|
"end": 75, |
|
"text": "(Whitelock, 1992;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 76, |
|
"end": 91, |
|
"text": "Trujillo, 1995)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "4.2" |
|
}, |
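The flatness claim (no CNF expression has depth greater than three) can be illustrated with the same nested-tuple encoding used for distribution above. The encoding and the `depth` helper are assumptions for illustration:

```python
# Depth counts nesting levels: an atom is depth 1, a clause (disjunction of
# atoms) is depth 2, and a conjunction of clauses is depth 3.
def depth(f):
    if f[0] == "atom":
        return 1
    return 1 + (max(depth(c) for c in f[1]) if f[1] else 1)

# The most complex CNF shape: a conjunction of clauses of atoms.
cnf = ("and", [("or", [("atom", "P"), ("atom", "Q")]),
               ("or", [("atom", "P"), ("atom", "R")])])
```

Arbitrary FOPC formulae have unbounded depth; the CNF bound of three is what makes the representation flat while Skolem terms and universal variables preserve quantificational expressivity.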
|
{ |
|
"text": "I build also on a line of inquiry into reversibility that involves simultaneous grammars, in which a single derivation constructs two parse trees. Simultaneous grammars have been used both for machine translation (Melamed, 2003) and for translation between English and FOPC (Shieber and Schabes, 1990 ). However, previous work has not considered the further conversion from FOPC to CNF. Moreover, the simultaneous grammars considered in this paper are unusually simple: the two syntax trees are homomorphic, allowing them to be treated as a single tree with paired labels.", |
|
"cite_spans": [ |
|
{ |
|
"start": 213, |
|
"end": 228, |
|
"text": "(Melamed, 2003)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 274, |
|
"end": 300, |
|
"text": "(Shieber and Schabes, 1990", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "I have described a method of translating between English and CNF whose advantages are as follows: it provides a direct connection to automated reasoners; it is fully invertible; it is arguably simpler than simultaneous-tree or standard direct-interpretation approaches; it ameliorates the logical-equivalence problem by virtue of CNF's status as normal form; it is computable using an atomic-valued feature grammar, enabling efficient parsing/generation; and it predicts the existence of donkey anaphora as a side effect of Skolemization, which is an essential step in the conversion to CNF. To the extent that the proposal has merit, it illustrates how considerations of the role of interpretation in the larger cognitive system can influence the form of the semantic account in fundamental ways.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "I suggest that the weak reading is spurious; the use of the singular \"a donkey\" presupposes that the set of owned donkeys is a singleton, in which case the strong and weak readings are equivalent.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "I have benefited greatly from discussions with Ezra Keshet and joint work we have done on semantic consequences of using CNF as metalanguage. The paper has also benefited from the comments of anonymous reviewers. Obviously, they bear no responsibility for any remaining shortcomings of the work. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Semantics in generative grammar", |
|
"authors": [ |
|
{ |
|
"first": "Irene", |
|
"middle": [], |
|
"last": "Heim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Angelika", |
|
"middle": [], |
|
"last": "Kratzer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Irene Heim and Angelika Kratzer. 1997. Semantics in generative grammar. Blackwell Publishers.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Speech and Language Processing", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Martin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Jurafsky and James H. Martin. 2009. Speech and Language Processing (2nd edition). Prentice Hall, Upper Saddle River, NJ.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Syntactic processing and functional sentence perspective", |
|
"authors": [ |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Kay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1975, |
|
"venue": "Proceedings of TINLAP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Martin Kay. 1975. Syntactic processing and functional sentence perspective. Proceedings of TINLAP.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Chart generation", |
|
"authors": [ |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Kay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of the Conference of the Association for Computational Linguistics (ACL)",
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Martin Kay. 1996. Chart generation. Proceedings of the Conference of the Association for Computational Linguistics (ACL).",
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "The Soar Cognitive Architecture", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Laird", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John E. Laird. 2012. The Soar Cognitive Architecture. The MIT Press, Cambridge, MA and London, Eng- land.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Mace4 Reference Manual and Guide", |
|
"authors": [ |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Mccune", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "William McCune. 2003a. Mace4 Reference Manual and Guide. Tech. Memo ANL/MCS-TM-264, Mathemat- ics and Computer Science Division, Argonne National Laboratory, Argonne, IL.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Otter 3.3 Reference Manual", |
|
"authors": [ |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Mccune", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "William McCune. 2003b. Otter 3.3 Reference Manual. Tech. Memo ANL/MCS-TM-263, Mathematics and Computer Science Division, Argonne National Labo- ratory, Argonne, IL.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Multitext Grammars and Synchronous Parsers", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Melamed", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Melamed. 2003. Multitext Grammars and Syn- chronous Parsers, Proceedings of NAACL.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Unification-based semantic interpretation", |
|
"authors": [ |
|
{ |
|
"first": "Robert", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Moore", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "Proceedings of the 27th Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robert C. Moore. 1989. Unification-based semantic in- terpretation. Proceedings of the 27th Meeting of the Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Insideoutside reestimation from partially bracketed corpora", |
|
"authors": [ |
|
{ |
|
"first": "Fernando", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yves", |
|
"middle": [], |
|
"last": "Schabes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proceedings of the Association for Computational Linguistics 30th Annual Meeting", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "128--135", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fernando Pereira and Yves Schabes. 1992. Inside- outside reestimation from partially bracketed corpora. Proceedings of the Association for Computational Linguistics 30th Annual Meeting, 128-135. Newark, Delaware.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Artificial Intelligence: A Modern Approach", |
|
"authors": [ |
|
{

"first": "Stuart",

"middle": [

"J"

],

"last": "Russell",

"suffix": ""

},

{

"first": "Peter",

"middle": [],

"last": "Norvig",

"suffix": ""

}
|
], |
|
"year": 2002, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stuart J. Russell and Peter Norvig. 2002. Artificial Intel- ligence: A Modern Approach (2nd edition). Prentice Hall, Upper Saddle River, NJ.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "A uniform architecture for parsing and generation", |
|
"authors": [ |
|
{ |
|
"first": "Stuart", |
|
"middle": [], |
|
"last": "Shieber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1988, |
|
"venue": "Proceedings of the 12th Conference on Computational Linguistics (COLING)", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "614--619", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stuart Shieber. 1988. A uniform architecture for pars- ing and generation. Proceedings of the 12th Confer- ence on Computational Linguistics (COLING), vol. 2, pp. 614-619.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "The problem of logical-form equivalence", |
|
"authors": [ |
|
{ |
|
"first": "Stuart", |
|
"middle": [], |
|
"last": "Shieber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Computational Linguistics", |
|
"volume": "19", |
|
"issue": "1", |
|
"pages": "179--190", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stuart Shieber. 1993. The problem of logical-form equivalence. Computational Linguistics 19(1), 179- 190.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Synchronous Tree-Adjoining Grammars", |
|
"authors": [ |
|
{ |
|
"first": "Stuart", |
|
"middle": [], |
|
"last": "Shieber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yves", |
|
"middle": [], |
|
"last": "Schabes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Proceedings of the Conference on Computational Linguistics (COLING)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stuart Shieber and Yves Schabes. 1990. Synchronous Tree-Adjoining Grammars. Proceedings of the Con- ference on Computational Linguistics (COLING).", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Semantic head-driven generation", |
|
"authors": [ |
|
{ |
|
"first": "Stuart", |
|
"middle": [], |
|
"last": "Shieber", |
|
"suffix": "" |
|
}, |
|
{

"first": "Gertjan",

"middle": [],

"last": "van Noord",

"suffix": ""

},

{

"first": "Fernando",

"middle": [],

"last": "Pereira",

"suffix": ""

},

{

"first": "Robert",

"middle": [],

"last": "Moore",

"suffix": ""

}
|
], |
|
"year": 1990, |
|
"venue": "Computational Linguistics", |
|
"volume": "16", |
|
"issue": "1", |
|
"pages": "30--42", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stuart Shieber, Gertjan van Noord, Fernando Pereira, and Robert Moore. 1990. Semantic head-driven genera- tion. Computational Linguistics 16(1):30-42.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "A general computational method for grammar inversion", |
|
"authors": [ |
|
{ |
|
"first": "Tomek", |
|
"middle": [], |
|
"last": "Strzalkowski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Proceedings of the ACL Workshop on Reversible Grammar in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomek Strzalkowski. 1991. A general computational method for grammar inversion. Proceedings of the ACL Workshop on Reversible Grammar in Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Reversible Grammar in Natural Language Processing", |
|
"authors": [], |
|
"year": 1994, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomek Strzalkowski (ed.) 1994. Reversible Grammar in Natural Language Processing. Kluwer Academic Publishers.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Lexicalist Machine Translation of Spatial Prepositions", |
|
"authors": [ |
|
{

"first": "Indalecio",

"middle": [

"Arturo"

],

"last": "Trujillo",

"suffix": ""

}
|
], |
|
"year": 1995, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Indalecio Arturo Trujillo. 1995. Lexicalist Machine Translation of Spatial Prepositions. PhD dissertation, University of Cambridge.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Shake-and-bake translation", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Whitelock", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proceedings of the Conference on Computational Linguistics (COLING)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P. Whitelock. 1992. Shake-and-bake translation. Pro- ceedings of the Conference on Computational Linguis- tics (COLING).", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF1": { |
|
"text": "The LF tree labeled with the standard FOPC translation.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"text": "The results of index propagation. Negation and VP disjunction; the translation is \u00ac(S(a) \u2228 D(a)).", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"text": "Differing treatments of case-marking (left) and lexical (right) prepositions.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF5": { |
|
"text": "An LF tree illustrating polarity reversal under \u2192.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF6": { |
|
"text": "A complex example. The boxed nodes are the home nodes for the variables x and\u1e8f.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"content": "<table><tr><td/><td>: + \u2192</td><td/></tr><tr><td>CP \u2212</td><td/><td/></tr><tr><td>if</td><td>S \u2212</td><td/></tr><tr><td colspan=\"2\">Ann a VP a : \u2212 \u2227</td><td/></tr><tr><td>VP \u2212 a</td><td>and</td><td>VP \u2212 a</td></tr><tr><td colspan=\"2\">dances a :\u2212D(a)</td><td/></tr></table>", |
|
"type_str": "table", |
|
"text": "The interpretations of signed operators.", |
|
"num": null, |
|
"html": null |
|
} |
|
} |
|
} |
|
} |