{
"paper_id": "W89-0208",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:44:59.475069Z"
},
"title": "The Computational Implementation of Principle-Based Parsers1",
"authors": [
{
"first": "Sandiway",
"middle": [],
"last": "Fong",
"suffix": "",
"affiliation": {
"laboratory": "Artificial Intelligence Laboratory",
"institution": "Massachusetts Institute of Technology",
"location": {}
},
"email": ""
},
{
"first": "Robert",
"middle": [
"C"
],
"last": "Berwick",
"suffix": "",
"affiliation": {
"laboratory": "Artificial Intelligence Laboratory",
"institution": "Massachusetts Institute of Technology",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "T h is p ap er ad d resse s the issu e of how to organ ize lin gu istic prin ciples for efficient p rocessin g. B ased on the general ch aracterizatio n of princi ples in term s o f purely co m p u ta tio n a l p ro p erties, the effects of principleorderin g on p arser p erform ance are in v e stigated. A novel p arser th a t ex plo its the p o ssib le variatio n in prin ciple-ordering to dy n am ically reorder prin ciples is d e scrib ed. H eu ristics for m inim izing the am ou n t of unn eces sa ry work perform ed du ring the p arsin g p rocess are also d iscu ssed .",
"pdf_parse": {
"paper_id": "W89-0208",
"_pdf_hash": "",
"abstract": [
{
"text": "T h is p ap er ad d resse s the issu e of how to organ ize lin gu istic prin ciples for efficient p rocessin g. B ased on the general ch aracterizatio n of princi ples in term s o f purely co m p u ta tio n a l p ro p erties, the effects of principleorderin g on p arser p erform ance are in v e stigated. A novel p arser th a t ex plo its the p o ssib le variatio n in prin ciple-ordering to dy n am ically reorder prin ciples is d e scrib ed. H eu ristics for m inim izing the am ou n t of unn eces sa ry work perform ed du ring the p arsin g p rocess are also d iscu ssed .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recently, there has been some interest in the implementation of grammatical theories based on the principles and parameters approach (Correa [3] , Dorr [4] , Johnson [5] , Kolb & Thiersch [6] , and Stabler [10] ). In this framework, a fixed set of universal principles parameterized according to particular languages interact deductively to account for diverse linguistic phenomena. Much of the work to date has focused on the not inconsiderable task of formalizing such theories. The primary goal of this paper is to explore the computationally-relevant properties of this framework. In particular, we address the hitherto largely unexplored issue of how to organize linguistic principles for efficient processing. More specifically, this paper examines if, and how, a parser can re-order principles to avoid doing unnecessary work. Many im portant questions exist: for example, (1) W hat effect, if any, does principle-ordering have on the amount of work needed to parse a given sentence? (2) If the effect of principle-ordering is significant, then are some orderings much better than others? (3) If so, is it possible to predict (and explain) which ones these are?",
"cite_spans": [
{
"start": 141,
"end": 144,
"text": "[3]",
"ref_id": "BIBREF2"
},
{
"start": 152,
"end": 155,
"text": "[4]",
"ref_id": null
},
{
"start": 166,
"end": 169,
"text": "[5]",
"ref_id": "BIBREF4"
},
{
"start": 188,
"end": 191,
"text": "[6]",
"ref_id": "BIBREF5"
},
{
"start": 206,
"end": 210,
"text": "[10]",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "By characterizing principles in terms of the purely computational notions of \"filters\" and \"generators\" , we show how how principle-ordering can be utilized to minimize the amount of work performed in the course of parsing. Basically, some principles, like Move-a (a principle relating 'gaps' and 'fillers') and Free Indexing (a principle relating referential items) are \"generators\" in the sense that they build more hypothesized output structures than their inputs. Other principles, like the 0-Criterion which places restrictions on the assignment of them atic relations, the Case Filter which requires certain noun phrases to be",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "marked with abstract Case, and Binding Theory constraints, act as filters and weed-out ill-formed structures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "! T he work o f the first author is su pp orted by an IBM G raduate Fellow ship. R .C . Berw ick is su pp orted by N S F G rant D C R -85552543 under a Presidential Young Investigator's Award.",
"sec_num": null
},
{
"text": "A novel, logic-based parser, the Principle-Ordering Parser ( p op a r s e r ), was built to investigate and demonstrate the effects of principle-ordering. The p op a r s e r was deliberately constructed in a highly-modular fashion to allow for maximum flexibility in exploring alternative orderings of principles. For in stance, each principle is represented separately as an atomic parser operation. A structure is deemed to be well-formed only if it passes all parser operations. The scheduling of parser operations is controlled by a dynamic ordering mech anism that attem pts to eliminate unnecessary work by eliminating ill-formed structures as quickly as possible. (For comparison purposes, the p op a r s e r also allows the user to turn off the dynamic ordering mechanism and to parse with a user-specified (fixed) sequence of operations.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "! T he work o f the first author is su pp orted by an IBM G raduate Fellow ship. R .C . Berw ick is su pp orted by N S F G rant D C R -85552543 under a Presidential Young Investigator's Award.",
"sec_num": null
},
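{
"text": "As a rough illustration (not the actual PO-PARSER source), the modular organization just described might be sketched in Prolog, the language the appendix alludes to; the predicate and operation names below are assumptions made for exposition:\n\n% Each principle is a separate, atomic parser operation; default_order/1\n% holds one fixed, user-specified sequence of operations.\ndefault_order([x_bar_rules, move_alpha, inherent_case_assignment,\n               structural_case_assignment, case_filter, theta_criterion,\n               free_indexing, binding_principle_a]).\n\n% A structure is deemed well-formed only if it passes every operation.\nwell_formed(Structure) :-\n    default_order(Ops),\n    apply_all(Ops, Structure).\n\napply_all([], _Structure).\napply_all([Op|Ops], Structure) :-\n    call(Op, Structure),          % each operation either succeeds or fails\n    apply_all(Ops, Structure).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The work of the first author is supported by an IBM Graduate Fellowship. R.C. Berwick is supported by NSF Grant DCR-85552543 under a Presidential Young Investigator's Award.",
"sec_num": null
},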
{
"text": "Although we are primarily interested in exploiting the (abstract) computa tional properties of principles to build more efficient parsers, the P O -P A R SE R is also designed to be capable of handling a reasonably wide variety of linguistic phenomena. The system faithfully implements most of the principles contained in Lasnik k. Uriagereka's [7] textbook. T hat is, the parser makes the same grammaticality judgements and reports the same violations for ill-formed structures as the reference text. Some additional theory is also drawn from Chomsky [1] and [2] . Parser operations implement principles from T heta Theory, Case The ory, Binding Theory, Subjacency, the Empty Category Principle, movement at the level of Logical Form as well in overt syntax, and some Control Theory. This enables it to handle diverse phenomena including parasitic gaps constructions, strong crossover violations, passive, raising, and super-raising examples.",
"cite_spans": [
{
"start": 345,
"end": 348,
"text": "[7]",
"ref_id": "BIBREF6"
},
{
"start": 552,
"end": 555,
"text": "[1]",
"ref_id": null
},
{
"start": 560,
"end": 563,
"text": "[2]",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "! T he work o f the first author is su pp orted by an IBM G raduate Fellow ship. R .C . Berw ick is su pp orted by N S F G rant D C R -85552543 under a Presidential Young Investigator's Award.",
"sec_num": null
},
{
"text": "This section addresses the issue of how to organize linguistic principles in the P O -P A R SE R framework for efficient processing. iMore precisely, we discuss the problem of how to order the application of principles to minimize the amount o f 'work' th at the parser has to perform. We will explain why certain orderings may be better in this sense than others. We will also describe heuristics that the P O -P A R SE R employs in order to optimize the the ordering of its operations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "T he Principle Ordering Problem",
"sec_num": "2"
},
{
"text": "But first, is there a significant performance difference between various order ings? Alternatively, how im portant an issue is the principle ordering problem in parsing? An informal experiment was conducted using the p op a r s e r de scribed in the previous section to provide some indication on the magnitude of the problem. Although we were unable to examine all the possible orderings, it turns out th at order-of-magnitude variations in parsing times could be achieved merely by picking a few sample orderings.2 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "T he Principle Ordering Problem",
"sec_num": "2"
},
{
"text": "The variation in parsing times for various principle orderings that we observed can be explained by assuming that overgeneration is the main problem, or bot tleneck, for parsers such as the P O -PA R SE R . That is, in the course of parsing a single sentence, a parser will hypothesize many different structures. Most of these structures, the ill-formed ones in particular, will be accounted for by one or more linguistic filters. A sentence will be deemed acceptable if there exists one or more structures that satisfy every applicable filter. Note that even when parsing grammatical sentences, overgeneration will produce ill-formed structures that need to be ruled out. Given that our goal is to minimize the amount of work performed during the parsing process, we would expect a parse using an ordering that requires the parser to perform extra work compared with another ordering to be slower.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E x p la in in g t h e V a r ia tio n in P r in c ip le O r d e r in g",
"sec_num": "2.1"
},
{
"text": "Overgeneration implies that we should order the linguistic filters to elimi nate ill-formed structures as quickly as possible. For these structures, applying any parser operation other them one that rules it out may be considered as doing extra, or unnecessary, work (modulo any logical dependencies between principles).3 However, in the case of a well-formed structure, principle ordering cannot improve parser performance. By definition, a well-formed structure is one that passes all relevant parser operations: Unlike the case of an ill-formed structure, applying one operation cannot possibly preclude having to apply an other.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E x p la in in g t h e V a r ia tio n in P r in c ip le O r d e r in g",
"sec_num": "2.1"
},
{
"text": "Since some orderings perform better than others, a natural question to ask is: Does there exist a 'globally' optimal ordering? The existence of such an ordering would have im portant implications for the design of the control structure of any principle-based parser. The P O -P A R SE R has a novel 'dynamic' control structure in the sense that it tries to determine an ordering-efficient strategy for every structure generated. If such a globally optimal ordering could be found, then we can do away with the run-time overhead and parser machinery associated with calculating individual orderings. T hat is, we can build an ordering-efficient parser simply by 'hardwiring' the optimal ordering into its control structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ".O p t im a l O r d e r in g s",
"sec_num": "2"
},
{
"text": "Unfortunately, no such ordering can exist. The impossibility of the globally optimal ordering follows directly from the \"eliminate unnecessary work\" ethic. Computationally speaking, an optimal ordering is one th at rules out ill-formed structures at the earliest possible op portunity. A globally optimal ordering would be one that always ruled out every . Hence, the optimal ordering must also invoke Condition A as early as possible. In particular, given that the two operations are independent, the. optimal ordering must order Condition A before the ECP and vice-versa. Similarly, example (lc) demands that the kCase Condition on Traces' operation must precede the other two operations. Hence a globally optimal ordering is impossible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ".O p t im a l O r d e r in g s",
"sec_num": "2"
},
{
"text": "The principle-ordering problem can be viewed as a limited instance of the wellknown conjunct ordering problem (Smith & Genesereth [9] ). Given a set of conjuncts, we are interested in finding all solutions that satisfy all the conjuncts simultaneously. The parsing problem is then to find well-formed structures (i.e. solutions) that satisfy all the parser operations (i.e. conjuncts) simultane ously. Moreover, we are particularly interested in minimizing the cost of finding these structures by re-ordering the set of parser operations. This section outlines some of the heuristics used by the PO -PA R SER to deter mine the minimum co6t ordering for a given structure. The p op a r s e r contains a dynamic ordering mechanism that attem pts to compute a minimum cost or dering for every phrase -ucture generated during the parsing process.4 The mechanism can be subdi led into two distinct phases. First, we will describe how the dynamic ordering mechanism decides which principle is the most likely candidate for eliminating a given structure. Then, we will explain how it makes use of this information to re-order parser operation sequences to minimize the total work performed by the parser.",
"cite_spans": [
{
"start": 130,
"end": 133,
"text": "[9]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": ".3 H e u r is t ic s for P r in c ip le O r d e r in g",
"sec_num": "2"
},
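{
"text": "A hypothetical Prolog sketch of the two-phase mechanism just outlined (predict_failing_filter/2 and reorder/3 are assumed names standing in for the two phases; later sketches suggest possible definitions):\n\n% Phase 1 predicts a likely failing filter from structure cues;\n% phase 2 re-orders the pending operations before they are applied.\ncheck(Structure) :-\n    default_order(Ops),\n    predict_failing_filter(Structure, Filter),\n    reorder(Ops, Filter, OrderedOps),\n    apply_all(OrderedOps, Structure).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Heuristics for Principle Ordering",
"sec_num": "2.3"
},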
{
"text": "Given any structure, the dynamic ordering mechanism attem pts to satisfy the \"eliminate unnececessary work\" ethic by predicting a \"failing\" filter for that *78-structure. More precisely, it will try to predict the principle that a given struc ture violates on the basis of the simple structure cues. Since the ordering mech anism cannot know whether a structure is well-formed or not, it assumes that all structures are ill-formed and attem pts to predict a failing filter for every structure. In order to minimize the amount of work involved, the types of cues that the dynamic ordering mechanism can test for are deliberately limited. Only inexpensive tests such as whether a category contains certain features (e.g. ianaphoric, iinfinitival, or whether it is a trace or a non-argument) may be used. Any cues that may require significant computation, such as searching for an antecedent, are considered to be too expensive. Each structure cue is then associated with a list of possible failing filters. (Some examples of the mapping between cues and filters are shown below.) The system then chooses one of the possible failing filters based on this mapping.5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting Failing Filters",
"sec_num": "2.3.1"
},
{
"text": "(2 ) S tr u c tu r e cue P o ssib le fsuling filters trace E m p ty C ate go ry Prin ciple, and C ase C ondition on traces in tran sitiv e C ase F ilter p assiv e T h e ta C riterion C ase F ilter n on-argum ent T h e ta C riterion -(-anaphoric B in din g T h eory Principle A + pronom inal Bin din g T h eory Prin ciple B",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting Failing Filters",
"sec_num": "2.3.1"
},
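{
"text": "The cue-to-filter table in (2) could be encoded directly as a relation; a hypothetical Prolog rendering (the predicate name cue_filter/2 is an assumption, not PO-PARSER code):\n\n% cue_filter(StructureCue, CandidateFailingFilter).\ncue_filter(trace,        empty_category_principle).\ncue_filter(trace,        case_condition_on_traces).\ncue_filter(intransitive, case_filter).\ncue_filter(passive,      theta_criterion).\ncue_filter(passive,      case_filter).\ncue_filter(non_argument, theta_criterion).\ncue_filter(anaphoric,    binding_principle_a).\ncue_filter(pronominal,   binding_principle_b).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting Failing Filters",
"sec_num": "2.3.1"
},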
{
"text": "The correspondence between each cue and the set of candidate filters may be systematically derived from the definitions of the relevant principles. For example, Principle A of the Binding Theory deals with the conditions under which antecedents for anaphoric items, such as \"each other\" and \"him self', must appear. Hence, Principle A can only be a candidate failing filter for struc tures that contain an item with the -f-anaphoric feature. Other correspondences may be somewhat less direct: for example, the Case Filter merely states that all overt noun phrase must have abstract Case. Now, in the P O -P A R SE R the conditions under which a noun phrase may receive abstract Case are defined by two separate operations, namely, Inherent Case Assignment and Structural Case Assignment. It turns out that an instance where Structural Case Assignment will not assign Case is when a verb that normally assigns Case has passive mor phology. Hence, the presence of a passive verb in a given structure may cause an overt noun phrase to fail to receive Case during Structural Case Assignment -which, in turn may cause the Case Filter to fail.6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting Failing Filters",
"sec_num": "2.3.1"
},
{
"text": "The failing filter mechanism can been seen as an approximation to the Cheapest-first heuristic in conjunct ordering problems. It turns out that if the cheapest conjunct at any given point will reduce the search space rather than expand it, then it can be shown that the optimal ordering must contain that conjunct at that point. Obviously, a failing filter is a \"cheapest\" operation in the sense that it immediately eliminates one structure from the set of possible structures under consideration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting Failing Filters",
"sec_num": "2.3.1"
},
{
"text": "Although the dynamic ordering mechanism performs well in many of the test cases drawn from the reference text, it is by no means foolproof. There are also many cases where the prediction mechanism triggers an unprofitable re-ordering of the default order of operations. (We will present one example of this in the next section.) A more sophisticated prediction scheme, perhaps one based on more complex cues, could increase the accuracy of the ordering mechanism. However, we will argue that it is not cost-effective to do so. The basic reason is that, in general, there is no simple way to determine whether a given structure will violate a certain principle.7 T hat is, as far as one can tell, it is difficult to produce a cheap (relative to the cost of the actual operation itself), but effective approximation to a filter operation. For example, in Binding Theory, it is diffi cult to determine if an anaphor and its antecedent satisfies the complex locality restrictions imposed by Principle A without actually doing some searching for a binder. Simplifying the locality restrictions is one way of reducing the co6t of approximation, but the very absence of search is the main reason why the overhead of the present ordering mechanism is relatively small.8 Hence, having more sophisticated cues may provide better approximations, but the tradeoff is th at the prediction methods may be almost as expensive as performing the real operations themselves.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting Failing Filters",
"sec_num": "2.3.1"
},
{
"text": "Given a candidate failing filter, the dynamic ordering mechanism has to schedule the sequence of parser operations so that the failing filter is performed as early as possible. Simply moving the failing filter to the front of the operations queue is not a workable approach for two reasons.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logical Dependencies and Re-ordering",
"sec_num": "2.3.2"
},
{
"text": "Firstly, simply fronting the failing filter may violate logical dependencies be tween various parser operations. For example, suppose the Case Filter was cho sen to be the failing filter. To create the conditions under which the Case Filter can apply, both Case assignment operations, namely, Inherent Case Assignment and Structural Case Assignment, must be applied first. Hence, fronting the Case Filter will also be accompanied by the subsequent fronting of both assignment operations unless, of course, they have already been applied to the structure in question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logical Dependencies and Re-ordering",
"sec_num": "2.3.2"
},
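{
"text": "One way to picture dependency-respecting fronting is the following hypothetical sketch (it uses SWI-Prolog's intersection/3 and subtract/3, assumes depends_on/2 dependency facts, and ignores the bounded-movement restriction mentioned in the footnotes): the predicted failing filter is moved forward together with any of its prerequisite operations that are still pending.\n\n% depends_on(Filter, PrerequisiteOperations) -- assumed dependency facts.\ndepends_on(case_filter, [inherent_case_assignment, structural_case_assignment]).\ndepends_on(binding_principle_a, [free_indexing]).\n\nreorder(Ops, Filter, OrderedOps) :-\n    ( depends_on(Filter, Prereqs) -> true ; Prereqs = [] ),\n    append(Prereqs, [Filter], Front0),   % prerequisites first, then the filter\n    intersection(Front0, Ops, Front),    % keep only operations still pending\n    subtract(Ops, Front, Rest),\n    append(Front, Rest, OrderedOps).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logical Dependencies and Re-ordering",
"sec_num": "2.3.2"
},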
{
"text": "Secondly, the failing filter approach does not take into account the behaviour of generator operations. A generator may be defined as any parser operation that always produces one output, and possibly more than one output, for each input. For example, the operations corresponding to X rules, Move-a, Free Indexing and LF Movement are the generators in the p op a r s e r . (Similarly, the operations that we have previously referred to as \"filters\" may be characterized as parser operations that, when given N structures as input, pass N and possibly fewer than N structures.) Due to logical dependencies, it may be necessary in some situations to invoke a generator operation before a failure filter can be applied. For example, the filter Principle A of the Binding Theory is logically dependent on the generator Free Indexing to generate the possible antecedents for the anaphors in a structure. Consider the possible binders for the anaphor \"him self\" in \"John thought that Bill saw him self\" as shown below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logical Dependencies and Re-ordering",
"sec_num": "2.3.2"
},
{
"text": "(3) a. *John, thought that Bill,-saw himself, b. John, thought that Billy saw himself; c.*John, thought that Billy saw himself* Only in example (3b), is the antecedent close enough to satisfy the locality restrictions imposed by Principle A. Note that Principle A had to be applied a total of three times in the above example in order to show that there is only one possible antecedent for \"him self\". This situation arises because of the gen eral tendency of generators to overgenerate. But this characteristic behaviour of generators can greatly magnify the extra work that the parser does when the dynamic ordering mechanism picks the wrong failing filter. Consider the ill-formed structure u*John seems that he likes t\" (a violation of the princi ple that traces of noun phrase cannot receive Case.) If however, Principle B of the Binding Theory is predicted to be the failure filter (on the basis of the structure cue \"he\" ), then Principle B will be applied repeatedly to the index ings generated by the Free Indexing operation. On the other hand, if the Case Condition on Traces operation was correctly predicted to be the failing filter, then Free Indexing need not be applied at ail. The dynamic ordering mech anism of the P O -P A R SE R is designed to be sensitive to the potential problems caused by selecting a candidate failing filter that is logically dependent on many generators.9 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logical Dependencies and Re-ordering",
"sec_num": "2.3.2"
},
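{
"text": "Example (3) also illustrates the computational asymmetry between generators and filters; a toy Prolog encoding (purely illustrative, with the three candidate binders of (3) hard-wired):\n\n% Generator: may succeed more than once per input, one solution per indexing.\nfree_indexing(himself, Antecedent) :-\n    member(Antecedent, [john, bill, other]).\n\n% Filter: simply passes or rejects a given indexing.\nbinding_principle_a(himself, bill).\n\npossible_binder(Anaphor, Antecedent) :-\n    free_indexing(Anaphor, Antecedent),       % generator applied first\n    binding_principle_a(Anaphor, Antecedent). % filter checked on each indexing\n\n% ?- possible_binder(himself, X) succeeds only with X = bill, after Principle A\n% has been tried on all three indexings, mirroring the discussion above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logical Dependencies and Re-ordering",
"sec_num": "2.3.2"
},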
{
"text": "In this section we describe how the characterization of parser operations in terms of filters and generators may be exploited further to improve the perfor mance of the p op a r s e r for some operations. More precisely, we make use of certain computational properties of linguistic filters to improve the backtrack ing behaviour of the p op a r s e r . The behaviour of this optimization will turn out to complement that of the ordering selection procedure quite nicely. That is, the optimization is most effective in exactly those cases where the selection procedure is least effective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "L in g u is t ic F ilt e r s a n d D e t e r m in is m",
"sec_num": null
},
{
"text": "We hypothesize that linguistic filters, such as the Case Filter, Binding Con ditions, ECP, and so on, may be characterized as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "L in g u is t ic F ilt e r s a n d D e t e r m in is m",
"sec_num": null
},
{
"text": "(4) Hypothesis: Linguistic filters are side-effect free conditions on configurations",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "L in g u is t ic F ilt e r s a n d D e t e r m in is m",
"sec_num": null
},
{
"text": "In terms of parser operations, this means that filters should never cause structure to be built or attem pt to fill in feature slots.10 Moreover, computa tionally speaking, the parser operations corresponding to linguistic filters should be deterministic. That is, any given structure should either fail a filter or just pass. A filter operation should never need to-succeed more than once, simply because it is side-effect free.11 By contrast, operations that we have character ized as generators, such as Move-a and Free Indexing, are not deterministic in this sense. T hat is, given a structure as input, they may produce one or more structures as output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "L in g u is t ic F ilt e r s a n d D e t e r m in is m",
"sec_num": null
},
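{
"text": "A minimal sketch of what a side-effect-free, deterministic filter might look like in Prolog (np_in/2, overt/1 and has_case/1 are assumed helper predicates, not PO-PARSER code):\n\n% A filter only inspects the structure; it never builds structure or fills\n% in feature slots, and it succeeds at most once for a given input.\ncase_filter(Structure) :-\n    forall( ( np_in(Structure, NP), overt(NP) ),\n            has_case(NP) ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Filters and Determinism",
"sec_num": "2.4"
},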
{
"text": "the \"d istan ce\" of p oten tial failure filters from the current state of the parser in term s of the num ber of generators yet to be applied. T h en the failing filter will be chosen on the basis of som e com bination of structure cues and generator distance. Currently, the PO -PA RSER uses a sligh tly different and cheaper schem e. T he failure filter is chosen solely on the basis of structure cues. However, the fronting m echanism is restricted so that the chosen filter can only m ove a lim ited num ber of position s ahead .A' its original position . T he original operation sequence is designed such that the distance of the filter from the front of th e sequence is roughly proportional to the num ber of (ou tstan d in g) operations that the filter is depend en t on.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "L in g u is t ic F ilt e r s a n d D e t e r m in is m",
"sec_num": null
},
{
"text": "10 So far, we have not encountered any lin guistic filters that require either structure building or feature assign m en t. O perations such as 5-role and Case assignm ent are not considered filters in the sense of the definition given in the previous section. In the PO -PA RSER , these operation s will never fail. However, definitions that involve som e elem ent (3 would not violate the (i-within-i) filter \u2022 ( 7, ...S,...] 11 It turns out th at there are situ ation s where a filter operation (although side-effect free) could su cceed m ore than once. For exam ple, the linguistic filter know n as the \"E m pty C ate gory Principle\" (E C P ) im plies that all traces m ust be \"properly governed\" . A trace m ay satisfy proper governm ent by being either \"lexically governed\" or \"anteced en t governed\" . Now con sider the structxire [c p VVhati did you [vp read ti]]. T he trace ti is b oth lexically governed (by the verb read) and anteced en t governed (by its antecedent w hat). In the PO -PA RSER the EC P op eration can su cceed tw ice for cases such as t\\ above.",
"cite_spans": [],
"ref_spans": [
{
"start": 366,
"end": 427,
"text": "(3 would not violate the (i-within-i) filter \u2022 ( 7, ...S,...]",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "L in g u is t ic F ilt e r s a n d D e t e r m in is m",
"sec_num": null
},
{
"text": "O bviously, there are m any ways to im plem ent such a selection procedure. Currently, the PO -PA R SER uses a voting schem e based on the frequency of cues. T he (unproven) underlying assu m p tion ia th at the probability of a filter being a failing filter increases w ith the num ber o f occurrences o f its associated cues in a given structure. For exam ple, the more traces there are in a structure, the more Likely it is that one of them will violate som e filter applicable to traces, such as the E m pty C ategory Principle (E C P ). 8 It is possible to au tom ate the process of finding structure cues sim ply by in sp ectin g the closure o f the definitions of each filter and all dependent operations. O ne m eth o d of deriving",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
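{
"text": "The voting scheme described in this footnote might be sketched as follows (hypothetical code assuming SWI-Prolog's library(aggregate) and an assumed cue_in/2 predicate that enumerates the cues present in a structure):\n\n% Every occurrence of a cue casts one vote for each filter associated with it;\n% the filter with the most votes is predicted to be the failing filter.\npredict_failing_filter(Structure, Filter) :-\n    findall(F, ( cue_in(Structure, Cue), cue_filter(Cue, F) ), Votes),\n    setof(Count-F,\n          ( member(F, Votes),\n            aggregate_all(count, member(F, Votes), Count) ),\n          Ranked),\n    last(Ranked, _Count-Filter).   % highest vote count wins",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},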
{
"text": "13O bviously, the sp eed -u p ob tain ed will depend on the number o f principles present in the sy stem and the degree o f 'fin e-tu n in g' of the failure filter selection criteria.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "From our informal experiments with the PO -PA R SE R , we have found that dy namic principle-ordering can provide a significant improvement over any fixed ordering. We have found that speed-ups varying from three-or four-fold to order-of-magnitude improvements are possible in many cases.13The control structure of the PO -PA R SER forces linguistic principles to be ap plied one at a time. Many other machine architectures are certainly possible. For example, we could take advantage of the independence of many principles and apply principles in parallel whenever possible. However, any improvement in parsing performance would come at the expense of violating the minimum (un necessary) work ethic. Lazy evaluation of principles is yet another alternative. However, principle-ordering would still be an im portant consideration for effi cient processing in this case. Finally, we should also consider principle-ordering from the viewpoint of scalability. The experience from building prototypes of the p op a r s e r suggests that as the level of sophistication of the parser increases (both in terms of the number and complexity of individual principles), the effect of principle-ordering also becomes more pronounced.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions: The Utility of Principle-Ordering",
"sec_num": null
},
{
"text": "Given the above hypothesis, we can cut down on the amount of work done by the p op a r s e r by modifying its behaviour for filter operations. Currently, the parser employs a backtracking model of computation. If a particular parser op eration fails, then the default behaviour is to attem pt to re-satisfy the operation that was called immediately before the failing operation. In this situation, the p op a r s e r will only attem pt to re-satisfy the preceding operation if it happens to be a generator. When the preceding operation is a filter, then the parser will skip the filter and, instead, attem pt to resatisfy the next most recent operation and so on.12 For example, consider the following calling sequence:Suppose that a structure generated by generator G 2 passes filters and F2, but fails on filter F3 . None of the three filters could have been the cause of the failure by the side-effect free hypothesis. Hence, we can skip trying to resatisfy any of them and backtrack straight to G2.Note that this optimization is just a limited form of dependency-directed backtracking. Failures are traced directly to the last generator invoked, thereby skipping over any intervening filters as possible causes of failure. However, the backtracking behaviour is limited in the sense that the most recent generator may not be the cause of a failure. Consider the above example again. The failure of F3 need not have been caused by G2. Instead, it could have been caused by structure-building in another generator further back in the calling sequence, say G x. But the parser will still try out all the other possibilities in G2 first.Consider a situation in which the principle selection procedure performs poorly. T hat is, for a particular ill-formed structure, the selection procedure will fail to immediately identify a filter that will rule out the structure. The advantages of the modified mechanism over the default backtrack scheme will be more pronounced in such situations -especially if the parser has to try several filters before finding a \"failing\" filter. By contrast, the behaviour of the modified mechanism will resemble that of the strict chronological scheme in situations where the selection procedure performs relatively well (i.e. when a true failing filter is fronted). In such cases, the advantages, if significant, will be small. (In an informal comparison between the two schemes using about eighty sentences from the reference text, only about half the test cases exhibited a noticeable decrease in parsing time.)13T h is behaviour can be easily sim ulated using the 'c u t' predicate in P rolog. We can route all calls to filter operation s through a predicate that calls the filter and then cuts off all internal choice p oin ts. (For ind ep en dent reasons, the PO -PA RSER does not actu ally use this approach.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
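,
{
"text": "The footnote's suggestion of simulating this behaviour with Prolog's 'cut' can be sketched as follows (a hypothetical wrapper; as the text notes, the PO-PARSER itself does not actually use this approach):\n\n% Route every filter call through a wrapper that cuts away the filter's\n% internal choice points, so a filter succeeds at most once and backtracking\n% returns directly to the most recent generator.\nfilter_call(Filter, Structure) :-\n    call(Filter, Structure),\n    !.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
}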
],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "Its N a tu re , O rigin, and U se",
"authors": [
{
"first": "N",
"middle": [
"A"
],
"last": "C H Om Sk Y",
"suffix": ""
}
],
"year": 1986,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C h om sk y , N .A ., K n o w le d g e o f L a n g u a g e : Its N a tu re , O rigin, and U se.\" 1986. P rag e r.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "S y n ta c tic A n a ly sis of E n glish with re sp ect to G o vern m en t-B in d in g G r a m m a r",
"authors": [
{
"first": "N",
"middle": [],
"last": "C O Rre A",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C o rre a , N ., \" S y n ta c tic A n a ly sis of E n glish with re sp ect to G o vern m en t-B in d in g G r a m m a r ,\" P h .D D isse rta tio n , 1988. S y ra cu se U niversity.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "rain and C o gn itiv e Scien ces",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jo h n so n , M ., \" K n o w led ge aa L a n g u a g e ,\" m s. M .I.T . B rain and C o gn itiv e Scien ces.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "L evels and E m p ty C ate g o rie s in a P rin cip les and P a ra m e te r s A p p ro ach to P a rsin g",
"authors": [
{
"first": "H",
"middle": [
"P"
],
"last": "K O Lb",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K o lb , H .P ., k C . T h ie rsch , \" L evels and E m p ty C ate g o rie s in a P rin cip les and P a ra m e te r s A p p ro ach to P a rsin g ,\" m s. 1988. T ilb u rg U niversity.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "L e c tu r e s on B in d in g a n d E m p t y C a teg o ries. 1988. M .I.T . P ress",
"authors": [
{
"first": "H",
"middle": [],
"last": "L Aan Ik",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Riagerek A, A C O U Rse In G B S Y N Ta X",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L aan ik , H. k J . U riagerek a, A C o u rse in G B S y n ta x : L e c tu r e s on B in d in g a n d E m p t y C a teg o ries. 1988. M .I.T . P ress.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "S o m e S tu d ie s in M achine L earn in g U sing the G a m e of C heckers. II -R e ce n t P r o g re ss",
"authors": [
{
"first": "A",
"middle": [
"L"
],
"last": "S A M U E Ls",
"suffix": ""
}
],
"year": 1967,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S a m u e ls, A .L ., \" S o m e S tu d ie s in M achine L earn in g U sing the G a m e of C heckers. II -R e ce n t P r o g re ss,\" IBM Journal. N ovem ber 1967.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "T h e L o gical A p p ro ach to S y n ta x : F o u n d atio n s",
"authors": [
{
"first": "E",
"middle": [
"P"
],
"last": "Stabler",
"suffix": "Jr."
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S ta b le r, E .P ., J r . \"T h e L o gical A p p ro ach to S y n ta x : F o u n d atio n s, S p e cific a tio n s an d Im p le m e n ta tio n s of T h eo rie s of G overn m en t and B in d in g .\" m s. 1989. U ni v ersity o f W estern O n ta rio .",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "PO -PA R SER has abou t tw elve to six teen parser operations. G iven a set of one dozen operation s, there are ab ou t 500 m illion different ways to order these operations. F ortunately, only abou t h*Jf a m illion o f these are actually valid, due to logical depend en cies b etw een the various operation s. However, this is still far too m any to test exhaustively. Instead, only a few w ell-chosen orderings were tested on a num ber of sentences from the reference. T h e procedure",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "involved choosing a default sequence of operation* and 'scram b lin g' the sequence by m oving operation s as far as possible from their original p osition s (m odulo any logical dependencies b etw een op eration s). 3In th e PO -PA R SER for exam ple, the Case F ilter operation w hich require* that all overt noun phrases have abstract Case assigned, is dependent on b oth the inherent and structural Case assign m en t operations. T h at is, in any valid ordering the filter m ust be preceded by both operation s.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "In their paper, S m ith Sc G enesereth drew a d istin ction betw een \"sta tic \" and \"dynam ic\" ordering strategies. In sta tic strategies, the conjuncts are first ordered, and then solved in the order presented . By con trast, in dyn am ic strategies the chosen ordering m ay be revised b etw een solvin g ind ividu al conjun cts. Currently, th e PO -PA RSER em ploys a d yn am ic strategy. T h e ordering m ech anism com putes an ordering baaed on certain features of each structure to be processed. T h e ordering m ay be revised after certain operations (e.g. m ovem ent) that m odify the structure in question.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "cue* i> to collect the n egation of all condition* involving category features. For exam ple, if an operation contain* the condition \"n o t ( I t \u00ab \u00ab h a * -f\u00ab a tu r* i n t r a n s i t i v * ) \" , then we can take the presence of an intransitive item a* a possible reason for failure of that operation. However, this approach ha* the p o te n tia l problem of generating too m any cues. A lthough, it m ay be relatively inexpen*ive to test each individual cue, a large number of cues will significantly increase the overhead o f th e ordering m echanism . Furtherm ore, it turns out that not all cues are equally useful in predicting failure filter*. O ne solution m ay be to use \"weight*\" to rank the predictive u tility o f each cue w ith respect to each filter. T h en an adap tive algorithm could be used to \"learn\" the w eighting value*, in a m anner rem iniscent of Sam uels [8]. T he failure filter prediction process could then autom atically elim inate testin g for relatively un im p ortan t cue*. Thi* approach is currently being investigated . 7If *uch a schem e can be found, then it can effectively replace the definition o f the principle itself. 8 W e ignore the ad d ition al co*t of re-ordering the sequence of operation* once a failing filter ha* been predicted. T he actual re-ordering can be m ade relatively inexpensive usin g various trick*. For exam ple, it ia po*\u00abible to \"cache\" or com pute (off-line) com m on ca*es of re-ordering a default sequence w ith respect to various failing filters, thu* reducing the cost of re-ordering to th at o f a sim ple look-up.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF4": {
"text": "9O bviously, there are m any different w ays to accom plish this. O ne m eth od is to com pute 2 .4",
"num": null,
"uris": null,
"type_str": "figure"
}
}
}
}