{
"paper_id": "C90-1002",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:35:57.650094Z"
},
"title": "Design of a Hybrid Deterministic Parser",
"authors": [
{
"first": "Kanaan",
"middle": [
"A"
],
"last": "Faisal",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Fahd University of Petroleum and Minerals T Dhahran",
"location": {
"addrLine": "31261 Kingdom of Saudi Aa'abia"
}
},
"email": ""
},
{
"first": "Start",
"middle": [
"C"
],
"last": "Kwasny",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Washington University St. Louis",
"location": {
"postCode": "63130-4899",
"region": "MO",
"country": "U.S.A"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Some sm~l modifications to deterministic grammar rules arc necessary to insure the suitability of each rule for use with our \"winner-take-all\" network. Many of these changes are simplifications that have been ~: The sponsors of the Center are McDonnell Douglas Coq~oration and Southwestern Bell Telephone Company.",
"pdf_parse": {
"paper_id": "C90-1002",
"_pdf_hash": "",
"abstract": [
{
"text": "Some sm~l modifications to deterministic grammar rules arc necessary to insure the suitability of each rule for use with our \"winner-take-all\" network. Many of these changes are simplifications that have been ~: The sponsors of the Center are McDonnell Douglas Coq~oration and Southwestern Bell Telephone Company.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A deterministic parser is under development which represents a departure from traditional deterministic parsers in that it combines both symbolic and connectionist components. The connectionist component is trained either from patterns derived from the rules of a deterministic grammar. ~The development and evolution of such a hybrid architecture has lead to a parser which is superior to any known deterministic parser.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Experiments are described and powerful training techniques are demonstrated that permit decision-making by the connectionist component in the parsing process. This approach has permitted some simplifications to the rules of other deterministic parsers, including the elimination of rule packets and priorities. Furthermore, parsing is performed more robustly and with more tolerance for error. Data are presented which show how a connectionist (neural) network trained with linguistic rules can parse both expected (grammatical) sentences as well as some novel (ungrammatical or lexically ambiguous) sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The determinism hypothesis which forms the basis for PARSIFAL (Marcus, 1980) If we accept this hypothesis, it must follow that processing need not depend in any fundamental way on backtracking. As a further consequence, no partial structures are produced during parsing which fail to become part of the final structure. PARSIFAL was the first of a number of systems to demonstrate how deterministic parsing of Natural Language can be performed using a rule-based grammar. Extensions to PARSIFAL have been researched independently including the parsing of ungrammatical sentences in PARAGRAM (Charniak, 1983) , the resolution of lexical ambiguities in ROBIE (Milne, 1986) , and the acquiring of syntactic rules from examples in LPARSIFAL (Berwick, 1985) . Traditional deterministic parsers process input sentences primarily left-to-right. Determinism is accomplished by permitting a lookahead of up to three constituents with a constituent buffer designated for that purpose. To permit embedded structures, a stack is also part of the architecture. Rules are partitioned into rule packets which dynamically become active or inactive during parsing, but are usually associated with the current (top-level) node of the structure being built. A single processing step consists of selecting a rule that can fire from an active rule packet, firing the rule, and performing its action. Conflicts are resolved within packets from the static ordering (priority) of rules. The action effects changes to the stack and buffer. After a series of processing steps, a termination rule fires and processing ends. The final structure is left on top of the stack.",
"cite_spans": [
{
"start": 62,
"end": 76,
"text": "(Marcus, 1980)",
"ref_id": "BIBREF8"
},
{
"start": 591,
"end": 607,
"text": "(Charniak, 1983)",
"ref_id": "BIBREF1"
},
{
"start": 657,
"end": 670,
"text": "(Milne, 1986)",
"ref_id": "BIBREF10"
},
{
"start": 737,
"end": 752,
"text": "(Berwick, 1985)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Determinism and Parsing",
"sec_num": "2."
},
{
"text": "Our parser takes the approach of deterministic parsing and combines it with connectionism. McClelland and Kawamoto (1986, p.317 ) first suggested the combination of these ideas. Deterministic parsing provides a setting in which no backtracking occurs while connectionism provides generalization and robustness. Our goal is to combine the two in a way that enhances their advantages and minimizes their faults. In simple terms, the rules of the deterministic parser are replaced by a network which is trained from training sequences derived from the grammar rules. The network embodies the decision-making component of the parser and maps a state of the parser to an action. Actions are performed in the traditional way by symbolically manipulating the stack and buffer contents.",
"cite_spans": [
{
"start": 91,
"end": 127,
"text": "McClelland and Kawamoto (1986, p.317",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hybrid Deterministic Parsing",
"sec_num": "3."
},
{
"text": "Parsing experiments are conducted to determine the effectiveness of training by attempting to process ungrammatical and lexically ambiguous sentence forms. The performance of our hybrid parser depends on the extent and nature of the training. Once trained, the network is efficient, both in terms of representation and execution. proposed by others and are not essential to the success of our approach. All of these changes are made without substantially altering the capabilities represented in the original grammar rules. Changes include: elimination of the packet system; removal of attention-shifting rules; removal of rule priorities; reduction of lookahead to two positions instead of three; and revision of the rules so that a single action is performed by each.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hybrid Deterministic Parsing",
"sec_num": "3."
},
{
"text": "As an example, consider part of one sample grammar rule from PARSIFAL and its reformulation in the hybrid parser. Figure 1 shows the two styles side-byside. Rule actions are in capital letters; rule names are in bold. In the PARSIFAL rule, a priority number is given explicitly and the rule contains multiple actions and conditionals similar to a programming language. It explicitly activates and deactivates rule packets, executes rules, creates new phrase structure nodes, and tests for complex properties of the elements in the buffer.",
"cite_spans": [],
"ref_spans": [
{
"start": 114,
"end": 122,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Hybrid Deterministic Parsing",
"sec_num": "3."
},
{
"text": "Rules in the hybrid parser eliminate many of these details without substantially changing the capabilities of the grammar. In the figure, two of several rules derived from the Main-verb rule are shown. In the first rule, a new VP active node is created on the stack and in the second rule the verb is attached as a main verb to the active node (VP) on top of the stack. With the elimination of rule packeting, no priorities nor explicit packet activations/deactivations are required. While this mechanism is precisely what is required for efficient design of a symbolic parser, priorities are at the essence of what is learned when training the connectionist component of the hybrid. Actions such as creating and attaching or selecting the argument structure of the verb are carried out symbolically in the hybrid parser. Also, a symbolic lexicon is consulted to determine the properties of words. When a predicate such as a verb is encountered, the requirements or expectations for its arguments are made part of the features of the active VP node, thus affecting which actions will be executed later on.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hybrid Deterministic Parsing",
"sec_num": "3."
},
{
"text": "Elimination of the packet system. In PARSIFAL, rules are organized into packets. Only those rules in an active packet are considered while processing. Often, more than one packet is active. For example, the packet CPOOL, or clause level packet, is always active. Since the hybrid parser has no packets, every rule is considered in parallel with the situation dictating which action should be taken.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evolutionary Steps from PARSIFAL",
"sec_num": "3.2."
},
{
"text": "on attention-shifting rules to transparently build certain constituents, particularly NPs, which begin in the second buffer position. For example, in the sentence taken from Marcus: Have the students who missed the exam taken the makeup today?, the subject-aux inversion mechanism (switch) must be deliberately postponed until the NP starting in the second position is analyzed as a complete constituent. Only then can the inversion take place. PARSIFAL solves this problem by temporarily shifting buffer positions so that the parser is viewing the buffer beginning in the second position. The second lefunost complete constituent (the NP) is then reduced before the first element constituent. We follow the lead of Berwick (1985) and others in our treatment of such cases by using the parse stack as a \"movement stack\" and stack the postponed item. Two actions, PUSH and DROP, are suitable for this purpose. In the example above, the end of the noun phrase, the students, can not be determined without applying the rules to the embedded clause. When complete, the NP is dropped into the buffer and the auxilim'y verb can be re-inserted into the buffer allowing the inversion can take place. Note that at no point is the \"monotonic\" property of determinism violated by undoing previous actions.",
"cite_spans": [
{
"start": 716,
"end": 730,
"text": "Berwick (1985)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Removal of attention-shifting rules. PARSIFAL relies",
"sec_num": null
},
{
"text": "Removal of rule priorities. In PARSIFAL, rules are ordered by priority. In the hybrid parser, rules have no priority. They compete with each other and the most relevant rule, based on training, wins the competition. Only one action, corresponding to the Iiring of one single-action rule, will be performed on each processing step. The current active node and its attachments along with the contents of the two buffer cells is the basis for this decision. The rules are coded in such a way that every rule has a unique left-hand side and is thus relevant to situations most similar to its left-hand side pattern.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Removal of attention-shifting rules. PARSIFAL relies",
"sec_num": null
},
{
"text": "Restriction of grammar rule format. The format of grammar rules in the hybrid parser is different from PARSIFAL in two ways. First, grammar rules are forbidden to have more than a single action which is performed on the first buffer cell only; and second, rule patterns are defined to uniformly mention items in both buffer cells. These are the only actions the grammar rules can perform. The buffer is manageA symbolically and if a position is vacated, an item is taken from the input stream to fill the position. The connectionist component can only examine the current active node, its immediate attachments, and the features of the first two buffer items. Once a node is attached to its parent, it can never again be examined.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Removal of attention-shifting rules. PARSIFAL relies",
"sec_num": null
},
{
"text": "The hybrid parser is capable of successfully processing a wide; variety of sentence forms such as simple declarative sentences, passives, imperatives, yes-no questions, wh-questions, wh-clauses, and other embedted sentences. The grammar to be learned by the subsymbolic system, which has 73 rules, can be separated into base phrase structure rules and transformationaltype rules. The base structure system can be further broken down into rules for NPs, VPs, auxiliaries, main sentence, PPs, and embedded sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Grammar",
"sec_num": "3.3."
},
{
"text": "Transformational rules fall into two groups: simple local transformations (like subject-aux inversion) and major movement rules like wh movement. In general, for each type of phrase, creation of the phrase (creating a new node on the active node stack) and completion of the phrase (dropping it into the buffer) is carried out by a separate grammar rule action.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Grammar",
"sec_num": "3.3."
},
{
"text": "The rules for analyzing verb phrases discriminate among verbs that take different kinds of complements. For example, verbs that take a wh complement are discriminated from ones that take a that complement. Verbs like want that take either a missing or lexical subject in embedded sentential complements are separated from verbs like try or believe that do not take a lexical subject. Verbs that take one NP object are distinguished from ones that take two NP objects through lexical features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Grammar",
"sec_num": "3.3."
},
{
"text": "The hybrid parser is composed of a connectionist network trained using backward propagation (Werbos 1974; Rumelhart et al, 1986) As Figure 2 illustrates, the hybrid parser is organized into a symbolic component and a connectionist component. The latter component is implemented as a numeric simulation of an adaptive neural network. The symbolic and connectionist components cooperate in a tightly coupled manner since there are proven advantages to this type of organization (Kitzmiller and Kowalik, 1987) . For the hybrid parser, the advantages are performance and robustness.",
"cite_spans": [
{
"start": 92,
"end": 105,
"text": "(Werbos 1974;",
"ref_id": "BIBREF13"
},
{
"start": 106,
"end": 128,
"text": "Rumelhart et al, 1986)",
"ref_id": "BIBREF11"
},
{
"start": 476,
"end": 506,
"text": "(Kitzmiller and Kowalik, 1987)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 132,
"end": 140,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Architecture of the Hybrid Parser",
"sec_num": "4."
},
{
"text": "The symbolic component manages the input sentence and the flow of constituents into the lookahead buffer, coding them as required for the input level of the network in the connectionist component. On the return side, it evaluates the activations of the output units, decides which action to perform, and performs that action, potentially modifying the stack and buffer in the process. The responsibility of the connectionist component, therefore, is to examine the contents of the buffer and stack and yield a preference for a specific action. These preferences are garnered from many iterations of back-propagation learning with instances of the rule templates. Learning itself occurs off-line and is a time-consuming process, but once learned the processing times for file system are excellent. Computations need only flow in one direction in the network. The feed-forward multiplication of weights and computation of activation levels for individual units produce the pattern of activation on the output level. Activation of output units is interpreted in a winner-take-all manner, with the highest activated unit determining the action to be taken. In tile set of experiments described here, the network has a three-layer architecture, as illustrated, with 66 input units, 40 hidden units, and 40 output units. Each input pattern consists of two feature vectors from the buffer items and one vector from the stack. The first vector activates 26 input units and the second vector activates 12 input units in a pattern vector representing a word or constituent of the sentence. The stack vector activates 28 units representing the current node on the stack and its attachments. One hidden layer has proven sufficient in all of these experiments. The output layer permits the choice of one out of 40 possible actions that can be performed on a single iteration of processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Architecture of the Hybrid Parser",
"sec_num": "4."
},
{
"text": "During sentence processing, the network is presented with encodings of the buffer and the top of the stack. What the model actually sees as input is not the raw sentence but a coded representation of each word in the sentence in a form that could be produced by a simple lexicon, although such a lexicon is not part of the model in its present form. The network produces the action to be taken which is then performed. If the action creates a vacancy in the buffer and if more of the sentence is left to be processed then the next sentence component is moved into the buffer. The process then repeats until a stop action is performed, usually when the buffer becomes empty. Iteration over the input stream is achieved in this fashion. Figure 2 illustrates the nature of the processing, although it shows a composite of the initial and final states of the parser. When a sentence form like \"John should have scheduled the meeting\" appears in the input stream, the first two constituents fill the buffer. These contents along with the contents of the top of the stack and its attachments are encoded and presented to the network. Coding is based on a simple scheme in which those features of the buffer and stack that are actually tested by grammar rules are represented (see Faisal, 1990) . The network, in turn, produces a single action. Specification of the action by the network is done by activating one of the output units more than the others thus determining the winner (called \"winnertake-all\"). This action is then executed symbolically, yielding changes in the buffer and stack. The process repeats until a stop action is performed at which time the resultant parse structure is left on top of the stack as shown.",
"cite_spans": [
{
"start": 1274,
"end": 1287,
"text": "Faisal, 1990)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 735,
"end": 743,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Architecture of the Hybrid Parser",
"sec_num": "4."
},
{
"text": "Training of the hybrid parser proceeds by presenting patterns to the network and teaching it to respond with an appropriate action. The input patterns represent encodings of the buffer positions and the top of the slick from the deterministic parser. The output of the network contains a series of units representing actions to be performed during processing and judged in a winner-take-all fashion. Network convergence is observed once the network can achieve a perfect score on the training patterns themselves and the error measure has decreased to an acceptable level (set as a parameter). Once the network is trained, the weights are stored in a file so that sentences can be parsed. A sentence is parsed by iteratively presenting the network with coded inputs and performing the action specified by the network.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning a Grammar",
"sec_num": "4.1."
},
{
"text": "Our neural network simulator features a logistic function that computes values in the range of -1 to +1. Each grammar rule is coded as a training template which is a list of feature values. In general, each constituent is represented by an ordered feature vector in which one or more values is ON(+1) for features of the form and all other values are either OFF(-1) or DO NOT CARE (?). A rule template is inslintiated by randomly changing ? to +1 or -1. Thus, each template can be instantiated to give many training patterns and each training epoch is slightly different. It is obviously impossible to test the performance of all these cases, so for the purpose of judging convergence, a zero is substituted for each ? in the rule template to provide testing patterns. For more discussion of the training process, see Faisal and Kwasny (1990) . (1) Scheduled a meeting for Monday.",
"cite_spans": [
{
"start": 818,
"end": 842,
"text": "Faisal and Kwasny (1990)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning a Grammar",
"sec_num": "4.1."
},
{
"text": "(2) John has scheduled the meeting for Monday.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning a Grammar",
"sec_num": "4.1."
},
{
"text": "(3) The meeting seems to have been scheduled for Monday. (4) The jar seems broken. (5) I persuaded John to do it. (6) I saw him do it. (7) Ma131 wants John to have a party. (8) Mary wants to have a party. (9) What will the man put in the comer? (10) What will the man put the book in? (11) Who (lid John see? (12) Who broke the jar? (13) Who is carrying the baby? (14) What is the baby carrying? (15) What did Bob give Mary? (16) The man who wanted to meet Mary has disappeared. (17) The: man who hit Mary with a book has disappeared. (18) The man whom Mary hit with a book has disappeared. (19) I told that boy that boys should do it. (20) That mouse that the cat chased had squeaked. (21) I told Sue you would schedule the meeting. (22) I told the girl that you would schedule the meeting. (23) John is scheduling the meeting for Monday.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning a Grammar",
"sec_num": "4.1."
},
{
"text": "For testing purposes, several sentences are ceded that would parse correctly by the rules of the deterministic parser. Additionally, severed mildly ungrammatical and lexical ambiguous sentences are coded to determine if the network would generalize in any useful way. Most of these examples were drawn from work cited earlier by Chamiak and Milne. The objective is to discover exactly how syntactic context can aid in resolving such problems. In previous work, a simpler (23-rule) grammar was tested with similar results (Kwasny and Faisal,1989) .",
"cite_spans": [
{
"start": 521,
"end": 545,
"text": "(Kwasny and Faisal,1989)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Performance",
"sec_num": "5."
},
{
"text": "Experimentation with grammatical sentences confirms that indeed the rules from the grammar have been learned sufficiently to parse sentences. When training with the rule templates, testing for convergence is possible by changing each ? to a zero value. Here the performance of the hybrid parser is examined with actual sentences and the claim that the parser simulates both PARSIFAL and LPARSIFAL is substantiated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Grammatical Sentences",
"sec_num": "5.1."
},
{
"text": "Gramrnatical sentences, by our definition, are those which parse correctly in the rule-based grammar from which the training set is derived. Table 1 shows several examples of grammatical sentences which are parsed successfully. Parse trees are developed which are identical with ones produced by other deterministic parsing systems.",
"cite_spans": [],
"ref_spans": [
{
"start": 141,
"end": 148,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Parsing Grammatical Sentences",
"sec_num": "5.1."
},
{
"text": "Capabilities described above only duplicate what can be done rather comfortably symbolically. Of course, the feedforward network in the hybrid parser allows very fast decision-making due to the nature of the model. But what other features does the model possess? Importantly, how robust is the processing? As a 36.8 (3a) *John is schedule the meeting for Monday. 9.5 (3b) John is scheduling the meeting for Monday.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Ungrammatical Sentences",
"sec_num": "5.2."
},
{
"text": "54.7 (4a) *John is is scheduling the meeting for Monday.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Ungrammatical Sentences",
"sec_num": "5.2."
},
{
"text": "7.2 (4b) John is scheduling the meeting for Monday.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Ungrammatical Sentences",
"sec_num": "5.2."
},
{
"text": "54.7 (5a) *The boy did hitting Jack.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Ungrammatical Sentences",
"sec_num": "5.2."
},
{
"text": "14.8 (5b) The boy did hit Jack.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Ungrammatical Sentences",
"sec_num": "5.2."
},
{
"text": "137.7 (6a) *'llae meeting is been scheduled for Monday.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Ungrammatical Sentences",
"sec_num": "5.2."
},
{
"text": "559.6 (6b) The meeting has been scheduled for Monday.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Ungrammatical Sentences",
"sec_num": "5.2."
},
{
"text": "565.5 symbolic model, PARAGRAM extends PARSIFAL to handle ungrammatical sentences. This is accomplished by considering all rules in parallel and scoring each test performed on the left-hand side of a rule according to predefined weights. The rule with the best score fires. In this way, processing will always have some rule to fire. Reported experimentation with PARAGRAM shows this to be an effective method of extending the inherent capabilities of the grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Ungrammatical Sentences",
"sec_num": "5.2."
},
{
"text": "To demonstrate its generalization capabilities, the hybrid parser is tested with several exmnples of ungrammatical sentences. Its performance is strictly dependent upon its training experiences since no relaxation rules (Kwasny and Sondheimer, 1981) , meta-rules (Weischedel and Sondheimer, 1983) , or other special mechanisms were added to the original grammar rules to handle ungrammatical cases. In Table 2 , ungrammatical sentences used in testing are shown along with their strengths. These strengths are computed as the reciprocal of the average error per processing step for each sentence and reflect the certainty with which individual actions for building structures are being selected. Although there is no real meaning in the values of these numbers, they are a useful means of comparison. These examples produce reasonable structures when presented to our system. Note that overall average strength is lower for ungrammatical sentences when compared to similar grammatical ones.",
"cite_spans": [
{
"start": 220,
"end": 249,
"text": "(Kwasny and Sondheimer, 1981)",
"ref_id": "BIBREF6"
},
{
"start": 263,
"end": 296,
"text": "(Weischedel and Sondheimer, 1983)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 402,
"end": 409,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Parsing Ungrammatical Sentences",
"sec_num": "5.2."
},
{
"text": "In sentence (la), for example, the structure produced was identical to that produced while parsing sentence (lb). The only difference is that the two auxiliary verbs, have and should, were reversed in the parse tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Ungrammatical Sentences",
"sec_num": "5.2."
},
{
"text": "Sentence (2a) contains a disagreement between the auxiliary has and the main verb schedule and yet the comparable grammatical sentence (2b) parsed identically. Sentences (3a) and (4a) parse comparable to sentence (3b). Sentence (5a) is processed as if it were progressive tense ('The boy is hitting Jack'). In PARAGRAM, a nonsensical parse structure is produced for this sentence, as reported by Charniak (p. 137) . It can be compared with sentence (5b), but there is not one clear choice for how the sentence should appear if grammatical. The problems with using a syntax-based approach to handling ungrammatical sentences are well-known (see, for example, Kwasny, 1980) . 13.6 (3b) Tom hit(v) Mary.",
"cite_spans": [
{
"start": 396,
"end": 413,
"text": "Charniak (p. 137)",
"ref_id": null
},
{
"start": 658,
"end": 671,
"text": "Kwasny, 1980)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Ungrammatical Sentences",
"sec_num": "5.2."
},
{
"text": "29.5 (4a) The <will> gave the money to Mary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Ungrammatical Sentences",
"sec_num": "5.2."
},
{
"text": "16.6 (4b) The will(noun) gave the money to Mary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Ungrammatical Sentences",
"sec_num": "5.2."
},
{
"text": "61.9 (5a) They <can> fish(np). 20.6 (5b) They can(v) fish(np).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Ungrammatical Sentences",
"sec_num": "5.2."
},
{
"text": "30.0 (6a) They can(aux) <fish>.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Ungrammatical Sentences",
"sec_num": "5.2."
},
{
"text": "2.9 (6b) They can(aux) fish(v).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Ungrammatical Sentences",
"sec_num": "5.2."
},
{
"text": "6.3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Ungrammatical Sentences",
"sec_num": "5.2."
},
{
"text": "As a further test of the generalization properties of the hybrid parser, sentences containing lexically ambiguous words are tested. Some of these sentences are shown in Table 3 . Of course, ROBIE takes a symbolic approach in extending PARSIFAL to address these issues by requiring additional rules and lexical features. Note that in the deterministic approach, it is essential for lexical items to be properly disambiguated or backtracking will be required.",
"cite_spans": [],
"ref_spans": [
{
"start": 169,
"end": 176,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Lexical Ambiguity",
"sec_num": "5.3."
},
{
"text": "In testing the hybrid parser, normal sentences are presented, except that selected words are coded ambiguously (here indicated by angle brackets < > around the word). Sentences containing words followed by parentheses are presented to the hybrid parser unambiguously, even though these words have ambiguous uses. The lexical choices are shown in parentheses. In the cases shown, the lexically ambiguous words were correx:tly interpreted and reasonable structures resulted, although lower strengths were observed. The hybrid parser utilizes syntactic context to resolve these ambiguities and automatically works to relate novel situations to training cases through the generalization capability of the network. As before, no additional rules or mechanisms are required to provide this capability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Ambiguity",
"sec_num": "5.3."
},
{
"text": "Sentence (la) contains the word will coded ambiguously as an NP and an auxiliary, modal verb. In the context of the sentence, it is clearly being used as a modal auxiliary and the parser treats it that way as (lb). A similar result was obtained for sentence (2a) which parses as (2b). In sentence (3a), hit is coded to be ambiguous between an NP (as in a baseball hit) and a verb. The network correctly identifies it as the main verb of the sentence as in sentence (3b). Sentence (4a) is constructed as for sentence (4b). Sentence (5a) presents can ambiguously as an auxiliary, modal, and main verb, while fish is presented uniquely as an NP. Can is processed as the main verb of the sentence and results in the same structure as sentence (5b). Likewise, sentence (6a), which contains fish coded ambiguously as a verb/NP and can coded uniquely as an auxiliary verb, produces the same structure as sentence (6b). In the cases shown, the lexically ambiguous words were disambiguated and reasonable structures resulted. Note that the overall average strengths were lower than comparable grammatical sentences discussed, as expected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Ambiguity",
"sec_num": "5.3."
},
{
"text": "Our hybrid deterministic parser is based on a deterministic grammar modified slightly from that found in traditional systems. Our grammar is derived from one used by Marcus, but with much inspiration from the work of Milne, Berwick, and Chamiak. The rules of the grammar are utilized in training a connectionist component. The result is a hybrid system which exhibits characteristics from several well-known extensions of the basic deterministic parser. In particular, some ungrammatical and lexically ambiguous inputs can be successfully processed although no special provisions are made for them. These extended properties come essentially for free due to the coupling of a symbolic component with connectionism.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "6."
},
{
"text": "t The first author gratefldly ackmowledge the support of Khlg Fahd University of Petroleum and Minerals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The Acquisition of Syntactic Knowledge",
"authors": [
{
"first": "R",
"middle": [
"C"
],
"last": "Berwick",
"suffix": ""
}
],
"year": 1985,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Berwick, R.C. 1985. The Acquisition of Syntactic Knowledge. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A Parser with Something for Everyone",
"authors": [
{
"first": "E",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 1983,
"venue": "Parsing Natural Language, M. King",
"volume": "",
"issue": "",
"pages": "117--150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charniak, E. 1983. \"A Parser with Something for Every- one.\" In Parsing Natural Language, M. King, ed. Academic Press, New York, NY, 117-150.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Cormectionist Deterministic Parsing",
"authors": [
{
"first": "K",
"middle": [
"A"
],
"last": "Faisal",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Faisal, K.A. 1990. Cormectionist Deterministic Parsing.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Deductive and Inductive Learning in a Connectionist Deterministic Parser",
"authors": [
{
"first": "K",
"middle": [
"A"
],
"last": "Faisal",
"suffix": ""
},
{
"first": "S",
"middle": [
"C"
],
"last": "Kwasny",
"suffix": ""
}
],
"year": 1990,
"venue": "Proc lntl Joint Conf Neural Networks",
"volume": "",
"issue": "",
"pages": "1--471",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Faisal, K.A. and S.C. Kwasny. 1990. Deductive and Induc- tive Learning in a Connectionist Deterministic Parser. Proc lntl Joint Conf Neural Networks, 1~471-474.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Coupling Symbolic and Numeric Computing in Knowledge-Based Systems",
"authors": [
{
"first": "C",
"middle": [
"T"
],
"last": "Kitzmiller",
"suffix": ""
},
{
"first": "J",
"middle": [
"S"
],
"last": "Kowalik",
"suffix": ""
}
],
"year": 1987,
"venue": "AI Magazine",
"volume": "8",
"issue": "2",
"pages": "85--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kitzmiller, C.T., and J.S. Kowalik. 1987. Coupling Symbolic and Numeric Computing in Knowledge-Based Systems. AI Magazine 8, no. 2, 85-90.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Competition and Learning in a Connectionist Deterministic Parser",
"authors": [
{
"first": "S",
"middle": [
"C"
],
"last": "Kwasny",
"suffix": ""
},
{
"first": "K",
"middle": [
"A"
],
"last": "",
"suffix": ""
}
],
"year": 1989,
"venue": "Proc llth Conf Cog Sci Society",
"volume": "",
"issue": "",
"pages": "690--697",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kwasny, S.C. and K.A. Faisal. 1989. Competition and Learning in a Connectionist Deterministic Parser. Proc llth Conf Cog Sci Society, 690-697.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Relaxation Techniques for Parsing Ill-Formed Input",
"authors": [
{
"first": "S",
"middle": [
"C"
],
"last": "Kwasny",
"suffix": ""
},
{
"first": "N",
"middle": [
"K"
],
"last": "Sondheimer",
"suffix": ""
}
],
"year": 1981,
"venue": "Am J Comp Ling",
"volume": "7",
"issue": "2",
"pages": "99--108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kwasny, S.C. and N.K. Sondheimer. 1981. Relaxation Tech- niques for Parsing Ill-Formed Input. Am J Comp Ling 7, no. 2, 99-108.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Treatment of Ungrammatical and Extra-Grammatical Phenomena in Natural Language Understanding Systems",
"authors": [
{
"first": "S",
"middle": [
"C"
],
"last": "Kwasny",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kwasny, S.C. 1980. \"Treatment of Ungrammatical and Extra-Grammatical Phenomena in Natural Language Under- standing Systems.\" Indiana University Linguistics Club, Bloomington, Indiana.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A Theory of Syntactic Recognition for Natural Language",
"authors": [
{
"first": "M",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcus, M. P. 1980. A Theory of Syntactic Recognition for Natural Language. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Mechanisms of Sentence Processing: Assigning Roles to Constituents of Sentences",
"authors": [
{
"first": "J",
"middle": [
"L"
],
"last": "Mcclelland",
"suffix": ""
},
{
"first": "A",
"middle": [
"H E"
],
"last": "Kawamoto ; D",
"suffix": ""
}
],
"year": 1986,
"venue": "Parallel Distributed Processing",
"volume": "",
"issue": "",
"pages": "272--325",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "McClelland, J. L., & A. H. Kawamoto. 1986. \"Mechanisms of Sentence Processing: Assigning Roles to Constituents of Sentences.\" In Parallel Distributed Processing, D.E. Rumelhart and J.L. McClelland, MIT Press, Cambridge, MA, 272-325.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Resolving Lexical Ambiguity in a Deterministic Parser",
"authors": [
{
"first": "R",
"middle": [],
"last": "Milne",
"suffix": ""
}
],
"year": 1986,
"venue": "Comp Ling",
"volume": "12",
"issue": "1",
"pages": "1--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Milne, R. 1986. Resolving Lexical Ambiguity in a Deter- ministic Parser. Comp Ling 12, No. 1, 1-12.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Learning Internal Representations by Error Propagation",
"authors": [
{
"first": "D",
"middle": [
"E"
],
"last": "Rumelhart",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Hintoh",
"suffix": ""
},
{
"first": "R",
"middle": [
"J E"
],
"last": "Williams ; D",
"suffix": ""
}
],
"year": 1986,
"venue": "Parallel Distributed Processing",
"volume": "",
"issue": "",
"pages": "318--364",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rumelhart, D. E., G. Hintoh, and R.J. Williams. 1986. \"Learning Internal Representations by Error Propagation.\" In Parallel Distributed Processing, D.E. Rumelhart and J.L. McCMland, MIT Press, Cambridge, MA, 318-364.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Meta-Rules as a Basis for Processing Ill-Formed Input",
"authors": [
{
"first": "R",
"middle": [
"M"
],
"last": "Weischedel",
"suffix": ""
},
{
"first": "N",
"middle": [
"K"
],
"last": "Sondheimer",
"suffix": ""
}
],
"year": 1983,
"venue": "Am J Comp Ling",
"volume": "9",
"issue": "3-4",
"pages": "161--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weischedel, R.M. and N.K. Sondheimer. 1983. Meta-Rules as a Basis for Processing Ill-Formed Input. Am J Comp Ling 9, No. 3-4, 161-177.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Beyond Regression: New Tools for Prediction and Analysis in Behavioral Science",
"authors": [
{
"first": "P",
"middle": [],
"last": "Werbos",
"suffix": ""
}
],
"year": 1974,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Werbos, P. 1974. \"Beyond Regression: New Tools for Pred- iction and Analysis in Behavioral Science.\" Ph.D. Thesis. Harvard University, Cambridge, Ma.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Grammar actions. The repertoire of rule actions is slightly different in the hybrid parser. Actions such as ACTIVATE and DEACTIVATE have been removed. The basic actions are: a) ATTACH as <node>: The first item in the buffer is attached through an intermediate descriptive <node> to the current active node. b) CREATE <type>: Generates a new node of type <type> and pushes it onto the parse stack as the current active node. c) DROP: Pops a node or an item off the top of tile stack and inserts it into the buffer in the first buffer position. The previous contents of the buffer is shifted back by one position. d) INSERT <item>: Inserts the designated item into the buffer in the first buffer position. The previous contents of the buffer is shifted back by one position. In the general form, only a small number of designated lexical items (you, to, be, wh-marker) can be inserted. The special form INSERT TRACE inserts an (unbounded) NP trace.e) LABEL <feature>: Adds designated feature to the first buffer item. f) PUSH: Pushes an item onto the stack for temporary storage whenever the parse stack is used as a movement stack. g) SWITCH: Exchanges the items in the first and second buffer positions.",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "Figure 2: System Overview",
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"content": "<table><tr><td colspan=\"4\">\"Natural Language can be parsed by a</td></tr><tr><td>mechanism</td><td>that</td><td>operates</td><td>'strictly</td></tr><tr><td colspan=\"4\">deterministically' in that it does not simulate a</td></tr><tr><td colspan=\"3\">nondeterministic machine...\"</td><td/></tr></table>",
"type_str": "table",
"num": null,
"text": "imposes important restrictions on NatLu'al Language Processing. It states (p.ll) that",
"html": null
},
"TABREF3": {
"content": "<table><tr><td>Sentence Form</td></tr></table>",
"type_str": "table",
"num": null,
"text": "Examples of Grammatical Sentences",
"html": null
},
"TABREF4": {
"content": "<table><tr><td>Sentence Form</td><td>Strength</td></tr><tr><td colspan=\"2\">(la) *John have should scheduled the meeting for Monday. 14.4</td></tr><tr><td colspan=\"2\">(lb) John should have scheduled the meeting for Monday. 56.9</td></tr><tr><td>(2a) *Ilas John schedule the meeting for Monday?</td><td>32.3</td></tr><tr><td>(2b) Itas John scheduled the meeting for Monday?</td><td/></tr></table>",
"type_str": "table",
"num": null,
"text": "Ungrammatical vs. Grammatical Sentences",
"html": null
},
"TABREF5": {
"content": "<table><tr><td>Sentence Form</td><td>Strength</td></tr><tr><td colspan=\"2\">(la) &lt;Will&gt; John schedule the meeting for Monday? 5.0</td></tr><tr><td colspan=\"2\">(lb) Will(aux) John schedule the meeting for Monday? 57.46</td></tr><tr><td>(2a) Tom &lt;will&gt; hit Mary.</td><td>29.8</td></tr><tr><td>(2b) Tom will(aux) hit Mary.</td><td>125.8</td></tr><tr><td>(3a) Tom &lt;hit&gt; Mary.</td><td/></tr></table>",
"type_str": "table",
"num": null,
"text": "Lexically Ambiguous vs. Unambiguous Sentences",
"html": null
}
}
}
}