{
"paper_id": "T75-2003",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:43:18.857299Z"
},
"title": "COMPUTATIONAL UNDERSTANDING",
"authors": [
{
"first": "Christopher",
"middle": [
"K"
],
"last": "Riesbeck",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "",
"pdf_parse": {
"paper_id": "T75-2003",
"_pdf_hash": "",
"abstract": [],
"body_text": [
{
"text": "The problem of computational understanding has often been broken into two sub-problems: how to syntactically analyze a natural language sentence and how to semantically interpret the results of the syntactic analysis. There are many reasons for this subdivision of the task, involving historical influences from American structural linguistics and the early \"knowledge-free\" approaches to Artificial Intelligence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "I. METHODOLOGICAL POSITION",
"sec_num": null
},
{
"text": "The sub-division has remained basic to much work in the area because syntactic analysis seems to be much more amenable to computational methods than semantic interpretation does, and thus more workers have been attracted developing syntactic analyzers first.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "I. METHODOLOGICAL POSITION",
"sec_num": null
},
{
"text": "It is my belief that this subdivision has hindered rather than helped workers in this area.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "I. METHODOLOGICAL POSITION",
"sec_num": null
},
{
"text": "It has led to much wasted effort on syntactic parsers as ends in themselves. This simple statement has several important implications about what a comprehension model should look like. Comprehension as a memory process implies a set of concerns very different from those that arose when natural language processing was looked at by linguistics. It implies that the answers involve the generation of simple mechanisms and large data bases. It implies that these mechanisms should either be or at least look like the mechanisms used for common-sense reasoning.",
"cite_spans": [],
"ref_spans": [
{
"start": 77,
"end": 465,
"text": "This simple statement has several important implications about what a comprehension model should look like. Comprehension as a memory process implies a set of concerns very different from those that arose when natural language processing was looked at by linguistics. It implies that the answers involve the generation of simple mechanisms and large data bases.",
"ref_id": null
}
],
"eq_spans": [],
"section": "I. METHODOLOGICAL POSITION",
"sec_num": null
},
{
"text": "It implies that the information in the data bases should be organized for usefulness --i.e., so that textual cues lead to the RAPID retrieval of ALL the RELEVANT information --rather than for uniformity --e.g., syntax in one place, semantics in another. A general idea of the way the analyzer worked can be obtained by following the flow of analysis of the simple sentence \"John gave Mary a beating.\" The chart on the next page gives an outline of the basic sequence of events that takes place in the analyzer as the sentence is read, one word at a time, from left to right.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "I. METHODOLOGICAL POSITION",
"sec_num": null
},
{
"text": "The column headed \"WORD READ\" indicates where the analyzer is in the sentence when something occurs. refers to whatever has just been read or constructed from the input stream.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "I. METHODOLOGICAL POSITION",
"sec_num": null
},
{
"text": "Step 0 is the initial state of the analyzer before the sentence is begun. The analyzer sets up one expectation which assumes that the first NP it sees is the subject of a verb that will come later.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "I. METHODOLOGICAL POSITION",
"sec_num": null
},
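The control structure just described, a set of active expectations matched against INPUT one word at a time, each consisting of a predicate and a program, can be sketched as follows. This is a hypothetical modern illustration, not the original analyzer (which was not written in Python); every name and signature here is invented.

```python
# Hypothetical sketch of an expectation-based analyzer loop.
# An expectation pairs a test (predicate on INPUT) with an action
# (a structure-building program run when the test is true).

class Expectation:
    def __init__(self, name, test, action):
        self.name = name        # e.g. "1 - is INPUT a NP?"
        self.test = test        # predicate applied to the current INPUT
        self.action = action    # program run when the expectation triggers

def analyze(words, initial_expectations, lexicon):
    """Read the sentence left to right, one word at a time."""
    active = list(initial_expectations)
    structure = {}                             # partially built interpretation
    for word in words:
        # expectations listed in the word's lexical entry become active
        active.extend(lexicon.get(word, []))
        for exp in list(active):
            if exp.test(word, structure):      # is the predicate true of INPUT?
                exp.action(word, structure)    # run its program ...
                active.remove(exp)             # ... and retire it
                break
    return structure

# Step 0/1 of the walkthrough: Expectation 1 takes the first NP
# (here crudely approximated as a capitalized word) to be the subject.
exp1 = Expectation("1 - is INPUT a NP?",
                   lambda w, s: w[0].isupper(),
                   lambda w, s: s.__setitem__("subject", w))
result = analyze(["John", "gave"], [exp1], {})
```

After reading "John", the sketch has chosen it as the subject of the verb to come, exactly as in Step 1 of the chart.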
{
"text": "Step I, the first word --\"John\" --is read.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "In",
"sec_num": null
},
{
"text": "Because \"John\" is a proper name, it is treated as a noun phrase and thus Expectation I is triggered. The program for Expectation I chooses \"John\" to be the subject of whatever verb will follow.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "In",
"sec_num": null
},
{
"text": "Expectation I is then removed from the set of active expectations. There were no expectations listed in the lexical entry for \"John\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "In",
"sec_num": null
},
{
"text": "In Step 2, \"gave\" is read.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "In",
"sec_num": null
},
{
"text": "The lexical entry for the root form \"give\" has three expectations listed an~ these are added to the set of active expectations. None of them are triggered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "In",
"sec_num": null
},
{
"text": "In Step 3, \"Mary\" is read.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "In",
"sec_num": null
},
{
"text": "\"Mary\" is a noun phrase referring to a human and so Expectation 2 is triggered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "In",
"sec_num": null
},
{
"text": "The program for Expectation 2 chooses \"Mary\" to be the recipient of the verb \"give\". Then Expectation 2 is removed. There were no expectatons in the lexical entry for \"Mary\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "In",
"sec_num": null
},
{
"text": "Step 4, \"a\" is read.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "In",
"sec_num": null
},
{
"text": "There is one expectation in the lexicon for \"a\". This is Expectation 5 which has a predicate that is always true. In its place it puts Expectation 6, which will be triggered when something in the input stream indicates that the noun phrase begun by \"a\" is complete.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "In",
"sec_num": null
},
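The save-and-replace behavior of Expectations 5 and 6 (suspending the current expectations until the noun phrase begun by "a" is complete, then building the NP and restoring them) might be sketched like this. This is an invented illustration under the assumptions above; the original system did not use these names.

```python
# Hedged sketch of the Expectation 5/6 mechanism from Steps 4-6.

def start_noun_phrase(active, buffer):
    """Expectation 5: save the current expectations, start collecting the NP."""
    saved = list(active)        # save the current set of expectations
    active.clear()
    buffer.clear()
    # Expectation 6: triggered when INPUT ends a noun phrase
    active.append("6 - does INPUT end a NP?")
    return saved

def end_noun_phrase(saved, buffer, determiner="a"):
    """Expectation 6: build the NP from words read since the determiner,
    then restore (reset) the saved expectation set."""
    np = " ".join([determiner] + buffer)
    return np, saved

active = ["3 - physical object?", "4 - action?"]   # expectations from "give"
buffer = []
saved = start_noun_phrase(active, buffer)          # Step 4: "a" is read
buffer.append("beating")                           # Step 5: nothing triggers
np, restored = end_noun_phrase(saved, buffer)      # Step 6: end of sentence
```

As in the chart, the result is the NP "a beating" as INPUT, with the expectations from "give" active again.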
{
"text": "In Step 5, \"beating\" is read. There are no lexical entries and \"beating\" is not a word that finishes a noun phrase, so nothing happens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "In",
"sec_num": null
},
{
"text": "Step 6, the end of the sentence is seen.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "In",
"sec_num": null
},
{
"text": "This does finish a noun phrase and so Expectation 6 is triggered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "In",
"sec_num": null
},
{
"text": "The program for Expectation 5 builds a noun phrase from the words that have been read since the \"a\" was seen.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "In",
"sec_num": null
},
{
"text": "It This expectation if triggered would fill the recipient of \"give\" with the object of the \"to\", as in sentences like \"John gave the book to Mary.\" Both of these expectations have the same purpose: to fill the recipient case of the verb \"give\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "In",
"sec_num": null
},
{
"text": "As long as no recipient is found there is a reason for keeping both expectations active.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "In",
"sec_num": null
},
{
"text": "And this implies that when the recipient case is finally filled, either by one of the expectations set up by \"give\" or by some expectation set up by some later word, then there is no longer any reason for keeping any of these expectations and they should all be removed. about what an expectation should produce, we can then make predictions about the sub-structures that the expectation builds with.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "In",
"sec_num": null
},
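The bookkeeping described here, dropping every expectation whose purpose slot has just been filled, can be illustrated with a small sketch. The representation (dictionaries with a "purpose" key) is hypothetical, not taken from the original analyzer.

```python
# Hedged sketch: each expectation records the slot it would fill ("purpose");
# filling a slot removes every expectation with that purpose.

def fill_slot(structure, slot, value, active):
    structure[slot] = value
    # no reason remains to keep any expectation whose purpose was this slot
    return [e for e in active if e["purpose"] != slot]

active = [
    {"name": "2 - does INPUT refer to a human?", "purpose": "recipient"},
    {"name": "to-expectation from 'give'",       "purpose": "recipient"},
    {"name": "4 - does INPUT refer to an action?", "purpose": "object"},
]
structure = {}
# "Mary" fills the recipient case; both recipient expectations vanish at once
active = fill_slot(structure, "recipient", "Mary", active)
```

Only the expectation aimed at the still-empty object slot survives, which is exactly the behavior argued for in the text.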
{
"text": "These new predictions can then influence the expectations producing those sub-structures, and so on.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "In",
"sec_num": null
},
{
"text": "example, consider the two expectations for \"give\" that were given above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For",
"sec_num": null
},
{
"text": "Suppose the predicate of first expectation looks for a syntactic object referring to an action --such as \"a sock\" in one interpretation of the sentence \"John gave Mary a sock.\" Since the second expectation is the one that fills in the syntactic object slot of \"give\", there is now a prediction that the second expectation will produce a noun phrase referring to an action.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For",
"sec_num": null
},
{
"text": "Since the second expectation fills the syntactic object of \"give\" with a noun phrase that it finds in the input stream, the monitor can predict that a noun phrase referring to an action will appear in the input stream.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For",
"sec_num": null
},
{
"text": "The effect of this prediction is that when words are seen in the input, the first thing that is looked for is to see if they can refer to an action.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For",
"sec_num": null
},
{
"text": "If so, then that sense of the word is taken immediately. Thus a word like \"sock\" is disambiguated immediately as a result of an expectation originally made about the syntactic object of \"give\". ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For",
"sec_num": null
}
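The disambiguation behavior described in this section, using a predicted conceptual type to choose a word sense the moment the word is read, might look like this in outline. The sense inventory and all names are invented for illustration; this is a sketch of the idea, not the original implementation.

```python
# Hypothetical sketch: a prediction that the object NP refers to an action
# lets an ambiguous word like "sock" be resolved as soon as it is read.

senses = {
    # word -> list of (sense, conceptual type) pairs
    "sock": [("garment", "physical-object"), ("hit", "action")],
}

def disambiguate(word, predicted_type):
    # first look for a sense matching the predicted conceptual type ...
    for sense, sense_type in senses.get(word, []):
        if sense_type == predicted_type:
            return sense
    # ... otherwise fall back to the word's first listed sense
    return senses[word][0][0] if word in senses else word

# "John gave Mary a sock." with the prediction derived from "give":
chosen = disambiguate("sock", "action")
```

With the "action" prediction active, "sock" is taken in its "hit" sense immediately; with a "physical-object" prediction it would be taken as the garment.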
],
"back_matter": [],
"bib_entries": {},
"ref_entries": {
"TABREF6": {
"type_str": "table",
"num": null,
"content": "<table><tr><td>prediction throughout expectations predictions about things like word senses. can be quickly disseminated a dependency network of and lead eventually to STEP WORD READ EXPECTATIONS ACTIVE For example, my thesis describes how the interpretation of the text \"John was mad at Mary. He gave her a sock,\" uses a conceptual prediction that \"John wants something bad to happen to Mary,\" which 0 none I -is INPUT a NP? I John I -is INPUT a follows from the first sentence, to choose the appropriate sense of the word \"sock\" in the second sentence the first time the'word is seen. This can be done because the NP? 2 gave 2 -does INPUT refer general conceptual prediction in interaction to a human? with the expectations in the lexical entry for \"give\" led to predictions about the nature of the syntactic object of \"give\", which in turn led to predictions about the 3 -does INPUT refer to a physical object? 4 -does INPUT refer</td><td colspan=\"2\">is we need to know what the predicate of the expectation is applied to. This information can be specified in the same way that the purpose of the expectation was: by giving a conceptual or syntactic slot. In this case, instead of giving the slot that EXPECTATIONS ACTION TAKEN TRIGGERED the expectation fills if triggered, we specify the slot that the predicate of the expectation is applied to. Then by knowing what slot an expectation looks at, we know what expectaions this expectation none none I choose \"John to be depends on. It depends on those expectations that fill this slot --i.e., that have a \"purpose slot\" equal to the \"lock at slot\" of the expectation. Let me summarize this discussion by giving the current format for specifying expectations: the subject of the verb to come none none</td><td colspan=\"2\">I I I I II I I l I l</td></tr><tr><td>words stream. In system --both that would other 3 Mary the original one and the new be seen in the input to an action? 
words, the analysis 2 -does INPUT refer to a human? version --as an approach to the computational understanding problem, exemplifies the general points made in the 3 -does INPUT refer to a physical object? 4 -does INPUT refer methodological portion of this paper. It to an action? demonstrates the feasibility of doing understanding using very simple mechanisms for manipulating small but flexible units of knowledge, without forcing the development 4 a 3 -does INPUT refer to a physical object? of independent syntactic analyzers or 4 -does INPUT refer semantic interpreters. These simple mechansisms allow for a direct attack on such problems as what information is absolutely necessary for understanding, how to an action? 5 -true 5 beating 6 -does INPUT end it is a NP? 6 period 6 -does INPUT end a NP? called for, REFERENCE Riesbeck, C. \"Computational Understanding: Analysis of Sentences and Context,\" Ph.D. Thesis, Computer Science Dept., 7 none 3 -does INPUT refer to a physical Stanford University, Stanford, CA. 1974. object? 4 -does INPUT refer to an action?</td><td>2 5 none 6 4</td><td>(NEED FOCUS TEST ACTION SIDE-EFFECTS) where NEED is the slot the expectation fills choose \"Mary\" to be the recipient triggered, FOCUS is the slot the expectation looks at, if TEST is the predicate portion of the of \"give\" expectation, ACTION is the structure building portion of the expectation, SIDE-EFFECTS are those programs that act upon other expectations and are not --at the moment --incorporated into save the current set of expectations and the network of dependencies and predictions. The analysis monitor is fairly content-independent. Its job is replace it with: 6 -does INPUT end a NP? none to take input, use it to access clusters of expectations, keep active those expectations that might fill slots that are still empty in partially-built structures, and keep track of the predictions/preferences that are induced by the dependency relationships between expectations. 
The actual knowledge about language and the world is still contained in the expectations, as was set INPUT to the NP \"a beating\" and reset the expectation set set the main action of the true in the original analyzer. This encoding of knowledge into small interpretation to the action named by INPUT; set the actor to pieces of programs that have both procedural and declarative aspects is of both practical and theoretical importance. In terms of implementing an AI model, I have found it much easier to specify procedural knowledge the subject (John) and set the object to the recipient (Mary)</td><td colspan=\"2\">I .I I l i i i 1 I D. I D I I I I</td></tr><tr><td/><td/><td>in small units of \"in Further it is much easier, as a programmer, situation X do Y\". to extend and modify procedures written in this form. It is also easier for a program</td><td colspan=\"2\">I I</td></tr><tr><td/><td/><td>to manipulate knowledge in this way. Theoretically, the expectation seems to me to be a viable representation for highly procedural format memory</td><td colspan=\"2\">I 1</td></tr><tr><td/><td>IS</td><td>knowledge. a theory of computational understanding that With it we can design explicitly does not have the forced division between syntactic and semantic analysis. Individual</td><td colspan=\"2\">i !</td></tr><tr><td>To expectation pass expectation would like to see, the information to the next about we know where the expectation is looking. from what need 16 That one an to</td><td>14</td><td>expectations syntactic or conceptual structures, but are usually concerned of the expectations are maintained in one with all large set. This allows for those important expectations that convert information about syntactic structures in semantic information and vice-versa. Thus information that originally started as an abstract conceptual</td><td>I I I</td><td>I I II</td></tr></table>",
"html": null,
"text": "and how a workably sized set of active information can be maintained."
}
}
}
}