|
{ |
|
"paper_id": "C92-1049", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T12:33:03.404847Z" |
|
}, |
|
"title": "USING LINGUISTIC, WORLD, AND CONTEXTUAL KNOWLEDGE IN A PLAN RECOGNITION MODEL OF DIALOGUE ~", |
|
"authors": [ |
|
{ |
|
"first": "Lynn", |
|
"middle": [], |
|
"last": "Lambert", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Delaware Newark", |
|
"location": { |
|
"postCode": "19716", |
|
"settlement": "Delaware", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "lambert@cia" |
|
}, |
|
{ |
|
"first": "Sandra", |
|
"middle": [], |
|
"last": "Carberry", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Delaware Newark", |
|
"location": { |
|
"postCode": "19716", |
|
"settlement": "Delaware", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "earberry@cis" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper presents a plan-based model of dialogue that combines world, linguistic, and contextual knowledge in order to recognize complex communicative actions such as expressing doubt. Linguistic knowledge suggests certain discourse acts, a speaker's beliefs, aud the strength of those beliefs; contextual knowledge suggests the most coherent continuation of the dialogue; and world knowledge provides evidence that the applicability conditions hold for those discourse acts that capture the relationship of the current utterance to the discourse as a whole.", |
|
"pdf_parse": { |
|
"paper_id": "C92-1049", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper presents a plan-based model of dialogue that combines world, linguistic, and contextual knowledge in order to recognize complex communicative actions such as expressing doubt. Linguistic knowledge suggests certain discourse acts, a speaker's beliefs, aud the strength of those beliefs; contextual knowledge suggests the most coherent continuation of the dialogue; and world knowledge provides evidence that the applicability conditions hold for those discourse acts that capture the relationship of the current utterance to the discourse as a whole.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Recognizing the roles that utterances play in a dialogue and how the utterances should be interpreted in the context of preceding dialogue is a crucial part of a robust model of understanding. In order to perform this recognition, our tripartite plan-based model of dialogue identifies not only domain and problem-solving actions but also discourse or communicative actions that determine how utterances relate to each other. For this communicative action recognition, we combine information gleaned from a variety of knowledge sources: contextual, linguistic, and world knowledge. The combination of these different knowledge sources enables the recognition of complex communicative actions such as expressing donbt. Although our tripartite model recognizes three different kinds of actions (domain, problem-solving, and discourse), the focus of this paper will be the recognition of discourse actions and how a combination of knowledge sonrces enables us to perform this recognition.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A number of researchers have contended that a coherent discourse consists of segments that are related to one another through some type of structuring relation [Gri75, MT83] or have modeled discourse based on the semantic relationship of individual clauses [Pol86] or groups of clauses [Rei78] . But all of these fail to capture the goal-oriented natnre of discoursc. Grosz and Sidner [GS86] argue that recognizing the structural relationships among the intentions underlying a discourse is necessary to identify discourse structure; although they do not provide the details of a computational mechanism for recognizing these relationships, they do argue convincingly that it requires multiple knowledge sources. We have developed a plan-based model of dialogue and have incorporated into our model Grosz and Sidner's claim that linguistic, contextual, and world knowledge should be combined in recognizing the role of an utterance in a discourse. .", |
|
"cite_spans": [ |
|
{ |
|
"start": 160, |
|
"end": 167, |
|
"text": "[Gri75,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 168, |
|
"end": 173, |
|
"text": "MT83]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 257, |
|
"end": 264, |
|
"text": "[Pol86]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 286, |
|
"end": 293, |
|
"text": "[Rei78]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 368, |
|
"end": 391, |
|
"text": "Grosz and Sidner [GS86]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Our Tripartite Model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We contend that at least three kinds of actions, domain, problem-solving, and discourse, should be captured by a model of task-oriented dialogue. Many researchers [A1179, Car87, LAB7, Sid85, GS86] have demonstrated that recognition of domain actions endows a system with the ability to successfnlly address many important and difficult problems in understanding. Several researchers have also investigated the recognition of problem-solving actions [LA87, Ram9I, Wil81] . For example, if a user wants to earn a degree, the user might perform problem-solving actions of 1) evaluating alternative degrees (i.e., the user might decide whether a BS or a BA is more desirable), 2) instantiating the type of degree to be earned, and 3) building a plan for performing the domain action of earning the selected degree. Carberry [Car89] points out the importance of recognizing discourse actions, the communicative actions that speakers perform iu making an utterance (e.g., asking a question, providing baekgrouml information, or expressing surprise). Discourse actions provide expectations for subsequent utterances (e.g., when a question is asked, one expects the question to be accepted and eventually answered). Recognition of some discourse actions such ms Give-Background also explains the purpose of an utterance and bow it should be interpreted; rather than just a statement of fact, the utterance providing background information should be used by the system to fill in necessary background knowledge in order to fully understand related utterantes.", |
|
"cite_spans": [ |
|
{ |
|
"start": 163, |
|
"end": 196, |
|
"text": "[A1179, Car87, LAB7, Sid85, GS86]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 449, |
|
"end": 455, |
|
"text": "[LA87,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 456, |
|
"end": 462, |
|
"text": "Ram9I,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 463, |
|
"end": 469, |
|
"text": "Wil81]", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 811, |
|
"end": 827, |
|
"text": "Carberry [Car89]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Our Tripartite Model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We capture these three types of actions in separate levels of our disconrse model, which we refer to as a DM [LC91] . Within each of these levels, actions may contribute to other actions on the same level; for example, on the discourse level, providing background data, asking a question, and answering a question all can be part of obtaining information. Thus, actions at each level form a tree structure in which each node represents an action that a participant is performing and the children of a node represcnt actions pursued in order to perform the parent action, ttowever, discourse, problem-solving, and domain actions are not completely independent of one another; discourse actions may be executed in order to obtain the information necessary for performing a problem-solving action and problem-solving actions may bc executed in order to construct a domain plan. Our model captures this interaction by allowing links between the actions at adjacent levels. Figure I contains a sample DM derived from two utterances, and section 3 describes how the DM in Figure 1 is constructed.", |
|
"cite_spans": [ |
|
{ |
|
"start": 109, |
|
"end": 115, |
|
"text": "[LC91]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 969, |
|
"end": 977, |
|
"text": "Figure I", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1066, |
|
"end": 1074, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Our Tripartite Model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The following dialogue will be used to demonstrate why our system needs world, contextual, and linguistic knowledge, and to show }low the combination of these different knowledge sources enables the system to recognize implicit acceptance of previously commuuicated propositions and to identify the role of utterances that cannot be determined from one or two knowledge sources alone. The system is playing the role of $2,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Different Kinds of Knowledge", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "(1) SI: Who is teaching CS3607 (2) $2: Dr, Smith is teaching CS360.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Different Kinds of Knowledge", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "(3) SI:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Different Kinds of Knowledge", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Bul isn't CS360 an undergrad course? (d) $2: Yes. (5)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Different Kinds of Knowledge", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Dr. Smith leaches gradnate and undergrad courses. (6) Sl:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Different Kinds of Knowledge", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Who handles the CS'360 lab?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Different Kinds of Knowledge", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In order to recognize how actions are intended to contribute to otlmr actions, our system needs knowledge abont how to perform actions. This world knowledge is I)rovided in the form of a library of discourse, problem-solving, and domain recipes [Pol90]. Our representation of a recipe includes a header giving the natoe of tile recipe and the action that it accomplishes, preconditions, applicability conditions, constraints, a body, effects, and a goal. Applicability conditions represent conditions that nmst be satisfied ill order for the recipe to be reasonable to apply ill tile given situation whereas constraints limit the allowable instantialion of variables in each of the components of a recipe [LA87, Car87] . Figure 2 contains a sample discourse recipe.", |
|
"cite_spans": [ |
|
{ |
|
"start": 705, |
|
"end": 711, |
|
"text": "[LA87,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 712, |
|
"end": 718, |
|
"text": "Car87]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 721, |
|
"end": 729, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "World Knowledge", |
|
"sec_num": "3.1" |
|
}, |
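{

"text": "As a minimal illustration of this recipe representation, a recipe can be modeled as a record holding the action header, applicability conditions, constraints, preconditions, body, effects, and goal. The sketch below is ours and is only illustrative; the field names and types are not taken from the paper's implementation.\n\nfrom dataclasses import dataclass, field\n\n@dataclass\nclass Recipe:\n    # Header: the action this recipe accomplishes,\n    # e.g. 'Express-Doubt(_agent1, _agent2, _prop1, _prop2, _rule)'\n    action: str\n    # Conditions that must plausibly hold for the recipe to be reasonable to apply\n    applicability_conditions: list = field(default_factory=list)\n    # Constraints limit the allowable instantiation of variables\n    constraints: list = field(default_factory=list)\n    preconditions: list = field(default_factory=list)\n    # Subactions performed in order to accomplish the header action\n    body: list = field(default_factory=list)\n    effects: list = field(default_factory=list)\n    goal: str = ''",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "World Knowledge",

"sec_num": "3.1"

},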
|
{ |
|
"text": "Given the senlalltic representation of a new utterance, the system mnst assimilate tim utterance and produce an updated dialogue model (DM). Plan inference rules [A1179] and constraint satisfaction [LA87, Car87] suggest chains of higher level actions that an utterance may in! part of, and foetlsing heuristics [Car87, SidSl] order these inference paths according to coherence. For exanlple, the semantic rel)resentation of (1) is:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "World Knowledge", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Surface WH-Question(fil, $2, _fac, Teaches(_fac, csa@))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "World Knowledge", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "From this surface question, t)lan inference rules sug gest that (1) is executing a Rcquest action and that this Request action is part of an Ask-Re] action which in turn is part of an Obtain-lnfl~-Re] action since each of these actions is part of the body of a recipe that performs the higher level action As the system infers these actions, tile system also tentatively ascribes certain beliet~ that must hold in order fl~r the agent to be pursuing these discourse actions l\"or example, m order for (1) to lw part of ;m Obtaln-ln]o-Rcf action, ,ql must not know the answer to thv questh)n; if SI knew who was teaching CS360, this utterance might be part of a 7'est-L+slcncr action instead. These requisite beliefs are captured ill tile applicability conditions of discourse recipes. As tile system inDrs actions, it must be plausibh~ that the applicability conditions are ACRES DE COLING-92, NANTES. 23-28 AO~r 1992 3 1 1 PRec. OF COLING-92, NANTES, AUG. 23-28. 1992", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "World Knowledge", |
|
"sec_num": "3.1" |
|
}, |
|
{

"text": "Figure 2: Sample Discourse Recipe. Discourse Recipe-C1: {_agent1 expresses doubt to _agent2 about _prop1 because _agent1 believes _prop2 to be true} Action: Express-Doubt(_agent1, _agent2, _prop1, _prop2, _rule) App Cond: believe(_agent1, _prop2); believe(_agent1, believe(_agent2, _prop1)); believe(_agent1, _rule); believe(_agent1, ((_prop2 \u2227 _rule) \u21d2 \u00ac_prop1)); in-focus(_prop1) Body: Convey-Uncertain-Belief(_agent1, _agent2, _prop2); Address-Q-Acceptance(_agent2, _agent1, _prop2) Effects: believe(_agent2, \u00acbelieve(_agent1, _prop1)); believe(_agent2, want(_agent1, Resolve-Conflict(_agent2, _agent1, _prop1, _prop2))) Goal: want(_agent2, Resolve-Conflict(_agent2, _agent1, _prop1, _prop2))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "World Knowledge",

"sec_num": "3.1"

},
|
{ |
|
"text": "want(_agent2, Resolve-Conflict(_agent2, _agentl, _propl, _prop2)) Figure 2 : Sample Discourse Recipe satisfied; otherwise, the inference is rejected. So, another part of our system's world knowledge is a model of the speaker's beliefs. Since our investigation of naturally occurring dialogues indicates that people express shades of belief in propositions and expect others to recognize these beliefs, we maintain a multi-strength model of beliefs to represent an agent's varying degrees of belief in a proposition, ranging from having no idea whether a proposition is true (or false), to being certain that a proposition is true (or false). We also maintain a model of a stereotypical user whose beliefs may be attributed to the speaker as appropriate during tim course of the conversation.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 66, |
|
"end": 74, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "World Knowledge", |
|
"sec_num": "3.1" |
|
}, |
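{

"text": "A minimal sketch of such a multi-strength belief store follows; this encoding is our own illustration, and the paper's actual strength scale and representation are described in [LC92], not here.\n\nfrom enum import IntEnum\n\nclass Strength(IntEnum):\n    # Illustrative scale only: from certainty that a proposition is false\n    # up to certainty that it is true, with 'no idea' in the middle.\n    CERTAIN_FALSE = -3\n    STRONG_FALSE = -2\n    WEAK_FALSE = -1\n    NO_IDEA = 0\n    WEAK_TRUE = 1\n    STRONG_TRUE = 2\n    CERTAIN_TRUE = 3\n\nclass BeliefModel:\n    def __init__(self, stereotype=None):\n        self.beliefs = {}                    # proposition -> Strength\n        self.stereotype = stereotype or {}   # default beliefs of a stereotypical user\n\n    def strength(self, prop):\n        # Fall back to the stereotype model when nothing is known about the speaker.\n        return self.beliefs.get(prop, self.stereotype.get(prop, Strength.NO_IDEA))\n\n    def ascribe(self, prop, strength):\n        self.beliefs[prop] = strength",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "World Knowledge",

"sec_num": "3.1"

},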
|
{ |
|
"text": "After the system has inferred actions on the discourse level, it must identify how these relate to problem-solving and domain actions. This is accomplished by chaining between actions on adjacent levels of the DM. For example, once the system infers that (1) contributes to an Obtain-lnfo-Ref action on the discourse level, plan inference rules suggest that S1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "World Knowledge", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "wants the goal of the Obtain-lnfo-Ref action, namely knowref(S1, _fac, teaches(_fac, CS360)). Since knowing possible fillers for a parameter is a precondition to instantiating that parameter, the system infers that S1 wants to know who is teaching CS360 in order to instantiate the instructor parameter in an action to learn the material for that course; that is, the system infers that S1 wants lnstantiate- $2, _fac, CS360, _fac) ). Since instautiating one paranteter m an action is part of a plan to instantiate all of the parameters in that action, the system infers that St wants lnslanlialc-Vars(S1, $2, Learn-Matenal(Sl, CS360, _fac)), and since this latter action is part of a recipe for building a plan, the system then infers the problem-solving action, Build-Plan(S1, $2, Take-Course(S1, CS360)). These instantiate-Single-Var, lnstantiate-Vars, and Build-Plan actions are entered into the problem-solving level of the DM. Building a plan to perform some domain action is a precondition to doing that action (assuming agents are acting intentionally), so the system infers that Sl wants Take-Course (Sl, CS360) , and this domain action is entered into the domain level of the DM.", |
|
"cite_spans": [ |
|
{ |
|
"start": 409, |
|
"end": 412, |
|
"text": "$2,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 413, |
|
"end": 418, |
|
"text": "_fac,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 419, |
|
"end": 425, |
|
"text": "CS360,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 426, |
|
"end": 431, |
|
"text": "_fac)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1109, |
|
"end": 1113, |
|
"text": "(Sl,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1114, |
|
"end": 1120, |
|
"text": "CS360)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "World Knowledge", |
|
"sec_num": "3.1" |
|
}, |
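{

"text": "The chaining itself can be pictured as repeatedly asking which higher-level action the current action could serve, either because it appears in that action's body or because its goal establishes one of that action's preconditions. The toy sketch below is ours; the helper names and dictionary layout are illustrative, not the paper's.\n\ndef chain_up(action, recipes, goal_of):\n    # recipes: higher-level action name -> {'body': [...], 'preconditions': [...]}\n    # goal_of: action name -> the goal proposition that the action achieves\n    # A step is licensed when `action` appears in a parent recipe's body (subaction)\n    # or when its goal establishes one of the parent's preconditions (enablement).\n    # Assumes the recipe library is acyclic.\n    paths = []\n    for parent, recipe in recipes.items():\n        if action in recipe['body'] or goal_of.get(action) in recipe['preconditions']:\n            for tail in chain_up(parent, recipes, goal_of) or [[parent]]:\n                paths.append([action] + tail)\n    return paths\n\n# Example mirroring the text: Obtain-Info-Ref enables Instantiate-Single-Var, which is part\n# of Instantiate-Vars, which is part of Build-Plan, which in turn enables Take-Course.\nrecipes = {\n    'Instantiate-Single-Var': {'body': [], 'preconditions': ['knowref(S1, _fac)']},\n    'Instantiate-Vars': {'body': ['Instantiate-Single-Var'], 'preconditions': []},\n    'Build-Plan': {'body': ['Instantiate-Vars'], 'preconditions': []},\n    'Take-Course': {'body': [], 'preconditions': ['plan-built(Take-Course)']},\n}\ngoal_of = {'Obtain-Info-Ref': 'knowref(S1, _fac)', 'Build-Plan': 'plan-built(Take-Course)'}\nprint(chain_up('Obtain-Info-Ref', recipes, goal_of))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "World Knowledge",

"sec_num": "3.1"

},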
|
{ |
|
"text": "Once the system has assimilated an utterance into the DM, it nmst update its belief nmdel for the speaker to reflect the beliefs that were tentatively ascribed to the speaker during the plan inference process. These beliefs can then be used in understanding subsequent utterances.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "World Knowledge", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Each new utterance must be interpreted with respect to the existing dialogue context [GS86, Car87] . This process requires contextual knowledge, which our system captures with the use of the DM, a focus of attention which designates the most salient .action on each level of the DM, and focusing heuristics which suggest the most coherent continuation of the dialogue. For example, on the discourse level, utterances that contribute to the currently focused action are more expected, and thus more coherent, than utterances that contribute to an ancestor that is further removed from the focus of attention. This contextual knowledge creates expectations that help determine how to interpret new utterances. For example, after a qnestion is asked, the context suggests that acceptance of the question will be pursued (i.e., the listener will ensure that the question is understood, justified, and answerable); then it is expected that the question will be answered.", |
|
"cite_spans": [ |
|
{ |
|
"start": 85, |
|
"end": 91, |
|
"text": "[GS86,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 92, |
|
"end": 98, |
|
"text": "Car87]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contextual Knowledge", |
|
"sec_num": null |
|
}, |
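{

"text": "One simple way to picture how the focus of attention orders candidate interpretations is to rank candidate attachment points on the discourse level by their distance from the currently focused action; the sketch below is our own illustration, not the actual heuristics of [Car87, Sid81].\n\ndef coherence_rank(candidate, focus, parent_of):\n    # Distance (in ancestor links) from the current focus of attention to the\n    # candidate action; smaller distances are more expected, hence more coherent.\n    # parent_of: action -> its parent action in the discourse-level tree (or None).\n    distance, node = 0, focus\n    while node is not None:\n        if node == candidate:\n            return distance\n        node = parent_of.get(node)\n        distance += 1\n    return float('inf')   # candidate is not on the path of ancestors of the focus\n\n# Candidates closer to the focused action are preferred:\n# sorted(candidates, key=lambda c: coherence_rank(c, focus, parent_of))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Contextual Knowledge",

"sec_num": null

},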
|
{ |
|
"text": "Because our system does not yet include a planner, we incorporate S2's utterances into the DM in the same way as Sl's have been. So, continuing with our example dialogue, we express the semantic representation of (2) as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contextual Knowledge", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Surface-lnform(S2, $1~ Teacites(Dr. Smith, CS360))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contextual Knowledge", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Plan inference rules suggest that the surface inform might be part of a Tell action which might be part of an Inform action 2 which might he part of an Answer-Ref action which might in turn be part of an Obtain-lnfo-Rcf action since each of these actions is part of the body of a recipe that performs the higher le4el action. Contextual knowledge is then used to determine how to relate (2) to the previous dialogue. Focusing heuristics suggest that the best interpretation of (2) is that it is part of a plan for performing the Obtain-lnfo-Ref action that was an ancestor of the Request action of utterance (1). No new problem-solving or domain actious are inferred. Figure 1 gives the DM that our system builds from utterances (l) and (2) with the current focus of 2We differentiate between telling a listener some string of words said informing a listener of a proposition, hi order to inform a listener of some proposition, the listener must first understaald the content of the proposition; tiffs is the goal of the Tell action. The goal of the ln]orm action is that the listener believe the COlmnunicated proposition.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 668, |
|
"end": 676, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Contextual Knowledge", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "AcrEs DE COL1NG-92, NANTES, 23-28 AOI~T 1992 3 1 2 PROC. OF COLING-92, NANTES, AUG. 23-28, 1992 attention on each level marked with an asterisk. Thus, we have seen that both world knowledge (consisting of a plan library and beliefs about the speaker's beliefs) and contextual knowledge (consisting of the existing DM, the current focus of attention, and focusing heuristics) are required in order to determine what actions a speaker is performing and how these actions relate to the l)revious dialogue.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contextual Knowledge", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Linguistic knowledge must also be taken into account in recognizing the actions that a speaker is performing. This knowledge inehldes the surface form of an utterance and clue words. The surface form of an utterance is one way that a speaker communicates varying degrees of belief in a proposition. Consider, for example, the following two utterances:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Knowledge", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "(7) Is Dr. Smith teaching CS3107 (8) Isn't Dr. Smith teaching CS3107", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Knowledge", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The form of the utterance in (8) indicates an uncertain belief in tim proposition that Dr. Smith is teaching CS310; utterance (7), however, conveys only a lack of knowledge about the proposition. Similarly (3) is not merely a Yes/No question; instead, this surface form conveys that S1 thinks that CS360 is an undergrad course, but is not certain of it. Our system uses the form of the utterance to recognize the strength of a speaker's beliefs; these beliefs are then used to determine whether the applicability conditions for the suggested discourse actions are satisfied.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Knowledge", |
|
"sec_num": "3.3" |
|
}, |
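{

"text": "As a small illustration, the mapping from surface form to an initial belief ascription might look like the following sketch; the labels and the treatment of plain assertions are our own assumptions, not the paper's.\n\ndef belief_from_surface_form(surface_act):\n    # Map the surface form of an utterance to the strength of belief it conveys\n    # about its proposition, as in utterances (7), (8), and (3) in the text.\n    mapping = {\n        'Surface-YN-Question': 'no-idea',             # (7) conveys only lack of knowledge\n        'Surface-Neg-YN-Question': 'uncertain-true',  # (8) and (3) convey uncertain belief\n        'Surface-Inform': 'certain-true',             # assumed here: an assertion conveys belief\n    }\n    return mapping.get(surface_act, 'no-idea')",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Linguistic Knowledge",

"sec_num": "3.3"

},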
|
{ |
|
"text": "Tile second type of linguistic information that we use is clue words. These lingtfistic clues often snggest what type of discourse action the speaker might be pursuing [LAB7, Hin89] . a We use these linguistic clues as evidence for discourse actions. For example, utterance (3) contains the clue word \"but,\" which suggests a non-acceptance discourse action. Thus, the linguistic information that our system captures includes knowledge about the surface form of an utterance and about clue words. 4", |
|
"cite_spans": [ |
|
{ |
|
"start": 168, |
|
"end": 174, |
|
"text": "[LAB7,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 175, |
|
"end": 181, |
|
"text": "Hin89]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Knowledge", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Discourse Act Recognition", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combining Knowledge for", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Because there are nlany ways that an utterance can continue a dialogue and because the correct interpretation is not always the one most strongly suggested by plan chaining and focusing heuristics, evidence from other knowledge sources is needed to identify the intended relationship between an utterance and the existing dialogue context. For example, the interpretation of (3) most strongly suggested by focusing heuristics is that of requesting clarification of (2) in order to nnderstand it. Intuitively, however, (3) seems to be e\u00d7pressing doubt at S2's answer, not trying to understand 3The surface form of some utterances may also serve this purpose.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combining Knowledge for", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "4The utter~ame itself in fact contains more ilfformation than just clue words, surface fornl, and propositional content, but our system uses only these three. Part of our future work includes incorporating other linguistic information such as tense.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combining Knowledge for", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "it. Plan inference rules and focasing heuristics then are not sufficient to determine what role (3) is serving in the discourse. More knowledge is needed from other sources.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combining Knowledge for", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Intuitively, for each new utterance that relates to the previous dialogue on the discourse level, 5 there is some discourse action that captures this relationship and serves to describe the role of the new utterance with respect to the preceding dialogue. Since many such relationships are plausible (e.g., (3) could be interpreted as expressing doubt or as indicating a lack of understanding), we contend that evidence is required for recognizing the discourse action that identifies the correct relationship.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combining Knowledge for", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In our model, this discourse action will he an element of an inference path from the utterance to some action in the current tree structure on the discourse level. Furthermore, it will introduce new parameters which must be instantiated based on values from the DM in order for the path t \u00b0 terminate with an action already in the DM. By replacing these new parameters with values from the DM that are not present in the semantic representation of the utterance, wc are hypothesizing a relationship between the new utterance and the existing discourse level of tile DM. Thus, this action serves the aforementioned role of capturing how the new utterance relates to the current dialogue context. We will refer to such actions as e-actions since we contend that there must be evidence to support the inference of these actions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combining Knowledge for", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "For example, suppose that the semantic representation of an utterance such as 8 and propagating the resultant substitutions back down the inference chain, will result in _propt anti _vu:l.e being instantiated based on information from the DM. In particular, _propl will be instantiated with the proposition from the Inform action and _vu:l_e will be constrained to a rule that SI might ttfink suggests that _prop2 and _prt~pl are inconsistent. llowever, it is not enough that these instantiations plausibly satisfy the applicability conditions for the Express-Doubt action. For example, consider tile following dialogue:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combining Knowledge for", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "5 Solne utterances, sllch a~ (1) and 6 While it is at least plausible that, in the mind of the speaker, there is a rule which makes winning a teaching award inconsistent with teaching CS360, interpreting (11) as expressing doubt at (10) seems incorrect. Thus, to prevent such erroneous interpretations, we contend that evidence is needed to recognize discourse actions that capture the relationship between a new utterance and the existing dialogue context. In our recognition algorithm, evidence may take two forms: 1) world knowledge indicating that tile applicability conditions for an e-action are satisfied, and 2) linguistic evidence from clue words suggesting a partitular discourse action. If the system has evidence that the applicability conditions of an e-action are satisfied, tilen the system will use the knowledge as evidence that this may be the discourse action that the speaker is pursuing. On the other hand, if there is sufficient linguistic knowledge suggesting a particular discourse action, then these applicability conditions should be attributed to the speaker, as long as they are plausible (i.e., if there is nothing in the system's model of the speaker's beliefs to suggest that the applicability conditions are not satisfied). So, if the clue word but is used, then a non-acceptance discourse action such as expressing doubt should be easier to recognize (i.e., silould require less evidence that the applicability conditions hold) than if the clue word is not present.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combining Knowledge for", |
|
"sec_num": "4" |
|
}, |
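{

"text": "This evidence test can be paraphrased in code as follows; the predicate names are ours and only illustrate the two forms of evidence described above.\n\ndef e_action_licensed(applicability_conditions, beliefs, clue_word_suggests_act):\n    # beliefs.holds(cond)     -> True if the system has evidence the condition is satisfied\n    # beliefs.plausible(cond) -> True if nothing in the belief model contradicts the condition\n    # With a supporting clue word (e.g. 'but' for non-acceptance), plausibility of the\n    # applicability conditions is enough; without one, positive evidence is required.\n    if clue_word_suggests_act:\n        return all(beliefs.plausible(c) for c in applicability_conditions)\n    return all(beliefs.holds(c) for c in applicability_conditions)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Combining Knowledge for",

"sec_num": "4"

},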
|
{ |
|
"text": "For example, consider the following, in which there are no clue words in (14a) and (14b), but (14~) contains the clue word but.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combining Knowledge for", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Sl: Who is teaching CS3607 (13) $2: Dr. Smith. (14,) SI: Isn't Dr. Smith on sabbatical next year? (14b) SI: Isn't Dr. Smith the professor who won the teaching award last year? (14c) SI: But isn't Dr. Smith the professor who won the teaching award last year?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(12)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Chaining from (14~) could produce an inference path containing an Express-Doubt discourse action. The surface form of (14,) establishes that S1 has an uncertain belief that Dr. Smith is on sabbatical, the first applicability condition for an Express-Doubt action (see Figure 2 ).s The effect of the question/answer pair in (12) and (13) is that SI believes that $2 thinks that Dr. Smith teaches CS360, tile second applicability condition in all Express-Doubt action. Tile stereotype mmhl[e contains the belief that professors on sabbatical do not teach, so the system can ascribe to SI the following belief: Vx,y (course(y) A professor(x) A on-sabbatical(x)) ~ ~teachcs (x, y) . This belief satisfies tbe third applicability condition (the belief about a rule), and this belief, along with the belief that Dr. Smith is on sabbatical, implies that Dr. Smith is not teacbmg CS360, the fourth applicability condition. Finally, a check of the DM indicates that the proposition eThe first applicability condition is actually an uncertain belief.", |
|
"cite_spans": [ |
|
{ |
|
"start": 670, |
|
"end": 676, |
|
"text": "(x, y)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 268, |
|
"end": 276, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "(12)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "[LC92] describes our fortnal belief model and how belief strengths are represented in recipes and in our belief model. that Dr. Smith teaches CS360 is in focus, the last applicability condition. Therefore, since there is evidence for the applicability conditions of the Express-Doubt action, and since focusing heuristics suggest that this is a coherent discourse action (although not the most preferred), (14a) is recognized as expressing doubt at S2's answer, that Dr. Smith is teaching CS360.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(12)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "If the dialogue included (14b) or (14c) instead of (14a), some of the same evidence would exist. Ill both (14b) and (14c), the system believes l) that S1 believes that Dr. Smitll won the teaching award last year (though S1 is not sure of this), 2) that S1 believes tlmt $2 thinks that Dr. Smith is teaching CS360, and 3) that the proposition that Dr. Smith teaches CS360 is in focus. Thus, some of the applicability conditions for expressing doubt at Dr. Smith teaching CS360 hold. However, tim system has no knowledge for tile crucial implication that determines how this utterance relates to tile preceding dialogue; the system has no evidence that S1 believes that winning a teaching award implies that Dr. Smith is not teaching CS360. Therefore, (14b) is not interpreted as an expression of doubt at the response in (13), and other discourse acts are considered. The presence of the clue word in (14e), however, strongly suggests an Express-Doubt discourse act and thus less evidence is needed to recognize it; that is, the system does not need explicit evidence that S1 holds the requisite beliefs but only needs to be able to plansibly ascribe them to SI. Therefore, since mlr model call plausibly ascribe to S1 belief in the implication that Dr. Smith winning a teaching award implies that Dr. Smith is not teaching CS360, the system will recognize (14~) and (14c) as expressions of doubt, using evidence from linguistic knowledge for (14~) aml from world knowledge for (14o).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(12)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Thus, we want our system to use linguistic knowledge when present, world knowledge when present, and both when possible. Onr algorithm for processing is tim following: from the semantic representation of the utterance, infer sequences of actions (inference paths) using plan inference roles. If the applicability conditions for any of these actions are implausible, reject the inference. For actions which are not e-actions, tentatively ascribe the beliefs in the applicability comlitions. For actions that are e-actions, determine how much evidence is available for tile action. If there is more than one e-action for which there is evidence from both linguistic and world knowledge, then choose the inference path closest to the focus of attention for which there is multiple evidence. If linguistic or world evidence is available for nmre than one e-action, then choose the inference path closest to the focus of attention with this single supporting piece of evidence. If there is no evidence for any e-action, then choose tim inference path which contains no e-actions and is closest to the focus of attention.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(12)", |
|
"sec_num": null |
|
}, |
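{

"text": "The path-selection step of this algorithm can be summarized as in the sketch below, under a data layout of our own choosing in which each inference path records whether it contains an e-action, how many knowledge sources (linguistic, world) support that e-action, and how far its attachment point is from the focus of attention.\n\ndef select_path(paths):\n    # Paths whose applicability conditions are implausible are assumed already rejected.\n    # 1) Prefer paths whose e-action has evidence from both linguistic and world knowledge,\n    # 2) then paths with a single supporting piece of evidence,\n    # 3) then paths containing no e-action; within each tier, choose the path closest\n    #    to the focus of attention.\n    doubly_supported = [p for p in paths if p['e_evidence'] >= 2]\n    if doubly_supported:\n        return min(doubly_supported, key=lambda p: p['focus_distance'])\n    singly_supported = [p for p in paths if p['e_evidence'] == 1]\n    if singly_supported:\n        return min(singly_supported, key=lambda p: p['focus_distance'])\n    plain = [p for p in paths if not p['has_e_action']]\n    return min(plain, key=lambda p: p['focus_distance']) if plain else None",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "(12)",

"sec_num": null

},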
|
{ |
|
"text": "We return briefly to the dialogue given in Section 3 to ilhlstrate how our process model uses the above algorithm to recognize complex communicative actions such as expressing doubt as well as implicit acceptance of previous utterances.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Completing the Example", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Utterances 1and 2 Tile semantic representation of(3) is: Surface-Neg-YN-Question(Sl, $2, undergrad-eonrse(CS360)). tlowever, since this is an e-action, evidence is necessary to infer this action. The linguistic clue provides one piece of evidence to support this inference, but the system also looks for evidence from world knowledge. It therefore cheeks to see if it is plausible that S1 's belief about CS360 being an undergrad course might mq)ty t, hat Dr. Smith is not teaching CS360. Although tbe system has no such belief explicitly represented in its model of S 1, there is also no evidence to suggest that S 1 does not helieve that this implication might hold. Since there is no other e-action for which there is evidence and since tim applicability conditions for an Express-Doubt action are plausible, the system infers that the Convey-Uncertain-Belief action may be an action in an Express-Doubt action which may be an action in an Address-Unacceptance discourse action which may be an action in an Address-Believability discourse action which may be an action in an lnJorm action. Focusing heuristics suggest that this hfform is the same Inform that (2) is pursuing. Tiros, (3) is interpreted ms not accet)ting (2) by expressing doubt at it. v lnferencing for the rcmainder of the dialogue is similar to the first three utterances.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Completing the Example", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The int~rence paths which result from utterances (4) and (5) are shown in Fig~lre 3 above the appropriate numbers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ilnstantiate-V~", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Finally, although the system initially a~tentpts to closely relate utterance (6) to other utterances at the discourse level, there is no evidence for any e-action that might link this utterancc to the existing context on tile discourse level. There arc no clue words to suggest a relationship, and there arc no e-actions for which there is evidence that the applicability conditions hoht. Therefore, a completely new discourse action of obtaining information is inferred, s Ttfis initiation of a new discourse action indicates implicit acceptance of tile previous discourse actions since if S1 did not accept $2'.~ answer, $1 would be required to indicate non-acceptance [LC92] . The DM for the entire dialogue is shown in 1,'igure 3 (for space rea.sons, only tile action names are shown). Thus our model recognizes both acceptance and non-acceptance of communicated 7Although not discussed, there must also be m~ c-actlon that relates (2) to tile DM. This action is tile Answer-J~e]; evidence for the An.~wer-Re] action is fi'om world knowledge, whic:h indicates that the applicat)ility conditions for this Answer-Re] action hold. htferencing of (2) is then similar to that of (3).", |
|
"cite_spans": [ |
|
{ |
|
"start": 671, |
|
"end": 677, |
|
"text": "[LC92]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ilnstantiate-V~", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "8Further discussion may determine that SI is trying to continue the negotiation dialogue; however, that has not been communicated by this uttera~lce. We a~'e ittvestigatlng how our systern might modify the I)M that it has built if it discovers later that the structure built previously is incorrect.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ilnstantiate-V~", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "ACTES DE COLING-92, NANTES, 23-28 AOt~Tr 1992 3 1 5 PROC. OF COLING-92, NANTES, AUG. [23] [24] [25] [26] [27] [28] 1992 propositions, including acceptance after negotiation of [GS86] conflicting beliefs. This example has illustrated: 1) how the structure of a discourse is identified in our three level model, how our model recognizes the relationship of the cur-", |
|
"cite_spans": [ |
|
{ |
|
"start": 85, |
|
"end": 89, |
|
"text": "[23]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 90, |
|
"end": 94, |
|
"text": "[24]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 95, |
|
"end": 99, |
|
"text": "[25]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 100, |
|
"end": 104, |
|
"text": "[26]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 105, |
|
"end": 109, |
|
"text": "[27]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 110, |
|
"end": 114, |
|
"text": "[28]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 115, |
|
"end": 119, |
|
"text": "1992", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ilnstantiate-V~", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "[Rin89] rent utterance to the existing context and to other utterances, and how the tripartite structure produces a richer model of discourse structure than previous models; 2) how beliefs are communicated, recognized, and used in the identification of discourse actions and discourse structure; and 3) how our process model uses [LA87] linguistic, world, and contextual knowledge together in order to recognize acceptance and non-acceptance of communicated propositions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ilnstantiate-V~", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Conclusions and Future Work [LC91]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "6", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Our plan-based model of dialogue incorporates world, linguistic, and contextual knowledge sources into the recognition of communicative actions. Lin-", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "6", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "[LC92] guistic knowledge suggests certain discourse acts, a speaker's beliefs, and the strength of those beliefs; contextual knowledge suggests the most coherent continuations of the dialogue; and world knowledge provides evidence that the applicability conditions hold for those", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "6", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "[MT83] discourse acts that identify the relationship of the current utterance to the discourse as a whole. By combining these different knowledge sources, we are able to recognize complex discourse acts such as express-", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "6", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "[Po186] ing doubt, to identify the relationship of utterances to one another, and to capture the rich structure of taskoriented dialogue. Grosz and Sidner [GS86] claim that a robust model of understanding must use constraint satisfaction to interpret utterances; that is, when evidence is", |
|
"cite_spans": [ |
|
{ |
|
"start": 138, |
|
"end": 161, |
|
"text": "Grosz and Sidner [GS86]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "6", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "[Pol90] available from one source, less evidence is needed from other sources. We have partially included their suggestion by using world and linguistic knowledge when contextual knowledge is not sufficient to infer actions for which there must he some evidence. However, we would like to expand our notion of partial evidence", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "6", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "[Ram91] to allow evidence from the three knowledge sources to be represented in terms of degree: thus, when world knowledge is overwhelmingly strong, no other knowledge is needed, but when it is very weak, knowledge from other sources will be needed to support the inferences not allowed by the weak world knowledge alone.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "6", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This work is being supported by the National Science Foundation trader Graait No. IR1-9122026. The ~overrlnlellt h&s cer-tMn rights in tiffs material.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "A Plan-Based Approach to Speech Act Recognition", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Allen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1979, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James F. Allen. A Plan-Based Approach to Speech Act Recognition. PhD thesis, Univer- sity of qbronto, Toronto, Ontario, Canada, 1979.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Pragmatic Modeling: Toward a Robust Natural Language Interface", |
|
"authors": [ |
|
{ |
|
"first": "Sandra", |
|
"middle": [], |
|
"last": "Carberry", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1987, |
|
"venue": "Computational Intelligence", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "117--136", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sandra Carberry. Pragmatic Modeling: To- ward a Robust Natural Language Interface. Computational Intelligence, 3:117-136, 1987.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A Pragmatics-Based Approach to Ellipsis Resolution", |
|
"authors": [ |
|
{ |
|
"first": "Sandra", |
|
"middle": [], |
|
"last": "Carberry", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "Computational Linguistics", |
|
"volume": "15", |
|
"issue": "2", |
|
"pages": "75--96", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sandra Carberry. A Pragmatics-Based Ap- proach to Ellipsis Resolution. Computational Linguistics, 15(2):75-96, 1989.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "The Thread of Discourse. Mouton, The Hague", |
|
"authors": [ |
|
{ |
|
"first": "Joseph", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Grimes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1975, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joseph E. Grimes. The Thread of Discourse. Mouton, The Hague, Paris, 1975.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Attention, Intention, and the Structure of Discourse", |
|
"authors": [ |
|
{ |
|
"first": "Barbara", |
|
"middle": [], |
|
"last": "Grosz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Candaee", |
|
"middle": [], |
|
"last": "Sidner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1986, |
|
"venue": "Computational Linguistics", |
|
"volume": "12", |
|
"issue": "3", |
|
"pages": "175--204", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Barbara Grosz and Candaee Sidner. Atten- tion, Intention, and the Structure of Discourse. Computational Lin- guistics, 12(3):175-204, 1986.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Two Constraints on Speech Act Ambiguity", |
|
"authors": [ |
|
{ |
|
"first": "Elizabeth", |
|
"middle": [], |
|
"last": "Hinkelman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "212--219", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elizabeth Hinkelman. Two Constraints on Speech Act Ambiguity. In Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics, pages 212- 219, Vancouver, Canada, 1989.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "A Plan Recognition Model for Subdialogues in Conversation", |
|
"authors": [ |
|
{ |
|
"first": "Diane", |
|
"middle": [], |
|
"last": "Litmus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Alien", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Cognitive Science", |
|
"volume": "11", |
|
"issue": "", |
|
"pages": "163--200", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diane Litmus and James Alien. A Plan Recognition Model for Subdialogues in Con- versation. Cognitive Science, 11:163-200,", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "A Tripartite Planobased Model of Dialogue", |
|
"authors": [ |
|
{ |
|
"first": "Lynn", |
|
"middle": [], |
|
"last": "Lambert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sandra", |
|
"middle": [], |
|
"last": "Carberry", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Proceedings of the 29th Annual Meeting of the ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "47--54", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lynn Lambert and Sandra Carberry. A Tri- partite Planobased Model of Dialogue. In Pro- ceedings of the 29th Annual Meeting of the ACL, pages 47-54, Berkeley, CA, June 1991.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Modeling Negotiation Dialogues", |
|
"authors": [ |
|
{ |
|
"first": "Lynn", |
|
"middle": [], |
|
"last": "Lambert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sandra", |
|
"middle": [], |
|
"last": "Carberry", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proceedings of the 30th Annual Meeting of the ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lynn Lambert and Sandra Carberry. Model- ing Negotiation Dialogues. In Proceedings of the 30th Annual Meeting of the ACL, Newark, DE, June 1992. To appear.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Relational Propositions in Discourse", |
|
"authors": [ |
|
{

"first": "William",

"middle": [

"C"

],

"last": "Mann",

"suffix": ""

},

{

"first": "Sandra",

"middle": [

"A"

],

"last": "Thompson",

"suffix": ""

}
|
], |
|
"year": 1983, |
|
"venue": "ISI/USC, November", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "William C. Mann and Sandra A. Thompson. Relational Propositions in Discourse. Techni- cal Report ISI/RR-83-115, ISI/USC, Novem- ber 1983.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "The Linguistics Discourse Model: Towards a Formal Theory of Discourse Structure", |
|
"authors": [ |
|
{ |
|
"first": "Livia", |
|
"middle": [], |
|
"last": "Polanyi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1986, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Livia Polanyi. The Linguistics Discourse Model: Towards a Formal Theory of Dis- course Structure. Technical Report 6409, Bolt Beranek and Newman Laboratories Inc., Cambridge, Massachusetts, 1986.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Plans as Complex Mental Attitudes", |
|
"authors": [ |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Pollack", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Intentions in Communication", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "77--104", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Martha Pollack. Plans as Complex Mental Attitudes. In Philip R. Cohen, Jerry Mor- gan, and Martha E. Pollack, editors, Inten- tions in Communication, pages 77-104. MIT Press, 1990.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "A Three-Level Model for Plan Exploration", |
|
"authors": [ |
|
{ |
|
"first": "Lance", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Ramshaw", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "36--46", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lance A. Ramshaw. A Three-Level Model for Plan Exploration. In Proceedings of the 29th Annual Meeting of the Association for Com- putational Linguistics, pages 36-46, Berkeley, California, 1991.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Focusing for Interpretation of Pronouns", |
|
"authors": [ |
|
{ |
|
"first": "Candace", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Sidner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1981, |
|
"venue": "American Journal of Computational Linuistics", |
|
"volume": "7", |
|
"issue": "4", |
|
"pages": "217--231", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Candace L. Sidner. Focusing for Interpreta- tion of Pronouns. American Journal of Com- putational Linuistics, 7(4):217-231, 1981.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Plan Parsing for Intended Response Recognition in Discourse", |
|
"authors": [ |
|
{ |
|
"first": "Candace", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Sidner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1985, |
|
"venue": "Computational Intelligence", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1--10", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Candace L. Sidner. Plan Parsing for Intended Response Recognition in Discourse. Compu- tational Intelligence, 1:1-10, 1985.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Representing and Using Knowledge About Planning in Problem Solving and Natural Language Understanding", |
|
"authors": [ |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Wilensky", |
|
"suffix": "" |
|
}
|
], |
|
"year": 1981, |
|
"venue": "Cognitive Science", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "197--233", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robert Wilensky. Meta-Ptanning: Represent- ing and Using Knowledge About Planning in Problem Solving and Natural Language Un- derstanding. Cognitive Science, 5:197-233, 1981.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "................... -.............. i ......... , ..... 2\",'. .....................uot(sl, ss, :.~,'r+~h\u00a2~ t.\u00a2, css6o)) Reque~l(Sl. S2, lnform+Ref(S2. [st, f~, Teachex(_ fac, CS360)) [ Sur face-Wn-Question(S 1, S2, I [ fac, Teach~(_fac, C8360)) [", |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"text": "fa~, CS36.O), Teaches~D6 Smith~ CS360 ) | [ hlform(S2, Sl, Teaches(Dr. Snlim. CS360)) I \u00a2 [*T'\u00a2II(S;,'SI, Teaches(Dr. Smlth, lLq360))\" } l I Surface-Inform(S2, Sl, Teaches(Ill. Smith, CS360)) I Figure 1: Dialogue Model for two utterances --~\" = Emfi)le Arc = Subactlon Arc", |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"text": "is Surface-Neg- $2, PropA) and that chaining suggests that the utterance is part of a Convey-Uncertain.Belief action which is part of a recipe for the Express-Doubt action shown inFigure 2. The parameters _agent;I, _agent2, and _p~rop2 in the Express-Doubt action will be instantiatedwith the values $1, $2, and PropA that appear in the semantic representation of the utterance and propagate during chaining to the Convey Uncertain-Belief action and in turn to the Express-Doubt action, tlowever, the parameters _propl and Aru].e are introduced for the first time in the Express-Doubt action and have many plausible instantiations. Continued chaining from the Express-Doubt discourse action could eventually lead to an Inform action, and we might equate this Inform with an Inform that already exists in the DM, thereby interpreting tile new utterance as related to this previously identified action. Unifying the Inform action on tim inference path with an Inform action in the DM,", |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF3": { |
|
"text": "... -:~-.............................", |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF4": { |
|
"text": "Dialogue Model for Example and the I}M for these utterances is given inFigure 1.", |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"TABREF0": { |
|
"num": null, |
|
"html": null, |
|
"text": ", do Ilot relate to previous dialogue on the discourse level.", |
|
"type_str": "table", |
|
"content": "<table><tr><td>(</td><td>t9t SI: Who is teaching CS3607 $2: Dr. Smith.</td><td/></tr><tr><td colspan=\"2\">(11) SI: Isn't Dr. Smith lhe professor who won</td><td/></tr><tr><td/><td>the teaching award last year?</td><td/></tr><tr><td colspan=\"2\">ACRES DE COLING-92, NANq~S, 23-28 AO'/Zr 1992</td><td>3 1 3</td><td>PROC. OF COLING-92, NANTES, AUG. 23-28, 1992</td></tr></table>" |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"html": null, |
|
"text": "r ol?l.em -..,~ly} n~ .1: .re(el .... ] ...........................", |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"2\">Dom~dn Level</td><td/></tr><tr><td colspan=\"2\">i..~..i</td><td/></tr><tr><td>Dluoourse Level</td><td/><td/></tr><tr><td/><td/><td>were discussed earlier,</td></tr><tr><td>ACTES DE COLING-92, NANTES, 23-28 Ao(rr 1992</td><td>3 1 4</td><td>PROC. OF COLING-92, NANTES, AUO. 23-28, 1992</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |