{
"paper_id": "H90-1023",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:36:27.178808Z"
},
"title": "Beyond Class A: A Proposal for Automatic Evaluation of Discourse",
"authors": [
{
"first": "Lynette",
"middle": [],
"last": "Hirschman",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Deborah",
"middle": [
"A"
],
"last": "Dahl",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Donald",
"middle": [
"P"
],
"last": "Mckay",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Lewis",
"middle": [
"M"
],
"last": "Norton",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Marcia",
"middle": [
"C"
],
"last": "Linebarger",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "",
"pdf_parse": {
"paper_id": "H90-1023",
"_pdf_hash": "",
"abstract": [],
"body_text": [
{
"text": "The DARPA Spoken Language community has just completed the first trial evaluation of spontaneous query/response pairs in the Air Travel (ATIS) domain. 1 Our goal has been to find a methodology for evaluating correct responses to user queries. To this end, we agreed, for the first trial evaluation, to constrain the problem in several ways:",
"cite_spans": [
{
"start": 151,
"end": 152,
"text": "1",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "Database Application: Constrain the application to a database query application, to ease the burden of a) constructing the back-end, and b) determining correct responses;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "Canonical Answer: Constrain answer comparison to a minimal \"canonical answer\" that imposes the fewest constraints on the form of system response displayed to a user at each site;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "Typed Input: Constrain the evaluation to typed input only;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "Class A: Constrain the test set to single unambiguous intelligible utterances taken without context that have well-defined database answers (\"class A\" sentences).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "These were reasonable constraints to impose on the first trial evaluation. However, it is clear that we need to loosen these constraints to obtain a more realistic evaluation of spoken language systems. The purpose of this paper is to suggest how we can move beyond evaluation of class A sentences to an evaluation of connected dialogue, including out-of-domain queries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "The training data consisted of almost 800 sentences, approximately 60% of which could be evaluated completely independent of context. Of the remaining sentences, approximately half of them (19%) require context, and almost that many do not have a unique database answer (17%). Table 1 shows these figures for the four sets of ATIS training data; note that the total adds up to more than 100% because some sentences belonged to multiple classes. 2 1This work was supported by DARPA contract N000014-89-C0171, administered by the Office of Naval Research.",
"cite_spans": [],
"ref_spans": [
{
"start": 277,
"end": 284,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Analysis of the Training Data",
"sec_num": null
},
{
"text": "2 This table counts the so-called context-removable sentences as context dependent, because the answer to such sentences changes depending on whether context is used or not. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of the Training Data",
"sec_num": null
},
{
"text": "We originaUy postponed evaluation of non-class A sentences because there was no consensus on automated evaluation techniques for these sentences. We would like here to propose a methodology for both \"unanswerable\" sentences and for automated evaluation of contextdependent sentences. By capturing these two additional classes in the evaluation, we can evaluate on more than 90% of the data; in addition, we can evaluate entire (wellformed) dialogues, not just isolated query/answer pairs. Unanswerable Queries For unanswerable queries, we propose that the system recognize that the query is unanswerable and generate (for evaluation purposes) a canonical answer such as UNANSWERABLE_QUERY.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Modest Proposal",
"sec_num": null
},
{
"text": "This would be scored correct in exactly those cases where the query is in fact unanswerable. The use of a canonical message side-steps the tricky issue of exactly what kind of error message to issue to the user. This solution is proposed in the general spirit of the Canonical Answer Specification [1] which requires only a minimal answer, in order to impose the fewest constraints on the exact nature of the system's answer to the user. This must be distinguished from the use of NO_ANSWER, which flags cases where the system does not attempt to formulate a query. The NO.ANSWER response allows the system to admit that it doesn't understand something. By contrast, the UNANSWERABLE_QUERY answer actually diagnoses the cases where the system understands the query and determines that the query cannot be answered by the database. ",
"cite_spans": [
{
"start": 298,
"end": 301,
"text": "[1]",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Modest Proposal",
"sec_num": null
},
{
"text": "The major obstacle to evaluation of context-dependent sentences is how to provide the context required for understanding the sentences. If each system were able to replicate the context in which the data is collected, it should be possible to evaluate context-dependent queries. This context (which we will call the \"canonical context\") consists of the query-answer pairs seen by the subject up to that point during data collection. These examples show how contextual information is used. Query 2 (... I would like to find flights going on to San Francisco on Monda~t the 9th of July) requires the previous query Q1 to determine that the starting point of this leg is Denver. Query 3 (What would be the fare on United 3~37) refers to an entity mentioned in the answer of Query 2, namely United 343. United 343 may well include several legs, flying from Chicago to Denver to San Francisco, for example, with three fares for the different segments (Chicago to Denver, Chicago to San Francisco, and Denver to San Francisco). However, Query 3 depends on context from the previous display to focus only on the fare from Denver to San Francisco. Finally, Query 4 (What about Continental 1~g57) requires the previous query Q3 and its contezt to establish what is being asked about (fare from Denver to San Francisco); it also refers to an entity mentioned in the display D2 associated with Query 2 (Continental 1295). By building up a context using information from both the query and the answer, it is possible to interpret these queries correctly. This is shown schematically in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 1574,
"end": 1582,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Capturing the Context",
"sec_num": null
},
{
"text": "In Figure 3 points out an additional difficulty in evaluating sentences dependent on context, namely the possibility of \"getting out of synch\". In this example, the system misprocesses the original request, saying that there are no flights from Atlanta to Denver leaving before 11. When the follow-up query asks Show me the cheapest one, there is an apparent incoherence, since there is no \"cheapest\" one in the empty set. However, if the canonical query/answer pairs are provided during evaluation, the system can \"resynchronize\" to the information originally displayed to the user and thus recognize that it should chose the cheapest flight from the set given in the canonical answer.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 3",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Keeping in Synch",
"sec_num": null
},
{
"text": "The above examples illustrate what information is needed in order to understand queries in context. The next question is how to provide this \"canonical context\" (consisting of the query/answer pairs generated during data collection) for purposes of automated evaluation. Providing the set of queries is, of course, not a problem: this is exactly the set of input data. a Providing the canonical answers is more of a problem, because it requires each system to reproduce the answer displayed during data gathering. Since there is no agreement as to what constitutes the best way to display the data, requiring that each system reproduce the original display seems far too constraining. However, we can provide, for evaluation purposes, the display seen by the subject during data collection. The log file in the training data contains this information in human-readable form. It can be provided in more convenient form for automatic processing by representing the display as a list of lists, where the first element in the list is the set of column headings, and the remaining elements are the rows of data. This \"canonical display format\" is illustrated in Figure 4 . For evaluation, the canonical (transcribed) query and the canonical display would be furnished with each 30f course, if the input is speech data, then the system could misunderstand the speech data; therefore, to preserve synchronization as much as possible, we propose that the transcribed input be provided for evaluation of speech input.",
"cite_spans": [],
"ref_spans": [
{
"start": 1157,
"end": 1165,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Providing the Canonical Context",
"sec_num": null
},
{
"text": "FLT CODE FLT DAY FRM TO DEPT query, to provide the full context to the system, allowing it to \"resynchronize\" at each step in the dialogue. 4 The system could then process the query (which creates any context associated with the query) and answer the query (producing the usual CAS output). It would then reset its context to the state before query processing and add the \"canonical context\" from the canonical query and from the canonical display, leaving the system with the appropriate context to handle the next query. This is illustrated in Figure 5 .",
"cite_spans": [
{
"start": 140,
"end": 141,
"text": "4",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 546,
"end": 554,
"text": "Figure 5",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "DISPLAY SHOWN TO USER:",
"sec_num": null
},
{
"text": "This methodology allows the processing of an entire dialogue, even when the context may not be from the directly preceding query, but from a few queries back. At Unisys, we have already demonstrated the feasibility of substituting an \"external\" DB answer for the internally generated answer [3] . We currently treat the display (that is, the set ofDB tuples returned) as an entity available for reference, in order to capture answer/question dependencies, as illustrated in Figure 3. 4There is still the possibility that the system mlslnterprets the query and then needs to use the query as context for a subsequent query. In thls case, providing the answer may not help, unless there is some redundancy between the query and the answer. In addition to the suggestions for handling unanswerable queries and context-dependent queries, there seems to be an emerging consensus that ambiguous queries can be handled by allowing any of several possible answers to be counted as correct. The system would then be resynchronized as described above, to use the canonical answer furnished during data collection. For evaluation, the system still outputs a transcription and an answer in CAS format; these are evaluated against the SNOR transcription and the reference answer in CAS, as is done now.",
"cite_spans": [
{
"start": 291,
"end": 294,
"text": "[3]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 474,
"end": 483,
"text": "Figure 3.",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "DISPLAY SHOWN TO USER:",
"sec_num": null
},
{
"text": "With each utterance, the system processes the utterance, then is allowed to \"resynchronize\" against the correct question-answer pair, provided as part of the evaluation input data before evaluating the next utterance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DISPLAY SHOWN TO USER:",
"sec_num": null
},
{
"text": "One obvious drawback of this proposal is that it makes it extremely easy to cheat -the user is provided with the transcription and the database display. It is clearly easy to succumb to the temptation to look at the answer -but it is easy to look at the input sentences under the current system; only honesty prevents us from doing that. Providing a canonical display raises the possibility of deriving the correct answer by a simple reformatting of the canonical display. However, it would be easy to prevent this simple kind of cheating by inserting extra tuples or omitting a required tuple from the canonical display answer. This would make any answer derived from the display not compare correctly to the canonical answer. In short, the issue of cheating does not seem like an insurmountable obstacle: we are now largely on the honor system, and if we wished to make it more difficult to cheat, it is not difficult to think of minor alterations that would protect the system from obvious mappings of input to correct answer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Is It Too Easy To Cheat.*",
"sec_num": null
},
{
"text": "There are several arguments in favor of moving beyond class A queries:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Whole Discourses",
"sec_num": null
},
{
"text": "\u2022 Yield is increased from 60% to over 90%;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Whole Discourses",
"sec_num": null
},
{
"text": "\u2022 Data categorization is easier (due to elimination of the context-removable class);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Whole Discourses",
"sec_num": null
},
{
"text": "\u2022 Data validation is easier (no need to rerun contextremovable queries);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Whole Discourses",
"sec_num": null
},
{
"text": "\u2022 Data from different data collection paradigms can be used by multiple sites;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Whole Discourses",
"sec_num": null
},
{
"text": "\u2022 We address a realistic problem, not just an artificial subset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Whole Discourses",
"sec_num": null
},
{
"text": "This is particularly important in light of the results from the June evaluation. In general, systems performed in the 50-60% range on class A sentences. This means that the coverage of the data was in the 30-40% range. If we move on to include unanswerable queries and context dependent queries, we are at least looking at more than 90% of the data. Given that several sites already have the ability to process context-dependent material ( [4] , [6] , [3] ), this should enable contractors to report significantly better overall coverage of the corpus.",
"cite_spans": [
{
"start": 440,
"end": 443,
"text": "[4]",
"ref_id": "BIBREF3"
},
{
"start": 446,
"end": 449,
"text": "[6]",
"ref_id": "BIBREF5"
},
{
"start": 452,
"end": 455,
"text": "[3]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Whole Discourses",
"sec_num": null
},
{
"text": "In addition to these fully automated evaluation criteria, we also propose that we include some subjective evaluation criteria, specifically:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subjective Evaluation Criteria",
"sec_num": null
},
{
"text": "\u2022 User Satisfaction",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subjective Evaluation Criteria",
"sec_num": null
},
{
"text": "At the previous meeting, the MIT group reported on results using outside evaluators to assess system performance ( [5] ). We report on a similar experiment at this meeting( [2] ), in which three evaluators showed good reliability in scoring correct system answers. This indicates that subjective black box evaluation is a feasible approach to system evaluation. Out suggestion is that subjective evaluation techniques be used to supplement and complement the various automated techniques under development.",
"cite_spans": [
{
"start": 115,
"end": 118,
"text": "[5]",
"ref_id": "BIBREF4"
},
{
"start": 173,
"end": 176,
"text": "[2]",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Task Completion Quality and Time",
"sec_num": null
},
{
"text": "This proposal does not address several important issues. For example, clearly a useful system would move towards an expert system, and not remain restricted to a DB interface. We agree that this is an important direction, but have not addressed it here. We also agree with observations that the Canonical Answer hides or conflates information. It does not capture the notion of focus, for example. And we have explicitly side-stepped the difficult issues of what kind of detailed error messages a system should provide, how it should handle failed presupposition, how it should respond to queries outside the DB. For the next round, we are suggesting that it is sufficient to recognize the type of problem the system has, and to supplement the objective measures with some subjective measures of how actual users react to the system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A Proposal for SLS Evaluation",
"authors": [
{
"first": "Sean",
"middle": [],
"last": "Boisen",
"suffix": ""
},
{
"first": "Lance",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Damaris",
"middle": [],
"last": "Ayuso",
"suffix": ""
},
{
"first": "Madeleine",
"middle": [],
"last": "Bates",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the DARPA Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sean Boisen, Lance Ramshaw, Damaris Ayuso, and Madeleine Bates. A Proposal for SLS Evaluation In Proceedings of the DARPA Speech and Natural Language Workshop, Cape Cod, MA, October 1989.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Training and evaluation of a language understanding system for a spoken language application",
"authors": [
{
"first": "Deborah",
"middle": [
"A"
],
"last": "Dam",
"suffix": ""
},
{
"first": "Lynette",
"middle": [],
"last": "Hirschman",
"suffix": ""
},
{
"first": "Lewis",
"middle": [
"M"
],
"last": "Norton",
"suffix": ""
},
{
"first": "Marcia",
"middle": [
"C"
],
"last": "Linebarger",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Magerman",
"suffix": ""
},
{
"first": "Nghi",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Catherine",
"middle": [
"N"
],
"last": "Ball",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of the Darpa Speech and Language Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deborah A. DaM, Lynette Hirschman, Lewis M. Norton, Marcia C. Linebarger, David Magerman, Nghi Nguyen, and Catherine N. Ball. Training and evaluation of a language understanding system for a spoken language application. In Proceedings of the Darpa Speech and Language Workshop, Hidden Valley, PA, June 1990.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Management and evaluation of interactive diaiog in the air travel domain",
"authors": [
{
"first": "Lewis",
"middle": [
"M"
],
"last": "Norton",
"suffix": ""
},
{
"first": "Deborah",
"middle": [
"A"
],
"last": "Dahl",
"suffix": ""
},
{
"first": "Lynette",
"middle": [],
"last": "Hirschman",
"suffix": ""
},
{
"first": "Marcia",
"middle": [
"C"
],
"last": "Linebarger",
"suffix": ""
},
{
"first": "Catherine",
"middle": [
"N"
],
"last": "Ball",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of the Darpa Speech and Language Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lewis M. Norton, Deborah A. Dahl, Lynette Hirschman, Marcia C. Linebarger, and Catherine N. Ball. Management and evaluation of interactive di- aiog in the air travel domain. In Proceedings of the Darpa Speech and Language Workshop, Hidden Val- ley, PA, June 1990.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The CMU Air Travel Information Service: Understanding Spontaneous Speech In Proceedings of the Darpa Speech and Language Workshop",
"authors": [
{
"first": "Wayne",
"middle": [],
"last": "Ward",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wayne Ward. The CMU Air Travel Informa- tion Service: Understanding Spontaneous Speech In Proceedings of the Darpa Speech and Language Workshop, Hidden Valley, PA, June 1990.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Preliminary evaluation of the voyager spoken language system",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Zue",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Goodine",
"suffix": ""
},
{
"first": "Hong",
"middle": [],
"last": "Leung",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Phillips",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Polifroni",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [],
"last": "Seneff",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the DARPA Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Zue, James Glass, David Goodine, Hong Leung, Michael Phillips, Joseph Polifroni, and Stephanie Seneff. Preliminary evaluation of the voy- ager spoken language system. In Proceedings of the DARPA Speech and Natural Language Workshop, Cape Cod, MA, October 1989.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Preliminary ATIS Development at",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Zue",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Goodine",
"suffix": ""
},
{
"first": "Hong",
"middle": [],
"last": "Leung",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Phillips",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Polifroni",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [],
"last": "Seneff",
"suffix": ""
}
],
"year": 1990,
"venue": "MIT In Proceedings of the DARPA Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Zue, James Glass, David Goodine, Hong Leung, Michael Phillips, Joseph Polifroni, and Stephanie Seneff. Preliminary ATIS Development at MIT In Proceedings of the DARPA Speech and Nat- ural Language Workshop, Hidden Valley, PA, June, 1990.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Using Context to Understand Queries",
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"num": null,
"text": "shows the kind of context dependencies that are found in the ATIS corpus.",
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"num": null,
"text": ", we show an example of what can happen when context is not properly taken into account. This Current Handling of Context in PUNDIT",
"type_str": "figure",
"uris": null
},
"FIGREF3": {
"num": null,
"text": "Figure 4: Canonical Display Format",
"type_str": "figure",
"uris": null
},
"FIGREF4": {
"num": null,
"text": "Example of Losing Synchronization AmbiguousQueries",
"type_str": "figure",
"uris": null
},
"FIGREF5": {
"num": null,
"text": "EvaluationFormat Taking the need for context into consideration and the need to allow systems to resynchronize as much as possible, the proposed form of test input for each utterance in a dialogue is:\u2022INPUT during TESTING -Digitized speech -Canonical query for synchronization -Canonical display for synchronization \u2022 OUTPUT during TESTING -Transcription -CAS (with UNANSWERABLE responses)",
"type_str": "figure",
"uris": null
},
"FIGREF7": {
"num": null,
"text": "Updating the Context via Canonical Query and Display",
"type_str": "figure",
"uris": null
},
"TABREF1": {
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table",
"text": ""
},
"TABREF2": {
"num": null,
"content": "<table><tr><td colspan=\"7\">###01 Utterance: What are the flights from Atlanta to Denver on mid-day on the 5th of July?</td></tr><tr><td>&gt;&gt;&gt;D1 Display to the User:</td><td/><td/><td/><td/><td/><td/></tr><tr><td>FLT CODE FLT DAY FRM TO</td><td colspan=\"6\">DEPT ARRV AL FLT# CLASSES EQP MEAL STOP DC DURA</td></tr><tr><td>102122 1234567 ATL DEN</td><td>840</td><td>955 DL</td><td>445 FYBM0</td><td>757 B</td><td>0 N</td><td>195</td></tr><tr><td>102123 1234567 ATL DEN</td><td>934</td><td>1054 EA</td><td>821FYHOK</td><td>725 B</td><td>0 N</td><td>200</td></tr><tr><td colspan=\"3\">###02 Utterance: 1336 UA</td><td>343 FYBMQ</td><td>D8S L</td><td>0 N</td><td>156</td></tr><tr><td/><td/><td colspan=\"2\">1416 CO 1295 FYqHK</td><td>733 L</td><td>0 N</td><td>176</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Okay, now I would like to find flights going on to San Francisco on Monday the 9th of July."
},
"TABREF4": {
"num": null,
"content": "<table><tr><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td></tr><tr><td colspan=\"6\">102122</td><td/><td/><td/><td/><td colspan=\"19\">1234567 ATL DEN 840</td><td/><td/><td/><td/><td colspan=\"11\">955 DL 445</td><td/><td/><td colspan=\"7\">FYBMQ</td><td/><td/><td colspan=\"8\">757 S/B</td><td>0 N 195</td></tr><tr><td colspan=\"6\">102123</td><td/><td/><td/><td/><td colspan=\"19\">1234567 ATL DEN 934</td><td/><td/><td/><td colspan=\"12\">1054 EA 821</td><td/><td/><td colspan=\"5\">FYEQK</td><td/><td/><td/><td/><td colspan=\"8\">72S S/B</td><td>0 g 200</td></tr><tr><td/><td>.oo</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"17\">Follow-up Query:</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"35\">USER: Show me the cheapest one.</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"47\">Synchronization lost; can regain with canonical display!</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>",
"html": null,
"type_str": "table",
"text": ""
}
}
}
}