|
{ |
|
"paper_id": "H91-1021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T03:33:00.590113Z" |
|
}, |
|
"title": "Augmented Role Filling Capabilities for Semantic Interpretation of Spoken Language", |
|
"authors": [ |
|
{ |
|
"first": "Lewis", |
|
"middle": [], |
|
"last": "Norton", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Marcia", |
|
"middle": [], |
|
"last": "Linebarger", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Deborah", |
|
"middle": [], |
|
"last": "Dahl", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Nghi", |
|
"middle": [], |
|
"last": "Nguyen", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper describes recent work on the Unisys ATIS Spoken Language System, and reports benchmark results on natural language, spoken language, and speech recognition. We describe enhancements to the system's semantic processing for handling non.transparent argument structure and enhancements to the system's pragmatic processing of material in art. swers displayed to the user. We found that the system's score on the natural language benchmark test decreased from ~8~o to 36~ without these enhancements. We also report results for three spoken language systems, Unisys natural language coupled with MIT-Summit speech recognition, Unisys natural language coupled wish MIT-Lincoln Labs speech recognition and Unisys natural language coupled with BBN speech recognition. Speech recognition results are reported on the results of the Unisys natural language selecting a candidate from the MIT-Summit N-best (N=16).", |
|
"pdf_parse": { |
|
"paper_id": "H91-1021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper describes recent work on the Unisys ATIS Spoken Language System, and reports benchmark results on natural language, spoken language, and speech recognition. We describe enhancements to the system's semantic processing for handling non.transparent argument structure and enhancements to the system's pragmatic processing of material in art. swers displayed to the user. We found that the system's score on the natural language benchmark test decreased from ~8~o to 36~ without these enhancements. We also report results for three spoken language systems, Unisys natural language coupled with MIT-Summit speech recognition, Unisys natural language coupled wish MIT-Lincoln Labs speech recognition and Unisys natural language coupled with BBN speech recognition. Speech recognition results are reported on the results of the Unisys natural language selecting a candidate from the MIT-Summit N-best (N=16).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Improving the performance of spoken language systems requires addressing issues along several fxonts, including basic improvements in natural language processing and speech recognition as well as issues of integration of these components in spoken language systems. In this paper we report the results of our recent work in each of these areas. * One major area of work has been in the the semantic and pragmatic components of the Unisys natural language processins system. The work in semantics enhances the robustness of semantic processing by allowing parses which do not directly express the argument structure expected by semantics to nevertheless be processed in a rule-governed way. In the area of pragmatics we have extended our techniques for bringing material displayed to the user into the dialog context to handle several additional classes of references to material in the display.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INTRODUCTION", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 This work was supported by DARPA contract N000014-89-C0171, administered by the Office of Naval Research. We are grate-fuI to Victor Zue of MIT, Doug Paul of MIT Lincoln Laboratories and John Mak.houl of BBN for making output from their speech recognition systems available to us. We also wish to thank Tim Finln, Rich Fritzson, Don McKay, Jim Meldlnger, and Jan Pastor of Unisys and Lynette Hirschnmn of MIT for their contributions to this work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INTRODUCTION", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In the area of integration of speech and natural language, we report on an experiment with three spoken language systems, coupling the same Unisys natural language system to three different speech recognisers as shown in Figure 1 . We believe this is a very promising technique for evaluating the components of spoken language systems. Using this technique we can make a very straightforward compe~ison of the performance of the recognizers in a spoken language context. Furthermore, this technique also allows us to make a fine-gralned comparision of the interaction between speech and natural language in the three systems by looking at such questions as the relative proportion of speech recognizer outputs that fail to parse, fall to receive a semantic analysis and so on.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 221, |
|
"end": 229, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "INTRODUCTION", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Finally, we report on speech recognition results obtained by filtering the N-best (N=16) from MIT-Summlt through the Unisys natural language system. We note that there was a higher error rate for context-dependent speech as compared to context-independent speech (54.6% compared to 45.8~) and suggest two hypotheses which may account for this difference.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INTRODUCTION", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "When evaluating our system after the Hidden Valley workshop, we observed two phenomena about PUNDIT (the Unlsys natttral language understanding system [3] ) that warranted improvement. The first was that PUNDIT's semantic interpreter was sometimes fA;llng to correctly recognize predicate argument relationships for syntactic constituents that were not immediately associated with their intended head. The second was that PUNDIT was producing different representations for queries with different syntactic/lexical content but identical (or nearly identical) semantic content. We see both of these shortcomings as due to what we will term \"non-transparent argument structure\": syntactic representations in which syntactic constituents are not associated with their intended head, or semantic representations in which predicate-argument relationships are underspecified. Our approach to dealing with these shortcomings has been to maintain a rule-governed approach to role-filling despite non-transparent syntactic and semantic structures. We believe that the extensions we are about to describe are especially relevant to Spoken Language Understanding, because non-transparent argument structure appears to be particularly characteristic of spontaneous spoken utterances, for reasons we will sketch below.", |
|
"cite_spans": [ |
|
{ |
|
"start": 151, |
|
"end": 154, |
|
"text": "[3]", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SEMANTICS", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The semantic interpreter and non-transparent parses Semantic interpretation in PUNDIT is the process of instantiating the arguments of case frame structures called decompositions, which are associated with verbs and selected nouns and adjectives ( [7] ). The arguments of decompositions are assigned thematic role labels such as agent, patient, source, and so forth. Semantic interpretation is a search for suitable grammatical role/thematic role correspondences, using syntax-semantics mapping rules, which specify what syntactic constituents may fill a particular role; and semantic class constraints, which specify the semantic properties required of potential fillers. The syntactic constraints on potential role fillers are of two types: CATEGORIAL constraints, which require that the potential filler be of a certain grammatical type such as subject, object, or prepositional phrase; and ATTACH-MENT constraints, which require that the potential filler occur within the phrase headed by the predicate of which it is to be an argument. The categorial constraints are stated explicitly in the syntax-semantics mapping rules; the latter are implicit in the functioning of the semantic interpreter. For example, the source role of flight_C, the domain model predicate associated with the noun \"flight\", can, in accordance with the syntax-semantics mapping rules, be filled by the entity associated with the object of a \"from'-pp occurring within the same noun phrase as \"flight\" (The flight from Boston takes three hours). Unfortunately, the parse does not always express the argument structure of the sentence as transparently as it does in this example; constituents that should provide role fillers for a predicate are not always syntactically associated with the predicate. There are several causes for such a mismatch between the parse and the intended interpretation. They include (l) a variety of syntactic deformations which we will refer to as extraposition ( What flights do you have to Boston, where the \"to'-pp belongs in the subject np; I need ticket information from Boston to Dallas, where the pp's modify the prenominal noun \"ticket\", not the head noun \"information\"; or I toant a cheaper flight than Delta 66, where the \"than'-pp modifies \"cheaper\", not \"flight\"), (2) metonymy (I toant the $50.00 flight, where the speaker means that s/he wants the flight whose FARE is $50.00), and (3) suboptimal parses (e.g., parses with incorrect pp-attachment).", |
|
"cite_spans": [ |
|
{ |
|
"start": 248, |
|
"end": 251, |
|
"text": "[7]", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SEMANTICS", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Our changes to the semantic interpreter allow it to fill roles correctly in cases such as the above, utilising its existing knowledge of syntax-semantics correspondences, but relaxing certain expectations about the syntactic attachment of rolefilling constituents. Thus the CATEGORIAL constraints remain in force, but the ATTACHMENT constraints have been loosened somewhat. The system now identifies prepositional phrases and adverbs which have not frilled a role in the predicate with which they are syntactically associated, and offers them as role fillers to fillers of this predicate. This strategy applies recursively to fillers of fillers of roles; for example, in What types of ground transportation services are available from the airport in Atlanta to downtown A tlanta f , the two final pp's ultimately fill roles in the decomposition associated with \"ground transportation\" since neither \"types\" nor \"services\" has mapping rules to consume them. The same mechanism already in place for role-filling is employed in these cases, the only difference being that unused syntactic constituents are propagated downward. Note that we continue to take syntax into account; we do not wish to ignore the syntax of leftover constituents and fill roles indiscriminately on the basis of semantic properties alone.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SEMANTICS", |
|
"sec_num": null |
|
}, |
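
{

"text": "The relaxed role-filling strategy can be sketched as follows (a minimal, hypothetical Python illustration, not PUNDIT's actual code; all predicate and structure names are invented): categorial constraints stay strict, while unconsumed constituents are propagated downward to embedded decompositions.\n\ndef categorial_test(prep):\n    # categorial constraint: a pp headed by the given preposition\n    return lambda c: c['category'] == 'pp' and c['prep'] == prep\n\nMAPPING_RULES = {  # (predicate, role) -> syntactic test on a constituent\n    ('flight_C', 'source'): categorial_test('from'),\n    ('flight_C', 'destination'): categorial_test('to'),\n    ('ground_transport_C', 'source'): categorial_test('from'),\n    ('ground_transport_C', 'destination'): categorial_test('to'),\n}\n\ndef fill_roles(predicate, constituents):\n    bindings, leftovers = {}, []\n    for c in constituents:\n        role = next((r for (p, r), test in MAPPING_RULES.items()\n                     if p == predicate['name'] and r not in bindings and test(c)), None)\n        if role:\n            bindings[role] = c\n        else:\n            leftovers.append(c)\n    # relaxed attachment: offer leftover constituents, recursively, to\n    # embedded decompositions (fillers of roles, fillers of fillers, ...)\n    for sub in predicate.get('embedded', []):\n        sub_bindings, leftovers = fill_roles(sub, leftovers)\n        bindings[sub['name']] = sub_bindings\n    return bindings, leftovers\n\n# the example from the text: neither 'types' nor 'services' consumes the\n# two final pp's, so they propagate down to ground_transport_C\nnp = {'name': 'types_C', 'embedded': [{'name': 'services_C', 'embedded': [{'name': 'ground_transport_C', 'embedded': []}]}]}\npps = [{'category': 'pp', 'prep': 'from', 'obj': 'the airport in Atlanta'},\n       {'category': 'pp', 'prep': 'to', 'obj': 'downtown Atlanta'}]\nbindings, unused = fill_roles(np, pps)\nassert unused == [] and 'source' in bindings['services_C']['ground_transport_C']",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "SEMANTICS",

"sec_num": null

},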
|
{ |
|
"text": "We conducted an experiment to assess the effects of these changes upon the system's performance, using a set of 138 queries (both class A and non-class A) on which the system was previously trained. The measure of performance used was the standard ATIS metric of the number of correct answers minus the number incorrect. Disabling the semantic changes described above lowered the system's score from 82 to 63, a decrease of 23~.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SEMANTICS", |
|
"sec_num": null |
|
}, |
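
{

"text": "As a concrete illustration of the arithmetic (a hypothetical helper, not code from our system), the ATIS metric and the weighted score used later in this paper can be written as follows.\n\ndef raw_score(num_t, num_f):\n    # standard ATIS metric: correct answers minus incorrect answers\n    return num_t - num_f\n\ndef weighted_score(num_t, num_f, total):\n    # weighted score as a percentage of all utterances (NA answers count zero)\n    return 100.0 * (num_t - num_f) / total\n\n# disabling the semantic changes lowered the raw score from 82 to 63,\n# a relative decrease of (82 - 63) / 82, i.e. about 23%\nassert round(100 * (82 - 63) / 82) == 23",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "SEMANTICS",

"sec_num": null

},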
|
{ |
|
"text": "Our second improvement was directed at cases where PUN-DIT's semantic interpreter may have correctly represented the meaning of a sentence but in an irregular way. For exampie, the instantiated decomposition produced for \"flights from Boston\" is: Clearly it would be preferable for the flight_C decomposition to be the same in both cases, but in the second case the source role of the decomposition associated with flightl was unfilled, although it could be inferred from the leaveP decomposition that the flight's source was Boston. In other words, PUNDIT had not captured a basic synonymy relation between these np's.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The application module and non-transparent semantic representations", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Our response to this was to augment the semantic interpreter with a routine which can perform inferences involving more than one decomposition. The actual inferences are expressed in the form of rules which are domain-dependent; the inference-performlng mechanism is domain-independent. For the above example, we have written a rule which, paraphrased in English, says that if a verb is one of a class of motion verbs used to express flying (e.g., \"leave\"), and if the source role of this verb is filled, propagate that filler to the source role of the flight involved. Thus the flight_C decomposition becomes the same for both inputs. Thirty-four such rules have been written for the ATIS domain, and we estimate that they are applicable to 10% to 15% of the training data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The application module and non-transparent semantic representations", |
|
"sec_num": null |
|
}, |
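
{

"text": "A minimal sketch of this rule (hypothetical Python structures; the system's actual rule syntax is not shown) makes the division of labor concrete: the rules themselves are domain-dependent data, while the applier is domain-independent.\n\nMOTION_VERBS = {'leave', 'depart', 'fly'}  # verbs used to express flying\n\ndef rule_propagate_source(decomps):\n    # if a motion verb's source role is filled, copy that filler to the\n    # source role of the flight involved in the event\n    for d in decomps:\n        if d['pred'] in MOTION_VERBS and 'source' in d['roles']:\n            flight = d['roles'].get('theme')\n            if flight is not None and 'source' not in flight['roles']:\n                flight['roles']['source'] = d['roles']['source']\n\nINFERENCE_RULES = [rule_propagate_source]  # 34 such rules exist for ATIS\n\ndef apply_inferences(decomps):\n    # domain-independent mechanism: run every domain rule over the output\n    for rule in INFERENCE_RULES:\n        rule(decomps)\n    return decomps\n\n# 'flights leaving Boston' now yields the same flight_C decomposition\n# as 'flights from Boston'\nflight1 = {'pred': 'flight_C', 'roles': {}}\nleave1 = {'pred': 'leave', 'roles': {'theme': flight1, 'source': 'boston'}}\napply_inferences([leave1, flight1])\nassert flight1['roles']['source'] == 'boston'",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The application module and non-transparent semantic representations",

"sec_num": null

},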
|
{ |
|
"text": "The payoff from this extension comes in the use of PUNDIT's output by application modules. For the ATIS domain, the application module is the program that takes PUNDIT's output and uses it to formulate a query to the ATIS DB. It is obviously advantageous for the creation and maintenance of an application module that its input be regularized to the greatest extent possible, thus making such a module simpler, and avoiding duplication of code to compensate for non-regularized input in different application modules.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The application module and non-transparent semantic representations", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "When we ran the same set of 138 queries used in the experiment described in the previous subsection without the rules just discussed (but with the semantics improvements of the previous subsection), the system's score dropped from 82 to 62, or 24%. There appears to be little interaction between the semantics improvements and the rules of this subsection-they apply to different phenomena in input data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The application module and non-transparent semantic representations", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In our June 1990 workshop paper ([6]), we described a feature of our system which we included to handle correctly a particular kind of discourse phenomenon. In particular, in the ATIS domain there are frequent references to flights by flight nnmher (e.g., \"Delta flight 123\") which the user means to be unambiguous, but which in general have to be disambiguated in context. The reason is that the user learned about \"Delta 123\" from some previous answer, where it was returned as one of the flights between two cities City1 and City2. The problem is that \"Delta 123\" may have additional legs; for instance it may go on from City2 to City3. The user, when asking for the fares for \"Delta 123\", is presumably interested only in the City1 to City2 fare, not the City2 to City3 one and not the City1 to City3 one. So our system looked back at previous answers to find a mention of \"Delta 123\", thereby determining the flight leg of interest.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PRAGMATICS", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This kind of disamhiguation can take other forms, and we have added some of them to our system since June. One of these capabilities is illustrated by the two queries What does LH meanf and What does EQP meanf Without context, the first of these cannot be correctly answered, because \"LH\" is a code for both an airline and a fare class. The second of these queries would yield a table with two rows, one row for each table for which \"EQP\" is one of the table's column headings. In both of these queries, however, the user is asking for clarification of something which has been presented as part of a previous answer display. So what our system needs to do, and does do, is refer back to previous answers much in the spirit of the \"Delta 123\" example above. For the first query, we will find the most recent answer which has \"LH\" as a column entry in some row: for the second we will find the most recent answer which has \"EQP\" as a column heading. Our system can then make the proper disambiguation and present the user with an appropriate cooperative response to the follow-up query. There were only a handful of follow-up queries of this form in the training data, hut the extension to handle them was easy to add given the code in place to handle the \"Delta 123\" example.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PRAGMATICS", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Similarly, the training data contained numerous instances of queries such as What are the claues. ~ In the absence of context, the best answer to this seems to be a llst of the more than 100 different fare classes. However, queries such as these invariably follow the display of some fare classes in either flight tables or fare tables. The cooperative response, then, is to display a table of fare classes whose rows have been limited to those classes previously mentioned in the most recent flight or fare table. Our system also uses a generalization of this algorithm to filter requests for other kinds of codes, such as restrictions, ground transportation codes, aircraft codes, and meal codes. In all, from the TI training data ([2]) we have noticed 19 follow-up queries (out of 912) which now get the correct answer in context because of this extension to our system; there may be more queries which requite this extension that we have not yet processed correctly for other reasons.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PRAGMATICS", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We make it possible to refer to previous answer tables in our system by means of the following mechanism. Whenever an answer table is returned, a discourse entity representing it is added to the discourse context, and a semantics for this entity is provided. Roughly speaking, if the query leading to the answer table is a request for X, the semantics can be thought of as being \"the table of X\" ( [6] ). For example, if the query was a request for flights from City1 to City2, the semantics assigned to the discourse entity representing the answer is \"the table of flights from City1 to City2\". Note that we do NOT create discourse entities for each row (particular flights from City1 to City2 in the example) or for each column entry in a row (e.g., the departure time of a particular flight from Cityl to City2). Doing so would make the discourse context munanageably large. But the table (complete with column headings) is available and accessible to our system, and can be searched for particular values when it is desirable to do so, as in the capabilities being described in this section.", |
|
"cite_spans": [ |
|
{ |
|
"start": 398, |
|
"end": 401, |
|
"text": "[6]", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PRAGMATICS", |
|
"sec_num": null |
|
}, |
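
{

"text": "This mechanism can be sketched as follows (hypothetical Python structures, for illustration only): each answer table becomes a single discourse entity, and follow-up queries search the stored tables for particular values.\n\ndiscourse_context = []  # one entity per previous answer, most recent last\n\ndef add_answer(semantics, headings, rows):\n    # e.g. semantics = 'the table of flights from City1 to City2'\n    discourse_context.append({'semantics': semantics, 'headings': headings, 'rows': rows})\n\ndef find_code(code):\n    # most recent answer containing the code as a column entry in some row\n    for answer in reversed(discourse_context):\n        for row in answer['rows']:\n            if code in row:\n                return answer, answer['headings'][row.index(code)]\n    return None, None\n\nadd_answer('the table of flights from Boston to Dallas',\n           ['AIRLINE', 'FLIGHT', 'FARE CLASS'],\n           [['DL', 123, 'LH']])\nanswer, column = find_code('LH')\nassert column == 'FARE CLASS'\n# a follow-up query about LH is thus disambiguated to the fare-class\n# sense, since that is how LH most recently appeared in a display",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "PRAGMATICS",

"sec_num": null

},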
|
{ |
|
"text": "The techniques just described depend on the availability of previous ANSWERS. Some of the follow-up queries which they enable to be answered correctly could perhaps be handled by reference to previous QUERIES only, particularly in the special case where there is known to be only one previous query. We believe that our techniques are superior for at least two reasons. First, in the presence of more than one previous query, the answers to those previous queries are for our system a more compact and modular representation of the content of those queries than the discourse entities created while analysing the queries themselves; in short, it is simply easier to find what we want in the answers rather than in our representations of the queries. Second, there are follow-up queries which cannot he answered unless reference is made to previous answers, so such techniques are necessary in a complete system. Therefore, why not use them whenever they can be used, even when alternative techniques might be available?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PRAGMATICS", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The February 1991 D1 pairs test, which limited context dependency to dependency which could be resolved by examination of a single previous query (and not its answer), provides additional data on the applicability of these methods. In particular, 27 of the 38 pairs involved the disambiguation of a flight number to the flight leg of interest. It appears that four additional queries can be successfully answered by the technique we discussed above for handling the query What are the classes? The remaining 7 queries appear to be such that reference to previous answers is not helpful.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PRAGMATICS", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We describe here the five spoken language tests in which we participated. Our methodology in these tests has been to couple the speech recognition output from different recognizers to the same natural language processing system. Because the natural language component and the application module are held constant in these systems, this methodology provides us with a means of comparing the performance of speech recognlzers in a spoken language context.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SPOKEN LANGUAGE SYSTEMS", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The spoken language system used in this test consists of the Unisys PUNDIT natural language system coupled via an N-best interface to the MIT SUMMIT speech recognition system. We will refer to this system as Unisys-MIT. These results were run with N=16, except for 4 utterances which could not be run at N=16 because of insufficient memory in the speech recognition system. N=I was used for these utterances. SUMMIT produced the N-best and PUNDIT selected from the N-best the most acoustically probable utterance which also passed PUN-DIT's syntactic, semantic, and pragmatic constraints. PUNDIT then processed the selected candidate to produce the spoken language system output. The value of N of 16 was selected on the basis of experiments reported in [1] , which demonstrated that using larger N's than 10-15 leads to a situation where the chance of getting an F begins to outweigh the possible benefit of additional T's.", |
|
"cite_spans": [ |
|
{ |
|
"start": 754, |
|
"end": 757, |
|
"text": "[1]", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Class .A: UnSays PUNDIT system coupled with MIT Summit", |
|
"sec_num": null |
|
}, |
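
{

"text": "The N-best interface can be sketched as follows (hypothetical names; passes_constraints stands in for PUNDIT's syntactic, semantic, and pragmatic checks).\n\ndef select_candidate(nbest, passes_constraints, n=16):\n    # candidates arrive in order of acoustic probability; take the first\n    # one that passes the natural language constraints\n    for candidate in nbest[:n]:\n        if passes_constraints(candidate):\n            return candidate\n    # for the speech recognition scoring described later, the fallback\n    # when every candidate is rejected is the acoustically top-ranked one\n    return nbest[0]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Class A: Unisys PUNDIT system coupled with MIT Summit",

"sec_num": null

},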
|
{ |
|
"text": "The SUMMIT system is a speaker-independent continuous speech recognition system developed at the MIT Laboratory of Computer Science. It is described in [10] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 152, |
|
"end": 156, |
|
"text": "[10]", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Class .A: UnSays PUNDIT system coupled with MIT Summit", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Unlsys PUNDIT coupled with Lincoln Labs Speech Recognizer The spoken language system used in this test consists of the Unisys PUNDIT natural language system loosely coupled to the MIT Lincoln Labs speech recognition system. The Lincoln Labs system selected the top-1 output, which PUNDIT then processed to produce the spoken language system output.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Class .A: UnSays PUNDIT system coupled with MIT Summit", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The LincoLn Labs system is a speaker independent continuous speech recognition system which was trained on a corpus of 5020 training sentences from 37 speakers. It used a bigram backoff language model of perplexity 17.8. The system is described in more detail in [8] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 263, |
|
"end": 266, |
|
"text": "[8]", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Class .A: UnSays PUNDIT system coupled with MIT Summit", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Class A: Unlsys PUNDIT system coupled with BBN BY-BLOS", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Class .A: UnSays PUNDIT system coupled with MIT Summit", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this test N-best output from the BBN BYBLOS system as described in [4] was input to PUNDIT. As in the system which used the MIT N-best, we used an N of 16. The N-best from ", |
|
"cite_spans": [ |
|
{ |
|
"start": 70, |
|
"end": 73, |
|
"text": "[4]", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Class .A: UnSays PUNDIT system coupled with MIT Summit", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We also report on spoken language results on the optional class A test, using both the Unlsys-MIT system and the Unisys-BBN systems described above.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Optional Class A Tests", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The speecli recognition tests were done using the natural language constraints provided by the Unisys PUNDIT natural language system to select one candidate from the N-best output of the MIT Laboratory of Computer Science SUMMIT speech recognition system. Using an N of 16, PUNDIT selected the first candidate of the N-best which passed its natural language constraints based on syntactic, semantic and pragmatic knowledge. If all candidates were rejected by the natural language system, the first candidate in the N-best was considered to be the recognized string.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SPEECH RECOGNITION TESTS", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Unisys attempted all four of the nature,] language tests; both the required and the optional class A and class D1 tests. Our scores as released by NIST are as shown in table 1. The overall level of success is unimpressive. For the class A test, which corresponds most closely to the test last June, our performance is not much better, in spite of eight more months of work on our system. (If the scoring algorithm in effect now had been in effect in June, our score then would have been 42.2Ye) As this paper is being written, we have not had the time to examine our performance on a sentence by sentence basis. It appears likely, however, that the amount of training data has not yet adequately covered the full range of the various ways that people can formulate queries to the ATIS database.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Natural Language Common Task Evaluatlon", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We are fairly pleased that our \"false alarm\" rate has not gone up since June. It was 11% then; if we take the 196 sentences involved in the latest 4 tests as a single group, we find our rate of F's to be less than 8%. When we discuss our spoken language results in a subsequent section, we will see that although the rate of correct answers drops noticeably when a speech recognizer is added to the system, the rate of incorrect answers does not appear to increase. The importance of a low \"false alarm\" rate is well appreciated by spoken language understanding researchers; from a user's point of view nothing could be worse than an answer which is wrong although the user may have no way of telling it is wrong. It will be important to lower the rate of such errors to a level well below S%.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Natural Language Common Task Evaluatlon", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Our best performance came on the D1 pairs test. One would have expected a lower score on any test that requires two consecutive sentences to be understood than on a test of selfcontained sentences. While we wish we could claim that our work discussed in the earlier section on pragmatics was instrumental in achieving our score, it appears that much of what we added to our system did not come into play in this test. A more likely explanation of the unexpectedly high score is that when a user queries the system in a mode which utilizes follow-up queries, he or she tends to use simpler individual queries. Perhaps a user who does not use follow-up queries is trying to put more into each individual query. Some evidence for this is that our score for just the 20 distinct class A antecedent sentences for the D1 pairs test was 75%, well above our 48.3% score for all the class A sentences. Even more striking is the fact that of the 9 speakers represented in this round of tests, only two contributed more than 3 pairs to the class D1 test-speakers CK and CO contributed 13 pairs each. Our scores restricted to just those two speakers were 93~ for the class A test and 65% for the class D1 test (100% for speaker CO in the class D1 test!).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Natural Language Common Task Evaluatlon", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The optional tests clearly were too small to have much significance. It is not surprising that our system proved to be incapable at this point of dealing with extraneous words in the input queries, for we have made no efforts as yet to compensate for such inputs. These tests will be useful as a benchxnark for comparison after we have addressed such issues.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Natural Language Common Task Evaluatlon", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In the section on semantics we reported the results of two experiments that we ran to assess the effects of extensions to our system. We performed the same tests using the data of the latest class A test of 145 queries. When the extensions to our semantic interpreter were removed, our performance dropped to 72 T, 19 F, or 36.6%, a decrease of 24% from our score of 48.3%. This reinforces our belief that these extensions are very important and useful. When we ran the test without the rules relating multiple decompositions, our performance was 83 T, 14 F, or 47.6%, a decrease of less than 2%. This latter finding was most surprising-basically it implies that in the 1991 test data there were virtually no constructions of the kind which those rules enable us to process, because the absence of the rules relating the decompositions corresponding to those constructions resulted in almost no reduction in our score. In particular, there must have been no nouns modified by relative clauses (\"flights that arrive before noon\") or participial modifiers (\"flights serving dinner\"). This has some implication regarding the distribution of various forms of syntactic expression across speakers, for phenomena which were dearly significant in our training data apparently were absent from 9 speakers' worth of test data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantics Extensions and the Common Task Tests", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The above experiments imply that our system as of last June would have gotten a score of less than 35% on the cur-rent class A test, for the extensions discussed in the section on semantics were not the only improvements we have made to our system. This is another indication of variability among speakers; for our system the 5 speakers of last June's test were easier to process. It appears to us that larger test sets are necessary to make a broad evaluation of natural language understanding capabilities. (We do not extend this suggestion to tests involving speech input because of the level of effort that would consume.) We have already noted the absence of relative clauses and participial modifiers in the recent class A test. We also noticed that 23 of 145, or 16%, of the sentences used the word \"available\", usually in constructions like \"what X is available\", while this word only appeared in 4% of the pilot training data. In the class D1 test, there were few discourse phenomena represented, and we noted in an earlier section that over 70% (27 of 38) of the D1 pairs involved just the phenomenon of flight leg disambiguation. Tests of such size, then, are not broadly representative of the range of query formulations in the ATIS domain.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantics Extensions and the Common Task Tests", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Related to the last point is the suspicion that the few thousand sentences of training data are themselves too few to represent the range of user queries for this domain. We have noticed that fewer new words are appearing in the more recent sets of training data, so vocabulary closure is probably occurring. Even so, in the 145 class A queries of the recent test, our system found 12 with unknown words, or 8% of the queries. This was actually higher than the 5.5% we experienced with the test last June, but that is more a comment on the variability due to small test size. It is an open question whether more and more training data is the answer to making our systems more complete, however. After all, larger volumes of data are both expensive to collect and expensive to train from. The lack of closure for the syntactic and semantic variation in user queries presents a challenge for further research in spoken language understanding. It may we]] be that we will have to begin studying reasonable ways in which the variation in the range of user expression can be limited, without unduly contrainlng the user in the natural performance of the task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantics Extensions and the Common Task Tests", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Unisys-MIT The spoken language results for this system were 29 T, 15 F, and 101 NA, for a weighted score of 9.7%. The system examined an average of 6.5 candidates in the N-best before finding an acceptable one. Of all candidates considered by the system, we found that 85~ were rejected by the syntax component and 3% by the semantics/praKmatics component, and 11% were accepted by both components. It should be pointed out that the syntax component uses a form of compiled semantics constraints during its search for parses ( [5] ), thus the resnlts for purely syntactic rejection are not as high as appears in this comparison, because some semantic constraints are applied during parsing. After a candidate is accepted by both syntax and semantics, the search in the N-best is terminated. However, the application component, which contains a great deal of information about domain-specific pragmatics, can also reject syntactically and semantically acceptable inputs for which it cannot construct a sensible database query. In fact, a syntactically and semanticul]y acceptable candidate was found in 75% of the N-best candidate lists, but a call was The false alarm (or F) rate we observed in this test was around 10~, which is consistent with our previous spoken language results ( [1] ) and with our natural language results, as discussed above.", |
|
"cite_spans": [ |
|
{ |
|
"start": 527, |
|
"end": 530, |
|
"text": "[5]", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 1285, |
|
"end": 1288, |
|
"text": "[1]", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Spoken Language Evaluations", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Unisys-BBN This system received a score of 77 T, 20 F and 48 NA for a weighted score of 39.3%. In this system 74% of all inputs were rejected by syntax, 11% of inputs were accepted by syntax but rejected by semantics and 15~ were accepted by both syntax and semantics. The false alarm rate is 13.8~, which is slightly higher but in the same range as previous false alarm rates.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Spoken Language Evaluations", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As can be seen in Figure 2 , in general the system found an acceptable candidate earlier in the N-best with the BBN Nbest than with the MIT N-best. The average location of the selected candidate in the N-best with the BBN data was 3.8 compared to 6.5 with the MIT N-best.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 18, |
|
"end": 26, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Spoken Language Evaluations", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Unisys-LL Using the top-one candidate from the Lincoln Labs speech recognizer the spoken language results for this system were 32 T, 5 F and 108 NA for a weighted score of 18.6~. The false alarm rate for this system was only 3.4~, which is lower than that for the other spoken language and natural language systems on which we report in this paper. There is no obvious explanation for this. The simple hypothesis of better speech recognition in the Unisys-LL system will not suffice, because the BBN system has better speech recognition but the false alarm rate is higher than the Unisys-LL rate. In addition, the Unisys system's performance on the NL test tells us how the system would do given perfect speech recognition, and the false alarm rate there is around 8~. One possible hypothesis is that the bigram language model used in the Lincoln Labs system is in some sense more conservative than the language models used in the BBN and MIT systems and consequently prevents some of the inputs which might have led to an F in the natural language system from being recognized well enough for the natural language system to generate an F.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Spoken Language Evaluations", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this system, based on one input per utterance, we found that 59% of the inputs failed to receive a syntactic analysis (including compiled semantics, as discussed above) and 2% failed to receive a semantic analysis. No database call could be generated for 13% of the inputs and a call was made for the remaining 25~ of the inputs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Spoken Language Evaluations", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Evaluation of the Natural Language System In [1] we reported on a technique for evaluation of the natural language component of our spoken language system, based on the question of how often did the natural language system do the right thing. If the reference answer for an utterance is found in the N-best, the right thing for the natural language system is to find the reference answer (or a semantic equivalent) in the Nbest and give the right answer. The operational definition of doing the right thing, then, is for the system to receive a \"T\" on such inputs. On the other hand if the reference answer is not in the N-best the right thing for the system to do is to either find a semantic equivalent to the reference answer or to reject all inputs. Thus, doing the right thing in the case of no reference answer can be operationally defined as \"T\" + \"NA'.", |
|
"cite_spans": [ |
|
{ |
|
"start": 45, |
|
"end": 48, |
|
"text": "[1]", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Spoken Language Evaluations", |
|
"sec_num": null |
|
}, |
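
{

"text": "Operationally, the criterion can be sketched as follows (a hypothetical helper following the definitions just given).\n\ndef did_right_thing(reference_in_nbest, score):\n    # score is 'T' (correct), 'F' (incorrect), or 'NA' (no answer)\n    if reference_in_nbest:\n        return score == 'T'  # should find the reference or an equivalent\n    return score in ('T', 'NA')  # equivalent answer, or reject everything",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Spoken Language Evaluations",

"sec_num": null

},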
|
{ |
|
"text": "Reference in N-best not in N-best Pundit right 54% 90% Pundit wrong 45% 10%", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reference", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Overall 84% 16% Several interesting comparisons can be made based on tables 2 and 3. To begin with, it seems clear that the BBN N-best is better than the MIT N-best based on three quite distinct measures -first of all the speech recognition score is better (16.1% word error rate for BBN vs. 43.6~ word error rate for MIT), secondly, the spoken language score (with the natural language system held constant) for Unisys-BBN is better than Unlsys-MIT (39.3% for Unisys-BBN vs. 9.7% for Unlsys-MIT) and thirdly, the reference answer occurred in MIT's top 16 candidates only 15~ of the time vs. 65% of the time for the BBN N-best. Thus this experiment allows us to ask the question of what effect does better speech recognition have on the interaction between speech recognition and natural language?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reference", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In the case where the reference answer is in the N-best, PUNDIT does much better with the BBN N-best. Since less search in the N-best is required with BBN data the reference answer or equivalent is likely to be found sooner, and consequently there will be fewer chances for PUNDIT to find a syntactically and semantically acceptable sentence in the N-best which differs crucially from what was uttered. On the other hand, PUNDIT actually does better with the poorer speech recogniser output from MIT when the reference answer is not in the N-best. We suspect that the poorer speech recognizer output is in some sense easier to reject; that is, it is more likely to seriously violate the syntactic and semantic constraints of English. If this is so then it is possible that a relatively accepting natural language system might work wen with worse speech recognition outputs (because even a relatively accepting natural language system can reject very had inputs), but with better speech recognizer output one might get good performance with a stricter natural language system. We plan to test this hypothesis in future research.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reference", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "It is natural to ask why we should care about what to do with poorer speech recognizer output; one would tlllnlc that we should use the best recognizer output possible. The answer is that many potential applications have requirements such as large vocabulary size which are somewhat at odds with high accuracy, consequently the best recognizer output available may nevertheless be relatively inaccurate. Thus it is important to have speech/natu.~al language integration strategies which allow us to fine tune the interaction to compensate for less accurate speech recognition.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reference", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Optional Class A We used both the Unisys-MIT system and the UUIsys-BBN system for this test. For both speech recognizers in this test of eleven utterances with verbal deletions we received two T's and sero F's for a weighted score of 18.2%. There is too little test data in this condition to draw reliable conclusions from the results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reference", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We believe coupling of a single natural language system with multiple speech recognition systems has the potential for being a very useful technique for comparing speech recognizers in a spoken language context. Of course speech recognizers can he compared on the basis of word and sentence accuracy, but we do not know how direct the mapping is between these measures of performance and spoken language performance. The most direct comparision for spoken language evaluation, then, is to define an experimental condition in which the systems to be compared differ only in the speech recognition component. Not only is this strategy useful for comparing system level measurements of performance of speech recognizers, hut it is also useful for more fine grained analyses of the interaction between the speech recognition component and the natural language system. Figure 3 shows the distribution of T's, F's and NA's for specific queries across the three systems.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 864, |
|
"end": 872, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparison of Spoken Language Systems", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Note that for 52 queries, or 36~ of the total, the systems received the same score, although in no case did all three sys-tents receive an \"F'. The largest difference among the three systems was in the number of cases where Unisys-BBN received a \"T\" but the other two systems received an \"NA'. This occurred for 31 queries.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison of Spoken Language Systems", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Another interesting comparison is to look at the cases where Unisys-MIT and Unisys-BBN issued a call based on the ffi~st candidate in the N-best, since this corresponds to the one-best interface used in Unisys-LL. In Unisys-MIT twenty-seven calls were issued based on the first candidate, out of a total of 45 cans. Of the calls issued on the first candidate, 15 received a score of T and 12 received a score of F, for a weighted score of 2%. In Unisys-BBN the first candidate was selected from the N-best 70% of the time. 26 of these candidates resulted in scores of \"F\" and 42 resulted in a \"T\" for a weighted score of 11%.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison of Spoken Language Systems", |
|
"sec_num": null |
|
}, |
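
{

"text": "These first-candidate figures are consistent with the (T - F) / total weighted metric sketched earlier, taking the 145 test utterances as the total (a small check under that assumption).\n\nassert round(100 * (15 - 12) / 145) == 2   # Unisys-MIT, first candidate\nassert round(100 * (42 - 26) / 145) == 11  # Unisys-BBN, first candidate",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Comparison of Spoken Language Systems",

"sec_num": null

},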
|
{ |
|
"text": "Overall, the number of calls made was quite similar for the Unisys-LL and Unisys-MIT systems (25% of utterances for Unisys-LL and 30% for Unisys-MIT), but it was much higher for Unisys-BBN (67%). In all three systems most of the inputs were rejected by the syntax component (59% of all inputs for Unisys-LL, 74% of all inputs for Unisys-BBN and 85% of all inputs for Unisys-MIT). We can compare this to a baseline syntactic falluxe of 14% of inputs on the Unisys natural language test. (Note that since multiple inputs per utterance are possible with the N-best systems, the N-best vs. one-best systems are not strictly comparable.)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison of Spoken Language Systems", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Using speech recognition data from MIT, we submitted resuits for the Class A, Class D1, Class AO and Class D10 speech recognition tests, shown in tables 4, 5, 6, and 7.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Speech Recognition Evaluatlons", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As expected, we observed a higher error rate for the optional tests, which contained verbal deletions, and we also observed a wide range of performance across speakers. The comparison of D1 pales and Class A speech recognition showed poorer word recognition in the D1 pairs than in the Class A test. An average 45.8% word error rate was observed for the Class A utterances compared to a 54.6% error rate for the D1 utterances. As tables 4 and 6 show, this was fairly consistent across speakers, except for speaker CJ. There are at least two hypotheses which may explain this higher error in context dependent spontaneous utterances. One hypothesis suggests that the higher error rate may be due in part to the presence of prosodic phenomena common in dialog such as destressing of \"old\" information. Because the specific dialog context affects the pronunciation of words corresponding to old and new information, the training data used so far may not provide a complete sample of how words are pronounced in a wide range of dialog contexts, consequently leading to poorer word recognition. Another hypothesis is based on the fact that the context-dependent sentences contain many references to flight numbers. Flight numbers may be difficult to recognize hecause there is very little opportunity for syntactic or semantic information to constrain which number was uttered. and speech recognition. Our weighted score for the Class A natural language test was 48.3%, for the D1 pairs, 63.2%, for the Class AO test, 9.1% and for the DO test, -50%. We presented five benchmark tests of spoken language systems, Unisys-MIT on Class A, which received a weighted score of 9.7%, Unisys-MIT on Class AO, which received a weighted score of 18.2~, Unlsys-LL on Class A, which received a weighted score of 18.6%, Unisys-BBN on Class A, which received a weighted score of 39.6%, and Unlsys-BBN on Class AO, which received a weighted score of 18.2%. Fined]y, we presented speech recognition results using the Unisys natural language system as a filter on the N-best output of the MIT SUMMIT system. The semantics enhancements to the natural language system are motivating us to revisit the tightly integrated archltecture of semantics/pragmatics processing in our system, because with these enhancements, semantic information regarding a discourse entity can become available to the processing at a much later point than previously. Thus, pragmatic processing must be invoked at a later point to ensure that ed] relevant semantic information has been exploited.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Speech Recognition Evaluatlons", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The spoken language results are especially interesting, because we are now beginning to be able to look at the interactions of the natural language system with different speech recognisers,, and see how to tune the natural language system to make the best use of the information available from the various speech recognlsers. We believe that it is important to make these kinds of comparisons and we are planning to work with at least one other speech recognition system using the N-best interface. We also plan to begin exploring more tightly coupled systems using the stack decoder architecture ([9]).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CONCLUSIONS", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Training and evaluation of spoken language understanding system", |
|
"authors": [ |
|
{ |
|
"first": "Deborah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Dald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lynette", |
|
"middle": [], |
|
"last": "Hirschman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lewis", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Norton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcia", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Linebarger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Magerman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Catherine", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Ball", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Peoceedings of the DARPA Speech and Language Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Deborah A. Dald, Lynette Hirschman, Lewis M. Norton, Marcia C. Linebarger, David Magerman, and Cather- ine N. Ball. Training and evaluation of spoken language understanding system. In Peoceedings of the DARPA Speech and Language Workshop, Hidden Valley, PA, June 1990.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "The ATIS spoken language systems pilot corpus", |
|
"authors": [ |
|
{ |
|
"first": "Charles", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Hemphill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Godfrey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "George", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Doddington", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Proceedings of the DARPA Speech and Language Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Charles T. Hemphill, John J. Godfrey, and George R. Doddington. The ATIS spoken language systems pilot cor- pus. In Proceedings of the DARPA Speech and Language Workshop, Hidden Valley, PA, June 1990.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "The PUNDIT natural-language processing system", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Hirschman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Dowding", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Dahl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Linebarger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Passonneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F.-M", |
|
"middle": [], |
|
"last": "Lang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Ball", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Weir", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "AI Systems in Government Conf. Computer Society of the IEEE", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "L. Hirschman, M. PaLmer, J. Dowding, D. Dahl, M. Linebarger, R. Passonneau, F.-M. Lang, C. Ball, and C. Weir. The PUNDIT natural-language processing sys- tem. In AI Systems in Government Conf. Computer So- ciety of the IEEE, March 1989.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "BYBLOS speech recognition benclmmrk results", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Kubala", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Austin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Barry", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Makhoul", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Placeway", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Schwaxts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Proceedings of the Darpa Speech and Natural Language Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "F. Kubala, S. Austin, C. Barry, J. Makhoul, P. Placeway, and R. Schwaxts. BYBLOS speech recognition benclmmrk results. In Proceedings of the Darpa Speech and Natural Language Workshop, Asilomar, CA, February 1991.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Improved portability and parsing through interactive acquisition of semantic information", |
|
"authors": [ |
|
{ |
|
"first": "F.-M", |
|
"middle": [], |
|
"last": "Lang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Hirschman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1988, |
|
"venue": "Proc. of the Second Conf. on Applied Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "F.-M. Lang and L. Hirschman. Improved portability and parsing through interactive acquisition of semantic infor- mation. In Proc. of the Second Conf. on Applied Natural Language Processing, Austin, TX, February 1988.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Management and evaluation of interactive dialog in the air travel domain", |
|
"authors": [ |
|
{ |
|
"first": "Lewis", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Norton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deborah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Dahl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Donald", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Mckay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lynette", |
|
"middle": [], |
|
"last": "Hirsclunan", |
|
"suffix": "" |
|
}, |
|
{

"first": "Marcia",

"middle": [

"C"

],

"last": "Linebarger",

"suffix": ""

},

{

"first": "David",

"middle": [],

"last": "Magerman",

"suffix": ""

},

{

"first": "Catherine",

"middle": [

"N"

],

"last": "Ball",

"suffix": ""

}
|
], |
|
"year": 1990, |
|
"venue": "Proceedings of the DARPA Speech and Language Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lewis M. Norton, Deborah A. Dahl, Donald P. McKay, Lynette Hirsclunan, Marcla C. Linebarger, David MaKer- man, and. Catherine N. Ball. Management and evaluation of interactive dialog in the air travel domain. In Pro- ceedings of the DARPA Speech and Language Workshop, Hidden Valley, PA, 3une 1990.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Semantic Processing for\" Finite Domains", |
|
"authors": [ |
|
{ |
|
"first": "Ma~tha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ma~tha Palmer. Semantic Processing for\" Finite Domains. Cambridge University Press, Cambridge, England, 1990.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "New results with the Lincoln tied-mixture HMM CSI~ system", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Paul", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Proceedings of the DARPA Speech and Natural Language WorI~shop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. B. Paul. New results with the Lincoln tied-mixture HMM CSI~ system. In Proceedings of the DARPA Speech and Natural Language WorI~shop, February 1991.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "A CS11-NL interface specification", |
|
"authors": [ |
|
{ |
|
"first": "Douglas", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Paul", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "Proceedinga of the DARPA Speech and Natural Language Wort~shop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Douglas B. Paul. A CS11-NL interface specification. In Proceedinga of the DARPA Speech and Natural Language Wort~shop, 1989.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Development and pre]hninv~y evaluation of the MIT ATIS system", |
|
"authors": [ |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Zue", |
|
"suffix": "" |
|
}, |
|
{

"first": "J",

"middle": [],

"last": "Glass",

"suffix": ""

},

{

"first": "D",

"middle": [],

"last": "Goddeau",

"suffix": ""

},

{

"first": "Dave",

"middle": [],

"last": "Goodine",

"suffix": ""

},

{

"first": "Lynette",

"middle": [],

"last": "Hirschman",

"suffix": ""

},

{

"first": "H",

"middle": [],

"last": "Leung",

"suffix": ""

},

{

"first": "M",

"middle": [],

"last": "Phillips",

"suffix": ""

},

{

"first": "J",

"middle": [],

"last": "Polifroni",

"suffix": ""

},

{

"first": "S",

"middle": [],

"last": "Seneff",

"suffix": ""

}
|
], |
|
"year": null, |
|
"venue": "Proceedings of the DARPA Speech and Natural Language Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "V. Zue, 3. Glass, D. Goddeau, Dave Goodine, Lynette Hixsclunan, H. Leung, M. Phillips, 3. Polifxonl, and S. Seneff. Development and pre]hninv~y evaluation of the MIT ATIS system. In Proceedings of the DARPA Speech and Natural Language Workshop, February 3991.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "\" :.,~:::::::.~::~:::::::~::::::::~,,-::::~,.:: l~.'.-'.;,-',.~i.~~.,.~,.,,...,...,,~,~,.,.i-.,..-,,~:~il", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "fiight_\u00a2 (~iightl, source (boston) .... ) while \"flights leaving Boston\" resulted in: ~light C (flight 1, source (_), ...) loavoP (lsavel, flight (flight 1), source (boston), ...)", |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Location of Selected candidate I (all rejected) i 1~ MiT N-beSt ~ BBN N-best ] Comparison of Location of Accepted Candidate in N-best (N=16) for Unlsys-MIT and Unisys-BBN Systems made for only 30% of inputs. The application component was not able to rnal~e a sensible call for the remaining inputs.", |
|
"num": null |
|
}, |
|
"TABREF0": { |
|
"content": "<table><tr><td>Class</td><td>Number</td><td>T</td><td>F</td><td>Score</td></tr><tr><td>Class A</td><td>145 queries</td><td>84</td><td colspan=\"2\">14 48.3~</td></tr><tr><td>Class D1</td><td>38 pairs</td><td>24</td><td colspan=\"2\">0 63.2~</td></tr><tr><td>Class AO</td><td>11 queries</td><td>1</td><td>0</td><td>9.1%</td></tr><tr><td>Class D10</td><td>2 pairs</td><td>0</td><td>1</td><td>-50%</td></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"text": "Unisys System Scores BBN was the output from BYBLOS rescored using cross-word models and a 4-gram model and then reordered before input to the natural language system.", |
|
"type_str": "table" |
|
}, |
|
"TABREF1": { |
|
"content": "<table><tr><td colspan=\"3\">PUNDIT's performance</td><td colspan=\"2\">on Class A (145</td></tr><tr><td>queries), depending</td><td colspan=\"4\">on whether or not reference</td></tr><tr><td colspan=\"5\">query occurred in N-best (N=16) from MIT-LCS</td></tr><tr><td>SPREC.</td><td/><td/><td/><td/></tr><tr><td colspan=\"2\">Reference</td><td colspan=\"2\">Reference</td><td/></tr><tr><td colspan=\"2\">in N-best</td><td colspan=\"2\">not in N-best</td><td>Overall</td></tr><tr><td>Pundit right</td><td>69~</td><td/><td>81~</td><td>73~</td></tr><tr><td>Pundit wrong</td><td>31%</td><td>]</td><td>19%</td><td>27~</td></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"text": "", |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"content": "<table><tr><td colspan=\"3\">: PUNDIT's performance</td><td colspan=\"2\">on Class A (145</td></tr><tr><td>queries),</td><td>depending</td><td colspan=\"2\">on whether</td><td>or not refer-</td></tr><tr><td colspan=\"5\">ence query occurred in N-best (N=16) from BBN</td></tr><tr><td>SPREC.</td><td/><td/><td/></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"text": "", |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"content": "<table><tr><td colspan=\"2\">Speaker [Co,,</td><td colspan=\"4\">Sub [ Del I Ins I E~</td><td>S. Err I</td></tr><tr><td>CE</td><td>56.0</td><td>33.3</td><td colspan=\"2\">10.6] 5.1</td><td>49.1</td><td>95.0</td></tr><tr><td>CH</td><td>47.4</td><td>44.7</td><td colspan=\"2\">7.9 23.7</td><td>76.3</td><td>100.0</td></tr><tr><td>CI</td><td>45.6</td><td>46,8</td><td>7,6</td><td>24,0</td><td>78.4</td><td>100.0</td></tr><tr><td>CJ</td><td>75.8</td><td>18.3</td><td>5.9</td><td>3.1</td><td>27.3</td><td>84.6</td></tr><tr><td>CK</td><td>56.9</td><td colspan=\"2\">29.4 13.7</td><td>2.0</td><td>45.1</td><td>91.7</td></tr><tr><td>CL</td><td>75.0</td><td>22.5</td><td>2.5</td><td>9,7</td><td>34.7</td><td>84.6</td></tr><tr><td>CM</td><td>31.1</td><td>62.1</td><td>6.8</td><td>15.9</td><td>84.8</td><td>100.0</td></tr><tr><td>CO</td><td>74.1</td><td>19.1</td><td>6.8</td><td>8.6</td><td>34.6</td><td>100.0</td></tr><tr><td>CP</td><td>! 71.7</td><td>26.5</td><td>1.8</td><td>7.5</td><td>35.8</td><td>88.9</td></tr><tr><td colspan=\"6\">I Average I 63.5 30.3 I 6.2 I 9.3 ] 45.8</td><td>91.2 I</td></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"text": "In this paper we presented benchmark test results on natural language understanding, spoken language understanding", |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"content": "<table><tr><td colspan=\"7\">System Summary Percentages by Speaker</td></tr><tr><td>for Class A</td><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"7\">]Speaker I Corr I Sub ]Del I Ins ] Err ]S. Err</td></tr><tr><td>CE</td><td>59.5</td><td>40.5</td><td>0.0</td><td>16.2</td><td>56.8</td><td>100.0</td></tr><tr><td>cI</td><td>21.9</td><td>71.9</td><td>6.2</td><td>21.9</td><td>lOO.O</td><td>lOO.O</td></tr><tr><td>CJ</td><td>72.6</td><td>17.7</td><td>9.7</td><td>1.6</td><td>29.0</td><td>100.0</td></tr><tr><td>CM</td><td colspan=\"2\">42.7 ! 50.7</td><td>6.7</td><td>21.3</td><td>78.7</td><td>100.0</td></tr><tr><td colspan=\"3\">[Average I 51.5 [42:2</td><td colspan=\"4\">6.3 I 14.6 [ 63.1 I 100.0</td></tr><tr><td colspan=\"7\">Table 5: System Summary Percentages by Speaker</td></tr><tr><td colspan=\"2\">for Class .4,0</td><td/><td/><td/><td/><td/></tr><tr><td colspan=\"2\">Speaker ] Corr</td><td colspan=\"3\">Sub I De] I Ins</td><td>Err</td><td>S. Err [</td></tr><tr><td>CH</td><td>40.0</td><td>53.3</td><td>6.7</td><td colspan=\"2\">13.3 73.3</td><td>100.0</td></tr><tr><td>CI</td><td>24.2</td><td colspan=\"4\">46.3 29.5 15.8 91.6</td><td>100.0</td></tr><tr><td>CJ</td><td>83.9</td><td>10.7</td><td>5.4</td><td>3.6</td><td>19.6</td><td>50.0</td></tr><tr><td>CK</td><td>48.0</td><td colspan=\"2\">39.9 12.2</td><td>2.7</td><td>54.7</td><td>100.0</td></tr><tr><td>CL</td><td>66.7</td><td>31.7</td><td>1.7</td><td>8.3</td><td>41.7</td><td>83.3</td></tr><tr><td>CO</td><td>54.7</td><td colspan=\"2\">27.3 18.0</td><td>3.1</td><td>48.4</td><td>94.4</td></tr><tr><td>CP</td><td>80.0</td><td>20.0</td><td>0.0</td><td>8.0</td><td>28.0</td><td>100.0</td></tr><tr><td colspan=\"2\">Average [ 52.0</td><td colspan=\"3\">34.1 I 13.9 [ 6.6</td><td>54.6</td><td>91.4</td></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"text": "", |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"content": "<table><tr><td/><td colspan=\"10\">System Summary Percentages by Speaker</td></tr><tr><td colspan=\"2\">for Class D1</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"9\">Speaker [ Cot, I Sub ] Del [ Ins</td><td>Err</td><td>S. Err</td></tr><tr><td>CM</td><td>{</td><td>34.8</td><td>{</td><td>65.2</td><td>[</td><td>0.0</td><td>[</td><td colspan=\"2\">26.1 91.3</td><td>100.0</td></tr><tr><td colspan=\"10\">Average [ 59.6 [ 40.4 [ 0.0 [ 15.8 56.1</td><td>100.0</td></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"text": "", |
|
"type_str": "table" |
|
}, |
|
"TABREF6": { |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null, |
|
"text": "System Summary Percentages by Speaker for Class D10", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |