{
"paper_id": "H93-1008",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:30:41.754215Z"
},
"title": "Gemini: A Natural Language System for Spoken-Language Understanding*",
"authors": [
{
"first": "John",
"middle": [],
"last": "Dowding",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "SRI International",
"location": {
"addrLine": "333 Ravenswood Avenue",
"postCode": "94025",
"settlement": "Menlo Park",
"region": "CA"
}
},
"email": ""
},
{
"first": "Jean",
"middle": [
"Mark"
],
"last": "Gawron",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "SRI International",
"location": {
"addrLine": "333 Ravenswood Avenue",
"postCode": "94025",
"settlement": "Menlo Park",
"region": "CA"
}
},
"email": ""
},
{
"first": "Doug",
"middle": [],
"last": "Appelt",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "SRI International",
"location": {
"addrLine": "333 Ravenswood Avenue",
"postCode": "94025",
"settlement": "Menlo Park",
"region": "CA"
}
},
"email": ""
},
{
"first": "John",
"middle": [],
"last": "Bear",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "SRI International",
"location": {
"addrLine": "333 Ravenswood Avenue",
"postCode": "94025",
"settlement": "Menlo Park",
"region": "CA"
}
},
"email": ""
},
{
"first": "Lynn",
"middle": [],
"last": "Cherny",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "SRI International",
"location": {
"addrLine": "333 Ravenswood Avenue",
"postCode": "94025",
"settlement": "Menlo Park",
"region": "CA"
}
},
"email": ""
},
{
"first": "Robert",
"middle": [],
"last": "Moore",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "SRI International",
"location": {
"addrLine": "333 Ravenswood Avenue",
"postCode": "94025",
"settlement": "Menlo Park",
"region": "CA"
}
},
"email": ""
},
{
"first": "Doug",
"middle": [],
"last": "Moran",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "SRI International",
"location": {
"addrLine": "333 Ravenswood Avenue",
"postCode": "94025",
"settlement": "Menlo Park",
"region": "CA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "",
"pdf_parse": {
"paper_id": "H93-1008",
"_pdf_hash": "",
"abstract": [],
"body_text": [
{
"text": "Gemini is a natural language understanding system developed for spoken language applications. This paper describes the system in detail, including measurements of the size, efficiency, and performance of each of its sub-components.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "The demands on a natural language understanding system used for spoken language differ somewhat from those of text processing. For processing spoken language, there is a tension between making the system as robust as necessary and as constrained as possible. A robust system will attempt to find as sensible an interpretation as possible, even in the presence of performance errors by the speaker or recognition errors by the speech recognizer. In contrast, in order to provide language constraints to a speech recognizer, a system should be able to detect that a recognized string is not a sentence of English, and disprefer that recognition hypothesis. If the coupling is to be tight, with parsing and recognition interleaved, then the parser should be able to enforce as many constraints as possible for partial utterances. The approach taken in Gemini is to tightly constrain language recognition to limit overgeneration, but to extend the language analysis to recognize certain characteristic patterns of spoken utterances (not generally thought of as part of grammar) and to recognize specific types of performance errors by the speaker.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "Processing starts in Gemini when syntactic, semantic, and lexical rules are applied by a bottom-up all-paths constituent parser to populate a chart with edges containing syntactic, semantic, and logical form information. Then, a second utterance parser is used to apply a second set of syntactic and semantic rules that are required to span the entire utterance. If no semantically acceptable utterance-spanning edges are found during this phase, a component to recognize and correct certain grammatical disfluencies is applied. When an acceptable interpretation is found, a set of parse preferences is used to choose a single best interpretation from the chart for subsequent processing. Quantifier scoping rules are applied to this best interpretation to produce the final logical form, which is then used as input to a query answering system. The following sections describe each of these components in detail, with the exception of the query answering subsystem, which will not be described in this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "Since this paper describes a component-by-component view of Gemini, we will provide detailed statistics on the size, speed, coverage, and accuracy of the various components. These numbers detail our performance on the subdomain of air-travel planning that is currently being used by the DARPA spoken language understanding community [13] . Gemini was trained on a 5875 utterance dataset from this domain, with another 688 utterances used as a blind test (not explicitly trained on, but run multiple times) to monitor our performance on data that we didn't train on. We will also report here our results on another 756 utterance fair test set that we ran only once. Table 1 contains a summary of the coverage of the various components on both the training and fair test sets. More detailed explanations of these numbers are given in the relevant sections. Office of Naval Research. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the Advanced Research Projects Agency of the U.S. Government. SYSTEM DESCRIPTION Gemini maintains a firm separation between the language- and domain-specific portions of the system, and the underlying infrastructure and execution strategies. The Gemini kernel consists of a set of compilers to interpret the high-level languages in which the lexicon and syntactic and semantic grammar rules are written, as well as the parser, semantic interpretation, quantifier scoping, and repair correction mechanisms, and all other aspects of Gemini that are not specific to a language or domain. Although this paper describes the lexicon, grammar, and semantics of English, Gemini has also been used in a Japanese spoken language understanding system [10] . We list some ways in which Gemini differs from other unification formalisms. Since many of the most interesting issues regarding the formalism concern typing, we defer discussing motivation until section 2.5.",
"cite_spans": [
{
"start": 333,
"end": 337,
"text": "[13]",
"ref_id": "BIBREF12"
},
{
"start": 1829,
"end": 1833,
"text": "[10]",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 671,
"end": 678,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "1. Gemini uses typed-unification. Each category has a set of features declared for it. Each feature has a declared value-space of possible values (value spaces may be shared by different features). Feature structures in Gemini can be recursive, but only by having categories in their value-space, so typing is also recursive. Typed feature-structures are also used in HPSG [19] . One important difference is that Gemini has no type-inheritance.",
"cite_spans": [
{
"start": 372,
"end": 376,
"text": "[19]",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "2. Some approaches do not assume a syntactic skeleton of category-introducing rules (for example, Functional Unification Grammar [11] ). Some make such rules implicit (for example, the various categorial unification approaches, such as Unification Categorial Grammar [24] ).",
"cite_spans": [
{
"start": 129,
"end": 133,
"text": "[11]",
"ref_id": "BIBREF10"
},
{
"start": 267,
"end": 271,
"text": "[24]",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "3. Even when a syntactic skeleton is assumed, some approaches do not distinguish the category of a constituent (np, vp, etc.) from its other features (for example, pers_num, gapsin, gapsout). Thus, for example, in one version of GPSG, categories were simply feature bundles (attribute-value structures), and there was a feature MAJ taking values like N, V, A, and P that determined the major category of the constituent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "Gemini does not allow rules schematizing over syntactic categories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.",
"sec_num": null
},
{
"text": "The Gemini lexicon uses the same category notation as the Gemini syntactic rules. Lexical categories are types as well, with sets of features defined for them. The lexical component of Gemini includes the lexicon of base forms, lexical templates, morphological rules, and the lexical type and feature default specifications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicon",
"sec_num": "2.2."
},
{
"text": "The Gemini lexicon used for the air-travel planning domain contains 1,315 base entries. These expand by morphological rules to 2,019. In the 5875 utterance training set, 52 sentences contained unknown words (0.9%), compared to 31 sentences in the 756 utterance fair test (4.1%).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicon",
"sec_num": "2.2."
},
{
"text": "A simplified example of a syntactic rule is: Here the semantics of the mother s is just the semantics of the daughter s with the illocutionary force marker whq wrapped around it. Also, the semantics of the s gap's np's gapsem has been unified with the semantics of the wh-phrase. Through a succession of unifications, this will end up assigning the wh-phrase's semantics to the gap position in the argument structure of the s. Although each semantic rule must be keyed to a pre-existing syntactic rule, there is no assumption of rule-to-rule uniqueness. Any number of semantic rules may be written for a single syntactic rule. We discuss some further details of the semantics in section .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constituent Grammar",
"sec_num": "2.3."
},
{
"text": "The constituent grammar used in Gemini contains 243 syntactic rules, and 315 semantic rules. Syntactic coverage on the 5875 utterance training set was 94.2%, and on the 756 utterance test set was 90.9%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constituent Grammar",
"sec_num": "2.3."
},
{
"text": "Since Gemini was designed with spoken language interpretation in mind, key aspects of the Gemini parser are motivated by the increased needs for robustness and efficiency that characterize spoken language. Gemini uses essentially a pure bottom-up chart parser, with some limited left-context constraints applied to control creation of categories containing syntactic gaps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parser",
"sec_num": "2.4."
},
{
"text": "Some key properties of the parser are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parser",
"sec_num": "2.4."
},
{
"text": "\u2022 The parser is all-paths bottom-up, so that all possible edges admissible by the grammar are found.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parser",
"sec_num": "2.4."
},
{
"text": "\u2022 The parser uses subsumption checking to reduce the size of the chart. Essentially, an edge is not added to the chart if it is less general than a pre-existing edge, and pre-existing edges are removed from the chart if the new edge is more general.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parser",
"sec_num": "2.4."
},
{
"text": "\u2022 The parser is on-line [7] , essentially meaning that all edges that end at position i are constructed before any that end at position i + 1. This feature is particularly desirable if the final architecture of the speech-understanding system couples Gemini tightly with the speech recognizer, since it guarantees for any partial recognition input that all possible constituents will be built.",
"cite_spans": [
{
"start": 24,
"end": 27,
"text": "[7]",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parser",
"sec_num": "2.4."
},
{
"text": "An important feature of the parser is the mechanism used to constrain the construction of categories containing syntactic gaps. In earlier work [17] , we showed that approximately 80% of the edges built in an all-paths bottom-up parser contained gaps, and that it is possible to use prediction in a bottom-up parser only to constrain the gap categories, without requiring prediction for nongapped categories. This limited form of left context constraint greatly reduces the total number of edges built for a very low overhead. In the 5875 utterance training set, the chart for the average sentence contained 313 edges, but only 23 predictions.",
"cite_spans": [
{
"start": 144,
"end": 148,
"text": "[17]",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parser",
"sec_num": "2.4."
},
{
"text": "The main advantage of typed-unification is for grammar development. The type information on features allows the lexicon, grammar, and semantics compilers to provide detailed error analysis regarding the flow of values through the grammar, and warn if features are assigned improper values, or variables of incompatible types are unified. Since the type-analysis is performed statically at compile-time, there is no run-time overhead associated with adding types to the grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Typing",
"sec_num": "2.5."
},
{
"text": "Syntactic categories play a special role in the typing scheme of Gemini. For each syntactic category, Gemini makes a set of declarations stipulating its allowable features and the relevant value spaces. Thus, the distinction between the syntactic category of a constituent and its other features can be cashed out as follows: the syntactic category can be thought of as the feature-structure type. The only other types needed by Gemini are the value-spaces used by features. Thus for example, the type v (verb) admits a feature v:form, whose value-space vform-types can be instantiated with values like present participle, finite, and past participle. Since all recursive features are category-valued, these two kinds of types suffice. Sorts are located in a conceptual hierarchy and are implemented as Prolog terms such that more general sorts subsume more specific sorts [16] . This allows the subsumption checking and packing in the parser to share structure whenever possible. Semantic coverage when applying sortal constraints was 87.4% on the training set, and on the test set was 83.7%.",
"cite_spans": [
{
"start": 871,
"end": 875,
"text": "[16]",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Typing",
"sec_num": "2.5."
},
{
"text": "Interleaving Semantics with Parsing: In Gemini, syntactic and semantic processing is fully interleaved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interleaving Syntactic and Semantic Information",
"sec_num": "2.6."
},
{
"text": "Building an edge requires that syntactic constraints be applied, which results in a tree structure, to which semantic rules can be applied, which results in a logical form, to which sortal constraints can be applied. Table 2 contains average edge counts and parse timing statistics for the 5875 utterance training set.",
"cite_spans": [],
"ref_spans": [
{
"start": 215,
"end": 222,
"text": "Table 2",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Interleaving Syntactic and Semantic Information",
"sec_num": "2.6."
},
{
"text": "The constituent parser uses the constituent grammar to build all possible categories bottom-up, independent of location within the string. Thus, the constituent parser does not force any constituent to occur either at the beginning of the utterance, or at the end. The utterance parser is a top-down back-tracking parser that uses a different grammar called the utterance grammar to glue the constituents found during constituent parsing together to span the entire utterance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Utterance Grammar and Utterance Parser",
"sec_num": "2.7."
},
{
"text": "Many systems [4] , [9] , [20] , [22] have added robustness [Footnote 1: Gemini is implemented primarily in Quintus Prolog version 3.1.1. All timing numbers given in this paper were obtained on a lightly loaded Sun SPARCstation 2 with at least 48MB of memory. Under normal conditions, Gemini runs in under 12MB of memory.]",
"cite_spans": [
{
"start": 13,
"end": 16,
"text": "[4]",
"ref_id": "BIBREF3"
},
{
"start": 19,
"end": 22,
"text": "[9]",
"ref_id": "BIBREF8"
},
{
"start": 25,
"end": 29,
"text": "[20]",
"ref_id": "BIBREF19"
},
{
"start": 32,
"end": 36,
"text": "[22]",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Utterance Grammar and Utterance Parser",
"sec_num": "2.7."
},
{
"text": "with a similar post-processing phase. The approach taken in Gemini differs in that the utterance grammar uses the same syntactic and semantic rule formalism used by the constituent grammar. Thus the same kinds of logical forms built during constituent parsing are the output of utterance parsing, with the same sortal constraints enforced. For example, an utterance consisting of a sequence of modifier fragments (like on Tuesday at 3 o'clock on United) is interpreted as a conjoined property of a flight, because the only sort of thing in the ATIS domain that can be on Tuesday at 3 o'clock on United is a flight.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Utterance Grammar and Utterance Parser",
"sec_num": "2.7."
},
{
"text": "The utterance grammar is significantly smaller than the constituent grammar, only 37 syntactic rules and 43 semantic rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Utterance Grammar and Utterance Parser",
"sec_num": "2.7."
},
{
"text": "Grammatical disfluencies occur frequently in spontaneous spoken language. We have implemented a component to detect and correct a large sub-class of these disfluencies (called repairs, or self-corrections) in which the speaker intends that the meaning of the utterance be obtained by deleting one or more words. Often, the speaker gives clues of this intention by repeating words or adding cue words that signal the repair:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Repairs",
"sec_num": "2.8."
},
{
"text": "(1) a. How many American airline flights leave Denver on June June tenth. b. Can you give me information on all the flights from San Francisco no from Pittsburgh to San Francisco on Monday.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Repairs",
"sec_num": "2.8."
},
{
"text": "The mechanism used in Gemini to detect and correct repairs is currently applied as a fall-back mechanism if no semantically acceptable interpretation is found for the complete utterance. The mechanism finds sequences of identical or related words, possibly separated by a cue word indicating a repair, and attempts to interpret the string with the first of the sequences deleted. This approach is presented in detail in [2] .",
"cite_spans": [
{
"start": 420,
"end": 423,
"text": "[2]",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Repairs",
"sec_num": "2.8."
},
{
"text": "The repair correction mechanism helps increase the syntactic and semantic coverage of Gemini (as reported in Table 1 ), at the cost of miscorrecting some sentences that do not contain repairs. In the 5875 utterance training set, there were 178 sentences containing nontrivial repairs (see footnote 2), of which Gemini found 89 (50%). Of the sentences Gemini corrected, 81 were analyzed correctly (91%); 8 contained repairs but were corrected wrongly.",
"cite_spans": [],
"ref_spans": [
{
"start": 109,
"end": 116,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Repairs",
"sec_num": "2.8."
},
{
"text": "Footnote 2: For these results, we ignored repairs consisting of only an isolated fragment word, or sentence-initial filler words like \"yes\" and \"okay\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Repairs",
"sec_num": "2.8."
},
{
"text": "In the entire training set, Gemini only misidentified 15 sentences (0.25%) as containing repairs when they did not. Similarly, the 756 utterance test set contained 26 repairs, of which Gemini found 11 (42%). Of those 11, 8 were analyzed correctly (77%), and 3 were analyzed incorrectly. In the test set, 2 sentences were misidentified as containing repairs (0.26%).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Repairs",
"sec_num": "2.8."
},
{
"text": "The parse preference mechanism used in Gemini begins with a simple strategy to disprefer parse trees containing specific \"marked\" syntax rules. As an example of a dispreferred rule, consider: Book those three flights to Boston. This sentence has a parse on which those three is a noun phrase with a missing head (consider a continuation of the discourse Three of our clients have sufficient credit). After penalizing such dispreferred parses, the preference mechanism applies attachment heuristics based on the work by Pereira [18] .",
"cite_spans": [
{
"start": 527,
"end": 531,
"text": "[18]",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parse Preference Mechanism",
"sec_num": "2.9."
},
{
"text": "Pereira's paper shows how the heuristics of Minimal Attachment and Right Association [12] can both be implemented using a bottom-up shift-reduce parser.",
"cite_spans": [
{
"start": 85,
"end": 89,
"text": "[12]",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parse Preference Mechanism",
"sec_num": "2.9."
},
{
"text": "(2) (a) John sang a song for Mary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parse Preference Mechanism",
"sec_num": "2.9."
},
{
"text": "(b) John canceled the room Mary reserved yesterday.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parse Preference Mechanism",
"sec_num": "2.9."
},
{
"text": "Minimal Attachment selects for the tree with the fewest nodes, so in (2a), the parse which makes for Mary a complement of sang is preferred. Right Association selects for the tree which incorporates a constituent A into the rightmost possible constituent (where rightmost here means beginning the furthest to the right). Thus, in (2b) the parse in which yesterday modifies reserved is preferred.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parse Preference Mechanism",
"sec_num": "2.9."
},
{
"text": "The problem with these heuristics is that when they are formulated loosely, as in the previous paragraph, they appear to conflict. In particular, in (2a), Right Association seems to call for the parse which makes for Mary a modifier of song.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parse Preference Mechanism",
"sec_num": "2.9."
},
{
"text": "Pereira's goal is to show how a shift-reduce parser can enforce both heuristics without conflict and enforce the desired preferences for examples like (2a) and (2b). He argues that Minimal Attachment and Right Association can be enforced in the desired way by adopting the following heuristics for the oracle to resolve conflicts with:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parse Preference Mechanism",
"sec_num": "2.9."
},
{
"text": "1. Right Association: In a shift-reduce conflict, prefer shifts to reduces.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parse Preference Mechanism",
"sec_num": "2.9."
},
{
"text": "2. Minimal Attachment: In a reduce-reduce conflict, prefer longer reduces to shorter reduces.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parse Preference Mechanism",
"sec_num": "2.9."
},
{
"text": "Since these two principles never apply to the same choice, they never conflict.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parse Preference Mechanism",
"sec_num": "2.9."
},
{
"text": "In Gemini, Pereira's heuristics are enforced when extracting syntactically and semantically well-formed parse trees from the chart. In this respect, our approach differs from many other approaches to the problem of parse preferences, which make their preference decisions as parsing progresses, pruning subsequent parsing paths [5] , [8] , [14] . Applying parse preferences requires comparing two subtrees spanning the same portion of the utterance. For purposes of invoking Pereira's heuristics, the derivation of a parse can be represented as the sequence of S's (Shift) and R's (Reduce) needed to construct the parse's unlabeled bracketing. Consider, for example, the choice between two unlabeled bracketings of (2a). Questions about the exact nature of parse preferences (and thus about the empirical adequacy of Pereira's proposal) still remain open, but the mechanism sketched does provide plausible results for a number of examples.",
"cite_spans": [
{
"start": 327,
"end": 330,
"text": "[5]",
"ref_id": "BIBREF4"
},
{
"start": 333,
"end": 336,
"text": "[8]",
"ref_id": "BIBREF7"
},
{
"start": 339,
"end": 343,
"text": "[14]",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parse Preference Mechanism",
"sec_num": "2.9."
},
{
"text": "The final logical form produced by Gemini is the result of applying a set of quantifier scoping rules to the best interpretation chosen by the parse preference mechanism. The semantic rules build quasi-logical forms, which contain complete semantic predicate-argument structure, but do not specify quantifier scoping. The scoping algorithm that we use combines syntactic and semantic information with a set of quantifier scoping preference rules to rank the possible scoped logical forms consistent with the quasi-logical form selected by parse preferences. This algorithm is described in detail in [15] .",
"cite_spans": [
{
"start": 599,
"end": 603,
"text": "[15]",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Scoping",
"sec_num": "2.10."
},
{
"text": "This paper describes the approach we have taken to resolving the tension between overgeneration and robustness in a spoken language understanding system. Some aspects of Gemini are specifically oriented towards limiting overgeneration, such as the on-line property of the parser and the fully interleaved syntactic and semantic processing. Other components, such as the fragment and run-on processing provided by the utterance grammar, and the correction of recognizable grammatical repairs, increase the robustness of Gemini. We believe a robust system can still recognize and disprefer utterances containing recognition errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION",
"sec_num": "3."
},
{
"text": "We have described the current state of research in the construction of the Gemini system. Research is ongoing to improve the speed and coverage of Gemini, as well as to examine deeper integration strategies with speech recognition and the integration of prosodic information into spoken language disambiguation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION",
"sec_num": "3."
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The Core Language Engine",
"authors": [
{
"first": "H",
"middle": [],
"last": "Alshawi",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alshawi, H. (ed) (1992). The Core Language Engine, MIT Press, Cambridge.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Integrating Multiple Knowledge Sources for the Detection and Correction of Repairs in Human-Computer Dialog",
"authors": [
{
"first": "J",
"middle": [],
"last": "Bear",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Dowding",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Shriberg",
"suffix": ""
}
],
"year": 1992,
"venue": "30th Annual Meeting of the Association for Computational Linguists",
"volume": "",
"issue": "",
"pages": "56--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bear, J., Dowding, J., and Shriberg, E. (1992). \"Integrating Multiple Knowledge Sources for the Detection and Correction of Repairs in Human-Computer Dialog\", 30th Annual Meeting of the Association for Computational Linguistics, Newark, DE, pp. 56-63.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The Mental Representation of Grammatical Relations",
"authors": [
{
"first": "J",
"middle": [],
"last": "Bresnan",
"suffix": ""
}
],
"year": 1982,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bresnan, J. (ed) (1982) The Mental Representation of Grammatical Relations. MIT Press, Cambridge.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Recovery Strategies for Parsing Extragrammatical Language",
"authors": [
{
"first": "J",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Hayes",
"suffix": ""
}
],
"year": 1983,
"venue": "American Journal of Computational Linguistics",
"volume": "9",
"issue": "4",
"pages": "123--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carbonell, J. and Hayes, P. (1983). \"Recovery Strategies for Parsing Extragrammatical Language,\" American Journal of Computational Linguistics, Vol. 9, Numbers 3-4, pp. 123-146.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The Sausage Machine: A New Two-Stage Parsing Model",
"authors": [
{
"first": "L",
"middle": [],
"last": "Frazier",
"suffix": ""
},
{
"first": "J",
"middle": [
"D"
],
"last": "Fodor",
"suffix": ""
}
],
"year": 1978,
"venue": "",
"volume": "6",
"issue": "",
"pages": "291--325",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frazier, L. and Fodor, J.D. (1978). \"The Sausage Machine: A New Two-Stage Parsing Model\", Cognition, Vol. 6, pp. 291-325.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Generalized Phrase Structure Grammar",
"authors": [
{
"first": "G",
"middle": [],
"last": "Gazdar",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Pullum",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Sag",
"suffix": ""
}
],
"year": 1982,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gazdar, G., Klein, E., Pullum, G., Sag, I. (1982). Generalized Phrase Structure Grammar. Harvard University Press, Cambridge.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "An Improved Context-Free Recognizer",
"authors": [
{
"first": "S",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Harrison",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Ruzzo",
"suffix": ""
}
],
"year": 1980,
"venue": "ACM Transactions on Programming Languages and Systems",
"volume": "2",
"issue": "3",
"pages": "415--462",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graham, S., Harrison, M., Ruzzo, W. (1980). \"An Improved Context-Free Recognizer\", in ACM Transactions on Programming Languages and Systems, Vol. 2, No. 3, pp. 415-462.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Two Principles of Parse Preference",
"authors": [
{
"first": "J",
"middle": [],
"last": "Hobbs",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Bear",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of the 13th International Conference on Computational Linguistics",
"volume": "3",
"issue": "",
"pages": "162--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hobbs, J., Bear, J. (1990). \"Two Principles of Parse Preference\", in Proceedings of the 13th International Conference on Computational Linguistics, Helsinki, Vol. 3, pp. 162-167.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Robust Processing of Real-World Natural-Language Texts",
"authors": [
{
"first": "J",
"middle": [],
"last": "Hobbs",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Appelt",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Bear",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Tyson",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Magerman",
"suffix": ""
}
],
"year": 1992,
"venue": "Text Based Intelligent Systems",
"volume": "",
"issue": "",
"pages": "13--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hobbs, J., Appelt, D., Bear, J., Tyson, M., Magerman, D. (1992). \"Robust Processing of Real-World Natural-Language Texts\", in Text Based Intelligent Systems, ed. P. Jacobs, Lawrence Erlbaum Associates, Hillsdale, NJ, pp. 13-33.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The syntax and semantics of the Japanese Language Engine",
"authors": [
{
"first": "M",
"middle": [],
"last": "Kameyama",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kameyama, M. (1992). \"The syntax and semantics of the Japanese Language Engine,\" forthcoming. In Mazuka, R. and N. Nagai, Eds., Japanese Syntactic Processing. Hillsdale, NJ: Lawrence Erlbaum Associates.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Functional Grammar",
"authors": [
{
"first": "M",
"middle": [],
"last": "Kay",
"suffix": ""
}
],
"year": 1979,
"venue": "Proceedings of the 5th Annual Meeting of the Berkeley Linguistics Society",
"volume": "",
"issue": "",
"pages": "142--158",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kay, M. (1979). \"Functional Grammar\". In Proceedings of the 5th Annual Meeting of the Berkeley Linguistics Society. pp. 142-158.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Seven Principles of Surface Structure Parsing in Natural Language",
"authors": [
{
"first": "J",
"middle": [],
"last": "Kimball",
"suffix": ""
}
],
"year": 1973,
"venue": "Cognition",
"volume": "2",
"issue": "1",
"pages": "15--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kimball, J. (1973). \"Seven Principles of Surface Structure Parsing in Natural Language,\" Cognition, Vol. 2, No. 1, pp. 15-47.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Multi-site Data Collection for a Spoken Language Corpus",
"authors": [
{
"first": "",
"middle": [],
"last": "MADCOW",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the DARPA Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "MADCOW (1992). \"Multi-site Data Collection for a Spoken Language Corpus,\" Proceedings of the DARPA Speech and Natural Language Workshop, February 23-26, 1992.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A Theory of Syntactic Recognition for Natural Language",
"authors": [
{
"first": "M",
"middle": [],
"last": "Marcus",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcus, M. (1980). A Theory of Syntactic Recognition for Natural Language, MIT Press, Cambridge, Massachusetts.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Quantifier Scoping in the SRI Core Language Engine",
"authors": [
{
"first": "D",
"middle": [],
"last": "Moran",
"suffix": ""
}
],
"year": 1988,
"venue": "Proceedings of the 26th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "33--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moran, D. (1988). \"Quantifier Scoping in the SRI Core Language Engine\", Proceedings of the 26th Annual Meeting of the Association for Computational Linguistics, State University of New York at Buffalo, Buffalo, NY, pp. 33-40.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Implementing Systemic Classification by Unification",
"authors": [
{
"first": "C",
"middle": [],
"last": "Mellish",
"suffix": ""
}
],
"year": 1988,
"venue": "Computational Linguistics",
"volume": "14",
"issue": "",
"pages": "40--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mellish, C. (1988). \"Implementing Systemic Classification by Unification\". Computational Linguistics, Vol. 14, pp. 40-51.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Efficient Bottom-up Parsing",
"authors": [
{
"first": "R",
"middle": [],
"last": "Moore",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Dowding",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings of the DARPA Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "200--203",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moore, R. and J. Dowding (1991). \"Efficient Bottom-up Parsing,\" Proceedings of the DARPA Speech and Natural Language Workshop, February 19-22, 1991, pp. 200-203.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A New Characterization of Attachment Preferences",
"authors": [
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 1985,
"venue": "Natural Language Parsing",
"volume": "",
"issue": "",
"pages": "307--319",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pereira, F. (1985). \"A New Characterization of Attachment Preferences\", in Natural Language Parsing, ed. by Dowty, D., Karttunen, L., and Zwicky, A., Cambridge University Press, Cambridge, pp. 307-319.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Information-Based Syntax and Semantics",
"authors": [
{
"first": "C",
"middle": [],
"last": "Pollard",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Sag",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pollard, C. and Sag, I. (in press). Information-Based Syntax and Semantics, Vol. 2, CSLI Lecture Notes.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A Relaxation Method for Understanding Spontaneous Speech Utterances",
"authors": [
{
"first": "S",
"middle": [],
"last": "Seneff",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "299--304",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seneff, S. (1992). \"A Relaxation Method for Understanding Spontaneous Speech Utterances\", in Proceedings of the Speech and Natural Language Workshop, Harriman, NY, pp. 299-304.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The Formalism and Implementation of PATR-II",
"authors": [
{
"first": "S",
"middle": [],
"last": "Shieber",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Robinson",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Tyson",
"suffix": ""
}
],
"year": 1983,
"venue": "Research on Interactive Acquisition and Use of Knowledge, SRI International",
"volume": "",
"issue": "",
"pages": "39--79",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shieber, S., Uszkoreit, H., Pereira, F., Robinson, J., and Tyson, M. (1983). \"The Formalism and Implementation of PATR-II\", in Grosz, B. and Stickel, M. (eds.), Research on Interactive Acquisition and Use of Knowledge, SRI International, pp. 39-79.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Fragment Processing in the DELPHI System",
"authors": [
{
"first": "D",
"middle": [],
"last": "Stallard",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Bobrow",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "305--310",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stallard, D. and Bobrow, R. (1992). \"Fragment Processing in the DELPHI System\", in Proceedings of the Speech and Natural Language Workshop, Harriman, NY, pp. 305-310.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Categorial Unification Grammars",
"authors": [
{
"first": "H",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 1986,
"venue": "Proceedings of the 11th International Conference on Computational Linguistics and the 24th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Uszkoreit, H. (1986). \"Categorial Unification Grammars\". In Proceedings of the 11th International Conference on Computational Linguistics and the 24th Annual Meeting of the Association for Computational Linguistics, Institut für Kommunikationsforschung und Phonetik, Bonn University.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "An Introduction to Unification Categorial Grammar",
"authors": [
{
"first": "H",
"middle": [],
"last": "Zeevat",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Calder",
"suffix": ""
}
],
"year": 1987,
"venue": "Edinburgh Working Papers",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeevat, H., Klein, E., and Calder, J. (1987). \"An Introduction to Unification Categorial Grammar\". In Haddock, N., Klein, E., Merrill, G. (eds.), Edinburgh Working Papers in Cognitive Science, Volume 1: Categorial Grammar, Unification Grammar, and Parsing.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"type_str": "table",
"content": "<table><tr><td/><td>Training</td><td>Test</td></tr><tr><td>Lexicon</td><td colspan=\"2\">99.1% 95.9%</td></tr><tr><td>Syntax</td><td colspan=\"2\">94.2% 90.9%</td></tr><tr><td>Semantics</td><td colspan=\"2\">87.4% 83.7%</td></tr><tr><td>Syntax (Repair Correction)</td><td colspan=\"2\">96.0% 93.1%</td></tr><tr><td>Semantics (Repair Correction)</td><td colspan=\"2\">89.1% 86.0%</td></tr><tr><td>*This research was supported by the Advanced Research</td><td/></tr><tr><td>Projects Agency</td><td/></tr></table>",
"html": null,
"text": "",
"num": null
},
"TABREF1": {
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "Domain Coverage by Component 2.",
"num": null
},
"TABREF3": {
"type_str": "table",
"content": "<table><tr><td>sem(whq_ynq_slash_np, [([whq,S], s:[]), (Np, np:[]), (S, s:[gapsin=np:[gapsem=Np]])]).</td></tr></table>",
"html": null,
"text": "This syntax rule (named whq_ynq_slash_np) says that a sentence (category s) can be built by finding a noun phrase (category np) followed by a sentence. It requires that the daughter np have the value ynq for its wh feature and the value N (a variable) for its person number, and that the daughter sentence have a category value for its gapsin feature, namely an np with a person number value N, which is the same as the person number value on the wh-bearing noun phrase. The interpretation of the entire rule is that a gapless sentence with sentence_type whq can be built by finding a wh-phrase followed by a sentence with a noun-phrase gap in it that has the same person number as the wh-phrase. Semantic rules are written in much the same rule format, except that in a semantic rule, each of the constituents mentioned in the phrase-structure skeleton is associated with a logical form. Thus, the semantics for the rule above is:",
"num": null
},
"TABREF5": {
"type_str": "table",
"content": "<table><tr><td>obviously a domain-specific requirement. But the same</td></tr><tr><td>machinery also restricts a determiner like all to take two</td></tr><tr><td>propositions, and an adjective like further to take dis-</td></tr><tr><td>tances as its measure-specifier (as in thirty miles fur-</td></tr><tr><td>ther). In fact, sortal constraints are assigned to every</td></tr><tr><td>atomic predicate and operator appearing in the logical</td></tr><tr><td>forms constructed by the semantic rules.</td></tr></table>",
"html": null,
"text": "Average number of edges built by interleaved processing the object of the transitive verb depart (as in flights departing Boston) is restricted to be an airport or a city,",
"num": null
},
"TABREF6": {
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"4\">John [sang [a song ] [for Mary ] ] ]</td></tr><tr><td/><td>S</td><td>S</td><td>S S R S S</td><td>RRR</td></tr><tr><td>(b)</td><td colspan=\"4\">[John [sang [ [a song ] [for Mary ]] ]]</td></tr><tr><td/><td>S</td><td>S</td><td>S S R S S</td><td>RRRR</td></tr><tr><td colspan=\"5\">There is a shift for each word and a reduce for each right</td></tr><tr><td>bracket.</td><td/><td/><td/></tr></table>",
"html": null,
"text": "Comparison of the two parses consists simply of pairing the moves in the shift-reduce derivation from left to right. Any parse making a shift move that corresponds to a reduce move loses by Right Association. Any parse making a reduce move that corresponds to a longer reduce loses by Minimal Attachment. In derivation (b) above the third reduce move builds the constituent a song for Mary from two constituents, while the corresponding reduce in (a) builds sang a song for Mary from three constituents. Parse (b) thus loses by Minimal Attachment.",
"num": null
}
}
}
}