{
"paper_id": "C96-1010",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:50:57.904029Z"
},
"title": "Parsing spoken language without syntax",
"authors": [
{
"first": "Jean-Yves",
"middle": [],
"last": "Antoine",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CLIPS-IMAG",
"location": {
"addrLine": "BP 53 --F-38040 GRENOBLE Cedex 9",
"country": "FRANCE"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Parsing spontaneous speech is a difficult task because of the ungrammatical nature of most spoken utterances. To overpass this problem, we propose in this paper to handle the spoken language without considering syntax. We describe thus a microsemantic parser which is uniquely based on an associative network of semantic priming. Experimental results on spontaneous speech show that this parser stands for a robust alternative to standard ones.",
"pdf_parse": {
"paper_id": "C96-1010",
"_pdf_hash": "",
"abstract": [
{
"text": "Parsing spontaneous speech is a difficult task because of the ungrammatical nature of most spoken utterances. To overpass this problem, we propose in this paper to handle the spoken language without considering syntax. We describe thus a microsemantic parser which is uniquely based on an associative network of semantic priming. Experimental results on spontaneous speech show that this parser stands for a robust alternative to standard ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The need of a robust parsing of spontaneous speech is a more and more essential as spoken human -machine communication meets a really impressive development. Now, the extreme structural variability of the spoken language balks seriously the attainment of such an objective. Because of its dynamic and uncontrolled nature, spontaneous speech presents indeed a high rate of ungrammatical constructions (hesitations, repetitious, a.s.o...). As a result, spontaneous speech catch rapidly out most syntactic parsers, in spite of the frequent addition of some ad hoc corrective methods [Seneff 92 ]. Most speech systems exclude therefore a complete syntactic parsing of the sentence. They on the contrary restrict the analysis to a simple keywords extraction [Appelt 92 ]. This selective approach led to significant results in some restricted applications (ATIS...). It seems however unlikely that it is appropriate for higher level tasks, which involve a more complex communication between the user and the computer. Thus, neither the syntactic methods nor the selective approaches can fully satisfy the constraints of robustness and of exhaustivity spoken human-machine communication needs. This paper presents a detailed semantic parser which masters most spoken utterances. In a first part, we describe the semantic knowledge our parser relies on. We then detail its implementation. Experimental results, which suggest the suitability of this model, are finally provided.",
"cite_spans": [
{
"start": 580,
"end": 590,
"text": "[Seneff 92",
"ref_id": null
},
{
"start": 753,
"end": 763,
"text": "[Appelt 92",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "i. Introduction",
"sec_num": null
},
{
"text": "Most syntactic formalisms (LFG [Bresnan 82], HPSG ]Pollard 87], TAG [Joshi 87]) give a major importance to subcategorization, which accounts for the grammatical dependencies inside the sentence. We consider on the contrary that subcategorization issue from a lexical semantic knowledge we will further name microsemantics [Rastier 94 ]. Our parser aims thus at building a microsemantic structure (figure 1) which fully describes the meaning dependencies inside the sentence. The corresponding relations are labeled by several microsemantic cases (Table 1) which only intend to cover the system's application field (computer-helped drawing).",
"cite_spans": [
{
"start": 68,
"end": 79,
"text": "[Joshi 87])",
"ref_id": null
},
{
"start": 322,
"end": 333,
"text": "[Rastier 94",
"ref_id": null
}
],
"ref_spans": [
{
"start": 546,
"end": 555,
"text": "(Table 1)",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Microsemantics",
"sec_num": "2."
},
{
"text": "The microsemantic parser achieves a fully lexicalized analysis. It relies indeed on a microsemantic lexicon in which every input represents a peculiar lexeme I. Each lexeme is described by the following features structure : PRED lexeme identifier MORPI1 morphological realizations SEM semantic domain SUBCAT subcategorization frame I Lexeme = lexical unit of meaning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Microsemantics",
"sec_num": "2."
},
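{
"text": "To make this lexicon concrete, here is a minimal Python sketch of such an entry (an illustration only: the field names mirror the paper's feature structure, while the encoding and the DRAW values are our assumptions drawn from the 'to draw' example below):\n\nfrom dataclasses import dataclass, field\n\n@dataclass\nclass Lexeme:\n    pred: str   # PRED: lexeme identifier\n    morph: set  # MORPH: morphological realizations\n    sem: str    # SEM: semantic domain\n    subcat: dict = field(default_factory=dict)  # SUBCAT: case -> restrictions\n\n# Entry for 'to draw'; the parenthesized (LOC) key marks an optional argument.\nDRAW = Lexeme(\n    pred='to draw',\n    morph={'draw', 'draws', 'drew', 'drawn'},\n    sem='task-domain',\n    subcat={'AGT': ['element', 'animate'],\n            'OBJ': ['element', 'concrete'],\n            '(LOC)': ['property', 'place']},\n)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Microsemantics",
"sec_num": "2."
},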
{
"text": "Pred = 'to draw'; Morph = {'draw', 'draws', 'drew', 'drawn'}. The microsemantic subcategorization frames describe the meaning dependencies the lexeme dominates. Their arguments are not ordered. The optional arguments are in brackets, in opposition to the compulsory ones. Lastly, adverbial phrases are not subcategorized.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example: to draw",
"sec_num": null
},
{
"text": "Any speech recognition system involves a high perplexity which requires the definition of topdown parsing constraints. This is why we based the microsemantic parsing on a priming process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Priming",
"sec_num": "3."
},
{
"text": "The semantic priming is a predictive process where some already uttered words (priming words) are calling some other ones (primed words) through various meaning associations. It aims a double goal :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Priming process",
"sec_num": "3.1."
},
{
"text": "\u2022 It constrains the speech recognition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Priming process",
"sec_num": "3.1."
},
{
"text": "\u2022 It characterizes the meaning dependencies inside the sentence. Each priming step involves two successive processes. At first, the contextual adaptation favors the priming words which are consistent with the semantic context. The latter is roughly modeled by two semantic fields: the task domain and the computing domain. On the other hand, the relational priming identifies the lexemes which share a microsemantic relation with one of the already uttered words. These relations issue directly from the subcategorization frames of these priming words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Priming process",
"sec_num": "3.1."
},
{
"text": "The priming process is carried out by an associative multi-layered network (figure 2) which results from the compilation of the lexicon. Each cell of the network corresponds to a specific lexeme. The inputs represent the priming words. Their activities are propagated up to the output layer which corresponds to the primed words. An additional layer (Structural layer S) handles furthermore the coordinations and the prepositions. We will now describe the propagation of the priming activities. Let us consider :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Priming network",
"sec_num": "3.2."
},
{
"text": "\u2022 t current step of analysis \u2022 a;/(t) activity of the cell j of the layer i at stept (i e {1, 2, 3, 4, 5, 6, S} ) \u2022 ~J(t) synaptic weight between the cell k of the layer i and the cell I of the layer j.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Priming network",
"sec_num": "3.2."
},
{
"text": "Temporal forgetting --At first, the input activities are slightly modulated by a process of temporal forgetting : ail(t) =amax if i is to the current word. ail(t) = amax if i is to the primer of thisword. a~l(t) = Max (0, ail(t-1)-Afo, g~t ) otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Priming network",
"sec_num": "3.2."
},
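{
"text": "A minimal sketch of this forgetting step, assuming a dictionary of input-layer activities and two tuning constants (A_MAX, DELTA_FORGET) whose values the paper does not specify:\n\nA_MAX = 1.0\nDELTA_FORGET = 0.1\n\ndef temporal_forgetting(activities, current_word, primer=None):\n    # activities: dict mapping each lexeme to its input activity a_i^1(t)\n    for lexeme in activities:\n        if lexeme in (current_word, primer):\n            activities[lexeme] = A_MAX  # refresh the current word and its primer\n        else:\n            # decay floored at zero, so long-distance primings remain possible\n            activities[lexeme] = max(0.0, activities[lexeme] - DELTA_FORGET)\n    return activities",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Priming network",
"sec_num": "3.2."
},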
{
"text": "Although it favors the most recent lexemes, this process does not prevent long distance primings. Contextual adaptation --Each cell of the second layer represents a peculiar semantic field. Its activity depends on the semantic affiliations of the priming words :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Priming network",
"sec_num": "3.2."
},
{
"text": "a~ (t)= Eoli,:~(t).air (t) (1) i C0il',~(t) = COma x if i belongs to j. with: c01j:~(t) = -Olma x otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Priming network",
"sec_num": "3.2."
},
{
"text": "Then, these contextual cells modulate the initial priming activities :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Priming network",
"sec_num": "3.2."
},
{
"text": "$a_j^3(t) = a_j^1(t) + \\sum_i \\omega_{i,j}^{2,3}(t) \\cdot a_i^2(t)$ (2), with $\\omega_{i,j}^{2,3}(t) = \\Delta\\omega_{context}$ if $j$ belongs to the semantic field $i$, and $\\omega_{i,j}^{2,3}(t) = -\\Delta\\omega_{context}$ otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Priming network",
"sec_num": "3.2."
},
{
"text": "The priming words which are consistent with the current semantic context are therefore favored. (1) otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Priming network",
"sec_num": "3.2."
},
{
"text": "The inner synaptic weights of the case-based sub-networks represent the relations between the priming and the primed words :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Priming network",
"sec_num": "3.2."
},
{
"text": "031~',5<,(t) = mmax if iandj share a microscmanlic relation which corresponds io the case <z. The primed words aims at constraining the speech recognition, thereby warranting the semantic coherence of the analysis. These constraints can be relaxed by considering the primable words. Every recognized word is finally handled by the parsing process with its priming relation (see section 4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Priming network",
"sec_num": "3.2."
},
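{
"text": "The whole propagation can be summarized by a short Python sketch (our illustration: the dense NumPy encoding and the one-matrix-per-layer, one-sub-network-per-case layout are assumptions, not the paper's implementation):\n\nimport numpy as np\n\ndef priming_step(a1, W12, W23, W34, W45):\n    # a1: input activities after temporal forgetting\n    a2 = W12 @ a1            # contextual cells, eq. (1)\n    a3 = a1 + W23 @ a2       # contextually adapted primings, eq. (2)\n    # dispatching then relational priming, one sub-network per case alpha\n    a5 = {alpha: W45[alpha] @ (W34[alpha] @ a3) for alpha in W34}\n    # maximum heuristic: each lexeme keeps its best case-wise excitation\n    return np.max(np.stack(list(a5.values())), axis=0), a5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Priming network",
"sec_num": "3.2."
},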
{
"text": "Prepositions restrict the microsemantic assignment of lhe objects they introduce. As a resttlt, the in'epositional cells of the structural layer tnodulate dynamically the case-based dispatching weights to prohibit any inconsistent priming. The rule (3') stands therefore for (3) :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prepositions",
"sec_num": "3.3."
},
{
"text": "$a_{4\\alpha}^i(t) = \\omega_{3,4\\alpha}^{i,i}(t) \\cdot a_3^i(t) \\cdot \\sum_k \\omega_k^{\\alpha}(t) \\cdot a_k^S(t)$ (3'), with $\\omega_k^{\\alpha}(t) = \\omega_{max}$ if the case $\\alpha$ is consistent with the preposition $k$, and $\\omega_k^{\\alpha}(t) = 0$ otherwise; and $a_k^S(t) = a_{max}$ while the object of $k$ is not yet assigned a case, $a_k^S(t) = 0$ otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prepositions",
"sec_num": "3.3."
},
{
"text": "At last, the preposition is assigned the TAG argument of the introduced object.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prepositions",
"sec_num": "3.3."
},
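{
"text": "A sketch of this prepositional gating, assuming a hypothetical table PREP_CASES that maps each preposition to the cases it licenses (the 'par' entry is an illustrative guess):\n\nPREP_CASES = {'par': {'AGT', 'INS'}, 'sur': {'LOC'}}\n\ndef gate(dispatched_activity, alpha, pending_preposition):\n    # zero the dispatched activity while the object of an inconsistent\n    # preposition still awaits a case; pass it through otherwise\n    if (pending_preposition is not None\n            and alpha not in PREP_CASES.get(pending_preposition, set())):\n        return 0.0\n    return dispatched_activity",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prepositions",
"sec_num": "3.3."
},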
{
"text": "The parser deals only for the moment being with logical coordinations (and, or, but...) . In such cases, the coordinated elements must share the same microsemantic case. This constraint is worked out by the recall of the already fulfilled microsemantic relations, which were all previously stacked. The dispatching is thus restricted to the recalled relations every time a coordination occurs :",
"cite_spans": [
{
"start": 70,
"end": 87,
"text": "(and, or, but...)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Coordinations",
"sec_num": "3.4."
},
{
"text": "$\\omega_{3,4\\alpha}^{i,j}(t) = \\omega_{3,4\\alpha}^{i,j}(0)$ for a stacked relation, and $\\omega_{3,4\\alpha}^{i,j}(t) = 0$ otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coordinations",
"sec_num": "3.4."
},
{
"text": "The coordinate words are finally considered the coo arguments of the conjunction, which is assigned to the shared microsemantic case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coordinations",
"sec_num": "3.4."
},
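{
"text": "A sketch of this recall mechanism, written with a dictionary variant of the dispatching weights for readability (the stack and function names are ours):\n\nfulfilled = []  # stack of (lexeme, case) pairs already unified\n\ndef recall_dispatch(W34_init, coordination_seen):\n    # on a coordination, keep only the weights of the stacked relations\n    if not coordination_seen:\n        return W34_init\n    allowed = set(fulfilled)\n    return {case: {lex: w for lex, w in weights.items()\n                   if (lex, case) in allowed}\n            for case, weights in W34_init.items()}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coordinations",
"sec_num": "3.4."
},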
{
"text": "Generally speaking, the priming process provides a set of words that should follow the already uttered lexemes. In some cases, a lexeme might however occur before its priming word : (a) I want to enlarge the small window Back priming situations are handled through the following algorithm :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3,5. Back priming",
"sec_num": null
},
{
"text": "Evm~\u00a2 time a new word occurs : 1. If this word was not primed, it is pushed it in a back priming stack. 2. Otherwise, one checks whether this word back primes some stacked ones. Back primed words are then popped out.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3,5. Back priming",
"sec_num": null
},
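{
"text": "The algorithm translates directly into a short sketch (a plain list serves as the back-priming stack; the predicate back_primes is assumed to query the priming network):\n\nstack = []\n\ndef on_new_word(word, primed, back_primes):\n    # primed: True if some earlier word primed this one\n    if not primed:\n        stack.append(word)  # 1. unprimed words wait on the stack\n    else:\n        # 2. pop every stacked word that the new word back-primes\n        stack[:] = [s for s in stack if not back_primes(word, s)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.5. Back priming",
"sec_num": null
},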
{
"text": "The microsemantic parsing relies on the unification of the subcategorization frames of the lexemes that are progressively recognized. This unification must respect four principles :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unification",
"sec_num": "4.1."
},
{
"text": "Unicity --Any argume~B'~nust be at the most fulfilled by a unique lexeme or a coordination.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unification",
"sec_num": "4.1."
},
{
"text": "Any lexeme must fulfil at the most a unique argument.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coherence-",
"sec_num": null
},
{
"text": "Coordination --Coordinate lexemes must fulfil the same subcategorized argument.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coherence-",
"sec_num": null
},
{
"text": "Relative completeness --Any argument might remain unfulfilled although the parser must always favor the more complete analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coherence-",
"sec_num": null
},
{
"text": "The principle of relative completeness is motivated by the high frequency of incomplete utterances (ellipses, interruptions...) spontaneous speech involves. The parser aims only at extracting an unfinished microsemantic structure pragmatics should then complete. As noticed previously with the coordinations, these principles govern preventively the contextual adaptation of the network weights, so that any incoherent priming is excluded.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coherence-",
"sec_num": null
},
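{
"text": "A compact sketch of how these principles could be checked, with an analysis encoded as a list of (argument slot, lexeme) pairs (our simplification; the coordination principle is enforced separately when conjunctions occur):\n\ndef respects_principles(assignments):\n    slots = [s for s, _ in assignments]\n    lexemes = [l for _, l in assignments]\n    unicity = len(slots) == len(set(slots))        # one filler per argument\n    coherence = len(lexemes) == len(set(lexemes))  # one argument per lexeme\n    return unicity and coherence\n\ndef best_analysis(candidates):\n    # relative completeness: arguments may stay empty, but the most\n    # complete coherent analysis is always favored\n    valid = [a for a in candidates if respects_principles(a)]\n    return max(valid, key=len) if valid else None",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unification",
"sec_num": "4.1."
},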
{
"text": "As illustrated by the previous example, the microsemantic parser masters rather complex sentences. The study of its linguistic abilities offers a persuasive view of its structural power.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LINGUISTIC ABILITIES",
"sec_num": "5."
},
{
"text": "Although our parser is dedicated to French applications, we expect our semantic approach to be easily extended to other languages. We will now study several linguistic phenomena the parser masters easily.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic coverage",
"sec_num": "5.1."
},
{
"text": "Compound tenses and passive --According to the microsemantic point of view, the auxiliaries appear as a mark of modality of the verb. As a result, the parser considers ordinarily any auxiliary an ordinary MOD argument of the verb. Since the parser ignores most word-order considerations, the interrogative utterances are processed like any declarative ones. This approach suits perfectly to spontaneous speech, which rarely involves a subject inversion. Closed questions are consequently characterized either by a prosodic analysis or by the adverbial phrase est-ce-que.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic coverage",
"sec_num": "5.1."
},
{
"text": "oft ddplafons nous le carrd ?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(g)",
"sec_num": null
},
{
"text": "Open questions (g) are on the contrary introduced explicitly by an interrogative pronoun which stands for the missing argument.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(g)",
"sec_num": null
},
{
"text": "Every relative clause is considered an argument of the lexeme the relative pronoun refers to.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relative clauses",
"sec_num": null
},
{
"text": "The microsemantic structures of the main and the relative clauses are however kept distinct to respect the principle of coherence. The two parse trees are indirectly related by an anaphoric a~lation (REF).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(h) It encumbers the window which is here",
"sec_num": null
},
{
"text": "Provided the dependent clause is not a relative one, the subordinate verb is subcategorized by the main one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subordinate clauses-",
"sec_num": null
},
{
"text": "As a result, subordinate clauses are parsed like any ordinary object.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(i) Draw a circle as soon as the square is erased",
"sec_num": null
},
{
"text": "The suitability of the semantic parser is rcally patent when considering spontaneous speech. The parser masters indeed most of the spontaneous ungrammatical constructions without any specific mechanism :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Spontaneous constructions",
"sec_num": "5.2."
},
{
"text": "Repetitions and self-corrections --Repetitions and self-corrections seem to violate the principle of unicity. They involve indeed sevcral lexemes which share the same lnicroselnantic case : These constructions are actually considered a peculiar coordination where the conjunction is missing [De Smedt 87] . Then, they are parsed like any coordination\u00b0",
"cite_spans": [
{
"start": 291,
"end": 304,
"text": "[De Smedt 87]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Spontaneous constructions",
"sec_num": "5.2."
},
{
"text": "Ellipses and interruptions --The principle of relative completeness is mainly designed for the ellipses and the interruptions, Our parser is thus able to extract alone the incomplete structure of any interrupted utterance. On the contrary, the criterion of relative completeness is deficient for most of the ellipses like (t), where the upper predicate to move is omitted : Such wide ellipses should nevertheless be recovered at a upper pragmatic level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Spontaneous constructions",
"sec_num": "5.2."
},
{
"text": "Comments --Generally speaking, comments do not share any microsemantic relation with the sentence they are inserted in : For instance, the idiomatic phrase that's it is related to (o) at the pragmatic level and not at the semantic one. As a result, the microsemantic parser can not unify the main clause and the comment. We expect however filrther studies on pragmatic marks to enhance the parsing of these constructions. Despite this weakness, the robustness of the microsemantic parser is already substantial. The following experimental results will thus suggest the suitability of our mnodcl for spontaneous speech parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Spontaneous constructions",
"sec_num": "5.2."
},
{
"text": "This section presents several experiments that were carried out on our microsemantic analyzer as well as on a LFG parser [Zweigenbaum 91]. These experiments were achieved on the literal written transcription of three corpora of spontaneous speech (table 2) which all correspond to a collaborative task of drawing between two human subjects (wizard of Oz experiment). The dialogues were totally unconstrained, so that the corpora are corresponding to natural spontaneous speech. We compared the two parser according on their robustness and their perplexity.",
"cite_spans": [],
"ref_spans": [
{
"start": 247,
"end": 256,
"text": "(table 2)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6."
},
{
"text": "The table 3 provides the accuracy rates of the two parsers. These results show the benefits of our approach. Around four utterances over five (-\u00a3=83.5%) are indeed processed correctly by the microsemantic parser whereas the LFG's accuracy is limited to 40% on the two first corpora. Its robustness is noticeably higher on the third corpus, which presents a moderate ratio of ungrammatical utterances. The overall performances of the LFG suggest nevertheless that a syntactic approach is not suitable for spontaneous speech, by opposition with the microsemantic one. Besides, the independence of microsemantics from the grammatical shape of the utterances warrants its robustness remains relatively unaltered (standard deviation CYn = 0.036).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Robustness",
"sec_num": "6.1."
},
{
"text": "As mentioned above, the microsemantic parser ignores in a large extent most of the constraints of linear precedence. This tolerant approach is motivated by the frequent ordering violations spontaneous speech involves. It however leads to a noticeable increase of perplexity. This deterioration is particularly patent for sentences which include at least eight lexemes (Table 4) . At first, we proposed to reduce this perplexity through a cooperation between the microsemantic analyzer and a LFG parser [Antoine 9411 . Although this cooperation achieves a noticeable reduction of the perplexity, it is however ineffective when the LFG parser collapses. This is why we intend at present to inserl, directly some ordering constraints spontaneous speech never violates. [Rainbow 9411 established that any ordering rule should be expressed lexically. We suggest consequently to order partially the arguments of every lexical subcategorization. Thus, each frame will be assigned few equations which will characterize some ordering priorities among its arguments.",
"cite_spans": [
{
"start": 502,
"end": 515,
"text": "[Antoine 9411",
"ref_id": null
},
{
"start": 766,
"end": 779,
"text": "[Rainbow 9411",
"ref_id": null
}
],
"ref_spans": [
{
"start": 368,
"end": 377,
"text": "(Table 4)",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Perplexity",
"sec_num": "6.2."
},
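{
"text": "A sketch of what such lexically expressed ordering priorities could look like (purely illustrative: the paper proposes the idea but gives no concrete format for the equations):\n\nORDER_PRIORITIES = {'to draw': [('AGT', 'OBJ')]}  # AGT tends to precede OBJ\n\ndef violates_order(pred, positions):\n    # positions: dict mapping each fulfilled case to its utterance index\n    for before, after in ORDER_PRIORITIES.get(pred, []):\n        if (before in positions and after in positions\n                and positions[before] > positions[after]):\n            return True\n    return False",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Perplexity",
"sec_num": "6.2."
},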
{
"text": "In this paper, we argued the structural variability of spontaneous speech prevents its parsing by standard syntactic analyzers. We have then described a semantic analyzer, based on an associative priming network, which aims at parsing spontaneous speech without considering syntax. The linguistic coverage of this parser, as well as several its robustness, have clearly shown the benefits of this approach. We expect furthermore the insertion of word-order constraints to noticeably decrease the perplexity of the microsemantic analyzer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Automatic adaptive understanding of spoken language",
"authors": [
{
"first": "J",
"middle": [
"Y"
],
"last": "Antoine",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Caelen",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Caillaud",
"suffix": ""
}
],
"year": 1994,
"venue": "ICSLP'94",
"volume": "799",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J.Y. Antoine, J. Caelen, B. Caillaud (1994). \"Automatic adaptive understanding of spoken language\", ICSLP'94, Yokoham, Japan, 799:802.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Locative inversion in Chichewa",
"authors": [
{
"first": "D",
"middle": [],
"last": "Appelt",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Jakson",
"suffix": ""
},
{
"first": ";",
"middle": [
"J"
],
"last": "Bresnan",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Kanerva",
"suffix": ""
}
],
"year": 1989,
"venue": "5th DARPA Workshop on Speech and Natural Language",
"volume": "20",
"issue": "",
"pages": "1--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Appelt, E. Jakson (1992), \"SRI International ATIS Benchmark Test Results\", 5th DARPA Workshop on Speech and Natural Language, Harriman, NY. J. Bresnan, J. Kanerva (1989). \"Locative inversion in Chichewa\", Linguistic Inquiry, 20, 1-50.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The relevance of TAG to generation",
"authors": [
{
"first": "A",
"middle": [],
"last": "Joshi",
"suffix": ""
}
],
"year": 1987,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Joshi (1987) \"The relevance of TAG to generation\", in G. Kempen (ed.), \"Natural Language Generation\", Reidel, Dordrecht, NL.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Speaking : from intention to articulation",
"authors": [
{
"first": "W",
"middle": [],
"last": "Levelt",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Levelt (1989). \"Speaking : from intention to articulation\", MIT Press, Cambridge, Ma.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Information based syntax and semantics",
"authors": [
{
"first": "C",
"middle": [],
"last": "Pollard",
"suffix": ""
}
],
"year": 1987,
"venue": "CSLI Lectures notes",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Pollard, 1. Sag (1987), \"Information based syntax and semantics\", CSLI Lectures notes, 13, University of Chicago Press, IL.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A Formal Look at Dependancy Grammars and Phrase-Structure Grammars",
"authors": [
{
"first": "O",
"middle": [],
"last": "Rainbow",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Joslfi",
"suffix": ""
}
],
"year": 1994,
"venue": "Current Issues in Meaning-Text Theory",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "O. Rainbow, A. Joslfi (1994). \"A Formal Look at Dependancy Grammars and Phrase-Structure Grammars \", in L. Wanner (ed.), \"Current Issues in Meaning-Text Theory\", Pinter, London, 1994.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "S6mantique pour l'analyse",
"authors": [
{
"first": "F",
"middle": [],
"last": "Rastier",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Rastier et al (1994). \"S6mantique pour l'analyse\", Masson, Paris.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Robust Parsing for Spoken Language Systems\"~ ICASSP'92, volo I",
"authors": [
{
"first": "S",
"middle": [],
"last": "Seneff",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "189--192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Seneff (1992). \"Robust Parsing for Spoken Language Systems\"~ ICASSP'92, volo I, 189-192, San Francisco, CA,",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"uris": null,
"text": "Microsemantic structure of the sentence I select the left device",
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"text": "'} I AGT = / element / + / animate / Subcat = /oBJ = / element / + / concrete / [( LOC) = / property / + / place Sere = / task -domain /",
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"text": "Relational Priming --The priming activities are then dispatched mnong several sub-networks which perform parallel analyses on distinct cases (fig. 3). The dispatched activities represents therefore the priming power of the priming lexemes o51 each microselnantic case : are dynamically adapted during the parsing (see section 4)+ Their initial values issue front the compilation ol the lexical subcategorization flames : ml4~J,,5,(t) : C\u00b0min otherwise. The outputs of the case-based sub-networks, as well as the final priming excitations, are then calculated through a maximum heuristic : a{,,Ct) = Max ( i,j ro .... (t).a{.(taa,..,,,y,,,. Em.d;.+ll+ contextual ++ L~._+# + .............. . ............................ ++,;~.#7:~-+++.,~+ / layer '.:\".: .-. ({(~~]'~-M>~7~ '\"',''''",
"num": null,
"type_str": "figure"
},
"FIGREF5": {
"uris": null,
"text": "logidel,' , AGT =/DET = [Pred = le ] LTAG =' par'Interrogations-Three interrogative forms are met in French : subject inversion (fl), est-ce-que questions (f2) and intonative questions (f3).(fl) ddpla~'ons nous le carrd ? (f2) est-ce-que nous ddplafons le carrd ? (f3) nous dgplacfons le carrd ?",
"num": null,
"type_str": "figure"
},
"FIGREF6": {
"uris": null,
"text": "(1l) *Select the device ... the right (_tevice. (12) *Close the display ~ ... the window.",
"num": null,
"type_str": "figure"
},
"FIGREF7": {
"uris": null,
"text": "The left door on the right too.",
"num": null,
"type_str": "figure"
},
"FIGREF8": {
"uris": null,
"text": "line ... that's it ... on the right..",
"num": null,
"type_str": "figure"
},
"TABREF0": {
"text": "Some examples of microsemantic cases.",
"content": "<table><tr><td>Label</td><td>Semantic case</td></tr><tr><td>DET</td><td>determiner</td></tr><tr><td>AGT</td><td>agent</td></tr><tr><td>ATT</td><td>attribute</td></tr><tr><td>OBJ</td><td>object / theme</td></tr><tr><td>LOC</td><td>location / destination</td></tr><tr><td>OWN</td><td>meronomy / ownership</td></tr><tr><td>MOD</td><td>modality</td></tr><tr><td>INS</td><td>instrument</td></tr><tr><td>COO</td><td>coordination</td></tr><tr><td>TAG</td><td>case marker (prdposition)</td></tr><tr><td>REF</td><td>anaphoric reference</td></tr></table>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF3": {
"text": "",
"content": "<table><tr><td colspan=\"4\">.\" Average robustness of the LFG and the</td></tr><tr><td colspan=\"4\">microsemantic. Accuracy rate = number of correct</td></tr><tr><td colspan=\"3\">analyses /number of tested utterances.</td></tr><tr><td>Parser</td><td colspan=\"2\">corpus 1 corpus 2 corpus 3</td><td>~-~n</td></tr><tr><td>LFG</td><td>0.408 0.401</td><td colspan=\"2\">0.767 0.525 0.170</td></tr><tr><td>Semantics</td><td>0.853 0.785</td><td colspan=\"2\">0.866 0.835 0.036</td></tr></table>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF4": {
"text": "Number of parallel hypothetic structuresl according to utterances' length",
"content": "<table><tr><td>Length</td><td>LFG parser</td><td>Microsemantic</td></tr><tr><td>4 words</td><td>1,5</td><td>2,5</td></tr><tr><td>6 words</td><td>1,5</td><td>3,5</td></tr><tr><td>8 words</td><td>2</td><td>8</td></tr><tr><td>10 words</td><td>2</td><td>12,5</td></tr><tr><td>12 words</td><td>1,25</td><td>19,75</td></tr></table>",
"type_str": "table",
"html": null,
"num": null
}
}
}
}