|
{ |
|
"paper_id": "A97-1012", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T02:14:28.585167Z" |
|
}, |
|
"title": "INCREMENTAL FINITE-STATE PARSING", |
|
"authors": [ |
|
{ |
|
"first": "Salah", |
|
"middle": [], |
|
"last": "A\u00eft-Mokhtar",
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Chemin de Maupertuis", |
|
"location": { |
|
"postCode": "F-38240", |
|
"settlement": "Meylan", |
|
"country": "France" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Jean-Pierre", |
|
"middle": [], |
|
"last": "Chanod", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Chemin de Maupertuis", |
|
"location": { |
|
"postCode": "F-38240", |
|
"settlement": "Meylan", |
|
"country": "France" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "1997",
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper describes a new finite-state shallow parser. It merges constructive and reductionist approaches within a highly modular architecture. Syntactic information is added at the sentence level in an incremental way, depending on the contextual information available at a given stage. This approach overcomes the inefficiency of previous fully reductionist constraint-based systems, while maintaining broad coverage and linguistic granularity. The implementation relies on a sequence of networks built with the replace operator. Given the high level of modularity, the core grammar is easily augmented with corpus-specific sub-grammars. The current system is implemented for French and is being expanded to new languages.",
|
"pdf_parse": { |
|
"paper_id": "A97-1012", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper describes a new finite-state shallow parser. It merges constructive and reductionist approaches within a highly modular architecture. Syntactic information is added at the sentence level in an incremental way, depending on the contextual information available at a given stage. This approach overcomes the inefficiency of previous fully reductionist constraint-based systems, while maintaining broad coverage and linguistic granularity. The implementation relies on a sequence of networks built with the replace operator. Given the high level of modularity, the core grammar is easily augmented with corpus-specific sub-grammars. The current system is implemented for French and is being expanded to new languages.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Previous work in finite-state parsing at sentence level falls into two categories: the constructive approach and the reductionist approach. The origins of the constructive approach go back to the parser developed by Joshi (Joshi, 1996). It is based on a lexical description of large collections of syntactic patterns (up to several hundred thousand rules) using subcategorisation frames (verbs + essential arguments) and local grammars (Roche, 1993). It is, however, still unclear whether this heavily lexicalized method can account for all sentence structures actually found in corpora, especially due to the proliferation of non-argumental complements in corpus analysis.",
|
"cite_spans": [ |
|
{ |
|
"start": 221, |
|
"end": 234, |
|
"text": "(Joshi, 1996)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 436, |
|
"end": 449, |
|
"text": "(Roche, 1993)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Another constructive line of research concentrates on identifying basic phrases, as in the FASTUS information extraction system (Appelt et al., 1993) or in the chunking approach proposed in (Abney, 1991; Federici et al., 1996). Attempts were made to mark the segments with additional syntactic information (e.g. subject or object) (Grefenstette, 1996) using simple heuristics, for the purpose of information retrieval, but not for robust parsing.",
|
"cite_spans": [ |
|
{ |
|
"start": 132, |
|
"end": 153, |
|
"text": "(Appelt et al., 1993)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 194, |
|
"end": 207, |
|
"text": "(Abney, 1991;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 208, |
|
"end": 230, |
|
"text": "Federici et al., 1996)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 336, |
|
"end": 356, |
|
"text": "(Grefenstette, 1996)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The reductionist approach starts from a large number of alternative analyses that get reduced through the application of constraints. The constraints may be expressed by a set of elimination rules applied in a sequence (Voutilainen, Tapanainen, 1993) or by a set of restrictions applied in parallel (Koskenniemi et al., 1992). In a finite-state constraint grammar (Chanod, Tapanainen, 1996), the initial sentence network represents all the combinations of the lexical readings associated with each token. The acceptable readings result from the intersection of the initial sentence network with the constraint networks. This approach led to very broad-coverage analyzers, with good linguistic granularity (the information is richer than in typical chunking systems). However, the size of the intermediate networks resulting from the intersection of the initial sentence network with the sets of constraints raises serious efficiency issues.",
|
"cite_spans": [ |
|
{ |
|
"start": 219, |
|
"end": 250, |
|
"text": "(Voutilainen, Tapanainen, 1993)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 299, |
|
"end": 325, |
|
"text": "(Koskenniemi et al., 1992)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 373, |
|
"end": 390, |
|
"text": "Tapanainen, 1996)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The new approach proposed in this paper aims at merging the constructive and the reductionist approaches, so as to maintain the coverage and granularity of the constraint-based approach at a much lower computational cost. In particular, segments (chunks) are defined by constraints rather than patterns, in order to ensure broader coverage. At the same time, segments are defined in a cautious way, to ensure that clause boundaries and syntactic functions (e.g. subject, object, PP-Obj) can be defined with a high degree of accuracy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The incremental parser", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The input to the parser is a tagged text. We currently use a modified version of the Xerox French tagger (Chanod, Tapanainen, 1995). The revisions are meant to reduce the impact of the most frequent tagger errors (e.g. confusions between adjectives and past participles), and to refine the tagset. Each input token is assigned a single tag, generally representing the part of speech and some limited morphological information (e.g. the number, but not the gender, of nouns). The sentence is initially represented by a sequence of wordform-plus-tag pairs.",
|
"cite_spans": [ |
|
{ |
|
"start": 114, |
|
"end": 131, |
|
"text": "Tapanainen, 1995)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The incremental parser consists of a sequence of transducers. These transducers are compiled from regular expressions that use finite-state calculus operators, mainly the replace operators (Karttunen, 1996). Each of these transducers adds syntactic information represented by reserved symbols (annotations), such as brackets and names for segments and syntactic functions. The application of each transducer composes it with the result of previous applications.",
|
"cite_spans": [ |
|
{ |
|
"start": 189, |
|
"end": 206, |
|
"text": "(Karttunen, 1996)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "If the constraints stipulated in a given transducer are not verified, the string remains unchanged. This ensures that there is always an output string at the end of the sequence, with possibly underspecified segments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Each transducer performs a specific linguistic task. For instance, some networks identify segments for NPs, PPs, APs (adjective phrases) and verbs, while others are dedicated to subject or object. The same task (e.g. subject assignment or verb segmentation) may be performed by more than one transducer. The additional information provided at each stage of the sequence is instrumental in the definition of the later stages of the sequence. Networks are ordered in such a way that the easiest tasks are addressed first.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The replace operators allow one not only to add information but also to modify previously computed information. It is thus possible to reassign syntactic markings at a later stage of the sequence. This has two major uses:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Non-monotonicity", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "\u2022 assigning a default marking to some segments at some stage of the process, in order to provide preliminary information that is essential to the subsequent stages, and correcting the default marking later if the context so requires;",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Non-monotonicity", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "\u2022 assigning a very general marking to some segments, and refining the marking later if the context so permits.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Non-monotonicity", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "In that sense, our incremental parser is non-monotonic: earlier decisions may be refined or even revised. However, all the transducers can, in principle, be composed into a single transducer which produces the final outcome in a single step.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Non-monotonicity", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Each transducer defines syntactic constructions using two major operations: segmentation and syntactic marking. Segmentation consists of bracketing and labeling adjacent constituents that belong to the same partial construction (e.g. a nominal or a verbal phrase, or a more primitive/partial syntactic chain if necessary). Segmentation also includes the identification of clause boundaries. Syntactic marking annotates segments with syntactic functions (e.g. subject, object, PPObj). The two operations, segmentation and syntactic marking, are performed throughout the sequence in an interrelated fashion. Some segmentations depend on previous syntactic marking and vice versa.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cautious segmentation and syntactic marking", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "If a construction is not recognized at some point of the sequence because the constraints are too strong, it can still be recognized at a later stage, using other linguistic statements and different background information. This notion of delayed assignment is crucial for robust parsing, and requires that each statement in the sequence be linguistically cautious. Cautious segmentation prevents us from grouping syntactically independent segments. This is why we avoid the use of simplifying approximations that would block the possibility of performing delayed assignment. For example, unlike (Abney, 1991), we do not systematically use longest pattern matching for segmentation. Segments are restricted by their underlying linguistic indeterminacy (e.g. post-nominal adjectives are not attached to the immediate noun on their left, and coordinated segments are not systematically merged, until strong evidence is established for their linkage).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cautious segmentation and syntactic marking", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "The parsing process is incremental in the sense that the linguistic description attached to a given transducer in the sequence:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Incremental parsing and linguistic description", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "\u2022 relies on the preceding sequence of transducers \u2022 covers only some occurrences of a given linguistic phenomenon",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Incremental parsing and linguistic description", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "\u2022 can be revised at a later stage.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Incremental parsing and linguistic description", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "This has a strong impact on the linguistic character of the work. The ordering of the linguistic descriptions is in itself a matter of linguistic description: i.e. the grammarian must split the description of phenomena into sub-descriptions, depending on the available amount of linguistic knowledge at a given stage of the sequence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Incremental parsing and linguistic description", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "This may sound like a severe disadvantage of the approach, as deciding on the order of the transducers relies mostly on the grammarian's intuition. But we argue that this incremental view of parsing is instrumental in achieving robust parsing in a principled fashion. When it comes to parsing, no statement is fully accurate (one may for instance find examples where even the subject and the verb do not agree in perfectly correct French sentences). However, one may construct statements which are true almost everywhere, that is, which are always true in some frequently occurring context.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Incremental parsing and linguistic description", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "By identifying the classes of such statements, we reduce the overall syntactic ambiguity and we simplify the task of handling less frequent phenomena. The less frequent phenomena apply only to segments that are not covered by previous linguistic description stages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Incremental parsing and linguistic description", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "To some extent, this is reminiscent of Optimality Theory, in which:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Incremental parsing and linguistic description", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "\u2022 Constraints are ranked;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Incremental parsing and linguistic description", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "\u2022 Constraints can be violated.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Incremental parsing and linguistic description", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "Transducers at the top of the sequence are ranked higher, in the sense that they apply first, thus blocking the application of similar constructions at a later stage in the sequence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Incremental parsing and linguistic description", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "If the constraints attached to a given transducer are not fulfilled, the transducer has no effect. The output annotated string is identical to the input string and the construction is bypassed. However, a bypassed construction may be reconsidered at a later stage, using different linguistic statements. In that sense, bypassing allows for the violation of constraints.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Incremental parsing and linguistic description", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "As French is typically SVO, the first transducer in the sequence to mark subjects checks for NPs on the left side of finite verbs. Later in the sequence, other transducers allow for subject inversion (thus violating the constraint on subject-verb order), especially in some specific contexts where inversion is likely to occur, e.g. within relative or subordinate clauses, or with motion verbs. Whenever a transducer defines a verb-subject construction, it is implicitly known at this stage that the initial subject-verb construction was not recognized for that particular clause (otherwise, the application of the verb-subject construction would be blocked).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "An example of incremental description: French Subjects", |
|
"sec_num": "2.5" |
|
}, |
|
{ |
|
"text": "Further down in the sequence, transducers may allow for verb-subject constructions outside the previously considered contexts. If none of these subject-pickup constructions applies, the final sentence string remains underspecified: the output does not specify where the subject stands.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "An example of incremental description: French Subjects", |
|
"sec_num": "2.5" |
|
}, |
|
{ |
|
"text": "It should be observed that in real texts, not only may one find subjects that do not agree with the verb (even in correct sentences), but one may also find finite verbs without a subject. This is the case, for instance, in elliptic technical reports (esp. failure reports) or on cigarette packs with inscriptions like Nuit gravement \u00e0 la sant\u00e9 1.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "An example of incremental description: French Subjects", |
|
"sec_num": "2.5" |
|
}, |
|
{ |
|
"text": "This is a major feature of shallow and robust parsers (Jensen et al., 1993; Ejerhed, 1993): they may provide partial and underspecified parses when full analyses cannot be performed; the issue of grammaticality is independent of the parsing process; the parser identifies the most likely interpretations for any given input.",
|
"cite_spans": [ |
|
{ |
|
"start": 54, |
|
"end": 75, |
|
"text": "(Jensen et al., 1993;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 76, |
|
"end": 90, |
|
"text": "Ejerhed, 1993)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "An example of incremental description: French Subjects", |
|
"sec_num": "2.5" |
|
}, |
|
{ |
|
"text": "An additional feature of the incremental parser derives from its modular architecture: one may handle underspecified elements in a tractable fashion, by adding optional transducers to the sequence. For instance, one may use corpus-specific transducers (e.g. sub-grammars for technical manuals are especially useful to block analyses that are linguistically acceptable, but unlikely in technical manuals: a good example in French is to forbid second person singular imperatives in technical manuals, as they are often ambiguous with nouns in a syntactically undecidable fashion). One may also use heuristics which go beyond the cautious statements of the core grammar (to get back to the example of French subjects, heuristics can identify any underspecified NP as the subject of a finite verb if the slot is available at the end of the sequence). How specific grammars and heuristics can be used is obviously application-dependent.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "An example of incremental description: French Subjects", |
|
"sec_num": "2.5" |
|
}, |
|
{ |
|
"text": "The parser has four main linguistic modules, each of them consisting of one or several sequenced transducers:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Architecture", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "1 Seriously endangers your health. This example represents an interesting case of deixis and at the same time a challenge for the POS tagger, as Nuit is more likely to be recognized as a noun (Night) than as a verb (Endangers) in this particular context. The input text is first tagged with part-of-speech information using the Xerox tagger. The tagger uses 44 morphosyntactic tags, such as NOUN-SG for singular nouns and VERB-P3SG for 3rd person singular verbs.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Architecture", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The morphosyntactic tags are used to mark AP, NP, PP and VP segments. We then use the segmentation tags and some additional information (including typography) to mark subjects which, in turn, determine to what extent VCs (Verb Chunks) can be expanded. Finally, other syntactic functions are tagged within the segments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Architecture", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Marking transducers are compiled from regular expressions of the form A @-> T1 ... T2, which contain the left-to-right longest-match replace operator @->. Such a transducer marks, in a left-to-right fashion, the maximal instances of A by adding the bracketing strings T1 and T2.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Architecture", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "A segment is a continuous sequence of words that are syntactically linked to each other or to a main word (the Head). In the primary segmentation step, we mark segment boundaries within sentences as shown below, where NP stands for Noun Phrase, PP for Preposition Phrase and VC for Verb Chunk (a VC contains at least one verb and possibly some of its arguments and modifiers). Example: All the words within a segment should be linked to words in the same segment at the same level, except the head. For instance, in the NP le commutateur (the switch), le should be linked to commutateur (the head), which, in turn, should be linked to the verb tourne, and not to the verb retourne, because the two words are not in the same segment. The main purpose of marking segments is therefore to constrain the particular linguistic space that determines the syntactic function of a word.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Primary Segmentation", |
|
"sec_num": "4" |
|
}, |
|
|
{ |
|
"text": "As one can notice from the example above, segmentation is very cautious, and structural ambiguity inherent to modifier attachment (even postnominal adjectives), verb arguments and coordination is not resolved at this stage.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Primary Segmentation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In order to get more robust linguistic descriptions and networks that compile faster, segments are not defined by marking sequences that match classical regular expressions of the type [Det (Coord Det) Adj* Noun], except in simple or heavily constrained cases (APs, infinitives, etc.). Rather, we take advantage of the fact that, within a linguistic segment introduced by some grammatical words and terminated by the head, there is no attachment ambiguity and therefore these words can be safely used as segment delimiters (B\u00e8s, 1993). We first mark possible beginnings and endings of a segment and then associate each beginning tag with an ending if some internal constraints are satisfied. Hence, the main steps in segmentation are:",

"cite_spans": [

{

"start": 523,

"end": 534,

"text": "(B\u00e8s, 1993)",
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Primary Segmentation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 Tag potential beginnings and ends of a segment", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Primary Segmentation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 Use these temporary tags to mark the segment \u2022 Remove the temporary tags.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Primary Segmentation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Adjective phrases are marked by a replacement transducer which inserts the [AP and AP] boundaries around any word sequence that matches the regular expression (RE): ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "AP Segmentation", |
|
"sec_num": "4.1" |
|
}, |
|
|
{ |
|
"text": "Unlike APs, NPs are marked in two steps where the basic idea is the following: we first insert a special mark wherever a beginning of an NP is possible, i.e., on the left of a determiner, a numeral, a pronoun, etc. The mark is called a temporary beginning of NP (TBeginNP). The same is done for all possible ends of NP (TEndNP), i.e., nouns, numerals, pronouns, etc. Then, using a replacement transducer, we insert the [NP and NP] boundaries around the longest sequence that contains at least one temporary beginning of NP followed by one temporary end of NP:",
|
"cite_spans": [ |
|
{ |
|
"start": 417, |
|
"end": 428, |
|
"text": "[NP and NP]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "NP Segmentation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "This way, we implicitly handle complicated NPs such as le ou les responsables (the-SG or the-PL person(s) in charge), les trois ou quatre affaires (the three or four cases), etc.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "NP Segmentation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Once NP boundaries are marked, we insert on the left of any preposition a temporary PP beginning mark (TBeginPP = <PP):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PP Segmentation", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "<PP Avec ou <PP sans [NP le premier ministre NP] 3",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PP Segmentation", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Then the longest sequence containing at least one TBeginPP followed by one EndNP is surrounded with the [PP and PP] boundaries using the RE:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PP Segmentation", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "[TBeginPP ~$[EndNP|TVerb] EndNP] @-> BeginPP ... EndPP",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PP Segmentation", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "which eventually leads to:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PP Segmentation", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "[PP Avec ou sans le premier ministre PP]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PP Segmentation", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "A VC (Verb Chunk) is a sequence containing at least one verb (the head). It may include words or segments (NPs, PPs, APs or other VCs) that are possibly linked as arguments or adjuncts to the verb. There are three types of VCs: infinitives, present participle phrases and finite verb phrases. We first mark infinitive and present participle segments, as they are simpler than finite verb phrases: they are not recursive and cannot contain other VCs.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "VC Segmentation", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "The infinitive phrases are recognized using the regular expression: ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Infinitives", |
|
"sec_num": "4.4.1" |
|
}, |
|
{ |
|
"text": "Here we use the basic idea described in the NP marking: temporary beginnings (TBeginVC) and ends (TEndVC) of VC are first marked.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Finite Verb Segments", |
|
"sec_num": "4.4.3" |
|
}, |
|
{ |
|
"text": "Temporary beginnings of VCs are usually introduced by grammatical words such as qui (relative pronoun), lorsque, et (coordination), etc. However, not all these words are certain VC boundaries: et could be an NP coordinator, while que (tagged as CONJQUE by the HMM tagger) could be used in comparatives (e.g. plus blanc que blanc). Therefore, we use three kinds of TBeginVC to handle different levels of uncertainty: a certain TBeginVC (TBeginVC1), a possible TBeginVC (TBeginVC2) and an initial TBeginVC (TBeginVCS) automatically inserted at the beginning of every sentence in the input text. With TBeginVCS, we assume that the sentence has a main finite verb, as is usually the case, but this is just an assumption that can be corrected later.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Finite Verb Segments", |
|
"sec_num": "4.4.3" |
|
}, |
|
{ |
|
"text": "A temporary end of VC (TEndVC) is then inserted on the right of any finite verb, and the process of recognizing VCs consists of the following steps:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Finite Verb Segments", |
|
"sec_num": "4.4.3" |
|
}, |
|
{ |
|
"text": "Step 1: Each certain TBeginVC1 is matched with a TEndVC, and the sequence is marked with [VC and VC]. The matching is applied iteratively on the input text to handle the case of embedded clauses (arbitrarily bound to three iterations in the current implementation).",
|
"cite_spans": [ |
|
{ |
|
"start": 89, |
|
"end": 100, |
|
"text": "[VC and VC]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Finite Verb Segments", |
|
"sec_num": "4.4.3" |
|
}, |
|
{ |
|
"text": "Step 2: The same is done with the TBeginVCS (inserted at the beginning of a sentence).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "*", |
|
"sec_num": null |
|
}, |
|
|
{ |
|
"text": "Step 3: If there is still a TEndVC that was not matched in (1) or (2), then it is matched with a possible TBeginVC2, if any, and the sequence is marked with [VC and VC].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "*", |
|
"sec_num": null |
|
}, |
|
|
{ |
|
"text": "Step 4: Any TBeginVC that was not matched in (1), (2) or (3) is removed. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "*", |
|
"sec_num": null |
|
}, |
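The four matching steps above can be sketched in plain Python. This is an illustrative reconstruction of the matching logic only; the actual parser implements it with finite-state replace rules, and only the marker names (TBeginVC1, TBeginVC2, TBeginVCS, TEndVC) come from the paper.

```python
# Sketch of the four-step VC matching procedure (illustrative, not the
# FST implementation). Tokens are plain strings; temporary boundary
# markers are interleaved with the other tokens.

def match_vcs(tokens, max_iter=3):
    tokens = list(tokens)

    def match(begin_tag):
        # Pair each begin_tag with the nearest unmatched TEndVC to its right.
        out, stack = [], []
        for tok in tokens:
            if tok == begin_tag:
                stack.append(len(out))
                out.append("[VC")          # tentatively open a VC
            elif tok == "TEndVC" and stack:
                stack.pop()
                out.append("VC]")          # close the most recent open VC
            else:
                out.append(tok)
        for i in stack:                    # undo opens that found no end
            out[i] = begin_tag
        return out

    for _ in range(max_iter):              # Step 1: certain boundaries,
        tokens = match("TBeginVC1")        # iterated for embedded clauses
    tokens = match("TBeginVCS")            # Step 2: sentence-initial boundary
    tokens = match("TBeginVC2")            # Step 3: possible boundaries
    return [t for t in tokens              # Step 4: drop leftover markers
            if not t.startswith("TBeginVC") and t != "TEndVC"]
```

The iteration bound in Step 1 mirrors the three-iteration limit on embedded clauses mentioned above.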
|
{ |
|
"text": "The process of tagging words and segments with syntactic functions is a good example of the non-monotonic nature of the parser and its hybrid constructive-reductionnist approach. Syntactic functions within non recursive segments (AP, NP and PP) are addressed first because they are easier to tag. Then other functions within verb segments and at sentence level (subject, direct object, verb modifier, etc.) are considered.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Marking Syntactic Functions", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Potential subjects are marked first: an NP is a potential subject if and only if it satisfies some typographical conditions (it should not be separated from the verb with only one comma, etc.). This prevents the NP Jacques, for example, from being marked as a subject in the sentence below: Then constraints are applied to eliminate some of the potential subject candidates. The constraints are mainly syntactic: they are about subject uniqueness (unless there is a coordination), the necessary sharing of the subject function among coordinated NPs, etc. The remaining candidates are then considered as real subjects. The other syntactic functions, such as object, PP-Obj, verb modifier, etc. are tagged using similar steps.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Marking Syntactic Functions", |
|
"sec_num": "5" |
|
}, |
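The mark-then-filter strategy described above can be sketched as two small functions: mark every typographically plausible candidate, then apply the syntactic constraints. All field and function names here are illustrative assumptions, not the parser's actual rule names.

```python
# Sketch of two-phase subject assignment (illustrative names only).
# Each NP is a dict with a token position and the number of commas
# separating it from the finite verb.

def mark_potential_subjects(nps, verb_pos):
    """An NP is a potential subject only if it precedes the verb and is
    not separated from it by exactly one comma (the apposition case,
    e.g. ', Jacques Boutet,')."""
    return [np for np in nps
            if np["pos"] < verb_pos and np["commas_to_verb"] != 1]

def filter_subjects(candidates):
    """Constraint: subject uniqueness unless there is a coordination --
    keep the candidate closest to the verb, plus any NP coordinated
    with it (which must share the subject function)."""
    if not candidates:
        return []
    best = max(candidates, key=lambda np: np["pos"])
    return [np for np in candidates
            if np is best or np.get("coordinated_with") == best["id"]]
```

The real system expresses both phases as finite-state networks; the point of the sketch is only the non-monotonic mark-then-eliminate order.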
|
{ |
|
"text": "5The USA president, Jacques Boutet, decided to present his profession of faith.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Marking Syntactic Functions", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Expanding Verb Segments", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "6", |
|
"sec_num": "77" |
|
}, |
|
{ |
|
"text": "Because primary segmentation is cautious, verb segments end right after a verb in order to avoid arbitrary attachment of argument or adjunct segments (NPs, PPs and APs on the right of a verb). However, experiments have shown that in some kinds of texts, mainly in technical manuals written in a \"controlled language\", it is worth applying the \"nearest attachment\" principle. We expand VCs to include segments and to consider them as arguments or adjuncts of the VC head. This reduces structural ambiguity in the parser output with a very small error rate. For instance, expanding VCs in the sentence given in the previous section leads to the following structure: Nevertheless, as this principle leads to a significant number of incorrect attachments in the case of more free-style texts, the VC expansion network is optionally applied depending on the input text.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "6", |
|
"sec_num": "77" |
|
}, |
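The optional nearest-attachment expansion can be sketched as a single left-to-right pass that absorbs NP, PP and AP segments into the preceding VC. This is an illustrative reconstruction; the parser itself performs the expansion with an optional replace-operator network.

```python
# Sketch of optional VC expansion by nearest attachment (illustrative).
# Segments are (label, text) pairs; any NP/PP/AP immediately following
# a VC is re-bracketed inside it as an argument or adjunct of the head.

def expand_vcs(segments):
    out = []
    for label, text in segments:
        if out and out[-1][0] == "VC" and label in ("NP", "PP", "AP"):
            vc_label, vc_text = out.pop()
            out.append(("VC", f"{vc_text} {label}:{text}"))  # absorb segment
        else:
            out.append((label, text))
    return out
```

Because the pass is separate from primary segmentation, it can simply be left out of the parsing sequence for free-style texts, which is exactly the modularity the paper describes.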
|
|
{ |
|
"text": "As mentioned above, the parser is implemented as a sequence of finite state networks. The total size of the 14 networks we currently use is about 500 KBytes of disk space. The speed of analysis is around 150 words per second on a SPAP~Cstation 10 machine running in a development environment that we expect to optimize in the future. As for linguistic performance, we conducted a preliminary evaluation of subject recognition over a technical manual text (2320 words, 157 sentences) and newspaper articles from Le Monde (5872 words, 249 sentences). The precision and recall rates were respectively 99.2% and 97.8% in the first case, 92.6% and 82.6% in the case of the newspaper articles. This difference in performance is due to the fact that, on the one hand, we used the technical manual text to develop the parser and on the other hand, it shows much less rich syntactic structures than the newspaper text. We are currently conducting wider experiments to evaluate the linguistic accuracy of the parser.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Performance", |
|
"sec_num": "7" |
|
}, |
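The precision and recall figures above follow the standard definitions: precision is the fraction of parser-marked subjects that are correct, and recall is the fraction of true subjects that the parser found. A minimal sketch of the computation (the function name is our own):

```python
# Standard precision/recall over marked subjects (illustrative helper).

def precision_recall(predicted, gold):
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)                        # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall
```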
|
{ |
|
"text": "Below are some parsing samples, where the output is slightly simplified to make it more readable. In particular, morphosyntactic tags are hidden and only the major functions and the segment boundaries appear. lAP supr@me AP]/<NM ./SENT", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parsing Samples", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "The incremental finite-state parser presented here merges both constructive and reductionist approaches. As a whole, the parser is constructive: it makes incremental decisions throughout the parsing process. However, at each step, linguistic contraints may eliminate or correct some of the previously added information. Therefore, the analysis is non-monotonic and handles uncertainty. The linguistic modularity of the system makes it tractable and easy to adapt for specific texts (e.g. technical manuals or newspaper texts). This is done by adding specialized modules into the parsing sequence. This way, the core grammar is clearly separated from optional linguistic descriptions and heuristics.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "9" |
|
}, |
|
{ |
|
"text": "Ongoing work includes expansion of the French grammar, a wider evaluation, and grammar development for new languages. We will also experiment with our primary target applications, information retrieval and translation assistance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "9" |
|
}, |
|
{ |
|
"text": "Turning the starter switch to the auxiliary position, the pointer will then return to zero.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We would like to thank Kenneth It. Beesley and Lauri Karttunen for their editorial advice and Gregory Grefenstette for the valuable discussions we had about finite-state parsing and filtering.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Parsing by chunks", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Steven", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Abney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Dordrecht", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Steven P. Abney, 'Parsing by chunks', in Principled- Based Parsing, eds., It. Berwick, S. Abney, and C. Tenny, Kluwer Academic Publishers, Dor- drecht, (1991).", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "FASTUS: A Finite-State Processor for Information Extraction from Iteal-World Text", |
|
"authors": [ |
|
{ |
|
"first": "Douglas", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Appelt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jerry", |
|
"middle": [ |
|
"It" |
|
], |
|
"last": "Hobbs", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Bear", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Israel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mabry", |
|
"middle": [], |
|
"last": "Tyson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Proceedings IJCAI-93", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Douglas E. Appelt, Jerry It. Hobbs, John Bear, David Israel, and Mabry Tyson 'FASTUS: A Finite-State Processor for Information Extraction from Iteal-World Text', in Proceedings IJCAI-93, Chambery, France, August 1993.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Axiomas y algoritmos en la de-scripci6n de las len, guas naturales', V Congreso Argentino de Lingiiistica", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Gabriel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "B~s", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gabriel G. B~s, 'Axiomas y algoritmos en la de- scripci6n de las len, guas naturales', V Congreso Argentino de Lingiiistica, Mendoza, 1993.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Tagging French -comparing a statistical and a constraintbased method", |
|
"authors": [ |
|
{ |
|
"first": "Jean-", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierre", |
|
"middle": [], |
|
"last": "Chanod", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pasi", |
|
"middle": [], |
|
"last": "Tapanainen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Proceedings of the Seventh Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "149--156", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jean-Pierre Chanod and Pasi Tapanainen, 'Tagging French -comparing a statistical and a constraint- based method', in Proceedings of the Seventh Con- ference of the European Chapter of the Associa- tion for Computational Linguistics, pp. 149-156, Dublin, (1995).", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "A Robust Finite-State Parser for French", |
|
"authors": [ |
|
{ |
|
"first": "Jean-", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierre", |
|
"middle": [], |
|
"last": "Chanod", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pasi", |
|
"middle": [], |
|
"last": "Tapanainen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jean-Pierre Chanod and Pasi Tapanainen. 'A Ro- bust Finite-State Parser for French', in ESSLLI'96", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Workshop on Robust Parsing", |
|
"authors": [], |
|
"year": 1996, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "12--16", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Workshop on Robust Parsing, August 1996 12-16, Prague, Czech Republic.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Nouveaux courants en analyse syntaxique", |
|
"authors": [], |
|
"year": 1993, |
|
"venue": "Traitement automatique des langues", |
|
"volume": "34", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eva Ejerhed, 'Nouveaux courants en analyse syntax- ique', Traitement automatique des langues, 34(1), (1993).", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Shallow Parsing and Text Chunking: a View on Underspecification in Syntax", |
|
"authors": [ |
|
{ |
|
"first": "Stefano", |
|
"middle": [], |
|
"last": "Federici", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simonetta", |
|
"middle": [], |
|
"last": "Montemagni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vito", |
|
"middle": [], |
|
"last": "Pirrelli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "ESS-LLI'96 Workshop on Robust Parsing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "12--16", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stefano Federici, Simonetta Montemagni and Vito Pirrelli 'Shallow Parsing and Text Chunking: a View on Underspecification in Syntax', in ESS- LLI'96 Workshop on Robust Parsing, August 1996 12-16, Prague, Czech Republic.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Extended finite state models of language", |
|
"authors": [ |
|
{ |
|
"first": "Gregory", |
|
"middle": [], |
|
"last": "Grefenstette", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings ECAI '96 workshop on", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gregory Grefenstette, 'Light Parsing as Finite-State Filtering', in Proceedings ECAI '96 workshop on \"Extended finite state models of language\" Aug. 11-12, 1996, Budapest.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Natural language processing: the PLNLP approach, number 196 in The Kluwer international series in engineering and computer science", |
|
"authors": [], |
|
"year": 1993, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Karen Jensen, George E. Heidorn, and Stephen D. Richardson, eds., Natural language processing: the PLNLP approach, number 196 in The Kluwer international series in engineering and computer science, Kluwer Academic Publishers, Boston/Dordrecht/London, 1993.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "A Parser from Antiquity: An Early Application of Finite State Transducers to Natural Language Parsing", |
|
"authors": [ |
|
{ |
|
"first": "Aravind", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings ECAI '96 workshop on", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aravind Joshi. 'A Parser from Antiquity: An Early Application of Finite State Transducers to Natu- ral Language Parsing', in Proceedings ECAI '96 workshop on \"Extended finite state models of lan- guage\", Budapest, August 11-12, 1996, Budapest.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Directed replacement", |
|
"authors": [ |
|
{ |
|
"first": "Lauri", |
|
"middle": [], |
|
"last": "Karttunen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lauri Karttunen, 'Directed replacement', in Proceed- ings of the 34th Annual Meeting of the Association for Computational Linguistics, Santa Cruz, USA, (June 1996). Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Compiling and using finite-state syntactic rules", |
|
"authors": [ |
|
{ |
|
"first": "Kimmo", |
|
"middle": [], |
|
"last": "Koskenniemi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pasi", |
|
"middle": [], |
|
"last": "Tapanainen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Atro", |
|
"middle": [], |
|
"last": "Voutilainen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proceedings of the Fourteenth International Conference on Computational Linguistics COLING-92", |
|
"volume": "I", |
|
"issue": "", |
|
"pages": "156--162", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kimmo Koskenniemi, Pasi Tapanainen, and Atro Voutilainen, 'Compiling and using finite-state syn- tactic rules', in Proceedings of the Fourteenth International Conference on Computational Lin- guistics COLING-92 vol. I, pp. 156-162. Nantes, (1992).", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Analyse syntaxique transformationnelle du franfais par transducteurs et lexiquegrammaire", |
|
"authors": [ |
|
{ |
|
"first": "Emmanuel", |
|
"middle": [], |
|
"last": "Roche", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emmanuel Roche, Analyse syntaxique transforma- tionnelle du franfais par transducteurs et lexique- grammaire, Ph.D. dissertation, Universit6 de Paris 7, 1993.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Ambiguity resolution in a reductionistic parser", |
|
"authors": [ |
|
{ |
|
"first": "Atro", |
|
"middle": [], |
|
"last": "Voutilainen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pasi", |
|
"middle": [], |
|
"last": "Tapanainen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Proceedings of the Sixth Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "394--403", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Atro Voutilainen and Pasi Tapanainen, 'Ambigu- ity resolution in a reductionistic parser', in Pro- ceedings of the Sixth Conference of the Euro- pean Chapter of the Association for Computa- tional Linguistics, pp. 394-403, Utrecht, (1993).", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF1": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "(ADVP) ADJ ( COMMA [ (ADVP) ADJ COMMA ]+ ) ( COORD (ADVP) ADJ ) ] ADVP stands for adverb phrase and is defined as: [ ADV+ [[COORD[COMMA] ADV\u00f7]* ]", |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": ".: sans m~me prdvenir (without even warning): [VC [NP Mr NP] [NP Guilhaume NP] supprime VC] [PP des ~missions PP] [VC sans m~me pr~venir VC] [NP leurs responsables NP] PastPartV+) [ PastPartV*]] e.g.: en ddnongant (while denouncing) [VC en d6non~ant VC] [NP les provocations NP] [ADJ mensong~res ADJ] a With or without the prime minister.", |
|
"num": null |
|
}, |
|
"FIGREF3": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "76", |
|
"num": null |
|
}, |
|
"FIGREF4": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "VC [NP le president NP]/SUBJ [PP du CSA PP], [NP Jacques NP] [NP Boutet NP] , a d4cid4 VC] [VC de publier VC] [NP la profession NP] [PP de foi PP] ./SENT 5", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"text": "Temporary tagging of VC boundaries <VCS <VC1 Lorsqu' [NP on NP] appuie VC> [PP sur 1' interrupteur PP] [PP de feux PP] [PP de", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>dftresse PP] , [NP tousles indicateurs NP] [PP</td></tr><tr><td>de direction PP] clignotent VC> simultan4ment</td></tr><tr><td><VC2 et [NP un triangle NP] [AP rouge AP]</td></tr><tr><td>clignote VC> [PP dans i' interrupteur PP]</td></tr><tr><td>\u2022/SENT</td></tr><tr><td>VC marking</td></tr><tr><td>[VC [VC Lorsqu' [NP on NP] appuie VC] [PP sur</td></tr><tr><td>I' interrupteur PP] [PP de feux PP] [PP de</td></tr><tr><td>d4tresse PP] , [NP tousles indicateurs NP] [PP</td></tr><tr><td>de direction PP] clignotent VC] simultan4ment</td></tr><tr><td>[VC et [NP un triangle NP] lAP rouge AP]</td></tr><tr><td>clignote VC] [PP dans i' interrupteur PP]</td></tr><tr><td>\u2022/SENT</td></tr><tr><td>Verb Segmentation Example:</td></tr><tr><td>Initial input</td></tr><tr><td>Lorsqu' [NP on NP] appuie [PP sur 1'</td></tr><tr><td>interrupteur PP] [PP de feux PP] [PP de</td></tr><tr><td>d~tresse PP] , [NP tous_les indicateurs NP] [PP</td></tr><tr><td>de direction PP] clignotent simultan~ment et</td></tr><tr><td>[NP un triangle NP] lAP rouge AP] clignote [PP</td></tr><tr><td>dans 1' interrupteur PP] ./SENT 4</td></tr><tr><td>4 When the hazard warning switch is pressed all the</td></tr><tr><td>direction indicators will flash in unison and the switch</td></tr><tr><td>will flash a red triangle.</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |