|
{ |
|
"paper_id": "W97-0104", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T04:36:46.687392Z" |
|
}, |
|
"title": "A Statistics-Based Chinese Parser", |
|
"authors": [ |
|
{ |
|
"first": "Zhou", |
|
"middle": [], |
|
"last": "Qiang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Tsinghua University", |
|
"location": { |
|
"postCode": "100084", |
|
"settlement": "Beijing", |
|
"country": "P. R. China" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper describes a statistics-based Chinese parser, which parses Chinese sentences given correct segmentation and POS tagging information through the following processing stages: 1) predicting constituent boundaries, 2) matching open and close brackets and producing syntactic trees, 3) disambiguating and choosing the best parse tree. Evaluated against a small Chinese treebank of 5573 sentences, the parser shows the following encouraging results: 86% precision, 86% recall, 1.1 crossing brackets per sentence and 95% labeled precision.",
|
"pdf_parse": { |
|
"paper_id": "W97-0104", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper describes a statistics-based Chinese parser, which parses Chinese sentences given correct segmentation and POS tagging information through the following processing stages: 1) predicting constituent boundaries, 2) matching open and close brackets and producing syntactic trees, 3) disambiguating and choosing the best parse tree. Evaluated against a small Chinese treebank of 5573 sentences, the parser shows the following encouraging results: 86% precision, 86% recall, 1.1 crossing brackets per sentence and 95% labeled precision.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Since large-scale annotated corpora, such as the Penn Treebank [MSM93] , were built for English, statistical knowledge extracted from them has been shown to be more and more crucial for natural language parsing and disambiguation. Hindle and Rooth (1993) tried to use word association information to disambiguate the prepositional phrase attachment problem in English. Brill (1993a) proposed a transformation-based error-driven automatic learning method, which has been used in part-of-speech (POS) tagging [Bri92] , text chunking [RM95] and sentence bracketing [Bri93b] . Bod's data-oriented parsing technique directly used an annotated corpus as a stochastic grammar for parsing [RB93] .",
|
"cite_spans": [ |
|
{ |
|
"start": 63, |
|
"end": 70, |
|
"text": "[MSM93]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 372, |
|
"end": 384, |
|
"text": "Brill(1993a)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 507, |
|
"end": 514, |
|
"text": "[Bri92]", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 531, |
|
"end": 537, |
|
"text": "[RM95]", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 562, |
|
"end": 570, |
|
"text": "[Bri93b]", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 681, |
|
"end": 687, |
|
"text": "[RB93]", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction 1", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Based on statistical decision-tree models automatically learned from a treebank, Magerman's SPATTER parser showed good performance in parsing Wall Street Journal texts [DM95] . Collins (1996) described a statistical parser based on probabilities of dependencies between head-words in a treebank, which performs at least as well as SPATTER.",
|
"cite_spans": [ |
|
{ |
|
"start": 170, |
|
"end": 176, |
|
"text": "[DM95]", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 179, |
|
"end": 192, |
|
"text": "Collins(1996)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction 1", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "As a distinctive language, Chinese has many characteristics different from English. Although Chinese information processing techniques have made great progress since 1980, how to use statistical information efficiently in a Chinese parser is still largely unexplored territory. This paper describes our preliminary work on building a Chinese parser based on different kinds of statistics extracted from a treebank. It parses Chinese sentences given correct segmentation and POS tagging information through the following processing stages: 1) predicting constituent boundaries using local context statistics, 2) matching open and close brackets and producing syntactic trees using boundary tag distribution data and syntactic tag reduction rules, 3) disambiguating parse trees using stochastic context-free grammar (SCFG) rules. Evaluated against a small Chinese treebank of 5573 sentences, the parser shows the following encouraging results: 86% precision, 86% recall, 1.1 crossing brackets per sentence and 95% labeled precision. This work illustrates that some simple treebank statistics may play an important role in Chinese sentence parsing and disambiguation.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction 1", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The rest of the paper is organized as follows. Section 2 briefly introduces the statistical data set used in our parser. Section 3 describes the detailed parsing algorithm, including the boundary prediction model, the bracket matching model, matching restriction schemes and the statistical disambiguation model. Section 4 gives current experimental results. Finally, a summary and future work are discussed in Section 5.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction 1", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The difficulty of parsing natural language sentences lies in their high ambiguity. Traditionally, disambiguation problems in parsing have been addressed by enumerating possibilities and explicitly declaring knowledge that might aid the most interesting natural language processing problems. As large-scale annotated corpora have become available, automatic knowledge acquisition from them has become a new, efficient approach and has been widely used in many natural language processing systems.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Statistics from treebank", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Treebanks are collections of sentences marked with syntactic constituent structure trees. The statistics extracted from a large-scale treebank reveal useful syntactic distribution principles and are very helpful for disambiguation in a parser. Some statistical data and rules used in our parser are briefly described as follows:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Statistics from treebank", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "(1) Boundary distribution data (S1). This group of data shows the influence of different context information on the constituent boundaries in a sentence, counted by the co-occurrence frequencies of the constituent boundary labels (bi) with the words (wi) and part-of-speech (POS) tags (ti), which include: (a) the co-occurrence frequencies with functional words: f(wi, bi); (b) the co-occurrence frequencies with a single POS tag: f(ti, bi); (c) the co-occurrence frequencies with local POS tags: f(bi, ti, ti+1) or f(ti-1, ti, bi). They play an important role in the prediction of constituent boundary locations.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Statistics from treebank", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "(2) Syntactic tag reduction data (S2). This group of data records the possibilities for constituent structures to be reduced to different syntactic tags, represented by a set of statistical rules: constituent structure -> {syntactic tag, reduction probability}. For example, the rule v+n -> vp 0.93, np 0.07 indicates that a syntactic constituent composed of a verb (v) and a noun (n) can be reduced to a verb phrase (vp) with probability 0.93, and to a noun phrase (np) with probability only 0.07. Based on these rules, it is easy to determine the suitable syntactic tag for a parsed constituent according to its internal structure components.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Statistics from treebank", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In Chinese, there is a group of verbs with special syntactic functions. They can directly modify a noun, such as the verb \"xunlian (train)\" in the phrase \"xunlian shouce (training handbook)\". Therefore, we have noun phrases with the constituent structure \"v+n\" in the Chinese treebank.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Statistics from treebank", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "(3) Syntactic tag distribution on a boundary (S3). This group of data expresses the possibilities for an open or a close bracket to be the boundary of a constituent with a certain syntactic tag under different POS contexts. For example, n [ p -> vp 0.531, pp 0.462, np 0.007 indicates that the probability for an open bracket under the context of a noun (n) and a preposition (p) to be the left boundary of a verb phrase (vp) is 0.531, of a prepositional phrase (pp) 0.462, and of a noun phrase (np) 0.007. This kind of data provides the basis for matching brackets and labeling the matched constituents.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "!", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(4) Constituent preference data (S4). This group of data records the preference for a constituent to be combined with its left adjacent constituent or its right adjacent one under local context, counted by the frequencies of different constituent combination cases in the treebank (see Figure 1 ), which are represented as: {<constituent combination case>, <left combination frequency>, <right combination frequency>}. For example, {p+np+vp, 190, 0} indicates that the combination frequency of the noun phrase (np) with the preposition (p) under the local context \"p+np+vp\" is 190, and with the verb phrase (vp) is 0. These data will be helpful in the preference matching model. (5) Probabilistic constituent structure rules (S5). This group of data associates a probability with each constituent structure rule of the grammar; such rules are also called stochastic context-free grammar (SCFG) rules. The probability of a constituent structure rule A -> αβγ can be calculated as follows:",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 279, |
|
"end": 287, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "!", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "P(A -> αβγ) = f(A -> αβγ) / Σ f(A -> α'β'γ')",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "!", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where f(A -> αβγ) is the frequency of the constituent structure A -> αβγ in the treebank, and the sum ranges over all rules expanding A. It provides useful information for syntactic disambiguation. The key to our approach is to simplify the parsing problem into two processing stages. First, the statistical prediction model assigns a suitable constituent boundary tag to every word in the sentence and produces a partially bracketed sentence (Figure 2(c) ). Second, the preference matching model constructs the syntactic trees through bracket matching operations and selects the preferred matched tree using a probability score scheme as output (Figure 2(d) ). ",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 374, |
|
"end": 386, |
|
"text": "(Figure 2(c)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 573, |
|
"end": 585, |
|
"text": "(Figure 2(d)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "!", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A constituent boundary parse of a sentence can be represented by a sequence of boundary tags. Each tag corresponds to one word in the sentence, and can take the value L, M or R, respectively meaning the beginning, continuation or termination of a constituent in the syntactic tree. A constituent boundary parse B is therefore given by B = (b1, b2, ..., bn), where bi is the boundary tag of the i-th word and n is the number of words in the sentence. Let S = <W,T> be the input sentence for syntactic analysis, where W = w1, w2, ..., wn is the word sequence in the sentence, and T = t1, t2, ..., tn is the corresponding POS tag sequence, i.e., ti is the POS tag of wi. Just like the statistical approaches in many automatic POS tagging programs, our job is to select the constituent boundary sequence B' with the highest score, P(B|S), from all possible sequences. B' = argmax P(B|S) = argmax P(S|B)P (B) (1)",
|
"cite_spans": [ |
|
{ |
|
"start": 884, |
|
"end": 887, |
|
"text": "(B)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The boundary prediction model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Assuming the effects of word information and POS information are independent, we get",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The boundary prediction model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P(S|B) = P(W|B) P(T|B)",
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "The boundary prediction model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Furthermore, we replace P(W|B) and P(T|B) by the approximation that each constituent boundary is determined only by a functional word (wi) or the local POS context (Ci). In this way, a statistical model for the automatic prediction of constituent boundaries is set up. There are two directions to improve the prediction model. First, many post-editing rules, manually developed or automatically learned by an error-driven learning method, can be used to refine the automatic prediction outputs [ZQ96] . Second, a new statistical model based on the forward-backward algorithm can produce multiple boundary predictions for a word in the sentence [ZZ96] .",
|
"cite_spans": [ |
|
{ |
|
"start": 488, |
|
"end": 494, |
|
"text": "[ZQ96]", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 636, |
|
"end": 642, |
|
"text": "[ZZ96]", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The boundary prediction model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In order to build a complete syntactic tree based on the boundary prediction information, two basic problems must be resolved. The first is how to find the reasonable constituents in the partially bracketed sentence. The second is how to label the found constituents with suitable syntactic tags. This section proposes some basic concepts and operations of the matching model to deal with the first problem, and section 3.3.1 gives methods to resolve the second. The formal description of the bracket matching model can be found in [ZQd96] . (3) Matched constituent: a matched constituent MC(i,j) is a syntactic constituent constructed by the simple matching operation SM(i,j) or the expanded matching operation EM(i,j).",
|
"cite_spans": [ |
|
{ |
|
"start": 555, |
|
"end": 562, |
|
"text": "[ZQd96]", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic matching model", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Therefore, a basic matching algorithm can be built as follows: starting from the preprocessed sentence S = <W,T,B>, we first apply the simple matching operation, then the expanded matching operation, so as to find every possible matched constituent in the sentence. The complete matching principle guarantees that this algorithm will produce all matched constituents in the sentence. See [ZQd96] for more detailed information on this principle and its formal proof.",
|
"cite_spans": [ |
|
{ |
|
"start": 388, |
|
"end": 395, |
|
"text": "[ZQd96]", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic matching model", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The basic matching algorithm based on the complete matching principle is inefficient, because many ungrammatical or unnecessary constituents can be produced by the two matching operations. In order to improve the efficiency of the algorithm, some matching restriction schemes are needed: (1) labeling the matched constituents with reasonable syntactic tags, (2) setting matching restriction regions, (3) discarding unnecessary matching operations according to local preference information.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Matching restriction schemes", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The aim of the labeling approach is to eliminate ungrammatical matched constituents and to label the reasonable constituents with suitable syntactic tags, according to their internal structure and external context information.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Constituent labeling", |
|
"sec_num": "3.3.1" |
|
}, |
|
{ |
|
"text": "First, some common erroneous constituent structures can be enumerated under the current POS tagset and syntactic tagset. Moreover, many heuristic rules for finding ungrammatical constituents can also be summarized according to constituent combination principles. Based on these, most ungrammatical constituents can be eliminated.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Constituent labeling", |
|
"sec_num": "3.3.1" |
|
}, |
|
{ |
|
"text": "Then, we can assign a suitable syntactic tag to each matched constituent through the following sequential processing steps:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Constituent labeling", |
|
"sec_num": "3.3.1" |
|
}, |
|
{ |
|
"text": "(a) Set the syntactic tag according to the statistical reduction rule, if one can be found in the syntactic tag reduction data (S2) using the constituent structure string as a keyword.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Constituent labeling", |
|
"sec_num": "3.3.1" |
|
}, |
|
{ |
|
"text": "(b) Determine the syntactic tag according to the intersection of the tag distribution sets of the open and close brackets on the constituent boundary, if they can be found in the statistical data (S3).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Constituent labeling", |
|
"sec_num": "3.3.1" |
|
}, |
|
{ |
|
"text": "(c) Assign a special tag that is not in the current syntactic tagset to every constituent still unlabeled after the above two processing steps.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Constituent labeling", |
|
"sec_num": "3.3.1" |
|
}, |
|
{ |
|
"text": "There are many regionally restricted constituents in natural language, such as reference constituents within a pair of quotation marks: \"...\", and the regular collocation phrase \"zai ... de shihou (when ...)\" in Chinese. The constituents inside them cannot have syntactic relationships with those outside.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Restriction regions for matching", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "In the bracket matching model, these cases can be generalized as a matching restriction region (MRR), informally represented as the region <RL, RR> in Figure 3 . ",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 156, |
|
"end": 164, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Restriction regions for matching", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "Consider such a parsing state after the simple matching operation SM(i,j):",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Local preference matching", |
|
"sec_num": "3.3.3" |
|
}, |
|
{ |
|
"text": "[ti-1 MC(i,j) tj+1]",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Local preference matching", |
|
"sec_num": "3.3.3" |
|
}, |
|
{ |
|
"text": "Starting from it, there are two possible expanded matching operations: EM(i-1,j) or EM(i,j+1). Both must be processed according to the basic matching algorithm, and two candidate matched constituents, MC(i-1,j) and MC(i,j+1), will be produced. But in many cases, one of these operations is unnecessary, because only one candidate constituent may be included in the best parse tree. These superfluous matching operations reduce the parsing efficiency of the basic matching algorithm. Let \"A B C\" be the local matching context (for the above example, A = ti-1, B = MC(i,j), and C = tj+1). P(B,C) is the right combination probability for constituent B and P(A,B) is its left combination probability; both can be easily computed using the constituent preference data (S4) described in section 2. Set α as the difference threshold. Then, a simple preference-based approach can be added to the basic matching algorithm to improve the parsing efficiency:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Local preference matching", |
|
"sec_num": "3.3.3" |
|
}, |
|
{ |
|
"text": "if P(B,C) - P(A,B) > α, then the matching operation [A,B] will be discarded. ",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Local preference matching", |
|
"sec_num": "3.3.3" |
|
}, |
|
{ |
|
"text": "This section describes the way the best syntactic tree is selected. A statistical approach to this problem is to use SCFG rules extracted from the treebank and to set up a probability score scheme for disambiguation.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "3.4 Statistical disambiguation model",
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Assume a constituent labeled with syntactic tag PH is composed of the syntactic components RP1, RP2, ..., RPn. Its parsing probability P(PH) can be calculated through the following formula. By taking the logarithm of both sides of equation 7, we get the probability score Score(PH): ",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "3.4 Statistical disambiguation model",
|
"sec_num": null |
|
}, |
|
|
{ |
|
"text": "Formally, a labeled constituent MC(1,n) can be viewed as a syntactic tree. Therefore, the most likely parse tree under this score model is the matched constituent with the maximum probability score, i.e. Tbest = argmax Score (MC(1,n) ).",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 239, |
|
"end": 247, |
|
"text": "(MC(1,n)", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "3.4 Statistical disambiguation model",
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In the absence of an available annotated Chinese corpus, we had to build a small Chinese treebank for training and evaluating the parser. It consists of sentences extracted from two parts of Chinese texts: (1) a test set for Chinese-English machine translation systems (Text A), (2) Singapore primary school textbooks on the Chinese language (Text B). Table 1 shows the basic statistics of these two parts of the treebank. The treebank is then divided into a training set with 4777 sentences and a test set with 796 sentences, based on a balanced sampling principle. Figure 4 shows the distributions of sentence length in the training and test sets. In addition, according to the number of words (including punctuation) in the sentence, all sentences in the treebank can be further classified into two sets. One is the simple sentence set, in which every sentence has no more than 20 words. The other is the complex sentence set, in which every sentence has more than 20 words. Therefore, we can obtain complete knowledge about the performance of the parser by comparing it on these two types of sentences. Table 2 shows the distribution data of simple and complex sentences in the training and test sets. In order to evaluate the performance of the current Chinese parser, we use the following measures. Table 3 shows the experimental results. On an 80MHz 486 personal computer with 16 megabytes of RAM, the parser can parse about 1.38 sentences per second. ",
|
"cite_spans": [],
|
"ref_spans": [ |
|
{ |
|
"start": 353, |
|
"end": 360, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
}, |
|
{ |
|
"start": 564, |
|
"end": 572, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF10" |
|
}, |
|
{ |
|
"start": 1112, |
|
"end": 1119, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 1319, |
|
"end": 1326, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In this paper, we propose a statistics-based Chinese parser with the following characteristics: ",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "I 5 Conclusion", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(1) The idea of separating constituent boundary prediction into a preprocessing stage before the parser, just like the widely accepted POS tagging, is based on the following premises: (a) most constituent boundaries in a Chinese sentence can be predicted from their local word and POS information; (b) the parsing complexity can be reduced based on constituent boundary prediction.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "|", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(2) The proof of the complete matching principle and the application of the matching restriction schemes guarantee the soundness and efficiency of the matching algorithm.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "|", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(3) Using SCFG rules as the main disambiguation knowledge cuts down the hard work of manually developing a complex and detailed disambiguation rule base.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "|", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Although the experimental results are encouraging, there are many possibilities for improving the algorithm. Some unsupervised training methods for SCFG rules, such as the inside-outside algorithm [LY90] and its improved variants ( [PS92] , [SYW95] ), should be tried in the absence of large-scale Chinese treebanks. The disambiguation model could be extended to capture context-sensitive statistics [CC94] .",
|
"cite_spans": [ |
|
{ |
|
"start": 198, |
|
"end": 204, |
|
"text": "[LY90]", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 234, |
|
"end": 240, |
|
"text": "[PS92]", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 243, |
|
"end": 250, |
|
"text": "[SYW95]", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 402, |
|
"end": 408, |
|
"text": "[CC94]", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "|", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The POS and syntactic tags used in this sentence are briefly described as follows; some detailed information about our POS and syntactic tagsets can be found in [ZQd96]. [POS tags]: r-pronoun, n-noun, v-verb, m-numeral, q-classifier, w-punctuation. [Syn tags]: np-noun phrase, mp-numeral-classifier phrase, vp-verb phrase, dj-simple sentence pattern, zj-complete sentence.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The author would like to thank Prof. Yao Tianshun, Prof. Yu Shiwen and Prof. Huang Changning for their kind advice and support, and many colleagues and students in the Institute of Computational Linguistics, Peking University, for proofreading the treebank. The research was supported by National Natural Science Foundation Grant 6948300~2.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "A Procedure for Quantitatively Comparing the Syntactic Coverage of English Grammars", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Black", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Proceedings of the February 1991 DARPA Speech and Natural language Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "306--311", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "E. Black et al. (1991). \"A Procedure for Quantitatively Comparing the Syntactic Coverage of English Grammars.\" In Proceedings of the February 1991 DARPA Speech and Natural language Workshop, 306-311. IB", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "A simple rule-based part of speech tagger", |
|
"authors": [ |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Brill",
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proceedings, Third Conference on Applied Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "152--155", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eric Brill (1992). \"A simple rule-based part of speech tagger\". In Proceedings, Third Conference on Applied Natural Language Processing. Trento, Italy, 152-155.",
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A Corpus-Based Approach to Language Learning", |
|
"authors": [ |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Brill", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eric Brill (1993). A Corpus-Based Approach to Language Learning. Ph.D. thesis, University of Pennsylvania.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Grammar Induction and Parsing Free Text : A Transformation-Based Approach", |
|
"authors": [ |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Brill", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Proc. of ACL-31", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "259--265", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eric Brill. (1993). \"Grammar Induction and Parsing Free Text: A Transformation-Based Approach.\" In Proc. of ACL-31, 259-265.",
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Context-Sensitive Statistics For Improved Grammatical Language Models", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Charniak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Carroll", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Proc. of AAAI-94",
|
"volume": "", |
|
"issue": "", |
|
"pages": "728--733", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "E. Charniak & G. Carroll. (1994). \"Context-Sensitive Statistics For Improved Grammatical Language Models.\" In Proc. of AAAI-94, 728-733.",
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "A New Statistical Parser Based on Bigram Lexical Dependencies", |
|
"authors": [ |
|
{ |
|
"first": "Michael John", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proc. of ACIL-34", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "84--191", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael John Collins (1996). \"A New Statistical Parser Based on Bigram Lexical Dependencies.\" In Proc. of ACIL-34, i 84-191.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Statistical Decision-Tree Models for Parsing", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Magerman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Proc. of ACL-I", |
|
"volume": "95", |
|
"issue": "", |
|
"pages": "276--303", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David M. Magerman. (1995). \"Statistical Decision-Tree Models for Parsing\", In Proc. of ACL- I, 95, 276-303.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Parsing with context-free grammars and word statistics", |
|
"authors": [ |
|
{ |
|
"first": "Eugene", |
|
"middle": [], |
|
"last": "Chamiak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eugene Chamiak (1995). \"Parsing with context-free grammars and word statistics\", Technical report C$-95-28, Department of Computer Science, Brown University.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Structural Ambiguity and Lexical Relations", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Hindle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Rooth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "", |
|
"volume": "19", |
|
"issue": "", |
|
"pages": "103--120", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Hindle & M. Rooth. (1993). \"Structural Ambiguity and Lexical Relations\", Computational \u00a3ingu/st/cs, 19(1), 103-120.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "The estimation of stochastic context-free grammars using the Inside-Outside algorith.~B", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Lari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Young", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Compute Speech and Language", |
|
"volume": "4", |
|
"issue": "1", |
|
"pages": "35--56", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K.Lari, and S.J.Young. (1990). \"The estimation of stochastic context-free grammars using the Inside-Outside algorith.~B.\" Compute Speech and Language, 4(1), 35-56.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Building a Large Annotated Corpus of English: The Penn Treebank", |
|
"authors": [ |
|
{ |
|
"first": "Mitchell", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Marcus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mary", |
|
"middle": [ |
|
"Ann" |
|
], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Beatrice", |
|
"middle": [], |
|
"last": "Santorini", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Computational Linguistics", |
|
"volume": "19", |
|
"issue": "2", |
|
"pages": "313--330", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mitchell P.Marcus, Mary Ann Ma.rcinkiewicz, and Beatrice Santorini (1993). \"Building a Large Annotated Corpus of English: The Penn Treebank\", Computational Linguistics, 19(2), 313- 330.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Inside-Outside reesfimation from partially bracketed Corpora", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Pereim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Schabes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Prec. of ACL-30", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "128--163", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "F. Pereim, and Y.Schabes. (1992). \"Inside-Outside reesfimation from partially bracketed Corpora.\" In Prec. of ACL-30, 128-I35.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Using an Annotated Language Corpus as a Virtual Stochastic Grammar", |
|
"authors": [], |
|
"year": 1993, |
|
"venue": "Proc. of AAAA-03", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "778--783", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rens Bed. (1993). \"Using an Annotated Language Corpus as a Virtual Stochastic Grammar\", In Proc. of AAAA-03, 778-783.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Text Chunking using Transformation-Based Learning", |
|
"authors": [ |
|
{ |
|
"first": "Lance", |
|
"middle": ["A"], |
|
"last": "Ramshaw", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mitchell", |
|
"middle": ["P"], |
|
"last": "Marcus", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Proceedings of the third workshop on very large corpora", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "82--94", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lance A. Ramshaw & Mitchell P. Marcus (1995). \"Text Chunking using Transformation-Based Learning\", In Proceedings of the third workshop on very large corpora, 82-94.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "An inference approach to grammar construction", |
|
"authors": [ |
|
{ |
|
"first": "H-H", |
|
"middle": [], |
|
"last": "Shih", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": ".", |
|
"middle": [ |
|
". S J" |
|
], |
|
"last": "Young", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Waegner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Compute~\"~hzmatLanguage", |
|
"volume": "9", |
|
"issue": "3", |
|
"pages": "235--256", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "H-H. Shih,...S.J. Young, N.P. Waegner. (1995). \"An inference approach to grammar construction\", Compute~\"~hzmatLanguage, 9(3), 235-256.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "A Model for Automatic Prediction of Chinese Phrase Boundary Location", |
|
"authors": [ |
|
{ |
|
"first": "Zhou", |
|
"middle": [], |
|
"last": "Qiang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Journal of Soflware", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "315--322", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhou Qiang (1996). \"A Model for Automatic Prediction of Chinese Phrase Boundary Location\", Journal of Soflware, F'ol 7 Supplement, 315-322.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Phrase Bracketing and Annotating on Chinese Language Corpus", |
|
"authors": [ |
|
{ |
|
"first": "Zhou", |
|
"middle": [], |
|
"last": "Qiang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhou Qiang (1996). Phrase Bracketing and Annotating on Chinese Language Corpus, Ph.D. dissertation,, Dept. of Computer Science and Technology, Peking University, June 1996.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "An improved Model for Automatic Prediction of Chinese Phrase Boundary Location", |
|
"authors": [ |
|
{ |
|
"first": "Zhou", |
|
"middle": [], |
|
"last": "Qiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhang", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Prec. of lCCC '96", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "75--81", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhou Qiang, Zhang Wei (1996). \"An improved Model for Automatic Prediction of Chinese Phrase Boundary Location\", In Prec. of lCCC '96, Singapore, June 4-7, 75-81.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"uris": null, |
|
"text": "The overview of different constituent combination cases in treebank. (a) The left combination case: RP, RI'~... l~i~ RP2... RPm; (b) The right combination case: RPi RP2... l~Pr. ~ KPol RPa ... RP~.", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"uris": null, |
|
"text": "The aim of the parser is to take a correctly segmented and POS tagged Chinese sentence as input(for ~' example Figure 2(a)) and produce a phrase structure ~ee as output(Figure 2(b)). A parsing algorithm to i~ this problem must deal with two important issues: (1) how to produce the suitable syntactic trees from a sequence, (2) how to select the best tree from all of the possible parse trees.", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"uris": null, |
|
"text": "(a) ~(my)/r ~l~ \u2022 Corother)/n ~ (want)/v ~. (buy)/v (football)/n o (period)/w 2 My brother wants to buy twofooToalls. (two)/m ~(-classifier)/q ~]~ (b) [zj[dj[np ~,,/r ~/n][vp ~/v [vp ~/v [np[mp ~/m \"1\"/q ] ~,.~E,~/n ]]]] o /w] (c) [~/r ~/n] [Ply [~/v [~/m ~/q] /,~=~/n] o /w] zj (d) dj vp np ___-.-,----. np mp [~../r ~ ]' ~/v [~/v [~/m -'~\"/q] /,~/nl ./w Figure 2. An overview of th6 representation used by the parser. (a) The segmented and tagged sentence; (b) A candidate parse-tree(the correct one), represented by its bracketed and labeled form; (c) A constituent boundary' prediction representation of (a); (d) A preference matched tree of (c). Arrows show the bracket matching operations.", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"num": null, |
|
"uris": null, |
|
"text": "n P(SIB) = H P(w,lbOPfC, IbO i=l In addition, for P(R), it is possible touse simple bigram approximation: f/ P( B) = H P(bilbi-I) i=t where, P(btlbo) = P(bO.", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF4": { |
|
"num": null, |
|
"uris": null, |
|
"text": "' = arg max I'~ P(w, lb,)P(Cilb,)P(bilbi-1) of the model are based on the boundary distribution data(S 1) described in section 2, and can be calculated through maximum likelihood estimation(MLE) method. For example, P(C, Ib,) = max[ P(t,,t, . ,Ibi), P( t,-,, tilb,)] = max[f(bi, ti, ti+ O/f(bi),f(ti-t,ti,bO/f(bO] (6)", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF5": { |
|
"num": null, |
|
"uris": null, |
|
"text": "(I) Simple matching operation The simple matching SM(ij) is the matching of the open bracket (hi = L) and the close bracket (bj = R) under the condition: V b k = M, ke(ij). Expanded matching operation The expanded matching EM(ij) is the matching of the open bracket (b i =/.,) and the close bracket (bj = R) under one of the following conditions: (a) 3 {SM(i,k), i<k<j} and V bp =M, p\u00a2(kj). (b) 3 {SM(k~), i<k<j} and V bp = M, pe(i,k). (c) 3 {SM(i,k) ~-SM~,j),.i.~.k<p<j} and V bq =-M, qe(k,p).", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF6": { |
|
"num": null, |
|
"uris": null, |
|
"text": "Informal description ofa MRR <RL, RR>. The arcs show bracket matching operations, and the arcs marked with 'X' indicate that such matching operations are forbidden.Therefore, the basic matching algorithm can be improved by adding the following restrictions: (a) To restrict the matching operations inside MRR and guarantee them can't cross the boundary of the MRR.(b) To reduce the MRR as a constituent MC(RL,R.R) aitvr all matching operations inside MRR have been finished, so as to make it as a whole during the following matching operations.The key to use MRR efficiently is to correctly identify the possible restriction regions in the sentences. Reference [ZQ~i96-]'describ.e.s the automatic identification methods for some Chinese MRRs.", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF7": { |
|
"num": null, |
|
"uris": null, |
|
"text": "A,B)-P(B,C)>~ then the matching operation [B,C] will be discarded.", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF8": { |
|
"num": null, |
|
"uris": null, |
|
"text": "( PH) = H P( RP') \" P( PH --~ RP,RP2... RP,) (7) i=l where the probability P(PH-,. RP 1 RP 2 ... RPn) comes from statistical data(S5) defined in section 2. In addition, ffRP i is a word component, then set/'(RPi) = 1.", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF9": { |
|
"num": null, |
|
"uris": null, |
|
"text": "Score( PtI) = log P(PH) = log I:'( RPO. P( PtI ~ RPL.. RP,) i=l = ~ Score(RP,) + log P(PH ~ RP L.. RP,)", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF10": { |
|
"num": null, |
|
"uris": null, |
|
"text": "Distn'bution of sentence length in training and test sets.", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF11": { |
|
"num": null, |
|
"uris": null, |
|
"text": "Matched precision(MP) = number of correct matched constituents in proposed parse number of matched constituent in proposed parse recall(MR) = number of correct matched constituents in proposed parse number of constituents in treebank parse 3) Crossing Brackets(CBs) ffi number of constituents which violate constituent boundaries with a constituent in the treebank parse. The above measures are similar with the PARSEVAL measures defmed in [Bla91]. Here, for a matched constituent to be 'correct' it must have the same boundary location with a constituent in the treebank parse. 4) Boundary prediction precision(BPP) = number of words with correct constituent boundary prediction number of words in the sentence 5) I,abeled precisign(LP) = number of correcVi'abeled-constituents in proposed parse number of correct matched constituent in proposed parse 6) Sentence parsing ratio(SPg) = number\" of sentences having a proposed parse by parser number of input sentences", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF12": { |
|
"num": null, |
|
"uris": null, |
|
"text": "3: Results on the training set and test set. 0 CBs, _< 1 CBs, _< 2 CBs are the percentage of.sentences with 0, ~ 1 or g 2 crossing brackets respectively.", |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"content": "<table><tr><td/><td/><td>=</td><td/><td>\u2022</td><td/><td/><td/><td/></tr><tr><td/><td colspan=\"6\">Character Number ! Word Number ! Sentence Number i : l i</td><td colspan=\"2\">Mean Sentence Length(words/sent.)</td></tr><tr><td>TextA TextB</td><td>1434 4139</td><td>i \" i I a</td><td>i in</td><td>11821 52606</td><td>.... a i</td><td>17058 72434</td><td>|l</td><td>8.243 12.71 i</td></tr></table>", |
|
"text": "Basic statistics for the Chinese treebank.", |
|
"num": null, |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF1": { |
|
"content": "<table><tr><td/><td colspan=\"2\">Simple Sentences</td><td colspan=\"2\">Complex Sentences</td><td>Mean Sent.</td></tr><tr><td/><td>Sent.</td><td>% in Set</td><td>Sent.</td><td>% in Set</td><td>Length</td></tr><tr><td>,,</td><td>Number . ,,</td><td/><td>Number</td><td>J,.</td><td/></tr><tr><td>Training Set</td><td>4176</td><td>87.419</td><td>601</td><td>12.581</td><td>11.5~33</td></tr><tr><td>Test Set</td><td>682</td><td>85.804</td><td>113</td><td>16.477</td><td>14.</td></tr></table>", |
|
"text": "Distribution of the simple and complex sentences in the training and test sets.", |
|
"num": null, |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF2": { |
|
"content": "<table/>", |
|
"text": "", |
|
"num": null, |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF4": { |
|
"content": "<table><tr><td>I</td></tr><tr><td>I</td></tr><tr><td>I</td></tr><tr><td>I</td></tr><tr><td>I</td></tr><tr><td>I</td></tr><tr><td>!</td></tr><tr><td>I</td></tr></table>", |
|
"text": "and word statistics([EC95],[Coi96]).", |
|
"num": null, |
|
"type_str": "table", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |