|
{ |
|
"paper_id": "C08-1049", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T12:26:17.375546Z" |
|
}, |
|
"title": "Word Lattice Reranking for Chinese Word Segmentation and Part-of-Speech Tagging", |
|
"authors": [ |
|
{ |
|
"first": "Wenbin", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Key Lab. of Intelligent Information Processing", |
|
"institution": "", |
|
"location": { |
|
"postBox": "P.O. Box 2704", |
|
"postCode": "100190", |
|
"settlement": "Beijing", |
|
"country": "China" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Haitao", |
|
"middle": [], |
|
"last": "Mi", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Key Lab. of Intelligent Information Processing", |
|
"institution": "", |
|
"location": { |
|
"postBox": "P.O. Box 2704", |
|
"postCode": "100190", |
|
"settlement": "Beijing", |
|
"country": "China" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Qun", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Key Lab. of Intelligent Information Processing", |
|
"institution": "", |
|
"location": { |
|
"postBox": "P.O. Box 2704", |
|
"postCode": "100190", |
|
"settlement": "Beijing", |
|
"country": "China" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper, we describe a new reranking strategy named word lattice reranking, for the task of joint Chinese word segmentation and part-of-speech (POS) tagging. As a derivation of the forest reranking for parsing (Huang, 2008), this strategy reranks on the pruned word lattice, which potentially contains much more candidates while using less storage, compared with the traditional n-best list reranking. With a perceptron classifier trained with local features as the baseline, word lattice reranking performs reranking with non-local features that can't be easily incorporated into the perceptron baseline. Experimental results show that, this strategy achieves improvement on both segmentation and POS tagging, above the perceptron baseline and the n-best list reranking.", |
|
"pdf_parse": { |
|
"paper_id": "C08-1049", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper, we describe a new reranking strategy named word lattice reranking, for the task of joint Chinese word segmentation and part-of-speech (POS) tagging. As a derivation of the forest reranking for parsing (Huang, 2008), this strategy reranks on the pruned word lattice, which potentially contains much more candidates while using less storage, compared with the traditional n-best list reranking. With a perceptron classifier trained with local features as the baseline, word lattice reranking performs reranking with non-local features that can't be easily incorporated into the perceptron baseline. Experimental results show that, this strategy achieves improvement on both segmentation and POS tagging, above the perceptron baseline and the n-best list reranking.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Recent work for Chinese word segmentation and POS tagging pays much attention to discriminative methods, such as Maximum Entropy Model (ME) (Ratnaparkhi and Adwait, 1996) , Conditional Random Fields (CRFs) (Lafferty et al., 2001) , perceptron training algorithm (Collins, 2002) , etc. Compared to generative ones such as Hidden Markov Model (HMM) (Rabiner, 1989; Fine et al., 1998) , discriminative models have the advantage of flexibility in representing features, and usually obtains almost perfect accuracy in two tasks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 140, |
|
"end": 170, |
|
"text": "(Ratnaparkhi and Adwait, 1996)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 206, |
|
"end": 229, |
|
"text": "(Lafferty et al., 2001)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 262, |
|
"end": 277, |
|
"text": "(Collins, 2002)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 347, |
|
"end": 362, |
|
"text": "(Rabiner, 1989;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 363, |
|
"end": 381, |
|
"text": "Fine et al., 1998)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Originated by Xue and Shen (2003) , the typical approach of discriminative models conducts c 2008. Licensed to the Coling 2008 Organizing Committee for publication in Coling 2008 and for re-publishing in any form or medium. segmentation in a classification style, by assigning each character a positional tag indicating its relative position in the word. If we extend these positional tags to include POS information, segmentation and POS tagging can be performed by a single pass under a unify classification framework (Ng and Low, 2004) . In the rest of the paper, we call this operation mode Joint S&T. Experiments of Ng and Low (2004) shown that, compared with performing segmentation and POS tagging one at a time, Joint S&T can achieve higher accuracy not only on segmentation but also on POS tagging.", |
|
"cite_spans": [ |
|
{ |
|
"start": 14, |
|
"end": 33, |
|
"text": "Xue and Shen (2003)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 520, |
|
"end": 538, |
|
"text": "(Ng and Low, 2004)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 621, |
|
"end": 638, |
|
"text": "Ng and Low (2004)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Besides the usual local features such as the character-based ones (Xue and Shen, 2003; Ng and Low, 2004) , many non-local features related to POSs or words can also be employed to improve performance. However, as such features are generated dynamically during the decoding procedure, incorporating these features directly into the classifier results in problems. First, the classifier's feature space will grow much rapidly, which is apt to overfit on training corpus. Second, the variance of non-local features caused by the model evolution during the training procedure will hurt the parameter tuning. Last but not the lest, since the current predication relies on the results of prior predications, exact inference by dynamic programming can't be obtained, and then we have to maintain a n-best candidate list at each considering position, which also evokes the potential risk of depressing the parameter tuning procedure. As a result, many theoretically useful features such as higherorder word-or POS-grams can not be utilized efficiently.", |
|
"cite_spans": [ |
|
{ |
|
"start": 66, |
|
"end": 86, |
|
"text": "(Xue and Shen, 2003;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 87, |
|
"end": 104, |
|
"text": "Ng and Low, 2004)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A widely used approach of using non-local features is the well-known reranking technique, which has been proved effective in many NLP tasks, for instance, syntactic parsing and machine The character sequence we choose is \" ------\". For clarity, we represent each subsequence-POS pair as a single edge, while ignore the corresponding scores of the edges.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
|
{ |
|
"text": "translation (Collins, 2000; Huang, 2008) , etc. Especially, Huang (2008) reranked the packed forest, which contains exponentially many parses. Inspired by his work, we propose word lattice reranking, a strategy that reranks the pruned word lattice outputted by a baseline classifier, rather than only a n-best list. Word lattice, a directed graph as shown in Figure 1 , is a packed structure that can represent many possibilities of segmentation and POS tagging. Our experiments on the Penn Chinese Treebank 5.0 show that, reranking on word lattice gains obvious improvement over the baseline classifier and the reranking on n-best list. Compared against the baseline, we obtain an error reduction of 11.9% on segmentation, and 16.3% on Joint S&T.", |
|
"cite_spans": [ |
|
{ |
|
"start": 12, |
|
"end": 27, |
|
"text": "(Collins, 2000;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 28, |
|
"end": 40, |
|
"text": "Huang, 2008)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 60, |
|
"end": 72, |
|
"text": "Huang (2008)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 359, |
|
"end": 367, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Formally, a word lattice L is a directed graph V, E , where V is the node set, and E is the edge set. Suppose the word lattice is for sentence", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word Lattice", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "C 1:n = C 1 ..C n , node v i \u2208 V (i = 1..n \u2212 1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word Lattice", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "denotes the position between C i and C i+1 , while v 0 before C 1 is the source node, and v n after C n is the sink node. An edge e \u2208 E departs from v b and arrives at v e (0 \u2264 b < e \u2264 n), it covers a subsequence of C 1:n , which is recognized as a possible word. Considering Joint S&T, we label each edge a POS tag to represent a word-POS pair. A series of adjoining edges forms a path, and a path connecting the source node and the sink node is called diameter, which indicates a specific pattern of segmentation and POS tagging. For a diameter d, |d| denotes the length of d, which is the count of edges contained in this diameter. In Figure 1 , the path", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 638, |
|
"end": 646, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Word Lattice", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "p \u2032 = v 0 v 3 \u2192 v 3 v 5 \u2192 v 5 v 7", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word Lattice", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "is a diameter, and |p \u2032 | is 3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word Lattice", |
|
"sec_num": "2" |
|
}, |
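To make the lattice definition above concrete, here is a minimal Python sketch (an editorial illustration, not part of the original paper) that represents a word lattice as POS-labelled edges over character positions and enumerates its diameters; the class names, toy sentence and scoring field are illustrative assumptions rather than the paper's actual data structures.

```python
from collections import defaultdict

# An edge runs from node b to node e and covers characters C_{b+1}..C_e,
# carrying a word string, a POS tag and a score from the baseline classifier.
class Edge:
    def __init__(self, b, e, word, pos, score=0.0):
        self.b, self.e, self.word, self.pos, self.score = b, e, word, pos, score

class WordLattice:
    def __init__(self, n):
        self.n = n                    # sentence length; nodes are 0..n
        self.out = defaultdict(list)  # outgoing edges per node

    def add_edge(self, edge):
        self.out[edge.b].append(edge)

    def diameters(self):
        """Enumerate all source-to-sink paths (diameters) as lists of edges."""
        def walk(node, path):
            if node == self.n:
                yield list(path)
                return
            for e in self.out[node]:
                path.append(e)
                yield from walk(e.e, path)
                path.pop()
        yield from walk(0, [])

# A toy 3-character "sentence" with two possible segmentations.
lat = WordLattice(3)
lat.add_edge(Edge(0, 1, "c1", "NN"))
lat.add_edge(Edge(1, 3, "c2c3", "VV"))
lat.add_edge(Edge(0, 3, "c1c2c3", "NR"))
for d in lat.diameters():
    print([(e.word, e.pos) for e in d], "length =", len(d))
```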
|
{ |
|
"text": "Given a sentence s, its reference r and pruned word lattice L generated by the baseline classifier, the oracle diameter d * of L is define as the diameter most similar to r. With F-measure as the scoring function, we can identify d * using the algorithm depicted in Algorithm 1, which is adapted to lexical analysis from the forest oracle computation of Huang (2008) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 354, |
|
"end": 366, |
|
"text": "Huang (2008)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Oracle Diameter in Lattice", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Before describe this algorithm in detail, we depict the key point for finding the oracle diameter. Given the system's output y and the reference y * , using |y| and |y * | to denote word counts of them respectively, and |y \u2229 y * | to denote matched word count of |y| and |y * |, F-measure can be computed by: if e \u2022 label exists in r then 5:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Oracle Diameter in Lattice", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "F (y, y * ) = 2P R P + R = 2|y \u2229 y * | |y| + |y * |", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Oracle Diameter in Lattice", |
|
"sec_num": "2.1" |
|
}, |
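As a quick illustration of Equation (1), the following Python sketch (ours, not from the paper) computes precision, recall and F-measure by treating a candidate and a reference as sets of word-POS items; representing each matched word as a (start, end, word, POS) tuple is our own assumption.

```python
def prf(candidate, reference):
    """candidate, reference: iterables of hashable word-POS items,
    e.g. (start, end, word, pos) tuples. Returns (P, R, F)."""
    cand, ref = set(candidate), set(reference)
    matched = len(cand & ref)                    # |y intersect y*|
    p = matched / len(cand) if cand else 0.0     # precision
    r = matched / len(ref) if ref else 0.0       # recall
    denom = len(cand) + len(ref)
    f = 2 * matched / denom if denom else 0.0    # 2|y ∩ y*| / (|y| + |y*|)
    return p, r, f

# Example: one of two predicted word-POS pairs matches the reference.
y_pred = [(0, 1, "c1", "NN"), (1, 3, "c2c3", "VV")]
y_gold = [(0, 1, "c1", "NN"), (1, 2, "c2", "VV"), (2, 3, "c3", "NN")]
print(prf(y_pred, y_gold))
```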
|
{ |
|
"text": "T [i, j] \u2022 S[1] \u2190 1 6: else 7: T [i, j] \u2022 S[1] \u2190 0 8: for k s.t. T [i, k \u2212 1] and T [k, j] defined do 9: for p s.t. T [i, k \u2212 1] \u2022 S[p] defined do 10: for q s.t. T [k, j] \u2022 S[q] defined do 11: n \u2190 T [i, k \u2212 1] \u2022 S[p] + T [k, j] \u2022 S[q] 12: if n > T [i, j] \u2022 S[p + q] then 13: T [i, j] \u2022 S[p + q] \u2190 n 14: T [i, j] \u2022 S[p + q] \u2022 bp \u2190 k, p, q 15: t \u2190 argmaxt 2\u00d7T [1,|s|]\u2022S[t] t+|r| 16: d * \u2190 T r(T [1, |s|] \u2022 S[t].bp) 17: Output: oracle diameter: d * define T [i, j]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Oracle Diameter in Lattice", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": ", otherwise we leave this node undefined. In the first situation, we initialize this node's S structure according to whether the word-POS pair of e is in the reference (line 4\u22127). Line 8\u221214 update T [i, j]'s S structure using the S structures from all possible child-node pair,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Oracle Diameter in Lattice", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "T [i, k \u2212 1] and T [k, j].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Oracle Diameter in Lattice", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Especially, line 9 \u2212 10 enumerate all combinations of p and q, where p and q each represent a kind of diameter length in T", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Oracle Diameter in Lattice", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "[i, k \u2212 1] and T [k, j].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Oracle Diameter in Lattice", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Line 12 \u2212 14 refreshes the structure S of node T [i, j] when necessary, and meanwhile, a back pointer k, p, q is also recorded. When the dynamic programming procedure ends, we select the diameter length t of the top node T [1, |s|], which maximizes the F-measure formula in line 15, then we use function T r to find the oracle diameter d * by tracing the back pointer bp.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Oracle Diameter in Lattice", |
|
"sec_num": "2.1" |
|
}, |
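For intuition only, the sketch below (ours) picks the oracle diameter by brute force: it scores every diameter of a small lattice against the reference with the F-measure above and keeps the best one, reusing the toy WordLattice class from the earlier illustration. The paper's Algorithm 1 obtains the same oracle without enumerating diameters, via dynamic programming over the maximum matched word count per diameter length; this exhaustive version is only practical for tiny lattices.

```python
def oracle_diameter_bruteforce(lattice, reference):
    """lattice: a WordLattice as in the earlier sketch; reference: set of
    (b, e, word, pos) items. Returns (best_diameter, best_f)."""
    best_d, best_f = None, -1.0
    for d in lattice.diameters():
        cand = {(e.b, e.e, e.word, e.pos) for e in d}
        matched = len(cand & reference)
        f = 2 * matched / (len(cand) + len(reference))
        if f > best_f:
            best_d, best_f = d, f
    return best_d, best_f
```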
|
{ |
|
"text": "We can generate the pruned word lattice using the baseline classifier, with a slight modification. The classifier conducts decoding by considering each character in a left-to-right fashion. At each considering position i, the classifier enumerates all candidate results for subsequence C 1:i , by attaching each current candidate word-POS pair p to the tail of each candidate result at p's prior position, as the endmost of the new generated candidate. We give each p a score, which is the highest, among all C 1:i 's candidates that have p as their endmost. Then we select N word-POS pairs with the highest scores, and insert them to the lattice's edge set. This approach of selecting edges implies that, for the lattice's node set, we generate a node v i at each position i. Because N is the limitation on the count Algorithm 2 Lattice generation algorithm. 1: Input: character sequence C1:n 2: E \u2190 \u2205 3: for i \u2190 1 .. n do 4:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generation of the Word Lattice", |
|
"sec_num": "2.2" |
|
}, |
|
{

"text": "Algorithm 2 Lattice generation algorithm. 1: Input: character sequence C_{1:n} 2: E ← ∅ 3: for i ← 1 .. n do 4: cands ← ∅ 5: for l ← 1 .. min(i, K) do 6: w ← C_{i−l+1:i} 7: for t ∈ POS do 8: p ← ⟨w, t⟩ 9: p•score ← Eval(p) 10: s ← p•score + Best[i−l] 11: Best[i] ← max(s, Best[i]) 12: insert ⟨s, p⟩ into cands 13: sort cands according to s 14: E ← E ∪ cands[1..N]•p 15: Output: edge set of lattice: E",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Generation of the Word Lattice",

"sec_num": "2.2"

},
|
{ |
|
"text": "Line 3 \u2212 14 consider each character C i in sequence, cands is used to keep the edges closing at position i. Line 5 enumerates the candidate words ending with C i and no longer than K, where K is 20 in our experiments. Line 5 enumerates all POS tags for the current candidate word w, where P OS denotes the POS tag set. Function Eval in line 9 returns the score for word-POS pair p from the baseline classifier. The array Best preserve the score for sequence C 1:i 's best labelling results. After all possible word-POS pairs (or edges) considered, line 13 \u2212 14 select the N edges we want, and add them to edge set E.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generation of the Word Lattice", |
|
"sec_num": "2.2" |
|
}, |
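The following Python sketch (ours) mirrors the structure of Algorithm 2: for each ending position i it enumerates candidate words of length up to K, scores every word-POS pair, keeps the running Best scores, and retains only the N highest-scoring incoming edges per position (in-degree pruning). The eval_fn argument and the tag set are placeholders standing in for the baseline perceptron's scoring and the POS inventory.

```python
def generate_lattice(chars, eval_fn, pos_tags, K=20, N=5):
    """chars: list of characters C_1..C_n; eval_fn(word, tag) -> float score.
    Returns the pruned edge set as (b, e, word, tag, score) tuples."""
    n = len(chars)
    best = [float("-inf")] * (n + 1)   # Best[i]: best score of any labelling of C_1..C_i
    best[0] = 0.0
    edges = []
    for i in range(1, n + 1):                          # ending position (1-based)
        cands = []
        for l in range(1, min(i, K) + 1):              # candidate word length
            w = "".join(chars[i - l:i])                # w = C_{i-l+1..i}
            for t in pos_tags:
                score = eval_fn(w, t)
                s = score + best[i - l]                # prefix score through this edge
                best[i] = max(best[i], s)
                cands.append((s, (i - l, i, w, t, score)))
        cands.sort(key=lambda x: x[0], reverse=True)   # in-degree pruning: keep N best
        edges.extend(edge for _, edge in cands[:N])
    return edges

# Toy usage with a dummy scorer that prefers short words.
print(generate_lattice(list("abc"), lambda w, t: -len(w), ["NN", "VV"], K=3, N=2))
```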
|
{ |
|
"text": "Though this pruning strategy seems relative rough -simple pruning for edge set while no pruning for node set, we still achieve a promising improvement by reranking on such lattices. We believe more elaborate pruning strategy will results in more valuable pruned lattice.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generation of the Word Lattice", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "A unified framework can be applied to describing reranking for both n-best list and pruned word lattices (Collins, 2000; Huang, 2008) . Given the candidate set cand(s) for sentence s, the reranker selects the best item\u0177 from cand(s):", |
|
"cite_spans": [ |
|
{ |
|
"start": 105, |
|
"end": 120, |
|
"text": "(Collins, 2000;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 121, |
|
"end": 133, |
|
"text": "Huang, 2008)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reranking", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "y = argmax y\u2208cand(s) w \u2022 f (y)", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Reranking", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "For reranking n-best list, cand(s) is simply the set of n best results from the baseline classifier. While for reranking word lattice, cand(s) is the set of all diameters that are impliedly built in the lattice. w \u2022 f (y) is the dot product between a feature vector f and a weight vector w, its value is used to Algorithm 3 Perceptron training for reranking 1:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reranking", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Input: Training examples{cand(si), y * i } N i=1 2: w \u2190 0 3: for t \u2190 1 .. T do 4: for i \u2190 1 .. N do 5:\u0177 \u2190 arg max y\u2208cand(s i ) w \u2022 f (y) 6: if\u0177 = y * i then 7: w \u2190 w + f (y * i ) \u2212 f (\u0177) 8: Output: Parameters: w Non-local Template Comment W0T0 current word-POS pair W\u22121 word 1-gram before W0T0 T\u22121 POS 1-gram before W0T0 T\u22122T\u22121 POS 2-gram before W0T0 T\u22123T\u22122T\u22121", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reranking", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "POS 3-gram before W0T0 Table 1 : Non-local feature templates used for reranking rerank cand(s). Following usual practice in parsing, the first feature f 1 (y) is specified as the score outputted by the baseline classifier, and its value is a real number. The other features are non-local ones such as word-and POS-n-grams extracted from candidates in n-best list (for n-best reranking) or diameters (for word lattice reranking), and they are 0 \u2212 1 valued.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 23, |
|
"end": 30, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Reranking", |
|
"sec_num": "3" |
|
}, |
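To illustrate how the Table 1 templates turn a candidate diameter into 0-1 valued non-local features, here is a small Python sketch (ours); the feature-string format and the padding symbol for missing history are our own assumptions.

```python
def nonlocal_features(diameter):
    """diameter: list of (word, pos) pairs along a candidate path.
    Returns the set of Table-1-style features fired by the whole candidate."""
    feats = set()
    words = ["<s>"] * 3 + [w for w, _ in diameter]   # pad the history
    tags  = ["<s>"] * 3 + [t for _, t in diameter]
    for k in range(3, len(words)):
        w0, t0 = words[k], tags[k]
        feats.add(f"W0T0={w0}/{t0}")                                 # current word-POS pair
        feats.add(f"W-1={words[k-1]}_W0T0={w0}/{t0}")                # word 1-gram before
        feats.add(f"T-1={tags[k-1]}_W0T0={w0}/{t0}")                 # POS 1-gram before
        feats.add(f"T-2T-1={tags[k-2]}.{tags[k-1]}_W0T0={w0}/{t0}")  # POS 2-gram before
        feats.add(f"T-3T-2T-1={tags[k-3]}.{tags[k-2]}.{tags[k-1]}_W0T0={w0}/{t0}")
    return feats

print(sorted(nonlocal_features([("c1", "NN"), ("c2c3", "VV")])))
```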
|
{ |
|
"text": "We adopt the perceptron algorithm (Collins, 2002) to train the reranker. as shown in Algorithm 3. We use a simple refinement strategy of \"averaged parameters\" of Collins (2002) to alleviate overfitting on the training corpus and obtain more stable performance. For every training example {cand(s i ), y * i }, y * i denotes the best candidate in cand(s i ). For nbest reranking, the best candidate is easy to find, whereas for word lattice reranking, we should use the algorithm in Algorithm 1 to determine the oracle diameter, which represents the best candidate result.", |
|
"cite_spans": [ |
|
{ |
|
"start": 34, |
|
"end": 49, |
|
"text": "(Collins, 2002)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 162, |
|
"end": 176, |
|
"text": "Collins (2002)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training of the Reranker", |
|
"sec_num": "3.1" |
|
}, |
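Below is a compact Python sketch (ours) of the reranker training in Algorithm 3, including the "averaged parameters" refinement; feature vectors are sparse dicts, and the candidate sets, gold items (the n-best best result or the oracle diameter) and the feature function are assumed to be supplied by the caller.

```python
from collections import defaultdict

def train_reranker(examples, feature_fn, T=10):
    """examples: list of (candidates, gold) pairs, where gold is the best candidate.
    feature_fn(y) -> dict mapping feature name to value (f_1 is the baseline score).
    Returns averaged weights as a dict."""
    w = defaultdict(float)
    total = defaultdict(float)   # running sum of weight vectors for averaging
    steps = 0
    for _ in range(T):
        for candidates, gold in examples:
            # pick the highest-scoring candidate under the current weights
            pred = max(candidates,
                       key=lambda y: sum(w.get(f, 0.0) * v
                                         for f, v in feature_fn(y).items()))
            if pred != gold:                       # perceptron update on a mistake
                for f, v in feature_fn(gold).items():
                    w[f] += v
                for f, v in feature_fn(pred).items():
                    w[f] -= v
            for f, v in w.items():                 # accumulate for averaging
                total[f] += v
            steps += 1
    return {f: v / steps for f, v in total.items()}
```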
|
{ |
|
"text": "The non-local feature templates we use to train the reranker are listed in Table 1 . Notice that all features generated from these templates don't contain \"future\" words or POS tags, it means that we only use current or history word-or POS-n-grams to evaluate the current considering word-POS pair.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 75, |
|
"end": 82, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Non-local Feature Templates", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Although it is possible to use \"future\" information in n-best list reranking, it's not the same when we rerank the pruned word lattice. As we have to traverse the lattice topologically, we face difficulty in Algorithm 4 Cube pruning for non-local features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Non-local Feature Templates", |
|
"sec_num": "3.2" |
|
}, |
|
{

"text": "Algorithm 4 Cube pruning for non-local features. 1: function CUBE(L) 2: for v ∈ L•V in topological order do 3: NBEST(v) 4: return D_{v_sink}[1] 5: procedure NBEST(v) 6: heap ← ∅ 7: for v′ topologically before v do 8: ∆ ← all edges from v′ to v 9: p ← ⟨D_{v′}, ∆⟩ 10: ⟨p, 1⟩•score ← Eval(p, 1) 11: PUSH(⟨p, 1⟩, heap) ... for i ← 1..2 do 22: j′ ← j + b_i 23: if |vec_i| ≥ j′_i then 24: ⟨p, j′⟩•score ← Eval(p, j′) 25: PUSH(⟨p, j′⟩, heap)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Non-local Feature Templates",

"sec_num": "3.2"

},
|
{ |
|
"text": "Because of the non-local features such as wordand POS-n-grams, the reranking procedure is similar to machine translation decoding with intergrated language models, and should maintain a list of N best candidates at each node of the lattice. To speed up the procedure of obtaining the N best candidates, following Huang (2008, Sec. 3 .3), we adapt the cube pruning method from machine translation (Chiang, 2007; Huang and Chiang 2007) which is based on efficient k-best parsing algorithms (Huang and Chiang, 2005) . As shown in Algorithm 4, cube pruning works topologically in the pruned word lattice, and maintains a list of N best derivations at each node. When deducing a new derivation by attaching a current word-POS pair to the tail of a antecedent derivation, a function Eval is used to compute the new derivation's score (line 10 and 24). We use a max-heap heap to hold the candidates for the next-best derivation. Line 7 \u2212 11 initialize heap to the set of top derivations along each deducing source, the vector pair D v head , \u2206 .Here, \u2206 denotes the vector of current word-POS pairs, while D v head denotes the vector of N best derivations at \u2206's antecedent node. Then at each iteration, ", |
|
"cite_spans": [ |
|
{ |
|
"start": 313, |
|
"end": 332, |
|
"text": "Huang (2008, Sec. 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 396, |
|
"end": 410, |
|
"text": "(Chiang, 2007;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 411, |
|
"end": 433, |
|
"text": "Huang and Chiang 2007)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 488, |
|
"end": 512, |
|
"text": "(Huang and Chiang, 2005)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reranking by Cube Pruning", |
|
"sec_num": "3.3" |
|
}, |
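The sketch below (ours) shows the heap-based core of keeping N best derivations per lattice node: each incoming edge is seeded with the best derivation of its predecessor node, and we repeatedly pop the current best and lazily push its next-best successor, in the spirit of Algorithm 4 and of cube pruning for k-best parsing. Scores here are plain sums and ignore the non-local features, so this is a structural illustration under simplifying assumptions, not the paper's exact procedure.

```python
import heapq

def nbest_at_node(pred_derivs, edges, N):
    """pred_derivs: dict node -> list of (score, derivation) sorted best-first.
    edges: list of (pred_node, edge_score, edge_label) arriving at the current node.
    Returns up to N best (score, derivation) pairs at the current node."""
    heap, seen, out = [], set(), []
    for idx, (pred, e_score, label) in enumerate(edges):
        if pred_derivs.get(pred):                      # seed with each edge's best source
            s, _ = pred_derivs[pred][0]
            heapq.heappush(heap, (-(s + e_score), idx, 0))
            seen.add((idx, 0))
    while heap and len(out) < N:
        _, idx, j = heapq.heappop(heap)
        pred, e_score, label = edges[idx]
        s, d = pred_derivs[pred][j]
        out.append((s + e_score, d + [label]))
        if j + 1 < len(pred_derivs[pred]) and (idx, j + 1) not in seen:
            s2, _ = pred_derivs[pred][j + 1]           # lazily push the next successor
            heapq.heappush(heap, (-(s2 + e_score), idx, j + 1))
            seen.add((idx, j + 1))
    return out
```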
|
{ |
|
"text": "Non-lexical-target Instances Cn (n = \u22122..2) C\u22122= , C\u22121= , C0= , C1= , C2= CnCn+1 (n = \u22122..1) C\u22122C\u22121= , C\u22121C0= , C0C1= , C1C2= C\u22121C1 C\u22121C1= Lexical-target Instances C0Cn (n = \u22122..2) C0C\u22122= , C0C\u22121= , C0C0= , C0C1= , C0C2= C0CnCn+1 (n = \u22122..1) C0C\u22122C\u22121= , C0C\u22121C0= , C0C0C1= , C0C1C2= C0C\u22121C1 C0C\u22121C1 =", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reranking by Cube Pruning", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Following Jiang et al. (2008) , we describe segmentation and Joint S&T as below: For a given Chinese sentence appearing as a character sequence:", |
|
"cite_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 29, |
|
"text": "Jiang et al. (2008)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint S&T as Classification", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "C 1:n = C 1 C 2 .. C n", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint S&T as Classification", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "the goal of segmentation is splitting the sequence into several subsequences: C 1:e 1 C e 1 +1:e 2 .. C e m\u22121 +1:em While in Joint S&T, each of these subsequences is labelled a POS tag: C 1:e 1 /t 1 C e 1 +1:e 2 /t 2 .. C e m\u22121 +1:em /t m Where C i (i = 1..n) denotes a character, C l:r (l \u2264 r) denotes the subsequence ranging from C l to C r , and t i (i = 1..m, m \u2264 n) denotes the POS tag of C e i\u22121 +1:e i . If we label each character a positional tag indicating its relative position in an expected subsequence, we can obtain the segmentation result accordingly. As described in Ng and Low (2004) and Jiang et al. (2008) , we use s indicating a singlecharacter word, while b, m and e indicating the begin, middle and end of a word respectively. With these positional tags, the segmentation transforms to a classification problem. For Joint S&T, we expand positional tags by attaching POS to their tails as postfix. As each tag now contains both positional-and POS-information, Joint S&T can also be resolved in a classification style framework. It means that, a subsequence is a word with POS t, only if the positional part of the tag sequence conforms to s or bm * e pattern, and each element in the POS part equals to t. For example, a tag sequence b N N m N N e N N represents a three-character word with POS tag N N .", |
|
"cite_spans": [ |
|
{ |
|
"start": 583, |
|
"end": 600, |
|
"text": "Ng and Low (2004)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 605, |
|
"end": 624, |
|
"text": "Jiang et al. (2008)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint S&T as Classification", |
|
"sec_num": "4.1" |
|
}, |
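To make the tag scheme concrete, here is a short Python sketch (ours) that turns a character sequence plus an extended tag sequence (positional tag with a POS suffix, e.g. 'b_NN') back into (word, POS) pairs while checking the s / b m* e pattern described above; the underscore separator is an assumption about how the extended tags are written.

```python
def tags_to_words(chars, tags):
    """chars: list of characters; tags: extended tags like 's_NN', 'b_VV', 'm_VV', 'e_VV'.
    Returns a list of (word, pos) pairs, or raises ValueError if ill-formed."""
    words, i = [], 0
    while i < len(chars):
        pos_tag, pos = tags[i].split("_")
        if pos_tag == "s":                       # single-character word
            words.append((chars[i], pos))
            i += 1
        elif pos_tag == "b":                     # b m* e with a consistent POS
            j = i + 1
            while j < len(chars) and tags[j] == f"m_{pos}":
                j += 1
            if j >= len(chars) or tags[j] != f"e_{pos}":
                raise ValueError(f"ill-formed tag sequence at position {i}")
            words.append(("".join(chars[i:j + 1]), pos))
            i = j + 1
        else:
            raise ValueError(f"unexpected tag {tags[i]} at position {i}")
    return words

print(tags_to_words(list("abc"), ["s_NR", "b_NN", "e_NN"]))  # [('a', 'NR'), ('bc', 'NN')]
```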
|
{ |
|
"text": "The features we use to build the classifier are generated from the templates of Ng and Low (2004) . For convenience of comparing with other, they didn't adopt the ones containing external knowledge, such as punctuation information. All their templates are shown in Table 2 . C denotes a character, while its subscript indicates its position relative to the current considering character(it has the subscript 0).", |
|
"cite_spans": [ |
|
{ |
|
"start": 80, |
|
"end": 97, |
|
"text": "Ng and Low (2004)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 265, |
|
"end": 272, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Feature Templates", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The table's upper column lists the templates that immediately from Ng and Low (2004) . they named these templates non-lexical-target because predications derived from them can predicate without considering the current character C 0 . Templates called lexical-target in the column below are introduced by Jiang et al. (2008) . They are generated by adding an additional field C 0 to each nonlexical-target template, so they can carry out predication not only according to the context, but also according to the current character itself.", |
|
"cite_spans": [ |
|
{ |
|
"start": 67, |
|
"end": 84, |
|
"text": "Ng and Low (2004)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 304, |
|
"end": 323, |
|
"text": "Jiang et al. (2008)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Templates", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Notice that features derived from the templates in Table 2 are all local features, which means all features are determined only by the training instances, and they can be generated before the training procedure.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 51, |
|
"end": 58, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Feature Templates", |
|
"sec_num": "4.2" |
|
}, |
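The local templates of Table 2 can be instantiated as in the following Python sketch (ours); the boundary padding symbol and the feature-string naming are our own conventions, not the paper's.

```python
def local_features(chars, i):
    """chars: list of characters; i: index of the character under consideration (C0).
    Returns the Table-2 local features for position i."""
    def C(k):   # character at relative offset k, padded at sentence boundaries
        j = i + k
        return chars[j] if 0 <= j < len(chars) else "#"
    feats = []
    # non-lexical-target templates
    feats += [f"C{k}={C(k)}" for k in range(-2, 3)]                    # Cn, n = -2..2
    feats += [f"C{k}C{k+1}={C(k)}{C(k+1)}" for k in range(-2, 2)]      # CnCn+1, n = -2..1
    feats.append(f"C-1C1={C(-1)}{C(1)}")
    # lexical-target templates: add the current character C0 as an extra field
    feats += [f"C0C{k}={C(0)}|{C(k)}" for k in range(-2, 3)]
    feats += [f"C0C{k}C{k+1}={C(0)}|{C(k)}{C(k+1)}" for k in range(-2, 2)]
    feats.append(f"C0C-1C1={C(0)}|{C(-1)}{C(1)}")
    return feats

print(local_features(list("abcde"), 2))
```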
|
{

"text": "Algorithm 5 Perceptron training algorithm. 1: Input: training examples (x_i, y_i) 2: α ← 0 3: for t ← 1 .. T do 4: for i ← 1 .. N do 5: z_i ← argmax_{z ∈ GEN(x_i)} Φ(x_i, z) • α 6: if z_i ≠ y_i then 7: α ← α + Φ(x_i, y_i) − Φ(x_i, z_i) 8: Output: parameters α",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Feature Templates",

"sec_num": "4.2"

},
|
{ |
|
"text": "Collins 2002's perceptron training algorithm were adopted again, to learn a discriminative classifier, mapping from inputs x \u2208 X to outputs y \u2208 Y . Here x is a character sequence, and y is the sequence of classification result of each character in x. For segmentation, the classification result is a positional tag, while for Joint S&T, it is an extended tag with POS information. X denotes the set of character sequence, while Y denotes the corresponding set of tag sequence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training of the Classifier", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "According to Collins (2002) , the function GEN(x) generates all candidate tag sequences for the character sequence x , the representation \u03a6 maps each training example (x, y) \u2208 X \u00d7 Y to a feature vector \u03a6(x, y) \u2208 R d , and the parameter vector \u03b1 \u2208 R d is the weight vector corresponding to the expected perceptron model's feature space. For a given input character sequence x, the mission of the classifier is to find the tag sequence F (x) satisfying:", |
|
"cite_spans": [ |
|
{ |
|
"start": 13, |
|
"end": 27, |
|
"text": "Collins (2002)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training of the Classifier", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "F (x) = argmax y\u2208GEN(x) \u03a6(x, y) \u2022 \u03b1", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Training of the Classifier", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "The inner product \u03a6(x, y) \u2022 \u03b1 is the score of the result y given x, it represents how much plausibly we can label character sequence x as tag sequence y. The training algorithm is depicted in Algorithm 5. We also use the \"averaged parameters\" strategy to alleviate overfitting.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training of the Classifier", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Our experiments are conducted on the Penn Chinese Treebank 5.0 (CTB 5.0). Following usual practice of Chinese parsing, we choose chapters 1\u2212260 (18074 sentences) as the training set, chapters 301 \u2212 325 (350 sentences) as the development set, and chapters 271 \u2212 300 (348 sentences) as the final test set. We report the performance of the baseline classifier, and then compare the performance of the word lattice reranking against the n-best reranking, based on this baseline classifier. For each experiment, we give accuracies on segmentation and Joint S&T. Analogous to the situation in parsing, the accuracy of Joint S&T means that, a word-POS is recognized only if both the positional-and POS-tags are correctly labelled for each character in the word's span.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The perceptron classifier are trained on the training set using features generated from the templates in Table 2 , and the development set is used to determine the best parameter vector. Figure 2 shows the learning curves for segmentation and Joint S&T on the development set. We choose the averaged parameter vector after 7 iterations for the final test, this parameter vector achieves an Fmeasure of 0.973 on segmentation, and 0.925 on Joint S&T. Although the accuracy on segmentation is quite high, it is obviously lower on Joint S&T. Experiments of Ng and Low (2004) on CTB 3.0 also shown the similar trend, where they obtained F-measure 0.952 on segmentation, and 0.919 on Joint S&T.", |
|
"cite_spans": [ |
|
{ |
|
"start": 553, |
|
"end": 570, |
|
"text": "Ng and Low (2004)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 105, |
|
"end": 112, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 187, |
|
"end": 195, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Baseline Perceptron Classifier", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "For n-best reranking, we can easily generate n best results for every training instance, by a modification for the baseline classifier to hold n best candidates at each considering point. For word lattice reranking, we use the algorithm in Algorithm 2 to generate the pruned word lattice. Given a training instance s i , its n best result list or pruned word lattice is used as a reranking instance cand(s i ), the best candidate result (of the n best list) or oracle diameter (of the pruned word lattice) is the reranking target y * i . We find the best result of the n best results simply by computing each result's F-measure, and we determine the oracle diameter of the pruned word lattice using the algorithm depicted in Algorithm 1. All pairs of cand(s i ) and y * i deduced from the baseline model's training instances comprise the training set for reranking. The development set and test set for reranking are obtained in the same way. For the reranking training set {cand(", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preparation for Reranking", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "s i ), y * i } N i=1 , {y * i } N i=1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preparation for Reranking", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "is called oracle set, and the F-measure of {y * i } N i=1 against the reference set is called oracle F-measure. We use the oracle F-measure indicating the utmost improvement that an reranking algorithm can achieve.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preparation for Reranking", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The flows of the n-best list reranking and the pruned word lattice reranking are similar to the training procedure for the baseline classifier. The training set for reranking is used to tune the parameter vector of the reranker, while the development set for reranking is used to determine the optimal number of iterations for the reranker's training procedure.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Analysis", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "We compare the performance of the word lattice reranking against the n-best list reranking. Table 3 shows the experimental results. The upper four rows are the experimental results for nbest list reranking, while the four rows below are for word lattice reranking. In n-best list reranking, with list size 20, the oracle F-measure on Joint S&T is 0.9455, and the reranked F-measure is 0.9280. When list size grows up to 50, the oracle F-measure on Joint S&T jumps to 0.9552, while the reranked F-measure becomes 0.9302. However, when n grows to 100, it brings tiny improvement over the situation of n = 50. In word lattice reranking, there is a trend similar to that in n-best reranking, the performance difference between in degree = 2 and in degree = 5 is obvious, whereas the setting in degree = 10 does not obtain a notable improvement over the performance of in degree = 5. We also notice that even with a relative small in degree limitation, such as in degree = 5, the oracle F-measures for segmentation and Joint S&T both reach a quite high level. This indicates the pruned word lattice contains much more possibilities of segmentation and tagging, compared to n-best list.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 92, |
|
"end": 99, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Analysis", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "With the setting in degree = 5, the oracle Fmeasure on Joint S&T reaches 0.9774, and the reranked F-measure climbs to 0.9336. It achieves an error reduction of 16.3% on Joint S&T, and an error reduction of 11.9% on segmentation, over the Table 3 : Performance of n-best list reranking and word lattice reranking. n-best: the size of the nbest list for n-best list reranking; Degree: the in degree limitation for word lattice reranking; Ora Seg: oracle F-measure on segmentation of n-best lists or word lattices; Ora S&T: oracle F-measure on Joint S&T of n-best lists or word lattices; Rnk Seg: Fmeasure on segmentation of reranked result; Rnk S&T: F-measure on Joint S&T of reranked result baseline classifier. While for n-best reranking with setting n = 50, the Joint S&T's error reduction is 6.9% , and the segmentation's error reduction is 8.9%. We can see that reranking on pruned word lattice is a practical method for segmentation and POS tagging. Even with a much small data representation, it obtains obvious advantage over the n-best list reranking.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 238, |
|
"end": 245, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Analysis", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Comparing between the baseline and the two reranking techniques, We find the non-local information such as word-or POS-grams do improve accuracy of segmentation and POS tagging, and we also find the reranking technique is effective to utilize these kinds of information. As even a small scale n-best list or pruned word lattice can achieve a rather high oracle F-measure, reranking technique, especially the word lattice reranking would be a promising refining strategy for segmentation and POS tagging. This is based on this viewpoint: On the one hand, compared with the initial input character sequence, the pruned word lattice has a quite smaller search space while with a high oracle F-measure, which enables us to conduct more precise reranking over this search space to find the best result. On the other hand, as the structure of the search space is approximately outlined by the topological directed architecture of pruned word lattice, we have a much wider choice for feature selection, which means that we would be able to utilize not only features topologically before the current considering position, just like those depicted in Table 2 in section 4, but also information topologically after it, for example the next word W 1 or the next POS tag T 1 . We believe the pruned word lattice reranking technique will obtain higher improvement, if we develop more precise reranking algorithm and more appropriate features.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1142, |
|
"end": 1149, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Analysis", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "This paper describes a reranking strategy called word lattice reranking. As a derivation of the forest reranking of Huang (2008) , it performs reranking on pruned word lattice, instead of on n-best list. Using word-and POS-gram information, this reranking technique achieves an error reduction of 16.3% on Joint S&T, and 11.9% on segmentation, over the baseline classifier, and it also outperforms reranking on n-best list. It confirms that word lattice reranking can effectively use non-local information to select the best candidate result, from a relative small representation structure while with a quite high oracle F-measure. However, our reranking implementation is relative coarse, and it must have many chances for improvement. In future work, we will develop more precise pruning algorithm for word lattice generation, to further cut down the search space while maintaining the oracle F-measure. We will also investigate the feature selection strategy under the word lattice architecture, for effective use of non-local information.", |
|
"cite_spans": [ |
|
{ |
|
"start": 116, |
|
"end": 128, |
|
"text": "Huang (2008)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work was supported by National Natural Science Foundation of China, Contracts 60736014 and 60573188, and 863 State Key Project No. 2006AA010108. We show our special thanks to Liang Huang for his valuable suggestions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgement", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Discriminative reranking for natural language parsing", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the 17th International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "175--182", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Collins, Michael. 2000. Discriminative reranking for natural language parsing. In Proceedings of the 17th International Conference on Machine Learn- ing, pages 175-182.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the Empirical Methods in Natural Language Processing Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Collins, Michael. 2002. Discriminative training meth- ods for hidden markov models: Theory and exper- iments with perceptron algorithms. In Proceedings of the Empirical Methods in Natural Language Pro- cessing Conference, pages 1-8, Philadelphia, USA.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "The hierarchical hidden markov model: Analysis and applications", |
|
"authors": [ |
|
{ |
|
"first": "Shai", |
|
"middle": [], |
|
"last": "Fine", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoram", |
|
"middle": [], |
|
"last": "Singer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naftali", |
|
"middle": [], |
|
"last": "Tishby", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "32--41", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fine, Shai, Yoram Singer, and Naftali Tishby. 1998. The hierarchical hidden markov model: Analysis and applications. In Machine Learning, pages 32- 41.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Forest reranking: Discriminative parsing with non-local features", |
|
"authors": [ |
|
{ |
|
"first": "Liang", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Huang, Liang. 2008. Forest reranking: Discrimina- tive parsing with non-local features. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "A cascaded linear model for joint chinese word segmentation and part-of-speech tagging", |
|
"authors": [ |
|
{ |
|
"first": "Wenbin", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Liang", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yajuan", |
|
"middle": [], |
|
"last": "Lv", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qun", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiang, Wenbin, Liang Huang, Yajuan Lv, and Qun Liu. 2008. A cascaded linear model for joint chinese word segmentation and part-of-speech tagging. In Proceedings of the 46th Annual Meeting of the Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Lafferty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernando", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the 23rd International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "282--289", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lafferty, John, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Proba- bilistic models for segmenting and labeling sequence data. In Proceedings of the 23rd International Con- ference on Machine Learning, pages 282-289, Mas- sachusetts, USA.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Chinese partof-speech tagging: One-at-a-time or all-at-once? word-based or character-based?", |
|
"authors": [ |
|
{ |
|
"first": "Hwee", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jin", |
|
"middle": [ |
|
"Kiat" |
|
], |
|
"last": "Tou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Low", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the Empirical Methods in Natural Language Processing Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ng, Hwee Tou and Jin Kiat Low. 2004. Chinese part- of-speech tagging: One-at-a-time or all-at-once? word-based or character-based? In Proceedings of the Empirical Methods in Natural Language Pro- cessing Conference.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "A tutorial on hidden markov models and selected applications in speech recognition", |
|
"authors": [ |
|
{ |
|
"first": "Lawrence", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Rabiner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "Proceedings of IEEE", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "257--286", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rabiner, Lawrence. R. 1989. A tutorial on hidden markov models and selected applications in speech recognition. In Proceedings of IEEE, pages 257- 286.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "A maximum entropy part-of-speech tagger", |
|
"authors": [ |
|
{ |
|
"first": "Adwait", |
|
"middle": [], |
|
"last": "Ratnaparkhi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of the Empirical Methods in Natural Language Processing Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ratnaparkhi and Adwait. 1996. A maximum entropy part-of-speech tagger. In Proceedings of the Empir- ical Methods in Natural Language Processing Con- ference.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Chinese word segmentation as lmr tagging", |
|
"authors": [ |
|
{ |
|
"first": "Nianwen", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Libin", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of SIGHAN Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xue, Nianwen and Libin Shen. 2003. Chinese word segmentation as lmr tagging. In Proceedings of SIGHAN Workshop.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "Pruned word lattice as directed graph.", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"text": "Baseline averaged perceptron learning curves for segmentation and Joint S&T.", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"content": "<table><tr><td>Algorithm 1 Oracle Diameter, la Huang (2008,</td></tr><tr><td>Sec. 4.1).</td></tr><tr><td>Here, P = |y\u2229y * | |y| is recall. Notice that F (y, y * ) isn't a linear func-is precision, and R = |y\u2229y * | |y * |</td></tr><tr><td>tion, we need access the largest |y \u2229 y * | for each</td></tr><tr><td>possible |y| in order to determine the diameter with</td></tr><tr><td>maximum F , or another word, we should know the</td></tr><tr><td>maximum matched word count for each possible</td></tr><tr><td>diameter length.</td></tr><tr><td>The algorithm shown in Algorithm 1 works in</td></tr><tr><td>a dynamic programming manner. A table node</td></tr><tr><td>T [i, j] is defined for sequence span [i, j], and it has</td></tr><tr><td>a structure S to remember the best |y i:j \u2229 y * i:j | for</td></tr><tr><td>each |y i:j |, as well as the back pointer for this best</td></tr><tr><td>choice. The for-loop in line 2 \u2212 14 processes for</td></tr><tr><td>each node T [i, j] in a shorter-span-first order. Line</td></tr><tr><td>3 \u2212 7 initialize T [i, j] according to the reference r</td></tr><tr><td>and the word lattice's edge set L \u2022 E. If there exists</td></tr><tr><td>an edge e in L \u2022 E covering the span [i, j], then we</td></tr></table>", |
|
"html": null, |
|
"text": "Input: sentence s, reference r and lattice L 2:for [i, j] \u2286 [1, |s|] in topological order do 3:if \u2203e \u2208 L \u2022 E s.t. e spans from i to j then 4:", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"content": "<table><tr><td/><td>: Feature templates and instances. Suppose we consider the third character \" \" in the sequence</td></tr><tr><td>\"</td><td>\".</td></tr><tr><td colspan=\"2\">we pop the best derivation from heap (line 15),</td></tr><tr><td colspan=\"2\">and push its successors into heap (line 17), until</td></tr><tr><td colspan=\"2\">we get N derivations or heap is empty. In line 22</td></tr><tr><td colspan=\"2\">of function PUSHSUCC, j is a vector composed of</td></tr><tr><td colspan=\"2\">two index numbers, indicating the two candidates'</td></tr><tr><td colspan=\"2\">indexes in the two vectors of the deducing source</td></tr><tr><td colspan=\"2\">p, where the two candidates are selected to deduce</td></tr><tr><td colspan=\"2\">a new derivation. j</td></tr></table>", |
|
"html": null, |
|
"text": "", |
|
"num": null, |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |