{
"paper_id": "C16-1031",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:03:56.048441Z"
},
"title": "A Word Labelling Approach to Thai Sentence Boundary Detection and POS Tagging",
"authors": [
{
"first": "Nina",
"middle": [],
"last": "Zhou",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Wang",
"middle": [],
"last": "Xuangcong",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Aw",
"middle": [],
"last": "Aiti",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Nattadaporn",
"middle": [],
"last": "Lertcheva",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Previous studies on Thai Sentence Boundary Detection (SBD) mostly assumed a sentence ends at a space and formulated the task SBD as a disambiguation problem, which classified a space either as an indicator for Sentence Boundary (SB) or non-Sentence Boundary (nSB). In this paper, we propose a word labelling approach which treats the space character as a normal word, and detects SB between any two words. This removes the restriction for SB to be occurred only at spaces and makes our system more robust for modern Thai writing. It is because in modern Thai writing, the space is not consistently used to indicate SB. As syntactic information contributes to better SBD, we further propose a joint Part-Of-Speech (POS) tagging and SBD framework based on Factorial Conditional Random Field (FCRF) model. We compare the performance of our proposed approach with reported methods on ORCHID corpus. We also performed experiments of FCRF model on the TaLAPi corpus. The results show that the word labelling approach has better performance than previous space-based classification approaches and FCRF joint model outperforms LCRF model in terms of SBD in all experiments.",
"pdf_parse": {
"paper_id": "C16-1031",
"_pdf_hash": "",
"abstract": [
{
"text": "Previous studies on Thai Sentence Boundary Detection (SBD) mostly assumed a sentence ends at a space and formulated the task SBD as a disambiguation problem, which classified a space either as an indicator for Sentence Boundary (SB) or non-Sentence Boundary (nSB). In this paper, we propose a word labelling approach which treats the space character as a normal word, and detects SB between any two words. This removes the restriction for SB to be occurred only at spaces and makes our system more robust for modern Thai writing. It is because in modern Thai writing, the space is not consistently used to indicate SB. As syntactic information contributes to better SBD, we further propose a joint Part-Of-Speech (POS) tagging and SBD framework based on Factorial Conditional Random Field (FCRF) model. We compare the performance of our proposed approach with reported methods on ORCHID corpus. We also performed experiments of FCRF model on the TaLAPi corpus. The results show that the word labelling approach has better performance than previous space-based classification approaches and FCRF joint model outperforms LCRF model in terms of SBD in all experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Sentence Boundary Detection (SBD) is a fundamental task for many Natural Language Processing (NLP) and analysis tasks, including POS tagging, syntactic, semantic, and discourse parsing, parallel text alignment, and machine translation (Gillick, 2009) . Most research on SBD focus on languages that already have a well-defined concept of what a sentence is, typically indicated by sentence-end markers like full-stops, question marks, or other punctuations. However, as we study more contexts of language use (e.g. speech output which lacks punctuations) as well as look at many more different languages, the assumption of clearly-punctuated sentence boundary becomes less valid. One such language is Thai.",
"cite_spans": [
{
"start": 235,
"end": 250,
"text": "(Gillick, 2009)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In prior research on Thai, the space character has been regarded as a very important element in Thai SBD (Pradit et al., 2000; Paisarn et al., 2001; Glenn et al., 2010) . These regard that space characters are always present between sentences. However, in actual fact, as prescribed by Thai linguistic authorities (www.royin.go.th) as well as what can be observed in real texts, spaces do exist in Thai texts not only in such sentence-end contexts. There is some pressure from linguistic authorities (Wathabunditkul, 2003) to set orthographic standards in Thai, prescribing the use of spaces in the context of certain words, following the rules of the Thai Royal Institute Dictionary 1 . Examples of these rules include: using of the space before and after an interjection or an onomatopoeiac word \u0e42\u0e2d\u0e4a \u0e22 (Ouch!), \u0e2d\u0e38 \u0e4a \u0e22 (ow!); before conjunctions \u0e41\u0e25\u0e30 (and), \u0e2b\u0e23\u0e37 \u0e2d (or), and \u0e41\u0e15\u0e48 (but); before and after a numeric expression: \u0e21\u0e35 \u0e19\u0e31 \u0e01\u0e40\u0e23\u0e35 \u0e22\u0e19 20 \u0e04\u0e19 (have 20 students), \u0e40\u0e27\u0e25\u0e32 10.00 \u0e19. (time 10.00 a.m.). Unfortunately, the rules are not strictly followed in practice and the use of spaces between words, phrases, clauses and sentences vary across different users of the Thai language. According to TaLAPi (Aw et al. 2014) , a news domain corpus, it has about 23% sentences ending without a space character. One example of the Thai text from TaLAPi corpus is shown in Figure 1 , in which the space character is used within a sentence, but not as a sentence-end indicator.",
"cite_spans": [
{
"start": 105,
"end": 126,
"text": "(Pradit et al., 2000;",
"ref_id": "BIBREF9"
},
{
"start": 127,
"end": 148,
"text": "Paisarn et al., 2001;",
"ref_id": "BIBREF2"
},
{
"start": 149,
"end": 168,
"text": "Glenn et al., 2010)",
"ref_id": "BIBREF11"
},
{
"start": 500,
"end": 522,
"text": "(Wathabunditkul, 2003)",
"ref_id": "BIBREF20"
},
{
"start": 1198,
"end": 1214,
"text": "(Aw et al. 2014)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 1360,
"end": 1368,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In view of this complexity of spaces in Thai in light of the SBD task, we propose a word-based labelling approach which regards Thai SBD as a word labelling problem instead of a space classification problem. The approach treats the space as a normal word and labels each word as SB or nSB (non-Sentence Boundary). Figure 2 illustrates the space-based classification approach versus the wordbased labelling approach. Figure 1 . Example of a written Thai text in which there are two space characters within the first sentence, but there is no space character at the end of the sentence, i.e., at highlighted <eol>\". \"eol\" refers to end-of-line.",
"cite_spans": [],
"ref_spans": [
{
"start": 314,
"end": 322,
"text": "Figure 2",
"ref_id": null
},
{
"start": 416,
"end": 424,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The proposed word labelling approach formulates SBD as a typical sequence labelling task, i.e., labelling each word including spaces as a SB or nSB. It is tested on ORCHID corpus and demonstrates higher accuracy on SB than previous methods (Pradit et al., 2000; Paisarn et al., 2001; Glenn et al., 2010) . Furthermore, the contribution of POS in this task is investigated and a Joint framework for POS tagging and SBD is formulated. The results on TaLAPi corpus show that POS information can improve the accuracy of SBD, for both the sequential task of POS tagging followed by SBD and the proposed joint framework. Moreover, in the joint framework, we propose a two-layer classification for POS tagging, which is called as \"2-step\" Joint approach in the following paper. For comparison, the joint approach in which POS tagging realized in one step is called as \"1-step\" Joint approach. The proposed \"2-step\" Joint approach runs considerable faster and achieves similar performance when compared with the Cascade approach and \"1-step\" Joint approach of POS tagging and SBD. By adding enhanced features, the \"2-step\" Joint approach yields better SBD accuracy and comparable POS tagging accuracy. Figure 2 . Space-based SBD vs word-based labelling SBD. Space-based SBD detects spaces and assigns Y (SB) or N (nSB) to each space. Word-based labelling assigns Y(SB) or N(nSB) to every word. In this case, the space character is considered as a word.",
"cite_spans": [
{
"start": 240,
"end": 261,
"text": "(Pradit et al., 2000;",
"ref_id": "BIBREF9"
},
{
"start": 262,
"end": 283,
"text": "Paisarn et al., 2001;",
"ref_id": "BIBREF2"
},
{
"start": 284,
"end": 303,
"text": "Glenn et al., 2010)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 1194,
"end": 1202,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows. Section 2 reviews the previous studies on Thai SBD. Section 3 describes the proposed word labelling framework and the approaches. Section 4 compares the performance between the proposed word-based methods and reported space-based methods on ORCHID corpus (Sornlertlamvanich et al., 1997) , and also studied the different frameworks of wordbased approaches. Section 5 concludes the paper.",
"cite_spans": [
{
"start": 302,
"end": 334,
"text": "(Sornlertlamvanich et al., 1997)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There have been limited studies carried out in Thai SBD over the past twenty years. Longchupole (1995) presented a method to segment a paragraph into small units and then used verbs to estimate the number of sentences. That was a grammatical rule based approach to extract sentences from paragraphs. The reported SBD accuracy was 81.18% (Longchupole, 1995) . Pradit et al. (2000) applied the statistical POS tagging technique (Brants, 2000) on the detection of SB. They considered SB and non-SB as POS tags and distinguished SB from other POS tags based on a trigram model. Their method yielded an accuracy of 85.26% on ORCHID corpus. Paisarn et al. (2001) utilized the Winnow algorithm to extract features from the context around the target space. The Winnow functioned like a neuron network model where a few nodes were connected to a target node. Each node examined only two features for simplicity. In total, there were 10 features including words around target space and their POS information. The space-correct accuracy for the Winnow on ORCHID was 89.13%. Later, Glenn et al. 2010proposed to use maximum entropy classifier to distinguish each space as SB or non-SB and their results were shown to be consistent with the Winnow (Paisam et al., 2001) .",
"cite_spans": [
{
"start": 84,
"end": 102,
"text": "Longchupole (1995)",
"ref_id": "BIBREF6"
},
{
"start": 337,
"end": 356,
"text": "(Longchupole, 1995)",
"ref_id": "BIBREF6"
},
{
"start": 359,
"end": 379,
"text": "Pradit et al. (2000)",
"ref_id": "BIBREF9"
},
{
"start": 426,
"end": 440,
"text": "(Brants, 2000)",
"ref_id": "BIBREF1"
},
{
"start": 635,
"end": 656,
"text": "Paisarn et al. (2001)",
"ref_id": "BIBREF2"
},
{
"start": 1227,
"end": 1255,
"text": "Winnow (Paisam et al., 2001)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Studies",
"sec_num": "2"
},
{
"text": "Nearly all Thai SBD studies are based on the assumption that there is a space at the position of the SB. While we have shown in the Introduction part that sentence break is not always indicated by a space, especially in modern Thai writing. That inspired us to propose the word-based approach to consider a space as a word and treats SBD as a word labelling task instead of a space disambiguation problem",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Studies",
"sec_num": "2"
},
{
"text": "The word-based approach is further enhanced to label POS tags and SB jointly using joint inferencing. The advantages of this approach are: 1) it relies on contexts instead of spaces to detect SB, 2) it solves SBD and POS tagging jointly to relax the dependency of POS tagging for SBD, 3) it demonstrates higher accuracy on SBD than previous methods ( Pradit et al., 2000; Paisarn et al., 2001; Glenn et al., 2010) .",
"cite_spans": [
{
"start": 349,
"end": 371,
"text": "( Pradit et al., 2000;",
"ref_id": "BIBREF9"
},
{
"start": 372,
"end": 393,
"text": "Paisarn et al., 2001;",
"ref_id": "BIBREF2"
},
{
"start": 394,
"end": 413,
"text": "Glenn et al., 2010)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Studies",
"sec_num": "2"
},
{
"text": "1 ( | ) e x p ( , , ) (1) ( ) k k t k p L O f O L t Z O \u03bb \u03bb \uf0e6 \uf0f6 = \uf0e7 \uf0f7 \uf0e8 \uf0f8 \uf0e5 \uf0e5 Figure 3. Linear-chain graph CRF (LCRF)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Studies",
"sec_num": "2"
},
{
"text": "where {f k } is a set of feature functions defined over the observation O and label sequence L at each position t, together with the set of corresponding weights {\u03bb k }; z(O) is a normalization factor. Dynamic CRF (DCRF) (Sutton et al., 2004; Sutton et al., 2011) is a generalization of LCRF, which supports any arbitrary structure graph. It is formally defined as in Equation (2):",
"cite_spans": [
{
"start": 221,
"end": 242,
"text": "(Sutton et al., 2004;",
"ref_id": "BIBREF15"
},
{
"start": 243,
"end": 263,
"text": "Sutton et al., 2011)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Studies",
"sec_num": "2"
},
{
"text": "( , ) 1 ( | ) ex p ( , , ) (2 ) ( ) k k c t t c C k p L O f O L t Z O \u03bb \u03bb \u2208 \uf0e6 \uf0f6 = \uf0e7 \uf0f7 \uf0e8 \uf0f8 \uf0e5 \uf0e5 \uf0e5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Studies",
"sec_num": "2"
},
{
"text": "where C is a set of cliques indices which connect the nodes in a sequence in a single layer or among different layers. As a special case of DCRF, Factorial CRF (FCRF) model allows multiple layers' la-belling simultaneously for a given sequence. The graphical illustration of two-layer FCRF is shown in Figure 4 where H indicates the 1 st layer labels and L indicates the 2 nd layer labels. O is the observation sequence. Through the connections between different layers of labels and the given observation, joint conditional distributions of the labels are learnt.",
"cite_spans": [],
"ref_spans": [
{
"start": 302,
"end": 310,
"text": "Figure 4",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Previous Studies",
"sec_num": "2"
},
{
"text": "We use LCRF for the single task of SB detection or POS tagging. In this scenario, as Thai SB is to detect word sequence and find the end of each sentence, we consider this to be similar to a sentenceend punctuation prediction task with only two labels. Words are labelled with SB if they are at the beginning of a sentence otherwise, they will be labelled as nSB. For POS tagging, we use all the 35 subcategories as described in Aw et al. (2014) .",
"cite_spans": [
{
"start": 429,
"end": 445,
"text": "Aw et al. (2014)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Isolated and Cascade SB and POS Tagging",
"sec_num": "3.1"
},
{
"text": "3 or 5 w -1 +w 0 3 or 5 w -1 +w 0 +w 1 3 wtype 0 3 or 5 wtype -1 +wtype 0 3 or 5 wtype -1 +wtype 0 +wtype 1 3 Table 1 . The feature template for Isolated SBD and POS tagging. Window size 3 is used for Isolated SBD and POS tagging. Window size 5 is used in \"2-step\" Joint model (vii and viii) in Table 6 .",
"cite_spans": [],
"ref_spans": [
{
"start": 110,
"end": 117,
"text": "Table 1",
"ref_id": null
},
{
"start": 295,
"end": 302,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Feature Template Window Size w 0",
"sec_num": null
},
{
"text": "Considering POS tagging has much more labels to recognize than SBD, it will increase the memory use for system training, therefore the number of the features and feature template have to be carefully selected. It is essential to use a simple feature set, as shown in Table 1 , to make a comparison between Isolated models, Cascade models and the Joint models. It is important for \"1-step\" Joint model as more features make the process run extremely slow. In Table 1 , w i refers to the word at the i th position relative to the current node; window size is the maximum span of words centered at the current word that the template covers, e.g., w \u22121 +w 0 with a window size of 3 refers to w -2 +w \u22121 , w \u22121 +w 0 , and w 0 +w 1 ; wtype i indicates the word type at the i th position relative to the current node; In total, five word types are defined, i.e., English, Thai, punctuation, digits and spaces, for the data used in our experiments.",
"cite_spans": [],
"ref_spans": [
{
"start": 267,
"end": 274,
"text": "Table 1",
"ref_id": null
},
{
"start": 458,
"end": 465,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Feature Template Window Size w 0",
"sec_num": null
},
{
"text": "3 or 5 pos -1 +pos 0 3 pos -1 +pos 0 +pos 1 3 Table 2 . The additional feature template for Cascade models, besides the feature templates in Table 1 . Window size 5 is used in Cascade model (iv) in Table 6 .",
"cite_spans": [],
"ref_spans": [
{
"start": 46,
"end": 53,
"text": "Table 2",
"ref_id": null
},
{
"start": 141,
"end": 148,
"text": "Table 1",
"ref_id": null
},
{
"start": 198,
"end": 205,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Feature Template Window size pos 0",
"sec_num": null
},
{
"text": "As POS tag provides additional syntactic and some semantic information to the word, they are utilized as additional features to the Cascade approach for detecting the sentence boundary. Besides the features listed in Table 1 , more POS features listed in Table 2 are used in the Cascade models. 3.1.1. \"1-step and \"2-step\" Joint Models",
"cite_spans": [],
"ref_spans": [
{
"start": 217,
"end": 224,
"text": "Table 1",
"ref_id": null
},
{
"start": 255,
"end": 262,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Feature Template Window size pos 0",
"sec_num": null
},
{
"text": "The joint model realizes the 2-layer labelling of one sequence using FCRF. We consider the first layer as labels of SBD, and the second layer as labels of POS tagging (see Table 3 ). However, due to the large number of POS tags, combining the feature templates of both tasks increases the search space tremendously and has a large impact on the processing speed. To address this problem, we propose a \"2-step\" Joint-model in which we first predict 12 top categories of the POS tags as classified in (Aw et al., 2014) and then restore the pseudo POS tags back to the original POS tags (see Figure 5 ). On the other side, the \"1-step\" Joint model uses all the 35 POS tags to realize POS tagging in the 2 nd layer of FCRF. To train the \"2-step\" Joint model, all train data are labelled with two SB labels (i.e., SB and nSB) and 12 pseudo POS tags. The 12 pseudo POS tags are obtained by combining similar POS tags into one category as illustrated in Table 3 . The Original POS column lists the original 35 POS tags and the Pseudo POS column lists the corresponding 12 pseudo POS tags. To restore the pseudo POS tags back to the original tags, we train different LCRF models for each pseudo POS tag. As no restoration is required for \"CL\" and \"REFX\", a total of 10 LCRF models are built to restore the original POS tags. The diagram of the proposed \"2-step\" Joint model is shown as follows ( Figure 5 ). Figure 5 . The proposed \"2-step\" Joint model for Thai SBD and POS Tagging based on two-layer FCRF and the LCRF For fair comparison between Isolated and Joint models, we used the same feature templates in Table 1 in two of the \"2-step\" Joint models, i.e., (v) and (vi) in Table 6 . Since the \"2-step\" Joint model run much faster than \"1-step\" Joint, more features can be added. As in Table 4 shown, name entity recognition (NER) information was added to improve the performance of the \"2-step\" joint models besides the feature template in Table 1 . Table 4 . 
The enhanced feature template for \"2-step\" Joint model (viii) in Table 6 4 Experimentation",
"cite_spans": [
{
"start": 499,
"end": 516,
"text": "(Aw et al., 2014)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 172,
"end": 179,
"text": "Table 3",
"ref_id": "TABREF0"
},
{
"start": 589,
"end": 597,
"text": "Figure 5",
"ref_id": null
},
{
"start": 947,
"end": 954,
"text": "Table 3",
"ref_id": "TABREF0"
},
{
"start": 1388,
"end": 1396,
"text": "Figure 5",
"ref_id": null
},
{
"start": 1400,
"end": 1408,
"text": "Figure 5",
"ref_id": null
},
{
"start": 1604,
"end": 1612,
"text": "Table 1",
"ref_id": null
},
{
"start": 1672,
"end": 1679,
"text": "Table 6",
"ref_id": null
},
{
"start": 1784,
"end": 1791,
"text": "Table 4",
"ref_id": null
},
{
"start": 1939,
"end": 1946,
"text": "Table 1",
"ref_id": null
},
{
"start": 1949,
"end": 1956,
"text": "Table 4",
"ref_id": null
},
{
"start": 2024,
"end": 2031,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Feature Template Window size pos 0",
"sec_num": null
},
{
"text": "NER 0 3 NER -1 +NER 0 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Template Window Size",
"sec_num": null
},
{
"text": "Our experiments were performed on the ORCHID corpus (Sornlertlamvanich et al., 1997) and the TaLAPi corpus (Aw et al., 2014) .",
"cite_spans": [
{
"start": 52,
"end": 84,
"text": "(Sornlertlamvanich et al., 1997)",
"ref_id": null
},
{
"start": 107,
"end": 124,
"text": "(Aw et al., 2014)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "4.1"
},
{
"text": "The processing of the ORCHID corpus follows the work of Sornlertlamvanich et al. (1997) to remove all comments and concatenate all sentences and paragraphs. Different from the previous experiments, we did not insert a space at the end of sentence if it was not originally present. As such, the percentage of sentences ending without a space was almost 100% for the ORCHID corpus used in our experiment. We portioned the ORCHID data into 10 parts with equal size and used 10 fold cross validation for evaluation.",
"cite_spans": [
{
"start": 56,
"end": 87,
"text": "Sornlertlamvanich et al. (1997)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "4.1"
},
{
"text": "The experiments on TaLAPi corpus were performed only on the news domain which was annotated with word segmentation, POS tags and name entities. It had 3633 paragraphs, 10,478 sentences and 311,637 words. We split 80% for training and 20% for testing. During the splitting, we tried to balance the distribution of spaces and POS tags. Thus in the training data, we have a total of 282,678 words, of which 8,034 words (2.842%) are SB and 274,644 words are nSB. In the test data we have 2,091 (2.836%) SB and 71,635 nSB.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "4.1"
},
{
"text": "The GRMM toolkit (Sutton, 2006) was used in our experiments to build the 2-layer FCRF models and one layer LCRF models. To demonstrate the proposed methods, we performed 5 different experiments as follows:",
"cite_spans": [
{
"start": 17,
"end": 31,
"text": "(Sutton, 2006)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "4.2"
},
{
"text": "Orchid Corpus i. Isolated LCRF model to detect SBD using POS information to make it comparable with reported work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "4.2"
},
{
"text": "ii. Isolated LCRF model for POS tagging and SBD without POS information for SBD.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TaLAPi Corpus",
"sec_num": null
},
{
"text": "iii. Cascade LCRF model on SB utilizing same feature as (i) and POS information with different feature templates. iv. \"1-step\" Joint model using same features as (ii) v. \"2-step\" Joint model using same features as (ii) and with additional features and different feature configurations. Table 5 . Comparison of our word-labelling approach based on LCRF (last column) with previous studies on ORCHID corpus. POS-trigram (Pradit et al., 2000) ; Winnow (Paisarn et al., 2001) ; ME (Glenn et al., 2010) . Space correct =(#correct sb+#correct nsb)/(total # of space tokens). '#' indicate the number of items followed.",
"cite_spans": [
{
"start": 406,
"end": 439,
"text": "POS-trigram (Pradit et al., 2000)",
"ref_id": null
},
{
"start": 442,
"end": 471,
"text": "Winnow (Paisarn et al., 2001)",
"ref_id": null
},
{
"start": 474,
"end": 497,
"text": "ME (Glenn et al., 2010)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 286,
"end": 293,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "TaLAPi Corpus",
"sec_num": null
},
{
"text": "In the ORCHID corpus experiment, we used the features described in Table 1 and Table 2 . Table 5 shows the result of the word-labelling approach and its comparison with reported methods. Compared to the reported results (Glenn et al., 2010) , our word-labelling approach yielded consistent improvement on precision, recall, F-score for both SB and non-SB and also \"space correct\". Our SB precision is 1% higher than Winnow method and our recall is 6.3% higher than ME method. The F-score is 7% higher than Winnow and ME. As mentioned in 4.1, not all sentence boundaries in ORCHID are indicated by space. To have a fair comparison, we consider all sentence boundaries as a \"space\" when calculating \"space correct\" (Glenn et al., 2010) . In Table 5 , the short form \"sb\" and \"nsb\" refers to the sentence break and non-sentence break respectively.",
"cite_spans": [
{
"start": 220,
"end": 240,
"text": "(Glenn et al., 2010)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 67,
"end": 86,
"text": "Table 1 and Table 2",
"ref_id": null
},
{
"start": 89,
"end": 96,
"text": "Table 5",
"ref_id": null
},
{
"start": 713,
"end": 733,
"text": "(Glenn et al., 2010)",
"ref_id": null
},
{
"start": 739,
"end": 746,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "POS-trigram(%) Winnow(%) ME(%) our work(%)",
"sec_num": null
},
{
"text": "For the experiments on TaLAPi corpus, we study the performance in Isolated, Cascade and Joint model. The same experiment can be run on ORCHID corpus, but due to time and space limitation, we only show the results of the experiments on TaLAPi corpus in Table 6 .",
"cite_spans": [],
"ref_spans": [
{
"start": 252,
"end": 259,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "POS-trigram(%) Winnow(%) ME(%) our work(%)",
"sec_num": null
},
{
"text": "All Cascade models have higher F-score than the Isolated model. The best F-score of the Cascade model is 67.29% when we used 18 features in the experiment (iv). The experiment affirms that POS information is helpful in sentence boundary detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing Cascade with Isolated Model",
"sec_num": null
},
{
"text": "With the same set of features as in (i), \"1-step\" Joint (v) yields 3% increase on recall and 2% increase on F-score for SBD when compared to the Isolated model in (i). Comparing (vi) with (i), a similar in-crease in accuracy for SBD, with the same features, is observed. These results demonstrate that SBD can benefit from the other layer's label information, i.e., POS tagging labels, in the Joint model (v). When compared to Cascade models, \"1-step\" Joint shows comparable SB detection performance with the Cascade model (iii) that uses additional 3-gram POS features. By enhancing the feature set for SB detection in the Cascade model (iv), we yielded 1% increase on F-score when compared to the Cascade model (iii). Table 6 . Comparison of our methods based on FCRF and LCRF on TaLAPi corpus. Comparing \"1-step\" with \"2-step\" While the Cascade models and \"1-step\" Joint model was limited by the running speed due to the number of POS tags, the \"2-step\" Joint model was therefore proposed to improve the running speed and not degrade the accuracy. With the same set of features, the \"2-step\" Joint (vi) run much faster than \"1-step\" Joint (v), while yielded almost the same SBD F-score as (v). The run time comparison can be found in Table 7 . Experiments were run on Intel(R) Xeon(R) 8 core processor E5-2667 V2 3.30GHz, 25M cache with multi-thread 16. The \"2-step\" Joint (vi) reduces more than half of the running time, compared to \"1-step\" Joint (v). This decrease in processing time enables us to include more feature set to further improve the performance of SBD in the \"2-step\" Joint model. By increasing window size from 3 to 5 (i.e., from (vi) to (vii)), (vii) yields 1.3% increase on F-score for SBD, compared to (vi). 
To further improve the performance, we added NER information with different grams on top of experiment (vii) and found that NER information with unigram (i.e., NER 0 ) and bigram (i.e., NER -1 +NER 0 ) improves the performance, i.e., (viii) shown in Table 6 . Undoubtedly, with increased features, the running time of \"2-step\" Joint model (viii) is more than (vi) and (vii), but it is still faster than the \"1-step\" Joint model (v). More importantly, it achieved 2% increase on F-score for SBD. Compared to Cascade model (iv), it saved 50% time and achieved 1.6% increase on F-score for SBD. ",
"cite_spans": [],
"ref_spans": [
{
"start": 720,
"end": 727,
"text": "Table 6",
"ref_id": null
},
{
"start": 1237,
"end": 1244,
"text": "Table 7",
"ref_id": null
},
{
"start": 1981,
"end": 1988,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparing Isolated, Cascade with Joint Model",
"sec_num": null
},
{
"text": "In this paper, we have demonstrated for the first time a word-based labelling approach to Thai SBD. The word-based labelling approach achieved very good performance compared to reported results on ORCHID data. The Cascade model is to use evaluated POS information as features to help SB detection. Higher accuracy in the POS information will yield better accuracy in the Thai SBD. In fact, we also used manually annotated POS tags in SB detection, and it yielded better accuracy, i.e., 79.31% in precision, 62.70% in recall and 70.03% in F-score, compared to the Cascade approach (iv).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Different from Cascade models, Joint models are supposed to make SBD benefit from POS tagging labels in the second layer. Different features are tried in our experiments. Additional features do not always yield better accuracy. For example, when we use more features, e.g., \"w -1 +w 1 \" and \"wtype -1 +wtype 1 , on the top of \"2-step\" Joint (viii), it does not improve the performance. We noticed that the pseudo-POS tagging performance was not improved in the same way as SBD when more features were added. Besides, more experiments will be explored in the future to see how word boundary information, POS and sentence boundary information affect each other.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "In this paper, we demonstrated for the first time a word-based labelling approach to Thai SBD. The word-based labelling approach was proposed to leverage LCRF to do sequence labelling which achieved very good performance compared to reported results on ORCHID data. Furthermore, the performance of SBD with the help of POS tagging was investigated on the corpus TaLAPi. Cascade models and Joint models were compared and the \"2-step\" Joint POS tagging with SB detection was proposed. This proposed model saved more than half of the time, while obtaining almost the same accuracy for SBD as \"1-step\" Joint model, when using the same feature set. With increased speed, more features were therefore used to improve SBD and yields comparable POS tagging performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "https://en.wikipedia.org/wiki/Royal_Institute_Dictionary",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The ModelsCRFs(Lafferty et al., 2001;Sutton et al., 2011) have demonstrated their strengths of sequence labelling in NLP tasks(McCallum et al., 2003;Liu et al., 2005;Sun et al., 2011). They rely on the capacity to capture the sequence's observation O n {i=1,2\u2026n} (abbreviated as O) and at the same time the local dependency L i {i=1,2,\u2026n} (abbreviated as L) among nodes in the sequence (seeFigure 3for the example of linear-chain CRF (LCRF)(Sutton et al., 2011)). Conditioned on observations O, dependencies of L form the chain. In the model, the probability of labelling an observed input O with a label sequence L is defined by a conditional probability as in Equation(1):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
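Equation (1) itself is not reproduced in this extraction. The standard linear-chain CRF conditional probability consistent with the notation above (this reconstruction assumes feature functions $f_k$ with weights $\lambda_k$, following Lafferty et al., 2001) is:

```latex
P(L \mid O) \;=\; \frac{1}{Z(O)} \exp\!\Big( \sum_{t=1}^{n} \sum_{k} \lambda_k\, f_k(l_{t-1},\, l_t,\, O,\, t) \Big),
\qquad
Z(O) \;=\; \sum_{L'} \exp\!\Big( \sum_{t=1}^{n} \sum_{k} \lambda_k\, f_k(l'_{t-1},\, l'_t,\, O,\, t) \Big)
```

Here $Z(O)$ normalizes over all candidate label sequences $L'$, so the model is globally normalized over the whole chain.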
],
"back_matter": [
{
"text": "This work is licensed under a Creative Commons Attribution 4.0 International Licence. Licence details: http://creativecommons.org/licenses/by/4.0/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Sharifah Mahani Aljunied, Nattadaporn Lertcheva and Sasiwimon Kalunsima",
"authors": [
{
"first": "Aiti",
"middle": [],
"last": "Aw",
"suffix": ""
}
],
"year": 2014,
"venue": "TaLAPi -A Thai Linguistically Annotated Corpus for Language Processing, LREC",
"volume": "",
"issue": "",
"pages": "125--132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "AiTi Aw, Sharifah Mahani Aljunied, Nattadaporn Lertcheva and Sasiwimon Kalunsima. 2014. TaLAPi -A Thai Linguistically Annotated Corpus for Language Processing, LREC, 125-132.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "TnT-A Statistical Part-of-Speech Tagger",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Brants",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "224--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Brants. 2000. TnT-A Statistical Part-of-Speech Tagger, ANLP, 224-231.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatic Sentence Break Disambiguation for Thai",
"authors": [
{
"first": "Virach",
"middle": [],
"last": "Paisarn Charoen Pornsawat",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sorlertlamvanich",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paisarn Charoen Pornsawat and Virach Sorlertlamvanich. 2001. Automatic Sentence Break Disambiguation for Thai. ICCPOL.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Sentence Boundary Detection and the Problem with the",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Gillick",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of NAACL HLT",
"volume": "",
"issue": "",
"pages": "241--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Gillick. 2009. Sentence Boundary Detection and the Problem with the U.S., Proceedings of NAACL HLT, 241-244.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Elephant: Sequence Labelling for Word and Sentence Segmentation",
"authors": [
{
"first": "Kilian",
"middle": [],
"last": "Evang",
"suffix": ""
},
{
"first": "Valerio",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "Grzegorz",
"middle": [],
"last": "Chrupala",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Bos",
"suffix": ""
}
],
"year": 2013,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "1422--1426",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evang, Kilian and Basile, Valerio and Chrupala, Grzegorz and Bos, Johan. 2013. Elephant: Sequence Labelling for Word and Sentence Segmentation., EMNLP, 1422--1426",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Conditional Random Fields: Probabilistic Models for Segmenting and Labelling Sequence Data",
"authors": [
{
"first": "J",
"middle": [],
"last": "Laferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "F",
"middle": [
"C N"
],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "ICML Proceedings of the Eighteenth International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Laferty, Andrew McCallum and F. C. N. Pereira. 2001. Conditional Random Fields: Probabilistic Models for Segmenting and Labelling Sequence Data. In ICML Proceedings of the Eighteenth International Conference on Machine Learning.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Thai Syntactical Analysis System by Method of Splitting Sentences from Paragraph form Machine Translation",
"authors": [
{
"first": "S",
"middle": [],
"last": "Longchupole",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Longchupole. 1995. Thai Syntactical Analysis System by Method of Splitting Sentences from Paragraph form Machine Translation. Master Thesis. King Mongkut's institute of technology Ladkrabang ( in Thai).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Using Conditional Random Fields for Sentence Boundary Detection in Speech",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Shriberg",
"suffix": ""
},
{
"first": "Mary",
"middle": [],
"last": "Harper",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "451--458",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu, Andreas Stolcke, Elizabeth Shriberg and Mary Harper. 2005. Using Conditional Random Fields for Sentence Boundary Detection in Speech, ACL, 451-458.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Early Results for Named Entity Recognition with Conditional Random Fields, Feature Induction and Web-Enhanced Lexicons",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew McCallum and Wei Li. 2003. Early Results for Named Entity Recognition with Conditional Random Fields, Feature Induction and Web-Enhanced Lexicons, CONLL.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The Automatic Thai Sentence Extraction",
"authors": [
{
"first": "Pradit",
"middle": [],
"last": "Mittrapiyanuruk",
"suffix": ""
},
{
"first": "Virach",
"middle": [],
"last": "Sornlertlamvanich",
"suffix": ""
}
],
"year": 2000,
"venue": "The Fourth Symposium on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pradit Mittrapiyanuruk and Virach Sornlertlamvanich. 2000. The Automatic Thai Sentence Extraction. The Fourth Symposium on Natural Language Processing.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Joint Chinese Word Segmentation, POS tagging and Parsing, ACL",
"authors": [
{
"first": "Xian",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xian Qian and Yang Liu. 2012. Joint Chinese Word Segmentation, POS tagging and Parsing, ACL.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Thai Sentence-Breaking for Large-Scale SMT, WSSANLP",
"authors": [
{
"first": "Glenn",
"middle": [],
"last": "Slayden",
"suffix": ""
},
{
"first": "Mei-Yuh",
"middle": [],
"last": "Hwang",
"suffix": ""
},
{
"first": "Lee",
"middle": [],
"last": "Schwartz",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "8--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Glenn Slayden, Mei-Yuh Hwang and Lee Schwartz. 2010. Thai Sentence-Breaking for Large-Scale SMT, WSSANLP, pp. 8-16.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Enhancing Chinese Word Segmentation Using Unlabeled Data, ACL. Virach Sornlertlamvanich, Naoto Takahashi and Hitoshi Isahara",
"authors": [
{
"first": "Weiwei",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Jia",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 1997,
"venue": "Building A Thai Part Of-Speech Tagged Corpus (ORCHID)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weiwei Sun and Jia Xu. 2011. Enhancing Chinese Word Segmentation Using Unlabeled Data, ACL. Virach Sornlertlamvanich, Naoto Takahashi and Hitoshi Isahara. 1997. Building A Thai Part Of-Speech Tagged Corpus (ORCHID).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "GRMM: Graphical Models in Mallet",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Sutton",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles Sutton, 2006. GRMM: Graphical Models in Mallet. http://mallet.cs.umass.edu/grmm/.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "An introduction to conditional random fields",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Sutton",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2011,
"venue": "Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles Sutton and Andrew McCallum. 2011. An introduction to conditional random fields, Machine Learning.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Dynamic Conditional Random Fields: Factorized Probabilistic Models for Labelling and Segmenting Sequence Data",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Sutton",
"suffix": ""
},
{
"first": "Khashayar",
"middle": [],
"last": "Rohanimanesh",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2004,
"venue": "International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles Sutton, Khashayar Rohanimanesh and Andrew McCallum. 2004. Dynamic Conditional Random Fields: Factorized Probabilistic Models for Labelling and Segmenting Sequence Data. In International Conference on Machine Learning (ICML).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Sentence and token splitting based on conditional random fields",
"authors": [
{
"first": "Katrin",
"middle": [],
"last": "Tomanek",
"suffix": ""
},
{
"first": "Joachim",
"middle": [],
"last": "Wermter",
"suffix": ""
},
{
"first": "Udo",
"middle": [],
"last": "Hahn",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 10th Conference of the Pacific Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "49--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katrin Tomanek, Joachim Wermter, and Udo Hahn. 2007. Sentence and token splitting based on conditional random fields. In Proceedings of the 10th Conference of the Pacific Association for Computational Linguis- tics, pages 49-57, Melbourne, Australia.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Mining Informal Language from Chinese Microtext: Joint Word Recognition and Segmentation, ACL",
"authors": [
{
"first": "Aobo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aobo Wang and Min-Yen Kan, 2013. Mining Informal Language from Chinese Microtext: Joint Word Recogni- tion and Segmentation, ACL.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Combining Punctuation and Disfluency Prediction: An Empirical Study",
"authors": [
{
"first": "Xuancong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Khe Chai",
"middle": [],
"last": "Sim",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "121--130",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuancong Wang, Khe Chai Sim and Hwee Tou Ng. Combining Punctuation and Disfluency Prediction: An Em- pirical Study, EMNLP 2014, pp.121-130",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Dynamic Conditional Random Fields for Joint Sentence Boundary and Punctuation Prediction",
"authors": [
{
"first": "Xuancong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Khe Chai",
"middle": [],
"last": "Hwee Tou Ng",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sim",
"suffix": ""
}
],
"year": 2012,
"venue": "INTERSPEECH",
"volume": "2012",
"issue": "",
"pages": "1384--1387",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuancong Wang, Hwee Tou Ng, Khe Chai Sim. 2012. Dynamic Conditional Random Fields for Joint Sentence Boundary and Punctuation Prediction. INTERSPEECH 2012, 1384-1387",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Spacing in the Thai Language",
"authors": [
{
"first": "Suphawut",
"middle": [],
"last": "Wathabunditkul",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suphawut Wathabunditkul. 2003. Spacing in the Thai Language. http://www.thailanguage.com/ref/spacing",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Joint Word Segmentation and POS Tagging using a Single Perceptron",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Zhang and Stephen Clark. 2008. Joint Word Segmentation and POS Tagging using a Single Perceptron, ACL.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Two-layer Factorial CRF (FCRF)",
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"num": null,
"html": null,
"text": "The mapping from original POS to Pseudo POS tags",
"type_str": "table",
"content": "<table><tr><td/><td>Original POS</td><td>Pseudo</td><td>Total</td><td/><td>Original POS</td><td>Pseudo</td><td>Total</td></tr><tr><td/><td/><td>POS</td><td>No.</td><td/><td/><td>POS</td><td>No.</td></tr><tr><td>1</td><td>NN,NR,PPER,PINT,</td><td>NPs</td><td colspan=\"2\">104271 7</td><td>CL</td><td>CL</td><td>5747</td></tr><tr><td/><td>PDEM</td><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td>REFX</td><td>REFX</td><td>1357</td><td>8</td><td>OD,CD</td><td>OCD</td><td>8453</td></tr><tr><td>3</td><td>DPER,DINT,DDEM,</td><td>DPs</td><td>7267</td><td>9</td><td>FXN, FXG, FXAV,</td><td>FXs</td><td>13887</td></tr><tr><td/><td>PDT</td><td/><td/><td/><td>FXAJ</td><td/><td/></tr><tr><td>4</td><td>JJA, JJV</td><td>JJs</td><td>14335</td><td colspan=\"2\">10 P, COMP, CNJ</td><td>PCs</td><td>50301</td></tr><tr><td>5</td><td>VV, VA, AUX</td><td>VVs</td><td>72769</td><td colspan=\"2\">11 FWN,FWV,FWA,FWX</td><td>FWs</td><td>24</td></tr><tr><td>6</td><td>ADV,NEG</td><td>ADs</td><td>12275</td><td colspan=\"2\">12 PAR, PU, IJ, X</td><td>Os</td><td>6270</td></tr></table>"
}
}
}
}