{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:48:24.083210Z"
},
"title": "Incorporating Uncertain Segmentation Information into Chinese NER for Social Media Text",
"authors": [
{
"first": "Shengbin",
"middle": [],
"last": "Jia",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tongji University",
"location": {
"settlement": "Shanghai",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Ling",
"middle": [],
"last": "Ding",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tongji University",
"location": {
"settlement": "Shanghai",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Xiaojun",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tongji University",
"location": {
"settlement": "Shanghai",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Yang",
"middle": [],
"last": "Xiang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tongji University",
"location": {
"settlement": "Shanghai",
"country": "China"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Chinese word segmentation is necessary to provide word-level information for Chinese named entity recognition (NER) systems. However, segmentation error propagation is a challenge for Chinese NER while processing colloquial data like social media text. In this paper, we propose a model (UIcwsNN) that specializes in identifying entities from Chinese social media text, especially by leveraging uncertain information of word segmentation. Such ambiguous information contains all the potential segmentation states of a sentence that provides a channel for the model to infer deep word-level characteristics. We propose a trilogy (i.e., Candidate Position Embedding \u21d2 Position Selective Attention \u21d2 Adaptive Word Convolution) to encode uncertain word segmentation information and acquire appropriate word-level representation. Experimental results on the social media corpus show that our model alleviates the segmentation error cascading trouble effectively, and achieves a significant performance improvement of 2% over previous state-of-the-art methods.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Chinese word segmentation is necessary to provide word-level information for Chinese named entity recognition (NER) systems. However, segmentation error propagation is a challenge for Chinese NER while processing colloquial data like social media text. In this paper, we propose a model (UIcwsNN) that specializes in identifying entities from Chinese social media text, especially by leveraging uncertain information of word segmentation. Such ambiguous information contains all the potential segmentation states of a sentence that provides a channel for the model to infer deep word-level characteristics. We propose a trilogy (i.e., Candidate Position Embedding \u21d2 Position Selective Attention \u21d2 Adaptive Word Convolution) to encode uncertain word segmentation information and acquire appropriate word-level representation. Experimental results on the social media corpus show that our model alleviates the segmentation error cascading trouble effectively, and achieves a significant performance improvement of 2% over previous state-of-the-art methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Named entity recognition (NER) is a fundamental task for natural language processing and fulfills lots of downstream applications, such as semantic understanding of social media contents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Chinese NER is often considered as a characterwise sequence labeling task since there are no natural delimiters between Chinese words (Liu et al., 2010; Li et al., 2014) . But the word-level information is necessary for a Chinese NER system (Mao et al., 2008; Peng and Dredze, 2015; . Various segmentation features can be obtained from the Chinese word segmentation (CWS) procedures then used into a pipeline NER module (Peng and Dredze, 2015; He and Sun, 2017a; Zhu and Wang, 2019) , or be co-trained by ",
"cite_spans": [
{
"start": 134,
"end": 152,
"text": "(Liu et al., 2010;",
"ref_id": "BIBREF19"
},
{
"start": 153,
"end": 169,
"text": "Li et al., 2014)",
"ref_id": "BIBREF16"
},
{
"start": 241,
"end": 259,
"text": "(Mao et al., 2008;",
"ref_id": "BIBREF23"
},
{
"start": 260,
"end": 282,
"text": "Peng and Dredze, 2015;",
"ref_id": "BIBREF24"
},
{
"start": 420,
"end": 443,
"text": "(Peng and Dredze, 2015;",
"ref_id": "BIBREF24"
},
{
"start": 444,
"end": 462,
"text": "He and Sun, 2017a;",
"ref_id": "BIBREF10"
},
{
"start": 463,
"end": 482,
"text": "Zhu and Wang, 2019)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Figure 1: The architecture of our model, illustrated with the instance \"\u5357\u4eac\u5e02\u957f\u6c5f\u5927\u6865\u8c03\u7814(Daqiao Jiang, mayor of Nanjing City, is investigating)...\", which is cited from .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step 2: Position Selective Attention",
"sec_num": null
},
{
"text": "CWS-NER multi-task learning (Peng and Dredze, 2016; Cao et al., 2018) .",
"cite_spans": [
{
"start": 28,
"end": 51,
"text": "(Peng and Dredze, 2016;",
"ref_id": "BIBREF25"
},
{
"start": 52,
"end": 69,
"text": "Cao et al., 2018)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Step 2: Position Selective Attention",
"sec_num": null
},
{
"text": "However, segmentation error propagation is a challenge for Chinese NER, when processing informal data like social media text (Duan et al., 2012) . The CWS will produce more unreliable results on the social media text than on the formal data. Incorrectly segmented entity boundaries may lead to NER errors. Nevertheless, most existing extractors always assume that input segmentation information is affirmative and reliable without conscious error discrimination. That is, they acquiesce in that \"The one supposed-reliable word segmentation output of a CWS module will be input into the NER module\". Although the joint training way may improve the accuracy of word segmentations, the NER module still cannot recognize inevitable segmentation errors.",
"cite_spans": [
{
"start": 125,
"end": 144,
"text": "(Duan et al., 2012)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Step 2: Position Selective Attention",
"sec_num": null
},
{
"text": "To solve this problem, we design a model (UIcwsNN) that dedicates to identifying entities from Chinese social media text, by incorporating Uncertain Information of Chinese Word Segmentation into a Neural Network. This kind of uncertain information reflects all the potential segmentation states of a sentence, not just the certain one that is supposed-reliable by the CWS module. Furthermore, we propose a trilogy to encode uncertain word segmentation information and acquire word-level representation, as shown in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 515,
"end": 523,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Step 2: Position Selective Attention",
"sec_num": null
},
{
"text": "In summary, the contributions of this paper are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step 2: Position Selective Attention",
"sec_num": null
},
{
"text": "\u2022 We embed candidate position information of characters into the model (in Section 3.1) to express the states of underlying word. And we design the Position Selective Attention (in Section 3.2) that enforces the model to focus on the appropriate positions while ignoring unreliable parts. The above operations provide a wealth of resources to allow the model to infer word-level deep characteristics, rather than bluntly impose segmentation information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step 2: Position Selective Attention",
"sec_num": null
},
{
"text": "\u2022 We introduce the Adaptive Word Convolution (in Section 3.3), it dynamically provides wordlevel representation for the characters in specific positions, by encoding segmentations of different lengths. Hence our model can grasp useful word-level semantic information and alleviate the interference of segmentation error cascading.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step 2: Position Selective Attention",
"sec_num": null
},
{
"text": "\u2022 Experimental results on different datasets show that our model achieves significant performance improvements compared to baselines that use only character information. Especially, our model outperforms the previous state-of-the-art method by 2% on the social media.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step 2: Position Selective Attention",
"sec_num": null
},
{
"text": "The NER on English has achieved promising performance by naturally integrating character information into word representations (Ma and Hovy, 2016; Peters et al., 2018; Yadav and Bethard, 2019; Li et al., 2020) . However, Chinese NER is still underachieving because of the word segmentation problem. Unlike the English language, words in Chinese sentences are not separated by spaces, so that we cannot get Chinese words without pre-processed CWS. In particular, identifying entities on Chinese social media is harder than on other formal text since there is worse segmentation error propagation trouble. Existing methods payed little attention to this issue, and there were few entity recognition methods specifically for Chinese social media text (Peng and Dredze, 2015; He and Sun, 2017a,b) .",
"cite_spans": [
{
"start": 127,
"end": 146,
"text": "(Ma and Hovy, 2016;",
"ref_id": "BIBREF22"
},
{
"start": 147,
"end": 167,
"text": "Peters et al., 2018;",
"ref_id": "BIBREF26"
},
{
"start": 168,
"end": 192,
"text": "Yadav and Bethard, 2019;",
"ref_id": "BIBREF29"
},
{
"start": 193,
"end": 209,
"text": "Li et al., 2020)",
"ref_id": "BIBREF17"
},
{
"start": 748,
"end": 771,
"text": "(Peng and Dredze, 2015;",
"ref_id": "BIBREF24"
},
{
"start": 772,
"end": 792,
"text": "He and Sun, 2017a,b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "As for the Chinese NER, existing methods could be classified as either word-wise or character-wise. The former one used words as the basic tagging unit (Ji and Grishman, 2005) . Segmentation errors would be directly and inevitably entered into NER systems. The latter used characters as the basic tokens in the tagging process (Chen et al., 2006; Mao et al., 2008; Lu et al., 2016; . Character-wise methods that outperformed wordwise methods for Chinese NER (Liu et al., 2010; Li et al., 2014 ).",
"cite_spans": [
{
"start": 152,
"end": 175,
"text": "(Ji and Grishman, 2005)",
"ref_id": "BIBREF12"
},
{
"start": 327,
"end": 346,
"text": "(Chen et al., 2006;",
"ref_id": "BIBREF1"
},
{
"start": 347,
"end": 364,
"text": "Mao et al., 2008;",
"ref_id": "BIBREF23"
},
{
"start": 365,
"end": 381,
"text": "Lu et al., 2016;",
"ref_id": "BIBREF20"
},
{
"start": 458,
"end": 476,
"text": "(Liu et al., 2010;",
"ref_id": "BIBREF19"
},
{
"start": 477,
"end": 492,
"text": "Li et al., 2014",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "There were two main ways to take word-level information into a character-wise model. One was to employ various segmentation information as feature vectors into a cascaded NER model. Chinese word segmentation was performed first before applying character sequence labeling (Guo et al., 2004; Mao et al., 2008; Zhu and Wang, 2019) . The pre-processing segmentation features included character positional embedding (Peng and Dredze, 2015; He and Sun, 2017a,b) , segmentation tags Zhu and Wang, 2019) , word embedding (Peng and Dredze, 2015; Liu et al., 2019; E and Xiang, 2017) and so on. The other was to train NER and CWS tasks jointly to incorporate task-shared word boundary information from the CWS into the NER (Xu et al., 2013; Peng and Dredze, 2016; Cao et al., 2018) . Although co-training might improve the validity of the word segmentation, the NER module still had no specific measures to avoid segmentation errors. The above existing methods suffered the potential issue of error propagation.",
"cite_spans": [
{
"start": 272,
"end": 290,
"text": "(Guo et al., 2004;",
"ref_id": "BIBREF9"
},
{
"start": 291,
"end": 308,
"text": "Mao et al., 2008;",
"ref_id": "BIBREF23"
},
{
"start": 309,
"end": 328,
"text": "Zhu and Wang, 2019)",
"ref_id": "BIBREF33"
},
{
"start": 412,
"end": 435,
"text": "(Peng and Dredze, 2015;",
"ref_id": "BIBREF24"
},
{
"start": 436,
"end": 456,
"text": "He and Sun, 2017a,b)",
"ref_id": null
},
{
"start": 477,
"end": 496,
"text": "Zhu and Wang, 2019)",
"ref_id": "BIBREF33"
},
{
"start": 514,
"end": 537,
"text": "(Peng and Dredze, 2015;",
"ref_id": "BIBREF24"
},
{
"start": 538,
"end": 555,
"text": "Liu et al., 2019;",
"ref_id": "BIBREF18"
},
{
"start": 556,
"end": 574,
"text": "E and Xiang, 2017)",
"ref_id": "BIBREF6"
},
{
"start": 714,
"end": 731,
"text": "(Xu et al., 2013;",
"ref_id": "BIBREF28"
},
{
"start": 732,
"end": 754,
"text": "Peng and Dredze, 2016;",
"ref_id": "BIBREF25"
},
{
"start": 755,
"end": 772,
"text": "Cao et al., 2018)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "A few researchers tried to address the above defect. Luo and Yang (2016) used multiple word segmentation outputs as additional features to a NER model. However, they treated the segmentations equally without error discrimination. Liu et al. (2019) introduced four naive selection strategies to select words from the pre-prepared Lexicon for their model. However, these strategies did not consider the context of a sentence. proposed a Lattice LSTM model that used the gated recurrent units to control the contribution of the potential words. However, as shown by Liu et al. (2019) , the gate mechanism might cause the model to degenerate into a partial word-based model. Ding et al. (2019) and Gui et al. (2019) proposed the models with graph neural network based on the information that the gazetteers or lexicons offered. Obtaining largescale, high-quality lexicons would be costly. They were dedicated to capturing the correct segmentation information but might not alleviate the interference of inappropriate segmentations.",
"cite_spans": [
{
"start": 53,
"end": 72,
"text": "Luo and Yang (2016)",
"ref_id": "BIBREF21"
},
{
"start": 230,
"end": 247,
"text": "Liu et al. (2019)",
"ref_id": "BIBREF18"
},
{
"start": 563,
"end": 580,
"text": "Liu et al. (2019)",
"ref_id": "BIBREF18"
},
{
"start": 671,
"end": 689,
"text": "Ding et al. (2019)",
"ref_id": "BIBREF3"
},
{
"start": 694,
"end": 711,
"text": "Gui et al. (2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "It is worth mentioning that the above methods were not specifically aimed at social media. We propose a method to learn word-level representation by leveraging uncertain word segmentation information while considering the informal expression characteristics of social media text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "3 Methodology Figure 1 illustrates the overall architecture of our model UIcwsNN. Given a sentence S = {c 1 , c 2 , \u2022 \u2022 \u2022 , c n } as the sequence of characters, each character will be assigned a pre-prepared tag.",
"cite_spans": [],
"ref_spans": [
{
"start": 14,
"end": 22,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We use a Conditional random fields (CRF) layer to decode tags according to the outputs from the sequence encoder (Lample et al., 2016; .",
"cite_spans": [
{
"start": 113,
"end": 134,
"text": "(Lample et al., 2016;",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
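The CRF decoding step above is standard; as a minimal sketch (the paper does not name its CRF implementation, so the third-party `pytorch-crf` package is assumed here), the layer can be used as follows:

```python
# Minimal sketch of CRF tag decoding over encoder outputs; assumes the
# third-party `pytorch-crf` package, which the paper does not mention.
import torch
from torchcrf import CRF

num_tags = 13  # BIOES x {PER, ORG, LOC} + O (an assumption for illustration)
batch, seq_len = 2, 10

emissions = torch.randn(batch, seq_len, num_tags)   # per-character tag scores
gold_tags = torch.randint(0, num_tags, (batch, seq_len))

crf = CRF(num_tags, batch_first=True)
loss = -crf(emissions, gold_tags)   # negative log-likelihood for training
paths = crf.decode(emissions)       # Viterbi-decoded tag sequences
```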
{
"text": "As for the sequence encoding, we use the convolution operation as our basic encoding unit. The colloquial social media text usually does not have normative grammar or syntax and presents semantics in fragmented form, for example, \"\u6709\u597d\u591a\u597d \u591a\u7684\u8bdd\u60f3\u5bf9\u4f60\u8bf4\u674e\u5dfe\u51e1\u60f3\u8981\u7626\u7626\u7626\u6210\u674e\u5e06\u6211 \u662f\u60f3\u5207\u5f00\u4e91\u6735\u7684\u5fc3(Have many many words to say to you Jinfan Li wanna thin thin thin to Fan Li I am a heart that want to cut the cloud)\". These properties will destroy the propagation of temporal semantic information that comes with the textual sequence. Therefore, the Convolutional neural network (CNN) is naturally suitable for encoding colloquial text because it specializes in capturing salient local features from a sequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "More importantly, we use a trilogy to learn the word-level representation by incorporating uncertain information of Chinese text segmentation, as shown in the following details. Figure 2 : Create the candidate position embedding.",
"cite_spans": [],
"ref_spans": [
{
"start": 178,
"end": 186,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We design the candidate position embedding to represent candidate positions of each character in all potential words. It reflects the states of all underlying segmentation in a sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step-1: Candidate Position Embedding",
"sec_num": "3.1"
},
{
"text": "We firstly scan all the potential words in the sentence that can be worded 1 , so as to obtain as much meaningful segmentation states as possible. As shown in the bottom part of Figure 2 , the instance can be segmented and obtained candidate segmentations: \"\u5357\u4eac(Nanjing), \u4eac\u5e02(Jing City), \u5357 \u4eac\u5e02(Nanjing City), \u5e02\u957f(major), \u957f\u6c5f(Yangtze River), \u6c5f(river), \u5927\u6865(bridge), \u957f\u6c5f\u5927\u6865(Yangtze River Bridge), \u8c03\u7814(investigate), ...\".",
"cite_spans": [],
"ref_spans": [
{
"start": 178,
"end": 186,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Step-1: Candidate Position Embedding",
"sec_num": "3.1"
},
{
"text": "Next, we use a 4-dimensional vector c_i^{(p)} to embed the candidate position information of a character, where each dimension indicates a positional candidate (i.e., Begin, Inside, End, Single) of the character in words: 1 if it exists, 0 otherwise. For example, as shown in the middle and top parts of Figure 2 , since \"\u4eac(Jing)\" is the beginning of \"\u4eac\u5e02(Beijing City)\", the inside of \"\u5357\u4eac\u5e02(Nanjing City)\", and the end of \"\u5357\u4eac(Nanjing)\", the 1st, 2nd and 3rd dimensions of the embedding of \"\u4eac(Jing)\" are 1, but the 4th dimension is 0 (i.e., [1, 1, 1, 0]).",
"cite_spans": [],
"ref_spans": [
{
"start": 252,
"end": 260,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Step-1: Candidate Position Embedding",
"sec_num": "3.1"
},
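As a sketch of how such 4-dimensional vectors can be built, the following assumes Jieba's `tokenize` API in search mode, which reports overlapping candidate words together with their character offsets (whether a specific character receives a given bit depends on Jieba's dictionary):

```python
# Sketch of Step 1 (candidate position embedding): mark, for every character,
# whether it can Begin / be Inside / End / be a Single-character word in any
# candidate segmentation produced by Jieba's search-mode tokenizer.
import jieba

def candidate_position_embedding(sentence):
    B, I, E, S = 0, 1, 2, 3
    vecs = [[0, 0, 0, 0] for _ in sentence]
    # mode='search' enumerates overlapping candidate words with offsets
    for word, start, end in jieba.tokenize(sentence, mode='search'):
        if end - start == 1:
            vecs[start][S] = 1
        else:
            vecs[start][B] = 1
            for i in range(start + 1, end - 1):
                vecs[i][I] = 1
            vecs[end - 1][E] = 1
    return vecs

# Per the paper's example, "京" in "南京市长江大桥调研" would receive [1, 1, 1, 0]
# if 京市, 南京市, and 南京 all appear among the candidate words.
```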
{
"text": "The correct segmentation sequence for the example should be \"\u5357\u4eac(Nanjing)/\u5e02\u957f(major)/\u6c5f\u5927 \u6865(Daqiao Jiang)/\u8c03\u7814(is investigating)/...\". However, the one certain segmentation output that is supposed-reliable by the above CWS tool is \"\u5357\u4eac\u5e02(Nanjing City)/\u957f\u6c5f\u5927\u6865(Yangtze River Bridge)/\u8c03 \u7814(investigates)/...\". The errors may cause that the entity \"\u6c5f\u5927\u6865(Daqiao Jiang)\" is not recognized. In contrast, the candidate position embedding should be a more reasonable representation for the Chinese sentence segmentation. It is flexible for a model to infer word-level characteristics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step-1: Candidate Position Embedding",
"sec_num": "3.1"
},
{
"text": "There should be only one certain position for a character in the given sentence. We design the position selective attention over candidate positions. It enforces the model to focus on the most relevant positions while ignoring unreliable parts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step-2: Position Selective Attention",
"sec_num": "3.2"
},
{
"text": "Each sequence S is projected to an attention matrix A that captures the semantics of position features interaction according to the contexts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step-2: Position Selective Attention",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "A = tanh(W (a) [h 1 , h 2 , \u2022 \u2022 \u2022 , h n ]),",
"eq_num": "(1)"
}
],
"section": "Step-2: Position Selective Attention",
"sec_num": "3.2"
},
{
"text": "where A is a matrix of n \u00d7 4, W is trainable parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step-2: Position Selective Attention",
"sec_num": "3.2"
},
{
"text": "We apply a set of convolution operations that involve filters W (c) and bias terms b (c) to the sequence to learn a representation h i for character c i .",
"cite_spans": [
{
"start": 64,
"end": 67,
"text": "(c)",
"ref_id": null
},
{
"start": 85,
"end": 88,
"text": "(c)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Step-2: Position Selective Attention",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h i = [h l=2 i ; h l=3 i ; h l=4 i ; h l=5 i ],",
"eq_num": "(2)"
}
],
"section": "Step-2: Position Selective Attention",
"sec_num": "3.2"
},
{
"text": "h_i^l = relu(W_l^{(c)} [x_i, \\cdots, x_{i+l-1}] + b_l^{(c)}), (3) where h_i^l represents a feature generated from a window of length l starting at c_i. The x_i is the combination of the character embedding c_i^{(e)} and the expanded candidate position embedding, as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step-2: Position Selective Attention",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x i = c (e) i + W (p) c (p) i ,",
"eq_num": "(4)"
}
],
"section": "Step-2: Position Selective Attention",
"sec_num": "3.2"
},
{
"text": "where c_i^{(e)} \\in R^{d_e} and W^{(p)} \\in R^{4 \\times d_p}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step-2: Position Selective Attention",
"sec_num": "3.2"
},
{
"text": "To enhance the learning of the position information assisted by the character semantic information, we ensure d e d p .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step-2: Position Selective Attention",
"sec_num": "3.2"
},
{
"text": "Given the matrix A, we define",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step-2: Position Selective Attention",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "v i = exp(A i,j ) 3 j=0 exp(A i,j ) ,",
"eq_num": "(5)"
}
],
"section": "Step-2: Position Selective Attention",
"sec_num": "3.2"
},
{
"text": "to quantify the reliability of the j th position with respect to the i th character. The position attention feature vectors v should assign higher attention values to the appropriate positions while minimizing the values of disturbing positions. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step-2: Position Selective Attention",
"sec_num": "3.2"
},
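A minimal PyTorch reading of Eqs. (1) and (5) follows; the dimensions are illustrative assumptions (400 corresponds to 100 filters over four window sizes, per the hyperparameter section), not prescribed by the equations themselves:

```python
# Sketch of Step 2 (position selective attention), following Eqs. (1) and (5):
# project each character representation h_i to 4 candidate-position scores,
# then softmax over the 4 positions to get the attention vector v_i.
import torch
import torch.nn as nn

hidden_dim, n = 400, 9                      # illustrative sizes
W_a = nn.Linear(hidden_dim, 4, bias=False)  # plays the role of W^(a)

H = torch.randn(n, hidden_dim)              # [h_1, ..., h_n] from the CNN encoder
A = torch.tanh(W_a(H))                      # n x 4 score matrix, Eq. (1)
v = torch.softmax(A, dim=-1)                # v_{i,j}: reliability of position j
                                            # for character i, Eq. (5)
```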
{
"text": "Based on the position selection of each character, the step-3 encodes word segmentations to obtain complete word-level semantics. As for each character c i , we expect to encode the segmentation that involves the c i as its word-level representation. There is a challenge: The lengths of word segmentations are diverse, and the positions of characters located in segmentations are flexible. A single encoding structure is difficult to adapt to this situation. Therefore, we propose the adaptive word convolution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step-3: Adaptive Word Convolution",
"sec_num": "3.3"
},
{
"text": "When c i is the k th character of the word w, we design the word to consist of two parts, namely, the left subword and the right subword, in the form",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step-3: Adaptive Word Convolution",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w m:m+h\u22121 \u21d4 subw m:i \u2295 subw i:m+h\u22121 \u21d4 subw (i\u2212k):i \u2295 subw i:(i+h\u22121\u2212k) ,",
"eq_num": "(6)"
}
],
"section": "Step-3: Adaptive Word Convolution",
"sec_num": "3.3"
},
{
"text": "where 1 m n, 1 h 4, 2 m i m + h, and 0 k < h, \u2295 denotes join operation. For the instance mentioned above, we expect to get the tabulation, as shown in Figure 3 . For example, the \"\u5357(South)\" is the first (i.e., k = 0) character of the word \"\u5357\u4eac\"(Nanjing) (i.e. i = m = 1 and h = 2), we can use the left subw 1:1 and the right subw 1:2 to express the word w 1:2 , and then as the word-level representation for the character \"\u5357(South)\". Especially, we discard the subw 1:1 beacuse subw 1:2 contains it.",
"cite_spans": [],
"ref_spans": [
{
"start": 151,
"end": 159,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Step-3: Adaptive Word Convolution",
"sec_num": "3.3"
},
{
"text": "To model subwords automatically, we learn a feature map F (n \u00d7 7) through a set of convolution operations with windows of different directions and different sizes, as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step-3: Adaptive Word Convolution",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "F = \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0 \u2190 \u2212 \u2212 sw 3 1 \u2190 \u2212 \u2212 sw 3 2 \u2022 \u2022 \u2022 \u2190 \u2212 \u2212 sw 3 n \u2190 \u2212 \u2212 sw 2 1 \u2190 \u2212 \u2212 sw 2 2 \u2022 \u2022 \u2022 \u2190 \u2212 \u2212 sw 2 n \u2190 \u2212 \u2212 sw 1 1 \u2190 \u2212 \u2212 sw 1 2 \u2022 \u2022 \u2022 \u2190 \u2212 \u2212 sw 1 n sw 0 1 sw 0 2 \u2022 \u2022 \u2022 sw 0 n \u2212 \u2212 \u2192 sw 1 1 \u2212 \u2212 \u2192 sw 1 2 \u2022 \u2022 \u2022 \u2212 \u2212 \u2192 sw 1 n \u2212 \u2212 \u2192 sw 2 1 \u2212 \u2212 \u2192 sw 2 2 \u2022 \u2022 \u2022 \u2212 \u2212 \u2192 sw 2 n \u2212 \u2212 \u2192 sw 3 1 \u2212 \u2212 \u2192 sw 3 2 \u2022 \u2022 \u2022 \u2212 \u2212 \u2192 sw 3 n \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fb ,",
"eq_num": "(7)"
}
],
"section": "Step-3: Adaptive Word Convolution",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2190 \u2212 \u2212 sw k i = relu(W (s) k [z i\u2212k , \u2022 \u2022 \u2022 , z i ] + b (s) k ), (8) \u2212 \u2212 \u2192 sw k i = relu(W (s ) k [z i , \u2022 \u2022 \u2022 , z i+k ] + b (s ) k ),",
"eq_num": "(9)"
}
],
"section": "Step-3: Adaptive Word Convolution",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "z i = c (e) i + W (v) v i ,",
"eq_num": "(10)"
}
],
"section": "Step-3: Adaptive Word Convolution",
"sec_num": "3.3"
},
{
"text": "where W (v) \u2208 R dv , the \u2192 indicates the windows sliding forward, whereas \u2190 shows the windows sliding backward. Based on the candidate position distribution of characters learned from the step-2, our model can adaptively separate valid subwords from the F to learn the word-level representation w i , in detail,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step-3: Adaptive Word Convolution",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w i = 6 f =0 \u03b1 if F i,f ,",
"eq_num": "(11)"
}
],
"section": "Step-3: Adaptive Word Convolution",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b1 if = exp(g(F i,f , v i )) 6 f =0 exp(g(F i,f , v i )) ,",
"eq_num": "(12)"
}
],
"section": "Step-3: Adaptive Word Convolution",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "g(F i , v i ) = tanh(W (\u03b1) [F i + W (v) v i ]).",
"eq_num": "(13)"
}
],
"section": "Step-3: Adaptive Word Convolution",
"sec_num": "3.3"
},
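The following simplified sketch shows the mechanics of Eqs. (7)-(12): seven directional convolutions produce the subword feature map F, and an attention over its seven rows yields w_i. The attention scores below are a random placeholder standing in for the learned score g(F_{i,f}, v_i) of Eqs. (12)-(13), and all dimensions are illustrative assumptions:

```python
# Simplified sketch of Step 3 (adaptive word convolution), Eqs. (7)-(12):
# backward/forward convolutions of widths 1..4 yield a 7-row subword feature
# map per character, combined by attention conditioned on the positions.
import torch
import torch.nn as nn
import torch.nn.functional as F_

d, n = 100, 9             # illustrative feature dimension and sentence length
z = torch.randn(1, d, n)  # z_i = c_i^(e) + W^(v) v_i, laid out as (batch, dim, len)

def directional_conv(z, k, backward):
    # window covers z_{i-k}..z_i (backward) or z_i..z_{i+k} (forward)
    conv = nn.Conv1d(d, d, kernel_size=k + 1)
    pad = (k, 0) if backward else (0, k)  # pad so the output aligns with position i
    return torch.relu(conv(F_.pad(z, pad)))

rows = [directional_conv(z, k, backward=True) for k in (3, 2, 1)]
rows += [directional_conv(z, 0, backward=True)]               # sw^0_i
rows += [directional_conv(z, k, backward=False) for k in (1, 2, 3)]
Fmap = torch.stack(rows, dim=1)           # (1, 7, d, n): the feature map of Eq. (7)

# Attention over the 7 rows (Eqs. (11)-(12)); randn is a placeholder for the
# learned score g(F_{i,f}, v_i) of Eq. (13).
alpha = torch.softmax(torch.randn(1, 7, 1, n), dim=1)
w = (alpha * Fmap).sum(dim=1)             # w_i: adaptive word-level representation
```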
{
"text": "After performing the trilogy, the model can grasp useful word-level semantic information and avoid the trouble of segmentation error cascading. (Levow, 2006) , is in the formal text domain. There are 50,729 annotated sentences with three entity types (PER, ORG, and LOC). We use the BIOES scheme (Begin, Inside, Outside, End, Single) to indicate the position of the token in an entity (Ratinov and Roth, 2009) .",
"cite_spans": [
{
"start": 144,
"end": 157,
"text": "(Levow, 2006)",
"ref_id": "BIBREF15"
},
{
"start": 385,
"end": 409,
"text": "(Ratinov and Roth, 2009)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Step-3: Adaptive Word Convolution",
"sec_num": "3.3"
},
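For concreteness, here is a hypothetical helper (not from the paper) that converts character-level entity spans into BIOES tags; the 南京市 example is illustrative:

```python
# Illustration of the BIOES tagging scheme used for the NER labels.
def bioes_tags(length, spans):
    """spans: list of (start, end_exclusive, type) entity annotations."""
    tags = ["O"] * length
    for start, end, etype in spans:
        if end - start == 1:
            tags[start] = f"S-{etype}"
        else:
            tags[start] = f"B-{etype}"
            for i in range(start + 1, end - 1):
                tags[i] = f"I-{etype}"
            tags[end - 1] = f"E-{etype}"
    return tags

# "南京市" (3 characters) as a LOC entity at the start of a 5-character sentence:
print(bioes_tags(5, [(0, 3, "LOC")]))  # ['B-LOC', 'I-LOC', 'E-LOC', 'O', 'O']
```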
{
"text": "Evaluation. We measure the performance of models by regarding three complementary metrics, Precision (P), Recall (R), and F1-measure (F). Each experiment will be performed five times under different random seeds to reduce the volatility of models. Then we report the mean and standard deviation for each model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step-3: Adaptive Word Convolution",
"sec_num": "3.3"
},
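A sketch of this protocol (the F1 values below are made up for illustration):

```python
# Evaluation protocol sketch: run each model under five random seeds and
# report mean and standard deviation of the F1 score.
import statistics

f1_runs = [0.553, 0.561, 0.547, 0.558, 0.550]  # five seeds (illustrative numbers)
mean_f1 = statistics.mean(f1_runs)
std_f1 = statistics.stdev(f1_runs)
print(f"F1 = {mean_f1:.3f} +/- {std_f1:.3f}")
```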
{
"text": "Hyperparameters. The character embedding is pre-trained on the raw microblog text 3 by the word2vec 4 , and its dimension is 100. As for the base model BiLSTM+CRF, we use hidden state size as 200 for a bidirectional LSTM. As for the base model CNNs+CRF, we use 100 filters with window length {2, 3, 4, 5}. We tune other parameters and set the learning rate as 0.001, dropout rate as 0.5. We randomly select 20% of the training set as a validation set. We train each model for a maximum of 120 epochs using Adam optimizer and stop training if the validation loss does not decrease for 20 consecutive epochs. Besides, we set d e = d p = 100 and d v = 25. We also experiment with other settings and find that these are the most reasonable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Step-3: Adaptive Word Convolution",
"sec_num": "3.3"
},
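The reported hyperparameters, collected into a single configuration sketch (this is a convenience summary, not the authors' actual training script):

```python
# Hyperparameters as reported in the paper, gathered into one place.
CONFIG = {
    "char_embedding_dim": 100,      # word2vec, pre-trained on raw microblog text
    "bilstm_hidden": 200,           # for the BiLSTM+CRF baseline
    "cnn_filters": 100,             # per window size, for the CNNs+CRF baseline
    "cnn_windows": [2, 3, 4, 5],
    "learning_rate": 1e-3,          # Adam optimizer
    "dropout": 0.5,
    "validation_split": 0.2,
    "max_epochs": 120,
    "early_stopping_patience": 20,  # stop if validation loss stalls
    "d_e": 100, "d_p": 100, "d_v": 25,
}
```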
{
"text": "To study the contribution of each component in our model, we conducted ablation experiments on the two datasets where we use the product of each step to decode tags. We display the results in Table 1 and draw the following conclusions.",
"cite_spans": [],
"ref_spans": [
{
"start": 192,
"end": 199,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.2.1"
},
{
"text": "The feature (CS) is generated from the one certain segmentation output that is supposed-reliable by the CWS tool Jieba, and it may not benefit the NER on social media text. Compared with the corresponding baseline, the feature (CS) impels the model to improve its performance on the MSRA dataset but to reduce performance on the WeiboNER corpus. There are more segmentation errors on social media text than on formal text so that the impact of error cascading is heavy for NER on social media.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.2.1"
},
{
"text": "On the WeiboNER dataset, the three steps exert different capabilities for improving model performance. Compared with the baseline, the model with the step-1 (+CPE) yields 1.3% improvement in the F value, and its recall improves significantly by 3%, although the precision decreases 1.2%. After we continue with the step-2 (+ PSA), the F value further increases by 0.6%. In this scenario, both precision and recall are higher than the baseline. When the step-3 (+AWC) is completed, the F value further increases by 0.9%. In this scenario, the recall significantly improves by 4% with 0.9% improvement in precision, compared to the baseline. Combining the results on the two different datasets, we find several consistent phenomena. Globally, the F values of the model keep increasing after each step. From a decomposition perspective, the step-2 (+PSA) is notable for improving the precision of the model. And the step-3 (+AWC) is significant for improving the recall. Therefore, the trilogy is complementary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.2.1"
},
{
"text": "Our method has good robustness. On the two datasets from different domains, the uncertain information of word segmentations is always efficient, the trilogy (i.e., +CPE, +PSA, +AWC) is valuable. However, performance improvement on the WeiboNER dataset is more significant than on the MSRA dataset. In contrast with formal text, the social media text contains more word segmentation errors that better reflects the advantages of our method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.2.1"
},
{
"text": "Finally, We verify the influence of the pretrained language model BERT (Devlin et al., 2018) on our model. We optimize the BERT 5 to obtain the character embedding and train the model CNNs+CRF jointly, where its F value reaches 75% on the WeiboNER dataset. The BERT improves the entity recognition outcome dramatically since it uses large-scale external data to pre-train the contextual embedding. When we use our model UIcwsNN to replace the base model CNNs+CRF, the effect is improved by nearly 1%. It proves that our trilogy and the BERT are complementary. The BERT can provide high-quality character-level embedding to the model, and our method contributes word-level semantic information for the model. This conclusion can also be drawn from the results of the MSRA dataset. (Chen et al., 2006) 91.22 81.71 86.20 91.28 90.62 90.95 93.57 92.79 93.18 (Zhu and Wang, 2019) 93.53 92.42 92.97 (Ding et al., 2019) 94.60 94.20 94.40 (Zhao et al., art performance. The overall score of our model is generally more than 2% higher than the scores of other models. Many methods use lexicon instead of the CWS to provide extractors with external word-level information, but how to choose the appropriate words based on sentence contexts is their challenge. Besides, the approaches that jointly train NER and CWS tasks do not achieve desired results, because segmentation noises affect their effectiveness inevitably. Our model handles this trouble.",
"cite_spans": [
{
"start": 71,
"end": 92,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF2"
},
{
"start": 780,
"end": 799,
"text": "(Chen et al., 2006)",
"ref_id": "BIBREF1"
},
{
"start": 854,
"end": 874,
"text": "(Zhu and Wang, 2019)",
"ref_id": "BIBREF33"
},
{
"start": 893,
"end": 912,
"text": "(Ding et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 931,
"end": 944,
"text": "(Zhao et al.,",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.2.1"
},
{
"text": "The CNN-based models achieve better performance compared to the model BiLSTM+CRF. Furthermore, most of the existing methods construct encoders based on recurrent neural networks or graph neural networks. Although they perform excellent results on the MSRA dataset, they do not achieve a significant improvement on the Wei-boNER corpus. In addition to the word segmentation error propagation on social media, another important reason may be that the fragmented semantic expression of colloquial text limits their performance. In contrast, our CNN-based model plays a better advantage in capturing the fragmented semantics of colloquial text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Existing Methods",
"sec_num": "4.2.2"
},
{
"text": "Results on the MSRA dataset are shown in Table 3. Our model UIcwsNN specializes in learning word-level representation, but rarely considers other-levels characteristics, such as long-distance temporal semantics. Therefore, it only achieves competitive performance on the formal text. But our model UIcwsNN+BERT realizes new state-ofthe-art performance. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Existing Methods",
"sec_num": "4.2.2"
},
{
"text": "We count the output errors of models and classify them into two categories 6 : type error and boundary error, as shown in Figure 4 . The model CNNs+CRF+CS produces more boundary errors than type errors. However, our model UIcwsNN dramatically decreases the boundary error outputs (and the type errors are also reduced), so that the error distribution is reversed. That is, in model UIcwsNN, the proportion of boundary errors is smaller than that of type errors, but in model CNNs+CRF+CS, the opposite is true. This situation shows that word segmentation errors generated by the word segmentation tool seriously affect model performance, especially misleading the model to identify wrong entity boundaries. Our method can learn the word boundaries effectively, thereby alleviating the cascade of segmentation errors. Figure 5 shows the performance of recognizing entities with different lengths {1, 2, 3, 4}. According to statistics, entities with two or three characters account for more than 95% of the total number of entities. Both models give high F scores for entities of moderate lengths {2, 3}, but low performance for entities that are too short or too long. The reasons may be that entities with a single character or more than four characters are rare, resulting in model training inadequately. Our model UIcwsNN achieves better results than the base model CNNs+CRF when identifying entities of various lengths. In particular, as for entities with two or three characters, the model UIcwsNN yields more than 2% improvement. This situation implies that our model captures word-level semantic information by modeling the uncertain information of word segmentations so that it is good at recognizing multi-character entities. Table 4 shows several examples with word segmentation errors. When we use the one certain (supposed-reliable) segmentation sequence from the tool Jieba as the word-level feature for the model CNNs+CRF+CS, the segmentation errors \"\u5973\u771f'(Nuzhen)\" and \"\u5fae\u535a\u51c6(wei bo zhun)\" lead to the misjudgments of the entities \"\u5973(daughter)\" and \"\u51c6 \u4f1a \u5458(associate member)\", respectively. Our model UIcwsNN can extract these entities. The uncertain character positions can provide our model with rich word-level information. Then, we use the position selective attention to support the model to learn appropriate segmentation states.",
"cite_spans": [],
"ref_spans": [
{
"start": 122,
"end": 130,
"text": "Figure 4",
"ref_id": null
},
{
"start": 816,
"end": 824,
"text": "Figure 5",
"ref_id": null
},
{
"start": 1733,
"end": 1740,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "4.2.3"
},
{
"text": "\u6211 The visualization of the first case in Figure 6 shows that our model can assign higher attention values to the appropriate positions while mitigating error interferences.",
"cite_spans": [],
"ref_spans": [
{
"start": 41,
"end": 49,
"text": "Figure 6",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Case Study",
"sec_num": "4.2.5"
},
{
"text": "Named entity recognization is an urgent task for semantic understanding of social media content. As for the Chinese NER, Chinese word segmentation error propagation is prominent since there is much colloquial text in social media. In this paper, we explore a trilogy to leverage the uncertain information of word segmentation to avoid the interference of segmentation errors. The step-1 utilizes the Candidate Position Embedding to present the potential segmentation states of a sentence; The step-2 employs the Position Selective Attention to capture appropriate segmentation states while ignoring unreliable parts; The step-3 uses the Adaptive Word Convolution to encode word-level representation dynamically. We analyze the performance of each component of the model and discuss the relationship between the model and related factors such as segmentation error, BERT, and entity length. Experiment results on different datasets show that our model achieves new state-of-the-art performance. It demonstrates that our method has an excellent ability to capture word-level semantics and can alleviate the segmentation error cascading trouble effectively. In future work, we hope that the model can get rid of the word segmentation tool, instead, learn the candidate position informationn autonomously. We will release the source code when the paper is openly available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "We use the \"Jieba\", a popular python packages for the CWS. Its special function \"cut for search()\" can achieve this operation. (https://github.com/fxsjy/jieba)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
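In practice, footnote 1 corresponds to a call like the following; the exact candidate set depends on Jieba's dictionary:

```python
# Jieba's search mode enumerates overlapping candidate words, which supplies
# the uncertain segmentation states used in Step 1.
import jieba

print(list(jieba.cut_for_search("南京市长江大桥调研")))
# expected to include overlapping candidates such as 南京, 南京市, 长江, 大桥, 长江大桥
```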
{
"text": "In most cases, Chinese words are no longer than 4 characters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.nlpir.org/download/weibo.7z 4 https://radimrehurek.com/gensim/models/word2vec.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://storage.googleapis.com/bert models/2018 11 03/ chinese L-12 H-768 A-12.zip",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "If there are two kinds of errors on a predicted entity, the error will be counted twice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Adversarial transfer learning for chinese named entity recognition with selfattention mechanism",
"authors": [
{
"first": "Pengfei",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Yubo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Shengping",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "182--192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pengfei Cao, Yubo Chen, Kang Liu, Jun Zhao, and Shengping Liu. 2018. Adversarial transfer learn- ing for chinese named entity recognition with self- attention mechanism. In Proceedings of the 2018 Conference on Empirical Methods in Natural Lan- guage Processing, pages 182-192.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Chinese named entity recognition with conditional probabilistic models",
"authors": [
{
"first": "Aitao",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Fuchun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Shan",
"suffix": ""
},
{
"first": "Gordon",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "173--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aitao Chen, Fuchun Peng, Roy Shan, and Gordon Sun. 2006. Chinese named entity recognition with con- ditional probabilistic models. In Proceedings of the Fifth SIGHAN Workshop on Chinese Language Pro- cessing, pages 173-176.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A neural multi-digraph model for chinese NER with gazetteers",
"authors": [
{
"first": "Ruixue",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Pengjun",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Linlin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Luo",
"middle": [],
"last": "Si",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019",
"volume": "",
"issue": "",
"pages": "1462--1467",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruixue Ding, Pengjun Xie, Xiaoyan Zhang, Wei Lu, Linlin Li, and Luo Si. 2019. A neural multi-digraph model for chinese NER with gazetteers. In Proceed- ings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, pages 1462- 1467.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Characterbased lstm-crf with radical-level features for chinese named entity recognition",
"authors": [
{
"first": "Chuanhai",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
},
{
"first": "Masanori",
"middle": [],
"last": "Hattori",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Di",
"suffix": ""
}
],
"year": 2016,
"venue": "Natural Language Understanding and Intelligent Applications",
"volume": "",
"issue": "",
"pages": "239--250",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chuanhai Dong, Jiajun Zhang, Chengqing Zong, Masanori Hattori, and Hui Di. 2016. Character- based lstm-crf with radical-level features for chinese named entity recognition. In Natural Language Un- derstanding and Intelligent Applications, pages 239- 250. Springer.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The cips-sighan clp 2012 chineseword segmentation onmicroblog corpora bakeoff",
"authors": [
{
"first": "Huiming",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Zhifang",
"middle": [],
"last": "Sui",
"suffix": ""
},
{
"first": "Ye",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the second CIPS-SIGHAN joint conference on Chinese language processing",
"volume": "",
"issue": "",
"pages": "35--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huiming Duan, Zhifang Sui, Ye Tian, and Wenjie Li. 2012. The cips-sighan clp 2012 chineseword seg- mentation onmicroblog corpora bakeoff. In Pro- ceedings of the second CIPS-SIGHAN joint confer- ence on Chinese language processing, pages 35-40.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Chinese named entity recognition with character-word mixed embedding",
"authors": [
{
"first": "E",
"middle": [],
"last": "Shijia",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Xiang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 ACM on Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "2055--2058",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shijia E and Yang Xiang. 2017. Chinese named en- tity recognition with character-word mixed embed- ding. In Proceedings of the 2017 ACM on Confer- ence on Information and Knowledge Management, pages 2055-2058. ACM.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Chinese named entity recognition with bert",
"authors": [
{
"first": "Cheng",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Jiuyang",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Shengwei",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Zepeng",
"middle": [],
"last": "Hao",
"suffix": ""
}
],
"year": 2019,
"venue": "DEStech Transactions on Computer Science and Engineering",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cheng Gong, Jiuyang Tang, Shengwei Zhou, Zepeng Hao, and Jun Wang. 2019. Chinese named entity recognition with bert. DEStech Transactions on Computer Science and Engineering, (cisnrc).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A lexicon-based graph neural network for chinese ner",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Gui",
"suffix": ""
},
{
"first": "Yicheng",
"middle": [],
"last": "Zou",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Minlong",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Jinlan",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Zhongyu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "1039--1049",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tao Gui, Yicheng Zou, Qi Zhang, Minlong Peng, Jin- lan Fu, Zhongyu Wei, and Xuanjing Huang. 2019. A lexicon-based graph neural network for chinese ner. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 1039- 1049.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Chinese named entity recognition based on multilevel linguistic features",
"authors": [
{
"first": "Honglei",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Jianmin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Gang",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2004,
"venue": "International Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "90--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Honglei Guo, Jianmin Jiang, Gang Hu, and Tong Zhang. 2004. Chinese named entity recognition based on multilevel linguistic features. In Interna- tional Conference on Natural Language Processing, pages 90-99. Springer.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "F-score driven max margin neural network for named entity recognition in chinese social media",
"authors": [
{
"first": "Hangfeng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "713--718",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hangfeng He and Xu Sun. 2017a. F-score driven max margin neural network for named entity recognition in chinese social media. In Proceedings of the 15th Conference of the European Chapter of the Associa- tion for Computational Linguistics: Volume 2, Short Papers, pages 713-718.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A unified model for cross-domain and semi-supervised named entity recognition in chinese social media",
"authors": [
{
"first": "Hangfeng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2017,
"venue": "Thirty-First AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hangfeng He and Xu Sun. 2017b. A unified model for cross-domain and semi-supervised named entity recognition in chinese social media. In Thirty-First AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Improving name tagging by reference resolution and relation detection",
"authors": [
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "411--418",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heng Ji and Ralph Grishman. 2005. Improving name tagging by reference resolution and relation detec- tion. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 411-418.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Cwpc biatt: Character-word-position combined bilstm-attention for chinese named entity recognition",
"authors": [
{
"first": "Shardrom",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Sherlock",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Yuanchen",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "Information",
"volume": "11",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shardrom Johnson, Sherlock Shen, and Yuanchen Liu. 2020. Cwpc biatt: Character-word-position combined bilstm-attention for chinese named entity recognition. Information, 11(1):45.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Neural architectures for named entity recognition",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "Kazuya",
"middle": [],
"last": "Kawakami",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "260--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Sub- ramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of NAACL-HLT, pages 260-270.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The third international chinese language processing bakeoff: Word segmentation and named entity recognition",
"authors": [
{
"first": "Gina-Anne",
"middle": [],
"last": "Levow",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "108--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gina-Anne Levow. 2006. The third international chi- nese language processing bakeoff: Word segmen- tation and named entity recognition. In Proceed- ings of the Fifth SIGHAN Workshop on Chinese Lan- guage Processing, pages 108-117.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Comparison of the impact of word segmentation on name tagging for chinese and japanese",
"authors": [
{
"first": "Haibo",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Masato",
"middle": [],
"last": "Hagiwara",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2014,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "2532--2536",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haibo Li, Masato Hagiwara, Qi Li, and Heng Ji. 2014. Comparison of the impact of word segmentation on name tagging for chinese and japanese. In LREC, pages 2532-2536.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A survey on deep learning for named entity recognition",
"authors": [
{
"first": "Jing",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Aixin",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Jianglei",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Chenliang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "IEEE Transactions on Knowledge and Data Engineering",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jing Li, Aixin Sun, Jianglei Han, and Chenliang Li. 2020. A survey on deep learning for named entity recognition. IEEE Transactions on Knowledge and Data Engineering.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "An encoding strategy based wordcharacter LSTM for Chinese NER",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Tongge",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Qinghua",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Jiayu",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Yueran",
"middle": [],
"last": "Zu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "2379--2389",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Liu, Tongge Xu, Qinghua Xu, Jiayu Song, and Yueran Zu. 2019. An encoding strategy based word- character LSTM for Chinese NER. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, pages 2379-2389.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Chinese named entity recognition with a sequence labeling approach: based on characters",
"authors": [
{
"first": "Zhangxun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Conghui",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Tiejun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2010,
"venue": "International Conference on Intelligent Computing",
"volume": "",
"issue": "",
"pages": "634--640",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhangxun Liu, Conghui Zhu, and Tiejun Zhao. 2010. Chinese named entity recognition with a sequence labeling approach: based on characters, or based on words? In International Conference on Intelligent Computing, pages 634-640. Springer.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Multiprototype chinese character embedding",
"authors": [
{
"first": "Yanan",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Dong-Hong",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2016,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yanan Lu, Yue Zhang, and Dong-Hong Ji. 2016. Multi- prototype chinese character embedding. In LREC.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "An empirical study of automatic chinese word segmentation for spoken language understanding and named entity recognition",
"authors": [
{
"first": "Wencan",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Fan",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "238--248",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wencan Luo and Fan Yang. 2016. An empirical study of automatic chinese word segmentation for spoken language understanding and named entity recogni- tion. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, pages 238-248.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "End-to-end sequence labeling via bi-directional lstm-cnns-crf",
"authors": [
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1064--1074",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuezhe Ma and Eduard Hovy. 2016. End-to-end se- quence labeling via bi-directional lstm-cnns-crf. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1064-1074.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Chinese word segmentation and named entity recognition based on conditional random fields",
"authors": [
{
"first": "Xinnian",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Saike",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Sencheng",
"middle": [],
"last": "Bao",
"suffix": ""
},
{
"first": "Haila",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Sixth SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinnian Mao, Yuan Dong, Saike He, Sencheng Bao, and Haila Wang. 2008. Chinese word segmentation and named entity recognition based on conditional random fields. In Proceedings of the Sixth SIGHAN Workshop on Chinese Language Processing.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Named entity recognition for chinese social media with jointly trained embeddings",
"authors": [
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "548--554",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nanyun Peng and Mark Dredze. 2015. Named en- tity recognition for chinese social media with jointly trained embeddings. In Proceedings of the 2015 Conference on Empirical Methods in Natural Lan- guage Processing, pages 548-554.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Improving named entity recognition for chinese social media with word segmentation representation learning",
"authors": [
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2016,
"venue": "The 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nanyun Peng and Mark Dredze. 2016. Improving named entity recognition for chinese social media with word segmentation representation learning. In The 54th Annual Meeting of the Association for Com- putational Linguistics, page 149.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227- 2237.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Design challenges and misconceptions in named entity recognition",
"authors": [
{
"first": "Lev",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Thirteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "147--155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of the Thirteenth Conference on Com- putational Natural Language Learning, pages 147- 155.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Joint segmentation and named entity recognition using dual decomposition in chinese discharge summaries",
"authors": [
{
"first": "Yan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Yining",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Tianren",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jiahua",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yubo",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Junichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"I"
],
"last": "Chang",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of the American Medical Informatics Association",
"volume": "21",
"issue": "e1",
"pages": "84--92",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yan Xu, Yining Wang, Tianren Liu, Jiahua Liu, Yubo Fan, Yi Qian, Junichi Tsujii, and Eric I Chang. 2013. Joint segmentation and named entity recognition us- ing dual decomposition in chinese discharge sum- maries. Journal of the American Medical Informat- ics Association, 21(e1):e84-e92.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A survey on recent advances in named entity recognition from deep learning models",
"authors": [
{
"first": "Vikas",
"middle": [],
"last": "Yadav",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.11470"
]
},
"num": null,
"urls": [],
"raw_text": "Vikas Yadav and Steven Bethard. 2019. A survey on re- cent advances in named entity recognition from deep learning models. arXiv preprint arXiv:1910.11470.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Design challenges and misconceptions in neural sequence labeling",
"authors": [
{
"first": "Jie",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Shuailong",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3879--3889",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jie Yang, Shuailong Liang, and Yue Zhang. 2018. De- sign challenges and misconceptions in neural se- quence labeling. In Proceedings of the 27th Inter- national Conference on Computational Linguistics, pages 3879-3889.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Chinese ner using lattice lstm",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1554--1564",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Zhang and Jie Yang. 2018. Chinese ner using lat- tice lstm. In Proceedings of the 56th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1554-1564.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Pre-trained language model transfer on chinese named entity recognition",
"authors": [
{
"first": "H",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Cao",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 IEEE 21st International Conference on High Performance Computing and Communications; IEEE 17th International Conference on Smart City; IEEE 5th International Conference on Data Science and Systems (HPCC/SmartCity/DSS)",
"volume": "",
"issue": "",
"pages": "2150--2155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Zhao, M. Xu, and J. Cao. 2019. Pre-trained lan- guage model transfer on chinese named entity recog- nition. In 2019 IEEE 21st International Conference on High Performance Computing and Communica- tions; IEEE 17th International Conference on Smart City; IEEE 5th International Conference on Data Science and Systems (HPCC/SmartCity/DSS), pages 2150-2155.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Can-ner: Convolutional attention network for chinese named entity recognition",
"authors": [
{
"first": "Yuying",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Guoxin",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "3384--3393",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuying Zhu and Guoxin Wang. 2019. Can-ner: Con- volutional attention network for chinese named en- tity recognition. In Proceedings of the 2019 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long and Short Pa- pers), pages 3384-3393.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"text": "Figure 1: The architecture of our model. An interesting instance \"\u5357\u4eac\u5e02\u957f\u6c5f\u5927\u6865\u8c03\u7814(Daqiao Jiang, major of Nanjing City, is investigating)...\" is represented, which is cited from (Zhang and Yang, 2018).",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF3": {
"text": "Display the tabulation of subwords. The red vertical lines identify correct word segmentations. The shows the subwords that fit each character.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF4": {
"text": "The statistics of the model output errors on the WeiboNER corpus. The model CNNs+CRF+CS uses the feature of the one supposed-reliable word segmentation output from the CWS tool Jieba. Performance of multi-character entities on the WeiboNER dataset. The base model CNNs+CRF only uses character embedding.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF5": {
"text": "Visualization of position attention values v obtained from the position selective attention.",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF2": {
"content": "<table><tr><td>Models</td><td>P</td><td>WeiboNER R F \u00b1std</td><td>P</td><td>R</td><td>MSRA F \u00b1std</td></tr><tr><td>character embedding (baseline)</td><td>66.45</td><td/><td/><td/><td/></tr></table>",
"type_str": "table",
"text": "Results of ablation experiments on the WeiboNER dataset and MSRA dataset. The base model is the CNNs+CRF. 53.47 59.22 \u00b10.42 87.11 85.84 86.47 \u00b10.21 + certain segmentation feature (CS) 68.41 51.82 58.92 \u00b10.54 90.37 88.06 89.20 \u00b10.12 + candidate position embedding (CPE) 65.19 56.46 60.51 \u00b10.37 90.20 88.27 89.22 \u00b10.06 + position selective attention (PSA) 68.50 55.31 61.13 \u00b10.49 90.34 89.08 89.71 \u00b10.22 + adaptive word convolution (AWC) 67.37 57.61 62.07 \u00b10.61 89.87 90.54 90.20 \u00b10.24 base model + BERT 78.01 72.97 75.40 \u00b10.33 94.51 91.72 93.09 \u00b10.27 UIcwsNN + BERT 79.64 73.29 76.33 \u00b10.20 96.31 94.98 95.64 \u00b10.15",
"html": null,
"num": null
},
"TABREF3": {
"content": "<table><tr><td>Models</td><td colspan=\"3\">NAM NOM Overall</td></tr><tr><td colspan=\"3\">(Peng and Dredze, 2015) \u2022 51.96 61.05</td><td>56.05</td></tr><tr><td colspan=\"3\">(Peng and Dredze, 2016) \u2022 55.28 62.97</td><td>58.99</td></tr><tr><td>(He and Sun, 2017a)</td><td colspan=\"2\">50.60 59.32</td><td>54.82</td></tr><tr><td>(He and Sun, 2017b)</td><td colspan=\"2\">54.50 62.17</td><td>58.23</td></tr><tr><td colspan=\"3\">(Zhang and Yang, 2018) * 53.04 62.25</td><td>58.79</td></tr><tr><td>(Cao et al., 2018) \u2022</td><td colspan=\"2\">54.34 57.35</td><td>58.70</td></tr><tr><td>(Zhu and Wang, 2019)</td><td colspan=\"2\">55.38 62.98</td><td>59.31</td></tr><tr><td>(Liu et al., 2019) *</td><td colspan=\"2\">52.55 67.41</td><td>59.84</td></tr><tr><td>(Ding et al., 2019) *</td><td>-</td><td>-</td><td>59.50</td></tr><tr><td>(Gui et al., 2019) *</td><td colspan=\"2\">55.34 64.98</td><td>60.21</td></tr><tr><td>(Johnson et al., 2020)</td><td colspan=\"2\">55.70 62.80</td><td>59.50</td></tr><tr><td>BiLSTM+CRF</td><td colspan=\"2\">53.95 62.63</td><td>57.69</td></tr><tr><td>CNNs+CRF</td><td colspan=\"2\">55.07 62.97</td><td>59.22</td></tr><tr><td>Our model (UIcwsNN)</td><td colspan=\"2\">57.58 65.97</td><td>62.07</td></tr></table>",
"type_str": "table",
"text": "The F values of existing models on the Wei-boNER dataset. * indicates that the model utilizes external lexicons.\u2022 indicates that the model adopts joint learning. The previous models do not use the BERT, so we show the results of our model without BERT.",
"html": null,
"num": null
},
"TABREF4": {
"content": "<table><tr><td>represents the results of the WeiboNER</td></tr><tr><td>dataset. Our model UIcwsNN significantly outper-</td></tr><tr><td>forms other models and achieves new state-of-the-</td></tr></table>",
"type_str": "table",
"text": "",
"html": null,
"num": null
},
"TABREF5": {
"content": "<table><tr><td>Model</td><td>P</td><td>R</td><td>F</td></tr></table>",
"type_str": "table",
"text": "The results of different models on the MSRA dataset. \u00d7 indicates that the model uses the BERT.",
"html": null,
"num": null
},
"TABREF7": {
"content": "<table><tr><td>one certain segmentation</td><td>\u6709\u4eba(someone), \u795d(wish), \u6211(me), \u65e9(soon), \u751f\u8d35(precious), \u5973 \u5973 \u5973\u771f(Nuzhen), \u662f(is), \u65e0 \u8bed(speechless), \u554a(ah)</td></tr><tr><td>Case Two</td><td>\u521a\u521a\u83b7\u5f97\u4e86\u5fae\u535a[\u51c6 \u51c6 \u51c6\u4f1a \u4f1a \u4f1a\u5458 \u5458 \u5458] P ER.N OM \u4e13\u5c5e\u5fbd\u7ae0\uff0c\u5f00\u5fc3 I just got the exclusive badge for a weibo associate member, I am happy</td></tr><tr><td>candidate segmentation</td><td>\u521a\u521a(just now), \u83b7\u5f97(get), \u4e86(finish), \u5fae\u535a(weibo), \u5fae\u535a\u51c6 \u51c6 \u51c6(wei bo zhun), \u51c6 \u51c6 \u51c6\u4f1a \u4f1a \u4f1a(quasi), \u4f1a \u4f1a \u4f1a\u5458 \u5458 \u5458(member), \u4e13\u5c5e(exclusive), \u5fbd\u7ae0(badge), \u5f00\u5fc3(happy)</td></tr><tr><td>one certain segmentation</td><td>\u521a\u521a(just now), \u83b7\u5f97(get), \u4e86(finish), \u5fae\u535a\u51c6 \u51c6 \u51c6(wei bo zhun), \u4f1a \u4f1a \u4f1a\u5458 \u5458 \u5458(member), \u4e13\u5c5e(exc-lusive), \u5fbd\u7ae0(badge), \u5f00\u5fc3(happy)</td></tr><tr><td colspan=\"2\">4.2.4 Performance against Multi-character</td></tr><tr><td>Entities</td><td/></tr></table>",
"type_str": "table",
"text": "Testing examples with segmentation errors.Case One\u6709\u4eba\u795d\u6211\u65e9\u751f\u8d35[\u5973 \u5973 \u5973] P ER.N OM \u771f\u662f\u65e0\u8bed\u554aSomeone wished me to have a precious daughter soon, I am so speechless candidate segmentation \u6709\u4eba(someone), \u795d(wish), \u6211(me), \u65e9(soon), \u65e9\u751f(early birth), \u751f\u8d35(precious), \u8d35(precious), \u5973 \u5973 \u5973(daughter), \u5973 \u5973 \u5973\u771f(Nuzhen), \u662f(is), \u771f\u662f(really), \u65e0\u8bed(speechless), \u554a(ah)",
"html": null,
"num": null
}
}
}
}