|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T02:13:08.671525Z" |
|
}, |
|
"title": "Chinese Content Scoring: Open-Access Data Sets and Features on Different Segmentation Levels", |
|
"authors": [ |
|
{ |
|
"first": "Yuning", |
|
"middle": [], |
|
"last": "Ding", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Language Technology Lab", |
|
"institution": "University Duisburg-Essen", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Andrea", |
|
"middle": [], |
|
"last": "Horbach", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Language Technology Lab", |
|
"institution": "University Duisburg-Essen", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Haoshi", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Language Technology Lab", |
|
"institution": "University Duisburg-Essen", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Xuefeng", |
|
"middle": [], |
|
"last": "Song", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Language Technology Lab", |
|
"institution": "University Duisburg-Essen", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Torsten", |
|
"middle": [], |
|
"last": "Zesch", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Language Technology Lab", |
|
"institution": "University Duisburg-Essen", |
|
"location": {} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper, we analyse the challenges of Chinese content scoring in comparison to English. As a review of prior work for Chinese content scoring shows a lack of openaccess data in the field, we present two short-answer data sets for Chinese. The Chinese Educational Short Answers data set (CESA) contains 1800 student answers for five science-related questions. As a second data set, we collected ASAP-ZH with 942 answers by re-using three existing prompts from the ASAP data set. We adapt a state-of-the-art content scoring system for Chinese and evaluate it in several settings on these data sets. Results show that features on lower segmentation levels such as character n-grams tend to have better performance than features on token level.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper, we analyse the challenges of Chinese content scoring in comparison to English. As a review of prior work for Chinese content scoring shows a lack of openaccess data in the field, we present two short-answer data sets for Chinese. The Chinese Educational Short Answers data set (CESA) contains 1800 student answers for five science-related questions. As a second data set, we collected ASAP-ZH with 942 answers by re-using three existing prompts from the ASAP data set. We adapt a state-of-the-art content scoring system for Chinese and evaluate it in several settings on these data sets. Results show that features on lower segmentation levels such as character n-grams tend to have better performance than features on token level.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Short answer questions are a type of educational assessment that requires respondents to give natural language answers in response to a question or some reading material (Rademakers et al., 2005) . The applications used to automatically score such questions are usually thought of as content scoring systems, because content (and not linguistic form) is taken into consideration for automatic scoring (Ziai et al., 2012) . While there is a large research body for English content scoring, there is less research for Chinese. 1 The largest obstacle for more research on Chinese is the lack of publicly available data sets of Chinese short answer questions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 170, |
|
"end": 195, |
|
"text": "(Rademakers et al., 2005)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 401, |
|
"end": 420, |
|
"text": "(Ziai et al., 2012)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 525, |
|
"end": 526, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Working with Chinese poses substantially different challenges than work on English data. Unlike English, which uses spaces as natural separators between words, segmentation of Chinese texts into tokens is challenging (Chen and Liu, 1992) . Furthermore, there are more options on which level to segment Chinese text. Apart from tokenization and segmentation into characters, which are two options also available and often used for English, segmentation into components, radicals and even individual strokes are additionally possible for Chinese. Table 1 gives an example for the segmentation options in both languages. Orthographic variance can be challenging in both languages, but behaves very differently. Nonword errors, which is the main source of orthographic problems in English (Mitton, 1987) , can by definition not happen in Chinese, due to the input modalities. In the remainder of this paper, we will discuss these challenges in more detail (Section 2). We review prior work on Chinese content scoring (Section 3) and present two new freely-available data sets of short answers in Chinese (Section 4). In Section 5, we adapt a machine learning pipeline for automatic scoring with state-of-art NLP tools for Chinese. We investigate the extraction of n-gram features on all possible segmentation levels. In addition, we use features based on the Pinyin transcription of Chinese texts and experiment with the removal of auxiliary words as an equivalent to lemmatization in English. We evaluate these features on our new data sets as well as, for comparison, an English data set translated into Chinese.", |
|
"cite_spans": [ |
|
{ |
|
"start": 217, |
|
"end": 237, |
|
"text": "(Chen and Liu, 1992)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 785, |
|
"end": 799, |
|
"text": "(Mitton, 1987)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 545, |
|
"end": 552, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this section, we highlight the main challenges when processing Chinese learner data in comparison to English data sets. We first focus on segmentation, as tokenization is more difficult in Chinese than in English and there are more linguistic levels on which to segment a Chinese text compared to English. Next, we discuss variance in learner answers, which is a challenge for content scoring in any language but manifests itself in Chinese differently than in English.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Challenges in Chinese Content Scoring", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "English has an alphabetic writing system with some degree of grapheme-to-phoneme correspondence. The Chinese language, in contrast, uses a logosyllabic writing system, where characters represent lexical morphemes. Chinese words can be formed by one or more characters (Chen, 1992) . Unlike English, where words are separated by white-spaces, the fact that Chinese writing does not mark word boundaries makes word segmentation a much harder task in Chinese NLP (e.g., Chen and Liu (1992) ; Huang et al. (1996) ). According to a recent literature review on Chinese word segmentation (Zhao et al., 2019) , the best-performing segmentation tool has an average F1-value of only around 97%. A major challenge is the handling of out-of-vocabulary words. In English content scoring, word level features such as word n-grams or word embeddings have proven to be effective (e.g., Sakaguchi et al. (2015) ; Riordan et al. (2017) ). Additionally, character features are frequently used to capture orthographic as well as morphological variance (e.g., Heilman and Mad-nani (2013) ; Zesch et al. (2015) ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 268, |
|
"end": 280, |
|
"text": "(Chen, 1992)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 467, |
|
"end": 486, |
|
"text": "Chen and Liu (1992)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 489, |
|
"end": 508, |
|
"text": "Huang et al. (1996)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 581, |
|
"end": 600, |
|
"text": "(Zhao et al., 2019)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 870, |
|
"end": 893, |
|
"text": "Sakaguchi et al. (2015)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 896, |
|
"end": 917, |
|
"text": "Riordan et al. (2017)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 1039, |
|
"end": 1066, |
|
"text": "Heilman and Mad-nani (2013)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1069, |
|
"end": 1088, |
|
"text": "Zesch et al. (2015)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Segmentation", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "In the light of the tokenziation challenges mentioned above, it is surprising that although most prior work on Chinese also applies word-level features (see Section 3), the performance of their tokenizers are barely discussed and character-level features are neglected altogether.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Segmentation", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Apart from words and characters, there are more possibilities of segmentation in Chinese as discussed above. Consider, for example, a Chinese bi-morphemic word such as panda bear . It can additionally be segmented on the stroke, component and radical level as shown in Table 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Segmentation", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "It has been argued that the morphological information of characters in Chinese consists of the sequential information hidden in stroke order and the spatial information hidden in character components (Tao et al., 2019) . Each Chinese character can directly be mapped into a series of strokes (with a particular order). On the component level, it has been estimated that about 80% of modern Chinese characters are phonetic-logographic compounds, each of which consists of two components: One carries the sound of the character (the stem) and the other the meaning of the character (the radical) (Li, 1977) . We argue that, together with strokes, both kinds of components may be used as features in content scoring. Note that in some cases, a character has only one component, which in the extreme case consists of one stroke only, so that for the character one , all four segmentation levels yield the same result, somewhat comparable to an English onecharacter word, such as \"I\".", |
|
"cite_spans": [ |
|
{ |
|
"start": 200, |
|
"end": 218, |
|
"text": "(Tao et al., 2019)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 594, |
|
"end": 604, |
|
"text": "(Li, 1977)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Segmentation", |
|
"sec_num": "2.1" |
|
}, |
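
{

"text": "To make these segmentation levels concrete, the following minimal Python sketch (an illustration only, not the pipeline used in this paper) produces the token- and character-level segmentations; jieba is the tokenizer mentioned in Section 3, and the example output is indicative:\n\nimport jieba  # pip install jieba\n\nanswer = '熊猫吃竹子'  # 'pandas eat bamboo'\ntokens = jieba.lcut(answer)  # word level, e.g. ['熊猫', '吃', '竹子']\nchars = list(answer)         # character level: ['熊', '猫', '吃', '竹', '子']\nprint(tokens, chars)\n\nThe sub-character levels (components, radicals, strokes) additionally require a decomposition dictionary; see the sketch in Section 5.2.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Segmentation",

"sec_num": "2.1"

},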
|
{ |
|
"text": "Variance in learner answers has a major influence on content scoring performance (Horbach and Zesch, 2019) , i.e., the more variance between the answers to a specific prompt, the harder it is to score automatically. If we ignore cases of conceptually different answers, variance means different realizations with approximately the same semantic meaning. As shown in Table 2 , if we have a question about the eating habits of pandas, Chinese short answers can contain similar variance as in English, which is realized as both orthographic variance caused by spelling errors as well as variance of linguistic expression. Note that these types of variance should not influence the score of an answer as it depends only from the content of the answer. Both types of variance are further discussed in the following. Spelling errors in English can be classified into non-word and real-word spelling errors. In our example, \"bambu\" is a non-word, while \"beer\" is a real word spelling error. Both error types occur frequently in English short answer data sets, with non-word errors being more frequent (Mitton, 1987 (Mitton, , 1996 . A content scoring system must therefore be able to generalize by taking variance in spelling into account (Leacock and Chodorow, 2003) . To do so, many systems for English data use character-level features (Heilman and Madnani, 2013; , such that \"bamboo\" and \"bambu\", while being different tokens, share, for example, the character 3-grams bam and amb .", |
|
"cite_spans": [ |
|
{ |
|
"start": 81, |
|
"end": 106, |
|
"text": "(Horbach and Zesch, 2019)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 1094, |
|
"end": 1107, |
|
"text": "(Mitton, 1987", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1108, |
|
"end": 1123, |
|
"text": "(Mitton, , 1996", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 1232, |
|
"end": 1260, |
|
"text": "(Leacock and Chodorow, 2003)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 1332, |
|
"end": 1359, |
|
"text": "(Heilman and Madnani, 2013;", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 366, |
|
"end": 373, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Linguistic Variance", |
|
"sec_num": "2.2" |
|
}, |
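
{

"text": "As a minimal illustration of why character n-grams generalize over spelling variants (a sketch, not the actual feature extractor used in our experiments):\n\ndef char_ngrams(text, n=3):\n    # all overlapping character n-grams of a string\n    return {text[i:i + n] for i in range(len(text) - n + 1)}\n\n# 'bamboo' and the misspelling 'bambu' still share 3-grams\nprint(char_ngrams('bamboo') & char_ngrams('bambu'))  # {'bam', 'amb'}\n\nThe same mechanism applies to Chinese character n-grams, where word variants sharing characters overlap even when tokenization differs.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Linguistic Variance",

"sec_num": "2.2"

},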
|
{ |
|
"text": "For Chinese, the situation is entirely different. Non-word spelling errors are rare and even impossible for digitized data because of the input modalities typically used for Chinese text. When entering a Chinese text on the computer, a writer would normally type the phonetic transcription Pinyin, which is the Romanization of Chinese characters based on their pronunciation. After typing a Pinyin, the writer is shown all corresponding characters from which they choose the right one. As this selection list contains only valid Chinese characters, non-word errors cannot occur by definition. Even if the original data set was collected in hand-written format, the transcription process forces transcribers to correct any non-word error that might occur in the data. For example, if the learner accidentally wrote panda bear as , the transcriber has no choice but to correct such an error, since the non-word character simply does not exist in the Chinese character set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Variance", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "There are two steps in the writing / transcription process where errors can still occur: typing letters to spell a Pinyin and choosing a character out of a list for this Pinyin. Previous experiments showed that people usually do not check Pinyin for errors, but wait until the Chinese characters start to show up (Chen and Lee, 2000) . This behaviour generates two types of real-word spelling errors. In our example, spelling errors like confusing poor (qi\u01d2ng) with bear (xi\u01d2ng) are normally caused by wrong letters typed in the first step. The other error type, i.e., choosing a wrong word from the homophones, leads to spelling errors like pearl (zh\u016b zi) instead of bamboo (zh\u00fa zi). Researchers found that nearly 95% of errors are due to the misuse of homophones (Yang et al., 2012) , i.e., are errors of the second type. In order to reduce the influence of these errors in content scoring, introducing features presented as Pinyin might be beneficial.", |
|
"cite_spans": [ |
|
{ |
|
"start": 313, |
|
"end": 333, |
|
"text": "(Chen and Lee, 2000)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 765, |
|
"end": 784, |
|
"text": "(Yang et al., 2012)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Variance", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Variance of linguistic expression is obviously found in both English and Chinese short answers. As shown in Table 2 , nearly the same content can be expressed using different lexical and syntactic choices. Human annotators can usually abstract away from these differences and treat all answers the same. However, linguistic variance is a challenge for automatic scoring systems.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 108, |
|
"end": 115, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Linguistic Variance", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "In English content scoring, lemmatization is often considered a useful method to reduce part of the variance (Koleva et al., 2014) . In this process, words are reduced to their base forms, such as substituting \"ate\" with \"eat\" and deleting the \"s\" after \"bamboo\". In Chinese, similar grammatical morphemes such as \" \" and \" \", termed auxiliary words (Zan and Zhu, 2009) , which indicate the past tense and plural, can also be deleted in a preprocessing step to achieve a similar effect.", |
|
"cite_spans": [ |
|
{ |
|
"start": 109, |
|
"end": 130, |
|
"text": "(Koleva et al., 2014)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 350, |
|
"end": 369, |
|
"text": "(Zan and Zhu, 2009)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Variance", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Another type of variance is caused by synonyms. For such cases of lexical variance, external knowledge is often needed to decide that two different words are interchangeable. However, as we can see in Table 2 , some synonyms, such as \"panda bears\" vs. \"pandas\" and bamboo vs.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 201, |
|
"end": 208, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Linguistic Variance", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "bamboo share some character(s). Such similarities can be covered by character features, but not token n-grams.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Variance", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "In summary, there is the challenge of the segmentation of Chinese texts into tokens. Features extracted on other segmentation levels might be more robust and therefore helpful for automatic scoring. At the same time, NLP techniques which are useful to reduce variance ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Variance", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "As shown in Table 3 , all prior work on Chinese content scoring uses lexical features on the word level, such as word n-grams and sentence length in tokens. They are not only used in shallow learning methods like support vector machines (SVM) or support vector regression (SVR) (Wang et al., 2008; Wu and Shih, 2018) , but also applied to deep learning methods like long-short term memory recurrent neural networks (LSTM) (Yang et al., 2017; or deep autoencoders . Also for neural models using word embeddings, word-level tokenization is necessary. Wu and Yeh (2019) train 300-dimensional word2vec word embeddings on sentences from their data set along with Chinese Wikipedia articles and classify student answers with a convolution neural network (CNN). Li et al. (2019) use a Bidirectional Long Short-Term Memory (Bi-LSTM) network for semantic feature extraction from pre-trained 300dimensional word embeddings (Li et al., 2018) and score student answers based on their similarity to the reference answer using a mutual attention mechanism.", |
|
"cite_spans": [ |
|
{ |
|
"start": 278, |
|
"end": 297, |
|
"text": "(Wang et al., 2008;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 298, |
|
"end": 316, |
|
"text": "Wu and Shih, 2018)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 422, |
|
"end": 441, |
|
"text": "(Yang et al., 2017;", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 913, |
|
"end": 930, |
|
"text": "(Li et al., 2018)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 12, |
|
"end": 19, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Prior Work on Chinese Content Scoring", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "For segmentation, most prior work uses the jieba tokenizer 2 for pre-processing. However, 2 https://github.com/fxsjy/jieba the performance of the tokenization is rarely discussed. We also notice that no related work uses segmentation on character or component level. perform stop word removal, but they do not mention if it included some kind of removal of grammatical markers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Prior Work on Chinese Content Scoring", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In this section, we review existing Chinese content scoring data sets. They are not publicly available, which is a major obstacle to reproducibility in the field. We thus produce two new Chinese data sets (see detailed description in Section 4.2), which are available online 3 to foster future research . Horbach and Zesch (2019) give an overview of publicly available data sets for content scoring, five of which are for English, and compare them based on properties such as prompt type, learner population and data set size.", |
|
"cite_spans": [ |
|
{ |
|
"start": 305, |
|
"end": 329, |
|
"text": "Horbach and Zesch (2019)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Chinese Scoring Data Sets", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Unfortunately, we did not find any freely available Chinese content scoring data sets. Since we could not access the data sets used in related work, we can only compare them based on their brief descriptions, according to the aspects of comparison mentioned above. Results are shown in Table 4 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 286, |
|
"end": 293, |
|
"text": "Table 4", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Existing Data Sets", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The Debris Flow Hazard (DFH) data set is used in the earliest work. It contains more than 1000 answers for 2 prompts in a creative problem-solving task. The learner population are high-school students from Taiwan, who speak native Chinese (Wang et al., 2008) . The Chinese Reading Comprehension Corpus (CRCC) , contains five reading comprehension questions. Each question has on average 2500 answers from students in grade 8.", |
|
"cite_spans": [ |
|
{ |
|
"start": 239, |
|
"end": 258, |
|
"text": "(Wang et al., 2008)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Existing Data Sets", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Instead of collecting and annotating a data set from scratch, Wu and Shih (2018) translated the English SciEntBank (Dzikovska et al., 2013) and the computer science (CS) (Mohler and Mihalcea, 2009) data sets to Chinese. The data set was first translated using machine translation. In order to solve word usage and grammar problems, 12% of the sentences were manually corrected. In their most recent work, the authors also collected a data set with 12 short answer questions and overall 600 answers related to machine learning (ML_SQA) to compare with the CS-ZH M T data set (Wu and Yeh, 2019).", |
|
"cite_spans": [ |
|
{ |
|
"start": 170, |
|
"end": 197, |
|
"text": "(Mohler and Mihalcea, 2009)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Existing Data Sets", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In the most recent work ), a large data set containing 85.000 student and reference answers was collected from a national specialty examination related to law.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Existing Data Sets", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "As part of the contribution in this paper, we collected two new data sets for Chinese content scoring: Chinese Short Answer (CESA) and ASAP-ZH. In addition, we provide a machine-translated version of the the original ASAP-SAS English data, ASAP-ZH M T . Table 4 shows key properties, while Table 5 gives example answers of each data set.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 254, |
|
"end": 261, |
|
"text": "Table 4", |
|
"ref_id": "TABREF7" |
|
}, |
|
{ |
|
"start": 290, |
|
"end": 297, |
|
"text": "Table 5", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Collection of Open-access Data Sets", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Chinese Educational Short Answers (CESA) contains five questions from the physics and computer science domain (see Table 6 ). Answers are collected from 360 students in the computer science department of Zhengzhou University. Each participant was required to answer each question with a maximum of 20 characters, resulting in an average answer length of 13.5 characters. Two annotators speaking native Chinese with computer science background scored the answers into three classes, 0, 1 and 2 points, with an average inter-annotator agreement of 0.9 quadratically weighted kappa (QWK).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 115, |
|
"end": 122, |
|
"text": "Table 6", |
|
"ref_id": "TABREF10" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Collection of Open-access Data Sets", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "This data set is based on the ASAP short-answer scoring data set released by the Hewlett Foundation. 4 ASAP contains ten short answer prompts covering different subjects and about 2000 student answers per prompt. Prompt 1, 2 and 10 are sciencerelated tasks, which do not have a strong cultural background, and are therefore considered as appropriate to be transferred to other languages. Therefore, we collected answers in Chinese for these three prompts after manually translating the prompt material. The data collection provider BasicFinder 5 helped us to collect 942 answers altogether, 314 answers for each prompt. They are collected from students in high school from grades 9-12, which is comparable with the set of English answers in the ASAP-SAS data set. The answers are transcribed into digital form manually after being collected in handwriting. After reaching an acceptable agreement on a set of answers from the original ASAP-SAS, two annotators speaking native Chinese scored the ASAP-ZH data on a scale from 0 to 3 points (prompt 1 and 2) or 0 to 2 points (prompt 10) with an average QWK of 0.7. Key statistics for the data set can be found in Table 7 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1159, |
|
"end": 1166, |
|
"text": "Table 7", |
|
"ref_id": "TABREF11" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "ASAP-ZH", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "ASAP-ZH M T For comparison, we also translated the English answers in prompts 1,2 and 10 in the original ASAP-SAS data set to Chinese using the Google Translate API. 6 The examples in Table 5 show that some translation errors can be found, especially when errors exist already in the original text. Words containing spelling errors like \"wat\" instead of \"what\" are simply not translated at all. The overall translation quality is also not perfect, for example, the word \"coolest\" is wrongly translated into As shown in Tables 7 and 8, the average length of the translated answers is larger than the length of the original Chinese answers to the same prompt in our re-collected data set. One explanation could be that paid crowd workers are less motivated than actual students and therefore write shorter answers.", |
|
"cite_spans": [ |
|
{ |
|
"start": 166, |
|
"end": 167, |
|
"text": "6", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 184, |
|
"end": 191, |
|
"text": "Table 5", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "ASAP-ZH", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this section, we adapt a state-of-the-art content scoring system to Chinese. We evaluate it in six settings with different feature sets on the data sets described above in order to investigate different options for segmentation of Chinese text. Table 9 gives an example for the different segmentation options, which will also be detailed in Section 5.2. Additionally, we add a pre-processing step to remove all auxiliary words in the data in order to simulate the effect of lemmatization in English content scoring.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 248, |
|
"end": 255, |
|
"text": "Table 9", |
|
"ref_id": "TABREF13" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "For all our experiments, we use the ESCRITO (Zesch and Horbach, 2018) toolkit and extended it with readers and tokenization for Chinese text. ESCRITO is a publicly available general-purpose scoring framework based on DKPro TC (Daxenberger et al., 2014) , which uses an SVM classifier (Cortes and Vapnik, 1995) using the SMO algorithm as provided by WEKA (Witten et al., 1999) . For all kinds of features, we use the top 10000 most frequent show that white has the lowest light energy absorption rate 1 Black allows the doghouse to absorb more heat in the light, making it warm 0 Dark gray: keep the temperature unchanged, the lighter the color, the lower the temperature ASAP-ZH M T 10 2 white : : having white paint would make the dog house colder, :: so in the summer the dog would not be hot.", |
|
"cite_spans": [ |
|
{ |
|
"start": 44, |
|
"end": 69, |
|
"text": "(Zesch and Horbach, 2018)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 226, |
|
"end": 252, |
|
"text": "(Daxenberger et al., 2014)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 284, |
|
"end": 309, |
|
"text": "(Cortes and Vapnik, 1995)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 349, |
|
"end": 375, |
|
"text": "WEKA (Witten et al., 1999)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "General Experimental Setup", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "The average for white is the coolest temperature ( 42 ( DEG ) ) 42 DEG 1 black :: Because, the darker the lid color, :: the greater the increase in the air temperature in the glass jar. 0 light gray :: The light grey will effect the doghouse by making it more noticable and plus dogs can only see black, white and grey. 1-to 5-grams. Due to the limited amount of data, we use 10-fold cross-validation on both data sets. For evaluation, we use accuracy, i.e., the percentage of student answers scored correctly, as well as QWK, which does not only consider whether an answer is classified correctly or not, but also how far it is from the gold standard classification.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "General Experimental Setup", |
|
"sec_num": "5.1" |
|
}, |
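
{

"text": "The following sketch approximates this setup with scikit-learn instead of the ESCRITO/WEKA pipeline actually used (an assumption for illustration; a linear-kernel SVC stands in for the SMO-trained SVM):\n\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.metrics import accuracy_score, cohen_kappa_score\nfrom sklearn.model_selection import cross_val_predict\nfrom sklearn.svm import SVC\n\ndef evaluate(answers, scores):\n    # answers: pre-segmented strings (units joined by spaces); scores: gold labels\n    # top 10,000 most frequent 1- to 5-grams over the chosen segmentation units\n    vectorizer = CountVectorizer(token_pattern=r'\\S+', ngram_range=(1, 5), max_features=10000)\n    X = vectorizer.fit_transform(answers)  # fitting on all data is a simplification here\n    predicted = cross_val_predict(SVC(kernel='linear'), X, scores, cv=10)\n    return accuracy_score(scores, predicted), cohen_kappa_score(scores, predicted, weights='quadratic')",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "General Experimental Setup",

"sec_num": "5.1"

},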
|
{ |
|
"text": "Token Baseline As a baseline, we follow previous work and use tokenization as segmentation, based on the HanLP tokenizer (He, 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 121, |
|
"end": 131, |
|
"text": "(He, 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Sets", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "In order to reduce the variance caused by spelling errors, we transcribe the text into Pinyin using cnchar (Chen, 2020) and extract ngrams on the level of transcribed characters. Note that we did not include information about tones in Pinyin on purpose, in order to cover spelling errors caused by homophones.", |
|
"cite_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 119, |
|
"text": "(Chen, 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pinyin Features", |
|
"sec_num": null |
|
}, |
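
{

"text": "A minimal sketch of this transcription step; pypinyin is assumed here as a Python substitute for the cnchar JS library used in our pipeline:\n\nfrom pypinyin import lazy_pinyin  # pip install pypinyin; lazy_pinyin omits tones\n\n# without tones, the homophone confusion pearl 珠子 vs. bamboo 竹子 collapses\nprint(lazy_pinyin('珠子'))  # ['zhu', 'zi']\nprint(lazy_pinyin('竹子'))  # ['zhu', 'zi']\n\nCharacter n-grams are then extracted over this transcription.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Pinyin Features",

"sec_num": null

},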
|
{ |
|
"text": "For this segmentation level, we simply split a text into individual characters.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Character Features", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "To extract these features on sub-character level, we use a dictionary with 17,803 Chinese characters 7 and their components to decompose all characters.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Component Features", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Remember that radicals are only those components carrying the meaning of characters and might therefore be particularly useful in content scoring. We use XMNLP (Li, 2019) to extract the radicals of each character and use only those as features. Note that some radicals as defined by the \" nents\" 8 can consist of more than one component, therefore the radicals are not a proper subset of the components extracted above.", |
|
"cite_spans": [ |
|
{ |
|
"start": 160, |
|
"end": 170, |
|
"text": "(Li, 2019)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Radical Features", |
|
"sec_num": null |
|
}, |
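
{

"text": "A minimal sketch of the dictionary-based decomposition for both the component and the radical level; the two-entry dictionary below is illustrative toy data, whereas the experiments use the full 17,803-character dictionary and XMNLP:\n\n# character -> (components, radical); entries for 熊猫 (panda bear)\nDECOMP = {\n    '熊': (['能', '灬'], '灬'),\n    '猫': (['犭', '苗'], '犭'),\n}\n\ndef components(text):\n    return [c for ch in text for c in DECOMP[ch][0]]\n\ndef radicals(text):\n    return [DECOMP[ch][1] for ch in text]\n\nprint(components('熊猫'))  # ['能', '灬', '犭', '苗']\nprint(radicals('熊猫'))    # ['灬', '犭']",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Radical Features",

"sec_num": null

},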
|
{ |
|
"text": "We use the cnchar tool to represent each answer as a sequence of individual strokes, following the stroke order for each character. Although we show the strokes in their original shapes in Table 9 , a letter encoding is used in the experiment for an efficient processing.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 189, |
|
"end": 196, |
|
"text": "Table 9", |
|
"ref_id": "TABREF13" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Stroke Features", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Auxiliary Words Removal Based on the knowledge database released by Han et al. (2011) , which contains 45 common auxiliary words in modern Chinese, we remove all these grammatical morphemes on token level to reduce the influence of expression variance. In our example shown in problems in scoring, we manually inspected 100 answers from prompt 1 and 4 in CESA. However, we found that tokenization was only erroneous in 12 cases. Surprisingly, most of them occurred in prompt 1, where the token baseline even outperformed the character features and not in prompt 4, where character features performed better. We also had a closer look at a number of student answers which are assigned a wrong score by the token baseline model but not by models with more fine-grained features. 7 out of 18 instances contain indeed variants of more frequent words in the data set. For example, human and human are less-frequently seen variants of human , all of which are indicators of a correct answer. This supports the assumption that, like in English, character-level features can capture variance in learner answers, in this case by handling variance in lexical choice.", |
|
"cite_spans": [ |
|
{ |
|
"start": 68, |
|
"end": 85, |
|
"text": "Han et al. (2011)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stroke Features", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The usage of Pinyin did not bring the expected benefit, possibly because the amount of spelling errors is not substantial enough in the data. Similarly, removing auxiliary words appears to have little influence on scoring performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In this paper, we discussed the main challenges in Chinese content scoring in comparison with English, namely segmentation and a different form of linguistic variance. We reviewed related work in Chinese content scoring and saw a need for open-access scoring data sets in Chinese. Therefore, we collected two new data sets, CESA and ASAP-ZH, and release them for research in the future.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Summary & Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "While previous work has been limited to word-level features, we conducted a comparison of features on different segmentation levels. Although the difference between feature sets was in general small, we found that some answers with unusual expressions have a tendency to be better scored with models trained on lower level features, such as character ngrams.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Summary & Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "In the future, we will extend our comparison of segmentation levels also to a deep learning setting, using embeddings of different granularity (Yin et al., 2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 143, |
|
"end": 161, |
|
"text": "(Yin et al., 2016)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Summary & Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "In this work, we use the term 'Chinese' as abbreviation for Mandarin Chinese, which includes simplified and traditional written Chinese. Cantonese, Wu, Min Nan and other dialects are not included.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/ltlude/ChineseShortAnswerDatasets", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://www.kaggle.com/c/asap-sas", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/kfcd/chaizi", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Reading comprehension in chinese: Implications from character reading times. Language processing in Chinese", |
|
"authors": [ |
|
{ |
|
"first": "Hsuan-Chih", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "175--205", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hsuan-Chih Chen. 1992. Reading comprehension in chinese: Implications from character reading times. Language processing in Chinese, pages 175-205.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Word identification for mandarin chinese sentences", |
|
"authors": [ |
|
{ |
|
"first": "Shing-Huan", |
|
"middle": [], |
|
"last": "Keh-Jiann Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proceedings of the 14th conference on Computational linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "101--107", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Keh-Jiann Chen and Shing-Huan Liu. 1992. Word identification for mandarin chinese sentences. In Proceedings of the 14th conference on Computa- tional linguistics-Volume 1, pages 101-107. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Full-featured, multi-end support for hanyu pinyin strokes js library", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tack Chen. 2020. Full-featured, multi-end support for hanyu pinyin strokes js library.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A new statistical approach to chinese pinyin input", |
|
"authors": [ |
|
{ |
|
"first": "Zheng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai-Fu", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "241--247", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zheng Chen and Kai-Fu Lee. 2000. A new statis- tical approach to chinese pinyin input. In Pro- ceedings of the 38th Annual Meeting of the As- sociation for Computational Linguistics, pages 241-247.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Support-vector networks", |
|
"authors": [ |
|
{ |
|
"first": "Corinna", |
|
"middle": [], |
|
"last": "Cortes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vladimir", |
|
"middle": [], |
|
"last": "Vapnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Machine learning", |
|
"volume": "20", |
|
"issue": "3", |
|
"pages": "273--297", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Corinna Cortes and Vladimir Vapnik. 1995. Support-vector networks. Machine learning, 20(3):273-297.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Dkpro tc: A java-based framework for supervised learning experiments on textual data", |
|
"authors": [ |
|
{ |
|
"first": "Johannes", |
|
"middle": [], |
|
"last": "Daxenberger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oliver", |
|
"middle": [], |
|
"last": "Ferschke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iryna", |
|
"middle": [], |
|
"last": "Gurevych", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Torsten", |
|
"middle": [], |
|
"last": "Zesch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "61--66", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Johannes Daxenberger, Oliver Ferschke, Iryna Gurevych, and Torsten Zesch. 2014. Dkpro tc: A java-based framework for supervised learning experiments on textual data. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstra- tions, pages 61-66.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Semeval-2013 task 7: The joint student response analysis and 8th recognizing textual entailment challenge", |
|
"authors": [ |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Myroslava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Dzikovska", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Rodney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Nielsen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claudia", |
|
"middle": [], |
|
"last": "Brew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danilo", |
|
"middle": [], |
|
"last": "Leacock", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luisa", |
|
"middle": [], |
|
"last": "Giampiccolo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Bentivogli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ido", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hoa", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Dagan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Dang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Myroslava O Dzikovska, Rodney D Nielsen, Chris Brew, Claudia Leacock, Danilo Giampiccolo, Luisa Bentivogli, Peter Clark, Ido Dagan, and Hoa T Dang. 2013. Semeval-2013 task 7: The joint student response analysis and 8th recog- nizing textual entailment challenge. Technical report, NORTH TEXAS STATE UNIV DEN- TON.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Automatic annotation of auxiliary words usage in rule-based chinese language", |
|
"authors": [ |
|
{ |
|
"first": "Ying-Jie", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hong-Ying", |
|
"middle": [], |
|
"last": "Zan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kun-Li", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu-Mei", |
|
"middle": [], |
|
"last": "Chai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Jisuanji Yingyong/ Journal of Computer Applications", |
|
"volume": "31", |
|
"issue": "12", |
|
"pages": "3271--3274", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ying-Jie Han, Hong-Ying Zan, Kun-Li Zhang, and Yu-Mei Chai. 2011. Automatic annotation of auxiliary words usage in rule-based chinese lan- guage. Jisuanji Yingyong/ Journal of Computer Applications, 31(12):3271-3274.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Ets: Domain adaptation and stacking for short answer scoring", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Heilman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nitin", |
|
"middle": [], |
|
"last": "Madnani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Seventh International Workshop on Semantic Evaluation (Se-mEval 2013)", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "275--279", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Heilman and Nitin Madnani. 2013. Ets: Domain adaptation and stacking for short an- swer scoring. In Second Joint Conference on Lexical and Computational Semantics (* SEM), Volume 2: Proceedings of the Seventh Interna- tional Workshop on Semantic Evaluation (Se- mEval 2013), pages 275-279.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "The influence of spelling errors on content scoring performance", |
|
"authors": [ |
|
{ |
|
"first": "Andrea", |
|
"middle": [], |
|
"last": "Horbach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuning", |
|
"middle": [], |
|
"last": "Ding", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Torsten", |
|
"middle": [], |
|
"last": "Zesch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 4th Workshop on Natural Language Processing Techniques for Educational Applications (NLPTEA 2017)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "45--53", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrea Horbach, Yuning Ding, and Torsten Zesch. 2017. The influence of spelling errors on content scoring performance. In Proceedings of the 4th Workshop on Natural Language Processing Tech- niques for Educational Applications (NLPTEA 2017), pages 45-53.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "The influence of variance in learner answers on automatic content scoring", |
|
"authors": [ |
|
{ |
|
"first": "Andrea", |
|
"middle": [], |
|
"last": "Horbach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Torsten", |
|
"middle": [], |
|
"last": "Zesch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Frontiers in Education", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrea Horbach and Torsten Zesch. 2019. The influence of variance in learner answers on auto- matic content scoring. Frontiers in Education, 4:28.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Segmentation standard for chinese natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Chu-Ren", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Keh-Jiann", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li-Li", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of the 16th Conference on Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "1045--1048", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/993268.993362" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chu-Ren Huang, Keh-Jiann Chen, and Li-Li Chang. 1996. Segmentation standard for chi- nese natural language processing. In Proceed- ings of the 16th Conference on Computational Linguistics -Volume 2, COLING 96, page 1045 1048, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Automatic chinese reading comprehension grading by lstm with knowledge adaptation", |
|
"authors": [ |
|
{ |
|
"first": "Yuwei", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xi", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fuzhen", |
|
"middle": [], |
|
"last": "Zhuang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lishan", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shengquan", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Pacific-Asia Conference on Knowledge Discovery and Data Mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "118--129", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuwei Huang, Xi Yang, Fuzhen Zhuang, Lishan Zhang, and Shengquan Yu. 2018. Automatic chinese reading comprehension grading by lstm with knowledge adaptation. In Pacific-Asia Conference on Knowledge Discovery and Data Mining, pages 118-129. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Paraphrase detection for short answer scoring", |
|
"authors": [ |
|
{ |
|
"first": "Nikolina", |
|
"middle": [], |
|
"last": "Koleva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrea", |
|
"middle": [], |
|
"last": "Horbach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simon", |
|
"middle": [], |
|
"last": "Ostermann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manfred", |
|
"middle": [], |
|
"last": "Pinkal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the third workshop on NLP for computer-assisted language learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "59--73", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nikolina Koleva, Andrea Horbach, Alexis Palmer, Simon Ostermann, and Manfred Pinkal. 2014. Paraphrase detection for short answer scoring. In Proceedings of the third workshop on NLP for computer-assisted language learning, pages 59- 73.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "C-rater: Automated scoring of short-answer questions", |
|
"authors": [ |
|
{ |
|
"first": "Claudia", |
|
"middle": [], |
|
"last": "Leacock", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Chodorow", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Computers and the Humanities", |
|
"volume": "37", |
|
"issue": "4", |
|
"pages": "389--405", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Claudia Leacock and Martin Chodorow. 2003. C-rater: Automated scoring of short-answer questions. Computers and the Humanities, 37(4):389-405.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Grading chinese answers on specialty subjective questions", |
|
"authors": [ |
|
{ |
|
"first": "Dongjin", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tianyuan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Pan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaoyue", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuqing", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Feng", |
|
"middle": [], |
|
"last": "Yuan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "CCF Conference on Computer Supported Cooperative Work and Social Computing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "670--682", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dongjin Li, Tianyuan Liu, Wei Pan, Xiaoyue Liu, Yuqing Sun, and Feng Yuan. 2019. Grading chi- nese answers on specialty subjective questions. In CCF Conference on Computer Supported Co- operative Work and Social Computing, pages 670-682. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "The history of chinese characters", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ht Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1977, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "HT Li. 1977. The history of chinese characters. Taipei, Taiwan: Lian-Jian.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Analogical reasoning on chinese morphological and semantic relations", |
|
"authors": [ |
|
{ |
|
"first": "Shen", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhe", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Renfen", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wensi", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaoyong", |
|
"middle": [], |
|
"last": "Du", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1805.06504" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shen Li, Zhe Zhao, Renfen Hu, Wensi Li, Tao Liu, and Xiaoyong Du. 2018. Analogical reasoning on chinese morphological and semantic relations. arXiv preprint arXiv:1805.06504.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "A lightweight chinese natural language processing toolkit", |
|
"authors": [ |
|
{ |
|
"first": "Xianming", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xianming Li. 2019. A lightweight chinese natu- ral language processing toolkit. https://github. com/SeanLee97/xmnlp.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Spelling checkers, spelling correctors and the misspellings of poor spellers. Information processing & management", |
|
"authors": [ |
|
{ |
|
"first": "Roger", |
|
"middle": [], |
|
"last": "Mitton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1987, |
|
"venue": "", |
|
"volume": "23", |
|
"issue": "", |
|
"pages": "495--505", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roger Mitton. 1987. Spelling checkers, spelling cor- rectors and the misspellings of poor spellers. In- formation processing & management, 23(5):495- 505.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "English spelling and the computer", |
|
"authors": [ |
|
{ |
|
"first": "Roger", |
|
"middle": [], |
|
"last": "Mitton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roger Mitton. 1996. English spelling and the com- puter. Longman Group.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Text-totext semantic similarity for automatic short answer grading", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Mohler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rada", |
|
"middle": [], |
|
"last": "Mihalcea", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "567--575", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Mohler and Rada Mihalcea. 2009. Text-to- text semantic similarity for automatic short an- swer grading. In Proceedings of the 12th Confer- ence of the European Chapter of the Association for Computational Linguistics, pages 567-575. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Progress testing with short answer questions", |
|
"authors": [ |
|
{ |
|
"first": "Jany", |
|
"middle": [], |
|
"last": "Rademakers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Th", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Ten Cate", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "B\u00e4r", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Medical teacher", |
|
"volume": "27", |
|
"issue": "7", |
|
"pages": "578--582", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jany Rademakers, Th J Ten Cate, and PR B\u00e4r. 2005. Progress testing with short answer ques- tions. Medical teacher, 27(7):578-582.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Investigating neural architectures for short answer scoring", |
|
"authors": [ |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Riordan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrea", |
|
"middle": [], |
|
"last": "Horbach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aoife", |
|
"middle": [], |
|
"last": "Cahill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Torsten", |
|
"middle": [], |
|
"last": "Zesch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chungmin", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "159--168", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brian Riordan, Andrea Horbach, Aoife Cahill, Torsten Zesch, and Chungmin Lee. 2017. In- vestigating neural architectures for short answer scoring. In Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications, pages 159-168.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Effective feature integration for automated short answer scoring", |
|
"authors": [ |
|
{ |
|
"first": "Keisuke", |
|
"middle": [], |
|
"last": "Sakaguchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Heilman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nitin", |
|
"middle": [], |
|
"last": "Madnani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 conference of the North American Chapter of the association for computational linguistics: Human language technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1049--1054", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Keisuke Sakaguchi, Michael Heilman, and Nitin Madnani. 2015. Effective feature integration for automated short answer scoring. In Proceedings of the 2015 conference of the North American Chapter of the association for computational lin- guistics: Human language technologies, pages 1049-1054.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Chinese embedding via stroke and glyph information: A dual-channel view", |
|
"authors": [ |
|
{ |
|
"first": "Hanqing", |
|
"middle": [], |
|
"last": "Tao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shiwei", |
|
"middle": [], |
|
"last": "Tong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tong", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qi", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Enhong", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1906.04287" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hanqing Tao, Shiwei Tong, Tong Xu, Qi Liu, and Enhong Chen. 2019. Chinese embedding via stroke and glyph information: A dual-channel view. arXiv preprint arXiv:1906.04287.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Assessing creative problem-solving with automated text grading", |
|
"authors": [ |
|
{ |
|
"first": "Hao-Chuan", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chun-Yen", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tsai-Yen", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Computers & Education", |
|
"volume": "51", |
|
"issue": "4", |
|
"pages": "1450--1466", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hao-Chuan Wang, Chun-Yen Chang, and Tsai-Yen Li. 2008. Assessing creative problem-solving with automated text grading. Computers & Ed- ucation, 51(4):1450-1466.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Weka: Practical machine learning tools and techniques with java implementations", |
|
"authors": [ |
|
{ |
|
"first": "Ian", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Witten", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eibe", |
|
"middle": [], |
|
"last": "Frank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leonard", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Trigg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Hall", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [], |
|
"last": "Holmes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sally", |
|
"middle": [ |
|
"Jo" |
|
], |
|
"last": "Cunningham", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ian H Witten, Eibe Frank, Leonard E Trigg, Mark A Hall, Geoffrey Holmes, and Sally Jo Cunningham. 1999. Weka: Practical machine learning tools and techniques with java imple- mentations.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "A short answer grading system in chinese by support vector approach", |
|
"authors": [ |
|
{ |
|
"first": "Shih-Hung", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wen-Feng", |
|
"middle": [], |
|
"last": "Shih", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "125--129", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shih-Hung Wu and Wen-Feng Shih. 2018. A short answer grading system in chinese by support vector approach. In Proceedings of the 5th Workshop on Natural Language Processing Tech- niques for Educational Applications, pages 125- 129.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "A short answer grading system in chinese by cnn", |
|
"authors": [ |
|
{ |
|
"first": "Shih-Hung", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chun-Yu", |
|
"middle": [], |
|
"last": "Yeh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "2019 IEEE 10th International Conference on Awareness Science and Technology (iCAST)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--5", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shih-Hung Wu and Chun-Yu Yeh. 2019. A short answer grading system in chinese by cnn. In 2019 IEEE 10th International Conference on Awareness Science and Technology (iCAST), pages 1-5. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Spell checking for chinese", |
|
"authors": [ |
|
{ |
|
"first": "Shaohua", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hai", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaolin", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Baoliang", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "LREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "730--736", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shaohua Yang, Hai Zhao, Xiaolin Wang, and Bao- liang Lu. 2012. Spell checking for chinese. In LREC, pages 730-736.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Automatic Chinese Short Answer Grading with Deep Autoencoder", |
|
"authors": [ |
|
{ |
|
"first": "Xi", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuwei", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fuzhen", |
|
"middle": [], |
|
"last": "Zhuang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lishan", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shengquan", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Artificial Intelligence in Education", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "399--404", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xi Yang, Yuwei Huang, Fuzhen Zhuang, Lishan Zhang, and Shengquan Yu. 2018. Automatic Chinese Short Answer Grading with Deep Au- toencoder. In Artificial Intelligence in Educa- tion, pages 399-404, Cham. Springer Interna- tional Publishing.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Can short answers to open response questions be auto-graded without a grading rubric?", |
|
"authors": [ |
|
{ |
|
"first": "Xi", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lishan", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shengquan", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "International Conference on Artificial Intelligence in Education", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "594--597", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xi Yang, Lishan Zhang, and Shengquan Yu. 2017. Can short answers to open response questions be auto-graded without a grading rubric? In In- ternational Conference on Artificial Intelligence in Education, pages 594-597. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Multi-granularity chinese word embedding", |
|
"authors": [ |
|
{ |
|
"first": "Rongchao", |
|
"middle": [], |
|
"last": "Yin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quan", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peng", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rui", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bin", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 conference on empirical methods in natural language processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "981--986", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rongchao Yin, Quan Wang, Peng Li, Rui Li, and Bin Wang. 2016. Multi-granularity chinese word embedding. In Proceedings of the 2016 confer- ence on empirical methods in natural language processing, pages 981-986.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Nlp oriented studies on chinese functional words and the construction of their generalized knowledge base", |
|
"authors": [ |
|
{ |
|
"first": "Hongying", |
|
"middle": [], |
|
"last": "Zan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xuefeng", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Contemporary Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "124--135", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hongying Zan and Xuefeng Zhu. 2009. Nlp ori- ented studies on chinese functional words and the construction of their generalized knowledge base. Contemporary Linguistics, 2:124-135.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Reducing annotation efforts in supervised short answer scoring", |
|
"authors": [ |
|
{ |
|
"first": "Torsten", |
|
"middle": [], |
|
"last": "Zesch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Heilman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aoife", |
|
"middle": [], |
|
"last": "Cahill", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "124--132", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Torsten Zesch, Michael Heilman, and Aoife Cahill. 2015. Reducing annotation efforts in supervised short answer scoring. In Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 124- 132.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Escritoan nlp-enhanced educational scoring toolkit", |
|
"authors": [ |
|
{ |
|
"first": "Torsten", |
|
"middle": [], |
|
"last": "Zesch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrea", |
|
"middle": [], |
|
"last": "Horbach", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Torsten Zesch and Andrea Horbach. 2018. Escrito- an nlp-enhanced educational scoring toolkit. In Proceedings of the Eleventh International Con- ference on Language Resources and Evaluation (LREC-2018).", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Chinese word segmentation: Another decade review", |
|
"authors": [ |
|
{ |
|
"first": "Hai", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deng", |
|
"middle": [], |
|
"last": "Cai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Changning", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chunyu", |
|
"middle": [], |
|
"last": "Kit", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1901.06079" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hai Zhao, Deng Cai, Changning Huang, and Chunyu Kit. 2019. Chinese word segmenta- tion: Another decade review (2007-2017). arXiv preprint arXiv:1901.06079.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Short answer assessment: Establishing links between research strands", |
|
"authors": [ |
|
{ |
|
"first": "Ramon", |
|
"middle": [], |
|
"last": "Ziai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niels", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Detmar", |
|
"middle": [], |
|
"last": "Meurers", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Seventh Workshop on Building Educational Applications Using NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "190--200", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ramon Ziai, Niels Ott, and Detmar Meurers. 2012. Short answer assessment: Establishing links be- tween research strands. In Proceedings of the Seventh Workshop on Building Educational Ap- plications Using NLP, pages 190-200. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "https://www.basicfinder.com/en 6 https://cloud.google.com/translate", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "the indoor temperature not too high, experiments", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Comparison of segmentation possibilities in English and Chinese", |
|
"num": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"html": null, |
|
"text": "bear eat bambooOrthographic Variance Panda beers eat bambu.", |
|
"num": null, |
|
"content": "<table><tr><td/><td>English</td><td>Chinese</td></tr><tr><td>Reference Answer</td><td>Panda bears eat bamboo.</td></tr><tr><td/><td>poor cat eat pearl</td></tr><tr><td/><td colspan=\"2\">panda bear eat <grammatical morpheme for past tense></td></tr><tr><td>Expression Variance</td><td>Panda bears ate bamboos.</td></tr><tr><td/><td colspan=\"2\">bamboo <grammatical morpheme for plural></td></tr><tr><td/><td>Pandas eat bamboo.</td></tr></table>" |
|
}, |
|
"TABREF3": { |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Example answers showing variance in English and Chinese for the question:", |
|
"num": null, |
|
"content": "<table><tr><td>What do panda</td></tr></table>" |
|
}, |
|
"TABREF5": { |
|
"type_str": "table", |
|
"html": null, |
|
"text": "", |
|
"num": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF7": { |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Chinese content scoring data sets: data sets from previous work (upper part) and our new data sets (lower part)", |
|
"num": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF8": { |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Example answers in our data sets.", |
|
"num": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF9": { |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Indexing Chinese Character Compo-", |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"3\">ID Prompt</td><td/><td/><td/><td>IAA</td><td>avg.</td><td>Distribution</td></tr><tr><td/><td/><td/><td/><td/><td/><td>(QWK) Length</td></tr><tr><td>1</td><td>why</td><td colspan=\"3\">we can use diamond cut glass</td><td>?</td><td>.94</td><td>9.6</td></tr><tr><td/><td>why</td><td>red clothes looks</td><td>as</td><td>red</td><td/></tr><tr><td>2</td><td/><td/><td/><td/><td>?</td><td>.83</td><td>14.7</td></tr><tr><td/><td colspan=\"2\">what is artificial intelligence</td><td/><td/><td/></tr><tr><td>3</td><td/><td>?</td><td/><td/><td/><td>.91</td><td>15.3</td></tr><tr><td/><td colspan=\"2\">what is natural language</td><td/><td/><td/></tr><tr><td>4</td><td/><td>?</td><td/><td/><td/><td>.93</td><td>12.1</td></tr><tr><td/><td colspan=\"2\">what is machine learning</td><td/><td/><td/></tr><tr><td>5</td><td/><td>?</td><td/><td/><td/><td>.89</td><td>15.7</td></tr></table>" |
|
}, |
|
"TABREF10": { |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Overview of prompts in CESA", |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"2\">ID IAA</td><td>avg.</td><td>Distribution</td></tr><tr><td/><td colspan=\"2\">(QWK) Length</td></tr><tr><td>1</td><td>.72</td><td>35.3</td></tr><tr><td>2</td><td>.70</td><td>38.2</td></tr><tr><td>10</td><td>.69</td><td>37.6</td></tr></table>" |
|
}, |
|
"TABREF11": { |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Overview of prompts in ASAP-ZH", |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"2\">ID IAA</td><td>avg.</td><td>Distribution</td></tr><tr><td/><td colspan=\"2\">(QWK) Length</td></tr><tr><td>1</td><td>.96</td><td>68</td></tr><tr><td>2</td><td>.94</td><td>94</td></tr><tr><td>10</td><td>.91</td><td>61</td></tr></table>" |
|
}, |
|
"TABREF12": { |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Overview of prompts in ASAP-ZH M T", |
|
"num": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF13": { |
|
"type_str": "table", |
|
"html": null, |
|
"text": "", |
|
"num": null, |
|
"content": "<table><tr><td>, the possessive</td></tr><tr><td>8 http://www.moe.gov.cn/s78/A19/yxs_left/moe</td></tr><tr><td>_810/s230/201001/t20100115_75694.html</td></tr></table>" |
|
}, |
|
"TABREF14": { |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Different segmentation levels for an answer in CESA, prompt 1.marker", |
|
"num": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF15": { |
|
"type_str": "table", |
|
"html": null, |
|
"text": ".03 + .01 ** + .13 + .01 + .04 + .09 ** -.02 + .01 + .01 \u00b10 Character -.01 + .03 \u00b10 + .11 + .05 + .04 ** + .13 + .03 + .06 + .07 ** \u00b10 + .04 + .04 + .02 * Component -.03 + .03 -.01 + .10 + .02 + .02 ** + .17 + .04 + .08 + .10 ** -.01 \u00b10 + .04 + .01 ** Radical -.02 + .02 + .03 + .07 \u00b10 + .02 ** + .08 + .08 + .02 + .06 ** + .02 -.02 + .04 + .01 .01 ** + .14 + .07 + .04 + .08 ** -.01 -.02 .03 + .02 + .01 + .01 ** -.01 \u00b10 -.01 -.01 ** -.01 -.01 + .01", |
|
"num": null, |
|
"content": "<table><tr><td>shows the performance of the differ-</td></tr><tr><td>ent system configurations for the individual</td></tr><tr><td>data sets, per prompt as well as averaged over</td></tr><tr><td>all prompts from the same data set. First,</td></tr><tr><td>we see that all feature sets were able to learn</td></tr><tr><td>something meaningful from the training data.</td></tr><tr><td>Although the performance of different feature</td></tr><tr><td>sets is quite close to each other, we see a slight</td></tr><tr><td>but significant advantage across data sets of</td></tr><tr><td>component and character features over the to-</td></tr><tr><td>ken baseline.</td></tr><tr><td>In order to check if tokenization caused</td></tr></table>" |
|
}, |
|
"TABREF16": { |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Classification results on different feature sets in QWK values.", |
|
"num": null, |
|
"content": "<table/>" |
|
} |
|
} |
|
} |
|
} |