|
{ |
|
"paper_id": "O17-1028", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:59:49.612750Z" |
|
}, |
|
"title": "Opinion Target Extraction for Student Course Feedback", |
|
"authors": [ |
|
{ |
|
"first": "Janaka", |
|
"middle": [], |
|
"last": "Chathuranga", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Moratuwa", |
|
"location": { |
|
"addrLine": "Katubedda 10400", |
|
"country": "Sri Lanka" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Shanika", |
|
"middle": [], |
|
"last": "Ediriweera", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Moratuwa", |
|
"location": { |
|
"addrLine": "Katubedda 10400", |
|
"country": "Sri Lanka" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Pranidhith", |
|
"middle": [], |
|
"last": "Munasinghe", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Moratuwa", |
|
"location": { |
|
"addrLine": "Katubedda 10400", |
|
"country": "Sri Lanka" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Ravindu", |
|
"middle": [], |
|
"last": "Hasantha", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Moratuwa", |
|
"location": { |
|
"addrLine": "Katubedda 10400", |
|
"country": "Sri Lanka" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Surangika", |
|
"middle": [], |
|
"last": "Ranathunga", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Moratuwa", |
|
"location": { |
|
"addrLine": "Katubedda 10400", |
|
"country": "Sri Lanka" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Student feedback is an essential part of the instructor-student relationship. Traditionally student feedback is manually summarized by instructors, which is time consuming. Automatic student feedback summarization provides a potential solution to this. For summarizing student feedback, first, the opinion targets should be identified and extracted. In this context, opinion targets such as \"lecture slides\", \"teaching style\" are the important key points in the feedback that the students have shown their sentiment towards. In this paper, we focus on the opinion target extraction task of general student feedback. We model this problem as an information extraction task and extract opinion targets using a Conditional Random Fields (CRF) classifier. Our results show that this classifier outperforms the state-of-the-art techniques for student feedback summarization.", |
|
"pdf_parse": { |
|
"paper_id": "O17-1028", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Student feedback is an essential part of the instructor-student relationship. Traditionally student feedback is manually summarized by instructors, which is time consuming. Automatic student feedback summarization provides a potential solution to this. For summarizing student feedback, first, the opinion targets should be identified and extracted. In this context, opinion targets such as \"lecture slides\", \"teaching style\" are the important key points in the feedback that the students have shown their sentiment towards. In this paper, we focus on the opinion target extraction task of general student feedback. We model this problem as an information extraction task and extract opinion targets using a Conditional Random Fields (CRF) classifier. Our results show that this classifier outperforms the state-of-the-art techniques for student feedback summarization.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Student feedback is used widely in present in order to enhance the quality of teaching and learning. Feedback is collected from students as online forms as well as handwritten documents. Since it takes a considerable effort to read and understand all the feedback given by the students, the best way is to read all the feedback and create a summary that covers all the aspects of all the feedback given.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "The 2017 Conference on Computational Linguistics and Speech Processing ROCLING 2017, pp. 295-307 \uf0d3 The Association for Computational Linguistics and Chinese Language Processing Although many lecturers collect student feedback, comments written by students are not summarized. If a lecturer wants to get a summary of these comments, the lecturer has to manually read and summarize these comments. Manual summarization is not scalable; in a large class with more than few hundred students, it is going to be a tedious and rigorous task. Thus, a system to summarize all student feedback and giving an overall summary by categorizing students' sentiments towards different aspects of the lecture will be very useful for teachers, lecturers, schools, universities, and the education systems as a whole.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "295", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Research done in this area so far has focused only on using student feedback collected using reflective prompts [1] . With a reflective prompt, student feedback is collected by giving a specific question (prompt). For an example, a prompt such as \"What are the most interesting topics of today's lecture?\" is considered as a reflective prompt. In reflective prompts, the prompt decides the opinion of the feedback: positive or negative. Opinion for different aspects in student feedback cannot be measured in this approach.", |
|
"cite_spans": [ |
|
{ |
|
"start": 112, |
|
"end": 115, |
|
"text": "[1]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "295", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this paper, we focus on general student feedback. General feedback means that the feedback is collected using a general prompt (example: \"Give feedback on today's lecture\"), rather than a specific prompt where the prompt suggests the sentiment of the feedback.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "295", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Our system contains three parts:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "295", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(1) Identifying and extracting all the opinion targets in the given feedback", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "295", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(2) Clustering all the targets into unique categories (3) Determining the sentimental polarity of the targets and getting a statistic of polarity for each target cluster.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "295", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Here in this paper, we only focus on the first part of our solution, which is identifying and extracting the opinion targets from student feedback. First, we undergo a data-preprocessing step to fix errors in the dataset. Then we annotate the targets using our own annotation schema into Beginning, Inside, Outside (BIO) tags, and then we use a Conditional Random Field (CRF) classifier as a supervised approach to extract the opinion targets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "295", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "To the best of our knowledge, there is no prior research done on general feedback summarization. Thus, we have no viable baseline, nor an annotated data set. Therefore, we have created the baseline for our system using the supervised approach used by Luo et al. [1], which was done using reflective prompts. We show that for this general feedback data set, our classifier outperforms the selected baseline system.", |
|
"cite_spans": [ |
|
{ |
|
"start": 251, |
|
"end": 261, |
|
"text": "Luo et al.", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "295", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Even though it is suggested that deep learning techniques such as Recurrent Neural Networks(RNN)[2] perform well in extracting opinion targets, we are not able to get good results because our dataset is very small with 956 student responses in total with 4428 sentences. In order to use deep learning techniques, we need a much bigger dataset and there is no other general student feedback dataset which suits our purpose.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "295", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The rest of the paper is structured as follows; Section 2 overviews previous work and section 3 describes the data used for our experiments. Section 4 describes the details about our approach and the features used. Section 5 describes how the experiment is done and the evaluation results, followed by the conclusion in Section 6.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "295", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "There are two general approaches for automatic summarization: extraction and abstraction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Extractive methods work by selecting a subset of existing words, phrases, or sentences in the original text to form the summary. In contrast, abstractive methods build an internal semantic representation and then use natural language generation techniques to create a summary.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Research on student feedback summarization done up to the day has only used extractive methods such as Integer Linear Programming [3] , Phrase-based approach, clustering, and ranking approaches [4] to summarize student feedback. These techniques use reflective prompt-based student feedback data sets. That means when acquiring feedback, the students are guided with a specific question such as, \"Describe what you found most interesting in today's class?\" In general feedback, the student is not directed towards a specific aspect.", |
|
"cite_spans": [ |
|
{ |
|
"start": 130, |
|
"end": 133, |
|
"text": "[3]", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 194, |
|
"end": 197, |
|
"text": "[4]", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Students are given the chance to write anything related to the lecture on their feedback. Therefore, it contains many unexpected, but useful information. Further, general feedback contains more complex content and noise compared to reflective prompt based feedback.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Therefore, target extraction on general feedback is more challenging. To the best of our knowledge, there has not been any research done to summarize general student feedback.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "The first step in general feedback summarization is opinion target (aspect) extraction. Opinion target (aspect) is an entity that respondents have raised their opinions about.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Aspect extraction has been studied by many researchers in the domain of sentiment analysis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "There are two main approaches: supervised and unsupervised.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "When supervised approach has been used for opinion target extraction [5] , the sequence labeling scheme known as BIO labeling has been commonly used. However, this research is limited to extract only course names and instructor names as entities.", |
|
"cite_spans": [ |
|
{ |
|
"start": 69, |
|
"end": 72, |
|
"text": "[5]", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "The system [7] that we consider as the baseline for our work also has used a BIO labeling scheme for candidate phrase extraction. A Conditional Random Fields (CRF) [6] classifier is used as the sequence labeler. Since their dataset is responses to reflective prompts, they extract noun phrases as candidate phrases. Double Propagation method [7] is an unsupervised approach to solve opinion target extraction.", |
|
"cite_spans": [ |
|
{ |
|
"start": 11, |
|
"end": 14, |
|
"text": "[7]", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 164, |
|
"end": 167, |
|
"text": "[6]", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 342, |
|
"end": 345, |
|
"text": "[7]", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "The basic idea of this approach is to extract opinion words (or targets) iteratively using known and extracted (in previous iterations) opinion words and targets through the identification of syntactic relations. At the beginning, opinion word is given as a seed word. Thus, it can also be viewed as a semi-supervised method. Improvement of this method has been proposed by Luo These two approaches hold the promise for the task of extracting opinion targets from student responses for small data sets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "In a student feedback summarization task, the first thing is to identify the entities or aspects students have raised their opinions about. Although currently there are datasets containing student feedback collected by asking them a specific reflection prompt (question), a reasonable sized feedback set that contains feedback about almost every aspect of a course is missing. In this work, we created a new dataset in order to fulfill this purpose.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Our data consists of student responses collected from an undergraduate Computer Science and Engineering Course. General responses were collected from 27 Lectures and Workshops .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "They contain 956 student responses in total with 4428 sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "The prompts we used to collect responses were general prompts. Therefore, student had the freedom to write regarding any aspect of the lecture .In addition, there was no sentence limitation for providing feedback. Here the student expresses his opinions about \"lecture slides\": positive opinion for uploading them every week and negative opinion for not uploading it on Sunday.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "In our work, we used our own way of annotating student feedback.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "That is mainly because of the nature of the data. Data used in previous work [1] [3] [4] only had opinion targets in them whereas the positive / negative expressions were in the prompt itself.", |
|
"cite_spans": [ |
|
{ |
|
"start": 81, |
|
"end": 84, |
|
"text": "[3]", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "The following cases were identified in responses, which contain both opinion targets and positive/ negative expressions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "In the dataset, many different types of opinion targets and opinion expressons were found.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "\u2022 Multi word opinion targets Ex: -I think time and weight for documentation of the project is too much.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Opinion target is \"time and weight for documentation of the project\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "\u2022 Single target, single opinion expression.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Ex: -Lectures were really good.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "\u2022 Single target, multiple opinion expressions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Ex: -Overall lecture session was great, well organized and very helpful.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Here the target \"Overall lecture session\" has three positive opinion expressions towards it.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "\u2022 Single opinion multiple opinion targets Ex: -Keeping interactions with students, asking questions, giving in-class activities and discussing them within the class were greatly helpful for me to develop my oop skills.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "A positive opinion is expressed here for all the following aspects/ targets of the lecture: \"Keeping interactions\", \"asking questions\", \"giving in class activities\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "\u2022 Ambiguity about which opinion target to take E.g.: -Both lecturers did a great job on delivering the subject matter.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Here, two aspects can be identified: \"Both lecturers\" and \"delivering the subject matter\". It is difficult to find on which target the opinion is focused on.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "We manually annotated 20 feedback files out of 27 using this method. This annotation scheme first identifies sentences or phrases with opinions and then marks the opinion target.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Since we annotated both the target and the opinion towards the target, we had to use unique BIO tags for both target and the opinion expression. Therefore, we used B-T The CRF labeler is trained using the training data set containing 956 responses.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3." |
|
}, |
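
{

"text": "As an illustration of this tagging scheme, the following minimal Python sketch (our own illustration, not part of the original annotation tooling; only the tag names come from the scheme above) converts token-level span annotations into BIO tags:\n\ndef to_bio(tokens, spans):\n    # spans: (start, end_exclusive, label) over token indices, where label is\n    # 'T' (target), 'PO' (positive opinion) or 'NO' (negative opinion)\n    tags = ['O'] * len(tokens)\n    for start, end, label in spans:\n        tags[start] = 'B-' + label\n        for i in range(start + 1, end):\n            tags[i] = 'I-' + label\n    return list(zip(tokens, tags))\n\n# \"Lectures were really good\" -> Lectures/B-T were/O really/O good/B-PO\nprint(to_bio(['Lectures', 'were', 'really', 'good'], [(0, 1, 'T'), (3, 4, 'PO')]))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Data",

"sec_num": null

},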
|
{ |
|
"text": "As the baseline features we use the features used by Luo et al. [1] . These features are based on sentence syntactic structure and word importance to signal the likelihood of a word being included in the target.", |
|
"cite_spans": [ |
|
{ |
|
"start": 53, |
|
"end": 67, |
|
"text": "Luo et al. [1]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 Word trigram within a 5-word window ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Local Features", |
|
"sec_num": null |
|
}, |
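
{

"text": "The sketch below reconstructs the windowed trigram feature in Python (the function name and feature encoding are our own assumptions; the paper only states that word trigrams within a 5-word window are used):\n\ndef trigram_window_features(tokens, i, window=5):\n    # Word trigram features within a window centred on token i\n    half = window // 2\n    lo, hi = max(0, i - half), min(len(tokens), i + half + 1)\n    feats = {}\n    for j in range(lo, hi - 2):\n        feats['tri_%d' % (j - i)] = ' '.join(tokens[j:j + 3])\n    return feats\n\nprint(trigram_window_features(['The', 'lecture', 'slides', 'were', 'good'], 2))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Local Features",

"sec_num": null

},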
|
{ |
|
"text": "We increased the accuracy of target extraction by adding following features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "New Features", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "These features check whether the word is a capital letter, whether the first character is a capital letter, whether all characters are capital letters, whether the word is a punctuation mark, whether all characters are punctuation marks, whether the word contains punctuation marks, whether the word is a number, and whether all characters are numbers. These features are applied as unigram features within a 3-word window.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Capitals, Punctuation marks and Numbers (CPN)", |
|
"sec_num": "4.2.1" |
|
}, |
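
{

"text": "A minimal sketch of these checks in Python (the feature names are our own; the checks follow the list above, and we read \"is a number\" as allowing a decimal point, which is an assumption):\n\nimport string\n\ndef cpn_features(word):\n    # Capitals, Punctuation marks and Numbers (CPN) checks for a single word\n    return {\n        'has_capital': any(c.isupper() for c in word),\n        'first_capital': word[:1].isupper(),\n        'all_capital': word.isupper(),\n        'is_punct': word in string.punctuation,\n        'all_punct': all(c in string.punctuation for c in word),\n        'has_punct': any(c in string.punctuation for c in word),\n        'is_number': word.replace('.', '', 1).isdigit(),\n        'all_digits': all(c.isdigit() for c in word),\n    }\n\nprint(cpn_features('Moodle'))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Capitals, Punctuation marks and Numbers (CPN)",

"sec_num": null

},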
|
{ |
|
"text": "Previous research [11] [12] [13] has shown that utilization of unlabeled data can improve the quality of the Named Entity Recognition, which also used a CRF classifier. Therefore, we tried out following word embedding features to improve the target extraction process.", |
|
"cite_spans": [ |
|
{ |
|
"start": 18, |
|
"end": 22, |
|
"text": "[11]", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 28, |
|
"end": 32, |
|
"text": "[13]", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word Embedding Features", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "Brown's algorithm is a hierarchical clustering algorithm that clusters words that have a higher mutual information of bigrams [14] . We created Brown clusters using the given corpus and some other un-annotated feedback data (this set contains 3970 sentences, which were collected in 37 workshops). The output of the algorithm is a dendrogram. A path from the root of the dendrogram represents a word and can be encoded with a bit sequence. We used the prefix of the bit sequence as a feature. We used the first 5, 7, 11 bits as three features.", |
|
"cite_spans": [ |
|
{ |
|
"start": 126, |
|
"end": 130, |
|
"text": "[14]", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Brown Clusters", |
|
"sec_num": "4.2.2.1" |
|
}, |
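
{

"text": "A sketch of the prefix features in Python (we assume a precomputed mapping from words to their Brown-cluster bit strings, e.g. from a standard Brown clustering tool; the names are ours):\n\ndef brown_prefix_features(word, brown_paths, lengths=(5, 7, 11)):\n    # brown_paths: dict mapping a word to its bit-string path in the dendrogram\n    path = brown_paths.get(word, '')\n    return {'brown_%d' % n: path[:n] for n in lengths}\n\nprint(brown_prefix_features('lecture', {'lecture': '110100101110'}))\n# -> {'brown_5': '11010', 'brown_7': '1101001', 'brown_11': '11010010111'}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Brown Clusters",

"sec_num": null

},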
|
{ |
|
"text": "Those numbers were discovered by trying different numbers on the same data set. The combination of above numbers gave the best output.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Brown Clusters", |
|
"sec_num": "4.2.2.1" |
|
}, |
|
{ |
|
"text": "Clark's algorithm groups words that have similar context distribution and morphological clues starting with the most frequent words [15] . We created 100 clusters using the non-annotated corpus. Clark clusters were used as unigram, bi-gram, tri-gram and 4-gram features within a 9-word window. The window size was determined by trying different window sizes. 9-word window gave best results.", |
|
"cite_spans": [ |
|
{ |
|
"start": 132, |
|
"end": 136, |
|
"text": "[15]", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clark Clusters", |
|
"sec_num": "4.2.2.2" |
|
}, |
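
{

"text": "The windowed n-gram cluster features can be sketched as follows (Python; we assume each token has already been mapped to its Clark cluster id, and the function name is our own):\n\ndef clark_ngram_features(cluster_ids, i, window=9, max_n=4):\n    # Unigram to 4-gram cluster-id features within a 9-word window centred on i\n    half = window // 2\n    lo, hi = max(0, i - half), min(len(cluster_ids), i + half + 1)\n    feats = {}\n    for n in range(1, max_n + 1):\n        for j in range(lo, hi - n + 1):\n            feats['clark_%d_%d' % (n, j - i)] = '_'.join(str(c) for c in cluster_ids[j:j + n])\n    return feats\n\nprint(clark_ngram_features([4, 17, 4, 92, 8], 2))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Clark Clusters",

"sec_num": null

},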
|
{ |
|
"text": "We trained a word to vector model [16] using the non-annotated data set, and used it to create 100 clusters using k-medoids algorithm. The output was used as a unigram feature within a one-word window.", |
|
"cite_spans": [ |
|
{ |
|
"start": 34, |
|
"end": 38, |
|
"text": "[16]", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word to vector feature clusters", |
|
"sec_num": "4.2.2.3" |
|
}, |
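
{

"text": "A sketch of this step (assuming the gensim and scikit-learn-extra packages, neither of which the paper names):\n\nfrom gensim.models import Word2Vec\nfrom sklearn_extra.cluster import KMedoids\n\n# Train word2vec on the unlabeled feedback (token lists), then cluster the\n# word vectors with k-medoids; the paper uses 100 clusters on its full corpus.\nsentences = [['lecture', 'slides', 'were', 'good'], ['more', 'examples', 'please']]\nw2v = Word2Vec(sentences, vector_size=50, min_count=1)\nkm = KMedoids(n_clusters=2).fit(w2v.wv.vectors)  # toy corpus, so only 2 clusters\nword2cluster = {w: int(km.labels_[i]) for i, w in enumerate(w2v.wv.index_to_key)}\nprint(word2cluster)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Word to vector feature clusters",

"sec_num": null

},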
|
{ |
|
"text": "We first corrected the spelling mistakes in the dataset using the Bing Spell Check API [17].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "Then the data set was annotated according to above described annotation scheme. Annotated data was converted into BIO tags and was used to train the CRF classifier to extract targets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment", |
|
"sec_num": "5." |
|
}, |
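
{

"text": "A condensed sketch of this training step (we assume the sklearn-crfsuite package, since the paper does not name its CRF implementation, and the feature set here is deliberately reduced):\n\nimport sklearn_crfsuite\n\ndef word_features(sent, i):\n    # A few simple unigram features; the full system adds the trigram,\n    # POS/chunk, stop-word, CPN and cluster features described above.\n    word = sent[i]\n    return {'word.lower': word.lower(), 'word.istitle': word.istitle(),\n            'word.isupper': word.isupper(), 'word.isdigit': word.isdigit()}\n\nsentences = [['Lectures', 'were', 'really', 'good']]\nbio_tags = [['B-T', 'O', 'O', 'B-PO']]\nX = [[word_features(s, i) for i in range(len(s))] for s in sentences]\ncrf = sklearn_crfsuite.CRF(algorithm='lbfgs', max_iterations=100)\ncrf.fit(X, bio_tags)\nprint(crf.predict(X))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiment",

"sec_num": null

},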
|
{ |
|
"text": "Here CRF is used because our dataset is small and because of that the deep learning techniques cannot be applied on our dataset. Accuracy of the CRF classifier was measured using 10-fold cross validation. [18] . Among the word embedding features, Clark clusters has improved the results, but even its improvement is considerably less compared to CPN. Stemmed word feature is also like clustering. For an example, both \"Lecture\" and \"Lectures\" will be clustered in to their stemmed word \"lecture\". It has a less improvement in F-score compared to Clark clusters but it has improved precision considerably.", |
|
"cite_spans": [ |
|
{ |
|
"start": 205, |
|
"end": 209, |
|
"text": "[18]", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment", |
|
"sec_num": "5." |
|
}, |
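
{

"text": "The exact-match scoring used in our evaluation can be sketched as follows (Python; spans are represented as hashable tuples, and the function is our own illustration):\n\ndef exact_match_prf(gold_spans, pred_spans):\n    # A predicted target counts as correct only if it matches a gold span\n    # exactly; partial overlaps count as false negatives and false positives\n    gold, pred = set(gold_spans), set(pred_spans)\n    tp = len(gold & pred)\n    p = tp / len(pred) if pred else 0.0\n    r = tp / len(gold) if gold else 0.0\n    f1 = 2 * p * r / (p + r) if p + r else 0.0\n    return p, r, f1\n\nprint(exact_match_prf({(0, 2), (5, 7)}, {(0, 2), (5, 6)}))  # -> (0.5, 0.5, 0.5)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiment",

"sec_num": null

},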
|
{ |
|
"text": "In both precision and recall wise, the maximum accuracy came by combining all features but only for the recall, maximum accuracy was obtained by baseline and stemmed word feature added, but it has a relatively lower recall. The evaluation of the result shows that adding more features increases the recall.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "In this work, we have focused on opinion target extraction task of the general student feedback, which is the first sub task of summarizing student feedback. We used a CRF classifier to address this information extraction task. As the baseline, we used the supervised approach used by Luo et al. [1] . Experimental results show that our method yields better opinion targets extraction performance than this previous work [1], which is done on reflective prompts feedback.", |
|
"cite_spans": [ |
|
{ |
|
"start": 285, |
|
"end": 299, |
|
"text": "Luo et al. [1]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "Future work includes the other two subtasks of student feedback summarization process, which are clustering the extracted opinion targets using a suitable clustering algorithm, and identifying the student's sentiment towards the opinion target. [2] P. Liu, S. Joty, and H. Meng, \"Fine-grained Opinion Mining with Recurrent Neural", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6." |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Proc. 2015 Conf. Empir. Methods Nat. Lang. Process", |
|
"authors": [ |
|
{ |
|
"first": "Word", |
|
"middle": [], |
|
"last": "Networks", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Embeddings", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1433--1443", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Networks and Word Embeddings,\" Proc. 2015 Conf. Empir. Methods Nat. Lang. Process., no. September, pp. 1433-1443, 2015.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Automatic Summarization of Student Course Feedback", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Luo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Litman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "North Am. Chapter Assoc. Comput. Linguist", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "80--85", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "W. Luo, F. Liu, Z. Liu, and D. Litman, \"Automatic Summarization of Student Course Feedback,\" North Am. Chapter Assoc. Comput. Linguist., no. Duc 2004, pp. 80-85, 2016.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Summarizing Student Responses to Reflection Prompts", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Luo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Litman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1955--1960", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "W. Luo and D. Litman, \"Summarizing Student Responses to Reflection Prompts,\" pp. 1955-1960, 2015.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Targeted Sentiment to Understand Student Comments", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Welch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Mihalcea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Street", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Arbor", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proc. 26th Int. Conf. Comput. Linguist", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2471--2481", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. Welch, R. Mihalcea, H. Street, and A. Arbor, \"Targeted Sentiment to Understand Student Comments,\" Proc. 26th Int. Conf. Comput. Linguist., no. 1, pp. 2471-2481, 2016.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Generation of T-cell receptor retrogenic mice", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Holst", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Szymczak-Workman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Vignali", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Burton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Workman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"A A" |
|
], |
|
"last": "Vignali", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Nat. Protoc", |
|
"volume": "1", |
|
"issue": "1", |
|
"pages": "406--417", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Holst, A. L. Szymczak-Workman, K. M. Vignali, A. R. Burton, C. J. Workman, and D. A. A. Vignali, \"Generation of T-cell receptor retrogenic mice,\" Nat. Protoc., vol. 1, no. 1, pp. 406-417, 2006.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Opinion Word Expansion and Target Extraction through Double Propagation", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Qiu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Bu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Comput. Linguist", |
|
"volume": "37", |
|
"issue": "1", |
|
"pages": "9--27", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "G. Qiu, B. Liu, J. Bu, and C. Chen, \"Opinion Word Expansion and Target Extraction through Double Propagation,\" Comput. Linguist., vol. 37, no. 1, pp. 9-27, 2011.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Fine-grained named entity recognition and relation extraction for question answering", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y.-G", |
|
"middle": [], |
|
"last": "Hwang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M.-G", |
|
"middle": [], |
|
"last": "Jang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval -SIGIR '07", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. Lee, Y.-G. Hwang, and M.-G. Jang, \"Fine-grained named entity recognition and relation extraction for question answering,\" in Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval -SIGIR '07, 2007, p. 799.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Biomedical named entity recognition using conditional random fields and rich feature sets", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Settles", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "JNLPBA '04 Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "104--107", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "B. Settles, \"Biomedical named entity recognition using conditional random fields and rich feature sets,\" in JNLPBA '04 Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications, 2004, pp. 104-107.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Early results for named entity recognition with conditional random fields", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proc. CoNLL-2003", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "188--191", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "a McCallum and W. Li, \"Early results for named entity recognition with conditional random fields,\" Proc. CoNLL-2003, pp. 188-191, 2003.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "A Framework for Learning Predictive Structures from Multiple Tasks and Unlabeled Data", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Ando", |
|
"suffix": "" |
|
}, |
|
{

"first": "T",

"middle": [],

"last": "Zhang",

"suffix": ""

}
|
], |
|
"year": 2005, |
|
"venue": "J. Mach. Learn. Res", |
|
"volume": "6", |
|
"issue": "", |
|
"pages": "1817--1853", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. K. Ando and Z. (Yahoo R. Tong, \"A Framework for Learning Predictive Structures from Multiple Tasks and Unlabeled Data,\" J. Mach. Learn. Res., vol. 6, pp. 1817-1853, 2005.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Word Representations: A Simple and General Method for Semi-supervised Learning", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Turian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Ratinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Turian", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proc. 48th Annu", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "384--394", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Turian, L. Ratinov, Y. Bengio, and J. Turian, \"Word Representations: A Simple and General Method for Semi-supervised Learning,\" Proc. 48th Annu. Meet. Assoc. Comput. Linguist., no. July, pp. 384-394, 2010.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "An empirical study of semi-supervised structured conditional models for dependency parsing", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Suzuki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Isozaki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Carreras", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Conf. Empir. Methods Nat. Lang. Process", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "551--560", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Suzuki, H. Isozaki, X. Carreras, and M. Collins, \"An empirical study of semi-supervised structured conditional models for dependency parsing,\" Conf. Empir. Methods Nat. Lang. Process., pp. 551-560, 2009.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Class-Based n-gram Models of Natural Language", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Desouza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Mercer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Della Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Lai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Comput. Linguist", |
|
"volume": "18", |
|
"issue": "", |
|
"pages": "467--479", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P. F. Brown, P. V DeSouza, R. L. Mercer, V. J. Della Pietra, and J. C. Lai, \"Class-Based n-gram Models of Natural Language,\" Comput. Linguist., vol. 18, pp. 467-479, 1992.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Combining distributional and morphological information for part of speech induction", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proc. tenth Conf. Eur. chapter Assoc. Comput. Linguist. -EACL '03", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Clark, \"Combining distributional and morphological information for part of speech induction,\" Proc. tenth Conf. Eur. chapter Assoc. Comput. Linguist. -EACL '03, vol. 1, p. 59, 2003.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Efficient Estimation of Word Representations in Vector Space", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--12", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "T. Mikolov, K. Chen, G. Corrado, and J. Dean, \"Efficient Estimation of Word Representations in Vector Space,\" pp. 1-12, 2013.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Microsoft Cognitive Services-Bing Spell Check API | Microsoft Azure", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Microsoft Cognitive Services-Bing Spell Check API | Microsoft Azure.\" .", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Aspect extraction for opinion mining with a deep convolutional neural network", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Poria", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Cambria", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Gelbukh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Knowledge-Based Syst", |
|
"volume": "108", |
|
"issue": "", |
|
"pages": "42--49", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Poria, E. Cambria, and A. Gelbukh, \"Aspect extraction for opinion mining with a deep convolutional neural network,\" Knowledge-Based Syst., vol. 108, pp. 42-49, 2016.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "et. al [1]." |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "This feedback consists of many opinionated responses. Each of those responses focuses their opinion towards a target entity, which is called an opinion target. Some opinion targets have both positive and negative opinions towards them. For example, consider the following sentence. \"The lecture slides were uploaded to Moodle every week and I think it would have been much better if you could upload them on Sunday\"." |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Beginning-Target) for the beginning of the Target, I-T (Inside-Target) for the inside of the target, B-PO (Beginning-Positive Opinion) for the beginning of the positive opinion expression, I-PO (Inside-Positive Opinion) for the inside of the positive opinion expression, B-NO (Beginning-Negative Opinion) for the beginning of the negative opinion expression, I-NO (Inside-Negative Opinion) for the inside of the negative expression, and O for the outside words that are not annotated.For example, consider the sentence \"Lectures were really good\". This sentence was annotated as shown below:\u2022 Lectures/B-T were/O really/O good/B-PO4. Aspect extractionFor the task of classification, we choose to use a Conditional Random Fields (CRF) classifier[6]. CRFs are a class of statistical modeling methods often applied in pattern recognition and machine learning and used for structured prediction. CRFs fall into the sequence modeling family. Whereas a discrete classifier predicts a label for a single sample without considering \"neighboring\" samples, a CRF can take context into account; e.g., the linear chain CRF (which is popular in natural language processing) predicts sequences of labels for sequences of input samples. This has been used for many other sequence labeling tasks such as Named Entity Recognition (NER) as well[8] [9][10]." |
|
}, |
|
"FIGREF3": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Part-of-Speech tag trigram within a 5-word window \u2022 Chunk tag trigram within a 5-word window \u2022 Whether the word is in the prompt \u2022 Whether the word is a stop word Global Features \u2022 Total number of word occurrences (stemmed) \u2022 Rank of the word's term frequency These local and global features are used for supervised target extraction. Local features are extracted from one student's response. Global features are extracted using all student responses in one lecture." |
|
}, |
|
"FIGREF4": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "W. Luo, F. Liu, and D. Litman, \"An Improved Phrase-based Approach to Annotating and Summarizing Student Course Responses,\" Proc. 26th Int. Conf. Comput. Linguist., pp. 53-63, 2016." |
|
}, |
|
"TABREF0": { |
|
"content": "<table><tr><td/><td>Table 1. Results</td><td/><td/></tr><tr><td>Features</td><td>Precision</td><td>Recall</td><td>F1</td></tr><tr><td>Baseline</td><td>0.76923</td><td>0.60437</td><td>0.67690</td></tr><tr><td>Baseline + CPN</td><td>0.76081</td><td>0.66788</td><td>0.71132</td></tr><tr><td>Baseline + Brown</td><td>0.74648</td><td>0.63174</td><td>0.68434</td></tr><tr><td>Baseline + Clark</td><td>0.79733</td><td>0.62991</td><td>0.70380</td></tr><tr><td>Baseline + Stemmed Word</td><td>0.80348</td><td>0.61161</td><td>0.69454</td></tr><tr><td>Baseline + Word2Vec K-medoids</td><td>0.76627</td><td>0.61939</td><td>0.68504</td></tr><tr><td>All</td><td>0.79566</td><td>0.67154</td><td>0.72835</td></tr></table>", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"text": "shows experiment results.When considering the precision and recall, only exact matches were considered. Partial matches were considered as false negatives. CPN has improved the result considerably compared to other features. One of the major reasons could be usage of capital letters in feedback. Some of the mentioned entities did appear at the beginning of the sentence. Further, many targets are named entities, and there is a high probability for them to appear in title case. CPN is much sensitive to title case because it matches whether word contains a capital letter.Brown clusters, Clark clusters and Word2Vec K-medoids are word embedding features. They provide a cluster representation on words depending on their relative meanings. May be the dataset size being small can be a reason to obtain lower results by Word2Vec K-medoids feature, given that in previous work Word2Vec models were trained on a much larger dataset" |
|
} |
|
} |
|
} |
|
} |