{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:18:24.041668Z"
},
"title": "ECNU at SemEval-2020 Task 7: Assessing Humor in Edited News Headlines Using BiLSTM with Attention",
"authors": [
{
"first": "Tiantian",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "East China Normal University",
"location": {
"settlement": "Shanghai",
"country": "P.R.China"
}
},
"email": ""
},
{
"first": "Zhixuan",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "East China Normal University",
"location": {
"settlement": "Shanghai",
"country": "P.R.China"
}
},
"email": ""
},
{
"first": "Man",
"middle": [],
"last": "Lan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "East China Normal University",
"location": {
"settlement": "Shanghai",
"country": "P.R.China"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper we describe our system submitted to SemEval 2020 Task 7: \"Assessing Humor in Edited News Headlines\". We participated in all subtasks, in which the main goal is to predict the mean funniness of the edited headline given the original and the edited headline. Our system involves two similar sub-networks, which generate vector representations for the original and edited headlines respectively. And then we do a subtract operation of the outputs from two sub-networks to predict the funniness of the edited headline.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper we describe our system submitted to SemEval 2020 Task 7: \"Assessing Humor in Edited News Headlines\". We participated in all subtasks, in which the main goal is to predict the mean funniness of the edited headline given the original and the edited headline. Our system involves two similar sub-networks, which generate vector representations for the original and edited headlines respectively. And then we do a subtract operation of the outputs from two sub-networks to predict the funniness of the edited headline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Humor can be defined as the aspiration of provoking laughter and provides amusement from expressions intended (Bertero and Fung, 2016) . The task of humor recognition refers to determining whether a sentence in a given context contains some level of humorous content.",
"cite_spans": [
{
"start": 110,
"end": 134,
"text": "(Bertero and Fung, 2016)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The Semeval 2020 Task 7 (Hossain et al., 2020a) aims to automatically computes the funniness of edited news headlines which are generated using an insertion of a single-word noun or verb to replace an existing entity or single-word noun or verb in original headline (Hossain et al., 2019) . There are two sub-tasks in Task 7. The sub-task 1 is to predict the mean funniness of the edited headline given the original and edited headline. The sub-task 2 is based on sub-task 1, which aims to determine which version of edits makes the headline more humorous given the original headline and two edited versions.",
"cite_spans": [
{
"start": 24,
"end": 47,
"text": "(Hossain et al., 2020a)",
"ref_id": "BIBREF8"
},
{
"start": 266,
"end": 288,
"text": "(Hossain et al., 2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In prior studies, humor recognition has been approached as a binary classification problem. Traditional classification algorithms like SVM and Naive Bayes (Mihalcea and Strapparava, 2005) , and a deep learning CNN architecture (Chen and Lee, 2017) are adopted to distinguish between humorous and non-humorous texts. However, humor is not just a binary concept and it occurs in various intensities. In addition, in the past, the research objective for humor recognition is a sentence or text. However, it is interesting to study how short edits applied to a text can turn it from non-funny to funny, which can help us focus on the humorous effects of atomic changes and pointing out the key difference between non-humorous and humorous text.",
"cite_spans": [
{
"start": 155,
"end": 187,
"text": "(Mihalcea and Strapparava, 2005)",
"ref_id": "BIBREF10"
},
{
"start": 227,
"end": 247,
"text": "(Chen and Lee, 2017)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present our funniness prediction system which is mainly based on bidirectional LSTM (Hochreiter and Schmidhuber, 1997) neural networks with attention mechanism (Bahdanau et al., 2014) . Besides, we show some features related to humor and then analyze the effectiveness of different configurations of our system.",
"cite_spans": [
{
"start": 102,
"end": 136,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF6"
},
{
"start": 178,
"end": 201,
"text": "(Bahdanau et al., 2014)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We develop a framework including two similar sub-networks whose inputs are the sequence of tokens in the original headline and edited headline respectively. Then two outputs of sub-networks and other text features are combined together to predict the mean funniness of the edited headline. Figure 1 represents the network structure of our overall model, with two similar sub-networks displayed in two sides. The highlighted word, in original headline is the replaced word, and in edited headline is the replacement word. Specifically, each sub-network contains the token representation layer using pre-trained word embeddings, BiLSTM layer and attention layer. We apply BiLSTM to obtain contextual token representations. We also use attention mechanism in order to get the headline representations.",
"cite_spans": [],
"ref_spans": [
{
"start": 290,
"end": 298,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "System Description",
"sec_num": "2"
},
{
"text": "The input to each sub-network is a news headline, treated as a sequence of tokens. We use a token representation layer to project the headline W = (w 1 , w 2 , ..., w t ) to a low-dimensional vector space R E , where E is the size of the representation layer and t is the number of tokens in the headline. By projection, the sequence of tokens can be represented as X = (x 1 , x 2 , ..., x t ). We obtain token representations using GloVe word vectors (Pennington et al., 2014) and BERT pre-trained word embeddings (Devlin et al., 2018) . Here, we just use BERT as a feature extraction model to extract token features. ",
"cite_spans": [
{
"start": 452,
"end": 477,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF12"
},
{
"start": 515,
"end": 536,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-trained Word Embeddings",
"sec_num": "2.1"
},
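The paper uses BERT purely as a frozen feature extractor rather than fine-tuning it. The sketch below illustrates one way to obtain such contextual token features; the HuggingFace transformers library, the bert-base-uncased checkpoint and the helper name extract_bert_features are assumptions made for illustration, not details given by the authors.

```python
# Illustrative sketch (not the authors' code): extracting frozen BERT token
# features for a headline with HuggingFace transformers and PyTorch.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
bert.eval()  # feature extraction only; BERT weights are not updated

def extract_bert_features(headline: str) -> torch.Tensor:
    """Return a (num_tokens, hidden_size) matrix of contextual token vectors."""
    inputs = tokenizer(headline, return_tensors="pt")
    with torch.no_grad():
        outputs = bert(**inputs)
    return outputs.last_hidden_state.squeeze(0)

features = extract_bert_features("President vows to cut taxes")
print(features.shape)  # (number of wordpiece tokens incl. [CLS]/[SEP], 768)
```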
{
"text": "We use a BiLSTM over a sequence of tokens to obtain token representations",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BiLSTM Layer",
"sec_num": "2.2"
},
{
"text": "H = (h 1 , h 2 , ..., h t ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BiLSTM Layer",
"sec_num": "2.2"
},
{
"text": "As shown in figure 1, a BiLSTM encodes the sequence twice, once forward and once backward. A forward LSTM processes the sequence from x 1 to x t , while a backward LSTM processes from x t to x 1 . For word",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BiLSTM Layer",
"sec_num": "2.2"
},
{
"text": "x i , a forward LSTM and backward LSTM produce the token representation as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BiLSTM Layer",
"sec_num": "2.2"
},
{
"text": "\u2192 h i and \u2190 h i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BiLSTM Layer",
"sec_num": "2.2"
},
{
"text": "Finally, the overall output h i is calculated as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BiLSTM Layer",
"sec_num": "2.2"
},
{
"text": "h i = \u2192 h i \u2295 \u2190 h i (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BiLSTM Layer",
"sec_num": "2.2"
},
{
"text": "where \u2295 denotes the concatenation operation. Particularly, h i \u2208 R 2L , L is the size of LSTM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BiLSTM Layer",
"sec_num": "2.2"
},
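A minimal PyTorch-style sketch of the BiLSTM layer described above, in which each token representation is the concatenation of the forward and backward hidden states and therefore has dimension 2L, as in Eq. (1). The framework choice and the concrete sizes are assumptions made here for illustration.

```python
# Illustrative BiLSTM sketch (framework and sizes are assumptions).
import torch
import torch.nn as nn

E, L = 768, 128  # token embedding size and LSTM hidden size

# bidirectional=True runs a forward and a backward LSTM and concatenates
# their hidden states, so each output h_i lies in R^{2L} as in Eq. (1).
bilstm = nn.LSTM(input_size=E, hidden_size=L, num_layers=2,
                 batch_first=True, bidirectional=True, dropout=0.5)

x = torch.randn(1, 12, E)   # one headline of t = 12 token embeddings
h, _ = bilstm(x)
print(h.shape)              # torch.Size([1, 12, 256]) = (batch, t, 2L)
```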
{
"text": "Because of the incongruity theory of humor (Morreall, 2016) , we suppose that the distance between the new replacement word and other words in the edited headline is further than that between the replaced word or entity and other words in the original headline. Attention mechanism (Bahdanau et al., 2014) can capture the relationship between two texts, including words, sentences, etc. Therefore we use attention in our model. The attention mechanism assigns a weight w i to each output h i of the BiLSTM layer except for the replaced word and replacement word. The hidden states are finally calculated to produce a hidden sentence feature vector r by a weighted sum function, as indicated in figure 1 by arrows. Formally:",
"cite_spans": [
{
"start": 43,
"end": 59,
"text": "(Morreall, 2016)",
"ref_id": "BIBREF11"
},
{
"start": 282,
"end": 305,
"text": "(Bahdanau et al., 2014)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Attention Layer",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e i = tanh(W h h i + b h ) (2) w i = exp(e i ) t j=1 exp(e j )",
"eq_num": "(3)"
}
],
"section": "Attention Layer",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "r = t i=1 w i h i , r \u2208 R 2L",
"eq_num": "(4)"
}
],
"section": "Attention Layer",
"sec_num": "2.3"
},
{
"text": "The parameters W h and b h above are the weight and bias from the attention layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention Layer",
"sec_num": "2.3"
},
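A minimal sketch of the attention layer defined by Equations (2)-(4): a tanh projection scores each hidden state, a softmax normalizes the scores, and a weighted sum produces the sentence vector r. The PyTorch module form and the optional mask used to exclude the replaced/replacement word are illustrative assumptions, not the authors' code.

```python
# Illustrative attention layer implementing Eqs. (2)-(4).
import torch
import torch.nn as nn

class Attention(nn.Module):
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, 1)  # holds W_h and b_h

    def forward(self, h: torch.Tensor, mask: torch.Tensor = None) -> torch.Tensor:
        # h: (batch, t, 2L); mask: (batch, t), 0 at positions to exclude
        e = torch.tanh(self.proj(h)).squeeze(-1)          # Eq. (2)
        if mask is not None:
            e = e.masked_fill(mask == 0, float("-inf"))   # skip the edited word
        w = torch.softmax(e, dim=-1)                      # Eq. (3)
        return torch.sum(w.unsqueeze(-1) * h, dim=1)      # Eq. (4): r in R^{2L}

attn = Attention(hidden_dim=256)
r = attn(torch.randn(1, 12, 256))                         # shape (1, 256)
```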
{
"text": "In this paper, we also design some hand-crafted features and they are directly added in output layer. Statistical features: By doing statistical analysis of replaced words and replacement words that generate humor on news headlines, we counted the occurrence of each replaced word and replacement word. We used the humor grade of an edited headline as its replaced word and replacement word's humor grade. And then we calculated the average, minimum and maximum humor grades of all replaced words and replacement words respectively. These three statistical data for the replaced words and replacement words respectively are used as our hand-crafted features, 6 feature numbers in total. As for a new replaced or replacement word in test dataset, we use features of the word most similar to current word by computing word similarity instead.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "2.4"
},
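A minimal sketch of how the six statistical features could be computed: group the training headlines by their replaced word and by their replacement word, then take the mean, minimum and maximum of the mean funniness grades. The pandas column names used here are assumptions about the data layout, not taken from the paper.

```python
# Illustrative computation of the six statistical features (assumed data layout).
import pandas as pd

train = pd.DataFrame({
    "replaced":    ["taxes", "wall", "taxes"],
    "replacement": ["hair", "cake", "puppies"],
    "meanGrade":   [2.4, 1.2, 1.8],
})

def grade_stats(df: pd.DataFrame, column: str) -> pd.DataFrame:
    """Average, minimum and maximum humor grade observed for each word."""
    return df.groupby(column)["meanGrade"].agg(["mean", "min", "max"])

replaced_stats = grade_stats(train, "replaced")        # 3 features per replaced word
replacement_stats = grade_stats(train, "replacement")  # 3 features per replacement word
print(replaced_stats.loc["taxes"])                     # mean 2.1, min 1.8, max 2.4
```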
{
"text": "Sentiment lexicon features: SenticNet 5 (Cambria et al., 2018) is used to calculate the sentiment polarity of words in headlines. The sum of the words' polarity in original headline and edited headline separately are used as two of our hand-crafted features.",
"cite_spans": [
{
"start": 40,
"end": 62,
"text": "(Cambria et al., 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "2.4"
},
{
"text": "Humor lexicon features: Humor Norm Lexion (Engelthaler and Hills, 2017) is used to calculate the humor grades of words in headlines. The sum of the words' humor grades in original headline and edited headline separately are used as two of our hand-crafted features.",
"cite_spans": [
{
"start": 42,
"end": 71,
"text": "(Engelthaler and Hills, 2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "2.4"
},
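Both lexicon feature types reduce to summing a per-word score over a headline, once for the original and once for the edited version. The sketch below uses a toy dictionary in place of SenticNet 5 or the humor norms lexicon; the values and the helper name are purely illustrative.

```python
# Illustrative lexicon-sum features; the lexicon values here are placeholders.
humor_norms = {"president": 1.5, "cut": 1.9, "taxes": 1.4, "hair": 2.8}

def lexicon_sum(headline: str, lexicon: dict) -> float:
    """Sum the lexicon score of every word in the headline (0.0 if unknown)."""
    return sum(lexicon.get(word.lower(), 0.0) for word in headline.split())

original_feature = lexicon_sum("President vows to cut taxes", humor_norms)  # 4.8
edited_feature = lexicon_sum("President vows to cut hair", humor_norms)     # 6.2
```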
{
"text": "For both the original headline and edited headline, their representation can be obtained by concatenating the output of attention layer and replaced word representation or the replacement word representation separately. Considering that the incongruity of the replaced word and the replacement word, we make a subtract operation for the representation of the original headline and the edited headline. Then the vector concatenating the output and extracted features above is fed to final fully-connected sigmoid layer which outputs a humor grade.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Output Layer",
"sec_num": "2.5"
},
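A minimal sketch of the output layer described above: the edited-headline representation minus the original-headline representation, concatenated with the hand-crafted features, is mapped by a fully-connected sigmoid layer to a humor grade. Scaling the sigmoid output to the 0-3 grade range is an assumption made here for illustration.

```python
# Illustrative output layer (not the authors' exact code).
import torch
import torch.nn as nn

class OutputLayer(nn.Module):
    def __init__(self, repr_dim: int, n_features: int):
        super().__init__()
        self.fc = nn.Linear(repr_dim + n_features, 1)

    def forward(self, orig_repr, edit_repr, features):
        diff = edit_repr - orig_repr              # incongruity between headlines
        z = torch.cat([diff, features], dim=-1)   # append hand-crafted features
        # sigmoid output scaled to the 0-3 funniness range (scaling is assumed)
        return 3.0 * torch.sigmoid(self.fc(z))

out = OutputLayer(repr_dim=512, n_features=10)
grade = out(torch.randn(1, 512), torch.randn(1, 512), torch.randn(1, 10))
```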
{
"text": "3 Experimental Setup",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Output Layer",
"sec_num": "2.5"
},
{
"text": "Experiments are conducted on Humicroedit dataset (Hossain et al., 2019) and additional training data collected from the FunLines competition (Hossain et al., 2020b) . We follow the standard data partition of Semeval 2020 Task 7.",
"cite_spans": [
{
"start": 49,
"end": 71,
"text": "(Hossain et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 141,
"end": 164,
"text": "(Hossain et al., 2020b)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.1"
},
{
"text": "Following the official evaluation criteria given in the competition, the Root Mean Squared Error (RMSE) is adopted for sub-task 1 to measure the predicted values and the ground truth mean funniness. The classification accuracy is adopted for sub-task 2. The definitions are as follow:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "RM SE = N i=1 (y i \u2212\u0177 i ) 2 N",
"eq_num": "(5)"
}
],
"section": "Evaluation metrics",
"sec_num": "3.2"
},
{
"text": "where\u0177 i and y i represent the predicted outputs and gold labels, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "3.2"
},
{
"text": "Accuracy = T N (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "3.2"
},
{
"text": "where N is the number of overall samples, and T denotes the correct number of predicted samples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "3.2"
},
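Equations (5) and (6) translate directly into code. Below is a minimal NumPy sketch for checking predictions against the gold mean funniness grades (sub-task 1) and the sub-task 2 labels.

```python
# Illustrative implementation of the evaluation metrics in Eqs. (5) and (6).
import numpy as np

def rmse(y_true, y_pred) -> float:
    """Root Mean Squared Error, Eq. (5)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def accuracy(y_true, y_pred) -> float:
    """Fraction of correctly predicted labels, Eq. (6)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(y_true == y_pred))

print(rmse([1.2, 0.4, 2.0], [1.0, 0.6, 1.5]))   # ~0.33
print(accuracy([1, 2, 1], [1, 2, 2]))            # ~0.67
```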
{
"text": "To predict the funniness of the edited headlines, we trained our model to minimize MSE loss with rmsprop optimizer. We applied BERT and GloVe pre-trained embeddings, two-layers BiLSTM with 128 hidden units and a dropout of 0.5 to the all BiLSTM layers. At the output layer, we tried two ways to predict the final funniness. One is to predict humor grade using sigmoid function and the other is to predict 4 values which represent the percentage of grade 0, 1, 2 and 3 scored by judges using softmax function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.3"
},
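A minimal sketch of the reported training setup, i.e. minimizing the MSE loss with the RMSprop optimizer. The tiny stand-in model, the learning rate and the random data below are placeholders used only to make the loop runnable; the real network is the two-sub-network model of Section 2.

```python
# Illustrative training loop: MSE loss with RMSprop, as reported in the paper.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 1), nn.Sigmoid())        # stand-in model
criterion = nn.MSELoss()
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-3)  # lr is an assumption

x = torch.randn(64, 10)       # placeholder inputs
y = torch.rand(64, 1) * 3.0   # placeholder funniness grades in [0, 3]

for epoch in range(5):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
```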
{
"text": "Since sub-task 2 is based on the predictions of sub-task 1 to a great extent, we only report the results of sub-task 1. As described in section 2.1, we trained the model based on GloVe embeddings and BERT pre-trained embeddings. Table 1 shows that the model based on BERT outperforms GloVe embeddings in dev set. This means that humor recognition benefits from token representations based on context, which accords with our cognition.",
"cite_spans": [],
"ref_spans": [
{
"start": 229,
"end": 236,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "Sub-Task 1 Baseline 0.57840 GloVe-based 0.54980 BERT-based 0.53929 Table 1 : Performance of baseline and our models on dev set. BERT-based means that our model is based on BERT pre-trained embeddings and a single neuron layer in output layer. GloVe-based means the model based on GloVe embeddings. Table 2 lists the effectiveness of attention mechanism and statistical features. The system performance drops by 0.015 when ablating attention mechnism in BERT-based model. This indicates that attention mechanism is very important to capture the relation between the replaced words, replacement words and headlines.",
"cite_spans": [],
"ref_spans": [
{
"start": 67,
"end": 74,
"text": "Table 1",
"ref_id": null
},
{
"start": 298,
"end": 305,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Models",
"sec_num": null
},
{
"text": "Likewise, statistical features make contribution, but we find that this feature type is limited to models and has little potential in performance improvements. Apart from this feature, we also try sentiment lexicon features and humor lexicon features. However, adding these features to the model has no real performance increase, so we don't report these results here. This is probably because the inappropriate representations of features and rich semantics and all sorts of ironies in news headlines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": null
},
{
"text": "Besides, if we use the percentage of grade 0, 1, 2 and 3 as our outputs, the performance is slightly better than predictions using one single output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": null
},
{
"text": "According to experimental results, we take the combination of the GloVe-based, BERT-based and BERT-based (4 output) model with statistical features as the ensemble model. The RMSE result in dev set of sub-task 1 is 0.52205 and the accuracy in sub-task 2 is 0.63078, which show that these three models are complementary in predicting funniness. And this ensemble model is used to predict the humor score in test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": null
},
{
"text": "Sub- Task Our system Rank 1 Rank 2 Sub-Task 1 0.52187 (6) 0.49725 (1) 0.50726 (2) Sub-Task 2 0.64384 (6) 0.67428 (1) 0.66058 (2) Table 3 : Performance of our system, top-ranked systems for sub-task 1, 2. The numbers in the brackets are the official rankings. Table 3 shows the official evaluation results of rank 1 and rank 2. Compared to other systems, our system has a lot of room for improvement, especially in identifying sarcasm.",
"cite_spans": [],
"ref_spans": [
{
"start": 5,
"end": 9,
"text": "Task",
"ref_id": null
},
{
"start": 129,
"end": 136,
"text": "Table 3",
"ref_id": null
},
{
"start": 259,
"end": 266,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Models",
"sec_num": null
},
{
"text": "In this paper, we present a deep learning model which contains two similar sub-networks. We design a two-layers BiLSTM with attention mechanism model, whose inputs are the original headlines and edited headlines. And then we predict the funniness of the edited headline by a subtract operation between the outputs of the two sub-networks mentioned above and concatenating the hand-crafted features. The experimental results show this method can assess the intensity of humor to some extent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "News headlines are short and concise, and humor often comes from phrases and common sense. In Semeval 2020 Task 7, there are all sorts of ironies in edited headlines, but no effective NLP tools can recognize them so far. In the future, we consider to introduce external knowledge to model headlines and improve the humor recognition performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A long short-term memory framework for predicting humor in dialogues",
"authors": [
{
"first": "Dario",
"middle": [],
"last": "Bertero",
"suffix": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "130--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dario Bertero and Pascale Fung. 2016. A long short-term memory framework for predicting humor in dialogues. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 130-135, San Diego, California, June. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Senticnet 5: Discovering conceptual primitives for sentiment analysis by means of context embeddings",
"authors": [
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Devamanyu",
"middle": [],
"last": "Hazarika",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Kwok",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik Cambria, Soujanya Poria, Devamanyu Hazarika, and Kenneth Kwok. 2018. Senticnet 5: Discovering concep- tual primitives for sentiment analysis by means of context embeddings. In AAAI.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Convolutional neural network for humor recognition",
"authors": [
{
"first": "Lei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chong",
"middle": [
"Min"
],
"last": "Lee",
"suffix": ""
}
],
"year": 2017,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lei Chen and Chong Min Lee. 2017. Convolutional neural network for humor recognition. ArXiv, abs/1702.02584.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirec- tional transformers for language understanding. cite arxiv:1810.04805.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Humor norms for 4,997 english words",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Engelthaler",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"T"
],
"last": "Hills",
"suffix": ""
}
],
"year": 2017,
"venue": "Behavior Research Methods",
"volume": "50",
"issue": "",
"pages": "1116--1124",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Engelthaler and Thomas T. Hills. 2017. Humor norms for 4,997 english words. Behavior Research Methods, 50:1116 -1124.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Comput",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735-1780, November.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "president vows to cut <taxes> hair\": Dataset and analysis of creative text editing for humorous headlines",
"authors": [
{
"first": "Nabil",
"middle": [],
"last": "Hossain",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Krumm",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Gamon",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "133--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nabil Hossain, John Krumm, and Michael Gamon. 2019. \"president vows to cut <taxes> hair\": Dataset and analysis of creative text editing for humorous headlines. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 133-142, Minneapolis, Minnesota, June. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Semeval-2020 Task 7: Assessing humor in edited news headlines",
"authors": [
{
"first": "Nabil",
"middle": [],
"last": "Hossain",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Krumm",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Gamon",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Kautz",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nabil Hossain, John Krumm, Michael Gamon, and Henry Kautz. 2020a. Semeval-2020 Task 7: Assessing humor in edited news headlines. In Proceedings of International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Stimulating creativity with FunLines: A case study of humor generation in headlines",
"authors": [
{
"first": "Nabil",
"middle": [],
"last": "Hossain",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Krumm",
"suffix": ""
},
{
"first": "Tanvir",
"middle": [],
"last": "Sajed",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Kautz",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "256--262",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nabil Hossain, John Krumm, Tanvir Sajed, and Henry Kautz. 2020b. Stimulating creativity with FunLines: A case study of humor generation in headlines. In Proceedings of the 58th Annual Meeting of the Association for Com- putational Linguistics: System Demonstrations, pages 256-262, Online, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Making computers laugh: Investigations in automatic humor recognition",
"authors": [
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Carlo",
"middle": [],
"last": "Strapparava",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT '05",
"volume": "",
"issue": "",
"pages": "531--538",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rada Mihalcea and Carlo Strapparava. 2005. Making computers laugh: Investigations in automatic humor recog- nition. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT '05, page 531-538, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Philosophy of humor",
"authors": [
{
"first": "John",
"middle": [],
"last": "Morreall",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Morreall. 2016. Philosophy of humor. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, winter 2016 edition.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representa- tion. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar, October. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Overall Model Structure",
"uris": null,
"num": null
},
"TABREF2": {
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null,
"text": "Performance of our models on dev set. -Attention ablates the attention mechnism. + Statistical features adds statistical features to the model. 4 output means 4-neurons layer in prediction layer."
}
}
}
}