{
"paper_id": "S17-2004",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:29:58.917302Z"
},
"title": "SemEval-2017 Task 6: #HashtagWars: Learning a Sense of Humor",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Potash",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Massachusetts Lowell",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Alexey",
"middle": [],
"last": "Romanov",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Massachusetts Lowell",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Massachusetts Lowell",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes a new shared task for humor understanding that attempts to eschew the ubiquitous binary approach to humor detection and focus on comparative humor ranking instead. The task is based on a new dataset of funny tweets posted in response to shared hashtags, collected from the 'Hashtag Wars' segment of the TV show @midnight. The results are evaluated in two subtasks that require the participants to generate either the correct pairwise comparisons of tweets (subtask A), or the correct ranking of the tweets (subtask B) in terms of how funny they are. 7 teams participated in subtask A, and 5 teams participated in subtask B. The best accuracy in subtask A was 0.675. The best (lowest) rank edit distance for subtask B was 0.872.",
"pdf_parse": {
"paper_id": "S17-2004",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes a new shared task for humor understanding that attempts to eschew the ubiquitous binary approach to humor detection and focus on comparative humor ranking instead. The task is based on a new dataset of funny tweets posted in response to shared hashtags, collected from the 'Hashtag Wars' segment of the TV show @midnight. The results are evaluated in two subtasks that require the participants to generate either the correct pairwise comparisons of tweets (subtask A), or the correct ranking of the tweets (subtask B) in terms of how funny they are. 7 teams participated in subtask A, and 5 teams participated in subtask B. The best accuracy in subtask A was 0.675. The best (lowest) rank edit distance for subtask B was 0.872.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Most work on humor detection approaches the problem as binary classification: humor or not humor. While this is a reasonable initial step, in practice humor is continuous, so we believe it is interesting to evaluate different degrees of humor, particularly as it relates to a given person's sense of humor. To further such research, we propose a dataset based on humorous responses submitted to a Comedy Central TV show, allowing for computational approaches to comparative humor ranking.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Debuting in Fall 2013, the Comedy Central show @midnight 1 is a late-night \"game-show\" that presents a modern outlook on current events by focusing on content from social media. The show's contestants (generally professional comedians or actors) are awarded points based on how funny their answers are. The segment of the show that best illustrates this attitude is the Hashtag Wars (HW). Every episode the show's host proposes a topic in the form of a hashtag, and the show's contestants must provide tweets that would have this hashtag. Viewers are encouraged to tweet their own responses. From the viewers' tweets, we are able to apply labels that determine how relatively humorous the show finds a given tweet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Because of the contest's format, it provides an adequate method for addressing the selection bias (Heckman, 1979) often present in machine learning techniques (Zadrozny, 2004) . Since each tweet is intended for the same hashtag, each tweet is effectively drawn from the same sample distribution. Consequently, tweets are seen not as humor/nonhumor, but rather varying degrees of wit and cleverness. Moreover, given the subjective nature of humor, labels in the dataset are only \"gold\" with respect to the show's sense of humor. This concept becomes more grounded when considering the use of supervised systems for the dataset.",
"cite_spans": [
{
"start": 98,
"end": 113,
"text": "(Heckman, 1979)",
"ref_id": "BIBREF6"
},
{
"start": 159,
"end": 175,
"text": "(Zadrozny, 2004)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The idea of the dataset is to learn to characterize the sense of humor represented in this show. Given a set of hashtags, the goal is to predict which tweets the show will find funnier within each hashtag. The degree of humor in a given tweet is determined by the labels provided by the show. We propose two subtasks to evaluate systems on the dataset. The first subtask is pairwise comparison: given two tweets, select the funnier tweet, and the pairs will be derived from the labels assigned by the show to individual tweets. The second subtask is to rank the the tweets based on the comparative labels provided by the show. This is a semiranking task because most labels are applied to more than one tweet. Seen as a classification task, the labels are comparative, because there is a notion of distance. We introduce a new edit distance-inspired metric for this subtask.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A number of different computational approaches to humor have been proposed within the last decade (Yang et al., 2015; Mihalcea and Strapparava, 2005; Zhang and Liu, 2014; Radev et al., 2015; Raz, 2012; Reyes et al., 2013; Barbieri and Saggion, 2014; Shahaf et al., 2015; Purandare and Litman, 2006; Kiddon and Brun, 2011) . In particular, Zhang and Liu (2014) ; Raz (2012) ; Reyes et al. (2013) ; Barbieri and Saggion (2014) focus on recognizing humor in Twitter. However, the majority of this work focuses on distinguishing humor from non-humor.",
"cite_spans": [
{
"start": 98,
"end": 117,
"text": "(Yang et al., 2015;",
"ref_id": "BIBREF18"
},
{
"start": 118,
"end": 149,
"text": "Mihalcea and Strapparava, 2005;",
"ref_id": "BIBREF10"
},
{
"start": 150,
"end": 170,
"text": "Zhang and Liu, 2014;",
"ref_id": "BIBREF20"
},
{
"start": 171,
"end": 190,
"text": "Radev et al., 2015;",
"ref_id": "BIBREF13"
},
{
"start": 191,
"end": 201,
"text": "Raz, 2012;",
"ref_id": "BIBREF14"
},
{
"start": 202,
"end": 221,
"text": "Reyes et al., 2013;",
"ref_id": "BIBREF15"
},
{
"start": 222,
"end": 249,
"text": "Barbieri and Saggion, 2014;",
"ref_id": "BIBREF0"
},
{
"start": 250,
"end": 270,
"text": "Shahaf et al., 2015;",
"ref_id": "BIBREF16"
},
{
"start": 271,
"end": 298,
"text": "Purandare and Litman, 2006;",
"ref_id": "BIBREF12"
},
{
"start": 299,
"end": 321,
"text": "Kiddon and Brun, 2011)",
"ref_id": "BIBREF7"
},
{
"start": 339,
"end": 359,
"text": "Zhang and Liu (2014)",
"ref_id": "BIBREF20"
},
{
"start": 362,
"end": 372,
"text": "Raz (2012)",
"ref_id": "BIBREF14"
},
{
"start": 375,
"end": 394,
"text": "Reyes et al. (2013)",
"ref_id": "BIBREF15"
},
{
"start": 397,
"end": 424,
"text": "Barbieri and Saggion (2014)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This representation has two shortcomings: (1) it ignores the continuous nature of humor, and (2) it does not take into account the subjectivity in humor perception. Regarding the first issue, we believe that shifting away from the binary approach to humor detection as done in the present task is a good pathway towards advancing this work. Regarding the second issue, consider a humour annotation task done by Shahaf et al. (2015) , in which the annotators looked at pairs of captions from the New Yorker Caption Content 2 , Shahaf et al. (2015) report that \"Only 35% of the unique pairs that were ranked by at least five people achieved 80% agreement...\" In contrast, the goal of the present task is to not to identify humour that is universal, but rather, to capture the specific sense of humour represented in the show. Mihalcea and Strapparava (2005) developed a humor dataset of puns and humorous one-liners intended for supervised learning. In order to generate negative examples for their experimental design, the authors used news titles from Reuters and the British National Corpus, as well as proverbs. Recently, Yang et al. (2015) used the same dataset for experimental purposes, taking text from AP News, New York Times, Yahoo! Answers, and proverbs as their negative examples. To further reduce the bias of their negative examples, the authors selected negative examples with a vocabulary that is in the dictionary created from the positive examples. Also, the authors forced the negative examples to have a similar text length compared to the positive examples.",
"cite_spans": [
{
"start": 411,
"end": 431,
"text": "Shahaf et al. (2015)",
"ref_id": "BIBREF16"
},
{
"start": 526,
"end": 546,
"text": "Shahaf et al. (2015)",
"ref_id": "BIBREF16"
},
{
"start": 824,
"end": 855,
"text": "Mihalcea and Strapparava (2005)",
"ref_id": "BIBREF10"
},
{
"start": 1124,
"end": 1142,
"text": "Yang et al. (2015)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Zhang and Liu (2014) constructed a dataset for recognizing humor in Twitter in two parts. First, the authors use the Twitter API with targeted user mentions and hashtags to produce a set of 1,500 humorous tweets. After manual inspections, 1,267 of the original 1,500 tweets were found to be humorous, of which 1,000 were randomly sampled as positive examples in the final dataset. Second, the authors collect negative examples by extracting 1,500 tweets from the Twitter Streaming API, manually checking for the presence of humor. Next, the authors combine these tweets with tweets from part one that were found to actually not contain humor. The authors argue this last step will partly assuage the selection bias of the negative examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In Reyes et al. (2013) the authors create a model to detect ironic tweets. To construct their dataset they collect tweets with the following hashtags: irony, humor, politics, and education. Therefore, a tweet is considered ironic solely because of the presence of the appropriate hashtag. Barbieri and Saggion (2014) also use this dataset for their work.",
"cite_spans": [
{
"start": 3,
"end": 22,
"text": "Reyes et al. (2013)",
"ref_id": "BIBREF15"
},
{
"start": 289,
"end": 316,
"text": "Barbieri and Saggion (2014)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Finally, recently researchers have developed a dataset similar to our HW dataset based on the New Yorker Caption Contest (NYCC) (Radev et al., 2015; Shahaf et al., 2015) . Whereas for the HW segment, viewers submit a tweet in response to a hashtag, for the NYCC readers submit humorous captions in response to a cartoon. It is important to note this key distinction between the two datasets, because we believe that the presence of the hashtag allows for further innovative NLP methodologies aside from solely analyzing the tweets themselves. In Radev et al. (2015) , the authors developed more than 15 unsupervised methods for ranking submissions for the NYCC. The methods can be categorized into broader categories such as originality and content-based.",
"cite_spans": [
{
"start": 128,
"end": 148,
"text": "(Radev et al., 2015;",
"ref_id": "BIBREF13"
},
{
"start": 149,
"end": 169,
"text": "Shahaf et al., 2015)",
"ref_id": "BIBREF16"
},
{
"start": 546,
"end": 565,
"text": "Radev et al. (2015)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Alternatively, Shahaf et al.(2015) approach the NYCC dataset with supervised models, evaluating on a pairwise comparison task, upon which we base our evaluation methodology. The features to represent a given caption fall in the general areas of Unusual Language, Sentiment, and Taking Expert Advice. For a single data point (which represents two captions), the authors concatenate the features of each individual caption, as well as encoding the difference between each caption's vector. The authors' best-performing system records a 69% accuracy on the pairwise evaluation task. Note that for this evaluation task, random baseline is 50%.",
"cite_spans": [
{
"start": 15,
"end": 34,
"text": "Shahaf et al.(2015)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
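The pairwise encoding described above (each caption's own features plus a difference vector) can be expressed in a few lines. This is only a sketch, not the authors' implementation; it assumes each caption has already been mapped to a fixed-length numeric feature vector, and the name `pair_features` is purely illustrative.

```python
import numpy as np

def pair_features(feat_a: np.ndarray, feat_b: np.ndarray) -> np.ndarray:
    """Represent a (caption_a, caption_b) pair by concatenating each
    caption's own feature vector with their element-wise difference,
    mirroring the pairwise encoding described above (a sketch)."""
    return np.concatenate([feat_a, feat_b, feat_a - feat_b])

# Hypothetical 3-dimensional feature vectors for two captions.
a = np.array([0.2, 1.0, 3.0])
b = np.array([0.5, 0.0, 2.0])
x = pair_features(a, b)  # 9-dimensional input for a pairwise classifier
```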
{
"text": "The following section describes our data collection process. First, when a new episode airs (which generally happens four nights a week), a new hashtag will be given. We wait until the following morning to use the public Twitter search API 3 to collect tweets that have been posted with the new hashtag. Generally, this returns 100-200 tweets. We wait until the following day to allow for as many tweets as possible to be submitted. The day of the ensuing episode (i.e. on a Monday for a hashtag that came out for a Thursday episode), @midnight creates a Tumblr post 4 that announces the top-10 tweets from the previous episode's hashtag (the tweets are listed as embedded images, as is often done for sharing public tweets on websites). If they're not already present, we add the tweets from the top-10 to our existing list of tweets for the hashtag. We also perform automated filtering to remove redundant tweets. Specifically, we see that the text of tweets (aside from hashtags and user mentions) are not the same. The need for this results from the fact that some viewers submit identical tweets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data collection",
"sec_num": "3.1"
},
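The redundancy filter described above can be sketched as follows. The exact normalization the authors applied is not specified, so stripping hashtags and user mentions with a regular expression and lowercasing is an assumption; `normalize` and `deduplicate` are illustrative names.

```python
import re

def normalize(tweet: str) -> str:
    """Strip hashtags and user mentions, collapse whitespace, lowercase."""
    text = re.sub(r"[#@]\w+", "", tweet)
    return " ".join(text.split()).lower()

def deduplicate(tweets):
    """Keep only the first tweet for each normalized text (a sketch of the
    redundancy filtering described above)."""
    seen, unique = set(), []
    for tweet in tweets:
        key = normalize(tweet)
        if key not in seen:
            seen.add(key)
            unique.append(tweet)
    return unique
```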
{
"text": "Using both the @midnight official Tumblr account, as well as the show's official website where the winning tweet is posted, we annotate each tweet with labels 0, 1 and 2. Label 2 designates the winning tweet. Thus, the label 2 only occurs once for each hashtag. Label 1 indicates that the tweet was selected as a top-10 tweet (but not the winning tweet) and label 0 is assigned for all other tweets. It is important to note that every time we collect a tweet, we must also collect its tweet ID. While this was initially done to comply with Twitter's terms of use 5 , which disallows the public distribution of users' tweets, The presence of tweet IDs allows us to easily handle the evaluation process when referencing tweets (see Section 4). The need to determine the tweet IDs for tweets that weren't found in the initial query (i.e. tweets added from the top 10) makes the data collection process slightly laborious, since the top-10 list doesn't contain the tweet ID. In fact, it doesn't even contain the text itself since it's actually an image.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data collection",
"sec_num": "3.1"
},
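A minimal sketch of the labeling scheme just described, assuming the winning tweet ID and the set of top-10 tweet IDs are known for a hashtag (all names are hypothetical):

```python
def label_tweets(all_ids, top10_ids, winner_id):
    """Assign the show's labels: 2 for the winning tweet, 1 for the other
    top-10 tweets, and 0 for every remaining tweet."""
    top10 = set(top10_ids)
    return {
        tweet_id: 2 if tweet_id == winner_id else 1 if tweet_id in top10 else 0
        for tweet_id in all_ids
    }
```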
{
"text": "Collection Because the data collection process is continuously repeated and requires a non-trivial amount of human labor, we have built a helper system that can partially automate the process of data collection. This system is organized as a website with a convenient user interface.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Semi-Automated System for Data",
"sec_num": "3.1.1"
},
{
"text": "On the start page the user enters the id of the Tumblr post with the tweets in the top 10. Next, we invoke Tesseract 6 , an OCR command-line utility, to recognize the textual content of the tweet images. Using the recognized content, the system forms a webpage on which the user can simultaneously see the text of the tweets as well as the original images. On this page, the user can query the Twitter API to search by text, or click the button \"Open twitter search\" to open the Twitter Search page if the API returns zero results. We note that the process is not fully automated because a given text query can we return redundant results, and we primarily check to make sure we add the tweet that came from the appropriate user. With the help of this system, the process of collecting the top-10 tweets (along with their tweet IDs) takes roughly 2 minutes. Lastly, we note that the process for annotating the winning tweet (which is already included in the top-10 posted in the Tumblr list) is currently manual, because it requires going to the @midnight website. This is another aspect of the data collection system that could potentially be automated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Semi-Automated System for Data",
"sec_num": "3.1.1"
},
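The OCR step can be reproduced with a call to the Tesseract command-line utility, roughly as below. This is only a sketch of that step, not the authors' website code; it assumes tesseract is installed and on the PATH, and the function name is illustrative.

```python
import subprocess

def ocr_tweet_image(image_path: str) -> str:
    """Run the Tesseract OCR command-line utility on a tweet image and
    return the recognized text ("stdout" tells tesseract to print the
    result instead of writing an output file)."""
    result = subprocess.run(
        ["tesseract", image_path, "stdout"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()
```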
{
"text": "Data collection occurred for roughly eight months, producing a total of 12,734 tweets for 112 hashtags. The resulting dataset is what we used for the task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.2"
},
{
"text": "The distribution of the number of tweets per hashtag is represented in Figure 1 . For 71% of hashtags, we have at least 90 tweets. The files of the individual hashtags are formatted so that the individual hashtag tokens are easily recoverable. Specifically, tokens are separated by the ' ' character. For example, the hashtag FastFoodBooks has the file name \"fast food books.tsv\". Figure 2 represents an example of the tweets collected for the hashtag FastFoodBooks. Ob- Figure 1 : Distribution of the numbers of tweets per hashtag serve that this hashtag requires external knowledge about fast food and books in order to understand the humor. Furthermore, this hashtag illustrates how prevalent puns are in the dataset, especially related to certain target hashtags. In contrast, the hashtag IfIWerePresident (see Figure 3) does not require external knowledge and the tweets are understandable without awareness of any specific concepts.",
"cite_spans": [],
"ref_spans": [
{
"start": 71,
"end": 79,
"text": "Figure 1",
"ref_id": null
},
{
"start": 381,
"end": 389,
"text": "Figure 2",
"ref_id": null
},
{
"start": 471,
"end": 479,
"text": "Figure 1",
"ref_id": null
},
{
"start": 815,
"end": 821,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.2"
},
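The file-naming convention above can be inverted with a one-line helper; a small sketch (the function name is illustrative):

```python
def hashtag_tokens_from_filename(filename: str) -> list:
    """Recover hashtag tokens from a hashtag file name, e.g.
    "fast_food_books.tsv" -> ["fast", "food", "books"]."""
    return filename.rsplit(".", 1)[0].split("_")
```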
{
"text": "For the purpose of our task, we released 5 files/660 tweets as the trial data, 101 files/11,325 tweets (separate from the trial data) as the training data, and 6 files/749 tweets as the evaluation data. The 6 evaluation files were chosen based on the following logic: first, we examined the results of our own systems on individual hashtags using leave-one-out evaluation (Potash et al., 2016) . We looked for a mixture of hashtags that had high, average, and low performance. Secondly, we wanted a mixture of hashtags that promote different types of humor, such as puns that use external knowledge (for example the hashtag FastFoodBooks in Figure 3. 2), or hashtags that seek to express more general humor (for example the hashtag IfIWerePresident in Figure 3 .2).",
"cite_spans": [
{
"start": 372,
"end": 393,
"text": "(Potash et al., 2016)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 641,
"end": 650,
"text": "Figure 3.",
"ref_id": null
},
{
"start": 752,
"end": 760,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.2"
},
{
"text": "In this task, the results are evaluated in two subtasks. Subtask A requires the participants to generate the correct pairwise comparisons of tweets to determine which tweet is funnier according to the TV show @midnight. Subtask B asks for the correct ranking of tweets in terms of how funny they are (again, according to @midnight).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subtasks",
"sec_num": "4"
},
{
"text": "As I Lay Dying of congestive heart failure @midnight #FastFoodBooks Harry Potter and the Order of the Big Mac #FastFoodBooks @midnight The Girl With The Jared Tattoo #FastFood-Books @midnight A Room With a Drive-thru @midnight #Fast-FoodBooks Figure 2 : An example of the items in the dataset for the hashtag FastFoodBooks that requires external knowledge in order to understand the humor. Furthermore, the tweets for this hashtag are puns connecting book titles and fast food-related language #IfIWerePresident my Cabinet would just be cats. @midnight Historically, I'd oversleep and eventually get fired. @midnight #IfIWerePresident #IfIWerePresident I'd pardon Dad so we could be together again... @midnight #IfIWerePresident my estranged children would finally know where I was @midnight Figure 3 : An example of the items in the dataset for the hashtag IfIWerePresident that does not require external knowledge in order to understand the humor",
"cite_spans": [],
"ref_spans": [
{
"start": 243,
"end": 251,
"text": "Figure 2",
"ref_id": null
},
{
"start": 792,
"end": 800,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Subtasks",
"sec_num": "4"
},
{
"text": "For the first subtask, we follow the approach taken by Shahaf et al. (2015) and make predictions on pairs of tweets with the goal of determining which tweet is funnier. Using the tweets for each hashtag, we construct pairs of tweets in which one tweet is judged by the show to be funnier than the other. The pairs used for evaluation are constructed as follows:",
"cite_spans": [
{
"start": 55,
"end": 75,
"text": "Shahaf et al. (2015)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Subtask A: Pairwise Comparison",
"sec_num": "4.1"
},
{
"text": "(1) The tweets that are the top-10 funniest tweets are paired with the tweets not in the top-10.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subtask A: Pairwise Comparison",
"sec_num": "4.1"
},
{
"text": "(2) The winning tweet is paired with the other tweets in the top-10.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subtask A: Pairwise Comparison",
"sec_num": "4.1"
},
{
"text": "If we have n tweets for a given hashtag, (1) will produce 10(n \u2212 10) pairs, and (2) will produce 9 pairs, giving us 10n \u2212 91 data points for a single hashtag. Constructing the pairs for evaluation in this way ensures that one of the tweets in each pair has been judged to be funnier than the other. We follow Shahaf et al. and use the label 1 to denote that the first tweet is funnier, and 0 to denote that the second tweet is funnier. However, this labeling is counter-intuitive to zero-indexing, and could be changed to avoid confusion in labeling (see Section 5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subtask A: Pairwise Comparison",
"sec_num": "4.1"
},
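A sketch of this gold-pair construction, assuming a dictionary mapping tweet IDs to their 0/1/2 labels for one hashtag (the function name is illustrative); the assertion checks the 10n - 91 count stated above:

```python
from itertools import product

def build_gold_pairs(labels):
    """labels: dict tweet_id -> label in {0, 1, 2} for a single hashtag.
    Returns (funnier, less_funny) pairs following rules (1) and (2)."""
    winner = next(t for t, l in labels.items() if l == 2)
    top10 = [t for t, l in labels.items() if l >= 1]       # winner plus nine others
    rest = [t for t, l in labels.items() if l == 0]

    pairs = list(product(top10, rest))                     # rule (1): 10 * (n - 10) pairs
    pairs += [(winner, t) for t in top10 if t != winner]   # rule (2): 9 pairs
    return pairs

# With n tweets per hashtag, this yields 10 * (n - 10) + 9 = 10n - 91 pairs.
n = 100
labels = {i: (2 if i == 0 else 1 if i < 10 else 0) for i in range(n)}
assert len(build_gold_pairs(labels)) == 10 * n - 91
```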
{
"text": "Since we only provide teams with files containing tweet ID, tweet text, and tweet label (gold label: 0, 1, or 2), it is up to the teams to form the appropriate pairs with the correct labels. In order to produce balanced training data, we recommend that the ordering of tweets in a pair be determined by a coin-flip. At evaluation time, we provide the teams with hashtag files with tweet id and tweet text. We then ask the teams to provide predictions for every possible tweet combination. Our evaluation script then chooses only the tweet pairs where two different labels are present. The pairs can be listed in either ordering of the tweets because the scorer accounts for the two possible orderings for each pair. We decided against the idea of providing the appropriate pairs themselves for evaluation because it is very easy to use frequencies of tweet IDs in the pairs to determine overall tweet label.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subtask A: Pairwise Comparison",
"sec_num": "4.1"
},
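The recommended coin-flip ordering for balanced training pairs can be sketched as follows (names are illustrative; the gold pairs come from a construction like the one shown above):

```python
import random

def make_training_example(funnier_id, less_funny_id, rng=random):
    """Randomize the order of a gold pair so the binary labels stay balanced:
    label 1 means the first tweet is funnier, 0 means the second one is."""
    if rng.random() < 0.5:
        return (funnier_id, less_funny_id, 1)
    return (less_funny_id, funnier_id, 0)
```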
{
"text": "The evaluation measure for subtask A is the micro average of accuracy across the individual evaluation hashtags. For a given hashtag, the accuracy is the number of correctly predicted pairs divided by the total number of pairs. Therefore, random guessing will produce 50% accuracy on this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subtask A: Pairwise Comparison",
"sec_num": "4.1"
},
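A minimal sketch of the micro-averaged accuracy, assuming per-hashtag counts of correctly predicted pairs are available (names are illustrative):

```python
def micro_averaged_accuracy(per_hashtag_counts):
    """per_hashtag_counts: list of (correct_pairs, total_pairs), one entry
    per evaluation hashtag. Micro-averaging pools all pairs, so hashtags
    with more pairs carry proportionally more weight."""
    correct = sum(c for c, _ in per_hashtag_counts)
    total = sum(t for _, t in per_hashtag_counts)
    return correct / total
```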
{
"text": "The second subtask asks teams to use the same input data for training and evaluation as subtask A. However, whereas subtask A creates pairs of tweets based on the labeling, subtask B asks teams to predict the labels directly. For this dataset, the number of tweets per class is known. Moreover, since the labels describe a partial ordering, predicting the labels is akin to providing a ranking of tweets in order of how funny they are. Therefore, for subtask B, we ask the teams to provide prediction files where the tweets are ranking by how funny they are. From the provided ranking we infer the labeling: the first tweet is labeled 2, the next nine labeled 1, and the rest labeled 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subtask B: Ranking",
"sec_num": "4.2"
},
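The label-inference rule above is straightforward to express in code; a sketch assuming the prediction file has been read into a list of tweet IDs ordered from funniest to least funny (names are illustrative):

```python
def labels_from_ranking(ranked_tweet_ids):
    """Infer labels from a predicted ranking (funniest first): the top
    tweet gets label 2, the next nine get label 1, the rest get label 0."""
    labels = {}
    for position, tweet_id in enumerate(ranked_tweet_ids):
        labels[tweet_id] = 2 if position == 0 else 1 if position <= 9 else 0
    return labels
```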
{
"text": "The metric for evaluating subtask B is inspired by a notion of edit distance, because standard clas-sification metrics do not take into account class' comparative rankings. Treating labels as buckets, the metric determines, for a predicted label, how many 'moves' are needed to place it in the correct bucket. For example, if the correct label is 1 and the predicted label is 0, the edit distance is 1. Similarly, if the correct label is 0 and the predicted label is 2, the edit distance is 2. For a given hashtag file, the maximum edit distance for all tweets is 22. As a result, the edit distance for a given hashtag file is the total number of moves for all tweets divided by 22. This gives a normalized metric between 0 and 1 where a lower value is better. For the final distance metric, we micro-average across all evaluation files.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subtask B: Ranking",
"sec_num": "4.2"
},
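A sketch of this distance, assuming predicted and gold labels are available as dictionaries keyed by tweet ID (names are illustrative). Because every hashtag file shares the same normalizer of 22 moves, micro-averaging across files reduces to the mean of the per-file normalized distances:

```python
def hashtag_edit_distance(predicted, gold):
    """predicted, gold: dict tweet_id -> label in {0, 1, 2} for one hashtag.
    Each tweet contributes the number of 'bucket moves' between its
    predicted and gold label; the total is normalized by the maximum of 22."""
    moves = sum(abs(predicted[t] - gold[t]) for t in gold)
    return moves / 22.0

def final_distance(per_hashtag_distances):
    """Average the normalized distances over all evaluation hashtag files."""
    return sum(per_hashtag_distances) / len(per_hashtag_distances)
```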
{
"text": "Three teams participated only in subtask A, one team participated only in subtask B, and four teams participated in both subtasks. The official results for participating teams are shown in Tables 1 and 2 for subtasks A and B, respectively. Note that due to space constraints we use short versions of hashtag names in the tables. Namely, \"Christmas\" corresponds to the hashtag RuinAChristmasMovie, \"Shakespeare\" corresponds to ModernShakespeare, \"Bad Job\" to Bad-JobIn5Words, \"Break Up\" to BreakUpIn5Words, \"Broadway\" to BroadwayACeleb, and \"Cereal\" to CerealSongs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "We report the results broken down by hashtag, as well as the overall micro-average. This table records results that were submitted to the Co-daLab competition pages 7 . TakeLab (Kukova\u010dec et al., 2017) submitted predictions with the labels flipped, which causes each run to appear in the table twice. The corrected files are not given an official ranking. After the release of the labeled evaluation data, many teams reported improved results. We have accrued these new results and combined them with the official submission rankings to produce Tables 3 and 4. The goal of these tables is to report the most up-to-date results on the evaluation set. Moreover, all results that do not have an official ranking in these tables are results that are reported individually by the teams in their system papers (except for TakeLab's results) after the gold evaluation labels were released. Table 1 : The official results for the subtask A broken down by hashtag. Bold indicates the best run for the given hashtag. \"Christmas\" corresponds to the hashtag RuinAChristmasMovie, \"Shakespeare\" corresponds to ModernShakespeare, \"Bad Job\" to BadJobIn5Words, \"Break Up\" to BreakUpIn5Words, \"Broadway\" to BroadwayACeleb, and \"Cereal\" to CerealSongs. The official results for the subtask B broken down by hashtag. Bold indicates the best run for the given hashtag. \"Christmas\" corresponds to the hashtag RuinAChristmasMovie, \"Shakespeare\" corresponds to ModernShakespeare, \"Bad Job\" to BadJobIn5Words, \"Break Up\" to BreakUpIn5Words, \"Broadway\" to BroadwayACeleb, and \"Cereal\" to CerealSongs.",
"cite_spans": [
{
"start": 177,
"end": 201,
"text": "(Kukova\u010dec et al., 2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 883,
"end": 890,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "The last row of Table 1 shows the average accuracy of each hashtag across all systems (the official results of the TakeLab systems are not included in this average since we also include in the average the unofficial, corrected results). The two easiest hashtags are ones that require less external knowledge compared to the other four. These four hashtags specifically riff on a particular Christmas movie, Shakespeare quote, celebrity/Broadway play, or cereal/song. Consequently, one single system did best in three out of four of these hashtags (TakeLab). It is not coincidence, since this system made extensive use of external knowledge bases. Furthermore, the three hashtags where it did best required knowledge of specific entities, whereas the knowledge required in the hashtag ModernShakespeare is the actual lines from Shakespeare plays. As we mentioned in Section 3.2, the evaluation hashtags were chosen partly because of our own system performance on the hashtags (Potash et al., 2016) . One of the most difficult hashtags from our initial experiments was the hashtag CerealSongs, which was the hashtag systems performed the worse on in this task. We believe this is because the humor in this hashtag is based on two sources of external knowledge: cereals and songs. Correspondingly, the hashtag with the second worse performance also requires two sources of external knowledge: Broadway plays and celebrities (this hashtag was originally chosen as a representative of the hashtags our systems recorded average performance). The hashtag BadJobIn5Words was one that had high performance by our own systems, and that continued in this task. This hashtag had the second highest accuracy, and would have had the highest if the Duluth team (Yan and Pedersen, 2017) did not have such remarkable success on the highest accuracy hashtag, BreakUpIn5Words.",
"cite_spans": [
{
"start": 975,
"end": 996,
"text": "(Potash et al., 2016)",
"ref_id": "BIBREF11"
},
{
"start": 1746,
"end": 1770,
"text": "(Yan and Pedersen, 2017)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 16,
"end": 23,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Task Analysis",
"sec_num": "6.1"
},
{
"text": "The poor performance for the hashtags Cere-alSongs and BroadwayACeleb is also interesting Table 4 : Unofficial results for the subtask B on the released evaluation set reported by the participating teams to note since they were chosen because the hashtag names had strong similarity to hashtags in the training data. For example, 12 hashtags in the training data had the word 'Song'. Likewise, five hashtags had the word 'Celeb', and there was one more hashtag with the word 'Broadway'. Alternatively, The two hashtags with the best performance followed the 'X in X words' format, for which there were 16 such hashtags in the training data. Regarding the hashtag BadJobIn5Words, there are six hashtags in the training data beginning with the word 'Bad'.",
"cite_spans": [],
"ref_spans": [
{
"start": 90,
"end": 97,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Task Analysis",
"sec_num": "6.1"
},
{
"text": "Our current task analysis has focused on subtask A. The primary reason for this is that the performance on subtask B was relatively poor. To put the results in perspective, we created random guesses for subtask B, and these random guesses recorded an average distance of 0.880. From the results, only one team was able to beat this score. We can see that two of the three highest performing teams in subtask A did not participate in subtask B, and the other team that did participate approached subtask B as a secondary task (see Section 6.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Analysis",
"sec_num": "6.1"
},
{
"text": "For the teams that participated in both subtasks, they used the output of a single system to predict for both subtasks. Two teams, SVNIT (Mahajan and Zaveri, 2017) and QUB (Han and Toner, 2017) , initially predicted the labels of each tweet based on the output of a supervised classifier, and then used these labels to both rank the tweets and make pairwise predictions for the subtasks. Duluth took a similar approach, but used the output of a language model to rank the tweets, as opposed to labels provided by a classifier. Conversely, TakeLab sought to solve subtask A first, then used the frequencies of a tweet being chosen as funnier in a pair to provide a single, ordered metric to make predictions for subtask B. The team that only participated in subtask B, #WarTeam, also used the output of a supervised classifier to label the tweets, which in turn provided the ranking. One of interesting results from having the two subtasks (which are effectively two different ways of evaluating the same overall task) is to see how it distinguishes the unified approaches to solving both subtasks. We can see that, in fact, the top team is not con-sistent between the two subtasks. It is not a surprise to see that the best performing team (out of the four that participated in both subtasks) in subtask A was TakeLab, who focused primarily on this task. Conversely, TakeLab finished second in subtask B to Duluth, who focused on creating an ordered metric for ranking via language models.",
"cite_spans": [
{
"start": 137,
"end": 163,
"text": "(Mahajan and Zaveri, 2017)",
"ref_id": "BIBREF9"
},
{
"start": 172,
"end": 193,
"text": "(Han and Toner, 2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Analysis",
"sec_num": "6.2"
},
{
"text": "In terms of overall system approach, we can analyze how heavily systems rely on featureengineering, verse using learned representations from neural networks. Three of the top four systems for subtask A leveraged neural network architectures. Two of these systems used only pretrained word representations as external knowledge for the neural network systems. This is in opposition to other systems that relied on the output of separate tools, or looking up terms in corpora. Some teams, such as HumorHawk 8 (Donahue et al., 2017) and #WarTeam, used a combination of these two types of systems, and notably, the system that was ranked first in Subtask A (HumorHawk) was an ensemble system that utilized prediction from both feature-based and neural networks-based models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Analysis",
"sec_num": "6.2"
},
{
"text": "As for the feature-based systems, one trend we observed is that many teams tried to capture the incongruity aspect of humor (Cattle and Ma, 2017) , often present in the dataset. The approaches used by teams varied from n-gram language models, word association, to semantic relatedness features. In addition, the TakeLab team used cultural reference features, such as movie and song references, and Google Trends features for named entities. During the performed analysis, the team found these features most useful for the model.",
"cite_spans": [
{
"start": 124,
"end": 145,
"text": "(Cattle and Ma, 2017)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Analysis",
"sec_num": "6.2"
},
{
"text": "Considering neural network-based systems, LSTMs were used the most, which is expected given the sequential nature of text data. Plain LSTM models alone, using pretrained word embeddings, achieved competitive results, and DataStories (Baziotis et al., 2017) ranked third using a siamese bidirectional LSTM model with attention.",
"cite_spans": [
{
"start": 233,
"end": 256,
"text": "(Baziotis et al., 2017)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Analysis",
"sec_num": "6.2"
},
{
"text": "One key difference between the dataset used in this task and the datasets based on the NYCC (Radev et al., 2015; Shahaf et al., 2015) is the presence of the hashtag. Some teams used additional hashtag-based features in their systems. 8 Two of the organizers were members of this team. They were not involved in the data selection process. They had no knowledge of which files were selected for evaluation, nor how these files were chosen.",
"cite_spans": [
{
"start": 92,
"end": 112,
"text": "(Radev et al., 2015;",
"ref_id": "BIBREF13"
},
{
"start": 113,
"end": 133,
"text": "Shahaf et al., 2015)",
"ref_id": "BIBREF16"
},
{
"start": 234,
"end": 235,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Analysis",
"sec_num": "6.2"
},
{
"text": "For example, humor patterns, defined by the hashtag, were one of the most important features for the TakeLab team. Other teams used semantic distances between the hashtag and tweets as features. Table 1 also includes the standard deviation of system scores across the hashtags. Looking at the numbers there appears to be little in the way of a pattern regarding the standard deviation numbers. When correlated with system accuracy, the results is 0.11, which supports the idea that consistency across the hashtags has no relation to overall system performance. Even between the two purest neural network-based systems, DataStories and HumorHawk run 1, the standard deviations vary greatly: 0.134 (DataStories) and 0.049 (Hu-morHawk run 1). In fact, 0.049 was the lowest standard deviation across all systems. Duluth recorded the highest standard deviation across the datasets, primarily due to the fact that it had the single highest accuracy on any hashtag (0.913 for the hashtag BreakUpIn5Words), as well as the lowest single hashtag score for any system with an overall accuracy greater than 0.600 (0.485 for the hashtag RuinAChristmasMovie). One possibility for this high standard deviation is that this is the only unsupervised system. However, the other run submitted by Duluth (whose primary difference is that its language model was trained on a dataset of tweets as opposed to news articles) has a both a significantly lower accuracy and standard deviation.",
"cite_spans": [],
"ref_spans": [
{
"start": 195,
"end": 202,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "System Analysis",
"sec_num": "6.2"
},
{
"text": "We have presented the results of the SemEval 2017 shared task: #HashtagWars: Learning a Sense of Humor. It was the first year this task was presented, attracting 8 teams and 19 systems across two substasks. The top performing systems achieved 0.675 accuracy in subtask A and 0.872 score on subtask B, advancing the difficult task of humor understanding. Interestingly, the topranked system used an ensemble of both featurebased and neural network-based systems, suggesting that despite the overwhelming success of neural networks in the past few years, human intuition is still important for systems that seek to automatically understand humor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "http://www.cc.com/shows/-midnight",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://contest.newyorker.com/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://dev.twitter.com/rest/public/ search 4 http://atmidnightcc.tumblr.com/ 5 https://dev.twitter.com/overview/ terms",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/tesseract-ocr/ tesseract",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://competitions.codalab.org/ competitions/15682, https://competitions. codalab.org/competitions/15689",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Automatic detection of irony and humour in twitter",
"authors": [
{
"first": "Francesco",
"middle": [],
"last": "Barbieri",
"suffix": ""
},
{
"first": "Horacio",
"middle": [],
"last": "Saggion",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francesco Barbieri and Horacio Saggion. 2014. Au- tomatic detection of irony and humour in twitter.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Proceedings of the International Conference on Computational Creativity",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "In Proceedings of the International Conference on Computational Creativity.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Datastories at semeval-2017 task 6: Siamese lstm with attention for humorous text comparison",
"authors": [
{
"first": "Christos",
"middle": [],
"last": "Baziotis",
"suffix": ""
},
{
"first": "Nikos",
"middle": [],
"last": "Pelekis",
"suffix": ""
},
{
"first": "Christos",
"middle": [],
"last": "Doulkeridis",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "389--394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christos Baziotis, Nikos Pelekis, and Christos Doulk- eridis. 2017. Datastories at semeval-2017 task 6: Siamese lstm with attention for humorous text comparison. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). Association for Computational Linguistics, Vancouver, Canada, pages 389-394. http://www.aclweb.org/anthology/S17-2065.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Srhr at semeval-2017 task 6: Word associations for humour recognition",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Cattle",
"suffix": ""
},
{
"first": "Xiaojuan",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "400--405",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Cattle and Xiaojuan Ma. 2017. Srhr at semeval-2017 task 6: Word associations for hu- mour recognition. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). Association for Computational Linguistics, Vancouver, Canada, pages 400-405. http://www.aclweb.org/anthology/S17-2067.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Humorhawk at semeval-2017 task 6: Mixing meaning and sound for humor recognition",
"authors": [
{
"first": "David",
"middle": [],
"last": "Donahue",
"suffix": ""
},
{
"first": "Alexey",
"middle": [],
"last": "Romanov",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "98--102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Donahue, Alexey Romanov, and Anna Rumshisky. 2017. Humorhawk at semeval- 2017 task 6: Mixing meaning and sound for humor recognition. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). Association for Computational Linguistics, Vancouver, Canada, pages 98-102. http://www.aclweb.org/anthology/S17-2010.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Qub at semeval-2017 task 6: Cascaded imbalanced classification for humor analysis in twitter",
"authors": [
{
"first": "Xiwu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Toner",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "379--383",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiwu Han and Gregory Toner. 2017. Qub at semeval- 2017 task 6: Cascaded imbalanced classification for humor analysis in twitter. In Proceedings of the 11th International Workshop on Semantic Eval- uation (SemEval-2017). Association for Computa- tional Linguistics, Vancouver, Canada, pages 379- 383. http://www.aclweb.org/anthology/S17-2063.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Sample selection bias as a specification error",
"authors": [
{
"first": "J",
"middle": [],
"last": "James",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Heckman",
"suffix": ""
}
],
"year": 1979,
"venue": "Econometrica: Journal of the econometric society",
"volume": "",
"issue": "",
"pages": "153--161",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James J Heckman. 1979. Sample selection bias as a specification error. Econometrica: Journal of the econometric society pages 153-161.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "That's what she said: double entendre identification",
"authors": [
{
"first": "Chloe",
"middle": [],
"last": "Kiddon",
"suffix": ""
},
{
"first": "Yuriy",
"middle": [],
"last": "Brun",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers",
"volume": "2",
"issue": "",
"pages": "89--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chloe Kiddon and Yuriy Brun. 2011. That's what she said: double entendre identification. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Tech- nologies: short papers-Volume 2. Association for Computational Linguistics, pages 89-94.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Takelab at semeval-2017 task 6: #rank-inghumorin4pages",
"authors": [
{
"first": "Marin",
"middle": [],
"last": "Kukova\u010dec",
"suffix": ""
},
{
"first": "Juraj",
"middle": [],
"last": "Malenica",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Mr\u0161i\u0107",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "\u0160ajatovi\u0107",
"suffix": ""
},
{
"first": "Domagoj",
"middle": [],
"last": "Alagi\u0107",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "\u0160najder",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "395--399",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marin Kukova\u010dec, Juraj Malenica, Ivan Mr\u0161i\u0107, Anto- nio\u0160ajatovi\u0107, Domagoj Alagi\u0107, and Jan\u0160najder. 2017. Takelab at semeval-2017 task 6: #rank- inghumorin4pages. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). Association for Computational Linguistics, Vancouver, Canada, pages 395-399. http://www.aclweb.org/anthology/S17-2066.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Svnit @ semeval 2017 task-6: Learning a sense of humor using supervised approach",
"authors": [
{
"first": "Rutal",
"middle": [],
"last": "Mahajan",
"suffix": ""
},
{
"first": "Mukesh",
"middle": [],
"last": "Zaveri",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "410--414",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rutal Mahajan and Mukesh Zaveri. 2017. Svnit @ semeval 2017 task-6: Learning a sense of humor using supervised approach. In Proceedings of the 11th International Workshop on Semantic Evalu- ation (SemEval-2017). Association for Computa- tional Linguistics, Vancouver, Canada, pages 410- 414. http://www.aclweb.org/anthology/S17-2069.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Making computers laugh: Investigations in automatic humor recognition",
"authors": [
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Carlo",
"middle": [],
"last": "Strapparava",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "531--538",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rada Mihalcea and Carlo Strapparava. 2005. Making computers laugh: Investigations in automatic humor recognition. In Proceedings of the Conference on Human Language Technology and Empirical Meth- ods in Natural Language Processing. Association for Computational Linguistics, pages 531-538.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "#hashtagwars: Learning a sense of humor",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Potash",
"suffix": ""
},
{
"first": "Alexey",
"middle": [],
"last": "Romanov",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1612.03216"
]
},
"num": null,
"urls": [],
"raw_text": "Peter Potash, Alexey Romanov, and Anna Rumshisky. 2016. #hashtagwars: Learning a sense of humor. arXiv preprint arXiv:1612.03216 .",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Humor: Prosody analysis and automatic recognition for f* r* i* e* n* d* s*",
"authors": [
{
"first": "Amruta",
"middle": [],
"last": "Purandare",
"suffix": ""
},
{
"first": "Diane",
"middle": [],
"last": "Litman",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "208--215",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amruta Purandare and Diane Litman. 2006. Humor: Prosody analysis and automatic recognition for f* r* i* e* n* d* s*. In Proceedings of the 2006 Con- ference on Empirical Methods in Natural Language Processing. Association for Computational Linguis- tics, pages 208-215.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Humor in collective discourse: Unsupervised funniness detection in the new yorker cartoon caption contest",
"authors": [
{
"first": "Dragomir",
"middle": [],
"last": "Radev",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Stent",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Tetreault",
"suffix": ""
},
{
"first": "Aasish",
"middle": [],
"last": "Pappu",
"suffix": ""
},
{
"first": "Aikaterini",
"middle": [],
"last": "Iliakopoulou",
"suffix": ""
},
{
"first": "Agustin",
"middle": [],
"last": "Chanfreau",
"suffix": ""
},
{
"first": "Paloma",
"middle": [],
"last": "De Juan",
"suffix": ""
},
{
"first": "Jordi",
"middle": [],
"last": "Vallmitjana",
"suffix": ""
},
{
"first": "Alejandro",
"middle": [],
"last": "Jaimes",
"suffix": ""
},
{
"first": "Rahul",
"middle": [],
"last": "Jha",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1506.08126"
]
},
"num": null,
"urls": [],
"raw_text": "Dragomir Radev, Amanda Stent, Joel Tetreault, Aa- sish Pappu, Aikaterini Iliakopoulou, Agustin Chan- freau, Paloma de Juan, Jordi Vallmitjana, Alejandro Jaimes, Rahul Jha, et al. 2015. Humor in collective discourse: Unsupervised funniness detection in the new yorker cartoon caption contest. arXiv preprint arXiv:1506.08126 .",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Automatic humor classification on twitter",
"authors": [
{
"first": "Yishay",
"middle": [],
"last": "Raz",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop",
"volume": "",
"issue": "",
"pages": "66--70",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yishay Raz. 2012. Automatic humor classification on twitter. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies: Student Research Workshop. Association for Computational Linguistics, pages 66-70.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A multidimensional approach for detecting irony in twitter",
"authors": [
{
"first": "Antonio",
"middle": [],
"last": "Reyes",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
},
{
"first": "Tony",
"middle": [],
"last": "Veale",
"suffix": ""
}
],
"year": 2013,
"venue": "Language resources and evaluation",
"volume": "47",
"issue": "1",
"pages": "239--268",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antonio Reyes, Paolo Rosso, and Tony Veale. 2013. A multidimensional approach for detecting irony in twitter. Language resources and evaluation 47(1):239-268.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Inside jokes: Identifying humorous cartoon captions",
"authors": [
{
"first": "Dafna",
"middle": [],
"last": "Shahaf",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Horvitz",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Mankoff",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "1065--1074",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dafna Shahaf, Eric Horvitz, and Robert Mankoff. 2015. Inside jokes: Identifying humorous cartoon captions. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, pages 1065-1074.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Duluth at semeval-2017 task 6: Language models in humor detection",
"authors": [
{
"first": "Xinru",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Pedersen",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "384--388",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinru Yan and Ted Pedersen. 2017. Duluth at semeval-2017 task 6: Language models in hu- mor detection. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). Association for Computational Linguistics, Vancouver, Canada, pages 384-388. http://www.aclweb.org/anthology/S17-2064.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Humor recognition and humor anchor extraction pages",
"authors": [
{
"first": "Diyi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "2367--2376",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diyi Yang, Alon Lavie, Chris Dyer, and Eduard Hovy. 2015. Humor recognition and humor anchor extrac- tion pages 2367-2376.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Learning and evaluating classifiers under sample selection bias",
"authors": [
{
"first": "Bianca",
"middle": [],
"last": "Zadrozny",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the twenty-first international conference on Machine learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bianca Zadrozny. 2004. Learning and evaluating clas- sifiers under sample selection bias. In Proceedings of the twenty-first international conference on Ma- chine learning. ACM, page 114.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Recognizing humor on twitter",
"authors": [
{
"first": "Renxian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Naishi",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "889--898",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Renxian Zhang and Naishi Liu. 2014. Recognizing hu- mor on twitter. In Proceedings of the 23rd ACM International Conference on Conference on Infor- mation and Knowledge Management. ACM, pages 889-898.",
"links": null
}
},
"ref_entries": {
"TABREF2": {
"type_str": "table",
"num": null,
"html": null,
"text": "",
"content": "<table/>"
},
"TABREF4": {
"type_str": "table",
"num": null,
"html": null,
"text": "Unofficial results for the subtask A on the released evaluation set reported by the participating teams",
"content": "<table><tr><td>Official Ranking</td><td>Team</td><td>Score</td><td>Notes</td></tr><tr><td/><td>Duluth</td><td colspan=\"2\">0.853 Bigram language model (news dataset)</td></tr><tr><td>1</td><td>Duluth</td><td colspan=\"2\">0.872 Trigram language model (news dataset)</td></tr><tr><td>2</td><td>TakeLab</td><td colspan=\"2\">0.908 Gradient boosting classifier with a rich set of features, including cultural references</td></tr><tr><td>3</td><td>QUB</td><td colspan=\"2\">0.924 A set of imblanaced classifiers with n-gram features</td></tr><tr><td>3</td><td>QUB</td><td colspan=\"2\">0.924 A set of imblanaced classifiers with n-gram features</td></tr><tr><td>5</td><td>SVNIT</td><td colspan=\"2\">0.938 Multilayer perceptron with incongruity, ambiguity, and stylistic features</td></tr><tr><td>6</td><td>TakeLab</td><td colspan=\"2\">0.944 Gradient boosting classifier with a rich set of features, including cultural references</td></tr><tr><td>7</td><td>SVNIT</td><td colspan=\"2\">0.949 A Naive Bayes classifier with incongruity, ambiguity, and stylistic features</td></tr><tr><td>8</td><td>Duluth</td><td colspan=\"2\">0.967 Trigram language model (tweets dataset)</td></tr><tr><td>9</td><td colspan=\"3\">#WarTeam 1.000 A word-based voting algorithm of a Naive Bayes and neural network word scorers</td></tr></table>"
}
}
}
}