|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:27:37.555831Z" |
|
}, |
|
"title": "Supporting Comedy Writers: Predicting Audience's Response from Sketch Comedy and Crosstalk Scripts", |
|
"authors": [ |
|
{ |
|
"first": "Maolin", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Sketch comedy and crosstalk are two popular types of comedy. They can relieve people's stress and thus benefit their mental health, especially when performances and scripts are high-quality. However, writing a script is time-consuming and its quality is difficult to achieve. In order to minimise the time and effort needed for producing an excellent script, we explore ways of predicting the audience's response from the comedy scripts. For this task, we present a corpus of annotated scripts from popular television entertainment programmes in recent years. Annotations include a) text classification labels, indicating which actor's lines made the studio audience laugh; b) information extraction labels, i.e. the text spans that made the audience laughed immediately after the performers said them. The corpus will also be useful for dialogue systems and discourse analysis, since our annotations are based on entire scripts. In addition, we evaluate different baseline algorithms. Experimental results demonstrate that BERT models can achieve the best predictions among all the baseline methods. Furthermore, we conduct an error analysis and investigate predictions across scripts with different styles. 1 * The research was conducted during non-working time. The idea of this research was inspired by a discussion with my friend about an entertainment TV programme in which the comedians mentioned the difficulties of producing a highquality script.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Sketch comedy and crosstalk are two popular types of comedy. They can relieve people's stress and thus benefit their mental health, especially when performances and scripts are high-quality. However, writing a script is time-consuming and its quality is difficult to achieve. In order to minimise the time and effort needed for producing an excellent script, we explore ways of predicting the audience's response from the comedy scripts. For this task, we present a corpus of annotated scripts from popular television entertainment programmes in recent years. Annotations include a) text classification labels, indicating which actor's lines made the studio audience laugh; b) information extraction labels, i.e. the text spans that made the audience laughed immediately after the performers said them. The corpus will also be useful for dialogue systems and discourse analysis, since our annotations are based on entire scripts. In addition, we evaluate different baseline algorithms. Experimental results demonstrate that BERT models can achieve the best predictions among all the baseline methods. Furthermore, we conduct an error analysis and investigate predictions across scripts with different styles. 1 * The research was conducted during non-working time. The idea of this research was inspired by a discussion with my friend about an entertainment TV programme in which the comedians mentioned the difficulties of producing a highquality script.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Comedy plays a major role in people's lives in that it relieves stress and anxiety (Williams et al., 2005; Sar\u0131ta\u015f et al., 2019) . There are two popular types of comedy: sketch comedy and crosstalk. A sketch comedy usually presents a short story and is performed by multiple comedians in various short scenes; while in a crosstalk performance, which is similar to a talk show, there are usually two performers telling humorous stories behind a desk. Although these two types of comedy are different, both of them are performed based on scripts. A script breaks down a story into pieces along with the details that describe which performer should take what action or say which lines at a specific point (Blake, 2014) . Therefore, the quality of the script is critical and it directly influences whether the audience enjoys the performance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 83, |
|
"end": 106, |
|
"text": "(Williams et al., 2005;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 107, |
|
"end": 128, |
|
"text": "Sar\u0131ta\u015f et al., 2019)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 702, |
|
"end": 715, |
|
"text": "(Blake, 2014)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "However, it is difficult for script writers to ensure a high-quality comedy script and be productive. Firstly, writers have to assess if audiences will react as expected, in particular laughing at specific points. It is necessary to rehearse multiple times to continuously improve the script, which is time-consuming and can be costly. Secondly, to develop laughter triggers, writers need to identify the potential points from the script where there are possibilities for performers to use funny body moves, tone or tell amusing stories to make the audience laugh. Thirdly, the more times a script is publicly performed, the less laughter it can bring, since the audience have become too familiar with it. Thus, it is essential for comedy writers to explore new laughter triggers constantly.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Since natural language processing (NLP) has been widely and successfully applied to a number of fields (Carrera-Ruvalcaba et al., 2019; Rao and McMahan, 2019) , we investigate how NLP methods can support comedy writers to produce high-quality scripts more efficiently. This paper specifies this challenge as a new task, i.e. the prediction of the audience's response to sketch comedy and crosstalk scripts. To address this challenge, we explore the use of two different NLP methodologies: 1) Text Classification: we predict whether or Label Actor's Line Source 1 \u5b8b\u5c0f\u5b9d\uff1a\u6211\u7684\u4eba\u751f\u683c\u8a00\u662f\uff0c\u5728\u54ea\u91cc\u8dcc\u5012\uff0c\u5c31\u5728\u54ea\u91cc\u7761\u4e00\u89c9\u3002 \u78b0\u74f7 (An Incident-Faking Extortionist) Xiaobao SONG: My life motto is to have a sleep where you've fallen.", |
|
"cite_spans": [ |
|
{ |
|
"start": 103, |
|
"end": 135, |
|
"text": "(Carrera-Ruvalcaba et al., 2019;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 136, |
|
"end": 158, |
|
"text": "Rao and McMahan, 2019)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Joyful Comedians (Season 1), 2015 1 \u5f20\u5c0f\u6590\uff1a\u6211\u53ef\u80fd\u662f\u6d17\u7684\u4f60\u85cf\u79c1\u623f\u94b1\u7684\u8fd9\u6761\u88e4\u5b50\u3002 \u5e78\u798f\u725b\u5bb6\u6751 (Happy Niu Families' Village) Xiaofei ZHANG: The trousers I wash might be the ones you hide your secret purse.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "JSTV Chinese New Year Gala, 2019 0 \u6c88\u817e\uff1a\u5927\u5988\uff0c\u4f60\u597d\u597d\u56de\u5fc6\u4e00\u4e0b\uff0c\u771f\u7684\u6ca1\u6709\u649e\u4f60\u3002 \u6276\u4e0d\u6276 (Help Her Up or Not)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Teng SHEN: Please recall exactly what happened. I really did not hit you. CCTV Chinese New Year Gala, 2014 Table 1 : Text classification annotation examples taken from different comedies in our corpus. In the Label column, 1 and 0 indicate whether or not this line makes audiences laugh respectively; In the Action Line column, we present the performer's names and their lines; The Source column indicates the title of the comedy and the venue where it is performed.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 114, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Actor's Line with Annotations Source \u8d3e\u73b2\uff1a\u8fd9\u79cd\u88c5\u4fee\u98ce\u683c\u663e\u5f97\u4f60\u5bb6\u7279 \u7279 \u7279\u522b \u522b \u522b\u7684 \u7684 \u7684\u5927 \u5927 \u5927\u3002(\u5ba2\u5385\u51e0\u4e4e\u662f\u7a7a\u7684) \u61d2\u6c49\u76f8\u4eb2 (Idler's Blind Date)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Ling JIA: With this decoration style, your house seems to be incredibly big. (The living room is almost empty) Ace VS Ace (Season 4), 2019 \u5cb3\u4e91\u9e4f\uff1a\u4e0d\u80fd\uff0c\u4e0d\u9000\u7968\u662f \u662f \u662f\u6211 \u6211 \u6211\u4eec \u4eec \u4eec\u7684 \u7684 \u7684\u670d \u670d \u670d\u52a1 \u52a1 \u52a1\u5b97 \u5b97 \u5b97\u65e8 \u65e8 \u65e8\u3002 \u975e\u4e00\u822c\u7684\u7231\u60c5 (Unusual Love) Yunpeng YUE: No way! Our policy is no refund.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Joyful Comedians (Season 2), 2016 \u8d3e\u51b0\uff1a\u6069\u3002(\u8d3e\u51b0\u4e56 \u4e56 \u4e56\u4e56 \u4e56 \u4e56\u5730 \u5730 \u5730\u95ed \u95ed \u95ed\u4e0a \u4e0a \u4e0a\u4e86 \u4e86 \u4e86\u53cc \u53cc \u53cc\u773c \u773c \u773c)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u8d3e\u603b\u7684\u6f14\u8bb2 (Manager JIA's Presentation)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Bing JIA: Okay. (He duly closes his eyes) Legend of Laughter (Season 1), 2017 Table 2 : Information extraction annotation examples taken from different comedies in our corpus. In the first column, we highlight the text spans that trigger laughs from audiences. Note that, we also collected the performer's moves (e.g., \"duly closes his eyes\" in the third example).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 78, |
|
"end": 85, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "not an actor's lines 2 can make audiences laugh. In other words, we formulate the task of predicting as a binary text classification problem. 2) Information Extraction: we predict the text spans from an actor's lines indicating the specific words that trigger an audience's laughter.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
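The two formulations above can be made concrete with a small data-handling sketch. The following Python example is illustrative only: the `Line` structure, field names and sample offsets are assumptions about how an annotated script line might be stored, not the authors' released format.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Line:
    """One actor's line from a script (illustrative structure)."""
    actor: str
    text: str
    laugh: int                    # text classification label: 1 = the audience laughed
    spans: List[Tuple[int, int]]  # information extraction: (start, end) character offsets

# A hypothetical annotated line, loosely following the first example in Table 2.
line = Line(
    actor="Ling JIA",
    text="With this decoration style, your house seems to be incredibly big.",
    laugh=1,
    spans=[(51, 65)],             # the span "incredibly big"
)

# 1) Text classification instance: (input text, binary label).
clf_example = (line.text, line.laugh)

# 2) Information extraction instance: the annotated laughter-trigger phrases.
ie_example = [line.text[start:end] for start, end in line.spans]

print(clf_example)  # ('With this decoration style, ...', 1)
print(ie_example)   # ['incredibly big']
```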
|
{ |
|
"text": "Contributions Firstly, we introduce a Chinese corpus of annotated comedy scripts collected from popular TV entertainment programmes. Our annotations include both text classification and information extraction labels. Tables 1 and 2 present annotation examples. The corpus can be used to build an intelligent system to benefit the script writing for comedy writers. It may also be useful for dialogue system research and discourse analysis. Secondly, we evaluate a number of NLP methods and the results demonstrate that BERT models (Devlin et al., 2019) are able to achieve the best prediction performance among all methods. We also further conduct an error analysis which may be useful for further improving the performance. Lastly, we experimentally show that our corpus can also be used to predict laughter triggers for scripts which have very different styles compared to training data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 531, |
|
"end": 552, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 217, |
|
"end": 231, |
|
"text": "Tables 1 and 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our work is closely related to humour detection, which has been widely studied for many years in natural language processing. Mihalcea and Strapparava (2006) ; Yang et al. (2015) ; Chen 2 The lines are from the dialogue of a comedy performance. Each line consists of an actor's name and the sentences this actor speaks in performance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 126, |
|
"end": 157, |
|
"text": "Mihalcea and Strapparava (2006)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 160, |
|
"end": 178, |
|
"text": "Yang et al. (2015)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 186, |
|
"end": 187, |
|
"text": "2", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "and Soo (2018); Blinov et al. (2019) investigated if a text fragment is a one-liner. 3 Zhang and Liu (2014); Ortega-Bueno et al. 2018; Chiruzzo et al. (2019) explored the humour classification task on tweets. Castro et al. (2018) collected humour values and funniness scores of Spanish tweets by using crowdsourcing. Chiruzzo et al. (2019) proposed a regression task that predicts the humour score for a tweet. Li et al. (2020) collected Chinese Internet slang expressions and combined them with a humor detecting method to analyse the sentiment of Weibo 4 posts. It should be noted that the examples in all of the corpora used or constructed in the above-mentioned studies are independent of each other. Since our corpus is based on entire scripts, the annotated lines and text spans might also benefit the researchers who are interested in modelling long-context-aware algorithms to understand humour. Apart from the studies on short text fragments, Bertero (2019) and Hasan et al. (2019) created corpora from television (TV) sitcoms such as The Big Bang Theory 5 and TED talks 6 respectively. Their goal is to predict whether or not a sequence of texts will trigger immediate laughter. Yang et al. (2015) ; extracted the key words such as sing, sign language and pretty handy from jokes, which are similar to our information extraction annotations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 16, |
|
"end": 36, |
|
"text": "Blinov et al. (2019)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 85, |
|
"end": 86, |
|
"text": "3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 209, |
|
"end": 229, |
|
"text": "Castro et al. (2018)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 317, |
|
"end": 339, |
|
"text": "Chiruzzo et al. (2019)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 411, |
|
"end": 427, |
|
"text": "Li et al. (2020)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 952, |
|
"end": 966, |
|
"text": "Bertero (2019)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 1189, |
|
"end": 1207, |
|
"text": "Yang et al. (2015)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "3 Corpus", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "44", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Source Selection In order to ensure the highquality of scripts, we carefully selected thirty performances (the total duration is approximately 473 minutes), including both sketch comedies and crosstalks, of which the leading roles are famous Chinese comedians. These performances were played on well-known Chinese TV entertainment programmes such as Chinese New Year Gala and Ace VS Ace 7 . Since there were many people in the audience present for the recording of these performances, the annotators can judge whether the audience laughed based on the performance videos. Please refer to the appendix for the full list of performances which gives details of their titles, leading comedians and sources. Lastly, we manually typed up actors' lines for each performance and completed thirty scripts. Although there may be differences between our scripts and the real scripts used by comedians in terms of format or content, we assume that our scripts contain the key information about the real scripts, i.e., the actors' lines. Therefore the corpus can be useful for the development of intelligence-assistant comedy script writing systems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Collection", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Diversity We also took the comedy style into consideration. In order to ensure the diversity and its balance: a) The performances were selected from three main different types of sources 8 as shown in Table 3 , including the topic descriptions of selected performances. It can be observed that the corpus has a wide range of topics. b) As a preliminary study, we selected six popular Chinese comedians who have various and distinctive styles, and we chose five representative performances of each comedian. Table 4 illustrates the statistics and Figure 1 shows the laughter rates of each script. The highest line-level and character-level rates are 7 https://es.wikipedia.org/wiki/Ace_vs_ Ace 8 The three sources are: Chinese New Year Galas-the annual televised Chinese New Year celebrations which are the most viewed TV shows in China. The shows consist of various performances including sketch comedies and crosstalks; Reality Shows-the programmes that show the unscripted actions of participants such as playing games and talking. We selected the shows in which comedians were involved; Comedy Competition Shows-the programmes where different comedians present their comedy performances to a studio audience and the winners are selected based on the audience's votes.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 201, |
|
"end": 208, |
|
"text": "Table 3", |
|
"ref_id": "TABREF0" |
|
}, |
|
{ |
|
"start": 507, |
|
"end": 514, |
|
"text": "Table 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 546, |
|
"end": 554, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data Collection", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Topics Chinese New -Love stories and blind dates between old people;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Source", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Year Galas -Reflecting social phenomena to call for a better society (e.g. avoid judging people by their appearances, do not spoil children, care more about lonely seniors, the woman builds a good relationship with her mother-in-law, spend more time with children, be wary of scams); -Funny family stories during spring festival; Reality -Stories happened in ancient times; Shows -Stories about young people (e.g. encounter ex-boyfriends or ex-girlfriends, relationships between best friends, blind dates); -Reflecting social phenomena to call for a better society (e.g. give seats to vulnerable people); Comedy -Love stories; Competition -Hot topics (e.g. support the COVID-19 frontline fighters); Shows -Funny stories that happened among friends and in families; -Reflecting social phenomena to call for a better society (e.g. be wary of scams, care more about orphans in orphanage); Table 4 : Corpus statistics. # of Actors' Lines and Characters correspond to the total number of lines and characters in our corpus respectively. Laughter Rate is the rate of lines/characters that trigger laughter. 45.39% and 13.12%, while the lowest rates are 16.03% and 3.49%. We note that the characterlevel laughter rates vary in different scripts. This may be due to density of laughter triggers of a line or the topic of the script.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 886, |
|
"end": 893, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Source", |
|
"sec_num": null |
|
}, |
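The line-level and character-level laughter rates reported in Table 4 and Figure 1 are simple ratios that can be recomputed from the annotations. A minimal sketch follows; it assumes each annotated line is stored as a (text, label, trigger-spans) tuple and interprets the character-level rate as the fraction of characters covered by annotated trigger spans, which may differ slightly from the authors' exact computation.

```python
from typing import List, Tuple

# Each annotated line: (line text, laugh label, laughter-trigger spans).
Annotated = Tuple[str, int, List[Tuple[int, int]]]

def laughter_rates(lines: List[Annotated]) -> Tuple[float, float]:
    """Return (line-level, character-level) laughter rates for one script.

    Line-level rate: fraction of lines labelled 1.
    Character-level rate: fraction of characters covered by annotated trigger spans.
    """
    laugh_lines = sum(label for _, label, _ in lines)
    total_chars = sum(len(text) for text, _, _ in lines)
    trigger_chars = sum(end - start for _, _, spans in lines for start, end in spans)
    return laugh_lines / len(lines), trigger_chars / total_chars

# Tiny illustrative "script" with two lines.
script = [
    ("With this decoration style, your house seems to be incredibly big.", 1, [(51, 65)]),
    ("Please recall exactly what happened. I really did not hit you.", 0, []),
]
line_rate, char_rate = laughter_rates(script)
print(f"line-level: {line_rate:.2%}, character-level: {char_rate:.2%}")
```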
|
{ |
|
"text": "The annotation was completed on Doccano platform (Nakayama et al., 2018) and the annotators are two native Chinese speakers. The annotations were produced based on the studio audiences' responses as observed in the videos, and are not based on the annotators' responses.", |
|
"cite_spans": [ |
|
{ |
|
"start": 49, |
|
"end": 72, |
|
"text": "(Nakayama et al., 2018)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Annotation Instruction Annotating text classification labels is easy; annotators are requested to simply assigned label 1 to the lines that make audiences laugh, and 0 to the others. With regard to the information extraction annotations, annotators are requested to identify text spans which are usually phrases. The span consists of the words that immediately made the audience laugh after the comedians said them. For example, as indicated in Table 2, the span incredibly big was annotated. In this case, only annotating big would be considered as an incorrect annotation, because the comedian was using incredibly to strongly emphasise big which was her first impression of a man's house in a blind date. Only annotating incredibly would also be incorrect, because the main reason why the audience laughed was because the comedian said the house looked big. 9", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation", |
|
"sec_num": "3.2" |
|
}, |
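Sequence-labelling baselines such as BiLSTM-CRF (Section 4) consume these span annotations as per-character tags. The sketch below converts (start, end) trigger spans into character-level BIO tags; the BIO scheme and the TRIGGER label name are common conventions assumed here for illustration, not necessarily the exact encoding used by the authors.

```python
from typing import List, Tuple

def spans_to_bio(text: str, spans: List[Tuple[int, int]]) -> List[str]:
    """Convert (start, end) laughter-trigger spans into character-level BIO tags."""
    tags = ["O"] * len(text)
    for start, end in spans:
        tags[start] = "B-TRIGGER"
        for i in range(start + 1, end):
            tags[i] = "I-TRIGGER"
    return tags

# Illustrative example mirroring Table 2: the annotated span is "incredibly big".
text = "your house seems to be incredibly big."
tags = spans_to_bio(text, [(23, 37)])
for char, tag in zip(text, tags):
    print(char, tag)
```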
|
{ |
|
"text": "Annotation Process The annotation process was as follows: Firstly, the annotators conducted discussions about the conflicting annotations after several attempts to annotate the same three scripts. Secondly, once agreements about how to solve the conflicts had been reached, they started to annotate their assigned scripts. Afterwards, since information extraction annotation is more complex than that of classification annotations, we measured its quality by computing three types of interannotator agreement. We asked the annotators to annotate the same six scripts having different styles and then calculated the Overall Percent Agreement (OPA), Fleiss's kappa (Fleiss, 1971) and Randolph's kappa (Randolph, 2005) . We found that the agreement rates were high (OPA 98.09%, Fleiss's Kappa 0.85, Randolph's Kappa 0.96). This is due to the fact that the discussions about solving conflicts were in-depth and the laughter triggers were usually clear in the lines.", |
|
"cite_spans": [ |
|
{ |
|
"start": 663, |
|
"end": 677, |
|
"text": "(Fleiss, 1971)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 699, |
|
"end": 715, |
|
"text": "(Randolph, 2005)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation", |
|
"sec_num": "3.2" |
|
}, |
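The three agreement measures can be computed from a simple items-by-categories count matrix. The sketch below implements the standard formulas for overall percent agreement, Fleiss' kappa and Randolph's free-marginal kappa; the toy matrix of binary trigger/non-trigger decisions for two annotators is illustrative, and the exact unit of analysis the authors aggregated over (characters, spans or lines) is an assumption.

```python
import numpy as np

def agreement_stats(counts: np.ndarray):
    """Agreement statistics from an (items x categories) matrix of rater counts.

    Returns (overall percent agreement, Fleiss' kappa, Randolph's free-marginal kappa).
    """
    n_items, n_cats = counts.shape
    n_raters = counts.sum(axis=1)[0]  # assumes the same number of raters per item
    # Per-item agreement: proportion of agreeing rater pairs.
    p_i = (np.sum(counts ** 2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    opa = p_i.mean()                               # chance-uncorrected agreement
    p_j = counts.sum(axis=0) / (n_items * n_raters)
    p_e_fleiss = np.sum(p_j ** 2)                  # chance agreement from observed marginals
    kappa_fleiss = (opa - p_e_fleiss) / (1 - p_e_fleiss)
    kappa_randolph = (opa - 1 / n_cats) / (1 - 1 / n_cats)
    return opa, kappa_fleiss, kappa_randolph

# Toy example: two annotators judge six characters as trigger / non-trigger.
# Each row counts how many of the two annotators chose [trigger, non-trigger].
counts = np.array([[2, 0], [2, 0], [0, 2], [1, 1], [0, 2], [2, 0]])
print(agreement_stats(counts))
```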
|
{ |
|
"text": "In order to understand how well the machine learning methods work on our corpus, we evaluate the performances of a number of models on 5-fold cross-validation random splits of the scripts in our corpus and report the average results in this section. 10 All the BERT models were pre-trained by using a mixture of large Chinese corpora. 11 Please 9 The house is actually small. Since there is almost no furniture in the house, the comedian said it looked big.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines and Results Discussion", |
|
"sec_num": "4" |
|
}, |
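Because the annotations are based on entire scripts, the cross-validation splits are most naturally made at the script level so that no script contributes lines to both training and test folds. A minimal sketch using scikit-learn's KFold over script identifiers is shown below; the data structure and random seed are illustrative assumptions, and the actual splits are described in the paper's appendix.

```python
from sklearn.model_selection import KFold

# scripts: script id -> list of (line text, laugh label) pairs; the contents are placeholders.
scripts = {f"script_{i:02d}": [("...", 0)] for i in range(30)}
script_ids = sorted(scripts)

kf = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kf.split(script_ids)):
    # Keep whole scripts together so no script leaks across the split.
    train_lines = [ex for i in train_idx for ex in scripts[script_ids[i]]]
    test_lines = [ex for i in test_idx for ex in scripts[script_ids[i]]]
    print(f"fold {fold}: {len(train_idx)} training scripts, {len(test_idx)} test scripts")
```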
|
{ |
|
"text": "10 Model implementations were adapted from https://github.com/649453932/ Chinese-Text-Classification-Pytorch, https: //github.com/luopeixiang/named_entity_ recognition and Zhao et al. (2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 172, |
|
"end": 190, |
|
"text": "Zhao et al. (2019)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines and Results Discussion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "11 More details are listed in https://github.com/ dbiir/UER-py/wiki/Modelzoo. (Kim, 2014) 42.53 64.14 51.07 66.29 RCNN (Lai et al., 2015) 41.21 68.52 50.89 63.54 BiLSTM (Liu et al., 2016) 41.17 57.13 47.66 65.69 + Attention (Zhou et al., 2016) 39.97 59.91 47.44 63.94 FastText (Joulin et al., 2017) 40.61 66.26 50.12 63.72 DPCNN (Johnson and Zhang, 2017) 42.46 63.25 50.76 66.32 Transformer (Vaswani et al., 2017) 42.60 64.71 51.24 66.18 BERT-tiny (Jiao et al., 2019) 47.56 53.38 48.91 66.21 BERT-small (Turc et al., 2019) 47.29 56.21 51.21 70.78 BERT-base (Devlin et al., 2019) 47.60 56.64 51.61 70.94 (Rabiner and Juang, 1986) 22.19 7.43 11.04 CRF (Lafferty et al., 2001) 28.56 6.11 10.01 BiLSTM (Huang et al., 2015) 31.21 1.64 3.09 BiLSTM-CRF (Lample et al., 2016) 30.33 9.81 14.48 BERT-tiny (Jiao et al., 2019) 26.26 19.89 22.57 BERT-small (Turc et al., 2019) 28.82 17.52 21.56 BERT-base (Devlin et al., 2019) 30.15 21.47 24.59 Table 6 : Information extraction performance. Relaxed metrics are used. The exact-match metric is over-strict because the length of text spans in this corpus is much longer than general named entities. The computation of these metrics can be found in (Nguyen et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 78, |
|
"end": 89, |
|
"text": "(Kim, 2014)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 119, |
|
"end": 137, |
|
"text": "(Lai et al., 2015)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 169, |
|
"end": 187, |
|
"text": "(Liu et al., 2016)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 224, |
|
"end": 243, |
|
"text": "(Zhou et al., 2016)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 277, |
|
"end": 298, |
|
"text": "(Joulin et al., 2017)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 329, |
|
"end": 378, |
|
"text": "(Johnson and Zhang, 2017) 42.46 63.25 50.76 66.32", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 391, |
|
"end": 413, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 448, |
|
"end": 467, |
|
"text": "(Jiao et al., 2019)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 503, |
|
"end": 522, |
|
"text": "(Turc et al., 2019)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 557, |
|
"end": 578, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 603, |
|
"end": 628, |
|
"text": "(Rabiner and Juang, 1986)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 650, |
|
"end": 673, |
|
"text": "(Lafferty et al., 2001)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 698, |
|
"end": 718, |
|
"text": "(Huang et al., 2015)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 746, |
|
"end": 767, |
|
"text": "(Lample et al., 2016)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 795, |
|
"end": 814, |
|
"text": "(Jiao et al., 2019)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 844, |
|
"end": 863, |
|
"text": "(Turc et al., 2019)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 892, |
|
"end": 913, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 1183, |
|
"end": 1204, |
|
"text": "(Nguyen et al., 2017)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 932, |
|
"end": 939, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Baselines and Results Discussion", |
|
"sec_num": "4" |
|
}, |
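As noted in the Table 6 caption, spans are scored with relaxed rather than exact matching. One common relaxed scheme counts a span as correct if it overlaps a span on the other side; the sketch below implements that overlap-based variant as an approximation, while the precise definitions used in the paper follow Nguyen et al. (2017).

```python
from typing import List, Tuple

Span = Tuple[int, int]

def overlaps(a: Span, b: Span) -> bool:
    return a[0] < b[1] and b[0] < a[1]

def relaxed_prf(pred: List[Span], gold: List[Span]) -> Tuple[float, float, float]:
    """Relaxed precision/recall/F1: a span counts as a hit if it overlaps the other side."""
    hit_pred = sum(any(overlaps(p, g) for g in gold) for p in pred)
    hit_gold = sum(any(overlaps(g, p) for p in pred) for g in gold)
    precision = hit_pred / len(pred) if pred else 0.0
    recall = hit_gold / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Example: one predicted span partially overlapping one of two gold spans.
print(relaxed_prf(pred=[(5, 12)], gold=[(8, 15), (30, 40)]))  # (1.0, 0.5, ~0.667)
```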
|
{ |
|
"text": "Model P R F Acc. CNN", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines and Results Discussion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "refer to the appendix for the results of each fold, statistics of splits, computing infrastructure, each model's running time, parameter details and hyperparameter settings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines and Results Discussion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Baselines Tables 5 and 6 respectively present the results of text classification and information extraction. BERT-base has the best F1-scores among all the methods. We also note that the classification recall of RCNN (Lai et al., 2015 ) is much higher than other methods. Therefore, we suggest using this model if users prefer a classifier with a high recall.", |
|
"cite_spans": [ |
|
{ |
|
"start": 217, |
|
"end": 234, |
|
"text": "(Lai et al., 2015", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 24, |
|
"text": "Tables 5 and 6", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Baselines and Results Discussion", |
|
"sec_num": "4" |
|
}, |
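A minimal fine-tuning sketch for the classification task is shown below using the Hugging Face transformers API. Note that the paper's BERT models were pre-trained on a mixture of large Chinese corpora via the UER-py model zoo, so the "bert-base-chinese" checkpoint, the two toy lines and the hyper-parameters here are placeholders rather than the authors' exact configuration.

```python
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder checkpoint: the paper used BERT weights pre-trained on large Chinese
# corpora (UER-py model zoo), which are not necessarily identical to this one.
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-chinese", num_labels=2)

# Toy training data: (actor's line, whether it triggered laughter).
lines = ["我的人生格言是，在哪里跌倒，就在哪里睡一觉。", "大妈，你好好回忆一下，真的没有撞你。"]
labels = [1, 0]

enc = tokenizer(lines, padding=True, truncation=True, max_length=128, return_tensors="pt")
dataset = list(zip(enc["input_ids"], enc["attention_mask"], torch.tensor(labels)))
loader = DataLoader(dataset, batch_size=2, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):  # illustrative number of epochs
    for input_ids, attention_mask, y in loader:
        out = model(input_ids=input_ids, attention_mask=attention_mask, labels=y)
        out.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```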
|
{ |
|
"text": "In addition, we observe that the scores are not high, especially for the information extraction task. The reason may be if the audience laughter highly depends on the conversation contexts which were not considered by baselines. Therefore, taking a longer conversation context of a line into consideration is a worthy research direction. Tables 7 and 8 the styles in the training data. Firstly, since the six comedians in the corpus have distinctive comedy styles, we split the entire corpus into a 6-fold crossvalidation manner. The comedies in each fold are performed by the same leading comedian. Secondly, we train baseline models on five of the folds and evaluate the performance on the remaining fold. Tables 9 and 10 present the average results and the full results are available in the appendix. The results demonstrate that the laughter triggers can be detected even though the styles in the training data are very different compared to the testing data.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 338, |
|
"end": 352, |
|
"text": "Tables 7 and 8", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Baselines and Results Discussion", |
|
"sec_num": "4" |
|
}, |
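The 6-fold cross-style protocol is a leave-one-comedian-out split: all scripts led by one comedian form the test fold and the remaining comedians' scripts form the training data. A sketch using scikit-learn's LeaveOneGroupOut is given below; the comedian list and the five-scripts-per-comedian grouping come from the paper, while the placeholder script objects are illustrative.

```python
from sklearn.model_selection import LeaveOneGroupOut

comedians = ["Xiaobao SONG", "Yunpeng YUE", "Ling JIA",
             "Xiaofei ZHANG", "Teng SHEN", "Bing JIA"]
# Thirty scripts, five per leading comedian (placeholders for the actual texts).
scripts = [f"script_{i:02d}" for i in range(30)]
groups = [comedians[i // 5] for i in range(30)]

logo = LeaveOneGroupOut()
for train_idx, test_idx in logo.split(scripts, groups=groups):
    held_out = groups[test_idx[0]]
    # Train on the other five comedians' scripts, test on the held-out style.
    print(f"held-out comedian: {held_out}; "
          f"{len(train_idx)} training scripts, {len(test_idx)} test scripts")
```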
|
{ |
|
"text": "We study the prediction of laughter triggers from comedy scripts by using text classification and information extraction methods. Firstly, we introduce a corpus including high-quality and annotated sketch comedy and crosstalk scripts. Secondly, we evaluate a number of baselines and find that BERT models achieve the best performance. We note that the information extraction performance was very low, indicating that this task is particularly challenging. We also conduct an error analysis of incorrect predictions. The errors suggest the incorporation of rich context information may further improve the performance. Therefore, it is worth investigating a model which can take such infor- (Kim, 2014) 42.75 65.69 51.63 65.04 RCNN (Lai et al., 2015) 42.32 71.26 52.32 62.79 BiLSTM (Liu et al., 2016) 41.35 60.76 48.81 63.71 + Attention (Zhou et al., 2016) 43.20 53.76 47.57 66.55 FastText (Joulin et al., 2017) 40.86 67.43 50.70 62.62 DPCNN (Johnson and Zhang, 2017) 41.26 66.82 50.42 62.44 Transformer (Vaswani et al., 2017) 41.57 70.15 51.92 63.06 BERT-tiny (Jiao et al., 2019) 43.01 56.76 48.69 66.23 BERT-small (Turc et al., 2019) 44.95 55.59 49.09 67.38 BERT-base (Devlin et al., 2019) 47.28 58.13 51.72 69.39 Table 9 : Cross-style text classification performance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 690, |
|
"end": 701, |
|
"text": "(Kim, 2014)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 731, |
|
"end": 749, |
|
"text": "(Lai et al., 2015)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 781, |
|
"end": 799, |
|
"text": "(Liu et al., 2016)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 836, |
|
"end": 855, |
|
"text": "(Zhou et al., 2016)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 889, |
|
"end": 910, |
|
"text": "(Joulin et al., 2017)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 941, |
|
"end": 990, |
|
"text": "(Johnson and Zhang, 2017) 41.26 66.82 50.42 62.44", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1003, |
|
"end": 1025, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 1060, |
|
"end": 1079, |
|
"text": "(Jiao et al., 2019)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 1115, |
|
"end": 1134, |
|
"text": "(Turc et al., 2019)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 1169, |
|
"end": 1190, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1215, |
|
"end": 1222, |
|
"text": "Table 9", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Model P R F Acc. CNN", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Model P R F HMM (Rabiner and Juang, 1986) 19.79 7.52 10.68 CRF (Lafferty et al., 2001) 25.29 5.03 8.35 BiLSTM (Huang et al., 2015) 29.05 2.90 5.23 BiLSTM-CRF (Lample et al., 2016) 30.72 8.77 12.56 BERT-tiny (Jiao et al., 2019) 26.30 19.20 22.14 BERT-small (Turc et al., 2019) 24.93 27.31 25.12 BERT-base (Devlin et al., 2019) 24.64 31.65 26.51 Table 10 : Cross-style information extraction performance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 16, |
|
"end": 41, |
|
"text": "(Rabiner and Juang, 1986)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 63, |
|
"end": 86, |
|
"text": "(Lafferty et al., 2001)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 110, |
|
"end": 130, |
|
"text": "(Huang et al., 2015)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 158, |
|
"end": 179, |
|
"text": "(Lample et al., 2016)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 207, |
|
"end": 226, |
|
"text": "(Jiao et al., 2019)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 256, |
|
"end": 275, |
|
"text": "(Turc et al., 2019)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 304, |
|
"end": 325, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 344, |
|
"end": 352, |
|
"text": "Table 10", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "mation into consideration. Furthermore, it is also worth extending the corpus to a multimodal one by aligning scripts to corresponding audios or videos, because certain intonations or scenes can also make audiences laugh. The multimodal corpus can also benefit the creation of silent comedy. Enriching the corpus by including scripts in other languages may also be a potential direction. Lastly, the encouraging cross-style prediction performance shows the usefulness of our corpus for predicting new scripts with different styles. Moreover, it is also interesting to explore human performances by asking annotators to make predictions based purely on the scripts of unwatched comedies, and to investigate if the script writers find the model predictions insightful.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We hope this study will benefit script writing by inspiring the community to develop intelligent systems for comedy writers and other artists in the field. The corpus might also be useful for researchers who are working on related or similar tasks, such as discourse analysis and humorous response generation for dialogue systems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "A.1 Computing Resources Table 11 describes the details of the computing resources used for all of our experiments. These resources are freely available from Paperspace 12 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 24, |
|
"end": 32, |
|
"text": "Table 11", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A Appendices", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Below we present model hyper-parameter values 13 and the average running time of one epoch. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.2 Model Details", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "HMM (Rabiner and Juang, 1986 ) Uniform Distribution for Initialisation, Average Total Running Time = 8.47s.", |
|
"cite_spans": [ |
|
{ |
|
"start": 4, |
|
"end": 28, |
|
"text": "(Rabiner and Juang, 1986", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.2.2 Information Extraction Models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "CRF (Lafferty et al., 2001 ) LBFGS algorithm, c1 = 0.1, c2 = 0.1, Max Iteration = 100, Average Total Running Time = 11.72s. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 4, |
|
"end": 26, |
|
"text": "(Lafferty et al., 2001", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.2.2 Information Extraction Models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We use the same hyper-parameter settings as used in the text classification models with the exception of Batch Size = 16. The average running time of BERT-tiny, BERT-small and BERTbase for information extraction are 34.00s, 55.73s, and 120s respectively. Table 16 : Performance of text classification in predicting the scripts performed by specific leading comedians (0: Xiaobao SONG, 1: Yuepeng YUE, 2: Ling JIA, 3: Xiaofei ZHANG, 4: Teng SHEN, 5: Bing JIA). Table 17 : Performance of information extraction in predicting the scripts performed by specific leading comedians.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 255, |
|
"end": 263, |
|
"text": "Table 16", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 460, |
|
"end": 468, |
|
"text": "Table 17", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "BERT Models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A one-liner is a joke that is delivered in a single line which only contains a few words.4 Weibo is a Chinese micro-blogging website similar to Twitter: https://www.weibo.com/ 5 https://the-big-bang-theory.com/ 6 https://www.ted.com/talks", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://www.paperspace.com/ 13 The size of model's trainable parameters can be found in original papers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We would like to express our sincere appreciation to all comedians and backstage teams for their hard work to make audiences happy. We also sincerely appreciate the valuable comments of all anonymous reviewers. My deepest thanks should also be given to my best friend who inspired me to conduct such research and whose encouragement was important for me to be able to complete the work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": " Table 19 : Full list of the selected comedy performances with their titles, years, duration, number of lines/characters and laughter rate at line/character-level.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1, |
|
"end": 9, |
|
"text": "Table 19", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "annex", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Conversational humor recognition and generation through deep learning", |
|
"authors": [ |
|
{ |
|
"first": "Dario", |
|
"middle": [], |
|
"last": "Bertero", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dario Bertero. 2019. Conversational humor recogni- tion and generation through deep learning. PhD dissertation, Hong Kong University of Science and Technology.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Large dataset and language model fun-tuning for humor recognition", |
|
"authors": [ |
|
{ |
|
"first": "Vladislav", |
|
"middle": [], |
|
"last": "Blinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Valeria", |
|
"middle": [], |
|
"last": "Bolotova-Baranova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pavel", |
|
"middle": [], |
|
"last": "Braslavski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4027--4032", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1394" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vladislav Blinov, Valeria Bolotova-Baranova, and Pavel Braslavski. 2019. Large dataset and language model fun-tuning for humor recognition. In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4027- 4032, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Leveraging natural language processing applications and microblogging platform for increased transparency in crisis areas", |
|
"authors": [ |
|
{ |
|
"first": "Ernesto", |
|
"middle": [], |
|
"last": "Carrera-Ruvalcaba", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johnson", |
|
"middle": [], |
|
"last": "Ekedum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Austin", |
|
"middle": [], |
|
"last": "Hancock", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Brock", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "SMU Data Science Review", |
|
"volume": "2", |
|
"issue": "1", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ernesto Carrera-Ruvalcaba, Johnson Ekedum, Austin Hancock, and Ben Brock. 2019. Leveraging natural language processing applications and microblogging platform for increased transparency in crisis areas. SMU Data Science Review, 2(1):6.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "A crowdannotated Spanish corpus for humor analysis", |
|
"authors": [ |
|
{ |
|
"first": "Santiago", |
|
"middle": [], |
|
"last": "Castro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luis", |
|
"middle": [], |
|
"last": "Chiruzzo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aiala", |
|
"middle": [], |
|
"last": "Ros\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Diego", |
|
"middle": [], |
|
"last": "Garat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillermo", |
|
"middle": [], |
|
"last": "Moncecchi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Sixth International Workshop on Natural Language Processing for Social Media", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7--11", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-3502" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Santiago Castro, Luis Chiruzzo, Aiala Ros\u00e1, Diego Garat, and Guillermo Moncecchi. 2018. A crowd- annotated Spanish corpus for humor analysis. In Proceedings of the Sixth International Workshop on Natural Language Processing for Social Media, pages 7-11, Melbourne, Australia. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Association for Computational Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Von-Wun", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Soo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "113--117", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N18-2018" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peng-Yu Chen and Von-Wun Soo. 2018. Humor recog- nition using deep learning. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 2 (Short Pa- pers), pages 113-117, New Orleans, Louisiana. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Overview of haha at iberlef 2019: Humor analysis based on human annotation", |
|
"authors": [ |
|
{ |
|
"first": "Luis", |
|
"middle": [], |
|
"last": "Chiruzzo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mathias", |
|
"middle": [], |
|
"last": "Castro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Diego", |
|
"middle": [], |
|
"last": "Etcheverry", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Juan Jos\u00e9", |
|
"middle": [], |
|
"last": "Garat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aiala", |
|
"middle": [], |
|
"last": "Prada", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ros\u00e1", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Iberian Languages Evaluation Forum", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Luis Chiruzzo, S Castro, Mathias Etcheverry, Diego Garat, Juan Jos\u00e9 Prada, and Aiala Ros\u00e1. 2019. Overview of haha at iberlef 2019: Humor analy- sis based on human annotation. In Proceedings of the Iberian Languages Evaluation Forum (Iber- LEF 2019). CEUR Workshop Proceedings, CEUR- WS, Bilbao, Spain.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Measuring nominal scale agreement among many raters", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Joseph", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Fleiss", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1971, |
|
"venue": "Psychometrika", |
|
"volume": "76", |
|
"issue": "5", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychometrika, 76(5):378.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Md Iftekhar Tanveer, Louis-Philippe Morency, and Mohammed (Ehsan) Hoque", |
|
"authors": [ |
|
{ |
|
"first": "Wasifur", |
|
"middle": [], |
|
"last": "Md Kamrul Hasan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amirali", |
|
"middle": [], |
|
"last": "Rahman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianyuan", |
|
"middle": [], |
|
"last": "Bagher Zadeh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Zhong", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2046--2056", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1211" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Md Kamrul Hasan, Wasifur Rahman, AmirAli Bagher Zadeh, Jianyuan Zhong, Md Iftekhar Tanveer, Louis-Philippe Morency, and Mo- hammed (Ehsan) Hoque. 2019. UR-FUNNY: A multimodal language dataset for understanding humor. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2046-2056, Hong Kong, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Bidirectional lstm-crf models for sequence tagging", |
|
"authors": [ |
|
{ |
|
"first": "Zhiheng", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1508.01991" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirec- tional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Tinybert: Distilling bert for natural language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Xiaoqi", |
|
"middle": [], |
|
"last": "Jiao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yichun", |
|
"middle": [], |
|
"last": "Yin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lifeng", |
|
"middle": [], |
|
"last": "Shang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xin", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiao", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Linlin", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fang", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qun", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1909.10351" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2019. Tinybert: Distilling bert for natural language understanding. arXiv preprint arXiv:1909.10351.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Deep pyramid convolutional neural networks for text categorization", |
|
"authors": [ |
|
{ |
|
"first": "Rie", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tong", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "562--570", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P17-1052" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rie Johnson and Tong Zhang. 2017. Deep pyramid convolutional neural networks for text categoriza- tion. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 562-570, Vancouver, Canada. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Bag of tricks for efficient text classification", |
|
"authors": [ |
|
{ |
|
"first": "Armand", |
|
"middle": [], |
|
"last": "Joulin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Bojanowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 15th Conference of the European Chapter", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "427--431", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Con- ference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Pa- pers, pages 427-431, Valencia, Spain. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Convolutional neural networks for sentence classification", |
|
"authors": [ |
|
{ |
|
"first": "Yoon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1746--1751", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/D14-1181" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746-1751, Doha, Qatar. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Lafferty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernando Cn", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the 18th International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"https://dl.acm.org/doi/10.5555/645530.655813" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Prob- abilistic models for segmenting and labeling se- quence data. In Proceedings of the 18th Interna- tional Conference on Machine Learning, San Fran- cisco, CA, USA. Morgan Kaufmann Publishers Inc.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Recurrent convolutional neural networks for text classification", |
|
"authors": [ |
|
{ |
|
"first": "Siwei", |
|
"middle": [], |
|
"last": "Lai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Liheng", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, AAAI'15", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2267--2273", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Siwei Lai, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Recurrent convolutional neural networks for text classification. In Proceedings of the Twenty- Ninth AAAI Conference on Artificial Intelligence, AAAI'15, page 2267-2273. AAAI Press.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Neural architectures for named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miguel", |
|
"middle": [], |
|
"last": "Ballesteros", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sandeep", |
|
"middle": [], |
|
"last": "Subramanian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kazuya", |
|
"middle": [], |
|
"last": "Kawakami", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "260--270", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N16-1030" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Sub- ramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 260-270, San Diego, California. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Hemos: A novel deep learning-based fine-grained humor detecting method for sentiment analysis of social media", |
|
"authors": [ |
|
{ |
|
"first": "Da", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rafal", |
|
"middle": [], |
|
"last": "Rzepka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michal", |
|
"middle": [], |
|
"last": "Ptaszynski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenji", |
|
"middle": [], |
|
"last": "Araki", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Information Processing & Management", |
|
"volume": "57", |
|
"issue": "6", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Da Li, Rafal Rzepka, Michal Ptaszynski, and Kenji Araki. 2020. Hemos: A novel deep learning-based fine-grained humor detecting method for sentiment analysis of social media. Information Processing & Management, 57(6):102290.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Recurrent neural network for text classification with multi-task learning", |
|
"authors": [ |
|
{ |
|
"first": "Pengfei", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xipeng", |
|
"middle": [], |
|
"last": "Qiu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xuanjing", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI'16", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2873--2879", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2016. Recurrent neural network for text classification with multi-task learning. In Proceedings of the Twenty- Fifth International Joint Conference on Artificial In- telligence, IJCAI'16, page 2873-2879. AAAI Press.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Learning to laugh (automatically): Computational models for humor recognition", |
|
"authors": [ |
|
{ |
|
"first": "Rada", |
|
"middle": [], |
|
"last": "Mihalcea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carlo", |
|
"middle": [], |
|
"last": "Strapparava", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Computational Intelligence", |
|
"volume": "22", |
|
"issue": "2", |
|
"pages": "126--142", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1111/j.1467-8640.2006.00278.x" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rada Mihalcea and Carlo Strapparava. 2006. Learn- ing to laugh (automatically): Computational models for humor recognition. Computational Intelligence, 22(2):126-142.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "doccano: Text annotation tool for human", |
|
"authors": [ |
|
{ |
|
"first": "Hiroki", |
|
"middle": [], |
|
"last": "Nakayama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Takahiro", |
|
"middle": [], |
|
"last": "Kubo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Junya", |
|
"middle": [], |
|
"last": "Kamura", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yasufumi", |
|
"middle": [], |
|
"last": "Taniguchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xu", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hiroki Nakayama, Takahiro Kubo, Junya Kamura, Ya- sufumi Taniguchi, and Xu Liang. 2018. doccano: Text annotation tool for human. Software available from https://github.com/doccano/doccano.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Aggregating and predicting sequence labels from crowd annotations", |
|
"authors": [ |
|
{ |
|
"first": "An", |
|
"middle": [ |
|
"Thanh" |
|
], |
|
"last": "Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Byron", |
|
"middle": [], |
|
"last": "Wallace", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Junyi", |
|
"middle": [ |
|
"Jessy" |
|
], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ani", |
|
"middle": [], |
|
"last": "Nenkova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Lease", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "299--309", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P17-1028" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "An Thanh Nguyen, Byron Wallace, Junyi Jessy Li, Ani Nenkova, and Matthew Lease. 2017. Aggregating and predicting sequence labels from crowd annota- tions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 299-309, Vancouver, Canada. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Uo upv: Deep linguistic humor detection in spanish social media", |
|
"authors": [ |
|
{ |
|
"first": "Reynier", |
|
"middle": [], |
|
"last": "Ortega-Bueno", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carlos", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Muniz-Cuza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jos\u00e9 E Medina", |
|
"middle": [], |
|
"last": "Pagola", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paolo", |
|
"middle": [], |
|
"last": "Rosso", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Third Workshop on Evaluation of Human Language Technologies for Iberian Languages (IberEval 2018) co-located with 34th Conference of the Spanish Society for Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "203--213", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Reynier Ortega-Bueno, Carlos E Muniz-Cuza, Jos\u00e9 E Medina Pagola, and Paolo Rosso. 2018. Uo upv: Deep linguistic humor detection in spanish social media. In Proceedings of the Third Workshop on Evaluation of Human Language Technologies for Iberian Languages (IberEval 2018) co-located with 34th Conference of the Spanish Society for Natu- ral Language Processing (SEPLN 2018), pages 203- 213, Seville, Spain.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "An introduction to hidden markov models", |
|
"authors": [ |
|
{ |
|
"first": "Lawrence", |
|
"middle": [], |
|
"last": "Rabiner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Juang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1986, |
|
"venue": "IEEE ASSP Magazine", |
|
"volume": "3", |
|
"issue": "1", |
|
"pages": "4--16", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lawrence Rabiner and B Juang. 1986. An introduction to hidden markov models. IEEE ASSP Magazine, 3(1):4-16.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Free-marginal multirater kappa: an alternative to fleiss' fixed-marginal multirater kappa", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Jj Randolph", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Joensuu University Learning and Instruction Symposium", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "JJ Randolph. 2005. Free-marginal multirater kappa: an alternative to fleiss' fixed-marginal multirater kappa. Joensuu University Learning and Instruction Sympo- sium.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Natural language processing with PyTorch: build intelligent language applications using deep learning", |
|
"authors": [ |
|
{ |
|
"first": "Delip", |
|
"middle": [], |
|
"last": "Rao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Mcmahan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Delip Rao and Brian McMahan. 2019. Natural language processing with PyTorch: build intelli- gent language applications using deep learning. \" O'Reilly Media, Inc.\".", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "The effect of comedy films on postoperative pain and anxiety in surgical oncology patients", |
|
"authors": [ |
|
{ |
|
"first": "Serdar", |
|
"middle": [], |
|
"last": "Sar\u0131ta\u015f", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hasan", |
|
"middle": [], |
|
"last": "Gen\u00e7", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Erafettin Okutan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ahmet\u00f6zdemir", |
|
"middle": [], |
|
"last": "Ra-Mazaninci", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G\u00fclnaz", |
|
"middle": [], |
|
"last": "Kizilkaya", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Complementary Medicine Research", |
|
"volume": "26", |
|
"issue": "4", |
|
"pages": "231--239", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1159/000497234" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Serdar Sar\u0131ta\u015f, Hasan Gen\u00e7, \u015e erafettin Okutan, Ra- mazan\u0130nci, Ahmet\u00d6zdemir, and G\u00fclnaz Kizilkaya. 2019. The effect of comedy films on postoperative pain and anxiety in surgical oncology patients. Com- plementary Medicine Research, 26(4):231-239.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Well-read students learn better: The impact of student initialization on knowledge distillation", |
|
"authors": [ |
|
{ |
|
"first": "Iulia", |
|
"middle": [], |
|
"last": "Turc", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1908.08962" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Well-read students learn better: The impact of student initialization on knowledge distillation. arXiv preprint arXiv:1908.08962.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "30", |
|
"issue": "", |
|
"pages": "5998--6008", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 5998-6008. Curran Asso- ciates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Coping with stress: a survey of murdoch university veterinary students", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Sandy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pauline", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Williams", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jennifer", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Arnold", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mills", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Journal of Veterinary Medical Education", |
|
"volume": "32", |
|
"issue": "2", |
|
"pages": "201--212", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sandy M Williams, Pauline K Arnold, and Jennifer N Mills. 2005. Coping with stress: a survey of mur- doch university veterinary students. Journal of Vet- erinary Medical Education, 32(2):201-212.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Humor recognition and humor anchor extraction", |
|
"authors": [ |
|
{ |
|
"first": "Diyi", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alon", |
|
"middle": [], |
|
"last": "Lavie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2367--2376", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D15-1284" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diyi Yang, Alon Lavie, Chris Dyer, and Eduard Hovy. 2015. Humor recognition and humor anchor extrac- tion. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2367-2376, Lisbon, Portugal. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Telling the whole story: A manually annotated Chinese dataset for the analysis of humor in jokes", |
|
"authors": [ |
|
{ |
|
"first": "Dongyu", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Heting", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xikai", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hongfei", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Feng", |
|
"middle": [], |
|
"last": "Xia", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6402--6407", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1673" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dongyu Zhang, Heting Zhang, Xikai Liu, Hongfei Lin, and Feng Xia. 2019. Telling the whole story: A man- ually annotated Chinese dataset for the analysis of humor in jokes. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 6402-6407, Hong Kong, China. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Recognizing humor on twitter", |
|
"authors": [ |
|
{ |
|
"first": "Renxian", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naishi", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management, CIKM '14", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "889--898", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/2661829.2661997" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Renxian Zhang and Naishi Liu. 2014. Recognizing hu- mor on twitter. In Proceedings of the 23rd ACM International Conference on Conference on Infor- mation and Knowledge Management, CIKM '14, page 889-898, New York, NY, USA. Association for Computing Machinery.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "UER: An open-source toolkit for pretraining models", |
|
"authors": [ |
|
{ |
|
"first": "Zhe", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hui", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jinbin", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xin", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haotang", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qi", |
|
"middle": [], |
|
"last": "Ju", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaoyong", |
|
"middle": [], |
|
"last": "Du", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "241--246", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-3041" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhe Zhao, Hui Chen, Jinbin Zhang, Xin Zhao, Tao Liu, Wei Lu, Xi Chen, Haotang Deng, Qi Ju, and Xiaoy- ong Du. 2019. UER: An open-source toolkit for pre- training models. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP): System Demonstrations, pages 241-246, Hong Kong, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Attention-based bidirectional long short-term memory networks for relation classification", |
|
"authors": [ |
|
{ |
|
"first": "Peng", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Shi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun", |
|
"middle": [], |
|
"last": "Tian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhenyu", |
|
"middle": [], |
|
"last": "Qi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bingchen", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hongwei", |
|
"middle": [], |
|
"last": "Hao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bo", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "207--212", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P16-2034" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peng Zhou, Wei Shi, Jun Tian, Zhenyu Qi, Bingchen Li, Hongwei Hao, and Bo Xu. 2016. Attention-based bidirectional long short-term memory networks for relation classification. In Proceedings of the 54th Annual Meeting of the Association for Computa- tional Linguistics (Volume 2: Short Papers), pages 207-212, Berlin, Germany. Association for Compu- tational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Each script's laughter rate in our corpus.", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": ", 2014) Dropout = 0.5, Number of Epochs = 20, Batch Size = 128, Learning Rate = 0.001, Number of Filters = 256, Filter Sizes = 2,3,4, Average Running Time = 5s. RCNN (Lai et al., 2015) Dropout = 1.0, Number of Epochs = 10, Batch Size = 128, Learning Rate = 0.001, Hidden Size = 256, Number of Layers = 1, Average Running Time = 5s. BiLSTM (Liu et al., 2016) Dropout = 0.5, Number of Epochs = 10, Batch Size = 128, Learning Rate = 0.001, Hidden Size = 128, Number of Layers = 2, Average Running Time = 5.4s. + Attention (Zhou et al., 2016) Dropout = 0.5, Number of Epochs = 10, Batch Size = 128, Learning Rate = 0.001, Hidden Size = 128 and 64 respectively, Number of Layers = 2, Average Running Time = 5.74s. FastText (Joulin et al., 2017) Dropout = 0.5, Number of Epochs = 20, Batch Size = 128, Learning Rate = 0.001, Hidden Size = 256, Average Running Time = 22.5s. DPCNN (Johnson and Zhang, 2017) Dropout = 0.5, Number of Epochs = 20, Batch Size = 128, Learning Rate = 0.001, Number of Filter = 250, Average Running Time = 5s. Transformer (Vaswani et al., 2017) Dropout = 0.5, Number of Epochs = 20, Batch Size = 128, Learning Rate = 0.0005, Number of Head = 5, Number of Encoder = 2, Average Running Time = 6.528s. BERT-tiny (Jiao et al., 2019) Dropout = 0.1, Number of Epoch = 20, Batch Size = 64, Learning Rate = 0.00002, Size of Embedding = 384, Feedforward Size = 1536, Hidden Size = 384, Number of Head = 6, Number of Layer = 3, Average Running Time = 20.63s. BERT-small (Turc et al., 2019) Dropout = 0.5, Number of Epoch = 20, Batch Size = 64, Learning Rate = 0.00002, Size of Embedding = 512, Feedforward Size = 2048, Hidden Size = 512, Number of Head = 8, Number of Layer = 6, Average Running Time = 36.65. BERT-base (Devlin et al., 2019) Dropout = 0.1, Number of Epoch = 10, Batch Size = 64, Learning Rate = 0.00002, Size of Embedding = 768, Feedforward Size = 3072, Hidden Size = 768, Number of Head = 12, Number of Layer = 12, Average Running Time = 99s.", |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "BiLSTM(Huang et al., 2015) Number of Epoch = 30, Batch Size = 64, Learning Rate = 0.001, Size of Embedding = 128, Hidden Size = 128, Average Running Time = 8.43s. BiLSTM-CRF (Lample et al., 2016) Number of Epoch = 30, Batch Size = 64, Learning Rate = 0.001, Size of Embedding = 128, Hidden Size = 128, Average Running Time = 9.35s.", |
|
"num": null |
|
}, |
|
"TABREF0": { |
|
"content": "<table><tr><td>Statistics</td><td>Value</td></tr><tr><td># of Comedy Scripts</td><td>30</td></tr><tr><td>Year Range</td><td>2014-2020</td></tr><tr><td>Total Duration</td><td>473.44 mins</td></tr><tr><td>Average Duration</td><td>15.78 mins</td></tr><tr><td># of Actors' Lines</td><td>6087</td></tr><tr><td>Laughter Rate (Line-Level)</td><td>28.62%</td></tr><tr><td># of Characters</td><td>120451</td></tr><tr><td colspan=\"2\">Laughter Rate (Character-Level) 8.16%</td></tr></table>", |
|
"type_str": "table", |
|
"text": "Topics of the selected comedies.", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF1": { |
|
"content": "<table><tr><td>Model</td><td>P</td><td>R</td><td>F</td></tr><tr><td>HMM</td><td/><td/><td/></tr></table>", |
|
"type_str": "table", |
|
"text": "Text classification performance. P, R, F and Acc. are Precision, Recall, F1-score and Accuracy respectively.", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF3": { |
|
"content": "<table><tr><td>Actor's Line</td><td>Translation</td></tr><tr><td>\u8d3e\u51b0\uff1a\u8fd9\u4e8b\u6211\u4e0d\u6562\u4fdd\u8bc1\u6211 \u6211 \u6211\u73b0 \u73b0 \u73b0\u5728 \u5728 \u5728</td><td>Bing JIA: I can't guarantee if I am still [able to read</td></tr><tr><td>\u8fd8 \u8fd8 \u8fd8\u8ba4 \u8ba4 \u8ba4\u4e0d \u4e0d \u4e0d[\u8ba4 \u8ba4 \u8ba4\u5b57 \u5b57 \u5b57\u554a \u554a \u554a]\u3002</td><td>this] now.</td></tr><tr><td colspan=\"2\">\u5cb3\u4e91\u9e4f\uff1a\u4f60\u4ec0 \u4ec0 \u4ec0\u4e48 \u4e48 \u4e48\u65f6 \u65f6 \u65f6\u5019 \u5019 \u5019\u6765 \u6765 \u6765\u7684 \u7684 \u7684\u554a \u554a \u554a \u3002 Yuepeng YUE: Huh? When did you come?</td></tr><tr><td>\u6c88\u817e\uff1a\u8d70\u80af\u5b9a\u662f\u4e0d\u8d76\u8d9f\u4e86\uff0c\u6211 \u6211 \u6211</td><td>Teng SHEN: Walking is not fast enough, I have to</td></tr><tr><td>\u5f97 \u5f97 \u5f97\u8dd1 \u8dd1 \u8dd1\u4e86 \u4e86 \u4e86 \u3002</td><td>run to escape this.</td></tr><tr><td>\u5feb\u9012\u54e5\uff1a[\u6563\u6253\u90a3\u4e2a] \u3002</td><td>Courier: [The one who is good at free combat].</td></tr><tr><td>\u5f20\u5c0f\u6590\uff1a\u54ce\u5440\uff0c\u8fd9\u662f\u5584\u610f\u7684\u8c0e</td><td>Xiaofei ZHANG: Well, this is a white lie. But the</td></tr><tr><td>\u8a00\uff0c\u53ef\u95ee\u9898\u662f\uff0c\u8001\u5e08\u4e5f\u4e0d\u4f1a[\u6f14</td><td>problem is, I don't know how [to pretend]!</td></tr><tr><td>\u620f\u5440] \uff01</td><td/></tr><tr><td>\u4f55\u6b22\uff1a\u4e0d\u662f\uff0c\u4f60[\u627e\u8c01\u5440] \uff1f</td><td>Huan HE: Eh? [Who are] you [looking for]?</td></tr></table>", |
|
"type_str": "table", |
|
"text": "Examples of incorrect classifications taken from different comedies in our corpus. G and P are Gold and Predicted labels, respectively.", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF4": { |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF6": { |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "Details of computing resources.", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF7": { |
|
"content": "<table><tr><td>Fold</td><td colspan=\"4\">Text Classification Information Extraction # of Lines Total # of Characters Total</td></tr><tr><td>0</td><td>277/685</td><td>962</td><td>1666/20010</td><td>21676</td></tr><tr><td>1</td><td>290/854</td><td>1144</td><td>1703/19354</td><td>21057</td></tr><tr><td>2</td><td>285/789</td><td>1074</td><td>1666/19609</td><td>21275</td></tr><tr><td>3</td><td>358/909</td><td>1267</td><td>2251/23535</td><td>25786</td></tr><tr><td>4</td><td>459/1181</td><td>1640</td><td>2632/28025</td><td>30657</td></tr></table>", |
|
"type_str": "table", |
|
"text": "shows the statistics of each fold.", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF8": { |
|
"content": "<table><tr><td/><td>Model</td><td colspan=\"6\">Fold-0 Fold-1 Fold-2 Fold-3 Fold-4 Average</td></tr><tr><td/><td>CNN</td><td>42.64</td><td>40.43</td><td>42.61</td><td>42.27</td><td>44.68</td><td>42.53</td></tr><tr><td/><td>RCNN</td><td>43.47</td><td>38.30</td><td>43.93</td><td>35.44</td><td>44.93</td><td>41.21</td></tr><tr><td/><td>BiLSTM</td><td>43.56</td><td>37.80</td><td>40.19</td><td>42.20</td><td>42.12</td><td>41.17</td></tr><tr><td/><td>+Attention</td><td>39.53</td><td>38.93</td><td>41.75</td><td>37.62</td><td>42.01</td><td>39.97</td></tr><tr><td>P</td><td>FastText DPCNN</td><td>39.37 42.44</td><td>44.64 41.67</td><td>36.96 41.73</td><td>41.12 41.86</td><td>40.98 44.61</td><td>40.61 42.46</td></tr><tr><td/><td>Transformer</td><td>46.67</td><td>38.17</td><td>42.43</td><td>42.08</td><td>43.66</td><td>42.60</td></tr><tr><td/><td>BERT-tiny</td><td>42.26</td><td>42.05</td><td>44.62</td><td>54.43</td><td>54.43</td><td>47.56</td></tr><tr><td/><td>BERT-small</td><td>44.74</td><td>46.13</td><td>46.02</td><td>52.35</td><td>47.22</td><td>47.29</td></tr><tr><td/><td>BERT-base</td><td>45.65</td><td>47.73</td><td>42.66</td><td>53.03</td><td>48.91</td><td>47.60</td></tr><tr><td/><td>CNN</td><td>70.04</td><td>64.02</td><td>60.70</td><td>66.48</td><td>59.48</td><td>64.14</td></tr><tr><td/><td>RCNN</td><td>66.06</td><td>70.08</td><td>59.65</td><td>86.03</td><td>60.78</td><td>68.52</td></tr><tr><td/><td>BiLSTM</td><td>51.26</td><td>65.15</td><td>58.95</td><td>56.70</td><td>53.59</td><td>57.13</td></tr><tr><td/><td>+Attention</td><td>55.23</td><td>65.91</td><td>58.60</td><td>75.14</td><td>44.66</td><td>59.91</td></tr><tr><td>R</td><td>FastText DPCNN</td><td>68.23 67.87</td><td>56.82 62.50</td><td>71.58 61.05</td><td>63.41 65.36</td><td>71.24 59.48</td><td>66.26 63.25</td></tr><tr><td/><td>Transformer</td><td>60.65</td><td>69.70</td><td>64.91</td><td>62.29</td><td>66.01</td><td>64.71</td></tr><tr><td/><td>BERT-tiny</td><td>62.09</td><td>41.03</td><td>59.65</td><td>49.72</td><td>54.43</td><td>53.38</td></tr><tr><td/><td>BERT-small</td><td>59.93</td><td>47.24</td><td>62.81</td><td>59.22</td><td>51.85</td><td>56.21</td></tr><tr><td/><td>BERT-base</td><td>62.45</td><td>50.69</td><td>55.09</td><td>61.17</td><td>53.81</td><td>56.64</td></tr><tr><td/><td>CNN</td><td>53.01</td><td>49.56</td><td>50.07</td><td>51.68</td><td>51.03</td><td>51.07</td></tr><tr><td/><td>RCNN</td><td>52.44</td><td>49.53</td><td>50.60</td><td>50.20</td><td>51.67</td><td>50.89</td></tr><tr><td/><td>BiLSTM</td><td>47.10</td><td>47.84</td><td>47.80</td><td>48.39</td><td>47.17</td><td>47.66</td></tr><tr><td/><td>+Attention</td><td>46.08</td><td>48.95</td><td>48.76</td><td>50.14</td><td>43.29</td><td>47.44</td></tr><tr><td>F</td><td>FastText DPCNN</td><td>49.93 52.22</td><td>50.00 50.00</td><td>48.75 49.57</td><td>49.89 51.04</td><td>52.03 50.98</td><td>50.12 
50.76</td></tr><tr><td/><td>Transformer</td><td>52.75</td><td>49.33</td><td>51.32</td><td>50.23</td><td>52.56</td><td>51.24</td></tr><tr><td/><td>BERT-tiny</td><td>50.29</td><td>41.54</td><td>51.05</td><td>51.97</td><td>49.72</td><td>48.91</td></tr><tr><td/><td>BERT-small</td><td>51.23</td><td>46.68</td><td>53.12</td><td>55.57</td><td>49.43</td><td>51.21</td></tr><tr><td/><td>BERT-base</td><td>52.74</td><td>49.16</td><td>48.09</td><td>56.81</td><td>51.24</td><td>51.61</td></tr><tr><td/><td>CNN</td><td>64.24</td><td>66.41</td><td>67.88</td><td>64.88</td><td>68.05</td><td>66.29</td></tr><tr><td/><td>RCNN</td><td>65.49</td><td>63.18</td><td>69.09</td><td>51.78</td><td>68.17</td><td>63.54</td></tr><tr><td/><td>BiLSTM</td><td>66.84</td><td>63.38</td><td>65.83</td><td>65.82</td><td>66.40</td><td>65.65</td></tr><tr><td/><td>+Attention</td><td>62.79</td><td>64.55</td><td>67.32</td><td>57.77</td><td>67.26</td><td>63.94</td></tr><tr><td>A</td><td>FastText DPCNN</td><td>60.60 64.24</td><td>70.70 67.77</td><td>60.06 67.04</td><td>64.01 64.56</td><td>63.23 67.99</td><td>63.72 66.32</td></tr><tr><td/><td>Transformer</td><td>68.71</td><td>63.09</td><td>67.32</td><td>65.11</td><td>66.65</td><td>66.18</td></tr><tr><td/><td>BERT-tiny</td><td>64.66</td><td>70.72</td><td>69.65</td><td>74.03</td><td>51.97</td><td>66.21</td></tr><tr><td/><td>BERT-small</td><td>67.15</td><td>72.64</td><td>70.58</td><td>73.24</td><td>70.30</td><td>70.78</td></tr><tr><td/><td>BERT-base</td><td>67.78</td><td>73.43</td><td>68.44</td><td>73.72</td><td>71.34</td><td>70.94</td></tr></table>", |
|
"type_str": "table", |
|
"text": "Statistics of each fold in the baseline experiments. The number before slash indicates how many actor's lines or characters that make the audience laugh. The number after slash indicates the number of lines or characters without causing audiences laugh.", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF9": { |
|
"content": "<table><tr><td colspan=\"8\">Tables 13 and 14 describe the detailed perfor-</td></tr><tr><td colspan=\"8\">mance of text classification and information extrac-</td></tr><tr><td colspan=\"4\">tion baseline experiments.</td><td/><td/><td/><td/></tr><tr><td/><td>Model</td><td colspan=\"6\">Fold-0 Fold-1 Fold-2 Fold-3 Fold-4 Average</td></tr><tr><td/><td>HMM</td><td>18.26</td><td>21.15</td><td>26.37</td><td>19.46</td><td>25.70</td><td>22.19</td></tr><tr><td/><td>CRF</td><td>24.62</td><td>30.86</td><td>34.17</td><td>23.33</td><td>29.83</td><td>28.56</td></tr><tr><td/><td>BILSTM</td><td>26.25</td><td>37.14</td><td>26.00</td><td>37.31</td><td>29.35</td><td>31.21</td></tr><tr><td>P</td><td>BiLSTM-CRF</td><td>30.50</td><td>24.43</td><td>32.46</td><td>29.71</td><td>34.56</td><td>30.33</td></tr><tr><td/><td>BERT-tiny</td><td>27.64</td><td>23.98</td><td>28.89</td><td>26.73</td><td>24.08</td><td>26.26</td></tr><tr><td/><td>BERT-small</td><td>31.71</td><td>24.12</td><td>30.83</td><td>28.04</td><td>29.38</td><td>28.82</td></tr><tr><td/><td>BERT-base</td><td>36.23</td><td>28.16</td><td>26.56</td><td>30.88</td><td>28.93</td><td>30.15</td></tr><tr><td/><td>HMM</td><td>7.19</td><td>9.64</td><td>7.38</td><td>5.93</td><td>7.02</td><td>7.43</td></tr><tr><td/><td>CRF</td><td>6.49</td><td>5.49</td><td>6.32</td><td>6.47</td><td>5.76</td><td>6.11</td></tr><tr><td/><td>BILSTM</td><td>2.24</td><td>1.23</td><td>0.55</td><td>2.47</td><td>1.70</td><td>1.64</td></tr><tr><td>R</td><td>BiLSTM-CRF</td><td>16.76</td><td>8.00</td><td>9.25</td><td>6.30</td><td>8.73</td><td>9.81</td></tr><tr><td/><td>BERT-tiny</td><td>17.37</td><td>19.54</td><td>20.96</td><td>22.46</td><td>19.10</td><td>19.89</td></tr><tr><td/><td>BERT-small</td><td>16.90</td><td>19.22</td><td>16.26</td><td>21.35</td><td>13.86</td><td>17.52</td></tr><tr><td/><td>BERT-base</td><td>21.62</td><td>21.19</td><td>27.17</td><td>21.02</td><td>16.35</td><td>21.47</td></tr><tr><td/><td>HMM</td><td>10.32</td><td>13.24</td><td>11.53</td><td>9.09</td><td>11.03</td><td>11.04</td></tr><tr><td/><td>CRF</td><td>10.27</td><td>9.33</td><td>10.67</td><td>10.13</td><td>9.65</td><td>10.01</td></tr><tr><td/><td>BILSTM</td><td>4.13</td><td>2.37</td><td>1.08</td><td>4.63</td><td>3.22</td><td>3.09</td></tr><tr><td>F</td><td>BiLSTM-CRF</td><td>21.63</td><td>12.05</td><td>14.40</td><td>10.39</td><td>13.94</td><td>14.48</td></tr><tr><td/><td>BERT-tiny</td><td>21.33</td><td>21.53</td><td>24.30</td><td>24.41</td><td>21.30</td><td>22.57</td></tr><tr><td/><td>BERT-small</td><td>22.05</td><td>21.40</td><td>21.29</td><td>24.24</td><td>18.83</td><td>21.56</td></tr><tr><td/><td>BERT-base</td><td>26.01</td><td>24.18</td><td>26.86</td><td>25.02</td><td>20.89</td><td>24.59</td></tr></table>", |
|
"type_str": "table", |
|
"text": "Each fold's text classification baseline experiments and their overall average performance. P, R, F and A are Precision, Recall, F1-score and Accuracy respectively.", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF10": { |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "Each fold's information extraction baseline experiments and their overall average performance.", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF11": { |
|
"content": "<table><tr><td>Comedian</td><td colspan=\"4\">Text Classification Information Extraction # of Lines Total # of Characters Total</td></tr><tr><td>Xiaobao SONG</td><td>315/815</td><td>1150</td><td>2001/20366</td><td>22367</td></tr><tr><td>Yuepeng YUE</td><td>436/1547</td><td>1983</td><td>2362/25579</td><td>27941</td></tr><tr><td>Ling JIA</td><td>195/501</td><td>696</td><td>1135/14414</td><td>15549</td></tr><tr><td>Xiaofei ZHANG</td><td>190/495</td><td>685</td><td>1056/15946</td><td>17002</td></tr><tr><td>Teng SHEN</td><td>166/350</td><td>516</td><td>1071/13143</td><td>14214</td></tr><tr><td>Bing JIA</td><td>367/690</td><td>1057</td><td>2293/21085</td><td>23378</td></tr></table>", |
|
"type_str": "table", |
|
"text": "shows the statistics of the scripts performed by specific leading comedians. Tables 16 and 17 present the prediction results.", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF12": { |
|
"content": "<table><tr><td>A.5 Full List of Selected Comedy</td></tr><tr><td>Performances</td></tr><tr><td>Tables 18 and 19 show the full list of performances</td></tr><tr><td>in our corpus with details.</td></tr></table>", |
|
"type_str": "table", |
|
"text": "Statistics of the scripts performed by specific leading comedians. 40.00 41.85 39.67 45.70 47.29 42.75 BiLSTM 38.17 42.18 39.35 36.26 45.26 46.90 41.35 + Attention 42.70 38.16 45.06 38.49 48.02 46.78 43.20 RCNN 46.93 44.01 36.36 37.23 46.26 43.10 42.32 FastText 41.88 37.99 36.64 37.81 45.24 45.57 40.86 DPCNN 42.01 42.21 36.03 35.25 45.33 46.71 41.26 Transformer 41.97 36.32 36.94 41.30 46.50 46.41 Attention 39.18 45.07 49.07 44.70 52.86 54.55 47.57 RCNN 51.01 47.41 50.39 49.82 58.30 57.01 52.32 FastText 50.06 45.11 48.98 48.21 57.95 53.90 50.70 DPCNN 49.74 44.59 48.76 47.79 57.71 53.92 50.42 Transformer 50.13 45.91 51.33 50.95 58.98 54.20 51.92 BERT-tiny 50.84 41.94 49.14 44.07 53.00 53.16 48.69 BERT-small 47.91 41.95 48.78 48.15 53.76 53.98 49.09 F BERT-base 51.32 45.78 45.97 53.11 58.29 55.87 51.72 CNN 66.17 72.37 64.37 61.72 62.89 62.72 65.04 BiLSTM 63.83 74.18 62.50 56.41 62.70 62.63 63.71 + Attention 69.22 70.50 68.68 62.50 66.21 62.16 66.55 RCNN 70.61 74.94 54.74 56.87 63.67 55.91 62.79 FastText 66.00 70.30 56.90 59.38 62.30 60.83 62.62 DPCNN 66.26 74.18 55.60 53.91 62.50 62.16 62.44 Transformer 66.09 67.68 55.32 63.59 63.87 61.78 63.06 BERT-tiny 64.17 70.95 66.09 66.28 63.57 66.32 66.23 BERT-small 63.13 72.92 66.81 71.39 60.66 69.35 67.38 A BERT-base 66.35 75.39 67.24 71.39 65.89 70.10 69.39", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF13": { |
|
"content": "<table><tr><td/><td>Model</td><td>0</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td><td>Average</td></tr><tr><td/><td>HMM</td><td colspan=\"6\">21.67 19.47 22.03 16.69 14.61 24.25</td><td>19.79</td></tr><tr><td>P</td><td>CRF</td><td colspan=\"7\">22.72 30.72</td></tr><tr><td/><td>BERT-tiny</td><td colspan=\"6\">24.43 21.87 29.61 26.37 27.19 28.32</td><td>26.30</td></tr><tr><td/><td>BERT-small</td><td colspan=\"6\">23.93 20.19 26.01 23.85 22.80 32.81</td><td>24.93</td></tr><tr><td/><td>BERT-base</td><td colspan=\"6\">20.48 19.10 23.74 24.41 26.58 33.51</td><td>24.64</td></tr><tr><td/><td>HMM</td><td colspan=\"6\">6.91 6.26 6.96 9.50 8.79 6.68</td><td>7.52</td></tr><tr><td/><td>CRF</td><td colspan=\"6\">5.80 4.87 7.47 4.41 3.13 4.48</td><td>5.03</td></tr><tr><td/><td>BILSTM</td><td colspan=\"6\">4.40 2.17 2.40 2.20 3.71 2.49</td><td>2.90</td></tr><tr><td>R</td><td colspan=\"7\">BiLSTM-CRF 7.64 14.06 17.52 7.30 3.20 2.90</td><td>8.77</td></tr><tr><td/><td>BERT-tiny</td><td colspan=\"6\">16.88 19.89 22.26 19.15 19.06 17.98</td><td>19.20</td></tr><tr><td/><td>BERT-small</td><td colspan=\"6\">22.63 31.31 29.31 32.78 32.42 15.39</td><td>27.31</td></tr><tr><td/><td>BERT-base</td><td colspan=\"6\">39.61 36.43 35.25 32.01 28.64 17.95</td><td>31.65</td></tr><tr><td/><td>HMM</td><td colspan=\"6\">10.48 9.48 10.58 12.11 10.97 10.47</td><td>10.68</td></tr><tr><td/><td>CRF</td><td colspan=\"6\">9.24 7.87 12.59 7.31 5.34 7.76</td><td>8.35</td></tr><tr><td/><td>BILSTM</td><td colspan=\"6\">7.60 3.94 4.46 4.04 6.65 4.67</td><td>5.23</td></tr><tr><td>F</td><td colspan=\"7\">BiLSTM-CRF 11.87 17.55 22.90 11.88 5.70 5.44</td><td>12.56</td></tr><tr><td/><td>BERT-tiny</td><td colspan=\"6\">19.97 20.83 25.42 22.19 22.41 22.00</td><td>22.14</td></tr><tr><td/><td>BERT-small</td><td colspan=\"6\">23.26 24.55 27.56 27.61 26.77 20.95</td><td>25.12</td></tr><tr><td/><td>BERT-base</td><td colspan=\"6\">27.00 25.06 28.37 27.70 27.57 23.37</td><td>26.51</td></tr></table>", |
|
"type_str": "table", |
|
"text": "20.54 39.79 21.43 18.28 28.96 25.29 BILSTM 26.32 21.48 31.70 24.53 32.39 37.89 29.05 BiLSTM-CRF 26.32 23.35 33.08 31.86 25.71 44.02", |
|
"num": null, |
|
"html": null |
|
} |
|
} |
|
} |
|
} |