{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:06:05.238418Z"
},
"title": "Learning Similarity between Movie Characters and Its Potential Implications on Understanding Human Experiences",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Washington",
"location": {
"country": "United States"
}
},
"email": "[email protected]"
},
{
"first": "Weizhe",
"middle": [],
"last": "Lin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Cambridge",
"location": {
"country": "United Kingdom"
}
},
"email": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Cambridge",
"location": {
"country": "United Kingdom"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "While many different aspects of human experiences have been studied by the NLP community, none has captured its full richness. We propose a new task 1 to capture this richness based on an unlikely setting: movie characters. We sought to capture theme-level similarities between movie characters that were community-curated into 20,000 themes. By introducing a two-step approach that balances performance and efficiency, we managed to achieve 9-27% improvement over recent paragraph-embedding based methods. Finally, we demonstrate how the thematic information learnt from movie characters can potentially be used to understand themes in the experience of people, as indicated on Reddit posts.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "While many different aspects of human experiences have been studied by the NLP community, none has captured its full richness. We propose a new task 1 to capture this richness based on an unlikely setting: movie characters. We sought to capture theme-level similarities between movie characters that were community-curated into 20,000 themes. By introducing a two-step approach that balances performance and efficiency, we managed to achieve 9-27% improvement over recent paragraph-embedding based methods. Finally, we demonstrate how the thematic information learnt from movie characters can potentially be used to understand themes in the experience of people, as indicated on Reddit posts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "What makes a person similar to another? While there is no definitive answer, some aspects that have been investigated in the NLP community are personality (Gjurkovi\u0107 and \u0160najder, 2018; Conway and O'Connor, 2016) , demographics (Nguyen et al., 2016) as well as personal beliefs and intents (Sap et al., 2019) . While each of these aspects is valuable on its own, they also seem somewhat lacking to sketch a complete picture of a person. Researchers who recognise such limitations seek to ameliorate them by jointly modelling multiple aspects at the same time (Benton et al., 2017 ). Yet, we intuitively know that as humans, we are more than the sum of the multiple aspects that constitutes our individuality. Our human experiences are marked by so many different aspects that interact in ways that we can not anticipate. What then can we do to better capture the degree of similarity between different people?",
"cite_spans": [
{
"start": 155,
"end": 184,
"text": "(Gjurkovi\u0107 and \u0160najder, 2018;",
"ref_id": "BIBREF15"
},
{
"start": 185,
"end": 211,
"text": "Conway and O'Connor, 2016)",
"ref_id": "BIBREF10"
},
{
"start": 227,
"end": 248,
"text": "(Nguyen et al., 2016)",
"ref_id": "BIBREF22"
},
{
"start": 289,
"end": 307,
"text": "(Sap et al., 2019)",
"ref_id": "BIBREF28"
},
{
"start": 558,
"end": 578,
"text": "(Benton et al., 2017",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Finding similar movie characters can be an interesting first step to understanding humans better.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Many characters are inspired by and related to true stories of people so understanding how to identify similarities between character descriptions might ultimately help us to better understand similarities in human characteristics and experiences. One way of defining what makes movie character descriptions similar is when community-based contributors on All The Tropes 2 classify them into the same theme (also known as a trope), with an example from the trope \"Driven by Envy\" shown in Table 1 . Other themes (tropes) include \"Parental Neglect\", \"Fallen Hero\", and \"A Friend in Need\".",
"cite_spans": [],
"ref_spans": [
{
"start": 489,
"end": 496,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Such community-based curation allows All The Tropes to reap the same advantages as Wikipedia and open-sourced software: a large catalog can be created with high internal-consistency given the in-built self-correction mechanisms. This approach allowed us to collect a dataset of >100 thousand characters labelled with >20,000 themes without requiring any annotation cost. Based on this dataset, we propose a model that can be used to identify similar movie characters precisely yet efficiently. While movie characters may not be the perfect reflection of human experience, we ultimately show that they are good enough proxies when collecting a dataset of similar scale with real people would be extremely expensive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our key contributions are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. We conduct a pioneering study on identifying similar movie character descriptions using weakly supervised learning, with potential implications on understanding similarities in human characteristics and experiences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. We propose a two-step generalizable approach that can be used to identify similar movie characters precisely yet efficiently and demonstrate that our approach performs at least 9-27% better than methods employing recent paragraph embedding-based approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Superman's 1990s enemy Conduit. Conduit hates Superman because he knows if Superman wasn't around he would be humanity's greatest hero instead ...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Loki's constant scheming against Thor in his efforts to one-up him gave Odin and the rest of Asgard more and more reasons to hate Loki ... 3. We show that our model, which is trained on identifying similar movie characters, can be related to themes in human experience found in Reddit posts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Loki",
"sec_num": null
},
{
"text": "2 Related Work",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Loki",
"sec_num": null
},
{
"text": "Characters in movies and novels have been computationally analyzed by many researchers. Bamman et al. (2013 Bamman et al. ( , 2014 attempted to cluster various characters into prototypes based on topic modelling techniques (Blei et al., 2003) . On the other hand, Frermann and Szarvas (2017) and Iyyer et al. (2016) sought to cluster fictional characters alongside the relationships between them using recurrent neural networks and matrix factorization. While preceded by prior literature, our work is novel in framing character analysis as a supervised learning problem rather than an unsupervised learning problem. Specifically, we formulate it as a similarity learning task between characters. Tapping on fancurated movie-character labels (ie tropes) can provide valuable information concerning character similarity, which previous literature did not use. A perceptible effect of this change in task formulation is that our formulation allows movie characters to be finely distinguished amongst > 20000 themes versus < 200 in prior literature. Such differences in task formulation can contribute a fresh perspective into this research area and inspire subsequent research.",
"cite_spans": [
{
"start": 88,
"end": 107,
"text": "Bamman et al. (2013",
"ref_id": "BIBREF1"
},
{
"start": 108,
"end": 130,
"text": "Bamman et al. ( , 2014",
"ref_id": "BIBREF2"
},
{
"start": 223,
"end": 242,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF4"
},
{
"start": 296,
"end": 315,
"text": "Iyyer et al. (2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of characters in film and fiction",
"sec_num": "2.1"
},
{
"text": "Furthermore, the corpus we use differs significantly from those used in existing research. We use highly concise character descriptions of around 200 words whereas existing research mostly uses movie/book-length character mentions. Concise character descriptions can exemplify specific trait-s/experiences of characters. This allows the differences between characters to be more discriminative compared to a longer description, which might include more points of commonality (going to school/work, eating and having a polite conversation). This means that such concise descriptions can eventually prove more helpful in understanding characteristics and experiences of humans.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of characters in film and fiction",
"sec_num": "2.1"
},
{
"text": "Mostly researched in the field of psychology, reallife experiences are often analyzed through asking individuals to document and reflect upon their experiences. Trained analysts then seek to classify such writing into predefined categories. Demorest et al. (1999) interpreted an individual's experience in the form of three key stages: an individual's wish, the response from the other and the response from the self in light of the response from the other. Each stage consists of around ten predefined categories such as wanting to be autonomous (Stage 1), being denied of that autonomy (Stage 2) and developing an enmity against the other (Stage 3). Thorne and McLean (2001) organized their analysis in terms of central themes. These central themes include experiences of interpersonal turmoil, having a sense of achievement and surviving a potentially life-threatening event/illness. Both studies above code individuals' personal experiences into categories/themes that greatly resemble movie tropes. Because of this congruence, it is very likely that identifying similarity between characters in the same trope can inform about similarity between people in real-life. A common drawback of Demorest et al. (1999) and Thorne and McLean (2001) lie in their relatively small sample size (less than 200 people classified into tens of themes/categories). Comparatively, our study uses > 100,000 characters fine-grainedly labelled by fans into >20,000 tropes. As a result, this study has the potential of supporting a better understanding of tropes, which we have shown to be structurally similar to themes in real-life experiences.",
"cite_spans": [
{
"start": 241,
"end": 263,
"text": "Demorest et al. (1999)",
"ref_id": "BIBREF11"
},
{
"start": 652,
"end": 676,
"text": "Thorne and McLean (2001)",
"ref_id": "BIBREF30"
},
{
"start": 1193,
"end": 1215,
"text": "Demorest et al. (1999)",
"ref_id": "BIBREF11"
},
{
"start": 1220,
"end": 1244,
"text": "Thorne and McLean (2001)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Congruence between themes in real-life experiences and movie tropes",
"sec_num": "2.2"
},
{
"text": "Many information retrieval pipelines involve first identifying likely candidates and then postprocessing these candidates to determine which among them are most suitable. The most widelyused class of approaches for this purpose is known as Shingling and Locally Sensitive Hashing (Leskovec et al., 2020; Rodier and Carter, 2020) . Such approaches first represent documents as Bagof-Ngrams before hashing such representation into shorter integer-vector signatures. These signatures contain information on n-gram overlap between documents and hence encode lexical features that characterize similar documents. However, such approaches are unable to identify documents that are similar based on abstract semantic features rather than superficial lexical similarities. Recent progress in language modelling has enabled the semantic meaning of short paragraphs to be encoded beyond lexical features (Peters et al., 2018; Devlin et al., 2019; Howard and Ruder, 2018; Raffel et al., 2019) . This has reaped substantial gains in text similarity tasks including entailment tasks (Bowman et al., 2015; Williams et al., 2018) , duplicate questions tasks (Sharma et al., 2019; Nakov et al., 2017) and others (Cer et al., 2017; Dolan and Brockett, 2005 ). Yet, such progress has yet to enable better candidate selection based on semantic similarities. As a result, relatively naive approaches such as exhaustive pairwise comparisons and distance-based measures continue to be the dominant approach in identifying similar documents encoded into dense contextualized embeddings (Reimers and Gurevych, 2019) . To improve this gap in knowledge, this study proposes and validates a candidate selection method that is compatible with recent progress in text representation.",
"cite_spans": [
{
"start": 280,
"end": 303,
"text": "(Leskovec et al., 2020;",
"ref_id": "BIBREF19"
},
{
"start": 304,
"end": 328,
"text": "Rodier and Carter, 2020)",
"ref_id": "BIBREF27"
},
{
"start": 894,
"end": 915,
"text": "(Peters et al., 2018;",
"ref_id": "BIBREF23"
},
{
"start": 916,
"end": 936,
"text": "Devlin et al., 2019;",
"ref_id": "BIBREF12"
},
{
"start": 937,
"end": 960,
"text": "Howard and Ruder, 2018;",
"ref_id": "BIBREF17"
},
{
"start": 961,
"end": 981,
"text": "Raffel et al., 2019)",
"ref_id": "BIBREF25"
},
{
"start": 1070,
"end": 1091,
"text": "(Bowman et al., 2015;",
"ref_id": "BIBREF5"
},
{
"start": 1092,
"end": 1114,
"text": "Williams et al., 2018)",
"ref_id": "BIBREF34"
},
{
"start": 1143,
"end": 1164,
"text": "(Sharma et al., 2019;",
"ref_id": "BIBREF29"
},
{
"start": 1165,
"end": 1184,
"text": "Nakov et al., 2017)",
"ref_id": "BIBREF21"
},
{
"start": 1196,
"end": 1214,
"text": "(Cer et al., 2017;",
"ref_id": "BIBREF6"
},
{
"start": 1215,
"end": 1239,
"text": "Dolan and Brockett, 2005",
"ref_id": "BIBREF13"
},
{
"start": 1563,
"end": 1591,
"text": "(Reimers and Gurevych, 2019)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate selection in information retrieval",
"sec_num": "2.3"
},
{
"text": "There is a set of unique character descriptions from the All The Tropes (Character 0 , Character 1 ... Character n ), each associated with a non-unique trope (theme) (T rope 0 , T rope 0 ... T rope p ). Given this set, find the k (where k = 1, 5 or 10) most similar character(s) to each character without making explicit use of the trope association of each character. In doing so, the goal is to have a maximal proportion of most similar character(s) which share the same tropes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task formulation",
"sec_num": "3"
},
{
"text": "In this section, we first discuss how we prepare the dataset and trained a BERT Next Sentence Prediction (NSP) model to identify similar characters. Based on this model, we present a 2-step Select and Refine approach, which can be utilized to find the most similar characters quickly yet effectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "4"
},
{
"text": "Character descriptions from All The Tropes 3 were used. We downloaded all character descriptions that had more than 100 words because character descriptions that are too short are unlikely to provide sufficient textual information for comparing similarity with other character descriptions. We then filtered our data to retain only tropes that contain more than one character descriptions. Character descriptions were then randomly split into training and evaluation sets (evaluation set = 20%). Inspired by BERT NSP dataset construction Devlin et al. 2019, we generated all possible combinationpairs of character descriptions that are classified under each trope (i.e. an unordered set) and gave the text-pair a label of IsSimilar, For each IsSimilar pair in the training set, we took the first item, randomly selected a character description that is not in the same trope as the first item and gave the new pair a label of NotSimilar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
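{
"text": "A minimal sketch of how the pair construction described above could be implemented, assuming a Python dict trope_to_chars that maps each trope to its list of character descriptions; the function name and negative-sampling details are our assumptions, not taken from the paper's released code:

import random
from itertools import combinations

def build_pairs(trope_to_chars, seed=0):
    # trope_to_chars: dict mapping a trope name to the list of its character descriptions.
    rng = random.Random(seed)
    all_chars = [c for chars in trope_to_chars.values() for c in chars]
    pairs = []
    for trope, chars in trope_to_chars.items():
        # Every unordered combination of characters sharing a trope is an IsSimilar pair.
        for a, b in combinations(chars, 2):
            pairs.append((a, b, 'IsSimilar'))
            # For each positive pair, pair its first item with a randomly drawn
            # character from a different trope to form a NotSimilar pair.
            negative = rng.choice(all_chars)
            while negative in chars:
                negative = rng.choice(all_chars)
            pairs.append((a, negative, 'NotSimilar'))
    return pairs
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},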
{
"text": "Descriptive statistics are available in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 40,
"end": 47,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "We trained a BERT Next Sentence Prediction model (English-base-uncased) 4 with the pretrained weights used as an initialization. As this model was trained to perform pair-wise character comparison instead of next sentence prediction, we thereafter name it as Character Comparison Model (CCM). All hyper-parameters used to train the model were default 5 except adjusting the maximum sequence length to 512 tokens (to adapt to the paragraph-length text), batch-size per GPU to 8 and epoch number to 2, as recommended by Devlin et al. (2019) . Among the training set, 1% was separated as a validation set during the training process. We also used the default pre-trained BERT Englishbase-uncased tokenizer because only a small proportion of words (< 0.5%) in the training corpus were out-of-vocabulary, of which most were names. As a result, training took 3 days on 4 Nvidia Tesla P100 GPUs. 2) top_n characters are then selected using cosine similarity based on the Character Embedding Model or using a Siamese-BERT model, which has been omitted from the illustration for clarity (Section 4.3.1). This selection is then refined using the Character Comparison Model to create a similarity matrix, which can then be sorted to identified most similar characters. ",
"cite_spans": [
{
"start": 518,
"end": 538,
"text": "Devlin et al. (2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training BERT Next Sentence Prediction model",
"sec_num": "4.2"
},
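{
"text": "A minimal sketch of how such CCM fine-tuning could be set up with the HuggingFace transformers library. Only the 512-token maximum length, the batch size of 8 per GPU and the 2 epochs come from the paper; the learning rate, the label encoding of IsSimilar pairs and the training-step function are our assumptions:

import torch
from torch.nn import CrossEntropyLoss
from torch.optim import AdamW
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForNextSentencePrediction.from_pretrained('bert-base-uncased')
model.train()
optimizer = AdamW(model.parameters(), lr=2e-5)  # learning rate is an assumption
loss_fn = CrossEntropyLoss()

def training_step(batch_pairs):
    # batch_pairs: list of (description_a, description_b, label) tuples, e.g. 8 per GPU.
    texts_a = [a for a, _, _ in batch_pairs]
    texts_b = [b for _, b, _ in batch_pairs]
    # BERT NSP convention: class 0 means 'sentence B follows sentence A',
    # so IsSimilar pairs are mapped to 0 and NotSimilar pairs to 1.
    labels = torch.tensor([0 if lab == 'IsSimilar' else 1 for _, _, lab in batch_pairs])
    enc = tokenizer(texts_a, texts_b, truncation=True, max_length=512,
                    padding='max_length', return_tensors='pt')
    logits = model(**enc).logits            # shape: (batch, 2)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training BERT Next Sentence Prediction model",
"sec_num": "4.2"
},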
{
"text": "To address the key limitation of utilizing exhaustive pairwise comparison in practice -its impractically long computation time (\u2248 10 thousand GPU-hours on Nvidia Tesla P100), we propose a two-step Select and Refine approach. The Select step first identifies a small set of likely candidates in a coarse but computationally efficient manner. Then, the Refine step re-ranks these candidates using a precise but computationally expensive model. In doing so, it combines their strengths to precisely identify similar characters while being computationally efficient. While the Select and Refine approach is designed for identifying similar characters, this novel approach can also be directly used in other tasks involving semantic similarities between a pair of texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Select and Refine",
"sec_num": "4.3"
},
{
"text": "Characters that are likely to be similar to each character are first selected using a variant of our CCM model -named the Character Encoding Model (thereafter CEM). This model differs from the CCM model in that it does not utilize the final classifier layer. Therefore it can process a character description individually (instead of in pairs) to output an embedding that represents the character. The shared weights with CCM means that it encodes semantic information in a a similar way. This makes it likely that the most cosine similar character descriptions based on their character embedding are likely to have high (but not necessarily the highest) character-pair similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Select",
"sec_num": "4.3.1"
},
{
"text": "Beyond the CEM, any model capable of efficiently generating candidates for similar character description texts in O(n) time can also be used for this Select step, allowing immense flexibility in the application of the Select and Refine approach. To demonstrate this, we also test a Siamese-BERT model for the Select step, with the details of its preparation in Section 5.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Select",
"sec_num": "4.3.1"
},
{
"text": "In this step, we effectively reduced the search space for the most similar characters. We choose top_n candidates characters which are most similar to each character, forming top_n most similar character-pairs. top_n is a hyper-parameter that can range from 1 to 500. Strictly speaking, this step requires O(n 2 ) comparisons to find the top_n most similar character-pairs. However, each cosine similarity calculation is significantly less computationally demanding compared to each BERT NSP operation (note that CCM is trained from an NSP model). This also applies to the Siamese-BERT model because character embeddings can be cached, meaning that only a single classification layer operation needs to be repeated O(n 2 ) times. This means that computational runtime is dominated by O(n) BERT NSP operations in the subsequent Refine step, given the huge constant factor for BERT NSP operations. Overall, this step took 0.25 GPU-hours.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Select",
"sec_num": "4.3.1"
},
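{
"text": "A minimal sketch of this Select step, assuming the fine-tuned CCM from the previous sketch and using its underlying encoder (accessed here as model.bert) with the pooled [CLS] output as the character embedding; the paper does not specify the exact pooling, so that choice and the function names are our assumptions:

import numpy as np
import torch

@torch.no_grad()
def embed_characters(descriptions, model, tokenizer, batch_size=8):
    # CEM: reuse the fine-tuned CCM encoder without its NSP classifier head.
    embeddings = []
    for i in range(0, len(descriptions), batch_size):
        enc = tokenizer(descriptions[i:i + batch_size], truncation=True,
                        max_length=512, padding=True, return_tensors='pt')
        out = model.bert(**enc)               # underlying BertModel of the CCM
        embeddings.append(out.pooler_output)  # one vector per description (pooling is an assumption)
    return torch.cat(embeddings).numpy()

def select_candidates(embeddings, top_n=25):
    # Cosine similarity between all character embeddings; keep top_n candidates per character.
    norm = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = norm @ norm.T
    np.fill_diagonal(sims, -np.inf)               # exclude self-matches
    return np.argsort(-sims, axis=1)[:, :top_n]   # candidate indices per character
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Select",
"sec_num": "4.3.1"
},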
{
"text": "The initial selection of candidates for most similar characters to each character will then be refined using the CCM model. This step is more computationally demanding (0.25 * top_n GPU-hours) but can more effectively determine the extent to which characters are similar. Character Comparison Model (CCM) will then only be used on the top_n most similar candidate character-pairs, reducing the number of operations for each character from the total number of characters (n chars ) to only top_n. As a consequence, the runtime complexity of the overall operation is reduced from O(n 2 chars ) to O(top_n \u2022 n chars ) == O(n chars ), given top_n is a constant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Refine",
"sec_num": "4.3.2"
},
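{
"text": "A minimal sketch of the Refine step, re-scoring the candidates from the Select step with the CCM; the helper name and the use of the class-0 (IsSimilar) softmax probability as the similarity score follow the label encoding assumed in the earlier training sketch:

import torch

@torch.no_grad()
def refine(descriptions, candidates, model, tokenizer, k=10):
    # candidates: array of shape (n_chars, top_n) of indices from the Select step.
    most_similar = []
    for i, cand_idx in enumerate(candidates):
        pairs_a = [descriptions[i]] * len(cand_idx)
        pairs_b = [descriptions[j] for j in cand_idx]
        enc = tokenizer(pairs_a, pairs_b, truncation=True, max_length=512,
                        padding=True, return_tensors='pt')
        logits = model(**enc).logits
        scores = torch.softmax(logits, dim=-1)[:, 0]            # probability of IsSimilar
        order = torch.argsort(scores, descending=True).tolist()
        most_similar.append([int(cand_idx[j]) for j in order[:k]])
    return most_similar
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Refine",
"sec_num": "4.3.2"
},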
{
"text": "In this section, we first present evaluation metrics and then present the preparation of baseline models including state-of-the-art paragraph-level embedding models. Finally, we analyze the performance of our models relative to baseline models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "Recall @ k considers the proportion of all groundtruth pairs found within the k (1, 5 or 10) most similar characters to each character (Manning et al., 2008) . Normalized Discounted Cumulative Gain @ k (nDCG @ k) is a precision metric that considers the proportion of predicted k most similar characters to each character that are in the ground-truth character-pairs. It also takes into account the order amongst top k predicted most similar characters (Wang et al., 2013) . Mean reciprocal rank (MRR) identifies the rank of the first correctly predicted most similar character for each character and averages the reciprocal of their ranks. (Voorhees, 2000) . Higher is better for all metrics.",
"cite_spans": [
{
"start": 135,
"end": 157,
"text": "(Manning et al., 2008)",
"ref_id": "BIBREF20"
},
{
"start": 453,
"end": 472,
"text": "(Wang et al., 2013)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [
{
"start": 641,
"end": 657,
"text": "(Voorhees, 2000)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "5.1"
},
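{
"text": "A minimal sketch of the three metrics under standard formulations; the paper does not give explicit formulas, so details such as the log base in nDCG and assigning a reciprocal rank of 0 when no correct character is retrieved are our assumptions:

import numpy as np

def recall_at_k(predictions, ground_truth, k):
    # predictions[i]: ranked list of predicted most similar characters for character i.
    # ground_truth[i]: set of characters sharing a trope with character i.
    hits = sum(len(set(pred[:k]) & gt) for pred, gt in zip(predictions, ground_truth))
    total = sum(len(gt) for gt in ground_truth)
    return hits / total

def ndcg_at_k(predictions, ground_truth, k):
    scores = []
    for pred, gt in zip(predictions, ground_truth):
        gains = [1.0 if c in gt else 0.0 for c in pred[:k]]
        dcg = sum(g / np.log2(i + 2) for i, g in enumerate(gains))
        ideal = sum(1.0 / np.log2(i + 2) for i in range(min(len(gt), k)))
        scores.append(dcg / ideal if ideal > 0 else 0.0)
    return float(np.mean(scores))

def mrr(predictions, ground_truth):
    ranks = []
    for pred, gt in zip(predictions, ground_truth):
        rank = next((i + 1 for i, c in enumerate(pred) if c in gt), None)
        ranks.append(1.0 / rank if rank else 0.0)
    return float(np.mean(ranks))
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "5.1"
},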
{
"text": "Baseline measurements were obtained for Google Universal Sentence Encoder-large (Cer et al., 2018) , BERT-base (Devlin et al., 2019) and Siamese-BERT-base 6 (Reimers and Gurevych, 2019) .",
"cite_spans": [
{
"start": 80,
"end": 98,
"text": "(Cer et al., 2018)",
"ref_id": "BIBREF7"
},
{
"start": 111,
"end": 132,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 157,
"end": 185,
"text": "(Reimers and Gurevych, 2019)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Models",
"sec_num": "5.2"
},
{
"text": "Google Universal Sentence Encoder-large model 7 (USE) on Tensorflow Hub was used to obtain a 512-dimensional vector representation of each character description. Bag of Words (BoW) was implemented by lowercasing all words and counting the number of times each word occurred in each character description. BERT embedding of 768 dimensions were obtained by average-pooling all the word embedding of tokens in the second-tolast layer, as recommended by (Xiao, 2018) . The English-base-uncased version 8 was used. For each type of embedding, the most similar characters were obtained by finding other characters whose embeddings are most cosine similar.",
"cite_spans": [
{
"start": 450,
"end": 462,
"text": "(Xiao, 2018)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Models",
"sec_num": "5.2"
},
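{
"text": "A minimal sketch of the BERT-embedding baseline described above (average pooling of the second-to-last layer), written against the HuggingFace transformers library rather than the bert-as-service tool of Xiao (2018); the function name and the masking of padding tokens are our assumptions:

import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
bert = BertModel.from_pretrained('bert-base-uncased', output_hidden_states=True)

@torch.no_grad()
def bert_baseline_embedding(description):
    enc = tokenizer(description, truncation=True, max_length=512, return_tensors='pt')
    hidden_states = bert(**enc).hidden_states    # tuple: embedding output + one tensor per layer
    second_to_last = hidden_states[-2]           # shape: (1, seq_len, 768)
    mask = enc['attention_mask'].unsqueeze(-1)   # ignore padding positions when averaging
    return (second_to_last * mask).sum(1) / mask.sum(1)   # (1, 768) mean token embedding
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Models",
"sec_num": "5.2"
},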
{
"text": "Siamese-BERT was obtained based on training a Siamese model architecture connected to a BERT base model on the training set in Section 4.1. We follow the optimal model configuration for sentence-pair classification tasks described in Reimers and Gurevych (2019) , which involves taking the mean of all tokens embeddings in the final layer. With the mean embedding for each character description, an absolute difference between them was taken. The mean embedding for character A, mean embedding for character B and their absolute difference was then entered into a feedforward neural network, which makes the prediction. Siamese-BERT was chosen as a baseline due to its outstanding performance in sentence-pair classifi-cation tasks such as Semantic Textual Similarity (Cer et al., 2017) and Natural Language Inference (Bowman et al., 2015; Williams et al., 2018) . For this baseline, the characters most similar to a character are those with the highest likelihood of being predicted IsSimilar with the character.",
"cite_spans": [
{
"start": 234,
"end": 261,
"text": "Reimers and Gurevych (2019)",
"ref_id": "BIBREF26"
},
{
"start": 768,
"end": 786,
"text": "(Cer et al., 2017)",
"ref_id": "BIBREF6"
},
{
"start": 818,
"end": 839,
"text": "(Bowman et al., 2015;",
"ref_id": "BIBREF5"
},
{
"start": 840,
"end": 862,
"text": "Williams et al., 2018)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Models",
"sec_num": "5.2"
},
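{
"text": "A minimal sketch of this Siamese-BERT baseline in the style of Reimers and Gurevych (2019): mean-pooled embeddings u and v of the two descriptions and their absolute difference |u - v| are concatenated and passed to a classifier. The class name, the single linear classification layer and the pooling code are our assumptions:

import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class SiameseBERT(nn.Module):
    def __init__(self, num_labels=2):
        super().__init__()
        self.bert = BertModel.from_pretrained('bert-base-uncased')
        self.classifier = nn.Linear(3 * self.bert.config.hidden_size, num_labels)

    def encode(self, enc):
        hidden = self.bert(**enc).last_hidden_state    # (batch, seq_len, 768)
        mask = enc['attention_mask'].unsqueeze(-1).float()
        return (hidden * mask).sum(1) / mask.sum(1)    # mean over non-padding tokens

    def forward(self, enc_a, enc_b):
        u, v = self.encode(enc_a), self.encode(enc_b)
        return self.classifier(torch.cat([u, v, torch.abs(u - v)], dim=-1))

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = SiameseBERT()
enc_a = tokenizer(['description of character A'], return_tensors='pt', truncation=True, padding=True)
enc_b = tokenizer(['description of character B'], return_tensors='pt', truncation=True, padding=True)
logits = model(enc_a, enc_b)   # (1, 2) scores over NotSimilar / IsSimilar
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Models",
"sec_num": "5.2"
},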
{
"text": "Step 1: Select",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Suitability of Siamese-BERT and CEM for",
"sec_num": "5.3"
},
{
"text": "While the prohibitively high computational demands of exhaustive pairwise comparison (\u2248 10 thousand GPU-hours) prevents a full-scale evaluation of the adequateness of Siamese-BERT and CEM for Step 1:Select, we conducted a small-scale experiment on 100 randomly chosen characters from the test set. First, an exhaustive pairwise comparison was conducted between these randomly chosen characters and all characters in the test set. From this, 100 characters with the highest CCM similarity value with each of the randomly chosen characters were identified. Next, various methods in Table 3 were attempted to identify 500 characters with the highest cosine similarity with the randomly chosen characters. Finally, the proportion of overlap between CCM and each method was calculated. Results demonstrate that Siamese-BERT and CEM have the greatest overlap and hence, the use of Siamese-BERT and CEM can select for the most number of highly similar characters to be refined by the CCM. ",
"cite_spans": [],
"ref_spans": [
{
"start": 580,
"end": 587,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Suitability of Siamese-BERT and CEM for",
"sec_num": "5.3"
},
{
"text": "Step 2: Refine Based on Figure 2 , the ideal top_n for the Select and Refine model with Siamese-BERT varies between 7 and 25 depending on the metric that is optimised for. In general, a lower value for top_n is preferred when optimizing for Recall@k and nDCG@k with smaller values of k. The metrics reported in Table 4 consist of the optimal value for each metric at various top_n. On the other hand, there is no ideal value for top_n when using the Select and Refine model with CEM. Instead, the metrics continue to improve over large values of top_n, albeit at a gradually reduced rate. However, due to practical considerations relating to GPU computation time, we terminated our search at top_n = 500 and report metrics for that value of top_n.",
"cite_spans": [],
"ref_spans": [
{
"start": 24,
"end": 32,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 311,
"end": 318,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Selecting hyper-parameter top_n for",
"sec_num": "5.4"
},
{
"text": "Together, this means that the Select and Refine model using Siamese-BERT achieves peak performance with significant less computational resources compared to the one using CEM (2-6 GPUhours vs. 125 GPU-hours).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selecting hyper-parameter top_n for",
"sec_num": "5.4"
},
{
"text": "As shown in Table 4 , the highest value for all metrics lies below 40% suggesting that identifying similar characters is a novel and challenging task. This is because there are only very few correct answers (characters from the same trope) out of 27,000 possible characters. The poor performance of the Bag-of-Words baseline also demonstrates that abstract semantic similarity between characters is significantly different from their superficial lexical similarity. In face of such challenges, the Select and Refine model using Siamese-BERT performed 9-27 % better on all metrics than the best performing paragraph-embedding-based baseline. This suggests the importance of refining initial selection of candidates instead of using them directly, even when the baseline model has relatively good performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Comparing Select and Refine models with baseline models",
"sec_num": "5.5"
},
{
"text": "Comparing the Select and Refine models, Siamese-BERT performed much better than CEM Recall @ k (in %) nDCG @ k (in %) MRR k = 1 k = 5 k = 10 k = 1 k = 5 k = 10 (in %) while having a significantly low top_n, which means that less computational resources is required. The superior performance and efficiency of Siamese-BERT means that it is more suitable for",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing Select and Refine models with baseline models",
"sec_num": "5.5"
},
{
"text": "Step 1: Select. This is likely caused by the higher performance of Siamese-BERT as a baseline model. While it was surprising that using Siamese-BERT outperformed CEM, which directly shares weights with the CCM, such an observation also shows the relatively low coupling between the Select and Refine steps. This means that the Select and Refine approach that we propose can continue to be relevant when model architectures that are more optimized for each step are introduced in the future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing Select and Refine models with baseline models",
"sec_num": "5.5"
},
{
"text": "The significantly higher performance of Select and Refine models can be attributed to the ability of underlying BERT NSP architecture in our CCM to consider complex word relationships across the two character descriptions. A manual examination of correct pairs captured only by Select and Refine models but not baseline models revealed that these pairs often contain words relating to multiple common aspects. As an example, one character description contains \"magic, enchanter\" and \"training, candidate, learn\" while the other character in the ground-truth pair contains \"spell, wonder, sphere\" and \"researched, school\". Compressing these word-level aspects into a fixed-length vector would cause some important semantic information -such as the inter-relatedness between aspects -to be lost (Conneau et al., 2018) . As a result, capturing similarities between these pairs prove to be difficult in baseline models, leading to sub-optimal ranking of the most similar characters.",
"cite_spans": [
{
"start": 793,
"end": 815,
"text": "(Conneau et al., 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing Select and Refine models with baseline models",
"sec_num": "5.5"
},
{
"text": "6 Implications for understanding themes in real-life experiences",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing Select and Refine models with baseline models",
"sec_num": "5.5"
},
{
"text": "To demonstrate the potential applications of this study in understanding human experiences, we designed a task that can show how the model can be used with zero-shot transfer learning. Specifically, we used our model to identify the movie-characters that are most fitting to a description of people's life experiences. To do this, we collected 50 posts describing people's real-life experiences from a forum r/OffMyChest on Reddit 9 , on which people share their life experiences with strangers online. Then, we used our models to identify 10 movie characters (from our test set) that are most befitting to each post. For each of these 10 movie characters suggested by model, three graduate students independently rated whether the character matches the concepts, ideas and themes expressed in each post, while blind to information on which model the characters were generated by. Because the extent of similarity between a movie character and a Reddit post can be ambiguous, a binary annotation was chosen over a Likert scale for clarity of annotation. Annotators were instructed to annotate \"similar\" when they can specify at least one area of overlap between the concepts, ideas and themes of a Reddit post and a movie character. Examples of some characters that are indicated as \"similar\" to two posts are shown in Appendix A. Annotators agree on 94.2% of labels (Cohen's \u03ba = 0.934). Where the annotators disagree, the majority opinion out of three is taken. From these annotations, Precision @ k (in %) k = 1 k = 5 k = 10",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relating movie characters to Reddit posts",
"sec_num": "6.1"
},
{
"text": "Siamese-BERT 98.0 (14.0) 92.4 (14.4) 87.0 (8.79) CEM 82.0 (39.6) 77.6 (17.9) 70. Precision @ k is calculated, considering the proportion of all characters identified within the k (1, 5 or 10) that are labelled as \"similar\" (Manning et al., 2008) .",
"cite_spans": [
{
"start": 223,
"end": 245,
"text": "(Manning et al., 2008)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Select and Refine models",
"sec_num": null
},
{
"text": "In Table 5 , the performance of our Select and Refine models reflects a similar extent of improvement compared to our main learning task. This shows that the model that was trained to disambiguate movie character similarity can also determine the extent of similarity between movie characters and people's life experiences. Beyond the relative performance gains, the Select and Refine model on this task also demonstrates an excellent absolute performance of precision @ 1 = 98.00%. This means that our model can be used on this task without any fine-tuning.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Select and Refine models",
"sec_num": null
},
{
"text": "Illustrating the difference in performance of the various models in Table 6 , the better performing models on this task are generally better at capturing thematic similarities in terms of the abstract sense of recollection and memory, which are thematically more related to the Reddit post. Our Select and Refine model (with Siamese-BERT) is particularly effective at capturing both a sense of recollection as well as a sense of reverence towards a respected figure (historical figure and father respectively) . In contrary, the poorer performing models contain phrase-level semantic overlap (USE: picture with facial recognition; BoW: killed and passed away; eyes and recognize) but fail to capture thematic resemblance. This suggests our learning of similarities between movie characters of the same trope can effectively transfer onto thematic similarities between written human experiences and movie characters.",
"cite_spans": [],
"ref_spans": [
{
"start": 68,
"end": 75,
"text": "Table 6",
"ref_id": "TABREF9"
},
{
"start": 459,
"end": 509,
"text": "figure (historical figure and father respectively)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Select and Refine models",
"sec_num": null
},
{
"text": "We are excited about the diversity of research directions that this study can complement. One possible area is social media analysis (Zirikly et al., 2019; Amir et al., 2019; Hauser et al., 2019) . Researchers can make use of movie characters with known experiences (e.g. mental health, personal circumstances or individual interests) to identify similar experiences in social media when collecting large amounts of text labelled with such experiences directly is difficult.",
"cite_spans": [
{
"start": 133,
"end": 155,
"text": "(Zirikly et al., 2019;",
"ref_id": "BIBREF37"
},
{
"start": 156,
"end": 174,
"text": "Amir et al., 2019;",
"ref_id": "BIBREF0"
},
{
"start": 175,
"end": 195,
"text": "Hauser et al., 2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future directions",
"sec_num": "6.2"
},
{
"text": "Another area would be personalizing dialogue agents (Tigunova et al., 2020; Zhang et al., 2018) . In the context of limited personality-related training data, movie characters with personality that are similar to a desired dialogue agent can be found. Using this, a dialogue agent can be trained with movie subtitle language data (involving the identified movie character). Thereby, the augmented linguistic data enables the dialogue agent to have a well-defined, distinct and consistent personality.",
"cite_spans": [
{
"start": 52,
"end": 75,
"text": "(Tigunova et al., 2020;",
"ref_id": "BIBREF31"
},
{
"start": 76,
"end": 95,
"text": "Zhang et al., 2018)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future directions",
"sec_num": "6.2"
},
{
"text": "A final area that can benefit from this study is media recommendations (Rafailidis et al., 2017) . Users might be suggested media content based on the extent to which movie characters resonate with their own/friends' experiences. Additionally, with social environments being formed in games (particularly social simulation games such as Animal Crossing, The Sims and Pokemon) as well as in virtual reality (Chu et al., 2020) , participants can even assume the identity of movie characters that they are similar to, so as to have an interesting and immersive experience.",
"cite_spans": [
{
"start": 71,
"end": 96,
"text": "(Rafailidis et al., 2017)",
"ref_id": "BIBREF24"
},
{
"start": 406,
"end": 424,
"text": "(Chu et al., 2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future directions",
"sec_num": "6.2"
},
{
"text": "My father passed away when I was 6 so I didn't really remember much of him but the fact that I didn't recognize his picture saddens me.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reddit post",
"sec_num": null
},
{
"text": "Siamese-BERT Sisko in Star Trek: Deep Space Nine (Past Tense) When he encountered an entry about the historical figure, passed comment about how closely Sisko resembled a picture of him (the picture, of course, being that of Sisko.) CEM Roxas in Kingdom Hearts: Chain of Memories His memories are wiped by Ansem the Wise and placed in a simulated world with a completely new identity",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Select and Refine",
"sec_num": null
},
{
"text": "Siamese-BERT Audrina, My Sweet Audrina by V.C Andrews is a girl living in the constant shadow of her elder sister who had died nine years before she was born CEM Macsen Wledig in The Mabinogion An amazing memory was an important necessity to the job, but remembering many long stories was much more important than getting one right after days of wandering around madly muttering BERT Kira in Push is made to think that her entire relationship with Nick was a false memory that she gave him and she's been pushing his thoughts the entire time they were together. USE EyeRobot in Fallout: New Vegas can recognize your face and voice with advanced facial and auditory recognition technology BoW",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": null
},
{
"text": "Magneto took Ron the Death Eater Up to Eleven to show him as he \"truly\" was in Morrison's eyes, and ended with him (intended as) Killed Off for Real ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": null
},
{
"text": "We introduce a pioneering study on identifying similar movie characters through weakly supervised learning. Based on this task, we introduce a novel Select-and-Refine approach that allows us to match characters belonging to a common theme, which simultaneously optimize for efficiency and performance. Using this trained model, we demonstrate the potential applications of this study in identifying movie characters that are similar to human experiences as presented in Reddit posts, without any fine-tuning. This represents an early step into understanding the complexity and richness of our human experience, which is not only interesting in itself but can also complement research in social media analysis, personalizing dialogue agents and media recommendations/interactions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "My father passed away when I was 6 so I didn't really remember much of him but the fact that I didn't recognize his picture saddens me.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reddit post",
"sec_num": null
},
{
"text": "Movie characters 1. Sisko in Star Trek: Deep Space Nine (Past Tense) When he encountered an entry about the historical figure, passed comment about how closely Sisko resembled a picture of him (the picture, of course, being that of Sisko.) 2. Arator the Redeemer in World of Warcraft As Arator never knew his father, he asks several of the veteran members of Alliance Expedition about Turalyon for information and leads on Turalyon's current location. Several people then gave their opinion on how great a guy Turalyon was, but sadly, he has been MIA for 15 years. 3. Kira in Push The reality of a photo taken at Coney Island is the key evidence that causes her to realize that this was a fake memory. 4. Todd Aldridge in Mindwarp Todd shows up back in town; to him, there was a bright light one night, and he returned several months later with no knowledge of the intervening period. 5. Parker Girls in Stranger in Paradise However, when the operation collapsed after the death of Darcy Parker many Parker Girls were trapped in their cover identities, unable to extricate themselves from the lives they had established.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reddit post",
"sec_num": null
},
{
"text": "The black ladies I work with make me feel the most loved I've felt in years. I've had a horrible past 10 years. Childhood trauma and depression, addiction, abuse etc Movie characters 1. Shinjiro Aragaki in Persona 3 First of all, he's an orphan. During those two years, he began taking drugs to help control his Persona. Said drugs are slowly killing him. He has his own Social Link with the female protagonist where it becomes painfully clear that he really is a nice guy, and he slowly falls in love with her. 2. Mami in Breath of Fire IV Country Mouse finds King in the Mountain God-Emperor that The Empire (that aforementioned God-Emperor founded) is trying very, very hard to kill. Country Mouse Mami nurses God-Emperor Fou-lu back to health. Mami and Fou-lu end up falling in love. 3. Emi in Katawa Shoujo The loss of her legs was traumatic, but she learned to cope with that well. The loss of her dad she did not cope with at all. Part of getting her happy ending is to help her deal with her loss. 4. Harry in Harry Potter Harry reaches out, has friends, and even in the moments when the school turns against him, he still has a full blown group of True Companions to help him, thus making him well adjusted and pretty close to normal. 5. Commander Shepard in the Mass Effect series If the right dialogue is chosen, s/he's cynical and bitter with major emotional scars from his/her past experiences. It becomes pretty clear how emotionally burned out s/he really is. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reddit post",
"sec_num": null
},
{
"text": "https://allthetropes.org",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://allthetropes.org 4 12-layer, 768-hidden, 12-heads, 110M parameters with only Next Sentence Prediction loss, accessed from https: //github.com/huggingface/transformers 5 https://github.com/huggingface/ transformers/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "12-layer, 768-hidden, 12-heads and 110M parameters 7 https://tfhub.dev/google/universal-sentence-encoderlarge/3 8 12-layer, 768-hidden, 12-heads and 110M parameters",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.reddit.com/r/offmychest/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers for their helpful feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Mental health surveillance over social media with digital cohorts",
"authors": [
{
"first": "Silvio",
"middle": [],
"last": "Amir",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
},
{
"first": "John",
"middle": [
"W"
],
"last": "Ayers",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Sixth Workshop on Computational Linguistics and Clinical Psychology",
"volume": "",
"issue": "",
"pages": "114--120",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3013"
]
},
"num": null,
"urls": [],
"raw_text": "Silvio Amir, Mark Dredze, and John W. Ayers. 2019. Mental health surveillance over social media with digital cohorts. In Proceedings of the Sixth Work- shop on Computational Linguistics and Clinical Psy- chology, pages 114-120, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Learning latent personas of film characters",
"authors": [
{
"first": "David",
"middle": [],
"last": "Bamman",
"suffix": ""
},
{
"first": "O'",
"middle": [],
"last": "Brendan",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Connor",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "352--361",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Bamman, Brendan O'Connor, and Noah A Smith. 2013. Learning latent personas of film char- acters. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 352-361.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A bayesian mixed effects model of literary character",
"authors": [
{
"first": "David",
"middle": [],
"last": "Bamman",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Underwood",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "370--379",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Bamman, Ted Underwood, and Noah A Smith. 2014. A bayesian mixed effects model of literary character. In Proceedings of the 52nd Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 370-379.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Multitask learning for mental health conditions with limited social media data",
"authors": [
{
"first": "Adrian",
"middle": [],
"last": "Benton",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "152--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adrian Benton, Margaret Mitchell, and Dirk Hovy. 2017. Multitask learning for mental health condi- tions with limited social media data. In Proceed- ings of the 15th Conference of the European Chap- ter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 152-162, Valencia, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Latent dirichlet allocation",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 2003,
"venue": "J. Mach. Learn. Res",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. J. Mach. Learn. Res., 3:993-1022.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A large annotated corpus for learning natural language inference",
"authors": [
{
"first": "R",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Potts",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large anno- tated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "I\u00f1igo",
"middle": [],
"last": "Lopez-Gazpio",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)",
"volume": "",
"issue": "",
"pages": "1--14",
"other_ids": {
"DOI": [
"10.18653/v1/S17-2001"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Mona Diab, Eneko Agirre, I\u00f1igo Lopez- Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1-14, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Universal sentence encoder for english",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Yinfei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Sheng-Yi",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Hua",
"suffix": ""
},
{
"first": "Nicole",
"middle": [],
"last": "Limtiaco",
"suffix": ""
},
{
"first": "Rhomni",
"middle": [],
"last": "St John",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Constant",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Guajardo-Cespedes",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Tar",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "169--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder for english. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 169-174.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Expressive telepresence via modular codec avatars",
"authors": [
{
"first": "Hang",
"middle": [],
"last": "Chu",
"suffix": ""
},
{
"first": "Shugao",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "De La",
"suffix": ""
},
{
"first": "Torre",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2020,
"venue": "Sanja Fidler, and Yaser Sheikh",
"volume": "",
"issue": "",
"pages": "330--345",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hang Chu, Shugao Ma, Fernando De la Torre, Sanja Fidler, and Yaser Sheikh. 2020. Expressive telepres- ence via modular codec avatars. In Computer Vision -ECCV 2020, pages 330-345, Cham. Springer Inter- national Publishing.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "German",
"middle": [],
"last": "Kruszewski",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2126--2136",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1198"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, German Kruszewski, Guillaume Lam- ple, Lo\u00efc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 2126-2136, Melbourne, Aus- tralia. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Social media, big data, and mental health: current advances and ethical implications",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Conway",
"suffix": ""
},
{
"first": "O'",
"middle": [],
"last": "Daniel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Connor",
"suffix": ""
}
],
"year": 2016,
"venue": "Social media and applications to health behavior",
"volume": "9",
"issue": "",
"pages": "77--82",
"other_ids": {
"DOI": [
"10.1016/j.copsyc.2016.01.004"
]
},
"num": null,
"urls": [],
"raw_text": "Mike Conway and Daniel O'Connor. 2016. Social me- dia, big data, and mental health: current advances and ethical implications. Current Opinion in Psy- chology, 9:77 -82. Social media and applications to health behavior.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A comparison of interpersonal scripts in clinically depressed versus nondepressed individuals",
"authors": [
{
"first": "Amy",
"middle": [],
"last": "Demorest",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Crits-Christoph",
"suffix": ""
},
{
"first": "Mary",
"middle": [],
"last": "Hatch",
"suffix": ""
},
{
"first": "Lester",
"middle": [],
"last": "Luborsky",
"suffix": ""
}
],
"year": 1999,
"venue": "Journal of Research in Personality",
"volume": "33",
"issue": "3",
"pages": "265--280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amy Demorest, Paul Crits-Christoph, Mary Hatch, and Lester Luborsky. 1999. A comparison of interper- sonal scripts in clinically depressed versus nonde- pressed individuals. Journal of Research in Person- ality, 33(3):265-280.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Automatically constructing a corpus of sentential paraphrases",
"authors": [
{
"first": "William",
"middle": [
"B"
],
"last": "Dolan",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Third International Workshop on Paraphrasing (IWP2005)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William B. Dolan and Chris Brockett. 2005. Automati- cally constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Inducing semantic micro-clusters from deep multi-view representations of novels",
"authors": [
{
"first": "Lea",
"middle": [],
"last": "Frermann",
"suffix": ""
},
{
"first": "Gy\u00f6rgy",
"middle": [],
"last": "Szarvas",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1873--1883",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lea Frermann and Gy\u00f6rgy Szarvas. 2017. Inducing semantic micro-clusters from deep multi-view rep- resentations of novels. In Proceedings of the 2017 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1873-1883.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Reddit: A gold mine for personality prediction",
"authors": [
{
"first": "Matej",
"middle": [],
"last": "Gjurkovi\u0107",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "\u0160najder",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Second Workshop on Computational Modeling of People's Opinions, Personality, and Emotions in Social Media",
"volume": "",
"issue": "",
"pages": "87--97",
"other_ids": {
"DOI": [
"10.18653/v1/W18-1112"
]
},
"num": null,
"urls": [],
"raw_text": "Matej Gjurkovi\u0107 and Jan \u0160najder. 2018. Reddit: A gold mine for personality prediction. In Proceedings of the Second Workshop on Computational Modeling of People's Opinions, Personality, and Emotions in Social Media, pages 87-97, New Orleans, Louisiana, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Using natural conversations to classify autism with limited data: Age matters",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Hauser",
"suffix": ""
},
{
"first": "Evangelos",
"middle": [],
"last": "Sariyanidi",
"suffix": ""
},
{
"first": "Birkan",
"middle": [],
"last": "Tunc",
"suffix": ""
},
{
"first": "Casey",
"middle": [],
"last": "Zampella",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Brodkin",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Schultz",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Parish-Morris",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Sixth Workshop on Computational Linguistics and Clinical Psychology",
"volume": "",
"issue": "",
"pages": "45--54",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3006"
]
},
"num": null,
"urls": [],
"raw_text": "Michael Hauser, Evangelos Sariyanidi, Birkan Tunc, Casey Zampella, Edward Brodkin, Robert Schultz, and Julia Parish-Morris. 2019. Using natural con- versations to classify autism with limited data: Age matters. In Proceedings of the Sixth Workshop on Computational Linguistics and Clinical Psychology, pages 45-54, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Universal language model fine-tuning for text classification",
"authors": [
{
"first": "Jeremy",
"middle": [],
"last": "Howard",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1801.06146"
]
},
"num": null,
"urls": [],
"raw_text": "Jeremy Howard and Sebastian Ruder. 2018. Univer- sal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Feuding families and former friends: Unsupervised learning for dynamic fictional relationships",
"authors": [
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Anupam",
"middle": [],
"last": "Guha",
"suffix": ""
},
{
"first": "Snigdha",
"middle": [],
"last": "Chaturvedi",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": "III"
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1534--1544",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohit Iyyer, Anupam Guha, Snigdha Chaturvedi, Jor- dan Boyd-Graber, and Hal Daum\u00e9 III. 2016. Feud- ing families and former friends: Unsupervised learn- ing for dynamic fictional relationships. In Proceed- ings of the 2016 Conference of the North Ameri- can Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1534-1544.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Finding Similar Items",
"authors": [
{
"first": "Jure",
"middle": [],
"last": "Leskovec",
"suffix": ""
},
{
"first": "Anand",
"middle": [],
"last": "Rajaraman",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [
"David"
],
"last": "Ullman",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "78--137",
"other_ids": {
"DOI": [
"10.1017/9781108684163.004"
]
},
"num": null,
"urls": [],
"raw_text": "Jure Leskovec, Anand Rajaraman, and Jeffrey David Ullman. 2020. Finding Similar Items, 3 edition, page 78-137. Cambridge University Press.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Introduction to Information Retrieval",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Prabhakar",
"middle": [],
"last": "Raghavan",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning, Prabhakar Raghavan, and Hinrich Sch\u00fctze. 2008. Introduction to Information Retrieval. Cambridge University Press, Cambridge, UK.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "SemEval-2017 task 3: Community question answering",
"authors": [
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Doris",
"middle": [],
"last": "Hoogeveen",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Hamdy",
"middle": [],
"last": "Mubarak",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Karin",
"middle": [],
"last": "Verspoor",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)",
"volume": "",
"issue": "",
"pages": "27--48",
"other_ids": {
"DOI": [
"10.18653/v1/S17-2003"
]
},
"num": null,
"urls": [],
"raw_text": "Preslav Nakov, Doris Hoogeveen, Llu\u00eds M\u00e0rquez, Alessandro Moschitti, Hamdy Mubarak, Timothy Baldwin, and Karin Verspoor. 2017. SemEval-2017 task 3: Community question answering. In Proceed- ings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 27-48, Vancou- ver, Canada. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Computational sociolinguistics: A survey",
"authors": [
{
"first": "Dong",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "A",
"middle": [
"Seza"
],
"last": "Dogru\u00f6z",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [
"P"
],
"last": "Ros\u00e9",
"suffix": ""
},
{
"first": "Franciska",
"middle": [],
"last": "De Jong",
"suffix": ""
}
],
"year": 2016,
"venue": "Computational Linguistics",
"volume": "42",
"issue": "3",
"pages": "537--593",
"other_ids": {
"DOI": [
"10.1162/COLI_a_00258"
]
},
"num": null,
"urls": [],
"raw_text": "Dong Nguyen, A. Seza Dogru\u00f6z, Carolyn P. Ros\u00e9, and Franciska de Jong. 2016. Computational soci- olinguistics: A survey. Computational Linguistics, 42(3):537-593.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.05365"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. arXiv preprint arXiv:1802.05365.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Preference dynamics with multimodal user-item interactions in social media recommendation",
"authors": [
{
"first": "D",
"middle": [],
"last": "Rafailidis",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Kefalas",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Manolopoulos",
"suffix": ""
}
],
"year": 2017,
"venue": "Expert Systems with Applications",
"volume": "74",
"issue": "",
"pages": "11--18",
"other_ids": {
"DOI": [
"10.1016/j.eswa.2017.01.005"
]
},
"num": null,
"urls": [],
"raw_text": "D. Rafailidis, P. Kefalas, and Y. Manolopoulos. 2017. Preference dynamics with multimodal user-item in- teractions in social media recommendation. Expert Systems with Applications, 74:11 -18.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Exploring the limits of transfer learning with a unified text-to-text transformer",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text trans- former. ArXiv, abs/1910.10683.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Sentence-BERT: Sentence embeddings using Siamese BERTnetworks",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3982--3992",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1410"
]
},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- BERT: Sentence embeddings using Siamese BERT- networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Online nearduplicate detection of news articles",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Rodier",
"suffix": ""
},
{
"first": "Dave",
"middle": [],
"last": "Carter",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "1242--1249",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Rodier and Dave Carter. 2020. Online near- duplicate detection of news articles. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 1242-1249, Marseille, France. European Language Resources Association.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Atomic: An atlas of machine commonsense for ifthen reasoning",
"authors": [
{
"first": "Maarten",
"middle": [],
"last": "Sap",
"suffix": ""
},
{
"first": "Ronan",
"middle": [],
"last": "Le Bras",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Allaway",
"suffix": ""
},
{
"first": "Chandra",
"middle": [],
"last": "Bhagavatula",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Lourie",
"suffix": ""
},
{
"first": "Hannah",
"middle": [],
"last": "Rashkin",
"suffix": ""
},
{
"first": "Brendan",
"middle": [],
"last": "Roof",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "3027--3035",
"other_ids": {
"DOI": [
"10.1609/aaai.v33i01.33013027"
]
},
"num": null,
"urls": [],
"raw_text": "Maarten Sap, Ronan Le Bras, Emily Allaway, Chan- dra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A. Smith, and Yejin Choi. 2019. Atomic: An atlas of machine commonsense for if- then reasoning. Proceedings of the AAAI Confer- ence on Artificial Intelligence, 33(01):3027-3035.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Natural language understanding with the quora question pairs dataset",
"authors": [
{
"first": "Lakshay",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Graesser",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Utku",
"middle": [],
"last": "Evci",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lakshay Sharma, Laura Graesser, Nikita Nangia, and Utku Evci. 2019. Natural language understand- ing with the quora question pairs dataset. CoRR, abs/1907.01041.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Manual for Coding Events in Self-Defining Memories. Unpublished Manuscript",
"authors": [
{
"first": "Avril",
"middle": [],
"last": "Thorne",
"suffix": ""
},
{
"first": "Kate",
"middle": [
"C"
],
"last": "Mclean",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Avril Thorne and Kate C. McLean. 2001. Manual for Coding Events in Self-Defining Memories. Unpub- lished Manuscript.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "CHARM: Inferring personal attributes from conversations",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Tigunova",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Yates",
"suffix": ""
},
{
"first": "Paramita",
"middle": [],
"last": "Mirza",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "5391--5404",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna Tigunova, Andrew Yates, Paramita Mirza, and Gerhard Weikum. 2020. CHARM: Inferring per- sonal attributes from conversations. In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5391-5404, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "The trec-8 question answering track report",
"authors": [
{
"first": "Ellen",
"middle": [
"M"
],
"last": "Voorhees",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellen M Voorhees. 2000. The trec-8 question answer- ing track report. Technical report.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "A theoretical analysis of ndcg type ranking measures",
"authors": [
{
"first": "Yining",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Liwei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yuanzhi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2013,
"venue": "COLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yining Wang, Liwei Wang, Yuanzhi Li, Di He, and Tie- Yan Liu. 2013. A theoretical analysis of ndcg type ranking measures. In COLT.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "A broad-coverage challenge corpus for sentence understanding through inference",
"authors": [
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1112--1122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122. Association for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "bert-as-service",
"authors": [
{
"first": "Han",
"middle": [],
"last": "Xiao",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Han Xiao. 2018. bert-as-service. https:// github.com/hanxiao/bert-as-service.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Personalizing dialogue agents: I have a dog, do you have pets too?",
"authors": [
{
"first": "Saizheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Jack",
"middle": [],
"last": "Urbanek",
"suffix": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Szlam",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2204--2213",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1205"
]
},
"num": null,
"urls": [],
"raw_text": "Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Per- sonalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204- 2213, Melbourne, Australia. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "CLPsych 2019 shared task: Predicting the degree of suicide risk in Reddit posts",
"authors": [
{
"first": "Ayah",
"middle": [],
"last": "Zirikly",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
},
{
"first": "\u00d6zlem",
"middle": [],
"last": "Uzuner",
"suffix": ""
},
{
"first": "Kristy",
"middle": [],
"last": "Hollingshead",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Sixth Workshop on Computational Linguistics and Clinical Psychology",
"volume": "",
"issue": "",
"pages": "24--33",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3003"
]
},
"num": null,
"urls": [],
"raw_text": "Ayah Zirikly, Philip Resnik, \u00d6zlem Uzuner, and Kristy Hollingshead. 2019. CLPsych 2019 shared task: Predicting the degree of suicide risk in Reddit posts. In Proceedings of the Sixth Workshop on Computa- tional Linguistics and Clinical Psychology, pages 24-33, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "Workflow of finding most similar characters: BERT NSP model is first trained on the training set (Section 4.",
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"uris": null,
"text": "Percent change in metrics with each additional top_n for Select and Refine model with Siamese-BERT. Average smoothing applied over a range of 10 to improve clarity. Points annotated where each metric is at 0.",
"type_str": "figure"
},
"TABREF0": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>: Character descriptions from the trope \"Driven</td></tr><tr><td>by Envy\"</td></tr></table>",
"html": null,
"text": ""
},
"TABREF2": {
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "Descriptive statistics of dataset"
},
"TABREF4": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>similarity value that overlaps with each method for Step</td></tr><tr><td>1: Select</td></tr></table>",
"html": null,
"text": "Proportion of 100 characters with high CCM"
},
"TABREF6": {
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "Performance of Select and Refine models compared to baseline models. Higher is better for all metrics."
},
"TABREF8": {
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "Precision @ k (std. dev.) for movie characters identified by each model."
},
"TABREF9": {
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "Most similar character predicted by each model to a post from Reddit r/OffMyChest. Excerpts of Reddit post mildly paraphrased to protect anonymity."
},
"TABREF10": {
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "Excepts from Posts from Reddit r/OffMyChest to five similar movie characters. Excerpts of Reddit posts mildly paraphrased to protect anonymity."
}
}
}
}