{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:42:59.442063Z"
},
"title": "Metaphor Detection Using Contextual Word Embeddings From Transformers",
"authors": [
{
"first": "Jerry",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Duke University",
"location": {}
},
"email": ""
},
{
"first": "Nathan",
"middle": [],
"last": "O'hara",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Duke University",
"location": {}
},
"email": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rubin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Duke University",
"location": {}
},
"email": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Draelos",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Duke University",
"location": {}
},
"email": ""
},
{
"first": "Cynthia",
"middle": [],
"last": "Rudin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Duke University",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The detection of metaphors can provide valuable information about a given text and is crucial to sentiment analysis and machine translation. In this paper, we outline the techniques for word-level metaphor detection used in our submission to the Second Shared Task on Metaphor Detection. We propose using both BERT and XLNet language models to create contextualized embeddings and a bidirectional LSTM to identify whether a given word is a metaphor. Our best model achieved F1-scores of 68.0% on VUA AllPOS, 73.0% on VUA Verbs, 66.9% on TOEFL AllPOS, and 69.7% on TOEFL Verbs, placing 7th, 6th, 5th, and 5th respectively. In addition, we outline another potential approach with a KNN-LSTM ensemble model that we did not have enough time to implement given the deadline for the competition. We show that a KNN classifier provides a similar F1-score on a validation set as the LSTM and yields different information on metaphors.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "The detection of metaphors can provide valuable information about a given text and is crucial to sentiment analysis and machine translation. In this paper, we outline the techniques for word-level metaphor detection used in our submission to the Second Shared Task on Metaphor Detection. We propose using both BERT and XLNet language models to create contextualized embeddings and a bidirectional LSTM to identify whether a given word is a metaphor. Our best model achieved F1-scores of 68.0% on VUA AllPOS, 73.0% on VUA Verbs, 66.9% on TOEFL AllPOS, and 69.7% on TOEFL Verbs, placing 7th, 6th, 5th, and 5th respectively. In addition, we outline another potential approach with a KNN-LSTM ensemble model that we did not have enough time to implement given the deadline for the competition. We show that a KNN classifier provides a similar F1-score on a validation set as the LSTM and yields different information on metaphors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A metaphor is a form of figurative language that creates a link between two different concepts and conveys rich linguistic information (Lakofi and Johnson, 1980) . The complex information that accompanies a metaphorical text is often overlooked in sentiment analysis, machine translation, and information extraction. Therefore, the detection of metaphors is an important task in order to achieve the full potential of many applications in natural language processing (Tsvetkov et al., 2014) .",
"cite_spans": [
{
"start": 135,
"end": 161,
"text": "(Lakofi and Johnson, 1980)",
"ref_id": "BIBREF4"
},
{
"start": 467,
"end": 490,
"text": "(Tsvetkov et al., 2014)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The differences between a metaphorical text and a non-metaphorical text can be subtle and require specific domain information. For instance, in the phrase the trajectory of your legal career the word trajectory is used metaphorically. To identify this metaphor, both the meaning of the word in the context of the sentence and its literal definition must be recognized and compared. In this case, the word trajectory is used to describe the path of a legal career in the sentence, whereas its basic definition involves the path of a projectile. As a result of the ambiguity present in determining the basic meaning of a word, as well as whether it deviates significantly from a contextual use, detecting metaphors at a word-level can be challenging even for humans. Additionally, the Metaphor Identification Procedure used to label the datasets (MIPVU) accounts for multiple kinds of metaphors (Steen et al., 2010) . Capturing implicit, complex metaphors may require different information than capturing direct, simple metaphors.",
"cite_spans": [
{
"start": 893,
"end": 913,
"text": "(Steen et al., 2010)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper describes the techniques that we utilized in the Second Shared Task on Metaphor Detection. The competition provided two datasets: a subset of ETS Corpus of Non-Native Written English, which contains essays written by test-takers for the TOEFL test and was annotated for argumentation relevant metaphors, and the VU Amsterdam Metaphor Corpus (VUA) dataset, which consists of text fragments sampled across four genres from the British National Corpus (BNC) -Academic, News, Conversation, and Fiction. For each dataset, participants could compete in two tracks: identifying metaphors of all parts of speech (AllPOS) or verbs only (Verbs).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our final submission uses pretrained BERT (Devlin et al., 2018) and XLNet transformer models, part-of-speech (POS) labels, and a two-layer bi-directional long short-term memory (Bi-LSTM) neural network architecture. BERT and XLNet are used to generate contextualized word embeddings, which are then combined with POS tags and fed through the Bi-LSTM to predict metaphoricity for each word. By creating contex-tualized word embeddings using transformers, we hoped to capture more long-range interdependencies between words than would be possible using methods such as word2vec, GloVe, or fastText. Indeed, our model achieved F1-scores of 68.0% on VUA AllPOS and 73.0% on VUA Verbs, improving upon results from the First Shared Task (Leong et al., 2018 ). On the TOEFL task, we achieved F1scores of 66.9% on AllPOS, and 69.7% on Verbs. Our scores placed 7th, 6th, 5th, and 5th respectively in the Second Shared Task on Metaphor Detection (Leong et al., 2020).",
"cite_spans": [
{
"start": 42,
"end": 63,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF2"
},
{
"start": 731,
"end": 750,
"text": "(Leong et al., 2018",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Historically, approaches to automatic metaphor detection have focused on hand-crafting a set of informative features for every word and applying a supervised machine learning algorithm to classify words as metaphorical or non-metaphorical. Previous works have explored features including POS tags, concreteness, imageability, semantic distributions, and semantic classes as characterized through SUMO ontology, WordNet, and VerbNet (Beigman Klebanov et al., 2014; Tsvetkov et al., 2014; Dunn, 2013; Mohler et al., 2013) .",
"cite_spans": [
{
"start": 432,
"end": 463,
"text": "(Beigman Klebanov et al., 2014;",
"ref_id": "BIBREF0"
},
{
"start": 464,
"end": 486,
"text": "Tsvetkov et al., 2014;",
"ref_id": "BIBREF14"
},
{
"start": 487,
"end": 498,
"text": "Dunn, 2013;",
"ref_id": "BIBREF3"
},
{
"start": 499,
"end": 519,
"text": "Mohler et al., 2013)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "Deep learning methods have also been employed for automatic metaphor detection. In the First Shared Task on Metaphor Detection, the top three highest scoring teams all employed an LSTM model with word embeddings and additional features (Leong et al., 2018) . Stemle and Onysko (2018) trained fastText word embeddings on various native and non-native English corpora, and passed the sequences of embeddings to an Bi-LSTM. The highest-performing model from Bizzoni and Ghanimifard (2018) employed a Bi-LSTM on GloVe embeddings and concreteness ratings for each word. appended POS and semantic class information to pretrained word2vec word embeddings, and utilized a CNN in addition to a Bi-LSTM in order to better capture local and global contextual information. In all these cases, the word embeddings used are contextindependent: the same word appearing in two different sentences will nonetheless have the same embedding. Thus, these embeddings may not be able to fully capture information about multi-sense words (for example, the word bank in river bank and bank robber), which is crucial for properly identifying metaphors.",
"cite_spans": [
{
"start": 236,
"end": 256,
"text": "(Leong et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 259,
"end": 283,
"text": "Stemle and Onysko (2018)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "More recently, Mao et al. (2019) proposed two RNN models for word-level metaphor detection based on linguistic theories of metaphor identification. GloVe and ELMo embeddings are used as input features that capture literal meanings of words, which are compared with the hidden states of Bi-LSTMs that capture contextual meaning. We chose to explore transformer-based embeddings as an alternative way to capture contextual information.",
"cite_spans": [
{
"start": 15,
"end": 32,
"text": "Mao et al. (2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "Transformer-based models have shown state-ofthe-art results on a wide variety of language tasks, including sentence classification, question answering, and named entity recognition. These models rely on self-attention mechanisms to capture global dependencies, and can be used to generate contextualized word embeddings. We chose to examine the models BERT, GPT2, and XLNet. These three models all achieve remarkable performances on various NLP tasks, but they capture long-distance relationships within the text in different ways. BERT is an autoencoder model, consisting of a stack of encoder layers, and is able to capture bi-directional context using masking during training (Devlin et al., 2018) . GPT2 is an autoregressive model, consisting of a stack of decoder layers, and thus is only able to capture unidirectional context (Radford et al., 2018) . XLNet is also autoregressive, but it captures bi-directional context by considering all permutations of the given words . Each of these models has its advantages and disadvantages that are worth exploring in the context of metaphor detection.",
"cite_spans": [
{
"start": 679,
"end": 700,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF2"
},
{
"start": 833,
"end": 855,
"text": "(Radford et al., 2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "Our method for metaphor detection begins with generating contextualized word embeddings for each word in a sentence using the hidden states of pretrained BERT and XLNet language models. Next, those embeddings are concatenated together, POS tags for each word are appended to the embeddings, and a Bi-LSTM reads the features as input and classifies each word in the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "Word Embeddings Due to limited metaphorannotated data, rather than training a transformer model on our downstream task, we instead opted to take a feature-based approach to generating contextualized word embeddings from pretrained transformer models. This idea was inspired by the approach to the token-level named entity recognition task described in Devlin et al. (2018) , which used a number of strategies for combining hidden state representations of words from a pretrained BERT model to generate contextualized word embeddings.",
"cite_spans": [
{
"start": 352,
"end": 372,
"text": "Devlin et al. (2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "We installed the Python transformers library developed by huggingface (Wolf et al., 2019) , which includes a PyTorch (Paszke et al., 2019) implementation of BERT and several pretrained BERT models. We opted to use the BERT base uncased model, which consists of 12-layers, 768-hidden, 12-heads, and 110M parameters. For each line in the VUA and TOEFL datasets, we use the BERT tokenizer included in the transformers package to pre-process the text, then generate hidden-state representations for each word by inputting each line into the pretrained BERT model. Each token is given a 12x768 hidden-state representation from BERT. We generate 768-dimension word embeddings by summing the values from each of the 12 hidden layers for each token. Words out-of-vocab for BERT are split into multiple tokens representing subwords. To generate embeddings for these words, embeddings are generated for each subword token, then averaged together.",
"cite_spans": [
{
"start": 70,
"end": 89,
"text": "(Wolf et al., 2019)",
"ref_id": null
},
{
"start": 117,
"end": 138,
"text": "(Paszke et al., 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
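The following is a minimal sketch of the embedding extraction described above, assuming the current huggingface transformers API (BertTokenizerFast, BertModel with output_hidden_states=True); the function name sentence_embeddings and the example sentence are illustrative, and this is not the authors' code.

```python
# Sketch of the embedding extraction described above: 768-dim contextualized
# word vectors from pretrained BERT by summing its 12 hidden layers, with
# sub-word pieces averaged back into whole words. Assumes the current
# huggingface transformers API; not the authors' code.
import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

def sentence_embeddings(words):
    """Return one 768-dim vector per word in `words` (a pre-tokenized sentence)."""
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc)
    # out.hidden_states holds 13 tensors (embedding layer + 12 encoder layers),
    # each of shape (1, num_wordpieces, 768); sum the 12 encoder layers.
    summed = torch.stack(out.hidden_states[1:]).sum(dim=0).squeeze(0)
    vectors = []
    for i, _ in enumerate(words):
        # word_ids() maps each wordpiece back to its source word, so words that
        # BERT splits into sub-words are averaged over their pieces.
        pieces = [j for j, w in enumerate(enc.word_ids()) if w == i]
        vectors.append(summed[pieces].mean(dim=0))
    return torch.stack(vectors)   # shape: (len(words), 768)

print(sentence_embeddings("the trajectory of your legal career".split()).shape)
```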
{
"text": "Similarly, we installed the huggingface implementation of XLNet and used its pretrained XLNet base uncased model to generate embeddings for each word in the dataset using the same method as with BERT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "Once both embeddings are generated, we con-catenate the BERT and XLNet embeddings for each word to generate 1536-dimensional word embeddings. By combining word embeddings from multiple high-performing pretrained transformers, we are able to capture more contextual information for each word. Additionally, we supplement these word embeddings with the POS tag for each word as generated by the Stanford parser (Toutanova et al., 2003) . POS tags were shown to improve metaphor detection in the 2018 Metaphor Detection Shared Task (Leong et al., 2018), and we find a small improvement by including them here.",
"cite_spans": [
{
"start": 409,
"end": 433,
"text": "(Toutanova et al., 2003)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
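A minimal sketch of this feature-assembly step, assuming per-word BERT and XLNet vectors have already been computed (for instance with an extractor like the one sketched earlier); the POS tag inventory and the one-hot encoding below are illustrative stand-ins for the Stanford tagger's output, not the exact feature layout used by the authors.

```python
# Sketch of the per-word feature assembly: concatenate the 768-dim BERT and
# 768-dim XLNet vectors into a 1536-dim embedding, then append a one-hot POS
# tag. The tag inventory below is an illustrative stand-in for the Stanford
# tagger's full tag set.
import numpy as np

POS_TAGS = ["NN", "NNS", "VB", "VBD", "VBG", "VBN", "JJ", "RB", "IN", "DT", "OTHER"]

def build_features(bert_vecs, xlnet_vecs, pos_tags):
    """bert_vecs, xlnet_vecs: (num_words, 768) arrays; pos_tags: list of tag strings."""
    feats = []
    for b, x, tag in zip(bert_vecs, xlnet_vecs, pos_tags):
        one_hot = np.zeros(len(POS_TAGS))
        one_hot[POS_TAGS.index(tag) if tag in POS_TAGS else -1] = 1.0  # unknown -> OTHER
        feats.append(np.concatenate([b, x, one_hot]))  # 768 + 768 + |POS_TAGS|
    return np.stack(feats)
```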
{
"text": "We pass the features from each sentence into a Bi-LSTM. The purpose of this network is to capture long-range relationships between words in the same sentence which may reveal the presence of metaphors. We use a dense layer with a sigmoid activation function to obtain the predicted probability of being a metaphor for each word in the sentence. During training, we employ a weighted binary cross entropy loss function to address the extreme class imbalance, since nonmetaphors occur significantly more frequently than metaphors. Hyperparameters were tuned via crossvalidation. For the testing phase, we use an ensemble strategy which was effective for Wu et al.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Network",
"sec_num": null
},
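Below is a minimal PyTorch sketch of the sequence labeler described in this section, not the authors' implementation; the hidden size, learning rate, input dimension, and positive-class weight are illustrative, and the sigmoid is folded into BCEWithLogitsLoss for numerical stability (inference probabilities come from torch.sigmoid of the logits).

```python
# Minimal PyTorch sketch of the sequence labeler: a two-layer bidirectional
# LSTM over the per-word features, a dense head, and a weighted binary
# cross-entropy loss to counter the metaphor/non-metaphor class imbalance.
# Hidden size, learning rate, and pos_weight are illustrative, not tuned values.
import torch
import torch.nn as nn

class MetaphorBiLSTM(nn.Module):
    def __init__(self, input_dim=1547, hidden_dim=256):  # 1536-dim embeddings + 11-tag POS one-hot (illustrative)
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * hidden_dim, 1)

    def forward(self, x):                  # x: (batch, seq_len, input_dim)
        out, _ = self.lstm(x)              # (batch, seq_len, 2 * hidden_dim)
        return self.head(out).squeeze(-1)  # per-word logits, (batch, seq_len)

model = MetaphorBiLSTM()
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor(5.0))  # up-weight the rare metaphor class
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(2, 20, 1547)              # dummy batch: 2 sentences of 20 words
y = torch.randint(0, 2, (2, 20)).float()  # dummy word-level metaphor labels
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```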
{
"text": ": we trained four copies of this Bi-LSTM with different initializations and averaged the pre-dictions from each model. Additionally, we noted that our model tended to assign similar probabilities to different instances of the same word in different contexts, and that a prediction significantly higher than the average prediction for that word was a good indicator of the presence of metaphor, even if the prediction fell lower than the ideal threshold. Thus, we used the following procedure for the testing phase: label the word as a metaphor if its predicted probability is higher than the threshold, or if its probability is three orders of magnitude higher than the median predicted probability for that word in the evaluation set. We found this to be a useful way of addressing the domain shift between the training and the test data. This concept is further explored in Section 4.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Network",
"sec_num": null
},
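A small sketch of the test-time labeling rule described above, under the assumption that ensemble-averaged probabilities are available for every word in the evaluation set; the 0.5 threshold and the function name label_metaphors are illustrative.

```python
# Sketch of the test-time decision rule: a word is labeled a metaphor if its
# (ensemble-averaged) probability clears a fixed threshold, or if it is at
# least three orders of magnitude above the median prediction for that word
# form in the evaluation set. The 0.5 threshold is illustrative.
from collections import defaultdict
from statistics import median

def label_metaphors(predictions, threshold=0.5):
    """predictions: list of (word, probability) pairs over the evaluation set."""
    probs_by_word = defaultdict(list)
    for word, p in predictions:
        probs_by_word[word.lower()].append(p)
    medians = {w: median(ps) for w, ps in probs_by_word.items()}
    return [
        1 if p > threshold or p > 1000 * medians[word.lower()] else 0
        for word, p in predictions
    ]
```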
{
"text": "Word Embeddings Devlin et al. (2018) suggest that for different token-level classification tasks, different methods for combining hidden states from BERT may prove effective in generating contextualized word embeddings. For our task, to determine the optimal embedding strategy, we evaluated four different methods of combining information from hidden states of the transformer models. To determine which performed best prior to training LSTM models, we tested each strategy using logistic regression on the word embeddings with an 80/20 training-test split. Results from logistic regression on BERT embeddings from the VUA AllPOS data are in Table 1 . We note that the F1 scores using different methods of generating contextualized word embeddings differ substantially. We use the \"sumall-layers\" method of generating word embeddings for our further experiments. Transformers Table 2 compares the performance of the Bi-LSTM using the embeddings from BERT, GPT2, and XLNet. Because the true test labels were not made available to us, here we report results on an 80/20 training-test split of the given training data. We make the following observations.",
"cite_spans": [
{
"start": 16,
"end": 36,
"text": "Devlin et al. (2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 643,
"end": 650,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 877,
"end": 884,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
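As a rough illustration of the probe described above, the sketch below fits logistic regression on per-word embeddings built with different hidden-state combination strategies and compares F1 on an 80/20 split. Only "sum-all-layers" is named in the text; the other strategy names, the helper combine_layers, and the class_weight setting are illustrative guesses rather than the exact configuration behind Table 1.

```python
# Rough sketch of the embedding-strategy probe: build per-word BERT embeddings
# with different hidden-state combinations, then fit logistic regression on an
# 80/20 split and compare F1. Strategy names other than "sum-all-layers" are
# illustrative, as is the class_weight setting.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

def combine_layers(hidden_states, strategy):
    """hidden_states: (13, 768) stack for one token (embedding layer + 12 layers)."""
    if strategy == "sum-all-layers":
        return hidden_states[1:].sum(axis=0)
    if strategy == "last-layer":
        return hidden_states[-1]
    if strategy == "concat-last-four":
        return np.concatenate(hidden_states[-4:])
    raise ValueError(strategy)

def probe_f1(X, y):
    """X: (num_words, dim) embeddings for one strategy; y: 0/1 metaphor labels."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X_tr, y_tr)
    return f1_score(y_te, clf.predict(X_te))
```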
{
"text": "\u2022 The LSTM models perform far better than their logistic regression counterparts. Of the single embedding LSTM models, the BERT and XLNet embeddings have the best performances. Combining BERT and XLNet embeddings and using an ensemble strategy further improved our performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "\u2022 In general, the AllPOS task is more challenging than the Verbs task. Different parts of speech are used metaphorically in different ways, and these multiple varieties of metaphor must all be captured by a single model in the AllPOS task. Correspondingly, all models perform worse on AllPOS than Verbs in both VUA and TOEFL datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "\u2022 Additionally, the models achieve a lower F1 score on the TOEFL dataset than the VUA in both AllPOS and Verbs track. We believe this is in part due to the smaller size of the TOEFL dataset, and in part because linguistic characteristics can differ substantially between native and non-native text. Since we used transformer models pretrained on a native corpus, the word embeddings were likely less informative for the TOEFL track.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "\u2022 GPT2 and XLNet are both autoregressive language models, but GPT2+LSTM performs significantly worse than the other LSTM models. This result suggests that bi-directional relationships between words play a crucial role in metaphor detection. Because XLNet considers every possible permutation of the given words during training, the XLNet embeddings likely contain more bi-directional context than the GPT2 embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "In our experiments, we noted that our LSTM models tended to output similar probabilities for different instances of the same word independent of context. For example, although 4 out of 14 of the occurrences of the word capacity in the validation set were metaphor-related, all of the LSTM predictions were less than 10 \u22125 . This suggested that although word embeddings from transformer models contain more contextual information than embeddings from word2vec or GloVe, the model could be improved by including even more contextual information. We explored the idea of ensembling an LSTM with a K-Nearest Neighbors (KNN) classification approach. We believe that the LSTM approach would give information as to which types of words tend to be metaphors in context, whereas the KNN approach would clue into whether a specific use of a specific word is more likely to be metaphorical. We were unable to fully implement such an ensemble model for the competition, but we detail some promising results below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Promising Future Approach: K-Nearest Neighbors",
"sec_num": "4.1"
},
{
"text": "We trained a KNN-only model using our contextualized word embeddings. First, we lemmatized each word in the VUA and TOEFL datasets. For VUA, we classified each word based on a KNN classifier trained on all instances of the same lemmatized word in the training data. If no such lemmatized word existed in the training data, we classified that word using a prediction from an LSTM model, though that occurred in only 2% of cases. For TOEFL, we compared using training data from TOEFL combined with VUA due to the limited dataset. We achieved F1 scores of 0.642 and 0.608 on 80/20 training-test splits of VUA and TOEFL respectively, not much worse than our LSTM models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Promising Future Approach: K-Nearest Neighbors",
"sec_num": "4.1"
},
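A minimal sketch of the per-lemma KNN classifier described above: one small KNN per lemma, fit on the contextual embeddings of that lemma's training occurrences, with a fallback prediction for lemmas unseen in training. The choice of k, the WordNet lemmatizer, and the helper names are illustrative assumptions, not the authors' exact setup.

```python
# Sketch of the per-lemma KNN classifier: one small KNN per lemma, fit on the
# contextual embeddings of that lemma's training occurrences, with a fallback
# (e.g., the LSTM prediction) for lemmas unseen in training. k is illustrative,
# and the WordNet lemmatizer (requires the NLTK wordnet data) stands in for
# whatever lemmatizer was actually used.
from collections import defaultdict
from nltk.stem import WordNetLemmatizer
from sklearn.neighbors import KNeighborsClassifier

lemmatize = WordNetLemmatizer().lemmatize

def fit_lemma_knns(words, embeddings, labels, k=3):
    grouped = defaultdict(lambda: ([], []))
    for w, e, y in zip(words, embeddings, labels):
        xs, ys = grouped[lemmatize(w.lower())]
        xs.append(e)
        ys.append(y)
    knns = {}
    for lemma, (xs, ys) in grouped.items():
        if len(set(ys)) < 2:
            knns[lemma] = ys[0]   # only one class seen: store a constant prediction
        else:
            knns[lemma] = KNeighborsClassifier(n_neighbors=min(k, len(xs))).fit(xs, ys)
    return knns

def predict_word(knns, word, embedding, lstm_fallback):
    model = knns.get(lemmatize(word.lower()))
    if model is None:
        return lstm_fallback      # lemma unseen in training: fall back to the LSTM
    if hasattr(model, "predict"):
        return int(model.predict([embedding])[0])
    return int(model)             # constant prediction for single-class lemmas
```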
{
"text": "There is reason to believe the LSTM and KNN approaches capture significantly different information on metaphors. On the VUA validation data, the LSTM method predicted 3751 metaphors and the KNN predicted 3190. However, only 2372 words were predicted as metaphors by the two models together. Since both models have similar F1 scores, this implies that a superior classifier can be constructed using information from both classifiers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Promising Future Approach: K-Nearest Neighbors",
"sec_num": "4.1"
},
{
"text": "For our final submissions, we were able to adopt a simplified implementation of this approach, la-beling an instance of a word as metaphorical if its LSTM prediction either was higher than a certain threshold, or higher by a significant amount than the median LSTM prediction of all instances of that word. This procedure improved our F1 scores by about 1% during the testing phase. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Promising Future Approach: K-Nearest Neighbors",
"sec_num": "4.1"
},
{
"text": "In this paper, we describe the best performing model that we submitted for the Second Shared Task on Metaphor Detection. We used BERT and XLNet language models to create contextualized embeddings, and fed these embeddings into a bidirectional LSTM with a sigmoid layer that used both local and global contextual information to output a probability. Our experimental results verify that contextualized embeddings outperform previous state-of-the-art word embeddings for metaphor detection. We also propose an ensemble model combining a bi-directional LSTM and a KNN, and show promising results that suggest the two models encode complementary information on metaphors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "The authors thank the organizers of the Second Shared Task on Metaphor Detection and the rest of the Duke Data Science Team. We also thank the anonymous reviewers for their insightful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Different texts, same metaphors: Unigrams and beyond",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Beata Beigman Klebanov",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Leong",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Heilman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Flor",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Second Workshop on Metaphor in NLP",
"volume": "",
"issue": "",
"pages": "11--17",
"other_ids": {
"DOI": [
"10.3115/v1/W14-2302"
]
},
"num": null,
"urls": [],
"raw_text": "Beata Beigman Klebanov, Ben Leong, Michael Heil- man, and Michael Flor. 2014. Different texts, same metaphors: Unigrams and beyond. In Proceedings of the Second Workshop on Metaphor in NLP, pages 11-17, Baltimore, MD. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Bigrams and BiLSTMs two neural networks for sequential metaphor detection",
"authors": [
{
"first": "Yuri",
"middle": [],
"last": "Bizzoni",
"suffix": ""
},
{
"first": "Mehdi",
"middle": [],
"last": "Ghanimifard",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Workshop on Figurative Language Processing",
"volume": "",
"issue": "",
"pages": "91--101",
"other_ids": {
"DOI": [
"10.18653/v1/W18-0911"
]
},
"num": null,
"urls": [],
"raw_text": "Yuri Bizzoni and Mehdi Ghanimifard. 2018. Bigrams and BiLSTMs two neural networks for sequential metaphor detection. In Proceedings of the Workshop on Figurative Language Processing, pages 91-101, New Orleans, Louisiana. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv, arXiv:1810.04805.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "What metaphor identification systems can tell us about metaphor-in-language",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Dunn",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the First Workshop on Metaphor in NLP",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Dunn. 2013. What metaphor identification systems can tell us about metaphor-in-language. In Proceedings of the First Workshop on Metaphor in NLP, pages 1-10, Atlanta, Georgia. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Metaphors we live by",
"authors": [
{
"first": "George",
"middle": [],
"last": "Lakofi",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George Lakofi and Mark Johnson. 1980. Metaphors we live by. University of Chicago Press, Chicago, IL.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A report on the 2020 vua and toefl metaphor detection shared task",
"authors": [
{
"first": "Beata",
"middle": [
"Beigman"
],
"last": "Chee Wee Leong",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Klebanov",
"suffix": ""
},
{
"first": "Egon",
"middle": [],
"last": "Hamill",
"suffix": ""
},
{
"first": "Rutuja",
"middle": [],
"last": "Stemle",
"suffix": ""
},
{
"first": "Xianyang",
"middle": [],
"last": "Ubale",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Second Workshop on Figurative Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chee Wee Leong, Beata Beigman Klebanov, Chris Hamill, Egon Stemle, Rutuja Ubale, and Xianyang Chen. 2020. A report on the 2020 vua and toefl metaphor detection shared task. In Proceedings of the Second Workshop on Figurative Language Pro- cessing, Seattle, WA.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A report on the 2018 VUA metaphor detection shared task",
"authors": [
{
"first": "Chee",
"middle": [],
"last": "Wee",
"suffix": ""
},
{
"first": ";",
"middle": [],
"last": "Ben",
"suffix": ""
},
{
"first": ")",
"middle": [],
"last": "Leong",
"suffix": ""
},
{
"first": "Beata",
"middle": [
"Beigman"
],
"last": "Klebanov",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Shutova",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Workshop on Figurative Language Processing",
"volume": "",
"issue": "",
"pages": "56--66",
"other_ids": {
"DOI": [
"10.18653/v1/W18-0907"
]
},
"num": null,
"urls": [],
"raw_text": "Chee Wee (Ben) Leong, Beata Beigman Klebanov, and Ekaterina Shutova. 2018. A report on the 2018 VUA metaphor detection shared task. In Proceedings of the Workshop on Figurative Language Processing, pages 56-66, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Endto-end sequential metaphor identification inspired by linguistic theories",
"authors": [
{
"first": "Rui",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Chenghua",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Guerin",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3888--3898",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1378"
]
},
"num": null,
"urls": [],
"raw_text": "Rui Mao, Chenghua Lin, and Frank Guerin. 2019. End- to-end sequential metaphor identification inspired by linguistic theories. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 3888-3898, Florence, Italy. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Semantic signatures for example-based linguistic metaphor detection",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Mohler",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Bracewell",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Tomlinson",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Hinote",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the First Workshop on Metaphor in NLP",
"volume": "",
"issue": "",
"pages": "27--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Mohler, David Bracewell, Marc Tomlinson, and David Hinote. 2013. Semantic signatures for example-based linguistic metaphor detection. In Proceedings of the First Workshop on Metaphor in NLP, pages 27-35, Atlanta, Georgia. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Pytorch: An imperative style, high-performance deep learning library",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Massa",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Killeen",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Gimelshein",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Desmaison",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Kopf",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Devito",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Raison",
"suffix": ""
},
{
"first": "Alykhan",
"middle": [],
"last": "Tejani",
"suffix": ""
},
{
"first": "Sasank",
"middle": [],
"last": "Chilamkurthy",
"suffix": ""
},
{
"first": "Benoit",
"middle": [],
"last": "Steiner",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Soumith",
"middle": [],
"last": "Chintala",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "8024--8035",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Te- jani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learn- ing library. In Advances in Neural Information Pro- cessing Systems 32, pages 8024-8035. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2018. Language models are unsupervised multitask learners.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A Method for Linguistic Metaphor Identification: From MIP to MIPVU",
"authors": [
{
"first": "Gerard",
"middle": [
"J"
],
"last": "Steen",
"suffix": ""
},
{
"first": "Aletta",
"middle": [
"G"
],
"last": "Dorst",
"suffix": ""
},
{
"first": "J",
"middle": [
"Berenike"
],
"last": "Herrmann",
"suffix": ""
},
{
"first": "Anna",
"middle": [
"A"
],
"last": "Kaal",
"suffix": ""
},
{
"first": "Tina",
"middle": [],
"last": "Krennmayr",
"suffix": ""
},
{
"first": "Trijntje",
"middle": [],
"last": "Pasma",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerard J. Steen, Aletta G. Dorst, J. Berenike Herrmann, Anna A. Kaal, Tina Krennmayr, and Trijntje Pasma. 2010. A Method for Linguistic Metaphor Identifica- tion: From MIP to MIPVU. John Benjamins Pub- lishing.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Using language learner data for metaphor detection",
"authors": [
{
"first": "Egon",
"middle": [],
"last": "Stemle",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Onysko",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Workshop on Figurative Language Processing",
"volume": "",
"issue": "",
"pages": "133--138",
"other_ids": {
"DOI": [
"10.18653/v1/W18-0918"
]
},
"num": null,
"urls": [],
"raw_text": "Egon Stemle and Alexander Onysko. 2018. Using language learner data for metaphor detection. In Proceedings of the Workshop on Figurative Lan- guage Processing, pages 133-138, New Orleans, Louisiana. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Feature-rich part-ofspeech tagging with a cyclic dependency network",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "252--259",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova, Dan Klein, Christopher D. Man- ning, and Yoram Singer. 2003. Feature-rich part-of- speech tagging with a cyclic dependency network. In Proceedings of the 2003 Human Language Tech- nology Conference of the North American Chapter of the Association for Computational Linguistics, pages 252-259.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Metaphor detection with cross-lingual model transfer",
"authors": [
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Leonid",
"middle": [],
"last": "Boytsov",
"suffix": ""
},
{
"first": "Anatole",
"middle": [],
"last": "Gershman",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Nyberg",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "248--258",
"other_ids": {
"DOI": [
"10.3115/v1/P14-1024"
]
},
"num": null,
"urls": [],
"raw_text": "Yulia Tsvetkov, Leonid Boytsov, Anatole Gershman, Eric Nyberg, and Chris Dyer. 2014. Metaphor detec- tion with cross-lingual model transfer. In Proceed- ings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 248-258, Baltimore, Maryland. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's transformers: State-of-the-art natural language processing. arXiv",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R'emi",
"middle": [],
"last": "Louf",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.03771"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R'emi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingface's trans- formers: State-of-the-art natural language process- ing. arXiv, arXiv:1910.03771.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Neural metaphor detecting with CNN-LSTM model",
"authors": [
{
"first": "Chuhan",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Fangzhao",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Yubo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sixing",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhigang",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Yongfeng",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Workshop on Figurative Language Processing",
"volume": "",
"issue": "",
"pages": "110--114",
"other_ids": {
"DOI": [
"10.18653/v1/W18-0913"
]
},
"num": null,
"urls": [],
"raw_text": "Chuhan Wu, Fangzhao Wu, Yubo Chen, Sixing Wu, Zhigang Yuan, and Yongfeng Huang. 2018. Neu- ral metaphor detecting with CNN-LSTM model. In Proceedings of the Workshop on Figurative Lan- guage Processing, pages 110-114, New Orleans, Louisiana. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Xlnet: Generalized autoregressive pretraining for language understanding. arXiv",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.08237"
]
},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. arXiv, arXiv:1906.08237.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "Our model architecture. Sentences are fed through pretrained BERT and XLNet models, concatenated along with POS tags, passed to a Bi-LSTM, and a sigmoid layer outputs probabilities.",
"num": null
},
"TABREF1": {
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Logistic regression on various BERT word embeddings, VUA and TOEFL AllPOS.",
"num": null
},
"TABREF3": {
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Performance of LSTM models. The baseline is the highest achieved score from the First Shared Task on Metaphor Detection.",
"num": null
},
"TABREF5": {
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "KNN using sum-all BERT word embeddings, VUA AllPOS",
"num": null
}
}
}
}