{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:29:05.184458Z"
},
"title": "Towards Human Evaluation of Mutual Understanding in Human-Computer Spontaneous Conversation: An Empirical Study of Word Sense Disambiguation for Naturalistic Social Dialogs in American English",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "L\u01b0u",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Brandeis University",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Current evaluation practices for social dialog systems, dedicated to human-computer spontaneous conversation, exclusively focus on the quality of system-generated surface text, but not human-verifiable aspects of mutual understanding between the systems and their interlocutors. This work proposes Word Sense Disambiguation (WSD) as an essential component of a valid and reliable human evaluation framework, whose long-term goal is to radically improve the usability of dialog systems in real-life human-computer collaboration. The practicality of this proposal is demonstrated by experimentally investigating (1) the WordNet 3.0 sense inventory coverage of lexical meanings in spontaneous conversation between humans in American English, assumed as an upper bound of lexical diversity of human-computer communication, and (2) the effectiveness of state-of-the-art WSD models and pretrained transformer-based contextual embeddings on this type of data. 1",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Current evaluation practices for social dialog systems, dedicated to human-computer spontaneous conversation, exclusively focus on the quality of system-generated surface text, but not human-verifiable aspects of mutual understanding between the systems and their interlocutors. This work proposes Word Sense Disambiguation (WSD) as an essential component of a valid and reliable human evaluation framework, whose long-term goal is to radically improve the usability of dialog systems in real-life human-computer collaboration. The practicality of this proposal is demonstrated by experimentally investigating (1) the WordNet 3.0 sense inventory coverage of lexical meanings in spontaneous conversation between humans in American English, assumed as an upper bound of lexical diversity of human-computer communication, and (2) the effectiveness of state-of-the-art WSD models and pretrained transformer-based contextual embeddings on this type of data. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "As surveyed in Finch and Choi (2020) , current evaluation practices for human-computer spontaneous conversation, including open domain dialog systems and chatbots, exclusively focus on the quality of system responses, e.g. how well the responses match ground truth human responses (based on certain automated metrics) or whether they are on-topic with the immediate dialog history (judged by a human). These evaluation practices potentially drive researchers into the race of generating better surface text while undermining or ignoring the ultimate goal of capturing mutual understanding between the systems and humans throughout the conversation (cf. the Great Misalignment Problem raised by H\u00e4m\u00e4l\u00e4inen and Alnajjar, 2021) . Consequently, current systems are unable to effectively function in real-life human-computer collaboration tasks. For example, the lack of genuine conceptual alignment with users leads to language learning chatbots being used only as reactive systems, even though theoretically they could provide the learners with the opportunity for free and flexible meaningful conversation (Bibauw et al., 2019) , and consequently play a key role in supporting autonomous language learning beyond the classroom. To improve the usability of dialog systems for human-computer spontaneous conversation, their evaluation should include human-verifiable aspects of language competence which facilitate mutual understanding (instead of treating them as black box functions). Moreover, breaking down the evaluation into such concrete components would allow users' participation in system evaluation from early development stages (Heuer and Buschek, 2021) .",
"cite_spans": [
{
"start": 15,
"end": 36,
"text": "Finch and Choi (2020)",
"ref_id": "BIBREF8"
},
{
"start": 694,
"end": 724,
"text": "H\u00e4m\u00e4l\u00e4inen and Alnajjar, 2021)",
"ref_id": "BIBREF11"
},
{
"start": 1104,
"end": 1125,
"text": "(Bibauw et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 1636,
"end": 1661,
"text": "(Heuer and Buschek, 2021)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Currently, talking to social chatbots without knowing which sense of a semantically ambiguous word 2 the chatbots have in their internal interpretation, human evaluators cannot identify the root cause of a problematic conversational move performed by the chatbots to provide more useful feedback. For example, examining the dialog shown in Figure 1, we can agree that the last utterance produced by the chatbot is not appropriate. However, we cannot know for sure if that is due to the chatbot's inadequate interpretation of \"bank\" 3 in the preceding question \"What do you do at a river bank?\", or its complete ignorance of the meaning of this word by just generating the most probable utterance according to the dataset it is trained on.",
"cite_spans": [],
"ref_spans": [
{
"start": 340,
"end": 348,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Arguably, one of the most natural ways for social chatbots to enhance the quality of their interaction with humans is explicitly assigning semantically ambiguous words specific senses, aka Word Sense Disambiguation (WSD), and using these senses Figure 1 : A dialog between me and a state-of-the-art (SOTA) chatbot developed by Meta Research (Roller) .",
"cite_spans": [
{
"start": 341,
"end": 349,
"text": "(Roller)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 245,
"end": 253,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "for further reasoning 4 to demonstrate the chatbots' understanding capability with human-readable aspects of grounding (Clark, 1996) in the course of spontaneous conversation. This would improve human-computer communication in collaborative tasks by allowing the human partners to directly access the interpretable form of computers' model of conversation anytime they need to so that they can make adequate on-the-fly conversational adjustments. In addition, being able to access the computer's human-readable representation of conversational context in the evaluation regime, a human evaluator does not need to construct different interpretation alternatives and therefore can be confident that they are on the same page with other evaluators (cf. Appendix A -a small experiment that shows a wide divergence in human interpretation of a word token in spontaneous conversation). This transparency definitely reduces the subjectivity of the evaluation task, and therefore improves its reliability and reproducibility (Specia, 2021) .",
"cite_spans": [
{
"start": 119,
"end": 132,
"text": "(Clark, 1996)",
"ref_id": "BIBREF2"
},
{
"start": 1017,
"end": 1031,
"text": "(Specia, 2021)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This work proposes and evaluates WSD as an essential component of a novel human evaluation framework intended for human-computer mutual understanding in spontaneous conversation in English, but also sensible for any tasks involving natural language interpretation. Specifically, based on the state of the art in WSD (Bevilacqua et al., 2021) , it addresses the following research questions:",
"cite_spans": [
{
"start": 316,
"end": 341,
"text": "(Bevilacqua et al., 2021)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. Can WordNet 3.0 (Fellbaum, 2010) , the most popular English sense inventory, approximate word meaning in spontaneous dialog 5 well? 2. Are state-of-the-art (SOTA) WSD models, using transfer learning with both pretrained transformers and non-conversational senseannotated data, ready for conversational text? 3. How effective is it to directly use contextual embeddings of pretrained transformers, e.g. BERT (Devlin et al., 2019) or its variants, to address WSD in spontaneous conversation? The rationale behind (3) is to test the hypothesis that contextual embeddings of word tokens in spontaneous conversation are well correlated with definitions of their context-sensitive senses (versus task-oriented scenarios where the word senses are constrained by the task). When deploying a dialog system, the transparent integration of these embeddings with other components in the NLP pipeline is preferable over the \"black box\" nature of offthe-shelf end-to-end WSD models, which poses the challenges of how to (a) align these models' output with the system's NLP pipeline's, and (b) improve their real-time performance using knowledge about a specific instance of conversation.",
"cite_spans": [
{
"start": 19,
"end": 35,
"text": "(Fellbaum, 2010)",
"ref_id": "BIBREF7"
},
{
"start": 410,
"end": 431,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To address (1-3), I first automatically annotated WordNet senses of ambiguous words in NEWT-SBCSAE, a publicly accessible corpus of naturally occurring spontaneous dialogs in American English (L\u01b0u and Malamud, 2020; Riou, 2015; Du Bois et al., 2000) , using both a SOTA WSD model and a simple baseline model directly based on contextual embeddings of pretrained transformers (Section 2.2). Next, I collected human judgments on the outputs of these models as well as the appropriate senses of the target words (Section 2.3). These judgments were then used to assess the coverage of the WordNet sense inventory (Section 3) and the efficacy of WSD models, including both models used in automatic sense annotation (Section 4.1) and variants of the baseline model based on various pretrained transformers (Section 4.2).",
"cite_spans": [
{
"start": 192,
"end": 215,
"text": "(L\u01b0u and Malamud, 2020;",
"ref_id": "BIBREF18"
},
{
"start": 216,
"end": 227,
"text": "Riou, 2015;",
"ref_id": "BIBREF24"
},
{
"start": 228,
"end": 249,
"text": "Du Bois et al., 2000)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The experiment reflects the proposed WSD-based evaluation protocol: ambiguous words in spontaneous dialog are first disambiguated by dialog systems and then evaluated by humans (or, less interactively, against predefined gold standard data).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "2"
},
{
"text": "NEWT-SBCSAE, released by L\u01b0u and Malamud (2020) , includes seven 15-minute extracts of face-to-face casual dialogs from the Santa Barbara Corpus of Spoken American English (SBCSAE) (Du Bois et al., 2000) , segmented into 3253 turn-constructional units (TCUs) by Riou (2015) and accompanied by audio files publicly browsable at TalkBank.org. This corpus possesses a rare combination 6 of valuable features:",
"cite_spans": [
{
"start": 25,
"end": 47,
"text": "L\u01b0u and Malamud (2020)",
"ref_id": "BIBREF18"
},
{
"start": 185,
"end": 203,
"text": "Bois et al., 2000)",
"ref_id": "BIBREF6"
},
{
"start": 262,
"end": 273,
"text": "Riou (2015)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Selected Corpus",
"sec_num": "2.1"
},
{
"text": "\u2022 freely and publicly accessible (in a well-developed XML-based data format) \u2022 carefully curated to include only naturally occurring casual dialogs by a wide variety of people, differing in gender, occupation, social background, and regional origin in comparison with its compact size The selection of this corpus rests upon the assumption that the corpus can serve as an approximate upper bound of lexical diversity of human-computer spontaneous conversation in the same dialect of English within the evaluation scale of this empirical study. The preference for this corpus over a currently available corpus of human-computer spontaneous conversations is also supported by the fact that the latter may not actually be as representative as claimed (Dogru\u00f6z and Skantze, 2021). It is worth noting that the results achieved in this study may not generalize to varieties of American English not present in the corpus, to other regional varieties of English, or to other languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selected Corpus",
"sec_num": "2.1"
},
{
"text": "Automatic Transcript Preprocessing After every prosodic token is replaced with \"...\", each turn-constructional unit (TCU) is tokenized, lemmatized, and part-of-speech (POS) tagged by spaCy 7 (v2.3.5)'s small core model for English. Then each ambiguous word is identified as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic WSD",
"sec_num": "2.2"
},
{
"text": "\u2022 its universal POS is in WordNet, i.e. adjective, adverb, noun, proper noun, or verb \u2022 it has more than one WordNet synset (information about the synsets, i.e. sense names and corresponding definitions, is also retrieved)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic WSD",
"sec_num": "2.2"
},
{
"text": "SOTA As a SOTA WSD model, I use the back end of AMuSE-WSD 8 (AW), the first end-to-end system that provides a web-based API for downstream tasks to obtain high-quality sense information in 40 languages, including English (Orlando et al., 2021) . This model is composed of BERT (large-cased, frozen), a non-linear layer and a linear classifier, and trained on the SemCor corpus (Miller et al., 1994) as well as WordNet glosses and examples with a multilabel classification objective. It achieves 80% accuracy on the concatenation of all Unified Evaluation Framework datasets for English all-words WSD (Raganato et al., 2017) .",
"cite_spans": [
{
"start": 221,
"end": 243,
"text": "(Orlando et al., 2021)",
"ref_id": "BIBREF22"
},
{
"start": 377,
"end": 398,
"text": "(Miller et al., 1994)",
"ref_id": "BIBREF20"
},
{
"start": 600,
"end": 623,
"text": "(Raganato et al., 2017)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic WSD",
"sec_num": "2.2"
},
{
"text": "The AW API takes as input the text string of each TCU and yields a list of tokens automatically annotated with lemma, POS, and WordNet sense if available. Next, this output sequence is aligned with the spaCy preprocessing output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic WSD",
"sec_num": "2.2"
},
{
"text": "Baseline The baseline WSD model (cf. Oele and van Noord, 2018) picks the best sense of each ambiguous word (identified in preprocessing) by ranking similarity scores between the contextual embeddings of the word and of the definitions of its WordNet senses, accessed via spacy-wordnet 7 . The contextual embeddings are from DistilBERT (Sanh et al., 2019) , accessed via spacy-transformers 7 .",
"cite_spans": [
{
"start": 37,
"end": 62,
"text": "Oele and van Noord, 2018)",
"ref_id": "BIBREF21"
},
{
"start": 335,
"end": 354,
"text": "(Sanh et al., 2019)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic WSD",
"sec_num": "2.2"
},
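{
"text": "The core ranking step of this baseline can be sketched in isolation. In the study the vectors come from DistilBERT via spacy-transformers; here the embedding model is abstracted away, so the function names and the toy two-dimensional vectors below are purely illustrative:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def pick_sense(word_vec, sense_vecs):
    """Pick the sense whose definition embedding is most similar to the
    target word's contextual embedding; sense_vecs maps WordNet sense
    names to embeddings of their definitions."""
    return max(sense_vecs, key=lambda sense: cosine(word_vec, sense_vecs[sense]))

# Toy illustration: a 'river bank' context vector lies closer to the
# sloping-land definition than to the financial-institution one.
context_vec = [0.9, 0.1]
sense_vecs = {"bank.n.01": [0.95, 0.05],
              "depository_financial_institution.n.01": [0.1, 0.9]}
best = pick_sense(context_vec, sense_vecs)  # "bank.n.01" for these toy vectors
```

With real contextual embeddings the ranking works the same way; only the vectors change.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic WSD",
"sec_num": "2.2"
},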
{
"text": "Task The models' output was evaluated by two annotators, both Linguistics majors (incl. Formal Semantics) and native speakers of English 9 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human WSD Judgment",
"sec_num": "2.3"
},
{
"text": "For each target word, the annotators saw:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human WSD Judgment",
"sec_num": "2.3"
},
{
"text": "\u2022 the WordNet senses assigned to the word by AW and the baseline model 10 \u2022 the list of possible WordNet senses for the word, taking into account its POS The annotators were asked to decide if:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human WSD Judgment",
"sec_num": "2.3"
},
{
"text": "\u2022 AW sense is appropriate (and different from the baseline) -label '1' \u2022 the baseline sense is appropriate (and different from AW) -label '2' \u2022 Both are the same & appropriate -label 'both'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human WSD Judgment",
"sec_num": "2.3"
},
{
"text": "\u2022 No sense is appropriate and at least one of them has a correct POS -label '0' \u2022 Both senses have incorrect POS and their actual POS are still covered by WordNet -label 'c' (i.e. 'content word but wrong POS') \u2022 Both senses have incorrect POS and their actual POS are not covered by WordNet -label 'f' (i.e. 'function word') For '0' and 'c', the annotators provided the appropriate senses, drawn from the WordNet inventory when possible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human WSD Judgment",
"sec_num": "2.3"
},
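{
"text": "The six labels implement a small decision procedure, which the sketch below makes explicit. The input encoding (a (sense, pos_is_correct) pair per model, a set of annotator-approved senses, and a flag for whether WordNet covers the token's actual POS) is an assumed simplification, not the study's annotation format:

```python
def judge(aw, baseline, appropriate, wn_covers_pos):
    """Assign one of the six labels of Section 2.3.
    aw, baseline:  (sense, pos_is_correct) predictions of the two models.
    appropriate:   set of senses the annotator deems appropriate.
    wn_covers_pos: whether the token's actual POS exists in WordNet."""
    aw_sense, aw_pos_ok = aw
    base_sense, base_pos_ok = baseline
    if not aw_pos_ok and not base_pos_ok:     # both POS wrong
        return "c" if wn_covers_pos else "f"  # content word vs. function word
    aw_good = aw_pos_ok and aw_sense in appropriate
    base_good = base_pos_ok and base_sense in appropriate
    if aw_good and base_good and aw_sense == base_sense:
        return "both"
    if aw_good:
        return "1"  # AW sense appropriate, differs from the baseline's
    if base_good:
        return "2"  # baseline sense appropriate, differs from AW's
    return "0"      # neither predicted sense is appropriate
```

(The rare case where both models pick different but equally appropriate senses is folded into '1' in this sketch.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human WSD Judgment",
"sec_num": "2.3"
},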
{
"text": "The annotation was run in two rounds. In the first round (R.1), both annotators worked on the same dialog so that their inter-annotator agreement (IAA) could be assessed as shown in Table 1(a). The agreement level was substantial (Landis and Koch, 1977) and the inter-annotator consistency likely improved after the review of this annotation round and the corresponding revision of annotation guidelines for the final round (R.2), in which the annotators worked on different dialogs. In Table 1, all tokens are the ambiguous words identified in preprocessing; AW tokens exclude:",
"cite_spans": [
{
"start": 242,
"end": 253,
"text": "Koch, 1977)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 487,
"end": 494,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Human WSD Judgment",
"sec_num": "2.3"
},
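{
"text": "The IAA behind Table 1(a) can be quantified with Cohen's kappa, the coefficient conventionally read against the Landis and Koch scale (0.61-0.80 = 'substantial'); the section above does not name the exact coefficient used, so Cohen's kappa is an assumption here:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators' label sequences over the same items."""
    n = len(labels_a)
    # Observed agreement: fraction of items given identical labels.
    p_obs = sum(x == y for x, y in zip(labels_a, labels_b)) / n
    # Chance agreement from each annotator's label distribution.
    ca, cb = Counter(labels_a), Counter(labels_b)
    p_exp = sum(ca[l] * cb[l] for l in set(labels_a) | set(labels_b)) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)
```

The labels here would be the six judgment labels of Section 2.3, one pair per AW token.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human WSD Judgment",
"sec_num": "2.3"
},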
{
"text": "\u2022 proper nouns for which AW does not provide WordNet senses \u2022 tokens that AW doesn't tag as adjectives, adverbs, nouns, proper nouns or verbs \u2022 tokens that cannot be aligned with AW outputs Table 1 (b) shows the counts of these types of tokens for each annotation round and in total. The existence of non-AW tokens (5.5% of all tokens in total) demonstrates the challenge of aligning the output of off-the-shelf end-to-end WSD models with the output of the NLP pipeline inherent in a dialog system in real-life situations.",
"cite_spans": [],
"ref_spans": [
{
"start": 190,
"end": 197,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Human WSD Judgment",
"sec_num": "2.3"
},
{
"text": "Further annotation details (e.g. data format, platform and examples) can be found in Appendix B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human WSD Judgment",
"sec_num": "2.3"
},
{
"text": "Outcome 11 To facilitate fair comparisons between AW and the baseline WSD model, only AW tokens are considered in the following statistics. In addition, the counts of the first round only cover instances that get the same judgments from both annotators on the aspects the counts concern. Table 2 shows the various sense judgments, corresponding to the labels listed in Section 2.3. '1' '2' 'both' '0' 'c' 'f Table 3 shows key statistics as the prerequisite for answering the research questions in Section 1. Table 3 (a) shows two groups of sense annotations, based on whether the annotated appropriate sense (unavailable for 'f' cases) is covered by WordNet or not (Section 3). Table 3 (b) shows main POS-based groups of sense annotations that are used as gold standard to evaluate automatic WSD effectiveness 11 The annotated data is publicly accessible at https://alexluu.flowlu.com/hc/6/271-wsd. (Section 4) . These data only include cases in which both the AW and the baseline senses have correct POS and the appropriate WordNet sense is available.",
"cite_spans": [
{
"start": 810,
"end": 812,
"text": "11",
"ref_id": null
},
{
"start": 899,
"end": 910,
"text": "(Section 4)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 288,
"end": 295,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 382,
"end": 407,
"text": "'1' '2' 'both' '0' 'c' 'f",
"ref_id": null
},
{
"start": 408,
"end": 415,
"text": "Table 3",
"ref_id": null
},
{
"start": 508,
"end": 515,
"text": "Table 3",
"ref_id": null
},
{
"start": 678,
"end": 685,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Human WSD Judgment",
"sec_num": "2.3"
},
{
"text": "WordNet senses cover 96.3% of ambiguous words as shown in Table 3 (a). POS-wise, they cover 95.6% of adjectives, 98.2% of adverbs, 95.7% of nouns, and 96.6% of verbs. Among 200 non-WordNet tokens:",
"cite_spans": [],
"ref_spans": [
{
"start": 58,
"end": 65,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "WordNet Sense Coverage",
"sec_num": "3"
},
{
"text": "\u2022 1 token is a sub-word fragment (\"toes\" in \"Of the different cantos or cantos or whatever toes.\") \u2022 4 tokens are named entities \u2022 64 tokens are components of multiword expressions or used idiomatically. Handling multiword expressions by feeding phrases instead of tokens into the WordNet search engine would improve the WordNet coverage to 96.7%, as 19 more tokens would be covered. WordNet coverage for conversational data is thus good.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WordNet Sense Coverage",
"sec_num": "3"
},
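{
"text": "The arithmetic above can be checked directly. The total number of ambiguous tokens is not stated in this section, so it is inferred here from the reported 96.3% coverage and 200 uncovered tokens (an estimate):

```python
# Back out the approximate total from the reported coverage, then verify that
# recovering 19 multiword-expression tokens yields the reported 96.7%.
uncovered = 200
coverage = 0.963
total = round(uncovered / (1 - coverage))  # approx. 5405 ambiguous tokens
recovered = 19                             # regained via multiword-expression handling
new_coverage = (total - (uncovered - recovered)) / total  # approx. 0.967
```

The recomputed value matches the 96.7% reported above to one decimal place.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WordNet Sense Coverage",
"sec_num": "3"
},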
{
"text": "The gold standard data presented in Table 3 (b) covers 1046 lemmas, including 191 adjectives, 80 adverbs, 501 nouns and 274 verbs. Table 4 shows the performances of AW and the baseline models across POS and in total. The values in 'both' columns illustrate the portion of correctly disambiguated senses shared by both models.",
"cite_spans": [],
"ref_spans": [
{
"start": 36,
"end": 43,
"text": "Table 3",
"ref_id": null
},
{
"start": 131,
"end": 138,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Automatic WSD Effectiveness",
"sec_num": "4"
},
{
"text": "The AW model performs well on conversational text, with an accuracy of 73.7%, though it does not reach the 80% it achieved on non-conversational data. In addition, it performs consistently across all POS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initial WSD Models",
"sec_num": "4.1"
},
{
"text": "The 36%-level accuracy of the DistilBERT-based baseline model is encouraging, given that the average number of WordNet senses per word token (sense average) is 9.9. Its low performance on verbs can be explained by the high sense average of this POS: 15.5 (versus adjectives -7.5, adverbs -4.7, and nouns -6.3). To improve this model's performance, we can experiment with different ways of manipulating the text containing target words before feeding it into a pretrained transformer. Table 5 shows the performances of the baseline model, using BERT, XLNet (Yang et al., 2019) and RoBERTa (Liu et al., 2019) , accessed via spacy-transformers. Compared to the DistilBERT-based Table 5: Accuracy (%) of variants of the baseline WSD models (B: BERT, X: XLNet, R: RoBERTa).",
"cite_spans": [
{
"start": 556,
"end": 575,
"text": "(Yang et al., 2019)",
"ref_id": "BIBREF29"
},
{
"start": 588,
"end": 606,
"text": "(Liu et al., 2019)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 484,
"end": 491,
"text": "Table 5",
"ref_id": null
},
{
"start": 675,
"end": 682,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Initial WSD Models",
"sec_num": "4.1"
},
{
"text": "model, the performances decrease in the order [BERT > RoBERTa > XLNet] across POS and in total, except for adverbs, on which RoBERTa performs best. XLNet's performance is noticeably low in comparison with the others. The empirical results show that DistilBERT is the best option for disambiguating WordNet senses of words by ranking similarity scores between contextual embeddings of the words and of the definitions of their senses. DistilBERT is not only effective but also efficient, as it is the only distilled, compact version of BERT among the tested transformers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments with Pretrained Transformers",
"sec_num": "4.2"
},
{
"text": "Future Work Next, I will perform a detailed data analysis to gain insights into (1) what the annotators disagreed about, (2) what kinds of errors the WSD models made, and (3) how good incorrect senses are, taking into account the distinction between polysemous and homonymous senses, which is not available in WordNet (Freihat et al., 2016; Habibi et al., 2021; Janz and Maziarz, 2021) . These insights will help improve the design of the annotation task and the performance of the WSD models.",
"cite_spans": [
{
"start": 318,
"end": 340,
"text": "(Freihat et al., 2016;",
"ref_id": "BIBREF9"
},
{
"start": 341,
"end": 361,
"text": "Habibi et al., 2021;",
"ref_id": "BIBREF10"
},
{
"start": 362,
"end": 385,
"text": "Janz and Maziarz, 2021)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "I will also study the effect of manipulation of input utterances, by taking into account the linguistic and discourse information about the target words, on the performance of the pretrained transformers. This can shed light on how to create optimal contextual embeddings of ambiguous words for WSD.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Limitations and Challenges Exclusively relying on pre-existing sense inventories such as WordNet, the proposed evaluation method would not only miss semantically ambiguous words that do not have multiple senses in these sense inventories, but also inherit their limitations, due to the fact that their senses have different degrees of granularity and cannot keep up with the continuously evolving character of natural languages (Mennes and van der Waart van Gulik, 2020; Bevilacqua et al., 2021) .",
"cite_spans": [
{
"start": 428,
"end": 470,
"text": "(Mennes and van der Waart van Gulik, 2020;",
"ref_id": "BIBREF19"
},
{
"start": 471,
"end": 495,
"text": "Bevilacqua et al., 2021)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "The proposed evaluation method may not easily be adopted by the developers of end-to-end dialog models, the most popular approach to open-domain dialog systems (Huang et al., 2020) , as the \"black box\" nature of these systems does not facilitate human-readable word-level interpretations.",
"cite_spans": [
{
"start": 160,
"end": 180,
"text": "(Huang et al., 2020)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "This work proposes WSD, an established NLP task, as a required component of a valid and reliable human evaluation framework for mutual understanding in human-computer spontaneous conversation. The conducted experiments demonstrate the practicality of this proposal for English. To sufficiently evaluate human-computer mutual understanding, I envision that the WSD component will be necessarily coupled with a reasoning judgment component in which human evaluators assess the appropriateness of conversation moves made by a dialog system, including clarifying and adjusting their interpretations, based on the disambiguated word senses in those moves. This setting will help human evaluation become more grounded and therefore more objective than the current common practices, in which human evaluators are asked to rate system responses using vaguely defined criteria and inconsistent numeric scales (Finch and Choi, 2020) . linguistic annotation types with arbitrary tagsets, and annotated with FLAT 13 , FoLiA's web-based annotation tool whose user-interface can show different linguistic annotation layers at the same time (van Gompel et al., 2017) .",
"cite_spans": [
{
"start": 900,
"end": 922,
"text": "(Finch and Choi, 2020)",
"ref_id": "BIBREF8"
},
{
"start": 1126,
"end": 1151,
"text": "(van Gompel et al., 2017)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Figures 2-4 display an annotation file opened in FLAT. The ambiguous words are highlighted in different colors, corresponding to the annotation labels mentioned in Section 2.3, so that the annotators can navigate them quickly. Figure 3 shows that when a word token such as \"guilty\" is hovered over, it is highlighted in black while its text turns yellow, and all of its annotation information is displayed in a pop-up box. Figure 4 shows that when \"guilty\" is clicked, it is highlighted in yellow, and its annotation layers become editable in the Annotation Editor. ",
"cite_spans": [],
"ref_spans": [
{
"start": 227,
"end": 235,
"text": "Figure 3",
"ref_id": null
},
{
"start": 423,
"end": 431,
"text": "Figure 4",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "B.2 Annotation Examples",
"sec_num": null
},
{
"text": "The live version of this publication is located at https://osf.io/8u3gf/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Either polysemous or homonymous. 3 As a financial institution instead of the land alongside a river, which is more felicitous in this particular context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Including the use of sense relation knowledge encoded in thesauri such as WordNet. 5 Given that language is continuously changing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The only existing corpus of its kind I am aware of.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Under the MIT License. 8 Under the CC BY-NC-SA 4.0 License.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "From the North-Eastern US. They were paid $15-16/hour. 10 The listing order of these senses is the same for all target words. Consequently, the annotators could recognize that one system is better and treat its prediction as the default for borderline cases, which might slightly inflate the better system's results. On the other hand, this setting reflects real evaluation scenarios in which evaluators are aware of the performance of a specific dialog system throughout their evaluation sessions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "An open file format, whose specification and documentation are generated by open source code under GNU General Public License version 3.0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Under GNU General Public License version 3.0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The annotation work of this study was funded by a PhD Research Award from the Graduate School of Arts and Sciences, Brandeis University. I am extremely grateful to my annotators, Josh Broderick Phillips and Tali Tukachinsky, for their diligence and professionalism. My deepest gratitude goes to Sophia A. Malamud, who exhaustively discussed every aspect of this study with me. Finally, I would like to thank Nianwen Xue, the anonymous ARR reviewers of the January 2022 deadline and the organizers of HumEval at ACL 2022 for their detailed, constructive and actionable feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "Twelve native speakers of American English (2 PhD, 9 master's, 1 senior undergraduate) in a linguistics course were asked to give their interpretation of the entities in the following excerpt of dialogue between Jim and Michael, adapted from this publicly accessible recording (10'32\"-11'04\"): Jim: So much of today's technology is soulless and has nothing to do with peace. It has to do with chewing up the human experience and turning it into some kind of consumer need.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A An Example of Divergence in Human Interpretation",
"sec_num": null
},
{
"text": "Jim: Just ever so peripherally. Michael: He had a lot of real wacky ideas on big levels. He wanted a world power system, that you could tap into the air basically, and get power anywhere on earth. The interpretation results for the token \"Tesla\" and the corresponding pronouns \"he\" are presented in ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Michael: Did you ever get into Tesla?",
"sec_num": null
},
{
"text": "The annotation files are stored in the XML-based FoLiA format, which accommodates multiple ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.1 Annotation Data Format and Platform",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Recent trends in word sense disambiguation: A survey",
"authors": [
{
"first": "Michele",
"middle": [],
"last": "Bevilacqua",
"suffix": ""
},
{
"first": "Tommaso",
"middle": [],
"last": "Pasini",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Raganato",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21",
"volume": "",
"issue": "",
"pages": "4330--4338",
"other_ids": {
"DOI": [
"10.24963/ijcai.2021/593"
]
},
"num": null,
"urls": [],
"raw_text": "Michele Bevilacqua, Tommaso Pasini, Alessandro Ra- ganato, and Roberto Navigli. 2021. Recent trends in word sense disambiguation: A survey. In Pro- ceedings of the Thirtieth International Joint Con- ference on Artificial Intelligence, IJCAI-21, pages 4330-4338. International Joint Conferences on Arti- ficial Intelligence Organization. Survey Track.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Discussing with a computer to practice a foreign language: research synthesis and conceptual framework of dialogue-based CALL",
"authors": [
{
"first": "Serge",
"middle": [],
"last": "Bibauw",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Fran\u00e7ois",
"suffix": ""
},
{
"first": "Piet",
"middle": [],
"last": "Desmet",
"suffix": ""
}
],
"year": 2019,
"venue": "Computer Assisted Language Learning",
"volume": "32",
"issue": "8",
"pages": "827--877",
"other_ids": {
"DOI": [
"10.1080/09588221.2018.1535508"
]
},
"num": null,
"urls": [],
"raw_text": "Serge Bibauw, Thomas Fran\u00e7ois, and Piet Desmet. 2019. Discussing with a computer to prac- tice a foreign language: research synthesis and conceptual framework of dialogue-based CALL. Computer Assisted Language Learning, 32(8):827-877. Publisher: Routledge _eprint: https://doi.org/10.1080/09588221.2018.1535508.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Using Language",
"authors": [
{
"first": "Herbert",
"middle": [
"H."
],
"last": "Clark",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1017/CBO9780511620539"
]
},
"num": null,
"urls": [],
"raw_text": "Herbert H. Clark. 1996. Using Language. 'Using' Linguistic Books. Cambridge University Press, Cam- bridge.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Framing word sense disambiguation as a multi-label problem for model-agnostic knowledge integration",
"authors": [
{
"first": "Simone",
"middle": [],
"last": "Conia",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "3269--3275",
"other_ids": {
"DOI": [
"10.18653/v1/2021.eacl-main.286"
]
},
"num": null,
"urls": [],
"raw_text": "Simone Conia and Roberto Navigli. 2021. Framing word sense disambiguation as a multi-label prob- lem for model-agnostic knowledge integration. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Lin- guistics: Main Volume, pages 3269-3275, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "How \"open\" are the conversations with open-domain chatbots? a proposal for speech event based evaluation",
"authors": [
{
"first": "A. Seza",
"middle": [],
"last": "Dogru\u00f6z",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Skantze",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "392--402",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Seza Dogru\u00f6z and Gabriel Skantze. 2021. How \"open\" are the conversations with open-domain chat- bots? a proposal for speech event based evaluation. In Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 392-402, Singapore and Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Santa Barbara corpus of spoken American English. CD-ROM. Philadelphia: Linguistic Data Consortium",
"authors": [
{
"first": "John",
"middle": [
"W."
],
"last": "Du Bois",
"suffix": ""
},
{
"first": "Wallace",
"middle": [
"L."
],
"last": "Chafe",
"suffix": ""
},
{
"first": "Charles",
"middle": [],
"last": "Meyer",
"suffix": ""
},
{
"first": "Sandra",
"middle": [
"A."
],
"last": "Thompson",
"suffix": ""
},
{
"first": "Nii",
"middle": [],
"last": "Martey",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John W Du Bois, Wallace L Chafe, Charles Meyer, Sandra A Thompson, and Nii Martey. 2000. Santa Barbara corpus of spoken American English. CD- ROM. Philadelphia: Linguistic Data Consortium.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "WordNet",
"authors": [
{
"first": "Christiane",
"middle": [],
"last": "Fellbaum",
"suffix": ""
}
],
"year": 2010,
"venue": "Theory and Applications of Ontology: Computer Applications",
"volume": "",
"issue": "",
"pages": "231--243",
"other_ids": {
"DOI": [
"10.1007/978-90-481-8847-5_10"
]
},
"num": null,
"urls": [],
"raw_text": "Christiane Fellbaum. 2010. WordNet. In Roberto Poli, Michael Healy, and Achilles Kameas, editors, Theory and Applications of Ontology: Computer Applica- tions, pages 231-243. Springer Netherlands, Dor- drecht.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Towards unified dialogue system evaluation: A comprehensive analysis of current evaluation protocols",
"authors": [
{
"first": "Sarah",
"middle": [
"E"
],
"last": "Finch",
"suffix": ""
},
{
"first": "Jinho",
"middle": [
"D."
],
"last": "Choi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 21th Annual Meeting of the Special Interest Group on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "236--245",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarah E. Finch and Jinho D. Choi. 2020. Towards uni- fied dialogue system evaluation: A comprehensive analysis of current evaluation protocols. In Proceed- ings of the 21th Annual Meeting of the Special Inter- est Group on Discourse and Dialogue, pages 236- 245, 1st virtual meeting. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A taxonomic classification of WordNet polysemy types",
"authors": [
{
"first": "Abed Alhakim",
"middle": [],
"last": "Freihat",
"suffix": ""
},
{
"first": "Fausto",
"middle": [],
"last": "Giunchiglia",
"suffix": ""
},
{
"first": "Biswanath",
"middle": [],
"last": "Dutta",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 8th Global WordNet Conference (GWC)",
"volume": "",
"issue": "",
"pages": "106--114",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abed Alhakim Freihat, Fausto Giunchiglia, and Biswanath Dutta. 2016. A taxonomic classification of WordNet polysemy types. In Proceedings of the 8th Global WordNet Conference (GWC), pages 106- 114, Bucharest, Romania. Global Wordnet Associa- tion.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Homonymy and polysemy detection with multilingual information",
"authors": [
{
"first": "Amir Ahmad",
"middle": [],
"last": "Habibi",
"suffix": ""
},
{
"first": "Bradley",
"middle": [],
"last": "Hauer",
"suffix": ""
},
{
"first": "Grzegorz",
"middle": [],
"last": "Kondrak",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 11th Global Wordnet Conference",
"volume": "",
"issue": "",
"pages": "26--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amir Ahmad Habibi, Bradley Hauer, and Grzegorz Kondrak. 2021. Homonymy and polysemy detec- tion with multilingual information. In Proceedings of the 11th Global Wordnet Conference, pages 26-35, University of South Africa (UNISA). Global Wordnet Association.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The great misalignment problem in human evaluation of NLP methods",
"authors": [
{
"first": "Mika",
"middle": [],
"last": "H\u00e4m\u00e4l\u00e4inen",
"suffix": ""
},
{
"first": "Khalid",
"middle": [],
"last": "Alnajjar",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Workshop on Human Evaluation of NLP Systems (HumEval)",
"volume": "",
"issue": "",
"pages": "69--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mika H\u00e4m\u00e4l\u00e4inen and Khalid Alnajjar. 2021. The great misalignment problem in human evaluation of NLP methods. In Proceedings of the Workshop on Human Evaluation of NLP Systems (HumEval), pages 69-74, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Methods for the design and evaluation of HCI+NLP systems",
"authors": [
{
"first": "Hendrik",
"middle": [],
"last": "Heuer",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Buschek",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the First Workshop on Bridging Human-Computer Interaction and Natural Language Processing",
"volume": "",
"issue": "",
"pages": "28--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hendrik Heuer and Daniel Buschek. 2021. Methods for the design and evaluation of HCI+NLP systems. In Proceedings of the First Workshop on Bridging Human-Computer Interaction and Natural Language Processing, pages 28-33, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Challenges in Building Intelligent Open-domain Dialog Systems",
"authors": [
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2020,
"venue": "ACM Transactions on Information Systems",
"volume": "38",
"issue": "3",
"pages": "21:1-21:32",
"other_ids": {
"DOI": [
"10.1145/3383123"
]
},
"num": null,
"urls": [],
"raw_text": "Minlie Huang, Xiaoyan Zhu, and Jianfeng Gao. 2020. Challenges in Building Intelligent Open-domain Di- alog Systems. ACM Transactions on Information Systems, 38(3):21:1-21:32.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Discriminating homonymy from polysemy in wordnets: English, Spanish and Polish nouns",
"authors": [
{
"first": "Arkadiusz",
"middle": [],
"last": "Janz",
"suffix": ""
},
{
"first": "Marek",
"middle": [],
"last": "Maziarz",
"suffix": ""
}
],
"year": 2021,
"venue": "University of South Africa (UNISA). Global Wordnet Association",
"volume": "",
"issue": "",
"pages": "53--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arkadiusz Janz and Marek Maziarz. 2021. Discriminat- ing homonymy from polysemy in wordnets: English, Spanish and Polish nouns. In Proceedings of the 11th Global Wordnet Conference, pages 53-62, Uni- versity of South Africa (UNISA). Global Wordnet Association.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The Measurement of Observer Agreement for Categorical Data",
"authors": [
{
"first": "J. Richard",
"middle": [],
"last": "Landis",
"suffix": ""
},
{
"first": "Gary",
"middle": [
"G"
],
"last": "Koch",
"suffix": ""
}
],
"year": 1977,
"venue": "Biometrics",
"volume": "33",
"issue": "1",
"pages": "159",
"other_ids": {
"DOI": [
"10.2307/2529310"
]
},
"num": null,
"urls": [],
"raw_text": "J. Richard Landis and Gary G. Koch. 1977. The Mea- surement of Observer Agreement for Categorical Data. Biometrics, 33(1):159.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "RoBERTa: A Robustly Optimized BERT Pretrain- ing Approach. arXiv:1907.11692 [cs]. ArXiv: 1907.11692.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Annotating coherence relations for studying topic transitions in social talk",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "L\u01b0u",
"suffix": ""
},
{
"first": "Sophia",
"middle": [
"A"
],
"last": "Malamud",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th Linguistic Annotation Workshop",
"volume": "",
"issue": "",
"pages": "174--179",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex L\u01b0u and Sophia A. Malamud. 2020. Annotating coherence relations for studying topic transitions in social talk. In Proceedings of the 14th Linguistic Annotation Workshop, pages 174-179, Barcelona, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A critical analysis and explication of word sense disambiguation as approached by natural language processing",
"authors": [
{
"first": "Julie",
"middle": [],
"last": "Mennes",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Van Der Waart Van Gulik",
"suffix": ""
}
],
"year": 2020,
"venue": "Lingua",
"volume": "243",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1016/j.lingua.2020.102896"
]
},
"num": null,
"urls": [],
"raw_text": "Julie Mennes and Stephan van der Waart van Gulik. 2020. A critical analysis and explication of word sense disambiguation as approached by natural lan- guage processing. Lingua, 243:102896.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Using a semantic concordance for sense identification",
"authors": [
{
"first": "George",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Chodorow",
"suffix": ""
},
{
"first": "Shari",
"middle": [],
"last": "Landes",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Leacock",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"G"
],
"last": "Thomas",
"suffix": ""
}
],
"year": 1994,
"venue": "Human Language Technology: Proceedings of a Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A. Miller, Martin Chodorow, Shari Landes, Claudia Leacock, and Robert G. Thomas. 1994. Us- ing a semantic concordance for sense identification. In Human Language Technology: Proceedings of a Workshop held at Plainsboro, New Jersey, March 8-11, 1994.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Simple embedding-based word sense disambiguation",
"authors": [
{
"first": "Dieke",
"middle": [],
"last": "Oele",
"suffix": ""
},
{
"first": "Gertjan",
"middle": [],
"last": "van Noord",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 9th Global Wordnet Conference",
"volume": "",
"issue": "",
"pages": "259--265",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dieke Oele and Gertjan van Noord. 2018. Simple embedding-based word sense disambiguation. In Proceedings of the 9th Global Wordnet Conference, pages 259-265, Nanyang Technological University (NTU), Singapore. Global Wordnet Association.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "AMuSE-WSD: An all-in-one multilingual system for easy Word Sense Disambiguation",
"authors": [
{
"first": "Riccardo",
"middle": [],
"last": "Orlando",
"suffix": ""
},
{
"first": "Simone",
"middle": [],
"last": "Conia",
"suffix": ""
},
{
"first": "Fabrizio",
"middle": [],
"last": "Brignone",
"suffix": ""
},
{
"first": "Francesco",
"middle": [],
"last": "Cecconi",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "298--307",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Riccardo Orlando, Simone Conia, Fabrizio Brignone, Francesco Cecconi, and Roberto Navigli. 2021. AMuSE-WSD: An all-in-one multilingual system for easy Word Sense Disambiguation. In Proceedings of the 2021 Conference on Empirical Methods in Nat- ural Language Processing: System Demonstrations, pages 298-307, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Word sense disambiguation: A unified evaluation framework and empirical comparison",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Raganato",
"suffix": ""
},
{
"first": "Jose",
"middle": [],
"last": "Camacho-Collados",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "99--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alessandro Raganato, Jose Camacho-Collados, and Roberto Navigli. 2017. Word sense disambiguation: A unified evaluation framework and empirical com- parison. In Proceedings of the 15th Conference of the European Chapter of the Association for Compu- tational Linguistics: Volume 1, Long Papers, pages 99-110, Valencia, Spain. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "The Grammar of Topic Transition in American English Conversation. Topic Transition Design and Management in Typical and Atypical Conversations (Schizophrenia)",
"authors": [
{
"first": "Marine",
"middle": [],
"last": "Riou",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marine Riou. 2015. The Grammar of Topic Transi- tion in American English Conversation. Topic Transi- tion Design and Management in Typical and Atypical Conversations (Schizophrenia). Ph.D. thesis, Univer- sit\u00e9 Sorbonne Paris Cit\u00e9.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "The 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. In The 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing, Vancouver, Canada.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Disagreement in human evaluation: Blame the task not the annotators. Invited talk at the Workshop on Human Evaluation of NLP systems",
"authors": [
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucia Specia. 2021. Disagreement in human evaluation: Blame the task not the annotators. Invited talk at the Workshop on Human Evaluation of NLP systems (HumEval).",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "FoLiA in Practice: The Infrastructure of a Linguistic Annotation Format",
"authors": [
{
"first": "Maarten",
"middle": [],
"last": "van Gompel",
"suffix": ""
},
{
"first": "Ko",
"middle": [],
"last": "van der Sloot",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Reynaert",
"suffix": ""
},
{
"first": "Antal",
"middle": [],
"last": "van den Bosch",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "71--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maarten van Gompel, Ko van der Sloot, Martin Rey- naert, and Antal van den Bosch. 2017. FoLiA in Practice: The Infrastructure of a Linguistic Annota- tion Format, pages 71-82. Ubiquity Press.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Russ",
"middle": [
"R."
],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V."
],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized Autoregressive Pretraining for Language Understanding. In Advances in Neural In- formation Processing Systems, volume 32. Curran Associates, Inc.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Statistics of the annotation task."
},
"FIGREF1": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Annotation Editor for a token."
},
"TABREF1": {
"num": null,
"content": "<table/>",
"text": "Counts of the human WSD judgment.",
"html": null,
"type_str": "table"
},
"TABREF2": {
"num": null,
"content": "<table><tr><td>(a) WordNet sense coverage yes no 439 (98.7) 6 (1.3) 445 (100) 4788 (96.1) 194 (3.9) 4982 (100) 538 (11.4) 755 (16.1) 1507 (32.1) 1899 (40.4) 4699 (100) (b) Gold standard data ADJ ADV NOUN VERB 68 (16.8) 61 (15.1) 149 (36.8) 127 (31.3) 445 (100) Total 5227 (96.3) 200 (3.7) 5427 (100) 606 (11.9) 816 (16.0) 1656 (32.4) 2026 (39.7) 5104 (100) R.1 R.2</td></tr><tr><td>Table 3: ADJ AW DB '' ADV NOUN VERB All 66.2 50.0 36.8 83.6 37.7 31.1 77.9 41.6 32.2 85.8 33.1 23.6 79.3 39.8 30.1 74.5 42.9 32.9 76.2 37.1 30.6 76.0 45.0 33.8 69.5 25.6 17.6 73.2 35.7 26.6 Total 73.6 43.7 R.1 R.2 33.3 76.7 37.1 30.6 76.2 44.7 33.7 70.7 26.1 18.0 73.7 36.0 26.9</td></tr><tr><td>Table 4: Accuracy (%) of initial WSD models (DB: DistilBERT).</td></tr><tr><td>ADJ X 44.1 36.8 33.8 50.8 16.4 42.6 44.3 22.8 34.9 33.9 20.5 27.6 42.0 23.5 33.6 ADV NOUN VERB All B R B X R B X R B X R B X R 42.2 21.7 34.2 34.6 17.9 38.1 43.5 24.0 34.6 22.7 12.5 21.7 33.5 18.1 29.9 Total 42.4 23.4 34.2 35.8 17.8 38.5 43.6 23.9 34.7 23.4 13.0 22.1 34.2 18.5 30.2 R.1 R.2</td></tr></table>",
"text": "Statistics (counts and percentages) of the human WSD judgment.",
"html": null,
"type_str": "table"
}
}
}
}