ACL-OCL / Base_JSON /prefixE /json /emnlp /2020.emnlp-demos.23.json
Benjamin Aw
Add updated pkl file v3
6fa4bc9
{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T11:08:48.159266Z"
},
"title": "WantWords: An Open-source Online Reverse Dictionary System",
"authors": [
{
"first": "Fanchao",
"middle": [],
"last": "Qi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {}
},
"email": ""
},
{
"first": "Lei",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University Beijing National Research",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Yanhui",
"middle": [],
"last": "Yang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University Beijing National Research",
"location": {}
},
"email": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {}
},
"email": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "A reverse dictionary takes descriptions of words as input and outputs words semantically matching the input descriptions. Reverse dictionaries have great practical value such as solving the tip-of-the-tongue problem and helping new language learners. There have been some online reverse dictionary systems, but they support English reverse dictionary queries only and their performance is far from perfect. In this paper, we present a new open-source online reverse dictionary system named WantWords (https://wantwords. thunlp.org/). It not only significantly outperforms other reverse dictionary systems on English reverse dictionary performance, but also supports Chinese and English-Chinese as well as Chinese-English cross-lingual reverse dictionary queries for the first time. Moreover, it has user-friendly front-end design which can help users find the words they need quickly and easily. All the code and data are available at https://github.com/ thunlp/WantWords.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "A reverse dictionary takes descriptions of words as input and outputs words semantically matching the input descriptions. Reverse dictionaries have great practical value such as solving the tip-of-the-tongue problem and helping new language learners. There have been some online reverse dictionary systems, but they support English reverse dictionary queries only and their performance is far from perfect. In this paper, we present a new open-source online reverse dictionary system named WantWords (https://wantwords. thunlp.org/). It not only significantly outperforms other reverse dictionary systems on English reverse dictionary performance, but also supports Chinese and English-Chinese as well as Chinese-English cross-lingual reverse dictionary queries for the first time. Moreover, it has user-friendly front-end design which can help users find the words they need quickly and easily. All the code and data are available at https://github.com/ thunlp/WantWords.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Opposite to a regular (forward) dictionary that provides definitions for query words, a reverse dictionary (Sierra, 2000) returns words semantically matching the query descriptions. In Figure 1 , for example, a regular dictionary tells you the definition of \"expressway\" is \"a wide road that allows traffic to travel fast\", while a reverse dictionary outputs \"expressway\" and other semantically similar words like \"freeway\" which match the query description \"a road where cars go very quickly without stopping\" you input.",
"cite_spans": [
{
"start": 107,
"end": 121,
"text": "(Sierra, 2000)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 185,
"end": 193,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Reverse dictionaries are useful in practical applications. First and foremost, they can effectively solve the tip-of-the-tongue problem (Brown and An example illustrating what a regular (forward) dictionary and a reverse dictionary are. McNeill, 1966) , namely the phenomenon of failing to retrieve a word from memory. Many people frequently suffer the problem, especially those who write a lot such as writers, researchers and students. With the help of reverse dictionaries, people can quickly and easily find the words that they need but temporarily forget.",
"cite_spans": [
{
"start": 136,
"end": 146,
"text": "(Brown and",
"ref_id": null
},
{
"start": 237,
"end": 251,
"text": "McNeill, 1966)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In addition, reverse dictionaries are helpful to new language learners who grasp a limited number of words. They will know and learn some new words that have the meanings they want to express by using a reverse dictionary. Also, reverse dictionaries can help word selection (or word dictionary) anomia patients, people who can recognize and describe an object but fail to name it due to neurological disorder (Benson, 1979) .",
"cite_spans": [
{
"start": 409,
"end": 423,
"text": "(Benson, 1979)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Currently, there are mainly two online reverse dictionaries, namely OneLook 1 and ReverseDictionary. 2 Their performance is far from perfect. Further, both of them are closed-source and only support English reverse dictionary queries.",
"cite_spans": [
{
"start": 101,
"end": 102,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To solve these problems, we design and develop a new online reverse dictionary system named WantWords, which is totally open-source. WantWords is mainly based on our proposed multi-channel reverse dictionary model (Zhang et al., 2020) , which achieves state-of-the-art performance on an English benchmark dataset. Our system uses an improved version of the multi-channel reverse dictionary model and incorporates some engineering tricks to handle extreme cases. Evaluation results show that with these improvements, our system achieves higher performance. Besides, our system supports Chinese reverse dictionary queries and Chinese-English as well as English-Chinese cross-lingual reverse dictionary queries, all of which are realized for the first time. Finally, our system is very user-friendly. It includes multiple filters and sort methods, and can automatically cluster the candidate words, all of which help users find the target words as quickly as possible.",
"cite_spans": [
{
"start": 214,
"end": 234,
"text": "(Zhang et al., 2020)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are mainly two methods for reverse dictionary building. The first one is based on sentence matching Zock and Bilac, 2004; M\u00e9ndez et al., 2013; Shaw et al., 2013) . Its main idea is to return the words whose dictionary definitions are most similar to the query description. Although effective in some cases, this method cannot cope with the problem that human-written query descriptions might differ widely from dictionary definitions.",
"cite_spans": [
{
"start": 106,
"end": 127,
"text": "Zock and Bilac, 2004;",
"ref_id": "BIBREF20"
},
{
"start": 128,
"end": 148,
"text": "M\u00e9ndez et al., 2013;",
"ref_id": "BIBREF10"
},
{
"start": 149,
"end": 167,
"text": "Shaw et al., 2013)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The second method uses a neural language model (NLM) to encode the query description into a vector in the word embedding space, and returns the words with the closest embeddings to the vector of the query description (Hill et al., 2016; Morinaga and Yamaguchi, 2018; Kartsaklis et al., 2018; Hedderich et al., 2019; Pilehvar, 2019) . Performance of this method depends largely on the quality of word embeddings. Unfortunately, according to Zipf's law (Zipf, 1949) , many words are low-frequency and usually have poor embeddings.",
"cite_spans": [
{
"start": 217,
"end": 236,
"text": "(Hill et al., 2016;",
"ref_id": "BIBREF7"
},
{
"start": 237,
"end": 266,
"text": "Morinaga and Yamaguchi, 2018;",
"ref_id": "BIBREF12"
},
{
"start": 267,
"end": 291,
"text": "Kartsaklis et al., 2018;",
"ref_id": "BIBREF9"
},
{
"start": 292,
"end": 315,
"text": "Hedderich et al., 2019;",
"ref_id": "BIBREF6"
},
{
"start": 316,
"end": 331,
"text": "Pilehvar, 2019)",
"ref_id": "BIBREF13"
},
{
"start": 451,
"end": 463,
"text": "(Zipf, 1949)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "To tackle this issue of the NLM-based method, we proposed a multi-channel reverse dictionary model (Zhang et al., 2020) . This model is composed of a sentence encoder, more specifically, a bi-directional LSTM (BiLSTM) (Hochreiter and Schmidhuber, 1997) with attention (Bahdanau et al., 2015) , and four characteristic predictors. The four predictors are used to predict the part-ofspeech, morphemes, word category and sememes 3 of the target word according to the query description, respectively. The incorporation of the characteristic predictors can help find the target words with poor embeddings and exclude wrong words with similar embeddings to the target words, such 3 A sememe is defined as the minimum semantic units of human languages (Bloomfield, 1926) . The meaning of a word can be expressed by several sememes. as antonyms. Experimental results have demonstrated that our multi-channel reverse dictionary model achieves state-of-the-art performance. In WantWords, we employ an improved version of it that yields better results.",
"cite_spans": [
{
"start": 99,
"end": 119,
"text": "(Zhang et al., 2020)",
"ref_id": "BIBREF18"
},
{
"start": 218,
"end": 252,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF8"
},
{
"start": 268,
"end": 291,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF0"
},
{
"start": 745,
"end": 763,
"text": "(Bloomfield, 1926)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this section, we describe the system architecture of WantWords. We first give an overview of its workflow, then we detail the improved multichannel reverse dictionary model, and finally we introduce its front-end design.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Architecture",
"sec_num": "3"
},
{
"text": "The workflow of WantWords is illustrated in Figure 2. There are two reverse dictionary modes, namely monolingual and cross-lingual modes. In the monolingual mode, if the query description is longer than one word, it will be fed into the multichannel reverse dictionary model directly, which calculates a confidence score for each candidate word in the vocabulary; if the query description is just a word, the confidence score of each candidate word is mostly based on the cosine similarity between the embeddings of the query word and candidate word.",
"cite_spans": [],
"ref_spans": [
{
"start": 44,
"end": 50,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Overall Workflow",
"sec_num": "3.1"
},
{
"text": "In the cross-lingual mode, where the query descriptions are in the source language and the target words are in the target language, if the query description is longer than one word, it will be translated into the target language first and then processed in the monolingual mode of the target language; if the query description is just a word, crosslingual dictionaries will be consulted for the target-language definitions of the query word, and then the definitions are fed into the multi-channel reverse dictionary model to calculate candidate words' confidence scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Workflow",
"sec_num": "3.1"
},
{
"text": "After obtaining confidence scores, all candidate words in the vocabulary will be sorted by descending confidence scores and listed as system output. The words in the query description are excluded since they are unlikely to be the target word. Different filters, other sort methods and clustering may be further employed to adjust the final results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Workflow",
"sec_num": "3.1"
},
{
"text": "The multi-channel reverse dictionary model (MRDM) is the core module of our system. We use an improved version of MRDM that employs BERT (Devlin et al., 2019) rather than BiLSTM as the sentence encoder. Figure 3 illustrates the model.",
"cite_spans": [
{
"start": 137,
"end": 158,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 203,
"end": 211,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multi-channel Reverse Dictionary Model",
"sec_num": "3.2"
},
{
"text": "For a given query description, MRDM calculates a confidence score for each candidate word in the vocabulary. The confidence score is composed of five parts:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-channel Reverse Dictionary Model",
"sec_num": "3.2"
},
{
"text": "(1) The first part is word score. To obtain it, the input query description is first encoded into a sentence vector by BERT, then the sentence vector is mapped into the space of word embeddings by a single-layer perceptron, and finally word score is the dot product of the mapped sentence vector and the candidate word's embedding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-channel Reverse Dictionary Model",
"sec_num": "3.2"
},
{
"text": "(2) The second part is part-of-speech (PoS) score, which is based on the prediction for the PoS of the target word. MRDM first calculates a prediction score for each PoS tag by feeding the sentence vector into a single-layer perceptron, and then a candidate word's PoS score is the sum of the prediction scores of all its PoS tags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-channel Reverse Dictionary Model",
"sec_num": "3.2"
},
{
"text": "(3) The third part is category score, which is related to the category of the target word and can be obtained in a similar way to PoS score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-channel Reverse Dictionary Model",
"sec_num": "3.2"
},
{
"text": "(4) The fourth part is morpheme score, which is supposed to capture the morphemes of the target word. Each token of the input query description corresponds to a hidden state as the output of BERT. MRDM first feeds each hidden state into a single-layer perceptron to obtain a local morpheme prediction score, then does max-pooling over all the local morpheme prediction scores to obtain a prediction score for each morpheme, and finally a candidate word's morpheme score is the sum of the prediction scores of all its morphemes. Figure 3 : Revised version of the multi-channel reverse dictionary model.",
"cite_spans": [],
"ref_spans": [
{
"start": 528,
"end": 536,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multi-channel Reverse Dictionary Model",
"sec_num": "3.2"
},
{
"text": "(5) The fifth part is sememe score, which is based on the prediction for the sememes of the target word. Sememe score can be calculated in a similar way to morpheme score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-channel Reverse Dictionary Model",
"sec_num": "3.2"
},
{
"text": "We use the official pre-trained BERT models for both English and Chinese. 4 As for fine-tuning (training) for English, we use the dictionary definition dataset created by Hill et al. (2016) , which contains about 100, 000 words and 900, 000 worddefinition pairs extracted from five dictionaries. For fine-tuning (training) for Chinese, we build a large-scale dictionary definition dataset based on the dataset created by Zhang et al. (2020) . It contains 137, 174 words and 270, 549 word-definition pairs, where the definitions are extracted from several authoritative Chinese dictionaries including Modern Chinese Dictionary, Xinhua Dictionary and Chinese Idiom Dictionary as well as an opensource dictionary dataset. 5 MRDM requires some other resources, and we simply follow the settings in Zhang et al. (2020) . Specifically, for English, we use Morfessor (Virpioja et al., 2013) to segment words into morphemes, WordNet (Miller, 1995) to obtain PoS and word category information, and OpenHowNet 6 (Qi et al., 2019) to obtain sememe information. As for Chinese, we simply use Chinese characters as morphemes. We utilize the PoS tags in Modern Chinese Dictionary. In addition, we use HIT-IR Tongyici Cilin 7 and OpenHowNet to obtain word category and sememe information, respectively. ",
"cite_spans": [
{
"start": 74,
"end": 75,
"text": "4",
"ref_id": null
},
{
"start": 171,
"end": 189,
"text": "Hill et al. (2016)",
"ref_id": "BIBREF7"
},
{
"start": 421,
"end": 440,
"text": "Zhang et al. (2020)",
"ref_id": "BIBREF18"
},
{
"start": 719,
"end": 720,
"text": "5",
"ref_id": null
},
{
"start": 794,
"end": 813,
"text": "Zhang et al. (2020)",
"ref_id": "BIBREF18"
},
{
"start": 925,
"end": 939,
"text": "(Miller, 1995)",
"ref_id": "BIBREF11"
},
{
"start": 1002,
"end": 1019,
"text": "(Qi et al., 2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-channel Reverse Dictionary Model",
"sec_num": "3.2"
},
{
"text": "In the monolingual reverse dictionary mode, in the case where the query description is a single word, we simply use word embedding similarity to calculate the confidence scores of candidate words in the vocabulary, rather than feed the query word into MRDM. We also take the synonyms into consideration and double the confidence score of a candidate word if it is a synonym of the query word. We use WordNet and HIT-IR Tongyici Cilin as English and Chinese thesauri, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "One-word Query in the Monolingual Mode",
"sec_num": "3.3"
},
{
"text": "In the cross-lingual mode, a query description longer than one word is first translated into the target language using Baidu Translation API 8 , and then the translated query description is processed in the same procedure as the monolingual mode.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Cross-lingual Mode",
"sec_num": "3.4"
},
{
"text": "As for a one-word query description, we do not utilize machine translation because existing translation APIs cannot return all the possible translation results, especially for polysemous query words, which may impairs system performance. Instead, we consult cross-lingual dictionaries for definitions in the target language of the query word, and feed all the definitions into the targetlanguage MRDM. Specifically, we use StarDict and LangDao English-Chinese Dictionaries in the English-Chinese mode and LangDao, CEDICT, and MDBG Chinese-English dictionaries in the Chinese-English mode. We concatenate multiple dictionary definitions before feeding into MRDM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Cross-lingual Mode",
"sec_num": "3.4"
},
{
"text": "The front-end design of WantWords is simple and user-friendly, as shown in Figure 4 . After inputting a query description in the textbox in the center of the system web page and clicking the \"Search\" button, one hundred candidate words will be listed in descending order of confidence scores. The words with confidence scores higher than a threshold have a background color whose shade is proportional to the confidence score.",
"cite_spans": [],
"ref_spans": [
{
"start": 75,
"end": 83,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Front-end Design",
"sec_num": "3.5"
},
{
"text": "A tool bar will appear below the textbox. Users can filter the candidate words by different filters in the tool bar. Specifically, for English candidate words, there are PoS, word length, initial and wildcard pattern filters; for Chinese candidate words, there are word length, total stroke number, wildcard pattern, pinyin initials, PoS and rhyme filters. These filters can help users find the word they need as quickly as possible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Front-end Design",
"sec_num": "3.5"
},
{
"text": "In the tool bar, users can also change the sort method of the candidate words. Users can sort the English candidate words in regular or reverse alphabetical order and by word length, and Chinese candidate words in regular or reverse pinyin alphabetical order and by total stroke number. Besides, WantWords supports dividing candidate words into six clusters, where we use k-means clustering algorithm in the word embedding space. The sort methods and clustering are also beneficial to quickly finding the target word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Front-end Design",
"sec_num": "3.5"
},
{
"text": "Considering the cases where users, especially new language learners, do not know rather than forget a word, our system provides definitions for candidate words. Users can click a candidate word to invoke a floating window that displays the PoS and definition of the word. The displayed definitions of English and Chinese words are from Word-Net and the open-source Chinese dictionary dataset respectively, both of which are freely available. Finally, our system has quick feedback channels to collect real-world data. Due to the lack of humanwritten description data, existing reverse dictionary systems can only utilize dictionary definitions in training. However, dictionary definitions are usually different from human-written descriptions, which affects the performance of reverse dictionaries. Therefore, we design some feedback channels to collect users' feedback, aiming to use it to improve our system. Specifically, users can choose between \"Matched Well\" and \"Not Matched\" in the floating window of a candidate word to give their opinions about the candidate word. In addition, users can directly propose appropriate words matching the query description.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Front-end Design",
"sec_num": "3.5"
},
{
"text": "In this section, we evaluate the reverse dictionary performance of WantWords. We conduct both monolingual (English and Chinese) and crosslingual (English-Chinese and Chinese-English) reverse dictionary evaluations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "In the evaluation of English monolingual reverse dictionary performance, we use two test sets including (1) Definition set, which contains 500 pairs of words and WordNet definitions that are randomly selected and have been excluded from the training set; and (2) Description set, which comprises 200 pairs of words and human-written descriptions and is a benchmark dataset created by Hill et al. (2016) .",
"cite_spans": [
{
"start": 384,
"end": 402,
"text": "Hill et al. (2016)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "As for Chinese, we use three test sets: (1) Definition set, which contains 2, 000 pairs of words and dictionary definitions that are selected at random and do not exist in the training set; (2) Description set, which is composed of 200 word-description pairs given by Chinese native speakers and is built by Zhang et al. (2020) ; and (3) Question set, which collects 272 real-world Chinese exam questionanswers of writing the right word given a description from the Internet and is also created by Zhang et al. (2020) .",
"cite_spans": [
{
"start": 308,
"end": 327,
"text": "Zhang et al. (2020)",
"ref_id": "BIBREF18"
},
{
"start": 498,
"end": 517,
"text": "Zhang et al. (2020)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "To evaluate cross-lingual reverse dictionary performance, we build two test sets based on the two monolingual Description sets. We manually translate the word of each word-description pair in the English Description sets into Chinese to obtain the English-Chinese Description set, which is composed of 200 pairs of English descriptions and Chinese words. In a similar way, we construct the Chinese-English Description set, which contains 200 pairs of Chinese descriptions and English words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "We choose two existing online reverse dictionary systems, namely OneLook and ReverseDictionary, and two reverse dictionary models, namely original MRDM and BERT, as baseline methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Methods",
"sec_num": "4.2"
},
{
"text": "OneLook and ReverseDictionary can only support English monolingual reverse dictionary queries. MRDM, as mentioned before, is the current state-of-the-art reverse dictionary model (Zhang et al., 2020) and mainly differs from WantWords in the sentence encoder (BiLSTM vs BERT) and engineering tricks (e.g., special processing for one-word queries) to handle one-word queries. As for BERT, it does not have extra characteristic predictors and engineering tricks as compared to WantWords. MRDM and BERT are trained with the same training sets as WantWords to respond English and Chinese reverse dictionary queries, respectively. They can also support crosslingual reverse dictionary queries processed with the same procedure as the cross-lingual mode of WantWords.",
"cite_spans": [
{
"start": 179,
"end": 199,
"text": "(Zhang et al., 2020)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Methods",
"sec_num": "4.2"
},
{
"text": "Following previous work (Hill et al., 2016; Zhang et al., 2020) , we use four evaluation metrics: the median rank of the target words in the final word lists (lower better) and the accuracy that the target words appear in top 1/10/100 (acc@1/10/100, higher better). Every experiment is run five times, and we report the average results. We also conduct Student's t-test to measure the significance of performance difference. Table 2 : Evaluation results of cross-lingual reverse dictionaries (median rank and acc@1/10/100).",
"cite_spans": [
{
"start": 24,
"end": 43,
"text": "(Hill et al., 2016;",
"ref_id": "BIBREF7"
},
{
"start": 44,
"end": 63,
"text": "Zhang et al., 2020)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 425,
"end": 432,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.3"
},
{
"text": "The monolingual reverse dictionary evaluation results of WantWords and baseline methods are shown in Table 1 . OneLook and ReverseDictionary have stored all the WordNet definitions, and we cannot exclude the word-definition pairs in the Definition set from their databases. Therefore, they can be evaluated on the Description set only.",
"cite_spans": [],
"ref_spans": [
{
"start": 101,
"end": 108,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "4.4"
},
{
"text": "We observe that WantWords basically performs better than all the baseline methods on all the five test sets. On the English benchmark test set Description, WantWords completely outperforms the two existing online systems and achieves new state-of-the-art performance. On the three Chinese test sets, WantWords also yields significantly better results than the two baseline methods. Table 2 shows the cross-lingual reverse dictionary evaluation results of WantWords and two baseline methods. We find that the performance of three models is similar and much poorer than that on corresponding monolingual datasets. We conjecture that the unsatisfying translation quality seriously affects final performance, based on our observation that translations of some query descriptions are inaccurate and even ungrammatical. Table 3 shows two English reverse dictionary cases, where the query descriptions and output word lists of three reverse dictionary systems are displayed. In the first case, WantWords finds 8 correct words among top 10 while the other two systems finds none among top 15. In the second case, OneLook and WantWords use the PoS filter to retain verbs only. After filtering, the target word \"receive\" is ranked top 3 in the word list of WantWords while OneLook still cannot find any correct words among top 6. ReverseDictionary has no filter and none of correct words appear among top 15.",
"cite_spans": [],
"ref_spans": [
{
"start": 382,
"end": 389,
"text": "Table 2",
"ref_id": null
},
{
"start": 814,
"end": 821,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "4.4"
},
{
"text": "In this paper, we present WantWords, an online reverse dictionary system, which achieves stateof-the-art performance on an English reverse dictionary benchmark dataset. Besides, it supports Chinese and English-Chinese as well as Chinese-English cross-lingual reverse dictionary queries for the first time. In the future, we will try to incorporate multi-word expressions and idioms in the system. Also, we will work on improving crosslingual reverse dictionary performance by bilingual word embeddings or multilingual BERT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future work",
"sec_num": "5"
},
{
"text": "https://onelook.com/thesaurus/ 2 https://reversedictionary.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/google-research/ bert 5 https://github.com/pwxcoo/ chinese-xinhua 6 https://github.com/thunlp/OpenHowNet 7 https://github.com/yaleimeng/Final_ word_Similarity/tree/master/cilin",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://fanyi-api.baidu.com/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Neurologic correlates of anomia",
"authors": [
{
"first": "D Frank",
"middle": [],
"last": "Benson",
"suffix": ""
}
],
"year": 1979,
"venue": "Studies in Neurolinguistics",
"volume": "",
"issue": "",
"pages": "293--328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D Frank Benson. 1979. Neurologic correlates of anomia. In Studies in Neurolinguistics, pages 293- 328. Elsevier.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Dictionary search based on the target word description",
"authors": [
{
"first": "Slaven",
"middle": [],
"last": "Bilac",
"suffix": ""
},
{
"first": "Wataru",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "Taiichi",
"middle": [],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "Takenobu",
"middle": [],
"last": "Tokunaga",
"suffix": ""
},
{
"first": "Hozumi",
"middle": [],
"last": "Tanaka",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of NLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Slaven Bilac, Wataru Watanabe, Taiichi Hashimoto, Takenobu Tokunaga, and Hozumi Tanaka. 2004. Dictionary search based on the target word descrip- tion. In Proceedings of NLP.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A set of postulates for the science of language",
"authors": [
{
"first": "Leonard",
"middle": [],
"last": "Bloomfield",
"suffix": ""
}
],
"year": 1926,
"venue": "Language",
"volume": "2",
"issue": "3",
"pages": "153--164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leonard Bloomfield. 1926. A set of postulates for the science of language. Language, 2(3):153-164.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The \"tip of the tongue\" phenomenon",
"authors": [
{
"first": "Roger",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mcneill",
"suffix": ""
}
],
"year": 1966,
"venue": "Journal of Verbal Learning and Verbal Behavior",
"volume": "5",
"issue": "4",
"pages": "325--337",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roger Brown and David McNeill. 1966. The \"tip of the tongue\" phenomenon. Journal of Verbal Learning and Verbal Behavior, 5(4):325-337.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing. In Proceedings of NAACL-HLT.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Using multi-sense vector embeddings for reverse dictionaries",
"authors": [
{
"first": "A",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Hedderich",
"suffix": ""
},
{
"first": "Dietrich",
"middle": [],
"last": "Yates",
"suffix": ""
},
{
"first": "Gerard",
"middle": [],
"last": "Klakow",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "De Melo",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of IWCS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael A Hedderich, Andrew Yates, Dietrich Klakow, and Gerard de Melo. 2019. Using multi-sense vector embeddings for reverse dictionaries. In Proceedings of IWCS.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Learning to understand phrases by embedding the dictionary",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2016,
"venue": "TACL",
"volume": "4",
"issue": "",
"pages": "17--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Hill, Kyunghyun Cho, Anna Korhonen, and Yoshua Bengio. 2016. Learning to understand phrases by embedding the dictionary. TACL, 4:17- 30.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Mapping text to knowledge graph entities using multi-sense lstms",
"authors": [
{
"first": "Dimitri",
"middle": [],
"last": "Kartsaklis",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Taher Pilehvar",
"suffix": ""
},
{
"first": "Nigel",
"middle": [],
"last": "Collier",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dimitri Kartsaklis, Mohammad Taher Pilehvar, and Nigel Collier. 2018. Mapping text to knowledge graph entities using multi-sense lstms. In Proceed- ings of EMNLP.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A reverse dictionary based on semantic analysis using wordnet",
"authors": [
{
"first": "Oscar",
"middle": [],
"last": "M\u00e9ndez",
"suffix": ""
},
{
"first": "Hiram",
"middle": [],
"last": "Calvo",
"suffix": ""
},
{
"first": "Marco",
"middle": [
"A"
],
"last": "Moreno-Armend\u00e1riz",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of MICAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oscar M\u00e9ndez, Hiram Calvo, and Marco A. Moreno- Armend\u00e1riz. 2013. A reverse dictionary based on semantic analysis using wordnet. In Proceedings of MICAI.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Wordnet: a lexical database for english",
"authors": [
{
"first": "A",
"middle": [],
"last": "George",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39- 41.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Improvement of reverse dictionary by tuning word vectors and category inference",
"authors": [
{
"first": "Yuya",
"middle": [],
"last": "Morinaga",
"suffix": ""
},
{
"first": "Kazunori",
"middle": [],
"last": "Yamaguchi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of ICIST",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuya Morinaga and Kazunori Yamaguchi. 2018. Im- provement of reverse dictionary by tuning word vec- tors and category inference. In Proceedings of ICIST.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "On the importance of distinguishing word meaning representations: A case study on reverse dictionary mapping",
"authors": [
{
"first": "Mohammad",
"middle": [],
"last": "Taher",
"suffix": ""
},
{
"first": "Pilehvar",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad Taher Pilehvar. 2019. On the importance of distinguishing word meaning representations: A case study on reverse dictionary mapping. In Pro- ceedings of NAACL-HLT.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Openhownet: An open sememe-based lexical knowledge base",
"authors": [
{
"first": "Fanchao",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Chenghao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Zhendong",
"middle": [],
"last": "Dong",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1901.09957"
]
},
"num": null,
"urls": [],
"raw_text": "Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Qiang Dong, Maosong Sun, and Zhendong Dong. 2019. Openhownet: An open sememe-based lexical knowl- edge base. arXiv preprint arXiv:1901.09957.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Building a scalable databasedriven reverse dictionary",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Shaw",
"suffix": ""
},
{
"first": "Anindya",
"middle": [],
"last": "Datta",
"suffix": ""
},
{
"first": "Debra",
"middle": [
"E"
],
"last": "Vandermeer",
"suffix": ""
},
{
"first": "Kaushik",
"middle": [],
"last": "Dutta",
"suffix": ""
}
],
"year": 2013,
"venue": "TKDE",
"volume": "25",
"issue": "",
"pages": "528--540",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Shaw, Anindya Datta, Debra E. VanderMeer, and Kaushik Dutta. 2013. Building a scalable database- driven reverse dictionary. TKDE, 25:528-540.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The onomasiological dictionary: a gap in lexicography",
"authors": [
{
"first": "Gerardo",
"middle": [],
"last": "Sierra",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the Ninth Euralex International Congress",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerardo Sierra. 2000. The onomasiological dictionary: a gap in lexicography. In Proceedings of the Ninth Euralex International Congress.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Morfessor 2.0: Python implementation and extensions for morfessor baseline",
"authors": [
{
"first": "Sami",
"middle": [],
"last": "Virpioja",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Smit",
"suffix": ""
},
{
"first": "Stig",
"middle": [
"Arne"
],
"last": "Gr\u00f6nroos",
"suffix": ""
},
{
"first": "Mikko",
"middle": [],
"last": "Kurimo",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sami Virpioja, Peter Smit, Stig Arne Gr\u00f6nroos, and Mikko Kurimo. 2013. Morfessor 2.0: Python im- plementation and extensions for morfessor baseline. Aalto University Publication.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Multi-channel reverse dictionary model",
"authors": [
{
"first": "Lei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Fanchao",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yasheng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lei Zhang, Fanchao Qi, Zhiyuan Liu, Yasheng Wang, Qun Liu, and Maosong Sun. 2020. Multi-channel reverse dictionary model. In Proceedings of AAAI.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Human behavior and the principle of least effort. SERBIULA (sistema Librum 2.0)",
"authors": [
{
"first": "George",
"middle": [],
"last": "Kingsley",
"suffix": ""
},
{
"first": "Zipf",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1949,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George Kingsley Zipf. 1949. Human behavior and the principle of least effort. SERBIULA (sistema Li- brum 2.0).",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Word lookup on the basis of associations: from an idea to a roadmap",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Zock",
"suffix": ""
},
{
"first": "Slaven",
"middle": [],
"last": "Bilac",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Workshop on Enhancing and Using Electronic Dictionaries",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Zock and Slaven Bilac. 2004. Word lookup on the basis of associations: from an idea to a roadmap. In Proceedings of the Workshop on Enhancing and Using Electronic Dictionaries.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Figure 1: An example illustrating what a regular (forward) dictionary and a reverse dictionary are."
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Workflow of WantWords."
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Front-end design of WantWords in the English monolingual mode."
},
"TABREF2": {
"text": "/.29/.59 3 .31/.65/.88 8 .21/.51/.76 4 .27/.60/.85 1 .50/.79/.91 BERT 34 .09/.34/.61 2 .33/.76/.93 13 .13/.45/.72 5 .23/.62/.86 1 .49/.79/.91 WantWords 19 .10/.38/.72 2 .36/.75/.92 7 .22/.54/.77 2 .37/.74/.91 0 .60/.82/.93",
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>Model</td><td colspan=\"2\">En Definition</td><td>En Description</td><td colspan=\"2\">Zh Definition</td><td colspan=\"2\">Zh Description</td><td colspan=\"2\">Zh Question</td></tr><tr><td>OneLook</td><td>-</td><td>-</td><td>6 .33/.54/.76</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td colspan=\"2\">ReverseDictionary -</td><td>-</td><td>4 .30/.64/.80</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>MRDM</td><td>53 .08</td><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>"
},
"TABREF3": {
"text": "Evaluation results of English and Chinese monolingual reverse dictionaries (median rank and acc@1/10/100). The boldfaced results denote significant dominance, and the underlined results indicate insignificant difference, where the statistical significance threshold of p-value is 0.05. The same is true forTable 2.",
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>Model</td><td>en-zh</td><td>zh-en</td></tr><tr><td>MRDM</td><td colspan=\"2\">40 .12/.31/.63 8 .20/.52/.76</td></tr><tr><td>BERT</td><td colspan=\"2\">16 .14/.40/.75 7 .21/.54/.76</td></tr><tr><td colspan=\"3\">WantWords 19 .14/.38/.76 8 .22/.53/.78</td></tr></table>"
},
"TABREF5": {
"text": "Two English reverse dictionary cases. The boldfaced words are correct answers while the words struck through are filtered out by PoS filter (verb).",
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>"
}
}
}
}