{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:21:56.272729Z"
},
"title": "KW-ATTN: Knowledge Infused Attention for Accurate and Interpretable Text Classification",
"authors": [
{
"first": "Hyeju",
"middle": [],
"last": "Jang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of British Columbia",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Seojin",
"middle": [],
"last": "Bang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Wen",
"middle": [],
"last": "Xiao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of British Columbia",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Giuseppe",
"middle": [],
"last": "Carenini",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of British Columbia",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Raymond",
"middle": [],
"last": "Ng",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Young",
"middle": [
"Ji"
],
"last": "Lee",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of British Columbia",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Text classification has wide-ranging applications in various domains. While neural network approaches have drastically advanced performance in text classification, they tend to be powered by a large amount of training data, and interpretability is often an issue. As a step towards better accuracy and interpretability especially on small data, in this paper we present a new knowledge-infused attention mechanism, called KW-ATTN (KnoWledgeinfused ATTentioN) to incorporate high-level concepts from external knowledge bases into Neural Network models. We show that KW-ATTN outperforms baseline models using only words as well as other approaches using concepts by classification accuracy, which indicates that high-level concepts help model prediction. Furthermore, crowdsourced human evaluation suggests that additional concept information helps interpretability of the model.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Text classification has wide-ranging applications in various domains. While neural network approaches have drastically advanced performance in text classification, they tend to be powered by a large amount of training data, and interpretability is often an issue. As a step towards better accuracy and interpretability especially on small data, in this paper we present a new knowledge-infused attention mechanism, called KW-ATTN (KnoWledgeinfused ATTentioN) to incorporate high-level concepts from external knowledge bases into Neural Network models. We show that KW-ATTN outperforms baseline models using only words as well as other approaches using concepts by classification accuracy, which indicates that high-level concepts help model prediction. Furthermore, crowdsourced human evaluation suggests that additional concept information helps interpretability of the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Text classification is a fundamental Natural Language Processing (NLP) task which has wideranging applications such as topic classification (Lee et al., 2011) , fake news detection (Shu et al., 2017) , and medical text classification (Botsis et al., 2011) . The current state-of-the-art approaches for text classification use Neural Network (NN) models. When these techniques are applied to real data in various domains, there are two problems. First, neural approaches tend to require large training data, but it is often the case that large training data or pretrained embeddings are not available in domain-specific applications. Second, when text classification is applied in real life, not only the accuracy, but also the interpretability or explainability of the model is important.",
"cite_spans": [
{
"start": 140,
"end": 158,
"text": "(Lee et al., 2011)",
"ref_id": "BIBREF12"
},
{
"start": 181,
"end": 199,
"text": "(Shu et al., 2017)",
"ref_id": "BIBREF31"
},
{
"start": 234,
"end": 255,
"text": "(Botsis et al., 2011)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As a way to improve interpretability as well as accuracy, incorporating high-level concept information can be useful. High-level concepts could help interpretation of model results because concepts summarize individual words. The concept \"medication\" would be not only easier to interpret than the words \"ibuprofen\" or \"topiramate\" but also contributes to understanding the words better. In addition, higher-level concepts can make raw words with low frequency more predictive. For instance, the words \"hockey\" and \"archery\" might not occur in a corpus frequently enough to be considered important by a model, but knowing that they belong to the concept \"athletics\" could give more predictive power to the less frequent individual words depending on the task, because the frequency of the concept \"athletics\" would be higher than individual words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we present a new approach that incorporates high-level concept information from external knowledge sources into NN models. We devise a novel attention mechanism, KW-ATTN, that allows the network to separately and flexibly attend to the words and/or concepts occurring in a text, so that attended concepts can offer information for predictions in addition to the information a model learns from texts or a pretrained model. We test KW-ATTN on two different tasks: patient need detection in the healthcare domain and topic classification in general domains. Data is annotated with high level concepts from external knowledge bases: BabelNet (Navigli and Ponzetto, 2012) and UMLS (Unified Medical Language System) (Lindberg, 1990) . We also conduct experiments and analyses to evaluate how high-level concept information helps with interpretability of resultant classifications as well as accuracy. Our results indicate that KW-ATTN improves both classification accuracy and interpretability.",
"cite_spans": [
{
"start": 653,
"end": 681,
"text": "(Navigli and Ponzetto, 2012)",
"ref_id": "BIBREF22"
},
{
"start": 725,
"end": 741,
"text": "(Lindberg, 1990)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contribution is threefold: (1) We propose a novel attention mechanism that exploits highlevel concept information from external knowledge bases, designed for providing an additional layer of interpretation using attention. This attention mechanism can be plugged in different architectures and applied in any domain for which we have a knowledge resource and a corresponding tagger.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(2) Experiments show KW-ATTN makes statistically significant gains over a widely used attention mechanism plugged in RNN models and other approaches using concepts. We also show that the attention mechanism can help prediction accuracy when added on top of the pretrained BERT model. Additionally, our attention analysis on patient need data annotated with BabelNet and UMLS indicates that choice of external knowledge impacts the model's performance. (3) Lastly, our human evaluation using crowdsourcing suggests our model improves interpretability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Section 2 relates prior work to ours. Section 3 explains our method. Section 4 evaluates our model on two different tasks in terms of classification accuracy. Section 5 describes our human evaluation on interpretability. Section 6 concludes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There has been a growing interest in incorporation of external semantic knowledge into neural models for text classification. proposed a framework based on convolutional neural networks that combines explicit and implicit representations of short text for classification by conceptualizing a short text as a set of relevant concepts using a large taxonomy knowledge base. Yang and Mitchell (2017) proposed KBLSTM, a RNN model that uses continuous representations of knowledge bases for machine reading. Xu et al. (2017) incorporated background knowledge with the format of entity-attribute for conversation modeling. Stanovsky et al. (2017) overrided word embeddings with DBpedia concept embeddings, and used RNNs for recognizing mentions of adverse drug reaction in social media.",
"cite_spans": [
{
"start": 372,
"end": 396,
"text": "Yang and Mitchell (2017)",
"ref_id": "BIBREF37"
},
{
"start": 503,
"end": 519,
"text": "Xu et al. (2017)",
"ref_id": "BIBREF36"
},
{
"start": 617,
"end": 640,
"text": "Stanovsky et al. (2017)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge-infused Neural Networks",
"sec_num": "2.1"
},
{
"text": "More advanced neural architectures such as transformers has been also benefited by external knowledge. (Zhong et al., 2019) proposed a Knowledge Enriched Transformer (KET), where contextual utterances are interpreted using hierarchical self-attention and external commonsense knowledge is dynamically leveraged using a contextaware affective graph attention mechanism. ERNIE (Zhang et al., 2019) integrated entity embeddings pretrained on a knowledge graph with corresponding entity mentions in the text to augment the text representation. KnowBERT (Peters et al., 2019) trained BERT for entity linkers and language modeling in a multitask setting to incorporate entity representation. K-BERT (Liu et al., 2020) injected triples from knowledge graphs into a sentence to obtain an extended tree-form input for BERT.",
"cite_spans": [
{
"start": 103,
"end": 123,
"text": "(Zhong et al., 2019)",
"ref_id": "BIBREF42"
},
{
"start": 375,
"end": 395,
"text": "(Zhang et al., 2019)",
"ref_id": "BIBREF41"
},
{
"start": 540,
"end": 570,
"text": "KnowBERT (Peters et al., 2019)",
"ref_id": null
},
{
"start": 693,
"end": 711,
"text": "(Liu et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge-infused Neural Networks",
"sec_num": "2.1"
},
{
"text": "Although all these prior models incorporated external knowledge into advanced neural architectures to improve model performance, they didn't pay much attention to interpretability benefits. There have been a few knowledge-infused models that considered interpretability. Kumar et al. (2018) proposed a two-level attention network for sentiment analysis using knowledge graph embedding generated using WordNet (Fellbaum, 2012) and top-k similar words. Although this work mentions interpretability, it did not show whether/how the model can help interpretability. Margatina et al. (2019) incorporated existing psycho-linguistic and affective knowledge from human experts for sentiment related tasks. This work only showed attention heatmap for an example.",
"cite_spans": [
{
"start": 271,
"end": 290,
"text": "Kumar et al. (2018)",
"ref_id": "BIBREF11"
},
{
"start": 562,
"end": 585,
"text": "Margatina et al. (2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge-infused Neural Networks",
"sec_num": "2.1"
},
{
"text": "Our work is distinguished from others in that KW-ATTN is designed in consideration of not only accuracy but also interpretability of the model. For this reason, KW-ATTN allows separately and flexibly attending to the words and/or concepts so that important concepts for prediction can be included in prediction explanations, adding an extra layer of interpretation. We also perform human evaluation to see the effect of incorporating high-level concepts on interpretation rather than just showing a few visualization examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge-infused Neural Networks",
"sec_num": "2.1"
},
{
"text": "Interpretability is the ability to explain or present a model in an understandable way to humans (Doshi-Velez and Kim, 2017) . This interpretability is beneficial for developers to understand the model, help identify and possibly fix issues with the model, or to enhance the model. It is crucial for application end users because knowing explanations or justifications behind a model's prediction can further assist in decision making or the task at hand.",
"cite_spans": [
{
"start": 97,
"end": 124,
"text": "(Doshi-Velez and Kim, 2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Interpretability",
"sec_num": "2.2"
},
{
"text": "To provide interpretability, researchers have used inherently interpretable models such as sparse linear regression models, decision trees, or rule sets. These models are generally useful for simple prediction tasks, yet it is difficult to apply them to complicated tasks. To interpret complex models used for complex tasks, one can examine how prediction changes between two different inputs (Shrikumar et al., 2017; Lundberg and Lee, 2017) or by locally perturbing an input (Ribeiro et al., 2016) . However, a recent and popular method in NLP has been the use of an attention mechanism, which was found to be effective in helping interpret complex models by highlighting which inputs are informative to prediction (Wang et al., 2016; Lin et al., 2017; Ghaeini et al., 2018; Seo et al., 2016) .",
"cite_spans": [
{
"start": 393,
"end": 417,
"text": "(Shrikumar et al., 2017;",
"ref_id": "BIBREF30"
},
{
"start": 418,
"end": 441,
"text": "Lundberg and Lee, 2017)",
"ref_id": "BIBREF16"
},
{
"start": 476,
"end": 498,
"text": "(Ribeiro et al., 2016)",
"ref_id": "BIBREF26"
},
{
"start": 716,
"end": 735,
"text": "(Wang et al., 2016;",
"ref_id": "BIBREF34"
},
{
"start": 736,
"end": 753,
"text": "Lin et al., 2017;",
"ref_id": "BIBREF13"
},
{
"start": 754,
"end": 775,
"text": "Ghaeini et al., 2018;",
"ref_id": "BIBREF7"
},
{
"start": 776,
"end": 793,
"text": "Seo et al., 2016)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Interpretability",
"sec_num": "2.2"
},
{
"text": "Along the lines of work using attention for interpretation, our model improves attention-based interpretability by using high-level concept information. To our knowledge, no prior work used external high-level concept information for better interpretability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interpretability",
"sec_num": "2.2"
},
{
"text": "3 Our Approach",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interpretability",
"sec_num": "2.2"
},
{
"text": "We automatically annotate data with high-level concepts from two knowledge bases: BabelNet and UMLS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "External Knowledge Bases",
"sec_num": "3.1"
},
{
"text": "BabelNet (Navigli and Ponzetto, 2012 ) is a constantly growing semantic network which connects concepts and named entities in a large network of semantic relations, currently made up of about 16 million entries, called Babel synsets. In our study, we use the hypernyms of Babel synsets as additional higher-level concept information for the raw words or phrases in text. We first map texts with concepts in Babel synsets using an entity linking toolkit, Babelfy (Moro et al., 2014) , and then retrieve hypernyms, high-level concepts, of the concepts using BabelNet APIs. Table 1 shows example annotations for the sentence \"My mom was diagnosed with stage 3 ovarian cancer.\"",
"cite_spans": [
{
"start": 9,
"end": 36,
"text": "(Navigli and Ponzetto, 2012",
"ref_id": "BIBREF22"
},
{
"start": 462,
"end": 481,
"text": "(Moro et al., 2014)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 571,
"end": 578,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "BabelNet",
"sec_num": "3.1.1"
},
{
"text": "BabelNet Concepts \"Mom\" mother \"diagnosed\" analyze \"state\" state \"ovarian cancer\" disease We also exploit an external medical ontology, the UMLS (Lindberg, 1990) , for a comparison with BabelNet for the patient need task. The UMLS is a high-level ontology for organizing a great number of concepts in the biomedical domain, which provides unified access to many different biomedical resources. On top of the UMLS, the UMLS semantic network (McCray, 2003) implements an upperlevel conceptual layer for all UMLS concepts. This semantic network categorizes all concepts in the UMLS into 134 semantic types and provides 54 links between the semantic types to represent relationships in the biomedical domain. We use the semantic types of the UMLS semantic network as additional higher-level concepts because it can abstract more fine clinical concepts that exist across much larger medical ontologies such as UMLS, SNOMED (Benson, 2010) , and ICT-10(Organization et al., 2017) . To obtain the semantic types, we annotate raw text by using MetaMap. Table 2 shows an example from MetaMap. Note that the automatic annotation can be noisy (e.g., incorrect semantic types for \"mom\" in the example).",
"cite_spans": [
{
"start": 145,
"end": 161,
"text": "(Lindberg, 1990)",
"ref_id": "BIBREF14"
},
{
"start": 440,
"end": 454,
"text": "(McCray, 2003)",
"ref_id": "BIBREF20"
},
{
"start": 918,
"end": 932,
"text": "(Benson, 2010)",
"ref_id": "BIBREF0"
},
{
"start": 939,
"end": 972,
"text": "ICT-10(Organization et al., 2017)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1044,
"end": 1051,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Expression",
"sec_num": null
},
{
    "text": "[Table 2: Example UMLS annotations from MetaMap (Expression \u2192 UMLS Semantic Type): \"Mom\" \u2192 Quantitative Concept; \"Diagnosed\" \u2192 Diagnostic Procedure; \"Stage 3 ovarian cancer\" \u2192 Neoplastic Process.]",
    "cite_spans": [],
    "ref_spans": [],
    "eq_spans": [],
    "section": "Expression",
    "sec_num": null
},
{
"text": "To incorporate high-level concept information into a NN model, we design a new attention mechanism, KW-ATTN, which allows giving separate but complementary attentions to a word and its corresponding concept. To test KW-ATTN, we choose a one-level RNN architecture with an attention mechanism (1L), a hierarchical RNN architecture with an attention mechanism (2L) as in Hierarchical Attention Network (HAN) (Yang et al., 2016) , and a pretrained BERT (Devlin et al., 2018 ). Our 2L model architecture is shown in Figure 1. The whole architecture begins with words in each sentence as input. They are embedded and encoded using a word encoder, and then the resulting hidden representations move forward to a wordconcept attention layer after being concatenated with the corresponding concept embeddings. This part is different from common RNN architectures for text classification, where only the hidden representations from the word encoder are used for a word-level attention layer. Then, the output of this attention layer is used in the next phase, a sentence encoder in case of 2L, and a final layer in case of 1L. When KW-ATTN is applied to BERT (KW-BERT), the word encoder using RNN is replaced with BERT and then the output of KW-ATTN is feed to the final layer as in 1L. Word and Concept Embeddings: Each word w it (a one-hot vector, where t \u2208 {1, \u2022 \u2022 \u2022 , T } and T i is the number of words in the i-th sentence) is mapped to a real-valued vector x it through an embedding matrix W e by x it = W e w it . To use high-level concepts, each concept c it (a one-hot vector) corresponding to word w it is also mapped to x c it through an embedding matrix W ec by x c it = W ec c it . When a word is not mapped into a concept, we map the concept vector to a no-concept vector.",
"cite_spans": [
{
"start": 406,
"end": 425,
"text": "(Yang et al., 2016)",
"ref_id": "BIBREF38"
},
{
"start": 450,
"end": 470,
"text": "(Devlin et al., 2018",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 512,
"end": 518,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Incorporating High-Level Concepts",
"sec_num": "3.2"
},
{
    "text": "[Figure 1: The 2L KW-ATTN architecture. Word embeddings in the i-th sentence feed a bi-directional GRU word encoder, and concept embeddings feed a bi-directional GRU concept encoder; combined attention $\\alpha_{it}$ and balancing attention $p_{it}$ weight each word/concept pair to form sentence vectors $s_i$, which are aggregated by a sentence-level bi-directional GRU and attention into a document vector $v$ before the softmax layer.]",
    "cite_spans": [],
    "ref_spans": [],
    "eq_spans": [],
    "section": "Incorporating High-Level Concepts",
    "sec_num": "3.2"
},
{
"text": "Word and Concept Encoders: We encode T words in each sentence i using a word encoder. The corresponding T concepts are also encoded using a concept encoder. We use a bi-directional GRU (Cho et al., 2014) to build a representation for the t-th word and concept in the sentence i, denoted as h it and h c it as follows:",
"cite_spans": [
{
"start": 185,
"end": 203,
"text": "(Cho et al., 2014)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating High-Level Concepts",
"sec_num": "3.2"
},
{
"text": "\u2212 \u2192 h it = \u2212 \u2212\u2212 \u2192 GRU (x it ), \u2190 \u2212 h it = \u2190 \u2212\u2212 \u2212 GRU (x it ), h it = [ \u2212 \u2192 h it , \u2190 \u2212 h it ], \u2212 \u2192 h c it = \u2212 \u2212\u2212 \u2192 GRU (x c it ), \u2190 \u2212 h c it = \u2190 \u2212\u2212 \u2212 GRU (x c it ), h c it = [ \u2212 \u2192 h c it , \u2190 \u2212 h c it ].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating High-Level Concepts",
"sec_num": "3.2"
},
{
"text": "where t \u2208 {1, \u2022 \u2022 \u2022 , T }, and T i is the number of words in the i-th sentence. Note that we obtain a representation that summarizes the information of the whole sentence around the t-th word w it by concatenating the forward hidden state \u2212 \u2192 h it and the backward hidden state \u2190 \u2212 h it .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating High-Level Concepts",
"sec_num": "3.2"
},
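As an illustration of the embedding and encoding steps described above, here is a minimal PyTorch sketch of the word and concept encoders. The class name, the no-concept index convention, and all dimensions are our own assumptions for illustration; the paper does not provide reference code here.

```python
# Sketch only: embeddings W_e / W_ec plus bi-directional GRU word and concept encoders.
import torch
import torch.nn as nn

class WordConceptEncoder(nn.Module):
    def __init__(self, vocab_size, concept_vocab_size, emb_dim, gru_dim, no_concept_idx=0):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)             # W_e
        self.concept_emb = nn.Embedding(concept_vocab_size, emb_dim)  # W_ec
        # h_it and h^c_it concatenate forward and backward GRU states.
        self.word_gru = nn.GRU(emb_dim, gru_dim, bidirectional=True, batch_first=True)
        self.concept_gru = nn.GRU(emb_dim, gru_dim, bidirectional=True, batch_first=True)
        self.no_concept_idx = no_concept_idx  # assumed id of the "no-concept" vector

    def forward(self, word_ids, concept_ids):
        # word_ids, concept_ids: (batch, T); unmapped words carry no_concept_idx
        x = self.word_emb(word_ids)          # x_it
        x_c = self.concept_emb(concept_ids)  # x^c_it
        h, _ = self.word_gru(x)              # h_it, shape (batch, T, 2 * gru_dim)
        h_c, _ = self.concept_gru(x_c)       # h^c_it
        has_concept = (concept_ids != self.no_concept_idx).float()
        return h, h_c, has_concept
```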
{
"text": "Word-Concept Attention: In this stage, the output from the word encoder h it and the corresponding concept output h c it are combined by going through a word-concept level attention layer. This layer consists of two attention levels. One is an attention vector \u03b1 it that tracks the importance of a combined word-concept, which we call \"combined\" attention. The other attention vector we call \"balancing\" attention p it is for flexibly incorporating concept information into the model. The balancing attention is introduced to give attention complementarily to both word and concept because the importance of a word or concept can differ at times. For example, when \"football\" is attended, we don't know if \"football\" itself is important for the prediction, or \"football\", \"tennis\", and all others together are important. Additionally, this balancing attention helps the model to be more robust to noisy concepts that may be caused by automatic annotation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating High-Level Concepts",
"sec_num": "3.2"
},
{
"text": "In detail, each position in a sentence includes a word and its corresponding concept. For each position, combined attention \u03b1 is assigned, which represents attention to the position (both word and concept). Within each position, balancing attention p is assigned to a concept and its complement 1 \u2212 p is assigned to the corresponding word. As seen in Figure 1 , \u03b1 it represents the contribution of the position t (both the t-th word and its concept) to the meaning of the sentence i in the sentence, while 1 \u2212 p it represents a weight on the word and p it represents a weight on the word's concept. Hence, \u03b1 it (1 \u2212 p it ) and \u03b1 it p it represent the contribution of the t-th word and concept to the sentence i, respectively. This attention mechanism using combined and balancing attentions enables us to give separate but complementary attentions to the word and concept. In addition, we set p it as 0 when a word does not have a corresponding concept because in this case the model should attend only the word. The new attention mechanism is as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 351,
"end": 359,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Incorporating High-Level Concepts",
"sec_num": "3.2"
},
{
"text": "u it = tanh(W \u03b1 [h it , h c it ] + b \u03b1 ) p it = sigmoid(w p [h it , h c it ] + b p ) \u03b1 it = exp u T it u \u03b1 t exp u T it u \u03b1 s i = t \u03b1 it ((1 \u2212 p it )h it + p it h c it )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating High-Level Concepts",
"sec_num": "3.2"
},
{
"text": "where W \u03b1 , b \u03b1 , w p , b p and u \u03b1 are the model parameters. s i is a representation for the i-th sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating High-Level Concepts",
"sec_num": "3.2"
},
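The following is a minimal PyTorch sketch of the word-concept attention defined by the equations above (combined attention alpha_it, balancing attention p_it, and sentence vector s_i). The class and argument names (KWAttention, has_concept) and the tensor layout are assumptions made for illustration, not the authors' released code.

```python
# Sketch only: combined attention alpha, balancing attention p, and the mixed sentence vector s.
import torch
import torch.nn as nn
import torch.nn.functional as F

class KWAttention(nn.Module):
    def __init__(self, hidden_dim, attn_dim):
        super().__init__()
        self.W_alpha = nn.Linear(2 * hidden_dim, attn_dim)  # W_alpha, b_alpha over [h_it, h^c_it]
        self.u_alpha = nn.Parameter(torch.randn(attn_dim))  # context vector u_alpha
        self.w_p = nn.Linear(2 * hidden_dim, 1)              # w_p, b_p for balancing attention

    def forward(self, h, h_c, has_concept):
        # h, h_c: (batch, T, hidden_dim); has_concept: (batch, T), 1 where a concept exists
        pair = torch.cat([h, h_c], dim=-1)                   # [h_it, h^c_it]
        u = torch.tanh(self.W_alpha(pair))                   # u_it
        alpha = F.softmax(u @ self.u_alpha, dim=1)           # combined attention alpha_it
        p = torch.sigmoid(self.w_p(pair)).squeeze(-1)        # balancing attention p_it
        p = p * has_concept                                  # p_it = 0 when no concept exists
        mixed = (1 - p).unsqueeze(-1) * h + p.unsqueeze(-1) * h_c
        s = (alpha.unsqueeze(-1) * mixed).sum(dim=1)         # sentence vector s_i
        return s, alpha, p
```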
{
"text": "s i is used as an input to the next layer, the sentence encoder in case of 2L (HAN). Then, the sentence representations h i go through the sentence level attention layer, and build a document vector v, as shown in Figure 1 . In case of a 1L model or a BERT model, all the words in the document are treated as one single sentence. Then, there is a single representation s 1 , which is equivalent to the document vector v in the 2L case.",
"cite_spans": [],
"ref_spans": [
{
"start": 214,
"end": 222,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Incorporating High-Level Concepts",
"sec_num": "3.2"
},
{
"text": "Finally, based on this vector v, classification probability for each class is computed in the final layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating High-Level Concepts",
"sec_num": "3.2"
},
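To make the 1L pipeline above concrete, here is a hedged sketch that assembles the encoder and attention sketches from earlier into a classifier whose attended vector (s_1, i.e., the document vector v) feeds the final softmax layer. Hyperparameter defaults and the class name are placeholders, not the paper's tuned settings.

```python
# Sketch only: 1L KW-ATTN classifier reusing WordConceptEncoder and KWAttention defined above.
import torch.nn as nn

class KWAttnClassifier(nn.Module):
    def __init__(self, vocab_size, concept_vocab_size, num_classes,
                 emb_dim=100, gru_dim=50, attn_dim=100):
        super().__init__()
        self.encoder = WordConceptEncoder(vocab_size, concept_vocab_size, emb_dim, gru_dim)
        self.attention = KWAttention(hidden_dim=2 * gru_dim, attn_dim=attn_dim)
        self.final = nn.Linear(2 * gru_dim, num_classes)  # final (softmax) layer

    def forward(self, word_ids, concept_ids):
        h, h_c, has_concept = self.encoder(word_ids, concept_ids)
        s, alpha, p = self.attention(h, h_c, has_concept)  # s doubles as the document vector v
        return self.final(s), alpha, p                     # logits; apply log-softmax + NLL loss
```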
{
"text": "KW-ATTN is evaluated on two different datasets for patient need detection (need dataset) (Jang et al., 2019) and topic classification (Yahoo answers) (Zhang et al., 2015) . We use different tasks to more broadly demonstrate the benefits of our approach.",
"cite_spans": [
{
"start": 89,
"end": 108,
"text": "(Jang et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 150,
"end": 170,
"text": "(Zhang et al., 2015)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Patient need detection: This dataset is for detecting patient need in posts from an online cancer discussion forum. We use the health information need data for binary classification (450 positive samples out of 853). Although this dataset is quite small, we choose to use it because RNN approaches showed effectiveness (Jang et al., 2019) and it is a dataset we can compare the effect of general knowledge graph and domain-specific medical ontology. We build two different concept annotations with BabelNet and UMLS.",
"cite_spans": [
{
"start": 319,
"end": 338,
"text": "(Jang et al., 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "Yahoo answers: This dataset is for topic classification. It incluldes 10 different topics such as Society & Culture and Sports. To generate a dataset that is still small but one order of magnitude bigger than the need dataset, we randomly select 10,000 instances of the dataset enforcing a balanced dataset (1,000 instances per topic), and annotate them with BabelNet concepts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "The data statistics of our concept annotated datasets are summarized in Table 3 . The ratios of words that match concepts are 6.6%(the need dataset with BabelNet), 36.3%(the need dataset with UMLS), and 8.9%(Yahoo answers). In all our experiments, we perform 10-fold cross-validation ten times. For each run, we use 80% of data for training, 10% for development, and 10% for test.",
"cite_spans": [],
"ref_spans": [
{
"start": 72,
"end": 79,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
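For concreteness, the following small sketch illustrates the evaluation protocol described above (10-fold cross-validation repeated ten times, with 80% of the data for training, 10% for development, and 10% for test per run). The exact fold construction is our assumption; the paper does not specify it.

```python
# Sketch only: repeated 10-fold splits yielding train/dev/test index arrays.
import numpy as np

def repeated_splits(n_samples, n_folds=10, n_repeats=10, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(n_repeats):
        order = rng.permutation(n_samples)
        folds = np.array_split(order, n_folds)
        for i in range(n_folds):
            test = folds[i]                  # 10% test
            dev = folds[(i + 1) % n_folds]   # 10% development
            train = np.concatenate(          # remaining 80% for training
                [folds[j] for j in range(n_folds) if j not in (i, (i + 1) % n_folds)])
            yield train, dev, test
```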
{
"text": "We compare our KW-ATTN 1L and 2L with a widely used attention mechanism leveraging only words (Yang et al., 2016; Ying et al., 2018) . We call it ATTN. In addition, we use other proven approaches that leverage concept information: Concept-replace uses input documents where raw words are replaced with the corresponding Ba-belNet/UMLS high-level concepts when the mappings are available, as in (Stanovsky et al., 2017; Magumba et al., 2018) . Concept-concat uses concatenation to combine word and concept embeddings, as in Zhou et al., 2018) . Attn-concat uses concatenation to combine a concept embedding and a hidden representation of word and use ATTN. Attn-gating uses a gate mechanism to select salient features of a hidden word representation, conditioned on the concept information. Both Attn-concat and Attn-gating are stateof-the-art presented by Margatina et al. (2019) . All these methods are tested in 1L and 2L settings.",
"cite_spans": [
{
"start": 94,
"end": 113,
"text": "(Yang et al., 2016;",
"ref_id": "BIBREF38"
},
{
"start": 114,
"end": 132,
"text": "Ying et al., 2018)",
"ref_id": "BIBREF39"
},
{
"start": 394,
"end": 418,
"text": "(Stanovsky et al., 2017;",
"ref_id": "BIBREF32"
},
{
"start": 419,
"end": 440,
"text": "Magumba et al., 2018)",
"ref_id": "BIBREF17"
},
{
"start": 523,
"end": 541,
"text": "Zhou et al., 2018)",
"ref_id": "BIBREF43"
},
{
"start": 856,
"end": 879,
"text": "Margatina et al. (2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Settings",
"sec_num": "4.2"
},
{
"text": "The parameters for RNN models are tuned on development data in the following ranges: word embedding dimension: 25, 50, 100, 200, GRU size: 10, 25, 50, learning rate: 0.1, 0.05, 0.01, 0.005, 0.001, 0.0005, and 0.0001. The word embeddings are initialized randomly, and concept embeddings are initialized using pretrained concept embeddings trained on English web data and Ba-belNet semantic network, SW2V (Mancini et al., 2016) . 1 We randomly initialize word embeddings rather than using pretrained embeddings because our model often uses phrases recognized by knowledge resources, and they are usually not part of pretrained embeddings. We optimize parameters using Adam (Kingma and Ba, 2014) with epsilon 1e-08, decay 0.0, a mini-batch size of 32, and the loss function of negative log-likelihood loss. We use early-stopping. In addition, we also conduct experiments with pre-trained BERT Word Encoder (KW-BERT) to see if injecting concept also helps the model trained on large-scale corpora. We use the 'bert-baseuncased' model, and the dimension of Concept bi-GRU is 384, making the concept representation the same dimension of BERT word representations. We show both the results from frozen models and fine-tuned models.The frozen models do not update parameters of pretrained models, i.e., they use pre-trained contextualized embeddings without fine-tuning. In contrast, fine-tuned BERT or KW-BERT are adapted to the target task. The learning rates for learning frozen models and fined-tuned models are 2e-3 and 1e-6, respectively.",
"cite_spans": [
{
"start": 403,
"end": 425,
"text": "(Mancini et al., 2016)",
"ref_id": "BIBREF18"
},
{
"start": 428,
"end": 429,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Settings",
"sec_num": "4.2"
},
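Below is a hedged sketch of how the KW-BERT variant described above could be wired: BERT ('bert-base-uncased') replaces the RNN word encoder, a bi-directional concept GRU of size 384 produces 768-dimensional concept representations matching BERT's hidden size, and the KWAttention sketch from Section 3.2 combines them. Module names, the attention dimension, and the assumption that concept ids are pre-aligned to BERT's wordpiece positions are ours, not the authors'.

```python
# Sketch only: KW-BERT = frozen or fine-tunable BERT word encoder + concept bi-GRU + KW-ATTN.
import torch.nn as nn
from transformers import BertModel

class KWBert(nn.Module):
    def __init__(self, concept_vocab_size, num_classes, concept_emb_dim=100, freeze_bert=True):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        if freeze_bert:                      # "frozen" setting: no BERT parameter updates
            for param in self.bert.parameters():
                param.requires_grad = False
        self.concept_emb = nn.Embedding(concept_vocab_size, concept_emb_dim)
        # bi-directional GRU of size 384 -> 768-dim concept states, same as BERT's hidden size
        self.concept_gru = nn.GRU(concept_emb_dim, 384, bidirectional=True, batch_first=True)
        self.attention = KWAttention(hidden_dim=768, attn_dim=256)
        self.final = nn.Linear(768, num_classes)

    def forward(self, input_ids, attention_mask, concept_ids, has_concept):
        # concept_ids / has_concept are assumed aligned to BERT's wordpiece positions
        h = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        h_c, _ = self.concept_gru(self.concept_emb(concept_ids))
        s, alpha, p = self.attention(h, h_c, has_concept)
        return self.final(s), alpha, p
```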
{
"text": "The results are shown in Table 4 . First, we observe that 2L models do not perform better than 1L models. This could be because 2L models are too large for the data sizes, especially for the need data. It could indicate that the document itself is not too long to put in one RNN, and the sentence boundary might not be necessary for the classification. Second, using concept information alone does not perform well in general, which indicates that concept information alone is not sufficient. Using word and concept information together (concept-concat) also do not always result in a gain of performance. Third, Attn-models generally perform better than simpler Concept-models. However, KW-ATTN significantly improves over all other models for both tasks, indicating the effeteness of our mechanism.",
"cite_spans": [],
"ref_spans": [
{
"start": 25,
"end": 32,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "4.3"
},
{
"text": "In addition, Table 4 shows that for the need task, while both types of concepts help the prediction, UMLS concepts help slightly more. This suggests that choosing the right knowledge resource, especially for domain specific tasks, is critical for prediction performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 13,
"end": 20,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "4.3"
},
{
"text": "To see the effect of data size on the model, we compare KW-ATTN and ATTN across different data sizes of Yahoo reviews (Table 5) . KW-ATTN models significantly outperform ATTN models consistently. However, as the data size becomes larger, performance gains, while still significant, diminish, showing that, in this domain, our method is more effective when the data is smaller. Table 6 shows the comparison between BERT and KW-BERT. We can see that additional concept information substantially improves the performances on both datasets in case of frozen models whereas it only improves the performance on the need dataset when fine-tuned. The results from the frozen models indicate that the encoded concepts provide complementary information to BERT. However, when fine-tuned, KW-BERT outperforms BERT only on the Need dataset. This could be because a BERT model itself is learnt on Wikipedia, which may lack knowledge on medicine. Although BERT learns task-specific knowledge during finetuning, but the data is small and additional highlevel concept information still helps. This may suggest that KW-BERT could be more beneficial for small data problems in domains that require more expert knowledge than Wikipedia can provide.",
"cite_spans": [],
"ref_spans": [
{
"start": 118,
"end": 127,
"text": "(Table 5)",
"ref_id": "TABREF7"
},
{
"start": 377,
"end": 384,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "4.3"
},
{
"text": "We can also notice that the frozen models poorly perform on the Need dataset compared with RNN models (Table 4) whereas they drastically outperform on the Yahoo dataset. This could be because the documents in the Need dataset are conversational coming from an online forum, which are markedly different from the Wikipedia dataset on which BERT is trained. We can see that when finetuned, both BERT and KW-BERT beat RNN models, which suggests that finetuning allows learning task/domain specific information.",
"cite_spans": [],
"ref_spans": [
{
"start": 102,
"end": 111,
"text": "(Table 4)",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "4.3"
},
{
"text": "Attention Analysis: To better understand why UMLS concepts help more on the need dataset, we draw the distributions of concept attentions in models with both annotations in Figure 2 . Interestingly, for the average attention of each concept, the attention for the model using BabelNet annotations is greater than the model using UMLS annotations. However, the max attention of each concept is greater for UMLS annotations than for BabelNet annotations, which indicates that UMLS concepts are more actively used. Additionally, attentions from the model using UMLS concepts show lower variance. This result indicates that the model using UMLS concepts assigns a similar attention to each concept whereas the model using BabelNet concepts sometimes assigns small or large attentions to concept. In other words, the model using UMLS concepts consistently select a concept to attend whereas the model using BabelNet concepts is less consistent. Intuitively, this makes sense as the UMLS concepts are domain specific to the task of health information need detection.",
"cite_spans": [],
"ref_spans": [
{
"start": 173,
"end": 181,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "4.3"
},
{
"text": "We use human evaluation to see whether additional high-level concept information given by KW-ATTN can be beneficial for interpretation. We compare top-ranked attended words/concepts by KW-ATTN with top-ranked attended words by ATTN. We use Amazon Mechanical Turk (MTurk). Since we use crowdsourcing, we conduct evaluation only on the Yahoo reviews dataset for topic classification, which covers general domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation on Interpretability",
"sec_num": "5"
},
{
"text": "For each Human Intelligence Task (HIT) in MTurk, we provide a prediction and its explanation for a text, generated from either KW-ATTN 1L or ATTN 1L. 2 We use 1L because one attention layer is simpler to interpret. Then, we ask whether MTurkers would assign the given topic to the text based on the given explanation. Only one explanation is randomly given, and which model the explanations is from is not shown to MTurkers. Additionally, we ask them to rate their confidence in their answer. ",
"cite_spans": [
{
"start": 150,
"end": 151,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Design",
"sec_num": "5.1"
},
{
"text": "No concept \"java, yields, best, language, results, built\" KW same number \"java as a(n) object-oriented_programming_language, ide as a(n) application, php as a(n) free_software, swing, best, looking\" KW same length \"java as a(n) object-oriented_programming_language, ide as a(n) application, php as a(n) free_software\" KW replacement \"object-oriented_programming_language, application, free_software, swing, best, looking\" We assume that attention can be used for prediction explanations based on (Wiegreffe and Pinter, 2019; Serrano and Smith, 2019) . We choose to ask about the validity of a given prediction unlike prior work that asked to guess a model's prediction based on an explanation (Nguyen, 2018; Chen et al., 2020) . Although we acknowledge that the model's prediction may bias the annotators, we choose this approach since humans have high-level concepts as background knowledge. Humans do not require external additional concept information for guessing a correct topic label among multiple topic options especially when the given topic options are distinct from each other. For example, although the high-level concept \"athletics\" is not given for the word \"baseball\" in an explanation, humans would not have a problem with classifying it into the sports category when given topic options are sports and music. However, high-level concepts may help users to have more confidence when interpreting the explanation for a given topic. Therefore, we evaluate users' trusts about the system indirectly by requesting them to assess a given topic based on an explanation and rate their confidence. The top 6 ranked features (words and concepts) with the highest attention weights are selected as an explanation. The high-level concept of a word is included in the explanation as the format of \"[word] as a(n) [concept]\" only when the balancing weight, p, for the concept is non-zero (See Section 3.2).",
"cite_spans": [
{
"start": 496,
"end": 524,
"text": "(Wiegreffe and Pinter, 2019;",
"ref_id": "BIBREF35"
},
{
"start": 525,
"end": 549,
"text": "Serrano and Smith, 2019)",
"ref_id": "BIBREF29"
},
{
"start": 693,
"end": 707,
"text": "(Nguyen, 2018;",
"ref_id": "BIBREF23"
},
{
"start": 708,
"end": 726,
"text": "Chen et al., 2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation Type Example",
"sec_num": null
},
{
"text": "We remove stopwords and punctuations from explanations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation Type Example",
"sec_num": null
},
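As a small illustration of how such explanations could be assembled from KW-ATTN outputs, the sketch below ranks positions by combined attention, skips stopwords and punctuation, and attaches a concept in the "[word] as a(n) [concept]" format only when its balancing weight p is non-zero. The function and argument names are our own, not the authors' code.

```python
# Sketch only: build a top-k explanation string from words, concepts, and attention weights.
import string

def build_explanation(words, concepts, alpha, p, k=6, stopwords=frozenset()):
    # Rank positions by combined attention alpha (highest first).
    ranked = sorted(range(len(words)), key=lambda t: alpha[t], reverse=True)
    features = []
    for t in ranked:
        w = words[t]
        if w.lower() in stopwords or all(ch in string.punctuation for ch in w):
            continue  # stopwords and punctuation are removed from explanations
        # Include the concept only when its balancing weight p_it is non-zero.
        features.append(f"{w} as a(n) {concepts[t]}" if p[t] > 0 and concepts[t] else w)
        if len(features) == k:
            break
    return ", ".join(features)
```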
{
"text": "Four different types of explanations are given to MTurkers and compared in our analysis as shown in Table 7 . A no-concept explanation consists of 6 words. A KW-same-number explanation also contains 6 words and their corresponding concepts if they exist. A KW-same-length is composed of 3 words and their corresponding concepts if they exist. A KW-replacement consists of 6 words or concept. When a word has a lower attention value than its corresponding concept according to the p attention value, it is replaced by its concept in the explanation. Note that KWexplanations are all from the same model using KW-ATTN, and no-concept explanations are from a model using ATTN.",
"cite_spans": [],
"ref_spans": [
{
"start": 100,
"end": 107,
"text": "Table 7",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Explanation Type Example",
"sec_num": null
},
{
"text": "We randomly pick 200 samples that have correct predicted labels made by both systems. To make the 200 samples, we draw 100 samples with the prediction probability higher than .90 for their predicted labels, and 100 samples with the prediction probability between .80 and .90. To balance topics, we pick equal number of samples for each topic. We do not perform the same MTurk task for incorrectly predicted samples because when a system makes an incorrect prediction, assessing interpretability is not straightfoward. There can be multiple different reasons about the wrong prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation Type Example",
"sec_num": null
},
{
"text": "For MTurk, each HIT asks questions about an explanation generated by a system for one sample, as shown in Figure 3 . For each HIT, 5 MTurkers participate. We hire North American Master MTurkers with HIT acceptance rates above 98% in order to ensure high quality of the evaluation. We pay $0.03-$0.05 for each HIT.",
"cite_spans": [],
"ref_spans": [
{
"start": 106,
"end": 114,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Explanation Type Example",
"sec_num": null
},
{
"text": "As shown in Table 8 , KW-same-number and KWsame-length explanations resulted in a significantly higher confidence in assigning given topics to explanations compared to no-concept explanations. This indicates that the additional high-level concept information from KW-ATTN is beneficial for improving interpretability. We can also observe that KW-replacement explanations improve confidence although the gain is not significant.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Human Evaluation Results",
"sec_num": "5.2"
},
{
"text": "No-concept 4.70 4.15 11.31 KW-same-number 4.82 4.40* 11.64 KW-same-length 4.77 4.31* 11.37 KW-replacement 4.74 4.22 12.34 Table 8 : Human evaluation results on interpretation. Pred: average # of \"yes\" on predicted topics, Conf: average confidence score, Time: average time taken for each HIT, *: indicates statistically significant difference over no-concept via t-test (p < 0.05).",
"cite_spans": [],
"ref_spans": [
{
"start": 122,
"end": 129,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Explanation Type Pred Conf Time",
"sec_num": null
},
{
"text": "It is important to note that KW-same-length and KW-replacement explanations both improve interpretability over no-concept explanations as well as KW-same-number. While KW-same-number explanations provide more information (12 at maximum in total including both words and concepts), KWsame-length and KW-replacement give the same or less amount of information compare to no-concept (6 at maximum in total). This indicates that the high-level concept information really helps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation Type Pred Conf Time",
"sec_num": null
},
{
"text": "We presented a new attention mechanism, KW-ATTN, which extends a NN model by incorporating high-level concepts. Our experiments showed that using high-level concept information improves predictive power by helping the data sparseness problem in small data. Furthermore, in our crowdsourcing experiments, we found significant improvement on the confidence of human evaluators on predictions, suggesting that our new attention mechanism provides benefits in explaining the predictions. High-level concepts provide an additional layer of information above raw words that can assist in understanding predictions. Additionally, our attention mechanism can distinguish between the importance of words vs. concepts, providing further information. We are optimistic that KW-ATTN can be applied widely.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We also tried SW2V Wiki, SensEmbed(Iacobacci et al., 2015) and SENSEMBERT(Scarlini et al., 2020) pretrained embeddings, but SW2V WEB slightly outperformed others (no statistical significance).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The screenshot of the MTurk user interface can be found in the Appendix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": " Figure 3 shows a screenshot of the Amazon Mechanical Turk user interface in our human evaluation. ",
"cite_spans": [],
"ref_spans": [
{
"start": 1,
"end": 9,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Appendices",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Snomed ct",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Benson",
"suffix": ""
}
],
"year": 2010,
"venue": "Principles of Health Interoperability HL7 and SNOMED",
"volume": "",
"issue": "",
"pages": "189--215",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Benson. 2010. Snomed ct. In Principles of Health Interoperability HL7 and SNOMED, pages 189-215. Springer.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Text mining for the vaccine adverse event reporting system: medical text classification using informative feature selection",
"authors": [
{
"first": "Taxiarchis",
"middle": [],
"last": "Botsis",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Emily",
"middle": [
"Jane"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Marianthi",
"middle": [],
"last": "Woo",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Markatou",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ball",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of the American Medical Informatics Association",
"volume": "18",
"issue": "5",
"pages": "631--638",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taxiarchis Botsis, Michael D Nguyen, Emily Jane Woo, Marianthi Markatou, and Robert Ball. 2011. Text mining for the vaccine adverse event reporting sys- tem: medical text classification using informative feature selection. Journal of the American Medical Informatics Association, 18(5):631-638.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Generating hierarchical explanations on text classification via feature interaction detection",
"authors": [
{
"first": "Hanjie",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Guangtao",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.02015"
]
},
"num": null,
"urls": [],
"raw_text": "Hanjie Chen, Guangtao Zheng, and Yangfeng Ji. 2020. Generating hierarchical explanations on text clas- sification via feature interaction detection. arXiv preprint arXiv:2004.02015.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "On the properties of neural machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.1259"
]
},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder ap- proaches. arXiv preprint arXiv:1409.1259.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Towards a rigorous science of interpretable machine learning",
"authors": [
{
"first": "Finale",
"middle": [],
"last": "Doshi",
"suffix": ""
},
{
"first": "-",
"middle": [],
"last": "Velez",
"suffix": ""
},
{
"first": "Been",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1702.08608"
]
},
"num": null,
"urls": [],
"raw_text": "Finale Doshi-Velez and Been Kim. 2017. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Christiane Fellbaum. 2012. Wordnet. the encyclopedia of applied linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christiane Fellbaum. 2012. Wordnet. the encyclopedia of applied linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Interpreting recurrent and attention-based neural models: a case study on natural language inference",
"authors": [
{
"first": "Reza",
"middle": [],
"last": "Ghaeini",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Xiaoli",
"suffix": ""
},
{
"first": "Prasad",
"middle": [],
"last": "Fern",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tadepalli",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1808.03894"
]
},
"num": null,
"urls": [],
"raw_text": "Reza Ghaeini, Xiaoli Z Fern, and Prasad Tadepalli. 2018. Interpreting recurrent and attention-based neural models: a case study on natural language in- ference. arXiv preprint arXiv:1808.03894.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Sensembed: Learning sense embeddings for word and relational similarity",
"authors": [
{
"first": "Ignacio",
"middle": [],
"last": "Iacobacci",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Taher Pilehvar",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "95--105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ignacio Iacobacci, Mohammad Taher Pilehvar, and Roberto Navigli. 2015. Sensembed: Learning sense embeddings for word and relational similarity. In Proceedings of the 53rd Annual Meeting of the Asso- ciation for Computational Linguistics and the 7th In- ternational Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 95-105.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Neural prediction of patient needs in an ovarian cancer online discussion forum",
"authors": [
{
"first": "Hyeju",
"middle": [],
"last": "Jang",
"suffix": ""
},
{
"first": "Young",
"middle": [
"Ji"
],
"last": "Lee",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Carenini",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Grace",
"middle": [],
"last": "Campbell",
"suffix": ""
},
{
"first": "Kendall",
"middle": [],
"last": "Ho",
"suffix": ""
}
],
"year": 2019,
"venue": "Canadian Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "492--497",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hyeju Jang, Young Ji Lee, Giuseppe Carenini, Ray- mond Ng, Grace Campbell, and Kendall Ho. 2019. Neural prediction of patient needs in an ovarian can- cer online discussion forum. In Canadian Con- ference on Artificial Intelligence, pages 492-497. Springer.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Knowledge-enriched two-layered attention network for sentiment analysis",
"authors": [
{
"first": "Abhishek",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "253--258",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhishek Kumar, Daisuke Kawahara, and Sadao Kuro- hashi. 2018. Knowledge-enriched two-layered atten- tion network for sentiment analysis. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Pa- pers), pages 253-258.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Md Mostofa Ali Patwary, Ankit Agrawal, and Alok Choudhary",
"authors": [
{
"first": "Kathy",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Palsetia",
"suffix": ""
},
{
"first": "Ramanathan",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2011,
"venue": "2011 IEEE 11th International Conference on Data Mining Workshops",
"volume": "",
"issue": "",
"pages": "251--258",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kathy Lee, Diana Palsetia, Ramanathan Narayanan, Md Mostofa Ali Patwary, Ankit Agrawal, and Alok Choudhary. 2011. Twitter trending topic classifica- tion. In 2011 IEEE 11th International Conference on Data Mining Workshops, pages 251-258. IEEE.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A structured self-attentive sentence embedding",
"authors": [
{
"first": "Zhouhan",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Minwei",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Cicero",
"middle": [
"Nogueira"
],
"last": "dos Santos",
"suffix": ""
},
{
"first": "Mo",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1703.03130"
]
},
"num": null,
"urls": [],
"raw_text": "Zhouhan Lin, Minwei Feng, Cicero Nogueira dos San- tos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. arXiv preprint arXiv:1703.03130.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The unified medical language system (umls) of the national library of medicine",
"authors": [
{
"first": "C",
"middle": [],
"last": "Lindberg",
"suffix": ""
}
],
"year": 1990,
"venue": "Journal (American Medical Record Association)",
"volume": "61",
"issue": "5",
"pages": "40--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C Lindberg. 1990. The unified medical language system (umls) of the national library of medicine. Journal (American Medical Record Association), 61(5):40-42.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Haotang Deng, and Ping Wang. 2020. K-bert: Enabling language representation with knowledge graph",
"authors": [
{
"first": "Weijie",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Zhiruo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Ju",
"suffix": ""
},
{
"first": "Haotang",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Ping",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2020,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "2901--2908",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, and Ping Wang. 2020. K-bert: Enabling language representation with knowledge graph. In AAAI, pages 2901-2908.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A unified approach to interpreting model predictions",
"authors": [
{
"first": "Scott",
"middle": [
"M"
],
"last": "Lundberg",
"suffix": ""
},
{
"first": "Su-In",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "4765--4774",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott M Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In Ad- vances in Neural Information Processing Systems, pages 4765-4774.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Ontology boosted deep learning for disease name extraction from twitter messages",
"authors": [
{
"first": "Mark",
"middle": [
"Abraham"
],
"last": "Magumba",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Nabende",
"suffix": ""
},
{
"first": "Ernest",
"middle": [],
"last": "Mwebaze",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Big Data",
"volume": "5",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Abraham Magumba, Peter Nabende, and Ernest Mwebaze. 2018. Ontology boosted deep learning for disease name extraction from twitter messages. Journal of Big Data, 5(1):31.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Embedding words and senses together via joint knowledge-enhanced training",
"authors": [
{
"first": "Massimiliano",
"middle": [],
"last": "Mancini",
"suffix": ""
},
{
"first": "Jose",
"middle": [],
"last": "Camacho-Collados",
"suffix": ""
},
{
"first": "Ignacio",
"middle": [],
"last": "Iacobacci",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1612.02703"
]
},
"num": null,
"urls": [],
"raw_text": "Massimiliano Mancini, Jose Camacho-Collados, Igna- cio Iacobacci, and Roberto Navigli. 2016. Em- bedding words and senses together via joint knowledge-enhanced training. arXiv preprint arXiv:1612.02703.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Attention-based conditioning methods for external knowledge integration",
"authors": [
{
"first": "Katerina",
"middle": [],
"last": "Margatina",
"suffix": ""
},
{
"first": "Christos",
"middle": [],
"last": "Baziotis",
"suffix": ""
},
{
"first": "Alexandros",
"middle": [],
"last": "Potamianos",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3944--3951",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katerina Margatina, Christos Baziotis, and Alexandros Potamianos. 2019. Attention-based conditioning methods for external knowledge integration. In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 3944- 3951.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "An upper-level ontology for the biomedical domain",
"authors": [
{
"first": "Alexa T",
"middle": [],
"last": "Mccray",
"suffix": ""
}
],
"year": 2003,
"venue": "International Journal of Genomics",
"volume": "4",
"issue": "1",
"pages": "80--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexa T McCray. 2003. An upper-level ontology for the biomedical domain. International Journal of Ge- nomics, 4(1):80-84.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Entity Linking meets Word Sense Disambiguation: a Unified Approach. Transactions of the Association for Computational Linguistics (TACL)",
"authors": [
{
"first": "Andrea",
"middle": [],
"last": "Moro",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Raganato",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "2",
"issue": "",
"pages": "231--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrea Moro, Alessandro Raganato, and Roberto Nav- igli. 2014. Entity Linking meets Word Sense Disam- biguation: a Unified Approach. Transactions of the Association for Computational Linguistics (TACL), 2:231-244.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Ba-belNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
},
{
"first": "Simone",
"middle": [
"Paolo"
],
"last": "Ponzetto",
"suffix": ""
}
],
"year": 2012,
"venue": "Artificial Intelligence",
"volume": "193",
"issue": "",
"pages": "217--250",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roberto Navigli and Simone Paolo Ponzetto. 2012. Ba- belNet: The automatic construction, evaluation and application of a wide-coverage multilingual seman- tic network. Artificial Intelligence, 193:217-250.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Comparing automatic and human evaluation of local explanations for text classification",
"authors": [
{
"first": "Dong",
"middle": [],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1069--1078",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dong Nguyen. 2018. Comparing automatic and human evaluation of local explanations for text classifica- tion. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, Volume 1 (Long Papers), pages 1069-1078.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "International classification of diseases (icd) information sheet",
"authors": [],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "World Health Organization et al. 2017. International classification of diseases (icd) information sheet. 2017.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Knowledge enhanced contextual word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Logan",
"suffix": "IV"
},
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Vidur",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.04164"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew E Peters, Mark Neumann, Robert L Lo- gan IV, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A Smith. 2019. Knowledge enhanced contextual word representations. arXiv preprint arXiv:1909.04164.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Why should i trust you?: Explaining the predictions of any classifier",
"authors": [
{
"first": "Marco",
"middle": [
"Tulio"
],
"last": "Ribeiro",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "1135--1144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. Why should i trust you?: Explain- ing the predictions of any classifier. In Proceed- ings of the 22nd ACM SIGKDD international con- ference on knowledge discovery and data mining, pages 1135-1144. ACM.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Sensembert: Context-enhanced sense embeddings for multilingual word sense disambiguation",
"authors": [
{
"first": "Bianca",
"middle": [],
"last": "Scarlini",
"suffix": ""
},
{
"first": "Tommaso",
"middle": [],
"last": "Pasini",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2020,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "8758--8765",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bianca Scarlini, Tommaso Pasini, and Roberto Navigli. 2020. Sensembert: Context-enhanced sense embed- dings for multilingual word sense disambiguation. In AAAI, pages 8758-8765.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Bidirectional attention flow for machine comprehension",
"authors": [
{
"first": "Minjoon",
"middle": [],
"last": "Seo",
"suffix": ""
},
{
"first": "Aniruddha",
"middle": [],
"last": "Kembhavi",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.01603"
]
},
"num": null,
"urls": [],
"raw_text": "Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Is attention interpretable? arXiv preprint",
"authors": [
{
"first": "Sofia",
"middle": [],
"last": "Serrano",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.03731"
]
},
"num": null,
"urls": [],
"raw_text": "Sofia Serrano and Noah A Smith. 2019. Is attention interpretable? arXiv preprint arXiv:1906.03731.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Learning important features through propagating activation differences",
"authors": [
{
"first": "Avanti",
"middle": [],
"last": "Shrikumar",
"suffix": ""
},
{
"first": "Peyton",
"middle": [],
"last": "Greenside",
"suffix": ""
},
{
"first": "Anshul",
"middle": [],
"last": "Kundaje",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.02685"
]
},
"num": null,
"urls": [],
"raw_text": "Avanti Shrikumar, Peyton Greenside, and Anshul Kun- daje. 2017. Learning important features through propagating activation differences. arXiv preprint arXiv:1704.02685.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Fake news detection on social media: A data mining perspective",
"authors": [
{
"first": "Kai",
"middle": [],
"last": "Shu",
"suffix": ""
},
{
"first": "Amy",
"middle": [],
"last": "Sliva",
"suffix": ""
},
{
"first": "Suhang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jiliang",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2017,
"venue": "ACM SIGKDD Explorations Newsletter",
"volume": "19",
"issue": "1",
"pages": "22--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai Shu, Amy Sliva, Suhang Wang, Jiliang Tang, and Huan Liu. 2017. Fake news detection on social me- dia: A data mining perspective. ACM SIGKDD Ex- plorations Newsletter, 19(1):22-36.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Recognizing mentions of adverse drug reaction in social media using knowledge-infused recurrent models",
"authors": [
{
"first": "Gabriel",
"middle": [],
"last": "Stanovsky",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gruhl",
"suffix": ""
},
{
"first": "Pablo",
"middle": [],
"last": "Mendes",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "142--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabriel Stanovsky, Daniel Gruhl, and Pablo Mendes. 2017. Recognizing mentions of adverse drug re- action in social media using knowledge-infused re- current models. In Proceedings of the 15th Confer- ence of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 142-151.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Combining knowledge with deep convolutional neural networks for short text classification",
"authors": [
{
"first": "Jin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhongyuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Dawei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2017,
"venue": "IJCAI",
"volume": "",
"issue": "",
"pages": "2915--2921",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jin Wang, Zhongyuan Wang, Dawei Zhang, and Jun Yan. 2017. Combining knowledge with deep convo- lutional neural networks for short text classification. In IJCAI, pages 2915-2921.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Attention-based lstm for aspectlevel sentiment classification",
"authors": [
{
"first": "Yequan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 conference on empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "606--615",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yequan Wang, Minlie Huang, Xiaoyan Zhu, and Li Zhao. 2016. Attention-based lstm for aspect- level sentiment classification. In Proceedings of the 2016 conference on empirical methods in natural language processing, pages 606-615.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Attention is not not explanation",
"authors": [
{
"first": "Sarah",
"middle": [],
"last": "Wiegreffe",
"suffix": ""
},
{
"first": "Yuval",
"middle": [],
"last": "Pinter",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "11--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 11-20.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Incorporating loosestructured knowledge into conversation modeling via recall-gate lstm",
"authors": [
{
"first": "Zhen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Bingquan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Baoxun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chengjie",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Xiaolong",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 International Joint Conference on Neural Networks (IJCNN)",
"volume": "",
"issue": "",
"pages": "3506--3513",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhen Xu, Bingquan Liu, Baoxun Wang, Chengjie Sun, and Xiaolong Wang. 2017. Incorporating loose- structured knowledge into conversation modeling via recall-gate lstm. In 2017 International Joint Conference on Neural Networks (IJCNN), pages 3506-3513. IEEE.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Leveraging knowledge bases in lstms for improving machine reading",
"authors": [
{
"first": "Bishan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1436--1446",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bishan Yang and Tom Mitchell. 2017. Leveraging knowledge bases in lstms for improving machine reading. In Proceedings of the 55th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1436-1446.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Hierarchical attention networks for document classification",
"authors": [
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Diyi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Smola",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1480--1489",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 1480-1489.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Sequential recommender system based on hierarchical attention network",
"authors": [
{
"first": "Haochao",
"middle": [],
"last": "Ying",
"suffix": ""
},
{
"first": "Fuzhen",
"middle": [],
"last": "Zhuang",
"suffix": ""
},
{
"first": "Fuzheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yanchi",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Guandong",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xing",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2018,
"venue": "IJ-CAI International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haochao Ying, Fuzhen Zhuang, Fuzheng Zhang, Yanchi Liu, Guandong Xu, Xing Xie, Hui Xiong, and Jian Wu. 2018. Sequential recommender sys- tem based on hierarchical attention network. In IJ- CAI International Joint Conference on Artificial In- telligence.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Character-level convolutional networks for text classification",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Junbo",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "649--657",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text clas- sification. In Advances in neural information pro- cessing systems, pages 649-657.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Ernie: Enhanced language representation with informative entities",
"authors": [
{
"first": "Zhengyan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1905.07129"
]
},
"num": null,
"urls": [],
"raw_text": "Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. Ernie: En- hanced language representation with informative en- tities. arXiv preprint arXiv:1905.07129.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Knowledge-enriched transformer for emotion detection in textual conversations",
"authors": [
{
"first": "Peixiang",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chunyan",
"middle": [],
"last": "Miao",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.10681"
]
},
"num": null,
"urls": [],
"raw_text": "Peixiang Zhong, Di Wang, and Chunyan Miao. 2019. Knowledge-enriched transformer for emotion de- tection in textual conversations. arXiv preprint arXiv:1909.10681.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Commonsense knowledge aware conversation generation with graph attention",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Haizhou",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Jingfang",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2018,
"venue": "IJCAI",
"volume": "",
"issue": "",
"pages": "4623--4629",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Zhou, Tom Young, Minlie Huang, Haizhou Zhao, Jingfang Xu, and Xiaoyan Zhu. 2018. Com- monsense knowledge aware conversation generation with graph attention. In IJCAI, pages 4623-4629.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "Overview of KW-ATTN (in red) when plugged in HAN (2L). KW-ATTN 1L does not have the sentence embeddings, sentence encoder, and sentence level attention layers. KW-BERT replaces the word encoder with a pretrained BERT model.",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "Distributions of concept attentions for the two annotations for patient need detection: UMLS and Ba-belNet (BN). For each concept, average (left), maximum (middle), variance (right) of attention values from all occurrences are used.",
"num": null
},
"TABREF0": {
"num": null,
"content": "<table/>",
"text": "Babelfy annotations for BabelNet concepts",
"html": null,
"type_str": "table"
},
"TABREF1": {
"num": null,
"content": "<table/>",
"text": "MetaMap annotations for UMLS concepts",
"html": null,
"type_str": "table"
},
"TABREF4": {
"num": null,
"content": "<table><tr><td colspan=\"7\">with UMLS concepts, #D: # of documents, #S: average # of sentences per document, #W: # of words per sen-</td></tr><tr><td colspan=\"7\">tence, #C(D): # of annotated concepts per document, #C(S): # of annotated concepts per sentence, Voca(W): word</td></tr><tr><td>vocabulary size, Voca(C): concept vocabulary size.</td><td/><td/><td/><td/><td/><td/></tr><tr><td/><td colspan=\"2\">Yahoo answers</td><td colspan=\"2\">Need BN</td><td colspan=\"2\">Need UMLS</td></tr><tr><td>Model</td><td>1L</td><td>2L</td><td>1L</td><td>2L</td><td>1L</td><td>2L</td></tr><tr><td>ATTN</td><td>.557</td><td>.574</td><td>.706</td><td>.684</td><td>.706</td><td>.684</td></tr><tr><td>Concept-replace</td><td>.560</td><td>.563</td><td>.698</td><td>.671</td><td>.699</td><td>.676</td></tr><tr><td>Concept-concat</td><td>.569</td><td>.571</td><td>.664</td><td>.602</td><td>.702</td><td>.661</td></tr><tr><td colspan=\"2\">Attn-concat (Margatina et al., 2019) .585</td><td>.577</td><td>.669</td><td>.669</td><td>.709</td><td>.681</td></tr><tr><td colspan=\"2\">Attn-gating (Margatina et al., 2019) .593</td><td>.577</td><td>.712</td><td>.587</td><td>.679</td><td>.631</td></tr><tr><td>KW-ATTN</td><td colspan=\"6\">.605* .597* .721* .692* .727* .703*</td></tr></table>",
"text": "Data summary statistics. Need-BN: need dataset with BabelNet concepts, Need-UMLS: need dataset",
"html": null,
"type_str": "table"
},
"TABREF5": {
"num": null,
"content": "<table><tr><td>: Comparison of KW-ATTN against baselines for 1-level (1L) and 2-level (2L) networks, in terms of F1</td></tr><tr><td>macro scores.</td></tr></table>",
"text": "",
"html": null,
"type_str": "table"
},
"TABREF7": {
"num": null,
"content": "<table><tr><td>: F1 macro scores by data size in Yahoo an-</td></tr><tr><td>swers. * indicates statistically significant improvement</td></tr><tr><td>over corresponding ATTN model via t-test (p &lt; 0.05).</td></tr></table>",
"text": "",
"html": null,
"type_str": "table"
},
"TABREF9": {
"num": null,
"content": "<table/>",
"text": "Examples of different types of explanations used for human evaluation.",
"html": null,
"type_str": "table"
}
}
}
}