{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:27:09.719616Z"
},
"title": "m-Networks: Adapting the Triplet Networks for Acronym Disambiguation",
"authors": [
{
"first": "Sandaru",
"middle": [],
"last": "Seneviratne",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Australian National University (ANU)",
"location": {
"settlement": "Canberra"
}
},
"email": "[email protected]"
},
{
"first": "Elena",
"middle": [],
"last": "Daskalaki",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Australian National University (ANU)",
"location": {
"settlement": "Canberra"
}
},
"email": "[email protected]"
},
{
"first": "Artem",
"middle": [],
"last": "Lenskiy",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Australian National University (ANU)",
"location": {
"settlement": "Canberra"
}
},
"email": "[email protected]"
},
{
"first": "Hanna",
"middle": [],
"last": "Suominen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Australian National University (ANU)",
"location": {
"settlement": "Canberra"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Acronym disambiguation (AD) is the process of identifying the correct expansion of the acronyms in text. AD is crucial in natural language understanding of scientific and medical documents due to the high prevalence of technical acronyms and the possible expansions. Given that natural language is often ambiguous with more than one meaning for words, identifying the correct expansion for acronyms requires learning of effective representations for words, phrases, acronyms, and abbreviations based on their context. In this paper, we proposed an approach to leverage the triplet networks and triplet loss which learns better representations of text through distance comparisons of embeddings. We tested both the triplet network-based method and the modified triplet network-based method with m networks on the AD dataset from the SDU@AAAI-21 AD task, CASI dataset, and MeDAL dataset. F scores of 87.31%, 70.67%, and 75.75% were achieved by the m network-based approach for SDU, CASI, and MeDAL datasets respectively indicating that triplet network-based methods have comparable performance but with only 12% of the number of parameters in the baseline method. This effective implementation is available at https://github.com/sandaruSen/m_networks under the MIT license.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Acronym disambiguation (AD) is the process of identifying the correct expansion of the acronyms in text. AD is crucial in natural language understanding of scientific and medical documents due to the high prevalence of technical acronyms and the possible expansions. Given that natural language is often ambiguous with more than one meaning for words, identifying the correct expansion for acronyms requires learning of effective representations for words, phrases, acronyms, and abbreviations based on their context. In this paper, we proposed an approach to leverage the triplet networks and triplet loss which learns better representations of text through distance comparisons of embeddings. We tested both the triplet network-based method and the modified triplet network-based method with m networks on the AD dataset from the SDU@AAAI-21 AD task, CASI dataset, and MeDAL dataset. F scores of 87.31%, 70.67%, and 75.75% were achieved by the m network-based approach for SDU, CASI, and MeDAL datasets respectively indicating that triplet network-based methods have comparable performance but with only 12% of the number of parameters in the baseline method. This effective implementation is available at https://github.com/sandaruSen/m_networks under the MIT license.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Natural language is often ambiguous and contains phrases, words, acronyms, and abbreviations which have more than one meaning (Charbonnier and Wartena, 2018) . The complexity of natural language is further augmented based on which context these words are being used (Navigli, 2009) . Scientific and medical communities use domain specific technical terms, which are often shorthanded for ease of use. This has resulted in the prevalence of acronyms in scientific and medical documents (Charbonnier and Wartena, 2018) . To understand these expert texts, it is important to disambiguate the meaning of their acronyms. For example, given a sentence with the acronym RNN, the possible expansion for the acronym can be Recurrent Neural Network, Random Neural Network, Recursive Neural Network, Reverse Nearest Neighbour, etc. Out of these expansions, the one corresponding to the meaning of the sentence should be identified in order to correctly understand the sentence. The task of identifying the correct expansion of acronyms from possible expansions is called Acronym Disambiguation (AD).",
"cite_spans": [
{
"start": 126,
"end": 157,
"text": "(Charbonnier and Wartena, 2018)",
"ref_id": "BIBREF1"
},
{
"start": 266,
"end": 281,
"text": "(Navigli, 2009)",
"ref_id": "BIBREF15"
},
{
"start": 485,
"end": 516,
"text": "(Charbonnier and Wartena, 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Methods of pattern matching, language modeling, and machine/deep learning have shown promising results in AD. Early systems for AD used pattern matching (Schwartz and Hearst, 2002) together with approaches based on word embeddings and machine learning (Jaber and Mart\u00ednez, 2021) where the AD task is considered as a classification problem. Recent efforts in AD mainly include the use of deep learning-based models (Pan et al., 2021; Zhong et al., 2021) and pre-trained language models (Beltagy et al., 2019; Devlin et al., 2019) . However, identifying the correct expansion of an acronym calls for better representation of text.",
"cite_spans": [
{
"start": 153,
"end": 180,
"text": "(Schwartz and Hearst, 2002)",
"ref_id": "BIBREF19"
},
{
"start": 252,
"end": 278,
"text": "(Jaber and Mart\u00ednez, 2021)",
"ref_id": "BIBREF8"
},
{
"start": 414,
"end": 432,
"text": "(Pan et al., 2021;",
"ref_id": "BIBREF16"
},
{
"start": 433,
"end": 452,
"text": "Zhong et al., 2021)",
"ref_id": "BIBREF29"
},
{
"start": 485,
"end": 507,
"text": "(Beltagy et al., 2019;",
"ref_id": "BIBREF0"
},
{
"start": 508,
"end": 528,
"text": "Devlin et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this study, we approached the problem of AD with the aim of learning effective text representations towards better disambiguation of acronyms. We derived our approach from Siamese Networks (Koch et al., 2015) and Triplet Networks (TNs) (Hoffer and Ailon, 2015) . TNs, inspired by Siamese Networks, aim to learn the information of inputs based on one or a few samples of training data using a triplet loss to provide better representations for data.",
"cite_spans": [
{
"start": 192,
"end": 211,
"text": "(Koch et al., 2015)",
"ref_id": "BIBREF10"
},
{
"start": 239,
"end": 263,
"text": "(Hoffer and Ailon, 2015)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main contributions of this paper were as follows: We leveraged the triplet loss and TNs (Schroff et al., 2015) for AD with the aim of learning sentence embeddings, which can capture the semantic differences of the different expansions of the same acronym. We extended the TN architecture further to include m networks and mapped the",
"cite_spans": [
{
"start": 92,
"end": 114,
"text": "(Schroff et al., 2015)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "! \" ! # ! $ ! Net || ! # \u2212 ( ! \" )|| || ! # \u2212 ( ! $ \" )|| Loss Function ! $ \" ! $ # \u2026 || ! # \u2212 ( ! $ ! )|| || ! # \u2212 ( ! $ # )|| Net Net Net Net ! \" ! # Net || ! # \u2212 ( ! \" )|| || ! # \u2212 ( ! $ )|| ! $ Net Net Loss Function",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Figure 1: Triplet Network Architecture and Modified Triplet Network Architecture. The triplet network architecture (left, Formula (1)) considers the anchor sentence x a i , positive sentence x p i , and negative sentence x n i for a sample when computing the triplet loss. Modified architecture (right, Formula (2)) considers the anchor sentence, positive sentence, and all the possible negative sentences for a sample. This includes m number of similar architectures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "AD task as a binary classification problem, which predicts if the suggested expansion for an acronym is correct or not. To the best of our knowledge this is the first attempt of adapting the TN-based methods and triplet loss for disambiguating the acronyms. We evaluated and verified the proposed approach on the AAAI-21 Scientific Document Understanding AD task dataset (SDU dataset) (Veyseh et al., 2020), sense inventory for clinical abbreviations and acronym dataset (CASI dataset) (Moon et al., 2014) , and on a sample of the Medical Abbreviation Disambiguation Dataset (MeDAL) (Wen et al., 2020) . We made our implementation available at https://github.com/sandaruSen/m_networks under the MIT license.",
"cite_spans": [
{
"start": 486,
"end": 505,
"text": "(Moon et al., 2014)",
"ref_id": "BIBREF14"
},
{
"start": 583,
"end": 601,
"text": "(Wen et al., 2020)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Extensive body of prior research for AD in scientific and medical domains exists because understanding scientific and medical text requires both AD and domain knowledge. Earliest approaches for AD included the use of a number of rules and patterns (Schwartz and Hearst, 2002) , training of classifiers based on a set of features which represent the context of the input like, part-of-speech tags, case representation of the words, or word stems (Finley et al., 2016; Wu et al., 2017) , and computation of the cosine similarity between the text with the acronym and the possible expansions based on word embeddings (Tulkens et al., 2016) . Recent efforts in AD include the use of deep learning-based methods and pre-trained language models (Pan et al., 2021; Singh and Kumar, 2021; Zhong et al., 2021) .",
"cite_spans": [
{
"start": 248,
"end": 275,
"text": "(Schwartz and Hearst, 2002)",
"ref_id": "BIBREF19"
},
{
"start": 445,
"end": 466,
"text": "(Finley et al., 2016;",
"ref_id": "BIBREF6"
},
{
"start": 467,
"end": 483,
"text": "Wu et al., 2017)",
"ref_id": "BIBREF28"
},
{
"start": 614,
"end": 636,
"text": "(Tulkens et al., 2016)",
"ref_id": "BIBREF22"
},
{
"start": 739,
"end": 757,
"text": "(Pan et al., 2021;",
"ref_id": "BIBREF16"
},
{
"start": 758,
"end": 780,
"text": "Singh and Kumar, 2021;",
"ref_id": "BIBREF20"
},
{
"start": 781,
"end": 800,
"text": "Zhong et al., 2021)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "With the introduction of transformers, the transformer-based pre-trained language models have been extensively used for the AD task. BERT (Bidirectional Encoder Representations from Transformers) models such as (Devlin et al., 2019) , SciBERT (BERT-based language model for performing scientific tasks) (Beltagy et al., 2019) , and RoBERTa (Robustly Optimized BERT Pretraining Approach) (Liu et al., 2019) are the language models that are exploited to formulate the problem of AD as a classification task for AD. The SDU@AAAI-21 AD task consisted of systems with transformer-based language models, which differed based on how the inputs and the outputs to the systems were defined (Veyseh et al., 2021) . In our work, we explored triplet loss and TNs for AD using pre-trained language models. TNs and triplet loss have been effectively used for representation learning by distance comparisons among pairs of examples. They were initially introduced for computer vision related tasks (Schroff et al., 2015) and are now used in many natural language processing (NLP) tasks (Santos et al., 2016; Ein-Dor et al., 2018; Lauriola and Moschitti, 2020; Wei et al., 2021) . We believe that through the triplet loss, the models will be able to learn subtle yet complex differences among the different expansions of the same acronym.",
"cite_spans": [
{
"start": 211,
"end": 232,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 303,
"end": 325,
"text": "(Beltagy et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 387,
"end": 405,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 681,
"end": 702,
"text": "(Veyseh et al., 2021)",
"ref_id": "BIBREF24"
},
{
"start": 983,
"end": 1005,
"text": "(Schroff et al., 2015)",
"ref_id": "BIBREF18"
},
{
"start": 1071,
"end": 1092,
"text": "(Santos et al., 2016;",
"ref_id": "BIBREF17"
},
{
"start": 1093,
"end": 1114,
"text": "Ein-Dor et al., 2018;",
"ref_id": "BIBREF4"
},
{
"start": 1115,
"end": 1144,
"text": "Lauriola and Moschitti, 2020;",
"ref_id": "BIBREF11"
},
{
"start": 1145,
"end": 1162,
"text": "Wei et al., 2021)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The goal of AD was to identify the correct expansion for a given acronym in text. Considering a dictionary of acronyms D with acronyms as keys",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "[A 1 , A 2 , ..., A j ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "where j is the number of acronyms. For each acronym A i , the m possible expansions were represented as [e 1 , e 2 , ..., e m ]. Given a sentence x i with an acronym A i , the correct expansion should be obtained from D out of the expansion list of the corresponding A i . We modeled the AD task based on a TN as well as a modified version of the TN architecture with the triplet loss. The TN allowed the AD task to be expressed as a binary classification problem to predict which expansion is the most relevant to the given acronym based on the context it appears (Appendix A). For the modified version of the TN, we included m number of architectures considering the possible negatives for a sample at once. This resulted in an anchor sentence, a positive sentence, and a list of negative sentences as inputs to the architectures ( Figure 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 834,
"end": 842,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
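{
"text": "To make the formulation concrete, the following minimal Python sketch (the example entries are hypothetical; only the structure of D is taken from this paper) shows a dictionary mapping acronyms to their possible expansions and the generation of candidate sentences:\n\nD = {\n    \"RL\": [\"reinforcement learning\", \"robust locomotion\", \"representation learning\"],\n    \"RNN\": [\"recurrent neural network\", \"recursive neural network\"],\n}\n\ndef candidate_sentences(sentence, acronym):\n    # Replace the acronym with every possible expansion e_1, ..., e_m from D.\n    return [sentence.replace(acronym, expansion) for expansion in D[acronym]]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},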
{
"text": "Denoting anchor, positive, and negative embeddings as x a i , x p i , and x n i , respectively, where i = 1, 2, . . . , k, and considering a d-dimensional embedding in the vector space f (x) \u2208 R d and \u03b1 a margin that is enforced between positive and negative pairs, the loss for the TN was defined as follows using the L 2 distances for the TN:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "||f (x a i )\u2212f (x p i )|| 2 2 +\u03b1 < ||f (x a i )\u2212f (x n i )|| 2 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
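{
"text": "A minimal PyTorch sketch of the hinge form of Formula (1), assuming the anchor, positive, and negative embeddings are given as (batch, d) tensors (an illustrative implementation, not necessarily the released code; torch.nn.TripletMarginLoss offers a closely related built-in):\n\nimport torch\nimport torch.nn.functional as F\n\ndef triplet_loss(anchor, positive, negative, margin=1.0):\n    # Penalize violations of ||f(x_a) - f(x_p)||_2^2 + margin < ||f(x_a) - f(x_n)||_2^2.\n    d_pos = (anchor - positive).pow(2).sum(dim=1)\n    d_neg = (anchor - negative).pow(2).sum(dim=1)\n    return F.relu(d_pos - d_neg + margin).mean()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},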
{
"text": "(1) For the modified version of the TN with m networks, the loss was computed considering all the possible negatives. Adapting the triplet loss to the modified architecture, the distance between the anchor and the positive sentence should be less than the minimum of the distances between the anchor and the negative sentences. We could denote the loss considering all the m number of negatives",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "x n 1 i , x n 2 i , . . . , x nm i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "||f (x a i ) \u2212 f (x p i )|| 2 2 + \u03b1 < min( ||f (x a i ) \u2212 f (x n 1 i )|| 2 2 , ||f (x a i ) \u2212 f (x n 2 i )|| 2 2 , . . . , ||f (x a i ) \u2212 f (x nm i )|| 2 2 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
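{
"text": "Under the same assumptions, a minimal PyTorch sketch of the hinge form of Formula (2), with the m negatives stacked into a (batch, m, d) tensor (again an illustrative implementation, not the released code):\n\nimport torch\nimport torch.nn.functional as F\n\ndef m_network_loss(anchor, positive, negatives, margin=1.0):\n    # The anchor-positive distance plus the margin must stay below the\n    # minimum anchor-negative distance over all m negatives.\n    d_pos = (anchor - positive).pow(2).sum(dim=1)                # (batch,)\n    d_neg = (anchor.unsqueeze(1) - negatives).pow(2).sum(dim=2)  # (batch, m)\n    return F.relu(d_pos - d_neg.min(dim=1).values + margin).mean()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},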
{
"text": "(2) Sentence triplet creation, which includes identifying an anchor sample x a i , a positive sample x p i , and a negative sample x n i (Table 1) , was considered crucial when using TNs. For each possible expansion of an acronym, we randomly extracted one sentence matching the expansion from the training dataset. These sentences were considered as anchor sentences. We then used all sentences in the training dataset to create positive samples. Acronyms in sentences were replaced by their respective correct expansion to obtain positive sentences. We then applied the following guidelines to create the negative samples: i) For each positive sentence with an acronym, we obtained all the possible expansions except for the correct expansion. ii) We replaced the acronym in the sentence with these expansions to obtain a list of sentences with other expansions. iii) Each of these negative sentences was used to create the final list of triplets.",
"cite_spans": [],
"ref_spans": [
{
"start": 137,
"end": 146,
"text": "(Table 1)",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
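{
"text": "The guidelines above can be sketched in Python as follows (the data layout is hypothetical: train_sents is assumed to be a list of (sentence, acronym, correct_expansion) tuples; only the procedure mirrors the paper):\n\nimport random\n\ndef build_triplets(train_sents, D):\n    anchors = {}  # one anchor pool per (acronym, expansion) pair\n    for sent, acr, exp in train_sents:\n        anchors.setdefault((acr, exp), []).append(sent)\n    triplets = []\n    for sent, acr, exp in train_sents:\n        anchor = random.choice(anchors[(acr, exp)])\n        positive = sent.replace(acr, exp)  # substitute the correct expansion\n        for other in D[acr]:\n            if other != exp:  # guidelines i)-iii): substitute every other expansion\n                triplets.append((anchor, positive, sent.replace(acr, other)))\n    return triplets",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},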
{
"text": "The triplet selection ensured effective training of the models. Hence, it is advised to consider triplets, which violate the triplet constraint (Formula (1)). In our approach, we considered the same positive sentence with the respective acronym replaced by other expansions of the acronym as negatives. Even though the text in the sentences was very much similar to each other, replacing the acronym with possible expansions resulted in a change in the semantic meaning of the overall sentences. Hence, we believe considering sentences with other possible expansions as negative sentences satisfied the necessity of having hard negatives, which were difficult to discriminate from the correct expansion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "The purpose of RL is for the agent to learn an optimal, or nearly-optimal, policy that maximizes the reward function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Anchor Sentence",
"sec_num": null
},
{
"text": "All agents can then operate in parallel, allowing one to exploit a number of already available reinforcement learning techniques for parallel learning. Negative Sentences [All agents can then operate in parallel, allowing one to exploit a number of already available robust locomotion techniques for parallel learning., All agents can then operate in parallel, allowing one to exploit a number of already available representation learning techniques for parallel learning., ...] In the training stage, we used the anchor sentence, positive sentence, and negative sentence as the input to the TN-based system and anchor sentence, positive sentence, and possible negative sentences as the input to the m-network-based system. For each of the sentences, we obtained an embedding, which was then used to calculate the triplet loss. In the inference stage, we used the given sentence with the acronym as the anchor sentence and we created a list of sentences by replacing the acronym in the sample sentence with possible expansions. We computed the distances between each of the possible sentences and the anchor sentence to obtain the sentence closest to the anchor sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Positive Sentence",
"sec_num": null
},
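{
"text": "A minimal sketch of the inference-stage distance comparison described above (model.encode is a hypothetical helper that returns the 64-dimensional embedding of a sentence):\n\nimport torch\n\ndef disambiguate(model, sentence, acronym, D):\n    with torch.no_grad():\n        anchor = model.encode(sentence)\n        distances = [torch.dist(anchor, model.encode(sentence.replace(acronym, exp)))\n                     for exp in D[acronym]]\n    # The expansion whose sentence lies closest to the anchor wins.\n    return D[acronym][int(torch.stack(distances).argmin())]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Positive Sentence",
"sec_num": null
},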
{
"text": "We used the SDU dataset (Veyseh et al., 2020), CASI dataset (Moon et al., 2014) , and MeDAL dataset (Wen et al., 2020 ) (see Appendix B for further information). The SDU dataset contained data from 6, 786 English scientific papers published at arXiv and consisted of 62, 441 sentences. The dataset also consisted of a dictionary of acronyms and their possible expansions. We used the publicly available training and development data of the SDU dataset for our experiments. CASI dataset was created using admission notes, consultation notes, and discharge summaries from hospitals affiliated with the University of Minnesota. 37, 500 samples from CASI dataset was split into train, validation, and test subsets and a dictionary with the acronyms was created for the experiments. The MeDAL dataset was created from 14, 393, 619 articles in PubMed. We created a sample dataset and a dictionary of acronyms from MeDAL dataset for experiments (Table 3 of Appendix B).",
"cite_spans": [
{
"start": 60,
"end": 79,
"text": "(Moon et al., 2014)",
"ref_id": "BIBREF14"
},
{
"start": 100,
"end": 117,
"text": "(Wen et al., 2020",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 938,
"end": 946,
"text": "(Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We performed a basic preprocessing on the sentences, which were quite long, by sampling tokens in the sentences as proposed by Singh and Kumar (2021) . We used N/2 tokens to the left and right of the acronym for sentences with length of more than 120, considering N = 120. As a baseline model, we experimented with the system proposed by Singh and Kumar, 2021 which modeled the AD task as a span prediction task. The proposed system fine-tuned the complete SciBERT model with 12 layers to predict the start and end indices of the correct expansion of an acronym given all the possible expansions, leveraging the SciB-ERT's ability to encode pair of sequences together.",
"cite_spans": [
{
"start": 127,
"end": 149,
"text": "Singh and Kumar (2021)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
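{
"text": "A sketch of the token-window preprocessing described above (assuming the sentence is already tokenized and the position of the acronym is known):\n\ndef window_around_acronym(tokens, acronym_index, N=120):\n    # Keep sentences of at most N tokens; otherwise take roughly N/2 tokens\n    # on each side of the acronym, clamped to the sentence boundaries.\n    if len(tokens) <= N:\n        return tokens\n    half = N // 2\n    start = max(0, min(acronym_index - half, len(tokens) - N))\n    return tokens[start:start + N]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},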
{
"text": "We used the pre-trained SciBERT model architecture as the base model for experiments on SDU dataset and the pre-trained BioBERT (BERT-based language model for performing biomedical tasks) (Lee et al., 2020) model as the base model for experiments on the CASI and the MeDAL datasets with their first 11 encoder layers frozen followed by dropout of 0.5 to avoid over-fitting and a dense layer to map the feature embeddings output by the base models with dimensions of 768 to 64 (Appendix C). These 64 dimensional embeddings were used to compute the triplet loss. We trained the models using a learning rate of 5 \u00d7 10 \u22124 with the Adam optimizer (Kingma and Ba, 2014). The best model over 10 epochs with a batch size of 32 was chosen as the final model.",
"cite_spans": [
{
"start": 188,
"end": 206,
"text": "(Lee et al., 2020)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
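{
"text": "A minimal sketch of this encoder using the Hugging Face transformers library (the checkpoint name is the public allenai SciBERT release; pooling via the [CLS] token is an assumption, since the paper does not state the pooling strategy):\n\nimport torch.nn as nn\nfrom transformers import AutoModel\n\nclass TripletEncoder(nn.Module):\n    def __init__(self, base=\"allenai/scibert_scivocab_uncased\"):\n        super().__init__()\n        self.bert = AutoModel.from_pretrained(base)\n        for layer in self.bert.encoder.layer[:11]:  # freeze the first 11 encoder layers\n            for p in layer.parameters():\n                p.requires_grad = False\n        self.dropout = nn.Dropout(0.5)  # dropout of 0.5, as in the paper\n        self.fc = nn.Linear(768, 64)    # map 768-d features to 64-d embeddings\n\n    def forward(self, input_ids, attention_mask):\n        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)\n        return self.fc(self.dropout(out.last_hidden_state[:, 0]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},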
{
"text": "To evaluate the performance of the proposed architecture in the training set, we computed the macro-averaged F1 score. If the distance between the anchor and the positive sentence is less than the distance between the anchor and negative sentences, the prediction of the model was considered correct. We used F1 also in evaluation. We computed the distances between the anchor and possible sentences from which the sentence with the minimum distance to the anchor was considered the sentence with the correct expansion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
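{
"text": "A sketch of the correctness criterion and the macro-averaged F1 computation (gold and predicted are hypothetical lists of expansion indices, one per sample):\n\nfrom sklearn.metrics import f1_score\n\ndef is_correct(d_anchor_pos, d_anchor_negs):\n    # Correct if the positive is closer to the anchor than every negative.\n    return d_anchor_pos < min(d_anchor_negs)\n\n# macro_f1 = f1_score(gold, predicted, average=\"macro\")",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},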
{
"text": "By comparing the proposed methods with the baseline system on the three datasets, we observed that the methods based on TNs learnt to discriminate among the different expansions of an acronym. Compared to the TN-based method, the m network-based method has comparable performance as the baseline for all the datasets. Both the proposed methods outperformed the baseline on SDU and MeDAL datasets. The m network-based method gave an F1 score of 87.31% on SDU dataset, 70.67% on CASI dataset, and 75.75% on MeDAL dataset (Table 2) .",
"cite_spans": [],
"ref_spans": [
{
"start": 519,
"end": 528,
"text": "(Table 2)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5"
},
{
"text": "To investigate the semantic similarity and the representation of the output embeddings in the vector space, we visualized output representations obtained by the m network-based architecture for the SDU, CASI, and MeDAL datasets by reducing the dimensions using principal component analysis (PCA) (Figure 3 of Appendix D). For the SDU dataset, we used the acronym RL with reinforcement learning to obtain the positive and respective negative sentences. Similarly, for the CASI dataset the acronym DM with diabetes mellitus expansion and for the MeDAL dataset the acronym RSM with respiratory muscle strength expansion were used.",
"cite_spans": [],
"ref_spans": [
{
"start": 296,
"end": 305,
"text": "(Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5"
},
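{
"text": "A minimal sketch of the PCA projection used for this visualization (random embeddings stand in for the 64-dimensional m-network outputs):\n\nimport numpy as np\nfrom sklearn.decomposition import PCA\n\nrng = np.random.default_rng(0)\nembeddings = rng.normal(size=(100, 64))  # stand-in for m-network embeddings\ncoords = PCA(n_components=2).fit_transform(embeddings)\nprint(coords.shape)  # (100, 2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5"
},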
{
"text": "In this paper, we have suggested a new approach for disambiguating the acronyms to effectively identify the correct expansion through better representation learning using TNs by creating high quality sentence embeddings, which can capture the semantic differences among the different expansions of the same acronym. Namely, we have presented how methods based on TNs and triplet loss can be used for AD. To address the effective learning of context representations for identifying the correct expansion of acronyms, our methods leverage the contextual information of text and semantic similarity among expansions. In particular, our paper has introduced m networks inspired by TNs. Our experiments have demonstrated that methods based on TNs have comparable performance on both scientific and medical domains. However, the applicability of the proposed methods on CASI dataset should be further investigated. Finally, the number of parameters in TN-based methods is only 12% of the number of parameters in the baseline method resulting in smaller size of the models (Table 2) . The TN-based methods have used the representations from the last layer of the BERT-based models where as the baseline method fine-tuned the complete model with all 12 layers for the predictions 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 1066,
"end": 1075,
"text": "(Table 2)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "We have tested the proposed methods on the SDU, CASI, and MeDAL datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "The TN-based method for AD can be used for data augmentation when the training data is limited. Given that the original TN architecture only considers one negative sample at a time, considering all the possible expansions of each acronym one at a time can be used to augment the training data size. This addresses the issue of limited training data for deep learning architectures. However, in the modified TN-based architecture with m networks, at the training stage all the possible negatives are considered for a sample at once. Therefore, data augmentation is not possible in this case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "In this paper, our main goal was to approach the AD problem as an effective representation learning problem to discriminate among the possible expansions of an acronym based on the context it appears. Earliest approaches on AD relied on rules and patterns (Schwartz and Hearst, 2002) to identify the correct expansion of an acronym which evolved to use of machine learning-based approaches with different features (Finley et al., 2016; Wu et al., 2017) and computing of semantic similarity between the text with acronym and the possible expansions. Recent efforts involved pre-trained language models for the AD task. Most of these systems were validated on one domain of focus (i.e., scientific text, medical text, or general text). We approached the problem focusing on learning better representations for text through TNs and triplet loss using pretrained language models. Furthermore, we tested the proposed approaches on both the scientific and medical domains.",
"cite_spans": [
{
"start": 256,
"end": 283,
"text": "(Schwartz and Hearst, 2002)",
"ref_id": "BIBREF19"
},
{
"start": 414,
"end": 435,
"text": "(Finley et al., 2016;",
"ref_id": "BIBREF6"
},
{
"start": 436,
"end": 452,
"text": "Wu et al., 2017)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "As future work, we intend to experiment with different constrastive losses (Sohn, 2016; Chen et al., 2020) . Specifically, our aspiration is to compare and contrast the proposed approach with InfoNCE (Van den Oord et al., 2018), a popular contrastive loss which includes multiple negatives and normalises across examples in a mini batch.",
"cite_spans": [
{
"start": 75,
"end": 87,
"text": "(Sohn, 2016;",
"ref_id": "BIBREF21"
},
{
"start": 88,
"end": 106,
"text": "Chen et al., 2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "We have proposed an approach for AD using TNbased methods with the aim of learning effective representations for data. We have used SciBERT trained on scientific publications and BioBERT trained on biomedical domain corpora (PubMed abstracts and PMC full-text articles) for our experiments. Instead of finetuning all the layers in the pre-trained language models, we have finetuned only the last encoder layer by freezing the first 11 encoder layers thereby bringing the latest deep learning advances to AD in a computationally efficient way. However, the m network architecture despite its smaller number of parameters has m architectures. This has resulted in more updates in the parameters increasing the computational time in the training stage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Considerations",
"sec_num": "7"
},
{
"text": "The proposed approaches have been tested and validated on three datasets: SDU dataset, CASI dataset, and MeDAL dataset. According to the National Statement on Ethical Conduct in Human Research (2007) -Updated 2018 (National Health and Medical Research Council, 2018), a new ethics approval is not required for our experiments and, to the best of our knowledge, the three original datasets have been created ethically. All the three datasets are publicly available (see Appendix B).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Considerations",
"sec_num": "7"
},
{
"text": "Identifying the correct expansion of acronyms is important in improving the understandability of scientific/medical text due to the prevalence of technical acronyms which are shorthanded for ease of use. For people with limited expertise knowledge, understanding scientific/medical documents can be difficult, stressful and cause misunderstandings. The proposed methods can be used in scientific/medical text simplification tasks to provide lay people with better understanding of text through the disambiguation of acronyms. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Considerations",
"sec_num": "7"
},
{
"text": "Our implementation used the pre-trained SciBERT and BioBERT model architectures. We conducted out experiments on 1 RTX 3090 graphics cards with 24 GB memory and CUDA 11.4. Our implementation is based on PyTorch 1.8.2. Figure 3 shows sample output representations obtained by the m network-based architecture for the SDU, CASI, and MeDAL datasets by reducing the dimensions using PCA. For the SDU dataset, the acronym RL with reinforcement learning were used to obtain the positive and respective negative sentences. Similarly, for CASI dataset the acronym DM with diabetes mellitus expansion and for MeDAL dataset the acronym RMS with respiratory muscle strength expansion were used. ",
"cite_spans": [],
"ref_spans": [
{
"start": 218,
"end": 226,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "C Implementation Details",
"sec_num": null
},
{
"text": "However, given that m network-based method consists of m architectures, the number of updates on parameters increases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research was funded by and has been delivered in partnership with Our Health in Our Hands (OHIOH), a strategic initiative of the ANU, which aims to transform health care by developing new personalized health technologies and solutions in collaboration with patients, clinicians and healthcare providers. We gratefully acknowledge the funding from the ANU School of Computing for the first author's PhD studies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
},
{
"text": "Triplet loss uses anchor, positive, and negative samples to learn effective representations. Anchor sample comes from a specific class. Positive samples belong to the same class as the anchor sample and the negative samples belong to a different class than the class of the anchor sample. The triplet loss encourages to minimize the distance between similar embeddings (i.e., anchor and positive embeddings) and maximize the distances between dissimilar embeddings (anchor and negative embeddings) enforcing a margin between the embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Triplet Networks and Triplet Loss",
"sec_num": null
},
{
"text": "The datasets used in this study are all publicly available from the following sources: AD dataset from SDU@AAAI21, CASI, and MeDAL. The dataset statistics are shown in Table 3 . The distribution of the number of samples based on the number of acronym expansion pairs is shown in Figure 2 . For the SDU dataset, the acronym RL with reinforcement learning were used to obtain the positive and respective negative sentences. Similarly, for CASI dataset the acronym DM with diabetes mellitus expansion and for MeDAL dataset the acronym RMS with respiratory muscle strength expansion were used.",
"cite_spans": [],
"ref_spans": [
{
"start": 168,
"end": 175,
"text": "Table 3",
"ref_id": null
},
{
"start": 279,
"end": 287,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "B Data Samples and Their Availability",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Scibert: A pretrained language model for scientific text",
"authors": [
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Arman",
"middle": [],
"last": "Cohan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3615--3620",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. Scibert: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3615-3620.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Using word embeddings for unsupervised acronym disambiguation",
"authors": [
{
"first": "Jean",
"middle": [],
"last": "Charbonnier",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Wartena",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jean Charbonnier and Christian Wartena. 2018. Using word embeddings for unsupervised acronym disam- biguation. In Proceedings of the 27th International Conference on Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A simple framework for contrastive learning of visual representations",
"authors": [
{
"first": "Ting",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Kornblith",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2020,
"venue": "In International conference on machine learning",
"volume": "",
"issue": "",
"pages": "1597--1607",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. In In- ternational conference on machine learning, pages 1597-1607. PMLR.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning thematic similarity metric using triplet networks",
"authors": [
{
"first": "Yosi",
"middle": [],
"last": "Liat Ein-Dor",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Mass",
"suffix": ""
},
{
"first": "Elad",
"middle": [],
"last": "Halfon",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Venezian",
"suffix": ""
},
{
"first": "Ranit",
"middle": [],
"last": "Shnayderman",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Aharonov",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Slonim",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liat Ein-Dor, Yosi Mass, Alon Halfon, Elad Venezian, Ilya Shnayderman, Ranit Aharonov, and Noam Slonim. 2018. Learning thematic similarity metric using triplet networks. In Proceedings of the 56th",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Annual Meeting of the Association for Computational Linguistics (ACL 2018)",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "15--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics (ACL 2018), Melbourne, Australia, pages 15-20.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Towards comprehensive clinical abbreviation disambiguation using machine-labeled training data",
"authors": [
{
"first": "P",
"middle": [],
"last": "Gregory",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Finley",
"suffix": ""
},
{
"first": "V",
"middle": [
"S"
],
"last": "Serguei",
"suffix": ""
},
{
"first": "Reed",
"middle": [],
"last": "Pakhomov",
"suffix": ""
},
{
"first": "Genevieve",
"middle": [
"B"
],
"last": "Mcewan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Melton",
"suffix": ""
}
],
"year": 2016,
"venue": "AMIA Annual Symposium Proceedings",
"volume": "2016",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gregory P Finley, Serguei VS Pakhomov, Reed McE- wan, and Genevieve B Melton. 2016. Towards com- prehensive clinical abbreviation disambiguation us- ing machine-labeled training data. In AMIA Annual Symposium Proceedings, volume 2016, page 560. American Medical Informatics Association.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Deep metric learning using triplet network",
"authors": [
{
"first": "Elad",
"middle": [],
"last": "Hoffer",
"suffix": ""
},
{
"first": "Nir",
"middle": [],
"last": "Ailon",
"suffix": ""
}
],
"year": 2015,
"venue": "International workshop on similarity-based pattern recognition",
"volume": "",
"issue": "",
"pages": "84--92",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elad Hoffer and Nir Ailon. 2015. Deep metric learning using triplet network. In International workshop on similarity-based pattern recognition, pages 84-92. Springer.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Participation of uc3m in sdu@ aaai-21: A hybrid approach to disambiguate scientific acronyms",
"authors": [
{
"first": "Areej",
"middle": [],
"last": "Jaber",
"suffix": ""
},
{
"first": "Paloma",
"middle": [],
"last": "Mart\u00ednez",
"suffix": ""
}
],
"year": 2021,
"venue": "SDU@ AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Areej Jaber and Paloma Mart\u00ednez. 2021. Participation of uc3m in sdu@ aaai-21: A hybrid approach to disambiguate scientific acronyms. In SDU@ AAAI.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Siamese neural networks for one-shot image recognition",
"authors": [
{
"first": "Gregory",
"middle": [],
"last": "Koch",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zemel",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2015,
"venue": "ICML deep learning workshop",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gregory Koch, Richard Zemel, Ruslan Salakhutdinov, et al. 2015. Siamese neural networks for one-shot image recognition. In ICML deep learning workshop, volume 2. Lille.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Context-based transformer models for answer sentence selection",
"authors": [
{
"first": "Ivano",
"middle": [],
"last": "Lauriola",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2006.01285"
]
},
"num": null,
"urls": [],
"raw_text": "Ivano Lauriola and Alessandro Moschitti. 2020. Context-based transformer models for answer sen- tence selection. arXiv preprint arXiv:2006.01285.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Biobert: a pre-trained biomedical language representation model for biomedical text mining",
"authors": [
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Wonjin",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Sungdong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Donghyeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Sunkyu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Chan",
"middle": [],
"last": "Ho So",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2020,
"venue": "Bioinformatics",
"volume": "36",
"issue": "4",
"pages": "1234--1240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A sense inventory for clinical abbreviations and acronyms created using clinical notes and medical dictionary resources",
"authors": [
{
"first": "Sungrim",
"middle": [],
"last": "Moon",
"suffix": ""
},
{
"first": "Serguei",
"middle": [],
"last": "Pakhomov",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Genevieve",
"middle": [
"B"
],
"last": "James O Ryan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Melton",
"suffix": ""
}
],
"year": 2007,
"venue": "Journal of the American Medical Informatics Association",
"volume": "21",
"issue": "2",
"pages": "299--307",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sungrim Moon, Serguei Pakhomov, Nathan Liu, James O Ryan, and Genevieve B Melton. 2014. A sense inventory for clinical abbreviations and acronyms created using clinical notes and medical dictionary resources. Journal of the American Medi- cal Informatics Association, 21(2):299-307. National Health and Medical Research Coun- cil. 2018. National Statement on Ethi- cal Conduct in Human Research (2007).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Word sense disambiguation: A survey",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2009,
"venue": "ACM computing surveys (CSUR)",
"volume": "41",
"issue": "",
"pages": "1--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roberto Navigli. 2009. Word sense disambiguation: A survey. ACM computing surveys (CSUR), 41(2):1- 69.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Bert-based acronym disambiguation with multiple training strategies",
"authors": [
{
"first": "Chunguang",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Bingyan",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Shengguang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhipeng",
"middle": [],
"last": "Luo",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2103.00488"
]
},
"num": null,
"urls": [],
"raw_text": "Chunguang Pan, Bingyan Song, Shengguang Wang, and Zhipeng Luo. 2021. Bert-based acronym disambigua- tion with multiple training strategies. arXiv preprint arXiv:2103.00488.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Attentive pooling networks",
"authors": [
{
"first": "Santos",
"middle": [],
"last": "Cicero Dos",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1602.03609"
]
},
"num": null,
"urls": [],
"raw_text": "Cicero dos Santos, Ming Tan, Bing Xiang, and Bowen Zhou. 2016. Attentive pooling networks. arXiv preprint arXiv:1602.03609.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Facenet: A unified embedding for face recognition and clustering",
"authors": [
{
"first": "Florian",
"middle": [],
"last": "Schroff",
"suffix": ""
},
{
"first": "Dmitry",
"middle": [],
"last": "Kalenichenko",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Philbin",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "815--823",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Florian Schroff, Dmitry Kalenichenko, and James Philbin. 2015. Facenet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 815-823.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A simple algorithm for identifying abbreviation definitions in biomedical text",
"authors": [
{
"first": "S",
"middle": [],
"last": "Ariel",
"suffix": ""
},
{
"first": "Marti",
"middle": [
"A"
],
"last": "Schwartz",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hearst",
"suffix": ""
}
],
"year": 2002,
"venue": "Biocomputing",
"volume": "",
"issue": "",
"pages": "451--462",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ariel S Schwartz and Marti A Hearst. 2002. A simple algorithm for identifying abbreviation definitions in biomedical text. In Biocomputing 2003, pages 451- 462. World Scientific.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Scidr at sdu-2020: Ideas-identifying and disambiguating everyday acronyms for scientific domain",
"authors": [
{
"first": "Aadarsh",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Priyanshu",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2021,
"venue": "SDU@AAAI-21",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aadarsh Singh and Priyanshu Kumar. 2021. Scidr at sdu-2020: Ideas-identifying and disambiguating everyday acronyms for scientific domain. In In SDU@AAAI-21.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Improved deep metric learning with multi-class n-pair loss objective",
"authors": [
{
"first": "Kihyuk",
"middle": [],
"last": "Sohn",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kihyuk Sohn. 2016. Improved deep metric learning with multi-class n-pair loss objective. Advances in neural information processing systems, 29.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Using distributed representations to disambiguate biomedical and clinical concepts",
"authors": [
{
"first": "St\u00e9phan",
"middle": [],
"last": "Tulkens",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "\u0160uster",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Daelemans",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 15th Workshop on Biomedical Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "St\u00e9phan Tulkens, Simon \u0160uster, and Walter Daelemans. 2016. Using distributed representations to disam- biguate biomedical and clinical concepts. In Pro- ceedings of the 15th Workshop on Biomedical Natu- ral Language Processing.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Representation learning with contrastive predictive coding. arXiv e-prints",
"authors": [
{
"first": "Aaron",
"middle": [],
"last": "Van Den Oord",
"suffix": ""
},
{
"first": "Yazhe",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aaron Van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive predictive coding. arXiv e-prints, pages arXiv-1807.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Acronym identification and disambiguation shared tasks for scientific document understanding",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Amir Pouran",
"suffix": ""
},
{
"first": "Franck",
"middle": [],
"last": "Veyseh",
"suffix": ""
},
{
"first": "Thien",
"middle": [],
"last": "Dernoncourt",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Huu Nguyen",
"suffix": ""
},
{
"first": "Leo",
"middle": [
"Anthony"
],
"last": "Chang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Celi",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amir Pouran Ben Veyseh, Franck Dernoncourt, Thien Huu Nguyen, Walter Chang, and Leo Anthony Celi. 2021. Acronym identification and disambigua- tion shared tasks for scientific document understand- ing. In In SDU@AAAI-21.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "What does this acronym mean? introducing a new dataset for acronym identification and disambiguation",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Amir Pouran",
"suffix": ""
},
{
"first": "Franck",
"middle": [],
"last": "Veyseh",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dernoncourt",
"suffix": ""
},
{
"first": "Thien Huu",
"middle": [],
"last": "Quan Hung Tran",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3285--3301",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amir Pouran Ben Veyseh, Franck Dernoncourt, Quan Hung Tran, and Thien Huu Nguyen. 2020. What does this acronym mean? introducing a new dataset for acronym identification and disambigua- tion. In Proceedings of the 28th International Con- ference on Computational Linguistics, pages 3285- 3301.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Few-shot text classification with triplet networks, data augmentation, and curriculum learning",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Chengyu",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Soroush",
"middle": [],
"last": "Vosoughi",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Shiqi",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "5493--5500",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Wei, Chengyu Huang, Soroush Vosoughi, Yu Cheng, and Shiqi Xu. 2021. Few-shot text clas- sification with triplet networks, data augmentation, and curriculum learning. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5493-5500.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Medal: Medical abbreviation disambiguation dataset for natural language understanding pretraining",
"authors": [
{
"first": "Zhi",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Xing",
"middle": [
"Han"
],
"last": "Lu",
"suffix": ""
},
{
"first": "Siva",
"middle": [],
"last": "Reddy",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 3rd Clinical Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "130--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhi Wen, Xing Han Lu, and Siva Reddy. 2020. Medal: Medical abbreviation disambiguation dataset for nat- ural language understanding pretraining. In Proceed- ings of the 3rd Clinical Natural Language Processing Workshop, pages 130-135.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A long journey to short abbreviations: developing an open-source framework for clinical abbreviation recognition and disambiguation (card)",
"authors": [
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Joshua",
"middle": [
"C"
],
"last": "Denny",
"suffix": ""
},
{
"first": "Trent",
"middle": [],
"last": "Rosenbloom",
"suffix": ""
},
{
"first": "Randolph",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
},
{
"first": "Dario",
"middle": [
"A"
],
"last": "Giuse",
"suffix": ""
},
{
"first": "Lulu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Carmelo",
"middle": [],
"last": "Blanquicett",
"suffix": ""
},
{
"first": "Ergin",
"middle": [],
"last": "Soysal",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2017,
"venue": "Journal of the American Medical Informatics Association",
"volume": "24",
"issue": "e1",
"pages": "79--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonghui Wu, Joshua C Denny, S Trent Rosenbloom, Randolph A Miller, Dario A Giuse, Lulu Wang, Carmelo Blanquicett, Ergin Soysal, Jun Xu, and Hua Xu. 2017. A long journey to short abbreviations: developing an open-source framework for clinical abbreviation recognition and disambiguation (card). Journal of the American Medical Informatics Associ- ation, 24(e1):e79-e86.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Leveraging domain agnostic and specific knowledge for acronym disambiguation",
"authors": [
{
"first": "Qiwei",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Guanxiong",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Danqing",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Wangli",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jiayu",
"middle": [],
"last": "Tang",
"suffix": ""
}
],
"year": 2021,
"venue": "SDU@ AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qiwei Zhong, Guanxiong Zeng, Danqing Zhu, Yang Zhang, Wangli Lin, Ben Chen, and Jiayu Tang. 2021. Leveraging domain agnostic and specific knowledge for acronym disambiguation. In SDU@ AAAI.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "The distribution of samples based on the number of acronym expansion pairs for SDU, CASI, and MeDAL datasets.",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF0": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>Architecture or Model Baseline method by Singh and Kumar (2021)</td><td>Number of Pa-rameters 109, 920, 002</td><td>F score on SDU 84.24%</td><td>F score on CASI 78.16%</td><td>F score on MeDAL 74.91%</td></tr><tr><td>Triplet Network-based method</td><td>13, 576, 768</td><td>85.70%</td><td>56.49%</td><td>75.19%</td></tr><tr><td>m Network-based method</td><td>13, 576, 768</td><td>87.31%</td><td>70.67%</td><td>75.75%</td></tr></table>",
"html": null,
"text": "An example of anchor, positive, and negative sentences for the acronym RL and the expansion reinforcement learning."
},
"TABREF1": {
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "Results of the validation data of SDU dataset and test data of CASI and MeDAL datasets."
},
"TABREF3": {
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null,
"text": ""
}
}
}
}