|
{ |
|
"paper_id": "2022", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T10:33:55.405943Z" |
|
}, |
|
"title": "DEFTri: A Few-Shot Label Fused Contextual Representation Learning For Product Defect Triage in e-Commerce", |
|
"authors": [ |
|
{ |
|
"first": "Ipsita", |
|
"middle": [], |
|
"last": "Mohanty", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Walmart Global Tech Sunnyvale", |
|
"location": { |
|
"region": "California", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Defect Triage is a time-sensitive and critical process in a large-scale agile software development lifecycle for e-commerce. Inefficiencies arising from human and process dependencies in this domain have motivated research in automated approaches using machine learning to accurately assign defects to qualified teams. This work proposes a novel framework for automated defect triage (DEFTri) using finetuned state-of-the-art pre-trained BERT on labels fused text embeddings to improve contextual representations from human-generated product defects. For our multi-label text classification defect triage task, we also introduce a Walmart proprietary dataset of product defects using weak supervision and adversarial learning, in a few-shot setting.", |
|
"pdf_parse": { |
|
"paper_id": "2022", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Defect Triage is a time-sensitive and critical process in a large-scale agile software development lifecycle for e-commerce. Inefficiencies arising from human and process dependencies in this domain have motivated research in automated approaches using machine learning to accurately assign defects to qualified teams. This work proposes a novel framework for automated defect triage (DEFTri) using finetuned state-of-the-art pre-trained BERT on labels fused text embeddings to improve contextual representations from human-generated product defects. For our multi-label text classification defect triage task, we also introduce a Walmart proprietary dataset of product defects using weak supervision and adversarial learning, in a few-shot setting.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "In large e-commerce organizations, there are many defects generated periodically with a massive pool of software teams and developers spread across geographies to pick from, each with unique domain specialization. Most organizations have a large pool of human triaging agents responsible for routing these product defects across various teams within the organization. However, large-scale software releases are time-sensitive, and effective defect assignments are a critical component in the process that is prone to bottlenecks. Determining the most suitable team to own a defect may require several attempts; thus, wasting time to diagnose a defect not in the team's domain of specialty and, overall, negatively impacting the defect resolution throughput.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Prior industry research work on automated defect triage has primarily focused on using the traditional machine learning approaches. However, with the recent surge of state-of-the-art pre-trained language models, one under-explored field of application is operations in agile software development.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the defect triage, handling scenarios require Natural Language Understanding to utilize the context of the defects logged by human testers, to predict all the teams associated with resolution. The current defect triage process is primarily humanagents driven. This work integrates an automated defect triage framework, DEFTri using product defect's contextual features to achieve operational excellence within Walmart's software development lifecycle.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We propose a novel framework, DEFTri to perform an automated defect triage using contextual representations of human-generated defect texts. We use Walmart's proprietary data of product defects curated by product managers, program managers, and beta-testers to train our models. We use domain-specific lexicons to generate labeled training data using weak supervision in a few shot settings. We further use adversarial learning to increase our training sample size while increasing the robustness of our models. We propose our model architecture for fine-tuning pre-trained BERT (Devlin et al., 2018) for our multi-label classification task. Finally, we consolidate our experiments, analyze the results and discuss future research work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 579, |
|
"end": 600, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Prior research work on defect triage (Choquette-Choo et al., 2019; Mani et al., 2018; Soleimani Neysiani et al., 2020) mostly focuses on using traditional machine learning and RNNs on word vector representations of text using BOW, Word2Vec, Tfidf, etc. Another recent research relies on graph representation learning for defect triage . This paper proposes a graph recurrent convolution network with a joint random walk mechanism-based architecture. Also, several recent research on label embedding (Xiong et al., 2021; Liu et al., 2021; Si et al., 2020) has shown promising results for learning the text and label representation in the same latent 1 space. We further the research by proposing a novel architecture to derive superior contextual text representations using state-of-the-art language model BERT for multi-label defect triage.", |
|
"cite_spans": [ |
|
{ |
|
"start": 37, |
|
"end": 66, |
|
"text": "(Choquette-Choo et al., 2019;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 67, |
|
"end": 85, |
|
"text": "Mani et al., 2018;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 86, |
|
"end": 118, |
|
"text": "Soleimani Neysiani et al., 2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 499, |
|
"end": 519, |
|
"text": "(Xiong et al., 2021;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 520, |
|
"end": 537, |
|
"text": "Liu et al., 2021;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 538, |
|
"end": 554, |
|
"text": "Si et al., 2020)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Most of these published research benchmarks are on open-source defect report datasets -Eclipse and Mozilla (Lamkanfi et al., 2013) . However, these datasets are focused on technical errors generated during system failures and do not mimic our use case. Our product defects are comprehensive user testing reviews consisting of natural language, technical and domain-specific text. In the real world, gathering labeled data is hard and expensive. Hence, we propose a methodology to generate a robust proprietary multi-label training dataset using weak supervision and adversarial learning.", |
|
"cite_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 130, |
|
"text": "(Lamkanfi et al., 2013)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Our primary dataset is a proprietary in-house dataset consisting of actual defect reviews generated by beta testers for one of our major software releases. We rely on defect title and description fields to create the text corpus and text labels to identify the teams uniquely. Each defect could have multiple associated teams and vice versa. For our research, we have 3485 samples as a train set and 85 samples as the test set with 15 unique team labels for our multi-label dataset. Refer Table 1 . We have 4-5 human-expert annotated defects corresponding to each team label in our low-resource setting. Our data preparation pipeline follows the below steps,", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 489, |
|
"end": 496, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Despite the success of fine-tuning pre-trained language models, one bottleneck is the requirement of labeled data. These labeled training data were expensive and time-consuming to create. It required human annotators with domain expertise to read through each defect review and assign team labels accordingly. Every change in labeling guidelines, team orientation, or use case changes necessitated re-labeling. Hence, we used Snorkel label model (Ratner et al., 2017) to generate weak labels for our training data. We apply 25 labeling functions (LFs) to unlabelled training data using a snorkel pipeline. Refer Table 2 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 446, |
|
"end": 467, |
|
"text": "(Ratner et al., 2017)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 612, |
|
"end": 619, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Generate Labeled Data Using Weak Supervision", |
|
"sec_num": "3.1" |
|
}, |
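
{

"text": "Below is a minimal sketch of this weak-supervision step, assuming two hypothetical labeling functions in the style of our keyword and pattern rules; the actual 25 LFs and team labels are proprietary, and the multi-label setting would run one such pass per team label.\n\nimport pandas as pd\nfrom snorkel.labeling import labeling_function, PandasLFApplier\nfrom snorkel.labeling.model import LabelModel\n\nABSTAIN, NOT_TEAM_A, TEAM_A = -1, 0, 1\n\n@labeling_function()\ndef lf_mobile_keywords(x):\n    # Hypothetical rule: mobile platform keywords vote for Team A.\n    text = x.text.lower()\n    return TEAM_A if ('android' in text or 'ios' in text) else ABSTAIN\n\n@labeling_function()\ndef lf_web_keyword(x):\n    # Hypothetical rule: desktop-only defects vote against Team A.\n    return NOT_TEAM_A if 'desktop' in x.text.lower() else ABSTAIN\n\ndf_train = pd.DataFrame({'text': ['Search tiles broken on ios', 'Desktop checkout CTA missing']})\napplier = PandasLFApplier(lfs=[lf_mobile_keywords, lf_web_keyword])\nL_train = applier.apply(df=df_train)  # (num_examples, num_LFs) vote matrix\n\n# The label model denoises overlapping and conflicting LF votes into weak labels.\nlabel_model = LabelModel(cardinality=2, verbose=False)\nlabel_model.fit(L_train, n_epochs=500, seed=42)\nweak_labels = label_model.predict(L=L_train)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Generate Labeled Data Using Weak Supervision",

"sec_num": null

},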
|
{ |
|
"text": "Machine learning algorithms are often vulnerable to adversarial examples that have imperceptible alterations from the original counterparts but can fool the state-of-the-art models (Jin et al., 2019; Dong et al., 2021) .To increase the robustness, model training can be done using adversarial examples (Goodfellow et al., 2014; Gowal et al., 2021) . We use Textattack framework (Morris et al., 2020) on 30% of our data, chosen at random to generate synthetic data for training our models and append these synthetic examples to our train set. We use embedding recipe of the framework that augments text by replacing words with neighbors in the counter-fitted embedding space, with a constraint to ensure their cosine similarity is at least 0.8. For every sampled defect, we produce 2 augmented defect texts by altering 10% of original text words, while preserving the team labels . Refer Table 3 3", |
|
"cite_spans": [ |
|
{ |
|
"start": 181, |
|
"end": 199, |
|
"text": "(Jin et al., 2019;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 200, |
|
"end": 218, |
|
"text": "Dong et al., 2021)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 302, |
|
"end": 327, |
|
"text": "(Goodfellow et al., 2014;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 328, |
|
"end": 347, |
|
"text": "Gowal et al., 2021)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 378, |
|
"end": 399, |
|
"text": "(Morris et al., 2020)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 887, |
|
"end": 894, |
|
"text": "Table 3", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Generate Synthetic Data Using Adversarial Learning", |
|
"sec_num": "3.2" |
|
}, |
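
{

"text": "A short sketch of this augmentation step using TextAttack's EmbeddingAugmenter (the embedding recipe referenced above); the sample defect text is illustrative.\n\nfrom textattack.augmentation import EmbeddingAugmenter\n\n# Swap ~10% of words with neighbors in the counter-fitted embedding space\n# (cosine similarity >= 0.8 by default) and emit 2 variants per defect.\naugmenter = EmbeddingAugmenter(pct_words_to_swap=0.1, transformations_per_example=2)\n\ndefect = 'Final cost by weight not showing on search tiles'\nfor variant in augmenter.augment(defect):\n    print(variant)  # team labels of the original defect carry over unchanged",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Generate Synthetic Data Using Adversarial Learning",

"sec_num": null

},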
|
{ |
|
"text": "We found that the final training data created using the above techniques were imbalanced. This issue was because the product defects were likely skewed towards a specific defect associated with a more significant and frequently tested domain vs. a rarely occurring one. We also noticed that defect reviews for features related to new team labels are getting introduced into the environment on an ongoing basis. To resolve the skewness, we used Multilabel Synthetic Minority Over-sampling Technique (MLSMOTE) (Charte et al., 2015) w.r.t the team labels with minimal data representation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 508, |
|
"end": 529, |
|
"text": "(Charte et al., 2015)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": ".3 Fix Data Imbalances", |
|
"sec_num": null |
|
}, |
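
{

"text": "A compact sketch of MLSMOTE-style oversampling on dense feature vectors; the neighbor count and majority-vote labeling rule follow Charte et al. (2015), while the variable names and featurization are assumptions for illustration.\n\nimport numpy as np\nfrom sklearn.neighbors import NearestNeighbors\n\ndef mlsmote(X, Y, label_idx, n_new, k=5, seed=0):\n    '''Synthesize n_new samples for a minority team-label.\n    X: (n, d) feature matrix; Y: (n, L) binary multi-label matrix.'''\n    rng = np.random.default_rng(seed)\n    minority = np.where(Y[:, label_idx] == 1)[0]\n    nn = NearestNeighbors(n_neighbors=min(k + 1, len(minority))).fit(X[minority])\n    X_new, Y_new = [], []\n    for _ in range(n_new):\n        i = rng.choice(minority)\n        _, idx = nn.kneighbors(X[i][None, :])\n        j = minority[rng.choice(idx[0][1:])]  # random neighbor, skipping self\n        gap = rng.random()\n        X_new.append(X[i] + gap * (X[j] - X[i]))  # interpolate features\n        votes = Y[minority[idx[0]]].sum(axis=0)  # neighborhood label counts\n        Y_new.append((votes > len(idx[0]) / 2).astype(int))  # majority vote\n    return np.vstack(X_new), np.vstack(Y_new)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Fix Data Imbalances",

"sec_num": null

},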
|
{ |
|
"text": "The multi-team-labels defect classification task in this research can be summarized with S as the tuple set. d i and t i represents the i th defect denoted as D and its corresponding team-labels denoted as T. N, n and m are the total number of defects, the length of the i th defect text and the number of teams-labels of the i th document, respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "S = {(d i , t i )} N i=1 , D = {d i |d i = {d 1 , d 2 , , d n }}, T = {t i |t i = {t 1 , t 2 , , t m }}", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Our framework, DEFTri aims at assigning teamlabels to its corresponding defects based on the conditional probability P(t i |d i ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "...For a store only query like XXX i am seeing available for scheduled pickup as the stack title on FE when i don't have a slot booked.This stack title should just reflect the XXX query like ios and web..Incorrect XXX mapping (number mapped to XXX.. ...I cant add XXX to my cart from order details from my previous canceled order. There is no actionable CTA.There is an add to cart CTA for the XXX. See attached video. Using ios XXX... ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Defect Text Corpus (Anonymized Excerpts)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For our fine-tuning, we use BERT pre-trained transformer embedding from Hugging Face's Transformers library (Wolf et al., 2020) .BERT base uncased embeddings are case insensitive and are pre-trained on the English language self-supervised using two objectives -masked language modeling (MLM) and Next Sentence Prediction (NSP). These embeddings were introduced in the original BERT (Devlin et al., 2018) paper and serve as baseline embeddings for our models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 108, |
|
"end": 127, |
|
"text": "(Wolf et al., 2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 382, |
|
"end": 403, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-Trained Model", |
|
"sec_num": "4.1" |
|
}, |
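
{

"text": "A minimal sketch of loading this baseline through Hugging Face Transformers; the sample sentence and max_length are illustrative.\n\nfrom transformers import AutoModel, AutoTokenizer\n\ntokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')\nbert = AutoModel.from_pretrained('bert-base-uncased')\n\nenc = tokenizer('Final cost by weight not showing on search tiles',\n                truncation=True, padding='max_length', max_length=128,\n                return_tensors='pt')\nout = bert(**enc)\ncls_state = out.last_hidden_state[:, 0]  # hidden state at [CLS], fed to the classifier",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Pre-Trained Model",

"sec_num": null

},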
|
{ |
|
"text": "For our DEFTri framework, we propose 2 novel implementations to derive superior contextual representations from product defect text, that help in improved multi-label defect classification task. We denote the defect corpus(title and description) tokens as D i and their corresponding token embeddings as E Di , where K is the total number of words in the input defect and D K represents the last token. Similarly, let L j be the team label text of the j th team of the overall 15 teams, corresponding to the defect corpus. Finally, we derive the positional embedding using BERT and apply classification layer with activation to the last layer of the hidden state at the [CLS] token.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We utilize the sentence pair configuration of BERT for text input. We concatenate the team labels text as Sentence A and concatenate the Defect title and description text as Sentence B, both separated by a [SEP] token. Refer Figure 2 ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Label Fused Model with [SEP]", |
|
"sec_num": "4.2.1" |
|
}, |
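
{

"text": "A sketch of both label-fusion input formats, assuming hypothetical team-label names; in the sentence-pair case the tokenizer builds [CLS] Sentence A [SEP] Sentence B [SEP] automatically.\n\nfrom transformers import AutoTokenizer\n\ntokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')\nlabels_text = 'search checkout payments'  # Sentence A: concatenated team labels\ndefect_text = 'Final cost by weight not showing on search tiles'  # Sentence B: title + description\n\n# Section 4.2.1: label fusion with [SEP] (sentence-pair encoding)\nenc_pair = tokenizer(labels_text, defect_text, truncation=True,\n                     padding='max_length', max_length=256, return_tensors='pt')\n\n# Section 4.2.2: label fusion without [SEP] (labels prepended as one sentence)\nenc_single = tokenizer(labels_text + ' ' + defect_text, truncation=True,\n                       padding='max_length', max_length=256, return_tensors='pt')",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Label Fused Model with [SEP]",

"sec_num": null

},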
|
{ |
|
"text": "We experimented with two different dense layers for the classification head -Linear and BiLSTM. Refer Table 5 Classification ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 102, |
|
"end": 109, |
|
"text": "Table 5", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Classification Head", |
|
"sec_num": "4.3" |
|
}, |
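
{

"text": "A minimal sketch of the two head variants on top of BERT; the LSTM hidden size, dropout rate, and pooling choices are assumptions rather than the paper's exact configuration.\n\nimport torch.nn as nn\nfrom transformers import AutoModel\n\nclass DefectClassifier(nn.Module):\n    def __init__(self, num_labels=15, head='bilstm'):\n        super().__init__()\n        self.bert = AutoModel.from_pretrained('bert-base-uncased')\n        h = self.bert.config.hidden_size  # 768 for bert-base\n        self.head_type = head\n        if head == 'linear':\n            self.head = nn.Linear(h, num_labels)\n        else:  # BiLSTM over the token sequence, then a linear layer\n            self.lstm = nn.LSTM(h, 256, batch_first=True, bidirectional=True)\n            self.head = nn.Linear(2 * 256, num_labels)\n        self.drop = nn.Dropout(0.1)\n\n    def forward(self, input_ids, attention_mask, token_type_ids=None):\n        out = self.bert(input_ids=input_ids, attention_mask=attention_mask,\n                        token_type_ids=token_type_ids)\n        if self.head_type == 'linear':\n            x = out.last_hidden_state[:, 0]  # hidden state at [CLS]\n        else:\n            seq, _ = self.lstm(out.last_hidden_state)\n            x = seq[:, 0]  # BiLSTM output at the [CLS] position\n        return self.head(self.drop(x))  # raw logits, one per team-label",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Classification Head",

"sec_num": null

},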
|
{ |
|
"text": "For model training we use PyTorch implementation of BCEWithLogitsLoss as our loss function and AdamOptimizer as our optimizer. BCEWithLog-itsLoss combines a Sigmoid layer and the Binary Cross Entropy Loss in one single class. In case of multi-label classification the loss can be described as,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Loss Function and Optimizer", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "l t (x, y) = L t = {l 1,t , ..., l N,t } T , l n,t = \u2212w n,t [p t y n,t \u2022 log\u03c3(x n,t ) + (1 + y n,t ) \u2022 log\u03c3(x n,t )]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Loss Function and Optimizer", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "where t=15 and represents the number of teamlabels , n is number of sample in the batch and p t is the weight of the positive answer for team-label t.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Loss Function and Optimizer", |
|
"sec_num": "4.4" |
|
}, |
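
{

"text": "A sketch of one training step under this setup; the learning rate and the uniform pos_weight (the $p_t$ above) are placeholder assumptions, and DefectClassifier refers to the head sketch in Section 4.3.\n\nimport torch\n\nmodel = DefectClassifier(num_labels=15, head='bilstm')\npos_weight = torch.ones(15)  # p_t: per-team-label positive-class weights\ncriterion = torch.nn.BCEWithLogitsLoss(pos_weight=pos_weight)\noptimizer = torch.optim.Adam(model.parameters(), lr=2e-5)\n\ndef train_step(batch):\n    optimizer.zero_grad()\n    logits = model(batch['input_ids'], batch['attention_mask'])\n    # batch['labels']: (batch_size, 15) multi-hot tensor of team-labels\n    loss = criterion(logits, batch['labels'].float())\n    loss.backward()\n    optimizer.step()\n    return loss.item()",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Loss Function and Optimizer",

"sec_num": null

},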
|
{ |
|
"text": "We use a set of hyper-parameters for our experiments. We used manual search for hyper-parameter search and the best model was chosen based on the best top-1 accuracy yielded in the validation data. Refer Table 6 HParams ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 204, |
|
"end": 211, |
|
"text": "Table 6", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Hyper-Parameters", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "As baseline and our proposed architecture, we use the pre-trained bert-base-uncased model (Wolf et al., 2020; Vaswani et al., 2017) . We perform a total of 6 experiments for our models under 3 different settings (1) baseline fine-tuned BERT model with no fused labels (2) fine-tuned BERT with fused labels without [SEP] token and (3) fine-tuned BERT with fused labels with [SEP] token, using 2 classification heads combinations e.g Linear and BiLSTM. Refer Table 4 and Appendix A.1 For data preprocessing step, the corpus is converted to lowercase and tokenzied with one-hotencoded labels.Our deep learning model is then trained to predict multiple team-labels for each test sample. At inference time, the model takes in an input of text corpus of defect and predicts a vector of probabilities for each of the 15 teamlabels. We used a confidence threshold of 0.55 for our probability vector to obtain a binary vector for comparison with ground-truth.", |
|
"cite_spans": [ |
|
{ |
|
"start": 90, |
|
"end": 109, |
|
"text": "(Wolf et al., 2020;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 110, |
|
"end": 131, |
|
"text": "Vaswani et al., 2017)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 457, |
|
"end": 464, |
|
"text": "Table 4", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
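
{

"text": "Continuing the sketches above (model from Section 4.3, enc_pair from Section 4.2.1), inference-time thresholding at 0.55 looks like the following.\n\nimport torch\n\nmodel.eval()\nwith torch.no_grad():\n    logits = model(enc_pair['input_ids'], enc_pair['attention_mask'])\nprobs = torch.sigmoid(logits)  # (batch_size, 15) per-team-label probabilities\npreds = (probs >= 0.55).int()  # 0.55 confidence threshold -> binary vector",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiments",

"sec_num": null

},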
|
{ |
|
"text": "Measuring accuracy on exact binary vector matching for multi-label classification is too penalizing because of the low tolerance for partial errors. Therefore, we divide our predictions by classes. For each of the team-labels in our dataset, we calculate the number of false positives (FP), false negatives (FN), true positives (TP), true negatives (TN). Finally, to obtain our Accuracy, we sum up the values across each team-labels as below,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Accuracy = T Pt+ T Nt F Pt+ F Nt+ T Pt+ T Nt", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "where T=15 and represents the number of team-labels in our dataset and T P t , T N t , F P t , F N t represents values of TP, TN, FP, FN for t th team-label. Similarly, we used macro-F1 (F1) scores based on averaged value of precision and recall calculated over all team-labels as below,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "P recision t = T Pt F Pt+T Pt Recall t = T Pt F Nt+T Pt F 1 = 2 \u00d7 1 T P recisiont\u00d7 1 T Recallt 1 T P recisiont+ 1 T Recallt 6 Analysis", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
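
{

"text": "A sketch of the per-team-label accuracy and macro-F1 computation defined above, on binary ground-truth and prediction matrices.\n\nimport numpy as np\n\ndef triage_metrics(y_true, y_pred, eps=1e-9):\n    '''y_true, y_pred: (n_samples, T) binary matrices, T = 15 team-labels.'''\n    tp = ((y_pred == 1) & (y_true == 1)).sum(axis=0)\n    tn = ((y_pred == 0) & (y_true == 0)).sum(axis=0)\n    fp = ((y_pred == 1) & (y_true == 0)).sum(axis=0)\n    fn = ((y_pred == 0) & (y_true == 1)).sum(axis=0)\n    accuracy = (tp.sum() + tn.sum()) / (fp.sum() + fn.sum() + tp.sum() + tn.sum())\n    precision = (tp / (fp + tp + eps)).mean()  # averaged over team-labels\n    recall = (tp / (fn + tp + eps)).mean()\n    f1 = 2 * precision * recall / (precision + recall + eps)\n    return accuracy, f1",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiments",

"sec_num": null

},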
|
{ |
|
"text": "Based on our experiments, we observed that labelfused contextual learning-based fine-tuned BERT models significantly outperformed the base model using only the context of the defect text. The performance boost over the base BERT pre-trained fine-tuned model is because of the context in the label embeddings used in addition to the defect text in the label-fused models, which optimizes on the alignment of features, which makes it possible to classify better. Our team labels were short meaningful English words vs abbreviations which made fused embeddings better for classification when paired as a sentence with the defect texts as inputs. We observed that label-fused model without [SEP] token performed better that with [SEP] token which could have been because of the unnatural formation of Sentence A, where a bunch of team labels are concatenated together.", |
|
"cite_spans": [ |
|
{ |
|
"start": 686, |
|
"end": 691, |
|
"text": "[SEP]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Also, with the addition of synthetically generated data using adversarial examples for model training, we achieved an average accuracy improvement of 2.69% across our models vs. using the original data only. However, during our experiments we observed that the performance was sensitive towards the choice of text corpus sequence length and perturbation percentage for data augmentation made, during model training. A higher percentage of perturbations combined with a lower sequence length of text corpus negatively impacted performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Fine-tuning language models with weak supervision definitely solves the challenge of low labeled data availability. However, the models performance definitely suffers from error-propagation of pseudo-labels generated during the process. Recent research in contrastive self-regularized selftraining approach (Yu et al., 2020) and GAN-BERT in adversarial setting (Croce et al., 2020) have shown promising results for fine-tuning BERTbased language models with weak supervision. Also, Contrastive learning and Adversarial Learning approaches applied to various NLP tasks have demonstrated improvement over fine-tuning on BERT-based models (Mohanty et al., 2021; Pan et al., 2021) . To further our research, we would improve upon these approaches.", |
|
"cite_spans": [ |
|
{ |
|
"start": 307, |
|
"end": 324, |
|
"text": "(Yu et al., 2020)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 361, |
|
"end": 381, |
|
"text": "(Croce et al., 2020)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 636, |
|
"end": 658, |
|
"text": "(Mohanty et al., 2021;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 659, |
|
"end": 676, |
|
"text": "Pan et al., 2021)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "In this work, we proposed a novel framework, DEFTri for automated defect triage using contextual representations of human-generated defect reviews at Walmart. We discussed our methodology of generating a new proprietary labeled dataset by using weak supervision and adversarial learning, in a few shot setting. We presented two label-fused model approaches for fine-tuning pre-trained BERT. As hypothesized, the experimental results show that our approach improves the multi-label text classification task for defect triage. We also proposed our future work of implementing contrastive learning for fine-tuning using weak supervision.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The author would like to thank colleagues in the Omni Customer Experience org. at Walmart Global", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We ran all our experiments on a Google Cloud Platform using a n1-standard-16 machine with NVIDIA Tesla V100 GPUs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.1 Experiment Setting", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Mlsmote: Approaching imbalanced multilabel learning through synthetic instance generation. Knowledge-Based Systems", |
|
"authors": [ |
|
{ |
|
"first": "Francisco", |
|
"middle": [], |
|
"last": "Charte", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antonio", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Rivera", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Mar\u00eda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jesus", |
|
"middle": [], |
|
"last": "Del", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francisco", |
|
"middle": [], |
|
"last": "Herrera", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "89", |
|
"issue": "", |
|
"pages": "385--397", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1016/j.knosys.2015.07.019" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Francisco Charte, Antonio J. Rivera, Mar\u00eda J. del Je- sus, and Francisco Herrera. 2015. Mlsmote: Ap- proaching imbalanced multilabel learning through synthetic instance generation. Knowledge-Based Sys- tems, 89:385-397.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "A multi-label, dual-output deep neural network for automated bug triaging", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Christopher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Choquette-Choo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonny", |
|
"middle": [], |
|
"last": "Sheldon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Proppe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Harsha", |
|
"middle": [], |
|
"last": "Alphonso-Gibbs", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher A. Choquette-Choo, David Sheldon, Jonny Proppe, John Alphonso-Gibbs, and Harsha Gupta. 2019. A multi-label, dual-output deep neural network for automated bug triaging. CoRR, abs/1910.05835.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "GAN-BERT: Generative adversarial learning for robust text classification with a bunch of labeled examples", |
|
"authors": [ |
|
{ |
|
"first": "Danilo", |
|
"middle": [], |
|
"last": "Croce", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Giuseppe", |
|
"middle": [], |
|
"last": "Castellucci", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Basili", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2114--2119", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.191" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Danilo Croce, Giuseppe Castellucci, and Roberto Basili. 2020. GAN-BERT: Generative adversarial learning for robust text classification with a bunch of labeled examples. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 2114-2119, Online. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "BERT: pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language under- standing. CoRR, abs/1810.04805.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "How should pretrained language models be fine-tuned towards adversarial robustness? CoRR", |
|
"authors": [ |
|
{ |
|
"first": "Xinshuai", |
|
"middle": [], |
|
"last": "Dong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anh", |
|
"middle": [], |
|
"last": "Luu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Min", |
|
"middle": [], |
|
"last": "Tuan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shuicheng", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hanwang", |
|
"middle": [], |
|
"last": "Yan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xinshuai Dong, Luu Anh Tuan, Min Lin, Shuicheng Yan, and Hanwang Zhang. 2021. How should pre- trained language models be fine-tuned towards adver- sarial robustness? CoRR, abs/2112.11668.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Improving robustness using generated data", |
|
"authors": [ |
|
{ |
|
"first": "Sven", |
|
"middle": [], |
|
"last": "Gowal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Olivia", |
|
"middle": [], |
|
"last": "Sylvestre-Alvise Rebuffi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Florian", |
|
"middle": [], |
|
"last": "Wiles", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [ |
|
"Andrei" |
|
], |
|
"last": "Stimberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Calian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sven Gowal, Sylvestre-Alvise Rebuffi, Olivia Wiles, Florian Stimberg, Dan Andrei Calian, and Timothy A. Mann. 2021. Improving robustness using generated data. CoRR, abs/2110.09468.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Is BERT really robust? natural language attack on text classification and entailment", |
|
"authors": [ |
|
{ |
|
"first": "Di", |
|
"middle": [], |
|
"last": "Jin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhijing", |
|
"middle": [], |
|
"last": "Jin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joey", |
|
"middle": [ |
|
"Tianyi" |
|
], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Szolovits", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2019. Is BERT really robust? natural language attack on text classification and entailment. CoRR, abs/1907.11932.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "The eclipse and mozilla defect tracking dataset: A genuine dataset for mining bug information", |
|
"authors": [ |
|
{ |
|
"first": "Ahmed", |
|
"middle": [], |
|
"last": "Lamkanfi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Javier", |
|
"middle": [], |
|
"last": "P\u00e9rez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Serge", |
|
"middle": [], |
|
"last": "Demeyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "2013 10th Working Conference on Mining Software Repositories (MSR)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "203--206", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/MSR.2013.6624028" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ahmed Lamkanfi, Javier P\u00e9rez, and Serge Demeyer. 2013. The eclipse and mozilla defect tracking dataset: A genuine dataset for mining bug information. In 2013 10th Working Conference on Mining Software Repositories (MSR), pages 203-206.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Multi-label text classification via joint learning from label embedding and label correlation", |
|
"authors": [ |
|
{ |
|
"first": "Huiting", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peipei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peng", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xindong", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Neurocomputing", |
|
"volume": "460", |
|
"issue": "", |
|
"pages": "385--398", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1016/j.neucom.2021.07.031" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Huiting Liu, Geng Chen, Peipei Li, Peng Zhao, and Xindong Wu. 2021. Multi-label text classification via joint learning from label embedding and label correlation. Neurocomputing, 460:385-398.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Deeptriage: Exploring the effectiveness of deep learning for bug triaging", |
|
"authors": [ |
|
{ |
|
"first": "Senthil", |
|
"middle": [], |
|
"last": "Mani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anush", |
|
"middle": [], |
|
"last": "Sankaran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rahul", |
|
"middle": [], |
|
"last": "Aralikatte", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Senthil Mani, Anush Sankaran, and Rahul Ara- likatte. 2018. Deeptriage: Exploring the effec- tiveness of deep learning for bug triaging. CoRR, abs/1801.01275.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Emotions are subtle: Learning sentiment based text representations using contrastive learning", |
|
"authors": [ |
|
{ |
|
"first": "Ipsita", |
|
"middle": [], |
|
"last": "Mohanty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ankit", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Dotterweich", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ipsita Mohanty, Ankit Goyal, and Alex Dotterweich. 2021. Emotions are subtle: Learning sentiment based text representations using contrastive learning. CoRR, abs/2112.01054.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Textattack: A framework for adversarial attacks in natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"X" |
|
], |
|
"last": "Morris", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eli", |
|
"middle": [], |
|
"last": "Lifland", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jin", |
|
"middle": [ |
|
"Yong" |
|
], |
|
"last": "Yoo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yanjun", |
|
"middle": [], |
|
"last": "Qi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John X. Morris, Eli Lifland, Jin Yong Yoo, and Yanjun Qi. 2020. Textattack: A framework for adversar- ial attacks in natural language processing. CoRR, abs/2005.05909.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Improved text classification via contrastive adversarial training", |
|
"authors": [ |
|
{ |
|
"first": "Lin", |
|
"middle": [], |
|
"last": "Pan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chung-Wei", |
|
"middle": [], |
|
"last": "Hang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Avirup", |
|
"middle": [], |
|
"last": "Sil", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Saloni", |
|
"middle": [], |
|
"last": "Potdar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mo", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lin Pan, Chung-Wei Hang, Avirup Sil, Saloni Pot- dar, and Mo Yu. 2021. Improved text classifica- tion via contrastive adversarial training. CoRR, abs/2107.10137.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Snorkel: Rapid training data creation with weak supervision", |
|
"authors": [ |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Ratner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Stephen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Henry", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Bach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [ |
|
"Alan" |
|
], |
|
"last": "Ehrenberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sen", |
|
"middle": [], |
|
"last": "Fries", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "R\u00e9", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexander Ratner, Stephen H. Bach, Henry R. Ehren- berg, Jason Alan Fries, Sen Wu, and Christopher R\u00e9. 2017. Snorkel: Rapid training data creation with weak supervision. CoRR, abs/1711.10160.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Students need more attention: Bertbased attentionmodel for small data with application to automaticpatient message triage. CoRR, abs", |
|
"authors": [ |
|
{ |
|
"first": "Shijing", |
|
"middle": [], |
|
"last": "Si", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rui", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jedrek", |
|
"middle": [], |
|
"last": "Wosik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Dov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guoyin", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ricardo", |
|
"middle": [], |
|
"last": "Henao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lawrence", |
|
"middle": [], |
|
"last": "Carin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shijing Si, Rui Wang, Jedrek Wosik, Hao Zhang, David Dov, Guoyin Wang, Ricardo Henao, and Lawrence Carin. 2020. Students need more attention: Bert- based attentionmodel for small data with applica- tion to automaticpatient message triage. CoRR, abs/2006.11991.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Efficient feature extraction model for validation performance improvement of duplicate bug report detection in software bug triage systems", |
|
"authors": [], |
|
"year": 2020, |
|
"venue": "Information and Software Technology", |
|
"volume": "126", |
|
"issue": "", |
|
"pages": "106344--106363", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1016/j.infsof.2020.106344" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Behzad Soleimani Neysiani, Seyed Morteza Babamir, and Masayoshi Aritsugi. 2020. Efficient feature ex- traction model for validation performance improve- ment of duplicate bug report detection in software bug triage systems. Information and Software Tech- nology, 126:106344-106363.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Huggingface's transformers: State-of-the-art natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lysandre", |
|
"middle": [], |
|
"last": "Debut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Sanh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Chaumond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clement", |
|
"middle": [], |
|
"last": "Delangue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Moi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierric", |
|
"middle": [], |
|
"last": "Cistac", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Rault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R\u00e9mi", |
|
"middle": [], |
|
"last": "Louf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Morgan", |
|
"middle": [], |
|
"last": "Funtowicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joe", |
|
"middle": [], |
|
"last": "Davison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Shleifer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clara", |
|
"middle": [], |
|
"last": "Patrick Von Platen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yacine", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Jernite", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Canwen", |
|
"middle": [], |
|
"last": "Plu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Teven", |
|
"middle": [ |
|
"Le" |
|
], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sylvain", |
|
"middle": [], |
|
"last": "Scao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Gugger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Hug- gingface's transformers: State-of-the-art natural lan- guage processing.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Zhenglong Xiang, Chen Yang, and Keqing He. 2021. A spatial-temporal graph neural network framework for automated software bug triaging", |
|
"authors": [ |
|
{ |
|
"first": "Hongrun", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yutao", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hongrun Wu, Yutao Ma, Zhenglong Xiang, Chen Yang, and Keqing He. 2021. A spatial-temporal graph neu- ral network framework for automated software bug triaging. CoRR, abs/2101.11846.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Fusing label embedding into bert: An efficient improvement for text classification", |
|
"authors": [ |
|
{ |
|
"first": "Yijin", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yukun", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hidetaka", |
|
"middle": [], |
|
"last": "Kamigaito", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manabu", |
|
"middle": [], |
|
"last": "Okumura", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1743--1750", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yijin Xiong, Yukun Feng, Hao Wu, Hidetaka Kami- gaito, and Manabu Okumura. 2021. Fusing label embedding into bert: An efficient improvement for text classification. In Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, Online Event, August 1-6, 2021, pages 1743-1750. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Fine-tuning pre-trained language model with weak supervision: A contrastive-regularized self-training approach", |
|
"authors": [ |
|
{ |
|
"first": "Yue", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simiao", |
|
"middle": [], |
|
"last": "Zuo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haoming", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wendi", |
|
"middle": [], |
|
"last": "Ren", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tuo", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chao", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yue Yu, Simiao Zuo, Haoming Jiang, Wendi Ren, Tuo Zhao, and Chao Zhang. 2020. Fine-tuning pre-trained language model with weak supervi- sion: A contrastive-regularized self-training ap- proach. CoRR, abs/2010.07835.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "DEFTri Data Generation Methodology Rule corpus->labels (Anonymized) Keyword 'android' or 'ios' -> [Team-LabelA] Pattern '*search*' -> [Team-LabelB, Team-LabelC]", |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"text": "DEFTri LabelFuse Model with [SEP]4.2.2 Label Fused Model without [SEP]For our second implementation, we concatenate the team labels text along with Defect title and description text as a single Sentence A, without any [SEP] token as input. ReferFigure 3", |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"text": "Final cost by weight not showing on search tiles Final prices by weight not showing on search tiles Spacing on Nutrition Label is too large Spacing on Nourishment Label is too large", |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"TABREF0": { |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"text": "Samples of Defect Text Corpus." |
|
}, |
|
"TABREF1": { |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"text": "" |
|
}, |
|
"TABREF2": { |
|
"content": "<table><tr><td>Model</td><td colspan=\"2\">Macro-F1 Accuracy</td></tr><tr><td>BERT+Linear</td><td>0.8123</td><td>0.8134</td></tr><tr><td>BERT+BiLSTM</td><td>0.8206</td><td>0.8216</td></tr><tr><td>BERT+LabelFuse w/o [SEP]+Linear</td><td>0.8144</td><td>0.8153</td></tr><tr><td colspan=\"2\">BERT+LabelFuse w/o [SEP]+BiLSTM 0.8236</td><td>0.8245</td></tr><tr><td>BERT+LabelFuse w [SEP]+Linear</td><td>0.8137</td><td>0.8150</td></tr><tr><td>BERT+LabelFuse w [SEP]+BiLSTM</td><td>0.8229</td><td>0.8241</td></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"text": "Sample cases of Defect text vs Adversarial Defect Text." |
|
}, |
|
"TABREF3": { |
|
"content": "<table><tr><td>Figure 3: DEFTri LabelFuse Model w/o [SEP]</td></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"text": "DEFTri Experiments Results For Contextual Multi-TeamLabel Classification on Real Product Defects" |
|
}, |
|
"TABREF5": { |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"text": "DEFTri Classification Head Configurations" |
|
}, |
|
"TABREF7": { |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"text": "" |
|
} |
|
} |
|
} |
|
} |