|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:59:24.094144Z" |
|
}, |
|
"title": "Adversarial Training for Commonsense Inference", |
|
"authors": [ |
|
{ |
|
"first": "Lis", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Ochanomizu University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Fei", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Kyoto University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Masayuki", |
|
"middle": [], |
|
"last": "Asahara", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Ichiro", |
|
"middle": [], |
|
"last": "Kobayashi", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Ochanomizu University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We propose an AdversariaL training algorithm for commonsense InferenCE (ALICE). We apply small perturbations to word embeddings and minimize the resultant adversarial risk to regularize the model. We exploit a novel combination of two different approaches to estimate these perturbations: 1) using the true label and 2) using the model prediction. Without relying on any human-crafted features, knowledge bases or additional datasets other than the target datasets, our model boosts the finetuning performance of RoBERTa, achieving competitive results on multiple reading comprehension datasets that require commonsense inference.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We propose an AdversariaL training algorithm for commonsense InferenCE (ALICE). We apply small perturbations to word embeddings and minimize the resultant adversarial risk to regularize the model. We exploit a novel combination of two different approaches to estimate these perturbations: 1) using the true label and 2) using the model prediction. Without relying on any human-crafted features, knowledge bases or additional datasets other than the target datasets, our model boosts the finetuning performance of RoBERTa, achieving competitive results on multiple reading comprehension datasets that require commonsense inference.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Commonsense knowledge is often necessary for natural language understanding. As shown in Table 1 , we can understand that the writer needs help to get dressed and seems upset with this situation, indicating that he or she is probably not a child. Thus, we can infer that a possible reason that the writer needs to be dressed by other people is that he or she may have a physical disability (Huang et al., 2019) . Although a simple task for humans, it is still challenging for computers to understand and reason about commonsense.", |
|
"cite_spans": [ |
|
{ |
|
"start": 391, |
|
"end": 411, |
|
"text": "(Huang et al., 2019)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 89, |
|
"end": 97, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Commonsense inference in natural language processing (NLP) is generally evaluated via machine reading comprehension task, in the format of selecting plausible responses with respect to natural language queries. Recent approaches are based on the use of pre-trained Transformer-based language models such as BERT (Devlin et al., 2019) . Some approaches rely solely on these models by adopting either a single or multi-stage fine-tuning approach (by fine-tuning using additional datasets in a stepwise manner) (Li and Xie, 2019; Sharma and Roychowdhury, 2019; Liu and Yu, 2019; Paragraph: It's a very humbling experience when you need someone to dress you every morning, tie your shoes, and put your hair up. Every menial task takes an unprecedented amount of effort. It made me appreciate Dan even more. But anyway I shan't dwell on this (I'm not dying after all) and not let it detract from my lovely 5 days with my friends visiting from Jersey.", |
|
"cite_spans": [ |
|
{ |
|
"start": 312, |
|
"end": 333, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 558, |
|
"end": 575, |
|
"text": "Liu and Yu, 2019;", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Question: What's a possible reason the writer needed someone to dress him every morning?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Option1: The writer doesn't like putting effort into these tasks. Option2: The writer has a physical disability. Option3: The writer is bad at doing his own hair. Option4: None of the above choices. (Huang et al., 2019) . The task is to identify the correct answer option. The correct answer is in bold.", |
|
"cite_spans": [ |
|
{ |
|
"start": 199, |
|
"end": 219, |
|
"text": "(Huang et al., 2019)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "2019; Zhou et al., 2019) , while others further enhance their word representations with knowledge bases such as ConceptNet (Jain and Singh, 2019; Da, 2019; Wang et al., 2020) . However, due to the often limited data from the downstream tasks and the extremely high complexity of the pre-trained model, aggressive fine-tuning can easily make the adapted model overfit the data of the target task, making it unable to generalize well on unseen data (Jiang et al., 2019) . Moreover, some researchers have shown that such pre-trained models are vulnerable to adversarial attacks (Jin et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 6, |
|
"end": 24, |
|
"text": "Zhou et al., 2019)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 123, |
|
"end": 145, |
|
"text": "(Jain and Singh, 2019;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 146, |
|
"end": 155, |
|
"text": "Da, 2019;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 156, |
|
"end": 174, |
|
"text": "Wang et al., 2020)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 447, |
|
"end": 467, |
|
"text": "(Jiang et al., 2019)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 575, |
|
"end": 593, |
|
"text": "(Jin et al., 2020)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Inspired by the recent success of adversarial training in NLP (Jiang et al., 2019) , our AdversariaL training algorithm for commonsense InferenCE (ALICE) focuses on improving the generalization of pre-trained language models on downstream tasks by enhancing their robustness in the embedding space. More specifically, during the fine-tuning stage of Transformer-based models, e.g. RoBERTa (Liu et al., 2019b) , random perturbations are added to the embedding layer to regularize the model by updating the parameters on these adversarial embeddings. ALICE exploits a novel way of combining two different approaches to estimate these perturbations: 1) using the true label and 2) using the model prediction. Experiments show that we were able to boost the performance of RoBERTa on multiple reading comprehension datasets that require commonsense inference, achieving competitive results with state-of-the-art approaches.",
|
"cite_spans": [ |
|
{ |
|
"start": 62, |
|
"end": 81, |
|
"text": "Jiang et al., 2019)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 389, |
|
"end": 408, |
|
"text": "(Liu et al., 2019b)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Given a dataset D of N training examples, D = {(x 1 , y 1 ), (x 2 , y 2 ), ..., (x N , y N )}, the objective of supervised learning is to learn a function f (x; \u03b8) that minimizes the empirical risk, which is defined by min \u03b8 E (x,y)\u223cD [l(f (x; \u03b8), y)].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ALICE", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Here, the function f (x; \u03b8) maps input sentences x to an output space y, and \u03b8 are learnable parameters. While this objective is effective for training a neural network, it usually suffers from overfitting and poor generalization to unseen cases (Goodfellow et al., 2015; Madry et al., 2018) . To alleviate these issues, one can use adversarial training, which has been primarily explored in computer vision (Goodfellow et al., 2015; Madry et al., 2018) . The idea is to perturb the data distribution in the embedding space by performing adversarial attacks. Specifically, its objective is defined by:",
|
"cite_spans": [ |
|
{ |
|
"start": 242, |
|
"end": 267, |
|
"text": "(Goodfellow et al., 2015;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 268, |
|
"end": 287, |
|
"text": "Madry et al., 2018)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 404, |
|
"end": 429, |
|
"text": "(Goodfellow et al., 2015;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 430, |
|
"end": 449, |
|
"text": "Madry et al., 2018)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ALICE", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "min \u03b8 E (x,y)\u223cD [max \u03b4 l(f (x + \u03b4; \u03b8), y)], (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ALICE", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "where \u03b4 is the perturbation added to the embeddings. One challenge of adversarial training is how to estimate this perturbation \u03b4, which requires solving the inner maximization, max \u03b4 l(f (x + \u03b4; \u03b8), y). A feasible solution is to approximate it by a fixed number of steps of a gradient-based optimization approach (Madry et al., 2018) .",
|
"cite_spans": [ |
|
{ |
|
"start": 309, |
|
"end": 329, |
|
"text": "(Madry et al., 2018)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ALICE", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Based on recent successful cases that applied adversarial training to NLP (Jiang et al., 2019; Miyato et al., 2018) , the approaches to estimate \u03b4 can be divided into two categories: adversarial training that uses the label y and adversarial training that uses the model prediction f (x; \u03b8), i.e. a \"virtual\" label (Miyato et al., 2018; Jiang et al., 2019) . We hypothesize that these two categories complement each other: the first term improves robustness with respect to the target label, avoiding an increase in the error on unperturbed inputs, while the second term enforces the smoothness of the model, encouraging its output not to change much when a small perturbation is injected into the input. Thus, ALICE proposes a novel algorithm combining these two approaches, which is defined by:",
|
"cite_spans": [ |
|
{ |
|
"start": 74, |
|
"end": 94, |
|
"text": "(Jiang et al., 2019;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 95, |
|
"end": 115, |
|
"text": "Miyato et al., 2018)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 315, |
|
"end": 336, |
|
"text": "(Miyato et al., 2018;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 337, |
|
"end": 356, |
|
"text": "Jiang et al., 2019)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ALICE", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "min \u03b8 E (x,y)\u223cD [max \u03b4 1 l(f (x + \u03b4 1 ; \u03b8), y)+ \u03b1 max \u03b4 2 l(f (x + \u03b4 2 ; \u03b8), f (x; \u03b8))],", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "ALICE", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "where \u03b4 1 and \u03b4 2 are two perturbations, bounded by a general l p norm ball, estimated by a fixed number K of steps of a gradient-based optimization approach. In our experiments, we set p = \u221e. It has been shown that a larger K can lead to a better estimation of \u03b4 (Qin et al., 2019; Madry et al., 2018) . However, this can be expensive, especially in large models, e.g. BERT and RoBERTa. Thus, K is set to 1 for a better trade-off between speed and performance. Note that \u03b1 is a hyperparameter balancing these two loss terms. In our experiments, we set \u03b1 to 1.",
|
"cite_spans": [ |
|
{ |
|
"start": 256, |
|
"end": 274, |
|
"text": "(Qin et al., 2019;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 275, |
|
"end": 294, |
|
"text": "Madry et al., 2018)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ALICE", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "3 Experiments", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ALICE", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We evaluate ALICE on three reading comprehension benchmarks that require commonsense inference:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets and Evaluation Metrics", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "CosmosQA, MCScript2.0, and MCTACO. MCTACO covers five types of temporal commonsense: (1) event duration (how long an event takes), (2) event ordering (the typical order of events), (3) typical time (when an event occurs), (4) frequency (how often an event occurs), and (5) stationarity (whether a state is maintained for a very long time or indefinitely). It contains 13k tuples, each consisting of a sentence, a question, and a candidate answer that should be judged as plausible or not. The sentences are taken from different sources such as news, Wikipedia and textbooks.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets and Evaluation Metrics", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The summary of the datasets is in Table 2 . For the MCTACO dataset, no training set is available. Following (Zhou et al., 2019) , we use the dev set for fine-tuning the model. We perform 5-fold cross-validation for tuning the parameters.",
|
"cite_spans": [ |
|
{ |
|
"start": 108, |
|
"end": 127, |
|
"text": "(Zhou et al., 2019)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 34, |
|
"end": 41, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Datasets and Evaluation Metrics", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We evaluate CosmosQA and MCScript2.0 in terms of accuracy. Following (Ostermann et al., 2019a) , for MCScript2.0 we also report the accuracy on commonsense-based questions and on questions that are not commonsense based. For MCTACO, we report the exact match (EM) and F1 scores, following (Zhou et al., 2019) . EM measures for how many questions a system correctly labeled all candidate answers, while F1 measures the average overlap between the predictions and the ground truth. Our implementation for pairwise text classification and relevance ranking tasks is based on the MT-DNN framework 1 (Liu et al., 2019a) .",
|
"cite_spans": [ |
|
{ |
|
"start": 69, |
|
"end": 94, |
|
"text": "(Ostermann et al., 2019a)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 310, |
|
"end": 329, |
|
"text": "(Zhou et al., 2019)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 615, |
|
"end": 633, |
|
"text": "(Liu et al., 2019a", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets and Evaluation Metrics", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The RoBERTa LARGE model (Liu et al., 2019b) was used as the text encoder. We used ADAM (Kingma and Ba, 2015) as our optimizer with a learning rate \u2208 {1 \u00d7 10 \u22125 , 2 \u00d7 10 \u22125 , 3 \u00d7 10 \u22125 , 5 \u00d7 10 \u22125 } and a batch size \u2208 {16, 32, 64}. The maximum number of epochs was set to 10. A linear learning rate decay schedule with a warm-up ratio of 0.1 was used, unless stated otherwise. We set the dropout rate of all the task-specific layers to 0.1, except 0.3 for MCTACO. To avoid exploding gradients, we clipped the gradient norm to 1. All the texts were tokenized using wordpieces and were chopped to spans no longer than 512 tokens.",
|
"cite_spans": [ |
|
{ |
|
"start": 24, |
|
"end": 43, |
|
"text": "(Liu et al., 2019b)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation Details", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We compare ALICE to a list of state-of-the-art models, as shown in Table 3 . BERT + unit normalization (Zhou et al., 2019) is the BERT base model. The authors further add unit normalization to temporal expressions in candidate answers and fine-tune on the MCTACO dataset. RoBERTa LARGE is our re-implementation of the large RoBERTa model (Liu et al., 2019b) . PSH-SJTU (Li and Xie, 2019) is based on multi-stage fine-tuning of XLNET (Yang et al., 2019) on the RACE (Lai et al., 2017) , SWAG (Zellers et al., 2018) and MCScript2.0 datasets. K-ADAPTER (Wang et al., 2020) further enhances RoBERTa word representations with multiple knowledge sources, such as factual knowledge obtained through Wikipedia and Wikidata and linguistic knowledge obtained through dependency parsing of web texts. SMART (Jiang et al., 2019) is an adversarial training model for fine-tuning pre-trained language models through regularization. SMART uses the model prediction, f (x; \u03b8), for estimating the perturbation \u03b4. This model recently obtained state-of-the-art results on a range of NLP tasks on the GLUE benchmark (Wang et al., 2018) . We also compare ALICE with a baseline that uses only the label y for estimating the perturbation \u03b4 (called ADV hereafter) (Madry et al., 2018) .",
|
"cite_spans": [ |
|
{ |
|
"start": 103, |
|
"end": 122, |
|
"text": "(Zhou et al., 2019)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 341, |
|
"end": 360, |
|
"text": "(Liu et al., 2019b)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 434, |
|
"end": 453, |
|
"text": "(Yang et al., 2019)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 462, |
|
"end": 480, |
|
"text": "(Lai et al., 2017)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 488, |
|
"end": 510, |
|
"text": "(Zellers et al., 2018)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 548, |
|
"end": 567, |
|
"text": "(Wang et al., 2020)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 791, |
|
"end": 811, |
|
"text": "(Jiang et al., 2019)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 1090, |
|
"end": 1109, |
|
"text": "(Wang et al., 2018)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1240, |
|
"end": 1260, |
|
"text": "(Madry et al., 2018)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 67, |
|
"end": 74, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The results are summarized in Table 3 . Overall, we observed that the adversarial methods, i.e. ADV, SMART and ALICE, were able to achieve competitive results over the baselines, without using any additional knowledge source and without using any additional dataset other than the target task datasets. These results suggest that adversarial training leads to a more robust model and helps it generalize better on unseen data. ALICE consistently outperformed SMART (which overall outperformed ADV) across all three datasets on both dev and test sets, indicating that adversarial training that uses the label y and adversarial training that uses the model prediction complement each other. Note that PSH-SJTU fine-tuned XLNET and used additional datasets other than MCScript2.0, while ALICE does not use any additional dataset. On the MCTACO dataset, ALICE obtained a 56.20% EM score on the dev set, absolute gains of 2.41% and 12.8% over SMART and RoBERTa LARGE , respectively, and a 79.06% F1 score, absolute gains of 0.75% and 14.21% over SMART and RoBERTa LARGE , respectively. On the test set, ALICE outperformed SMART, obtaining absolute gains of 1.65% and 1.47% on EM and F1 scores, respectively. Compared to the T5-3B finetuned + number normalization model, which uses T5, a much larger model (with 3B parameters) than RoBERTa (300M parameters), ALICE obtained competitive results, outperforming it by 0.04 on F1 score while obtaining a 2.63% lower score on EM. Regarding training time, ALICE takes on average 4X more time to train compared to standard fine-tuning.",

"cite_spans": [],
|
"ref_spans": [ |
|
{ |
|
"start": 30, |
|
"end": 37, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "We proposed ALICE, a simple and efficient adversarial training algorithm for fine-tuning large-scale pre-trained language models. Our experiments demonstrated that it achieves competitive results on multiple machine reading comprehension datasets, without relying on any additional resource other than the target task dataset. Although in this paper we focused on the machine reading comprehension task, ALICE can be generalized to solve other downstream tasks as well, and we will explore this direction as future work.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "https://github.com/namisan/mt-dnn", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We thank the reviewers for their helpful feedback. This work has been supported by the project KAKENHI ID: 18H05521.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Jeff da at coin-shared task", |
|
"authors": [ |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Da", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "85--92", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeff Da. 2019. Jeff da at coin-shared task. In Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing, pages 85-92.",
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of NAACL-HLT 2019", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. Proceedings of NAACL-HLT 2019, pages 4171-4186.",
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Explaining and harnessing adversarial examples", |
|
"authors": [ |
|
{

"first": "Ian",

"middle": ["J"],

"last": "Goodfellow",

"suffix": ""

},

{

"first": "Jonathon",

"middle": [],

"last": "Shlens",

"suffix": ""

},

{

"first": "Christian",

"middle": [],

"last": "Szegedy",

"suffix": ""

}
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. ICLR 2015.",
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Cosmos qa: Machine reading comprehension with contextual commonsense reasoning", |
|
"authors": [ |
|
{ |
|
"first": "Lifu", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{

"first": "Ronan",

"middle": ["Le"],

"last": "Bras",

"suffix": ""

},

{

"first": "Chandra",

"middle": [],

"last": "Bhagavatula",

"suffix": ""

},

{

"first": "Yejin",

"middle": [],

"last": "Choi",

"suffix": ""

}
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2391--2401", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. Cosmos qa: Machine reading comprehension with contextual commonsense reasoning. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 2391-2401.",
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Karna at coin shared task 1: Bidirectional encoder representations from transformers with relational knowledge for machine comprehension with common sense", |
|
"authors": [ |
|
{ |
|
"first": "Yash", |
|
"middle": [], |
|
"last": "Jain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chinmay", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "75--79", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yash Jain and Chinmay Singh. 2019. Karna at coin shared task 1: Bidirectional encoder representations from transformers with relational knowledge for machine comprehension with common sense. In Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing, pages 75-79.",
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Smart: Robust and efficient fine-tuning for pretrained natural language models through principled regularized optimization", |
|
"authors": [ |
|
{ |
|
"first": "Haoming", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pengcheng", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Weizhu", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tuo", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1911.03437" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Tuo Zhao. 2019. Smart: Robust and efficient fine-tuning for pre-trained natural language models through principled regularized optimization. arXiv preprint arXiv:1911.03437.",
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Is bert really robust? natural language attack on text classification and entailment", |
|
"authors": [ |
|
{ |
|
"first": "Di", |
|
"middle": [], |
|
"last": "Jin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhijing", |
|
"middle": [], |
|
"last": "Jin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joey", |
|
"middle": [ |
|
"Tianyi" |
|
], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Szolovits", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "2020", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2020. Is bert really robust? natural language attack on text classification and entailment. AAAI 2020.",
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Adam: A method for stochastic optimization. ICLR (Poster)", |
|
"authors": [ |
|
{

"first": "Diederik",

"middle": ["P"],

"last": "Kingma",

"suffix": ""

},

{

"first": "Jimmy",

"middle": [],

"last": "Ba",

"suffix": ""

}
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. ICLR (Poster) 2015.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Race: Large-scale reading comprehension dataset from examinations", |
|
"authors": [ |
|
{ |
|
"first": "Guokun", |
|
"middle": [], |
|
"last": "Lai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qizhe", |
|
"middle": [], |
|
"last": "Xie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hanxiao", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yiming", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "785--794", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. Race: Large-scale reading comprehension dataset from examinations. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 785-794.",
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Pingan smart health and sjtu at coin-shared task: utilizing pre-trained language models and common-sense knowledge in machine reading tasks", |
|
"authors": [], |
|
"year": 2019, |
|
"venue": "Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "93--98", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiepeng Li, Zhexi Zhang, Wei Zhu, Zheng Li, Yuan Ni, Peng Gao, Junchi Yan, and Guotong Xie. 2019. Pingan smart health and sjtu at coin-shared task: utilizing pre-trained language models and common-sense knowledge in machine reading tasks. In Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing, pages 93-98, Hong Kong.",
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Blcu-nlp at coinshared task1: Stagewise fine-tuning bert for commonsense inference in everyday narrations", |
|
"authors": [ |
|
{ |
|
"first": "Chunhua", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dong", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "99--103", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chunhua Liu and Dong Yu. 2019. Blcu-nlp at coin- shared task1: Stagewise fine-tuning bert for com- monsense inference in everyday narrations. In Pro- ceedings of the First Workshop on Commonsense In- ference in Natural Language Processing, pages 99- 103.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Multi-task deep neural networks for natural language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pengcheng", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Weizhu", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4487--4496", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jian- feng Gao. 2019a. Multi-task deep neural networks for natural language understanding. Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 4487-4496.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "The microsoft toolkit of multitask deep neural networks for natural language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianshu", |
|
"middle": [], |
|
"last": "Ji", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xueyun", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emmanuel", |
|
"middle": [], |
|
"last": "Awa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pengcheng", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Weizhu", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hoifung", |
|
"middle": [], |
|
"last": "Poon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guihong", |
|
"middle": [], |
|
"last": "Cao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2002.07972" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaodong Liu, Yu Wang, Jianshu Ji, Hao Cheng, Xueyun Zhu, Emmanuel Awa, Pengcheng He, Weizhu Chen, Hoifung Poon, Guihong Cao, and Jianfeng Gao. 2020. The microsoft toolkit of multi- task deep neural networks for natural language un- derstanding. arXiv preprint arXiv:2002.07972.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Roberta: A robustly optimized bert pretraining approach", |
|
"authors": [ |
|
{ |
|
"first": "Yinhan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingfei", |
|
"middle": [], |
|
"last": "Du", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mandar", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1907.11692" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Towards deep learning models resistant to adversarial attacks", |
|
"authors": [ |
|
{ |
|
"first": "Aleksander", |
|
"middle": [], |
|
"last": "Madry", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aleksandar", |
|
"middle": [], |
|
"last": "Makelov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ludwig", |
|
"middle": [], |
|
"last": "Schmidt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dimitris", |
|
"middle": [], |
|
"last": "Tsipras", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adrian", |
|
"middle": [], |
|
"last": "Vladu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018. Towards deep learning models resistant to adversar- ial attacks. ICLR 2018.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Virtual adversarial training: a regularization method for supervised and semisupervised learning", |
|
"authors": [ |
|
{ |
|
"first": "Takeru", |
|
"middle": [], |
|
"last": "Miyato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shin-Ichi", |
|
"middle": [], |
|
"last": "Maeda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Masanori", |
|
"middle": [], |
|
"last": "Koyama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shin", |
|
"middle": [], |
|
"last": "Ishii", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "IEEE transactions on pattern analysis and machine intelligence", |
|
"volume": "41", |
|
"issue": "", |
|
"pages": "1979--1993", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. 2018. Virtual adversarial training: a regularization method for supervised and semi- supervised learning. IEEE transactions on pat- tern analysis and machine intelligence, 41(8):1979- 1993.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Mcscript2. 0: A machine comprehension corpus focused on script events and participants", |
|
"authors": [ |
|
{ |
|
"first": "Simon", |
|
"middle": [], |
|
"last": "Ostermann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manfred", |
|
"middle": [], |
|
"last": "Pinkal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "103--117", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Simon Ostermann, Michael Roth, and Manfred Pinkal. 2019a. Mcscript2. 0: A machine comprehension corpus focused on script events and participants. Proceedings of the Eighth Joint Conference on Lex- ical and Computational Semantics (*SEM 2019), pages 103-117.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Commonsense inference in natural language processing (coin)-shared task report", |
|
"authors": [ |
|
{ |
|
"first": "Simon", |
|
"middle": [], |
|
"last": "Ostermann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sheng", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "66--74", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Simon Ostermann, Sheng Zhang, Michael Roth, and Peter Clark. 2019b. Commonsense inference in nat- ural language processing (coin)-shared task report. In Proceedings of the First Workshop on Common- sense Inference in Natural Language Processing, pages 66-74.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Adversarial robustness through local linearization. 33rd Conference on Neural Information Processing Systems", |
|
"authors": [ |
|
{ |
|
"first": "Chongli", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Martens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sven", |
|
"middle": [], |
|
"last": "Gowal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dilip", |
|
"middle": [], |
|
"last": "Krishnan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alhussein", |
|
"middle": [], |
|
"last": "Fawzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Soham", |
|
"middle": [], |
|
"last": "De", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Stanforth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pushmeet", |
|
"middle": [], |
|
"last": "Kohli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chongli Qin, James Martens, Sven Gowal, Dilip Kr- ishnan, Alhussein Fawzi, Soham De, Robert Stan- forth, Pushmeet Kohli, et al. 2019. Adversarial robustness through local linearization. 33rd Con- ference on Neural Information Processing Systems (NeurIPS 2019).", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Iitkgp at coin 2019: Using pre-trained language models for modeling machine comprehension", |
|
"authors": [ |
|
{ |
|
"first": "Prakhar", |
|
"middle": [], |
|
"last": "Sharma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sumegh", |
|
"middle": [], |
|
"last": "Roychowdhury", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "80--84", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Prakhar Sharma and Sumegh Roychowdhury. 2019. Iit- kgp at coin 2019: Using pre-trained language mod- els for modeling machine comprehension. In Pro- ceedings of the First Workshop on Commonsense In- ference in Natural Language Processing, pages 80- 84.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Glue: A multi-task benchmark and analysis platform for natural language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amanpreet", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julian", |
|
"middle": [], |
|
"last": "Michael", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Hill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samuel R", |
|
"middle": [], |
|
"last": "Bowman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "353--355", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. Proceedings of the 2018 EMNLP Workshop BlackboxNLP: An- alyzing and Interpreting Neural Networks for NLP, pages 353-355.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Xlnet: Generalized autoregressive pretraining for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Zhilin", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zihang", |
|
"middle": [], |
|
"last": "Dai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yiming", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaime", |
|
"middle": [], |
|
"last": "Carbonell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Russ", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5754--5764", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in neural in- formation processing systems, pages 5754-5764.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Swag: A large-scale adversarial dataset for grounded commonsense inference", |
|
"authors": [ |
|
{ |
|
"first": "Rowan", |
|
"middle": [], |
|
"last": "Zellers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yonatan", |
|
"middle": [], |
|
"last": "Bisk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roy", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yejin", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "93--104", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. Swag: A large-scale adversarial dataset for grounded commonsense inference. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 93-104.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "going on a vacation\" takes longer than\" going for a walk\": A study of temporal commonsense understanding", |
|
"authors": [ |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Khashabi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qiang", |
|
"middle": [], |
|
"last": "Ning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3363--3369", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ben Zhou, Daniel Khashabi, Qiang Ning, and Dan Roth. 2019. \"going on a vacation\" takes longer than\" going for a walk\": A study of temporal common- sense understanding. Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 3363-3369.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Freelb: Enhanced adversarial training for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Chen", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhe", |
|
"middle": [], |
|
"last": "Gan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Siqi", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Goldstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingjing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "2020", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Thomas Goldstein, and Jingjing Liu. 2020. Freelb: En- hanced adversarial training for language understand- ing. ICLR 2020.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"html": null, |
|
"num": null, |
|
"text": "Example from the CosmosQA dataset", |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"num": null, |
|
"text": "Huang et al., 2019): a large-scale dataset that focuses on people's everyday narratives, asking questions about the likely causes or effects of events that require reasoning beyond the exact text spans in the context. It has 35,888 questions on 21,886 distinct contexts taken from blogs of personal narratives. Each question has four answer candidates, one of which is correct. 93.8% of the dataset requires contextual commonsense reasoning. MCScript2.0(Ostermann et al., 2019b): a dataset focused on short narrations on different everyday activities (e.g. baking a cake, taking a bus, etc.). It has 19,821 questions on 3,487 texts. Each question has two answer candidates, one of which is correct. Roughly half of the questions require inferences over commonsense knowledge.", |
|
"content": "<table><tr><td>Dataset</td><td colspan=\"3\">#Train #Dev #Test #Label</td><td>Task</td><td>Metrics</td></tr><tr><td colspan=\"3\">CosmosQA 25,262 2,985 6,963</td><td>4</td><td>Relevance Ranking</td><td>Accuracy</td></tr><tr><td colspan=\"3\">MCScript2.0 14,191 2,020 3,610</td><td>2</td><td>Relevance Ranking</td><td>Accuracy</td></tr><tr><td>MCTACO</td><td>-</td><td>3,783 9,442</td><td>2</td><td colspan=\"2\">Pairwise Text Classification Exact Match (EM)/F1</td></tr><tr><td/><td/><td/><td/><td/><td>(3)</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"num": null, |
|
"text": "Summary of the three datasets: CosmosQA, MCScript2.0 and MCTACO.", |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"html": null, |
|
"num": null, |
|
"text": "Development and test results of CosmosQA, MCScript 2.0 and MCTACO. The best results are in bold. Note that RoBERTa LARGE , SMART and ALICE models use RoBERTa LARGE as the text encoder, and for a fair comparison, all these results are produced by ourselves. On the test results, note that CosmosQA and MCTACO are scored by using the official evaluation server (https://leaderboard.allenai.org/). * denotes unpublished work and scores were obtained from the evaluation server on April 16, 2020. Acc cs denotes the accuracy on commonsense based questions and Acc ood denotes the accuracy on questions that are not commonsense based, i.e. out-of-domain questions.", |
|
"content": "<table><tr><td>f (x; \u03b8) are complementary, leading to better re-</td></tr><tr><td>sults. For example, on the CosmosQA dataset, we</td></tr><tr><td>obtained a dev-set accuracy of 83.6% with AL-</td></tr><tr><td>ICE, a 1.6% and 3.0% absolute gains over SMART</td></tr><tr><td>and RoBERTa LARGE , respectively. On the blind</td></tr><tr><td>test-set, ALICE outperforms by a large margin</td></tr><tr><td>K-ADAPTER, a model that enhances RoBERTa</td></tr><tr><td>word representations with multiple knowledge</td></tr><tr><td>sources. Our submission to the CosmosQA leader-</td></tr><tr><td>board achieved a test-set accuracy of 84.57%,</td></tr><tr><td>ranking first place among all submissions (as of</td></tr><tr><td>April 16, 2020). On the MCScript2.0 dataset,</td></tr><tr><td>ALICE obtained a dev-set accuracy of 93.8%</td></tr><tr><td>in total, a 0.2% and 3.8% absolute gains over</td></tr><tr><td>SMART and RoBERTa LARGE , respectively. On</td></tr><tr><td>the commonsense based questions, ALICE under-</td></tr><tr><td>performed SMART by 0.1% and outperformed</td></tr><tr><td>RoBERTa LARGE by 4.0%. On the out-of-domain</td></tr><tr><td>questions, ALICE obtained 0.5% and 3.6% abso-</td></tr><tr><td>lute gains over SMART and RoBERTa LARGE , re-</td></tr><tr><td>spectively. On the MCScript2.0 test-set, ALICE</td></tr><tr><td>outperformed all baselines (on all types of ques-</td></tr><tr><td>tions), including SMART, indicating that it can</td></tr><tr><td>generalize better to unseen cases. Moreover, AL-</td></tr><tr><td>ICE outperformed PSH-SJTU, which is based on</td></tr></table>", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |