|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:59:20.993714Z" |
|
}, |
|
"title": "Contextual and Non-Contextual Word Embeddings: an in-depth Linguistic Investigation", |
|
"authors": [ |
|
{ |
|
"first": "Alessio", |
|
"middle": [], |
|
"last": "Miaschi", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Istituto di Linguistica Computazionale \"Antonio Zampolli\"", |
|
"location": { |
|
"settlement": "Pisa" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Felice", |
|
"middle": [], |
|
"last": "Dell'orletta", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Istituto di Linguistica Computazionale \"Antonio Zampolli\"", |
|
"location": { |
|
"settlement": "Pisa" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper we present a comparison between the linguistic knowledge encoded in the internal representations of a contextual Language Model (BERT) and a contextual-independent one (Word2vec). We use a wide set of probing tasks, each of which corresponds to a distinct sentence-level feature extracted from different levels of linguistic annotation. We show that, although BERT is capable of understanding the full context of each word in an input sequence, the implicit knowledge encoded in its aggregated sentence representations is still comparable to that of a contextualindependent model. We also find that BERT is able to encode sentence-level properties even within single-word embeddings, obtaining comparable or even superior results than those obtained with sentence representations.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper we present a comparison between the linguistic knowledge encoded in the internal representations of a contextual Language Model (BERT) and a contextual-independent one (Word2vec). We use a wide set of probing tasks, each of which corresponds to a distinct sentence-level feature extracted from different levels of linguistic annotation. We show that, although BERT is capable of understanding the full context of each word in an input sequence, the implicit knowledge encoded in its aggregated sentence representations is still comparable to that of a contextualindependent model. We also find that BERT is able to encode sentence-level properties even within single-word embeddings, obtaining comparable or even superior results than those obtained with sentence representations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Distributional word representations (Mikolov et al., 2013) trained on large-scale corpora have rapidly become one of the most prominent component in modern NLP systems. In this context, the recent development of context-dependent embeddings (Peters et al., 2018; Devlin et al., 2019) has shown that such representations are able to achieve state-ofthe-art performance in many complex NLP tasks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 36, |
|
"end": 58, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 241, |
|
"end": 262, |
|
"text": "(Peters et al., 2018;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 263, |
|
"end": 283, |
|
"text": "Devlin et al., 2019)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "However, the introduction of such models made the interpretation of the syntactic and semantic properties learned by their inner representations more complex. Recent studies have begun to study these models in order to understand whether they encode linguistic phenomena even without being explicitly designed to learn such properties (Marvin and Linzen, 2018; Goldberg, 2019; Warstadt et al., 2019) . Much of this work focused on the definition of probing models trained to predict simple linguistic properties from unsupervised representations. In particular, those work provided evidences that contextualized Neural Language Models (NLMs) are able to capture a wide range of linguistic phenomena (Adi et al., 2016; Perone et al., 2018; Tenney et al., 2019b) and even to organize this information in a hierarchical manner (Belinkov et al., 2017; Lin et al., 2019; Jawahar et al., 2019) . Despite this, less study focused on the analysis and the comparison of contextual and non-contextual NLMs according to their ability to encode implicit linguistic properties in their representations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 335, |
|
"end": 360, |
|
"text": "(Marvin and Linzen, 2018;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 361, |
|
"end": 376, |
|
"text": "Goldberg, 2019;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 377, |
|
"end": 399, |
|
"text": "Warstadt et al., 2019)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 699, |
|
"end": 717, |
|
"text": "(Adi et al., 2016;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 718, |
|
"end": 738, |
|
"text": "Perone et al., 2018;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 739, |
|
"end": 760, |
|
"text": "Tenney et al., 2019b)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 824, |
|
"end": 847, |
|
"text": "(Belinkov et al., 2017;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 848, |
|
"end": 865, |
|
"text": "Lin et al., 2019;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 866, |
|
"end": 887, |
|
"text": "Jawahar et al., 2019)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper we perform a large number of probing experiments to analyze and compare the implicit knowledge stored by a contextual and a non-contextual model within their inner representations. In particular, we define two research questions, aimed at understanding: (i) which is the best method for combining BERT and Word2vec word representations into sentence embeddings and how they differently encode properties related to the linguistic structure of a sentence; (ii) whether such sentence-level knowledge is preserved within BERT single-word representations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To answer our questions, we rely on a large suite of probing tasks, each of which codifies a particular propriety of a sentence, from very shallow features (such as sentence length and average number of characters per token) to more complex aspects of morphosyntactic and syntactic structure (such as the depth of the whole syntactic tree), thus making them as suitable to assess the implicit knowledge encoded by a NLM at a deep level of granularity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The remainder of the paper is organized as follows. First we present related work (Sec. 2), then, after briefly presenting our approach (Sec. 3), we describe in more details the data (Sec. 3.1), our set of probing features (Sec. 3.2) and the models used for the experiments (Sec. 3.3). Experiments and results are described in Sec. 4 and 5. To conclude, in Sec. 6 we summarize the main findings of the study.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Contributions In this paper: (i) we perform an in-depth study aimed at understanding the linguistic knowledge encoded in a contextual (BERT) and a contextual-independent (Word2vec) Neural Language Model; (ii) we evaluate the best method for obtaining sentence-level representations from BERT and Word2vec according to a wide spectrum of probing tasks; (iii) we compare the results obtained by BERT and Word2vec according to the different combining methods; (iv) we study whether BERT is able to encode sentence-level properties within its single word representations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the last few years, several methods have been devised to open the black box and understand the linguistic information encoded in NLMs (Belinkov and Glass, 2019) . They range from techniques to examine the activations of individual neurons (Karpathy et al., 2015; Li et al., 2016; K\u00e1d\u00e1r et al., 2017) to more domain specific approaches, such as interpreting attention mechanisms (Raganato and Tiedemann, 2018; Kovaleva et al., 2019; Vig and Belinkov, 2019) or designing specific probing tasks that a model can solve only if it captures a precise linguistic phenomenon using the contextual word/sentence embeddings of a pre-trained model as training features (Conneau et al., 2018; Zhang and Bowman, 2018; Hewitt and Liang, 2019) . These latter studies demonstrated that NLMs are able to encode a wide range of linguistic information in a hierarchical manner (Belinkov et al., 2017; Blevins et al., 2018; Tenney et al., 2019b) and even to support the extraction of dependency parse trees (Hewitt and Manning, 2019) . Jawahar et al. (2019) investigated the representations learned at different layers of BERT, showing that lower layer representations are usually better for capturing surface features, while embeddings from higher layers are better for syntactic and semantic properties. Using a suite of probing tasks, Tenney et al. (2019a) found that the linguistic knowledge encoded by BERT through its 12/24 layers follows the traditional NLP pipeline: POS tagging, parsing, NER, semantic roles and then coreference. Liu et al. (2019), instead, quantified differences in the transferability of individual layers between different models, showing that higher layers of RNNs (ELMo) are more task-specific (less general), while transformer layers (BERT) do not exhibit this increase in task-specificity.", |
|
"cite_spans": [ |
|
{ |
|
"start": 137, |
|
"end": 163, |
|
"text": "(Belinkov and Glass, 2019)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 242, |
|
"end": 265, |
|
"text": "(Karpathy et al., 2015;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 266, |
|
"end": 282, |
|
"text": "Li et al., 2016;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 283, |
|
"end": 302, |
|
"text": "K\u00e1d\u00e1r et al., 2017)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 381, |
|
"end": 411, |
|
"text": "(Raganato and Tiedemann, 2018;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 412, |
|
"end": 434, |
|
"text": "Kovaleva et al., 2019;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 435, |
|
"end": 458, |
|
"text": "Vig and Belinkov, 2019)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 660, |
|
"end": 682, |
|
"text": "(Conneau et al., 2018;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 683, |
|
"end": 706, |
|
"text": "Zhang and Bowman, 2018;", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 707, |
|
"end": 730, |
|
"text": "Hewitt and Liang, 2019)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 860, |
|
"end": 883, |
|
"text": "(Belinkov et al., 2017;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 884, |
|
"end": 905, |
|
"text": "Blevins et al., 2018;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 906, |
|
"end": 927, |
|
"text": "Tenney et al., 2019b)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 989, |
|
"end": 1015, |
|
"text": "(Hewitt and Manning, 2019)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 1320, |
|
"end": 1341, |
|
"text": "Tenney et al. (2019a)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Closer to our study, Adi et al. (2016) proposed a method for analyzing and comparing different sentence representations and different dimensions, exploring the effect of the dimensionality on the resulting representations. In particular, they showed that sentence representations based on averaged Word2vec embeddings are particularly effective and encode a wide amount of information regarding sentence length, while LSTM auto-encoders are very effective at capturing word order and word content. Similarly, but focused on the resolution of specific downstream tasks, Shen et al. 2018compared a Single Word Embedding-based model (SWEM-based) with existing recurrent and convolutional networks using a suite of 17 NLP datasets, demonstrating that simple pooling operations over SWEM-based representations exhibit comparable or even superior performance in the majority of cases considered. On the contrary, Joshi et al. (2019) showed that, in the context of three different classification problems in health informatics, context-based representations are a better choice than word-based representations to create vectors. Focusing instead on the geometry of the representation space, Ethayarajh (2019) first showed that the contextualized word representations of ELMo, BERT and GPT-2 produce more context specific representations in the upper layers and then proposed a method for creating a new type of static embedding that outperforms GloVe and FastText on many benchmarks, by simply taking the first principal component of contextualized representations in lower layers of BERT.", |
|
"cite_spans": [ |
|
{ |
|
"start": 21, |
|
"end": 38, |
|
"text": "Adi et al. (2016)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 907, |
|
"end": 926, |
|
"text": "Joshi et al. (2019)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Differently from those latter work, our aim is to investigate the implicit linguistic knowledge encoded in pre-trained contextual and contextualindependent models both at sentence and word levels.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We studied how layer-wise internal representations of BERT encode a wide spectrum of linguistic properties and how such implicit knowledge differs from that learned by a context-independent model such as Word2vec. Following the probing task approach as defined in Conneau et al. (2018) , we proposed a suite of 68 probing tasks, each of which corresponds to a distinct linguistic feature capturing raw-text, lexical, morpho-syntactic and syntactic characteristics of a sentence. More specifically, we defined two sets of experiments. The first consists in evaluating which is the best method for generating sentence-level embeddings using BERT and Word2vec single-word representations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 264, |
|
"end": 285, |
|
"text": "Conneau et al. (2018)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Our Approach", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In particular, we defined a simple probing model that takes as input layer-wise BERT and Word2vec combined representations for each sentence of a gold standard Universal Dependencies (UD) (Nivre et al., 2016) English dataset and predicts the actual value of a given probing feature. Moreover, we compared the results to understand which model performs better according to different levels of linguistic sophistication.", |
|
"cite_spans": [ |
|
{ |
|
"start": 188, |
|
"end": 208, |
|
"text": "(Nivre et al., 2016)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Our Approach", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In the second set of experiments, we measured how many sentence-level properties are encoded in single-word representations. To do so, we performed our set of probing tasks using the embeddings extracted from both BERT and Word2vec individual tokens. In particular, we considered the word representations corresponding to the first, last and two internal tokens for each sentence of the UD dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Our Approach", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In order to perform the probing experiments on gold annotated sentences, we relied on the Universal Dependencies (UD) English dataset. The dataset includes three UD English treebanks: UD English-ParTUT, a conversion of a multilin-gual parallel treebank consisting of a variety of text genres, including talks, legal texts and Wikipedia articles (Sanguinetti and Bosco, 2015) ; the Universal Dependencies version annotation from the GUM corpus (Zeldes, 2017) ; the English Web Treebank (EWT), a gold standard universal dependencies corpus for English (Silveira et al., 2014) . Overall, the final dataset consists of 23,943 sentences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 345, |
|
"end": 374, |
|
"text": "(Sanguinetti and Bosco, 2015)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 443, |
|
"end": 457, |
|
"text": "(Zeldes, 2017)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 550, |
|
"end": 573, |
|
"text": "(Silveira et al., 2014)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "As previously mentioned, our method is in line with the probing tasks approach defined in Conneau et al. 2018, which aims to capture linguistic information from the representations learned by a NLM. Specifically, in our work, each probing task correspond to predict the value of a specific linguistic feature automatically extracted from the POS tagged and dependency parsed sentences in the English UD dataset. The set of features is based on the ones described in Brunato et al. (2020) and it includes characteristics acquired from raw, morphosyntactic and syntactic levels of annotation. As described in Brunato et al. (2020) , this set of features has been shown to have a highly predictive role when leveraged by traditional learning models on a variety of classification problems, covering different aspects of stylometric and complexity analysis.", |
|
"cite_spans": [ |
|
{ |
|
"start": 466, |
|
"end": 487, |
|
"text": "Brunato et al. (2020)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 607, |
|
"end": 628, |
|
"text": "Brunato et al. (2020)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probing Features", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "As shown in Table 1 , these features capture sev-eral linguistic phenomena ranging from the average length of words and sentence, to morpho-syntactic information both at the level of POS distribution and about the inflectional properties of verbs. More complex aspects of sentence structure are derived from syntactic annotation and model global and local properties of parsed tree structure, with a focus on subtrees of verbal heads, the order of subjects and objects with respect to the verb, the distribution of UD syntactic relations and features referring to the use of subordination.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 12, |
|
"end": 19, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Probing Features", |
|
"sec_num": "3.2" |
|
}, |
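
{

"text": "To make the feature extraction concrete, the following simplified sketch (our own reconstruction; the actual features are computed with the Profiling-UD tool described in Brunato et al. (2020)) derives one feature per annotation level from the 10-column CoNLL-U encoding of a parsed sentence:\n\ndef probing_features(conllu_sentence: str) -> dict:\n    # Keep the 10-column token lines, skipping comments, multiword\n    # ranges (\"1-2\") and empty nodes (\"1.1\").\n    rows = [line.split(\"\\t\") for line in conllu_sentence.splitlines()\n            if line and not line.startswith(\"#\")\n            and line.split(\"\\t\")[0].isdigit()]\n    forms = [r[1] for r in rows]       # word forms\n    upos = [r[3] for r in rows]        # universal POS tags\n    heads = [int(r[6]) for r in rows]  # 1-based head indices, 0 = root\n\n    def depth(i: int) -> int:\n        # Number of arcs from token i (1-based) up to the root.\n        d = 0\n        while heads[i - 1] != 0:\n            i = heads[i - 1]\n            d += 1\n        return d\n\n    return {\n        \"n_tokens\": len(forms),                             # raw text\n        \"char_per_tok\": sum(map(len, forms)) / len(forms),  # raw text\n        \"upos_dist_VERB\": upos.count(\"VERB\") / len(upos),   # morpho-syntax\n        \"parse_depth\": max(depth(i) for i in range(1, len(rows) + 1)),  # syntax\n    }",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Probing Features",

"sec_num": "3.2"

},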
|
{ |
|
"text": "We relied on a pre-trained English version of BERT (BERT-base uncased, 12 layers) for the extraction of the contextual word embeddings. To obtain the representations for our sentence-level tasks we experimented the activation of the first input token ([CLS]) 1 and four different combining methods: Max-pooling, Min-pooling, Mean and Sum. Each of this four combining methods returns a single s vector, such that each s n is obtained by combining the n th components w 1n , w 2n , ..., w mn of the embedding of each word in the input sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models", |
|
"sec_num": "3.3" |
|
}, |
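
{

"text": "The four combining strategies can be made explicit with a minimal sketch (an illustration of the strategies described above, not the authors' released code), where word_embs is the m x d matrix holding one embedding per word of the sentence:\n\nimport numpy as np\n\ndef combine(word_embs: np.ndarray, method: str) -> np.ndarray:\n    # word_embs has shape (m, d); every strategy reduces over the word\n    # axis, so each component s_n is built from w_1n, ..., w_mn.\n    if method == \"max\":   # Max-pooling: component-wise maximum\n        return word_embs.max(axis=0)\n    if method == \"min\":   # Min-pooling: component-wise minimum\n        return word_embs.min(axis=0)\n    if method == \"mean\":  # Mean: component-wise average\n        return word_embs.mean(axis=0)\n    if method == \"sum\":   # Sum: component-wise sum\n        return word_embs.sum(axis=0)\n    raise ValueError(f\"unknown combining method: {method}\")",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Models",

"sec_num": "3.3"

},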
|
{ |
|
"text": "In order to conduct a comparison of contextbased and word-based representations when solving our set of probing tasks, we performed all the probing experiments using also the embeddings extracted from a pre-trained version of Word2vec. In particular, we trained the model on the English Wikipedia dataset (dump of March 2020), resulting in 300-dimensional vectors. In the same manner as BERT's contextual representations, we experimented four combining methods: Max-pooling, Min-pooling, Mean and Sum.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models", |
|
"sec_num": "3.3" |
|
}, |
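
{

"text": "A minimal training sketch (our reconstruction: the paper fixes only the corpus and the 300 dimensions, so the remaining hyperparameters below are illustrative defaults, with gensim 4.x parameter names and a hypothetical pre-tokenized corpus wiki_sentences):\n\nimport numpy as np\nfrom gensim.models import Word2Vec\n\n# wiki_sentences: an iterable of tokenized sentences from the English\n# Wikipedia dump (March 2020); building it is omitted here.\nmodel = Word2Vec(sentences=wiki_sentences, vector_size=300,\n                 window=5, min_count=5, workers=4)\n\n# Per-word vectors can then be looked up and pooled with the same\n# combining methods used for BERT (tokens: the words of one UD sentence):\nword_embs = np.stack([model.wv[w] for w in tokens if w in model.wv])",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Models",

"sec_num": "3.3"

},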
|
{ |
|
"text": "We used a linear Support Vector Regression model (LinearSVR) as probing model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The first set of experiments consists in evaluating which is the best method for combining word-level embeddings into sentence representations in order to understand what kind of implicit linguistic properties are encoded within both contextual and noncontextual representations using different combining methods. To do so, we firstly extracted from each sentence in the UD dataset the corresponding word embeddings using the output of the internal representations of Word2vec and BERT layers 1 As suggested in Jawahar et al. (2019) , the [CLS] token somehow summerizes the information encoded in the input sequence.", |
|
"cite_spans": [ |
|
{ |
|
"start": 511, |
|
"end": 532, |
|
"text": "Jawahar et al. (2019)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating Sentence Representations", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "BERT (from input layer -12 to output layer -1). Secondly, we computed the sentence-representations according to the different combining strategies defined in 3.3. We then performed our set of 68 probing tasks using the LinearSVR model for each sentence representation. Since the majority of our probing features is correlated to sentence length, we compared probing results with the ones obtained with a baseline computed by measuring the \u03c1 coefficient between the length of the UD sentences and each of the 68 probing features. Evaluation was performed with a 5-cross fold validation and using Spearman correlation score (\u03c1) between predicted and gold labels as evaluation metric. Table 2 report average \u03c1 scores aggregating all probing results (All features) and according to raw text (Raw text), morphosyntactic (Morphosyntax) and syntactic (Syntax) levels of annotations. Scores are computed by averaging Max-, Min-pooling, Mean and Sum results. As a general remark, we notice that the scores obtained by Word2vec and BERT's internal representations outperforms the ones obtained with the correlation baseline, thus showing that both models are capable of implicitly encoding a wide spectrum of linguistic phenomena. Interestingly, we can notice that Word2vec sentence representations outperform BERT ones when considering all the probing features in average.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 682, |
|
"end": 689, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Categories", |
|
"sec_num": null |
|
}, |
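
{

"text": "A minimal sketch of this probing protocol (our own illustration, with hypothetical array names: X holds the sentence representations produced by one model/layer/combining strategy, y the gold values of one probing feature) could look as follows:\n\nimport numpy as np\nfrom scipy.stats import spearmanr\nfrom sklearn.model_selection import cross_val_predict\nfrom sklearn.svm import LinearSVR\n\ndef probe(X: np.ndarray, y: np.ndarray) -> float:\n    # 5-fold cross-validation: each sentence is predicted by a LinearSVR\n    # trained on the remaining four folds.\n    preds = cross_val_predict(LinearSVR(), X, y, cv=5)\n    # Spearman correlation between predicted and gold feature values.\n    return spearmanr(y, preds).correlation\n\n# The length baseline replaces the model predictions with the sentence\n# lengths: spearmanr(sentence_lengths, y).correlation",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Evaluating Sentence Representations",

"sec_num": "4"

},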
|
|
{ |
|
"text": "We report in Table 3 and Figure 1 the probing scores obtained by the two models. For what concerns Word2vec representations, we notice that the Sum method prove to be the best one for encoding raw text and syntactic features, while mo- rophosyntactic properties are better represented averaging all the word embeddings (Mean). In general, best results are obtained with probing tasks related to morphosyntactic and syntactic features, like the distribution of POS (e.g. upos dist PRON, upos dist VERB) or the maximum depth of the syntactic tree (parse depth). If we look instead at the average \u03c1 scores obtained with BERT layerwise representations (Figure 1 ), we observe that, differently from Word2vec, best results are the ones related to raw-text features, such as sentence length or Type/Token Ratio. The Mean method prove to be the best one for almost all the probing tasks, achieving highest scores in the first five layers. The only exceptions mainly concern some of the linguistic features related to syntactic properties, e.g. the average length of dependency links (avg links len) or the maximum depth of the syntactic tree (parse depth), for which best scores across layers are obtained with the Sum strategy. The Maxand Min-pooling methods, instead, show a similar trend for almost all the probing features. Interestingly, the representations corresponding to the Table 4 : Average \u03c1 differences between BERT and Word2vec probing results according to the four embedding-aggregation strategies.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 13, |
|
"end": 20, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 25, |
|
"end": 33, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 648, |
|
"end": 657, |
|
"text": "(Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 1377, |
|
"end": 1384, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Categories", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "[CLS] token, although considered as a summarization of the entire input sequence, achieve results comparable to those obtained with Maxand Minpooling methods. Moreover, it can be noticed that, unlike Maxand Min-pooling, the representations computed with Mean and Sum methods tend to lose their average precision in encoding our set of linguistic properties across the 12 layers. In order to investigate more in depth how the linguistic knowledge encoded by BERT across its layers differs from that learned by Word2vec, we report in Table 4 average \u03c1 differences between the two models according to the four combining strategies. As a general remark, we can notice that, regardless of the aggregation strategy taken into account, BERT and Word2vec sentence representations achieve quite similar results on average. Hence, although BERT is capable of understanding the full context of each word in an input sequence, the amount of linguistic knowledge implicitly encoded in its aggregated sentence representations is still comparable to that which can be achieved with a non-contextual language model.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 532, |
|
"end": 539, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Categories", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In Figure 2 we report instead the differences between BERT and Word2vec scores for all the 68 probing features (ordered by correlation with sentence length). For the comparison, we used the representations obtained with the Mean combining method. As a first remark, we notice that there is a clear distinction in terms of \u03c1 scores between features better predicted by BERT and Word2vec. In fact, features most related to syntactic properties (left heatmap) are those for which BERT results are generally higher with respect to those obtained with Word2vec. This result demonstrates that BERT, unlike a non-contextual language model as Word2vec, is able to encode information within its representa-tions that involves the entire input sequence, thus making more simple to solve probing tasks that refer to syntatic characteristics.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 11, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Categories", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Focusing instead on the right heatmap, we observe that Word2vec non-contextual representations are still capable of encoding a wide spectrum of linguistic properties with higher \u03c1 values compared to BERT ones, especially if we consider scores closer to BERT's output layers (from -4 to -1). This is particularly evident for morphosyntactic features related to the distribution of POS categories (xpos dist *, upos dist *), most likely because non-contextual representations tend to encode properties related to single tokens rather than syntactic relations between them.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Categories", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Once we have probed the linguistic knowledge encoded by BERT and Word2vec using different strategies for computing sentence embeddings, we investigated how much information about the structure of a sentence is encoded within single-word contextual representations. For doing so, we performed our sentence-level probing tasks using a single BERT word embedding for each sentence in the UD dataset. We tested four different words, corresponding to the first, the last and two internal tokens for each sentence in the UD dataset. In Table 5 : Average \u03c1 scores obtained by BERT and Word2vec according to word representations corresponding to the first, the last and two internal tokens of each input sentence. Results are computed according to the three linguistic levels of annotation and considering all the probing features (All). Average scores obtained with the [CLS] token are also reported.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 530, |
|
"end": 537, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluating Word Representations", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "particular, we extracted the embeddings from the output layer (-1) and from the layer that achieved best results in the previous experiments (-8). We used probing scores obtained with Word2vec embeddings for the same tokens as baseline. In Table 5 we report average \u03c1 scores obtained by BERT (BERT-*) and Word2vec (Word2vec-*) according to word-level representations extracted from the four tokens mentioned above. Results were computed aggregating all probing results (All) and according", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 240, |
|
"end": 248, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluating Word Representations", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "to raw text (Raw), morphosyntactic (Morphosyntax) and syntatic (Syntax) levels of annotation. For comparison, we also report average scores obtained with the [CLS] token.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating Word Representations", |
|
"sec_num": "5" |
|
}, |
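
{

"text": "A minimal sketch of this extraction step (our illustration via the HuggingFace transformers API; the paper does not detail its implementation):\n\nimport torch\nfrom transformers import BertModel, BertTokenizer\n\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\nmodel = BertModel.from_pretrained(\"bert-base-uncased\",\n                                  output_hidden_states=True)\nmodel.eval()\n\ndef token_embedding(sentence: str, layer: int, position: int) -> torch.Tensor:\n    # layer: negative index into the hidden states, e.g. -1 or -8;\n    # position: 0 for [CLS], -2 for the last real (non-[SEP]) token.\n    enc = tokenizer(sentence, return_tensors=\"pt\")\n    with torch.no_grad():\n        hidden = model(**enc).hidden_states[layer][0]  # (seq_len, 768)\n    return hidden[position]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Evaluating Word Representations",

"sec_num": "5"

},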
|
{ |
|
"text": "As a first remark, we can clearly notice that even with a single-word embedding BERT is able to encode a wide spectrum of sentence-level linguistic properties. This result allows us to highlight the main potential of contextual representations, i.e. the capability of capturing linguistic phenomena that refer to the entire input sequence within single-word representations. An interesting observation is that, except for the raw text features, for which the best scores are achieved using [CLS], higher performance are obtained with the embeddings corresponding to BERT-4, i.e. the last token of each sentence. This result seems to indicate that [CLS], although being used for classification predictions, does not necessarily correspond to the most linguistically informative token within each input sequence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating Word Representations", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Comparing the results with those achieved using Word2vec word embeddings, we notice that BERT scores greatly outperform Word2vec for all the probing tasks. This is a straightforward result and can be easily explained by the fact that the lack of contextual knowledge does not allow singleword representations to encode information that are related to the structure of the whole sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating Word Representations", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Since the latter results demonstrated that BERT is capable of encoding many sentence-level properties within its single word representations, as a last analysis, we decided to compare these results with the ones obtained using sentence embeddings. In particular, Figure 3 reports probing scores obtained by BERT single word (tok *) and Mean sentence representations (sent) extracted from the output layer (-1) and from the layer that achieved best results in average (-8) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 467, |
|
"end": 471, |
|
"text": "(-8)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 263, |
|
"end": 271, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluating Word Representations", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "As already mentioned, for many of these probing tasks, word embeddings performance is comparable to that obtained with the aggregated sentence representations. Nevertheless, there are several cases in which the difference between performance is particularly significant. Interestingly, we can notice that aggregated sentence representations are generally better for predicting properties belonging to the left heatmap, i.e. to the group of features more related to syntactic properties. This is particularly noticeable for the average number of tokens per clause (avg token per clause) or the distribution of subordinate chains by length (subord dist), for which we observe an improvement from word-level to sentence-level representations of more than .10 \u03c1 points. On the contrary, probing features belonging to the right heatmap, therefore more close to raw text and morphosyntactic properties, are generally better predicted using single word embeddings, especially when considering the inner representations corresponding to the last token in each sentence (tok 4). The property most affected by the difference in scores between wordand sentence-level embeddings is the the distribution of periods (xpos dist .).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating Word Representations", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Focusing instead on differences in performance between the two considered layers, we can notice that regardless of the method used to predict each feature, the representations learned by BERT tend to lose their precision in encoding our set of linguistic properties, most likely because the model is storing task-specific information (Masked Language Modeling task) at the expense of its ability to encode general knowledge about the language.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating Word Representations", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In this paper we studied the linguistic knowledge implicitly encoded in the internal representations of a contextual Language Model (BERT) and a contextual-independent one (Word2vec). Using a suite of 68 probing tasks and testing different methods for combining word embeddings into sentence representations, we showed that BERT and Word2vec encode a wide set of sentence-level linguistic properties in a similar manner. Nevertheless, we found that for Word2vec the best method for obtaining sentence representations is the Sum, while BERT is more effective when averaging all the single-word representations (Mean method). Moreover, we showed that BERT is able in storing features that are mainly related to raw text and syntactic properties, while Word2vec is good at predicting morphosyntactic characteristics.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Finally, we showed that BERT is able to encode sentence-level linguistic phenomena even within single-word embeddings, exhibiting comparable or even superior performance than those obtained with aggregated sentence representations. Moreover, we found that, at least for morphosyntactic and syntactic characteristics, the most informative word representation is the one that correspond to the last token of each input sequence and not, as might be expected, to the [CLS] special token.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Fine-grained analysis of sentence embeddings using auxiliary prediction tasks", |
|
"authors": [ |
|
{ |
|
"first": "Yossi", |
|
"middle": [], |
|
"last": "Adi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Einat", |
|
"middle": [], |
|
"last": "Kermany", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yonatan", |
|
"middle": [], |
|
"last": "Belinkov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ofer", |
|
"middle": [], |
|
"last": "Lavi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1608.04207" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2016. Fine-grained anal- ysis of sentence embeddings using auxiliary predic- tion tasks. arXiv preprint arXiv:1608.04207.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Analysis methods in neural language processing: A survey", |
|
"authors": [ |
|
{ |
|
"first": "Yonatan", |
|
"middle": [], |
|
"last": "Belinkov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Glass", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "49--72", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yonatan Belinkov and James Glass. 2019. Analysis methods in neural language processing: A survey. Transactions of the Association for Computational Linguistics, 7:49-72.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Evaluating layers of representation in neural machine translation on part-of-speech and semantic tagging tasks", |
|
"authors": [ |
|
{ |
|
"first": "Yonatan", |
|
"middle": [], |
|
"last": "Belinkov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llu\u00eds", |
|
"middle": [], |
|
"last": "M\u00e0rquez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hassan", |
|
"middle": [], |
|
"last": "Sajjad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nadir", |
|
"middle": [], |
|
"last": "Durrani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fahim", |
|
"middle": [], |
|
"last": "Dalvi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Glass", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1--10", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yonatan Belinkov, Llu\u00eds M\u00e0rquez, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, and James Glass. 2017. Evaluating layers of representation in neural ma- chine translation on part-of-speech and semantic tag- ging tasks. In Proceedings of the Eighth Interna- tional Joint Conference on Natural Language Pro- cessing (Volume 1: Long Papers), pages 1-10.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Deep rnns encode soft hierarchical syntax", |
|
"authors": [ |
|
{ |
|
"first": "Terra", |
|
"middle": [], |
|
"last": "Blevins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "14--19", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Terra Blevins, Omer Levy, and Luke Zettlemoyer. 2018. Deep rnns encode soft hierarchical syntax. In Pro- ceedings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Papers), pages 14-19.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Profiling-ud: a tool for linguistic profiling of texts", |
|
"authors": [ |
|
{ |
|
"first": "Dominique", |
|
"middle": [], |
|
"last": "Brunato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrea", |
|
"middle": [], |
|
"last": "Cimino", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Felice", |
|
"middle": [], |
|
"last": "Dell'orletta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Giulia", |
|
"middle": [], |
|
"last": "Venturi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simonetta", |
|
"middle": [], |
|
"last": "Montemagni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7147--7153", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dominique Brunato, Andrea Cimino, Felice Dell'Orletta, Giulia Venturi, and Simonetta Montemagni. 2020. Profiling-ud: a tool for linguis- tic profiling of texts. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 7147-7153, Marseille, France. European Language Resources Association.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties", |
|
"authors": [ |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Germ\u00e1n", |
|
"middle": [], |
|
"last": "Kruszewski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lo\u00efc", |
|
"middle": [], |
|
"last": "Barrault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "2126--2136", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexis Conneau, Germ\u00e1n Kruszewski, Guillaume Lam- ple, Lo\u00efc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 2126-2136.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "How contextual are contextualized word representations? comparing the geometry of BERT, ELMo, and GPT-2 embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Kawin", |
|
"middle": [], |
|
"last": "Ethayarajh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "55--65", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1006" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kawin Ethayarajh. 2019. How contextual are contex- tualized word representations? comparing the geom- etry of BERT, ELMo, and GPT-2 embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 55-65, Hong Kong, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Assessing bert's syntactic abilities", |
|
"authors": [ |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1901.05287" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoav Goldberg. 2019. Assessing bert's syntactic abili- ties. arXiv preprint arXiv:1901.05287.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Designing and interpreting probes with control tasks", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Hewitt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2733--2743", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 2733-2743.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "A structural probe for finding syntax in word representations", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Hewitt", |
|
"suffix": "" |
|
}, |
|
{

"first": "Christopher",

"middle": [

"D"

],

"last": "Manning",

"suffix": ""

}
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4129--4138", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Hewitt and Christopher D Manning. 2019. A structural probe for finding syntax in word represen- tations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4129-4138.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "What does bert learn about the structure of language?", |
|
"authors": [ |
|
{ |
|
"first": "Ganesh", |
|
"middle": [], |
|
"last": "Jawahar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Beno\u00eet", |
|
"middle": [], |
|
"last": "Sagot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Djam\u00e9", |
|
"middle": [], |
|
"last": "Seddah", |
|
"suffix": "" |
|
}
|
], |
|
"year": 2019, |
|
"venue": "57th Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ganesh Jawahar, Beno\u00eet Sagot, Djam\u00e9 Seddah, Samuel Unicomb, Gerardo I\u00f1iguez, M\u00e1rton Karsai, Yannick L\u00e9o, M\u00e1rton Karsai, Carlos Sarraute,\u00c9ric Fleury, et al. 2019. What does bert learn about the structure of language? In 57th Annual Meeting of the Associa- tion for Computational Linguistics (ACL), Florence, Italy.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "A comparison of word-based and context-based representations for classification problems in health informatics", |
|
"authors": [ |
|
{ |
|
"first": "Aditya", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sarvnaz", |
|
"middle": [], |
|
"last": "Karimi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ross", |
|
"middle": [], |
|
"last": "Sparks", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cecile", |
|
"middle": [], |
|
"last": "Paris", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C Raina", |
|
"middle": [], |
|
"last": "Macintyre", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 18th BioNLP Workshop and Shared Task", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "135--141", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-5015" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aditya Joshi, Sarvnaz Karimi, Ross Sparks, Cecile Paris, and C Raina MacIntyre. 2019. A comparison of word-based and context-based representations for classification problems in health informatics. In Pro- ceedings of the 18th BioNLP Workshop and Shared Task, pages 135-141, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Representation of linguistic form and function in recurrent neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Akos", |
|
"middle": [], |
|
"last": "K\u00e1d\u00e1r", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Grzegorz", |
|
"middle": [], |
|
"last": "Chrupa\u0142a", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Afra", |
|
"middle": [], |
|
"last": "Alishahi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Computational Linguistics", |
|
"volume": "43", |
|
"issue": "4", |
|
"pages": "761--780", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Akos K\u00e1d\u00e1r, Grzegorz Chrupa\u0142a, and Afra Alishahi. 2017. Representation of linguistic form and func- tion in recurrent neural networks. Computational Linguistics, 43(4):761-780.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Visualizing and understanding recurrent networks", |
|
"authors": [ |
|
{ |
|
"first": "Andrej", |
|
"middle": [], |
|
"last": "Karpathy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Justin", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Fei-Fei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1506.02078" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrej Karpathy, Justin Johnson, and Li Fei-Fei. 2015. Visualizing and understanding recurrent networks. arXiv preprint arXiv:1506.02078.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Revealing the dark secrets of BERT", |
|
"authors": [ |
|
{ |
|
"first": "Olga", |
|
"middle": [], |
|
"last": "Kovaleva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexey", |
|
"middle": [], |
|
"last": "Romanov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Rogers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Rumshisky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4365--4374", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1445" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Olga Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. 2019. Revealing the dark secrets of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 4365-4374, Hong Kong, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Visualizing and understanding neural models in nlp", |
|
"authors": [ |
|
{ |
|
"first": "Jiwei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xinlei", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "681--691", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2016. Visualizing and understanding neural models in nlp. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 681-691.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Open sesame: Getting inside BERT's linguistic knowledge", |
|
"authors": [ |
|
{ |
|
"first": "Yongjie", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yi", |
|
"middle": [], |
|
"last": "Chern Tan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Frank", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "241--253", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-4825" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yongjie Lin, Yi Chern Tan, and Robert Frank. 2019. Open sesame: Getting inside BERT's linguistic knowledge. In Proceedings of the 2019 ACL Work- shop BlackboxNLP: Analyzing and Interpreting Neu- ral Networks for NLP, pages 241-253, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Linguistic knowledge and transferability of contextual representations", |
|
"authors": [ |
|
{

"first": "Nelson",

"middle": [

"F"

],

"last": "Liu",

"suffix": ""

},

{

"first": "Matt",

"middle": [],

"last": "Gardner",

"suffix": ""

},

{

"first": "Yonatan",

"middle": [],

"last": "Belinkov",

"suffix": ""

},

{

"first": "Matthew",

"middle": [

"E"

],

"last": "Peters",

"suffix": ""

},

{

"first": "Noah",

"middle": [

"A"

],

"last": "Smith",

"suffix": ""

}
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1073--1094", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nelson F Liu, Matt Gardner, Yonatan Belinkov, Matthew E Peters, and Noah A Smith. 2019. Lin- guistic knowledge and transferability of contextual representations. In Proceedings of the 2019 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long and Short Pa- pers), pages 1073-1094.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Targeted syntactic evaluation of language models", |
|
"authors": [ |
|
{ |
|
"first": "Rebecca", |
|
"middle": [], |
|
"last": "Marvin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tal", |
|
"middle": [], |
|
"last": "Linzen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1192--1202", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rebecca Marvin and Tal Linzen. 2018. Targeted syn- tactic evaluation of language models. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192-1202.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Distributed representations of words and phrases and their compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3111--3119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems, pages 3111-3119.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Universal dependencies v1: A multilingual treebank collection", |
|
"authors": [ |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marie-Catherine", |
|
"middle": [], |
|
"last": "De Marneffe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Filip", |
|
"middle": [], |
|
"last": "Ginter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Hajic", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Christopher", |

"middle": [ |

"D" |

], |

"last": "Manning", |

"suffix": "" |

}, |

{ |

"first": "Ryan", |

"middle": [], |

"last": "McDonald", |

"suffix": "" |

}, |

{ |

"first": "Slav", |

"middle": [], |

"last": "Petrov", |

"suffix": "" |

}, |

{ |

"first": "Sampo", |

"middle": [], |

"last": "Pyysalo", |

"suffix": "" |

}, |

{ |

"first": "Natalia", |

"middle": [], |

"last": "Silveira", |

"suffix": "" |

} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1659--1666", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joakim Nivre, Marie-Catherine De Marneffe, Filip Gin- ter, Yoav Goldberg, Jan Hajic, Christopher D Man- ning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, et al. 2016. Universal dependencies v1: A multilingual treebank collection. In Proceed- ings of the Tenth International Conference on Lan- guage Resources and Evaluation (LREC'16), pages 1659-1666.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Evaluation of sentence embeddings in downstream and linguistic probing tasks", |
|
"authors": [ |
|
{ |

"first": "Christian", |

"middle": [ |

"S" |

], |

"last": "Perone", |

"suffix": "" |

}, |

{ |

"first": "Roberto", |

"middle": [], |

"last": "Silveira", |

"suffix": "" |

}, |

{ |

"first": "Thomas", |

"middle": [ |

"S" |

], |

"last": "Paula", |

"suffix": "" |

} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1806.06259" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christian S Perone, Roberto Silveira, and Thomas S Paula. 2018. Evaluation of sentence embeddings in downstream and linguistic probing tasks. arXiv preprint arXiv:1806.06259.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Deep contextualized word representations", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Neumann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Iyyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Gardner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "2227--2237", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227- 2237.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "An analysis of encoder representations in transformerbased machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Raganato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00f6rg", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alessandro Raganato and J\u00f6rg Tiedemann. 2018. An analysis of encoder representations in transformer- based machine translation. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Parttut: The turin university parallel treebank", |
|
"authors": [ |
|
{ |
|
"first": "Manuela", |
|
"middle": [], |
|
"last": "Sanguinetti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cristina", |
|
"middle": [], |
|
"last": "Bosco", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Harmonization and Development of Resources and Tools for Italian Natural Language Processing within the PARLI Project", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "51--69", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Manuela Sanguinetti and Cristina Bosco. 2015. Parttut: The turin university parallel treebank. In Harmo- nization and Development of Resources and Tools for Italian Natural Language Processing within the PARLI Project, pages 51-69. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Baseline needs more love: On simple word-embedding-based models and associated pooling mechanisms", |
|
"authors": [ |
|
{ |
|
"first": "Dinghan", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guoyin", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wenlin", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Renqiang Min", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qinliang", |
|
"middle": [], |
|
"last": "Su", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yizhe", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chunyuan", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ricardo", |
|
"middle": [], |
|
"last": "Henao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lawrence", |
|
"middle": [], |
|
"last": "Carin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "440--450", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P18-1041" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dinghan Shen, Guoyin Wang, Wenlin Wang, Mar- tin Renqiang Min, Qinliang Su, Yizhe Zhang, Chun- yuan Li, Ricardo Henao, and Lawrence Carin. 2018. Baseline needs more love: On simple word-embedding-based models and associated pool- ing mechanisms. In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 440- 450, Melbourne, Australia. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "A gold standard dependency corpus for english", |
|
"authors": [ |
|
{ |
|
"first": "Natalia", |
|
"middle": [], |
|
"last": "Silveira", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Dozat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marie-Catherine De", |
|
"middle": [], |
|
"last": "Marneffe", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Samuel", |

"middle": [ |

"R" |

], |

"last": "Bowman", |

"suffix": "" |

}, |

{ |

"first": "Miriam", |

"middle": [], |

"last": "Connor", |

"suffix": "" |

}, |

{ |

"first": "John", |

"middle": [], |

"last": "Bauer", |

"suffix": "" |

}, |

{ |

"first": "Christopher", |

"middle": [ |

"D" |

], |

"last": "Manning", |

"suffix": "" |

} |
|
], |
|
"year": 2014, |
|
"venue": "LREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2897--2904", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Natalia Silveira, Timothy Dozat, Marie-Catherine De Marneffe, Samuel R Bowman, Miriam Connor, John Bauer, and Christopher D Manning. 2014. A gold standard dependency corpus for english. In LREC, pages 2897-2904.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "BERT rediscovers the classical NLP pipeline", |
|
"authors": [ |
|
{ |
|
"first": "Ian", |
|
"middle": [], |
|
"last": "Tenney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dipanjan", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ellie", |
|
"middle": [], |
|
"last": "Pavlick", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4593--4601", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1452" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019a. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4593- 4601, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "What do you learn from context? probing for sentence structure in contextualized word representations", |
|
"authors": [ |
|
{ |
|
"first": "Ian", |
|
"middle": [], |
|
"last": "Tenney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Xia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Berlin", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Poliak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Mccoy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Najoung", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Samuel", |

"middle": [ |

"R" |

], |

"last": "Bowman", |

"suffix": "" |

}, |

{ |

"first": "Dipanjan", |

"middle": [], |

"last": "Das", |

"suffix": "" |

} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1905.06316" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R Bowman, Dipan- jan Das, et al. 2019b. What do you learn from context? probing for sentence structure in con- textualized word representations. arXiv preprint arXiv:1905.06316.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Analyzing the structure of attention in a transformer language model", |
|
"authors": [ |
|
{ |
|
"first": "Jesse", |
|
"middle": [], |
|
"last": "Vig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yonatan", |
|
"middle": [], |
|
"last": "Belinkov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "63--76", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-4808" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jesse Vig and Yonatan Belinkov. 2019. Analyzing the structure of attention in a transformer language model. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 63-76, Florence, Italy. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Investigating bert's knowledge of language: Five analysis methods with npis", |
|
"authors": [ |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Warstadt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Cao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ioana", |
|
"middle": [], |
|
"last": "Grosu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hagen", |
|
"middle": [], |
|
"last": "Blix", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yining", |
|
"middle": [], |
|
"last": "Nie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Alsop", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shikha", |
|
"middle": [], |
|
"last": "Bordia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haokun", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alicia", |
|
"middle": [], |
|
"last": "Parrish", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2870--2880", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alex Warstadt, Yu Cao, Ioana Grosu, Wei Peng, Ha- gen Blix, Yining Nie, Anna Alsop, Shikha Bordia, Haokun Liu, Alicia Parrish, et al. 2019. Investi- gating bert's knowledge of language: Five analysis methods with npis. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 2870-2880.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "The GUM corpus: Creating multilayer resources in the classroom. Language Resources and Evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Amir", |
|
"middle": [], |
|
"last": "Zeldes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "51", |
|
"issue": "", |
|
"pages": "581--612", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1007/s10579-016-9343-x" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Amir Zeldes. 2017. The GUM corpus: Creating mul- tilayer resources in the classroom. Language Re- sources and Evaluation, 51(3):581-612.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Language modeling teaches you more than translation does: Lessons learned through auxiliary syntactic task analysis", |
|
"authors": [ |
|
{ |
|
"first": "Kelly", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samuel", |
|
"middle": [], |
|
"last": "Bowman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "359--361", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kelly Zhang and Samuel Bowman. 2018. Language modeling teaches you more than translation does: Lessons learned through auxiliary syntactic task analysis. In Proceedings of the 2018 EMNLP Work- shop BlackboxNLP: Analyzing and Interpreting Neu- ral Networks for NLP, pages 359-361.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Layerwise \u03c1 scores for the three categories of raw-text, morphosyntactic and syntactic features. Layerwise average results are also reported. Each line in the four plots corresponds to a different aggregating strategy.", |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Differences between BERT and Word2vec scores (multiplied by 100) for all the 68 probing features (ranked by correlation with sentence length), obtained with the Mean aggregation strategy. BERT scores are reported for all the 12 layers. Positive (red) and negative (blue) cells correspond to scores for which BERT outperforms Word2vec and vice versa.", |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Probing scores obtained by BERT word (tok *) and sentence (mean) representations extracted from layers -1 and -8. Sentence embeddings are computed using the Mean method.", |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"2\">Level of Annotation Linguistic Feature</td><td>Label</td></tr><tr><td/><td>Sentence Length</td><td>sent length</td></tr><tr><td>Raw Text</td><td>Word Length</td><td>char per tok</td></tr><tr><td colspan=\"2\">specific POS Type/POS tagging Distibution of UD and language-</td><td>upos dist *, xpos dist *</td></tr><tr><td/><td>Lexical density</td><td>lexical density</td></tr><tr><td/><td>Inflectional morphology of lexical verbs</td><td>verbs *, aux *</td></tr><tr><td/><td>and auxiliaries (Mood, Number, Person,</td><td/></tr><tr><td/><td>Tense and VerbForm)</td><td/></tr><tr><td/><td>Depth of the whole syntactic tree</td><td>parse depth</td></tr><tr><td>Dependency Parsing</td><td>Average length of dependency links and</td><td>avg links len, max links len</td></tr><tr><td/><td>of the longest link</td><td/></tr><tr><td/><td>Average length of prepositional chains</td><td>avg prepositional chain len, prep dist *</td></tr><tr><td/><td>and distribution by depth</td><td/></tr><tr><td/><td>Clause length (n. tokens/verbal heads)</td><td>avg token per clause</td></tr><tr><td/><td>Order of subject and object</td><td>subj pre, obj post</td></tr><tr><td/><td>Verb arity and distribution of verbs by</td><td>avg verb edges, verbal arity *</td></tr><tr><td/><td>arity</td><td/></tr><tr><td/><td>Distribution of verbal heads and verbal</td><td>verbal head dist, verbal root perc</td></tr><tr><td/><td>roots</td><td/></tr><tr><td/><td>Distribution of dependency relations</td><td>dep dist *</td></tr><tr><td/><td>Distribution of subordinate and principal</td><td>principal proposition dist, subordinate proposition dist</td></tr><tr><td/><td>clauses</td><td/></tr><tr><td/><td>Average length of subordination chains</td><td>avg subordinate chain len, subordinate dist 1</td></tr><tr><td/><td>and distribution by depth</td><td/></tr><tr><td/><td>Relative order of subordinate clauses</td><td>subordinate post</td></tr></table>", |
|
"text": "Token Ratio for words and lemmas ttr form, ttr lemma", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"content": "<table><tr><td>Categories</td><td colspan=\"3\">Sum Min Max Mean</td></tr><tr><td>Raw text</td><td>0.56</td><td>0.51 0.51</td><td>0.46</td></tr><tr><td colspan=\"2\">Morphosyntax 0.59</td><td>0.52 0.54</td><td>0.61</td></tr><tr><td>Syntax</td><td>0.61</td><td>0.55 0.55</td><td>0.54</td></tr><tr><td>All features</td><td>0.60</td><td>0.54 0.55</td><td>0.57</td></tr></table>", |
|
"text": "BERT (average between layers) and Word2vec \u03c1 scores computed by averaging Max-, Min-, Mean and Sum scores according to the three linguistic levels of annotations and considering all the probing features (All features). Baseline scores are also reported.", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "Word2vec probing scores obtained with the four sentence combining methods.", |
|
"type_str": "table", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |