{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:22:01.506159Z"
},
"title": "What BERTs and GPTs know about your brand? Probing contextual language models for affect associations",
"authors": [
{
"first": "Vivek",
"middle": [],
"last": "Srivastava",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "TCS Research",
"location": {
"settlement": "Pune",
"country": "India"
}
},
"email": "[email protected]"
},
{
"first": "Stephen",
"middle": [],
"last": "Pilli",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "TCS Research",
"location": {
"settlement": "Pune",
"country": "India"
}
},
"email": "[email protected]"
},
{
"first": "Savita",
"middle": [],
"last": "Bhat",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "TCS Research",
"location": {
"settlement": "Pune",
"country": "India"
}
},
"email": "[email protected]"
},
{
"first": "Niranjan",
"middle": [],
"last": "Pedanekar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "TCS Research",
"location": {
"settlement": "Pune",
"country": "India"
}
},
"email": "[email protected]"
},
{
"first": "Shirish",
"middle": [],
"last": "Karande",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "TCS Research",
"location": {
"settlement": "Pune",
"country": "India"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Investigating brand perception is fundamental to marketing strategies. In this regard, brand image, defined by a set of attributes (Aaker, 1997), is recognized as a key element in indicating how a brand is perceived by various stakeholders such as consumers and competitors. Traditional approaches (e.g., surveys) to monitor brand perceptions are time-consuming and inefficient. In the era of digital marketing, both brand managers and consumers engage with a vast amount of digital marketing content. The exponential growth of digital content has propelled the emergence of pre-trained language models such as BERT and GPT as essential tools in solving myriads of challenges with textual data. This paper seeks to investigate the extent of brand perceptions (i.e., brand and image attribute associations) these language models encode. We believe that any kind of bias for a brand and attribute pair may influence customer-centric downstream tasks such as recommender systems, sentiment analysis, and question-answering, e.g., suggesting a specific brand consistently when queried for 'innovative' products. We use synthetic data and real-life data and report comparison results for five contextual LMs, viz. BERT, RoBERTa, DistilBERT, ALBERT and BART.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Investigating brand perception is fundamental to marketing strategies. In this regard, brand image, defined by a set of attributes (Aaker, 1997), is recognized as a key element in indicating how a brand is perceived by various stakeholders such as consumers and competitors. Traditional approaches (e.g., surveys) to monitor brand perceptions are time-consuming and inefficient. In the era of digital marketing, both brand managers and consumers engage with a vast amount of digital marketing content. The exponential growth of digital content has propelled the emergence of pre-trained language models such as BERT and GPT as essential tools in solving myriads of challenges with textual data. This paper seeks to investigate the extent of brand perceptions (i.e., brand and image attribute associations) these language models encode. We believe that any kind of bias for a brand and attribute pair may influence customer-centric downstream tasks such as recommender systems, sentiment analysis, and question-answering, e.g., suggesting a specific brand consistently when queried for 'innovative' products. We use synthetic data and real-life data and report comparison results for five contextual LMs, viz. BERT, RoBERTa, DistilBERT, ALBERT and BART.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Brands play a vital role in marketing strategies. They are essential to company positioning, marketing campaigns, customer relationships, and profits (Lovett et al., 2014) . A brand persona is broadly defined by a set of attributes or dimensions; for instance, 'Mountain Dew' may be recognized by attributes such as 'adventurous' and 'rugged'. While Aaker's dimensions (Aaker, 1997) are widely used to define a brand persona, more fine-grained attributes are documented in Lovett et al. (2014) .",
"cite_spans": [
{
"start": 150,
"end": 171,
"text": "(Lovett et al., 2014)",
"ref_id": "BIBREF22"
},
{
"start": 369,
"end": 382,
"text": "(Aaker, 1997)",
"ref_id": "BIBREF0"
},
{
"start": 473,
"end": 493,
"text": "Lovett et al. (2014)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Furthermore, evaluating a brand persona, i.e., how a brand is perceived by various stakeholders such as consumers, competitors, and market analysts has been an active area of research (Culotta and Cutler, 2016; Davies et al., 2018) . Following the widespread success of pre-trained word representations, alternatively called Language Models (LMs), consumer-specific downstream tasks such as recommender systems, dialogues systems, and information retrieval engines look to make use of brand persona along with these representations to better fulfill consumer requirements.",
"cite_spans": [
{
"start": 184,
"end": 210,
"text": "(Culotta and Cutler, 2016;",
"ref_id": "BIBREF4"
},
{
"start": 211,
"end": 231,
"text": "Davies et al., 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Accordingly, we formulate our first research question (RQ1) as Do LMs store implicit associations between brands and brand image attributes?.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To answer this, we look specifically at brands and brand image defined as affect attributes. Since LMs are trained on real-world data; we believe that these representations may be useful in understanding correlations between a brand and its persona attributes. While numerous studies have investigated unintended biases in Natural Language Processing systems (Dev et al., 2020; Dixon et al., 2018; Bolukbasi et al., 2016; Kiritchenko and Mohammad, 2018; Hutchinson et al., 2020) , this is probably the first work that explores brand and affect attributes associations in pre-trained LMs.",
"cite_spans": [
{
"start": 359,
"end": 377,
"text": "(Dev et al., 2020;",
"ref_id": "BIBREF6"
},
{
"start": 378,
"end": 397,
"text": "Dixon et al., 2018;",
"ref_id": "BIBREF8"
},
{
"start": 398,
"end": 421,
"text": "Bolukbasi et al., 2016;",
"ref_id": "BIBREF2"
},
{
"start": 422,
"end": 453,
"text": "Kiritchenko and Mohammad, 2018;",
"ref_id": "BIBREF17"
},
{
"start": 454,
"end": 478,
"text": "Hutchinson et al., 2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "These LMs are trained in an unsupervised manner on large-scale corpora. The training corpora generally comprise a variety of textual data such as common web crawl, Wikipedia dump, and book corpora. They are optimized to statistical properties of the training data from which they pick up and amplify real-world trends and associations along with biases such as gender and race (Kurita et al., 2019) . Some of these biases may be beneficial for downstream applications (e.g., filtering out mature content for non-adult viewers) while some can be inappropriate (e.g., resume sorting system believing men are more qualified programmers than women (Bolukbasi et al., 2016; Kiritchenko and Mohammad, 2018) . Marketing applications such as recommender systems and sentiment analysis can also perpetuate and highlight unfair biases, such as consistently showing popular brands as recommendations and not considering uncommon brands with less positive sentiment. With this in mind, we formulate our second research question (RQ2) as Do the associations embedded in LMs signify any bias? We also investigate whether these associations are consistent across all LMs as RQ3.",
"cite_spans": [
{
"start": 377,
"end": 398,
"text": "(Kurita et al., 2019)",
"ref_id": "BIBREF18"
},
{
"start": 644,
"end": 668,
"text": "(Bolukbasi et al., 2016;",
"ref_id": "BIBREF2"
},
{
"start": 669,
"end": 700,
"text": "Kiritchenko and Mohammad, 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Brand personas are alternatively characterized as brand archetypes in Bechter's work (Bechter et al., 2016) . Brand archetypes are widely used as effective branding and marketing strategy. According to Jung (Jung, 1954) , archetypes are defined as inherent images within the collective human unconsciousness having universal meaning across cultures and generations. When successfully used, archetypal branding provides a narrative to connect with consumers. We formulate the following research questions: RQ4 as Do LMs capture brand personality intended by a brand? and RQ5 as Do LMs capture brand personality as perceived by consumers? We propose to use brand-attribute associations to understand brand archetypes perceived by LMs.",
"cite_spans": [
{
"start": 85,
"end": 107,
"text": "(Bechter et al., 2016)",
"ref_id": "BIBREF1"
},
{
"start": 207,
"end": 219,
"text": "(Jung, 1954)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we probe five different LMs ( BERT (Devlin et al., 2018) , ALBERT (Lan et al., 2019) , RoBERTa , DistilBERT (Sanh et al., 2019) and BART ) on affect associations by using Masked Language Model (MLM) head. The choice of LMs was guided by three factors: 1) availability of MLM head, 2) variety in model architectures and 3) type and size of training data used while pre-training. Table 1 summarizes all the five LMs based on the pre-training data and the architecture. We believe that diversity in architectures and training data can influence the affective associations stored in representations. We propose to evaluate word representations based on following dimensions: 1) contextual similarity (Ethayarajh, 2019) , 2) statistical implicit association tests (Kurita et al., 2019; , 3) controlled probing tasks (Talmor et al., 2019) and 4) brand archetypes (Bechter et al., 2016) . We observe that LMs do encode affective associations between brands and image attributes (RQ1). Some of these associations are consistently observed across multiple LMs (RQ3) and are shown to be further enhanced by finetuning thus implying certain bias (RQ2). We find that brand images or personality captured by LMs do not concur with either intended or consumer perceived brand personality. We believe that appropriate dataset and more rigor is needed to address RQ4 and RQ5. ",
"cite_spans": [
{
"start": 49,
"end": 70,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF7"
},
{
"start": 80,
"end": 98,
"text": "(Lan et al., 2019)",
"ref_id": "BIBREF19"
},
{
"start": 122,
"end": 141,
"text": "(Sanh et al., 2019)",
"ref_id": "BIBREF27"
},
{
"start": 710,
"end": 728,
"text": "(Ethayarajh, 2019)",
"ref_id": "BIBREF9"
},
{
"start": 773,
"end": 794,
"text": "(Kurita et al., 2019;",
"ref_id": "BIBREF18"
},
{
"start": 825,
"end": 846,
"text": "(Talmor et al., 2019)",
"ref_id": "BIBREF29"
},
{
"start": 871,
"end": 893,
"text": "(Bechter et al., 2016)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 392,
"end": 399,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
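The following is a minimal sketch, not the paper's own code, of how such an MLM-head probe can be run with the Hugging Face transformers library; the checkpoint names and the cloze template are our assumptions.

```python
# Minimal sketch (assumed checkpoints and template): query each model's MLM
# head for the masked attribute slot in a brand cloze sentence.
from transformers import pipeline

checkpoints = {
    "BERT": "bert-base-uncased",
    "RoBERTa": "roberta-base",
    "DistilBERT": "distilbert-base-uncased",
    "ALBERT": "albert-base-v2",
    "BART": "facebook/bart-base",
}

for name, ckpt in checkpoints.items():
    fill = pipeline("fill-mask", model=ckpt)
    # Each tokenizer defines its own mask token ([MASK] vs. <mask>).
    template = f"I should buy Nike because it is {fill.tokenizer.mask_token}."
    top5 = fill(template, top_k=5)
    print(name, [pred["token_str"].strip() for pred in top5])
```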
{
"text": "The success of pre-trained word embeddings in achieving state-of-the-art results has sparked widespread interest in investigating information captured in these representations. Typically defined as 'probing task', a wide variety of analyses have been proposed. For instance, (Hewitt and Manning, 2019) proposes a structural probe to test whether syntax trees are embedded in word representation space. Experiments in (Wallace et al., 2019) are aimed to investigate the numerical reasoning capabilities of an LM. Similarly, (Petroni et al., 2019) presents an in-depth analysis of relational knowledge present in pre-trained LMs. Penha and Hauff (2020) probe the contextual LMs (BERT and RoBERTa) for the conversational recommendation of books, movies, and music. Our work seeks to apply the idea of probing to a relatively unexplored area of affect analysis. To the best of our knowledge, this is the first work that presents a multi-pronged investigation of brands and subjective knowledge like affect attributes represented in contextual representation. Field and Tsvetkov (2019) is the most relevant prior work in terms of affect analysis. They present an entity-centric affective analysis with the use of contextual representations, where they find that meaningful affect information is captured in contextualize word representations but these representations are heavily biased towards their training data. A significant effort has been seen in investigating the intrinsic bias in word embeddings. These representations are trained in an unsupervised manner using a large amount of training data typically consisting of common web crawls. As a result, all kinds of biases like gender, race, demography along with trends and preferences get encoded in LMs. Works in (Kurita et al., 2019; Dev et al., 2020; propose methodologies to measure and mitigate bias in word representations. Our work is targeted at finding trends and preferences that certain entities have by using a combination of old and new such measures.",
"cite_spans": [
{
"start": 275,
"end": 301,
"text": "(Hewitt and Manning, 2019)",
"ref_id": "BIBREF13"
},
{
"start": 417,
"end": 439,
"text": "(Wallace et al., 2019)",
"ref_id": "BIBREF30"
},
{
"start": 523,
"end": 545,
"text": "(Petroni et al., 2019)",
"ref_id": "BIBREF25"
},
{
"start": 628,
"end": 650,
"text": "Penha and Hauff (2020)",
"ref_id": "BIBREF24"
},
{
"start": 1055,
"end": 1080,
"text": "Field and Tsvetkov (2019)",
"ref_id": "BIBREF11"
},
{
"start": 1769,
"end": 1790,
"text": "(Kurita et al., 2019;",
"ref_id": "BIBREF18"
},
{
"start": 1791,
"end": 1808,
"text": "Dev et al., 2020;",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this work, we evaluate affect information captured in the LMs for different brands. Accordingly, the selected brands should have large volumes of online data to get significant representation in the LMs. We choose 697 major US national brands reported in (Lovett et al., 2014) . These brands are categorized into 16 different product categories. To analyze affect associations, we refer to surveys conducted by Young and Rubicam (Y&R) (Lovett et al., 2014) to measure a broad array of perceptions and attributes for a large number of brands. We choose 40 affect attributes listed as a part of 'Brand Image' in (Lovett et al., 2014) . We also manually map (see Table 8 in supplementary material and Bechter et al. (2016)) these attributes to one of the five Aaker's dimensions of brand personality. We restrict our analysis only to positive affect attributes since 'Arrogant' and 'Unapproachable' were the only two negative affect attributes observed in Y&R surveys. We understand the analysis with negative attributes is essential to explore the complete brand perception and we intend to pursue this in future. We consider three different data sources for our experiments as tabulated in Table 2 . We choose appropriate datasets based on experiments' requirements. We describe the datasets in detail in supplementary material.",
"cite_spans": [
{
"start": 258,
"end": 279,
"text": "(Lovett et al., 2014)",
"ref_id": "BIBREF22"
},
{
"start": 438,
"end": 459,
"text": "(Lovett et al., 2014)",
"ref_id": "BIBREF22"
},
{
"start": 613,
"end": 634,
"text": "(Lovett et al., 2014)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 663,
"end": 670,
"text": "Table 8",
"ref_id": null
},
{
"start": 1192,
"end": 1200,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3"
},
{
"text": "We outline our approach for exploring answers to the research questions stated above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "\u2022 RQ1, RQ3: Understanding brand and attribute word association at different layers of the LMs (see contextual geometry in Section 4.1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "\u2022 RQ1, RQ2, RQ3, RQ4, RQ5: Analyzing closeness between the brand and attribute words using statistical tests (see implicit association test in Section 4.2). \u2022 RQ1: Probing for the association as well as the influence of brand name and the surrounding context on the attribute word (see probing task in Section 4.3). \u2022 RQ4: Examining brand perceptions in terms of archetypes and affect attributes (see brand archetype in Section 4.4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "Taking inspiration from (Ethayarajh, 2019) , we use geometrical analysis to understand associations between brands and brand image attributes. Ethayarajh (2019) analyzes geometry of contextual representations across different layers. We follow the same approach to specifically analyze representations for brands and affect attributes. We use two metrics introduced in (Ethayarajh, 2019): selfsimilarity and intra-sentence similarity. Additionally, we use a similar methodology to define associations among brand words and affect words. We consider Ads. Dataset data for these experiments.",
"cite_spans": [
{
"start": 24,
"end": 42,
"text": "(Ethayarajh, 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Geometry",
"sec_num": "4.1"
},
{
"text": "Let bw be a brand word and aw be an attribute or affect word appearing in sentences",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Geometry",
"sec_num": "4.1"
},
{
"text": "{s 1 , s 2 , ..., s n } at positions {i 1 , i 2 , .., i n } and {j 1 , j 2 , .., j n } re- spectively. Accordingly, bw = s 1 [i 1 ] = s 2 [i 2 ] = .. = s n [i n ] and aw = s 1 [j 1 ] = s 2 [j 2 ] = .. = s n [j n ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Geometry",
"sec_num": "4.1"
},
{
"text": "with i k and j k representing positions in sentence s k . In other words, a brand word bw is the i th 1 word in sentence s 1 and attribute word aw is the j th 1 word in sentence s 1 . Let f l (s, i) be a function that maps s[i] to its representation in layer l of language model f (Ethayarajh, 2019) . Then,",
"cite_spans": [
{
"start": 281,
"end": 299,
"text": "(Ethayarajh, 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Geometry",
"sec_num": "4.1"
},
{
"text": "The affect-similarity between bw and aw in layer l is defined as the average cosine similarity between contextualized representations of brand and attribute across n unique contexts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "affect-similarity",
"sec_num": "4.1.1"
},
{
"text": "Af f Sim l (bw, aw) = 1 n k cos(f l (s k , i k ), f l (s k , j k )) Dataset",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "affect-similarity",
"sec_num": "4.1.1"
},
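A sketch of how the affect-similarity above can be computed from an LM's hidden states, under our assumptions: a BERT checkpoint chosen for illustration, single-subtoken brand and attribute words, and the first occurrence of each word in the sentence.

```python
# Sketch of affect-similarity: average cosine similarity between the layer-l
# representations of a brand word and an attribute word across contexts.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

def layer_rep(sentence: str, word: str, layer: int) -> torch.Tensor:
    """Layer-`layer` representation of `word`'s first occurrence in `sentence`."""
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).hidden_states[layer][0]  # (seq_len, dim)
    word_id = tok.convert_tokens_to_ids(tok.tokenize(word)[0])
    pos = (enc["input_ids"][0] == word_id).nonzero()[0].item()
    return hidden[pos]

def aff_sim(sentences, bw, aw, layer):
    """Average cosine similarity of bw and aw over shared contexts."""
    sims = [torch.cosine_similarity(layer_rep(s, bw, layer),
                                    layer_rep(s, aw, layer), dim=0)
            for s in sentences]
    return torch.stack(sims).mean().item()

print(aff_sim(["disney is a magical brand."], "disney", "magical", layer=8))
```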
{
"text": "Data Example Brand Attribute Ads. Dataset (Hussain et al., 2017) 35k Action Reason pairs \"I should buy Converse shoes because they are stylish.\" Converse stylish BCD (Roy et al., 2019) 1962 sentences from webpages containing both brand and affect attributes \"Verizon is a global leader delivering innovative communications solutions.\"",
"cite_spans": [
{
"start": 42,
"end": 64,
"text": "(Hussain et al., 2017)",
"ref_id": "BIBREF14"
},
{
"start": 166,
"end": 184,
"text": "(Roy et al., 2019)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "affect-similarity",
"sec_num": "4.1.1"
},
{
"text": "Verizon innovative Synthetic (Table 16 in Supplementary Material) 40 hand crafted sentences \"Apple is a trendy brand.\" Apple trendy ",
"cite_spans": [],
"ref_spans": [
{
"start": 29,
"end": 65,
"text": "(Table 16 in Supplementary Material)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "affect-similarity",
"sec_num": "4.1.1"
},
{
"text": "The intra-brand similarity between a pair of brand words in layer l is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "intra-brand similarity",
"sec_num": "4.1.2"
},
{
"text": "IntraBrandSim l (bw i , bw j ) = 1 n(n \u2212 1) k p =k cos(f l (s k , i k ), f l (s p , j p ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "intra-brand similarity",
"sec_num": "4.1.2"
},
{
"text": "In other words, the intra-brand similarity provides average cosine similarity between representations of two brands across n different contexts. This measure captures how close the two brands are in the vector space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "intra-brand similarity",
"sec_num": "4.1.2"
},
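Intra-brand similarity admits an analogous sketch, reusing the layer_rep helper and torch import from the affect-similarity sketch above; pairing every context of one brand with every different context of the other is our reading of the double sum.

```python
# Sketch of intra-brand similarity: average cosine similarity between two
# brand words' layer-l representations, taken from different contexts.
from itertools import product

def intra_brand_sim(contexts_a, contexts_b, brand_a, brand_b, layer):
    sims = [torch.cosine_similarity(layer_rep(sa, brand_a, layer),
                                    layer_rep(sb, brand_b, layer), dim=0)
            for sa, sb in product(contexts_a, contexts_b) if sa != sb]
    return torch.stack(sims).mean().item()
```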
{
"text": "Similarly, we define the intra-attribute similarity between a pair of attributes in layer l as the average cosine similarity between two attributes across n different contexts. This measure helps us understand the association between different affect words in the vector space and can be used while defining and analyzing brand persona.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "intra-attribute similarity",
"sec_num": "4.1.3"
},
{
"text": "The Implicit Association Test (IAT) (Greenwald et al., 1998) in its purest form measures association between two target concepts with respect to an attribute. This test has enabled the examination of unconscious thought processes and implicit biases among people in different contexts (Sleek, 2018) . We believe that a variety of implicit biases and associations may be encoded in LMs. We use two interpretations of IAT (viz. WEAT and RIPA) to investigate brand and attribute associations in LMs.",
"cite_spans": [
{
"start": 36,
"end": 60,
"text": "(Greenwald et al., 1998)",
"ref_id": "BIBREF12"
},
{
"start": 285,
"end": 298,
"text": "(Sleek, 2018)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implicit Association Tests",
"sec_num": "4.2"
},
{
"text": "The Word Embedding Association Test (WEAT) (Caliskan et al., 2017) for non-contextual word embeddings shows implicit biases captured in these representations. May et al. (2019) extend this test to sentence embeddings for contextual LMs. Since our focus is on words; we follow the approach used in (Kurita et al., 2019) to adapt WEAT for words. We also consider the new measure, log-probability bias score, introduced in (Kurita et al., 2019) . This test follows a similar approach to WEAT except for the cosine similarity computation between target word and attributes is replaced by log-probability.",
"cite_spans": [
{
"start": 43,
"end": 66,
"text": "(Caliskan et al., 2017)",
"ref_id": "BIBREF3"
},
{
"start": 159,
"end": 176,
"text": "May et al. (2019)",
"ref_id": "BIBREF23"
},
{
"start": 297,
"end": 318,
"text": "(Kurita et al., 2019)",
"ref_id": "BIBREF18"
},
{
"start": 420,
"end": 441,
"text": "(Kurita et al., 2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implicit Association Tests",
"sec_num": "4.2"
},
{
"text": "The work in proves that any embedding model that implicitly does matrix factorization, subspace projection under certain conditions, can be considered as debiasing the embedding vectors. Accordingly, they propose a new method of the association called relational inner product association (RIPA) that uses the subspace projection method. We adapt RIPA measure for brands and attribute words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implicit Association Tests",
"sec_num": "4.2"
},
{
"text": "Both log-probability and RIPA have been proposed as an alternative to the basic WEAT association test. We detail the experimental structure for these tests below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implicit Association Tests",
"sec_num": "4.2"
},
{
"text": "The WEAT test simulates the human implicit association test for word embeddings, measuring the association between two equal-sized sets of target concepts and two sets of attributes (May et al., 2019) . Specifically, in our case, we consider highlevel brand categories as target concept sets and Aaker's dimensions as attribute sets. Specific details about test statistics along with permutation test and effect size can be found in (Caliskan et al., 2017; May et al., 2019; Kurita et al., 2019) .",
"cite_spans": [
{
"start": 182,
"end": 200,
"text": "(May et al., 2019)",
"ref_id": "BIBREF23"
},
{
"start": 433,
"end": 456,
"text": "(Caliskan et al., 2017;",
"ref_id": "BIBREF3"
},
{
"start": 457,
"end": 474,
"text": "May et al., 2019;",
"ref_id": "BIBREF23"
},
{
"start": 475,
"end": 495,
"text": "Kurita et al., 2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "WEAT",
"sec_num": "4.2.1"
},
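A sketch of the WEAT statistic's effect size (Caliskan et al., 2017) over precomputed word vectors (e.g., averaged LM representations); the set contents and the vector source are our assumptions.

```python
# Sketch of the WEAT effect size: X, Y are brand-category target vectors and
# A, B are Aaker-dimension attribute vectors (all precomputed).
import numpy as np

def cos(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def assoc(w, A, B):
    # Differential association of one target word with the two attribute sets.
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    sx = [assoc(x, A, B) for x in X]
    sy = [assoc(y, A, B) for y in Y]
    # Cohen's-d-style effect size; its sign shows direct/inverse association.
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)
```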
{
"text": "We consider the same set of broad categories for brands and Aaker's dimensions for attributes as target and attribute sets respectively for finding logprobability score. Similar to (Kurita et al., 2019) , we compute the mean log probability bias score for each attribute and permute the attributes to measure statistical significance with the permutation test.",
"cite_spans": [
{
"start": 181,
"end": 202,
"text": "(Kurita et al., 2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Log-probability score",
"sec_num": "4.2.2"
},
{
"text": "For both WEAT and log-probability test, we use synthetic data generated by appropriate handcrafted templates. We apply these tests to all combinations of brand categories and Aaker's dimensions. We apply these tests on combinations of all brand categories except 'Food and Dining' and 5 Aaker's affect dimensions. We use the pairwise ranking to rank these combinations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Log-probability score",
"sec_num": "4.2.2"
},
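A sketch of the log-probability bias score (Kurita et al., 2019), assuming a BERT checkpoint, an illustrative template, and a single-subtoken attribute word sitting at the last [MASK] position.

```python
# Sketch of the log-probability bias score for a brand-attribute pair.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def attr_logprob(sentence: str, attribute: str) -> float:
    """Log-probability of `attribute` at the last [MASK] in `sentence`."""
    enc = tok(sentence, return_tensors="pt")
    pos = (enc["input_ids"][0] == tok.mask_token_id).nonzero().flatten()[-1]
    with torch.no_grad():
        logits = mlm(**enc).logits[0, pos]
    attr_id = tok.convert_tokens_to_ids(attribute)  # assumes one subtoken
    return torch.log_softmax(logits, dim=-1)[attr_id].item()

# Score: attribute log-prob with the brand present, minus a prior computed
# with the brand masked out as well.
score = (attr_logprob("nike is a [MASK] brand.", "rugged")
         - attr_logprob("[MASK] is a [MASK] brand.", "rugged"))
print(score)
```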
{
"text": "For our affect analysis formulation, we define RIPA as the projection of the affect word vector i.e. attribute onto the bias subspace defined by a pair of brands. We use handcrafted templates to generate sentences corresponding to 40 attributes combined with brand words. Thus, we get 40 representations for every brand and 697 representations for every attribute. Final brand and attribute vectors are computed by taking an average of corresponding vector sets. RIPA score between each attribute word and a pair of brand words is then calculated by taking the inner product of the first principal component of the subspace defined by the pair of brand words and attribute word. For a brand pair (x,y) and an attribute word w, a positive RIPA score suggests the relatively more association of w with the brand x and vice-versa.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RIPA",
"sec_num": "4.2.3"
},
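A sketch of this RIPA adaptation, assuming the brand and attribute vectors have already been averaged over the handcrafted template sentences as described above.

```python
# Sketch of the RIPA score: project an attribute vector onto the bias
# direction (first principal component) of a brand pair.
import numpy as np

def ripa(brand_x: np.ndarray, brand_y: np.ndarray, attr: np.ndarray) -> float:
    pair = np.stack([brand_x, brand_y])
    pair -= pair.mean(axis=0)                  # center the pair of vectors
    _, _, vt = np.linalg.svd(pair, full_matrices=False)
    direction = vt[0]                          # first principal component
    if np.dot(brand_x - brand_y, direction) < 0:
        direction = -direction                 # orient so brand_x is positive
    # Positive score: attribute leans toward brand_x; negative: brand_y.
    return float(np.dot(attr, direction))
```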
{
"text": "A large body of research comprising of probing tasks is dedicated to exploring what is captured by contextual LMs. We define two probing tasks that are essentially cloze tasks to analyze brand and affect attributes associations. In the simplest form, we consider MLM setup: given a sentence with brand and masked attribute word, we use pretrained LM with MLM head to predict words at the masked position. If a model predicts the correct attribute in the top-5 position, then we infer that the model representations have captured the corresponding affect association. Additionally, to understand the behavior after fine-tuning, we introduce MLP with a 1-hidden layer to the MLM setup to train the LMs as discussed in (Talmor et al., 2019) ; we call this setup MLP-MLM.",
"cite_spans": [
{
"start": 716,
"end": 737,
"text": "(Talmor et al., 2019)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probing Tasks",
"sec_num": "4.3"
},
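A sketch of the MLM probing setup just described; the (sentence, attribute) input format and the checkpoint are our assumptions.

```python
# Sketch of the MLM cloze probe: mask the attribute in each sentence and
# count the association as captured when it appears in the top-5 predictions.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

def top5_accuracy(examples):
    hits = 0
    for sentence, attribute in examples:
        masked = sentence.replace(attribute, fill.tokenizer.mask_token, 1)
        preds = [p["token_str"].strip() for p in fill(masked, top_k=5)]
        hits += attribute in preds
    return hits / len(examples)

print(top5_accuracy([
    ("i should buy converse shoes because they are stylish.", "stylish"),
]))
```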
{
"text": "To further analyze sensitivity to context, we define perturbed language control, where we introduce nonsensical words into the sentences. We observe if there is any effect of nonsense words to affect associations. MLM setup is used to experiment on all LMs using Ads. Dataset and BCD datasets, whereas MLP-MLM uses only Ads. Dataset and is experimented on all the LMs except BART.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probing Tasks",
"sec_num": "4.3"
},
{
"text": "Brand archetypes provide a relatable connection between brands and consumers. We consider implicit and explicit perceptions of archetypes. We use Lovett's data (Lovett et al., 2014) to understand people's tacit perceptions about brand archetypes in terms of affect attributes. We believe that training data used for pre-training LMs may record impressions about the brand in the wild. Accordingly, we consider pre-trained LMs to investigate the explicit perceptions for archetypes. We consider 12 archetypes (Jung, 1954) for this analysis. We manually map every archetype to a set of affect attributes from Lovett's attributes (Lovett et al., 2014) with the help from (Bechter et al., 2016 ) (see Table 8 and 10 in Supplementary Material).",
"cite_spans": [
{
"start": 160,
"end": 181,
"text": "(Lovett et al., 2014)",
"ref_id": "BIBREF22"
},
{
"start": 627,
"end": 648,
"text": "(Lovett et al., 2014)",
"ref_id": "BIBREF22"
},
{
"start": 668,
"end": 689,
"text": "(Bechter et al., 2016",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 697,
"end": 704,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Brand Archetypes",
"sec_num": "4.4"
},
{
"text": "To understand the brand archetype information captured in the LMs, we take the intersection of the top attributes obtained using the brand-attribute affect similarity and the attributes for a given archetype (obtained after manual mapping). First, we identify the top-5 attributes for a given brand using the affect similarity score and then we take the percentage overlap with the list of attributes corresponding to each of the archetypes. The percentage overlap suggests the degree of brand archetyperelated knowledge instilled in the LMs. To better evaluate our results qualitatively we choose five brands (Adidas, Apple, GAP, Pepsi, and Porsche) from different brand categories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Brand Archetypes",
"sec_num": "4.4"
},
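A sketch of the overlap computation; the archetype-to-attribute mapping shown is an illustrative placeholder for the manual mapping in Tables 8 and 10.

```python
# Sketch of the archetype overlap: fraction of a brand's top-5 affect-similar
# attributes found in each archetype's attribute list.
def archetype_overlap(top5_attrs, archetype_map):
    top5 = set(top5_attrs)
    return {arch: len(top5 & set(attrs)) / len(top5)
            for arch, attrs in archetype_map.items()}

archetype_map = {  # placeholder mapping, not the paper's Table 8/10
    "Hero": ["bold", "rugged", "confident"],
    "Creator": ["innovative", "original", "visionary"],
}
print(archetype_overlap(["innovative", "fun", "bold", "stylish", "magical"],
                        archetype_map))
```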
{
"text": "We present a battery of analyses aimed at finding how much knowledge do the off-the-shelf LMs capture about brands and affect attributes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "We believe that brand persona can be succinctly defined by a set of affect words, namely attributes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Affect Association",
"sec_num": "5.1"
},
{
"text": "We make use of intra-attribute similarity to understand which of the attributes are closer to each other in embedding space. Using intra-brand similarity, we also examine how the brands of a category are positioned in the vector space. Additionally, the affect similarity helps us find the correlation between brand and affect words. We argue that a brand persona can be identified by combining results from these three measures. It should be noted that some of these associations of brands and attributes are indeed consistent across all LMs (RQ1, RQ3). Table 3 reports some of the most similar and least similar associations. By far, brands of category 'Cars' are seen to have high similarity among themselves consistently across all LMs. In some instances, brands of categories 'Technology' and 'Telecommunication' are found to have a close association. Similarly, cliques of attributes are observed such as elegant, lovely, fashionable, popular in BERT and reliable, efficient, helpful, convenient in Dis-tilBERT. These clusters of attributes can further be beneficial in defining a brand persona. Using the affect-similarity, we found interesting associations between brands and attributes. For instance, brand 'Disney' is associated most with attributes , 'magical' and 'fun' across all LMs whereas brand 'IBM' is highly associated with 'innovative' and 'intelligent'. These positive associations help understand the brand persona. We also observe the least similar relations across all LMs. There are some surprising results, such as brands 'Intel' and 'Samsung' not being 'efficient' and 'Best' respectively. Such associations may not be what brand marketing teams would want to portray for their brands. We believe that these negative associations are also important in identifying the perception of a brand.",
"cite_spans": [],
"ref_spans": [
{
"start": 555,
"end": 563,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Affect Association",
"sec_num": "5.1"
},
{
"text": "The self-similarity metric provides a measure to evaluate the contextualization of a word. Following (Ethayarajh, 2019) , lower self-similarity is observed when the representations are more contextualized. We compare the average self-similarity of a representative brand and attribute words for each layer of selected LMs. For all five models, selfsimilarity is lower in upper layers or final layers i.e. the word representations are more context-specific. Out of five LMs, RoBERTa representations have the lowest self-similarity. Furthermore, it should be noted that different words have different levels of context specificity in different LMs. Ethayarajh 2019observes that the variety of context is important for having variations in representation and common words or popular words like 'the', 'of ' and 'to' generally have larger variation in their representations. We believe that popular brands have the diverse contexts in the training data used for pre-training the LMs and hence are more contextualized. As can be seen in Figure 1 , representations for Google are more context-specific as compared to those for Gymboree. Affect words 'good','bad' and 'exceptional also have different context specificity implying a certain kind of inequality in the encoded knowledge corresponding to different words. This pattern is observed across all LMs implying that variation in representations is consistent irrespective of the amount of training data used while pre-training.",
"cite_spans": [
{
"start": 101,
"end": 119,
"text": "(Ethayarajh, 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 1032,
"end": 1040,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Contextual Representation",
"sec_num": "5.2"
},
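A sketch of the self-similarity metric (Ethayarajh, 2019), reusing the layer_rep helper and torch import from the affect-similarity sketch above; the paper's anisotropy adjustment is omitted here.

```python
# Sketch of self-similarity: average pairwise cosine similarity of one word's
# layer-l representations across different contexts (lower = more contextual).
from itertools import combinations

def self_similarity(sentences, word, layer):
    reps = [layer_rep(s, word, layer) for s in sentences]
    sims = [torch.cosine_similarity(a, b, dim=0)
            for a, b in combinations(reps, 2)]
    return torch.stack(sims).mean().item()
```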
{
"text": "In WEAT as well as in Log Probability, the null hypothesis is that there is no significant difference between the two sets of brand categories in terms of their relative similarity to the two sets of Aaker's dimensions. The polarity of the effect size indi- cates that the categories and dimensions are directly or inversely related. For example, consider, the Sports/Health in brand category and sincerity/ruggedness in Aaker's Dimensions from Table 4 the polarity of effect size indicates that they are inversely related, which means 'Sports' is more associated with 'ruggedness' similarly 'Health' is to the 'sincerity' (RQ2). Since we are considering the permutation test, the p-value indicates the significance of their association. Most of these associations are consistently observed across all LMs (RQ1, RQ3). This has intrigued us to further examine which LM is better at capturing brand personality as perceived by consumers. The pairwise ranking is applied to all the combinations of brand categories and Aaker's dimensions (Aaker, 1997) .",
"cite_spans": [
{
"start": 1036,
"end": 1049,
"text": "(Aaker, 1997)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 445,
"end": 453,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Implicit Association Tests",
"sec_num": "5.3"
},
{
"text": "The resultant ranked dimensions of all the categories are assessed against the ground truth values/ consumers perception (please refer Table 9 in Supplementary Material) in Lovett's data (Lovett et al., 2014) . Using the same procedure, all the LMs are ranked independently for each brand category (refer to Table 15 in Supplementary Material). We observe that BERT has better agreement with consumers' perceptions of brand personality amongst all the language models in both WEAT and Log Probability (RQ5). Though RoBERTa did follow, other LMs agree equally likely in Log Probability. Furthermore, DistilBERT has a consistently poor agreement in Log Probability. One interesting observation is that WEAT and Log Probability give the same ranking for all LMs in the 'Cars' brand category. RIPA test measures the word embedding association using the subspace projection method . A positive score suggests that brand x is more associated with attribute word w than brand y for a given brand pair (x,y) and attribute word w. We combine this score for a brand with all attributes to compute a preference score for a brand. Based on this preference score, we found the most associated brands for every attribute word. Representative results are presented in Table 5 . We observe that the predictions across different LMs for a given category are occasionally consistent (e.g., YouTube being associated as a fun brand in RoBERTa, DistilBERT, and ALBERT) (RQ3). This could be attributed to the perception of brands being captured by the various LMs. Also, we see the diversity in the predictions for different attribute words (e.g., BERT and RoBERTa has different brand association across different categories) which also signifies that the brand associations being captured by the LMs vary with the context (RQ1).",
"cite_spans": [
{
"start": 187,
"end": 208,
"text": "(Lovett et al., 2014)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 135,
"end": 142,
"text": "Table 9",
"ref_id": null
},
{
"start": 308,
"end": 316,
"text": "Table 15",
"ref_id": "TABREF1"
},
{
"start": 1253,
"end": 1261,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Implicit Association Tests",
"sec_num": "5.3"
},
{
"text": "Comparing the LMs off-the-shelf gives us an idea of how affect-related attributes are represented in LMs. From Table 6 , we find that BART and RoBERTa have the better brand and attribute associations amongst the LMs on the Ads. Dataset and the BCD datasets (RQ1). Further, to understand the impact of fine-tuning, we employ techniques proposed by (Talmor et al., 2019) to measure the language mismatch. In this exercise, we fine-tune the LM with examples from Ads. Dataset; high performance indicates that the LM was able to overcome the language mismatch with a very small number of samples. Trends in the Figure 2 conveys that BERT and RoBERTa achieve high performance with a limited number of samples, in turn indicating that their internal representations are well suited for any downstream tasks related to brand personality. On the other hand, ALBERT has the least performance improvement of 8.08%, meaning ALBERT has poor internal representation and needs more samples to overcome the language mismatch. BERT outperforms all LMs with 22.28% improvement followed by RoBERTa with 20.06%.",
"cite_spans": [
{
"start": 347,
"end": 368,
"text": "(Talmor et al., 2019)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 111,
"end": 118,
"text": "Table 6",
"ref_id": "TABREF9"
},
{
"start": 607,
"end": 615,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Impact of fine-tuning",
"sec_num": "5.4"
},
{
"text": "To understand the context-dependency of the attributes related to affect, we employ perturbed language control as discussed by (Talmor et al., 2019) . This control task gives us an idea of how well the pre-trained representation of the words in context can influence the affect association. For exam- ple, consider the statement \"I should play Nintendo because it is [MASK] .\" and its perturbed version \"I snap play Nintendo ya it is [MASK] .\". If 'fun' from the set of attributes is persistently perceived to be in top-5 predictions irrespective of perturbation, we say that context doesn't influence attributes. In either of the setups discussed in Controlled probing task, the drop in accuracy after perturbation indicates that the affect attributes are context-dependent. Our observations on MLM setup (Table 6 ) and MLP-MLM setup (Figure 2 ) indicate that the attributes are moderately influenced by the context. We need more samples to comment on ALBERT. ",
"cite_spans": [
{
"start": 127,
"end": 148,
"text": "(Talmor et al., 2019)",
"ref_id": "BIBREF29"
},
{
"start": 367,
"end": 373,
"text": "[MASK]",
"ref_id": null
},
{
"start": 434,
"end": 440,
"text": "[MASK]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 806,
"end": 814,
"text": "(Table 6",
"ref_id": "TABREF9"
},
{
"start": 835,
"end": 844,
"text": "(Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Sensitivity to context",
"sec_num": "5.5"
},
{
"text": "We investigate implicit perceptions about brands using data collected in a survey (Lovett et al., 2014) . Table 7 shows the result of the top archetype(s) extracted from the various LMs for the brand Adidas.",
"cite_spans": [
{
"start": 82,
"end": 103,
"text": "(Lovett et al., 2014)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 106,
"end": 113,
"text": "Table 7",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Archetypes",
"sec_num": "5.6"
},
{
"text": "The actual archetype of Adidas is Creator 1 . We make three major observations about the brand archetype extracted from different LMs (RQ4). First, we observe the same prediction of the top archetype across various LMs. For instance, we get the same set of top archetype(s) prediction with BERT, RoBERTa, and ALBERT for the brand Adidas. This behavior could be attributed to the absence of explicit brand archetype-related information in the LMs. Next, we observe multiple top archetypes with the same degree of attribute overlap which suggests that LMs does not capture the brand archetype information distinctly. Lastly, we observe that the degree of attribute overlap for the top archetypes is consistently very low (i.e., an overlap of only one out of five attributes) for all the five brands across all the five LMs. This low degree of attribute overlap is also suggestive of the absence of archetype-related information in the LMs. The actual archetype of a brand can not be distinguished in any of the LMs. We make similar observations for other brands as well (see Table 11 to 14 in Supplementary Material). The current observation that the LMs do not reflect the expected perception of the brand's archetype needs to be investigated further with archetype-specific datasets.",
"cite_spans": [],
"ref_spans": [
{
"start": 1073,
"end": 1081,
"text": "Table 11",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Archetypes",
"sec_num": "5.6"
},
{
"text": "In this paper, we presented a series of exploration setups to address research questions pertaining to associations between brands and brand image attributes. Our analyses were able to tease out varied responses even from the models having identical training data and pre-training learning objectives. We observed that there exists a definite association between brands and attribute affect words across all LMs (RQ1). This impression is observed across a range of abstraction i.e. from individual brands and broader categories to attributes and Aaker's dimensions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "In all our experiments, some categories such as 'Cars' and 'Technology product & stores' and brands such as 'Disney' and 'Intel' are found to have consistent associations across all LMs (RQ3). However, it is interesting to note that these biases do not concur with both consumer perceptions and intended perceptions of the brand (RQ4 and RQ5). Lastly, it is seen that perturbations in sentence moderately influences the association between brands and affect words. Improved performance in fine-tuning implies that affect associations are enhanced (RQ2). Since we do not have enough data, it remains to be seen how additional training data changes the landscape.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "This work documents an initial investigation of brand and attribute associations in different LMs. With enough task-specific data, we plan to evaluate how the affect associations are enhanced. We also intend to use these observations in further defining brand-persona and brand-archetype definitions. These impressions can help understand perceptions about a brand. Furthermore, this can be extended in investigating impressions about iconic entities such as sports teams, celebrities, and politicians.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://report.adidas-group.com/2019/ en/group-management-report-our-company/ corporate-strategy/ adidas-brand-strategy.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Dimensions of brand personality",
"authors": [
{
"first": "Jennifer",
"middle": [
"L"
],
"last": "Aaker",
"suffix": ""
}
],
"year": 1997,
"venue": "Journal of marketing research",
"volume": "34",
"issue": "3",
"pages": "347--356",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jennifer L Aaker. 1997. Dimensions of brand personal- ity. Journal of marketing research, 34(3):347-356.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Advertising between archetype and brand personality",
"authors": [
{
"first": "Clemens",
"middle": [],
"last": "Bechter",
"suffix": ""
},
{
"first": "Giorgio",
"middle": [],
"last": "Farinelli",
"suffix": ""
},
{
"first": "Rolf-Dieter",
"middle": [],
"last": "Daniel",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Frey",
"suffix": ""
}
],
"year": 2016,
"venue": "Administrative Sciences",
"volume": "6",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clemens Bechter, Giorgio Farinelli, Rolf-Dieter Daniel, and Michael Frey. 2016. Advertising be- tween archetype and brand personality. Administra- tive Sciences, 6(2):5.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Quantifying and reducing stereotypes in word embeddings",
"authors": [
{
"first": "Tolga",
"middle": [],
"last": "Bolukbasi",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Zou",
"suffix": ""
},
{
"first": "Venkatesh",
"middle": [],
"last": "Saligrama",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Kalai",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.06121"
]
},
"num": null,
"urls": [],
"raw_text": "Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Quantifying and reducing stereotypes in word embeddings. arXiv preprint arXiv:1606.06121.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Semantics derived automatically from language corpora contain human-like biases",
"authors": [
{
"first": "Aylin",
"middle": [],
"last": "Caliskan",
"suffix": ""
},
{
"first": "Joanna",
"middle": [
"J"
],
"last": "Bryson",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2017,
"venue": "Science",
"volume": "356",
"issue": "6334",
"pages": "183--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183-186.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Mining brand perceptions from twitter social networks",
"authors": [
{
"first": "Aron",
"middle": [],
"last": "Culotta",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Cutler",
"suffix": ""
}
],
"year": 2016,
"venue": "Marketing science",
"volume": "35",
"issue": "3",
"pages": "343--362",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aron Culotta and Jennifer Cutler. 2016. Mining brand perceptions from twitter social networks. Marketing science, 35(3):343-362.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Brand personality: theory and dimensionality",
"authors": [
{
"first": "Gary",
"middle": [],
"last": "Davies",
"suffix": ""
},
{
"first": "Jos\u00e9",
"middle": [
"I"
],
"last": "Rojas-M\u00e9ndez",
"suffix": ""
},
{
"first": "Susan",
"middle": [],
"last": "Whelan",
"suffix": ""
},
{
"first": "Melisa",
"middle": [],
"last": "Mete",
"suffix": ""
},
{
"first": "Theresa",
"middle": [],
"last": "Loo",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of product & brand management",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gary Davies, Jos\u00e9 I Rojas-M\u00e9ndez, Susan Whelan, Melisa Mete, and Theresa Loo. 2018. Brand person- ality: theory and dimensionality. Journal of product & brand management.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "On measuring and mitigating biased inferences of word embeddings",
"authors": [
{
"first": "Sunipa",
"middle": [],
"last": "Dev",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jeff",
"middle": [
"M"
],
"last": "Phillips",
"suffix": ""
},
{
"first": "Vivek",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "34",
"issue": "",
"pages": "7659--7666",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sunipa Dev, Tao Li, Jeff M Phillips, and Vivek Sriku- mar. 2020. On measuring and mitigating biased in- ferences of word embeddings. In Proceedings of the AAAI Conference on Artificial Intelligence, vol- ume 34, pages 7659-7666.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Measuring and mitigating unintended bias in text classification",
"authors": [
{
"first": "Lucas",
"middle": [],
"last": "Dixon",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Sorensen",
"suffix": ""
},
{
"first": "Nithum",
"middle": [],
"last": "Thain",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vasserman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society",
"volume": "",
"issue": "",
"pages": "67--73",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2018. Measuring and mitigat- ing unintended bias in text classification. In Pro- ceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pages 67-73.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "How contextual are contextualized word representations? comparing the geometry of bert, elmo, and gpt-2 embeddings",
"authors": [
{
"first": "Kawin",
"middle": [],
"last": "Ethayarajh",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.00512"
]
},
"num": null,
"urls": [],
"raw_text": "Kawin Ethayarajh. 2019. How contextual are contex- tualized word representations? comparing the ge- ometry of bert, elmo, and gpt-2 embeddings. arXiv preprint arXiv:1909.00512.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Understanding undesirable word embedding associations",
"authors": [
{
"first": "Kawin",
"middle": [],
"last": "Ethayarajh",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Duvenaud",
"suffix": ""
},
{
"first": "Graeme",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1908.06361"
]
},
"num": null,
"urls": [],
"raw_text": "Kawin Ethayarajh, David Duvenaud, and Graeme Hirst. 2019. Understanding undesirable word embedding associations. arXiv preprint arXiv:1908.06361.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Entitycentric contextual affective analysis",
"authors": [
{
"first": "Anjalie",
"middle": [],
"last": "Field",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.01762"
]
},
"num": null,
"urls": [],
"raw_text": "Anjalie Field and Yulia Tsvetkov. 2019. Entity- centric contextual affective analysis. arXiv preprint arXiv:1906.01762.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Measuring individual differences in implicit cognition: the implicit association test",
"authors": [
{
"first": "Anthony",
"middle": [
"G"
],
"last": "Greenwald",
"suffix": ""
},
{
"first": "Debbie",
"middle": [
"E"
],
"last": "McGhee",
"suffix": ""
},
{
"first": "Jordan",
"middle": [
"LK"
],
"last": "Schwartz",
"suffix": ""
}
],
"year": 1998,
"venue": "Journal of personality and social psychology",
"volume": "74",
"issue": "6",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anthony G Greenwald, Debbie E McGhee, and Jor- dan LK Schwartz. 1998. Measuring individual dif- ferences in implicit cognition: the implicit associa- tion test. Journal of personality and social psychol- ogy, 74(6):1464.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A structural probe for finding syntax in word representations",
"authors": [
{
"first": "John",
"middle": [],
"last": "Hewitt",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4129--4138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Hewitt and Christopher D Manning. 2019. A structural probe for finding syntax in word represen- tations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4129-4138.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Automatic understanding of image and video advertisements",
"authors": [
{
"first": "Zaeem",
"middle": [],
"last": "Hussain",
"suffix": ""
},
{
"first": "Mingda",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xiaozhong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Keren",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Zuha",
"middle": [],
"last": "Agha",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Ong",
"suffix": ""
},
{
"first": "Adriana",
"middle": [],
"last": "Kovashka",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "1705--1715",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zaeem Hussain, Mingda Zhang, Xiaozhong Zhang, Keren Ye, Christopher Thomas, Zuha Agha, Nathan Ong, and Adriana Kovashka. 2017. Automatic un- derstanding of image and video advertisements. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1705-1715.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Social biases in nlp models as barriers for persons with disabilities",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Hutchinson",
"suffix": ""
},
{
"first": "Vinodkumar",
"middle": [],
"last": "Prabhakaran",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Denton",
"suffix": ""
},
{
"first": "Kellie",
"middle": [],
"last": "Webster",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Denuyl",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.00813"
]
},
"num": null,
"urls": [],
"raw_text": "Ben Hutchinson, Vinodkumar Prabhakaran, Emily Denton, Kellie Webster, Yu Zhong, and Stephen De- nuyl. 2020. Social biases in nlp models as bar- riers for persons with disabilities. arXiv preprint arXiv:2005.00813.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Psychological aspects of the mother archetype",
"authors": [
{
"first": "C",
"middle": [
"G"
],
"last": "Jung",
"suffix": ""
}
],
"year": 1954,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "CG Jung. 1954. Psychological aspects of the mother archetype. collected works 9/1.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Examining gender and race bias in two hundred sentiment analysis systems",
"authors": [
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "Saif",
"middle": [
"M"
],
"last": "Mohammad",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1805.04508"
]
},
"num": null,
"urls": [],
"raw_text": "Svetlana Kiritchenko and Saif M Mohammad. 2018. Examining gender and race bias in two hun- dred sentiment analysis systems. arXiv preprint arXiv:1805.04508.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Measuring bias in contextualized word representations",
"authors": [
{
"first": "Keita",
"middle": [],
"last": "Kurita",
"suffix": ""
},
{
"first": "Nidhi",
"middle": [],
"last": "Vyas",
"suffix": ""
},
{
"first": "Ayush",
"middle": [],
"last": "Pareek",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.07337"
]
},
"num": null,
"urls": [],
"raw_text": "Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, and Yulia Tsvetkov. 2019. Measuring bias in con- textualized word representations. arXiv preprint arXiv:1906.07337.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Albert: A lite bert for self-supervised learning of language representations",
"authors": [
{
"first": "Zhenzhong",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Mingda",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.11942"
]
},
"num": null,
"urls": [],
"raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learn- ing of language representations. arXiv preprint arXiv:1909.11942.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Abdelrahman",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Ves",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.13461"
]
},
"num": null,
"urls": [],
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A data set of brands and their characteristics",
"authors": [
{
"first": "Mitchell",
"middle": [],
"last": "Lovett",
"suffix": ""
},
{
"first": "Renana",
"middle": [],
"last": "Peres",
"suffix": ""
},
{
"first": "Ron",
"middle": [],
"last": "Shachar",
"suffix": ""
}
],
"year": 2014,
"venue": "Marketing Science",
"volume": "33",
"issue": "4",
"pages": "609--617",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell Lovett, Renana Peres, and Ron Shachar. 2014. A data set of brands and their characteristics. Mar- keting Science, 33(4):609-617.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "On measuring social biases in sentence encoders",
"authors": [
{
"first": "Chandler",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Shikha",
"middle": [],
"last": "Bordia",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Rudinger",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "622--628",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019. On measur- ing social biases in sentence encoders. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 622-628.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "What does bert know about books, movies and music? probing bert for conversational recommendation",
"authors": [
{
"first": "Gustavo",
"middle": [],
"last": "Penha",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Hauff",
"suffix": ""
}
],
"year": 2020,
"venue": "Fourteenth ACM Conference on Recommender Systems",
"volume": "",
"issue": "",
"pages": "388--397",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gustavo Penha and Claudia Hauff. 2020. What does bert know about books, movies and music? probing bert for conversational recommendation. In Four- teenth ACM Conference on Recommender Systems, pages 388-397.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Language models as knowledge bases? arXiv preprint",
"authors": [
{
"first": "Fabio",
"middle": [],
"last": "Petroni",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Anton",
"middle": [],
"last": "Bakhtin",
"suffix": ""
},
{
"first": "Yuxiang",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"H"
],
"last": "Miller",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.01066"
]
},
"num": null,
"urls": [],
"raw_text": "Fabio Petroni, Tim Rockt\u00e4schel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H Miller, and Se- bastian Riedel. 2019. Language models as knowl- edge bases? arXiv preprint arXiv:1909.01066.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Understanding brand consistency from web content",
"authors": [
{
"first": "Soumyadeep",
"middle": [],
"last": "Roy",
"suffix": ""
},
{
"first": "Niloy",
"middle": [],
"last": "Ganguly",
"suffix": ""
},
{
"first": "Shamik",
"middle": [],
"last": "Sural",
"suffix": ""
},
{
"first": "Niyati",
"middle": [],
"last": "Chhaya",
"suffix": ""
},
{
"first": "Anandhavelu",
"middle": [],
"last": "Natarajan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 10th ACM Conference on Web Science",
"volume": "",
"issue": "",
"pages": "245--253",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soumyadeep Roy, Niloy Ganguly, Shamik Sural, Niy- ati Chhaya, and Anandhavelu Natarajan. 2019. Un- derstanding brand consistency from web content. In Proceedings of the 10th ACM Conference on Web Science, pages 245-253.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.01108"
]
},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "The bias beneath: Two decades of measuring implicit associations",
"authors": [
{
"first": "Scott",
"middle": [],
"last": "Sleek",
"suffix": ""
}
],
"year": 2018,
"venue": "APS Observer",
"volume": "",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott Sleek. 2018. The bias beneath: Two decades of measuring implicit associations. APS Observer, 31(2).",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "olmpics-on what language model pre-training captures",
"authors": [
{
"first": "Alon",
"middle": [],
"last": "Talmor",
"suffix": ""
},
{
"first": "Yanai",
"middle": [],
"last": "Elazar",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1912.13283"
]
},
"num": null,
"urls": [],
"raw_text": "Alon Talmor, Yanai Elazar, Yoav Goldberg, and Jonathan Berant. 2019. olmpics-on what lan- guage model pre-training captures. arXiv preprint arXiv:1912.13283.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Do nlp models know numbers? probing numeracy in embeddings",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Wallace",
"suffix": ""
},
{
"first": "Yizhong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.07940"
]
},
"num": null,
"urls": [],
"raw_text": "Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh, and Matt Gardner. 2019. Do nlp models know num- bers? probing numeracy in embeddings. arXiv preprint arXiv:1909.07940.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Self-Similarity for brand and attribute words 'Google' (+), 'Gymboree' ( ), 'good' ( * ) , 'exceptional' (x) and 'bad' (3).",
"type_str": "figure",
"num": null
},
"FIGREF1": {
"uris": null,
"text": "MLP-MLM with (--) and without perturbation (-) for different LMs-BERT (\u2022), ALBERT (+), DistilBERT (x), RoBERTa (*)",
"type_str": "figure",
"num": null
},
"TABREF1": {
"type_str": "table",
"num": null,
"content": "<table><tr><td>: Variants of LMs. L-total layers, H-hidden</td></tr><tr><td>size, A-self-attention heads, T-total parameters. We</td></tr><tr><td>mention the architecture of the large version of all the</td></tr><tr><td>LMs.</td></tr></table>",
"text": "",
"html": null
},
"TABREF2": {
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "Representative examples from three different datasets.",
"html": null
},
"TABREF4": {
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "Affect associations across different LMs for least similar (LS) and most similar (MS) brands and attributes.",
"html": null
},
"TABREF6": {
"type_str": "table",
"num": null,
"content": "<table><tr><td>125</td></tr></table>",
"text": "Effect-size of WEAT and Log Probability (at p-value < 0.01) .",
"html": null
},
"TABREF7": {
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "Top brand and attribute associations for three different brand categories using RIPA association test.",
"html": null
},
"TABREF9": {
"type_str": "table",
"num": null,
"content": "<table><tr><td>LM</td><td>Top archetype(s) based on the attribute overlap</td></tr><tr><td>BERT</td><td>Creator, Jester, Outlaw, Magician, Hero, Sage, Explorer, Innocent</td></tr><tr><td>RoBERTa</td><td>Creator, Jester, Outlaw, Magician, Hero, Sage, Explorer, Innocent</td></tr><tr><td colspan=\"2\">DistilBERT Ruler, Everyman, Magician, Sage, Innocent</td></tr><tr><td>ALBERT</td><td>Creator, Jester, Outlaw, Magician, Hero, Sage, Explorer, Innocent</td></tr><tr><td>BART</td><td>Ruler, Everyman, Magician, Sage, Innocent</td></tr></table>",
"text": "MLM setup with and without perturbation on the Ads. Dataset and BCD datasets.",
"html": null
},
"TABREF10": {
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "Archetype information extracted from the LMs for the brand Adidas.",
"html": null
}
}
}
}