|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T12:16:30.566911Z" |
|
}, |
|
"title": "Non-Complementarity of Information in Word-Embedding and Brain Representations in Distinguishing between Concrete and Abstract Words", |
|
"authors": [ |
|
{ |
|
"first": "Kalyan", |
|
"middle": [], |
|
"last": "Ramakrishnan", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Fatma", |
|
"middle": [], |
|
"last": "Deniz", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Word concreteness and imageability have proven crucial in understanding how humans process and represent language in the brain. While word-embeddings do not explicitly incorporate the concreteness of words into their computations, they have been shown to accurately predict human judgments of concreteness and imageability. Inspired by the recent interest in using neural activity patterns to analyze distributed meaning representations, we first show that brain responses acquired while human subjects passively comprehend natural stories can significantly distinguish the concreteness levels of the words encountered. We then examine for the same task whether the additional perceptual information in the brain representations can complement the contextual information in the word-embeddings. However, the results of our predictive models and residual analyses indicate the contrary. We find that the relevant information in the brain representations is a subset of the relevant information in the contextualized wordembeddings, providing new insight into the existing state of natural language processing models.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Word concreteness and imageability have proven crucial in understanding how humans process and represent language in the brain. While word-embeddings do not explicitly incorporate the concreteness of words into their computations, they have been shown to accurately predict human judgments of concreteness and imageability. Inspired by the recent interest in using neural activity patterns to analyze distributed meaning representations, we first show that brain responses acquired while human subjects passively comprehend natural stories can significantly distinguish the concreteness levels of the words encountered. We then examine for the same task whether the additional perceptual information in the brain representations can complement the contextual information in the word-embeddings. However, the results of our predictive models and residual analyses indicate the contrary. We find that the relevant information in the brain representations is a subset of the relevant information in the contextualized wordembeddings, providing new insight into the existing state of natural language processing models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Language comprises concrete and abstract words that are distinctively used in everyday conversations. Concrete words refer to entities that can be easily perceived with the senses (e.g., \"house\", \"blink\", \"red\"). On the other hand, abstract words refer to concepts that one cannot directly perceive with the senses (e.g., \"luck\", \"justify\", \"risky\"), but relies on the use of language to understand them .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This categorization of words based on their concreteness is rooted in theoretical accounts in cognitive science. One such account is the Dual Coding Theory (Paivio, 1971 (Paivio, , 1991 , according to which two separate but interconnected cognitive systems represent word meanings, i.e., a non-verbal system that encodes perceptual properties of words and a verbal system that encodes linguistic properties of words. Concrete concepts can be easily imagined and are represented in the brain with both verbal and non-verbal codes. Abstract concepts are less imaginable and are represented with only verbal codes. For example, one can readily picture as well as describe the word bicycle (e.g., \"has a chain\", \"has wheels\"), but relies more on a verbal description for the word bravery.", |
|
"cite_spans": [ |
|
{ |
|
"start": 156, |
|
"end": 169, |
|
"text": "(Paivio, 1971", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 170, |
|
"end": 185, |
|
"text": "(Paivio, , 1991", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The concreteness of words has since been used as a differentiating property of word meaning representations. Previous studies in natural language processing (NLP) have examined the wordembedding spaces of concrete and abstract words and showed: (i) distinct vector representations of the two categories within and across languages (Ljube\u0161i\u0107 et al., 2018) , and (ii) high predictability of concreteness scores from pre-trained wordembeddings (Charbonnier and Wartena, 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 331, |
|
"end": 354, |
|
"text": "(Ljube\u0161i\u0107 et al., 2018)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 441, |
|
"end": 472, |
|
"text": "(Charbonnier and Wartena, 2019)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Neurolinguistic studies have shown an extensive, distributed network of brain regions representing the conceptual meaning of words (Mitchell et al., 2008; Wehbe et al., 2014; Huth et al., 2016) . Among these, regions more closely involved in sensory processing have been shown to respond favorably to concrete words (Binder et al., 2005) over abstract words. argued that concrete and abstract concepts must be represented differently in the human brain by showing through a statistical analysis that concrete concepts have fewer but stronger associations in the mind with other concepts, while abstract concepts have weak associations with several other concepts. Wang et al. (2013) showed that functional Magnetic Resonance Imaging (fMRI) signals of brain activity recorded as subjects attempted to decide which two out of a triplet of words were most similar contained sufficient information to classify the concreteness level of the word triplet, providing further evidence of the dissimilar representations of the two categories in the brain. However, it remains an open question whether the brain responses within the semantic system can directly predict concreteness levels in the more challenging setting of naturalistic word stimuli (e.g., words encountered while reading a story). Moreover, given the human brain's expertise in generating and processing perceptual as well as linguistic information, one could expect the brain representations to provide information that complements the word-embeddings purely learned from linguistic contexts, improving their predictive capability. We address both these questions in this paper.", |
|
"cite_spans": [ |
|
{ |
|
"start": 131, |
|
"end": 154, |
|
"text": "(Mitchell et al., 2008;", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 155, |
|
"end": 174, |
|
"text": "Wehbe et al., 2014;", |
|
"ref_id": "BIBREF46" |
|
}, |
|
{ |
|
"start": 175, |
|
"end": 193, |
|
"text": "Huth et al., 2016)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 316, |
|
"end": 337, |
|
"text": "(Binder et al., 2005)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 664, |
|
"end": 682, |
|
"text": "Wang et al. (2013)", |
|
"ref_id": "BIBREF45" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "While several related works exist, the following limitations prompted a new study: (i) Anderson et al. (2017) indirectly decoded the brain representations for concrete and abstract nouns with the help of word-embeddings and convolutional neural network image representations. Instead of building a predictive model, the authors used a similarity metric to determine which signal in a pair of fMRI signals corresponds to which word in a pair of words. However, a direct, supervised decoding approach (as adopted here) would provide more substantial evidence about the strengths and weaknesses of the different information modalities. (ii) found word concreteness scores to be highly correlated with both visual and haptic perceptual strength. However, multi-modal methods (Anderson et al., 2017; Bhaskar et al., 2017 ) have incorporated only visual features (as the second source of information) instead of general perceptual features into their predictions. By incorporating brain representations in our models, we do not miss out on such perceptual information (e.g., the adjectives \"silky\", \"crispy\", and \"salty\" are concrete but not as imagery-inducing as the adjective \"blue\"). (iii) In contrast to previous studies that have required participants to actively imagine a randomly presented word stimulus 1 (before being given a few seconds to \"reset\" their thoughts) during the brain data acquisition task (Anderson et al., 2012; Wang et al., 2013; Anderson et al., 2017 ), we adopt a task where participants would read highly engaging natural stories (without unnatural pauses), enabling them to process the word stimuli in a more realistic context.", |
|
"cite_spans": [ |
|
{ |
|
"start": 87, |
|
"end": 109, |
|
"text": "Anderson et al. (2017)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 771, |
|
"end": 794, |
|
"text": "(Anderson et al., 2017;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 795, |
|
"end": 815, |
|
"text": "Bhaskar et al., 2017", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 1409, |
|
"end": 1432, |
|
"text": "(Anderson et al., 2012;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 1433, |
|
"end": 1451, |
|
"text": "Wang et al., 2013;", |
|
"ref_id": "BIBREF45" |
|
}, |
|
{ |
|
"start": 1452, |
|
"end": 1473, |
|
"text": "Anderson et al., 2017", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To summarize, our objectives with this paper are twofold. First, we investigate how well human 1 e.g., one word would be presented every 10s.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "brain representations can predict the concreteness levels of words encountered in natural stories using simple, supervised learning algorithms. Second, we investigate whether brain representations encode information that may be missing from wordembeddings trained on a text corpus in making the concrete/abstract distinction. We believe that answering such questions will shed light on the current state of human and machine intelligence and on the ways to incorporate human language processing information into NLP models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A few studies have shown that the concreteness (and imageability) of words can be directly predicted with high accuracy from precomputed wordembeddings using supervised learning algorithms. Recently, Charbonnier and Wartena (2019) used a combination of word-embeddings and morphological features to predict the word concreteness and imageability values provided in seven publicly available datasets. Ljube\u0161i\u0107 et al. (2018) extended the idea to perform a cross-lingual transfer of concreteness and imageability scores by exploiting pretrained bilingual aligned word-embeddings (Conneau et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 400, |
|
"end": 422, |
|
"text": "Ljube\u0161i\u0107 et al. (2018)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 576, |
|
"end": 598, |
|
"text": "(Conneau et al., 2017)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Multi-modal models that use both linguistic and perceptual information have been shown to outperform language models at various NLP tasks, such as learning concrete or abstract word embeddings Lazaridou et al., 2015) , concept categorization (Silberer and Lapata, 2014) , and compositionality prediction (Roller and Schulte im Walde, 2013). However, Bhaskar et al. (2017) found that the concreteness of nouns could be predicted equally well from the textual, visual, and combined modalities. This suggests that the textual and visual modalities independently provided reliable, non-complementary information to represent both concrete and abstract nouns.", |
|
"cite_spans": [ |
|
{ |
|
"start": 193, |
|
"end": 216, |
|
"text": "Lazaridou et al., 2015)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 242, |
|
"end": 269, |
|
"text": "(Silberer and Lapata, 2014)", |
|
"ref_id": "BIBREF42" |
|
}, |
|
{ |
|
"start": 350, |
|
"end": 371, |
|
"text": "Bhaskar et al. (2017)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Several studies have addressed the idea of decoding neural activity patterns recorded in subjects when presented with certain textual or visual stimuli. Anderson et al. (2017) applied linguistic and visually-grounded computational models to decode the fMRI representations of a set of concrete and abstract nouns. They, too, reported no decoding advantage for multi-modal combinations over the linguistic model. Anderson et al. (2012) demonstrated that fMRI signals contained sufficient information to perform a 7-way classification of a set of words into WordNet-based (Miller, 1995) taxonomic categories.", |
|
"cite_spans": [ |
|
{ |
|
"start": 153, |
|
"end": 175, |
|
"text": "Anderson et al. (2017)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 412, |
|
"end": 434, |
|
"text": "Anderson et al. (2012)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 570, |
|
"end": 584, |
|
"text": "(Miller, 1995)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Lately, there has been an increasing research interest at the intersection of neuroimaging and language models (Jain and Huth, 2018; Abnar et al., 2019; Gauthier and Levy, 2019; Hollenstein et al., 2019; Jain et al., 2020; Caucheteux and King, 2020; Schrimpf et al., 2020) . In an interesting study, Schwartz et al. (2019) finetuned the BERT language model to predict the fMRI responses of text-reading participants to obtain representations that encode brain-activityrelevant semantic information. While the modified representations could better predict neural activity and even generalize to new participants, this inclusion of brain-relevant bias did not improve or degrade the model's performance on downstream NLP tasks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 111, |
|
"end": 132, |
|
"text": "(Jain and Huth, 2018;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 133, |
|
"end": 152, |
|
"text": "Abnar et al., 2019;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 153, |
|
"end": 177, |
|
"text": "Gauthier and Levy, 2019;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 178, |
|
"end": 203, |
|
"text": "Hollenstein et al., 2019;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 204, |
|
"end": 222, |
|
"text": "Jain et al., 2020;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 223, |
|
"end": 249, |
|
"text": "Caucheteux and King, 2020;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 250, |
|
"end": 272, |
|
"text": "Schrimpf et al., 2020)", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 300, |
|
"end": 322, |
|
"text": "Schwartz et al. (2019)", |
|
"ref_id": "BIBREF41" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "3 Data Collection", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We briefly describe the functional Magnetic Resonance Imaging (fMRI) data-collection procedure here and refer the reader to Deniz et al. (2019) for specific details.", |
|
"cite_spans": [ |
|
{ |
|
"start": 124, |
|
"end": 143, |
|
"text": "Deniz et al. (2019)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stimulus and fMRI data", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Nine participants were asked to read 11 autobiographical narrative stories taken from The Moth Radio Hour podcast. We used six participants' data in our experiments. The stories are each 10-15 minutes long and were chosen to cover a wide range of topics. Each story was first aligned to its transcript by applying the UPenn Forced Aligner (Yuan and Liberman, 2008) and Praat (Boersma and Weenink, 2001 ) on the narration audio. Timestamps for word-occurrences were then obtained from Praat's TextGrid as a list of entries of the form (w i , t i ) representing the ith word and its onset time, respectively. Using this word-representation list for each story, each word in the story was displayed one-by-one at the center of a screen for a duration equal to its duration in the spoken version.", |
|
"cite_spans": [ |
|
{ |
|
"start": 339, |
|
"end": 364, |
|
"text": "(Yuan and Liberman, 2008)", |
|
"ref_id": "BIBREF47" |
|
}, |
|
{ |
|
"start": 375, |
|
"end": 401, |
|
"text": "(Boersma and Weenink, 2001", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stimulus and fMRI data", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Each fMRI scan consists of a sequence of voxel-responses 2 acquired at a fixed repetition-time (T R = 2.0045s) with a voxel-size of 2.24\u00d72.24\u00d7 4.1mm. A separate scan was conducted for each subject and presented story (all analysis was done within subjects). The acquired volumetric fMRI responses for each subject were first preprocessed to correct for motion and then aligned to the first 2 voxel = volumetric pixel. scan's temporal average, using the FMRIB Linear Image Registration Tool (FLIRT) from FSL v5.0 (Jenkinson et al., 2002; Jenkinson and Smith, 2001) . A Savitzky-Golay filter (Schafer, 2011) with a 120s window was applied to remove lowfrequency voxel-response drift from the signal. Finally, the voxel-responses for each story were zscored separately so that they have zero mean and unit variance across all acquisitions for the story.", |
|
"cite_spans": [ |
|
{ |
|
"start": 512, |
|
"end": 536, |
|
"text": "(Jenkinson et al., 2002;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 537, |
|
"end": 563, |
|
"text": "Jenkinson and Smith, 2001)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 590, |
|
"end": 605, |
|
"text": "(Schafer, 2011)", |
|
"ref_id": "BIBREF38" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stimulus and fMRI data", |
|
"sec_num": "3.1" |
|
}, |
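The drift-removal and normalization steps described above can be sketched as follows. This is a minimal illustration on synthetic voxel responses; the function name, the polynomial order, and the conversion of the 120s window into an odd sample count at TR = 2.0045s are assumptions not stated in the text.

```python
import numpy as np
from scipy.signal import savgol_filter

def detrend_and_zscore(responses, tr=2.0045, window_s=120.0, polyorder=3):
    """Remove low-frequency drift with a Savitzky-Golay filter, then z-score.

    responses: (T, V) array of voxel responses for one story.
    """
    # Window length in samples must be odd for savgol_filter.
    win = int(round(window_s / tr))
    if win % 2 == 0:
        win += 1
    drift = savgol_filter(responses, window_length=win, polyorder=polyorder, axis=0)
    detrended = responses - drift
    # Z-score each voxel across all acquisitions for the story.
    return (detrended - detrended.mean(axis=0)) / detrended.std(axis=0)

rng = np.random.default_rng(0)
# Synthetic data: random responses plus a slow linear drift.
resp = rng.normal(size=(400, 5)) + np.linspace(0, 3, 400)[:, None]
clean = detrend_and_zscore(resp)
```

After this step each voxel's time course has zero mean and unit variance within the story, matching the description above.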
|
{ |
|
"text": "We note that an equivalent analysis could be carried out through a listening task since the elicited brain representations have been shown to be largely invariant to the stimulus modality (Deniz et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 188, |
|
"end": 208, |
|
"text": "(Deniz et al., 2019)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stimulus and fMRI data", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We used the dataset collected by , consisting of concreteness ratings for 39,954 English words. Each word was rated by around 25 participants (recruited through Amazon Mechanical Turk) on a 1-5 scale so that the most concrete words are assigned the highest score of 5, and the most abstract words are assigned the lowest score of 1. For each word, the average rating (and standard deviation) across all raters was recorded.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Concreteness Ratings", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We extracted the 768-dimensional activations from the final hidden layer of the Generative Pre-trained Transformer (GPT-2) (Radford et al., 2019) to obtain contextualized representations for the words in the stories. The reasons for selecting GPT-2 in this work are due to the findings of Schrimpf et al. (2020) . First, GPT-2 was constrained to use unidirectional attention in the same way humans process text in a left-to-right fashion. Second, the authors find that models best matching human language processing are precisely those trained for a next word prediction objective (such as the GPT family).", |
|
"cite_spans": [ |
|
{ |
|
"start": 123, |
|
"end": 145, |
|
"text": "(Radford et al., 2019)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 289, |
|
"end": 311, |
|
"text": "Schrimpf et al. (2020)", |
|
"ref_id": "BIBREF40" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word-Embeddings", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Rating and Vectorizing Using the wordrepresentation for each story and a list of the fMRI acquisition-times (identical for all subjects), we partitioned the words into disjoint chunks so that all words in a chunk correspond to the same acquisition. Therefore, all words read by the subjects within a duration of 1 T R from the start of the acquisition pulse were included in the same chunk.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Preparation", |
|
"sec_num": "4" |
|
}, |
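The chunking step above can be sketched as follows, assuming a list of (word, onset_time) pairs from the forced alignment and a uniform acquisition grid (all names are illustrative):

```python
TR = 2.0045  # fMRI repetition time in seconds

def partition_into_chunks(word_onsets, n_acquisitions):
    """Group words into disjoint chunks, one per fMRI acquisition.

    word_onsets: list of (word, onset_seconds) pairs.
    Returns a list of n_acquisitions lists of words; every word whose
    onset falls within one TR of acquisition i lands in chunk i.
    """
    chunks = [[] for _ in range(n_acquisitions)]
    for word, onset in word_onsets:
        idx = int(onset // TR)  # acquisition whose TR window contains the onset
        if idx < n_acquisitions:
            chunks[idx].append(word)
    return chunks

words = [("the", 0.1), ("house", 0.8), ("was", 2.3), ("red", 3.9)]
chunks = partition_into_chunks(words, n_acquisitions=3)
```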
|
{ |
|
"text": "We used GPT-2 to vectorize each word in a story by supplying all words in the story leading up to it 3 as context and extracting the network's hidden layer representation corresponding to the last input position. To rate the words in the story, we first lowercased and lemmatized them and then used the concreteness dataset to assign a rating to each word in a chunk. Only around 7% of all words in the stories were not covered by the dataset and were dropped before subsequent analysis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Preparation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We stored the ith preprocessed functional image of each subject as an N b -dimensional voxelresponse vector b i , where N b denotes the number of voxels for that subject's brain. Typical values for N b were found to lie in the 70k-90k range (with a mean of 80976 and a standard deviation of 6173, across subjects).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Preparation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Downsampling Since the rate at which the text stimulus was presented to the subjects (the narration rate) is higher than the rate of fMRI data acquisition (2.0045s per acquisition), several words may occur within the TR corresponding to a single acquisition and will all fall under the same chunk. Therefore, we downsampled the stimulus to match the acquisition rate before further analysis by averaging out the concreteness ratings (r w ) and word-embeddings ( e w ) within each TR. Thus, the chunk-rating and chunk-embedding for chunk C i are given by:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Preparation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "r i = 1 |C i | w\u2208C i r w e i = 1 |C i | w\u2208C i e w Stacking", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Preparation", |
|
"sec_num": "4" |
|
}, |
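The within-TR averaging of ratings and embeddings can be sketched in a few lines (the function name is illustrative):

```python
import numpy as np

def downsample_chunk(ratings, embeddings):
    """Average per-word ratings and embeddings within one TR chunk.

    ratings: (n_words,) concreteness ratings r_w for the words in chunk C_i.
    embeddings: (n_words, D) word-embeddings e_w.
    Returns the chunk-rating r_i and the chunk-embedding e_i.
    """
    ratings = np.asarray(ratings, dtype=float)
    embeddings = np.asarray(embeddings, dtype=float)
    return ratings.mean(), embeddings.mean(axis=0)

# Two words in a chunk, with D = 2 for illustration.
r_i, e_i = downsample_chunk([4.0, 2.0], [[1.0, 3.0], [3.0, 5.0]])
```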
|
{ |
|
"text": "We temporally stacked the voxelresponse vectors, chunk-embeddings and chunkratings, first within each story and then across all 11 stories to obtain (i) a per-subject voxel-response matrix B \u2208 R T \u00d7N b , (ii) an embedding matrix E \u2208 R T \u00d7D , and (iii) a rating vector r \u2208 R T , where T denotes the total number of fMRI acquisitions across all stories per subject, and D denotes the dimensionality of the word-embedding space. D = 768 for GPT-2, and 11 stories with an average duration close to 12.5 min per story gives T = 4028.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Preparation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "3 or as many as allowed by the model's capacity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Preparation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We consider the task of classifying words as concrete or abstract (based on their concreteness ratings) using the word-embeddings (chunkembeddings, e i ) as explanatory variables. For this, we first defined a concreteness threshold \u03c4 as follows: any word is labeled concrete if its assigned rating is strictly greater than \u03c4 , and is labeled abstract otherwise. We take \u03c4 = 3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word-Embedding based model", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We then segregated the data into well-defined classes by discarding any chunks that were found to consist of a mixture of concrete and abstract words (as defined above). This retains roughly 42% of all chunks (T s < T ), resulting in the following strict counterparts to the embedding matrix and rating vector obtained in Section 4: (i) E s \u2208 R T s \u00d7D , and (ii) r s \u2208 R T s , with the superscript s denoting that only chunks satisfying the strictly concrete/abstract property are being considered. We binary-encoded r s into the boolean vector y s \u2208 {0, 1} T s , so that y s i = 1 if the corresponding chunk is strictly concrete and y s i = 0 otherwise. Our specific choice for the concreteness threshold (\u03c4 = 3) produces a dataset that is approximately balanced between the two classes and is a natural choice for a 1-5 scale. 4 We learned the E s \u2192 y s mapping for each subject through L2-regularized logistic regression. We trained on 75% of the available data and picked the best value for the regularization parameter C through 5-fold cross-validation. We report the accuracy, recall, and F1 score of the classifier in our results.", |
|
"cite_spans": [ |
|
{ |
|
"start": 829, |
|
"end": 830, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word-Embedding based model", |
|
"sec_num": "5.1" |
|
}, |
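The classification setup above can be sketched with scikit-learn on synthetic stand-ins for the chunk-embeddings and labels. The 75% train split and 5-fold cross-validation over C mirror the text; the specific C grid and toy data are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import accuracy_score, recall_score, f1_score

rng = np.random.default_rng(0)
# Toy stand-ins for the strict chunk-embeddings E^s and binary labels y^s.
X = rng.normal(size=(400, 16))
y = (X[:, 0] + 0.1 * rng.normal(size=400) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.75, random_state=0)

# L2-regularized logistic regression; C picked by 5-fold cross-validation.
clf = GridSearchCV(LogisticRegression(penalty="l2", max_iter=1000),
                   param_grid={"C": [0.01, 0.1, 1.0, 10.0]}, cv=5)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
acc = accuracy_score(y_te, pred)
rec = recall_score(y_te, pred)
f1 = f1_score(y_te, pred)
```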
|
{ |
|
"text": "An important variable in cognitive processing is the frequency with which words are encountered in language. High-frequency words are often perceived and processed faster than low-frequency words (van Heuven et al., 2014) . Thus, word frequency could be a confounding variable to our objective if its distribution over the concrete words significantly differs from its distribution over the abstract words encountered in the stories. To check if this is the case, we computed the distribution of SUBTLEX-US (Brysbaert and New, 2009) word frequencies separately over all concrete vs. abstract words encountered by the subjects. However, a Kolmogorov-Smirnov test showed that the computed distribution over the concrete words was not significantly different from the distribution over the abstract words (ks = 0.056, p = 0.063).", |
|
"cite_spans": [ |
|
{ |
|
"start": 196, |
|
"end": 221, |
|
"text": "(van Heuven et al., 2014)", |
|
"ref_id": "BIBREF44" |
|
}, |
|
{ |
|
"start": 507, |
|
"end": 532, |
|
"text": "(Brysbaert and New, 2009)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word-Embedding based model", |
|
"sec_num": "5.1" |
|
}, |
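The frequency-distribution check can be sketched with scipy's two-sample Kolmogorov-Smirnov test. The frequency values below are synthetic placeholders; the real analysis used SUBTLEX-US frequencies of the words in the stories.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Illustrative stand-ins for the word-frequency distributions of the
# concrete vs. abstract words encountered in the stories.
freq_concrete = rng.normal(loc=3.0, scale=1.0, size=500)
freq_abstract = rng.normal(loc=3.0, scale=1.0, size=500)

stat, p = ks_2samp(freq_concrete, freq_abstract)
# A large p-value means the two samples are consistent with sharing
# one underlying distribution, i.e., frequency is not an obvious confound.
```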
|
{ |
|
"text": "Voxel Selection With up to 90,000 voxelresponses recorded per fMRI acquisition, not all voxels may be relevant to our objective of predicting the concreteness of word stimuli (Binder et al., 2005) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 175, |
|
"end": 196, |
|
"text": "(Binder et al., 2005)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Voxel-Response based model", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "A standard voxel selection method is to manually determine regions of interest (ROIs) in the brain by analyzing the fMRI responses recorded in an auxiliary functional localizer task (Fedorenko et al., 2010) and select voxels from only these regions. However, this comes at the risk of being too restrictive. For example, one might inadvertently exclude regions in the brain encoding relevant sensory processing information in favor of regions encoding linguistic information. Given our objective to investigate whether brain representations contain any such additional information over wordembeddings, we avoided ROI-based methods for voxel selection.", |
|
"cite_spans": [ |
|
{ |
|
"start": 182, |
|
"end": 206, |
|
"text": "(Fedorenko et al., 2010)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Voxel-Response based model", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We instead selected voxels based on their fractions of potentially-explainable response variance across time steps. This may be estimated separately for each voxel by recording different versions of its (time-varying) response corresponding to repeated presentations (Hsu et al., 2004) of the same stimulus-sequence. Assume that one story is repeatedly presented N times to a given subject and b represents a voxel being analyzed. If b (n) t represents its response at time step t corresponding to the nth repetition, then its mean response across repetitions is", |
|
"cite_spans": [ |
|
{ |
|
"start": 267, |
|
"end": 285, |
|
"text": "(Hsu et al., 2004)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Voxel-Response based model", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "b t = 1 N N m=1 b (m) t .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Voxel-Response based model", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The following equations estimate the fraction of potentially-explainable variance for b assuming the voxel-responses are z-scored across all time steps for the story:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Voxel-Response based model", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "ev(b) = 1 N N n=1 [1 \u2212 V ar t (b (n) t \u2212 b t )] ev(b) = ev(b) \u2212 1 N \u2212 1 (1 \u2212 ev(b))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Voxel-Response based model", |
|
"sec_num": "5.2" |
|
}, |
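A minimal numpy sketch of this explainable-variance estimate, applied to synthetic voxel responses; the function name and the inline z-scoring helper are illustrative, and the bias correction follows the second equation above.

```python
import numpy as np

def explainable_variance(reps):
    """Estimate the fraction of potentially-explainable variance of one voxel.

    reps: (N, T) array, the voxel's z-scored response to N repetitions of
    the same story. Returns the corrected estimate for finite N.
    """
    reps = np.asarray(reps, dtype=float)
    n = reps.shape[0]
    mean_resp = reps.mean(axis=0)                    # mean response across repetitions
    residual_var = np.var(reps - mean_resp, axis=1)  # Var_t of each repetition's residual
    ev = np.mean(1.0 - residual_var)
    return ev - (1.0 - ev) / (n - 1)                 # correction for small N

rng = np.random.default_rng(0)
signal = rng.normal(size=500)
# A consistent voxel: same signal plus small noise on each of N=2 repetitions.
consistent = np.stack([signal + 0.1 * rng.normal(size=500) for _ in range(2)])
# An inconsistent voxel: pure noise on each repetition.
noise = rng.normal(size=(2, 500))

# Z-score each repetition before estimating ev, as in the text.
z = lambda a: (a - a.mean(axis=1, keepdims=True)) / a.std(axis=1, keepdims=True)
ev_hi = explainable_variance(z(consistent))
ev_lo = explainable_variance(z(noise))
```

A consistent voxel yields an estimate near 1, while an unreliable voxel yields a value near 0, which is what the top-V selection exploits.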
|
{ |
|
"text": "Thus, ev(b) is analogous to the adjusted R 2 of a (perfect) model that always predicts the mean response (b t ) across repetitions. A larger value indicates that the voxel responds consistently to repetitions of the same stimulus. Each subject was presented the last story N = 2 times, and the top-V voxels with the highest ev values were retained.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Voxel-Response based model", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "From this, we obtain the desired reduced formB \u2208 R T \u00d7V . The optimal number of semantic voxels V was chosen separately for each subject to maximize performance on the validation set (described next).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Voxel-Response based model", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Prediction Task Blood-oxygen-level-dependent (BOLD) signals in the brain typically persist for 8-10s after stimulus onset (Ashby, 2019). Since each chunk covers nearly 2s of stimulus presentation, we expect the response to each chunk to be jointly encoded by the first, second, third, and fourth (reduced) voxel-response vectors that follow the current acquisition. However, including the first or fourth acquisition significantly degraded predictive performance. We posit that this degradation occurs because the voxel-response vectors recorded one or four TRs after the current acquisition are more prone to be directly affected by words falling in chunks preceding or succeeding the chunk of interest.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Voxel-Response based model", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "With this observation, we modeled the brain's representation of the stimulus in chunk C_i to be of the form f(\\hat{b}_{i+2}, \\hat{b}_{i+3}), where \\hat{b}_i represents the reduced voxel-response vector from the i-th acquisition. We therefore constructed the reduced+delayed voxel-response matrix \\tilde{B}^+ \u2208 R^{T \u00d7 2V} by replacing each row of \\tilde{B} with the concatenation of the second and third rows that succeed it. 5 For classification, we first discarded chunks that are not strictly concrete/abstract and obtained \\tilde{B}^{+s} \u2208 R^{T_s \u00d7 2V}. We then used regularized logistic regression to learn the per-subject \\tilde{B}^{+s} \u2192 y^s mapping. The training procedure is identical to the one followed in Section 5.1.",
|
"cite_spans": [ |
|
{ |
|
"start": 389, |
|
"end": 390, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Voxel-Response based model", |
|
"sec_num": "5.2" |
|
}, |
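The construction of the reduced+delayed matrix can be sketched as follows, with zero-padding for the final rows (footnote 5). A minimal sketch; the function name and argument layout are illustrative assumptions, not from the paper.

```python
import numpy as np

def delayed_features(B):
    """Build the reduced+delayed matrix B+ of shape (T, 2V) from the
    reduced voxel-response matrix B of shape (T, V).

    Row i of B+ concatenates rows i+2 and i+3 of B, i.e. the
    acquisitions two and three TRs after the current chunk; rows
    running past the end of the story are zero-padded.
    """
    T, V = B.shape
    padded = np.vstack([B, np.zeros((3, V))])
    return np.hstack([padded[2:T + 2], padded[3:T + 3]])
```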
|
{ |
|
"text": "We determined the statistical significance of our classification results using a label-permutation method (Ojala and Garriga, 2009) with cross-validated accuracy as the chosen test statistic. Here, the distribution of a test statistic under the null hypothesis (that data and labels are independent) is estimated by training and evaluating the classifier on several randomized versions of the original data (by permuting classification labels). The p-value is then calculated as the proportion of randomized samples where the classifier performs better than it does on the original sample. We ran 100 iterations per subject.", |
|
"cite_spans": [ |
|
{ |
|
"start": 106, |
|
"end": 131, |
|
"text": "(Ojala and Garriga, 2009)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Statistical Significance", |
|
"sec_num": null |
|
}, |
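The label-permutation test can be sketched with scikit-learn as below. The regularization settings, fold count, and function name are assumptions for illustration; the paper's exact training procedure is described in Section 5.1.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def label_permutation_test(X, y, n_iter=100, seed=0):
    """p-value of cross-validated accuracy under the null hypothesis
    that features X and labels y are independent (Ojala & Garriga, 2009).
    """
    clf = LogisticRegression(max_iter=1000)
    true_acc = cross_val_score(clf, X, y, cv=5).mean()
    rng = np.random.default_rng(seed)
    null_accs = [cross_val_score(clf, X, rng.permutation(y), cv=5).mean()
                 for _ in range(n_iter)]
    # Proportion of permuted runs matching or beating the true accuracy
    return true_acc, float(np.mean(np.array(null_accs) >= true_acc))
```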
|
{ |
|
"text": "First, we combined the word-embedding and voxel-response stimulus representations (obtained in Section 4 and Section 5.2) for each subject, by stacking the word-embedding matrix (E) and the reduced+delayed voxel-response matrix (\\tilde{B}^+) along the feature dimension to obtain the combined stimulus matrix C \u2208 R^{T \u00d7 (D+2V)}. Limiting the data to strict chunks yields the matrix C^s \u2208 R^{T_s \u00d7 (D+2V)}, which was then used for the classification task.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combined model", |
|
"sec_num": "6.1" |
|
}, |
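Stacking the two representations along the feature axis is a one-liner; a sketch assuming E has shape (T, D) and the reduced+delayed voxel matrix has shape (T, 2V):

```python
import numpy as np

def combine_features(E, B_plus):
    """Combined stimulus matrix C of shape (T, D + 2V): word-embedding
    features followed by reduced+delayed voxel-response features."""
    assert E.shape[0] == B_plus.shape[0], "time steps must align"
    return np.hstack([E, B_plus])
```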
|
{ |
|
"text": "The rationale behind combining representations is the following. If the information encoded by the word-embedding and voxel-response representations were indeed complementary, the combined model should fare better at the prediction task than the two individual models because it now has access to information that was missing in either representation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combined model", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "The classification task (predicting y s ) and its training procedure are identical to those described in Section 5.1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combined model", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "Next, we attempted to remove the information present in each representation from the other and then train the classification model using the resulting representation. This procedure is described below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Residual Classification", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "word-embeddings: For each subject, we learned a linear mapping L \u2208 R^{2V \u00d7 D} from \\tilde{B}^{+s} to E^s through multivariate ridge regression (Haitovsky, 1987). We then computed the residuals E^s_r \u2208 R^{T_s \u00d7 D} in a cross-validated manner as follows, and used the residuals for the classification task:",
|
"cite_spans": [ |
|
{ |
|
"start": 130, |
|
"end": 147, |
|
"text": "(Haitovsky, 1987)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Removing voxel-response information from", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "E^s_r = E^s - \\tilde{B}^{+s} \\cdot L",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Removing voxel-response information from", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Removing word-embedding information from voxel-responses: For each subject, we learned the linear mapping L \u2208 R^{D \u00d7 2V} from E^s to \\tilde{B}^{+s} through multivariate ridge regression. We then computed the residuals \\tilde{B}^{+s}_r \u2208 R^{T_s \u00d7 2V} in a cross-validated manner as follows, and used the residuals for the classification task:",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Removing voxel-response information from",

"sec_num": "1."

},

{

"text": "\\tilde{B}^{+s}_r = \\tilde{B}^{+s} - E^s \\cdot L",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Removing voxel-response information from", |
|
"sec_num": "1." |
|
}, |
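Both residual computations follow the same recipe: fit a ridge mapping on training folds only and subtract its out-of-fold predictions. A sketch (the penalty `alpha`, fold count, and function name are assumptions; the paper's cross-validation details may differ):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def cross_validated_residuals(X, Y, alpha=1.0, n_splits=5):
    """Residuals of Y (T, q) after regressing out X (T, p).

    The linear map is learned on training folds only, so the
    subtracted component never sees the rows it is applied to.
    """
    residuals = np.empty_like(Y, dtype=float)
    for train, test in KFold(n_splits=n_splits).split(X):
        model = Ridge(alpha=alpha).fit(X[train], Y[train])
        residuals[test] = Y[test] - model.predict(X[test])
    return residuals
```

Setting X to the voxel features and Y to the embeddings yields the input for E-B; swapping the two roles yields the input for B-E.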
|
{ |
|
"text": "Statistical Significance To statistically validate that any observed decrease in a residual model's performance compared to the corresponding non-residual model is truly due to shared information between the representations (and not due to overfitting or chance), we adopted a \"residual-permutation\" procedure similar to that in Section 5.2.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Removing voxel-response information from", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Here, an empirical null distribution is created by training and evaluating each residual model above with several randomized versions of whichever representation is to be regressed out. The randomization is performed by permuting this representation over all time steps. The p-value is then calculated as the fraction of such residual models with crossvalidated accuracies lower than that of the true (non-randomized) residual model. We ran 100 iterations per subject.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Removing voxel-response information from", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "We use the abbreviations E for the word-embedding based model, B for the voxel-response based (brain) model, E+B for the combined-representation model, E-B for the word-embedding model with voxel-response information removed, and B-E for the voxel-response model with word-embedding information removed. Figure 1 shows the classification accuracies of all models across the six subjects. Table 1 shows the average accuracy, recall, and F1 score of E and B.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 301, |
|
"end": 309, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 385, |
|
"end": 392, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "B achieved an average classification accuracy of 69% and an F1 score of 71%, and performed significantly above chance under the label-permutation test (p \u2264 9 \u00d7 10^{\u22123}) for each subject. This indicates that the fMRI signals elicited by words encountered in natural stories encode enough information to significantly distinguish their concreteness levels under the current predictive framework. Evidently, this information must be useful above and beyond the noise inherent to the fMRI acquisition process. To our knowledge, the ability to classify the concreteness of naturalistic word stimuli from their induced brain representations in a direct, supervised fashion has not been shown in the existing literature.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Individual models", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "E achieved a comparatively higher classification accuracy of 87%, which is in agreement with existing research (in non-naturalistic settings) on the",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Individual models",

"sec_num": "7.1"

},

{

"text": "[Table 1: Classification metrics across the six participants for the word-embedding based (E), voxel-response based (B), and combined (E+B) models.] predictability of word concreteness and imageability using word-embeddings as explanatory variables (Charbonnier and Wartena, 2019; Ljube\u0161i\u0107 et al., 2018). Table 1 shows the average accuracy, recall, and F1 score of E, B, and E+B. As argued in Section 1, we expect the additional sensory-processing information encoded in the voxel-responses to complement the linguistic/contextual information encoded in the word-embeddings. Consequently, the combined model should fare better at distinguishing the concreteness of words in the stories.",
|
"cite_spans": [ |
|
{ |
|
"start": 258, |
|
"end": 289, |
|
"text": "(Charbonnier and Wartena, 2019;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 290, |
|
"end": 312, |
|
"text": "Ljube\u0161i\u0107 et al., 2018)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 14, |
|
"end": 21, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 315, |
|
"end": 322, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "However, our results indicate otherwise. The performance of E+B (86 \u00b1 1.9%) was not significantly different from that of E (87%) under a 1-sample t-test (t = \u22122.33, p = 0.07, df = 5, 2-tail), meaning the combined model is only as good as the word-embedding based model at the task considered. Therefore, the information in the voxel-responses relevant to differentiating between concrete and abstract words is already well-encoded by the word-embeddings, and the former does not complement the latter. On the other hand, the performance of E+B (86 \u00b1 1.9%) was significantly higher than that of B (69 \u00b1 2.5%) under a paired t-test (t = 17.77, p = 5 \u00d7 10^{\u22126}, df = 5, 1-tail). This indicates that the word-embeddings may even contain useful information beyond that in the fMRI signals (note that we already demonstrated the effectiveness of our predictive framework in significantly distinguishing word concreteness purely from fMRI signals). We explore this idea further next. Table 2 shows the average accuracy, recall, and F1 score of the residual models E-B and B-E.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 971, |
|
"end": 978, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparative models", |
|
"sec_num": "7.2" |
|
}, |
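The across-subject comparisons above use standard t-tests over the six per-subject accuracies. A sketch with SciPy; the accuracy values below are made-up placeholders for illustration, not the paper's per-subject numbers:

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject accuracies (illustration only)
acc_eb = np.array([0.88, 0.85, 0.86, 0.84, 0.87, 0.86])  # E+B model
acc_b = np.array([0.72, 0.66, 0.70, 0.67, 0.71, 0.68])   # B model

# Paired t-test across subjects; one-tailed for E+B > B
t_stat, p_two_tailed = stats.ttest_rel(acc_eb, acc_b)
p_one_tailed = p_two_tailed / 2 if t_stat > 0 else 1 - p_two_tailed / 2
```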
|
{ |
|
"text": "The results of the residual analyses are surprising. First, E-B achieved an average accuracy of 84%, which was significant under the residual-permutation test (p \u2264 9 \u00d7 10^{\u22123}) for each subject. The performance of E-B (84 \u00b1 1.7%) was also significantly lower than that of E (87%) across subjects under a 1-sample t-test (t = \u22124.71, p = 2.6 \u00d7 10^{\u22123}, df = 5, 1-tail). This shows that removing the voxel-response information from the word-embeddings only marginally affects their ability to classify word concreteness. More strikingly, B-E achieved an average accuracy of 48%, which is lower than the theoretical chance accuracy of 50% (see Figure 1). This result was significant under the residual-permutation test (p \u2264 9 \u00d7 10^{\u22123}) for each subject, ruling out the possibility that the",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 622, |
|
"end": 630, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparative models", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "Table 2: Performance (Mean \u00b1 S.D.) of the residual models. E-B: Accuracy 0.84 \u00b1 1.7%, Recall 0.85 \u00b1 2.4%, F1 score 0.84 \u00b1 1.4%; B-E: Accuracy 0.48 \u00b1 9.1%, Recall 0.60 \u00b1 5.8%, F1 score 0.55 \u00b1 5.6%. Table 3: Examples of chunks frequently misclassified by the voxel-response model. The exact phrase falling within the chunk is in dark color. We find that a majority of such misclassifications come from the abstract category.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 131, |
|
"end": 138, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Residual Model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "huge performance decrease was merely caused by overfitting or chance. Across subjects too, the performance of B-E (48 \u00b1 9.1%) was significantly lower than that of B (69 \u00b1 2.5%) under a paired t-test (t = \u22128.52, p = 1.8 \u00d7 10^{\u22124}, df = 5, 1-tail). Therefore, while removing the word-embedding information from the voxel-responses fully eliminates the voxel-responses' predictive capability (a 30% decrease), going the other way around has only a marginal effect on predictive performance (a 3% decrease). These results show not only that the fMRI signals do not provide complementary information to the word-embeddings in making the concrete/abstract distinction, but also that the relevant information in the voxel-responses is really a subset of the relevant information in the word-embeddings. This is a surprising result, considering the task was to distinguish a property of words theorized to fundamentally affect how the human brain represents language. We summarize our findings and provide some additional observations about this work next.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Residual Model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This paper has three key findings. First, we showed that words encountered in natural stories could be classified based on concreteness purely from the neural activity elicited as subjects passively comprehended the stories, using a direct, supervised approach.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Second, we showed that in making the concrete/abstract distinction, contextualized wordembeddings (i.e., GPT-2) do not benefit from the inclusion of information from the accompanying fMRI signals, despite evidence from several neurolinguistic studies of the human brain exhibiting fundamentally different representations over the two categories.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Finally, we found that while the residual information remaining in fMRI signals after regressing out word-embedding information can no longer distinguish concrete from abstract words, the residual information in word-embeddings beyond the fMRI signals performs significantly at this task. This shows that the information in the voxel-responses important to our prediction task is a subset of the corresponding information in the contextualized word-embeddings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Our results should be interpreted in light of the following observations:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "A limitation of our work is that while the voxel-responses and word-embeddings (from GPT-2) considered provide contextualized stimulus representations, the dataset provides non-contextualized ratings for each word. We partially addressed this discrepancy by formulating the prediction task as a classification problem, since the available labels are then much more likely to match ground truth. That is, it is reasonable to assume that the broad binary concreteness class of a word will rarely be modified by context as much as the continuous scores would. Future work could overcome this limitation by developing the ideas from the recently introduced CONcreTEXT task 6",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Table 4: Spearman's rank-correlation coefficients (\u03c1 \u2208 [\u22121, 1]) between predicted and true ratings across the six participants (Mean \u00b1 S.D.). E: 0.85; B: 0.42 \u00b1 0.03; E+B: 0.84 \u00b1 0.02; E-B: 0.80 \u00b1 0.03; B-E: 0.09 \u00b1 0.05.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 102, |
|
"end": 109, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Metric", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "of computing contextualized rating scores. We still report regression results in Table 4 for completeness and observe that they are consistent with our findings (e.g., B-E can no longer predict word concreteness, as suggested by its near-zero rank-correlation). Finally, repeating our analyses with non-contextualized word2vec embeddings (Mikolov et al., 2013) yielded qualitatively identical results to those in Section 7.2, indicating that our three conclusions above hold for word-embeddings more generally.",
|
"cite_spans": [ |
|
{ |
|
"start": 349, |
|
"end": 371, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 81, |
|
"end": 88, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Metric", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Another observation is that while B (69 \u00b1 2.5%) significantly distinguishes concrete from abstract words, it still does not perform as well as E (87%) at this task. There could be two reasons for this difference. First, B does not handle abstract stimuli as well as E does. Quantitatively, while B achieves a recall of 77 \u00b1 2.6% on concrete chunks, its recall on abstract chunks is significantly lower at 63 \u00b1 3.6%. On the other hand, E shows nearly identical performance over the two categories. Table 3 shows some of B's misclassified examples common to as many as four out of six subjects. Out of the 29 such common misclassifications, 19 (65.5%) were found to be abstract. This could indicate that neural activity patterns are not as informative for abstract stimuli as for concrete stimuli, which is in agreement with psycholinguistic studies demonstrating verbal processing advantages for concrete concepts over abstract concepts (Holmes and Langford, 1976; Kroll and Merves, 1986; Romani et al., 2008). Second, the temporal resolution of functional magnetic resonance imaging may be too coarse (Gauthier and Levy, 2019; Schwartz et al., 2019) for optimal performance on our task (we had to downsample the stimulus in Section 4). Nevertheless, our findings are important. Applying the current predictive framework to the fMRI signals produced highly significant results, and it is under such a framework that the above conclusions were made. Future work could explore the differences in decoding neural activity from naturalistic stimuli with imaging methods of different temporal resolutions (e.g., EEG, MEG) to determine which method should be used for which kind of task.",
|
"cite_spans": [ |
|
{ |
|
"start": 930, |
|
"end": 957, |
|
"text": "(Holmes and Langford, 1976;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 958, |
|
"end": 981, |
|
"text": "Kroll and Merves, 1986;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 982, |
|
"end": 1002, |
|
"text": "Romani et al., 2008)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 1096, |
|
"end": 1121, |
|
"text": "(Gauthier and Levy, 2019;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 1122, |
|
"end": 1144, |
|
"text": "Schwartz et al., 2019)", |
|
"ref_id": "BIBREF41" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 494, |
|
"end": 502, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Metric", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "To conclude, we believe that this paper will inspire future work to take up the following exciting directions: Which natural language processing tasks may benefit from incorporating human language processing information into the existing frameworks? Are there ways of including such information to expose avenues for improvement in these models?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Metric", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Out of all strictly concrete/abstract chunks, 52% were labeled concrete, and 48% were labeled abstract.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For rows that are \u2264 3 positions from the end, we used zero-padding for consistent dimensions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/lablita/CONcreTEXT", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Blackbox meets blackbox: Representational Similarity and Stability Analysis of Neural Language Models and Brains", |
|
"authors": [ |
|
{ |
|
"first": "Samira", |
|
"middle": [], |
|
"last": "Abnar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lisa", |
|
"middle": [], |
|
"last": "Beinborn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rochelle", |
|
"middle": [], |
|
"last": "Choenni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Willem", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Zuidema", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Samira Abnar, Lisa Beinborn, Rochelle Choenni, and Willem H. Zuidema. 2019. Blackbox meets black- box: Representational Similarity and Stability Anal- ysis of Neural Language Models and Brains. CoRR, abs/1906.01539.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "On discriminating fMRI representations of abstract WordNet taxonomic categories", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Anderson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Yuan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Murphy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Massimo", |
|
"middle": [], |
|
"last": "Poesio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 3rd Workshop on Cognitive Aspects of the Lexicon", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "21--32", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew Anderson, Tao Yuan, Brian Murphy, and Mas- simo Poesio. 2012. On discriminating fMRI rep- resentations of abstract WordNet taxonomic cate- gories. In Proceedings of the 3rd Workshop on Cog- nitive Aspects of the Lexicon, pages 21-32, Mumbai, India. The COLING 2012 Organizing Committee.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Visually Grounded and Textual Semantic Models Differentially Decode Brain Activity Associated with Concrete and Abstract Nouns", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Anderson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Douwe", |
|
"middle": [], |
|
"last": "Kiela", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Massimo", |
|
"middle": [], |
|
"last": "Poesio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "17--30", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00043" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew J. Anderson, Douwe Kiela, Stephen Clark, and Massimo Poesio. 2017. Visually Grounded and Textual Semantic Models Differentially Decode Brain Activity Associated with Concrete and Ab- stract Nouns. Transactions of the Association for Computational Linguistics, 5:17-30.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Statistical analysis of fMRI data", |
|
"authors": [ |
|
{ |
|
"first": "Ashby", |
|
"middle": [], |
|
"last": "F Gregory", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "F Gregory Ashby. 2019. Statistical analysis of fMRI data. MIT press.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Exploring Multi-Modal Text+Image Models to Distinguish between Abstract and Concrete Nouns", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Sai Abishek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maximilian", |
|
"middle": [], |
|
"last": "Bhaskar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sabine", |
|
"middle": [], |
|
"last": "K\u00f6per", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Diego", |
|
"middle": [], |
|
"last": "Schulte Im Walde", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Frassinelli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the IWCS workshop on Foundations of Situated and Multimodal Communication", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sai Abishek Bhaskar, Maximilian K\u00f6per, Sabine Schulte Im Walde, and Diego Frassinelli. 2017. Ex- ploring Multi-Modal Text+Image Models to Distin- guish between Abstract and Concrete Nouns. In Pro- ceedings of the IWCS workshop on Foundations of Situated and Multimodal Communication.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Distinct Brain Systems for Processing Concrete and Abstract Concepts", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Binder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Westbury", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Mckiernan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Possing", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Medler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Journal of Cognitive Neuroscience", |
|
"volume": "17", |
|
"issue": "6", |
|
"pages": "905--917", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/0898929054021102" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. R. Binder, C. F. Westbury, K. A. McKiernan, E. T. Possing, and D. A. Medler. 2005. Distinct Brain Sys- tems for Processing Concrete and Abstract Concepts. Journal of Cognitive Neuroscience, 17(6):905-917.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "PRAAT, a system for doing phonetics by computer", |
|
"authors": [ |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Boersma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Weenink", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Glot International", |
|
"volume": "5", |
|
"issue": "9", |
|
"pages": "341--345", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paul Boersma and David Weenink. 2001. PRAAT, a system for doing phonetics by computer. Glot Inter- national, 5(9/10):341-345.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Moving beyond Ku\u010dera and Francis: A critical evaluation of current word frequency norms and the introduction of a new and improved word frequency measure for American English", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Brysbaert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Boris", |
|
"middle": [], |
|
"last": "New", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Behavior Research Methods", |
|
"volume": "41", |
|
"issue": "", |
|
"pages": "977--990", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Brysbaert and Boris New. 2009. Moving beyond Ku\u010dera and Francis: A critical evaluation of current word frequency norms and the introduction of a new and improved word frequency measure for Ameri- can English. Behavior Research Methods, 41:977- 990.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Concreteness ratings for 40 thousand generally known English word lemmas", |
|
"authors": [ |
|
{ |
|
"first": "Marc", |
|
"middle": [], |
|
"last": "Brysbaert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amy", |
|
"middle": [ |
|
"Beth" |
|
], |
|
"last": "Warriner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Kuperman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Behavior Research Methods", |
|
"volume": "46", |
|
"issue": "3", |
|
"pages": "904--911", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marc Brysbaert, Amy Beth Warriner, and Victor Ku- perman. 2014. Concreteness ratings for 40 thousand generally known English word lemmas. Behavior Research Methods, 46(3):904-911.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Language processing in brains and deep neural networks: computational convergence and its limits", |
|
"authors": [ |
|
{ |
|
"first": "Charlotte", |
|
"middle": [], |
|
"last": "Caucheteux", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jean-R\u00e9mi", |
|
"middle": [], |
|
"last": "King", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1101/2020.07.03.186288" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Charlotte Caucheteux and Jean-R\u00e9mi King. 2020. Lan- guage processing in brains and deep neural net- works: computational convergence and its limits. bioRxiv.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Predicting Word Concreteness and Imagery", |
|
"authors": [ |
|
{ |
|
"first": "Jean", |
|
"middle": [], |
|
"last": "Charbonnier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Wartena", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 13th International Conference on Computational Semantics -Long Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "176--187", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-0415" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jean Charbonnier and Christian Wartena. 2019. Pre- dicting Word Concreteness and Imagery. In Pro- ceedings of the 13th International Conference on Computational Semantics -Long Papers, pages 176- 187, Gothenburg, Sweden. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Word Translation Without Parallel Data", |
|
"authors": [ |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc'aurelio", |
|
"middle": [], |
|
"last": "Ranzato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ludovic", |
|
"middle": [], |
|
"last": "Denoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Herv\u00e9", |
|
"middle": [], |
|
"last": "J\u00e9gou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "CoRR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2017. Word Translation Without Parallel Data. CoRR, abs/1710.04087.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "The Representation of Semantic Information Across Human Cerebral Cortex During Listening Versus Reading Is Invariant to Stimulus Modality", |
|
"authors": [ |
|
{ |
|
"first": "Fatma", |
|
"middle": [], |
|
"last": "Deniz", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Anwar", |

"middle": [ |

"O" |

], |

"last": "Nunez-Elizalde", |

"suffix": "" |

}, |

{ |

"first": "Alexander", |

"middle": [ |

"G" |

], |

"last": "Huth", |

"suffix": "" |

}, |

{ |

"first": "Jack", |

"middle": [ |

"L" |

], |

"last": "Gallant", |

"suffix": "" |

} |
|
], |
|
"year": 2019, |
|
"venue": "Journal of Neuroscience", |
|
"volume": "39", |
|
"issue": "39", |
|
"pages": "7722--7736", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1523/JNEUROSCI.0675-19.2019" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fatma Deniz, Anwar O. Nunez-Elizalde, Alexander G. Huth, and Jack L. Gallant. 2019. The Representa- tion of Semantic Information Across Human Cere- bral Cortex During Listening Versus Reading Is In- variant to Stimulus Modality. Journal of Neuro- science, 39(39):7722-7736.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "New Method for fMRI Investigations of Language: Defining ROIs Functionally in Individual Subjects", |
|
"authors": [ |
|
{ |
|
"first": "Evelina", |
|
"middle": [], |
|
"last": "Fedorenko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Po-Jang", |
|
"middle": [], |
|
"last": "Hsieh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alfonso", |
|
"middle": [], |
|
"last": "Nieto-Casta\u00f1\u00f3n", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Susan", |
|
"middle": [], |
|
"last": "Whitfield-Gabrieli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nancy", |
|
"middle": [], |
|
"last": "Kanwisher", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Journal of Neurophysiology", |
|
"volume": "104", |
|
"issue": "2", |
|
"pages": "1177--1194", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1152/jn.00032.2010" |
|
], |
|
"PMID": [ |
|
"20410363" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Evelina Fedorenko, Po-Jang Hsieh, Alfonso Nieto- Casta\u00f1\u00f3n, Susan Whitfield-Gabrieli, and Nancy Kanwisher. 2010. New Method for fMRI Investi- gations of Language: Defining ROIs Functionally in Individual Subjects. Journal of Neurophysiology, 104(2):1177-1194. PMID: 20410363.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Linking artificial and human neural representations of language", |
|
"authors": [ |
|
{ |
|
"first": "Jon", |
|
"middle": [], |
|
"last": "Gauthier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roger", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "529--539", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1050" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jon Gauthier and Roger Levy. 2019. Linking artificial and human neural representations of language. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 529- 539, Hong Kong, China. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "On Multivariate Ridge Regression", |
|
"authors": [ |
|
{ |
|
"first": "Yoel", |
|
"middle": [], |
|
"last": "Haitovsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1987, |
|
"venue": "Biometrika", |
|
"volume": "74", |
|
"issue": "3", |
|
"pages": "563--570", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoel Haitovsky. 1987. On Multivariate Ridge Regres- sion. Biometrika, 74(3):563-570.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Learning Abstract Concept Embeddings from Multi-Modal Data: Since You Probably Can't See What I Mean", |
|
"authors": [ |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Hill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Korhonen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "255--265", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/D14-1032" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Felix Hill and Anna Korhonen. 2014. Learning Ab- stract Concept Embeddings from Multi-Modal Data: Since You Probably Can't See What I Mean. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 255-265, Doha, Qatar. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "A Quantitative Empirical Analysis of the Abstract/Concrete Distinction", |
|
"authors": [ |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Hill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Korhonen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Bentz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Cognitive Science", |
|
"volume": "38", |
|
"issue": "1", |
|
"pages": "162--177", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1111/cogs.12076" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Felix Hill, Anna Korhonen, and Christian Bentz. 2014. A Quantitative Empirical Analysis of the Abstract/Concrete Distinction. Cognitive Science, 38(1):162-177.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "CogniVal: A Framework for Cognitive Word Embedding Evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Nora", |
|
"middle": [], |
|
"last": "Hollenstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antonio", |
|
"middle": [], |
|
"last": "De La Torre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicolas", |
|
"middle": [], |
|
"last": "Langer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ce", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "538--549", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/K19-1050" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nora Hollenstein, Antonio de la Torre, Nicolas Langer, and Ce Zhang. 2019. CogniVal: A Framework for Cognitive Word Embedding Evaluation. In Proceed- ings of the 23rd Conference on Computational Nat- ural Language Learning (CoNLL), pages 538-549, Hong Kong, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Comprehension and recall of abstract and concrete sentences", |
|
"authors": [ |
|
{ |
|
"first": "V", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Holmes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Langford", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1976, |
|
"venue": "Journal of Verbal Learning and Verbal Behavior", |
|
"volume": "15", |
|
"issue": "5", |
|
"pages": "559--566", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1016/0022-5371(76)90050-5" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "V.M. Holmes and J. Langford. 1976. Comprehen- sion and recall of abstract and concrete sentences. Journal of Verbal Learning and Verbal Behavior, 15(5):559 -566.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Quantifying variability in neural responses and its application for the validation of model predictions", |
|
"authors": [ |
|
{ |
|
"first": "Anne", |
|
"middle": [], |
|
"last": "Hsu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Borst", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fr\u00e9d\u00e9ric", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Theunissen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Network: Computation in Neural Systems", |
|
"volume": "15", |
|
"issue": "2", |
|
"pages": "91--109", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1088/0954-898X_15_2_002" |
|
], |
|
"PMID": [ |
|
"15214701" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anne Hsu, Alexander Borst, and Fr\u00e9d\u00e9ric E Theunis- sen. 2004. Quantifying variability in neural re- sponses and its application for the validation of model predictions. Network: Computation in Neu- ral Systems, 15(2):91-109. PMID: 15214701.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Natural speech reveals the semantic maps that tile human cerebral cortex", |
|
"authors": [ |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Huth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wendy", |
|
"middle": [], |
|
"last": "Heer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Griffiths", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fr\u00e9d\u00e9ric", |
|
"middle": [], |
|
"last": "Theunissen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jack", |
|
"middle": [], |
|
"last": "Gallant", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Nature", |
|
"volume": "532", |
|
"issue": "", |
|
"pages": "453--458", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1038/nature17637" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexander Huth, Wendy Heer, Thomas Griffiths, Fr\u00e9d\u00e9ric Theunissen, and Jack Gallant. 2016. Natu- ral speech reveals the semantic maps that tile human cerebral cortex. Nature, 532:453-458.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Incorporating Context into Language Encoding Models for fMRI", |
|
"authors": [ |
|
{ |
|
"first": "Shailee", |
|
"middle": [], |
|
"last": "Jain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Huth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "31", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shailee Jain and Alexander Huth. 2018. Incorporating Context into Language Encoding Models for fMRI. In Advances in Neural Information Processing Sys- tems, volume 31. Curran Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Interpretable multi-timescale models for predicting fMRI responses to continuous natural speech", |
|
"authors": [ |
|
{ |
|
"first": "Shailee", |
|
"middle": [], |
|
"last": "Jain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vy", |
|
"middle": [], |
|
"last": "Vo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shivangi", |
|
"middle": [], |
|
"last": "Mahto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amanda", |
|
"middle": [], |
|
"last": "Lebel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Javier", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Turek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Huth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "33", |
|
"issue": "", |
|
"pages": "13738--13749", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shailee Jain, Vy Vo, Shivangi Mahto, Amanda LeBel, Javier S Turek, and Alexander Huth. 2020. Inter- pretable multi-timescale models for predicting fMRI responses to continuous natural speech. In Ad- vances in Neural Information Processing Systems, volume 33, pages 13738-13749. Curran Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Improved Optimization for the Robust and Accurate Linear Registration and Motion Correction of Brain Images", |
|
"authors": [ |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Jenkinson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Bannister", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Brady", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "NeuroImage", |
|
"volume": "17", |
|
"issue": "2", |
|
"pages": "825--841", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1006/nimg.2002.1132" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mark Jenkinson, Peter Bannister, Michael Brady, and Stephen Smith. 2002. Improved Optimization for the Robust and Accurate Linear Registration and Motion Correction of Brain Images. NeuroImage, 17(2):825 -841.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "A global optimisation method for robust affine registration of brain images", |
|
"authors": [ |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Jenkinson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Medical Image Analysis", |
|
"volume": "5", |
|
"issue": "2", |
|
"pages": "143--156", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1016/S1361-8415(01)00036-6" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mark Jenkinson and Stephen Smith. 2001. A global optimisation method for robust affine registration of brain images. Medical Image Analysis, 5(2):143 - 156.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Lexical access for concrete and abstract words", |
|
"authors": [ |
|
{ |

"first": "Judith", |

"middle": [ |

"F" |

], |

"last": "Kroll", |

"suffix": "" |

}, |

{ |

"first": "Jill", |

"middle": [ |

"S" |

], |

"last": "Merves", |

"suffix": "" |

} |
|
], |
|
"year": 1986, |
|
"venue": "Journal of Experimental Psychology: Learning, Memory, and Cognition", |
|
"volume": "12", |
|
"issue": "1", |
|
"pages": "92", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Judith F Kroll and Jill S Merves. 1986. Lexical access for concrete and abstract words. Journal of Experi- mental Psychology: Learning, Memory, and Cogni- tion, 12(1):92.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Combining Language and Vision with a Multimodal Skip-gram Model", |
|
"authors": [ |
|
{ |
|
"first": "Angeliki", |
|
"middle": [], |
|
"last": "Lazaridou", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Nghia", |

"middle": [ |

"The" |

], |

"last": "Pham", |

"suffix": "" |

}, |

{ |

"first": "Marco", |

"middle": [], |

"last": "Baroni", |

"suffix": "" |

} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "153--163", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/N15-1016" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Angeliki Lazaridou, Nghia The Pham, and Marco Ba- roni. 2015. Combining Language and Vision with a Multimodal Skip-gram Model. In Proceedings of the 2015 Conference of the North American Chap- ter of the Association for Computational Linguis- tics: Human Language Technologies, pages 153- 163, Denver, Colorado. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Predicting Concreteness and Imageability of Words Within and Across Languages via Word Embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Nikola", |
|
"middle": [], |
|
"last": "Ljube\u0161i\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Darja", |
|
"middle": [], |
|
"last": "Fi\u0161er", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anita", |
|
"middle": [], |
|
"last": "Peti-Stanti\u0107", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of The Third Workshop on Representation Learning for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "217--222", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-3028" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nikola Ljube\u0161i\u0107, Darja Fi\u0161er, and Anita Peti-Stanti\u0107. 2018. Predicting Concreteness and Imageability of Words Within and Across Languages via Word Em- beddings. In Proceedings of The Third Workshop on Representation Learning for NLP, pages 217- 222, Melbourne, Australia. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Distributed Representations of Words and Phrases and their Compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "26", |
|
"issue": "", |
|
"pages": "3111--3119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed Representa- tions of Words and Phrases and their Composition- ality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Ad- vances in Neural Information Processing Systems 26, pages 3111-3119. Curran Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "WordNet: A Lexical Database for English", |
|
"authors": [ |
|
{ |

"first": "George", |

"middle": [ |

"A" |

], |

"last": "Miller", |

"suffix": "" |

} |
|
], |
|
"year": 1995, |
|
"venue": "Commun. ACM", |
|
"volume": "38", |
|
"issue": "11", |
|
"pages": "39--41", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/219717.219748" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "George A. Miller. 1995. WordNet: A Lexical Database for English. Commun. ACM, 38(11):39-41.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Predicting Human Brain Activity Associated with the Meanings of Nouns", |
|
"authors": [ |
|
{ |
|
"first": "Tom", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Mitchell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Svetlana", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Shinkareva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Carlson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai-Min", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vicente", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Malave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Mason", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcel", |
|
"middle": [ |
|
"Adam" |
|
], |
|
"last": "Just", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Science", |
|
"volume": "320", |
|
"issue": "5880", |
|
"pages": "1191--1195", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1126/science.1152876" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tom M. Mitchell, Svetlana V. Shinkareva, Andrew Carlson, Kai-Min Chang, Vicente L. Malave, Robert A. Mason, and Marcel Adam Just. 2008. Predicting Human Brain Activity Associated with the Meanings of Nouns. Science, 320(5880):1191- 1195.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Permutation Tests for Studying Classifier Performance", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Ojala", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Garriga", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Ninth IEEE International Conference on Data Mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "908--913", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/ICDM.2009.108" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Ojala and G. C. Garriga. 2009. Permutation Tests for Studying Classifier Performance. In 2009 Ninth IEEE International Conference on Data Mining, pages 908-913.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Imagery and Verbal Processes", |
|
"authors": [ |
|
{ |
|
"first": "Allan", |
|
"middle": [], |
|
"last": "Paivio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1971, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Allan Paivio. 1971. Imagery and Verbal Processes. Holt, Rinehart and Winston.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Dual Coding Theory: Retrospect and Current Status", |
|
"authors": [ |
|
{ |
|
"first": "Allan", |
|
"middle": [], |
|
"last": "Paivio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Canadian Journal of Psychology/Revue Canadienne de Psychologie", |
|
"volume": "45", |
|
"issue": "3", |
|
"pages": "255--287", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1037/h0084295" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Allan Paivio. 1991. Dual Coding Theory: Retrospect and Current Status. Canadian Journal of Psychol- ogy/Revue Canadienne de Psychologie, 45(3):255- 287.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Language Models are Unsupervised Multitask Learners", |
|
"authors": [ |
|
{ |
|
"first": "Alec", |
|
"middle": [], |
|
"last": "Radford", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rewon", |
|
"middle": [], |
|
"last": "Child", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Luan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dario", |
|
"middle": [], |
|
"last": "Amodei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language Models are Unsupervised Multitask Learners.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "A Multimodal LDA Model integrating Textual, Cognitive and Visual Modalities", |
|
"authors": [ |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Roller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sabine", |
|
"middle": [], |
|
"last": "Schulte im Walde", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1146--1157", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephen Roller and Sabine Schulte im Walde. 2013. A Multimodal LDA Model integrating Textual, Cog- nitive and Visual Modalities. In Proceedings of the 2013 Conference on Empirical Methods in Natu- ral Language Processing, pages 1146-1157, Seattle, Washington, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Concreteness Effects in Different Tasks: Implications for Models of Short-Term Memory", |
|
"authors": [ |
|
{ |
|
"first": "Cristina", |
|
"middle": [], |
|
"last": "Romani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sheila", |
|
"middle": [], |
|
"last": "Mcalpine", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Randi", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Martin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Quarterly Journal of Experimental Psychology", |
|
"volume": "61", |
|
"issue": "2", |
|
"pages": "292--323", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1080/17470210601147747" |
|
], |
|
"PMID": [ |
|
"17853203" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cristina Romani, Sheila Mcalpine, and Randi C. Martin. 2008. Concreteness Effects in Different Tasks: Implications for Models of Short-Term Mem- ory. Quarterly Journal of Experimental Psychology, 61(2):292-323. PMID: 17853203.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "What Is a Savitzky-Golay Filter?", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Schafer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Lecture Notes", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/MSP.2011.941097" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. W. Schafer. 2011. What Is a Savitzky-Golay Filter? [Lecture Notes].", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "The neural architecture of language: Integrative reverse-engineering converges on a model for predictive processing", |
|
"authors": [ |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Schrimpf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Idan", |
|
"middle": [], |
|
"last": "Blank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greta", |
|
"middle": [], |
|
"last": "Tuckute", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carina", |
|
"middle": [], |
|
"last": "Kauf", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Eghbal", |

"middle": [ |

"A" |

], |

"last": "Hosseini", |

"suffix": "" |

}, |

{ |

"first": "Nancy", |

"middle": [], |

"last": "Kanwisher", |

"suffix": "" |

}, |

{ |

"first": "Joshua", |

"middle": [], |

"last": "Tenenbaum", |

"suffix": "" |

}, |

{ |

"first": "Evelina", |

"middle": [], |

"last": "Fedorenko", |

"suffix": "" |

} |
|
], |
|
"year": 2020, |
|
"venue": "bioRxiv", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1101/2020.06.26.174482" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Martin Schrimpf, Idan Blank, Greta Tuckute, Ca- rina Kauf, Eghbal A. Hosseini, Nancy Kanwisher, Joshua Tenenbaum, and Evelina Fedorenko. 2020. The neural architecture of language: Integrative reverse-engineering converges on a model for pre- dictive processing. bioRxiv.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Inducing brain-relevant bias in natural language processing models", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mariya", |
|
"middle": [], |
|
"last": "Toneva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leila", |
|
"middle": [], |
|
"last": "Wehbe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "32", |
|
"issue": "", |
|
"pages": "14123--14133", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Schwartz, Mariya Toneva, and Leila Wehbe. 2019. Inducing brain-relevant bias in natural language pro- cessing models. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch\u00e9-Buc, E. Fox, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 32, pages 14123-14133. Curran As- sociates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "Learning Grounded Meaning Representations with Autoencoders", |
|
"authors": [ |
|
{ |
|
"first": "Carina", |
|
"middle": [], |
|
"last": "Silberer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "721--732", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/P14-1068" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Carina Silberer and Mirella Lapata. 2014. Learning Grounded Meaning Representations with Autoen- coders. In Proceedings of the 52nd Annual Meet- ing of the Association for Computational Linguis- tics (Volume 1: Long Papers), pages 721-732, Balti- more, Maryland. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain)", |
|
"authors": [ |
|
{ |
|
"first": "Mariya", |
|
"middle": [], |
|
"last": "Toneva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leila", |
|
"middle": [], |
|
"last": "Wehbe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "32", |
|
"issue": "", |
|
"pages": "14954--14964", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mariya Toneva and Leila Wehbe. 2019. Interpret- ing and improving natural-language processing (in machines) with natural language-processing (in the brain). In H. Wallach, H. Larochelle, A. Beygelz- imer, F. d'Alch\u00e9-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 14954-14964. Curran Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "Subtlex-UK: A New and Improved Word Frequency Database for British English", |
|
"authors": [ |
|
{ |

"first": "Walter", |

"middle": [ |

"J", |

"B" |

], |

"last": "van Heuven", |

"suffix": "" |

}, |

{ |

"first": "Pawel", |

"middle": [], |

"last": "Mandera", |

"suffix": "" |

}, |

{ |

"first": "Emmanuel", |

"middle": [], |

"last": "Keuleers", |

"suffix": "" |

}, |

{ |

"first": "Marc", |

"middle": [], |

"last": "Brysbaert", |

"suffix": "" |

} |
|
], |
|
"year": 2014, |
|
"venue": "Quarterly Journal of Experimental Psychology", |
|
"volume": "67", |
|
"issue": "6", |
|
"pages": "1176--1190", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1080/17470218.2013.850521" |
|
], |
|
"PMID": [ |
|
"24417251" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Walter J. B. van Heuven, Pawel Mandera, Emmanuel Keuleers, and Marc Brysbaert. 2014. Subtlex-UK: A New and Improved Word Frequency Database for British English. Quarterly Journal of Experimental Psychology, 67(6):1176-1190. PMID: 24417251.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Decoding abstract and concrete concept representations based on single-trial fMRI data", |
|
"authors": [ |
|
{ |
|
"first": "Jing", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laura", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Baucom", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Svetlana", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Shinkareva", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Human Brain Mapping", |
|
"volume": "34", |
|
"issue": "5", |
|
"pages": "1133--1147", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1002/hbm.21498" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jing Wang, Laura B. Baucom, and Svetlana V. Shinkareva. 2013. Decoding abstract and concrete concept representations based on single-trial fMRI data. Human Brain Mapping, 34(5):1133-1147.", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "Simultaneously Uncovering the Patterns of Brain Regions Involved in Different Story Reading Subprocesses", |
|
"authors": [ |
|
{ |
|
"first": "Leila", |
|
"middle": [], |
|
"last": "Wehbe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Murphy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Partha", |
|
"middle": [], |
|
"last": "Talukdar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alona", |
|
"middle": [], |
|
"last": "Fyshe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aaditya", |
|
"middle": [], |
|
"last": "Ramdas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Mitchell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "PLOS ONE", |
|
"volume": "9", |
|
"issue": "11", |
|
"pages": "1--19", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1371/journal.pone.0112575" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Leila Wehbe, Brian Murphy, Partha Talukdar, Alona Fyshe, Aaditya Ramdas, and Tom Mitchell. 2014. Simultaneously Uncovering the Patterns of Brain Regions Involved in Different Story Reading Sub- processes. PLOS ONE, 9(11):1-19.", |
|
"links": null |
|
}, |
|
"BIBREF47": { |
|
"ref_id": "b47", |
|
"title": "Speaker identification on the SCOTUS corpus", |
|
"authors": [ |
|
{ |
|
"first": "Jiahong", |
|
"middle": [], |
|
"last": "Yuan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Liberman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Acoustical Society of America Journal", |
|
"volume": "123", |
|
"issue": "5", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1121/1.2935783" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiahong Yuan and Mark Liberman. 2008. Speaker iden- tification on the SCOTUS corpus. Acoustical Soci- ety of America Journal, 123(5):3878.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Variation in classification accuracies of all models over the six subjects' data.", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"text": "Classification metrics across the six participants for the two residual models. And so at the earliest opportunity ... abstract ... with this kind of curious compassion. And ... abstract ... to suggest I might find myself on such a wayward path ... abstract ... . Kind of blissfully unaware of what was ... abstract ... start to get a little tricky. My husband ... abstract ... couple amens and some applause and then everybody ... concrete ... you know, for hundred dollars a night maybe ... concrete", |
|
"num": null, |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td>Misclassified example</td><td>Ground-truth label</td></tr><tr><td>...</td><td/></tr></table>" |
|
} |
|
} |
|
} |
|
} |