|
{ |
|
"paper_id": "D17-1030", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T16:15:58.534476Z" |
|
}, |
|
"title": "High-risk learning: acquiring new word vectors from tiny data", |
|
"authors": [ |
|
{ |
|
"first": "Aur\u00e9lie", |
|
"middle": [], |
|
"last": "Herbelot", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Universitat Pompeu Fabra", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Trento", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Distributional semantics models are known to struggle with small data. It is generally accepted that in order to learn 'a good vector' for a word, a model must have sufficient examples of its usage. This contradicts the fact that humans can guess the meaning of a word from a few occurrences only. In this paper, we show that a neural language model such as Word2Vec only necessitates minor modifications to its standard architecture to learn new terms from tiny data, using background knowledge from a previously learnt semantic space. We test our model on word definitions and on a nonce task involving 2-6 sentences' worth of context, showing a large increase in performance over state-of-the-art models on the definitional task.", |
|
"pdf_parse": { |
|
"paper_id": "D17-1030", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Distributional semantics models are known to struggle with small data. It is generally accepted that in order to learn 'a good vector' for a word, a model must have sufficient examples of its usage. This contradicts the fact that humans can guess the meaning of a word from a few occurrences only. In this paper, we show that a neural language model such as Word2Vec only necessitates minor modifications to its standard architecture to learn new terms from tiny data, using background knowledge from a previously learnt semantic space. We test our model on word definitions and on a nonce task involving 2-6 sentences' worth of context, showing a large increase in performance over state-of-the-art models on the definitional task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Distributional models (DS: Turney and Pantel (2010); Clark (2012); Erk (2012)), and in particular neural network approaches (Bengio et al., 2003; Collobert et al., 2011; Huang et al., 2012; Mikolov et al., 2013) , do not fare well in the absence of large corpora. That is, for a DS model to learn a word vector, it must have seen that word a sufficient number of times. This is in sharp contrast with the human ability to perform fast mapping, i.e. the acquisition of a new concept from a single exposure to information (Lake et al., 2011; Trueswell et al., 2013; Lake et al., 2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 124, |
|
"end": 145, |
|
"text": "(Bengio et al., 2003;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 146, |
|
"end": 169, |
|
"text": "Collobert et al., 2011;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 170, |
|
"end": 189, |
|
"text": "Huang et al., 2012;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 190, |
|
"end": 211, |
|
"text": "Mikolov et al., 2013)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 520, |
|
"end": 539, |
|
"text": "(Lake et al., 2011;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 540, |
|
"end": 563, |
|
"text": "Trueswell et al., 2013;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 564, |
|
"end": 582, |
|
"text": "Lake et al., 2016)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "There are at least two reasons for wanting to acquire vectors from very small data. First, some words are simply rare in corpora, but potentially crucial to some applications (consider, for instance, the processing of text containing technical terminology). Second, it seems that fast-mapping should be a prerequisite for any system pretending to cognitive plausibility: an intelligent agent with learning capabilities should be able to make educated guesses about new concepts it encounters.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "One way to deal with data sparsity issues when learning word vectors is to use morphological structure as a way to overcome the lack of primary data (Lazaridou et al., 2013; Luong et al., 2013; Kisselew et al., 2015; Pad\u00f3 et al., 2016) . Whilst such work has shown promising result, it is only applicable when there is transparent morphology to fall back on. Another strand of research has been started by Lazaridou et al. (2017) , who recently showed that by using simple summation over the (previously learnt) contexts of a nonce word, it is possible to obtain good correlation with human judgements in a similarity task. It is important to note that both these strategies assume that rare words are special cases of the distributional semantics apparatus, and thus require separate approaches to model them.", |
|
"cite_spans": [ |
|
{ |
|
"start": 149, |
|
"end": 173, |
|
"text": "(Lazaridou et al., 2013;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 174, |
|
"end": 193, |
|
"text": "Luong et al., 2013;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 194, |
|
"end": 216, |
|
"text": "Kisselew et al., 2015;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 217, |
|
"end": 235, |
|
"text": "Pad\u00f3 et al., 2016)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 406, |
|
"end": 429, |
|
"text": "Lazaridou et al. (2017)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Having different algorithms for modelling the same phenomenon means however that we need some meta-theory to know when to apply one or the other: it is for instance unclear at which frequency a rare word is not rare anymore. Further, methods like summation are naturally selflimiting: they create frustratingly strong baselines but are too simplistic to be extended and improved in any meaningful way. In this paper, our underlying assumption is thus that it would be desirable to build a single, all-purpose architecture to learn word representations from any amount of data. The work we present views fast-mapping as a component of an incremental architecture: the rare word case is simply the first part of the concept learning process, regardless of how many times it will eventually be encountered.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "With the aim of producing such an incremen-tal system, we demonstrate that the general architecture of neural language models like Word2Vec (Mikolov et al., 2013) is actually suited to modelling words from a few occurrences only, providing minor adjustments are made to the model itself and its parameters. Our main conclusion is that the combination of a heightened learning rate and greedy processing results in very reasonable oneshot learning, but that some safeguards must be in place to mitigate the high risks associated with this strategy.", |
|
"cite_spans": [ |
|
{ |
|
"start": 140, |
|
"end": 162, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We want to simulate the process by which a competent speaker encounters a new word in known contexts. That is, we assume an existing vocabulary (i.e. a previously trained semantic space) which can help the speaker 'guess' the meaning of the new word. To evaluate this process, we use two datasets, described below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task description", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The definitional nonce dataset We build a novel dataset based on encyclopedic data, simulating the case where the context of the unknown word is supposedly maximally informative. 1 We first record all Wikipedia titles containing one word only (e.g. Albedo, Insulin). We then extract the first sentence of the Wikipedia page corresponding to each target title (e.g. Insulin is a peptide hormone produced by beta cells in the pancreas.), and tokenise that sentence using the Spacy toolkit. 2 Each occurrence of the target in the sentence is replaced with a slot ( ). From this original dataset, we only retain sentences with enough information (i.e. a length over 10 words), corresponding to targets which are frequent enough in the UkWaC corpus (Baroni et al. (2009) , minimum frequency of 200). The frequency threshold allows us to make sure that we have a high-quality gold vector to compare our learnt representation to. We then randomly sample 1000 sentences, manually checking the data to remove instances that are, in fact, not definitional. We split the data into 700 training and 300 test instances.", |
|
"cite_spans": [ |
|
{ |
|
"start": 179, |
|
"end": 180, |
|
"text": "1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 744, |
|
"end": 765, |
|
"text": "(Baroni et al. (2009)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task description", |
|
"sec_num": "2" |
|
}, |
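The construction pipeline described above (one-word titles, first sentences, tokenisation, slot replacement, length and frequency filtering) can be sketched as follows. This is a minimal illustration rather than the authors' release code: the spaCy model name, the slot symbol "___" and the helper's interface are assumptions.

```python
import spacy  # tokenisation toolkit mentioned in the paper; the model name below is an assumption

nlp = spacy.load("en_core_web_sm")

def make_definition_instance(title, first_sentence, min_len=10):
    """Turn the first sentence of a one-word Wikipedia title into a definitional
    instance, replacing every occurrence of the target with a slot (here '___')."""
    target = title.lower()
    tokens = [t.text.lower() for t in nlp(first_sentence)]
    if len(tokens) <= min_len:
        return None  # sentence too short to be informative
    return [("___" if tok == target else tok) for tok in tokens]

# Example based on the paper's illustration:
# make_definition_instance("Insulin",
#     "Insulin is a peptide hormone produced by beta cells in the pancreas.")
```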
|
{ |
|
"text": "On this dataset, we simulate first-time exposure to the nonce word by changing the label of the gold standard vector in the background semantic space, and producing a new, randomly initialised vector for the nonce. So for instance, insulin becomes insulin gold, and a new random embedding is added to the input matrix for insulin. This setup allows us to easily measure the similarity of the newly learnt vector, obtained from one definition, to the vector produced by exposure to the whole Wikipedia. To measure the relative performance of various setups, we calculate the Reciprocal Rank (RR) of the gold vector in the list of all nearest neighbours to the learnt representation. We average RRs over the number of instances in the dataset, thus obtaining a single MRR figure (Mean Reciprocal Rank).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task description", |
|
"sec_num": "2" |
|
}, |
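As a concrete illustration of this evaluation, the sketch below ranks the gold vector among all neighbours of a learnt nonce vector by cosine similarity and averages the reciprocal ranks into an MRR. It assumes the background space is a plain dict mapping words to numpy arrays; it is not the authors' evaluation script.

```python
import numpy as np

def reciprocal_rank(learnt, gold_label, space):
    """Rank of the gold vector among all neighbours of the learnt nonce vector (cosine)."""
    labels = list(space.keys())
    mat = np.vstack([space[w] for w in labels])
    sims = mat @ learnt / (np.linalg.norm(mat, axis=1) * np.linalg.norm(learnt) + 1e-12)
    order = np.argsort(-sims)  # most similar first
    rank = int(np.where(order == labels.index(gold_label))[0][0]) + 1
    return 1.0 / rank

def mean_reciprocal_rank(instances, space):
    """Average RR over (learnt_vector, gold_label) pairs, giving a single MRR figure."""
    return sum(reciprocal_rank(v, g, space) for v, g in instances) / len(instances)
```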
|
{ |
|
"text": "The Chimera dataset Our second dataset is the 'Chimera' dataset of (Lazaridou et al., 2017) . 3 This dataset was specifically constructed to simulate a nonce situation where a speaker encounters a word for the first time in naturally-occurring (and not necessarily informative) sentences. Each instance in the data is a nonce, associated with 2-6 sentences showing the word in context. The novel concept is created as a 'chimera', i.e. a mixture of two existing and somewhat related concepts (e.g., a buffalo crossed with an elephant). The sentences associated with the nonce are utterances containing one of the components of the chimera, randomly extracted from a large corpus.", |
|
"cite_spans": [ |
|
{ |
|
"start": 67, |
|
"end": 91, |
|
"text": "(Lazaridou et al., 2017)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 94, |
|
"end": 95, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task description", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The dataset was annotated by humans in terms of the similarity of the nonce to other, randomly selected concepts. Fig. 1 gives an example of a data point with 2 sentences of context, with the nonce capitalised (VALTUOR, a combination of cucumber and celery). The sentences are followed by the 'probes' of the trial, i.e. the concepts that the nonce must be compared to. Finally, human similarity responses are given for each probe with respect to the nonce. Each chimera was rated by an average of 143 subjects. In our experiments, we simply replace all occurrences of the original nonce with a slot ( ) and learn a representation for that slot. For each setting (2, 4 and 6 sentences), we randomly split the 330 instances in the data into 220 for training and 110 for testing.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 114, |
|
"end": 120, |
|
"text": "Fig. 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Task description", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Following the authors of the dataset, we evaluate by calculating the correlation between system and human judgements. For each trial, we calculate Spearman correlation (\u03c1) between the similarities given by the system to each nonce-probe pair, and the human responses. The overall result is the average Spearman across all trials.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task description", |
|
"sec_num": "2" |
|
}, |
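For concreteness, the evaluation on the Chimera data can be sketched as below, using scipy's Spearman correlation. The trial dictionary keys and the nonce-vector list are assumptions about the data layout, not the released dataset format.

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def average_spearman(trials, space, nonce_vectors):
    """One rho per chimera trial (system vs. human similarities over the probes),
    averaged across all trials."""
    rhos = []
    for i, trial in enumerate(trials):
        system = [cosine(nonce_vectors[i], space[p]) for p in trial["probes"]]
        rho, _ = spearmanr(system, trial["human_ratings"])
        rhos.append(rho)
    return float(np.mean(rhos))
```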
|
{ |
|
"text": "Sentences: Canned sardines and VALTUOR between two slices of wholemeal bread and thinly spread Flora Original. @@ Erm, VALTUOR, low fat dairy products, incidents of heart disease for those who have an olive oil rich diet.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task description", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Probes: rhubarb, onion, pear, strawberry, limousine, cushion Human responses: 3, 2.86, 1.43, 2.14, 1.29, 1.71 ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task description", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We test two state-of-the art systems: a) Word2Vec (W2V) in its Gensim 4 implementation, allowing for update of a prior semantic space; b) the additive model of Lazaridou et al. (2017) , using a background space from W2V.", |
|
"cite_spans": [ |
|
{ |
|
"start": 160, |
|
"end": 183, |
|
"text": "Lazaridou et al. (2017)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline models", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We note that both models allow for some sort of incrementality. W2V processes input one context at a time (or several, if mini-batches are implemented), performing gradient descent after each new input. The network's weights in the input, which correspond to the created word vectors, can be inspected at any time. 5 As for addition, it also affords the ability to stop and restart training at any time: a typical implementation of this behaviour can be found in distributional semantics models based on random indexing (see e.g. QasemiZadeh et al., 2017) . This is in contrast with so-called 'count-based' models calculated by computing a frequency matrix over a fixed corpus, which is then globally modified through a transformation such as Pointwise Mutual Information.", |
|
"cite_spans": [ |
|
{ |
|
"start": 530, |
|
"end": 555, |
|
"text": "QasemiZadeh et al., 2017)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline models", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We consider W2V's 'skip-gram' model, which learns word vectors by predicting the context words of a particular target. The W2V architecture includes several important parameters, which we briefly describe below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word2Vec", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In W2V, predicting a word implies the ability to distinguish it from so-called negative samples, i.e. other words which are not the observed item. The number of negative samples to be considered can be tuned. What counts as a context for a particular target depends on the window size around that target. W2V features random resizing of the window, which has been shown to increase the model's performance. Further, each sentence passed to the model undergoes subsampling, a random process by which some words are dropped out of the input as a function of their overall frequency. Finally, the learning rate \u03b1 measures how quickly the system learns at each training iteration. Traditionally, \u03b1 is set low (0.025 for Gensim) in order not to overshoot the system's error minimum.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word2Vec", |
|
"sec_num": null |
|
}, |
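To make the subsampling step concrete, the sketch below uses the discard probability from Mikolov et al. (2013); Gensim's internal variant differs slightly, so this is an illustration of the mechanism rather than the exact implementation.

```python
import math
import random

def subsample(sentence, rel_freq, t=1e-3):
    """Drop tokens with probability 1 - sqrt(t / f(w)), so very frequent words
    are discarded more often. rel_freq maps a word to its relative corpus frequency."""
    kept = []
    for w in sentence:
        f = rel_freq.get(w, 0.0)
        p_discard = 1.0 - math.sqrt(t / f) if f > t else 0.0
        if random.random() >= p_discard:
            kept.append(w)
    return kept
```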
|
{ |
|
"text": "Gensim has an update function which allows us to save a W2V model and continue learning from new data: this lets us simulate prior acquisition of a background vocabulary and new learning from a nonce's context. As background vocabulary, we use a semantic space trained on a Wikipedia snapshot of 1.6B words with Gensim's standard parameters (initial learning rate of 0.025, 5 negative samples, a window of \u00b15 words, subsampling 1e \u22123 , 5 epochs). We use the skip-gram model with a minimum word count of 50 and vector dimensionality 400. This results in a space with 259, 376 word vectors. We verify the quality of this space by calculating correlation with the similarity ratings in the MEN dataset (Bruni et al., 2014) . We obtain \u03c1 = 0.75, indicating an excellent fit with human judgements.", |
|
"cite_spans": [ |
|
{ |
|
"start": 699, |
|
"end": 719, |
|
"text": "(Bruni et al., 2014)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word2Vec", |
|
"sec_num": null |
|
}, |
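A minimal sketch of this setup with the reported settings is given below, assuming Gensim 3.x parameter names (size and iter were renamed vector_size and epochs in Gensim 4); the corpus arguments are placeholders for iterables of tokenised sentences.

```python
from gensim.models import Word2Vec

def train_background_space(wiki_sentences):
    """Background space with the settings reported in the paper (skip-gram, 400 dimensions,
    window of 5, 5 negative samples, subsampling 1e-3, min count 50, alpha 0.025, 5 epochs)."""
    return Word2Vec(sentences=wiki_sentences, sg=1, size=400, window=5,
                    negative=5, sample=1e-3, min_count=50, alpha=0.025,
                    iter=5, workers=4)

def update_with_nonce(model, nonce_sentences):
    """Continue training a saved model on a nonce's context sentences."""
    model.build_vocab(nonce_sentences, update=True)  # adds the new word to the vocabulary
    model.train(nonce_sentences, total_examples=len(nonce_sentences), epochs=1)
    return model
```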
|
{ |
|
"text": "Additive model Lazaridou et al. (2017) use a simple additive model, which sums the vectors of the context words of the nonce, taking as context the entire sentence where the target occurs. Their model operates on multimodal vectors, built over both text and images. In the present work, however, we use the semantic space described above, built on Wikipedia text only. We do not normalise vectors before summing, as we found that the system's performance was better than with normalisation. We also discard function words when summing, using a stopword list. We found that this step affects results very positively.", |
|
"cite_spans": [ |
|
{ |
|
"start": 15, |
|
"end": 38, |
|
"text": "Lazaridou et al. (2017)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word2Vec", |
|
"sec_num": null |
|
}, |
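As used here, the additive baseline reduces to the sketch below; the stopword set is a small stand-in for the full list used in the experiments, and the background space is assumed to be a dict of numpy arrays.

```python
import numpy as np

STOPWORDS = {"the", "a", "an", "of", "and", "is", "in", "to", "by", "for", "with"}  # stand-in list

def additive_vector(context_tokens, space):
    """Sum the (unnormalised) vectors of all known, non-stopword context words."""
    vectors = [space[w] for w in context_tokens if w not in STOPWORDS and w in space]
    return np.sum(vectors, axis=0) if vectors else None
```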
|
{ |
|
"text": "The results for our state-of-the-art models are shown in the top sections of Tables 1 and 2. W2V is run with the standard Gensim parameters, under the skip-gram model. It is clear from the results that W2V is unable to learn nonces from definitions (M RR = 0.00007). The additive model, on the other hand, performs well: an M RR of 0.03686 means that the median rank of the true vector is 861, out of a challenging 259, 376 neighbours (the size of the vocabulary). On the Chimeras dataset, W2V still performs well under the sum model -although the difference is not as marked and possibly indicates that this dataset is more difficult (which we would expect, as the sentences are not as informative as in the encyclopedia case).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 77, |
|
"end": 92, |
|
"text": "Tables 1 and 2.", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Word2Vec", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Our system, Nonce2Vec (N2V), 6 modifies W2V in the following ways.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Nonce2Vec", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Initialisation: since addition gives a good approximation of the nonce word, we initialise our vectors to the sum of all known words in the context sentences (see \u00a73). Note that this is not strictly equivalent to the pure sum model, as subsampling takes care of frequent word deletion in this setup (as opposed to a stopword list). In practice, this means that the initialised vectors are of slightly lesser quality than the ones from the sum model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Nonce2Vec", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Parameter choice: we experiment with higher learning rates coupled with larger window sizes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Nonce2Vec", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "That is, the model should take the risk of a) overshooting a minimum error; b) greedily considering irrelevant contexts in order to increase its chance to learn anything. We mitigate these risks through selective training and appropriate parameter decay (see below).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Nonce2Vec", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Window resizing: we suppress the random window resizing step when learning the nonce. This is because we need as much data as possible and accordingly need a large window around the target. Resizing would make us run the risk of ending up with a small window of a few words only, which would be uninformative.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Nonce2Vec", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Subsampling: With the goal of keeping most of our tiny data, we adopt a subsampling rate that only discards extremely frequent words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Nonce2Vec", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Selective training: we only train the nonce. That is, we only update the weights of the network for the target. This ensures that, despite the high selected learning rate, the previously learnt vectors, associated with the other words in the sentence, will not be radically shifted towards the meaning expressed in that particular sentence. Whilst the above modifications are appropriate to deal with the first mention of a word, we must ask in what measure they still are applicable when the term is encountered again (see \u00a71). With a 6 Code available at https://github.com/ minimalparts/nonce2vec.", |
|
"cite_spans": [ |
|
{ |
|
"start": 536, |
|
"end": 537, |
|
"text": "6", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Nonce2Vec", |
|
"sec_num": "4" |
|
}, |
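To make the selective-training idea concrete, the sketch below performs one skip-gram negative-sampling update in which only the nonce's input vector moves; the gradient follows the standard negative-sampling objective, and the function is an illustration rather than the Nonce2Vec code itself.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_nonce_pair(v_nonce, u_context, u_negatives, alpha):
    """One skip-gram negative-sampling step that updates only the nonce's input vector.
    The output vectors of the context word and of the negative samples are left untouched,
    so the background space is not shifted by a single sentence."""
    grad = (sigmoid(np.dot(u_context, v_nonce)) - 1.0) * u_context
    for u_neg in u_negatives:
        grad += sigmoid(np.dot(u_neg, v_nonce)) * u_neg
    return v_nonce - alpha * grad
```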
|
{ |
|
"text": "Median rank W2V 0.00007 111012 Sum 0.03686 861 N2V 0.04907 623 Table 2 : Results on chimera dataset view to cater for incrementality, we introduce a notion of parameter decay in the system. We hypothesise that the initial high-risk strategy, combining high learning rate and greedy processing of the data, should only be used in the very first training steps. Indeed, this strategy drastically moves the initialised vector to what the system assumes is the right neighbourhood of the semantic space. Once this positioning has taken place, the system should refine its guess rather than wildly moving in the space. We thus suggest that the learning rate itself, but also the subsampling value and window size should be returned to more conventional standards as soon as it is desirable. To achieve this, we apply some exponential decay to the learning rate of the nonce, proportional to the number of times the term has been seen: every time t that we train a pair containing the target word, we set \u03b1 to \u03b1 0 e \u2212\u03bbt , where \u03b1 0 is our initial learning rate. We also decrease the window size and increase subsampling rate on a per-sentence basis (see \u00a75).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 63, |
|
"end": 70, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "MRR", |
|
"sec_num": null |
|
}, |
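The decay schedule amounts to the one-liner below; the per-sentence window and subsampling adjustments mentioned above are handled separately and are not shown here.

```python
import math

def decayed_alpha(alpha_0, t, lam):
    """Learning rate after the nonce has been seen in t training pairs: alpha_0 * exp(-lam * t)."""
    return alpha_0 * math.exp(-lam * t)

# With the settings retained in Section 5 (alpha_0 = 1, lambda = 1/70):
# decayed_alpha(1.0, 0, 1/70) == 1.0; decayed_alpha(1.0, 70, 1/70) is roughly 0.37
```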
|
{ |
|
"text": "We first tune N2V's initial parameters on the training part of the definitional dataset. We experiment with a range of values for the learning rate ([0.5, 0.8, 1, 2, 5, 10, 20] ), window size ([5, 10, 15, 20] ), the number of negative samples ([3, 5, 10] ), the number of epochs ([1, 5] ) and the subsampling rate ([500, 1000, 10000] ). Here, given the size of the data, the minimum frequency for a word to be considered is 1. The best performance is obtained for a window of 15 words, 3 negative samples, a learning rate of 1, a subsampling rate of 10000, an exponential decay where \u03bb = 1 70 , and one single epoch (that is, the system truly implements fast-mapping). When applied to the test set, N2V shows a dramatic improvement in performance over the simple sum model, reaching M M R = 0.04907 (median rank 623).", |
|
"cite_spans": [ |
|
{ |
|
"start": 148, |
|
"end": 176, |
|
"text": "([0.5, 0.8, 1, 2, 5, 10, 20]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 192, |
|
"end": 208, |
|
"text": "([5, 10, 15, 20]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 243, |
|
"end": 254, |
|
"text": "([3, 5, 10]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 279, |
|
"end": 286, |
|
"text": "([1, 5]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 314, |
|
"end": 333, |
|
"text": "([500, 1000, 10000]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
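The tuning procedure described above is an exhaustive sweep over the listed ranges; a sketch is given below, where train_and_score is a placeholder for training Nonce2Vec on the definitional training split with one parameter combination and returning its MRR.

```python
from itertools import product

# Tuning ranges reported in the text
ALPHAS   = [0.5, 0.8, 1, 2, 5, 10, 20]
WINDOWS  = [5, 10, 15, 20]
NEGATIVE = [3, 5, 10]
EPOCHS   = [1, 5]
SAMPLES  = [500, 1000, 10000]

def grid_search(train_and_score):
    """Return the best-scoring parameter combination under MRR."""
    best_params, best_mrr = None, -1.0
    for alpha, window, neg, epochs, sample in product(ALPHAS, WINDOWS, NEGATIVE, EPOCHS, SAMPLES):
        params = dict(alpha=alpha, window=window, negative=neg, epochs=epochs, sample=sample)
        mrr = train_and_score(**params)
        if mrr > best_mrr:
            best_params, best_mrr = params, mrr
    return best_params, best_mrr
```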
|
{ |
|
"text": "On the training set of the Chimeras, we further tune the per-sentence decrease in window size and increase in subsampling. For the window size, we experiment with a reduction of [1...6] words on either side of the target, not going under a window of \u00b13 words. Further, we adjust each word's subsampling rate by a factor in the range [1.1, 1.2...1.9, 2.0]. Our results confirm that indeed, an appropriate change in those parameters is required: keeping them constant results in decreasing performance as more sentences are introduced. On the training set, we obtain our best performance (averaged over the 2-, 4-and 6sentences datasets) for a per-sentence window size decrease of 5 words on either side of the target, and adjusting subsampling by a factor of 1.9. Table 2 shows results on the three corresponding test sets using those parameters. Unfortunately, on this dataset, N2V does not improve on addition.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The difference in performance between the definitional and the Chimeras datasets may be explained in two ways. First, the chimera sentences were randomly selected and thus, are not necessarily hugely informative about the nature of the nonce. Second, the most informative sentences are not necessarily at the beginning of the fragment, so the system heightens its learning rate on the wrong data: the risk does not pay off. This suggests that a truly intelligent system should adjust its parameters in a non-monotonic way, to take into account the quality of the information it is processing. This point seems to be an important general requirement for any architecture that claims incrementality: our results indicate very strongly that a notion of informativeness must play a role in the learning decisions of the system. This conclusion is in line with work in other domains, e.g. interactive word learning using dialogue, where performance is linked to the ability of the system to measure its own confidence in particular pieces of knowledge and ask questions with a high information gain (Yu et al., 2016) . It also meets with general considerations on language acquisition, which accounts for the ability of young children to learn from limited 'primary linguistic data' by restricting explanatory models to those that provide such efficiency (Clark and Lappin, 2010) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 1094, |
|
"end": 1111, |
|
"text": "(Yu et al., 2016)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 1350, |
|
"end": 1374, |
|
"text": "(Clark and Lappin, 2010)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We have proposed Nonce2Vec, a Word2Vecinspired architecture to learn new words from tiny data. It requires a high-risk strategy combining heightened learning rate and greedy processing of the context. The particularly good performance of the system on definitions makes us confident that it is possible to build a unique, unified algorithm for learning word meaning from any amount of data. However, the less impressive performance on naturally-occurring sentences indicates that an ideal system should modulate its learning as a function of the informativeness of a context sentence, that is, take risks 'at the right time'.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "As pointed out in the introduction, Nonce2Vec is designed with a view to be an essential component of an incremental concept learning architecture. In order to validate our system as a suitable, generic solution for word learning, we will have to test it on various data sizes, from the type of low-to middle-frequency terms found in e.g. the Rare Words dataset (Luong et al., 2013) , to highly frequent words. We would like to systematically evaluate, in particular, how fast the system can gain an understanding of a concept which is fully equivalent to a vector built from big data. We believe that both quality and speed of learning will be strongly influenced by the ability of the algorithm to detect what we called informative sentences. Our future work will thus investigate how to capture and measure informativeness.", |
|
"cite_spans": [ |
|
{ |
|
"start": 362, |
|
"end": 382, |
|
"text": "(Luong et al., 2013)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Data available at http://aurelieherbelot. net/resources/.2 https://spacy.io/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Available at http://clic.cimec.unitn.it/ Files/PublicData/chimeras.zip.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Available athttps://github.com/ RaRe-Technologies/gensim.5 Technically speaking, standard W2V is not fully incremental, as it requires a first pass through the corpus to compute a vocabulary, with associated frequencies. As we show in \u00a75, it however allows for an incremental interpretation, given minor modifications.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We are grateful to Katrin Erk for inspiring conversations about tiny data and fast-mapping, and to Raffaella Bernardi and Sandro Pezzelle for comments on an early draft of this paper. We also thank the anonymous reviewers for their time and valuable comments. We acknowledge ERC 2011 Starting Independent Research Grant No 283554 (COMPOSES). This project has also received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\u0142odowska-Curie grant agreement No 751250.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "The WaCky wide web: a collection of very large linguistically processed web-crawled corpora. Language resources and evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Silvia", |
|
"middle": [], |
|
"last": "Bernardini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adriano", |
|
"middle": [], |
|
"last": "Ferraresi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eros", |
|
"middle": [], |
|
"last": "Zanchetta", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "43", |
|
"issue": "", |
|
"pages": "209--226", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The WaCky wide web: a collection of very large linguistically processed web-crawled corpora. Language resources and evaluation, 43(3):209-226.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "A neural probabilistic language model", |
|
"authors": [ |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R\u00e9jean", |
|
"middle": [], |
|
"last": "Ducharme", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pascal", |
|
"middle": [], |
|
"last": "Vincent", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Jauvin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Journal of machine learning research", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "1137--1155", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic lan- guage model. Journal of machine learning research, 3:1137-1155.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Multimodal distributional semantics", |
|
"authors": [ |
|
{ |
|
"first": "Elia", |
|
"middle": [], |
|
"last": "Bruni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nam", |
|
"middle": [ |
|
"Khanh" |
|
], |
|
"last": "Tran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Journal of Artificial Intelligence Research", |
|
"volume": "49", |
|
"issue": "1", |
|
"pages": "1--47", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elia Bruni, Nam Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. Journal of Ar- tificial Intelligence Research, 49(1):1-47.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Computational learning theory and language acquisition", |
|
"authors": [ |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shalom", |
|
"middle": [], |
|
"last": "Lappin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Philosophy of linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "445--475", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexander Clark and Shalom Lappin. 2010. Compu- tational learning theory and language acquisition. In Ruth M Kempson, Tim Fernando, and Nicholas Asher, editors, Philosophy of linguistics, pages 445- 475. Elsevier.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Vector space models of lexical meaning", |
|
"authors": [ |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Handbook of Contemporary Semantics -second edition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephen Clark. 2012. Vector space models of lexical meaning. In Shalom Lappin and Chris Fox, editors, Handbook of Contemporary Semantics -second edi- tion. Wiley-Blackwell.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Natural language processing (almost) from scratch", |
|
"authors": [ |
|
{ |
|
"first": "Ronan", |
|
"middle": [], |
|
"last": "Collobert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L\u00e9on", |
|
"middle": [], |
|
"last": "Bottou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Karlen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Koray", |
|
"middle": [], |
|
"last": "Kavukcuoglu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pavel", |
|
"middle": [], |
|
"last": "Kuksa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "2493--2537", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493-2537.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Vector space models of word meaning and phrase meaning: a survey", |
|
"authors": [ |
|
{ |
|
"first": "Katrin", |
|
"middle": [], |
|
"last": "Erk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Language and Linguistics Compass", |
|
"volume": "6", |
|
"issue": "", |
|
"pages": "635--653", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Katrin Erk. 2012. Vector space models of word mean- ing and phrase meaning: a survey. Language and Linguistics Compass, 6:635-653.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Improving word representations via global context and multiple word prototypes", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Eric", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Christopher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew Y", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (ACL2012)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "873--882", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eric H Huang, Richard Socher, Christopher D Man- ning, and Andrew Y Ng. 2012. Improving word representations via global context and multiple word prototypes. In Proceedings of the 50th Annual Meet- ing of the Association for Computational Linguistics (ACL2012), pages 873-882.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Obtaining a Better Understanding of Distributional Models of German Derivational Morphology", |
|
"authors": [ |
|
{ |
|
"first": "Max", |
|
"middle": [], |
|
"last": "Kisselew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Pad\u00f3", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ja\u0148", |
|
"middle": [], |
|
"last": "Snajder", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 11th International Conference on Computational Semantics (IWCS2015)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "58--63", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Max Kisselew, Sebastian Pad\u00f3, Alexis Palmer, and Ja\u0148 Snajder. 2015. Obtaining a Better Understanding of Distributional Models of German Derivational Morphology. In Proceedings of the 11th Inter- national Conference on Computational Semantics (IWCS2015), pages 58-63, London, UK.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "One-shot learning of simple visual concepts", |
|
"authors": [ |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Brenden M Lake", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joshua", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Gross", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Tenenbaum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 33rd Annual Meeting of the Cognitive Science Society (CogSci2012)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brenden M Lake, Ruslan Salakhutdinov, Jason Gross, and Joshua B Tenenbaum. 2011. One-shot learn- ing of simple visual concepts. In Proceedings of the 33rd Annual Meeting of the Cognitive Science Soci- ety (CogSci2012), Boston, MA.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Building machines that learn and think like people. arxiv", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Brenden", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Lake", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Tomer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joshua", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Ullman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samuel", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Tenenbaum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Gershman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenen- baum, and Samuel J. Gershman. 2016. Building machines that learn and think like people. arxiv, abs/1604.00289.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Multimodal word meaning induction from minimal exposure to natural text", |
|
"authors": [ |
|
{ |
|
"first": "Angeliki", |
|
"middle": [], |
|
"last": "Lazaridou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Marelli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Cognitive Science", |
|
"volume": "41", |
|
"issue": "S4", |
|
"pages": "677--705", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Angeliki Lazaridou, Marco Marelli, and Marco Baroni. 2017. Multimodal word meaning induction from minimal exposure to natural text. Cognitive Science, 41(S4):677-705.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Compositional-ly Derived Representations of Morphologically Complex Words in Distributional Semantics", |
|
"authors": [ |
|
{ |
|
"first": "Angeliki", |
|
"middle": [], |
|
"last": "Lazaridou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Marelli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Zamparelli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL2013)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1517--1526", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Angeliki Lazaridou, Marco Marelli, Roberto Zampar- elli, and Marco Baroni. 2013. Compositional-ly De- rived Representations of Morphologically Complex Words in Distributional Semantics. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL2013), pages 1517- 1526, Sofia, Bulgaria.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Better Word Representations with Recursive Neural Networks for Morphology", |
|
"authors": [ |
|
{ |
|
"first": "Thang", |
|
"middle": [], |
|
"last": "Luong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 17th Conference on Computational Natural Language Learning (CoNLL2013)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "104--113", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thang Luong, Richard Socher, and Christopher D. Manning. 2013. Better Word Representations with Recursive Neural Networks for Morphology. In Proceedings of the 17th Conference on Computa- tional Natural Language Learning (CoNLL2013), pages 104-113, Sofia, Bulgaria.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Distributed representations of words and phrases and their compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "26", |
|
"issue": "", |
|
"pages": "3111--3119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their composition- ality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Ad- vances in Neural Information Processing Systems 26, pages 3111-3119. Curran Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Predictability of distributional semantics in derivational word formation", |
|
"authors": [ |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Pad\u00f3", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aur\u00e9lie", |
|
"middle": [], |
|
"last": "Herbelot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Max", |
|
"middle": [], |
|
"last": "Kisselew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan\u0161najder", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 26th International Conference on Computational Linguistics (COLING2016)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastian Pad\u00f3, Aur\u00e9lie Herbelot, Max Kisselew, and Jan\u0160najder. 2016. Predictability of distributional semantics in derivational word formation. In Pro- ceedings of the 26th International Conference on Computational Linguistics (COLING2016), Osaka, Japan.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Non-Negative Randomized Word Embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Behrang", |
|
"middle": [], |
|
"last": "Qasemizadeh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laura", |
|
"middle": [], |
|
"last": "Kallmeyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aur\u00e9lie", |
|
"middle": [], |
|
"last": "Herbelot", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of Traitement automatique des langues naturelles (TALN2017)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Behrang QasemiZadeh, Laura Kallmeyer, and Aur\u00e9lie Herbelot. 2017. Non-Negative Randomized Word Embeddings. In Proceedings of Traitement automa- tique des langues naturelles (TALN2017), Orl\u00e9ans, France.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Propose but verify: Fast mapping meets cross-situational word learning", |
|
"authors": [ |
|
{ |
|
"first": "Tamara", |
|
"middle": [ |
|
"Nicol" |
|
], |
|
"last": "John C Trueswell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alon", |
|
"middle": [], |
|
"last": "Medina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lila", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Hafri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Gleitman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Cognitive psychology", |
|
"volume": "66", |
|
"issue": "1", |
|
"pages": "126--156", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John C Trueswell, Tamara Nicol Medina, Alon Hafri, and Lila R Gleitman. 2013. Propose but verify: Fast mapping meets cross-situational word learning. Cognitive psychology, 66(1):126-156.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "From frequency to meaning: Vector space models of semantics", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Peter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Turney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Pantel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Journal of artificial intelligence research", |
|
"volume": "37", |
|
"issue": "", |
|
"pages": "141--188", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter D Turney and Patrick Pantel. 2010. From fre- quency to meaning: Vector space models of se- mantics. Journal of artificial intelligence research, 37:141-188.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Training an adaptive dialogue policy for interactive learning of visually grounded word meanings", |
|
"authors": [ |
|
{ |
|
"first": "Yanchao", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arash", |
|
"middle": [], |
|
"last": "Eshghi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oliver", |
|
"middle": [], |
|
"last": "Lemon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 17th Annual SIGdial Meeting on Discourse and Dialogue (SIGDIAL2016)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "339--349", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yanchao Yu, Arash Eshghi, and Oliver Lemon. 2016. Training an adaptive dialogue policy for interactive learning of visually grounded word meanings. In Proceedings of the 17th Annual SIGdial Meeting on Discourse and Dialogue (SIGDIAL2016), pages 339-349, Los Angeles,CA.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"text": "An example chimera (VALTUOR).", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"content": "<table><tr><td>L2 \u03c1</td><td>L4 \u03c1</td><td>L6 \u03c1</td></tr><tr><td colspan=\"3\">W2V 0.1459 0.2457 0.2498</td></tr><tr><td colspan=\"3\">Sum 0.3376 0.3624 0.4080</td></tr><tr><td colspan=\"3\">N2V 0.3320 0.3668 0.3890</td></tr></table>", |
|
"num": null, |
|
"text": "Results on definitional dataset", |
|
"type_str": "table", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |