{
"paper_id": "E14-1027",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:40:24.049682Z"
},
"title": "Incremental Bayesian Learning of Semantic Categories",
"authors": [
{
"first": "Lea",
"middle": [],
"last": "Frermann",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {
"addrLine": "10 Crichton Street",
"postCode": "EH8 9AB",
"settlement": "Edinburgh"
}
},
"email": "[email protected]"
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {
"addrLine": "10 Crichton Street",
"postCode": "EH8 9AB",
"settlement": "Edinburgh"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Models of category learning have been extensively studied in cognitive science and primarily tested on perceptual abstractions or artificial stimuli. In this paper we focus on categories acquired from natural language stimuli, that is words (e.g., chair is a member of the FURNITURE category). We present a Bayesian model which, unlike previous work, learns both categories and their features in a single process. Our model employs particle filters, a sequential Monte Carlo method commonly used for approximate probabilistic inference in an incremental setting. Comparison against a state-of-the-art graph-based approach reveals that our model learns qualitatively better categories and demonstrates cognitive plausibility during learning.",
"pdf_parse": {
"paper_id": "E14-1027",
"_pdf_hash": "",
"abstract": [
{
"text": "Models of category learning have been extensively studied in cognitive science and primarily tested on perceptual abstractions or artificial stimuli. In this paper we focus on categories acquired from natural language stimuli, that is words (e.g., chair is a member of the FURNITURE category). We present a Bayesian model which, unlike previous work, learns both categories and their features in a single process. Our model employs particle filters, a sequential Monte Carlo method commonly used for approximate probabilistic inference in an incremental setting. Comparison against a state-of-the-art graph-based approach reveals that our model learns qualitatively better categories and demonstrates cognitive plausibility during learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Considerable psychological research has shown that people reason about novel objects they encounter by identifying the category to which these objects belong and extrapolating from their past experiences with other members of that category (Smith and Medin, 1981) . Categorization is a classic problem in cognitive science, underlying a variety of common mental tasks including perception, learning, and the use of language.",
"cite_spans": [
{
"start": 240,
"end": 263,
"text": "(Smith and Medin, 1981)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given its fundamental nature, categorization has been extensively studied both experimentally and in simulations. Indeed, numerous models exist as to how humans categorize objects ranging from strict prototypes (categories are represented by a single idealized member which embodies their core properties; e.g., Reed 1972) to full exemplar models (categories are represented by a list of previously encountered members; e.g., Nosofsky 1988) and combinations of the two (e.g., Griffiths et al. 2007) . A common feature across different studies is the use of stimuli involving real-world objects (e.g., children's toys ; Starkey 1981) , perceptual abstractions (e.g., photographs of animals; Quinn and Eimas 1996) , or artificial ones (e.g., binary strings, dot patterns or geometric shapes; Medin and Schaffer 1978; Posner and Keele 1968; Bomba and Siqueland 1983) . Most existing models focus on adult categorization, in which it is assumed that a large number of categories have already been learnt (but see Anderson 1991 and Griffiths et al. 2007 for exceptions) .",
"cite_spans": [
{
"start": 312,
"end": 322,
"text": "Reed 1972)",
"ref_id": "BIBREF29"
},
{
"start": 426,
"end": 440,
"text": "Nosofsky 1988)",
"ref_id": "BIBREF26"
},
{
"start": 476,
"end": 498,
"text": "Griffiths et al. 2007)",
"ref_id": "BIBREF16"
},
{
"start": 617,
"end": 618,
"text": ";",
"ref_id": null
},
{
"start": 619,
"end": 632,
"text": "Starkey 1981)",
"ref_id": "BIBREF35"
},
{
"start": 690,
"end": 711,
"text": "Quinn and Eimas 1996)",
"ref_id": "BIBREF28"
},
{
"start": 790,
"end": 814,
"text": "Medin and Schaffer 1978;",
"ref_id": "BIBREF25"
},
{
"start": 815,
"end": 837,
"text": "Posner and Keele 1968;",
"ref_id": "BIBREF27"
},
{
"start": 838,
"end": 863,
"text": "Bomba and Siqueland 1983)",
"ref_id": "BIBREF4"
},
{
"start": 1009,
"end": 1026,
"text": "Anderson 1991 and",
"ref_id": "BIBREF1"
},
{
"start": 1027,
"end": 1064,
"text": "Griffiths et al. 2007 for exceptions)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work we focus on categories acquired from natural language stimuli (i.e., words) and investigate how the statistics of the linguistic environment (as approximated by large corpora) influence category formation (e.g., chair and table are FURNITURE whereas peach and apple are FRUIT 1 ). The idea of modeling categories using words as a stand-in for their referents has been previously used to explore categorization-related phenomena such as semantic priming (Cree et al., 1999) and typicality rating (Voorspoels et al., 2008) , to evaluate prototype and exemplar models (Storms et al., 2000) , and to simulate early language category acquisition (Fountain and Lapata, 2011) . The idea of using naturalistic corpora has received little attention. Most existing studies use feature norms as a proxy for people's representation of semantic concepts. In a typical procedure, participants are presented with a word and asked to generate the most relevant features or attributes for its referent concept. The most notable collection of feature norms is probably the multi-year project of McRae et al. (2005) , which obtained features for a set of 541 common English nouns.",
"cite_spans": [
{
"start": 466,
"end": 485,
"text": "(Cree et al., 1999)",
"ref_id": "BIBREF8"
},
{
"start": 508,
"end": 533,
"text": "(Voorspoels et al., 2008)",
"ref_id": "BIBREF38"
},
{
"start": 578,
"end": 599,
"text": "(Storms et al., 2000)",
"ref_id": "BIBREF36"
},
{
"start": 654,
"end": 681,
"text": "(Fountain and Lapata, 2011)",
"ref_id": "BIBREF13"
},
{
"start": 1090,
"end": 1109,
"text": "McRae et al. (2005)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our approach replaces feature norms with representations derived from words' contexts in corpora. While this is an impoverished view of how categories are acquired -it is clear that they are learnt through exposure to the linguistic environment and the physical world -perceptual infor-mation relevant for extracting semantic categories is to a large extent redundantly encoded in linguistic experience (Riordan and Jones, 2011) . Besides, there are known difficulties with feature norms such as the small number of words for which these can be obtained, the quality of the attributes, and variability in the way people generate them (see Zeigenfuse and Lee 2010 for details). Focusing on natural language categories allows us to build categorization models with theoretically unlimited scope.",
"cite_spans": [
{
"start": 403,
"end": 428,
"text": "(Riordan and Jones, 2011)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To this end, we present a probabilistic Bayesian model of category acquisition based on the key idea that learners can adaptively form category representations that capture the structure expressed in the observed data. We model category induction as two interrelated sub-problems: (a) the acquisition of features that discriminate among categories, and (b) the grouping of concepts into categories based on those features. An important modeling question concerns the exact mechanism with which categories are learned. To maintain cognitive plausibility, we develop an incremental learning algorithm. Incrementality is a central aspect of human learning which takes place sequentially and over time. Humans are capable of dealing with a situation even if only partial information is available. They adaptively learn as new information is presented and locally update their internal knowledge state without systematically revising everything known about the situation at hand. Memory and processing limitations also explain why humans must learn incrementally. It is not possible to store and have easy access to all the information one has been exposed to. It seems likely that people store the most prominent facts and generalizations, which they modify on they fly when new facts become available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our model learns categories using a particle filter, a Markov Chain Monte Carlo (MCMC) inference mechanism which sequentially integrates newly observed data and can be thus viewed as a plausible proxy for human learning. Experimental results show that the incremental learner obtains meaningful categories which outperform the state of the art whilst at the same time acquiring semantic representations of words and their features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The problem of category induction has achieved much attention in the cognitive science literature. Incremental category learning was pioneered by Anderson (1991) who develops a non-parametric model able to induce categories from abstract stimuli represented by binary features. Sanborn et al. (2006) present a fully Bayesian adaptation of Anderson's original model, which yields a better fit with behavioral data. A separate line of work examines the cognitive characteristics of category acquisition as well as the processes of generalizing and generating new categories and exemplars (Jern and Kemp, 2013; Kemp et al., 2012) . The above models are conceptually similar to ours. However, they were developed with adult categorization in mind, and use rather simplistic categories representing toy-domains. It is therefore not clear whether they generalize to arbitrary stimuli and data sizes. We aim to show that it is possible to acquire natural language categories on a larger scale purely from linguistic context.",
"cite_spans": [
{
"start": 146,
"end": 161,
"text": "Anderson (1991)",
"ref_id": "BIBREF1"
},
{
"start": 278,
"end": 299,
"text": "Sanborn et al. (2006)",
"ref_id": "BIBREF32"
},
{
"start": 586,
"end": 607,
"text": "(Jern and Kemp, 2013;",
"ref_id": "BIBREF19"
},
{
"start": 608,
"end": 626,
"text": "Kemp et al., 2012)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our model is loosely related to Bayesian models of word sense induction (Brody and Lapata, 2009; Yao and Durme, 2011) . We also assume that local linguistic context can provide important cues for word meaning and by extension category membership. However, the above models focus on performance optimization and learn in an ideal batch mode, while incorporating various kinds of additional features such as part of speech tags or dependencies. In contrast, we develop a cognitively plausible (early) language learning model and show that categories can be acquired purely from context, as well as in an incremental fashion.",
"cite_spans": [
{
"start": 72,
"end": 96,
"text": "(Brody and Lapata, 2009;",
"ref_id": "BIBREF7"
},
{
"start": 97,
"end": 117,
"text": "Yao and Durme, 2011)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "From a modeling perspective, we learn categories incrementally using a particle filtering algorithm (Doucet et al., 2001) . Particle filters are a family of sequential Monte Carlo algorithms which update the state space of a probabilistic model with newly encountered information. They have been successfully applied to natural language acquisition tasks such as word segmentation (Borschinger and Johnson, 2011) , or sentence processing (Levy et al., 2009) . Sanborn et al. (2006) also use particle filters for small-scale categorization experiments with artificial stimuli. To the best of our knowledge, we present the first particle filtering algorithm for large-scale category acquisition from natural text.",
"cite_spans": [
{
"start": 100,
"end": 121,
"text": "(Doucet et al., 2001)",
"ref_id": "BIBREF10"
},
{
"start": 381,
"end": 412,
"text": "(Borschinger and Johnson, 2011)",
"ref_id": "BIBREF5"
},
{
"start": 438,
"end": 457,
"text": "(Levy et al., 2009)",
"ref_id": "BIBREF22"
},
{
"start": 460,
"end": 481,
"text": "Sanborn et al. (2006)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our work is closest to Fountain and Lapata (2011) who also develop a model for inducing natural language categories. Specifically, they propose an incremental version of Chinese Whispers (Biemann, 2006) , a randomized graph-clustering algorithm. The latter takes as input a graph which is constructed from corpus-based co-occurrence statistics and produces a hard clustering over the nodes in the graph. Contrary to our model, they treat the tasks of inferring a semantic representa- tion for concepts and their class membership as two separate processes. This allows to experiment with different ways of initializing the cooccurrence matrix (e.g., from bags of words or a dependency parsed corpus), however at the expense of cognitive plausibility. It is unlikely that humans have two entirely separate mechanisms for learning the meaning of words and their categories. We formulate a more expressive model within a probabilistic framework which captures the meaning of words, their similarity, and the predictive power of their linguistic contexts.",
"cite_spans": [
{
"start": 23,
"end": 49,
"text": "Fountain and Lapata (2011)",
"ref_id": "BIBREF13"
},
{
"start": 187,
"end": 202,
"text": "(Biemann, 2006)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this section we present our Bayesian model of category induction (BayesCat for short). The input to the model is natural language text, and its final output is a set of clusters representing categories of semantic concepts found in the input data. Like many other semantic models, BayesCat is inspired by the distributional hypothesis which states that a word's meaning is predictable from its context (Harris, 1954) . By extension, we also assume that contextual information can be used to characterize general semantic categories. Accordingly, the input to our model is a corpus of documents, each defined as a target word t centered in a fixed-length context window:",
"cite_spans": [
{
"start": 405,
"end": 419,
"text": "(Harris, 1954)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The BayesCat Model",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "[c \u2212n ... c \u22121 t c 1 ... c n ]",
"eq_num": "(1)"
}
],
"section": "The BayesCat Model",
"sec_num": "3"
},
{
"text": "We assume that there exists one global distribution over categories from which all documents are generated. Each document is assigned a category label, based on two types of features: the document's target word and its context words, which are modeled through separate category-specific distributions. We argue that it is important to distinguish between these features, since words belonging to the same category do not necessarily co-occur, but tend to occur in the same contexts. For example, the words polar bear and anteater are both members of the category ANIMAL. However, they rarely co-occur (in fact, a cursory search using Google yields only three matches for the query \"polar bear * anteater\"). Nevertheless, we would expect to observe both words in similar contexts since both animals eat, sleep, hunt, have fur, four legs, and so on. This distinction contrasts our category acquisition task from the classical task of topic inference. Figure 1 presents a plate diagram of the BayesCat model; an overview of the generative process is given in Figure 2 . We first draw a global category distribution \u03b8 from the Dirichlet distribution with parameter \u03b1. Next, for each category k, we draw a distribution over target words \u03c6 k from a Dirichlet with parameter \u03b2 and a distribution over context words \u03c8 k from a Dirichlet with parameter \u03b3. For each document d, we draw a category z d , then a target word, and N context words from the category-specific distributions \u03c6 z d and \u03c8 z d , respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 949,
"end": 957,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 1056,
"end": 1064,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "The BayesCat Model",
"sec_num": "3"
},
{
"text": "Draw distribution over categories \u03b8 \u223c Dir(\u03b1) for category k do Draw target word distribution \u03c6 k \u223c Dir(\u03b2) Draw context word distribution \u03c8 k \u223c Dir(\u03b3) for Document d do Draw category z d \u223c Mult(\u03b8) Draw target word w d t \u223c Mult(\u03c6 z d ) for context position n = {1..N} do Draw context word w d,n c \u223c Mult(\u03c8 z d )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The BayesCat Model",
"sec_num": "3"
},
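{
"text": "As an illustrative aside (not part of the original paper), the generative story above can be sketched in a few lines of Python/NumPy; the vocabulary sizes and document count below are arbitrary toy values, and only the hyperparameter symbols mirror the text.\n\nimport numpy as np\n\nrng = np.random.default_rng(0)\nK, V_t, V_c, D, N = 10, 50, 200, 5, 4   # categories, target/context vocab sizes, documents, context positions\nalpha, beta, gamma = 0.7, 0.1, 0.1\n\ntheta = rng.dirichlet(np.full(K, alpha))           # global category distribution\nphi = rng.dirichlet(np.full(V_t, beta), size=K)    # per-category target word distributions\npsi = rng.dirichlet(np.full(V_c, gamma), size=K)   # per-category context word distributions\n\ndocs = []\nfor d in range(D):\n    z = rng.choice(K, p=theta)                     # category label of document d\n    t = rng.choice(V_t, p=phi[z])                  # target word\n    ctx = rng.choice(V_c, p=psi[z], size=N)        # N context words\n    docs.append((t, ctx.tolist()))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The BayesCat Model",
"sec_num": "3"
},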
{
"text": "Our goal is to infer the joint distribution of all hidden model parameters, and observable data W . Since we use conjugate prior distributions throughout the model, this joint distribution can be simplified to:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P(W, Z, \u03b8, \u03c6, \u03c8; \u03b1, \u03b2, \u03b3) \u221d \u220f k \u0393(N k + \u03b1 k ) \u0393(\u2211 k N k + \u03b1 k ) \u00d7 K \u220f k=1 \u220f r \u0393(N k r + \u03b2 r ) \u0393(\u2211 r N k r + \u03b2 r ) \u00d7 K \u220f k=1 \u220f s \u0393(N k s + \u03b3 s ) \u0393(\u2211 s N k s + \u03b3 s ) ,",
"eq_num": "(2)"
}
],
"section": "Learning",
"sec_num": "4"
},
{
"text": "where r and s iterate over the target and context word vocabulary, respectively, and the distribu-tions \u03b8, \u03c6, and \u03c8 are integrated out and implicitly captured by the corresponding co-occurrence counts N * * . \u0393() denotes the Gamma function, a generalization of the factorial to real numbers. Since exact inference of the parameters of the BayesCat model is intractable, we use samplingbased approximate inference. Specifically, we present two learning algorithms, namely a Gibbs sampler and a particle filter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "4"
},
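{
"text": "For illustration only (not from the paper), the collapsed score of Equation 2 can be computed in log space directly from the co-occurrence counts; the array names Nk (documents per category), Nkr (category-by-target-word counts) and Nks (category-by-context-word counts) are assumptions introduced here.\n\nimport numpy as np\nfrom scipy.special import gammaln   # log of the Gamma function, numerically stable\n\ndef log_joint(Nk, Nkr, Nks, alpha, beta, gamma):\n    # Dirichlet-multinomial terms of Equation 2, up to an additive constant\n    lp = gammaln(Nk + alpha).sum() - gammaln((Nk + alpha).sum())\n    lp += (gammaln(Nkr + beta).sum(axis=1) - gammaln((Nkr + beta).sum(axis=1))).sum()\n    lp += (gammaln(Nks + gamma).sum(axis=1) - gammaln((Nks + gamma).sum(axis=1))).sum()\n    return lp",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "4"
},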
{
"text": "The Gibbs Sampler Gibbs sampling is a wellestablished approximate learning algorithm, based on Markov Chain Monte Carlo methods (Geman and Geman, 1984) . It operates in batch-mode by repeatedly iterating through all data points (documents in our case) and assigning the currently sampled document d a category z d conditioned on the current labelings of all other documents z \u2212d :",
"cite_spans": [
{
"start": 128,
"end": 151,
"text": "(Geman and Geman, 1984)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "z d \u223c P(z d |z \u2212d ,W \u2212d ; \u03b1, \u03b2, \u03b3),",
"eq_num": "(3)"
}
],
"section": "Learning",
"sec_num": "4"
},
{
"text": "using equation 2but ignoring information from the currently sampled document in all cooccurrence counts. The Gibbs sampler can be seen as an ideal learner, which can view and revise any relevant information at any time during learning. From a cognitive perspective, this setting is implausible, since a human language learner encounters training data incrementally and does not systematically revisit previous learning decisions. Particle filters are a class of incremental, or sequential, Monte Carlo methods which can be used to model aspects of the language learning process more naturally.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "4"
},
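{
"text": "A minimal sketch (ours, not the authors' code) of one collapsed Gibbs sweep over the documents, using the same count arrays as in the previous sketch; each document is assumed to be a pair (t, ctx) of a target word id and a list of context word ids, and the predictive term simplifies the full Dirichlet-multinomial form by ignoring within-document count increments.\n\nimport numpy as np\n\ndef gibbs_sweep(docs, z, Nk, Nkr, Nks, alpha, beta, gamma, rng):\n    # docs: list of (t, ctx); z: current category assignment of each document\n    for d, (t, ctx) in enumerate(docs):\n        k = z[d]                              # remove document d from the counts\n        Nk[k] -= 1\n        Nkr[k, t] -= 1\n        for c in ctx:\n            Nks[k, c] -= 1\n        # predictive probability of document d under each category (cf. Equation 2)\n        p = (Nk + alpha) * (Nkr[:, t] + beta) / (Nkr.sum(axis=1) + beta * Nkr.shape[1])\n        for c in ctx:\n            p = p * (Nks[:, c] + gamma) / (Nks.sum(axis=1) + gamma * Nks.shape[1])\n        p = p / p.sum()\n        k = rng.choice(len(p), p=p)           # resample a category for d\n        z[d] = k                              # add d back under the new category\n        Nk[k] += 1\n        Nkr[k, t] += 1\n        for c in ctx:\n            Nks[k, c] += 1\n    return z",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "4"
},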
{
"text": "The Particle Filter Intuitively, a particle filter (henceforth PF) entertains a fixed set of N weighted hypotheses (particles) based on previous training examples. Figure 3 shows an overview of the particle filtering learning procedure. At first, every particle of the PF is initialized from a base distribution P 0 (Initialization). Then a single iteration over the input data y is performed, during which the posterior distribution of each data point y t under all current particles is computed given information from all previously encountered data points y t\u22121 (Sampling/Prediction). Crucially, each update is conditioned only on the previous model state z t\u22121 , which results in a constant state space despite an increasing amount of available data. A common problem with PF algorithms is weight degeneration, i.e., one particle tends to accumulate most of the weight. To avoid this problem, at regular intervals the set of particles is resampled in order to discard particles with for particle p do Initialization Initialize randomly or from z 0 p \u223c p 0 (z) for observation t do for particle n do",
"cite_spans": [],
"ref_spans": [
{
"start": 164,
"end": 172,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Learning",
"sec_num": "4"
},
{
"text": "Figure 3: The particle filtering procedure. For each particle p: Initialization, z^0_p \u223c P_0(z) (or random initialization). For each observation t, for each particle n: Sampling/Prediction, P_n(z^t_n | y^t) \u223c p(z^t_n | z^{t\u22121}_n, \u03b1) P(y^t | z^t_n, y^{t\u22121}); then Resampling, z^t \u221d Mult({P_n(z^t_n)}^N_{n=1}).",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 8,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Learning",
"sec_num": "4"
},
{
"text": "low probability and to ensure that the sample is representative of the state space at any time (Resampling). This general algorithm can be straightforwardly adapted to our learning problem (Griffiths et al., 2011; Fearnhead, 2004) . Each observation corresponds to a document, which needs to be assigned a category. To begin with, we assign the first observed document to category 0 in all particles (Initialization). Then, we iterate once over the remaining documents. For each particle n, we compute a probability distribution over K categories based on the simplified posterior distribution as defined in equation 2(Sampling/Prediction), with cooccurrence counts based on the information from all previously encountered documents. Thus, we obtain a distribution over N \u2022 K possible assignments. From this distribution we sample with replacement N new particles, assign the current document to the corresponding category (Resampling), and proceed to the next input document.",
"cite_spans": [
{
"start": 189,
"end": 213,
"text": "(Griffiths et al., 2011;",
"ref_id": "BIBREF17"
},
{
"start": 214,
"end": 230,
"text": "Fearnhead, 2004)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "4"
},
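{
"text": "The following sketch (an editorial illustration, not the original implementation) mirrors the procedure just described: each particle keeps its own counts, every document induces a distribution over all (particle, category) pairs, and N new particles are resampled with replacement; the helper predictive() is the same simplified Dirichlet-multinomial term used in the Gibbs sketch above.\n\nimport numpy as np\nfrom copy import deepcopy\n\ndef predictive(Nk, Nkr, Nks, t, ctx, alpha, beta, gamma):\n    p = (Nk + alpha) * (Nkr[:, t] + beta) / (Nkr.sum(axis=1) + beta * Nkr.shape[1])\n    for c in ctx:\n        p = p * (Nks[:, c] + gamma) / (Nks.sum(axis=1) + gamma * Nks.shape[1])\n    return p\n\ndef particle_filter(docs, K, Vt, Vc, n_particles, alpha, beta, gamma, rng):\n    particles = [dict(Nk=np.zeros(K), Nkr=np.zeros((K, Vt)), Nks=np.zeros((K, Vc)), z=[])\n                 for _ in range(n_particles)]\n    for t, ctx in docs:\n        # Sampling/Prediction: category posterior for this document under every particle\n        scores = np.concatenate([predictive(p['Nk'], p['Nkr'], p['Nks'], t, ctx, alpha, beta, gamma)\n                                 for p in particles])\n        scores = scores / scores.sum()        # distribution over n_particles * K assignments\n        # Resampling: draw n_particles new (particle, category) pairs with replacement\n        picks = rng.choice(len(scores), size=n_particles, p=scores)\n        new_particles = []\n        for i in picks:\n            p = deepcopy(particles[i // K])\n            k = i % K\n            p['Nk'][k] += 1\n            p['Nkr'][k, t] += 1\n            for c in ctx:\n                p['Nks'][k, c] += 1\n            p['z'].append(k)\n            new_particles.append(p)\n        particles = new_particles\n    return particles",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "4"
},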
{
"text": "The goal of our experimental evaluation is to assess the quality of the inferred clusters by comparison to a gold standard and an existing graph-based model of category acquisition. In addition, we are interested in the incremental version of the model, whether it is able to learn meaningful categories and how these change over time. In the following, we give details on the corpora we used, describe how model parameters were selected, and explain our evaluation procedure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5"
},
{
"text": "All our experiments were conducted on a lemmatized version of the British National Corpus (BNC). The corpus was further preprocessed by removing stopwords and infrequent words (occurring less than 800 times in the BNC).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "The model output was evaluated against a gold standard set of categories which was created by collating the resources developed by Fountain and Lapata (2010) and Vinson and Vigliocco (2008) . Both datasets contain a classification of nouns into (possibly multiple) semantic categories produced by human participants. We therefore assume that they represent psychologically salient categories which the cognitive system is in principle capable of acquiring. After merging the two resources, and removing duplicates we obtained 42 semantic categories for 555 nouns. We split this gold standard into a development (41 categories, 492 nouns) and a test set (16 categories, 196 nouns) . 2 The input to our model consists of short chunks of text, namely a target word centered in a symmetric context window of five words (see (1)). In our experiments, the set of target words corresponds to the set of nouns in the evaluation dataset. Target word mentions and their context are extracted from the BNC.",
"cite_spans": [
{
"start": 131,
"end": 157,
"text": "Fountain and Lapata (2010)",
"ref_id": "BIBREF12"
},
{
"start": 162,
"end": 189,
"text": "Vinson and Vigliocco (2008)",
"ref_id": "BIBREF37"
},
{
"start": 682,
"end": 683,
"text": "2",
"ref_id": null
}
],
"ref_spans": [
{
"start": 653,
"end": 679,
"text": "(16 categories, 196 nouns)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "We optimized the hyperparameters of the BayesCat model on the development set. For the particle filter, the optimal values are \u03b1 = 0.7, \u03b2 = 0.1, \u03b3 = 0.1. We used the same values for the Gibbs Sampler since it proved insensitive to hyperparameter variations. We run the Gibbs sampler for 200 iterations 3 and report results averaged over 10 runs. For the PF, we set the number of particles to 500, and report final scores averaged over 10 runs. For evaluation, we take the clustering from the particle with the highest weight 4 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameters for the BayesCat Model",
"sec_num": "5.2"
},
{
"text": "Chinese Whispers We compared our approach with Fountain and Lapata (2011) who present a non-parametric graph-based model for category acquisition. Their algorithm incrementally constructs a graph from co-occurrence counts of target words and their contexts (they use a symmetric context window of five words). Target words constitute the nodes of the graph, their co-occurrences are transformed into a vector of positive PMI values, and graph edges correspond to the cosine similarity between the PMI-vectors representing any two nodes. They use Chinese Whispers (Biemann, 2006) to partition a graph into categories.",
"cite_spans": [
{
"start": 563,
"end": 578,
"text": "(Biemann, 2006)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Comparison",
"sec_num": "5.3"
},
{
"text": "We replicated the bag-of-words model presented in Fountain and Lapata (2011) and assessed its performance on our training corpora and test sets. The scores we report are averaged over 10 runs.",
"cite_spans": [
{
"start": 50,
"end": 76,
"text": "Fountain and Lapata (2011)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Comparison",
"sec_num": "5.3"
},
{
"text": "Chinese Whispers can only make hard clustering decisions, whereas the BayesCat model returns a soft clustering of target nouns. In order to be able to compare the two models, we convert the soft clusters to hard clusters by assigning each target word w to category c such that cat(w) = max c P(w|c) \u2022 P(c|w).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Comparison",
"sec_num": "5.3"
},
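{
"text": "The soft-to-hard conversion above amounts to a single argmax; a small sketch (ours), assuming two arrays of shape (num_categories, num_words) holding P(w|c) and P(c|w).\n\nimport numpy as np\n\ndef harden(P_w_given_c, P_c_given_w):\n    # pick, for every word, the category maximizing P(w|c) * P(c|w)\n    return np.argmax(P_w_given_c * P_c_given_w, axis=0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Comparison",
"sec_num": "5.3"
},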
{
"text": "LDA We also compared our model to a standard topic model, namely Latent Dirichlet Allocation (LDA; Blei et al. 2003) . LDA assumes that a document is generated from an individual mixture over topics, and each topic is associated with one word distribution. We trained a batch version of LDA using input identical to our model and the Mallet toolkit (McCallum, 2002) .",
"cite_spans": [
{
"start": 99,
"end": 116,
"text": "Blei et al. 2003)",
"ref_id": "BIBREF3"
},
{
"start": 349,
"end": 365,
"text": "(McCallum, 2002)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Comparison",
"sec_num": "5.3"
},
{
"text": "Chinese Whispers is a parameter-free algorithm and thus determines the number of clusters automatically. While the Bayesian models presented here are parametric in that an upper bound for the potential number of categories needs to be specified, the models themselves decide on the specific value of this number. We set the upper bound of categories to 100 for LDA as well as the batch and incremental version of the BayesCat model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Comparison",
"sec_num": "5.3"
},
{
"text": "Our aim is to learn a set of clusters each of which corresponds to one gold category, i.e., it contains all and only members of that gold category. We report evaluation scores based on three metrics which measure this tradeoff. Since in unsupervised clustering the cluster IDs are meaningless, all evaluation metrics involve a mapping from induced clusters to gold categories. The first two metrics described below perform a cluster-based mapping and are thus not ideal for assessing the output of soft clustering algorithms. The third metric performs an item-based mapping and can be directly used to evaluate soft clusters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5.4"
},
{
"text": "Purity/Collocation are based on member overlap between induced clusters and gold classes (Lang and Lapata, 2011) . Purity measures the degree to which each cluster contains instances that share the same gold class, while collocation measures the degree to which instances with the same gold class are assigned to a single cluster. We report the harmonic mean of purity and collocation as a single measure of clustering quality.",
"cite_spans": [
{
"start": 89,
"end": 112,
"text": "(Lang and Lapata, 2011)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5.4"
},
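{
"text": "For concreteness (our sketch, following the standard definitions rather than any particular implementation), purity, collocation and their harmonic mean can be computed from parallel lists of induced cluster ids and gold class ids as follows.\n\nfrom collections import defaultdict\n\ndef pc_f1(clusters, golds):\n    # clusters, golds: parallel lists of induced cluster ids and gold class ids\n    n = len(golds)\n    joint = defaultdict(int)\n    for c, g in zip(clusters, golds):\n        joint[c, g] += 1\n    cluster_ids = {c for c, _ in joint}\n    gold_ids = {g for _, g in joint}\n    purity = sum(max(v for (c, g), v in joint.items() if c == ci) for ci in cluster_ids) / n\n    collocation = sum(max(v for (c, g), v in joint.items() if g == gi) for gi in gold_ids) / n\n    return 2 * purity * collocation / (purity + collocation)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5.4"
},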
{
"text": "V-Measure is the harmonic mean between homogeneity and collocation (Rosenberg and Hirschberg, 2007) .",
"cite_spans": [
{
"start": 67,
"end": 99,
"text": "(Rosenberg and Hirschberg, 2007)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5.4"
},
{
"text": "Like purity, V-Measure performs cluster-based comparisons but is an entropy-based method. It measures the conditional entropy of a cluster given a class, and vice versa.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5.4"
},
{
"text": "Cluster-F1 is an item-based evaluation metric which we propose drawing inspiration from the supervised metric presented in Agirre and Soroa (2007) . Cluster-F1 maps each target word type to a gold cluster based on its soft class membership, and is thus appropriate for evaluation of soft clustering output. We first create a K \u00d7 G soft mapping matrix M from each induced category k i to gold classes g j from P(g j |k i ). We then map each target word type to a gold class by multiplying its probability distribution over soft clusters with the mapping matrix M , and taking the maximum value. Finally, we compute standard precision, recall and F1 between the mapped system categories and the gold classes.",
"cite_spans": [
{
"start": 123,
"end": 146,
"text": "Agirre and Soroa (2007)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5.4"
},
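{
"text": "One possible reading of Cluster-F1, sketched here for illustration (the exact estimation of the mapping matrix and of precision/recall is not fully specified above, so the array names and conventions below are assumptions): P_k_given_w holds each word's soft cluster membership, P_g_given_k is the mapping matrix M, and gold is a binary word-by-class membership matrix.\n\nimport numpy as np\n\ndef cluster_f1(P_k_given_w, P_g_given_k, gold):\n    # P_k_given_w: (W, K); P_g_given_k: (K, G) mapping matrix M; gold: (W, G) binary matrix\n    mapped = np.argmax(P_k_given_w @ P_g_given_k, axis=1)   # most probable gold class per word\n    pred = np.zeros_like(gold)\n    pred[np.arange(len(mapped)), mapped] = 1\n    tp = (pred * gold).sum()\n    precision = tp / pred.sum()\n    recall = tp / gold.sum()\n    return 2 * precision * recall / (precision + recall)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5.4"
},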
{
"text": "Our experiments are designed to answer three questions: (1) How do the induced categories fare against gold standard categories? (2) Are there performance differences between BayesCat and Chinese Whispers, given that the two models adopt distinct mechanisms for representing lexical meaning and learning semantic categories? (3) Is our incremental learning mechanism cognitively plausible? In other words, does the quality of the induced clusters improve over time and how do the learnt categories differ from the output of an ideal batch learner?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Clustering performance for the batch BayesCat model (BC-Batch), its incremental version (BC-Inc), Chinese Whispers (CW), and LDA is shown in Table 1 . Comparison of the two incremental models, namely BC-Inc and CW, shows that our model outperforms CW under all evaluation metrics both on the test and the development set. Our BC models perform at least as well as LDA, despite the more complex learning objective. Recall that LDA does not learn category specific features. BC-Batch performs best overall, however this is not surprising. The BayesCat model learnt in batch mode uses a Gibbs sampler which can be viewed as an ideal learner with access to the entire training data at any time, and the ability to systematically revise previous decisions. This puts the incremental variant at a disadvantage since the particle filter encounters the data incrementally and never resamples previously seen documents. Nevertheless, as shown in Table 1 BC-Inc's performance is very close to BC-Batch. BC-Inc outperforms the Gibbs sampler in the PC-F1 metric, because it achieves higher collocation scores. Inspection of the output reveals that the Gibbs sampler induces larger clusters compared to the particle filter (as well as less distinct clusters). Although the general pattern of results is the same on the development and test sets, absolute scores for all systems are higher on the test set. This is expected, since the test set contains less categories with a smaller number of exemplars and more accurate clusterings can be thus achieved (on average) more easily. Figure 4 displays the learning curves produced by CW and BC-Inc under the PC-F1 (left) and Cluster-F1 (right) evaluation metrics. Under PC-F1, CW produces a very steep initial learning curve which quickly flattens off, whereas no learning curve emerges for CW under Cluster-F1. The BayesCat model exhibits more discernible learning curves under both metrics. We also observe that learning curves for CW indicate much more variance during learning compared to BC-Inc, irrespectively of the evaluation metric being used. Figure 4b shows learning curves for BC-Inc when its output classes are interpreted in two ways, i.e., as soft or hard clusters. Interestingly, the two curves have a similar shape which points to the usefulness of Cluster-F1 as an evaluation metric for both types of clusters.",
"cite_spans": [],
"ref_spans": [
{
"start": 141,
"end": 148,
"text": "Table 1",
"ref_id": null
},
{
"start": 937,
"end": 944,
"text": "Table 1",
"ref_id": null
},
{
"start": 1567,
"end": 1575,
"text": "Figure 4",
"ref_id": "FIGREF2"
},
{
"start": 2086,
"end": 2095,
"text": "Figure 4b",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "In order to better understand the differences in the learning process between CW and BC-Inc we tracked the evolution of clusterings over time, as well as the variance across cluster sizes at each point in time. The results are plotted in Figure 5 . The top part of the figure compares the number of clusters learnt by the two models. We see that the number of clusters inferred by CW drops over time, but is closer to the number of clusters present in the gold standard. The final number of clusters inferred by CW is 26, whereas PF-Inc infers 90 clusters (there are 41 gold classes). The middle plot shows the variance in cluster size induced at any time by CW which is by orders of magnitude higher than the variance observed in the output of BayesCat (bottom plot). More importantly, the variance in BayesCat resembles the variance present in the gold standard much more closely. few very large clusters and a large number of very small (mostly singleton) clusters. Although some of the bigger clusters are meaningful, the overall structure of clusterings does not faithfully represent the gold standard.",
"cite_spans": [],
"ref_spans": [
{
"start": 238,
"end": 246,
"text": "Figure 5",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Finally, note that in contrast to CW and LDA, the BayesCat model learns not only how to induce clusters of target words, but also information about their category-specific contexts. Table 2 presents examples of the learnt categories together with their most likely contexts. For example, one of the categories our model discovers corresponds to BUILDINGS. Some of the context words or features relating to buildings refer to their location (e.g., city, road, hill, north, park), architectural style (e.g., modern, period, estate), and material (e.g., stone).",
"cite_spans": [],
"ref_spans": [
{
"start": 182,
"end": 189,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "In this paper we have presented a Bayesian model of category acquisition. Our model learns to group concepts into categories as well as their features (i.e., context words associated with them). Cat-egory learning is performed incrementally, using a particle filtering algorithm which is a natural choice for modeling sequential aspects of language learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "We now return to our initial questions and summarize our findings. Firstly, we observe that our incremental model learns plausible linguistic categories when compared against the gold standard. Secondly, these categories are qualitatively better when evaluated against Chinese Whispers, a closely related graph-based incremental algorithm. Thirdly, analysis of the model's output shows that it simulates category learning in two important ways, it consistently improves over time and can additionally acquire category features. Overall, our model has a more cognitively plausible learning mechanism compared to CW, and is more expressive, as it can simulate both category and feature learning. Although CW ultimately yields some meaningful categories, it does not acquire any knowledge pertaining to their features. This is somewhat unrealistic given that humans are good at inferring missing features for unknown categories (Anderson, 1991) . It is also symptomatic of the nature of the algorithm which does not have an explicit learning mechanism. Each node in the graph iteratively adopts (in random order) the strongest class in its neighborhood (i.e., the set of nodes with which it shares an edge). We also showed that LDA is less appropriate for the category learning task on account of its formulation which does not allow to simultaneously acquire clusters and their features. There are several options for improving our model. The learning mechanism presented here is the most basic of particle methods. A common problem in particle filtering is sample impoverishment, i.e., particles become highly similar after a few iterations, and do not optimally represent the sample space. More involved resampling methods such as stratified sampling or residual resampling, have been shown to alleviate this problem (Douc, 2005) .",
"cite_spans": [
{
"start": 925,
"end": 941,
"text": "(Anderson, 1991)",
"ref_id": "BIBREF1"
},
{
"start": 1817,
"end": 1829,
"text": "(Douc, 2005)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "From a cognitive perspective, the most obvious weakness of our algorithm is its strict incrementality. While our model simulates human mem-BUILDINGS wall, bridge, building, cottage, gate, house, train, bus, stone, chapel, brick, cathedral plan, include, park, city, stone, building, hotel, lead, road, hill, north, modern, visit, main, period, cathedral, estate, complete, site, owner, parish WEAPONS shotgun, pistol, knife, crowbar, gun, sledgehammer, baton, bullet, motorcycle, van, ambulance injure, ira, jail, yesterday, arrest, stolen, fire, officer, gun, police victim, hospital, steal, crash, murder, incident, driver, accident, hit INSTRUMENTS tuba, drum, harmonica, bagpipe, harp, violin, saxophone, rock, piano, banjo, guitar, flute, harpsichord, trumpet, rocker, clarinet, stereo, cello, accordion amp, orchestra, sound, electric, string, sing, song, drum, piano, condition, album, instrument, guitar, band, bass, music ory restrictions and uncertainty by learning based on a limited number of current knowledge states (i.e., particles), it never reconsiders past categorization decisions. In many linguistic tasks, however, learners revisit past decisions (Frazier and Rayner, 1982 ) and intuitively we would expect categories to change based on novel evidence, especially in the early learning phase. In fixed-lag smoothing, a particle smoothing variant, model updates include systematic revision of a fixed set of previous observations in the light of newly encountered evidence (Briers et al., 2010) . Based on this framework, we will investigate different schemes for informed sequential learning.",
"cite_spans": [
{
"start": 1168,
"end": 1193,
"text": "(Frazier and Rayner, 1982",
"ref_id": "BIBREF14"
},
{
"start": 1493,
"end": 1514,
"text": "(Briers et al., 2010)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "Finally, we would like to compare the model's predictions against behavioral data, and examine more thoroughly how categories and features evolve over time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "Throughout this paper we will use small caps to denote CATEGORIES and italics for their members.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The dataset is available from www.frermann.de/data.3 We checked for convergence on the development set.4 While in theory particles should be averaged, we found that eventually they became highly similar -a common problem known as sample impoverishment, which we plan to tackle in the future. Nevertheless, diversity among particles is present in the initial learning phase, when uncertainty is greatest, so the model still benefits from multiple hypotheses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Acknowledgments We would like to thank Charles Sutton and members of the ILCC at the School of Informatics for their valuable feedback. We acknowledge the support of EPSRC through project grant EP/I037415/1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Semeval-2007 task 02: Evaluating word sense induction and discrimination systems",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Soroa",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 4th International Workshop on Semantic Evaluations",
"volume": "",
"issue": "",
"pages": "7--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Agirre, Eneko and Aitor Soroa. 2007. Semeval- 2007 task 02: Evaluating word sense induc- tion and discrimination systems. In Proceedings of the 4th International Workshop on Semantic Evaluations. Prague, Czech Republic, pages 7- 12.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The adaptive nature of human categorization",
"authors": [
{
"first": "John",
"middle": [
"R"
],
"last": "Anderson",
"suffix": ""
}
],
"year": 1991,
"venue": "Psychological Review",
"volume": "98",
"issue": "",
"pages": "409--429",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anderson, John R. 1991. The adaptive nature of human categorization. Psychological Review 98:409-429.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Chinese Whispers -an efficient graph clustering algorithm and its application to natural language processing problems",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of TextGraphs: the 1st Workshop on Graph Based Methods for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "73--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Biemann, Chris. 2006. Chinese Whispers -an effi- cient graph clustering algorithm and its applica- tion to natural language processing problems. In Proceedings of TextGraphs: the 1st Workshop on Graph Based Methods for Natural Language Processing. New York City, pages 73-80.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Latent Dirichlet allocation",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Ng",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jordan",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Blei, David M., Andrew Y. Ng, and Michael I. Jor- dan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research 3:993-1022.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The nature and structure of infant form categories",
"authors": [
{
"first": "Paul",
"middle": [
"C"
],
"last": "Bomba",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Eimas",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Siqueland",
"suffix": ""
}
],
"year": 1983,
"venue": "Journal of Experimental Child Psychology",
"volume": "35",
"issue": "",
"pages": "294--328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bomba, Paul C. and Eimas R. Siqueland. 1983. The nature and structure of infant form cate- gories. Journal of Experimental Child Psychol- ogy 35:294-328.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A particle filter algorithm for Bayesian word segmentation",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Borschinger",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Australasian Language Technology Association Workshop",
"volume": "",
"issue": "",
"pages": "10--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Borschinger, Benjamin and Mark Johnson. 2011. A particle filter algorithm for Bayesian word segmentation. In Proceedings of the Aus- tralasian Language Technology Association Workshop. Canberra, Australia, pages 10-18.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Smoothing algorithms for state-space models",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Briers",
"suffix": ""
},
{
"first": "Arnaud",
"middle": [],
"last": "Doucet",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Maskell",
"suffix": ""
}
],
"year": 2010,
"venue": "Annals of the Institute of Statistical Mathematics",
"volume": "62",
"issue": "1",
"pages": "61--89",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Briers, Mark, Arnaud Doucet, and Simon Maskell. 2010. Smoothing algorithms for state-space models. Annals of the Institute of Statistical Mathematics 62(1):61-89.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Bayesian word sense induction",
"authors": [
{
"first": "Samuel",
"middle": [],
"last": "Brody",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 12th Conference of the European Chapter of the ACL",
"volume": "",
"issue": "",
"pages": "103--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brody, Samuel and Mirella Lapata. 2009. Bayesian word sense induction. In Proceedings of the 12th Conference of the European Chapter of the ACL. Athens, Greece, pages 103-111.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "An attractor model of lexical conceptual processing: Simulating semantic priming",
"authors": [
{
"first": "George",
"middle": [
"S"
],
"last": "Cree",
"suffix": ""
},
{
"first": "Ken",
"middle": [],
"last": "Mcrae",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Mcnorgan",
"suffix": ""
}
],
"year": 1999,
"venue": "Cognitive Science",
"volume": "23",
"issue": "3",
"pages": "371--414",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cree, George S., Ken McRae, and Chris McNor- gan. 1999. An attractor model of lexical con- ceptual processing: Simulating semantic prim- ing. Cognitive Science 23(3):371-414.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Comparison of resampling schemes for particle filtering",
"authors": [
{
"first": "Randal",
"middle": [],
"last": "Douc",
"suffix": ""
}
],
"year": 2005,
"venue": "4th International Symposium on Image and Signal Processing and Analysis",
"volume": "",
"issue": "",
"pages": "64--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douc, Randal. 2005. Comparison of resampling schemes for particle filtering. In 4th Interna- tional Symposium on Image and Signal Pro- cessing and Analysis. Zagreb, Croatia, pages 64-69.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Sequential Monte Carlo Methods in Practice",
"authors": [
{
"first": "Arnaud",
"middle": [],
"last": "Doucet",
"suffix": ""
},
{
"first": "Nando",
"middle": [],
"last": "De Freitas",
"suffix": ""
},
{
"first": "Neil",
"middle": [],
"last": "Gordon",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Doucet, Arnaud, Nando de Freitas, and Neil Gor- don. 2001. Sequential Monte Carlo Methods in Practice. Springer, New York.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Particle filters for mixture models with an unknown number of components",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Fearnhead",
"suffix": ""
}
],
"year": 2004,
"venue": "Statistics and Computing",
"volume": "14",
"issue": "1",
"pages": "11--21",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fearnhead, Paul. 2004. Particle filters for mix- ture models with an unknown number of com- ponents. Statistics and Computing 14(1):11-21.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Meaning representation in natural language categorization",
"authors": [
{
"first": "Trevor",
"middle": [],
"last": "Fountain",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 32nd Annual Conference of the Cognitive Science Society",
"volume": "",
"issue": "",
"pages": "1916--1921",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fountain, Trevor and Mirella Lapata. 2010. Mean- ing representation in natural language catego- rization. In Proceedings of the 32nd Annual Conference of the Cognitive Science Society. Portland, Oregon, pages 1916-1921.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Incremental models of natural language category acquisition",
"authors": [
{
"first": "Trevor",
"middle": [],
"last": "Fountain",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 33nd Annual Conference of the Cognitive Science Society",
"volume": "",
"issue": "",
"pages": "255--260",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fountain, Trevor and Mirella Lapata. 2011. In- cremental models of natural language category acquisition. In Proceedings of the 33nd An- nual Conference of the Cognitive Science Soci- ety. Boston, Massachusetts, pages 255-260.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Making and correcting errors during sentence comprehension: Eye movements in the analysis of structurally ambiguous sentences",
"authors": [
{
"first": "Lyn",
"middle": [],
"last": "Frazier",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Rayner",
"suffix": ""
}
],
"year": 1982,
"venue": "Cognitive Psychology",
"volume": "14",
"issue": "2",
"pages": "178--210",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frazier, Lyn and Keith Rayner. 1982. Making and correcting errors during sentence comprehen- sion: Eye movements in the analysis of struc- turally ambiguous sentences. Cognitive Psy- chology 14(2):178-210.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images",
"authors": [
{
"first": "Stuart",
"middle": [],
"last": "Geman",
"suffix": ""
},
{
"first": "Donald",
"middle": [],
"last": "Geman",
"suffix": ""
}
],
"year": 1984,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"volume": "6",
"issue": "6",
"pages": "721--741",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geman, Stuart and Donald Geman. 1984. Stochas- tic relaxation, Gibbs distributions and the Bayesian restoration of images. IEEE Trans- actions on Pattern Analysis and Machine Intel- ligence 6(6):721-741.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Unifying rational models of categorization via the hierarchical Dirichlet process",
"authors": [
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Kevin",
"suffix": ""
},
{
"first": "Adam",
"middle": [
"N"
],
"last": "Canini",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"J"
],
"last": "Sanborn",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Navarro",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 29th Annual Conference of the Cognitive Science Society",
"volume": "",
"issue": "",
"pages": "323--328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Griffiths, Thomas L., Kevin R. Canini, Adam N. Sanborn, and Daniel J. Navarro. 2007. Unifying rational models of categorization via the hierar- chical Dirichlet process. In Proceedings of the 29th Annual Conference of the Cognitive Sci- ence Society. Nashville, Tennessee, pages 323- 328.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Nonparametric Bayesian models of categorization",
"authors": [
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Adam",
"suffix": ""
},
{
"first": "Kevin",
"middle": [
"R"
],
"last": "Sanborn",
"suffix": ""
},
{
"first": "John",
"middle": [
"D"
],
"last": "Canini",
"suffix": ""
},
{
"first": "Joshua",
"middle": [
"B"
],
"last": "Navarro",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tenenbaum",
"suffix": ""
}
],
"year": 2011,
"venue": "Formal Approaches in Categorization",
"volume": "",
"issue": "",
"pages": "173--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Griffiths, Thomas L., Adam N. Sanborn, Kevin R. Canini, John D. Navarro, and Joshua B. Tenen- baum. 2011. Nonparametric Bayesian mod- els of categorization. In Emmanuel M. Pothos and Andy J. Wills, editors, Formal Approaches in Categorization, Cambridge University Press, pages 173-198.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A probabilistic account of exemplar and category generation",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Jern",
"suffix": ""
},
{
"first": "Charles",
"middle": [],
"last": "Kemp",
"suffix": ""
}
],
"year": 2013,
"venue": "Cognitive Psychology",
"volume": "66",
"issue": "",
"pages": "85--125",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jern, Alan and Charles Kemp. 2013. A proba- bilistic account of exemplar and category gen- eration. Cognitive Psychology 66:85-125.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "An integrated account of generalization across objects and features",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Kemp",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Shafto",
"suffix": ""
},
{
"first": "Joshua",
"middle": [
"B"
],
"last": "Tenenbaum",
"suffix": ""
}
],
"year": 2012,
"venue": "Cognitive Psychology",
"volume": "64",
"issue": "",
"pages": "35--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kemp, Charles, Patrick Shafto, and Joshua B. Tenenbaum. 2012. An integrated account of generalization across objects and features. Cog- nitive Psychology 64:35-75.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Unsupervised semantic role induction with graph partitioning",
"authors": [
{
"first": "Joel",
"middle": [],
"last": "Lang",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1320--1331",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lang, Joel and Mirella Lapata. 2011. Unsuper- vised semantic role induction with graph par- titioning. In Proceedings of the 2011 Con- ference on Empirical Methods in Natural Lan- guage Processing. Edinburgh, Scotland, UK., pages 1320-1331.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Modeling the effects of memory on human online sentence processing with particle filters",
"authors": [
{
"first": "Roger",
"middle": [
"P"
],
"last": "Levy",
"suffix": ""
},
{
"first": "Florencia",
"middle": [],
"last": "Reali",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
}
],
"year": 2009,
"venue": "Advances in Neural Information Processing Systems",
"volume": "21",
"issue": "",
"pages": "937--944",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Levy, Roger P., Florencia Reali, and Thomas L. Griffiths. 2009. Modeling the effects of mem- ory on human online sentence processing with particle filters. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21, pages 937-944.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Mallet: A machine learning for language toolkit",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kachites",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "McCallum, Andrew Kachites. 2002. Mal- let: A machine learning for language toolkit. http://mallet.cs.umass.edu.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Semantic feature production norms for a large set of living and nonliving things",
"authors": [
{
"first": "Ken",
"middle": [],
"last": "Mcrae",
"suffix": ""
},
{
"first": "George",
"middle": [
"S"
],
"last": "Cree",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"S"
],
"last": "Seidenberg",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Mcnorgan",
"suffix": ""
}
],
"year": 2005,
"venue": "Behavioral Research Methods",
"volume": "37",
"issue": "4",
"pages": "547--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "McRae, Ken, George S. Cree, Mark S. Seidenberg, and Chris McNorgan. 2005. Semantic feature production norms for a large set of living and nonliving things. Behavioral Research Methods 37(4):547-59.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Context theory of classification learning",
"authors": [
{
"first": "Douglas",
"middle": [
"L"
],
"last": "Medin",
"suffix": ""
},
{
"first": "Marguerite",
"middle": [
"M"
],
"last": "Schaffer",
"suffix": ""
}
],
"year": 1978,
"venue": "Psychological Review",
"volume": "85",
"issue": "3",
"pages": "207--238",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Medin, Douglas L. and Marguerite M. Schaffer. 1978. Context theory of classification learning. Psychological Review 85(3):207-238.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Exemplar-based accounts of relations between classification, recognition, and typicality",
"authors": [
{
"first": "Robert",
"middle": [
"M"
],
"last": "Nosofsky",
"suffix": ""
}
],
"year": 1988,
"venue": "Journal of Experimental Psychology: Learning, Memory, and Cognition",
"volume": "14",
"issue": "",
"pages": "700--708",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nosofsky, Robert M. 1988. Exemplar-based accounts of relations between classification, recognition, and typicality. Journal of Exper- imental Psychology: Learning, Memory, and Cognition 14:700-708.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "On the genesis of abstract ideas",
"authors": [
{
"first": "Michael",
"middle": [
"I"
],
"last": "Posner",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"W"
],
"last": "Keele",
"suffix": ""
}
],
"year": 1968,
"venue": "Journal of Experimental Psychology",
"volume": "21",
"issue": "",
"pages": "367--379",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Posner, Michael I. and Steven W. Keele. 1968. On the genesis of abstract ideas. Journal of Exper- imental Psychology 21:367-379.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Perceptual cues that permit categorical differentiation of animal species by infants",
"authors": [
{
"first": "Paul",
"middle": [
"C"
],
"last": "Quinn",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"D"
],
"last": "Eimas",
"suffix": ""
}
],
"year": 1996,
"venue": "Journal of Experimental Child Psychology",
"volume": "63",
"issue": "",
"pages": "189--211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quinn, Paul C. and Peter D. Eimas. 1996. Percep- tual cues that permit categorical differentiation of animal species by infants. Journal of Exper- imental Child Psychology 63:189-211.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Pattern recognition and categorization",
"authors": [
{
"first": "Stephen",
"middle": [
"K"
],
"last": "Reed",
"suffix": ""
}
],
"year": 1972,
"venue": "Cognitive psychology",
"volume": "3",
"issue": "3",
"pages": "382--407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reed, Stephen K. 1972. Pattern recognition and categorization. Cognitive psychology 3(3):382- 407.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Redundancy in perceptual and linguistic experience: Comparing feature-based and distributional models of semantic representation. Topics in",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Riordan",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"N"
],
"last": "Jones",
"suffix": ""
}
],
"year": 2011,
"venue": "Cognitive Science",
"volume": "3",
"issue": "2",
"pages": "303--345",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Riordan, Brian and Michael N. Jones. 2011. Re- dundancy in perceptual and linguistic experi- ence: Comparing feature-based and distribu- tional models of semantic representation. Top- ics in Cognitive Science 3(2):303-345.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "V-measure: A conditional entropy-based external cluster evaluation measure",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Rosenberg",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hirschberg",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "410--420",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rosenberg, Andrew and Julia Hirschberg. 2007. V-measure: A conditional entropy-based ex- ternal cluster evaluation measure. In Proceed- ings of the 2007 Joint Conference on Empiri- cal Methods in Natural Language Processing and Computational Natural Language Learn- ing. Prague, Czech Republic, pages 410-420.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A more rational model of categorization",
"authors": [
{
"first": "Adam",
"middle": [
"N"
],
"last": "Sanborn",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"J"
],
"last": "Navarro",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 28th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanborn, Adam N., Thomas L. Griffiths, and Daniel J. Navarro. 2006. A more rational model of categorization. In Proceedings of the 28th",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Annual Conference of the Cognitive Science Society",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "726--731",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Conference of the Cognitive Science So- ciety. Vancouver, Canada, pages 726-731.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Categories and Concepts",
"authors": [
{
"first": "Edward",
"middle": [
"E"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Douglas",
"middle": [
"L"
],
"last": "Medin",
"suffix": ""
}
],
"year": 1981,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Smith, Edward E. and Douglas L. Medin. 1981. Categories and Concepts. Harvard University Press, Cambridge, MA, USA.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "The origins of concept formation: Object sorting and object preference in early infancy",
"authors": [
{
"first": "David",
"middle": [],
"last": "Starkey",
"suffix": ""
}
],
"year": 1981,
"venue": "",
"volume": "",
"issue": "",
"pages": "489--497",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Starkey, David. 1981. The origins of concept for- mation: Object sorting and object preference in early infancy. Child Development pages 489- 497.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Prototype and exemplar-based information in natural language categories",
"authors": [
{
"first": "Gert",
"middle": [],
"last": "Storms",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"De"
],
"last": "Boeck",
"suffix": ""
},
{
"first": "Wim",
"middle": [],
"last": "Ruts",
"suffix": ""
}
],
"year": 2000,
"venue": "Journal of Memory and Language",
"volume": "42",
"issue": "",
"pages": "51--73",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Storms, Gert, Paul De Boeck, and Wim Ruts. 2000. Prototype and exemplar-based informa- tion in natural language categories. Journal of Memory and Language 42:51-73.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Semantic feature production norms for a large set of objects and events",
"authors": [
{
"first": "David",
"middle": [],
"last": "Vinson",
"suffix": ""
},
{
"first": "Gabriella",
"middle": [],
"last": "Vigliocco",
"suffix": ""
}
],
"year": 2008,
"venue": "Behavior Research Methods",
"volume": "40",
"issue": "1",
"pages": "183--190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vinson, David and Gabriella Vigliocco. 2008. Se- mantic feature production norms for a large set of objects and events. Behavior Research Meth- ods 40(1):183-190.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Exemplars and prototypes in natural language concepts: A typicality-based evaluation",
"authors": [
{
"first": "Wouter",
"middle": [],
"last": "Voorspoels",
"suffix": ""
},
{
"first": "Wolf",
"middle": [],
"last": "Vanpaemel",
"suffix": ""
},
{
"first": "Gert",
"middle": [],
"last": "Storms",
"suffix": ""
}
],
"year": 2008,
"venue": "Psychonomic Bulletin & Review",
"volume": "15",
"issue": "3",
"pages": "630--637",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Voorspoels, Wouter, Wolf Vanpaemel, and Gert Storms. 2008. Exemplars and prototypes in natural language concepts: A typicality-based evaluation. Psychonomic Bulletin & Review 15(3):630-637.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Nonparametric Bayesian word sense induction",
"authors": [
{
"first": "Xuchen",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of TextGraphs-6: Graphbased Methods for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "10--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yao, Xuchen and Benjamin Van Durme. 2011. Nonparametric Bayesian word sense induc- tion. In Proceedings of TextGraphs-6: Graph- based Methods for Natural Language Process- ing. Portland, Oregon, pages 10-14.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Finding the features that represent stimuli",
"authors": [
{
"first": "Matthew",
"middle": [
"D"
],
"last": "Zeigenfuse",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"D"
],
"last": "Lee",
"suffix": ""
}
],
"year": 2010,
"venue": "Acta Psychologica",
"volume": "133",
"issue": "3",
"pages": "283--295",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeigenfuse, Matthew D. and Michael D. Lee. 2010. Finding the features that represent stim- uli. Acta Psychologica 133(3):283-295.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Plate diagram representation of the BayesCat model.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "The generative process of the BayesCat model.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Learning curves for BC-Inc and CW based on PC-F1 (left), and Cluster-F1 (right). The type of clusters being evaluated is shown within parentheses. Results are reported on the development set.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF4": {
"text": "Number of clusters over time (top). Cluster size variance for CW (middle) and BC-Inc (bottom). Results shown on the development set.",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF1": {
"text": "Examples of categories induced by the incremental BayesCat model (upper row), together with their most likely context words (lower row).",
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null
}
}
}
}