{
"paper_id": "K15-1023",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:09:04.865615Z"
},
"title": "Quantity, Contrast, and Convention in Cross-Situated Language Comprehension",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Perera",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "James",
"middle": [
"F"
],
"last": "Allen",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Typically, visually-grounded language learning systems only accept feature data about objects in the environment that are explicitly mentioned, whether through annotation labels or direct reference through natural language. We show that when objects are described ambiguously using natural language, a system can use a combination of the pragmatic principles of Contrast and Conventionality, and multiple-instance learning to learn from ambiguous examples in an online fashion. Applying child language learning strategies to visual learning enables more effective learning in real-time environments, which can lead to enhanced teaching interactions with robots or grounded systems in multi-object environments.",
"pdf_parse": {
"paper_id": "K15-1023",
"_pdf_hash": "",
"abstract": [
{
"text": "Typically, visually-grounded language learning systems only accept feature data about objects in the environment that are explicitly mentioned, whether through annotation labels or direct reference through natural language. We show that when objects are described ambiguously using natural language, a system can use a combination of the pragmatic principles of Contrast and Conventionality, and multiple-instance learning to learn from ambiguous examples in an online fashion. Applying child language learning strategies to visual learning enables more effective learning in real-time environments, which can lead to enhanced teaching interactions with robots or grounded systems in multi-object environments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "As opposed to the serial nature of labeled data presented to a machine learning classifier, children and robots \"in the wild\" must learn object names and attributes like color, size, and shape while being surrounded by a number of stimuli and possible referents. When a child hears \"the red ball\", they must first identify the object mentioned, then use existing knowledge to identify that \"red\" and \"ball\" are distinct concepts, and over time, learn that objects called \"red\" share some similarity in color while objects called \"ball\" share some similarity in shape. Learning for them therefore requires both identification and establishing joint attention with the speaker before assigning a label to an object, while also applying other language learning strategies to narrow down the search space of possible referents, as illustrated by Quine's \"gavagai\" problem (1964) .",
"cite_spans": [
{
"start": 842,
"end": 874,
"text": "Quine's \"gavagai\" problem (1964)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Trying to learn attributes and objects without non-linguistic cues such as pointing and gaze might seem an insurmountable challenge. Yet a child experiences many such situations and can nevertheless learn grounded concepts over time. Fortunately, adult speakers tend to understand the limitation of these cues in certain situations and adjust their speech in accordance to Grice's Maxim of Quantity when referring to objects : be only as informative as necessary (Grice, 1975) . We therefore treat the language describing a particular object in a scene as an expression of an iterative process, where the speaker is attempting to guide the listener towards the referent in a way that avoids both ambiguity and unnecessary verbosity.",
"cite_spans": [
{
"start": 463,
"end": 476,
"text": "(Grice, 1975)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Language learners additionally make use of the pragmatic assumptions of Conventionality, that speakers agree upon the meaning of a word, and Contrast, that different words have different meanings (Clark, 2009) . The extension of these principles to grounded language learning yields the assumptions that the referents picked out by a referring expression will have some similarity (perceptual in our domain), and will be dissimilar compared to objects not included in the reference. Children will eventually generalize learned concepts or accept synonyms in a way that violates these principles (Baldwin, 1992) , but these assumptions aid in the initial acquisition of concepts. In our system, we manifest these principles using distance metrics and thereby allow significant flexibility in the implementation of object and attribute representations while allowing a classifier to aid in reference resolution.",
"cite_spans": [
{
"start": 196,
"end": 209,
"text": "(Clark, 2009)",
"ref_id": "BIBREF3"
},
{
"start": 595,
"end": 610,
"text": "(Baldwin, 1992)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "When faced with unresolvable ambiguity in determining the correct referent, past, ambiguous experiences can be called upon to resolve ambiguity in the current situation in a strategy called Cross-Situational Learning (XSL). There is some debate over whether people use XSL, as it requires considerable memory and computational load (Trueswell et al., 2013) . However, other experiments show evidence for XSL in adults and children in certain situations (Smith and Yu, 2008; Smith et al., 2011) . We believe these instances that show evidence of XSL certainly merit an implementation both for better understanding language learning and for advancing grounded language learning in the realm of robotics where such limitations do not exist. We show that by reasoning over multiple ambiguous learning instances and constraining possibilities with pragmatic inferences, a system can quickly learn attributes and names of objects without a single unambiguous training example.",
"cite_spans": [
{
"start": 332,
"end": 356,
"text": "(Trueswell et al., 2013)",
"ref_id": "BIBREF29"
},
{
"start": 453,
"end": 473,
"text": "(Smith and Yu, 2008;",
"ref_id": "BIBREF27"
},
{
"start": 474,
"end": 493,
"text": "Smith et al., 2011)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our overarching research goal is to learn compositional models of grounded attributes towards describing an object in a scene, rather than just identifying it. That is, we do not only learn to recognize instances of objects, but also learn attributes constrained to feature spaces that will be compatible with contextual modifiers such as dark/light in terms of color, or small/large in terms of size and object classification. Therefore, we approach the static, visual aspects of the symbol grounding problem with an eye towards ensuring that our grounded representations of attributes can be composed in the same way that their semantic analogues can. We continue our previous work (Perera and Allen, 2013) with two evaluations to demonstrate the effectiveness of applying the principles of Quantity, Contrast, and Conventionality, as well as incorporating quantifier constraints, negative information, and classification in the training step. Our first evaluation is reference resolution to determine how well the system identifies the correct objects to attend to, and our second is description generation to determine how well the system uses those training examples to understand attributes and object classes.",
"cite_spans": [
{
"start": 684,
"end": 708,
"text": "(Perera and Allen, 2013)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our algorithm for reference resolution and XSL fits into our previous work on a situated language learning system for grounding linguistic symbols in perception. The integration of language in a multi-modal task is a burgeoning area of research, with the grounded data being any of a range of possible situations, from objects on a table (Matuszek et al., 2012) to wetlab experiments (Naim et al., 2014) . Our end goal of using natural language to learn from visual scenes is similar to work by and Yu and Siskind (2013) , and our emphasis on attributes is related to work by Farhadi et al. (2009) . However, our focus is on learning from situations that a child would be exposed to, without using annotated data, and to test implementations of child language learning strategies in a computational system.",
"cite_spans": [
{
"start": 338,
"end": 361,
"text": "(Matuszek et al., 2012)",
"ref_id": "BIBREF19"
},
{
"start": 384,
"end": 403,
"text": "(Naim et al., 2014)",
"ref_id": "BIBREF21"
},
{
"start": 499,
"end": 520,
"text": "Yu and Siskind (2013)",
"ref_id": "BIBREF31"
},
{
"start": 576,
"end": 597,
"text": "Farhadi et al. (2009)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We use a tutor-directed approach to training our system where the speaker presents objects to the system and describes them, as in work by Skocaj et al. (2011) . The focus of this work is in evaluating referring expressions as in work by Mohan et al. (2013) , although without any dialogue for disambiguation. also incorporate quantifier and pragmatic constraints on reference resolution in a setting similar to ours. In this work, we undertake a more detailed analysis of the effects of different pragmatic constraints on system performance.",
"cite_spans": [
{
"start": 139,
"end": 159,
"text": "Skocaj et al. (2011)",
"ref_id": "BIBREF26"
},
{
"start": 238,
"end": 257,
"text": "Mohan et al. (2013)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The task of training a classifier from \"bags\" of instances with a label applying to only some of the instances contained within is referred to as Multiple-Instance Learning (MIL) (Dietterich, 1997) , and is the machine-learning analogue of cross-situational learning. There is a wide range of methods used in MIL and a number of different assumptions that can be made to fit the task at hand (Foulds and Frank, 2010) . Online MIL methods so far have been used for object tracking (Li et al., 2010) , and Dindo and Zambuto (2010) apply MIL to grounded language learning, but we are not aware of any research that investigates the application of online MIL to studying cognitive models of incremental grounded language learning. In addition, we find that we must relax many assumptions used in MIL to handle natural language references, such as the 1-of-N assumption used by Dindo and Zambuto. The lack of appropriate algorithms for handling this task motivates our development of a novel algorithm for language learning situations. and describes one or more of the objects directly while possibly mentioning some relation to the surrounding objects. The goal of this setup is to facilitate object descriptions that more closely approximate child-directed speech, compared to the language in captioned images. Audio is recorded and transcribed by hand with timestamps at the utterance level, but there are no other annotations beyond timestamps. We use these intervals to match the spoken descriptions to the video data, which is recorded using the Microsoft Kinect to obtain RGB + Depth information.",
"cite_spans": [
{
"start": 179,
"end": 197,
"text": "(Dietterich, 1997)",
"ref_id": "BIBREF4"
},
{
"start": 392,
"end": 416,
"text": "(Foulds and Frank, 2010)",
"ref_id": "BIBREF8"
},
{
"start": 480,
"end": 497,
"text": "(Li et al., 2010)",
"ref_id": "BIBREF16"
},
{
"start": 873,
"end": 882,
"text": "Dindo and",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "All training instances involved multiple objects, with an average of 2.8 objects per demonstration. The subject could select any set of objects to describe (often with respect to the other objects). References to objects varied in detail, from \"the cube\" to \"a tall yellow rectangle\". Since a set of objects might have different shapes, the most common descriptor was \"block\". The majority (80%) of the quantifiers were definite or numeric, and 85% of the demonstrations referred to a single object. Test instances consisted solely of single objects presented one at a time. 20% of the objects used as test instances appeared in training because of the limited set of objects available, yet the objects were placed in slightly different orientations and at different locations, deforming the shape contour due to perspective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and Test Instances",
"sec_num": "3.2"
},
{
"text": "We encode some existing linguistic and perceptual knowledge into the system to aid in learning from unconstrained object descriptions. The representative feature, defined as the system's feature space assigned to a property name (e.g., color",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prior System Knowledge",
"sec_num": "3.3"
},
{
"text": "for \"white\", or shape for \"round\"), was prechosen for the task's vocabulary to reduce the number of factors affecting the evaluation of the system. In previous work, we showed that the accuracy of the system's automatic choice of representative features can reach 78% after about 50 demonstrations of objects presented one at a time (Perera and Allen, 2013) . In addition, we developed an extension to a semantic parser that distinguishes between attributes and object names using syntactic constructions.",
"cite_spans": [
{
"start": 333,
"end": 357,
"text": "(Perera and Allen, 2013)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Prior System Knowledge",
"sec_num": "3.3"
},
{
"text": "The transcribed utterances are passed through the TRIPS parser (Allen et al., 2008) for simultaneous lexicon learning and recognition of object descriptions. The parser outputs generalized quantifiers and numeric constraints (capturing singular/plural instances, as well as specific numbers) in referring expressions, which are used for applying quantifier constraints to the possibilities of the referent object or group of objects. The parser's ability to distinguish between attributes and objects through syntax greatly increases learning performance, as demonstrated in our previous work (Perera and Allen, 2013). We extract the speech act (for detecting when an utterance is demonstrating a new object or adding additional information to a known object) and the referring expression from the TRIPS semantic output. Although there may be many objects or groups of objects mentioned, we only store the properties of the reference that is the subject of the sentence. For example, in, \"Some blue cars are next to the yellow ones\", we will extract that there exists at least two blue cars. Because it is an indefinite reference, we cannot draw any further inference about whether the reference set includes all examples of blue cars.",
"cite_spans": [
{
"start": 63,
"end": 83,
"text": "(Allen et al., 2008)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language Processing",
"sec_num": "3.4"
},
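A minimal sketch, in Python, of how the extracted quantifier and numeric constraints might be represented and checked against a hypothesized referent set; the class and field names are ours for illustration and do not reflect the TRIPS output format.

```python
# Hypothetical representation of a parsed referring expression; the field
# names are ours, not the TRIPS semantic output.
from dataclasses import dataclass
from typing import Optional, Set

@dataclass
class ReferringExpression:
    descriptors: list                   # e.g. ["blue", "car"]
    definite: bool                      # "the" vs. "a"/"some"
    min_count: int = 1                  # plural implies at least two referents
    exact_count: Optional[int] = None   # e.g. "two blue cars"

    def admits(self, referents: Set[str]) -> bool:
        """True if a hypothesized referent set satisfies the quantifier
        and number constraints of this expression."""
        if self.exact_count is not None:
            return len(referents) == self.exact_count
        return len(referents) >= self.min_count

# "Some blue cars are next to the yellow ones": indefinite plural, so we
# only infer that at least two blue cars exist, with no exhaustiveness claim.
expr = ReferringExpression(["blue", "car"], definite=False, min_count=2)
print(expr.admits({"o1", "o2"}))   # True
print(expr.admits({"o1"}))         # False
```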
{
"text": "To extract features, we first perform object segmentation using Kinect depth information, which provides a pixel-level contour around each of the objects in the scene. Then for each object, we record its dimensions and location, extract visual features corresponding to color, shape, size, color variance, and texture. No sophisticated tracking algorithm is needed as the objects are stationary on the table. Color is represented in LAB space for perceptual similarity to humans using Euclidean distance, shape is captured using scale-and rotation-invariant 25-dimensional Zernike moments (Khotanzad and Hong, 1990) , and texture is captured using 13-dimensional Haralick features (Haralick et al., 1973) .",
"cite_spans": [
{
"start": 589,
"end": 615,
"text": "(Khotanzad and Hong, 1990)",
"ref_id": "BIBREF11"
},
{
"start": 681,
"end": 704,
"text": "(Haralick et al., 1973)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "3.5"
},
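For a rough illustration, comparable per-object features can be computed with off-the-shelf libraries. This is a sketch assuming a segmented RGB crop and a boolean object mask from the depth segmentation, not the authors' exact pipeline.

```python
# Sketch of per-object feature extraction with scikit-image and mahotas;
# the concrete choices (degree-8 Zernike moments for a 25-D descriptor,
# 13 averaged Haralick features) mirror the dimensions quoted above.
import numpy as np
import mahotas
from skimage import color

def extract_features(rgb_patch, mask):
    # Mean color in LAB space, where Euclidean distance roughly tracks
    # perceptual color similarity.
    lab = color.rgb2lab(rgb_patch)
    mean_lab = lab[mask].mean(axis=0)                        # 3-D color

    # Scale- and rotation-invariant Zernike moments of the object mask;
    # degree 8 yields a 25-dimensional shape descriptor.
    radius = max(mask.shape) // 2
    shape = mahotas.features.zernike_moments(mask.astype(np.uint8),
                                             radius, degree=8)

    # 13 Haralick texture features, averaged over the four directions.
    gray = (color.rgb2gray(rgb_patch) * 255).astype(np.uint8)
    texture = mahotas.features.haralick(gray).mean(axis=0)   # 13-D texture

    size = float(mask.sum())                                 # pixel-area size proxy
    return {"color": mean_lab, "shape": shape, "texture": texture, "size": size}
```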
{
"text": "To determine the similarity of new properties and objects to the system's previous knowledge of such descriptors, we use a k-Nearest Neighbor classifier (k-NN) with Mahalanobis distance metric (Mahalanobis, 1936) , distance weighting, and class weighting using the method described in Brown and Koplowitz (1979) .",
"cite_spans": [
{
"start": 193,
"end": 212,
"text": "(Mahalanobis, 1936)",
"ref_id": "BIBREF17"
},
{
"start": 285,
"end": 311,
"text": "Brown and Koplowitz (1979)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification and Distance Measures",
"sec_num": "3.6"
},
{
"text": "Our k-NN implementation allows negative examples so as to incorporate information that we infer about unmentioned objects. We do not train the system with any explicit negative information (i.e., we have no training examples described as \"This is not a red block.\", but if the system is confident that an object is not red, it can mark a training example as such). A negative example contributes a weight to the voting equal and opposite to what its weight would have been if it were a positive example of that class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification and Distance Measures",
"sec_num": "3.6"
},
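A simplified sketch of this voting scheme: distance-weighted k-NN in which a negative example casts an equal and opposite vote. Plain Euclidean distance stands in for the Mahalanobis metric, and the class weighting of Brown and Koplowitz (1979) is omitted for brevity.

```python
import numpy as np

def knn_vote(query, examples, k=5, eps=1e-9):
    """examples: (feature_vector, label, is_positive) triples.
    Returns the label with the highest net vote, or None if no label
    receives positive support."""
    nearest = sorted(
        (np.linalg.norm(query - x), label, pos) for x, label, pos in examples
    )[:k]
    votes = {}
    for dist, label, pos in nearest:
        weight = 1.0 / (dist + eps)            # distance weighting
        # Negative examples contribute an equal and opposite weight.
        votes[label] = votes.get(label, 0.0) + (weight if pos else -weight)
    best = max(votes, key=votes.get)
    return best if votes[best] > 0 else None
```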
{
"text": "The Mahalanobis distance provides a way to incorporate a k-nearest neighbor classifier into a probabilistic framework. Because the squared Mahalanobis distance is equal to the number of standard deviations from the mean of the data assuming a normal distribution (Rencher, 2003) , we can convert the Mahalanobis distance to a probability measure to be used in probabilistic reasoning.",
"cite_spans": [
{
"start": 263,
"end": 278,
"text": "(Rencher, 2003)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification and Distance Measures",
"sec_num": "3.6"
},
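One concrete way to perform this conversion, assuming roughly Gaussian class data: the squared Mahalanobis distance of a d-dimensional Gaussian sample is chi-square distributed with d degrees of freedom, so the survival function gives the probability of a point lying at least this far from the class mean. A sketch:

```python
import numpy as np
from scipy.stats import chi2
from scipy.spatial.distance import mahalanobis

def membership_probability(x, class_mean, class_cov):
    """P(observing a point at least this far from the class mean),
    under a Gaussian model of the class data."""
    inv_cov = np.linalg.inv(class_cov)
    d2 = mahalanobis(x, class_mean, inv_cov) ** 2   # squared Mahalanobis distance
    return chi2.sf(d2, df=len(x))                   # chi-square survival function

# A point two standard deviations out in one dimension:
print(membership_probability(np.array([2.0]), np.array([0.0]), np.eye(1)))  # ~0.046
```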
{
"text": "To learn from underspecified training examples, we must resolve the referring expression and assign the properties and object name in the expression to the correct referents. To incorporate existing perceptual knowledge, semi-supervised meth-ods, and pragmatic constraints in the reference resolution task, we use a probabilistic lattice structure that we call the reference lattice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Reference Lattice",
"sec_num": "4"
},
{
"text": "The reference lattice consists of nodes corresponding to possible partitions of the scene for each descriptor (either property or object name). There is one column of nodes for each descriptor, with the object name as the final column. Edges signify the set-intersection of the connected nodes along a path.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Reference Lattice",
"sec_num": "4"
},
{
"text": "Paths through the lattice correspond to a successive application of these set-intersections, ultimately resulting in a set of objects corresponding to the hypothesized referent group. In this way, paths represent a series of steps in referring expression generation where the speaker provides salient attributes sequentially to eventually make the referent set clear. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Reference Lattice",
"sec_num": "4"
},
{
"text": "For each descriptor, we generate a node for every possible partition of the scene into positive and negative examples of that descriptor. For example, if the descriptor is \"red\", each node is a hypothesized split that attempts to put red objects in the positive set and non-red objects in the negative set. For each column there are 2 n \u2212 1 nodes, where n is the number of objects in the scene (the empty set is not included, as it would lead to an empty reference set). We then generate lattice edges between every pair of partitions in adjacent columns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lattice Generation",
"sec_num": "4.1"
},
{
"text": "We can discard a large proportion of these edges, as many will correspond to the intersection of disjoint partitions and will therefore be empty. Finally, we generate all possible paths through the lattice, and, if using quantifier constraints, discard any paths with a final output referent set that does not agree with the number constraints on the mentioned referent group.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lattice Generation",
"sec_num": "4.1"
},
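A compact sketch of this generation step under the assumptions above: every non-empty subset of the scene is a candidate positive set per descriptor column, edges are set-intersections, and paths that become empty or violate the quantifier constraint are discarded. The function names are ours.

```python
from itertools import combinations, product

def nonempty_subsets(objects):
    objs = list(objects)
    return [frozenset(c) for r in range(1, len(objs) + 1)
            for c in combinations(objs, r)]          # 2^n - 1 partitions per column

def candidate_paths(objects, n_descriptors, quantifier_ok):
    columns = [nonempty_subsets(objects)] * n_descriptors
    surviving = []
    for path in product(*columns):
        referents = frozenset(objects)
        for positives in path:
            referents &= positives                   # edge = set-intersection
            if not referents:                        # disjoint partitions:
                break                                # discard the path
        else:
            if quantifier_ok(referents):             # e.g. definite singular
                surviving.append((path, referents))
    return surviving

# "the red box" over three objects: definite singular => exactly one referent.
paths = candidate_paths({"o1", "o2", "o3"}, n_descriptors=2,
                        quantifier_ok=lambda s: len(s) == 1)
print(len(paths))
```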
{
"text": "The structure of the lattice is shown in Figure 3 . In this figure, partitions are represented by split boxes in the first two columns, with positive examples in solid lines and negative examples in dotted lines. Not shown are edges connecting each partition in one column with each partition in the next and the paths they create. The intersection of the partitions in path (a) lead to a null set, and the path is removed from the lattice. Path (b) is the ground truth path, as the individual partitions accurately describe the composition of the attributes. Path (c) contains an overspecified edge and achieves the correct referent set albeit using incorrect assumptions about the attributes. The result sets from both (b) and (c) agree with the quantifier constraint (definite singular).",
"cite_spans": [],
"ref_spans": [
{
"start": 41,
"end": 50,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Lattice Generation",
"sec_num": "4.1"
},
{
"text": "We consider two probabilities for determining the probability of a partition: that which can be determined from distance data (considering distances between objects in the partition), and that which requires previous labeled data to hypothesize a class using the classifier (considering distances from each object to the mean of the data labeled with the descriptor).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Node Probabilities",
"sec_num": "4.2"
},
{
"text": "The distance probability, our implementation of the principle of Contrast, is a prior that enforces minimum intraclass distance for the positive examples and maximum interclass distance across the partition. The motivation and implementation shares some similarities with the Diverse Density framework for multiple instance learning (Maron and Lozano-P\u00e9rez, 1998) , although here it also acts as an unsupervised clustering for determining the best reference set. It is the product of the minimum probability that any two objects in the positive examples are in the same set multiplied by the complement of the maximum probability that any two objects across the partition are in the same class. Therefore, for partition N with positive examples + and negative examples \u2212:",
"cite_spans": [
{
"start": 333,
"end": 363,
"text": "(Maron and Lozano-P\u00e9rez, 1998)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Node Probabilities",
"sec_num": "4.2"
},
{
"text": "P intra = min x,y \u2208 + P (x c = y c ) P inter = max x\u2208+,y\u2208\u2212 P (x c = y c ) if |\u2212| > 0 1 if |\u2212| = 0 P distance = P intra \u00d7 (1 \u2212 P inter )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Node Probabilities",
"sec_num": "4.2"
},
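A direct transcription of this distance probability; `same_class_prob(x, y)` is assumed to map the distance between two objects to a probability that they share a class (e.g. via the chi-square conversion sketched earlier), and the singleton guard is our addition. Note that, as written above, a partition with no negative examples receives P_inter = 1.

```python
from itertools import combinations

def distance_probability(positives, negatives, same_class_prob):
    # P_intra: minimum probability that two positive examples share a class.
    if len(positives) < 2:
        p_intra = 1.0   # our guard: a lone positive example is trivially coherent
    else:
        p_intra = min(same_class_prob(x, y)
                      for x, y in combinations(positives, 2))
    # P_inter: maximum probability that a positive/negative pair share a class.
    p_inter = (max(same_class_prob(x, y) for x in positives for y in negatives)
               if negatives else 1.0)   # the |-| = 0 branch as given in the text
    return p_intra * (1.0 - p_inter)
```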
{
"text": "The classifier probability is similar, except rather than comparing objects to other objects in the partition, the objects are compared to the mean of the column's descriptor C in the descriptor's representative feature. If the descriptor is a class name, we instead choose the Zernike shape feature, implementing the shape bias children show in word learning (Landau et al., 1998) .",
"cite_spans": [
{
"start": 360,
"end": 381,
"text": "(Landau et al., 1998)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Node Probabilities",
"sec_num": "4.2"
},
{
"text": "If there is insufficient labeled data to use, then the classifier probability is set to 1 for the entire column, meaning only the distance probabilities will affect the probabilities of the nodes. For a given descriptor C, the classifier probabilities are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Node Probabilities",
"sec_num": "4.2"
},
{
"text": "P pos (C) = min x\u2208+ P (x c = C) P neg (C) = max x\u2208\u2212 P (x c = C) if |\u2212| > 0 1 if |\u2212| = 0 P classif ier (C) = P pos (C) \u00d7 (1 \u2212 P neg (C))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Node Probabilities",
"sec_num": "4.2"
},
{
"text": "The final probability of a partition is the product of the distance probability and the classifier probability, and the node probabilities are normalized for each column.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Node Probabilities",
"sec_num": "4.2"
},
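A companion sketch for the classifier probability and the final normalized node probability, reusing `distance_probability` from the sketch above; `class_prob(x, C)` is assumed to return P(x_c = C) from the k-NN model, and to be 1.0 everywhere when labeled data is insufficient.

```python
def classifier_probability(positives, negatives, descriptor, class_prob):
    p_pos = min(class_prob(x, descriptor) for x in positives)
    p_neg = (max(class_prob(x, descriptor) for x in negatives)
             if negatives else 1.0)   # the |-| = 0 branch as given in the text
    return p_pos * (1.0 - p_neg)

def column_probabilities(partitions, descriptor, same_class_prob, class_prob):
    """partitions: (positives, negatives) pairs for one descriptor column.
    Each node's probability is the product of its distance and classifier
    probabilities, normalized over the column."""
    raw = [distance_probability(pos, neg, same_class_prob)
           * classifier_probability(pos, neg, descriptor, class_prob)
           for pos, neg in partitions]
    total = sum(raw)
    return [p / total if total else 0.0 for p in raw]
```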
{
"text": "Edges have a constant transition probability equal to the overspecification probability if overspecified, or equal to the complement otherwise. We use these probabilities to incorporate the phenomenon of overspecification in our model, where, contrary to a strict interpretation of Grice's Maxim of Quantity, speakers will give more information than is needed to identify a referent (Koolen et al., 2011 ). An edge is considered overspecified if the hypothesis for the objects that satisfy the next descriptor does not add additional information, i.e., the set-intersection it corresponds to does not remove any possible objects from the referent set. Thus the model will prefer hypotheses for the next descriptor that narrow down the hypothesized set of referents.",
"cite_spans": [
{
"start": 383,
"end": 403,
"text": "(Koolen et al., 2011",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Overspecification and Edge Probabilities",
"sec_num": "4.3"
},
{
"text": "The probability of each path is the product of probabilities of each of the partitions along its path and the edge (overspecification) probabilities. If there is a single path with a probability greater than all others by an amount , the labels of the partitions along that path are assigned to the positive examples while also being assigned as negative properties for the negative examples. We perform this updating step after each utterance to simulate incremental continuous language learning and to provide the most current knowledge available for resolving new ambiguous data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Path Probabilities",
"sec_num": "4.4"
},
{
"text": "If there are multiple best paths within of the highest probability path, then the learning example is considered ambiguous and saved in memory to resolve with information from future examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Path Probabilities",
"sec_num": "4.4"
},
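Putting the pieces together, a sketch of path scoring with overspecification edges and the ε tie test; `node_prob` and `p_over` stand in for the node probabilities and overspecification probability defined above, and the tolerance default is arbitrary.

```python
def path_probability(path, node_prob, p_over, scene):
    """path: one (positives, negatives) partition per descriptor column."""
    prob, referents = 1.0, frozenset(scene)
    for positives, negatives in path:
        prob *= node_prob(positives, negatives)
        narrowed = referents & frozenset(positives)
        # An edge is overspecified when its intersection removes nothing.
        prob *= p_over if narrowed == referents else (1.0 - p_over)
        referents = narrowed
    return prob, referents

def resolve(paths, node_prob, p_over, scene, eps=1e-3):
    scored = [(path_probability(p, node_prob, p_over, scene)[0], p)
              for p in paths]
    best = max(score for score, _ in scored)
    winners = [p for score, p in scored if best - score <= eps]
    # One winner: assign its labels now. Several: save the lattice and
    # defer to multiple-instance learning (Section 4.5).
    return winners
```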
{
"text": "In many cases, especially in the system's first learning instances, there is not enough information to unambiguously learn from the demonstration. Without any unambiguous examples, our system would struggle to learn no matter how much data was available to it. An ambiguous training example yields more than one highest probability path. Our goal is to use new information from each new training demonstration to reevaluate these paths and determine a singular best path, which allows us to update our knowledge accordingly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multiple-Instance Learning",
"sec_num": "4.5"
},
{
"text": "To do this, we independently consider columns for each unknown descriptors from unresolved demonstrations containing that descriptor and combine them to form super-partitions which are then evaluated using our distance probability function. For example, consider two instances described with \"the red box\". The first has a red and a blue box, while the second has a red and a green box. Individually they are ambiguous to a system that does not know what \"red\" means and therefore each demonstration would have two paths with equal probability. If we combine the partitions across the two demonstrations into four super-partitions, the highest probability will be generated when the two red boxes are in the positive set. This probability is stored in each of the constituent partitions as a meta-probability, which is otherwise 1 when multiple-instance learning is not required to resolve ambiguity. The metaprobability allows us to find the most probable path given previous instances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multiple-Instance Learning",
"sec_num": "4.5"
},
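A sketch of this super-partition step on the "the red box" example: the tied hypotheses of each unresolved demonstration are combined, and each combination is scored with the same distance probability used within a single demonstration (the `distance_probability` sketch from Section 4.2 above).

```python
from itertools import product

def meta_probabilities(demo_hypotheses, same_class_prob):
    """demo_hypotheses: for each unresolved demonstration, the list of
    tied (positives, negatives) partitions for the unknown descriptor."""
    scored = []
    for combo in product(*demo_hypotheses):
        positives = [x for pos, _ in combo for x in pos]   # super-partition
        negatives = [x for _, neg in combo for x in neg]
        p = distance_probability(positives, negatives, same_class_prob)
        scored.append((p, combo))
    # With two red-box demonstrations, the top-scoring combination pools
    # the two red boxes; its score is stored on the constituent partitions
    # as their meta-probability.
    return sorted(scored, key=lambda s: s[0], reverse=True)
```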
{
"text": "To train the system on a video, we transcribe the video with sentence-level timestamps, and extract features from the demonstration video. The system takes as input the feature data aligned with utterances from the demonstration video. It then finds the most likely path through the reference lattice and adds all hypothesized positive examples for the descriptor as class examples for the classifier. If there is more than one likely path, it saves the lattice for later resolution using multipleinstance learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "5.1"
},
{
"text": "During testing, the system generates a description for an object in the test set by finding examples of properties and objects similar to it in previously seen objects. For properties, the system checks each feature space separately to find previous examples of objects similar in that feature space and adds each found property label to the k-NN voting set, weighted by the distance. If the majority label does not have the matching representative feature, the system skips this feature space for adding a property to the description. The object name is chosen using a distance generated from the sum of the distances (normalized and weighted through the Mahalanobis distance metric) to the most similar previous examples. More details about the description generation process can be found in our previous paper (Perera and Allen, 2013) .",
"cite_spans": [
{
"start": 813,
"end": 837,
"text": "(Perera and Allen, 2013)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Description Generation",
"sec_num": "5.2"
},
{
"text": "To evaluate our system, we use two metrics: our evaluation method used in previous work for rating the quality of generated descriptions (Perera and Allen, 2013) , and a standard precision/recall measurement to determine the accuracy of reference resolution.",
"cite_spans": [
{
"start": 137,
"end": 161,
"text": "(Perera and Allen, 2013)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": "The description generated by the system is compared with a number of possible ground truth descriptions which are generated using precision and recall equivalence classes from our previous work. Precision is calculated according to which words in the description could be found in a ground truth description, while recall is calculated according to which words in the closest ground truth description were captured by the system's description. As an example, a system output of \"red rectangle\" when the ground truth description is \"red square\" or \"red cube\" would have a precision score of 1 (because both \"red\" and \"rectangle\" are accurate descriptors of the object) but a recall of .5 (because the square-ness was not captured by the system's description).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
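A worked version of the "red rectangle" example, with the equivalence classes reduced to a set of words judged accurate for the object; the helper is ours and simplifies the full equivalence-class machinery.

```python
def description_scores(system_words, truth_descriptions, accurate_words):
    # Precision: fraction of system words found in some truthful description.
    precision = sum(w in accurate_words for w in system_words) / len(system_words)
    # Recall: coverage of the closest ground-truth description.
    best_truth = max(truth_descriptions,
                     key=lambda t: sum(w in system_words for w in t))
    recall = sum(w in system_words for w in best_truth) / len(best_truth)
    return precision, recall

system = ["red", "rectangle"]
truths = [["red", "square"], ["red", "cube"]]
accurate = {"red", "rectangle", "square", "cube"}     # truthful descriptors
print(description_scores(system, truths, accurate))   # (1.0, 0.5)
```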
{
"text": "In the reference resolution evaluation, precision and recall are calculated on the training set according to the standard measures by comparing the referent set obtained by the system and the ground truth referent set (those objects actually referred to by the speaker). Training instances lacking feature data because of an error in recording were excluded from the F1-score for reference resolution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": "Each underspecified demonstration video consisted of 15-20 demonstrations containing one or more focal objects referenced in the description and, in most cases, distractor objects that are not mentioned. We used the same test video from our previous work with objects removed that could not be described using terms used in the training set, leaving 15 objects.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": "We tested eight different system configurations. The baseline system simply guessed at a path through the lattice without any multiple-instance learning (G). We then added multiple instance learning (M), distance probabilities (D), classifier probabilities (C), quantifier constraints (Q), and negative information (N). We show the data for these different methods in Figure 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 368,
"end": 376,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": "7 Results and Discussion",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": "Rather than comparing our language learning system to others on a common dataset, we choose to focus our analysis on how our implementations of pragmatic inference and child language learning strategies affected performance of reference resolution and description generation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Methods",
"sec_num": "7.1"
},
{
"text": "The relatively strong naming performance of G can be attributed to the fact that many demonstrations had similarities among the objects presented that could be learned from choosing any of the objects. However, reference resolution performance for G averaged a .34 F1-score compared with a .70 F1-score for our best performing configuration. Adding quantifier constraints (GQ) did not help, although quantifier constraints with multiple-instance learning (GMQ) led to a significant increase in reference resolution performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Methods",
"sec_num": "7.1"
},
{
"text": "Multiple-instance learning provided a significant gain in reference resolution performance, and with quantifier constraints also yielded the highest naming performance (QDM and QDML). The relative lower performance by inclusion of classifier probabilities with this limited training data is due to errors in classification that compound in this online-learning framework. In multiple-instance cases where there are a number of previous examples to draw from, then the information provided by classifier probability is redundant and less accurate. However, as the approach scales and retaining previous instances is intractable, the classifier probabilities provide a more concise representation of knowledge to be used in future learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Methods",
"sec_num": "7.1"
},
{
"text": "We found that negative information hurt performance in this framework (QDMCN vs. QDMC) for two reasons. First, the risk of introducing negative information is high compared to its possible reward. While it promises to remove some errors in classification, an accurate piece of negative information only removes one class from consideration when multiple other alternatives exist, while an inaccurate piece of negative information contributes to erroneous classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Methods",
"sec_num": "7.1"
},
{
"text": "Second, situations where negative information might be inferred are induced by a natural language description which, by Grice's Maxims, will attempt to be as clear as possible given the listener's information. This means that, adhering to the Contrast principle, negative examples are likely already far from the positive examples for the class. Figure 5 shows results from the averaging of random combinations of 4 underspecified videos, using our highest-scoring configuration QDM to Figure 5 : Description generation results from training according to the number of training videos with standard error bars. The solid line is the QDM's performance learning from underspecified videos. The dashed line is the system's performance learning from videos where objects are presented one at a time. The dotted line is the baseline (G). F1-score for reference resolution in the underspecified case was consistent across videos (mean .7, SD .01).",
"cite_spans": [],
"ref_spans": [
{
"start": 346,
"end": 354,
"text": "Figure 5",
"ref_id": null
},
{
"start": 486,
"end": 494,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Learning Methods",
"sec_num": "7.1"
},
{
"text": "show the increase in performance as more training data is provided to the system. We compare our results on videos with multiple objects to the performance of the system with objects presented one at a time and with the baseline G. Because the training objects are slightly different, we present results on a subset of objects where at least a ground truth object name was present in the training data. Our results show that while the performance is lower in the ambiguous case, the general learning rate per video is comparable with the single-object case. In the 1-video case, guessing is equally as effective as our method due to the system being too tentative with assigning labels to objects without more information to minimize errors affecting learning in later demonstrations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Methods",
"sec_num": "7.1"
},
{
"text": "We did see an effect of the order in which videos were presented to the system on performance, suggesting that learning the correct concepts early on can have long-term ramifications for an online learning process. Possible ways to mitigate this effect include a memory model with forgetting or a more robust classifier. We leave such efforts to future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Methods",
"sec_num": "7.1"
},
{
"text": "While the number of nodes and paths in the lattice is exponential in the number of objects in the scene, our system can still perform quickly enough to serve as a language learning agent suitable for real-time interaction. The pragmatic constraints on possible referent sets allow us to remove a large number of paths, which is especially important when there are many objects in the scene or when the referring expression contains a number of descriptors. In situations with more than 4-5 objects, we expect that other cues can establish joint attention with enough resolution to remove some objects from consideration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Running Time Performance",
"sec_num": "7.2"
},
{
"text": "Visual features can be extracted from video at 3 frames per second, which is acceptable for realtime interaction as only 5 frames are needed for training or testing. Not including the feature extraction (performed separately), the QUM configuration processed our 55 demonstrations in about 1 minute on a 2.3 GHz Intel Core i7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Running Time Performance",
"sec_num": "7.2"
},
{
"text": "We compared our results from the description generation metric with the reference resolution metric to evaluate how the quality of reference resolution affected learning performance. The description generation F-score was more strongly positively correlated with the reference resolution precision than with the recall. We found a reference resolution F-score with \u03b2 = .7 (as opposed to the standard \u03b2 = 1) had the highest Pearson correlation with the F-score (r = .63, p < .0001), indicating that reference resolution precision is roughly 1.4 times more important than recall in predicting learning performance in this system. This result provides evidence that the quality of the first data in a limited data learning algorithm can be critical in establishing long-term performance, especially in an online learning system, and suggests that our results could be improved by correcting hypotheses that once appeared reasonable to the system. It also suggests that F1score may not be the most appropriate measure for performance of a component that is relied upon to give accurate data for further learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Between Evaluation Metrics",
"sec_num": "7.3"
},
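For reference, the weighted F-score used in this analysis, in which β = 0.7 treats recall as 0.7 times as important as precision (so precision is weighted roughly 1.4 times more heavily):

```python
def f_beta(precision, recall, beta=0.7):
    """Weighted harmonic mean of precision and recall; beta < 1 favors precision."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

print(f_beta(0.8, 0.5))   # the high-precision system scores higher...
print(f_beta(0.5, 0.8))   # ...than the same numbers swapped
```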
{
"text": "Accounting for overspecification in the model more closely approximates human speech at the expense of a strict interpretation of the Maxim of Quantity. It allows us to use graded pragmatic constraints that admit helpful heuristics for learning without treating them as a rule. In our training data, the speaker was typically over-descriptive, leading to a high optimal overspecification. Figure 6 shows the effect of different values for the overspecification probability on the performance of the Figure 6 : Effect of varying overspecification probability on the F1 score for both Description Generation (black dashed) and Reference Resolution (grey solid), calculated on a dataset with hand location information. system. The strong dip in reference resolution performance at an overspecification probability of 0 shows the significant negative effect a strict interpretation of the Maxim of Quantity would have in this situation. The correct value for overspecification probability for a given situation depends on a number of factors such as scene complexity and descriptor type (Koolen et al., 2011 ), but we have not yet incorporated these factors into our overspecification probability in this work.",
"cite_spans": [
{
"start": 1084,
"end": 1104,
"text": "(Koolen et al., 2011",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 389,
"end": 398,
"text": "Figure 6",
"ref_id": null
},
{
"start": 500,
"end": 508,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Overspecification",
"sec_num": "7.4"
},
{
"text": "Our multi-instance learning procedure can be classified as instance-level with witnesses, which means that we identify the positive examples that lead to the label of the \"bag\", or demonstration in this case. In addition, we relax the assumption that there is only a single positive instance corresponding to the label of the demonstration. This relaxation increases the complexity of cross-instance relationships, but allows for references to multiple objects simultaneously and therefore faster training than a sequential presentation would allow. In accounting for overspecification, we also must establish a dependence on the labels of the image via the edges of the lattice. This adds additional complexity, but our results show that accounting for overspecification can lead to increased performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison to Other Multi-Instance Learning Methods",
"sec_num": "7.5"
},
{
"text": "Work on this system is ongoing, with extensions planned for improving performance, generating more complete symbol grounding, and allowing more flexibility in both environment and language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "8"
},
{
"text": "While the parser in our system can interpret phrases such as \"the tall block\", we do not have a way of resolving the non-intersective predicate \"tall\" in our current framework. Non-intersective predicates add complexity to the system because their reference point is not necessarily the other objects in the scene -it may be a reference to other objects in the same class (i.e., blocks).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "8"
},
{
"text": "Also, our set of features is rather rudimentary and could be improved, as we chose lowdimensional, continuous features in an attempt to facilitate a close connection between language and vision. The use of continuous features ensures that primitive concepts are grounded solely in perception and not higher-order conceptual models (Perera and Allen, 2014) . Initial results using 3D shape features show a considerable performance increase on a kitchen dataset we are developing.",
"cite_spans": [
{
"start": 331,
"end": 355,
"text": "(Perera and Allen, 2014)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "8"
},
{
"text": "We have proposed a probabilistic framework for using pragmatic inference to learn from underspecified visual descriptions. We show that this system can use pragmatic assumptions attenuated by overspecification probability to learn attributes and object names from videos that include a number of distractors. We also analyzed various learning methods in an attempt to gain a deeper understanding of the theoretical and practical considerations of situated language learning, finding that Conventionality and Contrast learning strategies with quantifiers and overspecification probabilities yielded the best performing system. These results support the idea that an understanding of how humans learn and communicate can lead to better visually grounded language learning systems. We believe this work is an important step towards systems in which natural language not only stands in for manual annotation, but also enables new methods of training robots and other situated systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
}
],
"back_matter": [
{
"text": "This work was funded by The Office of Naval Research (N000141210547), the Nuance Foundation, and DARPA Big Mechanism program under ARO contract W911NF-14-1-0391.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "10"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Deep Semantic Analysis of Text",
"authors": [
{
"first": "J",
"middle": [],
"last": "Allen",
"suffix": ""
},
{
"first": "Mary",
"middle": [],
"last": "Swift",
"suffix": ""
},
{
"first": "Will De",
"middle": [],
"last": "Beaumont",
"suffix": ""
}
],
"year": 2008,
"venue": "Symp. Semant. Syst. Text Process",
"volume": "",
"issue": "",
"pages": "343--354",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Allen, Mary Swift, and Will de Beaumont. 2008. Deep Semantic Analysis of Text. In Symp. Semant. Syst. Text Process., volume 2008, pages 343-354, Morristown, NJ, USA. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Clarifying the role of shape in children's taxonomic assumption",
"authors": [
{
"first": "D A",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 1992,
"venue": "J. Exp. Child Psychol",
"volume": "54",
"issue": "3",
"pages": "392--416",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D A Baldwin. 1992. Clarifying the role of shape in children's taxonomic assumption. J. Exp. Child Psy- chol., 54(3):392-416.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The Weighted Nearest Neighbor Rule for Class Dependent Sample Sizes",
"authors": [
{
"first": "T",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Koplowitz",
"suffix": ""
}
],
"year": 1979,
"venue": "IEEE Trans. Inf. Theory, I",
"volume": "",
"issue": "5",
"pages": "617--619",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T Brown and J Koplowitz. 1979. The Weighted Nearest Neighbor Rule for Class Dependent Sample Sizes. IEEE Trans. Inf. Theory, I(5):617-619.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "On the pragmatics of contrast",
"authors": [
{
"first": "Eve",
"middle": [
"V"
],
"last": "Clark",
"suffix": ""
}
],
"year": 2009,
"venue": "J. Child Lang",
"volume": "17",
"issue": "02",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eve V. Clark. 2009. On the pragmatics of contrast. J. Child Lang., 17(02):417, February.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Solving the multiple instance problem with axis-parallel rectangles",
"authors": [
{
"first": "",
"middle": [],
"last": "Dietterich",
"suffix": ""
}
],
"year": 1997,
"venue": "Artif. Intell",
"volume": "89",
"issue": "",
"pages": "31--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T Dietterich. 1997. Solving the multiple instance problem with axis-parallel rectangles. Artif. Intell., 89:31-71.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A probabilistic approach to learning a visually grounded language model through human-robot interaction",
"authors": [
{
"first": "Haris",
"middle": [],
"last": "Dindo",
"suffix": ""
},
{
"first": "Daniele",
"middle": [],
"last": "Zambuto",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haris Dindo and Daniele Zambuto. 2010. A prob- abilistic approach to learning a visually grounded language model through human-robot interaction.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Conf. Intell. Robot. Syst. IROS 2010 -Conf. Proc",
"authors": [
{
"first": "",
"middle": [],
"last": "Ieee/Rsj",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "790--796",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "IEEE/RSJ 2010 Int. Conf. Intell. Robot. Syst. IROS 2010 -Conf. Proc., pages 790-796.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Describing objects by their attributes",
"authors": [
{
"first": "A",
"middle": [],
"last": "Farhadi",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Endres",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Hoiem",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Forsyth",
"suffix": ""
}
],
"year": 2009,
"venue": "IEEE Conf. Comput. Vis. Pattern Recognit",
"volume": "",
"issue": "",
"pages": "1778--1785",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Farhadi, I. Endres, D. Hoiem, and D. Forsyth. 2009. Describing objects by their attributes. 2009 IEEE Conf. Comput. Vis. Pattern Recognit., pages 1778- 1785, June.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A review of multi-instance learning assumptions",
"authors": [
{
"first": "James",
"middle": [],
"last": "Foulds",
"suffix": ""
},
{
"first": "Eibe",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2010,
"venue": "Knowl. Eng. Rev",
"volume": "25",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Foulds and Eibe Frank. 2010. A review of multi-instance learning assumptions. Knowl. Eng. Rev., 25:1.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Logic and conversation",
"authors": [
{
"first": "",
"middle": [],
"last": "Hp Grice",
"suffix": ""
}
],
"year": 1975,
"venue": "Syntax Semant",
"volume": "",
"issue": "",
"pages": "41--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "HP Grice. 1975. Logic and conversation. In Peter Cole and Jerry L. Morgan, editors, Syntax Semant., pages 41-58. Academic Press.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Textural features for image classification",
"authors": [
{
"first": "Robert",
"middle": [
"M"
],
"last": "Haralick",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1973,
"venue": "IEEE Trans. Syst. Man, Cybern",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert M. Haralick, K. Shanmugam, and Its'hak Din- stein. 1973. Textural features for image classifica- tion. IEEE Trans. Syst. Man, Cybern. SMC-3.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Invariant Image Recognition by Zernike Moments",
"authors": [
{
"first": "A",
"middle": [],
"last": "Khotanzad",
"suffix": ""
},
{
"first": "Y H",
"middle": [],
"last": "Hong",
"suffix": ""
}
],
"year": 1990,
"venue": "IEEE Trans. Pattern Anal. Mach. Intell",
"volume": "12",
"issue": "5",
"pages": "489--497",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A Khotanzad and Y H Hong. 1990. Invariant Image Recognition by Zernike Moments. IEEE Trans. Pat- tern Anal. Mach. Intell., 12(5):489-497, May.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Toward interactive grounded language acquisition",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Kollar",
"suffix": ""
},
{
"first": "Jayant",
"middle": [],
"last": "Krishnamurthy",
"suffix": ""
},
{
"first": "Grant",
"middle": [],
"last": "Strimel",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. Robot. Sci. Syst",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Kollar, Jayant Krishnamurthy, and Grant Strimel. 2013. Toward interactive grounded lan- guage acquisition. Proc. Robot. Sci. Syst.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Factors causing overspecification in definite descriptions",
"authors": [
{
"first": "Ruud",
"middle": [],
"last": "Koolen",
"suffix": ""
},
{
"first": "Albert",
"middle": [],
"last": "Gatt",
"suffix": ""
},
{
"first": "Martijn",
"middle": [],
"last": "Goudbeek",
"suffix": ""
},
{
"first": "Emiel",
"middle": [],
"last": "Krahmer",
"suffix": ""
}
],
"year": 2011,
"venue": "J. Pragmat",
"volume": "43",
"issue": "13",
"pages": "3231--3250",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruud Koolen, Albert Gatt, Martijn Goudbeek, and Emiel Krahmer. 2011. Factors causing over- specification in definite descriptions. J. Pragmat., 43(13):3231-3250, October.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Jointly Learning to Parse and Perceive: Connecting Natural Language to the",
"authors": [
{
"first": "Jayant",
"middle": [],
"last": "Krishnamurthy",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Kollar",
"suffix": ""
}
],
"year": 2013,
"venue": "Physical World. Trans. Assoc. Comput. Linguist",
"volume": "1",
"issue": "",
"pages": "193--206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jayant Krishnamurthy and Thomas Kollar. 2013. Jointly Learning to Parse and Perceive: Connecting Natural Language to the Physical World. Trans. As- soc. Comput. Linguist., 1:193-206.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Object Shape, Object Function, and Object Name",
"authors": [
{
"first": "Barbara",
"middle": [],
"last": "Landau",
"suffix": ""
},
{
"first": "Linda",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Susan",
"middle": [],
"last": "Jones",
"suffix": ""
}
],
"year": 1998,
"venue": "J. Mem. Lang",
"volume": "38",
"issue": "1",
"pages": "1--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barbara Landau, Linda Smith, and Susan Jones. 1998. Object Shape, Object Function, and Object Name. J. Mem. Lang., 38(1):1-27, January.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Online multiple instance learning with no regret",
"authors": [
{
"first": "Mu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "James",
"middle": [
"T"
],
"last": "Kwok",
"suffix": ""
},
{
"first": "Bao",
"middle": [
"Liang"
],
"last": "Lu",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit",
"volume": "",
"issue": "",
"pages": "1395--1401",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mu Li, James T. Kwok, and Bao Liang Lu. 2010. Online multiple instance learning with no regret. Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pages 1395-1401.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "On The Generalized Distance in Statistics",
"authors": [
{
"first": "",
"middle": [],
"last": "Pc C Mahalanobis",
"suffix": ""
}
],
"year": 1936,
"venue": "Proc. Natl. Inst. Sci. India",
"volume": "",
"issue": "",
"pages": "49--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "PC C Mahalanobis. 1936. On The Generalized Dis- tance in Statistics. Proc. Natl. Inst. Sci. India, pages 49-55.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A framework for multiple-instance learning",
"authors": [
{
"first": "Oded",
"middle": [],
"last": "Maron",
"suffix": ""
},
{
"first": "Tom\u00e1s",
"middle": [],
"last": "Lozano-P\u00e9rez",
"suffix": ""
}
],
"year": 1998,
"venue": "Adv. Neural Inf. Process. Syst",
"volume": "10",
"issue": "",
"pages": "570--576",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oded Maron and Tom\u00e1s Lozano-P\u00e9rez. 1998. A framework for multiple-instance learning. Adv. Neu- ral Inf. Process. Syst., 10:570 -576.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A Joint Model of Language and Perception for Grounded Attribute Learning",
"authors": [
{
"first": "Cynthia",
"middle": [],
"last": "Matuszek",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "FitzGerald",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Liefeng",
"middle": [],
"last": "Bo",
"suffix": ""
},
{
"first": "Dieter",
"middle": [],
"last": "Fox",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. Int. Conf. Mach. Learn",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cynthia Matuszek, N FitzGerald, Luke Zettlemoyer, Liefeng Bo, and Dieter Fox. 2012. A Joint Model of Language and Perception for Grounded Attribute Learning. In Proc. Int. Conf. Mach. Learn.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Towards an Indexical Model of Situated Language Comprehension for Real-World Cognitive Agents",
"authors": [
{
"first": "Shiwali",
"middle": [],
"last": "Mohan",
"suffix": ""
},
{
"first": "John",
"middle": [
"E"
],
"last": "Laird",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "153--170",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shiwali Mohan, John E Laird, and Laird Umich Edu. 2013. Towards an Indexical Model of Situated Language Comprehension for Real-World Cognitive Agents. 2013:153-170.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Unsupervised Alignment of Natural Language Instructions with Video Segments",
"authors": [
{
"first": "Iftekhar",
"middle": [],
"last": "Naim",
"suffix": ""
},
{
"first": "Young",
"middle": [
"Chol"
],
"last": "Song",
"suffix": ""
},
{
"first": "Qiguang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Kautz",
"suffix": ""
},
{
"first": "Jiebo",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2014,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iftekhar Naim, Young Chol Song, Qiguang Liu, Henry Kautz, Jiebo Luo, and Daniel Gildea. 2014. Un- supervised Alignment of Natural Language Instruc- tions with Video Segments. In AAAI.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "SALL-E: Situated Agent for Language Learning",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Perera",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Allen",
"suffix": ""
}
],
"year": 2013,
"venue": "Twenty-Seventh AAAI Conf. Artif. Intell",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Perera and JF Allen. 2013. SALL-E: Situated Agent for Language Learning. In Twenty-Seventh AAAI Conf. Artif. Intell.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "What is the Ground ? Continuous Maps for Grounding Perceptual Primitives",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Perera",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "James F Allen",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. 36th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Perera and James F Allen. 2014. What is the Ground ? Continuous Maps for Grounding Percep- tual Primitives. In P Bello, M. Guarini, M Mc- Shane, and Brian Scassellati, editors, Proc. 36th",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Methods of Multivariate Analysis. Wiley Series in Probability and Statistics",
"authors": [
{
"first": "A C",
"middle": [],
"last": "Rencher",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A C Rencher. 2003. Methods of Multivariate Analysis. Wiley Series in Probability and Statistics. Wiley.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A system for interactive learning in dialogue with a tutor",
"authors": [
{
"first": "Danijel",
"middle": [],
"last": "Skocaj",
"suffix": ""
},
{
"first": "Matej",
"middle": [],
"last": "Kristan",
"suffix": ""
},
{
"first": "Alen",
"middle": [],
"last": "Vrecko",
"suffix": ""
},
{
"first": "Marko",
"middle": [],
"last": "Mahnic",
"suffix": ""
},
{
"first": "Miroslav",
"middle": [],
"last": "Janicek",
"suffix": ""
},
{
"first": "Geert-Jan",
"middle": [
"M"
],
"last": "Kruijff",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Hanheide",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Hawes",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Keller",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Zillich",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2011,
"venue": "IEEE/RSJ Int. Conf. Intell. Robot. Syst",
"volume": "",
"issue": "",
"pages": "3387--3394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danijel Skocaj, Matej Kristan, Alen Vrecko, Marko Mahnic, Miroslav Janicek, Geert-Jan M. Krui- jff, Marc Hanheide, Nick Hawes, Thomas Keller, Michael Zillich, and Kai Zhou. 2011. A system for interactive learning in dialogue with a tutor. In 2011 IEEE/RSJ Int. Conf. Intell. Robot. Syst., pages 3387-3394. IEEE, September.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Infants rapidly learn word-referent mappings via cross-situational statistics",
"authors": [
{
"first": "Linda",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2008,
"venue": "Cognition",
"volume": "106",
"issue": "",
"pages": "1558--1568",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Linda Smith and Chen Yu. 2008. Infants rapidly learn word-referent mappings via cross-situational statis- tics. Cognition, 106:1558-1568.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Cross-situational learning: An experimental study of word-learning mechanisms",
"authors": [
{
"first": "Kenny",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"D",
"M"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"A"
],
"last": "Blythe",
"suffix": ""
}
],
"year": 2011,
"venue": "Cogn. Sci",
"volume": "35",
"issue": "",
"pages": "480--498",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenny Smith, Andrew D M Smith, and Richard a. Blythe. 2011. Cross-situational learning: An exper- imental study of word-learning mechanisms. Cogn. Sci., 35:480-498.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Propose but verify: fast mapping meets cross-situational word learning",
"authors": [
{
"first": "John",
"middle": [
"C"
],
"last": "Trueswell",
"suffix": ""
},
{
"first": "Tamara",
"middle": [
"Nicol"
],
"last": "Medina",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Hafri",
"suffix": ""
},
{
"first": "Lila",
"middle": [
"R"
],
"last": "Gleitman",
"suffix": ""
}
],
"year": 2013,
"venue": "Cogn. Psychol",
"volume": "66",
"issue": "1",
"pages": "126--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John C Trueswell, Tamara Nicol Medina, Alon Hafri, and Lila R Gleitman. 2013. Propose but verify: fast mapping meets cross-situational word learning. Cogn. Psychol., 66(1):126-56, February.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Word and Object",
"authors": [
{
"first": "W",
"middle": [],
"last": "Van Orman Quine",
"suffix": ""
}
],
"year": 1964,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W Van Orman Quine. 1964. Word and Object. MIT Press paperback series. MIT Press.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Grounded Language Learning from Video Described with Sentences",
"authors": [
{
"first": "Haonan",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [
"Mark"
],
"last": "Siskind",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. 51st Annu",
"volume": "",
"issue": "",
"pages": "53--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haonan Yu and Jeffrey Mark Siskind. 2013. Grounded Language Learning from Video Described with Sen- tences. In Proc. 51st Annu. Meet. Assoc. Comput. Linguist., pages 53-63.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "One of the training examples, described as \"The two red blocks [left-most two blocks in this figure] are next to the other blocks.\"",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "Figure 2shows the format of such a referring expression.(MENTIONED : ID ONT : : V11915 : TERMS ( ( TERM ONT : : V11915 : CLASS ( : * ONT : : REFERENTIAL\u2212SEM W: : BLOCK) : PROPERTIES ( ( : * ONT : : MODIFIER W: : YELLOW) ) :QUAN ONT : : THE) ) ) Primary referring expression extraction from the semantic parse for \"The yellow block is next to the others\".",
"num": null
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"text": "Three examples of paths in the reference lattice for the referring expression \"the white square\", when the visible objects are a grey square, white square, and a white triangle.",
"num": null
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"text": "F-Score for Description Generation in grey and Reference Resolution in black for various configurations of the system run on 4 underspecificed videos. Error bars are one standard deviation.",
"num": null
}
}
}
}