{
"paper_id": "N16-1019",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:38:03.132624Z"
},
"title": "Grounded Semantic Role Labeling",
"authors": [
{
"first": "Shaohua",
"middle": [],
"last": "Yang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Michigan State University",
"location": {
"postCode": "48824",
"settlement": "East Lansing",
"region": "MI"
}
},
"email": ""
},
{
"first": "Qiaozi",
"middle": [],
"last": "Gao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Michigan State University",
"location": {
"postCode": "48824",
"settlement": "East Lansing",
"region": "MI"
}
},
"email": "[email protected]"
},
{
"first": "Changsong",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Michigan State University",
"location": {
"postCode": "48824",
"settlement": "East Lansing",
"region": "MI"
}
},
"email": "[email protected]"
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Song-Chun",
"middle": [],
"last": "Zhu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California",
"location": {
"postCode": "90095",
"settlement": "Los Angeles",
"region": "CA"
}
},
"email": "[email protected]"
},
{
"first": "Joyce",
"middle": [
"Y"
],
"last": "Chai",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Michigan State University",
"location": {
"postCode": "48824",
"settlement": "East Lansing",
"region": "MI"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Semantic Role Labeling (SRL) captures semantic roles (or participants) such as agent, patient, and theme associated with verbs from the text. While it provides important intermediate semantic representations for many traditional NLP tasks (such as information extraction and question answering), it does not capture grounded semantics so that an artificial agent can reason, learn, and perform the actions with respect to the physical environment. To address this problem, this paper extends traditional SRL to grounded SRL where arguments of verbs are grounded to participants of actions in the physical world. By integrating language and vision processing through joint inference, our approach not only grounds explicit roles, but also grounds implicit roles that are not explicitly mentioned in language descriptions. This paper describes our empirical results and discusses challenges and future directions.",
"pdf_parse": {
"paper_id": "N16-1019",
"_pdf_hash": "",
"abstract": [
{
"text": "Semantic Role Labeling (SRL) captures semantic roles (or participants) such as agent, patient, and theme associated with verbs from the text. While it provides important intermediate semantic representations for many traditional NLP tasks (such as information extraction and question answering), it does not capture grounded semantics so that an artificial agent can reason, learn, and perform the actions with respect to the physical environment. To address this problem, this paper extends traditional SRL to grounded SRL where arguments of verbs are grounded to participants of actions in the physical world. By integrating language and vision processing through joint inference, our approach not only grounds explicit roles, but also grounds implicit roles that are not explicitly mentioned in language descriptions. This paper describes our empirical results and discusses challenges and future directions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Linguistic studies capture semantics of verbs by their frames of thematic roles (also referred to as semantic roles or verb arguments) (Levin, 1993) . For example, a verb can be characterized by agent (i.e., the animator of the action) and patient (i.e., the object on which the action is acted upon), and other roles such as instrument, source, destination, etc. Given a verb frame, the goal of Semantic Role Labeling (SRL) is to identify linguistic entities from the text that serve different thematic roles (Palmer et al., 2005; Gildea and Jurafsky, The woman takes out a cucumber from the refrigerator.",
"cite_spans": [
{
"start": 135,
"end": 148,
"text": "(Levin, 1993)",
"ref_id": "BIBREF21"
},
{
"start": 510,
"end": 531,
"text": "(Palmer et al., 2005;",
"ref_id": "BIBREF29"
},
{
"start": 532,
"end": 552,
"text": "Gildea and Jurafsky,",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Predicate: \"takes out\": track 1 Agent: ''The woman'' : track 2 Pa.ent: ''a cucumber'' : track 3 Source: ''from the refrigerator'' : track 4 Des.na.on: '' '' : track 5 2002; Collobert et al., 2011; Zhou and Xu, 2015) . For example, given the sentence the woman takes out a cucumber from the refrigerator., takes out is the main verb (also called predicate); the noun phrase the woman is the agent of this action; a cucumber is the patient; and the refrigerator is the source.",
"cite_spans": [
{
"start": 173,
"end": 196,
"text": "Collobert et al., 2011;",
"ref_id": "BIBREF6"
},
{
"start": 197,
"end": 215,
"text": "Zhou and Xu, 2015)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "SRL captures important semantic representations for actions associated with verbs, which have shown beneficial for a variety of applications such as information extraction (Emanuele et al., 2013) and question answering (Shen and Lapata, 2007) . However, the traditional SRL is not targeted to represent verb semantics that are grounded to the physical world so that artificial agents can truly understand the ongoing activities and (learn to) perform the specified actions. To address this issue, we propose a new task on grounded semantic role labeling. Figure 1 shows an example of grounded SRL.",
"cite_spans": [
{
"start": 172,
"end": 195,
"text": "(Emanuele et al., 2013)",
"ref_id": "BIBREF12"
},
{
"start": 219,
"end": 242,
"text": "(Shen and Lapata, 2007)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [
{
"start": 555,
"end": 563,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The sentence the woman takes out a cucumber from the refrigerator describes an activity in a visual scene. The semantic role representation from linguistic processing (including implicit roles such as destination) is first extracted and then grounded to tracks of visual entities as shown in the video. For example, the verb phrase take out is grounded to a trajectory of the right hand. The role agent is grounded to the person who actually does the take-out action in the visual scene (track 1) ; the patient is grounded to the cucumber taken out (track 3); and the source is grounded to the refrigerator (track 4). The implicit role of destination (which is not explicitly mentioned in the language description) is grounded to the cutting board (track 5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To tackle this problem, we have developed an approach to jointly process language and vision by incorporating semantic role information. In particular, we use a benchmark dataset (TACoS) which consists of parallel video and language descriptions in a complex cooking domain (Regneri et al., 2013) in our investigation. We have further annotated several layers of information for developing and evaluating grounded semantic role labeling algorithms. Compared to previous works on language grounding (Tellex et al., 2011; Yu and Siskind, 2013; Krishnamurthy and Kollar, 2013) , our work presents several contributions. First, beyond arguments explicitly mentioned in language descriptions, our work simultaneously grounds explicit and implicit roles with an attempt to better connect verb semantics with actions from the underlying physical world. By incorporating semantic role information, our approach has led to better grounding performance. Second, most previous works only focused on a small number of verbs with limited activities. We base our investigation on a wider range of verbs and in a much more complex domain where object recognition and tracking are notably more difficult. Third, our work results in additional layers of annotation to part of the TACoS dataset. This annotation captures the structure of actions informed by semantic roles from the video. The annotated data is available for download 1 . It will provide a benchmark for future work on grounded SRL.",
"cite_spans": [
{
"start": 274,
"end": 296,
"text": "(Regneri et al., 2013)",
"ref_id": "BIBREF31"
},
{
"start": 498,
"end": 519,
"text": "(Tellex et al., 2011;",
"ref_id": "BIBREF35"
},
{
"start": 520,
"end": 541,
"text": "Yu and Siskind, 2013;",
"ref_id": "BIBREF43"
},
{
"start": 542,
"end": 573,
"text": "Krishnamurthy and Kollar, 2013)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1 http://lair.cse.msu.edu/gsrl.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent years have witnessed an increasing amount of work in integrating language and vision, from earlier image annotation (Ramanathan et al., 2013; Kazemzadeh et al., 2014) to recent image/video caption generation (Kuznetsova et al., 2013; Venugopalan et al., 2015; Ortiz et al., ; Elliott and de Vries, 2015; Devlin et al., 2015) , video sentence alignment (Naim et al., 2015; Malmaud et al., 2015) , scene generation (Chang et al., 2015) , and multimodel embedding incorporating language and vision (Bruni et al., 2014; Lazaridou et al., 2015) .",
"cite_spans": [
{
"start": 123,
"end": 148,
"text": "(Ramanathan et al., 2013;",
"ref_id": "BIBREF30"
},
{
"start": 149,
"end": 173,
"text": "Kazemzadeh et al., 2014)",
"ref_id": "BIBREF17"
},
{
"start": 215,
"end": 240,
"text": "(Kuznetsova et al., 2013;",
"ref_id": "BIBREF19"
},
{
"start": 241,
"end": 266,
"text": "Venugopalan et al., 2015;",
"ref_id": "BIBREF39"
},
{
"start": 267,
"end": 282,
"text": "Ortiz et al., ;",
"ref_id": "BIBREF28"
},
{
"start": 283,
"end": 310,
"text": "Elliott and de Vries, 2015;",
"ref_id": "BIBREF11"
},
{
"start": 311,
"end": 331,
"text": "Devlin et al., 2015)",
"ref_id": "BIBREF9"
},
{
"start": 359,
"end": 378,
"text": "(Naim et al., 2015;",
"ref_id": "BIBREF27"
},
{
"start": 379,
"end": 400,
"text": "Malmaud et al., 2015)",
"ref_id": "BIBREF24"
},
{
"start": 420,
"end": 440,
"text": "(Chang et al., 2015)",
"ref_id": "BIBREF3"
},
{
"start": 502,
"end": 522,
"text": "(Bruni et al., 2014;",
"ref_id": "BIBREF2"
},
{
"start": 523,
"end": 546,
"text": "Lazaridou et al., 2015)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "What is more relevant to our work here is recent progress on grounded language understanding, which involves learning meanings of words through connections to machine perception (Roy, 2005) and grounding language expressions to the shared visual world, for example, to visual objects (Liu et al., 2012; Liu and Chai, 2015) , to physical landmarks (Tellex et al., 2011; Tellex et al., 2014) , and to perceived actions or activities (Tellex et al., 2014; Artzi and Zettlemoyer, 2013) .",
"cite_spans": [
{
"start": 178,
"end": 189,
"text": "(Roy, 2005)",
"ref_id": "BIBREF33"
},
{
"start": 284,
"end": 302,
"text": "(Liu et al., 2012;",
"ref_id": "BIBREF23"
},
{
"start": 303,
"end": 322,
"text": "Liu and Chai, 2015)",
"ref_id": "BIBREF22"
},
{
"start": 347,
"end": 368,
"text": "(Tellex et al., 2011;",
"ref_id": "BIBREF35"
},
{
"start": 369,
"end": 389,
"text": "Tellex et al., 2014)",
"ref_id": "BIBREF36"
},
{
"start": 431,
"end": 452,
"text": "(Tellex et al., 2014;",
"ref_id": "BIBREF36"
},
{
"start": 453,
"end": 481,
"text": "Artzi and Zettlemoyer, 2013)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Different approaches and emphases have been explored. For example, linear programming has been applied to mediate perceptual differences between humans and robots for referential grounding (Liu and Chai, 2015) . Approaches to semantic parsing have been applied to ground language to internal world representations (Chen and Mooney, 2008; Artzi and Zettlemoyer, 2013) . Logical Semantics with Perception (LSP) (Krishnamurthy and Kollar, 2013 ) was applied to ground natural language queries to visual referents through jointly parsing natural language (combinatory categorical grammar (CCG)) and visual attribute classification. Graphical models have been applied to word grounding. For example, a generative model was applied to integrate And-Or-Graph representations of language and vision for joint parsing (Tu et al., 2014) . A Factorial Hidden Markov Model (FHMM) was applied to learn the meaning of nouns, verbs, prepositions, adjectives and adverbs from short video clips paired with sentences (Yu and Siskind, 2013) . Discriminative models have also been applied to ground human commands or instructions to perceived visual entities, mostly for robotic applications (Tellex et al., 2011; Tellex et al., 2014) . More recently, deep learn-ing has been applied to ground phrases to image regions (Karpathy and Fei-Fei, 2015) .",
"cite_spans": [
{
"start": 189,
"end": 209,
"text": "(Liu and Chai, 2015)",
"ref_id": "BIBREF22"
},
{
"start": 314,
"end": 337,
"text": "(Chen and Mooney, 2008;",
"ref_id": "BIBREF4"
},
{
"start": 338,
"end": 366,
"text": "Artzi and Zettlemoyer, 2013)",
"ref_id": "BIBREF0"
},
{
"start": 409,
"end": 440,
"text": "(Krishnamurthy and Kollar, 2013",
"ref_id": "BIBREF18"
},
{
"start": 809,
"end": 826,
"text": "(Tu et al., 2014)",
"ref_id": "BIBREF37"
},
{
"start": 1000,
"end": 1022,
"text": "(Yu and Siskind, 2013)",
"ref_id": "BIBREF43"
},
{
"start": 1173,
"end": 1194,
"text": "(Tellex et al., 2011;",
"ref_id": "BIBREF35"
},
{
"start": 1195,
"end": 1215,
"text": "Tellex et al., 2014)",
"ref_id": "BIBREF36"
},
{
"start": 1300,
"end": 1328,
"text": "(Karpathy and Fei-Fei, 2015)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We first describe our problem formulation and then provide details on the learning and inference algorithms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "Given a sentence S and its corresponding video clip V , our goal is to ground explicit/implicit roles associated with a verb in S to video tracks in V. In this paper, we focus on the following set of semantic roles: {predicate, patient, location, source, destination, tool}. In the cooking domain, as actions always involve hands, the predicate is grounded to the hand pose represented by a trajectory of relevant hand(s). Normally agent would be grounded to the person who does the action. As there is only one person in the scene, we thus ignore the grounding of the agent in this work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3.1"
},
{
"text": "Video tracks capture tracks of objects (including hands) and locations. For example, in Figure 1 , there are 5 tracks: human, hand, cucumber, refrigerator and cutting board. Regarding the representation of locations, instead of discretization of a whole image to many small regions(large search space), we create locations corresponding to five spatial relations (center, up, down, left, right) with respect to each object track, which means we have 5 times number of locations compared with number of objects. For instance, in Figure 1 the center of the bounding boxes of the refrigerator track; and the destination is grounded to the center of the cutting board track. We use Conditional Random Field(CRF) to model this problem. An example CRF factor graph is shown in Figure 2 . The CRF structure is created based on information extracted from language. More Specifically, s 1 , ..., s 6 refers to the observed text and its semantic role. Notice that s 6 is an implicit role as there is no text from the sentence describing destination. Also note that the whole prepositional phrase \"from the drawer\" is identified as the source rather than \"the drawer\" alone. This is because the prepositions play an important role in specifying location information. For example, \"near the cutting boarding\" is describing a location that is near to, but not exactly at the location of the cutting board. Here v 1 , ..., v 6 are grounding random variables which take values from object tracks and locations in the video clip, and \u03c6 1 , ..., \u03c6 6 are binary random variables which take values {0,1}. When \u03c6 i equals to 1, it means v i is the correct grounding of corresponding linguistic semantic role, otherwise it is not. The introduction of random variables \u03c6 i follows previous work from Tellex and colleagues (Tellex et al., 2011) , which makes CRF learning more tractable.",
"cite_spans": [
{
"start": 1800,
"end": 1821,
"text": "(Tellex et al., 2011)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [
{
"start": 88,
"end": 96,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 528,
"end": 536,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 771,
"end": 779,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3.1"
},
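To make the location construction above concrete, here is a minimal sketch (not the authors' code) of how five candidate locations per object track could be enumerated. It assumes each track is reduced to a list of per-frame bounding boxes (x, y, w, h); the `offset` displacement and the helper names are hypothetical.

```python
# Minimal sketch (not the authors' implementation): enumerate candidate
# location groundings as five spatial relations per object track.
# A "track" here is assumed to be a list of per-frame boxes (x, y, w, h).

def box_center(box):
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def candidate_locations(tracks, offset=50):
    """For each track, create 5 location candidates: center/up/down/left/right.

    `offset` (pixels) is a hypothetical displacement for the non-center relations.
    Returns a dict mapping (track_id, relation) -> list of per-frame points.
    """
    relations = {
        "center": (0, 0),
        "up": (0, -offset),
        "down": (0, offset),
        "left": (-offset, 0),
        "right": (offset, 0),
    }
    locations = {}
    for track_id, boxes in tracks.items():
        for rel, (dx, dy) in relations.items():
            locations[(track_id, rel)] = [
                (cx + dx, cy + dy) for cx, cy in map(box_center, boxes)
            ]
    return locations

# Example: two annotated tracks over three frames.
tracks = {
    "refrigerator": [(10, 20, 100, 200)] * 3,
    "cutting_board": [(300, 150, 80, 40)] * 3,
}
locs = candidate_locations(tracks)
print(len(locs))  # 10 == 5 relations x 2 tracks
```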
{
"text": "\u03c6 4 \u03c6 3 \u03c6 6 \u03c6 1 \u03c6 2 v 2 v 1 v 6 v 3 v 4 v 5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3.1"
},
{
"text": "In the CRF model, we do not directly model the objective function as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning and Inference",
"sec_num": "3.2"
},
{
"text": "p(v 1 , ..., v k |S, V )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning and Inference",
"sec_num": "3.2"
},
{
"text": "where S refers to the sentence, V refers to the corresponding video clip and v i refers to the grounding variable. Because the gradient based learning method needs the expectation of v 1 , ..., v k , which is infeasible, we instead use the following objective function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning and Inference",
"sec_num": "3.2"
},
{
"text": "P (\u03c6|s 1 , s 2 , . . . , s k , v 1 , v 2 , . . . , v k , V )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning and Inference",
"sec_num": "3.2"
},
{
"text": "where \u03c6 is a binary random vector [\u03c6 1 , ..., \u03c6 k ], indicating whether the grounding is correct. In this way, the objective function factorizes according to the structure of language with local normalization at each factor. Gradient ascent with L2 regularization was used for parameter learning to maximize the objective function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning and Inference",
"sec_num": "3.2"
},
{
"text": "\u2202L \u2202w = i F (\u03c6 i , s i , v i , V ) \u2212 i E P (\u03c6 i |s i ,v i ,V ) F (\u03c6 i , s i , v i , V )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning and Inference",
"sec_num": "3.2"
},
{
"text": "where F refers to the feature function. During the training, we also use random grounding as negative samples for discriminative training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning and Inference",
"sec_num": "3.2"
},
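The learning step described above can be sketched as follows. This is an illustrative implementation under assumptions, not the authors' code: each factor is modeled as a locally normalized log-linear distribution over the binary variable φ_i, the feature vector F(φ_i=0, ...) is taken to be zero, and negative samples are simply examples labeled φ_i = 0.

```python
# Minimal sketch (assumptions, not the authors' code): gradient ascent for a
# locally normalized log-linear factor model over binary correspondence
# variables phi_i, with L2 regularization and random negative groundings.
import numpy as np

def factor_probs(w, feats_phi0, feats_phi1):
    """Local log-linear factor: P(phi | s_i, v_i, V) over phi in {0, 1}."""
    scores = np.array([w @ feats_phi0, w @ feats_phi1])
    scores -= scores.max()                       # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return probs                                 # [P(phi=0), P(phi=1)]

def gradient_step(w, examples, lr=0.1, l2=0.01):
    """One ascent step on sum_i log P(phi_i | s_i, v_i, V) - l2 * ||w||^2 / 2.

    `examples` is a list of (phi_obs, feats_phi0, feats_phi1) triples; random
    negative groundings are simply examples with phi_obs = 0.
    """
    grad = -l2 * w
    for phi_obs, f0, f1 in examples:
        probs = factor_probs(w, f0, f1)
        observed = f1 if phi_obs == 1 else f0     # F(phi_i, s_i, v_i, V)
        expected = probs[0] * f0 + probs[1] * f1  # E_P[F(phi, s_i, v_i, V)]
        grad += observed - expected
    return w + lr * grad

# Toy usage with 3-dimensional features.
rng = np.random.default_rng(0)
w = np.zeros(3)
examples = [(1, np.zeros(3), rng.normal(size=3)),   # correct grounding
            (0, np.zeros(3), rng.normal(size=3))]   # random negative grounding
for _ in range(50):
    w = gradient_step(w, examples)
```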
{
"text": "During inference, the search space can be very large when the number of objects in the world increases. To address this problem we apply beam search to first ground roles including patient, tool, and then other roles including location, source, destination and predicate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning and Inference",
"sec_num": "3.2"
},
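A minimal sketch of this staged beam search, under the assumption of a generic scoring function over partial assignments (the function and variable names are hypothetical, not the authors' implementation):

```python
# Minimal sketch of the staged beam search: patient/tool are grounded first,
# then the remaining roles, keeping only the top-k partial assignments at each
# step under an assumed scoring function.
def beam_search_grounding(roles_stage1, roles_stage2, candidates, score, beam_size=5):
    """candidates: dict role -> list of possible groundings (tracks/locations).
    score: function(partial_assignment) -> float (higher is better)."""
    beam = [dict()]
    for role in list(roles_stage1) + list(roles_stage2):
        expanded = []
        for partial in beam:
            for g in candidates.get(role, [None]):
                expanded.append(dict(partial, **{role: g}))
        expanded.sort(key=score, reverse=True)
        beam = expanded[:beam_size]
    return beam[0] if beam else {}

# Toy usage: the score simply prefers a hypothetical "gold" assignment.
gold = {"patient": "cucumber", "tool": "knife", "source": "refrigerator"}
cands = {"patient": ["cucumber", "bread"], "tool": ["knife"],
         "source": ["refrigerator", "sink"]}
best = beam_search_grounding(["patient", "tool"], ["source"], cands,
                             score=lambda a: sum(gold.get(r) == g for r, g in a.items()))
print(best)
```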
{
"text": "We conducted our investigation based on a subset of the TACoS corpus (Regneri et al., 2013) . This dataset contains a set of video clips paired with natural language descriptions related to several cooking tasks. The natural language descriptions were collected through crowd-sourcing on top of the \"MPII Cooking Composite Activities\" video corpus (Rohrbach et al., 2012) . In this paper, we selected two tasks \"cutting cucumber\" and \"cutting bread\" as our experimental data. Each task has 5 videos showing how different people perform the same task. Each video is segmented to a sequence of video clips where each video clip comes with one or more language descriptions. The original TACoS dataset does not contain annotation for grounded semantic roles.",
"cite_spans": [
{
"start": 69,
"end": 91,
"text": "(Regneri et al., 2013)",
"ref_id": "BIBREF31"
},
{
"start": 348,
"end": 371,
"text": "(Rohrbach et al., 2012)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "To support our investigation and evaluation, we had made a significant effort adding the following annotations. For each video clip, we annotated the objects' bounding boxes, their tracks, and their labels (cucumber, cutting board, etc.) using VATIC (Vondrick et al., 2013) . On average, each video clip is annotated with 15 tracks of objects. For each sentence, we annotated the ground truth parsing structure and the semantic frame for each verb. The ground truth parsing structure is the representation of dependency parsing results. The semantic frame of a verb includes slots, fillers, and their groundings. For each semantic role (including both explicit roles and implicit roles) of a given verb, we also annotated the ground truth grounding in terms of the object tracks and locations. In total, our annotated dataset includes 976 pairs of video clips and corresponding sentences, 1094 verbs occurrences, and 3593 groundings of semantic roles. To check annotation agreement, 10% of the data was annotated by two annotators. The kappa statistics is 0.83 (Cohen and others, 1960) .",
"cite_spans": [
{
"start": 250,
"end": 273,
"text": "(Vondrick et al., 2013)",
"ref_id": "BIBREF40"
},
{
"start": 1061,
"end": 1085,
"text": "(Cohen and others, 1960)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
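The reported agreement can be computed with the standard Cohen's kappa formula; the following sketch is not tied to the authors' annotation tooling and uses hypothetical labels, but shows the computation for two annotators over a doubly annotated subset:

```python
# Minimal sketch: Cohen's kappa for two annotators' categorical labels.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(labels_a) | set(labels_b))
    return (observed - expected) / (1 - expected)

# Toy usage with hypothetical role-grounding labels from two annotators.
print(cohens_kappa(["t1", "t2", "t1", "t3"], ["t1", "t2", "t2", "t3"]))
```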
{
"text": "From this dataset, we selected 11 most frequent verbs (i.e., get, take, wash, cut, rinse, slice, place, peel, put, remove, open) in our current investigation for the following reasons. First, they are used more frequently so that we can have sufficient samples of each verb to learn the model. Second, they cover different types of actions: some are more related to the change of the state such as take, and some are more related to the process such as wash. As it turns out, these verbs also have different semantic role patterns as shown in Table 1 . The patient roles of all these verbs are explicitly specified. This is not surprising as all these verbs are transitive verbs. There is a large variation for other roles. For example, for the verb take, the destination is rarely specified by lin-guistic expressions (i.e., only 2 instances), however it can be inferred from the video. For the verb cut, the location and the tool are also rarely specified by linguistic expressions. Nevertheless, these implicit roles contribute to the overall understanding of actions and should also be grounded too.",
"cite_spans": [],
"ref_spans": [
{
"start": 543,
"end": 550,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "To build the structure of the CRF as shown in Figure 2 and extract features for learning and inference, we have applied the following approaches to process language and vision. Language Processing. Language processing consists of three steps to build a structure containing syntactic and semantic information. First, the Stanford Parser (Manning et al., 2014 ) is applied to create a dependency parsing tree for each sentence. Second, Senna (Collobert et al., 2011) is applied to identify semantic role labels for the key verb in the sentence. The linguistic entities with semantic roles are matched against the dependency nodes in the tree and the corresponding semantic role labels are added to the tree. Third, for each verb, the Propbank (Palmer et al., 2005) entries are searched to extract all relevant semantic roles. The implicit roles (i.e., not specified linguistically) are added as direct children of verb nodes in the tree. Through these three steps, the resulting tree from language processing has both explicit and implicit semantic roles. These trees are further transformed to the CRF structures based on a set of rules.",
"cite_spans": [
{
"start": 337,
"end": 358,
"text": "(Manning et al., 2014",
"ref_id": "BIBREF25"
},
{
"start": 441,
"end": 465,
"text": "(Collobert et al., 2011)",
"ref_id": "BIBREF6"
},
{
"start": 742,
"end": 763,
"text": "(Palmer et al., 2005)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 46,
"end": 54,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Automated Processing",
"sec_num": "4.2"
},
{
"text": "Vision Processing. A set of visual detectors are first trained for each type of objects. Here a random forest classifier is adopted. More specifically, we use 100 trees with HoG features (Dalal and Triggs, 2005) and color descriptors (Van De Weijer and Schmid, 2006) . Both HoG and Color descriptors are used, because some objects are more structural, such as knives, human; some are more textured such as towels. With the learned object detectors, given a candidate video clip, we run the detectors at each 10th frame (less than 0.5 second), and find the candidate windows for which the detector score corresponding to the object is larger than a threshold (set as 0.5). Then using the detected window as a starting point, we adopt tracking-by-detection (Danelljan et al., 2014) to go forward and backward to track this object and obtain the candidate track with this object label.",
"cite_spans": [
{
"start": 187,
"end": 211,
"text": "(Dalal and Triggs, 2005)",
"ref_id": "BIBREF7"
},
{
"start": 234,
"end": 266,
"text": "(Van De Weijer and Schmid, 2006)",
"ref_id": "BIBREF38"
},
{
"start": 755,
"end": 779,
"text": "(Danelljan et al., 2014)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automated Processing",
"sec_num": "4.2"
},
{
"text": "Feature Extraction. Features in the CRF model can be divided into the following three categories:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automated Processing",
"sec_num": "4.2"
},
{
"text": "1. Linguistic features include word occurrence and semantic role information. They are extracted by language processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automated Processing",
"sec_num": "4.2"
},
{
"text": "2. Track label features are the label information for tracks in the video. The labels come from human annotation or automated visual processing depending on different experimental settings (described in Section 4.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automated Processing",
"sec_num": "4.2"
},
{
"text": "3. Visual features are a set of features involving geometric relations between tracks in the video. One important feature is the histogram comparison score. It measures the similarity between distance histograms. Specifically, histograms of distance values between the tracks of the predicate and other roles for each verb are first extracted from the training video clips. For an incoming distance histogram, we calculate its Chi-Square distances (Zhang et al., 2007) from the pre-extracted training histograms with the same verb and the same role. its histogram comparison score is set to be the average of top 5 smallest Chi-Square distances. Other visual features include geometric information for single tracks and geometric relations between two tracks. For example, size, average speed, and moving direction are extracted for a single track. Average distance, size-ratio, and relative direction are extracted between two tracks. For features that are continuous, we discretized them into uniform bins.",
"cite_spans": [
{
"start": 448,
"end": 468,
"text": "(Zhang et al., 2007)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automated Processing",
"sec_num": "4.2"
},
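A minimal sketch of the histogram comparison score described above, assuming the standard Chi-Square histogram distance; the bin count and data below are illustrative only, not the authors' code:

```python
# Minimal sketch: histogram comparison score as the average of the 5 smallest
# Chi-Square distances between an incoming distance histogram and the stored
# training histograms for the same (verb, role) pair.
import numpy as np

def chi_square_distance(h, g, eps=1e-12):
    h, g = np.asarray(h, float), np.asarray(g, float)
    return 0.5 * np.sum((h - g) ** 2 / (h + g + eps))

def histogram_comparison_score(incoming, training_histograms, top_k=5):
    dists = sorted(chi_square_distance(incoming, t) for t in training_histograms)
    return float(np.mean(dists[:top_k]))

# Toy usage: 8-bin distance histograms for one (verb, role) pair.
rng = np.random.default_rng(1)
train = [rng.random(8) for _ in range(20)]
print(histogram_comparison_score(rng.random(8), train))
```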
{
"text": "To ground language into tracks from the video, instead of using track label features or visual features alone, we use a Cartesian product of these features with linguistic features. To learn the behavior of different semantic roles of different verbs, visual features are combined with the presence of both verbs and semantic roles through Cartesian product. To learn the correspondence between track labels and words, track label features are combined with the presence of words also through Cartesian product. To train the model, we randomly selected 75% of annotated 976 pairs of video clips and corresponding sentences as training set. The remaining 25% were used as the testing set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automated Processing",
"sec_num": "4.2"
},
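The Cartesian-product feature construction can be illustrated with a short sketch (assumed feature names, not the authors' code), producing sparse indicator features that pair visual bins with the verb/role context and track labels with words:

```python
# Minimal sketch: combining visual / track-label features with linguistic
# context via a Cartesian product, yielding sparse indicator features such as
# "verb=take&role=patient&avg_dist_bin=2".
def cartesian_features(linguistic, visual_bins, track_labels, words):
    feats = {}
    # visual features x (verb, role) presence
    for vis_name, vis_bin in visual_bins.items():
        key = f"verb={linguistic['verb']}&role={linguistic['role']}&{vis_name}={vis_bin}"
        feats[key] = 1.0
    # track label x word presence
    for label in track_labels:
        for word in words:
            feats[f"label={label}&word={word}"] = 1.0
    return feats

# Toy usage for the patient role of the verb "take".
print(cartesian_features({"verb": "take", "role": "patient"},
                         {"avg_dist_bin": 2, "size_ratio_bin": 1},
                         ["cucumber"], ["a", "cucumber"]))
```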
{
"text": "Comparison. To evaluate the performance of our approach, we compare it with two approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.3"
},
{
"text": "\u2022 Baseline: To identify the grounding for each semantic role, the first baseline chooses the most possible track based on the object type conditional distribution given the verb and semantic role. If an object type corresponds to multiple tracks in the video, e.g., multiple drawers or knives, we then randomly select one of the tracks as grounding. We ran this baseline method five times and reported the average performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.3"
},
{
"text": "\u2022 Tellex 2011: The second approach we compared with is based on an implementation (Tellex et al., 2011) . The difference is that they don't explicitly model fine-grained semantic role information. For a better comparison, we map the grounding results from this approach to different explicit semantic roles according to the SRL annotation of the sentence. Note that this approach is not able to ground implicit roles.",
"cite_spans": [
{
"start": 82,
"end": 103,
"text": "(Tellex et al., 2011)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.3"
},
{
"text": "More specifically, we compare these two approaches with two variations of our system:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.3"
},
{
"text": "\u2022 GSRL wo V : The CRF model using linguistic features and track label features (described in Section 4.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.3"
},
{
"text": "\u2022 GSRL: The full CRF model using linguistic features, track label features, and visual features(described in Section 4.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.3"
},
{
"text": "Configurations. Both automated language processing and vision processing are error-prone. To further understand the limitations of grounded SRL, we compare performance under different configurations along the two dimensions: (1) the CRF structure is built upon annotated ground-truth language parsing versus automated language parsing; (2) object tracking and labeling is based on annotation versus automated processing. These lead to four different experimental configurations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.3"
},
{
"text": "Evaluation Metrics. For experiments that are based on annotated object tracks, we can simply use the traditional accuracy that directly measures the percentage of grounded tracks that are correct. However, for experiments using automated tracking, evaluation can be difficult as tracking itself poses significant challenges. The grounding results (to tracks) cannot be directly compared with the annotated ground-truth tracks. To address this problem, we have defined a new metric called approximate accuracy. This metric is motivated by previous computer vision work that evaluates tracking performance (Bashir and Porikli, 2006) . Suppose the ground truth grounding for a role is track gt and the predicted grounding is track pt. The two tracks gt and pt are often not the same (although may have some overlaps). Suppose the number of frames in the video clip is k. For each frame, we calculate the distance between the centroids of these two tracks. If their distance is below a predefined threshold, we consider the two tracks overlap in this frame. We consider the grounding is correct if the ratio of the overlapping frames between gt and pt exceeds 50%. As can be seen, this is a lenient and an approximate measure of accuracy.",
"cite_spans": [
{
"start": 604,
"end": 630,
"text": "(Bashir and Porikli, 2006)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.3"
},
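A minimal sketch of the approximate accuracy check for a single grounded role, under the assumption that tracks are lists of per-frame bounding boxes; the distance threshold value here is hypothetical, since the paper only states that it is predefined:

```python
# Minimal sketch: the predicted track counts as correct if its centroid stays
# within `dist_threshold` of the ground-truth centroid in more than half of
# the frames.
import math

def centroid(box):
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def approx_correct(gt_track, pt_track, dist_threshold=50.0, overlap_ratio=0.5):
    """gt_track, pt_track: lists of per-frame boxes (x, y, w, h) of equal length.
    dist_threshold is a hypothetical pixel threshold."""
    overlaps = 0
    for gt_box, pt_box in zip(gt_track, pt_track):
        (gx, gy), (px, py) = centroid(gt_box), centroid(pt_box)
        if math.hypot(gx - px, gy - py) < dist_threshold:
            overlaps += 1
    return overlaps / len(gt_track) > overlap_ratio

# Approximate accuracy over a test set is then the mean of approx_correct(...)
# over all (ground-truth, predicted) track pairs.
```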
{
"text": "The results based on the ground-truth language parsing are shown in Table 2 , and the results based on automated language parsing are shown in Table 3 . For results based on annotated object tracking, the performance is reported in accuracy and for results based on automated object tracking, the performance is reported in approximate accuracy. When the number of testing samples is less than 15, we do not show the result as it tends to be unreliable (shown as NA). Tellex (2011) does not address implicit roles (shown as \"-\"). The best performance score is shown in bold. We also conducted a twotailed bootstrap significance testing (Efron and Tibshirani, 1994) . The score with a \"*\" indicates it is statistically significant (p < 0.05) compared to the baseline approach. The score with a \"+\" indicates it is statistically significant (p < 0.05) compared to the approach (Tellex et al., 2011) .",
"cite_spans": [
{
"start": 637,
"end": 665,
"text": "(Efron and Tibshirani, 1994)",
"ref_id": "BIBREF10"
},
{
"start": 876,
"end": 897,
"text": "(Tellex et al., 2011)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [
{
"start": 68,
"end": 75,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 143,
"end": 151,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "For experiments based on automated object tracking, we also calculated an upper bound to assess the best possible performance which can be achieved by a perfect grounding algorithm given the current vision processing results. This upper bound is calculated based on grounding each role to the track which is closest to the ground-truth annotated track. For the experiments based on annotated tracking, the upper bound would be 100%. This measure provides some understandings about how good the grounding approach is given the limitation of vision processing. Notice that the grounding results in the gold/automatic language processing setting are not directly comparable as the automatic SRL can misidentify frame elements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "As shown in Table 2 and Table 3 , our approach consistently outperforms the baseline (for both explicit and implicit roles) and the Tellex (2011) approach. Under the configuration of gold recognition/tracking, the incorporation of visual features further improves the performance. However, this performance gain is not observed when automated object tracking and labeling is used. One possible explanation is that as we only had limited data, we did not use separate data to train models for object recognition/tracking. So the GSRL model was trained with gold recognition/tracking data and tested with automated recognition/tracking data.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 31,
"text": "Table 2 and Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.5"
},
{
"text": "By comparing our method with Tellex (2011), we can see that by incorporating fine grained semantic role information, our approach achieves better performance on almost all the explicit role (except for the patient role under the automated tracking condition).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.5"
},
{
"text": "The results have also shown that some roles are easier to ground than others in this domain. For example, the predicate role is grounded to the hand tracks (either left hand or right hand), there are not many variations such that the simple baseline can achieve pretty high performance, especially when annotated tracking is used. The same situation happens to the location role as most of the locations happen near the sink when the verb is wash, and near the cutting board for verbs like cut, etc. However, for the patient role, there is a large difference between our approach and baseline approaches as there is a larger variation of different types of objects that can participate in the role for a given verb.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.5"
},
{
"text": "For experiments with automated tracking, the upper bound for each role also varies. Some roles (e.g., patient) have a pretty low upper bound.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.5"
},
{
"text": "The accuracy from our full GSRL model is already quite close to the upper bound. For other roles such as predicate and destination, there is a larger gap between the current performance and the upper bound. This difference reflects the model's capability in grounding different roles. Figure 3 shows a close-up look at the grounding performance to the patient role for each verb under the gold parsing and gold tracking configuration. The reason we only show the results of patient role here is every verb has this role to be grounded. For each verb, we also calculated its entropy based on the distribution of different types of objects that can serve as the patient role in the training data. The entropy is shown at the bottom of the figure. For verbs such as take and put, our full GSRL model leads to much better performance compared to the baseline. As the baseline approach relies on the entropy of the potential grounding for a role, we further measured the improvement of the performance and calculated the correlation between the improvement and the entropy of each verb. The result shows that Pearson coefficient between the entropy and the improvement of GSRL over the baseline is 0.614. This indicates the improvement from GSRL is positively correlated with the entropy value associated with a role, implying the GSRL model can deal with more uncertain situations. For the verb cut, The GSRL model performs slightly worse than the baseline. One explanation is that the possible objects that can participate as a patient for cut are relatively constrained where simple features might be sufficient. A large number of features may introduce noise, and thus jeopardizing the performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 285,
"end": 293,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.5"
},
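The entropy and correlation analysis above can be reproduced in a few lines; the sketch below uses hypothetical counts and improvement numbers, not the paper's data:

```python
# Minimal sketch: per-verb entropy of the patient-role object distribution,
# and the Pearson correlation between that entropy and the accuracy
# improvement of GSRL over the baseline.
import math

def entropy(counts):
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values() if c)

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy usage with hypothetical numbers (not the paper's data).
patient_counts = {"take": {"cucumber": 20, "bread": 15, "knife": 5},
                  "put":  {"cup": 10, "bowl": 8, "plate": 6},
                  "cut":  {"cucumber": 30, "bread": 28}}
entropies = [entropy(c) for c in patient_counts.values()]
improvements = [0.25, 0.20, 0.02]   # GSRL accuracy minus baseline, per verb
print(pearson(entropies, improvements))
```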
{
"text": "We further compare the performance of our full GRSL model with Tellex (2011) (also shown in Figure 3) on the patient role of different verbs. Our approach outperforms Tellex (2011) on most of the verbs, especially put and open. A close look at the results have shown that in those cases, the patient roles are often specified by pronouns. Therefore, the track label features and linguistic features are not very helpful, and the correct grounding mainly depends on visual features. Our full GSRL model can better capture the geometry relations between different semantic roles by incorporating fine-grained role information.",
"cite_spans": [],
"ref_spans": [
{
"start": 92,
"end": 98,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.5"
},
{
"text": "This paper investigates a new problem on grounded semantic role labeling. Besides semantic roles explicitly mentioned in language descriptions, our approach also grounds implicit roles which are not explicitly specified. As implicit roles also capture important participants related to an action (e.g., tools used in the action), our approach provides a more complete representation of action semantics which can be used by artificial agents for further reasoning and planning towards the physical world. Our empirical results on a complex cooking domain have shown that, by incorporating semantic role information with visual features, our approach can achieve better performance compared to baseline approaches. Our results have also shown that grounded semantic role labeling is a challenging problem which often depends on the quality of automated visual processing (e.g., object tracking and recognition).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "There are several directions for future improvement. First, the current alignment between a video clip and a sentence is generated by some heuristics which are error-prone. One way to address this is to treat alignment and grounding as a joint problem. Second, our current visual features have not shown effective especially when they are extracted based on automatic visual processing. This is partly due to the complexity of the scene from the TACoS dataset and the lack of depth information. Recent advances in object tracking algorithms (Yang et al., 2013; Milan et al., 2014) together with 3D sensing can be explored in the future to improve visual processing. Moreover, linguistic studies have shown that action verbs such as cut and slice often denote some change of state as a result of the action (Hovav and Levin, 2010; Hovav and Levin, 2008) . The change of state can be perceived from the physical world. Thus another direction is to systematically study causality of verbs. Causality models for verbs can potentially provide top-down information to guide intermediate representations for visual processing and improve grounded language understanding.",
"cite_spans": [
{
"start": 541,
"end": 560,
"text": "(Yang et al., 2013;",
"ref_id": "BIBREF42"
},
{
"start": 561,
"end": 580,
"text": "Milan et al., 2014)",
"ref_id": "BIBREF26"
},
{
"start": 806,
"end": 829,
"text": "(Hovav and Levin, 2010;",
"ref_id": "BIBREF15"
},
{
"start": 830,
"end": 852,
"text": "Hovav and Levin, 2008)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "The capability of grounding semantic roles to the physical world has many important implications. It will support the development of intelligent agents which can reason and act upon the shared physical world. For example, unlike traditional action recognition in computer vision (Wang et al., 2011) , grounded SRL will provide deeper understanding of the activities which involve participants in the actions guided by linguistic knowledge. For agents that can act upon the physical world such as robots, grounded SRL will allow the agents to acquire the grounded structure of human commands and thus perform the requested actions through planning (e.g., to follow the command \"put the cup on the table\"). Grounded SRL will also contribute to robot action learning where humans can teach the robot new actions (e.g., simple cooking tasks) through both task demonstration and language instruction.",
"cite_spans": [
{
"start": 279,
"end": 298,
"text": "(Wang et al., 2011)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "For some verbs (e.g., get), there is a slight discrepancy between the sum of implicit/explicit roles across different cate-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "gories. This is partly due to the fact that some verb occurrences take more than one objects as grounding to a role. It is also possibly due to missed/duplicated annotation for some categories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors are grateful to Austin Littley and Zach Richardson for their help on data annotation, and to anonymous reviewers for their valuable comments and suggestions. This work was supported in part by IIS-1208390 from the National Science Foundation and by N66001-15-C-4035 from the DARPA SIM-PLEX program.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": "6"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Weakly supervised learning of semantic parsers for mapping instructions to actions",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2013,
"venue": "TACL",
"volume": "1",
"issue": "",
"pages": "49--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Artzi and Luke Zettlemoyer. 2013. Weakly su- pervised learning of semantic parsers for mapping in- structions to actions. TACL, 1:49-62.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Performance evaluation of object detection and tracking systems",
"authors": [
{
"first": "Faisal",
"middle": [],
"last": "Bashir",
"suffix": ""
},
{
"first": "Fatih",
"middle": [],
"last": "Porikli",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings 9th IEEE International Workshop on PETS",
"volume": "",
"issue": "",
"pages": "7--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Faisal Bashir and Fatih Porikli. 2006. Performance evaluation of object detection and tracking systems. In Proceedings 9th IEEE International Workshop on PETS, pages 7-14.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Multimodal distributional semantics",
"authors": [
{
"first": "Elia",
"middle": [],
"last": "Bruni",
"suffix": ""
},
{
"first": "Nam-Khanh",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2014,
"venue": "J. Artif. Intell. Res.(JAIR)",
"volume": "49",
"issue": "",
"pages": "1--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elia Bruni, Nam-Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. J. Artif. Intell. Res.(JAIR), 49:1-47.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Text to 3d scene generation with rich lexical grounding",
"authors": [
{
"first": "Angel",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Monroe",
"suffix": ""
},
{
"first": "Manolis",
"middle": [],
"last": "Savva",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1505.06289"
]
},
"num": null,
"urls": [],
"raw_text": "Angel Chang, Will Monroe, Manolis Savva, Christopher Potts, and Christopher D Manning. 2015. Text to 3d scene generation with rich lexical grounding. arXiv preprint arXiv:1505.06289.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning to sportscast: a test of grounded language acquisition",
"authors": [
{
"first": "L",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "Raymond J",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 25th international conference on Machine learning",
"volume": "",
"issue": "",
"pages": "128--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David L Chen and Raymond J Mooney. 2008. Learning to sportscast: a test of grounded language acquisition. In Proceedings of the 25th international conference on Machine learning, pages 128-135. ACM.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A coefficient of agreement for nominal scales. Educational and psychological measurement",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 1960,
"venue": "",
"volume": "20",
"issue": "",
"pages": "37--46",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Cohen et al. 1960. A coefficient of agreement for nominal scales. Educational and psychological mea- surement, 20(1):37-46.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "The Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493- 2537.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Histograms of oriented gradients for human detection",
"authors": [
{
"first": "Navneet",
"middle": [],
"last": "Dalal",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Triggs",
"suffix": ""
}
],
"year": 2005,
"venue": "Computer Vision and Pattern Recognition",
"volume": "1",
"issue": "",
"pages": "886--893",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Navneet Dalal and Bill Triggs. 2005. Histograms of ori- ented gradients for human detection. In Computer Vi- sion and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, volume 1, pages 886-893. IEEE.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Adaptive color attributes for real-time visual tracking",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Danelljan",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Fahad Shahbaz Khan",
"suffix": ""
},
{
"first": "Joost",
"middle": [],
"last": "Felsberg",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Van De Weijer",
"suffix": ""
}
],
"year": 2014,
"venue": "Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on",
"volume": "",
"issue": "",
"pages": "1090--1097",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Danelljan, Fahad Shahbaz Khan, Michael Fels- berg, and Joost van de Weijer. 2014. Adaptive color attributes for real-time visual tracking. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 1090-1097. IEEE.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Language models for image captioning: The quirks and what works",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Saurabh",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1505.01809"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Hao Cheng, Hao Fang, Saurabh Gupta, Li Deng, Xiaodong He, Geoffrey Zweig, and Margaret Mitchell. 2015. Language models for image cap- tioning: The quirks and what works. arXiv preprint arXiv:1505.01809.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "An introduction to the bootstrap",
"authors": [
{
"first": "Bradley",
"middle": [],
"last": "Efron",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Robert",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tibshirani",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bradley Efron and Robert J Tibshirani. 1994. An intro- duction to the bootstrap. CRC press.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Describing images using inferred visual dependency representations",
"authors": [
{
"first": "Desmond",
"middle": [],
"last": "Elliott",
"suffix": ""
},
{
"first": "Arjen",
"middle": [],
"last": "De Vries",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "42--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Desmond Elliott and Arjen de Vries. 2015. Describ- ing images using inferred visual dependency repre- sentations. In Proceedings of the 53rd Annual Meet- ing of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 42-52, Beijing, China, July. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Textual inference and meaning representation in human robot interaction",
"authors": [
{
"first": "Giuseppe",
"middle": [],
"last": "Bastianelli Emanuele",
"suffix": ""
},
{
"first": "Danilo",
"middle": [],
"last": "Castellucci",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Croce",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Basili",
"suffix": ""
}
],
"year": 2013,
"venue": "Joint Symposium on Semantic Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bastianelli Emanuele, Giuseppe Castellucci, Danilo Croce, and Roberto Basili. 2013. Textual inference and meaning representation in human robot interac- tion. In Joint Symposium on Semantic Processing., page 65.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Automatic labeling of semantic roles",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2002,
"venue": "Computational linguistics",
"volume": "28",
"issue": "3",
"pages": "245--288",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Gildea and Daniel Jurafsky. 2002. Automatic la- beling of semantic roles. Computational linguistics, 28(3):245-288.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Reflections on manner/result complementarity",
"authors": [
{
"first": "Malka",
"middle": [
"Rappaport"
],
"last": "Hovav",
"suffix": ""
},
{
"first": "Beth",
"middle": [],
"last": "Levin",
"suffix": ""
}
],
"year": 2008,
"venue": "Lecture notes",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Malka Rappaport Hovav and Beth Levin. 2008. Re- flections on manner/result complementarity. Lecture notes.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Reflections on Manner / Result Complementarity. Lexical Semantics, Syntax, and Event Structure",
"authors": [
{
"first": "Malka",
"middle": [
"Rappaport"
],
"last": "Hovav",
"suffix": ""
},
{
"first": "Beth",
"middle": [],
"last": "Levin",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "21--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Malka Rappaport Hovav and Beth Levin. 2010. Reflec- tions on Manner / Result Complementarity. Lexical Semantics, Syntax, and Event Structure, pages 21-38.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Deep visualsemantic alignments for generating image descriptions",
"authors": [
{
"first": "Andrej",
"middle": [],
"last": "Karpathy",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Fei-Fei",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrej Karpathy and Li Fei-Fei. 2015. Deep visual- semantic alignments for generating image descrip- tions. June.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Referitgame: Referring to objects in photographs of natural scenes",
"authors": [
{
"first": "Sahar",
"middle": [],
"last": "Kazemzadeh",
"suffix": ""
},
{
"first": "Vicente",
"middle": [],
"last": "Ordonez",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Matten",
"suffix": ""
},
{
"first": "Tamara",
"middle": [],
"last": "Berg",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "787--798",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara Berg. 2014. Referitgame: Referring to ob- jects in photographs of natural scenes. In Proceedings of the 2014 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), pages 787-798, Doha, Qatar, October. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Jointly learning to parse and perceive: Connecting natural language to the physical world",
"authors": [
{
"first": "Jayant",
"middle": [],
"last": "Krishnamurthy",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Kollar",
"suffix": ""
}
],
"year": 2013,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "193--206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jayant Krishnamurthy and Thomas Kollar. 2013. Jointly learning to parse and perceive: Connecting natural lan- guage to the physical world. Transactions of the Asso- ciation for Computational Linguistics, 1:193-206.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Generalizing image captions for image-text parallel corpus",
"authors": [
{
"first": "Polina",
"middle": [],
"last": "Kuznetsova",
"suffix": ""
},
{
"first": "Vicente",
"middle": [],
"last": "Ordonez",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"C"
],
"last": "Berg",
"suffix": ""
},
{
"first": "Tamara",
"middle": [
"L"
],
"last": "Berg",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2013,
"venue": "ACL (2)",
"volume": "",
"issue": "",
"pages": "790--796",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Polina Kuznetsova, Vicente Ordonez, Alexander C Berg, Tamara L Berg, and Yejin Choi. 2013. Generalizing image captions for image-text parallel corpus. In ACL (2), pages 790-796. Citeseer.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Combining language and vision with a multimodal skip-gram model",
"authors": [
{
"first": "Angeliki",
"middle": [],
"last": "Lazaridou",
"suffix": ""
},
{
"first": "Nghia The",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1501.02598"
]
},
"num": null,
"urls": [],
"raw_text": "Angeliki Lazaridou, Nghia The Pham, and Marco Ba- roni. 2015. Combining language and vision with a multimodal skip-gram model. arXiv preprint arXiv:1501.02598.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "English verb classes and alternations: A preliminary investigation",
"authors": [
{
"first": "Beth",
"middle": [],
"last": "Levin",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beth Levin. 1993. English verb classes and alternations: A preliminary investigation. University of Chicago press.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Learning to mediate perceptual differences in situated humanrobot dialogue",
"authors": [
{
"first": "Changsong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Joyce",
"middle": [
"Y"
],
"last": "Chai",
"suffix": ""
}
],
"year": 2015,
"venue": "The Twenty-Ninth Conference on Artificial Intelligence (AAAI-15)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Changsong Liu and Joyce Y. Chai. 2015. Learning to mediate perceptual differences in situated human- robot dialogue. In The Twenty-Ninth Conference on Artificial Intelligence (AAAI-15). to appear.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Towards mediating shared perceptual basis in situated dialogue",
"authors": [
{
"first": "Changsong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Joyce",
"middle": [],
"last": "Chai",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "140--149",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Changsong Liu, Rui Fang, and Joyce Chai. 2012. To- wards mediating shared perceptual basis in situated di- alogue. In Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 140-149, Seoul, South Korea.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "What's cookin'? interpreting cooking videos using text, speech and vision",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Malmaud",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Vivek",
"middle": [],
"last": "Rathod",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Johnston",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Rabinovich",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Murphy",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1503.01558"
]
},
"num": null,
"urls": [],
"raw_text": "Jonathan Malmaud, Jonathan Huang, Vivek Rathod, Nick Johnston, Andrew Rabinovich, and Kevin Mur- phy. 2015. What's cookin'? interpreting cooking videos using text, speech and vision. arXiv preprint arXiv:1503.01558.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "The Stanford CoreNLP natural language processing toolkit",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"J"
],
"last": "Bethard",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language pro- cessing toolkit. In Proceedings of 52nd Annual Meet- ing of the Association for Computational Linguistics: System Demonstrations, pages 55-60.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Continuous energy minimization for multitarget tracking. Pattern Analysis and Machine Intelligence",
"authors": [
{
"first": "Anton",
"middle": [],
"last": "Milan",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Kaspar",
"middle": [],
"last": "Schindler",
"suffix": ""
}
],
"year": 2014,
"venue": "IEEE Transactions on",
"volume": "36",
"issue": "1",
"pages": "58--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anton Milan, Stefan Roth, and Kaspar Schindler. 2014. Continuous energy minimization for multitarget track- ing. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 36(1):58-72.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Discriminative unsupervised alignment of natural language instructions with corresponding video segments",
"authors": [
{
"first": "Iftekhar",
"middle": [],
"last": "Naim",
"suffix": ""
},
{
"first": "Young",
"middle": [
"C"
],
"last": "Song",
"suffix": ""
},
{
"first": "Qiguang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Kautz",
"suffix": ""
},
{
"first": "Jiebo",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "164--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iftekhar Naim, Young C. Song, Qiguang Liu, Liang Huang, Henry Kautz, Jiebo Luo, and Daniel Gildea. 2015. Discriminative unsupervised alignment of nat- ural language instructions with corresponding video segments. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 164-174, Denver, Colorado, May- June. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Learning to interpret and describe abstract scenes",
"authors": [
{
"first": "Luis Gilberto Mateos",
"middle": [],
"last": "Ortiz",
"suffix": ""
},
{
"first": "Clemens",
"middle": [],
"last": "Wolff",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1505--1515",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luis Gilberto Mateos Ortiz, Clemens Wolff, and Mirella Lapata. Learning to interpret and describe abstract scenes. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, pages 1505-1515.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "The proposition bank: An annotated corpus of semantic roles",
"authors": [
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Kingsbury",
"suffix": ""
}
],
"year": 2005,
"venue": "Computational linguistics",
"volume": "31",
"issue": "1",
"pages": "71--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Computational linguistics, 31(1):71- 106.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Video event understanding using natural language descriptions",
"authors": [
{
"first": "Vignesh",
"middle": [],
"last": "Ramanathan",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Fei-Fei",
"suffix": ""
}
],
"year": 2013,
"venue": "Computer Vision (ICCV), 2013 IEEE International Conference on",
"volume": "",
"issue": "",
"pages": "905--912",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vignesh Ramanathan, Percy Liang, and Li Fei-Fei. 2013. Video event understanding using natural language de- scriptions. In Computer Vision (ICCV), 2013 IEEE In- ternational Conference on, pages 905-912. IEEE.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Dominikus Wetzel, Stefan Thater, Bernt Schiele, and Manfred Pinkal",
"authors": [
{
"first": "Michaela",
"middle": [],
"last": "Regneri",
"suffix": ""
},
{
"first": "Marcus",
"middle": [],
"last": "Rohrbach",
"suffix": ""
}
],
"year": 2013,
"venue": "Transactions of the Association for Computational Linguistics (TACL)",
"volume": "1",
"issue": "",
"pages": "25--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michaela Regneri, Marcus Rohrbach, Dominikus Wet- zel, Stefan Thater, Bernt Schiele, and Manfred Pinkal. 2013. Grounding action descriptions in videos. Trans- actions of the Association for Computational Linguis- tics (TACL), 1:25-36.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Mykhaylo Andriluka, Sikandar Amin, Manfred Pinkal, and Bernt Schiele",
"authors": [
{
"first": "Marcus",
"middle": [],
"last": "Rohrbach",
"suffix": ""
},
{
"first": "Michaela",
"middle": [],
"last": "Regneri",
"suffix": ""
}
],
"year": 2012,
"venue": "Computer Vision-ECCV 2012",
"volume": "",
"issue": "",
"pages": "144--157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcus Rohrbach, Michaela Regneri, Mykhaylo An- driluka, Sikandar Amin, Manfred Pinkal, and Bernt Schiele. 2012. Script data for attribute-based recog- nition of composite activities. In Computer Vision- ECCV 2012, pages 144-157. Springer.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Grounding words in perception and action: computational insights",
"authors": [
{
"first": "Deb",
"middle": [],
"last": "Roy",
"suffix": ""
}
],
"year": 2005,
"venue": "TRENDS in Cognitive Sciences",
"volume": "9",
"issue": "8",
"pages": "389--396",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deb Roy. 2005. Grounding words in perception and ac- tion: computational insights. TRENDS in Cognitive Sciences, 9(8):389-396.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Using semantic roles to improve question answering",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2007,
"venue": "EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "12--21",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Shen and Mirella Lapata. 2007. Using seman- tic roles to improve question answering. In EMNLP- CoNLL, pages 12-21.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Understanding natural language commands for robotic navigation and mobile manipulation",
"authors": [
{
"first": "Stefanie",
"middle": [],
"last": "Tellex",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Kollar",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Dickerson",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"R"
],
"last": "Walter",
"suffix": ""
},
{
"first": "Ashis Gopal",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Seth",
"middle": [
"J"
],
"last": "Teller",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Roy",
"suffix": ""
}
],
"year": 2011,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefanie Tellex, Thomas Kollar, Steven Dickerson, Matthew R Walter, Ashis Gopal Banerjee, Seth J Teller, and Nicholas Roy. 2011. Understanding natu- ral language commands for robotic navigation and mo- bile manipulation. In AAAI.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Learning perceptually grounded word meanings from unaligned parallel data",
"authors": [
{
"first": "Stefanie",
"middle": [],
"last": "Tellex",
"suffix": ""
},
{
"first": "Pratiksha",
"middle": [],
"last": "Thaker",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Joseph",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Roy",
"suffix": ""
}
],
"year": 2014,
"venue": "Machine Learning",
"volume": "94",
"issue": "",
"pages": "151--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefanie Tellex, Pratiksha Thaker, Joshua Joseph, and Nicholas Roy. 2014. Learning perceptually grounded word meanings from unaligned parallel data. Machine Learning, 94(2):151-167.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Joint video and text parsing for understanding events and answering queries",
"authors": [
{
"first": "Kewei",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Meng",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Mun",
"middle": [
"Wai"
],
"last": "Lee",
"suffix": ""
},
{
"first": "Tae",
"middle": [
"Eun"
],
"last": "Choe",
"suffix": ""
},
{
"first": "Song-Chun",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2014,
"venue": "MultiMedia",
"volume": "21",
"issue": "2",
"pages": "42--70",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kewei Tu, Meng Meng, Mun Wai Lee, Tae Eun Choe, and Song-Chun Zhu. 2014. Joint video and text pars- ing for understanding events and answering queries. MultiMedia, IEEE, 21(2):42-70.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Coloring local feature extraction",
"authors": [
{
"first": "Joost",
"middle": [],
"last": "Van De Weijer",
"suffix": ""
},
{
"first": "Cordelia",
"middle": [],
"last": "Schmid",
"suffix": ""
}
],
"year": 2006,
"venue": "Computer Vision-ECCV 2006",
"volume": "",
"issue": "",
"pages": "334--348",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joost Van De Weijer and Cordelia Schmid. 2006. Col- oring local feature extraction. In Computer Vision- ECCV 2006, pages 334-348. Springer.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Translating videos to natural language using deep recurrent neural networks",
"authors": [
{
"first": "Subhashini",
"middle": [],
"last": "Venugopalan",
"suffix": ""
},
{
"first": "Huijuan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Donahue",
"suffix": ""
},
{
"first": "Marcus",
"middle": [],
"last": "Rohrbach",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Mooney",
"suffix": ""
},
{
"first": "Kate",
"middle": [],
"last": "Saenko",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1494--1504",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond Mooney, and Kate Saenko. 2015. Translating videos to natural language using deep recurrent neural networks. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, pages 1494-1504, Den- ver, Colorado, May-June. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Efficiently scaling up crowdsourced video annotation",
"authors": [
{
"first": "Carl",
"middle": [],
"last": "Vondrick",
"suffix": ""
},
{
"first": "Donald",
"middle": [],
"last": "Patterson",
"suffix": ""
},
{
"first": "Deva",
"middle": [],
"last": "Ramanan",
"suffix": ""
}
],
"year": 2013,
"venue": "International Journal of Computer Vision",
"volume": "101",
"issue": "1",
"pages": "184--204",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carl Vondrick, Donald Patterson, and Deva Ramanan. 2013. Efficiently scaling up crowdsourced video an- notation. International Journal of Computer Vision, 101(1):184-204.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Action recognition by dense trajectories",
"authors": [
{
"first": "Heng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Kl\u00e4ser",
"suffix": ""
},
{
"first": "Cordelia",
"middle": [],
"last": "Schmid",
"suffix": ""
},
{
"first": "Cheng-Lin",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2011,
"venue": "Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on",
"volume": "",
"issue": "",
"pages": "3169--3176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heng Wang, Alexander Kl\u00e4ser, Cordelia Schmid, and Cheng-Lin Liu. 2011. Action recognition by dense trajectories. In Computer Vision and Pattern Recogni- tion (CVPR), 2011 IEEE Conference on, pages 3169- 3176. IEEE.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Detection of manipulation action consequences (mac)",
"authors": [
{
"first": "Yezhou",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Cornelia",
"middle": [],
"last": "Fermuller",
"suffix": ""
},
{
"first": "Yiannis",
"middle": [],
"last": "Aloimonos",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "2563--2570",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yezhou Yang, Cornelia Fermuller, and Yiannis Aloi- monos. 2013. Detection of manipulation action con- sequences (mac). In Proceedings of the IEEE Con- ference on Computer Vision and Pattern Recognition, pages 2563-2570.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Grounded language learning from video described with sentences",
"authors": [
{
"first": "Haonan",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [
"Mark"
],
"last": "Siskind",
"suffix": ""
}
],
"year": 2013,
"venue": "ACL (1)",
"volume": "",
"issue": "",
"pages": "53--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haonan Yu and Jeffrey Mark Siskind. 2013. Grounded language learning from video described with sen- tences. In ACL (1), pages 53-63.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Local features and kernels for classification of texture and object categories: A comprehensive study",
"authors": [
{
"first": "Jianguo",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Marcin",
"middle": [],
"last": "Marsza\u0142ek",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Lazebnik",
"suffix": ""
},
{
"first": "Cordelia",
"middle": [],
"last": "Schmid",
"suffix": ""
}
],
"year": 2007,
"venue": "International journal of computer vision",
"volume": "73",
"issue": "2",
"pages": "213--238",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianguo Zhang, Marcin Marsza\u0142ek, Svetlana Lazebnik, and Cordelia Schmid. 2007. Local features and ker- nels for classification of texture and object categories: A comprehensive study. International journal of com- puter vision, 73(2):213-238.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "End-to-end learning of semantic role labeling using recurrent neural networks",
"authors": [
{
"first": "Jie",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1127--1137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jie Zhou and Wei Xu. 2015. End-to-end learning of se- mantic role labeling using recurrent neural networks. In Proceedings of the 53rd Annual Meeting of the As- sociation for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1127- 1137, Beijing, China, July. Association for Computa- tional Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "An example of grounded semantic role labeling for the sentence the woman takes out a cucumber from the refrigerator. The left hand side shows three frames of a video clip with the corresponding language description. The objects in the bounding boxes are tracked and each track has a unique identifier. The right hand side shows the grounding results where each role including the implicit role (destination) is grounded to a track id.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF1": {
"text": "The CRF structure of sentence \"the person takes out a cutting board from the drawer\". The text in the square bracket indicates the corresponding semantic role.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF2": {
"text": "The relation between the accuracy and the entropy of each verb's patient from the gold language, gold visual recognition/tracking setting. The entropy for the patient role of each verb is shown below the verb.",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF0": {
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td/><td/><td/><td/><td>\u03c6 5</td></tr><tr><td>s 2</td><td>s 1</td><td>s 6</td><td>s 3</td><td>s 4</td><td>s 5</td></tr><tr><td>The person [Agent]</td><td>Takes out [Predicate]</td><td>[DesCnaCon]</td><td>A cuAng board [PaCent]</td><td>From [Source]</td><td>The drawer [Source]</td></tr></table>",
"text": ", the source is grounded to",
"num": null
},
"TABREF1": {
"html": null,
"type_str": "table",
"content": "<table><tr><td>Verb</td><td>Patient</td><td>Source</td><td colspan=\"2\">Destn Location</td><td>Tool</td></tr><tr><td>take</td><td colspan=\"3\">251 / 0 102 / 149 2 / 248</td><td>-</td><td>-</td></tr><tr><td>put</td><td>94 / 0</td><td>-</td><td>75 / 19</td><td>-</td><td>-</td></tr><tr><td>get</td><td>247 / 0</td><td colspan=\"2\">62 / 190 0 / 239</td><td>-</td><td>-</td></tr><tr><td>cut</td><td>134 / 1</td><td>64 / 64</td><td>-</td><td>3 / 131</td><td>5 / 130</td></tr><tr><td>open</td><td>23 / 0</td><td>-</td><td>-</td><td>0 / 23</td><td>2 / 21</td></tr><tr><td>wash</td><td>93 / 0</td><td>-</td><td>-</td><td>26 / 58</td><td>2 / 82</td></tr><tr><td>slice</td><td>69 / 1</td><td>-</td><td>-</td><td>2 / 68</td><td>2 / 66</td></tr><tr><td>rinse</td><td>76 / 0</td><td>0 / 74</td><td>-</td><td>8 / 64</td><td>-</td></tr><tr><td>place</td><td>104 / 1</td><td>-</td><td>105 / 7</td><td>-</td><td>-</td></tr><tr><td>peel</td><td>29 / 0</td><td>-</td><td>-</td><td>1 / 27</td><td>2 / 27</td></tr><tr><td>remove</td><td>40 / 0</td><td>34 / 6</td><td>-</td><td>-</td><td>-</td></tr></table>",
"text": "Statistics for a set of verbs and their semantic roles in our annotated dataset. The entry indicates the number of explicit/implicit roles for each category. \"-\" denotes no such role is observed in the data. 1",
"num": null
},
"TABREF2": {
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td/><td/><td/><td colspan=\"6\">Accuracy On the Gold Recognition/Tracking Setting</td><td/><td/><td/><td/><td/></tr><tr><td>Methods</td><td>Predicate</td><td colspan=\"2\">Patient</td><td colspan=\"2\">Source</td><td colspan=\"2\">Destination</td><td colspan=\"2\">Location</td><td colspan=\"2\">Tool</td><td>Explicit All</td><td>Implicit All</td><td>All</td></tr><tr><td/><td/><td colspan=\"10\">explicit implicit explicit implicit explicit implicit explicit implicit explicit implicit</td><td/><td/><td/></tr><tr><td>Baseline</td><td>0.856</td><td>0.372</td><td>NA</td><td>0.225</td><td>0.314</td><td>0.311</td><td>0.569</td><td>NA</td><td>0.910</td><td>NA</td><td>0.853</td><td>0.556</td><td>0.620</td><td>0.583</td></tr><tr><td>Tellex(2011)</td><td>0.865</td><td>0.745</td><td>-</td><td>0.306</td><td>-</td><td>0.763</td><td>-</td><td>NA</td><td>-</td><td>NA</td><td>-</td><td>0.722</td><td>-</td><td>-</td></tr><tr><td>GSRL wo V</td><td>0.854</td><td>0.794 * +</td><td>NA</td><td colspan=\"2\">0.375 * 0.392 * +</td><td colspan=\"2\">0.658 * 0.615 * +</td><td>NA</td><td>0.920 +</td><td>NA</td><td colspan=\"2\">0.793 + 0.768 * +</td><td>0.648 * +</td><td>0.717 *</td></tr><tr><td>GSRL</td><td>0.878 * +</td><td>0.839 * +</td><td>NA</td><td>0.556 * +</td><td>0.684 * +</td><td colspan=\"2\">0.789 * 0.641 * +</td><td>NA</td><td>0.930 +</td><td>NA</td><td>0.897 * +</td><td>0.825 * +</td><td>0.768 * +</td><td>0.8 *</td></tr><tr><td/><td/><td/><td colspan=\"8\">Approximated Accuracy On the Automated Recognition/Tracking Setting</td><td/><td/><td/><td/></tr><tr><td>Methods</td><td>Predicate</td><td colspan=\"2\">Patient</td><td colspan=\"2\">Source</td><td colspan=\"2\">Destination</td><td colspan=\"2\">Location</td><td colspan=\"2\">Tool</td><td>Explicit All</td><td>Implicit All</td><td>All</td></tr><tr><td/><td/><td colspan=\"10\">explicit implicit explicit implicit explicit implicit explicit implicit explicit implicit</td><td/><td/><td/></tr><tr><td>Baseline</td><td>0.529</td><td>0.206</td><td>NA</td><td>0.169</td><td>0.119</td><td>0.236</td><td>0.566</td><td>NA</td><td>0.476</td><td>NA</td><td>0.6</td><td>0.352</td><td>0.393</td><td>0.369</td></tr><tr><td>Tellex(2011)</td><td>0.607</td><td>0.233</td><td>-</td><td>0.154</td><td>-</td><td>0.333</td><td>-</td><td>NA</td><td>-</td><td>NA</td><td>-</td><td>0.359</td><td>-</td><td>-</td></tr><tr><td>GSRL wo V</td><td>0.582 *</td><td>0.244 *</td><td>NA</td><td>0.262 * +</td><td colspan=\"2\">0.126 + 0.485 * +</td><td>0.613 * +</td><td>NA</td><td>0.467 +</td><td>NA</td><td>0.714 * +</td><td>0.410 * +</td><td>0.425 * +</td><td>0.417 *</td></tr><tr><td>GSRL</td><td>0.548</td><td>0.263 *</td><td>NA</td><td>0.262 * +</td><td colspan=\"3\">0.086 + 0.394 * 0.514 +</td><td>NA</td><td>0.456 +</td><td>NA</td><td>0.688 * +</td><td>0.399 * +</td><td colspan=\"2\">0.381 + 0.391 *</td></tr><tr><td>Upper Bound</td><td>0.920</td><td>0.309</td><td>NA</td><td>0.277</td><td>0.252</td><td>0.636</td><td>0.829</td><td>NA</td><td>0.511</td><td>NA</td><td>0.818</td><td>0.577</td><td>0.573</td><td>0.575</td></tr></table>",
"text": "Evaluation results based on annotated language parsing.",
"num": null
},
"TABREF3": {
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td/><td/><td/><td colspan=\"6\">Accuracy On the Gold Recognition/Tracking Setting</td><td/><td/><td/><td/><td/></tr><tr><td>Methods</td><td>Predicate</td><td colspan=\"2\">Patient</td><td colspan=\"2\">Source</td><td colspan=\"2\">Destination</td><td colspan=\"2\">Location</td><td/><td>Tool</td><td>Explicit All</td><td>Implicit All</td><td>All</td></tr><tr><td/><td/><td colspan=\"10\">explicit implicit explicit implicit explicit implicit explicit implicit explicit implicit</td><td/><td/><td/></tr><tr><td>Baseline</td><td>0.881</td><td>0.318</td><td>NA</td><td>0.203</td><td>0.316</td><td>0.235</td><td>0.607</td><td>NA</td><td>0.877</td><td>NA</td><td>0.895</td><td>0.539</td><td>0.595</td><td>0.563</td></tr><tr><td>Tellex(2011)</td><td>0.903</td><td>0.746</td><td>-</td><td>0.156</td><td>-</td><td>0.353</td><td>-</td><td>NA</td><td>-</td><td>NA</td><td>-</td><td>0.680</td><td>-</td><td>-</td></tr><tr><td>GSRL wo V</td><td>0.873</td><td>0.813 * +</td><td>NA</td><td>0.328 * +</td><td colspan=\"3\">0.360 + 0.412 * 0.648 * +</td><td>NA</td><td>0.877 +</td><td>NA</td><td colspan=\"2\">0.818 + 0.769 * +</td><td>0.611 +</td><td>0.7 *</td></tr><tr><td colspan=\"15\">GSRL 0.787 Methods 0.873 0.875 * + NA 0.453 * + 0.667 * + 0.412 * 0.667 * + NA 0.891 + NA 0.891 + 0.823 * + 0.741 * + Predicate Patient Source Destination Location Tool Explicit All Implicit All All</td></tr><tr><td/><td/><td colspan=\"10\">explicit implicit explicit implicit explicit implicit explicit implicit explicit implicit</td><td/><td/><td/></tr><tr><td>Baseline</td><td>0.543</td><td>0.174</td><td>NA</td><td>0.121</td><td>0.113</td><td>0.093</td><td>0.594</td><td>NA</td><td>0.612</td><td>NA</td><td>0.567</td><td>0.327</td><td>0.405</td><td>0.362</td></tr><tr><td>Tellex(2011)</td><td>0.598</td><td>0.218</td><td>-</td><td>0.086</td><td>-</td><td>0.00</td><td>-</td><td>NA</td><td>-</td><td>NA</td><td>-</td><td>0.322</td><td>-</td><td>-</td></tr><tr><td>GSRL wo V</td><td>0.618 *</td><td>0.243 *</td><td>NA</td><td>0.190 * +</td><td colspan=\"3\">0.120 + 0.133 + 0.641 * +</td><td>NA</td><td>0.585 +</td><td>NA</td><td>0.723 * +</td><td>0.401 * +</td><td>0.434 * +</td><td>0.415 *</td></tr><tr><td>GSRL</td><td>0.493</td><td>0.243 *</td><td>NA</td><td>0.190 * +</td><td colspan=\"3\">0.063 + 0.133 + 0.612 +</td><td>NA</td><td>0.554 +</td><td>NA</td><td colspan=\"2\">0.617 + 0.367 * +</td><td>0.386 +</td><td>0.375</td></tr><tr><td>Upper Bound</td><td>0.908</td><td>0.277</td><td>NA</td><td>0.259</td><td>0.254</td><td>0.4</td><td>0.854</td><td>NA</td><td>0.631</td><td>NA</td><td>0.830</td><td>0.543</td><td>0.585</td><td>0.561</td></tr></table>",
"text": "Evaluation results based on automated language parsing.",
"num": null
}
}
}
}