{
"paper_id": "W12-0105",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T04:12:49.767008Z"
},
"title": "Natural Language Descriptions of Visual Scenes: Corpus Generation and Analysis",
"authors": [
{
"first": "Muhammad",
"middle": [],
"last": "Usman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sheffield",
"location": {
"country": "United Kingdom"
}
},
"email": ""
},
{
"first": "Ghani",
"middle": [],
"last": "Khan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sheffield",
"location": {
"country": "United Kingdom"
}
},
"email": ""
},
{
"first": "Rao",
"middle": [],
"last": "Muhammad",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sheffield",
"location": {
"country": "United Kingdom"
}
},
"email": ""
},
{
"first": "Adeel",
"middle": [],
"last": "Nawab",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sheffield",
"location": {
"country": "United Kingdom"
}
},
"email": "[email protected]"
},
{
"first": "Yoshihiko",
"middle": [],
"last": "Gotoh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sheffield",
"location": {
"country": "United Kingdom"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "As video contents continue to expand, it is increasingly important to properly annotate videos for effective search, mining and retrieval purposes. While the idea of annotating images with keywords is relatively well explored, work is still needed for annotating videos with natural language to improve the quality of video search. The focus of this work is to present a video dataset with natural language descriptions which is a step ahead of keywords based tagging. We describe our initial experiences with a corpus consisting of descriptions for video segments crafted from TREC video data. Analysis of the descriptions created by 13 annotators presents insights into humans' interests and thoughts on videos. Such resource can also be used to evaluate automatic natural language generation systems for video.",
"pdf_parse": {
"paper_id": "W12-0105",
"_pdf_hash": "",
"abstract": [
{
"text": "As video contents continue to expand, it is increasingly important to properly annotate videos for effective search, mining and retrieval purposes. While the idea of annotating images with keywords is relatively well explored, work is still needed for annotating videos with natural language to improve the quality of video search. The focus of this work is to present a video dataset with natural language descriptions which is a step ahead of keywords based tagging. We describe our initial experiences with a corpus consisting of descriptions for video segments crafted from TREC video data. Analysis of the descriptions created by 13 annotators presents insights into humans' interests and thoughts on videos. Such resource can also be used to evaluate automatic natural language generation systems for video.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "This paper presents our experiences in manually constructing a corpus, consisting of natural language descriptions of video segments crafted from a small subset of TREC video 1 data. In a broad sense the task can be considered one form of machine translation as it translates video streams into textual descriptions. To date the number of studies in this field is relatively small partially because of lack of appropriate dataset for such task. Another obstacle may be inherently larger variation for descriptions that can be produced for videos than a conventional translation from one language to another. Indeed humans are very subjective while annotating video 1 www-nlpir.nist.gov/projects/trecvid/ streams, e.g., two humans may produce quite different descriptions for the same video. Based on these descriptions we are interested to identify the most important and frequent high level features (HLFs); they may be 'keywords', such as a particular object and its position/moves, used for a semantic indexing task in video retrieval. Mostly HLFs are related to humans, objects, their moves and properties (e.g., gender, emotion and action) (Smeaton et al., 2009) .",
"cite_spans": [
{
"start": 1145,
"end": 1167,
"text": "(Smeaton et al., 2009)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we present these HLFs in the form of ontologies and provides two hierarchical structures of important concepts -one most relevant for humans and their actions, and another for non human objects. The similarity of video descriptions is quantified using a bag of word model. The notion of sequence of events in a video was quantified using the order preserving sequence alignment algorithm (longest common subsequence). This corpus may also be used for evaluation of automatic natural language description systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The TREC video evaluation consists of on-going series of annual workshops focusing on a list of information retrieval (IR) tasks. The TREC video promotes research activities by providing a large test collection, uniform scoring procedures, and a forum for research teams interested in presenting their results. The high level feature extraction task aims to identify presence or absence of high level semantic features in a given video sequence (Smeaton et al., 2009) . Approaches to video summarisation have been explored using rushes video 2 (Over et al., 2007) . TREC video also provides a variety of meta data annotations for video datasets. For the HLF task, speech recognition transcripts, a list of master shot references, and shot IDs having HLFs in them are provided. Annotations are created for shots (i.e., one camera take) for the summarisation task. Multiple humans performing multiple actions in different backgrounds can be shown in one shot. Annotations typically consist of a few phrases with several words per phrase. Human related features (e.g., their presence, gender, age, action) are often described. Additionally, camera motion and camera angle, ethnicity information and human's dressing are often stated. On the other hand, details relating to events and objects are usually missing. Human emotion is another missing information in many of such annotations.",
"cite_spans": [
{
"start": 445,
"end": 467,
"text": "(Smeaton et al., 2009)",
"ref_id": "BIBREF15"
},
{
"start": 544,
"end": 563,
"text": "(Over et al., 2007)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1"
},
{
"text": "We are exploring approaches to natural language descriptions of video data. The step one of the study is to create a dataset that can be used for development and evaluation. Textual annotations are manually generated in three different flavours, i.e., selection of HLFs (keywords), title assignment (a single phrase) and full description (multiple phrases). Keywords are useful for identification of objects and actions in videos. A title, in a sense, is a summary in the most compact form; it captures the most important content, or the theme, of the video in a short phrase. On the other hand, a full description is lengthy, comprising of several sentences with details of objects, activities and their interactions. Combination of keywords, a title, and a full descriptions will create a valuable resource for text based video retrieval and summarisation tasks. Finally, analysis of this dataset provides an insight into how humans generate natural language description for video.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Creation",
"sec_num": "2"
},
{
"text": "Most of previous datasets are related to specific tasks; PETS (Young and Ferryman, 2005) , CAVIAR (Fisher et al., 2005) and Terrascope (Jaynes et al., 2005) are for surveillance videos. KTH (Schuldt et al., 2004) and the Hollywood action dataset (Marszalek et al., 2009) are for human action recognition. MIT car dataset is for identification of cars (Papageorgiou and Poggio, 1999) . Caltech 101 and Caltech 256 are image datasets with 101 and 256 object categories respectively (Griffin et al., 2007) but there is no information about human actions or emotions.",
"cite_spans": [
{
"start": 62,
"end": 88,
"text": "(Young and Ferryman, 2005)",
"ref_id": "BIBREF18"
},
{
"start": 98,
"end": 119,
"text": "(Fisher et al., 2005)",
"ref_id": "BIBREF2"
},
{
"start": 135,
"end": 156,
"text": "(Jaynes et al., 2005)",
"ref_id": "BIBREF5"
},
{
"start": 190,
"end": 212,
"text": "(Schuldt et al., 2004)",
"ref_id": "BIBREF14"
},
{
"start": 246,
"end": 270,
"text": "(Marszalek et al., 2009)",
"ref_id": "BIBREF7"
},
{
"start": 351,
"end": 382,
"text": "(Papageorgiou and Poggio, 1999)",
"ref_id": "BIBREF10"
},
{
"start": 480,
"end": 502,
"text": "(Griffin et al., 2007)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Creation",
"sec_num": "2"
},
{
"text": "There are some datasets specially generated for scene settings such as MIT outdoor scene dataset (Oliva and Torralba, 2009) . Quattoni and Torralba (2009) created indoor dataset with 67 different scenes categories. For most of these datasets annotations are available in the form of keywords (e.g., actions such as sit, stand, walk). They were developed for keyword search, object recognition or event identification tasks. Rashtchian et al. (2010) provided an interesting dataset of 1000 images which contain natural language descriptions of those images.",
"cite_spans": [
{
"start": 97,
"end": 123,
"text": "(Oliva and Torralba, 2009)",
"ref_id": "BIBREF8"
},
{
"start": 126,
"end": 154,
"text": "Quattoni and Torralba (2009)",
"ref_id": null
},
{
"start": 424,
"end": 448,
"text": "Rashtchian et al. (2010)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Creation",
"sec_num": "2"
},
{
"text": "In this study we select video clips from TREC video benchmark for creating annotations. They include categories such as news, meeting, crowd, grouping, indoor/outdoor scene settings, traffic, costume, documentary, identity, music, sports and animals videos. The most important and probably the most frequent content in these videos appears to be a human (or humans), showing their activities, emotions and interactions with other objects. We do not intend to derive a dataset with a full scope of video categories, which is beyond our work. Instead, to keep the task manageable, we aim to create a compact dataset that can be used for developing approaches to translating video contents to natural language description.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Creation",
"sec_num": "2"
},
{
"text": "Annotations were manually created for a small subset of data prepared form the rushes video summarisation task and the HLF extraction task for the 2007 and 2008 TREC video evaluations. It consisted of 140 segments of videos -20 segments for each of the following seven categories: Each segment contained a single camera shot, spanning between 10 and 30 seconds in length. Two categories, 'Close-up' and 'Action', are mainly related to humans' activities, expressions and emotions. 'Grouping' and 'Meeting' depict relation and interaction between multiple humans. 'News' videos explain human activities in a constrained environment such as a broadcast studio. Last two categories, 'Indoor/Outdoor' and 'Traffic', are often observed in surveillance videos. They often shows for humans' interaction with other objects in indoor and outdoor settings. TREC video annotated most video segments with a brief description, comprising of multiple phrases and sentences. Further, 13 human subjects prepared additional annotation for these video segments, consisting of keywords, a title and a full description with multiple sentences. They are referred to as hand annotations in the rest of this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Creation",
"sec_num": "2"
},
{
"text": "There exist several freely available video annotation tools. One of the popular video annotation tool is Simple Video Annotation tool 3 . It allows to place a simple tag or annotation on a specified part of the screen at a particular time. The approach is similar to the one used by YouTube 4 . Another well-known video annotation tool is Video Annotation Tool 5 . A video can be scrolled for a certain time period and place annotations for that part of the video. In addition, an annotator can view a video clip, mark a time segment, attach a note to the time segment on a video timeline, or play back the segment. 'Elan' annotation tool allows to create annotations for both audio and visual data using temporal information (Wittenburg et al., 2006) . During that annotation process, a user selects a section of video using the timeline capability and writes annotation for the specific time.",
"cite_spans": [
{
"start": 726,
"end": 751,
"text": "(Wittenburg et al., 2006)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Tool",
"sec_num": "2.1"
},
{
"text": "We have developed our own annotation tool because of a few reasons. None of existing annotation tools provided the functionality of generating a description and/or a title for a video segment. Some tools allows selection of keywords in a free format, which is not suitable for our purpose of creating a list of HLFs. Figure 1 shows a screen shot of the video annotation tool developed, which is referred to as Video Description Tool (VDT). VDT is simple to operate and assist annotators in creating quality annotations. There are three main items to be annotated. An annotator is shown one video segment at one time. Firstly a restricted list of HLFs is provided for each segment and an annotator is required to select all HLFs occurring in the segment. Second, a title should be typed in. A title may be a theme of the video, typically a phrase or a sentence with several words. Lastly, a full description of video contents is created, consisting of several phrases and sentences. During the annotation, it is possible to stop, forward, reverse or play again the same video if required. Links are provided for navigation to the next and the previous videos. An annotator can delete or update earlier annotations if required.",
"cite_spans": [],
"ref_spans": [
{
"start": 317,
"end": 325,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Annotation Tool",
"sec_num": "2.1"
},
{
"text": "A total of 13 annotators were recruited to create texts for the video corpus. They were undergraduate or postgraduate students and fluent in English. It was expected that they could produce descriptions of good quality without detailed instructions or further training. A simple instruction set was given, leaving a wide room for individual interpretation about what might be included in the description. For quality reasons each annotator was given one week to complete the full set of videos.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Process",
"sec_num": "2.2"
},
{
"text": "Each annotator was presented with a complete set of 140 video segments on the annotation tool VDT. For each video annotators were instructed to provide",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Process",
"sec_num": "2.2"
},
{
"text": "\u2022 a title of one sentence long, indicating the main theme of the video;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Process",
"sec_num": "2.2"
},
{
"text": "\u2022 description of four to six sentences, related to what are shown in the video;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Process",
"sec_num": "2.2"
},
{
"text": "\u2022 selection of high level features (e.g., male, female, walk, smile , table) .",
"cite_spans": [],
"ref_spans": [
{
"start": 68,
"end": 76,
"text": ", table)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Annotation Process",
"sec_num": "2.2"
},
{
"text": "The annotations are made with open vocabulary -that is, they can use any English words as long as they contain only standard (ASCII) characters. They should avoid using any symbols or computer codes. Annotators were further guided not to use proper nouns (e.g., do not state the person name) and information obtained from audio. They were also instructed to select all HLFs appeared in the video.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Process",
"sec_num": "2.2"
},
{
"text": "13 annotators created descriptions for 140 videos (seven categories with 20 videos per category), resulting in 1820 documents in the corpus. The total number of words is 30954, hence the average length of one document is 17 words. We counted 1823 unique words and 1643 keywords (nouns and verbs). Figure 2 shows a video segment for a meeting scene, sampled at 1 fps (frame per second), and three examples for hand annotations. They typically contain two to five phrases or sentences. Most sentences are short, ranging between two to six words. Descriptions for human, gender, emotion and action are commonly observed. Occasionally minor details for objects and events are also stated. Descriptions for the background are Hand annotation 1 (title) interview in the studio; (description) three people are sitting on a red table; a tv presenter is interviewing his guests; he is talking to the guests; he is reading from papers in front of him; they are wearing a formal suit;",
"cite_spans": [],
"ref_spans": [
{
"start": 297,
"end": 305,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Corpus Analysis",
"sec_num": "3"
},
{
"text": "Hand annotation 2 (title) tv presenter and guests (description) there are three persons; the one is host; others are guests; they are all men;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Analysis",
"sec_num": "3"
},
{
"text": "Hand annotation 3 (title) three men are talking (description) three people are sitting around the table and talking each other; often associated with objects rather than humans. It is interesting to observe the subjectivity with the task; the variety of words were selected by individual annotators to express the same video contents. Figure 3 shows another example of a video segment for a human activity and hand annotations.",
"cite_spans": [],
"ref_spans": [
{
"start": 335,
"end": 343,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Corpus Analysis",
"sec_num": "3"
},
{
"text": "After removing function words, the frequency for each word was counted in hand annotations. Two classes are manually defined; one class is related directly to humans, their body structure, identity, action and interaction with other humans. (Another class represents artificial and natural objects and scene settings, i.e., all the words not directly related to humans, although they are important for semantic understanding of the visual scenedescribed further in the next section.) Note that some related words (e.g., 'woman' and 'lady') were replaced with a single concept ('female'); concepts were then built up into a hierarchical structure for each class. Figure 4 presents human related information observed in hand annotations. Annotators paid full attention to human gender information as the number of occurrences for 'female' and 'male' is Figure 4 : Human related information found in 13 hand annotations. Information is divided into structures (gender, age, identity, emotion, dressing, grouping and body parts) and activities (facial, hand and body). Each box contains a high level concept (e.g., 'woman' and 'lady' are both merged into 'female') and the number of its occurrences.",
"cite_spans": [],
"ref_spans": [
{
"start": 662,
"end": 670,
"text": "Figure 4",
"ref_id": null
},
{
"start": 851,
"end": 859,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Human Related Features",
"sec_num": "3.1"
},
{
"text": "Hand annotation 1 (title) outdoor talking scene; (description) young woman is sitting on chair in park and talking to man who is standing next to her;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Related Features",
"sec_num": "3.1"
},
{
"text": "Hand annotation 2 (title) A couple is talking; (description) two person are talking; a lady is sitting and a man is standing; a man is wearing a black formal suit; a red bus is moving in the street; people are walking in the street; a yellow taxi is moving in the street;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Related Features",
"sec_num": "3.1"
},
{
"text": "Hand annotation 3 (title) talk of two persons; (description) a man is wearing dark clothes; he is standing there; a woman is sitting in front of him; they are saying to each other; Figure 3 : A montage of video showing a human activity in an outdoor scene and three sets of hand annotations. In this video segment, a man is standing while a woman is sitting in outdoor -from TREC video '20041101 160000 CCTV4 DAILY NEWS CHN 41504210 '. the highest among HLFs. This highlights our conclusion that most interesting and important HLF is humans when they appear in a video. On the other hand age information (e.g., 'old ', 'young', 'child ') was not identified very often. Names for human body parts have mixed occurrences ranging from high ('hand ') to low ('moustache'). Six basic emotions -anger, disgust, fear, happiness, sadness, and surprise as discussed by Paul Ekman 6 -covered most of facial expressions.",
"cite_spans": [],
"ref_spans": [
{
"start": 181,
"end": 189,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Human Related Features",
"sec_num": "3.1"
},
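As an illustration of the counting behind Figure 4, the following sketch (not the authors' code; the stop-word list and synonym-to-concept map below are invented placeholders) shows how related words can be merged into a single concept and their occurrences tallied over the hand annotations.

```python
from collections import Counter
import re

# Illustrative placeholders only; the actual stop-word list and synonym table
# used for Figure 4 are not given in the paper.
STOP_WORDS = {"a", "an", "the", "is", "are", "in", "on", "and", "to", "of"}
CONCEPT_MAP = {"woman": "female", "lady": "female", "man": "male", "guy": "male"}

def concept_counts(descriptions):
    """Count high level concepts over a list of hand-annotation strings."""
    counts = Counter()
    for text in descriptions:
        for token in re.findall(r"[a-z]+", text.lower()):
            if token in STOP_WORDS:
                continue
            # Map synonymous words onto a single concept before counting.
            counts[CONCEPT_MAP.get(token, token)] += 1
    return counts

annotations = ["a lady is sitting on a chair", "a man is talking to the woman"]
print(concept_counts(annotations).most_common(5))
```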
{
"text": "Dressing became an interesting feature when a human was in a unique dress such as a formal suit, a coloured jacket, an army or police uniform. Videos with multiple humans were common, and thus human grouping information was frequently recognised. Human body parts were involved in identification of human activities; they included actions such as standing, sitting, walking, moving, holding and carrying. Actions related to human body and posture were frequently identified. It was rare that unique human identities, such as police, president and prime minister, were described. This may indicate that a viewer might want to know a specific type of an object to describe a particular situation instead of generalised concepts. Figure 5 shows the hierarchy created for HLFs that did not appear in Figure 4 . Most of the words are related to artificial objects. Humans interact with these objects to complete an activity - e.g., 'man is sitting on a chair', 'she is talking on the phone', 'he is wearing a hat '. Natural objects were usually in the background, providing the additional context of a visual scene -e.g., 'human is standing in the jungle, 'sky is clear today'. Place and location information (e.g., room, office, hospital, cafeteria) were important as they show the position of humans or other objects in the scene -e.g., 'there is a car on the road, 'people are walking in the park '. Colour information often plays an important part in identifying separate HLFs -e.g., 'a man in black shirt is walking with a woman with green jacket', 'she is wearing a white uniform '. The large number of occurrences for colours indicates human's interest in observing not only objects but also their colour scheme in a visual scene. Some hand descriptions reflected annotator's interest in scene settings shown in the foreground or in the background. Indoor/outdoor scene settings were also interested in by some annotators. These observations demonstrate that a viewer is interested in high level details of a video and relationships between different prominent objects in a visual scene. Figure 6 presents a list of the most frequent words and phrases related to spatial relations found in hand annotations. Spatial relations between HLFs are important when explaining the semantics of visual scenes. Their effective use leads to the smooth description. Spatial relations can be categorised into in (404); with (120); on (329); near (68); around (63); at (55); on the left (35); in front of (24); down (24); together (24); along (16); beside (16); on the right (16); into (14); far (11); between (10); in the middle (10); outside (8); off (8); over (8); pass-by (8); across (7); inside (7); middle (7); under (7); away (6); after (7) Figure 6 : List of frequent spatial relations with their counts found in hand annotations. static: relations between stationary objects; dynamic: direction and path of moving objects;",
"cite_spans": [
{
"start": 1395,
"end": 1397,
"text": "'.",
"ref_id": null
}
],
"ref_spans": [
{
"start": 727,
"end": 735,
"text": "Figure 5",
"ref_id": "FIGREF2"
},
{
"start": 796,
"end": 804,
"text": "Figure 4",
"ref_id": null
},
{
"start": 2090,
"end": 2098,
"text": "Figure 6",
"ref_id": null
},
{
"start": 2736,
"end": 2744,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Human Related Features",
"sec_num": "3.1"
},
{
"text": "inter-static and dynamic: relations between moving and not moving objects.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Spatial Relations",
"sec_num": "3.3"
},
{
"text": "Static relations can establish the scene settings (e.g., 'chairs around a table' may imply an indoor scene). Dynamic relations are used for finding activities present in the video (e.g., 'a man is running with a dog'). Inter-static and dynamic relations are a mixture of stationary and non stationary objects; they explain semantics of the complete scene (e.g., 'persons are sitting on the chairs around the table' indicates a meeting scene).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Spatial Relations",
"sec_num": "3.3"
},
{
"text": "Video is a class of time series data formed with highly complex multi dimensional contents. Let video X be a uniformly sampled frame sequence of length n, denoted by X = {x 1 , . . . , x n }, and each frame x i gives a chronological position of the sequence (Figure 7 ). To generate full description of video contents, annotators use temporal information to join descriptions of individual frames. For example,",
"cite_spans": [],
"ref_spans": [
{
"start": 258,
"end": 267,
"text": "(Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Temporal Relations",
"sec_num": "3.4"
},
{
"text": "A man is walking. After sometime he enters the room. Later on he is sitting on the chair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Temporal Relations",
"sec_num": "3.4"
},
{
"text": "Based on the analysis of the corpus, we describe temporal information in two flavors:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Temporal Relations",
"sec_num": "3.4"
},
{
"text": "1. temporal information extracted from activities of a single human;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Temporal Relations",
"sec_num": "3.4"
},
{
"text": "2. interactions between multiple humans.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Temporal Relations",
"sec_num": "3.4"
},
{
"text": "Most common relations in video sequences are 'before', 'after', 'start ' and 'finish ' for single humans, and 'overlap', 'during' and 'meeting' for multiple humans. Figure 8 presents a list of the most frequent words in the corpus related to temporal relations. It can be observed that annotators put much focus Figure 7 : Illustration of a video as a uniformly sampled sequence of length n. A video frame is denoted by x i , whose spatial context can be represented in the d dimensional feature space. single human: then (25); end (24); before (22); after (16); next (12); later on (12); start (11); previous (11); throughout (10); finish (8); afterwards (6); prior to (4); since (4) multiple humans: meet (114); while (37); during (27); at the same time (19); overlap (12); meanwhile (12); throughout (7); equals (4) Figure 8 : List of frequent temporal relations with their counts found in hand annotations.",
"cite_spans": [],
"ref_spans": [
{
"start": 165,
"end": 173,
"text": "Figure 8",
"ref_id": null
},
{
"start": 312,
"end": 320,
"text": "Figure 7",
"ref_id": null
},
{
"start": 819,
"end": 827,
"text": "Figure 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Temporal Relations",
"sec_num": "3.4"
},
{
"text": "on keywords related to activities of multiple humans as compared to single human cases. 'Meet' keyword has the highest frequency, as annotators usually consider most of the scenes involving multiple humans as the meeting scene. 'While' keyword is mostly used for showing separate activities of multiple humans such as 'a man is walking while a woman is sitting'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Temporal Relations",
"sec_num": "3.4"
},
{
"text": "A well-established approach to calculating human inter-annotator agreement is kappa statistics (Eugenio and Glass, 2004) . However in the current task it is not possible to compute inter-annotator agreement using this approach because no category was defined for video descriptions. Further the description length for one video can vary among annotators. Alternatively the similarity between natural language descriptions can be calculated; an effective and commonly used measure to find the similarity between a pair of documents is the overlap similarity coefficient (Manning and Sch\u00fctze, 1999) :",
"cite_spans": [
{
"start": 108,
"end": 120,
"text": "Glass, 2004)",
"ref_id": "BIBREF1"
},
{
"start": 569,
"end": 596,
"text": "(Manning and Sch\u00fctze, 1999)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity between Descriptions",
"sec_num": "3.5"
},
{
"text": "Sim overlap (X, Y ) = |S(X, n) \u2229 S(Y, n)| min(|S(X, n)|, |S(Y, n)|)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity between Descriptions",
"sec_num": "3.5"
},
{
"text": "where S(X, n) and S(Y, n) are the set of distinct n-grams in documents X and Y respectively. It is a similarity measure related to the Jaccard index (Tan et al., 2006) . Note that when a set X is a subset of Y or the converse, the overlap coefficient is equal to one. Values for the overlap coefficient range between 0 and 1, where '0' presents the situation where documents are completely different and '1' describes the case where two documents are exactly the same. Table 1 shows the average overlap similarity scores for seven scene categories within 13 hand annotations. The average was calculated from scores for individual description, that was compared with the rest of descriptions in the same category. The outcome demonstrate the fact that humans have different observations and interests while watching videos. Calculation were repeated with two conditions; one with stop words removed and Porter stemmer (Porter, 1993) applied, but synonyms NOT replaced, and the other with stop words NOT removed, but Porter stemmer applied and synonyms replaced. It was found the latter combination of preprocessing techniques resulted in better scores. Not surprisingly synonym replacement led to increased performance, indicating that humans do express the same concept using different terms.",
"cite_spans": [
{
"start": 149,
"end": 167,
"text": "(Tan et al., 2006)",
"ref_id": "BIBREF16"
},
{
"start": 917,
"end": 931,
"text": "(Porter, 1993)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 469,
"end": 476,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Similarity between Descriptions",
"sec_num": "3.5"
},
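A minimal sketch of the overlap similarity coefficient defined above, assuming descriptions are plain whitespace-tokenised strings (the tokenisation and lower-casing are assumptions; no stemming or synonym replacement is applied here).

```python
def ngrams(tokens, n):
    """Set of distinct n-grams (as tuples) in a token sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def overlap_similarity(doc_x, doc_y, n=1):
    """Overlap coefficient between the n-gram sets of two documents."""
    sx = ngrams(doc_x.lower().split(), n)
    sy = ngrams(doc_y.lower().split(), n)
    if not sx or not sy:
        return 0.0
    return len(sx & sy) / min(len(sx), len(sy))

# Example: unigram overlap between two hand annotations of the same video.
print(overlap_similarity("three people are sitting on a red table",
                         "three persons are sitting around the table"))
```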
{
"text": "The average overlap similarity score was higher for 'Traffic' videos than for the rest of categories. Because vehicles were the major entity in 'Traffic' videos, rather than humans and their actions, contributing for annotators to create more uniform descriptions. Scores for some other categories were lower. It probably means that there are more aspects to pay attention when watching videos in, e.g., 'Grouping' category, hence resulting in the wider range of natural language expressions produced.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity between Descriptions",
"sec_num": "3.5"
},
{
"text": "Video is a class of time series data which can be partitioned into time aligned frames (images). These frames are tied together sequentially and temporally. Therefore, it will be useful to know how a person captures the temporal information present in a video. As the order is preserved in a sequence of events, a suitable measure to quantify sequential and temporal information of a description is the longest common subsequence (LCS). This approach computes the similarity between a pair of token (i.e., word) sequences by simply counting the number of edit operations (insertions and deletions) required to transform one sequence into the other. The output is a sequence of common elements such that no other longer string is available. In the experiments, the LCS score between word sequences is normalised by the length of the shorter sequence. Table 2 presents results for identifying sequences of events in hand descriptions using the LCS similarity score. Individual descriptions were compared with the rest of descriptions in the same category and the average score was calculated. Relatively low scores in the table indicate the great variation in annotators' attention on the sequence of events, or temporal information, in a video. Events described by one annotator may not have been listed by another annotator. The News videos category resulted in the highest similarity score, confirming the fact that videos in this category are highly structured.",
"cite_spans": [],
"ref_spans": [
{
"start": 850,
"end": 857,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Sequence of Events Matching",
"sec_num": "3.6"
},
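A short sketch of the LCS-based score described above, assuming whitespace tokenisation: the longest common subsequence length is computed with standard dynamic programming and normalised by the length of the shorter word sequence.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two token lists (DP table)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def lcs_similarity(doc_x, doc_y):
    """LCS score normalised by the length of the shorter word sequence."""
    a, b = doc_x.lower().split(), doc_y.lower().split()
    if not a or not b:
        return 0.0
    return lcs_length(a, b) / min(len(a), len(b))

print(lcs_similarity("a man enters the room and sits on the chair",
                     "a man is walking and then sits on a chair"))
```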
{
"text": "To demonstrate the application of this corpus with natural language descriptions, a supervised document classification task is outlined. Tf-idf score can express textual document features (Dumais et al., 1998) . Traditional tf-idf represents the relation between term t and document d. It provides a measure of the importance of a term within a particular document, calculated as",
"cite_spans": [
{
"start": 188,
"end": 209,
"text": "(Dumais et al., 1998)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Video Classification",
"sec_num": "3.7"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "tf idf (t, d) = tf (t, d) \u2022 idf (d)",
"eq_num": "(1)"
}
],
"section": "Video Classification",
"sec_num": "3.7"
},
{
"text": "where the term frequency tf (t, d) is given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Video Classification",
"sec_num": "3.7"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "tf (t, d) = N t,d k N k,d",
"eq_num": "(2)"
}
],
"section": "Video Classification",
"sec_num": "3.7"
},
{
"text": "In the above",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Video Classification",
"sec_num": "3.7"
},
{
"text": "equation N t,d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Video Classification",
"sec_num": "3.7"
},
{
"text": "is the number of occurrences of term t in document d, and the denominator is the sum of the number of occurrences for all terms in document d, that is, the size of the document |d|. Further the inverse document frequency idf (d) is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Video Classification",
"sec_num": "3.7"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "idf (d) = log N W (t)",
"eq_num": "(3)"
}
],
"section": "Video Classification",
"sec_num": "3.7"
},
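A small sketch implementing equations (1)-(3) directly (a plain-Python illustration, not the feature extraction pipeline actually used; idf is treated as a function of the term, log(N / W(t)), and no smoothing is applied).

```python
import math
from collections import Counter

def tfidf_matrix(docs):
    """Term-document tf-idf weights following equations (1)-(3).

    docs: list of token lists. Returns {(term, doc_index): weight}.
    """
    n_docs = len(docs)
    doc_freq = Counter()                      # W(t): documents containing t
    for tokens in docs:
        doc_freq.update(set(tokens))
    weights = {}
    for d, tokens in enumerate(docs):
        counts = Counter(tokens)
        size = len(tokens)                    # |d|
        for t, c in counts.items():
            tf = c / size                     # equation (2)
            idf = math.log(n_docs / doc_freq[t])  # equation (3)
            weights[(t, d)] = tf * idf        # equation (1)
    return weights

docs = [["man", "walking", "street"],
        ["woman", "sitting", "chair"],
        ["man", "sitting", "chair"]]
print(tfidf_matrix(docs)[("walking", 0)])
```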
{
"text": "where N is the total number of documents in the corpus and W (t) is the total number of document containing term t. A term-document matrix X is presented by T \u00d7 D matrix tf idf (t, d). In the experiment Naive Bayes probabilistic supervised learning algorithm was applied for classification using Weka machine learning library (Hall et al., 2009) . Tenfold cross validation was applied. The performance was measured using precision, recall and F1-measure (Table 3 ). F1-measure was low for 'Grouping' and 'Action' videos, indicating the difficulty in classifying these types of natural language descriptions. Best classification results were achieved for 'Traffic' and 'Indoor/Outdoor' scenes. Absence of humans and their actions might have contributed obtaining the high classification scores. Human actions and activities were present in most videos in various categories, hence the 'Action' category resulted in the lowest results. 'Grouping' category also showed Table 3 : Results for supervised classification using the tf-idf features. Figure 9 : The average overlap similarity scores for titles and for descriptions. 'uni', 'bi', and 'tri' indicate the unigram, bigram, and trigram based similarity scores, respectively. They were calculated without any preprocessing such as stop word removal or synonym replacement. weaker result; it was probably because processing for interaction between multiple people, with their overlapped actions, had not been fully developed. Overall classification results are encouraging which demonstrates that this dataset is a good resource for evaluating natural language description systems of short videos.",
"cite_spans": [
{
"start": 326,
"end": 345,
"text": "(Hall et al., 2009)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 454,
"end": 462,
"text": "(Table 3",
"ref_id": null
},
{
"start": 966,
"end": 973,
"text": "Table 3",
"ref_id": null
},
{
"start": 1041,
"end": 1049,
"text": "Figure 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Video Classification",
"sec_num": "3.7"
},
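The classification itself was run with Weka's Naive Bayes and ten-fold cross validation; a rough Python equivalent using scikit-learn (an assumption, not the authors' setup, with placeholder descriptions and category labels) might look like the following.

```python
# Assumes scikit-learn is installed; this mirrors, but is not, the Weka setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Placeholder data: the real corpus has 1820 descriptions over 7 categories.
descriptions = [
    "a red bus is moving in the street",
    "a yellow taxi is moving on the road",
    "three people are sitting around a table and talking",
    "a host is interviewing his guests at a table",
    "a reporter is standing in front of a weather board",
    "an anchor is reading the news in a studio",
]
labels = ["Traffic", "Traffic", "Meeting", "Meeting", "News", "News"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
# The paper reports ten-fold cross validation; cv is reduced here only because
# the placeholder sample is tiny.
scores = cross_val_score(model, descriptions, labels, cv=2, scoring="f1_macro")
print(scores.mean())
```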
{
"text": "A title may be considered a very short form of summary. We carried out further experiments to calculate the similarity between a title and a description manually created for a video. The length of a title varied between two to five words. Figure 9 shows the average overlapping similarity scores between titles and descriptions. It can be observed that, in general, scores for titles were lower than those for descriptions, apart from 'News' and 'Meeting' videos. It was probably caused by the short length of titles; by inspection we found phrases such as 'news video' and 'meeting scene' for these categories. Another experiment was performed for classification of videos based on title information only. Figure 10 shows comparison of classification per- formance with titles and with descriptions. We were able to make correct classification in many videos with titles alone, although the performance was slightly less for titles only than for descriptions.",
"cite_spans": [],
"ref_spans": [
{
"start": 239,
"end": 248,
"text": "Figure 9",
"ref_id": null
},
{
"start": 708,
"end": 717,
"text": "Figure 10",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Analysis of Title and Description",
"sec_num": "3.8"
},
{
"text": "This paper presented our experiments using a corpus created for natural language description of videos. For a small subset of TREC video data in seven categories, annotators produced titles, descriptions and selected high level features. This paper aimed to characterise the corpus based on analysis of hand annotations and a series of experiments for description similarity and video classification. In the future we plan to develop automatic machine annotations for video sequences and compare them against human authored annotations. Further, we aim to annotate this corpus in multiple languages such as Arabic and Urdu to generate a multilingual resource for video processing community.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "4"
},
{
"text": "Rushes are the unedited video footage, sometimes referred to as a pre-production video.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "videoannotation.codeplex.com/ 4 www.youtube.com/t/annotations about 5 dewey.at.northwestern.edu/ppad2/documents/help/video.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "en.wikipedia.org/wiki/Paul Ekman",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Inductive learning algorithms and representations for text categorization",
"authors": [
{
"first": "S",
"middle": [],
"last": "Dumais",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Platt",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Heckerman",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Sahami",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the seventh international conference on Information and knowledge management",
"volume": "",
"issue": "",
"pages": "148--155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Dumais, J. Platt, D. Heckerman, and M. Sahami. 1998. Inductive learning algorithms and represen- tations for text categorization. In Proceedings of the seventh international conference on Information and knowledge management, pages 148-155. ACM.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The kappa statistic: A second look. Computational linguistics",
"authors": [
{
"first": "B",
"middle": [
"D"
],
"last": "Eugenio",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "30",
"issue": "",
"pages": "95--101",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B.D. Eugenio and M. Glass. 2004. The kappa statis- tic: A second look. Computational linguistics, 30(1):95-101.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Caviar: Context aware vision using image-based active recognition",
"authors": [
{
"first": "R",
"middle": [],
"last": "Fisher",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Santos-Victor",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Crowley",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Fisher, J. Santos-Victor, and J. Crowley. 2005. Caviar: Context aware vision using image-based ac- tive recognition.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Caltech-256 object category dataset",
"authors": [
{
"first": "G",
"middle": [],
"last": "Griffin",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Holub",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Perona",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Griffin, A. Holub, and P. Perona. 2007. Caltech- 256 object category dataset.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The weka data mining software: an update",
"authors": [
{
"first": "M",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Holmes",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Pfahringer",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Reutemann",
"suffix": ""
},
{
"first": "I",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
}
],
"year": 2009,
"venue": "ACM SIGKDD Explorations Newsletter",
"volume": "11",
"issue": "1",
"pages": "10--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reute- mann, and I.H. Witten. 2009. The weka data min- ing software: an update. ACM SIGKDD Explo- rations Newsletter, 11(1):10-18.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The terrascope dataset: A scripted multi-camera indoor video surveillance dataset with ground-truth",
"authors": [
{
"first": "C",
"middle": [],
"last": "Jaynes",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Kale",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Sanders",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Grossmann",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the IEEE Workshop on VS PETS",
"volume": "4",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Jaynes, A. Kale, N. Sanders, and E. Gross- mann. 2005. The terrascope dataset: A scripted multi-camera indoor video surveillance dataset with ground-truth. In Proceedings of the IEEE Workshop on VS PETS, volume 4. Citeseer.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Foundations of Statistical Natural Language Processing",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Manning and Hinrich Sch\u00fctze. 1999. Foundations of Statistical Natural Language Pro- cessing. MIT Press.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Actions in context",
"authors": [
{
"first": "M",
"middle": [],
"last": "Marszalek",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Laptev",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Schmid",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Marszalek, I. Laptev, and C. Schmid. 2009. Ac- tions in context.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Mit outdoor scene dataset",
"authors": [
{
"first": "A",
"middle": [],
"last": "Oliva",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Torralba",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Oliva and A. Torralba. 2009. Mit outdoor scene dataset.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The trecvid 2007 bbc rushes summarization evaluation pilot",
"authors": [
{
"first": "P",
"middle": [],
"last": "Over",
"suffix": ""
},
{
"first": "A",
"middle": [
"F"
],
"last": "Smeaton",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Kelly",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the international workshop on TRECVID video summarization",
"volume": "",
"issue": "",
"pages": "1--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Over, A.F. Smeaton, and P. Kelly. 2007. The trecvid 2007 bbc rushes summarization evaluation pilot. In Proceedings of the international work- shop on TRECVID video summarization, pages 1- 15. ACM.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A trainable object detection system: Car detection in static images",
"authors": [
{
"first": "C",
"middle": [],
"last": "Papageorgiou",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Poggio",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "180",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Papageorgiou and T. Poggio. 1999. A trainable object detection system: Car detection in static im- ages. Technical Report 1673, October. (CBCL Memo 180).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "An algorithm for suffix stripping. Program: electronic library and information systems",
"authors": [
{
"first": "M",
"middle": [
"F"
],
"last": "Porter",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "14",
"issue": "",
"pages": "130--137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M.F. Porter. 1993. An algorithm for suffix stripping. Program: electronic library and information sys- tems, 14(3):130-137.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Collecting image annotations using amazon's mechanical turk",
"authors": [
{
"first": "C",
"middle": [],
"last": "Rashtchian",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Hodosh",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk",
"volume": "",
"issue": "",
"pages": "139--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Rashtchian, P. Young, M. Hodosh, and J. Hocken- maier. 2010. Collecting image annotations using amazon's mechanical turk. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk, pages 139-147. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Recognizing human actions: A local svm approach",
"authors": [
{
"first": "C",
"middle": [],
"last": "Schuldt",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Laptev",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Caputo",
"suffix": ""
}
],
"year": 2004,
"venue": "Pattern Recognition, 2004. ICPR 2004. Proceedings of the 17th International Conference on",
"volume": "",
"issue": "",
"pages": "32--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Schuldt, I. Laptev, and B. Caputo. 2004. Recog- nizing human actions: A local svm approach. In Pattern Recognition, 2004. ICPR 2004. Proceed- ings of the 17th International Conference on, vol- ume 3, pages 32-36. IEEE.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Highlevel feature detection from video in trecvid: a 5-year retrospective of achievements",
"authors": [
{
"first": "A",
"middle": [
"F"
],
"last": "Smeaton",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Over",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Kraaij",
"suffix": ""
}
],
"year": 2009,
"venue": "Multimedia Content Analysis",
"volume": "",
"issue": "",
"pages": "1--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A.F. Smeaton, P. Over, and W. Kraaij. 2009. High- level feature detection from video in trecvid: a 5-year retrospective of achievements. Multimedia Content Analysis, pages 1-24.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Introduction to data mining",
"authors": [
{
"first": "P",
"middle": [
"N"
],
"last": "Tan",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Steinbach",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P.N. Tan, M. Steinbach, V. Kumar, et al. 2006. Intro- duction to data mining. Pearson Addison Wesley Boston.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Elan: a professional framework for multimodality research",
"authors": [
{
"first": "P",
"middle": [],
"last": "Wittenburg",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Brugman",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Russel",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Klassmann",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Sloetjes",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Wittenburg, H. Brugman, A. Russel, A. Klassmann, and H. Sloetjes. 2006. Elan: a professional frame- work for multimodality research. In Proceedings of LREC, volume 2006. Citeseer.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Pets metrics: On-line performance evaluation service",
"authors": [
{
"first": "D",
"middle": [
"P"
],
"last": "Young",
"suffix": ""
},
{
"first": "J",
"middle": [
"M"
],
"last": "Ferryman",
"suffix": ""
}
],
"year": 2005,
"venue": "Joint IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance (VS-PETS)",
"volume": "",
"issue": "",
"pages": "317--324",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D.P. Young and J.M. Ferryman. 2005. Pets metrics: On-line performance evaluation service. In Joint IEEE International Workshop on Visual Surveil- lance and Performance Evaluation of Tracking and Surveillance (VS-PETS), pages 317-324.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Video Description Tool (VDT). An annotator watches one video at one time, selects all HLFs present in the video, describes a theme of the video as a title and creates a full description for important contents in the video."
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "A montage showing a meeting scene in a news video and three sets of hand annotations. In this video segment, three persons are shown sitting on chairs around a table -extracted from TREC video '20041116 150100 CCTV4 DAILY NEWS CHN33050028'."
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Artificial and natural objects and scene settings were summarised into six groups."
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Video classification by titles, and by descriptions."
},
"TABREF0": {
"content": "<table><tr><td>table or chairs may not be present.</td></tr><tr><td>Traffic: Presence of vehicles such as cars, buses</td></tr><tr><td>and trucks. Traffic signals.</td></tr><tr><td>Indoor/Outdoor: Scene settings are more obvi-</td></tr><tr><td>ous than human activities. Examples may be</td></tr><tr><td>park scenes and office scenes (where com-</td></tr><tr><td>puters and files are visible).</td></tr><tr><td>hu-</td></tr><tr><td>man can be seen performing some action</td></tr><tr><td>such as 'sitting', 'standing', 'walking' and</td></tr><tr><td>'running'.</td></tr><tr><td>Close-up: Human face is visible. Facial expres-</td></tr><tr><td>sions and emotions usually define mood of</td></tr><tr><td>the video (e.g., happy, sad).</td></tr><tr><td>News: Presence of an anchor or reporters. Char-</td></tr><tr><td>acterised by scene settings such as weather</td></tr><tr><td>boards at the background.</td></tr><tr><td>Meeting: Multiple humans are sitting and com-</td></tr><tr><td>municating. Presence of objects such as</td></tr><tr><td>chairs and a table.</td></tr><tr><td>Grouping: Multiple humans interaction scenes</td></tr><tr><td>that do not belong to a meeting scenario. A</td></tr></table>",
"num": null,
"type_str": "table",
"html": null,
"text": ""
},
"TABREF2": {
"content": "<table><tr><td>: Similarity scores based on the longest com-</td></tr><tr><td>mon subsequence (LCS) in three conditions: scores</td></tr><tr><td>without any preprocessing (raw), scores after synonym</td></tr><tr><td>replacement (synonym), and scores by keyword com-</td></tr><tr><td>parison (keyword). For keyword comparison, verbs</td></tr><tr><td>and nouns were presented as keywords after stemming</td></tr><tr><td>and removing stop words.</td></tr></table>",
"num": null,
"type_str": "table",
"html": null,
"text": ""
}
}
}
}