|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T08:06:59.403019Z" |
|
}, |
|
"title": "Automatic Detection and Classification of Head Movements in Face-to-Face Conversations", |
|
"authors": [ |
|
{ |
|
"first": "Patrizia", |
|
"middle": [], |
|
"last": "Paggio", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Copenhagen", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Manex", |
|
"middle": [], |
|
"last": "Agirrezabal", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Copenhagen", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Bart", |
|
"middle": [], |
|
"last": "Jongejan", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Copenhagen", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Costanza", |
|
"middle": [], |
|
"last": "Navarretta", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Copenhagen", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper presents an approach to automatic head movement detection and classification in data from a corpus of video-recorded face-toface conversations in Danish involving 12 different speakers. A number of classifiers were trained with different combinations of visual, acoustic and word features and tested in a leave-one-out cross validation scenario. The visual movement features were extracted from the raw video data using OpenPose, the acoustic ones from the sound files using Praat, and the word features from the transcriptions. The best results were obtained by a Multilayer Perceptron classifier, which reached an average 0.68 F1 score across the 12 speakers for head movement detection, and 0.40 for head movement classification given four different classes. In both cases, the classifier outperformed a simple most frequent class baseline, a more advanced baseline only relying on velocity features, and linear classifiers using different combinations of features.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper presents an approach to automatic head movement detection and classification in data from a corpus of video-recorded face-toface conversations in Danish involving 12 different speakers. A number of classifiers were trained with different combinations of visual, acoustic and word features and tested in a leave-one-out cross validation scenario. The visual movement features were extracted from the raw video data using OpenPose, the acoustic ones from the sound files using Praat, and the word features from the transcriptions. The best results were obtained by a Multilayer Perceptron classifier, which reached an average 0.68 F1 score across the 12 speakers for head movement detection, and 0.40 for head movement classification given four different classes. In both cases, the classifier outperformed a simple most frequent class baseline, a more advanced baseline only relying on velocity features, and linear classifiers using different combinations of features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Head movements play an important role in face-to-face communication in that they provide an effective means to express and elicit feedback, and consequently establish grounding and rapport between speakers; they contribute to turn exchange; they are used by speakers to manage their own communicative behaviour, e.g. in connection with lexical search (Allwood, 1988; Yngve, 1970; Duncan, 1972; McClave, 2000) . Therefore, it is crucial for conversational systems to be able to identify and interpret speakers' head movements as well as generate them correctly when interacting with users (Ruttkay and Pelachaud, 2006) . This paper is a contribution to the automatic identification of head movements from raw video data coming from faceto-face dyadic conversations. It builds on previous work where a number of models were trained to detect head movements based on movement and speech features, and extends that work in several directions by extracting movement features using newer software, by trying to distinguish between different kinds of movement, and by training and testing speaker-independent models based on a larger dataset. The paper is structured as follows. In section 2 we discuss related work in the area. Section 3 is dedicated to the features for the prediction of head movements. In section 4 we present the corpus that we used for the current study. Finally in section 5 we discuss the results and propose some possible future directions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 351, |
|
"end": 366, |
|
"text": "(Allwood, 1988;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 367, |
|
"end": 379, |
|
"text": "Yngve, 1970;", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 380, |
|
"end": 393, |
|
"text": "Duncan, 1972;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 394, |
|
"end": 408, |
|
"text": "McClave, 2000)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 588, |
|
"end": 617, |
|
"text": "(Ruttkay and Pelachaud, 2006)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Several studies have been relatively successful in performing head movement detection from tracked data, for example by using coordinates obtained through eye-tracking (Kapoor and Picard, 2001; Tan and Rong, 2003) or Kinect sensors (Wei et al., 2013) . A different approach to the task is to detect head movements in raw video data. Such an approach has the potential of making available large amount of data to train systems to deal with multimodal communication in different languages and communicative scenar-ios. Large annotated multimodal corpora are in turn a prerequisite to the development of natural multimodal interactive systems. Surveys of the way computer vision techniques can be applied to gesture recognition are given in Wu and Huang (1999) and Gavrila (1999) . Both works conclude, however, that the field is still a fairly new one, and many problems remain as yet unsolved.", |
|
"cite_spans": [ |
|
{ |
|
"start": 168, |
|
"end": 193, |
|
"text": "(Kapoor and Picard, 2001;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 194, |
|
"end": 213, |
|
"text": "Tan and Rong, 2003)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 232, |
|
"end": 250, |
|
"text": "(Wei et al., 2013)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 738, |
|
"end": 757, |
|
"text": "Wu and Huang (1999)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 762, |
|
"end": 776, |
|
"text": "Gavrila (1999)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Work has also been done trying to detect gestures based on visual as well as language or speech features. In this line of research, Morency et al. (2005) proposed a methodology where SVM and HMM models were trained to predict feedback nods and shakes in human-robot interactions. The visual features used for head movement recognition were enriched with features from the dialogue context. It can be argued, however, that human-robot interaction is much more constrained than spontaneous human dialogue, and thus the task of predicting the user's head movements is probably easier, or at least different than in human-human communication data. In Morency et al. (2007) , models were trained to recognise head movements in video frames in a variety of datasets based on visual features obtained from tracked head velocities or eye gaze estimates extracted from video data. A number of different models were compared in the study, and it was found that LDCRF (Latent-Dynamic Conditional Random Field) was the best performing of the models. The authors attribute the result to the fact that the model is good at dealing with unsegmented sequences, in this case movement sequences. Morency (2009) studied the co-occurrence between head gestures and speech cues such as specific words and pauses in multi-party conversations, and relevant contextual cues were used to improve a visionbased LDCRF head gesture recognition model. In Jongejan (2012), OpenCV was applied to the detection of head movement from videos based on velocity and acceleration, in combination with customisable thresholds, for the automatic annotation of head movements using the ANVIL tool (Kipp, 2004) . The obtained annotations correlated well with the manual annotation at the onset, but generated a high number of false positives. In Jongejan et al. (2017), three visual movement features were used to train an SVM classifier of head movement. Frid et al. (2017) used the corpus of read news in Swedish described in Ambrazaitis and House (2017) to detect head movements that co-occur with words. The head movements were manually annotated and OpenCV for frontal face detection was used in order to calculate velocity and acceleration features. A Xgboost classifier was trained to predict absence or presence of head movements co-occurring with words. Acoustic features have also been used for head movement prediction. For example Germesin and Wilson (2009) combined pitch and energy of voice with word, pause and head pose information to identify agreement and disagreement signals in meeting data. Such work is based on linguistic and psycho-linguistic findings that have shown a tight relationship between facial movements and acoustic prominence, to the point of talking about audiovisual prominence (Granstr\u00f6m and House, 2005; Swerts and Krahmer, 2008; Ambrazaitis and House, 2017) . In the work by Paggio et al. (2018) , movement features were considered together with acoustic features to identify head movements in conversational data. The authors performed several experiments with different feature sets and also, several prediction paradigms were tested, including common classifiers and sequence-based models. It was observed that a Multilayer Perceptron showed the best results when trained on one speaker and tested on another one. In this study, we build on those preliminary results by extending our dataset to consider twelve different speakers, and we experiment with the classification of different head movement types.", |
|
"cite_spans": [ |
|
{ |
|
"start": 132, |
|
"end": 153, |
|
"text": "Morency et al. (2005)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 647, |
|
"end": 668, |
|
"text": "Morency et al. (2007)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 1178, |
|
"end": 1192, |
|
"text": "Morency (2009)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 1657, |
|
"end": 1669, |
|
"text": "(Kipp, 2004)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 1915, |
|
"end": 1933, |
|
"text": "Frid et al. (2017)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 1987, |
|
"end": 2015, |
|
"text": "Ambrazaitis and House (2017)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 2402, |
|
"end": 2428, |
|
"text": "Germesin and Wilson (2009)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 2775, |
|
"end": 2802, |
|
"text": "(Granstr\u00f6m and House, 2005;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 2803, |
|
"end": 2828, |
|
"text": "Swerts and Krahmer, 2008;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 2829, |
|
"end": 2857, |
|
"text": "Ambrazaitis and House, 2017)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 2875, |
|
"end": 2895, |
|
"text": "Paggio et al. (2018)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Similarly to what was done in Paggio et al. (2018) , three time-related derivatives with respect to the changing position of the head are used here as features for the identification of head movements: velocity, acceleration and jerk. Velocity is change of position per unit of time, acceleration is change of velocity per unit of time, and finally jerk is change of acceleration per unit of time. We suggest that a sequence of frames for which jerk has a high value either horizontally or vertically may correspond to the stroke of the movement (Kendon, 2004) . OpenPose (Cao et al., 2018) was used to extract nose tip positions from the data. Using a sliding window, velocity, acceleration and jerk values were computed for video frame sequences using a polynomial (linear, quadratic and cubic, respectively) regression over a number of observations of nose tip positions. Several window frames were experimented with. The results reported in this paper were obtained by considering 9 frames for velocity, 11 for acceleration and 13 for jerk. For each of the three derivatives, four values are computed for each frame and used to train the models. The 12 values are both the cartesian (x and y) and polar (radius and angle) coordinates of the velocity, acceleration and jerk vectors. Since we analyse video data, we do not have depth information, and so we are restricted to express velocity, acceleration and jerk as vectors in a two dimensional plane. Angle values have integer values between 1 and 12, like the directions on a clock dial. It must be noted that the video recordings are characterised by 25 frames per second and a resolution of either 640x360 (.avi) or 640x369 (.mov). Thus the quality is quite low given today's standards. In addition, since the participants are recorded almost in full height, the head movements are very tiny when expressed in pixels. All of this is bound to have an effect on how accurately the movement derivatives can predict head movement. Acoustic features were extracted from the speech channels of all speakers using the PRAAT software (Boersma and Weenink, 2009) . In general, several studies indicate that head movements are likely to occur together with prosodic stress, whereas the opposite is not necessarily true (Hadar et al., 1983; Loehr, 2007) . Since in Danish, which is the language of our study, stress is expressed through fundamental frequency, vowel duration and quality, as well as intensity (Thorsen, 1980) , we decided to rely on pitch and intensity features to model a possible relation between focal patterns and head movements. F0 values and intensity values were sampled with 25 frames per second as is done for the movement features and added to the training data. The hypothesis is that changes in pitch or peaks of intensity might be associated with head movement strokes, and thus help in identifying movement. Based on the analysis of co-occurrence patterns between head movements and verbalisation in the corpus data (Paggio et al., 2017), we finally added to the predictive features information as to whether the person performing the movement, the gesturer, is speaking or not. This binary feature was added to each frame based on the speech transcription, which was done manually and includes word boundaries. 
The data used for this study are taken from the Danish NOMCO corpus (Paggio et al., 2010), a collection of twelve video-recorded first encounter conversations between pairs of speakers (half females, half males) for a total interaction of approximately one hour. Each speaker took part in two different conversations, one with a male and one with a female. The speakers are standing in front of each other. The conversations were recorded in a studio using three different cameras and two cardioid microphones. For the work presented here we used a version of the recordings in which both speakers are being viewed almost frontally, and the two views are combined in a single video as shown in Figure 1. [Table 2: Distribution of different head movement types in the dataset: mean number of frames and coefficient of variation across 12 speakers.] The data have been annotated with many different annotation layers (Paggio and Navarretta, 2016), including a manually obtained speech transcription with word-specific boundaries, and temporal segments corresponding to different types of head movement (Allwood et al., 2007). Cohen's (1960) \u03ba scores from inter-coder agreement experiments involving two annotators are between 0.72 and 0.8 for the identification and classification of head movements (Navarretta et al., 2011). For this study, we have focused on two ways of looking at the head movements: i. distinguishing between head movement and absence of it; ii. distinguishing between nods, shakes, other kinds of head movement, and no movement.",
|
"cite_spans": [ |
|
{ |
|
"start": 30, |
|
"end": 50, |
|
"text": "Paggio et al. (2018)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 546, |
|
"end": 560, |
|
"text": "(Kendon, 2004)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 572, |
|
"end": 590, |
|
"text": "(Cao et al., 2018)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 2084, |
|
"end": 2111, |
|
"text": "(Boersma and Weenink, 2009)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 2267, |
|
"end": 2287, |
|
"text": "(Hadar et al., 1983;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 2288, |
|
"end": 2300, |
|
"text": "Loehr, 2007)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 2456, |
|
"end": 2471, |
|
"text": "(Thorsen, 1980)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 3356, |
|
"end": 3377, |
|
"text": "(Paggio et al., 2010)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 4212, |
|
"end": 4241, |
|
"text": "(Paggio and Navarretta, 2016)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 4398, |
|
"end": 4420, |
|
"text": "(Allwood et al., 2007)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 4427, |
|
"end": 4441, |
|
"text": "Cohen's (1960)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 4605, |
|
"end": 4630, |
|
"text": "(Navarretta et al., 2011)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 3984, |
|
"end": 3992, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 4024, |
|
"end": 4031, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Predictive features", |
|
"sec_num": "3." |
|
}, |
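The movement feature computation described in the entry above can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes that the OpenPose nose tip positions have already been read into per-frame arrays x and y at 25 fps, it uses the window sizes reported above (9, 11 and 13 frames), and the exact clock-dial angle convention as well as the function names are illustrative assumptions.

```python
import numpy as np

FPS = 25  # frame rate of the video recordings

def windowed_derivative(values, window, order):
    """Estimate the order-th time derivative at each frame by fitting a
    polynomial of degree `order` (linear, quadratic or cubic) to a centred
    window of nose tip coordinates, as described above."""
    half = window // 2
    t = (np.arange(window) - half) / FPS           # window time axis in seconds
    out = np.full(len(values), np.nan)
    for i in range(half, len(values) - half):
        coeffs = np.polyfit(t, values[i - half:i + half + 1], deg=order)
        # derivative of the fitted polynomial at the window centre
        out[i] = np.polyder(np.poly1d(coeffs), m=order)(0.0)
    return out

def movement_features(x, y):
    """Per-frame velocity, acceleration and jerk features: cartesian
    components plus polar radius and a clock-dial angle between 1 and 12."""
    feats = {}
    for name, window, order in [("velocity", 9, 1), ("acceleration", 11, 2), ("jerk", 13, 3)]:
        dx = windowed_derivative(x, window, order)
        dy = windowed_derivative(y, window, order)
        radius = np.hypot(dx, dy)
        # quantise the direction of the vector to the hours of a clock dial
        # (the mapping of image axes to clock positions is an assumption)
        hour = np.round(np.degrees(np.arctan2(dx, dy)) / 30.0) % 12
        angle = np.where(hour == 0, 12, hour)
        feats[name] = np.stack([dx, dy, radius, angle], axis=1)
    return feats
```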
|
{ |
|
"text": "In table 1 we show the distribution of the four types of head movement in the annotated corpus both in terms of entire movement sequences and number of video frames. Thus, 3,117 head movements were annotated in total, corresponding to 72,313 movement frames. Frames containing no head movement constitute by far the majority of the video footage. The Nod class subsumes both down and up nods. It was singled out together with Shake because these two classes have been targeted previously in head movement detection studies (Morency et al., 2005) . The Other category groups a number of distinct types in the annotation, i.e. HeadBackward, HeadForward, SideTurn, Tilt, Waggle and HeadOther.", |
|
"cite_spans": [ |
|
{ |
|
"start": 523, |
|
"end": 545, |
|
"text": "(Morency et al., 2005)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data, training and test setup", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "There is of course speaker-dependent variation in the frequency of the various movement types. Table 2 displays mean averages and coefficient of variations for how different movement and non movement frames are distributed across the twelve speakers. The figures show that the frequency of occurrence of both Nod and Shake varies considerably in the speaker sample. The duration of the head movements in the annotated corpus is 934.78 ms on average (SD: 579.44 ms). A histogram of head movement duration is given in Figure 2 . Although most movements are shorter than 1500 ms, we see a long tail of outliers with a maximum duration of up to 7,080 ms. To derive training data from the twelve annotated videos, movement, acoustic and word features were extracted as explained in the previous section so that for each frame in each video a vector was created with features expressing presence/absence of movement, a label for each of the four movement classes, four velocity, four acceleration and four jerk features, pitch and intensity values referring to the gesturer and a binary feature expressing whether the same gesturer is speaking or not. The data were then used to train a number of different classifiers to predict the head movements of each speaker given training data from the other eleven speakers (leave-one-out cross validation). In what follows, we will report accuracy and F1 results achieved by the various classifiers on average across speakers. It should be kept in mind, however, that there is variation across speakers in number of types of head movement produced, as already noted. Moreover, the accuracy of the classifiers may be influenced by the fact that some speakers are sometimes situated on the left and sometimes on the right, and others are in the same position in both the conversations they took part in. As mentioned earlier, two tasks were conducted. The first is detection of head movement (irrespective of the type), and the second is classification of head movement type given the four classes None, Nod, Shake and Other. Two baselines were chosen. The first one corresponds to the results obtained by a simple most-frequent category model, which will always predict that there is no movement in the frame. The second one is a logistic regression classifier that only uses velocity features. We then experimented with the complete range of movement derivatives (velocity, acceleration and jerk). Finally, we added acoustic and word information relative to the gesturer. The following classifier types were used to train models using the various feature combinations: i. a Logistic Regression (LR) classifier, which is an example of a simple model, ii. a linear Support Vector Machine (LINEARSVC), which was used by several earlier studies for head movement detection, and iii. a Multilayer Perceptron (MLP) with four layers, as an example of a non-linear classifier. 1", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 95, |
|
"end": 102, |
|
"text": "Table 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 516, |
|
"end": 524, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data, training and test setup", |
|
"sec_num": "4." |
|
}, |
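The following is a minimal sketch of the leave-one-speaker-out training and evaluation setup described above, using scikit-learn; it is not the authors' code. It assumes a per-frame feature matrix X, a label vector y and a parallel speaker_ids array built as described, and the MLP hidden layer sizes are illustrative, since the paper only states that a four-layer perceptron was used; the linear models (Logistic Regression, LinearSVC) can be substituted in the same loop.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score, f1_score

def leave_one_speaker_out(X, y, speaker_ids):
    """Train on eleven speakers, test on the held-out one, and return
    accuracy and macro F1 averaged over the twelve folds."""
    scores = []
    for held_out in np.unique(speaker_ids):
        train = speaker_ids != held_out
        test = ~train
        model = make_pipeline(
            StandardScaler(),
            # hidden layer sizes are illustrative: the paper only specifies
            # a Multilayer Perceptron with four layers
            MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
        )
        model.fit(X[train], y[train])
        pred = model.predict(X[test])
        scores.append((accuracy_score(y[test], pred),
                       f1_score(y[test], pred, average="macro")))
    return np.mean(scores, axis=0)  # (mean accuracy, mean F1) across speakers
```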
|
{ |
|
"text": "The results of the binary classification experiments are given in terms of average accuracy in Table 6 : Classification of different types of head movements in the whole dataset: total number of frames, precision and recall for each type baseline. We also see that the MLP classifier performs better than all the others irrespective of the combination of features used in the training. The overall best accuracy is achieved by MLP using all the three movement features, whereas acoustic and word features seem to introduce some noise (even though the difference between the MLP results in experiment 2 on the one hand and 2 and 3 on the other is marginal). Turning to F1, we observe again that all models definitely outperform the baseline, and that the MLP classifier is consistently the best in all experiments. In this case, the best result is achieved either using the entire range of features or only the visual ones. Adding acoustic features alone produces a slightly lower F1. Figure 3 shows how the F1 score obtained by the best binary models, i.e. those trained with the complete range of features, varies depending on the speaker. The MLP classifier is not only the best performing one on average, but also the one where the F1 score varies the least. However, there is still some variation. In fact, the standard deviation for the results achieved by MLP is 0.053 for accuracy and 0.046 for F1. We now turn to the results of the multi-class prediction experiments, which are shown in table 7 for accuracy and ta- 4 in table 4) ble 8 for F1 score (macro average). Determining the type of head movement in a multi-class prediction scenario is a more difficult task than having to choose between movement and non-movement. Therefore, it is not surprising that the results are generally worse. Nevertheless, all the models perform better than the baseline both as regards accuracy and F1. Also in this case, MLP is generally the best classifier. If we now focus on the accuracy results first, we see again that the best accuracy is achieved by MLP when using all the movement features but no acoustic or word features. When we look at the F1 scores, however, we see that acoustic features this time not only help the classifier, but provide the best performing model in combination with movement features. Further analysis of the results is provided by the error matrix in table 5, which relates to the best performing classifier (MLP in exp. 3). We see first of all that head movements of all types are confused with no movement, and to some extent with movements of type Other. Nods and shakes, on the contrary, are seldom exchanged for one another, which seems a good result given the fact that they are quite different from the point of view of their movement characteristics.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 95, |
|
"end": 102, |
|
"text": "Table 6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 984, |
|
"end": 992, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 1524, |
|
"end": 1537, |
|
"text": "4 in table 4)", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5." |
|
}, |
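A sketch of the per-speaker evaluation and of the pooled error matrix discussed above, again not the authors' code; the class labels and all names are illustrative, and it assumes frame-wise gold labels and predictions collected from the leave-one-speaker-out folds.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score

CLASSES = ["None", "Nod", "Shake", "Other"]  # illustrative label names

def summarise_predictions(y_true, y_pred, speaker_ids):
    """Macro F1 per held-out speaker (mean and standard deviation) and an
    error matrix pooled over all frames, as in the analysis above."""
    per_speaker_f1 = [
        f1_score(y_true[speaker_ids == s], y_pred[speaker_ids == s],
                 labels=CLASSES, average="macro")
        for s in np.unique(speaker_ids)
    ]
    pooled_errors = confusion_matrix(y_true, y_pred, labels=CLASSES)
    return np.mean(per_speaker_f1), np.std(per_speaker_f1), pooled_errors
```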
|
{ |
|
"text": "In table 6 we show precision and recall figures for the different movement types. Recall is in general low for movement frames, while precision is better. We see this as an advantage in that an automatic procedure that misses exist- ing head movements seems more acceptable than one that finds non-existing ones. Precision in the detection of head movements is highest for Nod, followed by Other, followed by Shake. The degree of precision depends not only on frequency of occurrence (there are more nods than shakes), but also on how homogeneous the classes are (the class Other is not as homogeneous as the class Nod).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "In general, it is difficult to compare our results directly to what other head movement detection studies have achieved because of the diversity of recording settings, number of participants, communicative situations etc. The work that resembles ours the most in terms of the methodology used is perhaps the paper by Frid et al. (2017) in that they also rely on movement derivatives. They also look at the cooccurrence of head movements and words, but do so in a different way by predicting for each word whether it is accompanied by a movement or not. Their results, 0.89 accuracy and 0.61 F1 score, are not very dissimilar from those obtained by our best model in the binary classification. It must be noted, however, that we are detecting head movements in less favourable conditions since our subjects are recorded in full body size. In addition, the quality of our videos is, as already mentioned, not up to today's standards. Furthermore, the acoustic signal is also far from optimal because the microphones were hanging from the ceiling rather than being close to the participants' mouths. The present study is a further development of the earlier experiment reported in Paggio et al. (2018) , where we performed head movement detection in a subset of the data only consisting of two speakers. The best result was obtained in that study by a Multilayer Perceptron trained on visual and acoustic features, which achieved 0.75 accuracy and outperformed a classifier trained on monomodal visual features. The performance of the best model in the current study, which applies to the entire dataset, is only about 2% lower, thus showing that our methodology is reasonably robust. An interesting question is whether approaching the problem in terms of single frames is a good way of approximating what the human annotators did. After all, they were asked to annotate whole head movements, not individual frames.", |
|
"cite_spans": [ |
|
{ |
|
"start": 317, |
|
"end": 335, |
|
"text": "Frid et al. (2017)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 1178, |
|
"end": 1198, |
|
"text": "Paggio et al. (2018)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "Duration in ms A way to compare the results of the frame-wise predictions made by the models is to look at the number and duration of uninterrupted movement frame sequences and compare them with the gold standard. The total number of movements predicted by the best binary classifier is 7,782, and their mean duration is 291.25 ms (SD: 360.91). In comparison to the annotated movements, the classifier detects many more but shorter ones. In figure 4 we visualise the whole distribution of the duration of the predicted movements. If we compare it with the histogram in figure 2, we can clearly see that the classifier tends to find many more shorter movements (up to 500 ms), and even though the distribution is also left-skewed, the maximum duration of 4,880 is considerably shorter than the longest movement in the gold standard. There may be several explanations for these differences, e.g. the fact that annotators may have seen a sequence of movements as an uninterrupted repeated gesture of a certain kind rather than separate individual ones. Looking at the feature combinations used in the experiments, the results confirm the fact that combining the three movement derivatives in the training reliably improves detection and classification for all the models. It can be discussed, however, whether all the values currently used in the vectors are in fact necessary. Having a representation of velocity, acceleration and jerk not only in terms of polar coordinates but also in terms of cartesian coordinates is redundant since such representations are equivalent. We repeated some of the experiments without the inclusion of polar coordinates. Only the MLP classifier was not adversely influenced by this and became even marginally better. The linear classifiers, on the other hand, performed not any better than the baseline without the polar coordinates. The role played by the acoustic and word features, on the contrary, is not totally clear in that they only add marginal gains to the F1 scores obtained by the models and in some cases even harm them. It is possible that the speech signal is superfluous, but also that we have not found the most efficient way to combine those features with the visual ones. More research is needed to understand this. Finally, as we noted the performance of the classifiers varies depending on the speaker. A first analysis of the data indicates that the factors which might influence the results in this direction are the types of head movement performed by the speakers as well as whether the speaker is standing on the same side during the two conversations or not.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Duration of predicted uninterrupted movement sequences", |
|
"sec_num": null |
|
}, |
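A minimal sketch of how the frame-wise predictions can be collapsed into uninterrupted movement sequences and converted to durations in milliseconds at the 25 fps frame rate of the recordings; this is not the authors' code, and the function and variable names are illustrative.

```python
import numpy as np

FPS = 25                      # frames per second of the recordings
MS_PER_FRAME = 1000 / FPS

def predicted_movement_durations(is_movement):
    """Group a per-frame boolean movement prediction into contiguous
    movement sequences and return their durations in milliseconds."""
    padded = np.concatenate(([0], is_movement.astype(int), [0]))
    changes = np.diff(padded)
    starts = np.flatnonzero(changes == 1)    # first frame of each sequence
    ends = np.flatnonzero(changes == -1)     # one past the last frame
    return (ends - starts) * MS_PER_FRAME

# toy example: two predicted movements of 3 and 2 frames (120 ms and 80 ms)
durations = predicted_movement_durations(np.array([0, 1, 1, 1, 0, 0, 1, 1], dtype=bool))
print(len(durations), durations.mean())
```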
|
{ |
|
"text": "In conclusion, we have shown that head movements can be detected in unseen speaker data by an MLP classifier trained with multimodal data including movement and acoustic features. The results achieved by this classifier perform at state-of-the-art level. When the same method is applied to the classification of four different types of head movement in the same data, the performance decreases. In order to develop the present work further, we can investigate different approaches. Firstly, we plan to add more features from OpenPose: the position of ears and chin, for example, might be helpful to add to the position of the nose for some of the head movements. An alternative to Open-Pose, or a method that we would like to use in combination with it, could be found in computer vision techniques that identify changing head positions as proposed in Ruiz et al. (2018) , who trained a multiloss Convolutional Neural Network on a synthetically created dataset in order to predict yaw, pitch and roll from image intensities. Secondly, we intend to investigate different ways to use acoustic and word features, either by adding more features or by using them in more selective ways for specific head movement classes. Thirdly, we would like to analyse the extent to which the depth of the neural network contributes to the results by testing different numbers of layers. Furthermore, we would like to experiment with sequential models such as Recurrent Neural Networks (RNN), which are often used to analyse video sequences and might therefore predict gestures more precisely than the classifiers we have tested until now. In that connection, it would also be interesting to experiment with an architecture in which representations are learnt separately for each feature by different networks and then concatenated into one vector. Finally, we want to carry out a more precise comparison of the movements predicted and the annotated ones by making the predictions readable by the ANVIL gesture annotation tool.", |
|
"cite_spans": [ |
|
{ |
|
"start": 852, |
|
"end": 870, |
|
"text": "Ruiz et al. (2018)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and future work", |
|
"sec_num": "7." |
|
}, |
|
{ |
|
"text": "We have obtained written permission by the participants to use the videos for research purposes specific to the project within which the recordings were obtained. Therefore, we are making all the features extracted from the corpus available together with the code we have used to train and test the classifiers. However, we do not share the videos or the transcriptions from the corpus because of privacy and data protection issues.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ethical considerations", |
|
"sec_num": "8." |
|
}, |
|
{ |
|
"text": "The data and the Jupyter notebooks that were used in our experiments can be found at https://github.com/ kuhumcst/head_movement_detection.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "The MUMIN coding scheme for the annotation of feedback, turn management and sequencing phenomena", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Allwood", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Cerrato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Jokinen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Navarretta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Paggio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Multimodal Corpora for Modelling Human Multimodal Behaviour", |
|
"volume": "41", |
|
"issue": "", |
|
"pages": "273--287", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Allwood, J., Cerrato, L., Jokinen, K., Navarretta, C., and Paggio, P. (2007). The MUMIN coding scheme for the annotation of feedback, turn management and sequenc- ing phenomena. In Jean-Claude Martin, et al., editors, Multimodal Corpora for Modelling Human Multimodal Behaviour, volume 41 of Special issue of the Interna- tional Journal of Language Resources and Evaluation, pages 273-287. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "The Structure of Dialog", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Allwood", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1988, |
|
"venue": "Structure of Multimodal Dialog II", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3--24", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Allwood, J. (1988). The Structure of Dialog. In Martin M. Taylor, et al., editors, Structure of Multimodal Dialog II, pages 3-24. John Benjamins, Amsterdam.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Acoustic features of multimodal prominences: Do visual beat gestures affect verbal pitch accent realization?", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Ambrazaitis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "House", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of The 14th International Conference on Auditory-Visual Speech Processing (AVSP2017). KTH", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ambrazaitis, G. and House, D. (2017). Acoustic features of multimodal prominences: Do visual beat gestures af- fect verbal pitch accent realization? In Slim Ouni, et al., editors, Proceedings of The 14th International Confer- ence on Auditory-Visual Speech Processing (AVSP2017). KTH.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Praat: doing phonetics by computer", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Boersma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Weenink", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Boersma, P. and Weenink, D. (2009). Praat: doing pho- netics by computer (version 5.1.05) [computer program].", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "OpenPose: realtime multi-person 2D pose estimation using Part Affinity Fields", |
|
"authors": [ |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Cao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Hidalgo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Simon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S.-E", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Sheikh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXivpreprintarXiv:1812.08008" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cao, Z., Hidalgo, G., Simon, T., Wei, S.-E., and Sheikh, Y. (2018). OpenPose: realtime multi-person 2D pose estimation using Part Affinity Fields. In arXiv preprint arXiv:1812.08008.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "A coefficient of agreement for nominal scales. Educational and Psychological Measurement", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Cohen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1960, |
|
"venue": "", |
|
"volume": "20", |
|
"issue": "", |
|
"pages": "37--46", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cohen, J. (1960). A coefficient of agreement for nom- inal scales. Educational and Psychological Measure- ment, 20(1):37-46.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Some signals and rules for taking speaking turns in conversations", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Duncan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1972, |
|
"venue": "Journal of Personality and Social Psychology", |
|
"volume": "23", |
|
"issue": "", |
|
"pages": "283--292", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Duncan, S. (1972). Some signals and rules for taking speaking turns in conversations. Journal of Personality and Social Psychology, 23:283-292.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Towards classification of head movements in audiovisual recordings of read news", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Frid", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Ambrazaitis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Svensson-Lundmark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "House", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 4th European and 7th Nordic Symposium on Multimodal Communication (MMSYM 2016), number 141", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "Link\u00f6pings uni-- versitet", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Frid, J., Ambrazaitis, G., Svensson-Lundmark, M., and House, D. (2017). Towards classification of head move- ments in audiovisual recordings of read news. In Pro- ceedings of the 4th European and 7th Nordic Sympo- sium on Multimodal Communication (MMSYM 2016), number 141, pages 4-9, Copenhagen, September 2016. Link\u00f6ping University Electronic Press, Link\u00f6pings uni- versitet.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "The visual analysis of human movement: A survey. Computer Vision and Image Understanding", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Gavrila", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "73", |
|
"issue": "", |
|
"pages": "82--98", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gavrila, D. (1999). The visual analysis of human move- ment: A survey. Computer Vision and Image Under- standing, 73(1):82 -98.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Agreement detection in multiparty conversation", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Germesin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Wilson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of ICMI-MLMI 2009", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7--14", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Germesin, S. and Wilson, T. (2009). Agreement detec- tion in multiparty conversation. In Proceedings of ICMI- MLMI 2009, pages 7-14.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Audiovisual representation of prosody in expressive speech communication", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Granstr\u00f6m", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "House", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Speech Communication", |
|
"volume": "46", |
|
"issue": "3", |
|
"pages": "473--484", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Granstr\u00f6m, B. and House, D. (2005). Audiovisual repre- sentation of prosody in expressive speech communica- tion. Speech Communication, 46(3):473-484, July.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Head Movement Correlates of Juncture and Stress at Sentence Level", |
|
"authors": [ |
|
{ |
|
"first": "U", |
|
"middle": [], |
|
"last": "Hadar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Steiner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Grant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clifford", |
|
"middle": [], |
|
"last": "Rose", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1983, |
|
"venue": "Language and Speech", |
|
"volume": "26", |
|
"issue": "2", |
|
"pages": "117--129", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hadar, U., Steiner, T., Grant, E., and Clifford Rose, F. (1983). Head Movement Correlates of Juncture and Stress at Sentence Level. Language and Speech, 26(2):117-129, April.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Classifying head movements in video-recorded conversations based on movement velocity, acceleration and jerk", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Jongejan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Paggio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Navarretta", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 4th European and 7th Nordic Symposium on Multimodal Communication (MMSYM 2016)", |
|
"volume": "141", |
|
"issue": "", |
|
"pages": "10--17", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jongejan, B., Paggio, P., and Navarretta, C. (2017). Classifying head movements in video-recorded con- versations based on movement velocity, acceleration and jerk. In Proceedings of the 4th European and 7th Nordic Symposium on Multimodal Communication (MMSYM 2016), Copenhagen, 29-30 September 2016, number 141, pages 10-17. Link\u00f6ping University Elec- tronic Press, Link\u00f6pings universitet.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Automatic annotation of head velocity and acceleration in Anvil", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Jongejan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "201--208", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jongejan, B. (2012). Automatic annotation of head ve- locity and acceleration in Anvil. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 201-208. European Language Resources Distribution Agency.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "A real-time head nod and shake detector", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Kapoor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Picard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the 2001 Workshop on Perceptive User Interfaces, PUI '01", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--5", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kapoor, A. and Picard, R. W. (2001). A real-time head nod and shake detector. In Proceedings of the 2001 Workshop on Perceptive User Interfaces, PUI '01, pages 1-5, New York, NY, USA. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Gesture", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Kendon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kendon, A. (2004). Gesture. Cambridge University Press.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Gesture Generation by Imitation -From Human Behavior to Computer Character Animation", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Kipp", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kipp, M. (2004). Gesture Generation by Imitation -From Human Behavior to Computer Character Animation. Boca Raton, Florida: Dissertation.com.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Aspects of rhythm in gesture and speech", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Loehr", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Gesture", |
|
"volume": "7", |
|
"issue": "2", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Loehr, D. P. (2007). Aspects of rhythm in gesture and speech. Gesture, 7(2).", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Linguistic functions of head movements in the context of speech", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Mcclave", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Journal of Pragmatics", |
|
"volume": "32", |
|
"issue": "", |
|
"pages": "855--878", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "McClave, E. (2000). Linguistic functions of head move- ments in the context of speech. Journal of Pragmatics, 32:855-878.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Contextual recognition of head gestures", |
|
"authors": [], |
|
"year": null, |
|
"venue": "Proc. Int. Conf. on Multimodal Interfaces (ICMI)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Contextual recognition of head gestures. In Proc. Int. Conf. on Multimodal Interfaces (ICMI).", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Latent-dynamic discriminative models for continuous gesture recognition", |
|
"authors": [ |
|
{ |
|
"first": "L.-P", |
|
"middle": [], |
|
"last": "Morency", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Quattoni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Darrell", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "2007 IEEE conference on computer vision and pattern recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Morency, L.-P., Quattoni, A., and Darrell, T. (2007). Latent-dynamic discriminative models for continuous gesture recognition. In 2007 IEEE conference on com- puter vision and pattern recognition, pages 1-8. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Co-occurrence graphs: contextual representation for head gesture recognition during multiparty interactions", |
|
"authors": [ |
|
{ |
|
"first": "L.-P", |
|
"middle": [], |
|
"last": "Morency", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the Workshop on Use of Context in Vision Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--6", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Morency, L.-P. (2009). Co-occurrence graphs: contextual representation for head gesture recognition during multi- party interactions. In Proceedings of the Workshop on Use of Context in Vision Processing, pages 1-6.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Creating Comparable Multimodal Corpora for Nordic Languages", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Navarretta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Ahls\u00e9n", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Allwood", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Jokinen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Paggio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 18th Conference Nordic Conference of Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "153--160", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Navarretta, C., Ahls\u00e9n, E., Allwood, J., Jokinen, K., and Paggio, P. (2011). Creating Comparable Multimodal Corpora for Nordic Languages. In Proceedings of the 18th Conference Nordic Conference of Computational Linguistics, pages 153-160, Riga, Latvia, May 11-13.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "The Danish NOMCO corpus: Multimodal interaction in first acquaintance conversations. Language Resources and Evaluation", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Paggio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Navarretta", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--32", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paggio, P. and Navarretta, C. (2016). The Danish NOMCO corpus: Multimodal interaction in first acquaintance con- versations. Language Resources and Evaluation, pages 1-32.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "The NOMCO multimodal nordic resource -goals and characteristics", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Paggio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Allwood", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Ahls\u00e9n", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Jokinen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Navarretta", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paggio, P., Allwood, J., Ahls\u00e9n, E., Jokinen, K., and Navar- retta, C. (2010). The NOMCO multimodal nordic re- source -goals and characteristics. In Proceedings of the Seventh conference on International Language Re- sources and Evaluation (LREC'10), Valletta, Malta. Eu- ropean Language Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Automatic identification of head movements in videorecorded conversations: Can words help?", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Paggio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Navarretta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jongejan", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the Sixth Workshop on Vision and Language", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "40--42", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paggio, P., Navarretta, C., and Jongejan, B. (2017). Automatic identification of head movements in video- recorded conversations: Can words help? In Proceed- ings of the Sixth Workshop on Vision and Language, pages 40-42, Valencia, Spain, April. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Detecting head movements in video-recorded dyadic conversations", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Paggio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Jongejan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Agirrezabal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Navarretta", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 20th International Conference on Multimodal Interaction: Adjunct", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--6", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paggio, P., Jongejan, B., Agirrezabal, M., and Navarretta, C. (2018). Detecting head movements in video-recorded dyadic conversations. In Proceedings of the 20th In- ternational Conference on Multimodal Interaction: Ad- junct, pages 1-6.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Fine-grained head pose estimation without keypoints", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Ruiz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Chong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Rehg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2074--2083", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ruiz, N., Chong, E., and Rehg, J. (2018). Fine-grained head pose estimation without keypoints. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 2074-2083.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "From Brows to Trust: Evaluating Embodied Conversational Agents. Human-Computer Interaction Series", |
|
"authors": [ |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Ruttkay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Pelachaud", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ruttkay, Z. and Pelachaud, C. (2006). From Brows to Trust: Evaluating Embodied Conversational Agents. Human-Computer Interaction Series. Springer Nether- lands.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Facial expression and prosodic prominence: Effects of modality and facial area", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Swerts", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Krahmer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Journal of Phonetics", |
|
"volume": "36", |
|
"issue": "2", |
|
"pages": "219--238", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Swerts, M. and Krahmer, E. (2008). Facial expression and prosodic prominence: Effects of modality and facial area. Journal of Phonetics, 36(2):219-238.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "A real-time head nod and shake detector using HMMs", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Tan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Rong", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Expert Systems with Applications", |
|
"volume": "25", |
|
"issue": "3", |
|
"pages": "461--466", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tan, W. and Rong, G. (2003). A real-time head nod and shake detector using HMMs. Expert Systems with Appli- cations, 25(3):461-466.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Neutral stress, emphatic stress, and sentence Intonation in Advanced Standard Copenhagen Danish", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Thorsen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1980, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thorsen, N. (1980). Neutral stress, emphatic stress, and sentence Intonation in Advanced Standard Copenhagen Danish. Technical Report 14, University of Copenhagen.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Real-time head nod and shake detection for continuous human affect recognition", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Scanlon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Monaghan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Connor", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "14th International Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--4", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wei, H., Scanlon, P., Li, Y., Monaghan, D. S., and O'Connor, N. E. (2013). Real-time head nod and shake detection for continuous human affect recognition. In 14th International Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS), pages 1-4. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Vision-based gesture recognition: A review", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "International Gesture Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "103--115", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wu, Y. and Huang, T. S. (1999). Vision-based gesture recognition: A review. In International Gesture Work- shop, pages 103-115. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "On getting a word in edgewise", |
|
"authors": [ |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Yngve", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1970, |
|
"venue": "Papers from the sixth regional meeting of the Chicago Linguistic Society", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "567--578", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yngve, V. (1970). On getting a word in edgewise. In Pa- pers from the sixth regional meeting of the Chicago Lin- guistic Society, pages 567-578.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"text": "Screen shot from one of the video recordings showing combined almost frontal camera views", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"text": "Duration of annotated head movements in the dataset", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"text": "Visualisation of the F1-score of the binary model that include all features (exp.", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF3": { |
|
"num": null, |
|
"text": "Histogram of the duration of uninterrupted sequences of movement frames predicted by the binary MLP classifier in exp. 4", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"TABREF1": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "Different types of head movements in the dataset: total number of frames and whole movements", |
|
"content": "<table><tr><td/><td>None</td><td>Nod</td><td colspan=\"2\">Shake Other</td><td>All</td></tr><tr><td colspan=\"3\">Mean 10,479 1,813</td><td>792</td><td colspan=\"2\">3,421 6,026</td></tr><tr><td>CV</td><td>0.13</td><td>0.47</td><td>0.50</td><td>0.20</td><td>0.20</td></tr></table>" |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "", |
|
"content": "<table><tr><td colspan=\"2\">Exp Features</td><td>MF</td><td>LR</td><td colspan=\"2\">LINEARSVC MLP</td></tr><tr><td>1</td><td>Only velocity</td><td colspan=\"2\">0.635 0.686</td><td>0.680</td><td>0.707</td></tr><tr><td>2</td><td>All visual features (no sound)</td><td colspan=\"2\">0.635 0.721</td><td>0.718</td><td>0.733</td></tr><tr><td>3</td><td>All visual and acoustic (only gesturer)</td><td colspan=\"2\">0.635 0.722</td><td>0.718</td><td>0.730</td></tr><tr><td>4</td><td colspan=\"3\">All visual and acoustic+word (only gesturer) 0.635 0.725</td><td>0.723</td><td>0.730</td></tr></table>" |
|
}, |
|
"TABREF3": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "Accuracy results of classification experiments (mean over 12 speakers). Classes are presence and absence of movement.", |
|
"content": "<table><tr><td colspan=\"2\">Exp Features</td><td>MF</td><td>LR</td><td colspan=\"2\">LINEARSVC MLP</td></tr><tr><td>1</td><td>Only velocity</td><td colspan=\"2\">0.387 0.575</td><td>0.557</td><td>0.648</td></tr><tr><td>2</td><td>All visual features (no sound)</td><td colspan=\"2\">0.387 0.644</td><td>0.633</td><td>0.684</td></tr><tr><td>3</td><td>All visual and acoustic (only gesturer)</td><td colspan=\"2\">0.387 0.646</td><td>0.634</td><td>0.681</td></tr><tr><td>4</td><td colspan=\"3\">All visual and acoustic+word (only gesturer) 0.387 0.658</td><td>0.650</td><td>0.684</td></tr></table>" |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "", |
|
"content": "<table><tr><td/><td colspan=\"6\">: F1 results (macro average) of classification experiments (mean over 12 speakers). Classes are presence and absence</td></tr><tr><td colspan=\"2\">of movement.</td><td/><td/><td/><td/></tr><tr><td/><td/><td/><td/><td colspan=\"2\">Predicted as</td></tr><tr><td/><td/><td>None</td><td>Nod</td><td>Shake</td><td>Other</td><td>Sum</td></tr><tr><td>Gold value</td><td>None Nod Shake Other</td><td colspan=\"2\">113,566 1,984 13,429 4,528 5,977 184 23,148 2,089</td><td>327 74 618 584</td><td>9,870 3,724 2,726 15,232</td><td>125,747 21,755 9,505 41,053</td></tr></table>" |
|
}, |
|
"TABREF5": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "Classification of different types of head movements in the whole dataset: error matrix", |
|
"content": "<table><tr><td colspan=\"4\">Movement type No. frames Precision (%) Recall (%)</td></tr><tr><td>None</td><td>125,747</td><td>72.74</td><td>90.31</td></tr><tr><td>Nod</td><td>21,755</td><td>51.54</td><td>20.81</td></tr><tr><td>Shake</td><td>9,505</td><td>38.55</td><td>6.5</td></tr><tr><td>Other</td><td>41,053</td><td>48.28</td><td>37.1</td></tr></table>" |
|
}, |
|
"TABREF7": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "Accuracy results of multi-class prediction experiments (mean over 12 speakers). Classes are nod, shake, other, none.", |
|
"content": "<table><tr><td colspan=\"2\">Exp Features</td><td>MF</td><td>LR</td><td colspan=\"2\">LINEARSVC MLP</td></tr><tr><td>1</td><td>Only velocity</td><td colspan=\"2\">0.194 0.256</td><td>0.249</td><td>0.308</td></tr><tr><td>2</td><td>All visual features (no sound)</td><td colspan=\"2\">0.194 0.291</td><td>0.277</td><td>0.396</td></tr><tr><td>3</td><td>All visual and acoustic (only gesturer)</td><td colspan=\"2\">0.194 0.294</td><td>0.279</td><td>0.397</td></tr><tr><td>4</td><td colspan=\"3\">All visual and acoustic+word (only gesturer) 0.194 0.313</td><td>0.297</td><td>0.394</td></tr></table>" |
|
}, |
|
"TABREF8": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "", |
|
"content": "<table><tr><td>: F1 results (macro average) of multi-class prediction experiments (mean over 12 speakers). Classes are nod, shake,</td></tr><tr><td>other, none.</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |