|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:04:57.002954Z" |
|
}, |
|
"title": "A Multi-modal Approach to Fine-grained Opinion Mining on Video Reviews", |
|
"authors": [ |
|
{ |
|
"first": "Edison", |
|
"middle": [], |
|
"last": "Marrese-Taylor", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "The University of Tokyo", |
|
"location": { |
|
"country": "Japan" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Cristian", |
|
"middle": [], |
|
"last": "Rodriguez-Opazo", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "The University of Tokyo", |
|
"location": { |
|
"country": "Japan" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Jorge", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Balazs", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "The University of Tokyo", |
|
"location": { |
|
"country": "Japan" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Gould", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "The University of Tokyo", |
|
"location": { |
|
"country": "Japan" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Yutaka", |
|
"middle": [], |
|
"last": "Matsuo", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "The University of Tokyo", |
|
"location": { |
|
"country": "Japan" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Despite the recent advances in opinion mining for written reviews, few works have tackled the problem on other sources of reviews. In light of this issue, we propose a multimodal approach for mining fine-grained opinions from video reviews that is able to determine the aspects of the item under review that are being discussed and the sentiment orientation towards them. Our approach works at the sentence level without the need for time annotations and uses features derived from the audio, video and language transcriptions of its contents. We evaluate our approach on two datasets and show that leveraging the video and audio modalities consistently provides increased performance over text-only baselines, providing evidence these extra modalities are key in better understanding video reviews.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Despite the recent advances in opinion mining for written reviews, few works have tackled the problem on other sources of reviews. In light of this issue, we propose a multimodal approach for mining fine-grained opinions from video reviews that is able to determine the aspects of the item under review that are being discussed and the sentiment orientation towards them. Our approach works at the sentence level without the need for time annotations and uses features derived from the audio, video and language transcriptions of its contents. We evaluate our approach on two datasets and show that leveraging the video and audio modalities consistently provides increased performance over text-only baselines, providing evidence these extra modalities are key in better understanding video reviews.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Sentiment analysis (SA) is an important task in natural language processing, aiming at identifying and extracting opinions, emotions, and subjectivity. As a result, sentiment can be automatically collected, analyzed and summarized. Because of this, SA has received much attention not only in academia but also in industry, helping provide feedback based on customers' opinions about products or services. The underlying assumption in SA is that the entire input has an overall polarity, however, this is usually not the case. For example, laptop reviews generally not only express the overall sentiment about a specific model (e.g., \"This is a great laptop\"), but also relate to its specific aspects, such as the hardware, software or price. Subsequently, a review may convey opposing sentiments (e.g., \"Its performance is ideal, I wish I could say the same about the price\") or objective information (e.g., \"This one still has the CD slot\") for different aspects of an entity. Aspect-based sentiment analysis (ABSA) or fine-grained opinion mining aims to extract opinion targets or aspects of entities being reviewed in a text, and to determine the sentiment reviewers express for each. ABSA allows us to evaluate aggregated sentiments for each aspect of a given product or service and gain a more granular understanding of their quality. This is of especial interest for companies as it enables them to refine specifications for a given product or service, and leading to an improved overall customer satisfaction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Fine-grained opinion mining is also important for a variety of NLP tasks, including opinion-oriented question answering and opinion summarization. In practical terms, the ABSA task can be divided into two sub-steps, namely aspect extraction (AE) and (aspect level) sentiment classification (SC), which can be tackled in a pipeline fashion, or simultaneously (AESC). These tasks can be regarded as a token-level sequence labeling problem, and are generally tackled using supervised learning. The 2014 and 2015 SemEval workshops, co-located with COLING 2014 and NAACL 2015 respectively, included shared tasks on ABSA (Pontiki et al., 2014) and also followed this approach, which has also served as a way to encourage developments alongside this line of research (Mitchell et al., 2013; Irsoy and Cardie, 2014; Zhang et al., 2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 615, |
|
"end": 637, |
|
"text": "(Pontiki et al., 2014)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 760, |
|
"end": 783, |
|
"text": "(Mitchell et al., 2013;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 784, |
|
"end": 807, |
|
"text": "Irsoy and Cardie, 2014;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 808, |
|
"end": 827, |
|
"text": "Zhang et al., 2015)", |
|
"ref_id": "BIBREF50" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The flexibility provided by the deep learning setting has helped multi-modal approaches to bloom. Examples of this include tasks such as machine translation (Specia et al., 2016; Elliott et al., 2017) , word sense disambiguation (Chen et al., 2015) , visual question answering , language grounding (Beinborn et al.; Lazaridou et al., 2015) , and sentiment analysis (Poria et al., 2015; Zadeh et al., 2016) . Specifically in this last example, the task focuses on generalizing text-based sentiment analysis to opinionated videos, where three communicative modalities are present: language (spoken words), visual (gestures), and acoustic (voice).", |
|
"cite_spans": [ |
|
{ |
|
"start": 157, |
|
"end": 178, |
|
"text": "(Specia et al., 2016;", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 179, |
|
"end": 200, |
|
"text": "Elliott et al., 2017)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 229, |
|
"end": 248, |
|
"text": "(Chen et al., 2015)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 298, |
|
"end": 315, |
|
"text": "(Beinborn et al.;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 316, |
|
"end": 339, |
|
"text": "Lazaridou et al., 2015)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 365, |
|
"end": 385, |
|
"text": "(Poria et al., 2015;", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 386, |
|
"end": 405, |
|
"text": "Zadeh et al., 2016)", |
|
"ref_id": "BIBREF49" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Although reviews often come under the form of a written commentary, people are increasingly turning to video platforms such as YouTube looking for product reviews to help them shop. In this context, Marrese-Taylor et al. (2017) explored a new direction, arguing that video reviews are the natural evolution of written product reviews and introduced a dataset of annotated video product review transcripts. Similarly, Garcia et al. (2019b) recently presented an improved version of the POM movie review dataset (Park et al., 2014) , with annotated fine-grained opinions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 417, |
|
"end": 438, |
|
"text": "Garcia et al. (2019b)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 510, |
|
"end": 529, |
|
"text": "(Park et al., 2014)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Although the videos in these kinds of datasets represent a rich multi-modal source of opinions, the features of the language in them may fundamentally differ from written reviews given that information is conveyed through multiple channels (one for speech, one for gestures, one for facial expressions, one for vocal inflections, etc.) In these, different information channels complement each other to maximize the coherence and clarity of their message. This means that although the content of each channel may be comprehended in isolation, in theory we need to process the information in all the channels simultaneously to fully comprehend the message (Hasan et al., 2019) . In this context, information extracted from nonverbal language in videos, such as gestures and facial expressions, as well as from audio in the manner of voice inflections or pauses, and from scenes, object or images in the video, become critical for performing well.", |
|
"cite_spans": [ |
|
{ |
|
"start": 654, |
|
"end": 674, |
|
"text": "(Hasan et al., 2019)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In light of this, our paper introduces a multi-modal approach for fine-grained opinion mining. We conduct extensive experiments on two datasets built upon transcriptions of video reviews, Youtubean (Marrese-Taylor et al., 2017 ) and a fine-grain annotated version of the Persuasive Opinion Multimedia (POM) dataset (Park et al., 2014; Garcia et al., 2019b) , adapting them to our setting by associating timestamps to each annotated sentence using the video subtitles. Our results demonstrate the effectiveness of our proposed approach and show that by leveraging the additional modalities we can consistently obtain better performance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 198, |
|
"end": 226, |
|
"text": "(Marrese-Taylor et al., 2017", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 315, |
|
"end": 334, |
|
"text": "(Park et al., 2014;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 335, |
|
"end": 356, |
|
"text": "Garcia et al., 2019b)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our work is related to aspect extraction using deep learning, a task that is often tackled as a sequence labeling problem. In particular, our work is related to Irsoy and Cardie (2014) , who pioneered in the field by using multi-layered RNNs. Later, successfully adapted the architectures by Mesnil et al. (2013) which were originally developed for slot-filling in the context of Natural Language Understanding.", |
|
"cite_spans": [ |
|
{ |
|
"start": 161, |
|
"end": 184, |
|
"text": "Irsoy and Cardie (2014)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 292, |
|
"end": 312, |
|
"text": "Mesnil et al. (2013)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Literature offers related work on the usage of RNNs for open domain targeted sentiment (Mitchell et al., 2013) , where Zhang et al. (2015) experimented with neural CRF models using various RNN architectures on a dataset of informal language from Twitter.", |
|
"cite_spans": [ |
|
{ |
|
"start": 87, |
|
"end": 110, |
|
"text": "(Mitchell et al., 2013)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 119, |
|
"end": 138, |
|
"text": "Zhang et al. (2015)", |
|
"ref_id": "BIBREF50" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Regarding target-based sentiment analysis, the literature contains several ad-hoc models that account for the sentence structure and the position of the aspect on it (Tang et al., 2016a,b) . These approaches mainly use attention-augmented RNNs for solving the task. However, they require the location of the aspect to be known in advance and therefore are only useful in pipeline models, while instead we model aspect extraction and sentiment classification as a joint task or using multitasking.", |
|
"cite_spans": [ |
|
{ |
|
"start": 166, |
|
"end": 188, |
|
"text": "(Tang et al., 2016a,b)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "AESC has also often been tackled as a sequence labeling problem, mainly using Conditional Random Fields (CRFs) (Mitchell et al., 2013) . To model the problem in this fashion, collapsed or sentiment-bearing IOB labels (Zhang et al., 2015) are used. Pipeline models (i.e. task-independent model ensembles) have also been extensively studied by the same authors. Xu et al. (2014) performed AESC by modeling the linking relation between aspects and the sentiment-bearing phrases.", |
|
"cite_spans": [ |
|
{ |
|
"start": 111, |
|
"end": 134, |
|
"text": "(Mitchell et al., 2013)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 217, |
|
"end": 237, |
|
"text": "(Zhang et al., 2015)", |
|
"ref_id": "BIBREF50" |
|
}, |
|
{ |
|
"start": 360, |
|
"end": 376, |
|
"text": "Xu et al. (2014)", |
|
"ref_id": "BIBREF45" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "When it comes to the video review domain, there is related work on YouTube mining, mainly focused on exploiting user comments. For example, Wu et al. (2014) exploited crowdsourced textual data from timesynced commented videos, proposing a temporal topic model based on LDA. Tahara et al. (2010) introduced a similar approach for Nico Nico, using time-indexed social annotations to search for desirable scenes inside videos.", |
|
"cite_spans": [ |
|
{ |
|
"start": 140, |
|
"end": 156, |
|
"text": "Wu et al. (2014)", |
|
"ref_id": "BIBREF43" |
|
}, |
|
{ |
|
"start": 274, |
|
"end": 294, |
|
"text": "Tahara et al. (2010)", |
|
"ref_id": "BIBREF39" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "On the other hand, Severyn et al. (2014) proposed a systematic approach to mine user comments that relies on tree kernel models. Additionally, Krishna et al. (2013) performed sentiment analysis on YouTube comments related to popular topics using machine learning techniques, showing that the trends in users' sentiments is well correlated to the corresponding realworld events. Siersdorfer et al. (2010) presented an analysis of dependencies between comments and comment ratings, proving that community feedback in combination with term features in comments can be used for automatically determining the community acceptance of comments.", |
|
"cite_spans": [ |
|
{ |
|
"start": 19, |
|
"end": 40, |
|
"text": "Severyn et al. (2014)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 143, |
|
"end": 164, |
|
"text": "Krishna et al. (2013)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 378, |
|
"end": 403, |
|
"text": "Siersdorfer et al. (2010)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We also find some papers that have successfully attempted to use closed caption mining for video activity recognition (Gupta and Mooney, 2010) and scene segmentation (Gupta and Mooney, 2009) . Similar work has been done using closed captions to classify movies by genre (Brezeale and Cook, 2006) and summarize video programs (Brezeale and Cook, 2006) . Regarding multi-modal approaches for sentiment analysis, we see that previous work has focused mainly on sentiment classification, or the related task of emotion detection (Lakomkin et al., 2017) , where the CMU MOSI dataset (Zadeh et al., 2016) appears as the main resource. In this setting, the main problem is how to model and capture cross-modality interactions to predict the sentiment correctly. In this regard proposed a tensor fusion layer that can better capture cross-modality interactions between text, audio and video inputs, while modeled inter-dependencies across difference utterances of a single video, obtaining further improvements. Blanchard et al. (2018) are, to the best of our knowledge, the first to tackle scalable multi-modal sentiment classification using both visual and acoustic modalities. More recently Ghosal et al. (2018) proposed an RNNbased multi-modal approach that relies on attention to learn the contributing features among multi-utterance representations. On the other hand Pham et al. (2018) introduced multi-modal sequence-to-sequence models which perform specially well in bi-modal settings. Finally, Akhtar et al. (2019) proposed a multi-modal, multi-task approach in which the inputs from a video (text, acoustic and visual frames), are exploited for simultaneously predicting the sentiment and expressed emotions of an utterance. Our work is related to all of these approaches, but it is different in that we apply multi-modal techniques not only for sentiment classification, but also for aspect extraction.", |
|
"cite_spans": [ |
|
{ |
|
"start": 118, |
|
"end": 142, |
|
"text": "(Gupta and Mooney, 2010)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 166, |
|
"end": 190, |
|
"text": "(Gupta and Mooney, 2009)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 270, |
|
"end": 295, |
|
"text": "(Brezeale and Cook, 2006)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 325, |
|
"end": 350, |
|
"text": "(Brezeale and Cook, 2006)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 525, |
|
"end": 548, |
|
"text": "(Lakomkin et al., 2017)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 578, |
|
"end": 598, |
|
"text": "(Zadeh et al., 2016)", |
|
"ref_id": "BIBREF49" |
|
}, |
|
{ |
|
"start": 1004, |
|
"end": 1027, |
|
"text": "Blanchard et al. (2018)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 1186, |
|
"end": 1206, |
|
"text": "Ghosal et al. (2018)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 1366, |
|
"end": 1384, |
|
"text": "Pham et al. (2018)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Finally, Marrese-Taylor et al. 2017and Garcia et al. (2019b) contributed multi-modal datasets obtained from product and movie reviews respectively, specifically for the task of fine-grained opinion mining. Furthermore, Garcia et al. (2019a) recently used the latter to propose a hierarchical multi-modal model for opinion mining. Compared to them, our approach follows a more traditional setting for fine-grained opinion mining, while also offering a more general framework for the problem. Garcia et al. (2019a) utilize a single encoder that receives as input the concatenation of the features for each modality, for each token. This requires explicit alignment between the features of the different modalities at the token level. In contrast, since each modality is encoded separately in our approach, we only require the feature alignment to be at the sentence level.", |
|
"cite_spans": [ |
|
{ |
|
"start": 39, |
|
"end": 60, |
|
"text": "Garcia et al. (2019b)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 219, |
|
"end": 240, |
|
"text": "Garcia et al. (2019a)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Opinion mining can be performed at several levels of granularity, the most common ones being the sentence level, and the more fine-grained aspect level. Finegrained opinion mining can be further subdivided in two tasks: aspect extraction and aspect-level sentiment classification. The former deals with finding the aspects being referred to, and the latter with associating them with a sentiment.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task Description", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Previous work usually casts this task as a sequencelabeling problem, where models have to predict whether a token is a part of an aspect and infer its sentiment polarity (Mitchell et al., 2013; Zhang et al., 2015; . Depending on the dataset annotations, aspect categories are in some cases specified as well.", |
|
"cite_spans": [ |
|
{ |
|
"start": 170, |
|
"end": 193, |
|
"text": "(Mitchell et al., 2013;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 194, |
|
"end": 213, |
|
"text": "Zhang et al., 2015;", |
|
"ref_id": "BIBREF50" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task Description", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Formally, given a sentence s = [x 1 , . . . , x n ], we want to automatically annotate each token x i with its aspect membership and polarity. In the simpler case where we only want to perform Aspect Extraction, a common annotation scheme is to tag each token with a label y i \u2208 L AE where L AE = {I, O, B}. In this scheme, commonly known as IOB, O labels indicate that a token is not a member of an aspect, B labels indicate that a token is at the beginning of an aspect, and I labels indicate that the token is inside an aspect.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task Description", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Similarly, performing token-level Sentiment Classification only is equivalent to tagging each token with a label y i \u2208 L SC where L SC = {\u03c6, +, \u2212}, and \u03c6 denotes no sentiment, + denotes a positive polarity and \u2212 a negative one.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task Description", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "It is also possible to define a collapsed annotation scheme, where aspect membership and sentiment polarity are encoded in a single tag. We define the label set for this setting as L C = {O, B+, B\u2212, I+, I\u2212}. Table 1 shows the possible ways to annotate the sentence \"I love the saturated colors!\" under these three annotation schemes, where the aspect being referred to is \"saturated colors\". Table 1 : Label definition alternatives for the tasks in ABSA using sequence labeling.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 208, |
|
"end": 215, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 392, |
|
"end": 399, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Task Description", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "I love the saturated colors ! L AE O O O B I O L SC \u03c6 \u03c6 \u03c6 + + \u03c6 L C O O O B+ I+ O", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task Description", |
|
"sec_num": "3" |
|
}, |
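To make the three labeling schemes concrete, here is a minimal Python sketch that encodes the Table 1 example and decouples collapsed tags back into L_AE and L_SC tags, in the spirit of the simple heuristics mentioned in Section 5.3. The helper function and the "0" symbol standing in for the no-sentiment label are illustrative choices, not part of any released code.

```python
# Illustrative sketch of the three labeling schemes from Table 1.
tokens = ["I", "love", "the", "saturated", "colors", "!"]
l_ae = ["O", "O", "O", "B", "I", "O"]        # aspect extraction only
l_sc = ["0", "0", "0", "+", "+", "0"]        # token-level sentiment only ("0" = no sentiment)
l_c = ["O", "O", "O", "B+", "I+", "O"]       # collapsed: aspect membership + polarity

def decouple(collapsed_tags):
    """Split collapsed tags (e.g. 'B+') into separate AE and SC tags."""
    ae, sc = [], []
    for tag in collapsed_tags:
        if tag == "O":
            ae.append("O")
            sc.append("0")
        else:
            ae.append(tag[0])    # 'B' or 'I'
            sc.append(tag[1:])   # '+' or '-'
    return ae, sc

assert decouple(l_c) == (l_ae, l_sc)
```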
|
{ |
|
"text": "Labels can be further augmented with type information. For example used different tags for opinion targets (e.g. B-TARG), and opinion expressions (e.g., B-EXPR), however, we do not rely on this information.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task Description", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We propose a multi-modal approach for aspect extraction and sentiment classification that leverages video, audio and textual features. This approach assumes we have a video review v containing opinions, its extracted audio stream a, and a transcription of the audio into a sequence of sentences S. Further, each sentence s \u2208 S is annotated with its respective start and end times in the video effectively mapping them to a video segment v s \u2282 v and its corresponding audio segment a s \u2282 a. These segments do not necessarily cover the whole video i.e. \u222a v s \u2282 v since the reviews may include parts that have no speech and therefore no sentences are associated to those. Our end goal is to produce a sequence of labels l = [y 1 , . . . , y n ] for each sentence s = [x 1 , . . . , x n ] while exploiting the information contained in v s and a s . Figure 1 presents a high-level overview of our approach. We rely on an encoder-decoder paradigm to create separate representations for each modality (Cho et al., 2014) . The text encoding module generates a representation for each token in the input text, while the video and audio encoding layers produce utterancelevel representations from each modality.", |
|
"cite_spans": [ |
|
{ |
|
"start": 994, |
|
"end": 1012, |
|
"text": "(Cho et al., 2014)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 845, |
|
"end": 853, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Proposed Approach", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We propose combining these representations with an approach inspired by early-fusion , which allows for the word-level representations to interact with audio and visual features. Finally, a sequence labeling module is in charge of taking the final token-level representations and producing a token-level label. In the following sub-sections we describe each component of our model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proposed Approach", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "This module generates a representation of the natural language input so that the obtained representation is ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Text Encoding Module", |
|
"sec_num": "4.1" |
|
}, |
|
|
{ |
|
"text": "Sequence Labeling Figure 1 : Overview of our proposed approach for multi-modal opinion mining useful for the sequence labeling task. Our text encoder first maps each word x i into an embedded input sequence x = [x 1 , . . . , x n ], then projects this into a vector h t i \u2208 R dt , where d t corresponds to the hidden dimension of the obtained text representation. Although our text encoding module is generic, in this paper we implement it as a bi-directional GRU (Cho et al., 2014) , on top of pre-trained word embeddings, specifically GloVe (Pennington et al., 2014) , as follows.", |
|
"cite_spans": [ |
|
{ |
|
"start": 464, |
|
"end": 482, |
|
"text": "(Cho et al., 2014)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 543, |
|
"end": 568, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 18, |
|
"end": 26, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Text Encoding Module", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "h t i = BiGRU(x i , h t i\u22121 )", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Text Encoding Module", |
|
"sec_num": "4.1" |
|
}, |
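For reference, a minimal PyTorch sketch of such a text encoder is shown below: an embedding layer initialized with pre-trained GloVe vectors followed by a bidirectional GRU that yields one contextual vector h^t_i per token. The hidden size of 150 follows Section 5.2; the class name and remaining details are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    """Bi-directional GRU over pre-trained word embeddings (Eq. 1)."""
    def __init__(self, glove_weights, hidden_size=150):
        super().__init__()
        # glove_weights: FloatTensor [vocab_size, 300] with pre-trained GloVe vectors
        self.embedding = nn.Embedding.from_pretrained(glove_weights, freeze=False)
        self.bigru = nn.GRU(glove_weights.size(1), hidden_size,
                            batch_first=True, bidirectional=True)

    def forward(self, token_ids):
        # token_ids: LongTensor [batch, seq_len]
        x = self.embedding(token_ids)   # [batch, seq_len, 300]
        h_t, _ = self.bigru(x)          # [batch, seq_len, 2 * hidden_size]
        return h_t                      # one contextual vector per token
```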
|
{ |
|
"text": "We assume the existence of a finite set of time-ordered audio features a = [a 1 , . . . , a m ] extracted from each audio utterance a s , for instance with the procedure described in Section 5.2. We feed these vectors into another bi-directional GRU to add context to each time step, obtaining hidden states h a j \u2208 R da .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Audio Encoding Module", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "h a j = BiGRU(a j , h a j\u22121 )", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Audio Encoding Module", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "To obtain a condensed representation from the audio signal we again utilize mean pooling over the intermediate memory vectors, obtainingh a .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Audio Encoding Module", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We propose a video encoding layer that generates a visual representation summarizing spatio-temporal patterns directly from the raw input frames. Concretely, given a video segment ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Video Encoding Module", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "v = [v 1 , . . . , v T ], where v", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Video Encoding Module", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "h v k = BiGRU(v k , h v k\u22121 )", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Video Encoding Module", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We utilize an early fusion strategy similar to to aggregate the representations obtained from each modality. We concatenate the contextualized representation h t i for each token to the summarized representations of the additional modalities,h a andh v , and feed this final vector representation to an additional Bi-GRU:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Fusion Module", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "h i = BiGRU([h t i ;h a ;h v ], h i\u22121 )", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Fusion Module", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "As a result, our model now allows the representation of each word in the input sentence to interact with the audio and visual features, enabling it to learn potentially different ways to associate each word with the additional modalities. An alternative way to achieve this would be to utilize attention mechanisms to enforce such association behavior, however, we instead let the model learn this relation without using any additional inductive bias.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Fusion Module", |
|
"sec_num": "4.4" |
|
}, |
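A minimal sketch of this early-fusion step, under the definitions above, could look as follows: the pooled audio and video summaries are broadcast along the token dimension, concatenated with each token state, and passed through a further bidirectional GRU as in Eq. (4). Module and argument names are assumptions for illustration.

```python
import torch
import torch.nn as nn

class EarlyFusion(nn.Module):
    """Concatenate each token state with the pooled audio/video summaries (Eq. 4)."""
    def __init__(self, text_dim, audio_dim, video_dim, hidden_size=150):
        super().__init__()
        self.bigru = nn.GRU(text_dim + audio_dim + video_dim, hidden_size,
                            batch_first=True, bidirectional=True)

    def forward(self, h_text, h_audio_bar, h_video_bar):
        # h_text:      [batch, seq_len, text_dim]   token-level states h^t_i
        # h_audio_bar: [batch, audio_dim]           mean-pooled audio summary
        # h_video_bar: [batch, video_dim]           mean-pooled video summary
        seq_len = h_text.size(1)
        a = h_audio_bar.unsqueeze(1).expand(-1, seq_len, -1)
        v = h_video_bar.unsqueeze(1).expand(-1, seq_len, -1)
        fused, _ = self.bigru(torch.cat([h_text, a, v], dim=-1))
        return fused                    # [batch, seq_len, 2 * hidden_size]
```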
|
{ |
|
"text": "The main labeling module is a multi-layer perceptron guided by a self attention component. The self attention component enriches the representation h i with contextual information coming from every other sequence element by performing the following operations:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sequence Labeling Module", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "u i,j = v \u03b1 tanh(W \u03b1 [h i ; h j ] + b \u03b1 ) (5) \u03b1 i,j = softmax(u i,j )", |
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "Sequence Labeling Module", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "t i = n j=1 \u03b1 i,j \u2022 h j (7) o i = W l [h i ; t i ] + b l", |
|
"eq_num": "(8)" |
|
} |
|
], |
|
"section": "Sequence Labeling Module", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "Where o i is a vector associated to input x i , and v \u03b1 , W \u03b1 , W l and b \u03b1 , b l are trainable parameters. As shown, these vectors are obtained using both the corresponding aligned input h i and the attention-weighted vector t i .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sequence Labeling Module", |
|
"sec_num": "4.5" |
|
}, |
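The additive self-attention of Eqs. (5)-(8) can be written compactly as in the sketch below, which scores every pair of positions, normalizes over j, and projects the concatenation [h_i; t_i] to the emission scores o_i. Tensor shapes and the class name are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AdditiveSelfAttention(nn.Module):
    """Self-attention of Eqs. (5)-(8): score every pair (i, j), mix, then project."""
    def __init__(self, dim, num_labels):
        super().__init__()
        self.w_alpha = nn.Linear(2 * dim, dim)      # W_alpha, b_alpha
        self.v_alpha = nn.Linear(dim, 1, bias=False)
        self.w_l = nn.Linear(2 * dim, num_labels)   # W_l, b_l

    def forward(self, h):
        # h: [batch, n, dim] fused token representations
        n = h.size(1)
        h_i = h.unsqueeze(2).expand(-1, -1, n, -1)   # [batch, n, n, dim]
        h_j = h.unsqueeze(1).expand(-1, n, -1, -1)   # [batch, n, n, dim]
        u = self.v_alpha(torch.tanh(self.w_alpha(torch.cat([h_i, h_j], dim=-1))))
        alpha = torch.softmax(u.squeeze(-1), dim=-1)  # Eq. (6), normalized over j
        t = torch.einsum("bij,bjd->bid", alpha, h)    # Eq. (7)
        return self.w_l(torch.cat([h, t], dim=-1))    # Eq. (8): emission scores o_i
```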
|
{ |
|
"text": "Following previous work, we feed these vectors into a Linear Chain CRF layer, which performs the final labeling. Neural CRFs have proven to be especially effective for various sequence segmentation or labeling tasks in NLP (Ma and Hovy, 2016; , and have also been used successfully in the past for open domain opinion mining (Zhang et al., 2015) . Concretely, we model emission and transition potentials as follows.", |
|
"cite_spans": [ |
|
{ |
|
"start": 223, |
|
"end": 242, |
|
"text": "(Ma and Hovy, 2016;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 325, |
|
"end": 345, |
|
"text": "(Zhang et al., 2015)", |
|
"ref_id": "BIBREF50" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sequence Labeling Module", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03c8 i := e(x i , y i ; \u03b8) = h i \u2022 y i (9) \u03c8 i,j := q(y i , y j ; \u03a0) = \u03a0 yi,yj", |
|
"eq_num": "(10)" |
|
} |
|
], |
|
"section": "Sequence Labeling Module", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "Where h i is the fused hidden state for position i and \u03b8 denotes the parameters involved in computing this vector, y i is a one-hot vector associated to y i , and \u03a0 is a trainable matrix of size L AE or L C depending on the setting -see Section 5 for more details on this. The score function of a given input sentence s and output sequence of labels l is defined as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sequence Labeling Module", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "\u03a6(s, l) = n i=1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sequence Labeling Module", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "log e(x, y i ; \u03b8)+log q(y i , y i\u22121 ; \u03a0) (11)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sequence Labeling Module", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "In this work we directly optimize the negative loglikelihood associated to this score during training, and apply Viterbi decoding during inference to obtain the most likely labels.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sequence Labeling Module", |
|
"sec_num": "4.5" |
|
}, |
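One possible way to realize this objective is with an off-the-shelf linear-chain CRF layer on top of the emission scores o_i; the sketch below uses the third-party pytorch-crf package, which is an assumed implementation choice for illustration and not necessarily what the authors used.

```python
import torch
from torchcrf import CRF  # third-party package: pytorch-crf

num_tags = 5                     # e.g. the collapsed label set L_C = {O, B+, B-, I+, I-}
crf = CRF(num_tags, batch_first=True)

emissions = torch.randn(8, 20, num_tags)      # o_i for a batch of 8 sentences of length 20
tags = torch.randint(0, num_tags, (8, 20))    # gold label sequences
mask = torch.ones(8, 20, dtype=torch.bool)    # real tokens vs. padding

loss = -crf(emissions, tags, mask=mask)       # negative log-likelihood of Eq. (11)
loss.backward()                               # gradients flow into the transition matrix

best_paths = crf.decode(emissions, mask=mask) # Viterbi decoding at inference time
```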
|
{ |
|
"text": "We evaluate our proposal in several experimental settings based on previous work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "\u2022 Simple: We only focus on the task of aspect extraction, following a sequence labeling approach with regular IOB tags in L AE .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "\u2022 Collapsed Aspect-Level (CAL): We perform aspect extraction and aspect-level sentiment classification with a sequence labeling model, utilizing sentiment-bearing IOB tags in L C .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "\u2022 Collapsed Sentence-Level (CSL): Like the previous setting, but we only keep sentence examples that contain a single sentiment, so we can perform sentence-level sentiment classification. Again, we use sequence labeling with sentiment-bearing IOB tags in L C .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "\u2022 Joint Sentence-Level (JSL): We use a multitasking approach and perform sequence labeling for aspect extraction with regular IOB tags in L AE , and sequence classification to predict the sentence-level sentiment. In this sense, we add a final 3-layer fully-connected neural network that receives a mean-pooled representation of the fusion layerh = 1 n n i=1 h i and predicts a sentence-level sentiment. As loss function we utilize the mini-batch average cross-entropy with the gold standard class label. The total loss is the sum of the losses for sequence labeling and sequence classification.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "5" |
|
}, |
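A minimal sketch of the JSL sentence-level head and the combined objective, assuming the fused token states h_i and a CRF negative log-likelihood as in Section 4.5, is given below; layer sizes and names are illustrative.

```python
import torch
import torch.nn as nn

class SentenceSentimentHead(nn.Module):
    """3-layer MLP over the mean-pooled fused states, as used in the JSL setting."""
    def __init__(self, dim, num_classes=3, hidden=150):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, fused):          # fused: [batch, n, dim]
        pooled = fused.mean(dim=1)     # h_bar = (1/n) * sum_i h_i
        return self.mlp(pooled)        # sentence-level sentiment logits

# total loss = sequence labeling loss (CRF NLL) + sentence-level cross-entropy, e.g.:
# loss = crf_nll + nn.functional.cross_entropy(sentiment_logits, sentiment_labels)
```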
|
{ |
|
"text": "Previous work has also shown that most sentences present a single aspect, and therefore a single sentiment (Marrese-Taylor et al., 2017; Zuo et al., 2018; Zhao et al., 2010) , which motivates the introduction of the CSL and JSL settings. For these cases we filtered out sentences that do not fit this description.", |
|
"cite_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 136, |
|
"text": "(Marrese-Taylor et al., 2017;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 137, |
|
"end": 154, |
|
"text": "Zuo et al., 2018;", |
|
"ref_id": "BIBREF52" |
|
}, |
|
{ |
|
"start": 155, |
|
"end": 173, |
|
"text": "Zhao et al., 2010)", |
|
"ref_id": "BIBREF51" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We report results on two different datasets containing fine-grained annotations for both opinion targets and sentiment.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "First, we work with the Youtubean dataset (Marrese-Taylor et al., 2017), which contains sentences extracted from YouTube video annotated with aspects and their respective sentiments. The data comes from the userprovided closed-captions derived from 7 different long product review videos about a cell phone, totaling up to 71 minutes of audiovisual data. In total there are 578 long sentences from free spoken descriptions of the product, on average each sentence consist of 20 words. The dataset has a total of 525 aspects, with more than 66% of the sentences containing at least one mention.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Second, we work with the fine-grained annotations gathered for the POM dataset by Garcia et al. (2019b) . This dataset is composed of 1000 videos containing reviews where a single speaker in frontal view makes a critique of a movie that he/she has watched. There are videos from 372 unique speakers, with 600 different movie titles being reviewed. Each video has an average length of about 94 seconds and contains 15.1 sentences on average. The fine-grained annotations we utilize are available for each token indicating if it is responsible for the understanding of the polarity of the sentence, and whether it describes the target of an opinion; each sentence has an average of 22.5 tokens. We assume that whenever there is an overlap between the span annotations for a given target and a certain polarity, the corresponding polarity can be assigned to that target, otherwise it is labeled as neutral.", |
|
"cite_spans": [ |
|
{ |
|
"start": 82, |
|
"end": 103, |
|
"text": "Garcia et al. (2019b)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Since the annotated sentences in both datasets are not associated to specific timestamps, in this work we propose a method based on heuristics to rescue the video segments that correspond to each annotated sentence by leveraging video subtitles (or closed-captions.) As shown in Figure 2 , closed captions or subtitles are composed of chunks that contain: (1) A numeric counter identifying each chunk, (2) The time at which the subtitle should appear on the screen followed by --> and the time when it should disappear, (3) The subtitle text itself on one or more lines, and (4) A blank line containing no text, indicating the end of this subtitle. These chunks exhibit a large variance in terms of their length, meaning that sentences are usually split into many chunks.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 279, |
|
"end": 287, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Starting from a subtitle file associated to a given product review video, we apply a fuzzy-matching approach between each annotated sentence for that review and each closed caption chunk. This is repeated for each one of the videos in our datasets. Whenever an annotated sentence matches exactly or has over 90% similarity with a closed caption chunk, its time-span is associated to that sentence. Finally, the \"start\" and \"end\" timestamps assigned to each sentence are defined by the start and end time spans of their first and last associated closed captions, sorted by time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "5.1" |
|
}, |
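This timestamp-recovery heuristic can be sketched with the Python standard library only: parse the subtitle chunks, compare each annotated sentence against every chunk with difflib, and keep the time spans of the chunks whose similarity reaches the 90% threshold. The regular expression and helper names are illustrative assumptions, not the authors' code.

```python
import re
from difflib import SequenceMatcher

CHUNK_RE = re.compile(
    r"(\d+)\s*\n(\d{2}:\d{2}:\d{2},\d{3}) --> (\d{2}:\d{2}:\d{2},\d{3})\s*\n(.*?)(?:\n\n|\Z)",
    re.S,
)

def parse_srt(srt_text):
    """Return a list of (start, end, text) tuples, one per subtitle chunk."""
    return [(m.group(2), m.group(3), " ".join(m.group(4).split()))
            for m in CHUNK_RE.finditer(srt_text)]

def align_sentence(sentence, chunks, threshold=0.9):
    """Assign a (start, end) span to a sentence from its best-matching chunks."""
    matched = [(start, end) for start, end, text in chunks
               if SequenceMatcher(None, sentence.lower(), text.lower()).ratio() >= threshold]
    if not matched:
        return None
    starts, ends = zip(*matched)
    # first start and last end of the matched chunks (fixed-width timestamps sort correctly)
    return min(starts), max(ends)
```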
|
{ |
|
"text": "Pre-processing for the natural language input is performed utilizing spacy 1 , which we use mainly to tokenize. Input sentences are trimmed to a maximum length of 300 tokens, and tokens with frequency lower than 1 are replaced with a special UNK marker. To work with the POM dataset, which is already tokenized, we first convert it to the ABSA format, which is tokenization agnostic, and then we process it.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation Details", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Although our audio encoder is generic, in this work we follow Lakomkin et al. (2017) and use Fast Fourier Transform spectrograms to extract rich vectors from each audio segment. Specifically, we use a window length of 1024 points and 512 points overlap, giving us vectors of size 513. Alternative audio feature extractors such as Degottex et al. (2014) could also be utilized.", |
|
"cite_spans": [ |
|
{ |
|
"start": 62, |
|
"end": 84, |
|
"text": "Lakomkin et al. (2017)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 330, |
|
"end": 352, |
|
"text": "Degottex et al. (2014)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation Details", |
|
"sec_num": "5.2" |
|
}, |
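The following sketch reproduces this kind of feature extraction with scipy: a spectrogram with a 1024-point window and 512-point overlap yields 513-dimensional frame vectors. The file name and the log compression are illustrative assumptions.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, audio = wavfile.read("review_segment.wav")   # placeholder path
if audio.ndim > 1:                                 # stereo -> mono
    audio = audio.mean(axis=1)

# 1024-point window with 512-point overlap -> 1024 // 2 + 1 = 513 frequency bins
freqs, times, spec = spectrogram(audio, fs=rate, nperseg=1024, noverlap=512)
frames = np.log1p(spec).T   # [num_frames, 513] time-ordered audio features a_1..a_m
```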
|
{ |
|
"text": "On the other hand, in this work we model video feature extraction using I3D . This method inflates the 2D filters of a wellknown network e.g. Inception Ioffe and Szegedy, 2015) or ResNet (He et al., 2016) for image classification to obtain 3D filters, helping us better exploit the spatio-temporal nature of video. We first pre-process the videos by extracting features of size 1024 using I3D with average pooling, taking as input the raw frames of dimension 256 \u00d7 256, at 25 fps. We use the model pre-trained on the kinetics400 dataset (Kay et al., 2017) released by the same authors. Despite our choice to obtain video features, again we note that our video encoder is generic, so other alternatives such as C3D (Tran et al., 2015 ) could be utilized.", |
|
"cite_spans": [ |
|
{ |
|
"start": 152, |
|
"end": 176, |
|
"text": "Ioffe and Szegedy, 2015)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 187, |
|
"end": 204, |
|
"text": "(He et al., 2016)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 537, |
|
"end": 555, |
|
"text": "(Kay et al., 2017)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 714, |
|
"end": 732, |
|
"text": "(Tran et al., 2015", |
|
"ref_id": "BIBREF42" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation Details", |
|
"sec_num": "5.2" |
|
}, |
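At a high level, the per-segment video features can be obtained as sketched below, where i3d_model is a placeholder for any Kinetics-400 pre-trained I3D implementation that returns a 1024-dimensional average-pooled descriptor per chunk of frames; the chunk size is an illustrative assumption.

```python
import numpy as np

def extract_video_features(frames, i3d_model, chunk_size=16):
    """frames: float array [num_frames, 256, 256, 3] sampled at 25 fps.

    Returns a sequence of 1024-d descriptors v_1..v_l, one per chunk of frames.
    i3d_model is a hypothetical callable standing in for a Kinetics-400
    pre-trained I3D network with average pooling.
    """
    features = []
    for start in range(0, len(frames) - chunk_size + 1, chunk_size):
        chunk = frames[start:start + chunk_size]       # [chunk_size, 256, 256, 3]
        features.append(i3d_model(chunk[np.newaxis]))  # -> [1, 1024]
    return np.concatenate(features, axis=0)            # [l, 1024]
```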
|
{ |
|
"text": "Finally, all of our models are trained in an end-toend fashion using Adam (Kingma and Ba, 2014) with a learning rate of 10 \u22123 . To prevent over-fitting, we add dropout to the text encoding layer. We use a batch size of 8 for the Youtubean dataset, and of 64 for the POM dataset. The language encoder uses a hidden state of size 150, and we fine-tune the pre-trained GloVe.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation Details", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "On each case we compare the performance of our proposed approach against a baseline model that does not consider multi-modality, does not utilize pretrained GloVe word embeddings and is based on a cross-entropy loss, in which case we simply utilize the mini-batch average cross-entropy between\u0177 i = softmax(o i ) and the gold standard one-hot encoded labels y i , a vector that is the size of the tag label vocabulary for the corresponding task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation Details", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Since the size of Youtubean is relatively small, all our experiments in this dataset are evaluated using 5-fold cross validation. In the case of the POM dataset, we report performance on the validation and test sets averaging results for 5 different random seeds. In both cases we compare models using paired two-sided t-tests to check for statistical significance of the differences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "To evaluate our sequence labeling tasks we used the CoNLL conlleval script, taking the aspect extraction F1-score as our model selection metric for early stopping. To perform joint aspect extraction and sentiment classification, we considered positive, negative and neutral as sentiment classes, and decoupled the IOB collapsed tags using simple heuristics. Concretely, we recover the aspect extraction F1-score as well as classification performances for each sentiment class.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "To evaluate the effectiveness of our proposals, we perform several ablation studies on the Simple setting for the Youtubean dataset. Using variations of our baseline with pre-trained GLoVe embeddings (GV), conditional random field (CRF), audio and video modalities (A+V). Experiments are also performed using 5-fold cross-validation, and comparisons are always tested for significance using paired two-sided t-tests.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "As Table 4 shows, although every proposed model variation performs better than the baseline, only the model uses video and audio modalities obtains a statistically superior performance. We also see that our proposed multi-modal variation is the one that obtains the best performance, also being statistically significant at the highest level of confidence. We believe these results show that our proposed multi-modal architecture is not only able to exploit the features in the audio and video inputs, but it can also leverage the information in the pre-trained word embeddings and benefit from having an inductive bias that is tailored for the task at hand, in this case, with a loss based on structured prediction for sequence labeling. Table 2 summarizes our results for the Youtubean dataset, where we can see that our proposed multimodal approach is able to outperform the baseline model for all settings in the aspect extraction task. When it comes to sentiment classification, our multimodal approaches do not obtain significant performance gain in all cases, sometimes performing worse although without statistical significance.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 739, |
|
"end": 746, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We also compare our results to the performance reported by Marrese-Taylor et al. (2017) , who experimented on the Simple and CSL settings. Their models also use pre-trained word embedding -although different from Table 4 : Ablation study on aspect extraction on the simple setting. *** denotes differences against the only text model (T) results are statistically significant at 99% confidence, ** at 95% and * at 90%. (A + V) refers to the audio and video modalities, (GV) stands for GLoVe embeddings and (CRF) for the model trained using the Conditional Random Fields loss.", |
|
"cite_spans": [ |
|
{ |
|
"start": 59, |
|
"end": 87, |
|
"text": "Marrese-Taylor et al. (2017)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 213, |
|
"end": 220, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "GloVe-and as input they additionally receives binary features derived from POS tags and other word-level cues. We note, however, that they only experimented with a maximum length of 200 tokens, which makes our results not directly comparable. Their performance on aspect extraction for the Simple and CAL tasks are 0.561 and 0.555 F1-Score respectively, both of which are lower than ours. In terms of sentiment classification, they report results for each sentiment class with F1-Scores of 0.523, 0.149 and 0.811 for the positive, Table 5 : Results for the validation set of the POM dataset, where *** denotes results are statistically significant at 99% confidence, ** at 95% and * at 90%.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 531, |
|
"end": 538, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "negative and neutral classes, respectively. Our model is able to outperform this baseline, with a cross-class average F1-Score of 0.718. We do not deepen the analysis in this regard, as numbers are difficult to interpret without statistical testing. Table 5 and Table 3 summarize our results for the POM dataset for the validation and test splits respectively. Compared to the previous dataset we see similar results where our multi-modal approach consistently outperforms the baseline for aspect extraction, but with the gains being comparatively smaller. We also see that our model is able to significantly outperform the baseline in the sentiment classification tasks at least in two of out the three settings. In terms of previous work, our results cannot be directly compared to Garcia et al. (2019a) and Garcia et al. (2019b) as their problem setting is different from ours.", |
|
"cite_spans": [ |
|
{ |
|
"start": 784, |
|
"end": 805, |
|
"text": "Garcia et al. (2019a)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 810, |
|
"end": 831, |
|
"text": "Garcia et al. (2019b)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 250, |
|
"end": 269, |
|
"text": "Table 5 and Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "On a more broad perspective, we think the performance differences across datasets are related to the nature of each dataset. Meanwhile Youtubean contains reviews about actual physical products, which are often shown in the videos at the same time the reviewer is speaking, the POM dataset contains movie reviews where the speakers directly face the camera during most of the video, without utilizing any additional support material. As a result, the video reviews in the Youtubean dataset mainly focus on capturing images of the products under discussion, with relatively fewer scenes showing the reviewer. This means that there may be few visual cues in the manner of facial expressions or other specific actions that the models could exploit in order to perform better at the sentiment classification task, but more cues useful for aspect extraction. This situation is reverted in the POM dataset, which could explain why our models tend to perform better for sentiment classification, but offering smaller gains for the AE task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We also think performance differences across datasets are to some extent explained by the nature of the annotations on each case. The annotation guidelines utilized to elaborate each dataset are actually quite different, with the annotations in the Youtubean dataset closely following those of the well-known SemEval datasets, which are target-centric and the POM standards substantially diverging from this. Concretely, Garcia et al. (2019b) propose a two-level annotation method, where \"the smallest span of words that contains all the words necessary for the recognition of an opinion\" are to be annotated. As a result, aspects annotated in the POM dataset often include pronouns which are more difficult to identify as aspects, often requiring co-reference resolution. With regards to aspect polarity, while it can be extracted directly from the Youtubean annotations, in the case of POM we needed some pre-processing as target and sentiment are annotated using independent text spans.", |
|
"cite_spans": [ |
|
{ |
|
"start": 421, |
|
"end": 442, |
|
"text": "Garcia et al. (2019b)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Qualitative results of the POM and Youtubean dataset in a multitask CAL can be seen in Figure 3 and 4 respectively, results suggest that the method learn to use the information from additional modalities and enhance the sentiment and aspect prediction.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 87, |
|
"end": 95, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Finally, as we observe that our models tend to obtain bigger gains on the AE tasks rather than on SC, we think this behavior can be partially attributed to the inductive bias of our model, which makes it specially suitable for sequence segmentation tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In this paper we have presented a multi-modal approach for fine-grained opinion mining, introducing a modular architecture that utilizes features derived from the audio, video frames and language transcription of video reviews to perform aspect extraction and sentiment classification at the sentence level. To test our proposals we have taken two datasets built upon video review transcriptions containing fine-grained opinions, and introduced a technique that leverages the video subtitles to associate timestamps to each annotated sentence. Our results offer empirical evidence showing that the additional modalities contain useful information that can be exploited by our models to offer increased performance for both aspect extraction and sentiment classification, consistently outperforming text-only baselines.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "7" |
|
}, |
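The subtitle-based timestamp association mentioned above can be illustrated with the following minimal sketch, which parses SubRip (.srt) chunks and assigns to an annotated sentence the time span of the chunk sharing the most words with it. The parse_srt and locate_sentence helpers are hypothetical names, and the exact alignment procedure used in the paper may differ (for instance, a sentence may span several chunks).

```python
# Minimal sketch of associating a timestamp with an annotated sentence by
# matching it against SubRip (.srt) subtitle chunks. Illustrative only; the
# helper names and the word-overlap heuristic are assumptions.

import re
from typing import List, Tuple

SRT_TIME = re.compile(
    r"(\d{2}):(\d{2}):(\d{2}),(\d{3}) --> (\d{2}):(\d{2}):(\d{2}),(\d{3})"
)


def parse_srt(srt_text: str) -> List[Tuple[float, float, str]]:
    """Return (start_sec, end_sec, text) for every subtitle chunk."""
    chunks = []
    for block in srt_text.strip().split("\n\n"):
        lines = block.strip().split("\n")
        if len(lines) < 3:
            continue
        m = SRT_TIME.match(lines[1])
        if not m:
            continue
        h1, m1, s1, ms1, h2, m2, s2, ms2 = map(int, m.groups())
        start = h1 * 3600 + m1 * 60 + s1 + ms1 / 1000.0
        end = h2 * 3600 + m2 * 60 + s2 + ms2 / 1000.0
        chunks.append((start, end, " ".join(lines[2:])))
    return chunks


def locate_sentence(sentence: str,
                    chunks: List[Tuple[float, float, str]]) -> Tuple[float, float]:
    """Time span of the chunk that shares the most words with the sentence."""
    words = set(sentence.lower().split())
    scored = [(len(words & set(text.lower().split())), start, end)
              for start, end, text in chunks]
    _, start, end = max(scored)  # chunk with the largest word overlap
    return start, end
```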
|
{ |
|
"text": "For future work, we are interested in exploring other ways to capture cross-modal interactions, exploit the temporal relationship between the representations of different modalities, and test alternative ways to better deal with our multi-task settings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "https://spacy.io", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We are grateful for the support provided by the NVIDIA Corporation, donating two of the GPUs used for this research.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Multi-task Learning for Multimodal Emotion Recognition and Sentiment Analysis", |
|
"authors": [ |
|
{ |
|
"first": "Dushyant", |
|
"middle": [], |
|
"last": "Md Shad Akhtar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deepanway", |
|
"middle": [], |
|
"last": "Chauhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Soujanya", |
|
"middle": [], |
|
"last": "Ghosal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Poria", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "370--379", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1034" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Md Shad Akhtar, Dushyant Chauhan, Deepanway Ghosal, Soujanya Poria, Asif Ekbal, and Pushpak Bhattacharyya. 2019. Multi-task Learning for Multi- modal Emotion Recognition and Sentiment Analy- sis. In Proceedings of the 2019 Conference of the North, pages 370-379, Minneapolis, Minnesota. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Multimodal Grounding for Language Processing", |
|
"authors": [ |
|
{ |
|
"first": "Lisa", |
|
"middle": [], |
|
"last": "Beinborn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Teresa", |
|
"middle": [], |
|
"last": "Botschen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iryna", |
|
"middle": [], |
|
"last": "Gurevych", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lisa Beinborn, Teresa Botschen, and Iryna Gurevych. Multimodal Grounding for Language Processing. page 15.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Getting the subtext without the text: Scalable multimodal sentiment classification from visual and acoustic modalities", |
|
"authors": [ |
|
{ |
|
"first": "Nathaniel", |
|
"middle": [], |
|
"last": "Blanchard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Moreira", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aparna", |
|
"middle": [], |
|
"last": "Bharati", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Walter", |
|
"middle": [], |
|
"last": "Scheirer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of Grand Challenge and Workshop on Human Multimodal Language (Challenge-HML)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--10", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-3301" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nathaniel Blanchard, Daniel Moreira, Aparna Bharati, and Walter Scheirer. 2018. Getting the subtext with- out the text: Scalable multimodal sentiment classifi- cation from visual and acoustic modalities. In Pro- ceedings of Grand Challenge and Workshop on Hu- man Multimodal Language (Challenge-HML), pages 1-10, Melbourne, Australia. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Using closed captions and visual features to classify movies by genre", |
|
"authors": [ |
|
{ |
|
"first": "Darin", |
|
"middle": [], |
|
"last": "Brezeale", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Diane", |
|
"middle": [], |
|
"last": "Cook", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 7th International Workshop on Multimedia Data Mining (MDM/KDD06): Poster Session", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Darin Brezeale and Diane Cook. 2006. Using closed captions and visual features to classify movies by genre. In Proceedings of the 7th International Work- shop on Multimedia Data Mining (MDM/KDD06): Poster Session, Washington, DC, USA. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Quo vadis, action recognition? a new model and the kinetics dataset", |
|
"authors": [ |
|
{ |
|
"first": "Joao", |
|
"middle": [], |
|
"last": "Carreira", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Zisserman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "CVPR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joao Carreira and Andrew Zisserman. 2017. Quo vadis, action recognition? a new model and the ki- netics dataset. In CVPR.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Reading Wikipedia to Answer Open-Domain Questions", |
|
"authors": [ |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Fisch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antoine", |
|
"middle": [], |
|
"last": "Bordes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1870--1879", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P17-1171" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to Answer Open- Domain Questions. pages 1870-1879.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Sense Discovery via Co-Clustering on Images and Text", |
|
"authors": [ |
|
{ |
|
"first": "Xinlei", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Ritter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abhinav", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Mitchell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5298--5306", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xinlei Chen, Alan Ritter, Abhinav Gupta, and Tom Mitchell. 2015. Sense Discovery via Co-Clustering on Images and Text. pages 5298-5306.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Learning Phrase Representations using RNN EncoderDecoder for Statistical Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bart", |
|
"middle": [], |
|
"last": "Van Merrienboer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Caglar", |
|
"middle": [], |
|
"last": "Gulcehre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fethi", |
|
"middle": [], |
|
"last": "Bougares", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Holger", |
|
"middle": [], |
|
"last": "Schwenk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1724--1734", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kyunghyun Cho, Bart van Merrienboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning Phrase Representations using RNN EncoderDecoder for Statistical Machine Translation. In Proceed- ings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724-1734, Doha, Qatar. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Covarepa collaborative voice analysis repository for speech technologies", |
|
"authors": [ |
|
{ |
|
"first": "Gilles", |
|
"middle": [], |
|
"last": "Degottex", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Kane", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Drugman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tuomo", |
|
"middle": [], |
|
"last": "Raitio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Scherer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "2014 ieee international conference on acoustics, speech and signal processing (icassp)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "960--964", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gilles Degottex, John Kane, Thomas Drugman, Tuomo Raitio, and Stefan Scherer. 2014. Covarepa col- laborative voice analysis repository for speech tech- nologies. In 2014 ieee international conference on acoustics, speech and signal processing (icassp), pages 960-964. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Findings of the Second Shared Task on Multimodal Machine Translation and Multilingual Image Description", |
|
"authors": [ |
|
{ |
|
"first": "Desmond", |
|
"middle": [], |
|
"last": "Elliott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stella", |
|
"middle": [], |
|
"last": "Frank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Loc", |
|
"middle": [], |
|
"last": "Barrault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fethi", |
|
"middle": [], |
|
"last": "Bougares", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucia", |
|
"middle": [], |
|
"last": "Specia", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the Second Conference on Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "215--233", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W17-4718" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Desmond Elliott, Stella Frank, Loc Barrault, Fethi Bougares, and Lucia Specia. 2017. Findings of the Second Shared Task on Multimodal Machine Translation and Multilingual Image Description. In Proceedings of the Second Conference on Machine Translation, pages 215-233, Copenhagen, Denmark. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "From the Token to the Review: A Hierarchical Multimodal approach to Opinion Mining", |
|
"authors": [ |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Garcia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierre", |
|
"middle": [], |
|
"last": "Colombo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Slim", |
|
"middle": [], |
|
"last": "Florence D'alch\u00e9-Buc", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chlo\u00e9", |
|
"middle": [], |
|
"last": "Essid", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Clavel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5542--5551", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1556" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexandre Garcia, Pierre Colombo, Florence d'Alch\u00e9- Buc, Slim Essid, and Chlo\u00e9 Clavel. 2019a. From the Token to the Review: A Hierarchical Multimodal ap- proach to Opinion Mining. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5542-5551, Hong Kong, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "A multimodal movie review corpus for fine-grained opinion mining", |
|
"authors": [ |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Garcia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Slim", |
|
"middle": [], |
|
"last": "Essid", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Florence D'alch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chlo", |
|
"middle": [], |
|
"last": "Buc", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Clavel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1902.10102[cs].ArXiv:1902.10102" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexandre Garcia, Slim Essid, Florence d'Alch Buc, and Chlo Clavel. 2019b. A multimodal movie review corpus for fine-grained opinion mining. arXiv:1902.10102 [cs]. ArXiv: 1902.10102.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Contextual Inter-modal Attention for Multi-modal Sentiment Analysis", |
|
"authors": [ |
|
{ |
|
"first": "Deepanway", |
|
"middle": [], |
|
"last": "Ghosal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shad", |
|
"middle": [], |
|
"last": "Md", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dushyant", |
|
"middle": [], |
|
"last": "Akhtar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Soujanya", |
|
"middle": [], |
|
"last": "Chauhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Poria", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3454--3466", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1382" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Deepanway Ghosal, Md Shad Akhtar, Dushyant Chauhan, Soujanya Poria, Asif Ekbal, and Pushpak Bhattacharyya. 2018. Contextual Inter-modal Atten- tion for Multi-modal Sentiment Analysis. In Pro- ceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing, pages 3454- 3466, Brussels, Belgium. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Using closed captions to train activity recognizers that improve video retrieval", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Mooney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Computer Vision and Pattern Recognition Workshops", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "30--37", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/CVPRW.2009.5204202" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Gupta and R.J. Mooney. 2009. Using closed cap- tions to train activity recognizers that improve video retrieval. In Computer Vision and Pattern Recogni- tion Workshops, 2009. CVPR Workshops 2009. IEEE Computer Society Conference on, pages 30-37.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Using closed captions as supervision for video activity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Sonal", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Raymond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mooney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence (AAAI-2010)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1083--1088", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sonal Gupta and Raymond J. Mooney. 2010. Us- ing closed captions as supervision for video activity recognition. In Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence (AAAI- 2010), pages 1083-1088, Atlanta, GA.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Md Iftekhar Tanveer, Louis-Philippe Morency, and Mohammed (Ehsan) Hoque", |
|
"authors": [ |
|
{ |
|
"first": "Wasifur", |
|
"middle": [], |
|
"last": "Md Kamrul Hasan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amirali", |
|
"middle": [], |
|
"last": "Rahman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianyuan", |
|
"middle": [], |
|
"last": "Bagher Zadeh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Zhong", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2046--2056", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1211" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Md Kamrul Hasan, Wasifur Rahman, AmirAli Bagher Zadeh, Jianyuan Zhong, Md Iftekhar Tanveer, Louis-Philippe Morency, and Mo- hammed (Ehsan) Hoque. 2019. UR-FUNNY: A multimodal language dataset for understanding humor. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2046-2056, Hong Kong, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Deep residual learning for image recognition", |
|
"authors": [ |
|
{ |
|
"first": "Kaiming", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiangyu", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shaoqing", |
|
"middle": [], |
|
"last": "Ren", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jian", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", |
|
"authors": [ |
|
{ |
|
"first": "Sergey", |
|
"middle": [], |
|
"last": "Ioffe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Szegedy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1502.03167" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sergey Ioffe and Christian Szegedy. 2015. Batch nor- malization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Opinion Mining with Deep Recurrent Neural Networks", |
|
"authors": [ |
|
{ |
|
"first": "Ozan", |
|
"middle": [], |
|
"last": "Irsoy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "720--728", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ozan Irsoy and Claire Cardie. 2014. Opinion Mining with Deep Recurrent Neural Networks. In Proceed- ings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 720-728, Doha, Qatar. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "The kinetics human action video dataset", |
|
"authors": [ |
|
{ |
|
"first": "Will", |
|
"middle": [], |
|
"last": "Kay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jo\u00e3o", |
|
"middle": [], |
|
"last": "Carreira", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karen", |
|
"middle": [], |
|
"last": "Simonyan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chloe", |
|
"middle": [], |
|
"last": "Hillier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sudheendra", |
|
"middle": [], |
|
"last": "Vijayanarasimhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fabio", |
|
"middle": [], |
|
"last": "Viola", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Green", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trevor", |
|
"middle": [], |
|
"last": "Back", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Natsev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mustafa", |
|
"middle": [], |
|
"last": "Suleyman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Zisserman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Will Kay, Jo\u00e3o Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijaya- narasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, Mustafa Suleyman, and Andrew Zisserman. 2017. The kinetics human action video dataset. CoRR.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Diederik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Polarity trend analysis of public sentiment on youtube", |
|
"authors": [ |
|
{ |
|
"first": "Amar", |
|
"middle": [], |
|
"last": "Krishna", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joseph", |
|
"middle": [], |
|
"last": "Zambreno", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sandeep", |
|
"middle": [], |
|
"last": "Krishnan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 19th International Conference on Management of Data, CO-MAD '13", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "125--128", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Amar Krishna, Joseph Zambreno, and Sandeep Krish- nan. 2013. Polarity trend analysis of public senti- ment on youtube. In Proceedings of the 19th Inter- national Conference on Management of Data, CO- MAD '13, pages 125-128, Mumbai, India, India. Computer Society of India.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Automatically augmenting an emotion dataset improves classification using audio", |
|
"authors": [ |
|
{ |
|
"first": "Egor", |
|
"middle": [], |
|
"last": "Lakomkin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cornelius", |
|
"middle": [], |
|
"last": "Weber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Wermter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "194--197", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Egor Lakomkin, Cornelius Weber, and Stefan Wermter. 2017. Automatically augmenting an emotion dataset improves classification using audio. In Proceed- ings of the 15th Conference of the European Chap- ter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 194-197, Valencia, Spain. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Combining Language and Vision with a Multimodal Skip-gram Model", |
|
"authors": [ |
|
{ |
|
"first": "Angeliki", |
|
"middle": [], |
|
"last": "Lazaridou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Nghia The", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Pham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "153--163", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/N15-1016" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Angeliki Lazaridou, Nghia The Pham, and Marco Ba- roni. 2015. Combining Language and Vision with a Multimodal Skip-gram Model. In Proceedings of the 2015 Conference of the North American Chap- ter of the Association for Computational Linguis- tics: Human Language Technologies, pages 153- 163, Denver, Colorado. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Finegrained Opinion Mining with Recurrent Neural Networks and Word Embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Pengfei", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shafiq", |
|
"middle": [], |
|
"last": "Joty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Helen", |
|
"middle": [], |
|
"last": "Meng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1433--1443", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pengfei Liu, Shafiq Joty, and Helen Meng. 2015. Fine- grained Opinion Mining with Recurrent Neural Net- works and Word Embeddings. In Proceedings of the 2015 Conference on Empirical Methods in Nat- ural Language Processing, pages 1433-1443, Lis- bon, Portugal. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF", |
|
"authors": [ |
|
{ |
|
"first": "Xuezhe", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1064--1074", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P16-1101" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xuezhe Ma and Eduard Hovy. 2016. End-to-end Se- quence Labeling via Bi-directional LSTM-CNNs- CRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 1064-1074, Berlin, Germany. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Mining fine-grained opinions on closed captions of YouTube videos with an attention-RNN", |
|
"authors": [ |
|
{ |
|
"first": "Edison", |
|
"middle": [], |
|
"last": "Marrese-Taylor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jorge", |
|
"middle": [], |
|
"last": "Balazs", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yutaka", |
|
"middle": [], |
|
"last": "Matsuo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "102--111", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Edison Marrese-Taylor, Jorge Balazs, and Yutaka Mat- suo. 2017. Mining fine-grained opinions on closed captions of YouTube videos with an attention-RNN. In Proceedings of the 8th Workshop on Computa- tional Approaches to Subjectivity, Sentiment and So- cial Media Analysis, pages 102-111, Copenhagen, Denmark. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Investigation of recurrent-neuralnetwork architectures and learning methods for spoken language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Grgoire", |
|
"middle": [], |
|
"last": "Mesnil", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "INTERSPEECH", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3771--3775", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Grgoire Mesnil, Xiaodong He, Li Deng, and Yoshua Bengio. 2013. Investigation of recurrent-neural- network architectures and learning methods for spo- ken language understanding. In INTERSPEECH, pages 3771-3775.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Open Domain Targeted Sentiment", |
|
"authors": [ |
|
{ |
|
"first": "Margaret", |
|
"middle": [], |
|
"last": "Mitchell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacqui", |
|
"middle": [], |
|
"last": "Aguilar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Theresa", |
|
"middle": [], |
|
"last": "Wilson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1643--1654", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Margaret Mitchell, Jacqui Aguilar, Theresa Wilson, and Benjamin Van Durme. 2013. Open Domain Tar- geted Sentiment. In Proceedings of the 2013 Con- ference on Empirical Methods in Natural Language Processing, pages 1643-1654, Seattle, Washington, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Computational Analysis of Persuasiveness in Social Multimedia: A Novel Dataset and Multimodal Prediction Approach", |
|
"authors": [ |
|
{ |
|
"first": "Sunghyun", |
|
"middle": [], |
|
"last": "Park", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Han", |
|
"middle": [ |
|
"Suk" |
|
], |
|
"last": "Shim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Moitreya", |
|
"middle": [], |
|
"last": "Chatterjee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenji", |
|
"middle": [], |
|
"last": "Sagae", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Louis-Philippe", |
|
"middle": [], |
|
"last": "Morency", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 16th International Conference on Multimodal Interaction, ICMI '14", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "50--57", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/2663204.2663260" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sunghyun Park, Han Suk Shim, Moitreya Chatterjee, Kenji Sagae, and Louis-Philippe Morency. 2014. Computational Analysis of Persuasiveness in Social Multimedia: A Novel Dataset and Multimodal Pre- diction Approach. In Proceedings of the 16th In- ternational Conference on Multimodal Interaction, ICMI '14, pages 50-57, New York, NY, USA. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Glove: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. Glove: Global vectors for word representation. In EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Seq2seq2sentiment: Multimodal Sequence to Sequence Models for Sentiment Analysis", |
|
"authors": [ |
|
{ |
|
"first": "Hai", |
|
"middle": [], |
|
"last": "Pham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Manzini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [ |
|
"Pu" |
|
], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barnabs", |
|
"middle": [], |
|
"last": "Poczs", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of Grand Challenge and Workshop on Human Multimodal Language (Challenge-HML)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "53--63", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-3308" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hai Pham, Thomas Manzini, Paul Pu Liang, and Barn- abs Poczs. 2018. Seq2seq2sentiment: Multimodal Sequence to Sequence Models for Sentiment Analy- sis. In Proceedings of Grand Challenge and Work- shop on Human Multimodal Language (Challenge- HML), pages 53-63, Melbourne, Australia. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "SemEval-2014 Task 4: Aspect Based Sentiment Analysis", |
|
"authors": [ |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Pontiki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dimitris", |
|
"middle": [], |
|
"last": "Galanis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Pavlopoulos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Harris", |
|
"middle": [], |
|
"last": "Papageorgiou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 8th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "27--35", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. SemEval-2014 Task 4: Aspect Based Sentiment Analysis. In Proceedings of the 8th International Workshop on Semantic Eval- uation (SemEval 2014), pages 27-35, Dublin, Ire- land. Association for Computational Linguistics and Dublin City University.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Deep Convolutional Neural Network Textual Features and Multiple Kernel Learning for Utterance-level Multimodal Sentiment Analysis", |
|
"authors": [ |
|
{ |
|
"first": "Soujanya", |
|
"middle": [], |
|
"last": "Poria", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Erik", |
|
"middle": [], |
|
"last": "Cambria", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Gelbukh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2539--2544", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Soujanya Poria, Erik Cambria, and Alexander Gel- bukh. 2015. Deep Convolutional Neural Network Textual Features and Multiple Kernel Learning for Utterance-level Multimodal Sentiment Analysis. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2539-2544, Lisbon, Portugal. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Context-Dependent Sentiment Analysis in User-Generated Videos", |
|
"authors": [ |
|
{ |
|
"first": "Soujanya", |
|
"middle": [], |
|
"last": "Poria", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Erik", |
|
"middle": [], |
|
"last": "Cambria", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Devamanyu", |
|
"middle": [], |
|
"last": "Hazarika", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Navonil", |
|
"middle": [], |
|
"last": "Majumder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amir", |
|
"middle": [], |
|
"last": "Zadeh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Louis-Philippe", |
|
"middle": [], |
|
"last": "Morency", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "873--883", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P17-1081" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Soujanya Poria, Erik Cambria, Devamanyu Hazarika, Navonil Majumder, Amir Zadeh, and Louis-Philippe Morency. 2017. Context-Dependent Sentiment Analysis in User-Generated Videos. In Proceed- ings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 873-883, Vancouver, Canada. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Opinion Mining on YouTube", |
|
"authors": [ |
|
{ |
|
"first": "Aliaksei", |
|
"middle": [], |
|
"last": "Severyn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Moschitti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Olga", |
|
"middle": [], |
|
"last": "Uryupina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barbara", |
|
"middle": [], |
|
"last": "Plank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katja", |
|
"middle": [], |
|
"last": "Filippova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1252--1261", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/P14-1118" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aliaksei Severyn, Alessandro Moschitti, Olga Uryupina, Barbara Plank, and Katja Filippova. 2014. Opinion Mining on YouTube. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1252-1261, Baltimore, Maryland. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "How useful are your comments?: Analyzing and predicting youtube comments and comment ratings", |
|
"authors": [ |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Siersdorfer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sergiu", |
|
"middle": [], |
|
"last": "Chelaru", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wolfgang", |
|
"middle": [], |
|
"last": "Nejdl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jose", |
|
"middle": [], |
|
"last": "San Pedro", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 19th International Conference on World Wide Web, WWW '10", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "891--900", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/1772690.1772781" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stefan Siersdorfer, Sergiu Chelaru, Wolfgang Nejdl, and Jose San Pedro. 2010. How useful are your comments?: Analyzing and predicting youtube com- ments and comment ratings. In Proceedings of the 19th International Conference on World Wide Web, WWW '10, pages 891-900, New York, NY, USA. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "A Shared Task on Multimodal Machine Translation and Crosslingual Image Description", |
|
"authors": [ |
|
{ |
|
"first": "Lucia", |
|
"middle": [], |
|
"last": "Specia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stella", |
|
"middle": [], |
|
"last": "Frank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Khalil", |
|
"middle": [], |
|
"last": "Sima'an", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Desmond", |
|
"middle": [], |
|
"last": "Elliott", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the First Conference on Machine Translation", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "543--553", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W16-2346" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lucia Specia, Stella Frank, Khalil Sima'an, and Desmond Elliott. 2016. A Shared Task on Multi- modal Machine Translation and Crosslingual Image Description. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Pa- pers, pages 543-553, Berlin, Germany. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Going deeper with convolutions", |
|
"authors": [ |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Szegedy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yangqing", |
|
"middle": [], |
|
"last": "Jia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierre", |
|
"middle": [], |
|
"last": "Sermanet", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Scott", |
|
"middle": [], |
|
"last": "Reed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dragomir", |
|
"middle": [], |
|
"last": "Anguelov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dumitru", |
|
"middle": [], |
|
"last": "Erhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Vanhoucke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Rabinovich", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--9", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Ser- manet, Scott Reed, Dragomir Anguelov, Dumitru Er- han, Vincent Vanhoucke, and Andrew Rabinovich. 2015. Going deeper with convolutions. In Proceed- ings of the IEEE conference on computer vision and pattern recognition, pages 1-9.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Nicoscene: Video scene search by keywords based on social annotation", |
|
"authors": [ |
|
{ |
|
"first": "Yasuyuki", |
|
"middle": [], |
|
"last": "Tahara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Atsushi", |
|
"middle": [], |
|
"last": "Tago", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hiroyuki", |
|
"middle": [], |
|
"last": "Nakagawa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Akihiko", |
|
"middle": [], |
|
"last": "Ohsuga", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Aijun An, Pawan Lingras, Sheila Petty, and Runhe Huang", |
|
"volume": "6335", |
|
"issue": "", |
|
"pages": "461--474", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yasuyuki Tahara, Atsushi Tago, Hiroyuki Nakagawa, and Akihiko Ohsuga. 2010. Nicoscene: Video scene search by keywords based on social annotation. In Aijun An, Pawan Lingras, Sheila Petty, and Runhe Huang, editors, Active Media Technology, volume 6335 of Lecture Notes in Computer Science, pages 461-474. Springer Berlin Heidelberg.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Effective LSTMs for Target-Dependent Sentiment Classification", |
|
"authors": [ |
|
{ |
|
"first": "Duyu", |
|
"middle": [], |
|
"last": "Tang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaocheng", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ting", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3298--3307", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Duyu Tang, Bing Qin, Xiaocheng Feng, and Ting Liu. 2016a. Effective LSTMs for Target-Dependent Sen- timent Classification. In Proceedings of COLING 2016, the 26th International Conference on Compu- tational Linguistics: Technical Papers, pages 3298- 3307, Osaka, Japan. The COLING 2016 Organizing Committee.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Aspect Level Sentiment Classification with Deep Memory Network", |
|
"authors": [ |
|
{ |
|
"first": "Duyu", |
|
"middle": [], |
|
"last": "Tang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ting", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "214--224", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Duyu Tang, Bing Qin, and Ting Liu. 2016b. Aspect Level Sentiment Classification with Deep Memory Network. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 214-224, Austin, Texas. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "Learning spatiotemporal features with 3d convolutional networks", |
|
"authors": [ |
|
{ |
|
"first": "Du", |
|
"middle": [], |
|
"last": "Tran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lubomir", |
|
"middle": [], |
|
"last": "Bourdev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rob", |
|
"middle": [], |
|
"last": "Fergus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lorenzo", |
|
"middle": [], |
|
"last": "Torresani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manohar", |
|
"middle": [], |
|
"last": "Paluri", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the IEEE international conference on computer vision", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4489--4497", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Du Tran, Lubomir Bourdev, Rob Fergus, Lorenzo Tor- resani, and Manohar Paluri. 2015. Learning spa- tiotemporal features with 3d convolutional networks. In Proceedings of the IEEE international conference on computer vision, pages 4489-4497.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "Crowdsourced time-sync video tagging using temporal and personalized topic modeling", |
|
"authors": [ |
|
{ |
|
"first": "Bin", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Erheng", |
|
"middle": [], |
|
"last": "Zhong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Tan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Horner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qiang", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '14", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "721--730", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/2623330.2623625" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bin Wu, Erheng Zhong, Ben Tan, Andrew Horner, and Qiang Yang. 2014. Crowdsourced time-sync video tagging using temporal and personalized topic mod- eling. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '14, pages 721-730, New York, NY, USA. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "Multilevel Language and Vision Integration for Textto-Clip Retrieval", |
|
"authors": [ |
|
{ |
|
"first": "Huijuan", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kun", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bryan", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Plummer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leonid", |
|
"middle": [], |
|
"last": "Sigal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stan", |
|
"middle": [], |
|
"last": "Sclaroff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kate", |
|
"middle": [], |
|
"last": "Saenko", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1804.05113[cs].ArXiv:1804.05113" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Huijuan Xu, Kun He, Bryan A. Plummer, Leonid Si- gal, Stan Sclaroff, and Kate Saenko. 2018. Mul- tilevel Language and Vision Integration for Text- to-Clip Retrieval. arXiv:1804.05113 [cs]. ArXiv: 1804.05113.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Joint Opinion Relation Detection Using One-Class Deep Neural Network", |
|
"authors": [ |
|
{ |
|
"first": "Liheng", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "677--687", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Liheng Xu, Kang Liu, and Jun Zhao. 2014. Joint Opinion Relation Detection Using One-Class Deep Neural Network. In Proceedings of COLING 2014, the 25th International Conference on Computa- tional Linguistics: Technical Papers, pages 677- 687, Dublin, Ireland. Dublin City University and As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "Design Challenges and Misconceptions in Neural Sequence Labeling", |
|
"authors": [ |
|
{ |
|
"first": "Jie", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shuailong", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yue", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 27th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3879--3889", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jie Yang, Shuailong Liang, and Yue Zhang. 2018. De- sign Challenges and Misconceptions in Neural Se- quence Labeling. In Proceedings of the 27th Inter- national Conference on Computational Linguistics, pages 3879-3889, Santa Fe, New Mexico, USA. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF47": { |
|
"ref_id": "b47", |
|
"title": "NCRF++: An Open-source Neural Sequence Labeling Toolkit", |
|
"authors": [ |
|
{ |
|
"first": "Jie", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yue", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of ACL 2018, System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "74--79", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P18-4013" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jie Yang and Yue Zhang. 2018. NCRF++: An Open-source Neural Sequence Labeling Toolkit. In Proceedings of ACL 2018, System Demonstrations, pages 74-79, Melbourne, Australia. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF48": { |
|
"ref_id": "b48", |
|
"title": "Tensor Fusion Network for Multimodal Sentiment Analysis", |
|
"authors": [ |
|
{ |
|
"first": "Amir", |
|
"middle": [], |
|
"last": "Zadeh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Minghai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Soujanya", |
|
"middle": [], |
|
"last": "Poria", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Erik", |
|
"middle": [], |
|
"last": "Cambria", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Louis-Philippe", |
|
"middle": [], |
|
"last": "Morency", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1103--1114", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D17-1115" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Amir Zadeh, Minghai Chen, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2017. Ten- sor Fusion Network for Multimodal Sentiment Anal- ysis. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1103-1114, Copenhagen, Denmark. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF49": { |
|
"ref_id": "b49", |
|
"title": "MOSI: Multimodal Corpus of Sentiment Intensity and Subjectivity Analysis in Online Opinion Videos", |
|
"authors": [ |
|
{ |
|
"first": "Amir", |
|
"middle": [], |
|
"last": "Zadeh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rowan", |
|
"middle": [], |
|
"last": "Zellers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eli", |
|
"middle": [], |
|
"last": "Pincus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Louis-Philippe", |
|
"middle": [], |
|
"last": "Morency", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1606.06259" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Amir Zadeh, Rowan Zellers, Eli Pincus, and Louis- Philippe Morency. 2016. MOSI: Multimodal Cor- pus of Sentiment Intensity and Subjectivity Analysis in Online Opinion Videos. arXiv:1606.06259 [cs].", |
|
"links": null |
|
}, |
|
"BIBREF50": { |
|
"ref_id": "b50", |
|
"title": "Neural Networks for Open Domain Targeted Sentiment", |
|
"authors": [ |
|
{ |
|
"first": "Meishan", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yue", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Duy Tin", |
|
"middle": [], |
|
"last": "Vo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "612--621", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Meishan Zhang, Yue Zhang, and Duy Tin Vo. 2015. Neural Networks for Open Domain Targeted Sen- timent. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Process- ing, pages 612-621, Lisbon, Portugal. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF51": { |
|
"ref_id": "b51", |
|
"title": "Jointly modeling aspects and opinions with a MaxEnt-LDA hybrid", |
|
"authors": [ |
|
{ |
|
"first": "Xin", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jing", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hongfei", |
|
"middle": [], |
|
"last": "Yan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaoming", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "56--65", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xin Zhao, Jing Jiang, Hongfei Yan, and Xiaoming Li. 2010. Jointly modeling aspects and opinions with a MaxEnt-LDA hybrid. In Proceedings of the 2010 Conference on Empirical Methods in Natural Lan- guage Processing, pages 56-65, Cambridge, MA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF52": { |
|
"ref_id": "b52", |
|
"title": "Complementary aspect-based opinion mining", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Zuo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "IEEE Transactions on Knowledge and Data Engineering", |
|
"volume": "30", |
|
"issue": "2", |
|
"pages": "249--262", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/TKDE.2017.2764084" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Y. Zuo, J. Wu, H. Zhang, D. Wang, and K. Xu. 2018. Complementary aspect-based opinion mining. IEEE Transactions on Knowledge and Data Engineering, 30(2):249-262.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"num": null, |
|
"text": "How d i d he do t h a t ? \u2212 Made him an o f f e r he c o u l d n o t r e f u s e .", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"num": null, |
|
"text": "Excerpt of a subtitle chunk (in SubRip format,) showing its main components.", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"num": null, |
|
"text": "Qualitative comparison between baseline and our method on the Youtubean dataset. Green and yellow boxes represent positive and neutral sentiment respectively.", |
|
"type_str": "figure" |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"text": "i is a vector representing a single frame in v s , our encoding module first maps this sequence into another sequence of video featuresv = [v 1 , . . . ,v l ] following the method described in Section 5.2. Later, this new sequence is mapped into a vectorh v \u2208 R dv that captures summarized high-level visual semantics in the video, as follows:", |
|
"content": "<table/>" |
|
}, |
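Note on the TABREF1 entry above: it only summarizes the video encoding step (per-frame features mapped to a contextualized sequence, then pooled into a summary vector h̄_v of size d_v); the exact mapping is defined in Section 5.2 of the paper, which is not reproduced in this parse. The following is a minimal, purely illustrative PyTorch sketch of one way such a module could look, assuming a bidirectional GRU over per-frame features followed by additive attention pooling. The class name VideoEncoderSketch, the attention design, and all dimensions are assumptions, not the authors' implementation.

# Hypothetical sketch (not the paper's Section 5.2 module): encode frame features
# v_1..v_l into contextualized features with a BiGRU, then pool them with additive
# attention into a single summary vector of size d_v.
import torch
import torch.nn as nn


class VideoEncoderSketch(nn.Module):
    def __init__(self, frame_dim: int, d_v: int):
        super().__init__()
        # BiGRU yields a d_v-dimensional contextual feature per frame (d_v // 2 per direction).
        self.rnn = nn.GRU(frame_dim, d_v // 2, batch_first=True, bidirectional=True)
        # Additive attention scores used to pool the frame sequence into one vector.
        self.att = nn.Sequential(nn.Linear(d_v, d_v), nn.Tanh(), nn.Linear(d_v, 1))

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, l, frame_dim) -> contextual features: (batch, l, d_v)
        ctx, _ = self.rnn(frames)
        # attention weights over the l frames: (batch, l, 1)
        weights = torch.softmax(self.att(ctx), dim=1)
        # weighted sum over frames -> summary vector: (batch, d_v)
        return (weights * ctx).sum(dim=1)


if __name__ == "__main__":
    enc = VideoEncoderSketch(frame_dim=2048, d_v=256)
    h_v = enc(torch.randn(4, 32, 2048))  # 4 clips, 32 frames each
    print(h_v.shape)  # torch.Size([4, 256])

A simpler variant would replace the attention pooling with mean pooling over frames; attention is shown here only because it keeps a learnable notion of which frames contribute most to the summary vector.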
|
"TABREF3": { |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Summary of our results on the Youtubean dataset, *** denotes statistical significance at 99% confidence, ** at 95% and * at 90%.", |
|
"content": "<table><tr><td>Setting</td><td>Model</td><td colspan=\"3\">Aspect Extraction</td><td colspan=\"3\">Sentiment Classification</td></tr><tr><td/><td/><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>Simple</td><td>Baseline Ours</td><td>0.394 0.396</td><td>0.379 0.406</td><td>0.386 0.399</td><td>--</td><td>--</td><td>--</td></tr><tr><td>CAL</td><td>Baseline Ours</td><td>0.364 0.444**</td><td>0.401* 0.368</td><td>0.382 0.402**</td><td>0.540*** 0.488</td><td colspan=\"2\">0.416 0.466*** 0.342*** 0.270</td></tr><tr><td>CSL</td><td>Baseline Ours</td><td>0.387 0.438*</td><td>0.375 0.378</td><td>0.408* 0.404</td><td>0.614 0.532</td><td>0.446 0.446</td><td>0.296 0.304</td></tr><tr><td>JSL</td><td>Baseline Ours</td><td colspan=\"2\">0.381 0.442*** 0.401* 0.357</td><td>0.367 0.420*</td><td colspan=\"3\">0.798 0.924*** 0.924*** 0.922*** 0.802 0.788</td></tr></table>" |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Summary of our results for the test set of the POM dataset, *** denotes statistical significance at 99% confidence, ** at 95% and * at 90%.", |
|
"content": "<table><tr><td>Model</td><td colspan=\"3\">Aspect Extraction</td></tr><tr><td/><td>P</td><td>R</td><td>F1</td></tr><tr><td>T</td><td>0.532</td><td>0.543</td><td>0.533</td></tr><tr><td>T + CRF</td><td>0.558</td><td>0.528</td><td>0.541</td></tr><tr><td>T + GV</td><td>0.562</td><td>0.537</td><td>0.548</td></tr><tr><td>T + GV + CRF</td><td>0.576*</td><td>0.569</td><td>0.571**</td></tr><tr><td>T + A + V</td><td>0.587*</td><td>0.578</td><td>0.580*</td></tr><tr><td>T + CRF + A + V</td><td>0.578</td><td>0.570</td><td>0.573*</td></tr><tr><td colspan=\"4\">T + GV + CRF + A + V 0.602** 0.568 0.584***</td></tr></table>" |
|
}, |
|
"TABREF6": { |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Qualitative comparison between baseline and our method on the POM dataset. Green and red boxes represent positive and negative sentiment respectively.", |
|
"content": "<table><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td colspan=\"3\">O k a y</td><td colspan=\"2\">d o</td><td colspan=\"2\">n o t</td><td>s e e</td><td colspan=\"2\">t h i s</td><td colspan=\"2\">f i l m</td><td/><td/><td/><td/><td/><td/><td/><td colspan=\"3\">T h i s</td><td colspan=\"2\">m o v i e</td><td>h a s</td><td>e v e r y t h i n g</td></tr><tr><td/><td colspan=\"3\">G o l</td><td>d</td><td colspan=\"3\">S t a n d</td><td colspan=\"3\">a r</td><td colspan=\"2\">d</td><td/><td>O</td><td/><td/><td>O</td><td>O</td><td/><td>O</td><td>B</td><td/><td>I</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>B</td><td/><td>I</td><td>O</td><td>O</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"3\">B a s e l i</td><td colspan=\"3\">n</td><td>e</td><td/><td>O</td><td/><td/><td>O</td><td>O</td><td/><td>O</td><td>O</td><td/><td>O</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>O</td><td/><td>O</td><td>O</td><td>O</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td colspan=\"3\">O u</td><td colspan=\"3\">r s</td><td/><td>O</td><td/><td/><td>O</td><td>O</td><td/><td>O</td><td>B</td><td/><td>I</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>B</td><td/><td>I</td><td>O</td><td>O</td></tr><tr><td colspan=\"13\">Figure 3: Y o u</td><td>g e t</td><td>a</td><td>t o n</td><td>o f</td><td>s e</td><td>t t i n g s</td><td>a n d</td><td>f e a t u r e s</td><td>i n</td><td>t h e</td><td>c a m e</td><td>r a</td><td>a p p</td><td>w h i c h</td><td>i s</td><td>a l s o</td><td>i m p r</td><td>o v e d</td><td>T h e</td><td>f</td><td>i r</td><td>s t</td><td>t h i n g</td><td>w e</td><td>n o t i c e</td><td>i s</td><td>t h a t</td><td>t h e</td><td>b a c k</td><td>c o v e r</td><td>i s</td><td>w a y</td><td>l e s s</td><td>g l o s</td><td>s y .</td></tr><tr><td>G o l</td><td>d</td><td>S</td><td colspan=\"5\">t a n d a r d</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td/><td colspan=\"3\">B a s e</td><td>l</td><td colspan=\"2\">i n e</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td/><td colspan=\"3\">O u r</td><td>s</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>" |
|
} |
|
} |
|
} |
|
} |