Column           Type               Range / Values
id               stringlengths      9 to 16
submitter        stringlengths      3 to 64
authors          stringlengths      5 to 6.63k
title            stringlengths      7 to 245
comments         stringlengths      1 to 482
journal-ref      stringlengths      4 to 382
doi              stringlengths      9 to 151
report-no        stringclasses      984 values
categories       stringlengths      5 to 108
license          stringclasses      9 values
abstract         stringlengths      83 to 3.41k
versions         listlengths        1 to 20
update_date      timestamp[s]date   2007-05-23 00:00:00 to 2025-04-11 00:00:00
authors_parsed   sequencelengths    1 to 427
prompt           stringlengths      166 to 3.49k
label            stringclasses      2 values
prob             float64            0.5 to 0.98
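Each row below pairs an arXiv metadata record with a binary label ("new_dataset" / "no_new_dataset") and a confidence score. A minimal sketch of loading and filtering a dataset with this schema via the Hugging Face datasets library; the repository id here is a placeholder, not this dataset's actual identifier:

from datasets import load_dataset

# Placeholder Hub id; substitute the real repository for this dataset.
ds = load_dataset("user/arxiv-new-dataset-detection", split="train")

row = ds[0]
print(row["id"], row["title"])    # arXiv id and paper title
print(row["label"], row["prob"])  # e.g. "no_new_dataset", 0.95

# Keep only records confidently labelled as introducing a new dataset.
confident_new = ds.filter(lambda r: r["label"] == "new_dataset" and r["prob"] > 0.9)
print(len(confident_new))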
1705.10447
Jimmy Ren
Jimmy Ren, Zhiyang Yu, Jianbo Liu, Rui Zhang, Wenxiu Sun, Jiahao Pang, Xiaohao Chen, Qiong Yan
Robust Tracking Using Region Proposal Networks
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advances in visual tracking showed that deep Convolutional Neural Networks (CNN) trained for image classification can be strong feature extractors for discriminative trackers. However, due to the drastic difference between image classification and tracking, extra treatments such as model ensembles and feature engineering must be carried out to bridge the two domains. Such procedures are either time-consuming or hard to generalize well across datasets. In this paper we discovered that the internal structure of the Region Proposal Network (RPN)'s top-layer feature can be utilized for robust visual tracking. We showed that such a property has to be unleashed by a novel loss function which simultaneously considers classification accuracy and bounding box quality. Without ensembles or any extra treatment on feature maps, our proposed method achieved state-of-the-art results on several large-scale benchmarks including OTB50, OTB100 and VOT2016. We will make our code publicly available.
[ { "version": "v1", "created": "Tue, 30 May 2017 03:32:07 GMT" } ]
2017-05-31T00:00:00
[ [ "Ren", "Jimmy", "" ], [ "Yu", "Zhiyang", "" ], [ "Liu", "Jianbo", "" ], [ "Zhang", "Rui", "" ], [ "Sun", "Wenxiu", "" ], [ "Pang", "Jiahao", "" ], [ "Chen", "Xiaohao", "" ], [ "Yan", "Qiong", "" ] ]
TITLE: Robust Tracking Using Region Proposal Networks ABSTRACT: Recent advances in visual tracking showed that deep Convolutional Neural Networks (CNN) trained for image classification can be strong feature extractors for discriminative trackers. However, due to the drastic difference between image classification and tracking, extra treatments such as model ensembles and feature engineering must be carried out to bridge the two domains. Such procedures are either time-consuming or hard to generalize well across datasets. In this paper we discovered that the internal structure of the Region Proposal Network (RPN)'s top-layer feature can be utilized for robust visual tracking. We showed that such a property has to be unleashed by a novel loss function which simultaneously considers classification accuracy and bounding box quality. Without ensembles or any extra treatment on feature maps, our proposed method achieved state-of-the-art results on several large-scale benchmarks including OTB50, OTB100 and VOT2016. We will make our code publicly available.
no_new_dataset
0.946448
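As the record above shows, the prompt column is simply the title and abstract concatenated. A sketch of that mapping, with the format inferred from the rows in this dump:

def build_prompt(record):
    # Observed layout: "TITLE: <title> ABSTRACT: <abstract>"
    return f"TITLE: {record['title']} ABSTRACT: {record['abstract']}"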
1705.10586
Zhenzhou Wu
Zhenzhou Wu and Xin Zheng and Daniel Dahlmeier
Character-Based Text Classification using Top Down Semantic Model for Sentence Representation
null
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Despite the success of deep learning on many fronts, especially image and speech, its application in text classification is often still not as good as a simple linear SVM on n-gram TF-IDF representations, especially for smaller datasets. Deep learning tends to emphasize sentence-level semantics when learning a representation with models like recurrent neural networks or recursive neural networks; however, from the success of the TF-IDF representation, it seems a bag-of-words type of representation has its strength. Taking advantage of both representations, we present a model known as TDSM (Top Down Semantic Model) for extracting a sentence representation that considers both the word-level semantics, by linearly combining the words with attention weights, and the sentence-level semantics, with a BiLSTM, and we use it for text classification. We apply the model on characters and our results show that our model is better than all the other character-based and word-based convolutional neural network models by \cite{zhang15} across seven different datasets with only 1\% of their parameters. We also demonstrate that this model beats traditional linear models on TF-IDF vectors on small and polished datasets like news articles, where deep learning models typically surrender.
[ { "version": "v1", "created": "Mon, 29 May 2017 15:53:00 GMT" } ]
2017-05-31T00:00:00
[ [ "Wu", "Zhenzhou", "" ], [ "Zheng", "Xin", "" ], [ "Dahlmeier", "Daniel", "" ] ]
TITLE: Character-Based Text Classification using Top Down Semantic Model for Sentence Representation ABSTRACT: Despite the success of deep learning on many fronts, especially image and speech, its application in text classification is often still not as good as a simple linear SVM on n-gram TF-IDF representations, especially for smaller datasets. Deep learning tends to emphasize sentence-level semantics when learning a representation with models like recurrent neural networks or recursive neural networks; however, from the success of the TF-IDF representation, it seems a bag-of-words type of representation has its strength. Taking advantage of both representations, we present a model known as TDSM (Top Down Semantic Model) for extracting a sentence representation that considers both the word-level semantics, by linearly combining the words with attention weights, and the sentence-level semantics, with a BiLSTM, and we use it for text classification. We apply the model on characters and our results show that our model is better than all the other character-based and word-based convolutional neural network models by \cite{zhang15} across seven different datasets with only 1\% of their parameters. We also demonstrate that this model beats traditional linear models on TF-IDF vectors on small and polished datasets like news articles, where deep learning models typically surrender.
no_new_dataset
0.954009
1705.10659
Xiatian Zhu
Jingya Wang, Xiatian Zhu, Shaogang Gong
Discovering Visual Concept Structure with Sparse and Incomplete Tags
Artificial Intelligence journal 2017
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatically discovering the semantic structure of tagged visual data (e.g. web videos and images) is important for visual data analysis and interpretation, enabling machine intelligence to effectively process the fast-growing amount of multimedia data. However, this is non-trivial due to the need for jointly learning the underlying correlations between heterogeneous visual and tag data. The task is made more challenging by inherently sparse and incomplete tags. In this work, we develop a method for modelling the inherent visual data concept structures based on a novel Hierarchical-Multi-Label Random Forest model capable of correlating structured visual and tag information so as to more accurately interpret the visual semantics, e.g. disclosing meaningful visual groups with similar high-level concepts, and recovering missing tags for individual visual data samples. Specifically, our model exploits hierarchically structured tags of different semantic abstractness and multiple tag statistical correlations in addition to modelling visual and tag interactions. As a result, our model is able to discover more accurate semantic correlations between textual tags and visual features, finally providing favourable visual semantics interpretation even with highly sparse and incomplete tags. We demonstrate the advantages of our proposed approach in two fundamental applications, visual data clustering and missing tag completion, on benchmark video (i.e. TRECVID MED 2011) and image (i.e. NUS-WIDE) datasets.
[ { "version": "v1", "created": "Tue, 30 May 2017 14:12:43 GMT" } ]
2017-05-31T00:00:00
[ [ "Wang", "Jingya", "" ], [ "Zhu", "Xiatian", "" ], [ "Gong", "Shaogang", "" ] ]
TITLE: Discovering Visual Concept Structure with Sparse and Incomplete Tags ABSTRACT: Automatically discovering the semantic structure of tagged visual data (e.g. web videos and images) is important for visual data analysis and interpretation, enabling machine intelligence to effectively process the fast-growing amount of multimedia data. However, this is non-trivial due to the need for jointly learning the underlying correlations between heterogeneous visual and tag data. The task is made more challenging by inherently sparse and incomplete tags. In this work, we develop a method for modelling the inherent visual data concept structures based on a novel Hierarchical-Multi-Label Random Forest model capable of correlating structured visual and tag information so as to more accurately interpret the visual semantics, e.g. disclosing meaningful visual groups with similar high-level concepts, and recovering missing tags for individual visual data samples. Specifically, our model exploits hierarchically structured tags of different semantic abstractness and multiple tag statistical correlations in addition to modelling visual and tag interactions. As a result, our model is able to discover more accurate semantic correlations between textual tags and visual features, finally providing favourable visual semantics interpretation even with highly sparse and incomplete tags. We demonstrate the advantages of our proposed approach in two fundamental applications, visual data clustering and missing tag completion, on benchmark video (i.e. TRECVID MED 2011) and image (i.e. NUS-WIDE) datasets.
no_new_dataset
0.948298
1705.10698
Mark Marsden
Mark Marsden, Kevin McGuinness, Suzanne Little, Noel E. O'Connor
ResnetCrowd: A Residual Deep Learning Architecture for Crowd Counting, Violent Behaviour Detection and Crowd Density Level Classification
7 Pages, AVSS 2017
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we propose ResnetCrowd, a deep residual architecture for simultaneous crowd counting, violent behaviour detection and crowd density level classification. To train and evaluate the proposed multi-objective technique, a new 100-image dataset referred to as Multi Task Crowd is constructed. This new dataset is the first computer vision dataset fully annotated for crowd counting, violent behaviour detection and density level classification. Our experiments show that a multi-task approach boosts individual task performance for all tasks, most notably for violent behaviour detection, which receives a 9\% boost in ROC curve AUC (area under the curve). The trained ResnetCrowd model is also evaluated on several additional benchmarks, highlighting the superior generalisation of crowd analysis models trained for multiple objectives.
[ { "version": "v1", "created": "Tue, 30 May 2017 15:18:41 GMT" } ]
2017-05-31T00:00:00
[ [ "Marsden", "Mark", "" ], [ "McGuinness", "Kevin", "" ], [ "Little", "Suzanne", "" ], [ "O'Connor", "Noel E.", "" ] ]
TITLE: ResnetCrowd: A Residual Deep Learning Architecture for Crowd Counting, Violent Behaviour Detection and Crowd Density Level Classification ABSTRACT: In this paper we propose ResnetCrowd, a deep residual architecture for simultaneous crowd counting, violent behaviour detection and crowd density level classification. To train and evaluate the proposed multi-objective technique, a new 100-image dataset referred to as Multi Task Crowd is constructed. This new dataset is the first computer vision dataset fully annotated for crowd counting, violent behaviour detection and density level classification. Our experiments show that a multi-task approach boosts individual task performance for all tasks, most notably for violent behaviour detection, which receives a 9\% boost in ROC curve AUC (area under the curve). The trained ResnetCrowd model is also evaluated on several additional benchmarks, highlighting the superior generalisation of crowd analysis models trained for multiple objectives.
new_dataset
0.955444
1705.10716
Ali Taalimi
Ali Taalimi, Liu Liu and Hairong Qi
Addressing Ambiguity in Multi-target Tracking by Hierarchical Strategy
5 pages, Accepted in International Conference of Image Processing, 2017
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a novel hierarchical approach for the simultaneous tracking of multiple targets in a video. We use a network flow approach to link detections at the low level and tracklets at the high level. At each step of the hierarchy, the confidence of candidates is measured using a new scoring system, ConfRank, that considers the quality and the quantity of its neighborhood. The output of the first stage is a collection of safe tracklets and unlinked high-confidence detections. For each individual detection, we determine whether it belongs to an existing tracklet or starts a new one. We show the ability of our framework to recover missed detections and reduce identity switches. The proposed tracker is referred to as TVOD for multi-target tracking using the visual tracker and generic object detector. We achieve competitive results with fewer identity switches on several datasets compared to the state-of-the-art.
[ { "version": "v1", "created": "Tue, 30 May 2017 16:11:34 GMT" } ]
2017-05-31T00:00:00
[ [ "Taalimi", "Ali", "" ], [ "Liu", "Liu", "" ], [ "Qi", "Hairong", "" ] ]
TITLE: Addressing Ambiguity in Multi-target Tracking by Hierarchical Strategy ABSTRACT: This paper presents a novel hierarchical approach for the simultaneous tracking of multiple targets in a video. We use a network flow approach to link detections at the low level and tracklets at the high level. At each step of the hierarchy, the confidence of candidates is measured using a new scoring system, ConfRank, that considers the quality and the quantity of its neighborhood. The output of the first stage is a collection of safe tracklets and unlinked high-confidence detections. For each individual detection, we determine whether it belongs to an existing tracklet or starts a new one. We show the ability of our framework to recover missed detections and reduce identity switches. The proposed tracker is referred to as TVOD for multi-target tracking using the visual tracker and generic object detector. We achieve competitive results with fewer identity switches on several datasets compared to the state-of-the-art.
no_new_dataset
0.94545
1705.10742
Martin Jaggi
Tina Fang, Martin Jaggi, Katerina Argyraki
Generating Steganographic Text with LSTMs
ACL 2017 Student Research Workshop
null
null
null
cs.AI cs.CR cs.MM
http://creativecommons.org/licenses/by/4.0/
Motivated by concerns for user privacy, we design a steganographic system ("stegosystem") that enables two users to exchange encrypted messages without an adversary detecting that such an exchange is taking place. We propose a new linguistic stegosystem based on a Long Short-Term Memory (LSTM) neural network. We demonstrate our approach on the Twitter and Enron email datasets and show that it yields high-quality steganographic text while significantly improving capacity (encrypted bits per word) relative to the state-of-the-art.
[ { "version": "v1", "created": "Tue, 30 May 2017 16:52:48 GMT" } ]
2017-05-31T00:00:00
[ [ "Fang", "Tina", "" ], [ "Jaggi", "Martin", "" ], [ "Argyraki", "Katerina", "" ] ]
TITLE: Generating Steganographic Text with LSTMs ABSTRACT: Motivated by concerns for user privacy, we design a steganographic system ("stegosystem") that enables two users to exchange encrypted messages without an adversary detecting that such an exchange is taking place. We propose a new linguistic stegosystem based on a Long Short-Term Memory (LSTM) neural network. We demonstrate our approach on the Twitter and Enron email datasets and show that it yields high-quality steganographic text while significantly improving capacity (encrypted bits per word) relative to the state-of-the-art.
no_new_dataset
0.951953
1705.10744
Ondrej Bajgar
Rudolf Kadlec, Ondrej Bajgar and Jan Kleindienst
Knowledge Base Completion: Baselines Strike Back
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many papers have been published on the knowledge base completion task in the past few years. Most of these introduce novel architectures for relation learning that are evaluated on standard datasets such as FB15k and WN18. This paper shows that almost all models published on FB15k can be outperformed in accuracy by an appropriately tuned baseline - our reimplementation of the DistMult model. Our findings cast doubt on the claim that the performance improvements of recent models are due to architectural changes as opposed to hyper-parameter tuning or different training objectives. This should prompt future research to reconsider how the performance of models is evaluated and reported.
[ { "version": "v1", "created": "Tue, 30 May 2017 16:54:19 GMT" } ]
2017-05-31T00:00:00
[ [ "Kadlec", "Rudolf", "" ], [ "Bajgar", "Ondrej", "" ], [ "Kleindienst", "Jan", "" ] ]
TITLE: Knowledge Base Completion: Baselines Strike Back ABSTRACT: Many papers have been published on the knowledge base completion task in the past few years. Most of these introduce novel architectures for relation learning that are evaluated on standard datasets such as FB15k and WN18. This paper shows that almost all models published on FB15k can be outperformed in accuracy by an appropriately tuned baseline - our reimplementation of the DistMult model. Our findings cast doubt on the claim that the performance improvements of recent models are due to architectural changes as opposed to hyper-parameter tuning or different training objectives. This should prompt future research to reconsider how the performance of models is evaluated and reported.
no_new_dataset
0.944689
1705.10750
Junier Oliva
Junier B. Oliva, Kumar Avinava Dubey, Barnabas Poczos, Eric Xing, Jeff Schneider
Recurrent Estimation of Distributions
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents the recurrent estimation of distributions (RED) for modeling real-valued data in a semiparametric fashion. RED models make two novel uses of recurrent neural networks (RNNs) for density estimation of general real-valued data. First, RNNs are used to transform input covariates into a latent space to better capture conditional dependencies in inputs. Afterwards, an RNN is used to compute the conditional distributions of the latent covariates. The resulting model is efficient to train, compute, and sample from, whilst producing normalized pdfs. The effectiveness of RED is shown via several real-world data experiments. Our results show that RED models achieve a lower held-out negative log-likelihood than other neural network approaches across multiple dataset sizes and dimensionalities. Further context for the efficacy of RED is provided by considering anomaly detection tasks, where we also observe better performance over alternative models.
[ { "version": "v1", "created": "Tue, 30 May 2017 17:00:59 GMT" } ]
2017-05-31T00:00:00
[ [ "Oliva", "Junier B.", "" ], [ "Dubey", "Kumar Avinava", "" ], [ "Poczos", "Barnabas", "" ], [ "Xing", "Eric", "" ], [ "Schneider", "Jeff", "" ] ]
TITLE: Recurrent Estimation of Distributions ABSTRACT: This paper presents the recurrent estimation of distributions (RED) for modeling real-valued data in a semiparametric fashion. RED models make two novel uses of recurrent neural networks (RNNs) for density estimation of general real-valued data. First, RNNs are used to transform input covariates into a latent space to better capture conditional dependencies in inputs. Afterwards, an RNN is used to compute the conditional distributions of the latent covariates. The resulting model is efficient to train, compute, and sample from, whilst producing normalized pdfs. The effectiveness of RED is shown via several real-world data experiments. Our results show that RED models achieve a lower held-out negative log-likelihood than other neural network approaches across multiple dataset sizes and dimensionalities. Further context for the efficacy of RED is provided by considering anomaly detection tasks, where we also observe better performance over alternative models.
no_new_dataset
0.948775
1705.10754
Francisco Rangel
Francisco Rangel and Marc Franco-Salvador and Paolo Rosso
A Low Dimensionality Representation for Language Variety Identification
null
CICLing - Computational Linguistics and Intelligent Text Processing, 2016
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Language variety identification aims at labelling texts in a native language (e.g. Spanish, Portuguese, English) with their specific variety (e.g. Argentina, Chile, Mexico, Peru, Spain; Brazil, Portugal; UK, US). In this work we propose a low dimensionality representation (LDR) to address this task with five different varieties of Spanish: Argentina, Chile, Mexico, Peru and Spain. We compare our LDR method with common state-of-the-art representations and show an increase in accuracy of ~35%. Furthermore, we compare LDR with two reference distributed representation models. Experimental results show competitive performance while dramatically reducing the dimensionality --and increasing the big data suitability-- to only 6 features per variety. Additionally, we analyse the behaviour of the employed machine learning algorithms and the most discriminating features. Finally, we employ an alternative dataset to test the robustness of our low dimensionality representation with another set of similar languages.
[ { "version": "v1", "created": "Tue, 30 May 2017 17:07:45 GMT" } ]
2017-05-31T00:00:00
[ [ "Rangel", "Francisco", "" ], [ "Franco-Salvador", "Marc", "" ], [ "Rosso", "Paolo", "" ] ]
TITLE: A Low Dimensionality Representation for Language Variety Identification ABSTRACT: Language variety identification aims at labelling texts in a native language (e.g. Spanish, Portuguese, English) with their specific variety (e.g. Argentina, Chile, Mexico, Peru, Spain; Brazil, Portugal; UK, US). In this work we propose a low dimensionality representation (LDR) to address this task with five different varieties of Spanish: Argentina, Chile, Mexico, Peru and Spain. We compare our LDR method with common state-of-the-art representations and show an increase in accuracy of ~35%. Furthermore, we compare LDR with two reference distributed representation models. Experimental results show competitive performance while dramatically reducing the dimensionality --and increasing the big data suitability-- to only 6 features per variety. Additionally, we analyse the behaviour of the employed machine learning algorithms and the most discriminating features. Finally, we employ an alternative dataset to test the robustness of our low dimensionality representation with another set of similar languages.
new_dataset
0.514309
1604.06518
Vu Nguyen
Trung Le and Tu Dinh Nguyen and Vu Nguyen and Dinh Phung
Approximation Vector Machines for Large-scale Online Learning
54 pages
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the most challenging problems in kernel online learning is to bound the model size and to promote model sparsity. Sparse models not only improve computation and memory usage, but also enhance the generalization capacity, a principle that concurs with the law of parsimony. However, inappropriate sparsity modeling may also significantly degrade the performance. In this paper, we propose the Approximation Vector Machine (AVM), a model that can simultaneously encourage sparsity and safeguard against the risk of compromising performance. When an incoming instance arrives, we approximate this instance by one of its neighbors whose distance to it is less than a predefined threshold. Our key intuition is that since the newly seen instance is expressed by its nearby neighbor, the optimal performance can be analytically formulated and maintained. We develop theoretical foundations to support this intuition and further establish an analysis to characterize the gap between the approximation and optimal solutions. This gap crucially depends on the frequency of approximation and the predefined threshold. We perform the convergence analysis for a wide spectrum of loss functions including Hinge, smooth Hinge, and Logistic for the classification task, and $l_1$, $l_2$, and $\epsilon$-insensitive for the regression task. We conducted extensive experiments for the classification task in batch and online modes, and the regression task in online mode, over several benchmark datasets. The results show that our proposed AVM achieved a comparable predictive performance with current state-of-the-art methods while simultaneously achieving significant computational speed-up due to the ability of the proposed AVM to maintain the model size.
[ { "version": "v1", "created": "Fri, 22 Apr 2016 01:57:01 GMT" }, { "version": "v2", "created": "Mon, 25 Apr 2016 01:16:21 GMT" }, { "version": "v3", "created": "Wed, 5 Apr 2017 01:43:29 GMT" }, { "version": "v4", "created": "Sun, 28 May 2017 01:26:48 GMT" } ]
2017-05-30T00:00:00
[ [ "Le", "Trung", "" ], [ "Nguyen", "Tu Dinh", "" ], [ "Nguyen", "Vu", "" ], [ "Phung", "Dinh", "" ] ]
TITLE: Approximation Vector Machines for Large-scale Online Learning ABSTRACT: One of the most challenging problems in kernel online learning is to bound the model size and to promote model sparsity. Sparse models not only improve computation and memory usage, but also enhance the generalization capacity, a principle that concurs with the law of parsimony. However, inappropriate sparsity modeling may also significantly degrade the performance. In this paper, we propose the Approximation Vector Machine (AVM), a model that can simultaneously encourage sparsity and safeguard against the risk of compromising performance. When an incoming instance arrives, we approximate this instance by one of its neighbors whose distance to it is less than a predefined threshold. Our key intuition is that since the newly seen instance is expressed by its nearby neighbor, the optimal performance can be analytically formulated and maintained. We develop theoretical foundations to support this intuition and further establish an analysis to characterize the gap between the approximation and optimal solutions. This gap crucially depends on the frequency of approximation and the predefined threshold. We perform the convergence analysis for a wide spectrum of loss functions including Hinge, smooth Hinge, and Logistic for the classification task, and $l_1$, $l_2$, and $\epsilon$-insensitive for the regression task. We conducted extensive experiments for the classification task in batch and online modes, and the regression task in online mode, over several benchmark datasets. The results show that our proposed AVM achieved a comparable predictive performance with current state-of-the-art methods while simultaneously achieving significant computational speed-up due to the ability of the proposed AVM to maintain the model size.
no_new_dataset
0.946151
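Records can carry several versions, as in the entry above. A sketch of extracting the latest revision date and flattening authors_parsed, with the field layouts inferred from this dump:

from datetime import datetime

def latest_created(versions):
    # Each entry looks like {"version": "v4", "created": "Sun, 28 May 2017 01:26:48 GMT"}.
    fmt = "%a, %d %b %Y %H:%M:%S %Z"
    return max(datetime.strptime(v["created"], fmt) for v in versions)

def author_names(authors_parsed):
    # Each entry looks like ["Le", "Trung", ""]: last name, first name, suffix.
    return ", ".join(f"{first} {last}".strip() for last, first, _suffix in authors_parsed)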
1606.06159
Jie Sun
Yazhen Jiang, Joseph Skufca, Jie Sun
BiFold visualization of bipartite datasets
18 pages, 6 figures
EPJ Data Sci. (2017) 6: 2
10.1140/epjds/s13688-017-0098-4
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The emerging domain of data-enabled science necessitates the development of algorithms and tools for knowledge discovery. Human interaction with data through well-constructed graphical representations can take special advantage of our visual ability to identify patterns. We develop a data visualization framework, called BiFold, for exploratory analysis of bipartite datasets that describe binary relationships between groups of objects. Typical data examples would include voting records, organizational memberships, and pairwise associations, or other binary datasets. BiFold provides a low dimensional embedding of data that represents similarity by visual nearness, analogous to Multidimensional Scaling (MDS). The unique and new feature of BiFold is its ability to simultaneously capture both within-group and between-group relationships among objects, enhancing knowledge discovery. We benchmark BiFold using the Southern Women Dataset, where social groups are now visually evident. We construct BiFold plots for two US voting datasets: For the presidential election outcomes since 1976, BiFold illustrates the evolving geopolitical structures that underlie these election results. For Senate congressional voting, BiFold identifies a partisan coordinate, separating senators into two parties, while simultaneously visualizing a bipartisan-coalition coordinate which captures the ultimate fate of the bills (pass/fail). Finally, we consider a global cuisine dataset of the association between recipes and food ingredients. BiFold allows us to visually compare and contrast cuisines while also allowing identification of signature ingredients of individual cuisines.
[ { "version": "v1", "created": "Mon, 20 Jun 2016 15:00:59 GMT" }, { "version": "v2", "created": "Tue, 21 Jun 2016 08:27:52 GMT" }, { "version": "v3", "created": "Sun, 28 May 2017 23:57:32 GMT" } ]
2017-05-30T00:00:00
[ [ "Jiang", "Yazhen", "" ], [ "Skufca", "Joseph", "" ], [ "Sun", "Jie", "" ] ]
TITLE: BiFold visualization of bipartite datasets ABSTRACT: The emerging domain of data-enabled science necessitates the development of algorithms and tools for knowledge discovery. Human interaction with data through well-constructed graphical representations can take special advantage of our visual ability to identify patterns. We develop a data visualization framework, called BiFold, for exploratory analysis of bipartite datasets that describe binary relationships between groups of objects. Typical data examples would include voting records, organizational memberships, and pairwise associations, or other binary datasets. BiFold provides a low dimensional embedding of data that represents similarity by visual nearness, analogous to Multidimensional Scaling (MDS). The unique and new feature of BiFold is its ability to simultaneously capture both within-group and between-group relationships among objects, enhancing knowledge discovery. We benchmark BiFold using the Southern Women Dataset, where social groups are now visually evident. We construct BiFold plots for two US voting datasets: For the presidential election outcomes since 1976, BiFold illustrates the evolving geopolitical structures that underlie these election results. For Senate congressional voting, BiFold identifies a partisan coordinate, separating senators into two parties, while simultaneously visualizing a bipartisan-coalition coordinate which captures the ultimate fate of the bills (pass/fail). Finally, we consider a global cuisine dataset of the association between recipes and food ingredients. BiFold allows us to visually compare and contrast cuisines while also allowing identification of signature ingredients of individual cuisines.
new_dataset
0.511034
1608.00508
Paul Michel
Paul Michel, Okko R\"as\"anen, Roland Thiolli\`ere, Emmanuel Dupoux
Blind phoneme segmentation with temporal prediction errors
7 pages 3 figures. Presented at ACL SRW 2017
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phonemic segmentation of speech is a critical step of speech recognition systems. We propose a novel unsupervised algorithm based on sequence prediction models such as Markov chains and recurrent neural networks. Our approach consists in analyzing the error profile of a model trained to predict speech features frame-by-frame. Specifically, we try to learn the dynamics of speech in the MFCC space and hypothesize boundaries from local maxima in the prediction error. We evaluate our system on the TIMIT dataset, with improvements over similar methods.
[ { "version": "v1", "created": "Mon, 1 Aug 2016 17:51:03 GMT" }, { "version": "v2", "created": "Sat, 27 May 2017 04:01:13 GMT" } ]
2017-05-30T00:00:00
[ [ "Michel", "Paul", "" ], [ "Räsänen", "Okko", "" ], [ "Thiollière", "Roland", "" ], [ "Dupoux", "Emmanuel", "" ] ]
TITLE: Blind phoneme segmentation with temporal prediction errors ABSTRACT: Phonemic segmentation of speech is a critical step of speech recognition systems. We propose a novel unsupervised algorithm based on sequence prediction models such as Markov chains and recurrent neural networks. Our approach consists in analyzing the error profile of a model trained to predict speech features frame-by-frame. Specifically, we try to learn the dynamics of speech in the MFCC space and hypothesize boundaries from local maxima in the prediction error. We evaluate our system on the TIMIT dataset, with improvements over similar methods.
no_new_dataset
0.947088
1610.03466
Xianzhi Du
Xianzhi Du and Mostafa El-Khamy and Jungwon Lee and Larry S. Davis
Fused DNN: A deep neural network fusion approach to fast and robust pedestrian detection
WACV 2017
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a deep neural network fusion architecture for fast and robust pedestrian detection. The proposed network fusion architecture allows for parallel processing of multiple networks for speed. A single shot deep convolutional network is trained as an object detector to generate all possible pedestrian candidates of different sizes and occlusions. This network outputs a large variety of pedestrian candidates to cover the majority of ground-truth pedestrians while also introducing a large number of false positives. Next, multiple deep neural networks are used in parallel for further refinement of these pedestrian candidates. We introduce a soft-rejection based network fusion method to fuse the soft metrics from all networks together to generate the final confidence scores. Our method performs better than existing state-of-the-art methods, especially when detecting small-size and occluded pedestrians. Furthermore, we propose a method for integrating a pixel-wise semantic segmentation network into the network fusion architecture as a reinforcement to the pedestrian detector. The approach outperforms state-of-the-art methods on most protocols on the Caltech Pedestrian dataset, with significant boosts on several protocols. It is also faster than all other methods.
[ { "version": "v1", "created": "Tue, 11 Oct 2016 18:59:12 GMT" }, { "version": "v2", "created": "Sun, 28 May 2017 15:45:56 GMT" } ]
2017-05-30T00:00:00
[ [ "Du", "Xianzhi", "" ], [ "El-Khamy", "Mostafa", "" ], [ "Lee", "Jungwon", "" ], [ "Davis", "Larry S.", "" ] ]
TITLE: Fused DNN: A deep neural network fusion approach to fast and robust pedestrian detection ABSTRACT: We propose a deep neural network fusion architecture for fast and robust pedestrian detection. The proposed network fusion architecture allows for parallel processing of multiple networks for speed. A single shot deep convolutional network is trained as an object detector to generate all possible pedestrian candidates of different sizes and occlusions. This network outputs a large variety of pedestrian candidates to cover the majority of ground-truth pedestrians while also introducing a large number of false positives. Next, multiple deep neural networks are used in parallel for further refinement of these pedestrian candidates. We introduce a soft-rejection based network fusion method to fuse the soft metrics from all networks together to generate the final confidence scores. Our method performs better than existing state-of-the-art methods, especially when detecting small-size and occluded pedestrians. Furthermore, we propose a method for integrating a pixel-wise semantic segmentation network into the network fusion architecture as a reinforcement to the pedestrian detector. The approach outperforms state-of-the-art methods on most protocols on the Caltech Pedestrian dataset, with significant boosts on several protocols. It is also faster than all other methods.
no_new_dataset
0.950503
1611.04878
Yeounoh Chung
Yeounoh Chung, Sanjay Krishnan, Tim Kraska
A Data Quality Metric (DQM): How to Estimate The Number of Undetected Errors in Data Sets
To appear in VLDB 2017
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data cleaning, whether manual or algorithmic, is rarely perfect, leaving a dataset with an unknown number of false positives and false negatives after cleaning. In many scenarios, quantifying the number of remaining errors is challenging because our data integrity rules themselves may be incomplete, or the available gold-standard datasets may be too small to extrapolate. As the use of inherently fallible crowds becomes more prevalent in data cleaning problems, it is important to have estimators to quantify the extent of such errors. We propose novel species estimators to estimate the number of distinct remaining errors in a dataset after it has been cleaned by a set of crowd workers -- essentially, quantifying the utility of hiring additional workers to clean the dataset. This problem requires new estimators that are robust to false positives and false negatives, and we empirically show on three real-world datasets that existing species estimators are unstable for this problem, while our proposed techniques quickly converge.
[ { "version": "v1", "created": "Tue, 15 Nov 2016 15:00:53 GMT" }, { "version": "v2", "created": "Sat, 11 Mar 2017 04:46:38 GMT" }, { "version": "v3", "created": "Fri, 26 May 2017 18:24:26 GMT" } ]
2017-05-30T00:00:00
[ [ "Chung", "Yeounoh", "" ], [ "Krishnan", "Sanjay", "" ], [ "Kraska", "Tim", "" ] ]
TITLE: A Data Quality Metric (DQM): How to Estimate The Number of Undetected Errors in Data Sets ABSTRACT: Data cleaning, whether manual or algorithmic, is rarely perfect, leaving a dataset with an unknown number of false positives and false negatives after cleaning. In many scenarios, quantifying the number of remaining errors is challenging because our data integrity rules themselves may be incomplete, or the available gold-standard datasets may be too small to extrapolate. As the use of inherently fallible crowds becomes more prevalent in data cleaning problems, it is important to have estimators to quantify the extent of such errors. We propose novel species estimators to estimate the number of distinct remaining errors in a dataset after it has been cleaned by a set of crowd workers -- essentially, quantifying the utility of hiring additional workers to clean the dataset. This problem requires new estimators that are robust to false positives and false negatives, and we empirically show on three real-world datasets that existing species estimators are unstable for this problem, while our proposed techniques quickly converge.
no_new_dataset
0.9455
1612.03530
Diqi Chen
Diqi Chen, Yizhou Wang, Tianfu Wu, Wen Gao
An Attention-Driven Approach of No-Reference Image Quality Assessment
9 pages, 7 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present a novel method for no-reference image quality assessment (NR-IQA), which is to predict the perceptual quality score of a given image without using any reference image. The proposed method harnesses three functions: (i) the visual attention mechanism, which affects many aspects of visual perception including image quality assessment but is overlooked in the NR-IQA literature. The method assumes that the fixation areas on an image contain key information for the process of IQA. (ii) the robust averaging strategy, which is a means, supported by psychology studies, of integrating multiple/step-wise evidence to make a final perceptual judgment. (iii) multi-task learning, which is believed to be an effectual means to shape representation learning and could result in a more generalized model. To exploit the synergy of the three, we consider NR-IQA as a dynamic perception process, in which the model samples a sequence of "informative" areas and aggregates the information to learn a representation for the tasks of jointly predicting the image quality score and the distortion type. The model learning is implemented by a reinforcement strategy, in which the rewards of both tasks guide the learning of the optimal sampling policy to acquire the "task-informative" image regions so that the predictions can be made accurately and efficiently (in terms of the sampling steps). The reinforcement learning is realized by a deep network with the policy gradient method and trained through back-propagation. In experiments, the model is tested on the TID2008 dataset and it outperforms several state-of-the-art methods. Furthermore, the model is very efficient in the sense that a small number of fixations are used in NR-IQA.
[ { "version": "v1", "created": "Mon, 12 Dec 2016 03:25:35 GMT" }, { "version": "v2", "created": "Tue, 21 Mar 2017 01:46:45 GMT" }, { "version": "v3", "created": "Mon, 29 May 2017 02:42:28 GMT" } ]
2017-05-30T00:00:00
[ [ "Chen", "Diqi", "" ], [ "Wang", "Yizhou", "" ], [ "Wu", "Tianfu", "" ], [ "Gao", "Wen", "" ] ]
TITLE: An Attention-Driven Approach of No-Reference Image Quality Assessment ABSTRACT: In this paper, we present a novel method for no-reference image quality assessment (NR-IQA), which is to predict the perceptual quality score of a given image without using any reference image. The proposed method harnesses three functions: (i) the visual attention mechanism, which affects many aspects of visual perception including image quality assessment but is overlooked in the NR-IQA literature. The method assumes that the fixation areas on an image contain key information for the process of IQA. (ii) the robust averaging strategy, which is a means, supported by psychology studies, of integrating multiple/step-wise evidence to make a final perceptual judgment. (iii) multi-task learning, which is believed to be an effectual means to shape representation learning and could result in a more generalized model. To exploit the synergy of the three, we consider NR-IQA as a dynamic perception process, in which the model samples a sequence of "informative" areas and aggregates the information to learn a representation for the tasks of jointly predicting the image quality score and the distortion type. The model learning is implemented by a reinforcement strategy, in which the rewards of both tasks guide the learning of the optimal sampling policy to acquire the "task-informative" image regions so that the predictions can be made accurately and efficiently (in terms of the sampling steps). The reinforcement learning is realized by a deep network with the policy gradient method and trained through back-propagation. In experiments, the model is tested on the TID2008 dataset and it outperforms several state-of-the-art methods. Furthermore, the model is very efficient in the sense that a small number of fixations are used in NR-IQA.
no_new_dataset
0.94474
1612.06530
Shaodi You
Shijie Zhang, Lizhen Qu, Shaodi You, Zhenglu Yang, Jiawan Zhang
Automatic Generation of Grounded Visual Questions
VQA
IJCAI 2017
null
null
cs.CV cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose the first model able to generate visually grounded questions with diverse types for a single image. Visual question generation is an emerging topic which aims to ask questions in natural language based on visual input. To the best of our knowledge, there is a lack of automatic methods to generate meaningful questions with various types for the same visual input. To circumvent the problem, we propose a model that automatically generates visually grounded questions with varying types. Our model takes as input both images and the captions generated by a dense caption model, samples the most probable question types, and generates the questions in sequence. The experimental results on two real-world datasets show that our model outperforms the strongest baseline in terms of both correctness and diversity by a wide margin.
[ { "version": "v1", "created": "Tue, 20 Dec 2016 07:20:16 GMT" }, { "version": "v2", "created": "Mon, 29 May 2017 12:54:35 GMT" } ]
2017-05-30T00:00:00
[ [ "Zhang", "Shijie", "" ], [ "Qu", "Lizhen", "" ], [ "You", "Shaodi", "" ], [ "Yang", "Zhenglu", "" ], [ "Zhang", "Jiawan", "" ] ]
TITLE: Automatic Generation of Grounded Visual Questions ABSTRACT: In this paper, we propose the first model able to generate visually grounded questions with diverse types for a single image. Visual question generation is an emerging topic which aims to ask questions in natural language based on visual input. To the best of our knowledge, there is a lack of automatic methods to generate meaningful questions with various types for the same visual input. To circumvent the problem, we propose a model that automatically generates visually grounded questions with varying types. Our model takes as input both images and the captions generated by a dense caption model, samples the most probable question types, and generates the questions in sequence. The experimental results on two real-world datasets show that our model outperforms the strongest baseline in terms of both correctness and diversity by a wide margin.
no_new_dataset
0.952131
1702.04459
Dianhui Wang
Dianhui Wang, Ming Li
Robust Stochastic Configuration Networks with Kernel Density Estimation
14 pages
null
null
null
cs.NE cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural networks have been widely used as predictive models to fit data distributions, and they can be implemented by learning from a collection of samples. In many applications, however, the given dataset may contain noisy samples or outliers which may result in a poor learner model in terms of generalization. This paper contributes to the development of robust stochastic configuration networks (RSCNs) for resolving uncertain data regression problems. RSCNs are built on original stochastic configuration networks with a weighted least squares method for evaluating the output weights, and the input weights and biases are incrementally and randomly generated by satisfying a set of inequality constraints. The kernel density estimation (KDE) method is employed to set the penalty weight for each training sample, so that some negative impacts, caused by noisy data or outliers, on the resulting learner model can be reduced. The alternating optimization technique is applied for updating an RSCN model with improved penalty weights computed from the kernel density estimation function. Performance evaluation is carried out on a function approximation, four benchmark datasets and a case study on an engineering application. Comparisons to other robust randomised neural modelling techniques, including the probabilistic robust learning algorithm for neural networks with random weights and improved RVFL networks, indicate that the proposed RSCNs with KDE perform favourably and demonstrate good potential for real-world applications.
[ { "version": "v1", "created": "Wed, 15 Feb 2017 03:54:29 GMT" }, { "version": "v2", "created": "Mon, 29 May 2017 15:29:47 GMT" } ]
2017-05-30T00:00:00
[ [ "Wang", "Dianhui", "" ], [ "Li", "Ming", "" ] ]
TITLE: Robust Stochastic Configuration Networks with Kernel Density Estimation ABSTRACT: Neural networks have been widely used as predictive models to fit data distributions, and they can be implemented by learning from a collection of samples. In many applications, however, the given dataset may contain noisy samples or outliers which may result in a poor learner model in terms of generalization. This paper contributes to the development of robust stochastic configuration networks (RSCNs) for resolving uncertain data regression problems. RSCNs are built on original stochastic configuration networks with a weighted least squares method for evaluating the output weights, and the input weights and biases are incrementally and randomly generated by satisfying a set of inequality constraints. The kernel density estimation (KDE) method is employed to set the penalty weight for each training sample, so that some negative impacts, caused by noisy data or outliers, on the resulting learner model can be reduced. The alternating optimization technique is applied for updating an RSCN model with improved penalty weights computed from the kernel density estimation function. Performance evaluation is carried out on a function approximation, four benchmark datasets and a case study on an engineering application. Comparisons to other robust randomised neural modelling techniques, including the probabilistic robust learning algorithm for neural networks with random weights and improved RVFL networks, indicate that the proposed RSCNs with KDE perform favourably and demonstrate good potential for real-world applications.
no_new_dataset
0.947866
1702.07817
Jianshu Chen
Yu Liu, Jianshu Chen, Li Deng
Unsupervised Sequence Classification using Sequential Output Statistics
All authors contributed equally to the paper. 17 pages, 7 figures and 2 tables
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider learning a sequence classifier without labeled data by using sequential output statistics. The problem is highly valuable since obtaining labels in training data is often costly, while the sequential output statistics (e.g., language models) could be obtained independently of input data and thus with low or no cost. To address the problem, we propose an unsupervised learning cost function and study its properties. We show that, compared to earlier works, it is less inclined to be stuck in trivial solutions and avoids the need for a strong generative model. Although it is harder to optimize in its functional form, a stochastic primal-dual gradient method is developed to effectively solve the problem. Experiment results on real-world datasets demonstrate that the new unsupervised learning method gives drastically lower errors than other baseline methods. Specifically, it reaches test errors about twice those obtained by fully supervised learning.
[ { "version": "v1", "created": "Sat, 25 Feb 2017 01:55:38 GMT" }, { "version": "v2", "created": "Fri, 26 May 2017 18:30:24 GMT" } ]
2017-05-30T00:00:00
[ [ "Liu", "Yu", "" ], [ "Chen", "Jianshu", "" ], [ "Deng", "Li", "" ] ]
TITLE: Unsupervised Sequence Classification using Sequential Output Statistics ABSTRACT: We consider learning a sequence classifier without labeled data by using sequential output statistics. The problem is highly valuable since obtaining labels in training data is often costly, while the sequential output statistics (e.g., language models) could be obtained independently of input data and thus with low or no cost. To address the problem, we propose an unsupervised learning cost function and study its properties. We show that, compared to earlier works, it is less inclined to be stuck in trivial solutions and avoids the need for a strong generative model. Although it is harder to optimize in its functional form, a stochastic primal-dual gradient method is developed to effectively solve the problem. Experiment results on real-world datasets demonstrate that the new unsupervised learning method gives drastically lower errors than other baseline methods. Specifically, it reaches test errors about twice those obtained by fully supervised learning.
no_new_dataset
0.951639
1704.00616
Mohammadreza Zolfaghari
Mohammadreza Zolfaghari, Gabriel L. Oliveira, Nima Sedaghat, and Thomas Brox
Chained Multi-stream Networks Exploiting Pose, Motion, and Appearance for Action Classification and Detection
10 pages, 7 figures, ICCV 2017 submission
null
null
null
cs.CV cs.AI cs.HC cs.MM cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
General human action recognition requires understanding of various visual cues. In this paper, we propose a network architecture that computes and integrates the most important visual cues for action recognition: pose, motion, and the raw images. For the integration, we introduce a Markov chain model which adds cues successively. The resulting approach is efficient and applicable to action classification as well as to spatial and temporal action localization. The two contributions clearly improve the performance over respective baselines. The overall approach achieves state-of-the-art action classification performance on HMDB51, J-HMDB and NTU RGB+D datasets. Moreover, it yields state-of-the-art spatio-temporal action localization results on UCF101 and J-HMDB.
[ { "version": "v1", "created": "Mon, 3 Apr 2017 14:29:40 GMT" }, { "version": "v2", "created": "Fri, 26 May 2017 18:40:14 GMT" } ]
2017-05-30T00:00:00
[ [ "Zolfaghari", "Mohammadreza", "" ], [ "Oliveira", "Gabriel L.", "" ], [ "Sedaghat", "Nima", "" ], [ "Brox", "Thomas", "" ] ]
TITLE: Chained Multi-stream Networks Exploiting Pose, Motion, and Appearance for Action Classification and Detection ABSTRACT: General human action recognition requires understanding of various visual cues. In this paper, we propose a network architecture that computes and integrates the most important visual cues for action recognition: pose, motion, and the raw images. For the integration, we introduce a Markov chain model which adds cues successively. The resulting approach is efficient and applicable to action classification as well as to spatial and temporal action localization. The two contributions clearly improve the performance over respective baselines. The overall approach achieves state-of-the-art action classification performance on HMDB51, J-HMDB and NTU RGB+D datasets. Moreover, it yields state-of-the-art spatio-temporal action localization results on UCF101 and J-HMDB.
no_new_dataset
0.949716
1704.02801
Ahmed Alaa
Ahmed M. Alaa and Mihaela van der Schaar
Bayesian Inference of Individualized Treatment Effects using Multi-task Gaussian Processes
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Predicated on the increasing abundance of electronic health records, we investigate the problem of inferring individualized treatment effects using observational data. Stemming from the potential outcomes model, we propose a novel multi-task learning framework in which factual and counterfactual outcomes are modeled as the outputs of a function in a vector-valued reproducing kernel Hilbert space (vvRKHS). We develop a nonparametric Bayesian method for learning the treatment effects using a multi-task Gaussian process (GP) with a linear coregionalization kernel as a prior over the vvRKHS. The Bayesian approach allows us to compute individualized measures of confidence in our estimates via pointwise credible intervals, which are crucial for realizing the full potential of precision medicine. The impact of selection bias is alleviated via a risk-based empirical Bayes method for adapting the multi-task GP prior, which jointly minimizes the empirical error in factual outcomes and the uncertainty in (unobserved) counterfactual outcomes. We conduct experiments on observational datasets for an interventional social program applied to premature infants, and a left ventricular assist device applied to cardiac patients wait-listed for a heart transplant. In both experiments, we show that our method significantly outperforms the state-of-the-art.
[ { "version": "v1", "created": "Mon, 10 Apr 2017 11:03:36 GMT" }, { "version": "v2", "created": "Sun, 28 May 2017 13:29:58 GMT" } ]
2017-05-30T00:00:00
[ [ "Alaa", "Ahmed M.", "" ], [ "van der Schaar", "Mihaela", "" ] ]
TITLE: Bayesian Inference of Individualized Treatment Effects using Multi-task Gaussian Processes ABSTRACT: Predicated on the increasing abundance of electronic health records, we investigate the problem of inferring individualized treatment effects using observational data. Stemming from the potential outcomes model, we propose a novel multi-task learning framework in which factual and counterfactual outcomes are modeled as the outputs of a function in a vector-valued reproducing kernel Hilbert space (vvRKHS). We develop a nonparametric Bayesian method for learning the treatment effects using a multi-task Gaussian process (GP) with a linear coregionalization kernel as a prior over the vvRKHS. The Bayesian approach allows us to compute individualized measures of confidence in our estimates via pointwise credible intervals, which are crucial for realizing the full potential of precision medicine. The impact of selection bias is alleviated via a risk-based empirical Bayes method for adapting the multi-task GP prior, which jointly minimizes the empirical error in factual outcomes and the uncertainty in (unobserved) counterfactual outcomes. We conduct experiments on observational datasets for an interventional social program applied to premature infants, and a left ventricular assist device applied to cardiac patients wait-listed for a heart transplant. In both experiments, we show that our method significantly outperforms the state-of-the-art.
no_new_dataset
0.947817
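The multi-task GP formulation in the record above lends itself to a compact illustration. Below is a minimal numpy sketch of a two-task GP posterior with a linear coregionalization kernel, where the two tasks are the factual and counterfactual outcomes; the RBF base kernel, the task-similarity matrix B, and all variable names are illustrative assumptions rather than the authors' exact model.

```python
import numpy as np

def rbf(X1, X2, ls=1.0):
    # Squared-exponential base kernel k(x, x').
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def coreg_kernel(X1, t1, X2, t2, B, ls=1.0):
    # Linear coregionalization: K((x,t),(x',t')) = B[t,t'] * k(x,x').
    return B[np.ix_(t1, t2)] * rbf(X1, X2, ls)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                    # patient covariates
t = rng.integers(0, 2, size=50)                 # observed treatment = task id
y = np.sin(X[:, 0]) + 0.5 * t + 0.1 * rng.normal(size=50)

B = np.array([[1.0, 0.7], [0.7, 1.0]])          # task-similarity matrix (assumed)
K = coreg_kernel(X, t, X, t, B) + 1e-2 * np.eye(50)  # noisy Gram matrix
alpha = np.linalg.solve(K, y)

# Posterior mean of both potential outcomes at one new patient x*.
x_star = rng.normal(size=(1, 3))
mu = [coreg_kernel(x_star, np.array([task]), X, t, B) @ alpha for task in (0, 1)]
ite_estimate = (mu[1] - mu[0]).item()           # estimated individualized effect
```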
1705.01921
Liliang Ren
Liliang Ren
Recurrent Soft Attention Model for Common Object Recognition
5 pages, 4 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose the Recurrent Soft Attention Model, which integrates the visual attention from the original image into an LSTM memory cell through a down-sample network. The model recurrently transmits visual attention to the memory cells for glimpse mask generation, which is a more natural way for attention integration and exploitation in general object detection and recognition problems. We test our model under the metric of top-1 accuracy on the CIFAR-10 dataset. The experiment shows that our down-sample network and feedback mechanism play an effective role within the whole network structure.
[ { "version": "v1", "created": "Thu, 4 May 2017 17:27:42 GMT" }, { "version": "v2", "created": "Mon, 29 May 2017 07:02:52 GMT" } ]
2017-05-30T00:00:00
[ [ "Ren", "Liliang", "" ] ]
TITLE: Recurrent Soft Attention Model for Common Object Recognition ABSTRACT: We propose the Recurrent Soft Attention Model, which integrates the visual attention from the original image into an LSTM memory cell through a down-sample network. The model recurrently transmits visual attention to the memory cells for glimpse mask generation, which is a more natural way for attention integration and exploitation in general object detection and recognition problems. We test our model under the metric of top-1 accuracy on the CIFAR-10 dataset. The experiment shows that our down-sample network and feedback mechanism play an effective role within the whole network structure.
no_new_dataset
0.952175
1705.02758
Xiu-Shen Wei
Xiu-Shen Wei, Chen-Lin Zhang, Yao Li, Chen-Wei Xie, Jianxin Wu, Chunhua Shen, Zhi-Hua Zhou
Deep Descriptor Transforming for Image Co-Localization
Accepted by IJCAI 2017
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reusable model design becomes desirable with the rapid expansion of machine learning applications. In this paper, we focus on the reusability of pre-trained deep convolutional models. Specifically, different from treating pre-trained models as feature extractors, we reveal more treasures beneath convolutional layers, i.e., the convolutional activations could act as a detector for the common object in the image co-localization problem. We propose a simple but effective method, named Deep Descriptor Transforming (DDT), for evaluating the correlations of descriptors and then obtaining the category-consistent regions, which can accurately locate the common object in a set of images. Empirical studies validate the effectiveness of the proposed DDT method. On benchmark image co-localization datasets, DDT consistently outperforms existing state-of-the-art methods by a large margin. Moreover, DDT also demonstrates good generalization ability for unseen categories and robustness for dealing with noisy data.
[ { "version": "v1", "created": "Mon, 8 May 2017 06:52:44 GMT" } ]
2017-05-30T00:00:00
[ [ "Wei", "Xiu-Shen", "" ], [ "Zhang", "Chen-Lin", "" ], [ "Li", "Yao", "" ], [ "Xie", "Chen-Wei", "" ], [ "Wu", "Jianxin", "" ], [ "Shen", "Chunhua", "" ], [ "Zhou", "Zhi-Hua", "" ] ]
TITLE: Deep Descriptor Transforming for Image Co-Localization ABSTRACT: Reusable model design becomes desirable with the rapid expansion of machine learning applications. In this paper, we focus on the reusability of pre-trained deep convolutional models. Specifically, different from treating pre-trained models as feature extractors, we reveal more treasures beneath convolutional layers, i.e., the convolutional activations could act as a detector for the common object in the image co-localization problem. We propose a simple but effective method, named Deep Descriptor Transforming (DDT), for evaluating the correlations of descriptors and then obtaining the category-consistent regions, which can accurately locate the common object in a set of images. Empirical studies validate the effectiveness of the proposed DDT method. On benchmark image co-localization datasets, DDT consistently outperforms existing state-of-the-art methods by a large margin. Moreover, DDT also demonstrates good generalization ability for unseen categories and robustness for dealing with noisy data.
no_new_dataset
0.947039
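As a concrete illustration of the idea of transforming pre-trained descriptors, here is a rough numpy sketch in the spirit of DDT: pool the convolutional descriptors of an image set, take the leading principal direction of their correlations, and treat positive projections as the category-consistent regions. The activation shapes and the thresholding at zero are assumptions for illustration, not necessarily the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
# Assumed: conv activations of N images, each an H x W grid of D-dim descriptors.
N, H, W, D = 8, 7, 7, 64
acts = rng.normal(size=(N, H, W, D))

X = acts.reshape(-1, D)
mean = X.mean(0)
cov = (X - mean).T @ (X - mean) / len(X)        # descriptor correlations
eigval, eigvec = np.linalg.eigh(cov)
p1 = eigvec[:, -1]                              # leading principal direction

# Project every spatial position onto p1; positive responses are taken to
# indicate the common object shared across the image set.
indicator = ((acts - mean) @ p1).reshape(N, H, W)
masks = indicator > 0
```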
1705.08111
Benjamin Gutierrez Becker
Benjam\'in Guti\'errez and Lo\"ic Peter and Tassilo Klein and Christian Wachinger
A Multi-Armed Bandit to Smartly Select a Training Set from Big Medical Data
MICCAI 2017 Proceedings
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the availability of big medical image data, the selection of an adequate training set is becoming more important to address the heterogeneity of different datasets. Simply including all the data not only incurs high processing costs but can even harm the prediction. We formulate the smart and efficient selection of a training dataset from big medical image data as a multi-armed bandit problem, solved by Thompson sampling. Our method assumes that image features are not available at the time of the selection of the samples, and therefore relies only on meta information associated with the images. Our strategy simultaneously exploits data sources with high chances of yielding useful samples and explores new data regions. For our evaluation, we focus on the application of estimating the age from a brain MRI. Our results on 7,250 subjects from 10 datasets show that our approach leads to higher accuracy while only requiring a fraction of the training data.
[ { "version": "v1", "created": "Tue, 23 May 2017 07:51:54 GMT" }, { "version": "v2", "created": "Mon, 29 May 2017 12:50:19 GMT" } ]
2017-05-30T00:00:00
[ [ "Gutiérrez", "Benjamín", "" ], [ "Peter", "Loïc", "" ], [ "Klein", "Tassilo", "" ], [ "Wachinger", "Christian", "" ] ]
TITLE: A Multi-Armed Bandit to Smartly Select a Training Set from Big Medical Data ABSTRACT: With the availability of big medical image data, the selection of an adequate training set is becoming more important to address the heterogeneity of different datasets. Simply including all the data not only incurs high processing costs but can even harm the prediction. We formulate the smart and efficient selection of a training dataset from big medical image data as a multi-armed bandit problem, solved by Thompson sampling. Our method assumes that image features are not available at the time of the selection of the samples, and therefore relies only on meta information associated with the images. Our strategy simultaneously exploits data sources with high chances of yielding useful samples and explores new data regions. For our evaluation, we focus on the application of estimating the age from a brain MRI. Our results on 7,250 subjects from 10 datasets show that our approach leads to higher accuracy while only requiring a fraction of the training data.
no_new_dataset
0.950824
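The bandit formulation above is straightforward to sketch. The following toy numpy example runs Beta-Bernoulli Thompson sampling over a set of candidate data sources, where the binary reward (did the drawn sample help?) is simulated; the reward model and all constants are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 10                                          # candidate data sources
true_usefulness = rng.uniform(0.1, 0.9, size=K) # hidden per-source quality
a = np.ones(K)                                  # Beta posterior: successes + 1
b = np.ones(K)                                  # Beta posterior: failures + 1

picks = []
for _ in range(500):
    theta = rng.beta(a, b)                      # sample a usefulness per source
    k = int(np.argmax(theta))                   # exploit the most promising one
    reward = rng.random() < true_usefulness[k]  # did the drawn sample help?
    a[k] += reward
    b[k] += 1 - reward
    picks.append(k)

print("most-selected source:", np.bincount(picks).argmax())
```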
1705.09724
Iroro Orife
Shane Walker, Morten Pedersen, Iroro Orife and Jason Flaks
Semi-Supervised Model Training for Unbounded Conversational Speech Recognition
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For conversational large-vocabulary continuous speech recognition (LVCSR) tasks, up to about two thousand hours of audio is commonly used to train state-of-the-art models. Collection of labeled conversational audio, however, is prohibitively expensive, laborious and error-prone. Furthermore, academic corpora like Fisher English (2004) or Switchboard (1992) are inadequate to train models with sufficient accuracy in the unbounded space of conversational speech. These corpora are also timeworn due to dated acoustic telephony features and the rapid advancement of colloquial vocabulary and idiomatic speech over the last decades. Utilizing the colossal scale of our unlabeled telephony dataset, we propose a technique to construct a modern, high-quality conversational speech training corpus on the order of hundreds of millions of utterances (or tens of thousands of hours) for both acoustic and language model training. We describe the data collection, selection and training, evaluating the results of our updated speech recognition system on a test corpus of 7K manually transcribed utterances. We show relative word error rate (WER) reductions of {35%, 19%} on {agent, caller} utterances over our seed model and 5% absolute WER improvements over IBM Watson STT on this conversational speech task.
[ { "version": "v1", "created": "Fri, 26 May 2017 21:10:15 GMT" } ]
2017-05-30T00:00:00
[ [ "Walker", "Shane", "" ], [ "Pedersen", "Morten", "" ], [ "Orife", "Iroro", "" ], [ "Flaks", "Jason", "" ] ]
TITLE: Semi-Supervised Model Training for Unbounded Conversational Speech Recognition ABSTRACT: For conversational large-vocabulary continuous speech recognition (LVCSR) tasks, up to about two thousand hours of audio is commonly used to train state-of-the-art models. Collection of labeled conversational audio, however, is prohibitively expensive, laborious and error-prone. Furthermore, academic corpora like Fisher English (2004) or Switchboard (1992) are inadequate to train models with sufficient accuracy in the unbounded space of conversational speech. These corpora are also timeworn due to dated acoustic telephony features and the rapid advancement of colloquial vocabulary and idiomatic speech over the last decades. Utilizing the colossal scale of our unlabeled telephony dataset, we propose a technique to construct a modern, high-quality conversational speech training corpus on the order of hundreds of millions of utterances (or tens of thousands of hours) for both acoustic and language model training. We describe the data collection, selection and training, evaluating the results of our updated speech recognition system on a test corpus of 7K manually transcribed utterances. We show relative word error rate (WER) reductions of {35%, 19%} on {agent, caller} utterances over our seed model and 5% absolute WER improvements over IBM Watson STT on this conversational speech task.
new_dataset
0.955775
1705.09800
Guy Uziel
Guy Uziel and Ran El-Yaniv
Growth-Optimal Portfolio Selection under CVaR Constraints
null
null
null
null
q-fin.MF cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Online portfolio selection research has so far focused mainly on minimizing regret defined in terms of wealth growth. Practical financial decision making, however, is deeply concerned with both wealth and risk. We consider online learning of portfolios of stocks whose prices are governed by arbitrary (unknown) stationary and ergodic processes, where the goal is to maximize wealth while keeping the conditional value at risk (CVaR) below a desired threshold. We characterize the asymptotically optimal risk-adjusted performance and present an investment strategy whose portfolios are guaranteed to achieve the asymptotically optimal solution while fulfilling the desired risk constraint. We also numerically demonstrate and validate the viability of our method on standard datasets.
[ { "version": "v1", "created": "Sat, 27 May 2017 10:27:03 GMT" } ]
2017-05-30T00:00:00
[ [ "Uziel", "Guy", "" ], [ "El-Yaniv", "Ran", "" ] ]
TITLE: Growth-Optimal Portfolio Selection under CVaR Constraints ABSTRACT: Online portfolio selection research has so far focused mainly on minimizing regret defined in terms of wealth growth. Practical financial decision making, however, is deeply concerned with both wealth and risk. We consider online learning of portfolios of stocks whose prices are governed by arbitrary (unknown) stationary and ergodic processes, where the goal is to maximize wealth while keeping the conditional value at risk (CVaR) below a desired threshold. We characterize the asymptotically optimal risk-adjusted performance and present an investment strategy whose portfolios are guaranteed to achieve the asymptotically optimal solution while fulfilling the desired risk constraint. We also numerically demonstrate and validate the viability of our method on standard datasets.
no_new_dataset
0.943971
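For readers unfamiliar with the risk measure in the record above, the sketch below computes an empirical CVaR for a fixed portfolio over synthetic return data; it illustrates the constraint being monitored, not the authors' learning strategy, and the return model and weights are assumptions.

```python
import numpy as np

def empirical_cvar(returns, level=0.05):
    # CVaR at `level`: mean loss over the worst `level` fraction of outcomes.
    losses = -np.asarray(returns)
    var = np.quantile(losses, 1 - level)        # Value-at-Risk cutoff
    return losses[losses >= var].mean()

rng = np.random.default_rng(0)
R = rng.normal(0.0005, 0.01, size=(1000, 5))    # synthetic daily stock returns
w = np.full(5, 0.2)                             # candidate portfolio weights
port = R @ w                                    # portfolio return sequence
print("wealth growth:", np.prod(1 + port))
print("CVaR(5%):", empirical_cvar(port))        # compare against a threshold
```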
1705.09888
Peng Xu
Peng Xu, Qiyue Yin, Yongye Huang, Yi-Zhe Song, Zhanyu Ma, Liang Wang, Tao Xiang, W. Bastiaan Kleijn, Jun Guo
Cross-modal Subspace Learning for Fine-grained Sketch-based Image Retrieval
Accepted by Neurocomputing
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sketch-based image retrieval (SBIR) is challenging due to the inherent domain-gap between sketch and photo. Compared with pixel-perfect depictions of photos, sketches are iconic renderings of the real world with a high degree of abstraction. Therefore, matching sketches and photos directly using low-level visual cues is insufficient, since a common low-level subspace that traverses semantically across the two modalities is non-trivial to establish. Most existing SBIR studies do not directly tackle this cross-modal problem. This naturally motivates us to explore the effectiveness of cross-modal retrieval methods in SBIR, which have been applied successfully to image-text matching. In this paper, we introduce and compare a series of state-of-the-art cross-modal subspace learning methods and benchmark them on two recently released fine-grained SBIR datasets. Through thorough examination of the experimental results, we have demonstrated that subspace learning can effectively model the sketch-photo domain-gap. In addition, we draw a few key insights to drive future research.
[ { "version": "v1", "created": "Sun, 28 May 2017 03:45:26 GMT" } ]
2017-05-30T00:00:00
[ [ "Xu", "Peng", "" ], [ "Yin", "Qiyue", "" ], [ "Huang", "Yongye", "" ], [ "Song", "Yi-Zhe", "" ], [ "Ma", "Zhanyu", "" ], [ "Wang", "Liang", "" ], [ "Xiang", "Tao", "" ], [ "Kleijn", "W. Bastiaan", "" ], [ "Guo", "Jun", "" ] ]
TITLE: Cross-modal Subspace Learning for Fine-grained Sketch-based Image Retrieval ABSTRACT: Sketch-based image retrieval (SBIR) is challenging due to the inherent domain-gap between sketch and photo. Compared with pixel-perfect depictions of photos, sketches are iconic renderings of the real world with a high degree of abstraction. Therefore, matching sketches and photos directly using low-level visual cues is insufficient, since a common low-level subspace that traverses semantically across the two modalities is non-trivial to establish. Most existing SBIR studies do not directly tackle this cross-modal problem. This naturally motivates us to explore the effectiveness of cross-modal retrieval methods in SBIR, which have been applied successfully to image-text matching. In this paper, we introduce and compare a series of state-of-the-art cross-modal subspace learning methods and benchmark them on two recently released fine-grained SBIR datasets. Through thorough examination of the experimental results, we have demonstrated that subspace learning can effectively model the sketch-photo domain-gap. In addition, we draw a few key insights to drive future research.
no_new_dataset
0.944689
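Classical canonical correlation analysis (CCA) is the prototypical cross-modal subspace method of the kind benchmarked in the record above. The numpy sketch below is one illustrative instance, with synthetic sketch/photo features and a small ridge term as assumptions; it is not any specific method from the paper.

```python
import numpy as np

def cca(X, Y, k=2, reg=1e-3):
    # Classical CCA: k-dimensional maximally correlated projections of two views.
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Cxx = Xc.T @ Xc / len(X) + reg * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / len(Y) + reg * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / len(X)
    A = np.linalg.solve(Cxx, Cxy)               # Cxx^-1 Cxy
    B = np.linalg.solve(Cyy, Cxy.T)             # Cyy^-1 Cyx
    vals, Wx = np.linalg.eig(A @ B)
    order = np.argsort(-vals.real)[:k]
    Wx = Wx[:, order].real
    Wy = B @ Wx                                 # matching projection, up to scale
    return Wx, Wy

rng = np.random.default_rng(0)
sketch = rng.normal(size=(300, 32))             # sketch-view features (assumed)
photo = sketch @ rng.normal(size=(32, 48)) + 0.1 * rng.normal(size=(300, 48))
Wx, Wy = cca(sketch, photo)
# Retrieval would compare sketch @ Wx to photo @ Wy by cosine similarity.
```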
1705.09892
Chunhua Shen
Bohan Zhuang, Qi Wu, Chunhua Shen, Ian Reid, Anton van den Hengel
Care about you: towards large-scale human-centric visual relationship detection
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visual relationship detection aims to capture interactions between pairs of objects in images. Relationships between objects and humans represent a particularly important subset of this problem, with implications for challenges such as understanding human behaviour, and identifying affordances, amongst others. In addressing this problem we first construct a large-scale human-centric visual relationship detection dataset (HCVRD), which provides many more types of relationship annotation (nearly 10K categories) than previously released datasets. This large label space better reflects the reality of human-object interactions, but gives rise to a long-tail distribution problem, which in turn demands a zero-shot approach to labels appearing only in the test set. This is the first time this issue has been addressed. We propose a webly-supervised approach to these problems and demonstrate that the proposed model provides a strong baseline on our HCVRD dataset.
[ { "version": "v1", "created": "Sun, 28 May 2017 05:53:38 GMT" } ]
2017-05-30T00:00:00
[ [ "Zhuang", "Bohan", "" ], [ "Wu", "Qi", "" ], [ "Shen", "Chunhua", "" ], [ "Reid", "Ian", "" ], [ "Hengel", "Anton van den", "" ] ]
TITLE: Care about you: towards large-scale human-centric visual relationship detection ABSTRACT: Visual relationship detection aims to capture interactions between pairs of objects in images. Relationships between objects and humans represent a particularly important subset of this problem, with implications for challenges such as understanding human behaviour, and identifying affordances, amongst others. In addressing this problem we first construct a large-scale human-centric visual relationship detection dataset (HCVRD), which provides many more types of relationship annotation (nearly 10K categories) than previously released datasets. This large label space better reflects the reality of human-object interactions, but gives rise to a long-tail distribution problem, which in turn demands a zero-shot approach to labels appearing only in the test set. This is the first time this issue has been addressed. We propose a webly-supervised approach to these problems and demonstrate that the proposed model provides a strong baseline on our HCVRD dataset.
new_dataset
0.959687
1705.09906
Haichao Zhang
Haichao Zhang, Haonan Yu, and Wei Xu
Listen, Interact and Talk: Learning to Speak via Interaction
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the long-term goals of artificial intelligence is to build an agent that can communicate intelligently with humans in natural language. Most existing work on natural language learning relies heavily on training over a pre-collected dataset with annotated labels, leading to an agent that essentially captures the statistics of the fixed external training data. As the training data is essentially a static snapshot of the knowledge from the annotator, the agent trained this way is limited in the adaptiveness and generalization of its behavior. Moreover, this is very different from the language learning process of humans, where language is acquired during communication by taking speaking actions and learning from their consequences in an interactive manner. This paper presents an interactive setting for grounded natural language learning, where an agent learns natural language by interacting with a teacher and learning from feedback, thus learning and improving language skills while taking part in the conversation. To achieve this goal, we propose a model which incorporates both imitation and reinforcement by jointly leveraging sentence and reward feedback from the teacher. Experiments are conducted to validate the effectiveness of the proposed approach.
[ { "version": "v1", "created": "Sun, 28 May 2017 07:48:14 GMT" } ]
2017-05-30T00:00:00
[ [ "Zhang", "Haichao", "" ], [ "Yu", "Haonan", "" ], [ "Xu", "Wei", "" ] ]
TITLE: Listen, Interact and Talk: Learning to Speak via Interaction ABSTRACT: One of the long-term goals of artificial intelligence is to build an agent that can communicate intelligently with humans in natural language. Most existing work on natural language learning relies heavily on training over a pre-collected dataset with annotated labels, leading to an agent that essentially captures the statistics of the fixed external training data. As the training data is essentially a static snapshot of the knowledge from the annotator, the agent trained this way is limited in the adaptiveness and generalization of its behavior. Moreover, this is very different from the language learning process of humans, where language is acquired during communication by taking speaking actions and learning from their consequences in an interactive manner. This paper presents an interactive setting for grounded natural language learning, where an agent learns natural language by interacting with a teacher and learning from feedback, thus learning and improving language skills while taking part in the conversation. To achieve this goal, we propose a model which incorporates both imitation and reinforcement by jointly leveraging sentence and reward feedback from the teacher. Experiments are conducted to validate the effectiveness of the proposed approach.
no_new_dataset
0.946101
1705.09920
Ali Nassif
Mohammad Azzeh, Ali Bou Nassif
Analyzing the Relationship between Project Productivity and Environment Factors in the Use Case Points Method
Journal of Software: Evolution and Process, 2017
null
10.1002/smr.1882
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Project productivity is a key factor for producing effort estimates from Use Case Points (UCP), especially when the historical dataset is absent. The first versions of UCP effort estimation models used a fixed number or very limited numbers of productivity ratios for all new projects. These approaches have not been well examined over a large number of projects, so the validity of these studies was a matter for criticism. The newly available large software datasets allow us to perform further research on the usefulness of productivity for effort estimation of software development. Specifically, we studied the relationship between project productivity and UCP environmental factors, as they have a significant impact on the amount of productivity needed for a software project. Therefore, we designed four studies, using various classification and regression methods, to examine the usefulness of that relationship and its impact on UCP effort estimation. The results we obtained are encouraging and show potential improvement in effort estimation. Furthermore, the efficiency of that relationship is better over a dataset that comes from industry because of the quality of data collection. Our comment on the findings is that it is better to exclude environmental factors from calculating UCP and make them available only for computing productivity. The study also encourages project managers to understand how to better assess the environmental factors, as they do have a significant impact on productivity.
[ { "version": "v1", "created": "Sun, 28 May 2017 09:44:18 GMT" } ]
2017-05-30T00:00:00
[ [ "Azzeh", "Mohammad", "" ], [ "Nassif", "Ali Bou", "" ] ]
TITLE: Analyzing the Relationship between Project Productivity and Environment Factors in the Use Case Points Method ABSTRACT: Project productivity is a key factor for producing effort estimates from Use Case Points (UCP), especially when the historical dataset is absent. The first versions of UCP effort estimation models used a fixed number or very limited numbers of productivity ratios for all new projects. These approaches have not been well examined over a large number of projects, so the validity of these studies was a matter for criticism. The newly available large software datasets allow us to perform further research on the usefulness of productivity for effort estimation of software development. Specifically, we studied the relationship between project productivity and UCP environmental factors, as they have a significant impact on the amount of productivity needed for a software project. Therefore, we designed four studies, using various classification and regression methods, to examine the usefulness of that relationship and its impact on UCP effort estimation. The results we obtained are encouraging and show potential improvement in effort estimation. Furthermore, the efficiency of that relationship is better over a dataset that comes from industry because of the quality of data collection. Our comment on the findings is that it is better to exclude environmental factors from calculating UCP and make them available only for computing productivity. The study also encourages project managers to understand how to better assess the environmental factors, as they do have a significant impact on productivity.
no_new_dataset
0.929632
1705.09975
Nazli Farajidavar
Nazli Farajidavar, Sefki Kolozali and Payam Barnaghi
A Deep Multi-View Learning Framework for City Event Extraction from Twitter Data Streams
null
null
null
null
cs.SI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cities have been a thriving place for citizens over the centuries due to their complex infrastructure. The emergence of Cyber-Physical-Social Systems (CPSS) and context-aware technologies boosts a growing interest in analysing, extracting and eventually understanding city events, which can subsequently be utilised to leverage citizens' observations of their cities. In this paper, we investigate the feasibility of using Twitter textual streams for extracting city events. We propose a hierarchical multi-view deep learning approach to contextualise citizen observations of various city systems and services. Our goal has been to build a flexible architecture that can learn representations useful for tasks, thus avoiding excessive task-specific feature engineering. We apply our approach on a real-world dataset consisting of event reports and tweets collected over four months from the San Francisco Bay Area, and additional datasets collected from London. The results of our evaluations show that our proposed solution outperforms the existing models and can be used for extracting city-related events with an averaged accuracy of 81% over all classes. To further evaluate the impact of our Twitter event extraction model, we have used two sources of authorised reports, collecting road traffic disruptions data from the Transport for London API and parsing the Time Out London website for sociocultural events. The analysis showed that 49.5% of the Twitter traffic comments are reported approximately five hours prior to the authorities' official records. Moreover, we discovered that amongst the scheduled sociocultural event topics, tweets reporting transportation, cultural and social events are 31.75% more likely to influence the distribution of the Twitter comments than sport, weather and crime topics.
[ { "version": "v1", "created": "Sun, 28 May 2017 18:22:15 GMT" } ]
2017-05-30T00:00:00
[ [ "Farajidavar", "Nazli", "" ], [ "Kolozali", "Sefki", "" ], [ "Barnaghi", "Payam", "" ] ]
TITLE: A Deep Multi-View Learning Framework for City Event Extraction from Twitter Data Streams ABSTRACT: Cities have been a thriving place for citizens over the centuries due to their complex infrastructure. The emergence of Cyber-Physical-Social Systems (CPSS) and context-aware technologies boosts a growing interest in analysing, extracting and eventually understanding city events, which can subsequently be utilised to leverage citizens' observations of their cities. In this paper, we investigate the feasibility of using Twitter textual streams for extracting city events. We propose a hierarchical multi-view deep learning approach to contextualise citizen observations of various city systems and services. Our goal has been to build a flexible architecture that can learn representations useful for tasks, thus avoiding excessive task-specific feature engineering. We apply our approach on a real-world dataset consisting of event reports and tweets collected over four months from the San Francisco Bay Area, and additional datasets collected from London. The results of our evaluations show that our proposed solution outperforms the existing models and can be used for extracting city-related events with an averaged accuracy of 81% over all classes. To further evaluate the impact of our Twitter event extraction model, we have used two sources of authorised reports, collecting road traffic disruptions data from the Transport for London API and parsing the Time Out London website for sociocultural events. The analysis showed that 49.5% of the Twitter traffic comments are reported approximately five hours prior to the authorities' official records. Moreover, we discovered that amongst the scheduled sociocultural event topics, tweets reporting transportation, cultural and social events are 31.75% more likely to influence the distribution of the Twitter comments than sport, weather and crime topics.
no_new_dataset
0.875521
1705.10034
Xiaopeng Zhang
Xiaopeng Zhang, Hongkai Xiong, Weiyao Lin, Qi Tian
Ensemble of Part Detectors for Simultaneous Classification and Localization
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Part-based representation has been proven to be effective for a variety of visual applications. However, automatic discovery of discriminative parts without object/part-level annotations is challenging. This paper proposes a discriminative mid-level representation paradigm based on the responses of a collection of part detectors, which only requires image-level labels. Towards this goal, we first develop a detector-based spectral clustering method to mine the representative and discriminative mid-level patterns for detector initialization. The advantage of the proposed pattern mining technique is that the distance metric based on detectors only focuses on discriminative details, and a set of such grouped detectors offers an effective way for consistent pattern mining. Relying on the discovered patterns, we further formulate the detector learning process as a confidence-loss sparse Multiple Instance Learning (cls-MIL) task, which considers the diversity of the positive samples while avoiding drifting away from the well-localized ones by assigning a confidence value to each positive sample. The responses of the learned detectors can form an effective mid-level image representation for both image classification and object localization. Experiments conducted on benchmark datasets demonstrate the superiority of our method over existing approaches.
[ { "version": "v1", "created": "Mon, 29 May 2017 04:04:08 GMT" } ]
2017-05-30T00:00:00
[ [ "Zhang", "Xiaopeng", "" ], [ "Xiong", "Hongkai", "" ], [ "Lin", "Weiyao", "" ], [ "Tian", "Qi", "" ] ]
TITLE: Ensemble of Part Detectors for Simultaneous Classification and Localization ABSTRACT: Part-based representation has been proven to be effective for a variety of visual applications. However, automatic discovery of discriminative parts without object/part-level annotations is challenging. This paper proposes a discriminative mid-level representation paradigm based on the responses of a collection of part detectors, which only requires image-level labels. Towards this goal, we first develop a detector-based spectral clustering method to mine the representative and discriminative mid-level patterns for detector initialization. The advantage of the proposed pattern mining technique is that the distance metric based on detectors only focuses on discriminative details, and a set of such grouped detectors offers an effective way for consistent pattern mining. Relying on the discovered patterns, we further formulate the detector learning process as a confidence-loss sparse Multiple Instance Learning (cls-MIL) task, which considers the diversity of the positive samples while avoiding drifting away from the well-localized ones by assigning a confidence value to each positive sample. The responses of the learned detectors can form an effective mid-level image representation for both image classification and object localization. Experiments conducted on benchmark datasets demonstrate the superiority of our method over existing approaches.
no_new_dataset
0.951233
1705.10130
Murtadha AL-Sharuee
Murtadha Talib AL-Sharuee, Fei Liu, Mahardhika Pratama
An Automatic Contextual Analysis and Clustering Classifiers Ensemble approach to Sentiment Analysis
This article is submitted to a journal
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Product reviews are one of the major resources to determine the public sentiment. The existing literature on review sentiment analysis mainly utilizes the supervised paradigm, which needs labeled data to be trained on and suffers from domain-dependency. This article addresses these issues by describing a completely automatic approach for sentiment analysis based on unsupervised ensemble learning. The method consists of two phases. The first phase is contextual analysis, which has five processes, namely (1) data preparation; (2) spelling correction; (3) intensifier handling; (4) negation handling and (5) contrast handling. The second phase comprises the unsupervised learning approach, which is an ensemble of clustering classifiers using a majority voting mechanism with different weight schemes. The base classifier of the ensemble method is a modified k-means algorithm. The base classifier is modified by extracting initial centroids from the feature set via SentiWordNet (SWN). We also introduce new sentiment analysis problems of Australian airlines and home builders which offer potential benchmark problems in the sentiment analysis field. Our experiments on datasets from different domains show that contextual analysis and the ensemble phases improve the clustering performance in terms of accuracy, stability and generalization ability.
[ { "version": "v1", "created": "Mon, 29 May 2017 11:37:58 GMT" } ]
2017-05-30T00:00:00
[ [ "AL-Sharuee", "Murtadha Talib", "" ], [ "Liu", "Fei", "" ], [ "Pratama", "Mahardhika", "" ] ]
TITLE: An Automatic Contextual Analysis and Clustering Classifiers Ensemble approach to Sentiment Analysis ABSTRACT: Product reviews are one of the major resources to determine the public sentiment. The existing literature on review sentiment analysis mainly utilizes the supervised paradigm, which needs labeled data to be trained on and suffers from domain-dependency. This article addresses these issues by describing a completely automatic approach for sentiment analysis based on unsupervised ensemble learning. The method consists of two phases. The first phase is contextual analysis, which has five processes, namely (1) data preparation; (2) spelling correction; (3) intensifier handling; (4) negation handling and (5) contrast handling. The second phase comprises the unsupervised learning approach, which is an ensemble of clustering classifiers using a majority voting mechanism with different weight schemes. The base classifier of the ensemble method is a modified k-means algorithm. The base classifier is modified by extracting initial centroids from the feature set via SentiWordNet (SWN). We also introduce new sentiment analysis problems of Australian airlines and home builders which offer potential benchmark problems in the sentiment analysis field. Our experiments on datasets from different domains show that contextual analysis and the ensemble phases improve the clustering performance in terms of accuracy, stability and generalization ability.
no_new_dataset
0.945751
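The second phase described in the record above (a majority-vote ensemble of lexicon-seeded k-means clusterings) can be sketched compactly. In this toy numpy version the lexicon-based centroid initialization is stood in for by fixed seed points, and the per-run feature weighting is random; both are assumptions for illustration.

```python
import numpy as np

def kmeans(X, init, iters=20):
    # Plain Lloyd iterations from fixed seed centroids (no random restarts,
    # so cluster identities stay anchored to the seeds across ensemble runs).
    C = init.astype(float).copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(len(C)):
            if (labels == j).any():
                C[j] = X[labels == j].mean(0)
    return labels

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (100, 4)),     # "negative" documents
               rng.normal(1, 1, (100, 4))])     # "positive" documents

seed = np.array([X.min(0), X.max(0)])           # stand-in for lexicon centroids

# Ensemble: perturb feature weights per run, then majority-vote per document.
votes = np.stack([kmeans(X * rng.uniform(0.8, 1.2, 4), seed) for _ in range(7)])
consensus = (votes.mean(0) > 0.5).astype(int)
```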
1705.10194
Feng Nan
Feng Nan, Venkatesh Saligrama
Adaptive Classification for Prediction Under a Budget
arXiv admin note: substantial text overlap with arXiv:1704.07505
null
null
null
stat.ML cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a novel adaptive approximation approach for test-time resource-constrained prediction. Given an input instance at test-time, a gating function identifies a prediction model for the input among a collection of models. Our objective is to minimize overall average cost without sacrificing accuracy. We learn gating and prediction models on fully labeled training data by means of a bottom-up strategy. Our novel bottom-up method first trains a high-accuracy complex model. A low-complexity gating model and prediction model are then learned to adaptively approximate the high-accuracy model in regions where low-cost models are capable of making highly accurate predictions. We pose an empirical loss minimization problem with cost constraints to jointly train gating and prediction models. On a number of benchmark datasets our method outperforms the state-of-the-art, achieving higher accuracy for the same cost.
[ { "version": "v1", "created": "Fri, 26 May 2017 12:28:42 GMT" } ]
2017-05-30T00:00:00
[ [ "Nan", "Feng", "" ], [ "Saligrama", "Venkatesh", "" ] ]
TITLE: Adaptive Classification for Prediction Under a Budget ABSTRACT: We propose a novel adaptive approximation approach for test-time resource-constrained prediction. Given an input instance at test-time, a gating function identifies a prediction model for the input among a collection of models. Our objective is to minimize overall average cost without sacrificing accuracy. We learn gating and prediction models on fully labeled training data by means of a bottom-up strategy. Our novel bottom-up method first trains a high-accuracy complex model. A low-complexity gating model and prediction model are then learned to adaptively approximate the high-accuracy model in regions where low-cost models are capable of making highly accurate predictions. We pose an empirical loss minimization problem with cost constraints to jointly train gating and prediction models. On a number of benchmark datasets our method outperforms the state-of-the-art, achieving higher accuracy for the same cost.
no_new_dataset
0.941815
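A minimal illustration of the test-time gating idea follows: a cheap predictor handles inputs far from its decision boundary and a costly one handles the rest, trading accuracy against average cost. The margin-based gate, the cost units, and the synthetic task are all assumptions, not the paper's learned gating function.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(2000, 2))
y = (x[:, 0] + 0.3 * x[:, 1] ** 3 > 0).astype(int)       # synthetic labels

cheap = lambda z: (z[:, 0] > 0).astype(int)              # low-cost predictor
costly = lambda z: (z[:, 0] + 0.3 * z[:, 1] ** 3 > 0).astype(int)  # accurate one

# Gate: route to the cheap model only where it is likely correct, i.e. far
# from its own decision boundary (a margin heuristic, assumed here).
gate = np.abs(x[:, 0]) > 0.5
pred = np.where(gate, cheap(x), costly(x))

acc = (pred == y).mean()
cost = gate.mean() * 1.0 + (1 - gate.mean()) * 10.0      # assumed unit costs
print(f"accuracy={acc:.3f}, avg cost={cost:.2f} (full model would cost 10)")
```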
1705.10209
Micha{\l} Zapotoczny
Micha{\l} Zapotoczny, Pawe{\l} Rychlikowski, and Jan Chorowski
On Multilingual Training of Neural Dependency Parsers
preprint accepted into the TSD2017
null
null
null
cs.CL cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show that a recently proposed neural dependency parser can be improved by joint training on multiple languages from the same family. The parser is implemented as a deep neural network whose only input is orthographic representations of words. In order to successfully parse, the network has to discover how linguistically relevant concepts can be inferred from word spellings. We analyze the representations of characters and words that are learned by the network to establish which properties of languages were accounted for. In particular we show that the parser has approximately learned to associate Latin characters with their Cyrillic counterparts and that it can group Polish and Russian words that have a similar grammatical function. Finally, we evaluate the parser on selected languages from the Universal Dependencies dataset and show that it is competitive with other recently proposed state-of-the art methods, while having a simple structure.
[ { "version": "v1", "created": "Mon, 29 May 2017 14:24:08 GMT" } ]
2017-05-30T00:00:00
[ [ "Zapotoczny", "Michał", "" ], [ "Rychlikowski", "Paweł", "" ], [ "Chorowski", "Jan", "" ] ]
TITLE: On Multilingual Training of Neural Dependency Parsers ABSTRACT: We show that a recently proposed neural dependency parser can be improved by joint training on multiple languages from the same family. The parser is implemented as a deep neural network whose only input is orthographic representations of words. In order to successfully parse, the network has to discover how linguistically relevant concepts can be inferred from word spellings. We analyze the representations of characters and words that are learned by the network to establish which properties of languages were accounted for. In particular we show that the parser has approximately learned to associate Latin characters with their Cyrillic counterparts and that it can group Polish and Russian words that have a similar grammatical function. Finally, we evaluate the parser on selected languages from the Universal Dependencies dataset and show that it is competitive with other recently proposed state-of-the art methods, while having a simple structure.
no_new_dataset
0.940463
1511.07111
Eric Tzeng
Eric Tzeng, Coline Devin, Judy Hoffman, Chelsea Finn, Pieter Abbeel, Sergey Levine, Kate Saenko, Trevor Darrell
Adapting Deep Visuomotor Representations with Weak Pairwise Constraints
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Real-world robotics problems often occur in domains that differ significantly from the robot's prior training environment. For many robotic control tasks, real world experience is expensive to obtain, but data is easy to collect in either an instrumented environment or in simulation. We propose a novel domain adaptation approach for robot perception that adapts visual representations learned on a large easy-to-obtain source dataset (e.g. synthetic images) to a target real-world domain, without requiring expensive manual data annotation of real world data before policy search. Supervised domain adaptation methods minimize cross-domain differences using pairs of aligned images that contain the same object or scene in both the source and target domains, thus learning a domain-invariant representation. However, they require manual alignment of such image pairs. Fully unsupervised adaptation methods rely on minimizing the discrepancy between the feature distributions across domains. We propose a novel, more powerful combination of both distribution and pairwise image alignment, and remove the requirement for expensive annotation by using weakly aligned pairs of images in the source and target domains. Focusing on adapting from simulation to real world data using a PR2 robot, we evaluate our approach on a manipulation task and show that by using weakly paired images, our method compensates for domain shift more effectively than previous techniques, enabling better robot performance in the real world.
[ { "version": "v1", "created": "Mon, 23 Nov 2015 05:07:15 GMT" }, { "version": "v2", "created": "Mon, 11 Jan 2016 19:55:50 GMT" }, { "version": "v3", "created": "Tue, 12 Apr 2016 22:05:27 GMT" }, { "version": "v4", "created": "Mon, 21 Nov 2016 21:37:58 GMT" }, { "version": "v5", "created": "Thu, 25 May 2017 21:51:55 GMT" } ]
2017-05-29T00:00:00
[ [ "Tzeng", "Eric", "" ], [ "Devin", "Coline", "" ], [ "Hoffman", "Judy", "" ], [ "Finn", "Chelsea", "" ], [ "Abbeel", "Pieter", "" ], [ "Levine", "Sergey", "" ], [ "Saenko", "Kate", "" ], [ "Darrell", "Trevor", "" ] ]
TITLE: Adapting Deep Visuomotor Representations with Weak Pairwise Constraints ABSTRACT: Real-world robotics problems often occur in domains that differ significantly from the robot's prior training environment. For many robotic control tasks, real world experience is expensive to obtain, but data is easy to collect in either an instrumented environment or in simulation. We propose a novel domain adaptation approach for robot perception that adapts visual representations learned on a large easy-to-obtain source dataset (e.g. synthetic images) to a target real-world domain, without requiring expensive manual data annotation of real world data before policy search. Supervised domain adaptation methods minimize cross-domain differences using pairs of aligned images that contain the same object or scene in both the source and target domains, thus learning a domain-invariant representation. However, they require manual alignment of such image pairs. Fully unsupervised adaptation methods rely on minimizing the discrepancy between the feature distributions across domains. We propose a novel, more powerful combination of both distribution and pairwise image alignment, and remove the requirement for expensive annotation by using weakly aligned pairs of images in the source and target domains. Focusing on adapting from simulation to real world data using a PR2 robot, we evaluate our approach on a manipulation task and show that by using weakly paired images, our method compensates for domain shift more effectively than previous techniques, enabling better robot performance in the real world.
no_new_dataset
0.95452
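The combination of distribution alignment and pairwise alignment described in the record above can be illustrated with a toy loss. Below, an RBF-kernel MMD term stands in for the distribution-discrepancy component (the paper's exact objective may differ) and a squared distance over weakly aligned feature pairs stands in for the pairwise term; the features and the mixing weight are synthetic assumptions.

```python
import numpy as np

def mmd2(Xs, Xt, sigma=1.0):
    # Squared maximum mean discrepancy with an RBF kernel: one standard way
    # to measure the mismatch between source and target feature distributions.
    def k(A, B):
        d2 = ((A[:, None] - B[None]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(Xs, Xs).mean() + k(Xt, Xt).mean() - 2 * k(Xs, Xt).mean()

def weak_pair_loss(Fs, Ft):
    # Pairwise term over weakly aligned source/target pairs: pull the two
    # features of each pair together (squared distance, an assumption).
    return ((Fs - Ft) ** 2).sum(1).mean()

rng = np.random.default_rng(0)
sim_feats = rng.normal(0.0, 1.0, size=(64, 16))   # simulated-domain features
real_feats = rng.normal(0.3, 1.1, size=(64, 16))  # real-domain features
loss = mmd2(sim_feats, real_feats) + 0.5 * weak_pair_loss(sim_feats, real_feats)
print("combined adaptation loss:", loss)
```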
1610.02891
Kaixiang Mo
Kaixiang Mo, Shuangyin Li, Yu Zhang, Jiajun Li, Qiang Yang
Personalizing a Dialogue System with Transfer Reinforcement Learning
null
null
null
null
cs.AI cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is difficult to train a personalized task-oriented dialogue system because the data collected from each individual is often insufficient. Personalized dialogue systems trained on a small dataset can overfit, making it difficult to adapt to different user needs. One way to solve this problem is to consider a collection of multiple users' data as a source domain and an individual user's data as a target domain, and to perform transfer learning from the source to the target domain. By following this idea, we propose "PETAL" (PErsonalized Task-oriented diALogue), a transfer-learning framework based on POMDP to learn a personalized dialogue system. The system first learns common dialogue knowledge from the source domain and then adapts this knowledge to the target user. This framework can avoid the negative transfer problem by considering differences between source and target users. The policy in the personalized POMDP can learn to choose different actions appropriately for different users. Experimental results on real-world coffee-shopping data and simulation data show that our personalized dialogue system can choose different optimal actions for different users, and thus effectively improve the dialogue quality under the personalized setting.
[ { "version": "v1", "created": "Mon, 10 Oct 2016 12:51:05 GMT" }, { "version": "v2", "created": "Mon, 17 Oct 2016 14:08:42 GMT" }, { "version": "v3", "created": "Fri, 26 May 2017 14:05:07 GMT" } ]
2017-05-29T00:00:00
[ [ "Mo", "Kaixiang", "" ], [ "Li", "Shuangyin", "" ], [ "Zhang", "Yu", "" ], [ "Li", "Jiajun", "" ], [ "Yang", "Qiang", "" ] ]
TITLE: Personalizing a Dialogue System with Transfer Reinforcement Learning ABSTRACT: It is difficult to train a personalized task-oriented dialogue system because the data collected from each individual is often insufficient. Personalized dialogue systems trained on a small dataset can overfit, making it difficult to adapt to different user needs. One way to solve this problem is to consider a collection of multiple users' data as a source domain and an individual user's data as a target domain, and to perform transfer learning from the source to the target domain. By following this idea, we propose "PETAL" (PErsonalized Task-oriented diALogue), a transfer-learning framework based on POMDP to learn a personalized dialogue system. The system first learns common dialogue knowledge from the source domain and then adapts this knowledge to the target user. This framework can avoid the negative transfer problem by considering differences between source and target users. The policy in the personalized POMDP can learn to choose different actions appropriately for different users. Experimental results on real-world coffee-shopping data and simulation data show that our personalized dialogue system can choose different optimal actions for different users, and thus effectively improve the dialogue quality under the personalized setting.
no_new_dataset
0.946892
1705.07844
Paul Guerrero
Paul Guerrero, Holger Winnem\"oller, Wilmot Li, Niloy J. Mitra
DepthCut: Improved Depth Edge Estimation Using Multiple Unreliable Channels
12 pages
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the context of scene understanding, a variety of methods exist to estimate different information channels from mono or stereo images, including disparity, depth, and normals. Although several advances have been reported in recent years for these tasks, the estimated information is often imprecise, particularly near depth discontinuities or creases. Studies have, however, shown that precisely such depth edges carry critical cues for the perception of shape, and play important roles in tasks like depth-based segmentation or foreground selection. Unfortunately, the currently extracted channels often carry conflicting signals, making it difficult for subsequent applications to effectively use them. In this paper, we focus on the problem of obtaining high-precision depth edges (i.e., depth contours and creases) by jointly analyzing such unreliable information channels. We propose DepthCut, a data-driven fusion of the channels using a convolutional neural network trained on a large dataset with known depth. The resulting depth edges can be used for segmentation, decomposing a scene into depth layers with relatively flat depth, or improving the accuracy of the depth estimate near depth edges by constraining its gradients to agree with these edges. Quantitatively, we compare against 15 variants of baselines and demonstrate that our depth edges result in an improved segmentation performance and an improved depth estimate near depth edges compared to data-agnostic channel fusion. Qualitatively, we demonstrate that the depth edges result in superior segmentation and depth orderings.
[ { "version": "v1", "created": "Mon, 22 May 2017 16:48:15 GMT" }, { "version": "v2", "created": "Fri, 26 May 2017 14:21:54 GMT" } ]
2017-05-29T00:00:00
[ [ "Guerrero", "Paul", "" ], [ "Winnemöller", "Holger", "" ], [ "Li", "Wilmot", "" ], [ "Mitra", "Niloy J.", "" ] ]
TITLE: DepthCut: Improved Depth Edge Estimation Using Multiple Unreliable Channels ABSTRACT: In the context of scene understanding, a variety of methods exist to estimate different information channels from mono or stereo images, including disparity, depth, and normals. Although several advances have been reported in recent years for these tasks, the estimated information is often imprecise, particularly near depth discontinuities or creases. Studies have, however, shown that precisely such depth edges carry critical cues for the perception of shape, and play important roles in tasks like depth-based segmentation or foreground selection. Unfortunately, the currently extracted channels often carry conflicting signals, making it difficult for subsequent applications to effectively use them. In this paper, we focus on the problem of obtaining high-precision depth edges (i.e., depth contours and creases) by jointly analyzing such unreliable information channels. We propose DepthCut, a data-driven fusion of the channels using a convolutional neural network trained on a large dataset with known depth. The resulting depth edges can be used for segmentation, decomposing a scene into depth layers with relatively flat depth, or improving the accuracy of the depth estimate near depth edges by constraining its gradients to agree with these edges. Quantitatively, we compare against 15 variants of baselines and demonstrate that our depth edges result in an improved segmentation performance and an improved depth estimate near depth edges compared to data-agnostic channel fusion. Qualitatively, we demonstrate that the depth edges result in superior segmentation and depth orderings.
no_new_dataset
0.954563
1705.08039
Maximilian Nickel
Maximilian Nickel, Douwe Kiela
Poincar\'e Embeddings for Learning Hierarchical Representations
null
null
null
null
cs.AI cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Representation learning has become an invaluable approach for learning from symbolic data such as text and graphs. However, while complex symbolic datasets often exhibit a latent hierarchical structure, state-of-the-art methods typically learn embeddings in Euclidean vector spaces, which do not account for this property. For this purpose, we introduce a new approach for learning hierarchical representations of symbolic data by embedding them into hyperbolic space -- or more precisely into an n-dimensional Poincar\'e ball. Due to the underlying hyperbolic geometry, this allows us to learn parsimonious representations of symbolic data by simultaneously capturing hierarchy and similarity. We introduce an efficient algorithm to learn the embeddings based on Riemannian optimization and show experimentally that Poincar\'e embeddings outperform Euclidean embeddings significantly on data with latent hierarchies, both in terms of representation capacity and in terms of generalization ability.
[ { "version": "v1", "created": "Mon, 22 May 2017 23:14:36 GMT" }, { "version": "v2", "created": "Fri, 26 May 2017 17:40:55 GMT" } ]
2017-05-29T00:00:00
[ [ "Nickel", "Maximilian", "" ], [ "Kiela", "Douwe", "" ] ]
TITLE: Poincar\'e Embeddings for Learning Hierarchical Representations ABSTRACT: Representation learning has become an invaluable approach for learning from symbolic data such as text and graphs. However, while complex symbolic datasets often exhibit a latent hierarchical structure, state-of-the-art methods typically learn embeddings in Euclidean vector spaces, which do not account for this property. For this purpose, we introduce a new approach for learning hierarchical representations of symbolic data by embedding them into hyperbolic space -- or more precisely into an n-dimensional Poincar\'e ball. Due to the underlying hyperbolic geometry, this allows us to learn parsimonious representations of symbolic data by simultaneously capturing hierarchy and similarity. We introduce an efficient algorithm to learn the embeddings based on Riemannian optimization and show experimentally that Poincar\'e embeddings outperform Euclidean embeddings significantly on data with latent hierarchies, both in terms of representation capacity and in terms of generalization ability.
no_new_dataset
0.948394
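The two ingredients of the approach in the record above are easy to state concretely: the Poincaré distance and the Riemannian rescaling of gradients used during optimization. The numpy sketch below follows the standard formulas, d(u,v) = arcosh(1 + 2|u-v|^2 / ((1-|u|^2)(1-|v|^2))) and gradient rescaling by (1-|theta|^2)^2/4 with retraction into the open ball; the learning rate and epsilon are illustrative values.

```python
import numpy as np

def poincare_distance(u, v):
    # d(u, v) = arcosh(1 + 2*|u-v|^2 / ((1 - |u|^2) * (1 - |v|^2)))
    num = 2 * ((u - v) ** 2).sum()
    den = (1 - (u * u).sum()) * (1 - (v * v).sum())
    return np.arccosh(1 + num / den)

def riemannian_step(theta, eucl_grad, lr=0.01, eps=1e-5):
    # Rescale the Euclidean gradient by the inverse Poincare metric,
    # (1 - |theta|^2)^2 / 4, then retract back inside the open unit ball.
    scale = (1 - (theta * theta).sum()) ** 2 / 4
    theta = theta - lr * scale * eucl_grad
    norm = np.linalg.norm(theta)
    if norm >= 1:
        theta = theta / norm - eps
    return theta

u, v = np.array([0.1, 0.2]), np.array([-0.3, 0.5])
print(poincare_distance(u, v))
```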
1705.09366
Erik Saule
Erik Saule, Dinesh Panchananam, Alexander Hohl, Wenwu Tang, Eric Delmelle
Parallel Space-Time Kernel Density Estimation
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The exponential growth of available data has increased the need for interactive exploratory analysis. Datasets can no longer be understood through manual crawling and simple statistics. In Geographical Information Systems (GIS), the dataset is often composed of events localized in space and time, and visualizing such a dataset involves building a map of where the events occurred. We focus in this paper on events that are localized along three dimensions (latitude, longitude, and time), and on computing the first step of the visualization pipeline, space-time kernel density estimation (STKDE), which is the most computationally expensive. Starting from a gold-standard implementation, we show how algorithm design and engineering, parallel decomposition, and scheduling can be applied to bring near real-time computing to space-time kernel density estimation. We validate our techniques on real-world datasets extracted from infectious disease, social media, and ornithology.
[ { "version": "v1", "created": "Thu, 25 May 2017 21:16:37 GMT" } ]
2017-05-29T00:00:00
[ [ "Saule", "Erik", "" ], [ "Panchananam", "Dinesh", "" ], [ "Hohl", "Alexander", "" ], [ "Tang", "Wenwu", "" ], [ "Delmelle", "Eric", "" ] ]
TITLE: Parallel Space-Time Kernel Density Estimation ABSTRACT: The exponential growth of available data has increased the need for interactive exploratory analysis. Datasets can no longer be understood through manual crawling and simple statistics. In Geographical Information Systems (GIS), the dataset is often composed of events localized in space and time, and visualizing such a dataset involves building a map of where the events occurred. We focus in this paper on events that are localized along three dimensions (latitude, longitude, and time), and on computing the first step of the visualization pipeline, space-time kernel density estimation (STKDE), which is the most computationally expensive. Starting from a gold-standard implementation, we show how algorithm design and engineering, parallel decomposition, and scheduling can be applied to bring near real-time computing to space-time kernel density estimation. We validate our techniques on real-world datasets extracted from infectious disease, social media, and ornithology.
no_new_dataset
0.947527
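The pipeline stage named in this record, space-time kernel density estimation, evaluates at each space-time query point a product of a spatial and a temporal kernel summed over all events. The numpy sketch below uses Gaussian kernels and synthetic (x, y, t) events as assumptions; a naive evaluation like this costs O(queries x events), which is exactly the work the paper's parallel decomposition targets.

```python
import numpy as np

def stkde(q_xy, q_t, events, hs=1.0, ht=1.0):
    # f(x, y, t) = (1/n) * sum_i Ks(|(x,y)-(xi,yi)| / hs) * Kt((t-ti) / ht),
    # with Gaussian spatial and temporal kernels (a common, assumed choice).
    d2 = ((q_xy[:, None, :] - events[None, :, :2]) ** 2).sum(-1) / hs ** 2
    dt2 = (q_t[:, None] - events[None, :, 2]) ** 2 / ht ** 2
    ks = np.exp(-0.5 * d2) / (2 * np.pi * hs ** 2)
    kt = np.exp(-0.5 * dt2) / (np.sqrt(2 * np.pi) * ht)
    return (ks * kt).mean(axis=1)

rng = np.random.default_rng(0)
events = rng.normal(size=(500, 3))              # (x, y, t) event coordinates
q_xy = rng.uniform(-2, 2, size=(100, 2))        # spatial query points
q_t = rng.uniform(-2, 2, size=100)              # matching temporal queries
density = stkde(q_xy, q_t, events)              # O(queries x events) work
```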
1705.09425
Yao Qin
Yao Qin, Mengyang Feng, Huchuan Lu, Garrison W. Cottrell
Hierarchical Cellular Automata for Visual Saliency
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Saliency detection, finding the most important parts of an image, has become increasingly popular in computer vision. In this paper, we introduce Hierarchical Cellular Automata (HCA) -- a temporally evolving model to intelligently detect salient objects. HCA consists of two main components: Single-layer Cellular Automata (SCA) and Cuboid Cellular Automata (CCA). As an unsupervised propagation mechanism, Single-layer Cellular Automata can exploit the intrinsic relevance of similar regions through interactions with neighbors. Low-level image features as well as high-level semantic information extracted from deep neural networks are incorporated into the SCA to measure the correlation between different image patches. With these hierarchical deep features, an impact factor matrix and a coherence matrix are constructed to balance the influences on each cell's next state. The saliency values of all cells are iteratively updated according to a well-defined update rule. Furthermore, we propose CCA to integrate multiple saliency maps generated by SCA at different scales in a Bayesian framework. Therefore, single-layer propagation and multi-layer integration are jointly modeled in our unified HCA. Surprisingly, we find that the SCA can improve all existing methods that we applied it to, resulting in a similar precision level regardless of the original results. The CCA can act as an efficient pixel-wise aggregation algorithm that can integrate state-of-the-art methods, resulting in even better results. Extensive experiments on four challenging datasets demonstrate that the proposed algorithm outperforms state-of-the-art conventional methods and is competitive with deep learning based approaches.
[ { "version": "v1", "created": "Fri, 26 May 2017 03:43:16 GMT" } ]
2017-05-29T00:00:00
[ [ "Qin", "Yao", "" ], [ "Feng", "Mengyang", "" ], [ "Lu", "Huchuan", "" ], [ "Cottrell", "Garrison W.", "" ] ]
TITLE: Hierarchical Cellular Automata for Visual Saliency ABSTRACT: Saliency detection, finding the most important parts of an image, has become increasingly popular in computer vision. In this paper, we introduce Hierarchical Cellular Automata (HCA) -- a temporally evolving model to intelligently detect salient objects. HCA consists of two main components: Single-layer Cellular Automata (SCA) and Cuboid Cellular Automata (CCA). As an unsupervised propagation mechanism, Single-layer Cellular Automata can exploit the intrinsic relevance of similar regions through interactions with neighbors. Low-level image features as well as high-level semantic information extracted from deep neural networks are incorporated into the SCA to measure the correlation between different image patches. With these hierarchical deep features, an impact factor matrix and a coherence matrix are constructed to balance the influences on each cell's next state. The saliency values of all cells are iteratively updated according to a well-defined update rule. Furthermore, we propose CCA to integrate multiple saliency maps generated by SCA at different scales in a Bayesian framework. Therefore, single-layer propagation and multi-layer integration are jointly modeled in our unified HCA. Surprisingly, we find that the SCA can improve all existing methods that we applied it to, resulting in a similar precision level regardless of the original results. The CCA can act as an efficient pixel-wise aggregation algorithm that can integrate state-of-the-art methods, resulting in even better results. Extensive experiments on four challenging datasets demonstrate that the proposed algorithm outperforms state-of-the-art conventional methods and is competitive with deep learning based approaches.
no_new_dataset
0.94801
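A single-layer cellular automaton of the kind this abstract describes can be sketched as a synchronous update in which each cell mixes its own saliency with a similarity-weighted average of its neighbors', balanced by a per-cell coherence term. The sketch below is one plausible reading under those assumptions; the matrices, normalization, and update rule are illustrative, not the authors' exact definitions.

import numpy as np

def sca_propagate(features, neighbors, saliency, steps=20, sigma2=0.1, b=0.0, a=1.0):
    """Illustrative single-layer cellular automaton update: each cell's next
    saliency mixes its own state with a similarity-weighted average of its
    neighbors' states. features: (n, d) per-cell descriptors; neighbors:
    boolean (n, n) adjacency; saliency: initial (n,) map in [0, 1]."""
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    f = np.where(neighbors, np.exp(-d2 / sigma2), 0.0)     # impact factors for neighbors
    f = f / (f.sum(1, keepdims=True) + 1e-12)              # row-normalize influences
    c = 1.0 / (f.max(1) + 1e-12)                           # coherence: trust in own state
    c = b + (a - b) * (c - c.min()) / (c.max() - c.min() + 1e-12)  # rescale to [b, a]
    s = saliency.copy()
    for _ in range(steps):
        s = c * s + (1.0 - c) * (f @ s)                    # synchronous update rule
    return s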
1705.09439
Kosetsu Tsukuda
Kosetsu Tsukuda, Masataka Goto
Taste or Addiction?: Using Play Logs to Infer Song Selection Motivation
Accepted by The 21st Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD 2017)
null
null
null
cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Online music services are increasing in popularity. They enable us to analyze people's music listening behavior based on play logs. Although it is known that people listen to music based on topic (e.g., rock or jazz), we assume that when a user is addicted to an artist, s/he chooses the artist's songs regardless of topic. Based on this assumption, in this paper, we propose a probabilistic model to analyze people's music listening behavior. Our main contributions are three-fold. First, to the best of our knowledge, this is the first study modeling music listening behavior by taking into account the influence of addiction to artists. Second, by using real-world datasets of play logs, we showed the effectiveness of our proposed model. Third, we carried out qualitative experiments and showed that taking addiction into account enables us to analyze music listening behavior from a new viewpoint in terms of how people listen to music according to the time of day, how an artist's songs are listened to by people, etc. We also discuss the possibility of applying the analysis results to applications such as artist similarity computation and song recommendation.
[ { "version": "v1", "created": "Fri, 26 May 2017 05:54:20 GMT" } ]
2017-05-29T00:00:00
[ [ "Tsukuda", "Kosetsu", "" ], [ "Goto", "Masataka", "" ] ]
TITLE: Taste or Addiction?: Using Play Logs to Infer Song Selection Motivation ABSTRACT: Online music services are increasing in popularity. They enable us to analyze people's music listening behavior based on play logs. Although it is known that people listen to music based on topic (e.g., rock or jazz), we assume that when a user is addicted to an artist, s/he chooses the artist's songs regardless of topic. Based on this assumption, in this paper, we propose a probabilistic model to analyze people's music listening behavior. Our main contributions are three-fold. First, to the best of our knowledge, this is the first study modeling music listening behavior by taking into account the influence of addiction to artists. Second, by using real-world datasets of play logs, we showed the effectiveness of our proposed model. Third, we carried out qualitative experiments and showed that taking addiction into account enables us to analyze music listening behavior from a new viewpoint in terms of how people listen to music according to the time of day, how an artist's songs are listened to by people, etc. We also discuss the possibility of applying the analysis results to applications such as artist similarity computation and song recommendation.
no_new_dataset
0.953144
1705.09467
Yichao Yan
Yichao Yan, Bingbing Ni, Xiaokang Yang
Predicting Human Interaction via Relative Attention Model
To appear in IJCAI 2017
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Predicting human interaction is challenging as the on-going activity has to be inferred based on a partially observed video. Essentially, a good algorithm should effectively model the mutual influence between the two interacting subjects. Also, only a small region in the scene is discriminative for identifying the on-going interaction. In this work, we propose a relative attention model to explicitly address these difficulties. Built on a tri-coupled deep recurrent structure representing both interacting subjects and global interaction status, the proposed network collects spatio-temporal information from each subject, rectified with global interaction information, yielding effective interaction representation. Moreover, the proposed network also unifies an attention module to assign higher importance to the regions which are relevant to the on-going action. Extensive experiments have been conducted on two public datasets, and the results demonstrate that the proposed relative attention network successfully predicts informative regions between interacting subjects, which in turn yields superior human interaction prediction accuracy.
[ { "version": "v1", "created": "Fri, 26 May 2017 08:04:24 GMT" } ]
2017-05-29T00:00:00
[ [ "Yan", "Yichao", "" ], [ "Ni", "Bingbing", "" ], [ "Yang", "Xiaokang", "" ] ]
TITLE: Predicting Human Interaction via Relative Attention Model ABSTRACT: Predicting human interaction is challenging as the on-going activity has to be inferred based on a partially observed video. Essentially, a good algorithm should effectively model the mutual influence between the two interacting subjects. Also, only a small region in the scene is discriminative for identifying the on-going interaction. In this work, we propose a relative attention model to explicitly address these difficulties. Built on a tri-coupled deep recurrent structure representing both interacting subjects and global interaction status, the proposed network collects spatio-temporal information from each subject, rectified with global interaction information, yielding effective interaction representation. Moreover, the proposed network also unifies an attention module to assign higher importance to the regions which are relevant to the on-going action. Extensive experiments have been conducted on two public datasets, and the results demonstrate that the proposed relative attention network successfully predicts informative regions between interacting subjects, which in turn yields superior human interaction prediction accuracy.
no_new_dataset
0.947527
1705.09474
Donghui Wang
Yanan Li, Donghui Wang
Zero-Shot Learning with Generative Latent Prototype Model
This work was completed in Oct, 2016
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Zero-shot learning, which studies the problem of object classification for categories for which we have no training examples, is gaining increasing attention from the community. Most existing ZSL methods exploit deterministic transfer learning via an in-between semantic embedding space. In this paper, we try to attack this problem from a generative probabilistic modelling perspective. We assume that for any category, the observed representation, e.g. images or texts, is developed from a unique prototype in a latent space, in which the semantic relationship among prototypes is encoded via linear reconstruction. Taking advantage of this assumption, virtual instances of unseen classes can be generated from the corresponding prototype, giving rise to a novel ZSL model which can alleviate the domain shift problem inherent in direct transfer learning. Extensive experiments on three benchmark datasets show that our proposed model achieves state-of-the-art results.
[ { "version": "v1", "created": "Fri, 26 May 2017 08:22:13 GMT" } ]
2017-05-29T00:00:00
[ [ "Li", "Yanan", "" ], [ "Wang", "Donghui", "" ] ]
TITLE: Zero-Shot Learning with Generative Latent Prototype Model ABSTRACT: Zero-shot learning, which studies the problem of object classification for categories for which we have no training examples, is gaining increasing attention from the community. Most existing ZSL methods exploit deterministic transfer learning via an in-between semantic embedding space. In this paper, we try to attack this problem from a generative probabilistic modelling perspective. We assume that for any category, the observed representation, e.g. images or texts, is developed from a unique prototype in a latent space, in which the semantic relationship among prototypes is encoded via linear reconstruction. Taking advantage of this assumption, virtual instances of unseen classes can be generated from the corresponding prototype, giving rise to a novel ZSL model which can alleviate the domain shift problem inherent in direct transfer learning. Extensive experiments on three benchmark datasets show that our proposed model achieves state-of-the-art results.
no_new_dataset
0.949529
1705.09476
Donghui Wang
Yanan Li, Donghui Wang
Learning Robust Features with Incremental Auto-Encoders
This work was completed in Feb, 2015
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatically learning features, especially robust features, has attracted much attention in the machine learning community. In this paper, we propose a new method to learn non-linear robust features by taking advantage of the data manifold structure. We first follow the commonly used trick of the trade, that is, learning robust features with artificially corrupted data, i.e., training samples with manually injected noise. Following the idea of the auto-encoder, we assume that features should contain enough information to reconstruct the input well from its corrupted copies. However, merely reconstructing the clean input from its noisy copies could make the data manifold in the feature space noisy. To address this problem, we propose a new method, called Incremental Auto-Encoders, to iteratively denoise the extracted features. We assume the noisy manifold structure is caused by a diffusion process. Consequently, we reverse this specific diffusion process to further contract this noisy manifold, which results in an incremental optimization of the model parameters. Furthermore, we show that these learned non-linear features can be stacked into a hierarchy of features. Experimental results on real-world datasets demonstrate that the proposed method achieves better classification performance.
[ { "version": "v1", "created": "Fri, 26 May 2017 08:30:41 GMT" } ]
2017-05-29T00:00:00
[ [ "Li", "Yanan", "" ], [ "Wang", "Donghui", "" ] ]
TITLE: Learning Robust Features with Incremental Auto-Encoders ABSTRACT: Automatically learning features, especially robust features, has attracted much attention in the machine learning community. In this paper, we propose a new method to learn non-linear robust features by taking advantage of the data manifold structure. We first follow the commonly used trick of the trade, that is, learning robust features with artificially corrupted data, i.e., training samples with manually injected noise. Following the idea of the auto-encoder, we assume that features should contain enough information to reconstruct the input well from its corrupted copies. However, merely reconstructing the clean input from its noisy copies could make the data manifold in the feature space noisy. To address this problem, we propose a new method, called Incremental Auto-Encoders, to iteratively denoise the extracted features. We assume the noisy manifold structure is caused by a diffusion process. Consequently, we reverse this specific diffusion process to further contract this noisy manifold, which results in an incremental optimization of the model parameters. Furthermore, we show that these learned non-linear features can be stacked into a hierarchy of features. Experimental results on real-world datasets demonstrate that the proposed method achieves better classification performance.
no_new_dataset
0.944228
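The "trick of the trade" this abstract starts from, reconstructing the clean input from artificially corrupted copies, is the classic denoising auto-encoder. A minimal sketch follows (single hidden layer, masking noise, squared-error loss); the iterative manifold-contraction step of the proposed method is not shown, and all names and hyperparameters are illustrative.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_denoising_ae(X, n_hidden=64, noise=0.2, lr=0.1, epochs=50):
    """Minimal denoising auto-encoder: reconstruct the clean input X from a
    corrupted copy. X: (n, d) data scaled to [0, 1]. Returns the encoder (W, b)."""
    n, d = X.shape
    W = rng.normal(0.0, 0.01, (d, n_hidden)); b = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.01, (n_hidden, d)); b2 = np.zeros(d)
    for _ in range(epochs):
        Xc = X * (rng.random(X.shape) > noise)   # masking noise: randomly drop inputs
        H = sigmoid(Xc @ W + b)                  # encode the corrupted input
        Y = sigmoid(H @ W2 + b2)                 # decode to a reconstruction
        dY = (Y - X) * Y * (1.0 - Y)             # grad of squared error at the output
        dH = (dY @ W2.T) * H * (1.0 - H)         # backpropagate to the hidden layer
        W2 -= lr * (H.T @ dY) / n; b2 -= lr * dY.mean(0)
        W -= lr * (Xc.T @ dH) / n; b -= lr * dH.mean(0)
    return W, b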
1705.09602
Elena Burceanu
Elena Burceanu and Marius Leordeanu
Learning a Robust Society of Tracking Parts
9.5 pages of main content, 2.5 of bibliography, 2 pages of appendix, 3 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Object tracking is an essential task in computer vision that has been studied since the early days of the field. Being able to follow objects that undergo different transformations in the video sequence, including changes in scale, illumination, shape and occlusions, makes the problem extremely difficult. One of the real challenges is to keep track of the changes in object appearance and not drift towards the background clutter. Different from previous approaches, we obtain robustness against the background with a tracker model that is composed of many different parts. They are classifiers that respond at different scales and locations. The tracker system functions as a society of parts, each having its own role and level of credibility. Reliable classifiers decide the tracker's next move, while newcomers are first monitored before gaining the necessary level of reliability to participate in the decision process. Some parts that lose their consistency are rejected, while others that show consistency for a sufficiently long time are promoted to permanent roles. The tracker system, as a whole, could also go through different phases, from the usual, normal functioning to states of weak agreement and even crisis. The tracker system has different governing rules in each state. What truly distinguishes our work from others is not necessarily the strength of individual tracking parts, but the way in which they work together and build a strong and robust organization. We also propose an efficient way to learn many tracking parts simultaneously, with a single closed-form formulation. We obtain a fast and robust tracker with state-of-the-art performance on the challenging OTB50 dataset.
[ { "version": "v1", "created": "Fri, 26 May 2017 14:51:43 GMT" } ]
2017-05-29T00:00:00
[ [ "Burceanu", "Elena", "" ], [ "Leordeanu", "Marius", "" ] ]
TITLE: Learning a Robust Society of Tracking Parts ABSTRACT: Object tracking is an essential task in computer vision that has been studied since the early days of the field. Being able to follow objects that undergo different transformations in the video sequence, including changes in scale, illumination, shape and occlusions, makes the problem extremely difficult. One of the real challenges is to keep track of the changes in object appearance and not drift towards the background clutter. Different from previous approaches, we obtain robustness against the background with a tracker model that is composed of many different parts. They are classifiers that respond at different scales and locations. The tracker system functions as a society of parts, each having its own role and level of credibility. Reliable classifiers decide the tracker's next move, while newcomers are first monitored before gaining the necessary level of reliability to participate in the decision process. Some parts that lose their consistency are rejected, while others that show consistency for a sufficiently long time are promoted to permanent roles. The tracker system, as a whole, could also go through different phases, from the usual, normal functioning to states of weak agreement and even crisis. The tracker system has different governing rules in each state. What truly distinguishes our work from others is not necessarily the strength of individual tracking parts, but the way in which they work together and build a strong and robust organization. We also propose an efficient way to learn many tracking parts simultaneously, with a single closed-form formulation. We obtain a fast and robust tracker with state-of-the-art performance on the challenging OTB50 dataset.
no_new_dataset
0.940024
1610.02616
Zecheng Xie
Zecheng Xie, Zenghui Sun, Lianwen Jin, Hao Ni, Terry Lyons
Learning Spatial-Semantic Context with Fully Convolutional Recurrent Network for Online Handwritten Chinese Text Recognition
14 pages, 9 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Online handwritten Chinese text recognition (OHCTR) is a challenging problem as it involves a large-scale character set, ambiguous segmentation, and variable-length input sequences. In this paper, we exploit the outstanding capability of path signature to translate online pen-tip trajectories into informative signature feature maps using a sliding window-based method, successfully capturing the analytic and geometric properties of pen strokes with strong local invariance and robustness. A multi-spatial-context fully convolutional recurrent network (MCFCRN) is proposed to exploit the multiple spatial contexts from the signature feature maps and generate a prediction sequence while completely avoiding the difficult segmentation problem. Furthermore, an implicit language model is developed to make predictions based on semantic context within a predicting feature sequence, providing a new perspective for incorporating lexicon constraints and prior knowledge about a certain language in the recognition procedure. Experiments on two standard benchmarks, Dataset-CASIA and Dataset-ICDAR, yielded outstanding results, with correct rates of 97.10% and 97.15%, respectively, which are significantly better than the best result reported thus far in the literature.
[ { "version": "v1", "created": "Sun, 9 Oct 2016 02:39:07 GMT" }, { "version": "v2", "created": "Thu, 25 May 2017 15:33:19 GMT" } ]
2017-05-26T00:00:00
[ [ "Xie", "Zecheng", "" ], [ "Sun", "Zenghui", "" ], [ "Jin", "Lianwen", "" ], [ "Ni", "Hao", "" ], [ "Lyons", "Terry", "" ] ]
TITLE: Learning Spatial-Semantic Context with Fully Convolutional Recurrent Network for Online Handwritten Chinese Text Recognition ABSTRACT: Online handwritten Chinese text recognition (OHCTR) is a challenging problem as it involves a large-scale character set, ambiguous segmentation, and variable-length input sequences. In this paper, we exploit the outstanding capability of path signature to translate online pen-tip trajectories into informative signature feature maps using a sliding window-based method, successfully capturing the analytic and geometric properties of pen strokes with strong local invariance and robustness. A multi-spatial-context fully convolutional recurrent network (MCFCRN) is proposed to exploit the multiple spatial contexts from the signature feature maps and generate a prediction sequence while completely avoiding the difficult segmentation problem. Furthermore, an implicit language model is developed to make predictions based on semantic context within a predicting feature sequence, providing a new perspective for incorporating lexicon constraints and prior knowledge about a certain language in the recognition procedure. Experiments on two standard benchmarks, Dataset-CASIA and Dataset-ICDAR, yielded outstanding results, with correct rates of 97.10% and 97.15%, respectively, which are significantly better than the best result reported thus far in the literature.
no_new_dataset
0.950088
1611.01086
Shohreh Shaghaghian Ms
Shohreh Shaghaghian, Mark Coates
Online Bayesian Inference of Diffusion Networks
null
null
null
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding the process by which a contagion disseminates throughout a network is of great importance in many real-world applications. The required sophistication of the inference approach depends on the type of information we want to extract as well as the number of observations that are available to us. We analyze scenarios in which not only the underlying network structure (parental relationships and link strengths) needs to be detected, but also the infection times must be estimated. We assume that our only observation of the diffusion process is a set of time series, one for each node of the network, which exhibit changepoints when an infection occurs. After formulating a model to describe the contagion, and selecting appropriate prior distributions, we seek to find the set of model parameters that best explains our observations. Modeling the problem in a Bayesian framework, we exploit Markov Chain Monte Carlo, Sequential Monte Carlo, and time series analysis techniques to develop batch and online inference algorithms. We evaluate the performance of our proposed algorithms via numerical simulations of synthetic network contagions and analysis of real-world datasets.
[ { "version": "v1", "created": "Thu, 3 Nov 2016 16:41:02 GMT" }, { "version": "v2", "created": "Thu, 25 May 2017 01:16:05 GMT" } ]
2017-05-26T00:00:00
[ [ "Shaghaghian", "Shohreh", "" ], [ "Coates", "Mark", "" ] ]
TITLE: Online Bayesian Inference of Diffusion Networks ABSTRACT: Understanding the process by which a contagion disseminates throughout a network is of great importance in many real-world applications. The required sophistication of the inference approach depends on the type of information we want to extract as well as the number of observations that are available to us. We analyze scenarios in which not only the underlying network structure (parental relationships and link strengths) needs to be detected, but also the infection times must be estimated. We assume that our only observation of the diffusion process is a set of time series, one for each node of the network, which exhibit changepoints when an infection occurs. After formulating a model to describe the contagion, and selecting appropriate prior distributions, we seek to find the set of model parameters that best explains our observations. Modeling the problem in a Bayesian framework, we exploit Markov Chain Monte Carlo, Sequential Monte Carlo, and time series analysis techniques to develop batch and online inference algorithms. We evaluate the performance of our proposed algorithms via numerical simulations of synthetic network contagions and analysis of real-world datasets.
no_new_dataset
0.947817
1704.01704
Yi Han
Yi Han, Benjamin I. P. Rubinstein
Adequacy of the Gradient-Descent Method for Classifier Evasion Attacks
10 pages, 7 figures, 10 tables
null
null
null
cs.CR cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite the wide use of machine learning in adversarial settings including computer security, recent studies have demonstrated vulnerabilities to evasion attacks---carefully crafted adversarial samples that closely resemble legitimate instances, but cause misclassification. In this paper, we examine the adequacy of the leading approach to generating adversarial samples---the gradient descent approach. In particular, (1) we perform extensive experiments on three datasets, MNIST, USPS and Spambase, in order to analyse the effectiveness of the gradient-descent method against non-linear support vector machines, and conclude that carefully reduced kernel smoothness can significantly increase robustness to the attack; (2) we demonstrate that separated inter-class support vectors lead to more secure models, and propose a quantity similar to margin that can efficiently predict potential susceptibility to gradient-descent attacks, before the attack is launched; and (3) we design a new adversarial sample construction algorithm based on optimising the multiplicative ratio of class decision functions.
[ { "version": "v1", "created": "Thu, 6 Apr 2017 04:35:40 GMT" }, { "version": "v2", "created": "Thu, 25 May 2017 04:32:43 GMT" } ]
2017-05-26T00:00:00
[ [ "Han", "Yi", "" ], [ "Rubinstein", "Benjamin I. P.", "" ] ]
TITLE: Adequacy of the Gradient-Descent Method for Classifier Evasion Attacks ABSTRACT: Despite the wide use of machine learning in adversarial settings including computer security, recent studies have demonstrated vulnerabilities to evasion attacks---carefully crafted adversarial samples that closely resemble legitimate instances, but cause misclassification. In this paper, we examine the adequacy of the leading approach to generating adversarial samples---the gradient descent approach. In particular, (1) we perform extensive experiments on three datasets, MNIST, USPS and Spambase, in order to analyse the effectiveness of the gradient-descent method against non-linear support vector machines, and conclude that carefully reduced kernel smoothness can significantly increase robustness to the attack; (2) we demonstrate that separated inter-class support vectors lead to more secure models, and propose a quantity similar to margin that can efficiently predict potential susceptibility to gradient-descent attacks, before the attack is launched; and (3) we design a new adversarial sample construction algorithm based on optimising the multiplicative ratio of class decision functions.
no_new_dataset
0.947866
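The gradient-descent attack this record studies can be illustrated on a non-linear SVM: an RBF-kernel decision function is differentiable, so an adversary can descend it until a positively classified sample crosses the boundary. A sketch under these assumptions; gamma must be supplied by the caller to match the fitted model, and names and step sizes are illustrative, not the paper's exact algorithm.

import numpy as np
from sklearn.svm import SVC  # clf below is assumed to be a fitted binary SVC(kernel="rbf")

def rbf_svm_gradient(clf, x, gamma):
    """Analytic gradient of f(x) = sum_i alpha_i K(x_i, x) + rho, with
    K = exp(-gamma ||x_i - x||^2): df/dx = sum_i alpha_i K(x_i, x) 2 gamma (x_i - x)."""
    sv = clf.support_vectors_
    alpha = clf.dual_coef_.ravel()
    k = np.exp(-gamma * ((sv - x) ** 2).sum(1))
    return (alpha * k * 2.0 * gamma) @ (sv - x)

def evade(clf, x, gamma, lr=0.05, steps=200):
    """Gradient-descent evasion: push a positively classified sample x across
    the decision boundary by descending the decision function."""
    x = x.copy()
    for _ in range(steps):
        if clf.decision_function(x[None, :])[0] < 0.0:   # crossed the boundary
            break
        x -= lr * rbf_svm_gradient(clf, x, gamma)
    return x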
1704.08772
Grigorios Chrysos
Grigorios G. Chrysos, Stefanos Zafeiriou
Deep Face Deblurring
null
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Blind deblurring is a long-studied task; however, the outcomes of generic methods are not effective on real-world blurred images. Domain-specific methods for deblurring targeted object categories, e.g. text or faces, frequently outperform their generic counterparts, hence they are attracting an increasing amount of attention. In this work, we develop such a domain-specific method to tackle deblurring of human faces, henceforth referred to as face deblurring. Studying faces is of tremendous significance in computer vision; however, face deblurring has yet to demonstrate convincing results. This can be partly attributed to the combination of i) poor texture and ii) highly structured shape, which render the typically used contour/gradient priors sub-optimal. In our work, instead of making assumptions over the prior, we adopt a learning approach by inserting weak supervision that exploits the well-documented structure of the face. Namely, we utilise a deep network to perform the deblurring and employ a face alignment technique to pre-process each face. We additionally surpass the deep network's requirement for thousands of training samples by introducing an efficient framework that allows the generation of a large dataset. We utilised this framework to create 2MF2, a dataset of over two million frames. We conducted experiments with real-world blurred facial images and report that our method returns a result close to the sharp natural latent image.
[ { "version": "v1", "created": "Thu, 27 Apr 2017 23:01:45 GMT" }, { "version": "v2", "created": "Thu, 25 May 2017 07:45:36 GMT" } ]
2017-05-26T00:00:00
[ [ "Chrysos", "Grigorios G.", "" ], [ "Zafeiriou", "Stefanos", "" ] ]
TITLE: Deep Face Deblurring ABSTRACT: Blind deblurring is a long-studied task; however, the outcomes of generic methods are not effective on real-world blurred images. Domain-specific methods for deblurring targeted object categories, e.g. text or faces, frequently outperform their generic counterparts, hence they are attracting an increasing amount of attention. In this work, we develop such a domain-specific method to tackle deblurring of human faces, henceforth referred to as face deblurring. Studying faces is of tremendous significance in computer vision; however, face deblurring has yet to demonstrate convincing results. This can be partly attributed to the combination of i) poor texture and ii) highly structured shape, which render the typically used contour/gradient priors sub-optimal. In our work, instead of making assumptions over the prior, we adopt a learning approach by inserting weak supervision that exploits the well-documented structure of the face. Namely, we utilise a deep network to perform the deblurring and employ a face alignment technique to pre-process each face. We additionally surpass the deep network's requirement for thousands of training samples by introducing an efficient framework that allows the generation of a large dataset. We utilised this framework to create 2MF2, a dataset of over two million frames. We conducted experiments with real-world blurred facial images and report that our method returns a result close to the sharp natural latent image.
new_dataset
0.892234
1705.08923
Tao Zhou
Tao Zhou, Muhao Chen, Jie Yu, Demetri Terzopoulos
Attention-based Natural Language Person Retrieval
CVPR 2017 Workshop (vision meets cognition)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Following the recent progress in image classification and captioning using deep learning, we develop a novel natural language person retrieval system based on an attention mechanism. More specifically, given the description of a person, the goal is to localize the person in an image. To this end, we first construct a benchmark dataset for natural language person retrieval. To do so, we generate bounding boxes for persons in a public image dataset from the segmentation masks, which are then annotated with descriptions and attributes using Amazon Mechanical Turk. We then adopt the region proposal network in Faster R-CNN as a candidate region generator. The cropped images based on the region proposals as well as the whole images with attention weights are fed into Convolutional Neural Networks for visual feature extraction, while the natural language expression and attributes are input to Bidirectional Long Short-Term Memory (BLSTM) models for text feature extraction. The visual and text features are integrated to score region proposals, and the one with the highest score is retrieved as the output of our system. The experimental results show significant improvement over the state-of-the-art method for generic object retrieval, and this line of research promises to benefit search in surveillance video footage.
[ { "version": "v1", "created": "Wed, 24 May 2017 18:36:58 GMT" } ]
2017-05-26T00:00:00
[ [ "Zhou", "Tao", "" ], [ "Chen", "Muhao", "" ], [ "Yu", "Jie", "" ], [ "Terzopoulos", "Demetri", "" ] ]
TITLE: Attention-based Natural Language Person Retrieval ABSTRACT: Following the recent progress in image classification and captioning using deep learning, we develop a novel natural language person retrieval system based on an attention mechanism. More specifically, given the description of a person, the goal is to localize the person in an image. To this end, we first construct a benchmark dataset for natural language person retrieval. To do so, we generate bounding boxes for persons in a public image dataset from the segmentation masks, which are then annotated with descriptions and attributes using Amazon Mechanical Turk. We then adopt the region proposal network in Faster R-CNN as a candidate region generator. The cropped images based on the region proposals as well as the whole images with attention weights are fed into Convolutional Neural Networks for visual feature extraction, while the natural language expression and attributes are input to Bidirectional Long Short-Term Memory (BLSTM) models for text feature extraction. The visual and text features are integrated to score region proposals, and the one with the highest score is retrieved as the output of our system. The experimental results show significant improvement over the state-of-the-art method for generic object retrieval, and this line of research promises to benefit search in surveillance video footage.
new_dataset
0.941708
1705.08982
Shuai Xiao
Shuai Xiao, Junchi Yan, Stephen M. Chu, Xiaokang Yang, Hongyuan Zha
Modeling The Intensity Function Of Point Process Via Recurrent Neural Networks
Accepted at Thirty-First AAAI Conference on Artificial Intelligence (AAAI17)
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Event sequences, asynchronously generated with random timestamps, are ubiquitous among applications. The precise and arbitrary timestamps can carry important clues about the underlying dynamics, and they make event data fundamentally different from time series, whereby the series is indexed with fixed and equal time intervals. One expressive mathematical tool for modeling events is the point process. The intensity functions of many point processes involve two components: the background and the effect of the history. Due to its inherent spontaneousness, the background can be treated as a time series, while the other component needs to handle the history of events. In this paper, we model the background by a Recurrent Neural Network (RNN) with its units aligned with time series indexes, while the history effect is modeled by another RNN whose units are aligned with asynchronous events to capture the long-range dynamics. The whole model, with event type and timestamp prediction output layers, can be trained end-to-end. Our approach takes an RNN perspective on point processes, and models their background and history effect. For utility, our method allows a black-box treatment for modeling the intensity, which is often a pre-defined parametric form in point processes. Meanwhile, end-to-end training opens the avenue for reusing rich existing techniques in deep networks for point process modeling. We apply our model to the predictive maintenance problem using a log dataset from more than 1000 ATMs of a global bank headquartered in North America.
[ { "version": "v1", "created": "Wed, 24 May 2017 22:23:14 GMT" } ]
2017-05-26T00:00:00
[ [ "Xiao", "Shuai", "" ], [ "Yan", "Junchi", "" ], [ "Chu", "Stephen M.", "" ], [ "Yang", "Xiaokang", "" ], [ "Zha", "Hongyuan", "" ] ]
TITLE: Modeling The Intensity Function Of Point Process Via Recurrent Neural Networks ABSTRACT: Event sequences, asynchronously generated with random timestamps, are ubiquitous among applications. The precise and arbitrary timestamps can carry important clues about the underlying dynamics, and they make event data fundamentally different from time series, whereby the series is indexed with fixed and equal time intervals. One expressive mathematical tool for modeling events is the point process. The intensity functions of many point processes involve two components: the background and the effect of the history. Due to its inherent spontaneousness, the background can be treated as a time series, while the other component needs to handle the history of events. In this paper, we model the background by a Recurrent Neural Network (RNN) with its units aligned with time series indexes, while the history effect is modeled by another RNN whose units are aligned with asynchronous events to capture the long-range dynamics. The whole model, with event type and timestamp prediction output layers, can be trained end-to-end. Our approach takes an RNN perspective on point processes, and models their background and history effect. For utility, our method allows a black-box treatment for modeling the intensity, which is often a pre-defined parametric form in point processes. Meanwhile, end-to-end training opens the avenue for reusing rich existing techniques in deep networks for point process modeling. We apply our model to the predictive maintenance problem using a log dataset from more than 1000 ATMs of a global bank headquartered in North America.
no_new_dataset
0.952838
1705.08994
Hassan Jameel Asghar
Hassan Jameel Asghar, Paul Tyler, Mohamed Ali Kaafar
On the Privacy of the Opal Data Release: A Response
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This document is a response to a report from the University of Melbourne on the privacy of the Opal dataset release. The Opal dataset was released by Data61 (CSIRO) in conjunction with the Transport for New South Wales (TfNSW). The data consists of two separate weeks of "tap-on/tap-off" data of individuals who used any of the four different modes of public transport from TfNSW: buses, light rail, train and ferries. These taps are recorded through the smart ticketing system, known as Opal, available in the state of New South Wales, Australia.
[ { "version": "v1", "created": "Wed, 24 May 2017 23:11:13 GMT" } ]
2017-05-26T00:00:00
[ [ "Asghar", "Hassan Jameel", "" ], [ "Tyler", "Paul", "" ], [ "Kaafar", "Mohamed Ali", "" ] ]
TITLE: On the Privacy of the Opal Data Release: A Response ABSTRACT: This document is a response to a report from the University of Melbourne on the privacy of the Opal dataset release. The Opal dataset was released by Data61 (CSIRO) in conjunction with the Transport for New South Wales (TfNSW). The data consists of two separate weeks of "tap-on/tap-off" data of individuals who used any of the four different modes of public transport from TfNSW: buses, light rail, train and ferries. These taps are recorded through the smart ticketing system, known as Opal, available in the state of New South Wales, Australia.
no_new_dataset
0.88136
1705.09054
Zhipeng Xie
Zhipeng Xie and Junfeng Hu
Max-Cosine Matching Based Neural Models for Recognizing Textual Entailment
null
DASFAA (1) 2017: 295-308
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recognizing textual entailment is a fundamental task in a variety of text mining or natural language processing applications. This paper proposes a simple neural model for the RTE problem. It first matches each word in the hypothesis with its most-similar word in the premise, producing an augmented representation of the hypothesis conditioned on the premise as a sequence of word pairs. The LSTM model is then used to model this augmented sequence, and the final output from the LSTM is fed into a softmax layer to make the prediction. Besides the base model, in order to enhance its performance, we also propose three techniques: the integration of multiple word-embedding libraries, bi-way integration, and an ensemble based on model averaging. Experimental results on the SNLI dataset have shown that the three techniques are effective in boosting the predictive accuracy and that our method outperforms several state-of-the-art ones.
[ { "version": "v1", "created": "Thu, 25 May 2017 05:45:42 GMT" } ]
2017-05-26T00:00:00
[ [ "Xie", "Zhipeng", "" ], [ "Hu", "Junfeng", "" ] ]
TITLE: Max-Cosine Matching Based Neural Models for Recognizing Textual Entailment ABSTRACT: Recognizing textual entailment is a fundamental task in a variety of text mining or natural language processing applications. This paper proposes a simple neural model for the RTE problem. It first matches each word in the hypothesis with its most-similar word in the premise, producing an augmented representation of the hypothesis conditioned on the premise as a sequence of word pairs. The LSTM model is then used to model this augmented sequence, and the final output from the LSTM is fed into a softmax layer to make the prediction. Besides the base model, in order to enhance its performance, we also propose three techniques: the integration of multiple word-embedding libraries, bi-way integration, and an ensemble based on model averaging. Experimental results on the SNLI dataset have shown that the three techniques are effective in boosting the predictive accuracy and that our method outperforms several state-of-the-art ones.
no_new_dataset
0.948155
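The first step of this model, matching each hypothesis word with its most-similar premise word, reduces to a max over cosine similarities. A minimal sketch follows; the (k, 2d) concatenation is one natural encoding of the resulting word pairs, and the paper's exact pairing representation may differ.

import numpy as np

def max_cosine_match(premise_vecs, hypothesis_vecs):
    """For each hypothesis word, find the most cosine-similar premise word and
    return the augmented sequence of (hypothesis, matched-premise) pairs.
    premise_vecs: (m, d) and hypothesis_vecs: (k, d) word embeddings."""
    p = premise_vecs / (np.linalg.norm(premise_vecs, axis=1, keepdims=True) + 1e-12)
    h = hypothesis_vecs / (np.linalg.norm(hypothesis_vecs, axis=1, keepdims=True) + 1e-12)
    sims = h @ p.T                   # (k, m) cosine similarities
    best = sims.argmax(axis=1)       # index of each word's best premise match
    return np.concatenate([hypothesis_vecs, premise_vecs[best]], axis=1)  # (k, 2d)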
1705.09058
Yihui He
Yihui He, Ming Xiang
An Empirical Analysis of Approximation Algorithms for the Euclidean Traveling Salesman Problem
4 pages, 5 figures
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The traveling salesman problem (TSP) is a classical computer science optimization problem with applications to industrial engineering, theoretical computer science, bioinformatics, and several other disciplines. In recent years, there has been a plethora of novel approaches for approximate solutions, ranging from simplistic greedy to cooperative distributed algorithms derived from artificial intelligence. In this paper, we perform an evaluation and analysis of cornerstone algorithms for the Euclidean TSP. We evaluate greedy, 2-opt, and genetic algorithms. We use several datasets as input for the algorithms, including a small dataset, a medium-sized dataset representing cities in the United States, and a synthetic dataset consisting of 200 cities to test algorithm scalability. We discover that the greedy and 2-opt algorithms efficiently calculate solutions for smaller datasets. The genetic algorithm has the best performance in terms of optimality on medium to large datasets, but generally has a longer runtime. Our implementation is publicly available.
[ { "version": "v1", "created": "Thu, 25 May 2017 06:21:39 GMT" } ]
2017-05-26T00:00:00
[ [ "He", "Yihui", "" ], [ "Xiang", "Ming", "" ] ]
TITLE: An Empirical Analysis of Approximation Algorithms for the Euclidean Traveling Salesman Problem ABSTRACT: The traveling salesman problem (TSP) is a classical computer science optimization problem with applications to industrial engineering, theoretical computer science, bioinformatics, and several other disciplines. In recent years, there has been a plethora of novel approaches for approximate solutions, ranging from simplistic greedy to cooperative distributed algorithms derived from artificial intelligence. In this paper, we perform an evaluation and analysis of cornerstone algorithms for the Euclidean TSP. We evaluate greedy, 2-opt, and genetic algorithms. We use several datasets as input for the algorithms, including a small dataset, a medium-sized dataset representing cities in the United States, and a synthetic dataset consisting of 200 cities to test algorithm scalability. We discover that the greedy and 2-opt algorithms efficiently calculate solutions for smaller datasets. The genetic algorithm has the best performance in terms of optimality on medium to large datasets, but generally has a longer runtime. Our implementation is publicly available.
new_dataset
0.826432
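Two of the baselines evaluated here are easy to state precisely: greedy (nearest-neighbour) tour construction and 2-opt local search, which reverses a tour segment whenever that shortens the tour. A minimal, unoptimized sketch over a precomputed distance matrix; the names are illustrative, not the authors' implementation.

import numpy as np

def tour_length(tour, dist):
    return sum(dist[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def greedy_tour(dist):
    """Greedy (nearest-neighbour) construction starting from city 0."""
    n = len(dist); tour = [0]; left = set(range(1, n))
    while left:
        nxt = min(left, key=lambda c: dist[tour[-1], c])  # closest unvisited city
        tour.append(nxt); left.remove(nxt)
    return tour

def two_opt(tour, dist):
    """2-opt local search: reverse a segment whenever that shortens the
    tour, until no improving move remains."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(candidate, dist) < tour_length(tour, dist):
                    tour, improved = candidate, True
    return tour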
1705.09142
Konda Reddy Mopuri
Konda Reddy Mopuri, Vishal B. Athreya and R. Venkatesh Babu
Deep image representations using caption generators
ICME 2017
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep learning exploits large volumes of labeled data to learn powerful models. When the target dataset is small, it is a common practice to perform transfer learning using pre-trained models to learn new task-specific representations. However, pre-trained CNNs for image recognition are provided with limited information about the image during training, which is the label alone. Tasks such as scene retrieval suffer from features learned from this weak supervision and require stronger supervision to better understand the contents of the image. In this paper, we exploit the features learned from caption generating models to learn novel task-specific image representations. In particular, we consider the state-of-the-art captioning system Show and Tell~\cite{SnT-pami-2016} and the dense region description model DenseCap~\cite{densecap-cvpr-2016}. We demonstrate that, owing to richer supervision provided during the process of training, the features learned by the captioning system perform better than those of CNNs. Further, we train a siamese network with a modified pair-wise loss to fuse the features learned by~\cite{SnT-pami-2016} and~\cite{densecap-cvpr-2016} and learn image representations suitable for retrieval. Experiments show that the proposed fusion exploits the complementary nature of the individual features and yields state-of-the-art retrieval results on benchmark datasets.
[ { "version": "v1", "created": "Thu, 25 May 2017 12:13:27 GMT" } ]
2017-05-26T00:00:00
[ [ "Mopuri", "Konda Reddy", "" ], [ "Athreya", "Vishal B.", "" ], [ "Babu", "R. Venkatesh", "" ] ]
TITLE: Deep image representations using caption generators ABSTRACT: Deep learning exploits large volumes of labeled data to learn powerful models. When the target dataset is small, it is a common practice to perform transfer learning using pre-trained models to learn new task-specific representations. However, pre-trained CNNs for image recognition are provided with limited information about the image during training, which is the label alone. Tasks such as scene retrieval suffer from features learned from this weak supervision and require stronger supervision to better understand the contents of the image. In this paper, we exploit the features learned from caption generating models to learn novel task-specific image representations. In particular, we consider the state-of-the-art captioning system Show and Tell~\cite{SnT-pami-2016} and the dense region description model DenseCap~\cite{densecap-cvpr-2016}. We demonstrate that, owing to richer supervision provided during the process of training, the features learned by the captioning system perform better than those of CNNs. Further, we train a siamese network with a modified pair-wise loss to fuse the features learned by~\cite{SnT-pami-2016} and~\cite{densecap-cvpr-2016} and learn image representations suitable for retrieval. Experiments show that the proposed fusion exploits the complementary nature of the individual features and yields state-of-the-art retrieval results on benchmark datasets.
no_new_dataset
0.948775
1705.09269
Joseph Anderson
Joseph Anderson
Geometric Methods for Robust Data Analysis in High Dimension
180 Pages, 7 Figures, PhD thesis, Ohio State (2017)
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine learning and data analysis now find both scientific and industrial application in biology, chemistry, geology, medicine, and physics. These applications rely on large quantities of data gathered from automated sensors and user input. Furthermore, the dimensionality of many datasets is extreme: more details are being gathered about single user interactions or sensor readings. All of these applications encounter problems with a common theme: use observed data to make inferences about the world. Our work obtains the first provably efficient algorithms for Independent Component Analysis (ICA) in the presence of heavy-tailed data. The main tool in this result is the centroid body (a well-known topic in convex geometry), along with optimization and random walks for sampling from a convex body. This is the first algorithmic use of the centroid body, and it is of independent theoretical interest, since it effectively replaces the estimation of covariance from samples, and is more generally accessible. This reduction relies on a non-linear transformation of samples from such an intersection of halfspaces (i.e. a simplex) to samples which are approximately from a linearly transformed product distribution. Through this transformation of samples, which can be done efficiently, one can then use an ICA algorithm to recover the vertices of the intersection of halfspaces. Finally, we again use ICA as an algorithmic primitive to construct an efficient solution to the widely-studied problem of learning the parameters of a Gaussian mixture model. Our algorithm again transforms samples from a Gaussian mixture model into samples which fit into the ICA model and, when processed by an ICA algorithm, result in recovery of the mixture parameters. Our algorithm is effective even when the number of Gaussians in the mixture grows polynomially with the ambient dimension.
[ { "version": "v1", "created": "Thu, 25 May 2017 17:25:04 GMT" } ]
2017-05-26T00:00:00
[ [ "Anderson", "Joseph", "" ] ]
TITLE: Geometric Methods for Robust Data Analysis in High Dimension ABSTRACT: Machine learning and data analysis now find both scientific and industrial application in biology, chemistry, geology, medicine, and physics. These applications rely on large quantities of data gathered from automated sensors and user input. Furthermore, the dimensionality of many datasets is extreme: more details are being gathered about single user interactions or sensor readings. All of these applications encounter problems with a common theme: use observed data to make inferences about the world. Our work obtains the first provably efficient algorithms for Independent Component Analysis (ICA) in the presence of heavy-tailed data. The main tool in this result is the centroid body (a well-known topic in convex geometry), along with optimization and random walks for sampling from a convex body. This is the first algorithmic use of the centroid body, and it is of independent theoretical interest, since it effectively replaces the estimation of covariance from samples, and is more generally accessible. This reduction relies on a non-linear transformation of samples from such an intersection of halfspaces (i.e. a simplex) to samples which are approximately from a linearly transformed product distribution. Through this transformation of samples, which can be done efficiently, one can then use an ICA algorithm to recover the vertices of the intersection of halfspaces. Finally, we again use ICA as an algorithmic primitive to construct an efficient solution to the widely-studied problem of learning the parameters of a Gaussian mixture model. Our algorithm again transforms samples from a Gaussian mixture model into samples which fit into the ICA model and, when processed by an ICA algorithm, result in recovery of the mixture parameters. Our algorithm is effective even when the number of Gaussians in the mixture grows polynomially with the ambient dimension.
no_new_dataset
0.939858
1511.05286
Oren Kraus
Oren Z. Kraus, Lei Jimmy Ba, Brendan Frey
Classifying and Segmenting Microscopy Images Using Convolutional Multiple Instance Learning
null
Bioinformatics (2016) 32 (12): i52-i59
10.1093/bioinformatics/btw252
null
cs.CV q-bio.SC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Convolutional neural networks (CNN) have achieved state of the art performance on both classification and segmentation tasks. Applying CNNs to microscopy images is challenging due to the lack of datasets labeled at the single cell level. We extend the application of CNNs to microscopy image classification and segmentation using multiple instance learning (MIL). We present the adaptive Noisy-AND MIL pooling function, a new MIL operator that is robust to outliers. Combining CNNs with MIL enables training CNNs using full resolution microscopy images with global labels. We base our approach on the similarity between the aggregation function used in MIL and pooling layers used in CNNs. We show that training MIL CNNs end-to-end outperforms several previous methods on both mammalian and yeast microscopy images without requiring any segmentation steps.
[ { "version": "v1", "created": "Tue, 17 Nov 2015 06:55:58 GMT" } ]
2017-05-25T00:00:00
[ [ "Kraus", "Oren Z.", "" ], [ "Ba", "Lei Jimmy", "" ], [ "Frey", "Brendan", "" ] ]
TITLE: Classifying and Segmenting Microscopy Images Using Convolutional Multiple Instance Learning ABSTRACT: Convolutional neural networks (CNN) have achieved state of the art performance on both classification and segmentation tasks. Applying CNNs to microscopy images is challenging due to the lack of datasets labeled at the single cell level. We extend the application of CNNs to microscopy image classification and segmentation using multiple instance learning (MIL). We present the adaptive Noisy-AND MIL pooling function, a new MIL operator that is robust to outliers. Combining CNNs with MIL enables training CNNs using full resolution microscopy images with global labels. We base our approach on the similarity between the aggregation function used in MIL and pooling layers used in CNNs. We show that training MIL CNNs end-to-end outperforms several previous methods on both mammalian and yeast microscopy images without requiring any segmentation steps.
no_new_dataset
0.956227
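The adaptive Noisy-AND pooling this record introduces maps per-instance class probabilities to a bag-level probability through a rescaled sigmoid of their mean, with a learned per-class threshold b_i and a fixed slope a. A sketch of our reading of the operator; the exact constants and parameterization should be checked against the paper.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def noisy_and(p, b, a=10.0):
    """Adaptive Noisy-AND MIL pooling (illustrative). p: (n_instances,
    n_classes) per-instance class probabilities; b: (n_classes,) learned soft
    thresholds in [0, 1]; a: fixed slope. Returns bag-level class probabilities."""
    p_mean = p.mean(axis=0)                                  # mean instance probability per class
    num = sigmoid(a * (p_mean - b)) - sigmoid(-a * b)        # shifted sigmoid of the mean
    den = sigmoid(a * (1.0 - b)) - sigmoid(-a * b)           # rescale so output spans [0, 1]
    return num / den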
1603.01508
Robert Kleinberg
Arpita Ghosh and Robert Kleinberg
Inferential Privacy Guarantees for Differentially Private Mechanisms
null
null
null
null
cs.DS cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The correlations and network structure amongst individuals in datasets today---whether explicitly articulated, or deduced from biological or behavioral connections---pose new issues around privacy guarantees, because of inferences that can be made about one individual from another's data. This motivates quantifying privacy in networked contexts in terms of "inferential privacy"---which measures the change in beliefs about an individual's data from the result of a computation---as originally proposed by Dalenius in the 1970's. Inferential privacy is implied by differential privacy when data are independent, but can be much worse when data are correlated; indeed, simple examples, as well as a general impossibility theorem of Dwork and Naor, preclude the possibility of achieving non-trivial inferential privacy when the adversary can have arbitrary auxiliary information. In this paper, we ask how differential privacy guarantees translate to guarantees on inferential privacy in networked contexts: specifically, under what limitations on the adversary's information about correlations, modeled as a prior distribution over datasets, can we deduce an inferential guarantee from a differential one? We prove two main results. The first result pertains to distributions that satisfy a natural positive-affiliation condition, and gives an upper bound on the inferential privacy guarantee for any differentially private mechanism. This upper bound is matched by a simple mechanism that adds Laplace noise to the sum of the data. The second result pertains to distributions that have weak correlations, defined in terms of a suitable "influence matrix". The result provides an upper bound for inferential privacy in terms of the differential privacy parameter and the spectral norm of this matrix.
[ { "version": "v1", "created": "Fri, 4 Mar 2016 15:50:24 GMT" }, { "version": "v2", "created": "Tue, 23 May 2017 18:52:06 GMT" } ]
2017-05-25T00:00:00
[ [ "Ghosh", "Arpita", "" ], [ "Kleinberg", "Robert", "" ] ]
TITLE: Inferential Privacy Guarantees for Differentially Private Mechanisms ABSTRACT: The correlations and network structure amongst individuals in datasets today---whether explicitly articulated, or deduced from biological or behavioral connections---pose new issues around privacy guarantees, because of inferences that can be made about one individual from another's data. This motivates quantifying privacy in networked contexts in terms of "inferential privacy"---which measures the change in beliefs about an individual's data from the result of a computation---as originally proposed by Dalenius in the 1970's. Inferential privacy is implied by differential privacy when data are independent, but can be much worse when data are correlated; indeed, simple examples, as well as a general impossibility theorem of Dwork and Naor, preclude the possibility of achieving non-trivial inferential privacy when the adversary can have arbitrary auxiliary information. In this paper, we ask how differential privacy guarantees translate to guarantees on inferential privacy in networked contexts: specifically, under what limitations on the adversary's information about correlations, modeled as a prior distribution over datasets, can we deduce an inferential guarantee from a differential one? We prove two main results. The first result pertains to distributions that satisfy a natural positive-affiliation condition, and gives an upper bound on the inferential privacy guarantee for any differentially private mechanism. This upper bound is matched by a simple mechanism that adds Laplace noise to the sum of the data. The second result pertains to distributions that have weak correlations, defined in terms of a suitable "influence matrix". The result provides an upper bound for inferential privacy in terms of the differential privacy parameter and the spectral norm of this matrix.
no_new_dataset
0.9455
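The record above mentions a simple mechanism that adds Laplace noise to the sum of the data. A minimal sketch of that standard mechanism follows; it is generic differential privacy, not the paper's exact construction, and the per-entry bound `value_range` plus the example data are assumptions for illustration.

```python
import numpy as np

def laplace_sum(data, epsilon, value_range=1.0, rng=None):
    """Release the sum of bounded data with epsilon-differential privacy.

    Each entry is assumed to lie in an interval of width `value_range`,
    so the sum has sensitivity `value_range` and Laplace noise with
    scale value_range / epsilon suffices.
    """
    rng = np.random.default_rng() if rng is None else rng
    true_sum = float(np.sum(data))
    noise = rng.laplace(loc=0.0, scale=value_range / epsilon)
    return true_sum + noise

# Example: private sum of 1000 binary values at epsilon = 0.5
data = np.random.default_rng(0).integers(0, 2, size=1000)
print(laplace_sum(data, epsilon=0.5))
```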
1605.05197
Philippe Weinzaepfel
Philippe Weinzaepfel, Xavier Martin, Cordelia Schmid
Human Action Localization with Sparse Spatial Supervision
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce an approach for spatio-temporal human action localization using sparse spatial supervision. Our method leverages the large amount of annotated humans available today and extracts human tubes by combining a state-of-the-art human detector with a tracking-by-detection approach. Given these high-quality human tubes and temporal supervision, we select positive and negative tubes with very sparse spatial supervision, i.e., only one spatially annotated frame per instance. The selected tubes allow us to effectively learn a spatio-temporal action detector based on dense trajectories or CNNs. We conduct experiments on existing action localization benchmarks: UCF-Sports, J-HMDB and UCF-101. Our results show that our approach, despite using sparse spatial supervision, performs on par with methods using full supervision, i.e., one bounding box annotation per frame. To further validate our method, we introduce DALY (Daily Action Localization in YouTube), a dataset for realistic action localization in space and time. It contains high quality temporal and spatial annotations for 3.6k instances of 10 actions in 31 hours of videos (3.3M frames). It is an order of magnitude larger than existing datasets, with more diversity in appearance and long untrimmed videos.
[ { "version": "v1", "created": "Tue, 17 May 2016 14:55:03 GMT" }, { "version": "v2", "created": "Tue, 23 May 2017 19:19:23 GMT" } ]
2017-05-25T00:00:00
[ [ "Weinzaepfel", "Philippe", "" ], [ "Martin", "Xavier", "" ], [ "Schmid", "Cordelia", "" ] ]
TITLE: Human Action Localization with Sparse Spatial Supervision ABSTRACT: We introduce an approach for spatio-temporal human action localization using sparse spatial supervision. Our method leverages the large amount of annotated humans available today and extracts human tubes by combining a state-of-the-art human detector with a tracking-by-detection approach. Given these high-quality human tubes and temporal supervision, we select positive and negative tubes with very sparse spatial supervision, i.e., only one spatially annotated frame per instance. The selected tubes allow us to effectively learn a spatio-temporal action detector based on dense trajectories or CNNs. We conduct experiments on existing action localization benchmarks: UCF-Sports, J-HMDB and UCF-101. Our results show that our approach, despite using sparse spatial supervision, performs on par with methods using full supervision, i.e., one bounding box annotation per frame. To further validate our method, we introduce DALY (Daily Action Localization in YouTube), a dataset for realistic action localization in space and time. It contains high quality temporal and spatial annotations for 3.6k instances of 10 actions in 31 hours of videos (3.3M frames). It is an order of magnitude larger than existing datasets, with more diversity in appearance and long untrimmed videos.
new_dataset
0.956796
1609.09560
Michele Nogueira
Michele Nogueira and Augusto Almeida Santos and Jos\'e M. F. Moura
Early Signals from Volumetric DDoS Attacks: An Empirical Study
null
null
null
null
cs.NI cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Distributed Denial of Service (DDoS) is a common type of cybercrime. It can strongly damage a company's reputation and increase its costs. Attackers continuously improve their strategies: in the last few years, they have doubled the volume, size, and frequency of the communication requests they unleash. These attacks target different hosts, causing resource exhaustion. Previous studies focused on detecting or mitigating ongoing DDoS attacks. Yet, addressing DDoS attacks once they are already in place may be too late. In this article, we consider network resilience through early prediction of attack trends. We show empirically the advantage of using non-parametric leading indicators for early prediction of volumetric DDoS attacks. We report promising results over a real dataset from CAIDA. Our results raise new questions and opportunities for further research on early prediction of DDoS attack trends.
[ { "version": "v1", "created": "Fri, 30 Sep 2016 01:17:07 GMT" }, { "version": "v2", "created": "Tue, 23 May 2017 22:48:04 GMT" } ]
2017-05-25T00:00:00
[ [ "Nogueira", "Michele", "" ], [ "Santos", "Augusto Almeida", "" ], [ "Moura", "José M. F.", "" ] ]
TITLE: Early Signals from Volumetric DDoS Attacks: An Empirical Study ABSTRACT: Distributed Denial of Service (DDoS) is a common type of cybercrime. It can strongly damage a company's reputation and increase its costs. Attackers continuously improve their strategies: in the last few years, they have doubled the volume, size, and frequency of the communication requests they unleash. These attacks target different hosts, causing resource exhaustion. Previous studies focused on detecting or mitigating ongoing DDoS attacks. Yet, addressing DDoS attacks once they are already in place may be too late. In this article, we consider network resilience through early prediction of attack trends. We show empirically the advantage of using non-parametric leading indicators for early prediction of volumetric DDoS attacks. We report promising results over a real dataset from CAIDA. Our results raise new questions and opportunities for further research on early prediction of DDoS attack trends.
no_new_dataset
0.948585
1611.02737
Jaroslaw Szlichta
Sridevi Baskaran, Alexander Keller, Fei Chiang, Lukasz Golab, Jaroslaw Szlichta
Efficient Discovery of Ontology Functional Dependencies
12 pages
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Poor data quality has become a pervasive issue due to the increasing complexity and size of modern datasets. Constraint-based data cleaning techniques rely on integrity constraints as a benchmark to identify and correct errors. Data values that do not satisfy the given set of constraints are flagged as dirty, and data updates are made to re-align the data and the constraints. However, many errors often require user input to resolve due to domain expertise defining specific terminology and relationships. For example, in pharmaceuticals, 'Advil' \emph{is-a} brand name for 'ibuprofen' that can be captured in a pharmaceutical ontology. While functional dependencies (FDs) have traditionally been used in existing data cleaning solutions to model syntactic equivalence, they are not able to model broader relationships (e.g., is-a) defined by an ontology. In this paper, we take a first step towards extending the set of data quality constraints used in data cleaning by defining and discovering \emph{Ontology Functional Dependencies} (OFDs). We lay out theoretical and practical foundations for OFDs, including a set of sound and complete axioms, and a linear inference procedure. We then develop effective algorithms for discovering OFDs, and a set of optimizations that efficiently prune the search space. Our experimental evaluation using real data shows the scalability and accuracy of our algorithms.
[ { "version": "v1", "created": "Tue, 8 Nov 2016 22:03:35 GMT" }, { "version": "v2", "created": "Wed, 16 Nov 2016 05:13:36 GMT" }, { "version": "v3", "created": "Wed, 24 May 2017 01:44:45 GMT" } ]
2017-05-25T00:00:00
[ [ "Baskaran", "Sridevi", "" ], [ "Keller", "Alexander", "" ], [ "Chiang", "Fei", "" ], [ "Lukasz", "Golab", "" ], [ "Szlichta", "Jaroslaw", "" ] ]
TITLE: Efficient Discovery of Ontology Functional Dependencies ABSTRACT: Poor data quality has become a pervasive issue due to the increasing complexity and size of modern datasets. Constraint-based data cleaning techniques rely on integrity constraints as a benchmark to identify and correct errors. Data values that do not satisfy the given set of constraints are flagged as dirty, and data updates are made to re-align the data and the constraints. However, many errors often require user input to resolve due to domain expertise defining specific terminology and relationships. For example, in pharmaceuticals, 'Advil' \emph{is-a} brand name for 'ibuprofen' that can be captured in a pharmaceutical ontology. While functional dependencies (FDs) have traditionally been used in existing data cleaning solutions to model syntactic equivalence, they are not able to model broader relationships (e.g., is-a) defined by an ontology. In this paper, we take a first step towards extending the set of data quality constraints used in data cleaning by defining and discovering \emph{Ontology Functional Dependencies} (OFDs). We lay out theoretical and practical foundations for OFDs, including a set of sound and complete axioms, and a linear inference procedure. We then develop effective algorithms for discovering OFDs, and a set of optimizations that efficiently prune the search space. Our experimental evaluation using real data shows the scalability and accuracy of our algorithms.
no_new_dataset
0.949669
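As a rough illustration of what an Ontology Functional Dependency check involves, the sketch below flags tuples that agree on an attribute X but carry Y-values from different synonym classes. The toy ontology dictionary and the `violates_ofd` helper are hypothetical; the paper's axioms and discovery algorithms are far more general.

```python
from collections import defaultdict

# Hypothetical synonym classes standing in for a real pharmaceutical ontology.
ontology = {
    "ibuprofen": "ibuprofen", "advil": "ibuprofen", "motrin": "ibuprofen",
    "acetaminophen": "acetaminophen", "tylenol": "acetaminophen",
}

def violates_ofd(rows, x, y):
    """Return the X-values whose Y-values span more than one synonym class."""
    seen = defaultdict(set)
    for row in rows:
        canon = ontology.get(row[y].lower(), row[y].lower())
        seen[row[x]].add(canon)
    return {k: v for k, v in seen.items() if len(v) > 1}

rows = [
    {"symptom": "headache", "drug": "Advil"},
    {"symptom": "headache", "drug": "Ibuprofen"},  # same synonym class: OK
    {"symptom": "fever", "drug": "Tylenol"},
    {"symptom": "fever", "drug": "Motrin"},        # conflict under the ontology
]
print(violates_ofd(rows, "symptom", "drug"))       # {'fever': {...}}
```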
1611.04308
Panos Parchas Mr
Panos Parchas, Nikolaos Papailiou, Dimitris Papadias, Francesco Bonchi
Uncertain Graph Sparsification
null
null
null
null
cs.DS cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Uncertain graphs are prevalent in several applications including communications systems, biological databases and social networks. The ever-increasing size of the underlying data renders both graph storage and query processing extremely expensive. Sparsification has often been used to reduce the size of deterministic graphs by maintaining only the important edges. However, adaptation of deterministic sparsification methods fails in the uncertain setting. To overcome this problem, we introduce the first sparsification techniques aimed explicitly at uncertain graphs. The proposed methods reduce the number of edges and redistribute their probabilities in order to decrease the graph size, while preserving its underlying structure. The resulting graph can be used to efficiently and accurately approximate any query and mining tasks on the original graph. An extensive experimental evaluation with real and synthetic datasets illustrates the effectiveness of our techniques on several common graph tasks, including clustering coefficient, page rank, reliability and shortest path distance.
[ { "version": "v1", "created": "Mon, 14 Nov 2016 09:58:11 GMT" }, { "version": "v2", "created": "Sun, 29 Jan 2017 11:39:21 GMT" }, { "version": "v3", "created": "Tue, 9 May 2017 11:29:06 GMT" }, { "version": "v4", "created": "Wed, 24 May 2017 05:50:38 GMT" } ]
2017-05-25T00:00:00
[ [ "Parchas", "Panos", "" ], [ "Papailiou", "Nikolaos", "" ], [ "Papadias", "Dimitris", "" ], [ "Bonchi", "Francesco", "" ] ]
TITLE: Uncertain Graph Sparsification ABSTRACT: Uncertain graphs are prevalent in several applications including communications systems, biological databases and social networks. The ever-increasing size of the underlying data renders both graph storage and query processing extremely expensive. Sparsification has often been used to reduce the size of deterministic graphs by maintaining only the important edges. However, adaptation of deterministic sparsification methods fails in the uncertain setting. To overcome this problem, we introduce the first sparsification techniques aimed explicitly at uncertain graphs. The proposed methods reduce the number of edges and redistribute their probabilities in order to decrease the graph size, while preserving its underlying structure. The resulting graph can be used to efficiently and accurately approximate any query and mining tasks on the original graph. An extensive experimental evaluation with real and synthetic datasets illustrates the effectiveness of our techniques on several common graph tasks, including clustering coefficient, page rank, reliability and shortest path distance.
no_new_dataset
0.947478
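To give a feel for the problem, here is a deliberately naive uncertain-graph sparsifier: it keeps the highest-probability edges and rescales the survivors so the expected edge count is preserved. This is an assumed strawman for illustration, not the structure-preserving methods the paper proposes.

```python
def sparsify(edges, keep_ratio=0.5):
    """edges: list of (u, v, p) with existence probability p in (0, 1].

    Keep the top `keep_ratio` fraction of edges by probability, then
    rescale probabilities so the expected number of edges is unchanged.
    """
    edges = sorted(edges, key=lambda e: e[2], reverse=True)
    kept = edges[: max(1, int(len(edges) * keep_ratio))]
    target = sum(p for _, _, p in edges)   # expected edge count of the input
    current = sum(p for _, _, p in kept)
    scale = target / current
    return [(u, v, min(1.0, p * scale)) for u, v, p in kept]

g = [("a", "b", 0.9), ("b", "c", 0.8), ("a", "c", 0.2), ("c", "d", 0.1)]
print(sparsify(g, keep_ratio=0.5))
```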
1612.01988
Abhishek Kumar
Anant Raj, Abhishek Kumar, Youssef Mroueh, P. Thomas Fletcher, Bernhard Sch\"olkopf
Local Group Invariant Representations via Orbit Embeddings
AISTATS 2017 accepted version including appendix, 18 pages, 1 figure
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Invariance to nuisance transformations is one of the desirable properties of effective representations. We consider transformations that form a \emph{group} and propose an approach based on kernel methods to derive local group invariant representations. Locality is achieved by defining a suitable probability distribution over the group which in turn induces distributions in the input feature space. We learn a decision function over these distributions by appealing to the powerful framework of kernel methods and generate local invariant random feature maps via kernel approximations. We show uniform convergence bounds for kernel approximation and provide excess risk bounds for learning with these features. We evaluate our method on three real datasets, including Rotated MNIST and CIFAR-10, and observe that it outperforms competing kernel based approaches. The proposed method also outperforms deep CNN on Rotated-MNIST and performs comparably to the recently proposed group-equivariant CNN.
[ { "version": "v1", "created": "Tue, 6 Dec 2016 20:46:39 GMT" }, { "version": "v2", "created": "Wed, 24 May 2017 16:50:08 GMT" } ]
2017-05-25T00:00:00
[ [ "Raj", "Anant", "" ], [ "Kumar", "Abhishek", "" ], [ "Mroueh", "Youssef", "" ], [ "Fletcher", "P. Thomas", "" ], [ "Schölkopf", "Bernhard", "" ] ]
TITLE: Local Group Invariant Representations via Orbit Embeddings ABSTRACT: Invariance to nuisance transformations is one of the desirable properties of effective representations. We consider transformations that form a \emph{group} and propose an approach based on kernel methods to derive local group invariant representations. Locality is achieved by defining a suitable probability distribution over the group which in turn induces distributions in the input feature space. We learn a decision function over these distributions by appealing to the powerful framework of kernel methods and generate local invariant random feature maps via kernel approximations. We show uniform convergence bounds for kernel approximation and provide excess risk bounds for learning with these features. We evaluate our method on three real datasets, including Rotated MNIST and CIFAR-10, and observe that it outperforms competing kernel based approaches. The proposed method also outperforms deep CNN on Rotated-MNIST and performs comparably to the recently proposed group-equivariant CNN.
no_new_dataset
0.941761
1703.03386
Cristian Danescu-Niculescu-Mizil
William L. Hamilton, Justine Zhang, Cristian Danescu-Niculescu-Mizil, Dan Jurafsky, Jure Leskovec
Loyalty in Online Communities
Extended version of a paper appearing in the Proceedings of ICWSM 2017 (with the same title); please cite the official ICWSM version
null
null
null
cs.SI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Loyalty is an essential component of multi-community engagement. When users have the choice to engage with a variety of different communities, they often become loyal to just one, focusing on that community at the expense of others. However, it is unclear how loyalty is manifested in user behavior, or whether loyalty is encouraged by certain community characteristics. In this paper we operationalize loyalty as a user-community relation: users loyal to a community consistently prefer it over all others; loyal communities retain their loyal users over time. By exploring this relation using a large dataset of discussion communities from Reddit, we reveal that loyalty is manifested in remarkably consistent behaviors across a wide spectrum of communities. Loyal users employ language that signals collective identity and engage with more esoteric, less popular content, indicating they may play a curational role in surfacing new material. Loyal communities have denser user-user interaction networks and lower rates of triadic closure, suggesting that community-level loyalty is associated with more cohesive interactions and less fragmentation into subgroups. We exploit these general patterns to predict future rates of loyalty. Our results show that a user's propensity to become loyal is apparent from their first interactions with a community, suggesting that some users are intrinsically loyal from the very beginning.
[ { "version": "v1", "created": "Thu, 9 Mar 2017 18:37:50 GMT" }, { "version": "v2", "created": "Tue, 4 Apr 2017 01:09:26 GMT" }, { "version": "v3", "created": "Wed, 24 May 2017 14:45:13 GMT" } ]
2017-05-25T00:00:00
[ [ "Hamilton", "William L.", "" ], [ "Zhang", "Justine", "" ], [ "Danescu-Niculescu-Mizil", "Cristian", "" ], [ "Jurafsky", "Dan", "" ], [ "Leskovec", "Jure", "" ] ]
TITLE: Loyalty in Online Communities ABSTRACT: Loyalty is an essential component of multi-community engagement. When users have the choice to engage with a variety of different communities, they often become loyal to just one, focusing on that community at the expense of others. However, it is unclear how loyalty is manifested in user behavior, or whether loyalty is encouraged by certain community characteristics. In this paper we operationalize loyalty as a user-community relation: users loyal to a community consistently prefer it over all others; loyal communities retain their loyal users over time. By exploring this relation using a large dataset of discussion communities from Reddit, we reveal that loyalty is manifested in remarkably consistent behaviors across a wide spectrum of communities. Loyal users employ language that signals collective identity and engage with more esoteric, less popular content, indicating they may play a curational role in surfacing new material. Loyal communities have denser user-user interaction networks and lower rates of triadic closure, suggesting that community-level loyalty is associated with more cohesive interactions and less fragmentation into subgroups. We exploit these general patterns to predict future rates of loyalty. Our results show that a user's propensity to become loyal is apparent from their first interactions with a community, suggesting that some users are intrinsically loyal from the very beginning.
no_new_dataset
0.943867
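The loyalty definition above (a user consistently preferring one community over all others) suggests a simple operationalization. The sketch below is one assumed approximation on a per-user post log, not the paper's exact criterion.

```python
from collections import Counter

def loyal_community(posts, threshold=0.5):
    """posts: list of community names, one entry per post by a single user.

    Return the community the user is 'loyal' to, i.e. the one receiving
    at least `threshold` of their posts, or None if no community dominates.
    """
    counts = Counter(posts)
    community, top = counts.most_common(1)[0]
    return community if top / len(posts) >= threshold else None

print(loyal_community(["r/python", "r/python", "r/ml", "r/python"]))  # r/python
print(loyal_community(["r/python", "r/ml", "r/cats", "r/dogs"]))      # None
```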
1703.03856
Laurel Orr
Laurel Orr, Magda Balazinska, and Dan Suciu
Probabilistic Database Summarization for Interactive Data Exploration
To appear VLDB 2017
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a probabilistic approach to generate a small, query-able summary of a dataset for interactive data exploration. Departing from traditional summarization techniques, we use the Principle of Maximum Entropy to generate a probabilistic representation of the data that can be used to give approximate query answers. We develop the theoretical framework and formulation of our probabilistic representation and show how to use it to answer queries. We then present solving techniques and give three critical optimizations to improve preprocessing time and query accuracy. Lastly, we experimentally evaluate our work using a 5 GB dataset of flights within the United States and a 210 GB dataset from an astronomy particle simulation. While our current work only supports linear queries, we show that our technique can successfully answer queries faster than sampling while introducing, on average, no more error than sampling and can better distinguish between rare and nonexistent values.
[ { "version": "v1", "created": "Fri, 10 Mar 2017 22:17:22 GMT" }, { "version": "v2", "created": "Tue, 23 May 2017 20:44:53 GMT" } ]
2017-05-25T00:00:00
[ [ "Orr", "Laurel", "" ], [ "Balazinska", "Magda", "" ], [ "Suciu", "Dan", "" ] ]
TITLE: Probabilistic Database Summarization for Interactive Data Exploration ABSTRACT: We present a probabilistic approach to generate a small, query-able summary of a dataset for interactive data exploration. Departing from traditional summarization techniques, we use the Principle of Maximum Entropy to generate a probabilistic representation of the data that can be used to give approximate query answers. We develop the theoretical framework and formulation of our probabilistic representation and show how to use it to answer queries. We then present solving techniques and give three critical optimizations to improve preprocessing time and query accuracy. Lastly, we experimentally evaluate our work using a 5 GB dataset of flights within the United States and a 210 GB dataset from an astronomy particle simulation. While our current work only supports linear queries, we show that our technique can successfully answer queries faster than sampling while introducing, on average, no more error than sampling and can better distinguish between rare and nonexistent values.
no_new_dataset
0.92421
1705.05098
Lahari Poddar
Lahari Poddar, Wynne Hsu, Mong Li Lee
Quantifying Aspect Bias in Ordinal Ratings using a Bayesian Approach
Accepted for publication in IJCAI 2017
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
User opinions expressed in the form of ratings can influence an individual's view of an item. However, the true quality of an item is often obfuscated by user biases, and it is not obvious from the observed ratings the importance different users place on different aspects of an item. We propose a probabilistic modeling of the observed aspect ratings to infer (i) each user's aspect bias and (ii) latent intrinsic quality of an item. We model multi-aspect ratings as ordered discrete data and encode the dependency between different aspects by using a latent Gaussian structure. We handle the Gaussian-Categorical non-conjugacy using a stick-breaking formulation coupled with P\'{o}lya-Gamma auxiliary variable augmentation for a simple, fully Bayesian inference. On two real world datasets, we demonstrate the predictive ability of our model and its effectiveness in learning explainable user biases to provide insights towards a more reliable product quality estimation.
[ { "version": "v1", "created": "Mon, 15 May 2017 07:35:59 GMT" }, { "version": "v2", "created": "Wed, 24 May 2017 08:47:24 GMT" } ]
2017-05-25T00:00:00
[ [ "Poddar", "Lahari", "" ], [ "Hsu", "Wynne", "" ], [ "Lee", "Mong Li", "" ] ]
TITLE: Quantifying Aspect Bias in Ordinal Ratings using a Bayesian Approach ABSTRACT: User opinions expressed in the form of ratings can influence an individual's view of an item. However, the true quality of an item is often obfuscated by user biases, and it is not obvious from the observed ratings the importance different users place on different aspects of an item. We propose a probabilistic modeling of the observed aspect ratings to infer (i) each user's aspect bias and (ii) latent intrinsic quality of an item. We model multi-aspect ratings as ordered discrete data and encode the dependency between different aspects by using a latent Gaussian structure. We handle the Gaussian-Categorical non-conjugacy using a stick-breaking formulation coupled with P\'{o}lya-Gamma auxiliary variable augmentation for a simple, fully Bayesian inference. On two real world datasets, we demonstrate the predictive ability of our model and its effectiveness in learning explainable user biases to provide insights towards a more reliable product quality estimation.
no_new_dataset
0.947284
1705.07425
Thomas Niebler
Thomas Niebler, Martin Becker, Christian P\"olitz, Andreas Hotho
Learning Semantic Relatedness From Human Feedback Using Metric Learning
Under review at ISWC 2017
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Assessing the degree of semantic relatedness between words is an important task with a variety of semantic applications, such as ontology learning for the Semantic Web, semantic search or query expansion. To accomplish this in an automated fashion, many relatedness measures have been proposed. However, most of these metrics only encode information contained in the underlying corpus and thus do not directly model human intuition. To solve this, we propose to utilize a metric learning approach to improve existing semantic relatedness measures by learning from additional information, such as explicit human feedback. For this, we argue to use word embeddings instead of traditional high-dimensional vector representations in order to leverage their semantic density and to reduce computational cost. We rigorously test our approach on several domains including tagging data as well as publicly available embeddings based on Wikipedia texts and navigation. Human feedback about semantic relatedness for learning and evaluation is extracted from publicly available datasets such as MEN or WS-353. We find that our method can significantly improve semantic relatedness measures by learning from additional information, such as explicit human feedback. For tagging data, we are the first to generate and study embeddings. Our results are of special interest for ontology and recommendation engineers, but also for any other researchers and practitioners of Semantic Web techniques.
[ { "version": "v1", "created": "Sun, 21 May 2017 10:16:49 GMT" }, { "version": "v2", "created": "Wed, 24 May 2017 13:07:07 GMT" } ]
2017-05-25T00:00:00
[ [ "Niebler", "Thomas", "" ], [ "Becker", "Martin", "" ], [ "Pölitz", "Christian", "" ], [ "Hotho", "Andreas", "" ] ]
TITLE: Learning Semantic Relatedness From Human Feedback Using Metric Learning ABSTRACT: Assessing the degree of semantic relatedness between words is an important task with a variety of semantic applications, such as ontology learning for the Semantic Web, semantic search or query expansion. To accomplish this in an automated fashion, many relatedness measures have been proposed. However, most of these metrics only encode information contained in the underlying corpus and thus do not directly model human intuition. To solve this, we propose to utilize a metric learning approach to improve existing semantic relatedness measures by learning from additional information, such as explicit human feedback. For this, we argue to use word embeddings instead of traditional high-dimensional vector representations in order to leverage their semantic density and to reduce computational cost. We rigorously test our approach on several domains including tagging data as well as publicly available embeddings based on Wikipedia texts and navigation. Human feedback about semantic relatedness for learning and evaluation is extracted from publicly available datasets such as MEN or WS-353. We find that our method can significantly improve semantic relatedness measures by learning from additional information, such as explicit human feedback. For tagging data, we are the first to generate and study embeddings. Our results are of special interest for ontology and recommendation engineers, but also for any other researchers and practitioners of Semantic Web techniques.
no_new_dataset
0.951188
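The core idea, learning a metric over embeddings from human judgments, can be sketched compactly. The snippet below fits a diagonal reweighting of embedding dimensions by gradient descent; the random vectors and simulated scores are stand-ins for real embeddings and MEN/WS-353 judgments, and the diagonal form is a simplifying assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 500
X, Y = rng.normal(size=(n, d)), rng.normal(size=(n, d))
true_w = rng.uniform(0, 2, size=d)               # hidden "ground truth" metric
scores = np.einsum("nd,d,nd->n", X, true_w, Y)   # simulated human judgments

# Minimize squared error between the weighted dot product and the scores.
w, lr = np.ones(d), 0.05
for _ in range(500):
    pred = np.einsum("nd,d,nd->n", X, w, Y)
    grad = 2 * ((pred - scores)[:, None] * X * Y).mean(axis=0)
    w -= lr * grad

final = np.einsum("nd,d,nd->n", X, w, Y)
print("fit error:", np.mean((final - scores) ** 2))  # shrinks toward zero
```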
1705.08473
Malay Chakrabarti
Malay Chakrabarti, Lenwood Heath, Naren Ramakrishnan
New methods to generate massive synthetic networks
null
null
null
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the biggest needs in network science research is access to large realistic datasets. As data analytics methods permeate a range of diverse disciplines---e.g., computational epidemiology, sustainability, social media analytics, biology, and transportation---network datasets that can exhibit characteristics encountered in each of these disciplines become paramount. The key technical issue is to be able to generate synthetic topologies with pre-specified, arbitrary degree distributions. Existing methods are limited in their ability to faithfully reproduce macro-level characteristics of networks while at the same time respecting particular degree distributions. We present a suite of three algorithms that exploit the principle of residual degree attenuation to generate synthetic topologies that adhere to macro-level real-world characteristics. By evaluating these algorithms w.r.t. several real-world datasets we demonstrate their ability to faithfully reproduce network characteristics such as node degree, clustering coefficient, hop length, and k-core structure distributions.
[ { "version": "v1", "created": "Tue, 23 May 2017 18:37:51 GMT" } ]
2017-05-25T00:00:00
[ [ "Chakrabarti", "Malay", "" ], [ "Heath", "Lenwood", "" ], [ "Ramakrishnan", "Naren", "" ] ]
TITLE: New methods to generate massive synthetic networks ABSTRACT: One of the biggest needs in network science research is access to large realistic datasets. As data analytics methods permeate a range of diverse disciplines---e.g., computational epidemiology, sustainability, social media analytics, biology, and transportation---network datasets that can exhibit characteristics encountered in each of these disciplines become paramount. The key technical issue is to be able to generate synthetic topologies with pre-specified, arbitrary degree distributions. Existing methods are limited in their ability to faithfully reproduce macro-level characteristics of networks while at the same time respecting particular degree distributions. We present a suite of three algorithms that exploit the principle of residual degree attenuation to generate synthetic topologies that adhere to macro-level real-world characteristics. By evaluating these algorithms w.r.t. several real-world datasets we demonstrate their ability to faithfully reproduce network characteristics such as node degree, clustering coefficient, hop length, and k-core structure distributions.
no_new_dataset
0.947478
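For context, the standard baseline for generating a graph with a prescribed degree sequence is the configuration model. The sketch below uses networkx for that baseline; it is not the paper's residual-degree-attenuation algorithms, and the Zipf degree sample is an assumption for illustration.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
degrees = rng.zipf(2.5, size=1000)        # heavy-tailed degree sample
if degrees.sum() % 2:                     # degree sum must be even
    degrees[0] += 1

g = nx.configuration_model(degrees.tolist(), seed=0)
g = nx.Graph(g)                           # collapse multi-edges
g.remove_edges_from(nx.selfloop_edges(g)) # drop self-loops
print(g.number_of_nodes(), g.number_of_edges())
```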
1705.08550
Wentao Zhu
Wentao Zhu, Qi Lou, Yeeleng Scott Vang, and Xiaohui Xie
Deep Multi-instance Networks with Sparse Label Assignment for Whole Mammogram Classification
MICCAI 2017 Camera Ready
null
null
null
cs.CV cs.LG cs.NE
http://creativecommons.org/licenses/by/4.0/
Mammogram classification is directly related to computer-aided diagnosis of breast cancer. Traditional methods rely on regions of interest (ROIs) which require great efforts to annotate. Inspired by the success of using deep convolutional features for natural image analysis and multi-instance learning (MIL) for labeling a set of instances/patches, we propose end-to-end trained deep multi-instance networks for mass classification based on whole mammogram without the aforementioned ROIs. We explore three different schemes to construct deep multi-instance networks for whole mammogram classification. Experimental results on the INbreast dataset demonstrate the robustness of proposed networks compared to previous work using segmentation and detection annotations.
[ { "version": "v1", "created": "Tue, 23 May 2017 22:16:20 GMT" } ]
2017-05-25T00:00:00
[ [ "Zhu", "Wentao", "" ], [ "Lou", "Qi", "" ], [ "Vang", "Yeeleng Scott", "" ], [ "Xie", "Xiaohui", "" ] ]
TITLE: Deep Multi-instance Networks with Sparse Label Assignment for Whole Mammogram Classification ABSTRACT: Mammogram classification is directly related to computer-aided diagnosis of breast cancer. Traditional methods rely on regions of interest (ROIs) which require great efforts to annotate. Inspired by the success of using deep convolutional features for natural image analysis and multi-instance learning (MIL) for labeling a set of instances/patches, we propose end-to-end trained deep multi-instance networks for mass classification based on whole mammogram without the aforementioned ROIs. We explore three different schemes to construct deep multi-instance networks for whole mammogram classification. Experimental results on the INbreast dataset demonstrate the robustness of proposed networks compared to previous work using segmentation and detection annotations.
no_new_dataset
0.952042
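The multi-instance assumption behind whole-mammogram classification can be shown in miniature: an image is positive if any patch is, so a max or sparsity-encouraging top-k pooling over per-patch probabilities yields an image-level score. The sketch below illustrates that pooling step only; the patch probabilities are simulated, not produced by the paper's trained network.

```python
import numpy as np

def mil_score(patch_probs, k=3):
    """Image-level malignancy score from per-patch probabilities.

    Average the k largest patch probabilities: under the MIL assumption,
    only a few patches carry the image label.
    """
    patch_probs = np.asarray(patch_probs)
    top_k = np.sort(patch_probs)[-k:]
    return float(top_k.mean())

patches = np.random.default_rng(0).uniform(0, 0.2, size=100)
patches[42] = 0.95                 # one suspicious region
print(mil_score(patches))          # pulled up by the hot patch
```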
1705.08557
Ankit Vani
Ankit Vani, Yacine Jernite, David Sontag
Grounded Recurrent Neural Networks
null
null
null
null
stat.ML cs.CL cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we present the Grounded Recurrent Neural Network (GRNN), a recurrent neural network architecture for multi-label prediction which explicitly ties labels to specific dimensions of the recurrent hidden state (we call this process "grounding"). The approach is particularly well-suited for extracting large numbers of concepts from text. We apply the new model to address an important problem in healthcare of understanding what medical concepts are discussed in clinical text. Using a publicly available dataset derived from Intensive Care Units, we learn to label a patient's diagnoses and procedures from their discharge summary. Our evaluation shows a clear advantage to using our proposed architecture over a variety of strong baselines.
[ { "version": "v1", "created": "Tue, 23 May 2017 23:17:49 GMT" } ]
2017-05-25T00:00:00
[ [ "Vani", "Ankit", "" ], [ "Jernite", "Yacine", "" ], [ "Sontag", "David", "" ] ]
TITLE: Grounded Recurrent Neural Networks ABSTRACT: In this work, we present the Grounded Recurrent Neural Network (GRNN), a recurrent neural network architecture for multi-label prediction which explicitly ties labels to specific dimensions of the recurrent hidden state (we call this process "grounding"). The approach is particularly well-suited for extracting large numbers of concepts from text. We apply the new model to address an important problem in healthcare of understanding what medical concepts are discussed in clinical text. Using a publicly available dataset derived from Intensive Care Units, we learn to label a patient's diagnoses and procedures from their discharge summary. Our evaluation shows a clear advantage to using our proposed architecture over a variety of strong baselines.
no_new_dataset
0.954223
1705.08583
Anoop Cherian
Anoop Cherian, Suvrit Sra, Richard Hartley
Sequence Summarization Using Order-constrained Kernelized Feature Subspaces
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Representations that can compactly and effectively capture temporal evolution of semantic content are important to machine learning algorithms that operate on multi-variate time-series data. We investigate such representations motivated by the task of human action recognition. Here each data instance is encoded by a multivariate feature (such as via a deep CNN) where action dynamics are characterized by their variations in time. As these features are often non-linear, we propose a novel pooling method, kernelized rank pooling, that represents a given sequence compactly as the pre-image of the parameters of a hyperplane in an RKHS, projections of data onto which capture their temporal order. We develop this idea further and show that such a pooling scheme can be cast as an order-constrained kernelized PCA objective; we then propose to use the parameters of a kernelized low-rank feature subspace as the representation of the sequences. We cast our formulation as an optimization problem on generalized Grassmann manifolds and then solve it efficiently using Riemannian optimization techniques. We present experiments on several action recognition datasets using diverse feature modalities and demonstrate state-of-the-art results.
[ { "version": "v1", "created": "Wed, 24 May 2017 02:11:04 GMT" } ]
2017-05-25T00:00:00
[ [ "Cherian", "Anoop", "" ], [ "Sra", "Suvrit", "" ], [ "Hartley", "Richard", "" ] ]
TITLE: Sequence Summarization Using Order-constrained Kernelized Feature Subspaces ABSTRACT: Representations that can compactly and effectively capture temporal evolution of semantic content are important to machine learning algorithms that operate on multi-variate time-series data. We investigate such representations motivated by the task of human action recognition. Here each data instance is encoded by a multivariate feature (such as via a deep CNN) where action dynamics are characterized by their variations in time. As these features are often non-linear, we propose a novel pooling method, kernelized rank pooling, that represents a given sequence compactly as the pre-image of the parameters of a hyperplane in an RKHS, projections of data onto which capture their temporal order. We develop this idea further and show that such a pooling scheme can be cast as an order-constrained kernelized PCA objective; we then propose to use the parameters of a kernelized low-rank feature subspace as the representation of the sequences. We cast our formulation as an optimization problem on generalized Grassmann manifolds and then solve it efficiently using Riemannian optimization techniques. We present experiments on several action recognition datasets using diverse feature modalities and demonstrate state-of-the-art results.
no_new_dataset
0.947624
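The non-kernelized ancestor of this idea, classical rank pooling, is easy to sketch: regress the frame index onto per-frame features and use the regression weights as the sequence summary, so that projecting frames onto the weights respects temporal order. The snippet below shows that simpler baseline as an assumption for illustration, not the paper's kernelized formulation.

```python
import numpy as np

def rank_pool(frames):
    """frames: (T, d) array of per-frame features -> (d,) summary vector.

    Least-squares fit of the frame index onto the features; the weight
    vector is the rank-pooled representation of the sequence.
    """
    T = frames.shape[0]
    t = np.arange(1, T + 1, dtype=float)
    w, *_ = np.linalg.lstsq(frames, t, rcond=None)
    return w

seq = np.cumsum(np.random.default_rng(0).normal(size=(30, 8)), axis=0)
w = rank_pool(seq)
print((seq @ w)[:5])   # projections are roughly increasing in time
```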
1705.08593
Davit Buniatyan
Davit Buniatyan, Thomas Macrina, Dodam Ih, Jonathan Zung, H. Sebastian Seung
Deep Learning Improves Template Matching by Normalized Cross Correlation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Template matching by normalized cross correlation (NCC) is widely used for finding image correspondences. We improve the robustness of this algorithm by preprocessing images with "siamese" convolutional networks trained to maximize the contrast between NCC values of true and false matches. The improvement is quantified using patches of brain images from serial section electron microscopy. Relative to a parameter-tuned bandpass filter, siamese convolutional networks significantly reduce false matches. Furthermore, all false matches can be eliminated by removing a tiny fraction of all matches based on NCC values. The improved accuracy of our method could be essential for connectomics, because emerging petascale datasets may require billions of template matches to assemble 2D images of serial sections into a 3D image stack. Our method is also expected to generalize to many other computer vision applications that use NCC template matching to find image correspondences.
[ { "version": "v1", "created": "Wed, 24 May 2017 03:24:25 GMT" } ]
2017-05-25T00:00:00
[ [ "Buniatyan", "Davit", "" ], [ "Macrina", "Thomas", "" ], [ "Ih", "Dodam", "" ], [ "Zung", "Jonathan", "" ], [ "Seung", "H. Sebastian", "" ] ]
TITLE: Deep Learning Improves Template Matching by Normalized Cross Correlation ABSTRACT: Template matching by normalized cross correlation (NCC) is widely used for finding image correspondences. We improve the robustness of this algorithm by preprocessing images with "siamese" convolutional networks trained to maximize the contrast between NCC values of true and false matches. The improvement is quantified using patches of brain images from serial section electron microscopy. Relative to a parameter-tuned bandpass filter, siamese convolutional networks significantly reduce false matches. Furthermore, all false matches can be eliminated by removing a tiny fraction of all matches based on NCC values. The improved accuracy of our method could be essential for connectomics, because emerging petascale datasets may require billions of template matches to assemble 2D images of serial sections into a 3D image stack. Our method is also expected to generalize to many other computer vision applications that use NCC template matching to find image correspondences.
no_new_dataset
0.949995
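For reference, plain normalized cross correlation between a template and an image can be written in a few lines; the paper's contribution is the learned preprocessing in front of this primitive. Below is a brute-force sketch (a production system would use an FFT-based implementation):

```python
import numpy as np

def ncc_map(image, template):
    """Brute-force normalized cross correlation of a template over an image."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-8)
    out = np.empty((image.shape[0] - th + 1, image.shape[1] - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + th, j:j + tw]
            p = (patch - patch.mean()) / (patch.std() + 1e-8)
            out[i, j] = (p * t).mean()
    return out

rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))
tmpl = img[20:28, 30:38].copy()               # true match at (20, 30)
score = ncc_map(img, tmpl)
print(np.unravel_index(score.argmax(), score.shape))  # -> (20, 30)
```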
1705.08631
Lluis Gomez
Lluis Gomez, Yash Patel, Mar\c{c}al Rusi\~nol, Dimosthenis Karatzas, C.V. Jawahar
Self-supervised learning of visual features through embedding images into text topic spaces
Accepted CVPR 2017 paper
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
End-to-end training from scratch of current deep architectures for new computer vision problems would require Imagenet-scale datasets, and this is not always possible. In this paper we present a method that is able to take advantage of freely available multi-modal content to train computer vision algorithms without human supervision. We put forward the idea of performing self-supervised learning of visual features by mining a large scale corpus of multi-modal (text and image) documents. We show that discriminative visual features can be learnt efficiently by training a CNN to predict the semantic context in which a particular image is more probable to appear as an illustration. For this we leverage the hidden semantic structures discovered in the text corpus with a well-known topic modeling technique. Our experiments demonstrate state of the art performance in image classification, object detection, and multi-modal retrieval compared to recent self-supervised or natural-supervised approaches.
[ { "version": "v1", "created": "Wed, 24 May 2017 06:59:30 GMT" } ]
2017-05-25T00:00:00
[ [ "Gomez", "Lluis", "" ], [ "Patel", "Yash", "" ], [ "Rusiñol", "Marçal", "" ], [ "Karatzas", "Dimosthenis", "" ], [ "Jawahar", "C. V.", "" ] ]
TITLE: Self-supervised learning of visual features through embedding images into text topic spaces ABSTRACT: End-to-end training from scratch of current deep architectures for new computer vision problems would require Imagenet-scale datasets, and this is not always possible. In this paper we present a method that is able to take advantage of freely available multi-modal content to train computer vision algorithms without human supervision. We put forward the idea of performing self-supervised learning of visual features by mining a large scale corpus of multi-modal (text and image) documents. We show that discriminative visual features can be learnt efficiently by training a CNN to predict the semantic context in which a particular image is more probable to appear as an illustration. For this we leverage the hidden semantic structures discovered in the text corpus with a well-known topic modeling technique. Our experiments demonstrate state of the art performance in image classification, object detection, and multi-modal retrieval compared to recent self-supervised or natural-supervised approaches.
no_new_dataset
0.942665
1705.08759
Stefan Lee
Qing Sun, Stefan Lee, Dhruv Batra
Bidirectional Beam Search: Forward-Backward Inference in Neural Sequence Models for Fill-in-the-Blank Image Captioning
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop the first approximate inference algorithm for 1-Best (and M-Best) decoding in bidirectional neural sequence models by extending Beam Search (BS) to reason about both forward and backward time dependencies. Beam Search (BS) is a widely used approximate inference algorithm for decoding sequences from unidirectional neural sequence models. Interestingly, approximate inference in bidirectional models remains an open problem, despite their significant advantage in modeling information from both the past and future. To enable the use of bidirectional models, we present Bidirectional Beam Search (BiBS), an efficient algorithm for approximate bidirectional inference. To evaluate our method and as an interesting problem in its own right, we introduce a novel Fill-in-the-Blank Image Captioning task which requires reasoning about both past and future sentence structure to reconstruct sensible image descriptions. We use this task as well as the Visual Madlibs dataset to demonstrate the effectiveness of our approach, consistently outperforming all baseline methods.
[ { "version": "v1", "created": "Wed, 24 May 2017 13:42:47 GMT" } ]
2017-05-25T00:00:00
[ [ "Sun", "Qing", "" ], [ "Lee", "Stefan", "" ], [ "Batra", "Dhruv", "" ] ]
TITLE: Bidirectional Beam Search: Forward-Backward Inference in Neural Sequence Models for Fill-in-the-Blank Image Captioning ABSTRACT: We develop the first approximate inference algorithm for 1-Best (and M-Best) decoding in bidirectional neural sequence models by extending Beam Search (BS) to reason about both forward and backward time dependencies. Beam Search (BS) is a widely used approximate inference algorithm for decoding sequences from unidirectional neural sequence models. Interestingly, approximate inference in bidirectional models remains an open problem, despite their significant advantage in modeling information from both the past and future. To enable the use of bidirectional models, we present Bidirectional Beam Search (BiBS), an efficient algorithm for approximate bidirectional inference. To evaluate our method and as an interesting problem in its own right, we introduce a novel Fill-in-the-Blank Image Captioning task which requires reasoning about both past and future sentence structure to reconstruct sensible image descriptions. We use this task as well as the Visual Madlibs dataset to demonstrate the effectiveness of our approach, consistently outperforming all baseline methods.
no_new_dataset
0.944074
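The unidirectional Beam Search that the paper extends is worth sketching. In the toy model below, a fixed random transition matrix scores next tokens; any log p(next | prefix) could be substituted, and the vocabulary size and EOS token are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
V, EOS = 10, 0
logits = np.log(rng.dirichlet(np.ones(V), size=V))  # row t: log p(next | t)

def beam_search(start, beam_width=3, max_len=6):
    """Keep the beam_width highest-scoring prefixes at every step."""
    beams = [(0.0, [start])]
    for _ in range(max_len):
        candidates = []
        for score, seq in beams:
            if seq[-1] == EOS:               # finished hypotheses carry over
                candidates.append((score, seq))
                continue
            for tok in range(V):
                candidates.append((score + logits[seq[-1], tok], seq + [tok]))
        beams = sorted(candidates, key=lambda c: c[0], reverse=True)[:beam_width]
    return beams

for score, seq in beam_search(start=1):
    print(round(score, 2), seq)
```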
1705.08828
Laurent Besacier
Jeremy Ferrero, Laurent Besacier, Didier Schwab and Frederic Agnes
Deep Investigation of Cross-Language Plagiarism Detection Methods
Accepted to BUCC (10th Workshop on Building and Using Comparable Corpora) colocated with ACL 2017
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
This paper is a deep investigation of cross-language plagiarism detection methods on a new recently introduced open dataset, which contains parallel and comparable collections of documents with multiple characteristics (different genres, languages and sizes of texts). We investigate cross-language plagiarism detection methods for 6 language pairs on 2 granularities of text units in order to draw robust conclusions on the best methods while deeply analyzing correlations across document styles and languages.
[ { "version": "v1", "created": "Wed, 24 May 2017 15:50:47 GMT" } ]
2017-05-25T00:00:00
[ [ "Ferrero", "Jeremy", "" ], [ "Besacier", "Laurent", "" ], [ "Schwab", "Didier", "" ], [ "Agnes", "Frederic", "" ] ]
TITLE: Deep Investigation of Cross-Language Plagiarism Detection Methods ABSTRACT: This paper is a deep investigation of cross-language plagiarism detection methods on a new recently introduced open dataset, which contains parallel and comparable collections of documents with multiple characteristics (different genres, languages and sizes of texts). We investigate cross-language plagiarism detection methods for 6 language pairs on 2 granularities of text units in order to draw robust conclusions on the best methods while deeply analyzing correlations across document styles and languages.
new_dataset
0.948822
1705.08844
Rodrigo Toro Icarte
Rodrigo Toro Icarte, Jorge A. Baier, Cristian Ruz, Alvaro Soto
How a General-Purpose Commonsense Ontology can Improve Performance of Learning-Based Image Retrieval
Accepted in IJCAI-17
null
null
null
cs.AI cs.CV cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The knowledge representation community has built general-purpose ontologies which contain large amounts of commonsense knowledge over relevant aspects of the world, including useful visual information, e.g.: "a ball is used by a football player", "a tennis player is located at a tennis court". Current state-of-the-art approaches for visual recognition do not exploit these rule-based knowledge sources. Instead, they learn recognition models directly from training examples. In this paper, we study how general-purpose ontologies---specifically, MIT's ConceptNet ontology---can improve the performance of state-of-the-art vision systems. As a testbed, we tackle the problem of sentence-based image retrieval. Our retrieval approach incorporates knowledge from ConceptNet on top of a large pool of object detectors derived from a deep learning technique. In our experiments, we show that ConceptNet can improve performance on a common benchmark dataset. Key to our performance is the use of the ESPGAME dataset to select visually relevant relations from ConceptNet. Consequently, a main conclusion of this work is that general-purpose commonsense ontologies improve performance on visual reasoning tasks when properly filtered to select meaningful visual relations.
[ { "version": "v1", "created": "Wed, 24 May 2017 16:22:53 GMT" } ]
2017-05-25T00:00:00
[ [ "Icarte", "Rodrigo Toro", "" ], [ "Baier", "Jorge A.", "" ], [ "Ruz", "Cristian", "" ], [ "Soto", "Alvaro", "" ] ]
TITLE: How a General-Purpose Commonsense Ontology can Improve Performance of Learning-Based Image Retrieval ABSTRACT: The knowledge representation community has built general-purpose ontologies which contain large amounts of commonsense knowledge over relevant aspects of the world, including useful visual information, e.g.: "a ball is used by a football player", "a tennis player is located at a tennis court". Current state-of-the-art approaches for visual recognition do not exploit these rule-based knowledge sources. Instead, they learn recognition models directly from training examples. In this paper, we study how general-purpose ontologies---specifically, MIT's ConceptNet ontology---can improve the performance of state-of-the-art vision systems. As a testbed, we tackle the problem of sentence-based image retrieval. Our retrieval approach incorporates knowledge from ConceptNet on top of a large pool of object detectors derived from a deep learning technique. In our experiments, we show that ConceptNet can improve performance on a common benchmark dataset. Key to our performance is the use of the ESPGAME dataset to select visually relevant relations from ConceptNet. Consequently, a main conclusion of this work is that general-purpose commonsense ontologies improve performance on visual reasoning tasks when properly filtered to select meaningful visual relations.
no_new_dataset
0.945399
1705.08858
Galina Lavrentyeva
Galina Lavrentyeva, Sergey Novoselov, Egor Malykh, Alexander Kozlov, Oleg Kudashev and Vadim Shchemelinin
Audio-replay attack detection countermeasures
11 pages, 3 figures, accepted for Specom 2017
null
null
null
cs.SD cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents the Speech Technology Center (STC) replay attack detection systems proposed for the Automatic Speaker Verification Spoofing and Countermeasures Challenge 2017. In this study we focus on comparing different spoofing detection approaches: GMM-based methods, high-level feature extraction with a simple classifier, and deep learning frameworks. Experiments performed on the development and evaluation parts of the challenge dataset demonstrate the stable efficiency of deep learning approaches under changing acoustic conditions. At the same time, an SVM classifier with high-level features made a substantial contribution to the efficiency of the resulting STC systems, according to the fusion results.
[ { "version": "v1", "created": "Wed, 24 May 2017 16:48:03 GMT" } ]
2017-05-25T00:00:00
[ [ "Lavrentyeva", "Galina", "" ], [ "Novoselov", "Sergey", "" ], [ "Malykh", "Egor", "" ], [ "Kozlov", "Alexander", "" ], [ "Kudashev", "Oleg", "" ], [ "Shchemelinin", "Vadim", "" ] ]
TITLE: Audio-replay attack detection countermeasures ABSTRACT: This paper presents the Speech Technology Center (STC) replay attack detection systems proposed for the Automatic Speaker Verification Spoofing and Countermeasures Challenge 2017. In this study we focus on comparing different spoofing detection approaches: GMM-based methods, high-level feature extraction with a simple classifier, and deep learning frameworks. Experiments performed on the development and evaluation parts of the challenge dataset demonstrate the stable efficiency of deep learning approaches under changing acoustic conditions. At the same time, an SVM classifier with high-level features made a substantial contribution to the efficiency of the resulting STC systems, according to the fusion results.
no_new_dataset
0.94428
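The GMM baseline mentioned in the abstract reduces to a log-likelihood-ratio score between a genuine-speech model and a spoofed-speech model. A sketch with scikit-learn follows; the random vectors stand in for real acoustic features such as CQCC or MFCC frames, and the component counts are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
genuine = rng.normal(0.0, 1.0, size=(500, 20))   # stand-in genuine features
spoofed = rng.normal(0.7, 1.2, size=(500, 20))   # stand-in replayed features

gmm_gen = GaussianMixture(n_components=4, random_state=0).fit(genuine)
gmm_spf = GaussianMixture(n_components=4, random_state=0).fit(spoofed)

trial = rng.normal(0.0, 1.0, size=(50, 20))      # frames of one test utterance
llr = gmm_gen.score_samples(trial).mean() - gmm_spf.score_samples(trial).mean()
print("genuine" if llr > 0 else "spoofed", round(llr, 3))
```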
1705.08865
Galina Lavrentyeva
Galina Lavrentyeva, Sergey Novoselov and Konstantin Simonchik
Anti-spoofing Methods for Automatic Speaker Verification System
12 pages, 0 figures, published in Springer Communications in Computer and Information Science (CCIS) vol. 661
null
null
null
cs.SD cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Growing interest in automatic speaker verification (ASV) systems has led to significant quality improvement of spoofing attacks on them. Many research works confirm that despite the low equal error rate (EER), ASV systems are still vulnerable to spoofing attacks. In this work we overview different acoustic feature spaces and classifiers to determine reliable and robust countermeasures against spoofing attacks. We compared several spoofing detection systems, presented so far, on the development and evaluation datasets of the Automatic Speaker Verification Spoofing and Countermeasures (ASVspoof) Challenge 2015. Experimental results presented in this paper demonstrate that combining magnitude and phase information provides a substantial input into the efficiency of the spoofing detection systems. Wavelet-based features also show impressive results in terms of equal error rate. In our overview we compare spoofing performance for systems based on different classifiers. Comparison results demonstrate that the linear SVM classifier outperforms the conventional GMM approach. However, many researchers, inspired by the great success of deep neural network (DNN) approaches in automatic speech recognition, have applied DNNs to the spoofing detection task and obtained quite low EERs for known and unknown types of spoofing attacks.
[ { "version": "v1", "created": "Wed, 24 May 2017 16:58:03 GMT" } ]
2017-05-25T00:00:00
[ [ "Lavrentyeva", "Galina", "" ], [ "Novoselov", "Sergey", "" ], [ "Simonchik", "Konstantin", "" ] ]
TITLE: Anti-spoofing Methods for Automatic Speaker Verification System ABSTRACT: Growing interest in automatic speaker verification (ASV) systems has led to significant quality improvement of spoofing attacks on them. Many research works confirm that despite the low equal error rate (EER), ASV systems are still vulnerable to spoofing attacks. In this work we overview different acoustic feature spaces and classifiers to determine reliable and robust countermeasures against spoofing attacks. We compared several spoofing detection systems, presented so far, on the development and evaluation datasets of the Automatic Speaker Verification Spoofing and Countermeasures (ASVspoof) Challenge 2015. Experimental results presented in this paper demonstrate that combining magnitude and phase information provides a substantial input into the efficiency of the spoofing detection systems. Wavelet-based features also show impressive results in terms of equal error rate. In our overview we compare spoofing performance for systems based on different classifiers. Comparison results demonstrate that the linear SVM classifier outperforms the conventional GMM approach. However, many researchers, inspired by the great success of deep neural network (DNN) approaches in automatic speech recognition, have applied DNNs to the spoofing detection task and obtained quite low EERs for known and unknown types of spoofing attacks.
no_new_dataset
0.940898
1601.02522
Nathanael Perraudin N. P.
Nathana\"el Perraudin, Pierre Vandergheynst
Stationary signal processing on graphs
null
null
10.1109/TSP.2017.2690388
null
cs.DS stat.AP stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graphs are a central tool in machine learning and information processing as they allow to conveniently capture the structure of complex datasets. In this context, it is of high importance to develop flexible models of signals defined over graphs or networks. In this paper, we generalize the traditional concept of wide sense stationarity to signals defined over the vertices of arbitrary weighted undirected graphs. We show that stationarity is expressed through the graph localization operator reminiscent of translation. We prove that stationary graph signals are characterized by a well-defined Power Spectral Density that can be efficiently estimated even for large graphs. We leverage this new concept to derive Wiener-type estimation procedures of noisy and partially observed signals and illustrate the performance of this new model for denoising and regression.
[ { "version": "v1", "created": "Mon, 11 Jan 2016 16:58:45 GMT" }, { "version": "v2", "created": "Tue, 12 Jan 2016 16:42:30 GMT" }, { "version": "v3", "created": "Thu, 21 Apr 2016 16:34:34 GMT" }, { "version": "v4", "created": "Fri, 8 Jul 2016 21:25:26 GMT" }, { "version": "v5", "created": "Fri, 21 Apr 2017 18:30:15 GMT" } ]
2017-05-24T00:00:00
[ [ "Perraudin", "Nathanaël", "" ], [ "Vandergheynst", "Pierre", "" ] ]
TITLE: Stationary signal processing on graphs ABSTRACT: Graphs are a central tool in machine learning and information processing as they allow to conveniently capture the structure of complex datasets. In this context, it is of high importance to develop flexible models of signals defined over graphs or networks. In this paper, we generalize the traditional concept of wide sense stationarity to signals defined over the vertices of arbitrary weighted undirected graphs. We show that stationarity is expressed through the graph localization operator reminiscent of translation. We prove that stationary graph signals are characterized by a well-defined Power Spectral Density that can be efficiently estimated even for large graphs. We leverage this new concept to derive Wiener-type estimation procedures of noisy and partially observed signals and illustrate the performance of this new model for denoising and regression.
no_new_dataset
0.950686
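The core pipeline of the record above can be sketched in a few lines: diagonalize the graph Laplacian, estimate an empirical power spectral density from signal realizations, and apply a Wiener-type filter. This is a small-graph sketch under assumed smoothing choices; the paper's fast estimators avoid full diagonalization.

```python
# Minimal sketch: PSD estimation and Wiener filtering for graph signals.
import numpy as np

def graph_psd(L, X):
    """L: (n, n) combinatorial Laplacian; X: (n, k) signal realizations.
    Returns eigenvectors U and an empirical PSD per graph frequency."""
    lam, U = np.linalg.eigh(L)
    Xhat = U.T @ X                  # graph Fourier transform of each realization
    psd = (Xhat ** 2).mean(axis=1)  # average squared GFT coefficient
    return U, psd

def wiener_denoise(U, psd, y, noise_var):
    """Wiener-type estimate of a stationary signal observed as y = x + noise."""
    h = psd / (psd + noise_var)     # per-frequency Wiener gain
    return U @ (h * (U.T @ y))
```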
1602.02885
Yeejin Lee
Y.J. Lee, K. Hirakawa, and T.Q. Nguyen
Joint Defogging and Demosaicking
null
null
10.1109/TIP.2016.2631880
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image defogging is a technique used extensively for enhancing the visual quality of images in bad weather conditions. Even though defogging algorithms have been well studied, defogging performance is degraded by demosaicking artifacts and sensor noise amplification in distant scenes. In order to improve the visual quality of restored images, we propose a novel approach to perform defogging and demosaicking simultaneously. We conclude that better defogging performance with fewer artifacts can be achieved when a defogging algorithm is combined with a demosaicking algorithm simultaneously. We also demonstrate that the proposed joint algorithm has the benefit of suppressing noise amplification in distant scenes. In addition, we validate our theoretical analysis and observations for both synthesized datasets with ground truth fog-free images and natural scene datasets captured in a raw format.
[ { "version": "v1", "created": "Tue, 9 Feb 2016 08:01:20 GMT" } ]
2017-05-24T00:00:00
[ [ "Lee", "Y. J.", "" ], [ "Hirakawa", "K.", "" ], [ "Nguyen", "T. Q.", "" ] ]
TITLE: Joint Defogging and Demosaicking ABSTRACT: Image defogging is a technique used extensively for enhancing the visual quality of images in bad weather conditions. Even though defogging algorithms have been well studied, defogging performance is degraded by demosaicking artifacts and sensor noise amplification in distant scenes. In order to improve the visual quality of restored images, we propose a novel approach to perform defogging and demosaicking simultaneously. We conclude that better defogging performance with fewer artifacts can be achieved when a defogging algorithm is combined with a demosaicking algorithm simultaneously. We also demonstrate that the proposed joint algorithm has the benefit of suppressing noise amplification in distant scenes. In addition, we validate our theoretical analysis and observations for both synthesized datasets with ground truth fog-free images and natural scene datasets captured in a raw format.
no_new_dataset
0.949995
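Most defogging work, including the record above, builds on the atmospheric scattering model I(x) = J(x) t(x) + A (1 - t(x)). A sketch of inverting it follows, with the transmission map t and airlight A treated as given; estimating them (e.g., via a dark-channel prior) is outside this sketch.

```python
# Invert the atmospheric scattering model I = J * t + A * (1 - t)
# for a known transmission map t and airlight A (both assumed inputs here).
import numpy as np

def defog(I, t, A, t_min=0.1):
    """I: (H, W, 3) observed image in [0, 1]; t: (H, W) transmission;
    A: (3,) airlight. Clamping t avoids noise amplification in distant scenes."""
    t = np.clip(t, t_min, 1.0)[..., None]
    J = (I - A) / t + A
    return np.clip(J, 0.0, 1.0)
```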
1606.02009
Salman Khan Mr.
Salman H Khan, Xuming He, Fatih Porikli, Mohammed Bennamoun, Ferdous Sohel, Roberto Togneri
Learning deep structured network for weakly supervised change detection
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conventional change detection methods require a large number of images to learn background models or depend on tedious pixel-level labeling by humans. In this paper, we present a weakly supervised approach that needs only image-level labels to simultaneously detect and localize changes in a pair of images. To this end, we employ a deep neural network with DAG topology to learn patterns of change from image-level labeled training data. On top of the initial CNN activations, we define a CRF model to incorporate the local differences and context with the dense connections between individual pixels. We apply a constrained mean-field algorithm to estimate the pixel-level labels, and use the estimated labels to update the parameters of the CNN in an iterative EM framework. This enables imposing global constraints on the observed foreground probability mass function. Our evaluations on four benchmark datasets demonstrate superior detection and localization performance.
[ { "version": "v1", "created": "Tue, 7 Jun 2016 03:20:37 GMT" }, { "version": "v2", "created": "Tue, 23 May 2017 01:22:06 GMT" } ]
2017-05-24T00:00:00
[ [ "Khan", "Salman H", "" ], [ "He", "Xuming", "" ], [ "Porikli", "Fatih", "" ], [ "Bennamoun", "Mohammed", "" ], [ "Sohel", "Ferdous", "" ], [ "Togneri", "Roberto", "" ] ]
TITLE: Learning deep structured network for weakly supervised change detection ABSTRACT: Conventional change detection methods require a large number of images to learn background models or depend on tedious pixel-level labeling by humans. In this paper, we present a weakly supervised approach that needs only image-level labels to simultaneously detect and localize changes in a pair of images. To this end, we employ a deep neural network with DAG topology to learn patterns of change from image-level labeled training data. On top of the initial CNN activations, we define a CRF model to incorporate the local differences and context with the dense connections between individual pixels. We apply a constrained mean-field algorithm to estimate the pixel-level labels, and use the estimated labels to update the parameters of the CNN in an iterative EM framework. This enables imposing global constraints on the observed foreground probability mass function. Our evaluations on four benchmark datasets demonstrate superior detection and localization performance.
no_new_dataset
0.952397
1608.01807
Liang Zheng
Liang Zheng, Yi Yang, Qi Tian
SIFT Meets CNN: A Decade Survey of Instance Retrieval
Accepted to IEEE Transactions on Pattern Analysis and Machine Intelligence
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the early days, content-based image retrieval (CBIR) was studied with global features. Since 2003, image retrieval based on local descriptors (de facto SIFT) has been extensively studied for over a decade due to the advantage of SIFT in dealing with image transformations. Recently, image representations based on the convolutional neural network (CNN) have attracted increasing interest in the community and demonstrated impressive performance. Given this time of rapid evolution, this article provides a comprehensive survey of instance retrieval over the last decade. Two broad categories, SIFT-based and CNN-based methods, are presented. For the former, according to the codebook size, we organize the literature into using large/medium-sized/small codebooks. For the latter, we discuss three lines of methods, i.e., using pre-trained or fine-tuned CNN models, and hybrid methods. The first two feed an image through the network in a single pass, while the last category employs a patch-based feature extraction scheme. This survey presents milestones in modern instance retrieval, reviews a broad selection of previous works in different categories, and provides insights on the connection between SIFT and CNN-based methods. After analyzing and comparing retrieval performance of different categories on several datasets, we discuss promising directions towards generic and specialized instance retrieval.
[ { "version": "v1", "created": "Fri, 5 Aug 2016 08:50:58 GMT" }, { "version": "v2", "created": "Tue, 23 May 2017 08:10:33 GMT" } ]
2017-05-24T00:00:00
[ [ "Zheng", "Liang", "" ], [ "Yang", "Yi", "" ], [ "Tian", "Qi", "" ] ]
TITLE: SIFT Meets CNN: A Decade Survey of Instance Retrieval ABSTRACT: In the early days, content-based image retrieval (CBIR) was studied with global features. Since 2003, image retrieval based on local descriptors (de facto SIFT) has been extensively studied for over a decade due to the advantage of SIFT in dealing with image transformations. Recently, image representations based on the convolutional neural network (CNN) have attracted increasing interest in the community and demonstrated impressive performance. Given this time of rapid evolution, this article provides a comprehensive survey of instance retrieval over the last decade. Two broad categories, SIFT-based and CNN-based methods, are presented. For the former, according to the codebook size, we organize the literature into using large/medium-sized/small codebooks. For the latter, we discuss three lines of methods, i.e., using pre-trained or fine-tuned CNN models, and hybrid methods. The first two feed an image through the network in a single pass, while the last category employs a patch-based feature extraction scheme. This survey presents milestones in modern instance retrieval, reviews a broad selection of previous works in different categories, and provides insights on the connection between SIFT and CNN-based methods. After analyzing and comparing retrieval performance of different categories on several datasets, we discuss promising directions towards generic and specialized instance retrieval.
no_new_dataset
0.945851
1610.07986
Thomas Kreuz
Thomas Kreuz, Eero Satuvuori, Martin Pofahl, Mario Mulansky
Leaders and followers: Quantifying consistency in spatio-temporal propagation patterns
18 pages; 18 figures; revised version
null
10.1088/1367-2630/aa68c3
null
physics.data-an q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Repetitive spatio-temporal propagation patterns are encountered in fields as wide-ranging as climatology, social communication and network science. In neuroscience, perfectly consistent repetitions of the same global propagation pattern are called a synfire pattern. For any recording of sequences of discrete events (in neuroscience terminology: sets of spike trains), the questions arise of how closely it resembles such a synfire pattern and which spike trains lead or follow. Here we address these questions and introduce an algorithm built on two new indicators, termed SPIKE-Order and Spike Train Order, that define the Synfire Indicator value, which allows one to sort multiple spike trains from leader to follower and to quantify the consistency of the temporal leader-follower relationships for both the original and the optimized sorting. We demonstrate our new approach using artificially generated datasets before we apply it to analyze the consistency of propagation patterns in two real datasets from neuroscience (Giant Depolarized Potentials in mice slices) and climatology (El Niño sea surface temperature recordings). The new algorithm is distinguished by conceptual and practical simplicity, low computational cost, as well as flexibility and universality.
[ { "version": "v1", "created": "Tue, 25 Oct 2016 17:51:23 GMT" }, { "version": "v2", "created": "Wed, 26 Oct 2016 08:24:50 GMT" }, { "version": "v3", "created": "Mon, 13 Feb 2017 21:36:09 GMT" }, { "version": "v4", "created": "Wed, 22 Mar 2017 15:26:05 GMT" } ]
2017-05-24T00:00:00
[ [ "Kreuz", "Thomas", "" ], [ "Satuvuori", "Eero", "" ], [ "Pofahl", "Martin", "" ], [ "Mulansky", "Mario", "" ] ]
TITLE: Leaders and followers: Quantifying consistency in spatio-temporal propagation patterns ABSTRACT: Repetitive spatio-temporal propagation patterns are encountered in fields as wide-ranging as climatology, social communication and network science. In neuroscience, perfectly consistent repetitions of the same global propagation pattern are called a synfire pattern. For any recording of sequences of discrete events (in neuroscience terminology: sets of spike trains), the questions arise of how closely it resembles such a synfire pattern and which spike trains lead or follow. Here we address these questions and introduce an algorithm built on two new indicators, termed SPIKE-Order and Spike Train Order, that define the Synfire Indicator value, which allows one to sort multiple spike trains from leader to follower and to quantify the consistency of the temporal leader-follower relationships for both the original and the optimized sorting. We demonstrate our new approach using artificially generated datasets before we apply it to analyze the consistency of propagation patterns in two real datasets from neuroscience (Giant Depolarized Potentials in mice slices) and climatology (El Niño sea surface temperature recordings). The new algorithm is distinguished by conceptual and practical simplicity, low computational cost, as well as flexibility and universality.
no_new_dataset
0.935051
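One simple way to make the leader/follower idea concrete, not the paper's exact SPIKE-Order definition, is a pairwise count of which train fires first within a coincidence window; the window size below is an assumption.

```python
# Illustrative leader/follower score between two spike trains.
# Not the paper's SPIKE-Order indicator; coincidence window tau is assumed.
import numpy as np

def lead_score(t_a, t_b, tau=0.05):
    """For every pair of spikes from trains a and b closer than tau,
    add +1 if a fires first and -1 if b does. Positive => a tends to lead."""
    score = 0
    for ta in t_a:
        d = t_b - ta
        close = np.abs(d) < tau
        score += int(np.sum(d[close] > 0)) - int(np.sum(d[close] < 0))
    return score
```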
1611.00812
Jiemin Chen
Jianguo Li, Yong Tang and Jiemin Chen
Leveraging tagging and rating for recommendation: RMF meets weighted diffusion on tripartite graphs
null
null
10.1016/j.physa.2017.04.121
null
cs.IR cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recommender systems (RSs) have been a widely exploited approach to solving the information overload problem. However, the performance is still limited due to the extreme sparsity of the rating data. With the popularity of Web 2.0, the social tagging system provides more external information to improve recommendation accuracy. Although some existing approaches combine matrix factorization models with the co-occurrence properties and context of tags, they neglect the issue of tag sparsity, i.e., the lack of commonly associated tags, which would also result in inaccurate recommendations. Consequently, in this paper, we propose a novel hybrid collaborative filtering model named WUDiff_RMF, which improves the Regularized Matrix Factorization (RMF) model by integrating a Weighted User-Diffusion-based CF algorithm (WUDiff) that obtains the information of similar users from the weighted tripartite user-item-tag graph. This model aims to capture the degree correlation of the user-item-tag tripartite network to enhance the performance of recommendation. Experiments conducted on four real-world datasets demonstrate that our approach significantly outperforms widely used methods in recommendation accuracy. Moreover, results show that WUDiff_RMF can alleviate the data sparsity, especially when users have provided few ratings and few tags.
[ { "version": "v1", "created": "Tue, 1 Nov 2016 02:59:04 GMT" } ]
2017-05-24T00:00:00
[ [ "Li", "Jianguo", "" ], [ "Tang", "Yong", "" ], [ "Chen", "Jiemin", "" ] ]
TITLE: Leveraging tagging and rating for recommendation: RMF meets weighted diffusion on tripartite graphs ABSTRACT: Recommender systems (RSs) have been a widely exploited approach to solving the information overload problem. However, the performance is still limited due to the extreme sparsity of the rating data. With the popularity of Web 2.0, the social tagging system provides more external information to improve recommendation accuracy. Although some existing approaches combine matrix factorization models with the co-occurrence properties and context of tags, they neglect the issue of tag sparsity, i.e., the lack of commonly associated tags, which would also result in inaccurate recommendations. Consequently, in this paper, we propose a novel hybrid collaborative filtering model named WUDiff_RMF, which improves the Regularized Matrix Factorization (RMF) model by integrating a Weighted User-Diffusion-based CF algorithm (WUDiff) that obtains the information of similar users from the weighted tripartite user-item-tag graph. This model aims to capture the degree correlation of the user-item-tag tripartite network to enhance the performance of recommendation. Experiments conducted on four real-world datasets demonstrate that our approach significantly outperforms widely used methods in recommendation accuracy. Moreover, results show that WUDiff_RMF can alleviate the data sparsity, especially when users have provided few ratings and few tags.
no_new_dataset
0.948822
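A minimal sketch of the RMF backbone that WUDiff_RMF builds on: plain SGD on the regularized squared error. The diffusion-derived neighbor term is omitted, and the learning rate, regularizer, and factor dimension are assumptions.

```python
# Minimal regularized matrix factorization (RMF) trained by SGD.
# The diffusion-based neighbor term of WUDiff_RMF is not shown.
import numpy as np

def rmf(ratings, n_users, n_items, k=16, lr=0.01, reg=0.05, epochs=20, seed=0):
    """ratings: iterable of (user, item, value) triples."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))   # user factors
    Q = 0.1 * rng.standard_normal((n_items, k))   # item factors
    for _ in range(epochs):
        for u, i, r in ratings:
            e = r - P[u] @ Q[i]                   # prediction error
            P[u] += lr * (e * Q[i] - reg * P[u])
            Q[i] += lr * (e * P[u] - reg * Q[i])
    return P, Q
```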
1701.08349
Xiaoxia Sun
Xiaoxia Sun, Nasser M. Nasrabadi, Trac D. Tran
Supervised Deep Sparse Coding Networks
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we describe the deep sparse coding network (SCN), a novel deep network that encodes intermediate representations with nonnegative sparse coding. The SCN is built upon a number of cascading bottleneck modules, where each module consists of two sparse coding layers with relatively wide and slim dictionaries that are specialized to produce high dimensional discriminative features and low dimensional representations for clustering, respectively. During training, both the dictionaries and regularization parameters are optimized with an end-to-end supervised learning algorithm based on multilevel optimization. Effectiveness of an SCN with seven bottleneck modules is verified on several popular benchmark datasets. Remarkably, with few parameters to learn, our SCN achieves 5.81% and 19.93% classification error rate on CIFAR-10 and CIFAR-100, respectively.
[ { "version": "v1", "created": "Sun, 29 Jan 2017 04:03:39 GMT" }, { "version": "v2", "created": "Sun, 21 May 2017 01:33:04 GMT" }, { "version": "v3", "created": "Tue, 23 May 2017 01:31:04 GMT" } ]
2017-05-24T00:00:00
[ [ "Sun", "Xiaoxia", "" ], [ "Nasrabadi", "Nasser M.", "" ], [ "Tran", "Trac D.", "" ] ]
TITLE: Supervised Deep Sparse Coding Networks ABSTRACT: In this paper, we describe the deep sparse coding network (SCN), a novel deep network that encodes intermediate representations with nonnegative sparse coding. The SCN is built upon a number of cascading bottleneck modules, where each module consists of two sparse coding layers with relatively wide and slim dictionaries that are specialized to produce high dimensional discriminative features and low dimensional representations for clustering, respectively. During training, both the dictionaries and regularization parameters are optimized with an end-to-end supervised learning algorithm based on multilevel optimization. Effectiveness of an SCN with seven bottleneck modules is verified on several popular benchmark datasets. Remarkably, with few parameters to learn, our SCN achieves 5.81% and 19.93% classification error rate on CIFAR-10 and CIFAR-100, respectively.
no_new_dataset
0.948489
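A nonnegative sparse coding layer like those stacked in the SCN can be approximated by a few iterations of nonnegative ISTA; the dictionary, step size, and iteration count below are assumptions, and the paper's end-to-end multilevel training is not shown.

```python
# Nonnegative sparse coding of inputs X against dictionary D via ISTA.
import numpy as np

def nn_ista(X, D, lam=0.1, n_iter=50):
    """X: (d, n) inputs; D: (d, m) dictionary with unit-norm columns.
    Returns nonnegative codes Z (m, n) approximately minimizing
    0.5*||X - D Z||^2 + lam*||Z||_1 subject to Z >= 0."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    Z = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        grad = D.T @ (D @ Z - X)
        Z = np.maximum(Z - (grad + lam) / L, 0.0)   # nonnegative soft-threshold
    return Z
```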
1702.05931
Francesco Ciompi
Francesco Ciompi, Oscar Geessink, Babak Ehteshami Bejnordi, Gabriel Silva de Souza, Alexi Baidoshvili, Geert Litjens, Bram van Ginneken, Iris Nagtegaal, Jeroen van der Laak
The importance of stain normalization in colorectal tissue classification with convolutional networks
Published in Proceedings of IEEE International Symposium on Biomedical Imaging (ISBI) 2017
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The development of reliable imaging biomarkers for the analysis of colorectal cancer (CRC) in hematoxylin and eosin (H&E) stained histopathology images requires an accurate and reproducible classification of the main tissue components in the image. In this paper, we propose a system for CRC tissue classification based on convolutional networks (ConvNets). We investigate the importance of stain normalization in tissue classification of CRC tissue samples in H&E-stained images. Furthermore, we report the performance of ConvNets on a cohort of rectal cancer samples and on an independent publicly available dataset of colorectal H&E images.
[ { "version": "v1", "created": "Mon, 20 Feb 2017 11:11:50 GMT" }, { "version": "v2", "created": "Tue, 23 May 2017 12:34:17 GMT" } ]
2017-05-24T00:00:00
[ [ "Ciompi", "Francesco", "" ], [ "Geessink", "Oscar", "" ], [ "Bejnordi", "Babak Ehteshami", "" ], [ "de Souza", "Gabriel Silva", "" ], [ "Baidoshvili", "Alexi", "" ], [ "Litjens", "Geert", "" ], [ "van Ginneken", "Bram", "" ], [ "Nagtegaal", "Iris", "" ], [ "van der Laak", "Jeroen", "" ] ]
TITLE: The importance of stain normalization in colorectal tissue classification with convolutional networks ABSTRACT: The development of reliable imaging biomarkers for the analysis of colorectal cancer (CRC) in hematoxylin and eosin (H&E) stained histopathology images requires an accurate and reproducible classification of the main tissue components in the image. In this paper, we propose a system for CRC tissue classification based on convolutional networks (ConvNets). We investigate the importance of stain normalization in tissue classification of CRC tissue samples in H&E-stained images. Furthermore, we report the performance of ConvNets on a cohort of rectal cancer samples and on an independent publicly available dataset of colorectal H&E images.
no_new_dataset
0.952086
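A common simple baseline for the stain normalization studied above is Reinhard-style statistics matching between a slide and a template image; the sketch below does per-channel mean/std matching directly in RGB as a simplification (Reinhard works in a LAB-like space), and is only a stand-in for the paper's normalization pipeline.

```python
# Simplified Reinhard-style normalization: match per-channel mean/std
# of a source H&E image to a template image (float arrays, (H, W, 3), in [0, 1]).
import numpy as np

def match_stats(source, template, eps=1e-6):
    out = np.empty_like(source)
    for c in range(3):
        s, t = source[..., c], template[..., c]
        out[..., c] = (s - s.mean()) / (s.std() + eps) * t.std() + t.mean()
    return np.clip(out, 0.0, 1.0)
```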
1704.00390
Alex Kendall
Alex Kendall and Roberto Cipolla
Geometric Loss Functions for Camera Pose Regression with Deep Learning
CVPR 2017
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep learning has been shown to be effective for robust and real-time monocular image relocalisation. In particular, PoseNet is a deep convolutional neural network which learns to regress the 6-DOF camera pose from a single image. It learns to localize using high level features and is robust to difficult lighting, motion blur and unknown camera intrinsics, where point-based SIFT registration fails. However, it was trained using a naive loss function, with hyper-parameters which require expensive tuning. In this paper, we give the problem a more fundamental theoretical treatment. We explore a number of novel loss functions for learning camera pose which are based on geometry and scene reprojection error. Additionally we show how to automatically learn an optimal weighting to simultaneously regress position and orientation. By leveraging geometry, we demonstrate that our technique significantly improves PoseNet's performance across datasets ranging from indoor rooms to a small city.
[ { "version": "v1", "created": "Sun, 2 Apr 2017 23:58:22 GMT" }, { "version": "v2", "created": "Tue, 23 May 2017 13:45:48 GMT" } ]
2017-05-24T00:00:00
[ [ "Kendall", "Alex", "" ], [ "Cipolla", "Roberto", "" ] ]
TITLE: Geometric Loss Functions for Camera Pose Regression with Deep Learning ABSTRACT: Deep learning has been shown to be effective for robust and real-time monocular image relocalisation. In particular, PoseNet is a deep convolutional neural network which learns to regress the 6-DOF camera pose from a single image. It learns to localize using high level features and is robust to difficult lighting, motion blur and unknown camera intrinsics, where point-based SIFT registration fails. However, it was trained using a naive loss function, with hyper-parameters which require expensive tuning. In this paper, we give the problem a more fundamental theoretical treatment. We explore a number of novel loss functions for learning camera pose which are based on geometry and scene reprojection error. Additionally we show how to automatically learn an optimal weighting to simultaneously regress position and orientation. By leveraging geometry, we demonstrate that our technique significantly improves PoseNet's performance across datasets ranging from indoor rooms to a small city.
no_new_dataset
0.9455
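The learned weighting between position and orientation errors can be written with homoscedastic-uncertainty-style log-variance parameters; the form below follows the general recipe in this line of work, but the exact norms and parameterization are assumptions rather than the paper's definitive loss.

```python
# Sketch of a pose regression loss with a learned position/orientation balance.
# s_x and s_q are trainable log-variance scalars (assumed parameterization).
import numpy as np

def pose_loss(p_pred, p_true, q_pred, q_true, s_x, s_q):
    """p_*: (3,) positions; q_*: (4,) quaternions (target normalized below)."""
    l_pos = np.linalg.norm(p_pred - p_true, ord=1)
    l_rot = np.linalg.norm(q_pred - q_true / np.linalg.norm(q_true), ord=1)
    # exp(-s) weights each term; the +s terms keep s from growing unboundedly.
    return l_pos * np.exp(-s_x) + s_x + l_rot * np.exp(-s_q) + s_q
```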
1705.07062
Nathanael Lemessa Baisa
Nathanael L. Baisa, St\'ephanie Bricq, Alain Lalande
MRI-PET Registration with Automated Algorithm in Pre-clinical Studies
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic 3-D registration of Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) is implemented and validated for small animal image volumes, so that high-resolution anatomical MRI information can be fused with low-spatial-resolution functional PET information for lesion localization, which is currently in high demand in the study of cancerous tumors (oncology) and the corresponding development of pharmaceutical drugs. Though many registration algorithms have been developed and applied to human brain volumes, these methods may not be as effective on small animal datasets due to the lack of intensity information and the often high anisotropy in voxel dimensions. Therefore, a fully automatic registration algorithm which can register not only presumably rigid small animal volumes such as the brain but also deformable organs such as the kidney, heart and chest is developed using a combination of global affine and local B-spline transformation models, in which mutual information is used as the similarity criterion. The global affine registration uses a multi-resolution pyramid of 3 levels on the image volumes, whereas the local B-spline registration applies a multi-resolution scheme of 2 levels to the B-spline grid at the finest resolution of the image volumes, so that only the transform itself is refined rather than the image volumes. Since mutual information lacks sufficient spatial information, PCA is used to inject it by estimating initial translation and rotation parameters. The method is computationally efficient, being implemented in C++ with the ITK library, and it is shown both qualitatively and quantitatively that this PCA-initialized global registration followed by local registration agrees closely with expert manual registration and outperforms the variant without PCA initialization, as tested on small animal brain and kidney volumes.
[ { "version": "v1", "created": "Fri, 19 May 2017 15:46:30 GMT" }, { "version": "v2", "created": "Tue, 23 May 2017 13:42:39 GMT" } ]
2017-05-24T00:00:00
[ [ "Baisa", "Nathanael L.", "" ], [ "Bricq", "Stéphanie", "" ], [ "Lalande", "Alain", "" ] ]
TITLE: MRI-PET Registration with Automated Algorithm in Pre-clinical Studies ABSTRACT: Automatic 3-D registration of Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) is implemented and validated for small animal image volumes, so that high-resolution anatomical MRI information can be fused with low-spatial-resolution functional PET information for lesion localization, which is currently in high demand in the study of cancerous tumors (oncology) and the corresponding development of pharmaceutical drugs. Though many registration algorithms have been developed and applied to human brain volumes, these methods may not be as effective on small animal datasets due to the lack of intensity information and the often high anisotropy in voxel dimensions. Therefore, a fully automatic registration algorithm which can register not only presumably rigid small animal volumes such as the brain but also deformable organs such as the kidney, heart and chest is developed using a combination of global affine and local B-spline transformation models, in which mutual information is used as the similarity criterion. The global affine registration uses a multi-resolution pyramid of 3 levels on the image volumes, whereas the local B-spline registration applies a multi-resolution scheme of 2 levels to the B-spline grid at the finest resolution of the image volumes, so that only the transform itself is refined rather than the image volumes. Since mutual information lacks sufficient spatial information, PCA is used to inject it by estimating initial translation and rotation parameters. The method is computationally efficient, being implemented in C++ with the ITK library, and it is shown both qualitatively and quantitatively that this PCA-initialized global registration followed by local registration agrees closely with expert manual registration and outperforms the variant without PCA initialization, as tested on small animal brain and kidney volumes.
no_new_dataset
0.950273
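Mutual information, the similarity criterion in the record above, can be computed from a joint intensity histogram; a minimal sketch follows, with the bin count as an assumption.

```python
# Mutual information between two aligned image volumes via a joint histogram.
import numpy as np

def mutual_information(a, b, bins=32):
    """a, b: same-shape arrays of voxel intensities."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)           # marginal of a
    py = pxy.sum(axis=0, keepdims=True)           # marginal of b
    nz = pxy > 0                                  # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```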
1705.08030
Saeed Maleki
Saeed Maleki, Madanlal Musuvathi, Todd Mytkowicz
Parallel Stochastic Gradient Descent with Sound Combiners
16 pages, 4 figures
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stochastic gradient descent (SGD) is a well-known method for regression and classification tasks. However, it is an inherently sequential algorithm: at each step, the processing of the current example depends on the parameters learned from the previous examples. Prior approaches to parallelizing linear learners using SGD, such as HOGWILD! and ALLREDUCE, do not honor these dependencies across threads and thus can potentially suffer poor convergence rates and/or poor scalability. This paper proposes SYMSGD, a parallel SGD algorithm that, to a first-order approximation, retains the sequential semantics of SGD. Each thread learns a local model in addition to a model combiner, which allows local models to be combined to produce the same result as what a sequential SGD would have produced. This paper evaluates SYMSGD's accuracy and performance on 6 datasets on a shared-memory machine, showing up to an 11x speedup over our heavily optimized sequential baseline on 16 cores, while running 2.2x faster than HOGWILD! on average.
[ { "version": "v1", "created": "Mon, 22 May 2017 22:32:28 GMT" } ]
2017-05-24T00:00:00
[ [ "Maleki", "Saeed", "" ], [ "Musuvathi", "Madanlal", "" ], [ "Mytkowicz", "Todd", "" ] ]
TITLE: Parallel Stochastic Gradient Descent with Sound Combiners ABSTRACT: Stochastic gradient descent (SGD) is a well-known method for regression and classification tasks. However, it is an inherently sequential algorithm: at each step, the processing of the current example depends on the parameters learned from the previous examples. Prior approaches to parallelizing linear learners using SGD, such as HOGWILD! and ALLREDUCE, do not honor these dependencies across threads and thus can potentially suffer poor convergence rates and/or poor scalability. This paper proposes SYMSGD, a parallel SGD algorithm that, to a first-order approximation, retains the sequential semantics of SGD. Each thread learns a local model in addition to a model combiner, which allows local models to be combined to produce the same result as what a sequential SGD would have produced. This paper evaluates SYMSGD's accuracy and performance on 6 datasets on a shared-memory machine, showing up to an 11x speedup over our heavily optimized sequential baseline on 16 cores, while running 2.2x faster than HOGWILD! on average.
no_new_dataset
0.940024
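For squared loss, each SGD step is an affine map of the weights, w <- (I - eta*x*x^T)w + eta*y*x, so a thread can summarize its whole local run as a matrix M and offset b (the "model combiner"); the toy sketch below demonstrates exact composition on a linear-regression stream. The paper's dimensionality-reduced, approximate combiners needed at scale are not shown.

```python
# Toy demonstration of the model-combiner idea for least-squares SGD:
# every update is affine in w, so a thread's whole pass is w_out = M @ w_in + b.
import numpy as np

def local_pass(examples, lr, d):
    """Process a chunk of (x, y) pairs, returning the combiner (M, b)."""
    M, b = np.eye(d), np.zeros(d)
    for x, y in examples:
        A = np.eye(d) - lr * np.outer(x, x)   # one SGD step: w -> A w + lr*y*x
        M, b = A @ M, A @ b + lr * y * x
    return M, b

# Two "threads" each summarize their chunk; composing the combiners
# reproduces what sequential SGD over chunk1 then chunk2 would give.
rng = np.random.default_rng(0)
d, w = 3, np.zeros(3)
data = [(rng.standard_normal(d), rng.standard_normal()) for _ in range(20)]
M1, b1 = local_pass(data[:10], 0.1, d)
M2, b2 = local_pass(data[10:], 0.1, d)
w_parallel = M2 @ (M1 @ w + b1) + b2
```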
1705.08066
Bo Jiang
Bo Jiang and Chris Ding and Bin Luo
Multiple Images Recovery Using a Single Affine Transformation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In many real-world applications, image data often come with noise, corruption or large errors. One approach to dealing with noisy image data is to use data recovery techniques, which aim to recover the true uncorrupted signals from the observed noisy images. In this paper, we first introduce a novel corruption recovery transformation (CRT) model which aims to recover multiple (or a collection of) corrupted images using a single affine transformation. Then, we show that the introduced CRT can be efficiently constructed through learning from training data. Once the CRT is learned, we can recover the true signals from new incoming/test corrupted images explicitly. As an application, we apply our CRT to an image recognition task. Experimental results on six image datasets demonstrate that the proposed CRT model is effective in recovering noisy image data and thus leads to better recognition results.
[ { "version": "v1", "created": "Tue, 23 May 2017 03:14:50 GMT" } ]
2017-05-24T00:00:00
[ [ "Jiang", "Bo", "" ], [ "Ding", "Chris", "" ], [ "Luo", "Bin", "" ] ]
TITLE: Multiple Images Recovery Using a Single Affine Transformation ABSTRACT: In many real-world applications, image data often come with noise, corruption or large errors. One approach to dealing with noisy image data is to use data recovery techniques, which aim to recover the true uncorrupted signals from the observed noisy images. In this paper, we first introduce a novel corruption recovery transformation (CRT) model which aims to recover multiple (or a collection of) corrupted images using a single affine transformation. Then, we show that the introduced CRT can be efficiently constructed through learning from training data. Once the CRT is learned, we can recover the true signals from new incoming/test corrupted images explicitly. As an application, we apply our CRT to an image recognition task. Experimental results on six image datasets demonstrate that the proposed CRT model is effective in recovering noisy image data and thus leads to better recognition results.
no_new_dataset
0.955486
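In the simplest setting, learning a single affine map from corrupted images toward their clean versions reduces to a multivariate least-squares problem; the sketch below fits such a map on vectorized training pairs. Regularization and the paper's exact objective are not shown.

```python
# Fit one affine map (W, b) so that W @ x_corrupt + b ~ x_clean
# for vectorized training images, then apply it to new corrupted inputs.
import numpy as np

def fit_affine_recovery(X_corrupt, X_clean):
    """X_*: (n_samples, n_pixels). Solves the least-squares fit in one shot."""
    n = X_corrupt.shape[0]
    Xa = np.hstack([X_corrupt, np.ones((n, 1))])     # append a bias column
    T, *_ = np.linalg.lstsq(Xa, X_clean, rcond=None)
    return T                                          # (n_pixels + 1, n_pixels)

def recover(T, X_new):
    n = X_new.shape[0]
    return np.hstack([X_new, np.ones((n, 1))]) @ T
```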
1705.08180
Pietro Morerio
Pietro Morerio and Vittorio Murino
Correlation Alignment by Riemannian Metric for Domain Adaptation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Domain adaptation techniques address the problem of reducing the sensitivity of machine learning methods to the so-called domain shift, namely the difference between source (training) and target (test) data distributions. In particular, unsupervised domain adaptation assumes no labels are available in the target domain. To this end, aligning the second-order statistics (covariances) of the target and source domains has proven to be an effective approach to fill the gap between the domains. However, covariance matrices do not form a subspace of the Euclidean space, but live in a Riemannian manifold with non-positive curvature, making the usual Euclidean metric suboptimal to measure distances. In this paper, we extend the idea of training a neural network with a constraint on the covariances of the hidden layer features, by rigorously accounting for the curved structure of the manifold of symmetric positive definite matrices. The resulting loss function exploits a theoretically sound geodesic distance on such a manifold. Results indeed show the suboptimal nature of the Euclidean distance. This enables us to perform better than previous approaches on the standard Office dataset, a benchmark for domain adaptation techniques.
[ { "version": "v1", "created": "Tue, 23 May 2017 11:08:48 GMT" } ]
2017-05-24T00:00:00
[ [ "Morerio", "Pietro", "" ], [ "Murino", "Vittorio", "" ] ]
TITLE: Correlation Alignment by Riemannian Metric for Domain Adaptation ABSTRACT: Domain adaptation techniques address the problem of reducing the sensitivity of machine learning methods to the so-called domain shift, namely the difference between source (training) and target (test) data distributions. In particular, unsupervised domain adaptation assumes no labels are available in the target domain. To this end, aligning the second-order statistics (covariances) of the target and source domains has proven to be an effective approach to fill the gap between the domains. However, covariance matrices do not form a subspace of the Euclidean space, but live in a Riemannian manifold with non-positive curvature, making the usual Euclidean metric suboptimal to measure distances. In this paper, we extend the idea of training a neural network with a constraint on the covariances of the hidden layer features, by rigorously accounting for the curved structure of the manifold of symmetric positive definite matrices. The resulting loss function exploits a theoretically sound geodesic distance on such a manifold. Results indeed show the suboptimal nature of the Euclidean distance. This enables us to perform better than previous approaches on the standard Office dataset, a benchmark for domain adaptation techniques.
no_new_dataset
0.946941
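The geodesic distance on the SPD manifold referenced above, under the affine-invariant metric, is d(A, B) = ||log(A^(-1/2) B A^(-1/2))||_F; a scipy sketch follows. Using it as a differentiable training loss, as the paper does, needs more care than this evaluation-only snippet.

```python
# Affine-invariant geodesic distance between covariance (SPD) matrices.
import numpy as np
from scipy.linalg import sqrtm, logm

def spd_geodesic(A, B):
    """A, B: symmetric positive definite matrices of the same size."""
    A_inv_sqrt = np.linalg.inv(np.real(sqrtm(A)))
    M = A_inv_sqrt @ B @ A_inv_sqrt
    return float(np.linalg.norm(np.real(logm(M)), "fro"))
```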
1705.08207
Tam Nguyen
Tam V. Nguyen, Luoqi Liu
Salient Object Detection with Semantic Priors
accepted to IJCAI 2017
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Salient object detection has increasingly become a popular topic in cognitive and computational sciences, including computer vision and artificial intelligence research. In this paper, we propose integrating \textit{semantic priors} into the salient object detection process. Our algorithm consists of three basic steps. Firstly, the explicit saliency map is obtained based on the semantic segmentation refined by the explicit saliency priors learned from the data. Next, the implicit saliency map is computed based on a trained model which maps the implicit saliency priors embedded into regional features with the saliency values. Finally, the explicit semantic map and the implicit map are adaptively fused to form a pixel-accurate saliency map which uniformly covers the objects of interest. We further evaluate the proposed framework on two challenging datasets, namely, ECSSD and HKUIS. The extensive experimental results demonstrate that our method outperforms other state-of-the-art methods.
[ { "version": "v1", "created": "Tue, 23 May 2017 12:24:09 GMT" } ]
2017-05-24T00:00:00
[ [ "Nguyen", "Tam V.", "" ], [ "Liu", "Luoqi", "" ] ]
TITLE: Salient Object Detection with Semantic Priors ABSTRACT: Salient object detection has increasingly become a popular topic in cognitive and computational sciences, including computer vision and artificial intelligence research. In this paper, we propose integrating \textit{semantic priors} into the salient object detection process. Our algorithm consists of three basic steps. Firstly, the explicit saliency map is obtained based on the semantic segmentation refined by the explicit saliency priors learned from the data. Next, the implicit saliency map is computed based on a trained model which maps the implicit saliency priors embedded into regional features with the saliency values. Finally, the explicit semantic map and the implicit map are adaptively fused to form a pixel-accurate saliency map which uniformly covers the objects of interest. We further evaluate the proposed framework on two challenging datasets, namely, ECSSD and HKUIS. The extensive experimental results demonstrate that our method outperforms other state-of-the-art methods.
no_new_dataset
0.951414
1705.08214
Michael Gygli
Michael Gygli
Ridiculously Fast Shot Boundary Detection with Fully Convolutional Neural Networks
null
null
null
null
cs.CV cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Shot boundary detection (SBD) is an important component of many video analysis tasks, such as action recognition, video indexing, summarization and editing. Previous work typically used a combination of low-level features like color histograms, in conjunction with simple models such as SVMs. Instead, we propose to learn shot detection end-to-end, from pixels to final shot boundaries. For training such a model, we rely on our insight that all shot boundaries are generated. Thus, we create a dataset with one million frames and automatically generated transitions such as cuts, dissolves and fades. In order to efficiently analyze hours of videos, we propose a Convolutional Neural Network (CNN) which is fully convolutional in time, thus allowing the use of a large temporal context without the need to repeatedly process frames. With this architecture our method obtains state-of-the-art results while running at an unprecedented speed of more than 120x real-time.
[ { "version": "v1", "created": "Tue, 23 May 2017 12:39:51 GMT" } ]
2017-05-24T00:00:00
[ [ "Gygli", "Michael", "" ] ]
TITLE: Ridiculously Fast Shot Boundary Detection with Fully Convolutional Neural Networks ABSTRACT: Shot boundary detection (SBD) is an important component of many video analysis tasks, such as action recognition, video indexing, summarization and editing. Previous work typically used a combination of low-level features like color histograms, in conjunction with simple models such as SVMs. Instead, we propose to learn shot detection end-to-end, from pixels to final shot boundaries. For training such a model, we rely on our insight that all shot boundaries are generated. Thus, we create a dataset with one million frames and automatically generated transitions such as cuts, dissolves and fades. In order to efficiently analyze hours of videos, we propose a Convolutional Neural Network (CNN) which is fully convolutional in time, thus allowing the use of a large temporal context without the need to repeatedly process frames. With this architecture our method obtains state-of-the-art results while running at an unprecedented speed of more than 120x real-time.
new_dataset
0.950732
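Generating training transitions between two clips is straightforward; the sketch below produces hard cuts, linear dissolves, and fades to black for frame arrays. Transition lengths are assumptions, not the paper's settings.

```python
# Generate synthetic shot transitions between two clips (T, H, W, C) in [0, 1].
import numpy as np

def hard_cut(a, b):
    return np.concatenate([a, b], axis=0)

def dissolve(a, b, length=8):
    """Linearly blend the last `length` frames of a with the first of b."""
    w = np.linspace(0, 1, length)[:, None, None, None]
    blend = (1 - w) * a[-length:] + w * b[:length]
    return np.concatenate([a[:-length], blend, b[length:]], axis=0)

def fade_to_black(a, b, length=8):
    """a fades out to black, then b fades in from black."""
    out_w = np.linspace(1, 0, length)[:, None, None, None]
    in_w = np.linspace(0, 1, length)[:, None, None, None]
    return np.concatenate([a[:-length], out_w * a[-length:],
                           in_w * b[:length], b[length:]], axis=0)
```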
1705.08260
Menglong Ye
Menglong Ye and Edward Johns and Ankur Handa and Lin Zhang and Philip Pratt and Guang-Zhong Yang
Self-Supervised Siamese Learning on Stereo Image Pairs for Depth Estimation in Robotic Surgery
A two-page short report to be presented at the Hamlyn Symposium on Medical Robotics 2017. An extension of this work is on progress
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Robotic surgery has become a powerful tool for performing minimally invasive procedures, providing advantages in dexterity, precision, and 3D vision, over traditional surgery. One popular robotic system is the da Vinci surgical platform, which allows preoperative information to be incorporated into live procedures using Augmented Reality (AR). Scene depth estimation is a prerequisite for AR, as accurate registration requires 3D correspondences between preoperative and intraoperative organ models. In the past decade, there has been much progress on depth estimation for surgical scenes, such as using monocular or binocular laparoscopes [1,2]. More recently, advances in deep learning have enabled depth estimation via Convolutional Neural Networks (CNNs) [3], but training requires a large image dataset with ground truth depths. Inspired by [4], we propose a deep learning framework for surgical scene depth estimation using self-supervision for scalable data acquisition. Our framework consists of an autoencoder for depth prediction, and a differentiable spatial transformer for training the autoencoder on stereo image pairs without ground truth depths. Validation was conducted on stereo videos collected in robotic partial nephrectomy.
[ { "version": "v1", "created": "Wed, 17 May 2017 11:10:49 GMT" } ]
2017-05-24T00:00:00
[ [ "Ye", "Menglong", "" ], [ "Johns", "Edward", "" ], [ "Handa", "Ankur", "" ], [ "Zhang", "Lin", "" ], [ "Pratt", "Philip", "" ], [ "Yang", "Guang-Zhong", "" ] ]
TITLE: Self-Supervised Siamese Learning on Stereo Image Pairs for Depth Estimation in Robotic Surgery ABSTRACT: Robotic surgery has become a powerful tool for performing minimally invasive procedures, providing advantages in dexterity, precision, and 3D vision, over traditional surgery. One popular robotic system is the da Vinci surgical platform, which allows preoperative information to be incorporated into live procedures using Augmented Reality (AR). Scene depth estimation is a prerequisite for AR, as accurate registration requires 3D correspondences between preoperative and intraoperative organ models. In the past decade, there has been much progress on depth estimation for surgical scenes, such as using monocular or binocular laparoscopes [1,2]. More recently, advances in deep learning have enabled depth estimation via Convolutional Neural Networks (CNNs) [3], but training requires a large image dataset with ground truth depths. Inspired by [4], we propose a deep learning framework for surgical scene depth estimation using self-supervision for scalable data acquisition. Our framework consists of an autoencoder for depth prediction, and a differentiable spatial transformer for training the autoencoder on stereo image pairs without ground truth depths. Validation was conducted on stereo videos collected in robotic partial nephrectomy.
no_new_dataset
0.947381
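The self-supervision signal in the record above reduces to a photometric reconstruction error: warp the right image by the predicted disparity and compare it to the left. A horizontal-shift numpy sketch with integer disparities follows for simplicity; the paper's differentiable spatial transformer handles sub-pixel sampling.

```python
# Photometric loss for stereo self-supervision with integer disparities.
import numpy as np

def warp_right_to_left(right, disp):
    """right: (H, W) image; disp: (H, W) nonnegative integer disparities.
    Each left pixel (y, x) samples right[y, x - disp[y, x]]."""
    H, W = right.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_x = np.clip(xs - disp.astype(int), 0, W - 1)
    return right[ys, src_x]

def photometric_loss(left, right, disp):
    return float(np.mean(np.abs(left - warp_right_to_left(right, disp))))
```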
1705.08374
Carlos Becker
Carlos Becker, Nicolai H\"ani, Elena Rosinskaya, Emmanuel d'Angelo, Christoph Strecha
Classification of Aerial Photogrammetric 3D Point Clouds
ISPRS 2017
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a powerful method to extract per-point semantic class labels from aerial photogrammetry data. Labeling this kind of data is important for tasks such as environmental modelling, object classification and scene understanding. Unlike previous point cloud classification methods that rely exclusively on geometric features, we show that incorporating color information yields a significant increase in accuracy in detecting semantic classes. We test our classification method on three real-world photogrammetry datasets that were generated with Pix4Dmapper Pro, and with varying point densities. We show that off-the-shelf machine learning techniques coupled with our new features allow us to train highly accurate classifiers that generalize well to unseen data, processing point clouds containing 10 million points in less than 3 minutes on a desktop computer.
[ { "version": "v1", "created": "Tue, 23 May 2017 15:44:40 GMT" } ]
2017-05-24T00:00:00
[ [ "Becker", "Carlos", "" ], [ "Häni", "Nicolai", "" ], [ "Rosinskaya", "Elena", "" ], [ "d'Angelo", "Emmanuel", "" ], [ "Strecha", "Christoph", "" ] ]
TITLE: Classification of Aerial Photogrammetric 3D Point Clouds ABSTRACT: We present a powerful method to extract per-point semantic class labels from aerial photogrammetry data. Labeling this kind of data is important for tasks such as environmental modelling, object classification and scene understanding. Unlike previous point cloud classification methods that rely exclusively on geometric features, we show that incorporating color information yields a significant increase in accuracy in detecting semantic classes. We test our classification method on three real-world photogrammetry datasets that were generated with Pix4Dmapper Pro, and with varying point densities. We show that off-the-shelf machine learning techniques coupled with our new features allow us to train highly accurate classifiers that generalize well to unseen data, processing point clouds containing 10 million points in less than 3 minutes on a desktop computer.
no_new_dataset
0.947332
1705.08409
Leye Wang
Leye Wang, Xu Geng, Jintao Ke, Chen Peng, Xiaojuan Ma, Daqing Zhang, Qiang Yang
Ridesourcing Car Detection by Transfer Learning
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ridesourcing platforms like Uber and Didi are getting more and more popular around the world. However, unauthorized ridesourcing activities taking advantage of the sharing economy can greatly impair the healthy development of this emerging industry. As the first step to regulate on-demand ride services and eliminate the black market, we design a method to detect ridesourcing cars from a pool of cars based on their trajectories. Since licensed ridesourcing car traces are not openly available and may be completely missing in some cities due to legal issues, we turn to transferring knowledge from public transport open data, i.e., taxis and buses, to ridesourcing detection among ordinary vehicles. We propose a two-stage transfer learning framework. In Stage 1, we take taxi and bus data as input to learn a random forest (RF) classifier using trajectory features shared by taxis/buses and ridesourcing/other cars. Then, we use the RF to label all the candidate cars. In Stage 2, leveraging the subset of high confident labels from the previous stage as input, we further learn a convolutional neural network (CNN) classifier for ridesourcing detection, and iteratively refine RF and CNN, as well as the feature set, via a co-training process. Finally, we use the resulting ensemble of RF and CNN to identify the ridesourcing cars in the candidate pool. Experiments on real car, taxi and bus traces show that our transfer learning framework, with no need of a pre-labeled ridesourcing dataset, can achieve similar accuracy as the supervised learning methods.
[ { "version": "v1", "created": "Tue, 23 May 2017 16:59:29 GMT" } ]
2017-05-24T00:00:00
[ [ "Wang", "Leye", "" ], [ "Geng", "Xu", "" ], [ "Ke", "Jintao", "" ], [ "Peng", "Chen", "" ], [ "Ma", "Xiaojuan", "" ], [ "Zhang", "Daqing", "" ], [ "Yang", "Qiang", "" ] ]
TITLE: Ridesourcing Car Detection by Transfer Learning ABSTRACT: Ridesourcing platforms like Uber and Didi are getting more and more popular around the world. However, unauthorized ridesourcing activities taking advantage of the sharing economy can greatly impair the healthy development of this emerging industry. As the first step to regulate on-demand ride services and eliminate the black market, we design a method to detect ridesourcing cars from a pool of cars based on their trajectories. Since licensed ridesourcing car traces are not openly available and may be completely missing in some cities due to legal issues, we turn to transferring knowledge from public transport open data, i.e., taxis and buses, to ridesourcing detection among ordinary vehicles. We propose a two-stage transfer learning framework. In Stage 1, we take taxi and bus data as input to learn a random forest (RF) classifier using trajectory features shared by taxis/buses and ridesourcing/other cars. Then, we use the RF to label all the candidate cars. In Stage 2, leveraging the subset of high confident labels from the previous stage as input, we further learn a convolutional neural network (CNN) classifier for ridesourcing detection, and iteratively refine RF and CNN, as well as the feature set, via a co-training process. Finally, we use the resulting ensemble of RF and CNN to identify the ridesourcing cars in the candidate pool. Experiments on real car, taxi and bus traces show that our transfer learning framework, with no need of a pre-labeled ridesourcing dataset, can achieve similar accuracy as the supervised learning methods.
no_new_dataset
0.944893
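The two-stage idea above, train on proxy-labeled data, keep only high-confidence predictions, and retrain on them, can be sketched with sklearn; the second-stage CNN is replaced by a second random forest here purely for brevity, and the confidence threshold is an assumption.

```python
# Two-stage pseudo-labeling sketch (the CNN stage is stood in for
# by a second random forest; threshold and forest sizes are assumed).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def two_stage(X_proxy, y_proxy, X_candidates, conf=0.9):
    # Stage 1: train on proxy-labeled data (e.g., taxi/bus trajectories).
    rf1 = RandomForestClassifier(n_estimators=100).fit(X_proxy, y_proxy)
    proba = rf1.predict_proba(X_candidates)
    keep = proba.max(axis=1) >= conf             # high-confidence pseudo-labels
    # Stage 2: retrain on the confident subset of the target domain.
    rf2 = RandomForestClassifier(n_estimators=100)
    rf2.fit(X_candidates[keep], proba[keep].argmax(axis=1))
    return rf2.predict(X_candidates)
```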
1506.06112
Ethan Rudd
Ethan M. Rudd, Lalit P. Jain, Walter J. Scheirer, Terrance E. Boult
The Extreme Value Machine
Pre-print of a manuscript accepted to the IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI) journal
null
10.1109/TPAMI.2017.2707495
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is often desirable to be able to recognize when inputs to a recognition function learned in a supervised manner correspond to classes unseen at training time. With this ability, new class labels could be assigned to these inputs by a human operator, allowing them to be incorporated into the recognition function --- ideally under an efficient incremental update mechanism. While good algorithms that assume inputs from a fixed set of classes exist, e.g., artificial neural networks and kernel machines, it is not immediately obvious how to extend them to perform incremental learning in the presence of unknown query classes. Existing algorithms take little to no distributional information into account when learning recognition functions and lack a strong theoretical foundation. We address this gap by formulating a novel, theoretically sound classifier --- the Extreme Value Machine (EVM). The EVM has a well-grounded interpretation derived from statistical Extreme Value Theory (EVT), and is the first classifier to be able to perform nonlinear kernel-free variable bandwidth incremental learning. Compared to other classifiers in the same deep network derived feature space, the EVM is accurate and efficient on an established benchmark partition of the ImageNet dataset.
[ { "version": "v1", "created": "Fri, 19 Jun 2015 19:04:54 GMT" }, { "version": "v2", "created": "Tue, 12 Jan 2016 00:21:24 GMT" }, { "version": "v3", "created": "Wed, 18 May 2016 00:57:06 GMT" }, { "version": "v4", "created": "Sun, 21 May 2017 01:47:04 GMT" } ]
2017-05-23T00:00:00
[ [ "Rudd", "Ethan M.", "" ], [ "Jain", "Lalit P.", "" ], [ "Scheirer", "Walter J.", "" ], [ "Boult", "Terrance E.", "" ] ]
TITLE: The Extreme Value Machine ABSTRACT: It is often desirable to be able to recognize when inputs to a recognition function learned in a supervised manner correspond to classes unseen at training time. With this ability, new class labels could be assigned to these inputs by a human operator, allowing them to be incorporated into the recognition function --- ideally under an efficient incremental update mechanism. While good algorithms that assume inputs from a fixed set of classes exist, e.g., artificial neural networks and kernel machines, it is not immediately obvious how to extend them to perform incremental learning in the presence of unknown query classes. Existing algorithms take little to no distributional information into account when learning recognition functions and lack a strong theoretical foundation. We address this gap by formulating a novel, theoretically sound classifier --- the Extreme Value Machine (EVM). The EVM has a well-grounded interpretation derived from statistical Extreme Value Theory (EVT), and is the first classifier to be able to perform nonlinear kernel-free variable bandwidth incremental learning. Compared to other classifiers in the same deep network derived feature space, the EVM is accurate and efficient on an established benchmark partition of the ImageNet dataset.
no_new_dataset
0.943504
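The EVT machinery behind the record above can be illustrated by fitting a Weibull to the tail of distances from a class anchor to its nearest negatives, then converting a query distance into an inclusion probability. The tail size and distance metric below are assumptions, and the real EVM's per-point margin modeling and model reduction are not shown.

```python
# EVT-flavored inclusion probability for one class anchor (illustrative only).
import numpy as np
from scipy.stats import weibull_min

def fit_tail(anchor, negatives, tail=20):
    """Fit a Weibull to the smallest distances from anchor to negatives."""
    d = np.sort(np.linalg.norm(negatives - anchor, axis=1))[:tail]
    shape, loc, scale = weibull_min.fit(d, floc=0.0)
    return shape, scale

def inclusion_prob(anchor, query, shape, scale):
    """Probability the query lies inside the fitted negative margin."""
    d = np.linalg.norm(query - anchor)
    return float(weibull_min.sf(d, shape, loc=0.0, scale=scale))  # survival fn
```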
1507.06951
David Weyburne
David Weyburne
A Cautionary Note on the Zagarola and Smits Similarity Parameter for the Turbulent Boundary Layer
18 pages, 11 figures, 1 appendix, Latest version offers improved readability
null
null
null
physics.flu-dyn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Zagarola and Smits developed an empirical velocity parameter for scaling the outer region of the turbulent boundary layer velocity profile that has been widely applied to experimental datasets. Plots of the scaled defect profiles indicate that most datasets display similar-like behavior using the Zagarola and Smits scaling parameter. In the work herein, it is shown that the common practice of finding similarity behavior using the defect profile is often incomplete in the sense that not all of the criteria for similarity have been checked for compliance. When full compliance is checked, it is found that most of the datasets which display defect similarity do not satisfy all the criteria required for similarity. The nature of this contradiction and noncompliance is described in detail. It is shown that the original datasets used by Zagarola and Smits display this flawed similarity behavior. Hence, a careful reassessment of any claims in the literature is required for those groups that attempted to use the defect profile and the Zagarola and Smits type of velocity scaling parameter to assert similarity of the velocity profile.
[ { "version": "v1", "created": "Thu, 23 Jul 2015 19:57:45 GMT" }, { "version": "v2", "created": "Mon, 31 Aug 2015 16:51:38 GMT" }, { "version": "v3", "created": "Tue, 15 Mar 2016 18:06:34 GMT" }, { "version": "v4", "created": "Sat, 20 May 2017 16:15:58 GMT" } ]
2017-05-23T00:00:00
[ [ "Weyburne", "David", "" ] ]
TITLE: A Cautionary Note on the Zagarola and Smits Similarity Parameter for the Turbulent Boundary Layer ABSTRACT: Zagarola and Smits developed an empirical velocity parameter for scaling the outer region of the turbulent boundary layer velocity profile that has been widely applied to experimental datasets. Plots of the scaled defect profiles indicate that most datasets display similar-like behavior using the Zagarola and Smits scaling parameter. In the work herein, it is shown that the common practice of finding similarity behavior using the defect profile is often incomplete in the sense that not all of the criteria for similarity have been checked for compliance. When full compliance is checked, it is found that most of the datasets which display defect similarity do not satisfy all the criteria required for similarity. The nature of this contradiction and noncompliance is described in detail. It is shown that the original datasets used by Zagarola and Smits display this flawed similarity behavior. Hence, a careful reassessment of any claims in the literature is required for those groups that attempted to use the defect profile and the Zagarola and Smits type of velocity scaling parameter to assert similarity of the velocity profile.
no_new_dataset
0.954095
1511.04511
Ziming Zhang
Ziming Zhang, Yun Liu, Xi Chen, Yanjun Zhu, Ming-Ming Cheng, Venkatesh Saligrama, and Philip H.S. Torr
Sequential Optimization for Efficient High-Quality Object Proposal Generation
Accepted by TPAMI
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We are motivated by the need for a generic object proposal generation algorithm which achieves a good balance between object detection recall, proposal localization quality and computational efficiency. We propose a novel object proposal algorithm, BING++, which inherits the virtue of the good computational efficiency of BING but significantly improves its proposal localization quality. At a high level, we formulate the problem of object proposal generation from a novel probabilistic perspective, based on which our BING++ manages to improve the localization quality by employing edges and segments to estimate object boundaries and update the proposals sequentially. We propose learning the parameters efficiently by searching for approximate solutions in a quantized parameter space for complexity reduction. We demonstrate the generalization of BING++ with the same fixed parameters across different object classes and datasets. Empirically, our BING++ runs at half the speed of BING on CPU, but significantly improves the localization quality, by 18.5% and 16.7% on the VOC2007 and Microsoft COCO datasets, respectively. Compared with other state-of-the-art approaches, BING++ achieves comparable performance, but runs significantly faster.
[ { "version": "v1", "created": "Sat, 14 Nov 2015 05:45:47 GMT" }, { "version": "v2", "created": "Thu, 28 Jul 2016 04:35:15 GMT" }, { "version": "v3", "created": "Mon, 22 May 2017 17:23:07 GMT" } ]
2017-05-23T00:00:00
[ [ "Zhang", "Ziming", "" ], [ "Liu", "Yun", "" ], [ "Chen", "Xi", "" ], [ "Zhu", "Yanjun", "" ], [ "Cheng", "Ming-Ming", "" ], [ "Saligrama", "Venkatesh", "" ], [ "Torr", "Philip H. S.", "" ] ]
TITLE: Sequential Optimization for Efficient High-Quality Object Proposal Generation ABSTRACT: We are motivated by the need for a generic object proposal generation algorithm which achieves a good balance between object detection recall, proposal localization quality and computational efficiency. We propose a novel object proposal algorithm, BING++, which inherits the virtue of the good computational efficiency of BING but significantly improves its proposal localization quality. At a high level, we formulate the problem of object proposal generation from a novel probabilistic perspective, based on which our BING++ manages to improve the localization quality by employing edges and segments to estimate object boundaries and update the proposals sequentially. We propose learning the parameters efficiently by searching for approximate solutions in a quantized parameter space for complexity reduction. We demonstrate the generalization of BING++ with the same fixed parameters across different object classes and datasets. Empirically, our BING++ runs at half the speed of BING on CPU, but significantly improves the localization quality, by 18.5% and 16.7% on the VOC2007 and Microsoft COCO datasets, respectively. Compared with other state-of-the-art approaches, BING++ achieves comparable performance, but runs significantly faster.
no_new_dataset
0.948537
1611.01484
Ankan Bansal
Ankan Bansal, Anirudh Nanduri, Carlos Castillo, Rajeev Ranjan, Rama Chellappa
UMDFaces: An Annotated Face Dataset for Training Deep Networks
Updates: Verified keypoints, removed duplicate subjects, released test protocol
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent progress in face detection (including keypoint detection), and recognition is mainly being driven by (i) deeper convolutional neural network architectures, and (ii) larger datasets. However, most of the large datasets are maintained by private companies and are not publicly available. The academic computer vision community needs larger and more varied datasets to make further progress. In this paper we introduce a new face dataset, called UMDFaces, which has 367,888 annotated faces of 8,277 subjects. We also introduce a new face recognition evaluation protocol which will help advance the state-of-the-art in this area. We discuss how a large dataset can be collected and annotated using human annotators and deep networks. We provide human curated bounding boxes for faces. We also provide estimated pose (roll, pitch and yaw), locations of twenty-one key-points and gender information generated by a pre-trained neural network. In addition, the quality of keypoint annotations has been verified by humans for about 115,000 images. Finally, we compare the quality of the dataset with other publicly available face datasets at similar scales.
[ { "version": "v1", "created": "Fri, 4 Nov 2016 18:37:41 GMT" }, { "version": "v2", "created": "Sun, 21 May 2017 08:00:42 GMT" } ]
2017-05-23T00:00:00
[ [ "Bansal", "Ankan", "" ], [ "Nanduri", "Anirudh", "" ], [ "Castillo", "Carlos", "" ], [ "Ranjan", "Rajeev", "" ], [ "Chellappa", "Rama", "" ] ]
TITLE: UMDFaces: An Annotated Face Dataset for Training Deep Networks ABSTRACT: Recent progress in face detection (including keypoint detection), and recognition is mainly being driven by (i) deeper convolutional neural network architectures, and (ii) larger datasets. However, most of the large datasets are maintained by private companies and are not publicly available. The academic computer vision community needs larger and more varied datasets to make further progress. In this paper we introduce a new face dataset, called UMDFaces, which has 367,888 annotated faces of 8,277 subjects. We also introduce a new face recognition evaluation protocol which will help advance the state-of-the-art in this area. We discuss how a large dataset can be collected and annotated using human annotators and deep networks. We provide human curated bounding boxes for faces. We also provide estimated pose (roll, pitch and yaw), locations of twenty-one key-points and gender information generated by a pre-trained neural network. In addition, the quality of keypoint annotations has been verified by humans for about 115,000 images. Finally, we compare the quality of the dataset with other publicly available face datasets at similar scales.
new_dataset
0.962462