Column          Type              Range / Values
id              stringlengths     9 to 16
submitter       stringlengths     3 to 64
authors         stringlengths     5 to 6.63k
title           stringlengths     7 to 245
comments        stringlengths     1 to 482
journal-ref     stringlengths     4 to 382
doi             stringlengths     9 to 151
report-no       stringclasses     984 values
categories      stringlengths     5 to 108
license         stringclasses     9 values
abstract        stringlengths     83 to 3.41k
versions        listlengths       1 to 20
update_date     timestamp[s]date  2007-05-23 00:00:00 to 2025-04-11 00:00:00
authors_parsed  sequencelengths   1 to 427
prompt          stringlengths     166 to 3.49k
label           stringclasses     2 values
prob            float64           0.5 to 0.98
1504.04596
Yadong Zhu
Yadong Zhu, Yanyan Lan, Jiafeng Guo, Xueqi Cheng
Structural Learning of Diverse Ranking
Discriminant Function, Diversity Feature, Learning Framework
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Relevance and diversity are both crucial criteria for an effective search system. In this paper, we propose a unified learning framework for simultaneously optimizing both relevance and diversity. Specifically, the problem is formalized as a structural learning framework optimizing Diversity-Correlated Evaluation Measures (DCEM), such as ERR-IA, α-NDCG and NRBP. Within this framework, the discriminant function is defined as a bi-criteria objective maximizing the sum of the relevance scores and the dissimilarities (or diversity) among the documents. Relevance and diversity features are utilized to define the relevance scores and dissimilarities, respectively. Compared with traditional methods, the advantages of our approach are that: (1) directly optimizing DCEM as the loss function is more fundamental for the task; (2) our framework does not rely on explicit diversity information such as subtopics, and is thus more adaptable to real applications; (3) representing diversity as a feature-based scoring function makes it more flexible to incorporate rich diversity-based features into the learning framework. Extensive experiments on the public TREC datasets show that our approach significantly outperforms state-of-the-art diversification approaches, which validates the above advantages.
[ { "version": "v1", "created": "Fri, 17 Apr 2015 18:23:16 GMT" }, { "version": "v2", "created": "Mon, 20 Apr 2015 09:59:00 GMT" } ]
2015-04-21T00:00:00
[ [ "Zhu", "Yadong", "" ], [ "Lan", "Yanyan", "" ], [ "Guo", "Jiafeng", "" ], [ "Cheng", "Xueqi", "" ] ]
TITLE: Structural Learning of Diverse Ranking ABSTRACT: Relevance and diversity are both crucial criteria for an effective search system. In this paper, we propose a unified learning framework for simultaneously optimizing both relevance and diversity. Specifically, the problem is formalized as a structural learning framework optimizing Diversity-Correlated Evaluation Measures (DCEM), such as ERR-IA, α-NDCG and NRBP. Within this framework, the discriminant function is defined as a bi-criteria objective maximizing the sum of the relevance scores and the dissimilarities (or diversity) among the documents. Relevance and diversity features are utilized to define the relevance scores and dissimilarities, respectively. Compared with traditional methods, the advantages of our approach are that: (1) directly optimizing DCEM as the loss function is more fundamental for the task; (2) our framework does not rely on explicit diversity information such as subtopics, and is thus more adaptable to real applications; (3) representing diversity as a feature-based scoring function makes it more flexible to incorporate rich diversity-based features into the learning framework. Extensive experiments on the public TREC datasets show that our approach significantly outperforms state-of-the-art diversification approaches, which validates the above advantages.
no_new_dataset
0.946794
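A note on the bi-criteria discriminant in 1504.04596: at inference time, objectives of the form "sum of relevance scores plus pairwise dissimilarities" are commonly maximized greedily. Below is a minimal Python sketch of such a greedy step, assuming a precomputed relevance vector and dissimilarity matrix; the `trade_off` weight and the greedy procedure are illustrative assumptions and do not reproduce the paper's structural learning of DCEM losses.

```python
import numpy as np

def greedy_diverse_ranking(rel, dissim, k, trade_off=0.5):
    """Greedily build a ranking trading relevance against diversity.

    rel       : (n,) relevance score per document
    dissim    : (n, n) pairwise dissimilarity matrix
    k         : number of documents to rank
    trade_off : weight on the diversity term (hypothetical parameter)
    """
    selected, remaining = [], list(range(len(rel)))
    for _ in range(k):
        best, best_score = None, -np.inf
        for d in remaining:
            diversity = sum(dissim[d, s] for s in selected)
            score = rel[d] + trade_off * diversity
            if score > best_score:
                best, best_score = d, score
        selected.append(best)
        remaining.remove(best)
    return selected

# toy usage
rng = np.random.default_rng(0)
rel = rng.random(10)
d = rng.random((10, 10)); d = (d + d.T) / 2  # symmetric dissimilarities
print(greedy_diverse_ranking(rel, d, k=5))
```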
1504.04646
Kwetishe Danjuma
Kwetishe Joro Danjuma
Performance Evaluation of Machine Learning Algorithms in Post-operative Life Expectancy in the Lung Cancer Patients
11 pages, 3 figures, 2 tables, ISSN (Print): 1694-0814 | ISSN (Online): 1694-0784
IJCSI International Journal of Computer Science Issues, Volume 12, Issue 2, March 2015
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The nature of clinical data makes it difficult to quickly select, tune and apply machine learning algorithms to clinical prognosis. As a result, a lot of time is spent searching for the machine learning algorithms most appropriate for clinical prognosis problems that contain either binary-valued or multi-valued attributes. The study set out to identify and evaluate the performance of machine learning classification schemes applied to the clinical prognosis of post-operative life expectancy in lung cancer patients. The Multilayer Perceptron, J48, and Naive Bayes algorithms were used to train and test models on Thoracic Surgery datasets obtained from the University of California Irvine machine learning repository. Stratified 10-fold cross-validation was used to evaluate the baseline performance accuracy of the classifiers. The comparative analysis shows that the multilayer perceptron performed best, with a classification accuracy of 82.3%; J48 came second, with a classification accuracy of 81.8%; and Naive Bayes came last, with a classification accuracy of 74.4%. The quality and outcome of the chosen machine learning algorithms depend on the ingenuity of the clinical miner.
[ { "version": "v1", "created": "Fri, 17 Apr 2015 22:05:34 GMT" } ]
2015-04-21T00:00:00
[ [ "Danjuma", "Kwetishe Joro", "" ] ]
TITLE: Performance Evaluation of Machine Learning Algorithms in Post-operative Life Expectancy in the Lung Cancer Patients ABSTRACT: The nature of clinical data makes it difficult to quickly select, tune and apply machine learning algorithms to clinical prognosis. As a result, a lot of time is spent searching for the machine learning algorithms most appropriate for clinical prognosis problems that contain either binary-valued or multi-valued attributes. The study set out to identify and evaluate the performance of machine learning classification schemes applied to the clinical prognosis of post-operative life expectancy in lung cancer patients. The Multilayer Perceptron, J48, and Naive Bayes algorithms were used to train and test models on Thoracic Surgery datasets obtained from the University of California Irvine machine learning repository. Stratified 10-fold cross-validation was used to evaluate the baseline performance accuracy of the classifiers. The comparative analysis shows that the multilayer perceptron performed best, with a classification accuracy of 82.3%; J48 came second, with a classification accuracy of 81.8%; and Naive Bayes came last, with a classification accuracy of 74.4%. The quality and outcome of the chosen machine learning algorithms depend on the ingenuity of the clinical miner.
no_new_dataset
0.946941
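The evaluation protocol in 1504.04646 (three classifiers compared under stratified 10-fold cross-validation) can be reproduced in outline with scikit-learn. A hedged sketch follows: Weka's J48 (C4.5) is approximated here by sklearn's CART `DecisionTreeClassifier`, and a built-in dataset stands in for the Thoracic Surgery data, which would need to be fetched from the UCI repository separately.

```python
from sklearn.datasets import load_breast_cancer  # stand-in for the Thoracic Surgery data
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
models = {
    "Multilayer Perceptron": MLPClassifier(max_iter=1000, random_state=0),
    "Decision tree (J48 stand-in)": DecisionTreeClassifier(random_state=0),
    "Naive Bayes": GaussianNB(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```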
1504.04730
Sourav Bhattacharya
Sourav Bhattacharya and Otto Huhta and N. Asokan
LookAhead: Augmenting Crowdsourced Website Reputation Systems With Predictive Modeling
12 pages
null
null
null
cs.CR cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unsafe websites consist of malicious as well as inappropriate sites, such as those hosting questionable or offensive content. Website reputation systems are intended to help ordinary users steer away from these unsafe sites. However, the process of assigning safety ratings for websites typically involves humans. Consequently, it is time-consuming, costly and not scalable. This has resulted in two major problems: (i) a significant proportion of the web space remains unrated and (ii) there is an unacceptable time lag before new websites are rated. In this paper, we show that by leveraging structural and content-based properties of websites, it is possible to reliably and efficiently predict their safety ratings, thereby mitigating both problems. We demonstrate the effectiveness of our approach using four datasets of up to 90,000 websites. We use ratings from Web of Trust (WOT), a popular crowdsourced web reputation system, as ground truth. We propose a novel ensemble classification technique that makes opportunistic use of available structural and content properties of webpages to predict their eventual ratings in two dimensions used by WOT: trustworthiness and child safety. Ours is the first classification system to predict such subjective ratings, and the same approach works equally well in identifying malicious websites. Across all datasets, our classification performs well, with an average F$_1$-score in the 74--90\% range.
[ { "version": "v1", "created": "Sat, 18 Apr 2015 15:13:13 GMT" } ]
2015-04-21T00:00:00
[ [ "Bhattacharya", "Sourav", "" ], [ "Huhta", "Otto", "" ], [ "Asokan", "N.", "" ] ]
TITLE: LookAhead: Augmenting Crowdsourced Website Reputation Systems With Predictive Modeling ABSTRACT: Unsafe websites consist of malicious as well as inappropriate sites, such as those hosting questionable or offensive content. Website reputation systems are intended to help ordinary users steer away from these unsafe sites. However, the process of assigning safety ratings for websites typically involves humans. Consequently, it is time-consuming, costly and not scalable. This has resulted in two major problems: (i) a significant proportion of the web space remains unrated and (ii) there is an unacceptable time lag before new websites are rated. In this paper, we show that by leveraging structural and content-based properties of websites, it is possible to reliably and efficiently predict their safety ratings, thereby mitigating both problems. We demonstrate the effectiveness of our approach using four datasets of up to 90,000 websites. We use ratings from Web of Trust (WOT), a popular crowdsourced web reputation system, as ground truth. We propose a novel ensemble classification technique that makes opportunistic use of available structural and content properties of webpages to predict their eventual ratings in two dimensions used by WOT: trustworthiness and child safety. Ours is the first classification system to predict such subjective ratings, and the same approach works equally well in identifying malicious websites. Across all datasets, our classification performs well, with an average F$_1$-score in the 74--90\% range.
no_new_dataset
0.949716
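The abstract of 1504.04730 does not detail the ensemble's internals, so the sketch below only illustrates the general idea of "opportunistic use of available properties": one classifier per feature group, averaging the scores of whichever groups are present for a given site. The feature groups, the random forests, and the synthetic data are all stand-in assumptions, not the paper's method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)  # toy labels (e.g. safe / unsafe)
groups = {"structural": rng.random((200, 5)), "content": rng.random((200, 8))}

# One classifier per feature group.
models = {g: RandomForestClassifier(random_state=0).fit(X, y)
          for g, X in groups.items()}

def opportunistic_score(models, available):
    """available: dict group -> feature matrix, or None if that group is missing.
    Averages the probabilistic scores of whichever groups are present."""
    scores = [models[g].predict_proba(X)[:, 1]
              for g, X in available.items() if X is not None]
    return np.mean(scores, axis=0)

# Content features missing for these sites: fall back to the structural model.
print(opportunistic_score(models, {"structural": groups["structural"][:3],
                                   "content": None}))
```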
1504.04739
Wojciech Czarnecki
Rafal Jozefowicz, Wojciech Marian Czarnecki
Fast optimization of Multithreshold Entropy Linear Classifier
Presented at Theoretical Foundations of Machine Learning 2015 (http://tfml.gmum.net), final version published in Schedae Informaticae Journal
null
10.4467/20838476SI.14.005.3022
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multithreshold Entropy Linear Classifier (MELC) is a density-based model which searches for a linear projection maximizing the Cauchy-Schwarz Divergence of the dataset's kernel density estimation. Despite its good empirical results, one of its drawbacks is the optimization speed. In this paper we analyze how one can speed it up by solving an approximate problem. We analyze two methods, both similar to approximate solutions for Kernel Density Estimation querying, and provide adaptive schemes for selecting the crucial parameters based on a user-specified acceptable error. Furthermore, we show how one can exploit the well-known conjugate gradient and L-BFGS optimizers despite the fact that the original optimization problem should be solved on the sphere. All of the above methods and modifications are tested on 10 real-life datasets from the UCI repository to confirm their practical usability.
[ { "version": "v1", "created": "Sat, 18 Apr 2015 16:19:22 GMT" } ]
2015-04-21T00:00:00
[ [ "Jozefowicz", "Rafal", "" ], [ "Czarnecki", "Wojciech Marian", "" ] ]
TITLE: Fast optimization of Multithreshold Entropy Linear Classifier ABSTRACT: Multithreshold Entropy Linear Classifier (MELC) is a density-based model which searches for a linear projection maximizing the Cauchy-Schwarz Divergence of the dataset's kernel density estimation. Despite its good empirical results, one of its drawbacks is the optimization speed. In this paper we analyze how one can speed it up by solving an approximate problem. We analyze two methods, both similar to approximate solutions for Kernel Density Estimation querying, and provide adaptive schemes for selecting the crucial parameters based on a user-specified acceptable error. Furthermore, we show how one can exploit the well-known conjugate gradient and L-BFGS optimizers despite the fact that the original optimization problem should be solved on the sphere. All of the above methods and modifications are tested on 10 real-life datasets from the UCI repository to confirm their practical usability.
no_new_dataset
0.94743
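On the sphere-constrained optimization mentioned in 1504.04739: one common way to hand such a problem to an unconstrained optimizer like L-BFGS is to evaluate the objective at w/||w||, making it scale-invariant. This is a generic reparameterization sketch, not necessarily the authors' exact scheme; the quadratic toy objective is an assumption.

```python
import numpy as np
from scipy.optimize import minimize

def objective_on_sphere(f):
    """Wrap an objective defined on the unit sphere so an unconstrained
    optimizer (e.g. L-BFGS) can be used: evaluate f at w / ||w||."""
    def wrapped(w):
        return f(w / np.linalg.norm(w))
    return wrapped

# toy example: minimize a quadratic form over unit vectors
rng = np.random.default_rng(0)
A = rng.random((5, 5)); A = A @ A.T
f = lambda v: v @ A @ v
res = minimize(objective_on_sphere(f), x0=rng.random(5), method="L-BFGS-B")
v = res.x / np.linalg.norm(res.x)  # project the solution back onto the sphere
print(v, f(v))
```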
1504.04740
Wojciech Czarnecki
Wojciech Marian Czarnecki
On the consistency of Multithreshold Entropy Linear Classifier
Presented at Theoretical Foundations of Machine Learning 2015 (http://tfml.gmum.net), final version published in Schedae Informaticae Journal
null
10.4467/20838476SI.15.012.3034
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multithreshold Entropy Linear Classifier (MELC) is a recent classifier idea which employs information-theoretic concepts to create a multithreshold maximum-margin model. In this paper we analyze its consistency over multithreshold linear models and show that its objective function upper bounds the number of misclassified points in a manner similar to the hinge loss in support vector machines. For further confirmation, we also conduct numerical experiments on five datasets.
[ { "version": "v1", "created": "Sat, 18 Apr 2015 16:29:26 GMT" } ]
2015-04-21T00:00:00
[ [ "Czarnecki", "Wojciech Marian", "" ] ]
TITLE: On the consistency of Multithreshold Entropy Linear Classifier ABSTRACT: Multithreshold Entropy Linear Classifier (MELC) is a recent classifier idea which employs information-theoretic concepts to create a multithreshold maximum-margin model. In this paper we analyze its consistency over multithreshold linear models and show that its objective function upper bounds the number of misclassified points in a manner similar to the hinge loss in support vector machines. For further confirmation, we also conduct numerical experiments on five datasets.
no_new_dataset
0.955693
1504.04871
Sukrit Shankar
Sukrit Shankar, Vikas K. Garg, Roberto Cipolla
DEEP-CARVING: Discovering Visual Attributes by Carving Deep Neural Nets
10 pages, 8 figures, CVPR 2015
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most approaches for discovering visual attributes in images demand significant supervision, which is cumbersome to obtain. In this paper, we aim to discover visual attributes in a weakly supervised setting that is commonly encountered with contemporary image search engines. Deep Convolutional Neural Networks (CNNs) have enjoyed remarkable success in vision applications recently. However, in a weakly supervised scenario, widely used CNN training procedures do not learn a robust model for predicting multiple attribute labels simultaneously. The primary reason is that the attributes highly co-occur within the training data. To ameliorate this limitation, we propose Deep-Carving, a novel training procedure for CNNs that helps the net efficiently carve itself for the task of multiple attribute prediction. During training, the responses of the feature maps are exploited in an ingenious way to provide the net with multiple pseudo-labels (for training images) for subsequent iterations. The process is repeated periodically after a fixed number of iterations, and enables the net to carve itself iteratively for efficiently disentangling features. Additionally, we contribute a noun-adjective pairing inspired Natural Scenes Attributes Dataset, CAMIT-NSAD, to the research community, containing a number of co-occurring attributes within a noun category. We describe, in detail, salient aspects of this dataset. Our experiments on CAMIT-NSAD and the SUN Attributes Dataset, with weak supervision, clearly demonstrate that the Deep-Carved CNNs consistently achieve considerable improvement in the precision of attribute prediction over popular baseline methods.
[ { "version": "v1", "created": "Sun, 19 Apr 2015 18:56:52 GMT" } ]
2015-04-21T00:00:00
[ [ "Shankar", "Sukrit", "" ], [ "Garg", "Vikas K.", "" ], [ "Cipolla", "Roberto", "" ] ]
TITLE: DEEP-CARVING: Discovering Visual Attributes by Carving Deep Neural Nets ABSTRACT: Most approaches for discovering visual attributes in images demand significant supervision, which is cumbersome to obtain. In this paper, we aim to discover visual attributes in a weakly supervised setting that is commonly encountered with contemporary image search engines. Deep Convolutional Neural Networks (CNNs) have enjoyed remarkable success in vision applications recently. However, in a weakly supervised scenario, widely used CNN training procedures do not learn a robust model for predicting multiple attribute labels simultaneously. The primary reason is that the attributes highly co-occur within the training data. To ameliorate this limitation, we propose Deep-Carving, a novel training procedure for CNNs that helps the net efficiently carve itself for the task of multiple attribute prediction. During training, the responses of the feature maps are exploited in an ingenious way to provide the net with multiple pseudo-labels (for training images) for subsequent iterations. The process is repeated periodically after a fixed number of iterations, and enables the net to carve itself iteratively for efficiently disentangling features. Additionally, we contribute a noun-adjective pairing inspired Natural Scenes Attributes Dataset, CAMIT-NSAD, to the research community, containing a number of co-occurring attributes within a noun category. We describe, in detail, salient aspects of this dataset. Our experiments on CAMIT-NSAD and the SUN Attributes Dataset, with weak supervision, clearly demonstrate that the Deep-Carved CNNs consistently achieve considerable improvement in the precision of attribute prediction over popular baseline methods.
no_new_dataset
0.717631
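The exact rule by which Deep-Carving (1504.04871) turns feature-map responses into pseudo-labels is not given in the abstract; the sketch below is an illustrative stand-in that thresholds the net's current attribute probabilities into confident pseudo-labels while never overwriting observed weak labels. The threshold values are assumptions.

```python
import numpy as np

def carve_pseudo_labels(probs, known_mask, known_labels, pos_thr=0.8, neg_thr=0.2):
    """Derive pseudo-labels from the net's current attribute predictions.

    probs        : (n_images, n_attributes) sigmoid outputs
    known_mask   : boolean mask of weak labels that are actually observed
    known_labels : the observed labels (kept as-is)
    Unobserved entries become 1/0 pseudo-labels only when the net is
    confident; the rest stay unlabeled (NaN) for subsequent iterations.
    """
    pseudo = np.full(probs.shape, np.nan)
    pseudo[probs >= pos_thr] = 1.0
    pseudo[probs <= neg_thr] = 0.0
    pseudo[known_mask] = known_labels[known_mask]  # never overwrite real labels
    return pseudo

# toy usage
probs = np.array([[0.95, 0.50, 0.10]])
mask = np.array([[False, True, False]])
labels = np.array([[0.0, 1.0, 0.0]])
print(carve_pseudo_labels(probs, mask, labels))  # [[1. 1. 0.]]
```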
1504.04923
Chunhua Shen
Ruizhi Qiao, Lingqiao Liu, Chunhua Shen, and Anton von den Hengel
Learning discriminative trajectorylet detector sets for accurate skeleton-based action recognition
10 pages
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The introduction of low-cost RGB-D sensors has promoted research in skeleton-based human action recognition. Devising a representation suitable for characterising actions on the basis of noisy skeleton sequences remains a challenge, however. We here provide two insights into this challenge. First, we show that the discriminative information of a skeleton sequence usually resides in a short temporal interval, and we propose a simple-but-effective local descriptor called a trajectorylet to capture the static and kinematic information within this interval. Second, we further propose to encode each trajectorylet with a discriminative trajectorylet detector set, which is selected from a large number of candidate detectors trained through exemplar-SVMs. The action-level representation is obtained by pooling trajectorylet encodings. Evaluated on standard datasets acquired from the Kinect sensor, our method is demonstrated to obtain superior results over existing approaches under various experimental setups.
[ { "version": "v1", "created": "Mon, 20 Apr 2015 02:41:03 GMT" } ]
2015-04-21T00:00:00
[ [ "Qiao", "Ruizhi", "" ], [ "Liu", "Lingqiao", "" ], [ "Shen", "Chunhua", "" ], [ "Hengel", "Anton von den", "" ] ]
TITLE: Learning discriminative trajectorylet detector sets for accurate skeleton-based action recognition ABSTRACT: The introduction of low-cost RGB-D sensors has promoted research in skeleton-based human action recognition. Devising a representation suitable for characterising actions on the basis of noisy skeleton sequences remains a challenge, however. We here provide two insights into this challenge. First, we show that the discriminative information of a skeleton sequence usually resides in a short temporal interval, and we propose a simple-but-effective local descriptor called a trajectorylet to capture the static and kinematic information within this interval. Second, we further propose to encode each trajectorylet with a discriminative trajectorylet detector set, which is selected from a large number of candidate detectors trained through exemplar-SVMs. The action-level representation is obtained by pooling trajectorylet encodings. Evaluated on standard datasets acquired from the Kinect sensor, our method is demonstrated to obtain superior results over existing approaches under various experimental setups.
no_new_dataset
0.944228
1504.04945
Yadong Zhu
Yadong Zhu, Yanyan Lan, Jiafeng Guo, Xueqi Cheng
Topic-focused Dynamic Information Filtering in Social Media
null
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the quick development of online social media such as Twitter or Sina Weibo in China, many users track hot topics to satisfy their information needs. For a hot topic, new opinions or ideas are continuously produced in the form of an online data stream. In this scenario, how to effectively filter and display information for a certain topic dynamically is a critical problem. We call this problem Topic-focused Dynamic Information Filtering (TDIF for short) in social media. In this paper, we open a discussion of such application problems. We first analyze the properties of the TDIF problem, which usually involves several typical requirements: relevance, diversity, recency and confidence. Recency means that users want to follow the most recent opinions or news. Additionally, the confidence of information must be taken into consideration. How to balance these factors properly in an online data stream is very important and challenging. We propose a dynamic preservation strategy, built on an existing feature-based utility function, to solve the TDIF problem. Additionally, we propose new dynamic diversity measures to obtain a more reasonable evaluation for such application problems. Extensive exploratory experiments have been conducted on a public TREC Twitter dataset, and the experimental results validate the effectiveness of our approach.
[ { "version": "v1", "created": "Mon, 20 Apr 2015 06:12:06 GMT" } ]
2015-04-21T00:00:00
[ [ "Zhu", "Yadong", "" ], [ "Lan", "Yanyan", "" ], [ "Guo", "Jiafeng", "" ], [ "Cheng", "Xueqi", "" ] ]
TITLE: Topic-focused Dynamic Information Filtering in Social Media ABSTRACT: With the quick development of online social media such as Twitter or Sina Weibo in China, many users track hot topics to satisfy their information needs. For a hot topic, new opinions or ideas are continuously produced in the form of an online data stream. In this scenario, how to effectively filter and display information for a certain topic dynamically is a critical problem. We call this problem Topic-focused Dynamic Information Filtering (TDIF for short) in social media. In this paper, we open a discussion of such application problems. We first analyze the properties of the TDIF problem, which usually involves several typical requirements: relevance, diversity, recency and confidence. Recency means that users want to follow the most recent opinions or news. Additionally, the confidence of information must be taken into consideration. How to balance these factors properly in an online data stream is very important and challenging. We propose a dynamic preservation strategy, built on an existing feature-based utility function, to solve the TDIF problem. Additionally, we propose new dynamic diversity measures to obtain a more reasonable evaluation for such application problems. Extensive exploratory experiments have been conducted on a public TREC Twitter dataset, and the experimental results validate the effectiveness of our approach.
no_new_dataset
0.952131
1504.05035
Wangmeng Zuo
Xiaohe Wu, Wangmeng Zuo, Yuanyuan Zhu, Liang Lin
F-SVM: Combination of Feature Transformation and SVM Learning via Convex Relaxation
11 pages, 5 figures
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The generalization error bound of the support vector machine (SVM) depends on the ratio of the radius and the margin, while the standard SVM only considers the maximization of the margin and ignores the minimization of the radius. Several approaches have been proposed to integrate radius and margin for the joint learning of a feature transformation and an SVM classifier. However, most of them either require the form of the transformation matrix to be diagonal, or are non-convex and computationally expensive. In this paper, we suggest a novel approximation for the radius of the minimum enclosing ball (MEB) in feature space, and then propose a convex radius-margin based SVM model for the joint learning of the feature transformation and the SVM classifier, i.e., F-SVM. An alternating minimization method is adopted to solve the F-SVM model, where the feature transformation is updated via gradient descent and the classifier is updated by employing the existing SVM solver. By incorporating kernel principal component analysis, F-SVM is further extended for the joint learning of a nonlinear transformation and classifier. Experimental results on the UCI machine learning datasets and the LFW face datasets show that F-SVM outperforms the standard SVM and the existing radius-margin based SVMs, e.g., RMM, R-SVM+ and R-SVM+{\mu}.
[ { "version": "v1", "created": "Mon, 20 Apr 2015 12:36:50 GMT" } ]
2015-04-21T00:00:00
[ [ "Wu", "Xiaohe", "" ], [ "Zuo", "Wangmeng", "" ], [ "Zhu", "Yuanyuan", "" ], [ "Lin", "Liang", "" ] ]
TITLE: F-SVM: Combination of Feature Transformation and SVM Learning via Convex Relaxation ABSTRACT: The generalization error bound of the support vector machine (SVM) depends on the ratio of the radius and the margin, while the standard SVM only considers the maximization of the margin and ignores the minimization of the radius. Several approaches have been proposed to integrate radius and margin for the joint learning of a feature transformation and an SVM classifier. However, most of them either require the form of the transformation matrix to be diagonal, or are non-convex and computationally expensive. In this paper, we suggest a novel approximation for the radius of the minimum enclosing ball (MEB) in feature space, and then propose a convex radius-margin based SVM model for the joint learning of the feature transformation and the SVM classifier, i.e., F-SVM. An alternating minimization method is adopted to solve the F-SVM model, where the feature transformation is updated via gradient descent and the classifier is updated by employing the existing SVM solver. By incorporating kernel principal component analysis, F-SVM is further extended for the joint learning of a nonlinear transformation and classifier. Experimental results on the UCI machine learning datasets and the LFW face datasets show that F-SVM outperforms the standard SVM and the existing radius-margin based SVMs, e.g., RMM, R-SVM+ and R-SVM+{\mu}.
no_new_dataset
0.954265
1305.1495
Lorenzo Del Castello
Alessandro Attanasi, Andrea Cavagna, Lorenzo Del Castello, Irene Giardina, Asja Jelic, Stefania Melillo, Leonardo Parisi, Fabio Pellacini, Edward Shen, Edmondo Silvestri, Massimiliano Viale
GReTA - a novel Global and Recursive Tracking Algorithm in three dimensions
13 pages, 6 figures, 3 tables. Version 3 was slightly shortened, and new comparative results on the public datasets (thermal infrared videos of flying bats) by Z. Wu and coworkers (2014) were included. Published in A. Attanasi et al., "GReTA - A Novel Global and Recursive Tracking Algorithm in Three Dimensions", IEEE Trans. Pattern Anal. Mach. Intell., vol. 37 (2015)
null
10.1109/TPAMI.2015.2414427
null
q-bio.QM cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tracking multiple moving targets allows quantitative measurement of the dynamic behavior in systems as diverse as animal groups in biology, turbulence in fluid dynamics, and crowd and traffic control. In three dimensions, tracking several targets becomes increasingly hard since optical occlusions are very likely, i.e. two featureless targets frequently overlap for several frames. Occlusions are particularly frequent in biological groups such as bird flocks, fish schools, and insect swarms, a fact that has severely limited collective animal behavior field studies in the past. This paper presents a 3D tracking method that is robust in the case of severe occlusions. To ensure robustness, we adopt a global optimization approach that works on all objects and frames at once. To achieve practicality and scalability, we employ a divide-and-conquer formulation, thanks to which the computational complexity of the problem is reduced by orders of magnitude. We tested our algorithm with synthetic data, with experimental data of bird flocks and insect swarms, and with public benchmark datasets, and show that our system yields high-quality trajectories for hundreds of moving targets with severe overlap. The results obtained on very heterogeneous data show the potential applicability of our method to the most diverse experimental situations.
[ { "version": "v1", "created": "Tue, 7 May 2013 12:45:30 GMT" }, { "version": "v2", "created": "Thu, 24 Apr 2014 14:55:59 GMT" }, { "version": "v3", "created": "Fri, 17 Apr 2015 16:36:59 GMT" } ]
2015-04-20T00:00:00
[ [ "Attanasi", "Alessandro", "" ], [ "Cavagna", "Andrea", "" ], [ "Del Castello", "Lorenzo", "" ], [ "Giardina", "Irene", "" ], [ "Jelic", "Asja", "" ], [ "Melillo", "Stefania", "" ], [ "Parisi", "Leonardo", "" ], [ "Pellacini", "Fabio", "" ], [ "Shen", "Edward", "" ], [ "Silvestri", "Edmondo", "" ], [ "Viale", "Massimiliano", "" ] ]
TITLE: GReTA - a novel Global and Recursive Tracking Algorithm in three dimensions ABSTRACT: Tracking multiple moving targets allows quantitative measurement of the dynamic behavior in systems as diverse as animal groups in biology, turbulence in fluid dynamics, and crowd and traffic control. In three dimensions, tracking several targets becomes increasingly hard since optical occlusions are very likely, i.e. two featureless targets frequently overlap for several frames. Occlusions are particularly frequent in biological groups such as bird flocks, fish schools, and insect swarms, a fact that has severely limited collective animal behavior field studies in the past. This paper presents a 3D tracking method that is robust in the case of severe occlusions. To ensure robustness, we adopt a global optimization approach that works on all objects and frames at once. To achieve practicality and scalability, we employ a divide-and-conquer formulation, thanks to which the computational complexity of the problem is reduced by orders of magnitude. We tested our algorithm with synthetic data, with experimental data of bird flocks and insect swarms, and with public benchmark datasets, and show that our system yields high-quality trajectories for hundreds of moving targets with severe overlap. The results obtained on very heterogeneous data show the potential applicability of our method to the most diverse experimental situations.
no_new_dataset
0.950869
1408.5661
Keisuke Yamazaki
Keisuke Yamazaki
Asymptotic Accuracy of Bayesian Estimation for a Single Latent Variable
28 pages, 3 figures
null
null
null
stat.ML cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In data science and machine learning, hierarchical parametric models, such as mixture models, are often used. They contain two kinds of variables: observable variables, which represent the parts of the data that can be directly measured, and latent variables, which represent the underlying processes that generate the data. Although there has been an increase in research on the estimation accuracy for observable variables, the theoretical analysis of estimating latent variables has not been thoroughly investigated. In a previous study, we determined the accuracy of a Bayes estimation for the joint probability of the latent variables in a dataset, and we proved that the Bayes method is asymptotically more accurate than the maximum-likelihood method. However, the accuracy of the Bayes estimation for a single latent variable remains unknown. In the present paper, we derive the asymptotic expansions of the error functions, which are defined by the Kullback-Leibler divergence, for two types of single-variable estimations when the statistical regularity is satisfied. Our results indicate that the accuracies of the Bayes and maximum-likelihood methods are asymptotically equivalent and clarify that the Bayes method is only advantageous for multivariable estimations.
[ { "version": "v1", "created": "Mon, 25 Aug 2014 04:44:53 GMT" }, { "version": "v2", "created": "Tue, 27 Jan 2015 04:00:06 GMT" }, { "version": "v3", "created": "Fri, 17 Apr 2015 06:59:26 GMT" } ]
2015-04-20T00:00:00
[ [ "Yamazaki", "Keisuke", "" ] ]
TITLE: Asymptotic Accuracy of Bayesian Estimation for a Single Latent Variable ABSTRACT: In data science and machine learning, hierarchical parametric models, such as mixture models, are often used. They contain two kinds of variables: observable variables, which represent the parts of the data that can be directly measured, and latent variables, which represent the underlying processes that generate the data. Although there has been an increase in research on the estimation accuracy for observable variables, the theoretical analysis of estimating latent variables has not been thoroughly investigated. In a previous study, we determined the accuracy of a Bayes estimation for the joint probability of the latent variables in a dataset, and we proved that the Bayes method is asymptotically more accurate than the maximum-likelihood method. However, the accuracy of the Bayes estimation for a single latent variable remains unknown. In the present paper, we derive the asymptotic expansions of the error functions, which are defined by the Kullback-Leibler divergence, for two types of single-variable estimations when the statistical regularity is satisfied. Our results indicate that the accuracies of the Bayes and maximum-likelihood methods are asymptotically equivalent and clarify that the Bayes method is only advantageous for multivariable estimations.
no_new_dataset
0.947914
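For 1408.5661, the abstract defines the error functions via the Kullback-Leibler divergence but gives no formulas; one plausible form for the single-latent-variable case, with notation assumed here rather than taken from the paper, is:

```latex
% Assumed notation: q is the true conditional of the latent variable z given
% an observation x; \hat{p} is the Bayes or maximum-likelihood estimate
% computed from the n training samples X^n.
D(n) \;=\; \mathbb{E}\!\left[\, \sum_{z} q(z \mid x)\, \ln \frac{q(z \mid x)}{\hat{p}(z \mid x, X^n)} \,\right]
```

The paper's asymptotic expansions compare how D(n) decays for the Bayes and maximum-likelihood estimates as n grows; the exact definitions should be taken from the paper itself.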
1412.5083
Qiang Qiu
Qiang Qiu, Guillermo Sapiro, Alex Bronstein
Random Forests Can Hash
null
null
null
null
cs.CV cs.IR cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hash codes are a very efficient data representation needed to cope with the ever-growing amounts of data. We introduce a random forest semantic hashing scheme with information-theoretic code aggregation, showing for the first time how random forest, a technique that together with deep learning has shown spectacular results in classification, can also be extended to large-scale retrieval. Traditional random forest fails to enforce the consistency of hashes generated from each tree for the same class data, i.e., to preserve the underlying similarity, and it also lacks a principled way of aggregating codes across trees. We start with a simple hashing scheme, where independently trained random trees in a forest act as hashing functions. We then propose a subspace model as the splitting function, and show that it enforces hash consistency in a tree for data from the same class. We also introduce an information-theoretic approach for aggregating the codes of individual trees into a single hash code, producing a near-optimal unique hash for each class. Experiments on large-scale public datasets are presented, showing that the proposed approach significantly outperforms state-of-the-art hashing methods for retrieval tasks.
[ { "version": "v1", "created": "Tue, 16 Dec 2014 17:02:18 GMT" }, { "version": "v2", "created": "Tue, 24 Feb 2015 18:26:12 GMT" }, { "version": "v3", "created": "Fri, 17 Apr 2015 01:00:24 GMT" } ]
2015-04-20T00:00:00
[ [ "Qiu", "Qiang", "" ], [ "Sapiro", "Guillermo", "" ], [ "Bronstein", "Alex", "" ] ]
TITLE: Random Forests Can Hash ABSTRACT: Hash codes are a very efficient data representation needed to cope with the ever-growing amounts of data. We introduce a random forest semantic hashing scheme with information-theoretic code aggregation, showing for the first time how random forest, a technique that together with deep learning has shown spectacular results in classification, can also be extended to large-scale retrieval. Traditional random forest fails to enforce the consistency of hashes generated from each tree for the same class data, i.e., to preserve the underlying similarity, and it also lacks a principled way of aggregating codes across trees. We start with a simple hashing scheme, where independently trained random trees in a forest act as hashing functions. We then propose a subspace model as the splitting function, and show that it enforces hash consistency in a tree for data from the same class. We also introduce an information-theoretic approach for aggregating the codes of individual trees into a single hash code, producing a near-optimal unique hash for each class. Experiments on large-scale public datasets are presented, showing that the proposed approach significantly outperforms state-of-the-art hashing methods for retrieval tasks.
no_new_dataset
0.949435
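The starting point of 1412.5083, independently trained trees acting as hash functions, can be demonstrated with scikit-learn: `forest.apply` returns the leaf index each sample lands in, per tree, and those indices form a crude per-tree code. The subspace splitting function and the information-theoretic code aggregation that the paper actually contributes are not reproduced here.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier

# Each tree acts as a hashing function: the leaf index an input lands in is
# its per-tree code; concatenating the per-tree codes gives the hash.
X, y = load_digits(return_X_y=True)
forest = RandomForestClassifier(n_estimators=8, random_state=0).fit(X, y)
leaves = forest.apply(X)          # (n_samples, n_trees) leaf indices
print(leaves[0])                  # the 8-part hash of the first image

# Items whose codes agree on many trees are likely semantically similar.
same = (leaves[0] == leaves).sum(axis=1)
print(np.argsort(-same)[:5])      # nearest neighbours under this crude hash
```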
1503.05787
Tobias Isenberg
Maarten H. Everts, Henk Bekker, Jos B.T.M. Roerdink, and Tobias Isenberg
Interactive Illustrative Line Styles and Line Style Transfer Functions for Flow Visualization
Extended version of a short paper at Pacific Graphics 2011 (http://dx.doi.org/10.2312/PE/PG/PG2011short/105-110)
null
null
null
cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a flexible illustrative line style model for the visualization of streamline data. Our model partitions view-oriented line strips into parallel bands whose basic visual properties can be controlled independently. We thus extend previous line stylization techniques specifically for visualization purposes by allowing the parametrization of these bands based on the local line data attributes. Moreover, our approach supports emphasis and abstraction by introducing line style transfer functions that map local line attribute values to complete line styles. With a flexible GPU implementation of this line style model we enable the interactive exploration of visual representations of streamlines. We demonstrate the effectiveness of our model by applying it to 3D flow field datasets.
[ { "version": "v1", "created": "Thu, 19 Mar 2015 14:53:55 GMT" }, { "version": "v2", "created": "Fri, 17 Apr 2015 14:01:50 GMT" } ]
2015-04-20T00:00:00
[ [ "Everts", "Maarten H.", "" ], [ "Bekker", "Henk", "" ], [ "Roerdink", "Jos B. T. M.", "" ], [ "Isenberg", "Tobias", "" ] ]
TITLE: Interactive Illustrative Line Styles and Line Style Transfer Functions for Flow Visualization ABSTRACT: We present a flexible illustrative line style model for the visualization of streamline data. Our model partitions view-oriented line strips into parallel bands whose basic visual properties can be controlled independently. We thus extend previous line stylization techniques specifically for visualization purposes by allowing the parametrization of these bands based on the local line data attributes. Moreover, our approach supports emphasis and abstraction by introducing line style transfer functions that map local line attribute values to complete line styles. With a flexible GPU implementation of this line style model we enable the interactive exploration of visual representations of streamlines. We demonstrate the effectiveness of our model by applying it to 3D flow field datasets.
no_new_dataset
0.948394
1504.04531
Nicolas Dobigeon
Laetitia Loncan, Luis B. Almeida, Jos\'e M. Bioucas-Dias, Xavier Briottet, Jocelyn Chanussot, Nicolas Dobigeon, Sophie Fabre, Wenzhi Liao, Giorgio A. Licciardi, Miguel Sim\~oes, Jean-Yves Tourneret, Miguel A. Veganzones, Gemine Vivone, Qi Wei and Naoto Yokoya
Hyperspectral pansharpening: a review
null
null
null
null
cs.CV physics.data-an stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pansharpening aims at fusing a panchromatic image with a multispectral one, to generate an image with the high spatial resolution of the former and the high spectral resolution of the latter. In the last decade, many algorithms have been presented in the literature for pansharpening using multispectral data. With the increasing availability of hyperspectral systems, these methods are now being adapted to hyperspectral images. In this work, we compare new pansharpening techniques designed for hyperspectral data with some of the state-of-the-art methods for multispectral pansharpening, which have been adapted for hyperspectral data. Eleven methods from different classes (component substitution, multiresolution analysis, hybrid, Bayesian and matrix factorization) are analyzed. These methods are applied to three datasets and their effectiveness and robustness are evaluated with widely used performance indicators. In addition, all the pansharpening techniques considered in this paper have been implemented in a MATLAB toolbox that is made available to the community.
[ { "version": "v1", "created": "Fri, 17 Apr 2015 15:07:11 GMT" } ]
2015-04-20T00:00:00
[ [ "Loncan", "Laetitia", "" ], [ "Almeida", "Luis B.", "" ], [ "Bioucas-Dias", "José M.", "" ], [ "Briottet", "Xavier", "" ], [ "Chanussot", "Jocelyn", "" ], [ "Dobigeon", "Nicolas", "" ], [ "Fabre", "Sophie", "" ], [ "Liao", "Wenzhi", "" ], [ "Licciardi", "Giorgio A.", "" ], [ "Simões", "Miguel", "" ], [ "Tourneret", "Jean-Yves", "" ], [ "Veganzones", "Miguel A.", "" ], [ "Vivone", "Gemine", "" ], [ "Wei", "Qi", "" ], [ "Yokoya", "Naoto", "" ] ]
TITLE: Hyperspectral pansharpening: a review ABSTRACT: Pansharpening aims at fusing a panchromatic image with a multispectral one, to generate an image with the high spatial resolution of the former and the high spectral resolution of the latter. In the last decade, many algorithms have been presented in the literature for pansharpening using multispectral data. With the increasing availability of hyperspectral systems, these methods are now being adapted to hyperspectral images. In this work, we compare new pansharpening techniques designed for hyperspectral data with some of the state-of-the-art methods for multispectral pansharpening, which have been adapted for hyperspectral data. Eleven methods from different classes (component substitution, multiresolution analysis, hybrid, Bayesian and matrix factorization) are analyzed. These methods are applied to three datasets and their effectiveness and robustness are evaluated with widely used performance indicators. In addition, all the pansharpening techniques considered in this paper have been implemented in a MATLAB toolbox that is made available to the community.
no_new_dataset
0.952175
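Of the method classes surveyed in 1504.04531, component substitution is the simplest to illustrate. Below is a minimal sketch of a GIHS-style substitution step, assuming the hyperspectral cube has already been interpolated to the panchromatic grid; real implementations (including the paper's MATLAB toolbox) are considerably more careful about spectral weights and detail injection.

```python
import numpy as np

def component_substitution_pansharpen(hs_upsampled, pan):
    """A minimal component-substitution pansharpening step.

    hs_upsampled : (bands, H, W) hyperspectral cube, already upsampled to
                   the panchromatic grid
    pan          : (H, W) panchromatic image
    The intensity component (band mean) is replaced by the histogram-matched
    panchromatic image; the spatial detail is injected equally into all bands.
    """
    intensity = hs_upsampled.mean(axis=0)
    # match pan's mean/std to the intensity component
    pan_matched = ((pan - pan.mean()) / (pan.std() + 1e-12)
                   * intensity.std() + intensity.mean())
    return hs_upsampled + (pan_matched - intensity)[None, :, :]

# toy usage on random data
cube = np.random.default_rng(0).random((50, 64, 64))
pan = np.random.default_rng(1).random((64, 64))
print(component_substitution_pansharpen(cube, pan).shape)  # (50, 64, 64)
```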
1504.04548
Simone Bianco
Simone Bianco, Claudio Cusano, Raimondo Schettini
Color Constancy Using CNNs
Accepted at DeepVision: Deep Learning in Computer Vision 2015 (CVPR 2015 workshop)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we describe a Convolutional Neural Network (CNN) to accurately predict the scene illumination. Taking image patches as input, the CNN works in the spatial domain without using hand-crafted features that are employed by most previous methods. The network consists of one convolutional layer with max pooling, one fully connected layer and three output nodes. Within the network structure, feature learning and regression are integrated into one optimization process, which leads to a more effective model for estimating scene illumination. This approach achieves state-of-the-art performance on a standard dataset of RAW images. Preliminary experiments on images with spatially varying illumination demonstrate the stability of the local illuminant estimation ability of our CNN.
[ { "version": "v1", "created": "Fri, 17 Apr 2015 15:51:07 GMT" } ]
2015-04-20T00:00:00
[ [ "Bianco", "Simone", "" ], [ "Cusano", "Claudio", "" ], [ "Schettini", "Raimondo", "" ] ]
TITLE: Color Constancy Using CNNs ABSTRACT: In this work we describe a Convolutional Neural Network (CNN) to accurately predict the scene illumination. Taking image patches as input, the CNN works in the spatial domain without using hand-crafted features that are employed by most previous methods. The network consists of one convolutional layer with max pooling, one fully connected layer and three output nodes. Within the network structure, feature learning and regression are integrated into one optimization process, which leads to a more effective model for estimating scene illumination. This approach achieves state-of-the-art performance on a standard dataset of RAW images. Preliminary experiments on images with spatially varying illumination demonstrate the stability of the local illuminant estimation ability of our CNN.
no_new_dataset
0.948489
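The architecture described in 1504.04548 (one convolutional layer with max pooling, one fully connected layer, three output nodes) is small enough to sketch directly. In this PyTorch version the patch size, channel count, kernel size, and ReLU nonlinearity are assumptions; the abstract fixes only the overall structure.

```python
import torch
import torch.nn as nn

class IlluminantCNN(nn.Module):
    """Patch -> illuminant (r, g, b). Layer sizes are illustrative
    assumptions; only the overall structure follows the abstract:
    one conv layer with max pooling, one fully connected layer, 3 outputs."""
    def __init__(self, patch=32, channels=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(4),
        )
        self.regressor = nn.Linear(channels * (patch // 4) ** 2, 3)

    def forward(self, x):
        return self.regressor(self.features(x).flatten(1))

est = IlluminantCNN()(torch.rand(8, 3, 32, 32))  # 8 patches -> 8 RGB estimates
print(est.shape)  # torch.Size([8, 3])
```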
1504.04588
James-A. Goulet
James-A. Goulet
The Nataf-Beta Random Field Classifier: An Extension of the Beta Conjugate Prior to Classification Problems
17 pages, 4 figures, Submitted for publication in the Journal of Machine Learning Research
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents the Nataf-Beta Random Field Classifier, a discriminative approach that extends the applicability of the Beta conjugate prior to classification problems. The approach's key feature is to model the probability of a class conditional on attribute values as a random field whose marginals are Beta distributed, and where the parameters of the marginals are themselves described by random fields. Although the classification accuracy of the proposed approach does not statistically outperform the best accuracies reported in the literature, it ranks among the top tier for the six benchmark datasets tested. The Nataf-Beta Random Field Classifier is suited as a general-purpose classification approach for real-continuous and real-integer attribute value problems.
[ { "version": "v1", "created": "Fri, 17 Apr 2015 17:32:00 GMT" } ]
2015-04-20T00:00:00
[ [ "Goulet", "James-A.", "" ] ]
TITLE: The Nataf-Beta Random Field Classifier: An Extension of the Beta Conjugate Prior to Classification Problems ABSTRACT: This paper presents the Nataf-Beta Random Field Classifier, a discriminative approach that extends the applicability of the Beta conjugate prior to classification problems. The approach's key feature is to model the probability of a class conditional on attribute values as a random field whose marginals are Beta distributed, and where the parameters of the marginals are themselves described by random fields. Although the classification accuracy of the proposed approach does not statistically outperform the best accuracies reported in the literature, it ranks among the top tier for the six benchmark datasets tested. The Nataf-Beta Random Field Classifier is suited as a general-purpose classification approach for real-continuous and real-integer attribute value problems.
no_new_dataset
0.952397
1409.0940
Haim Avron
Vikas Sindhwani and Haim Avron
High-performance Kernel Machines with Implicit Distributed Optimization and Randomization
Work presented at MMDS 2014 (June 2014) and JSM 2014
null
null
null
stat.ML cs.DC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In order to fully utilize "big data", it is often required to use "big models". Such models tend to grow with the complexity and size of the training data, and do not make strong parametric assumptions upfront on the nature of the underlying statistical dependencies. Kernel methods fit this need well, as they constitute a versatile and principled statistical methodology for solving a wide range of non-parametric modelling problems. However, their high computational costs (in storage and time) pose a significant barrier to their widespread adoption in big data applications. We propose an algorithmic framework and high-performance implementation for massive-scale training of kernel-based statistical models, based on combining two key technical ingredients: (i) distributed general purpose convex optimization, and (ii) the use of randomization to improve the scalability of kernel methods. Our approach is based on a block-splitting variant of the Alternating Directions Method of Multipliers, carefully reconfigured to handle very large random feature matrices, while exploiting hybrid parallelism typically found in modern clusters of multicore machines. Our implementation supports a variety of statistical learning tasks by enabling several loss functions, regularization schemes, kernels, and layers of randomized approximations for both dense and sparse datasets, in a highly extensible framework. We evaluate the ability of our framework to learn models on data from applications, and provide a comparison against existing sequential and parallel libraries.
[ { "version": "v1", "created": "Wed, 3 Sep 2014 02:28:51 GMT" }, { "version": "v2", "created": "Tue, 23 Dec 2014 21:38:15 GMT" }, { "version": "v3", "created": "Thu, 16 Apr 2015 18:06:53 GMT" } ]
2015-04-17T00:00:00
[ [ "Sindhwani", "Vikas", "" ], [ "Avron", "Haim", "" ] ]
TITLE: High-performance Kernel Machines with Implicit Distributed Optimization and Randomization ABSTRACT: In order to fully utilize "big data", it is often required to use "big models". Such models tend to grow with the complexity and size of the training data, and do not make strong parametric assumptions upfront on the nature of the underlying statistical dependencies. Kernel methods fit this need well, as they constitute a versatile and principled statistical methodology for solving a wide range of non-parametric modelling problems. However, their high computational costs (in storage and time) pose a significant barrier to their widespread adoption in big data applications. We propose an algorithmic framework and high-performance implementation for massive-scale training of kernel-based statistical models, based on combining two key technical ingredients: (i) distributed general purpose convex optimization, and (ii) the use of randomization to improve the scalability of kernel methods. Our approach is based on a block-splitting variant of the Alternating Directions Method of Multipliers, carefully reconfigured to handle very large random feature matrices, while exploiting hybrid parallelism typically found in modern clusters of multicore machines. Our implementation supports a variety of statistical learning tasks by enabling several loss functions, regularization schemes, kernels, and layers of randomized approximations for both dense and sparse datasets, in a highly extensible framework. We evaluate the ability of our framework to learn models on data from applications, and provide a comparison against existing sequential and parallel libraries.
no_new_dataset
0.940353
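The "layers of randomized approximations" in 1409.0940 refer to random feature maps for kernels. A minimal sketch of the classic random Fourier features construction for the Gaussian kernel (Rahimi and Recht) is below; the distributed ADMM solver the paper builds around such features is not reproduced.

```python
import numpy as np

def random_fourier_features(X, n_features=512, gamma=1.0, seed=0):
    """Approximate the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2) with
    random Fourier features: k(x, y) ~= z(x) @ z(y), where
    z(x) = sqrt(2 / D) * cos(W^T x + b) and W ~ N(0, 2 * gamma * I)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(X.shape[1], n_features))
    b = rng.uniform(0, 2 * np.pi, n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = np.random.default_rng(1).random((5, 10))
Z = random_fourier_features(X)
print(np.round(Z @ Z.T, 2))  # approximates the Gaussian kernel matrix of X
```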
1502.05337
Emiliano De Cristofaro
Julien Freudiger and Emiliano De Cristofaro and Alex Brito
Controlled Data Sharing for Collaborative Predictive Blacklisting
A preliminary version of this paper appears in DIMVA 2015. This is the full version. arXiv admin note: substantial text overlap with arXiv:1403.2123
null
null
null
cs.CR cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although sharing data across organizations is often advocated as a promising way to enhance cybersecurity, collaborative initiatives are rarely put into practice owing to confidentiality, trust, and liability challenges. In this paper, we investigate whether collaborative threat mitigation can be realized via a controlled data sharing approach, whereby organizations make informed decisions as to whether or not, and how much, to share. Using appropriate cryptographic tools, entities can estimate the benefits of collaboration and agree on what to share in a privacy-preserving way, without having to disclose their datasets. We focus on collaborative predictive blacklisting, i.e., forecasting attack sources based on one's logs and those contributed by other organizations. We study the impact of different sharing strategies by experimenting on a real-world dataset of two billion suspicious IP addresses collected from Dshield over two months. We find that controlled data sharing yields up to 105% accuracy improvement on average, while also reducing the false positive rate.
[ { "version": "v1", "created": "Wed, 18 Feb 2015 18:48:56 GMT" }, { "version": "v2", "created": "Thu, 16 Apr 2015 09:09:16 GMT" } ]
2015-04-17T00:00:00
[ [ "Freudiger", "Julien", "" ], [ "De Cristofaro", "Emiliano", "" ], [ "Brito", "Alex", "" ] ]
TITLE: Controlled Data Sharing for Collaborative Predictive Blacklisting ABSTRACT: Although sharing data across organizations is often advocated as a promising way to enhance cybersecurity, collaborative initiatives are rarely put into practice owing to confidentiality, trust, and liability challenges. In this paper, we investigate whether collaborative threat mitigation can be realized via a controlled data sharing approach, whereby organizations make informed decisions as to whether or not, and how much, to share. Using appropriate cryptographic tools, entities can estimate the benefits of collaboration and agree on what to share in a privacy-preserving way, without having to disclose their datasets. We focus on collaborative predictive blacklisting, i.e., forecasting attack sources based on one's logs and those contributed by other organizations. We study the impact of different sharing strategies by experimenting on a real-world dataset of two billion suspicious IP addresses collected from Dshield over two months. We find that controlled data sharing yields up to 105% accuracy improvement on average, while also reducing the false positive rate.
no_new_dataset
0.847968
1503.01224
Chunhua Shen
Peng Wang, Yuanzhouhan Cao, Chunhua Shen, Lingqiao Liu, Heng Tao Shen
Temporal Pyramid Pooling Based Convolutional Neural Networks for Action Recognition
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Encouraged by the success of Convolutional Neural Networks (CNNs) in image classification, much effort has recently been spent on applying CNNs to video-based action recognition problems. One challenge is that a video contains a varying number of frames, which is incompatible with the standard input format of CNNs. Existing methods handle this issue either by directly sampling a fixed number of frames or by introducing a 3D convolutional layer which conducts convolution in the spatial-temporal domain. To solve this issue, we propose a novel network structure which allows an arbitrary number of frames as the network input. The key to our solution is to introduce a module consisting of an encoding layer and a temporal pyramid pooling layer. The encoding layer maps the activations from previous layers to a feature vector suitable for pooling, while the temporal pyramid pooling layer converts multiple frame-level activations into a fixed-length video-level representation. In addition, we adopt a feature concatenation layer which combines appearance information and motion information. Compared with the frame sampling strategy, our method avoids the risk of missing any important frames. Compared with the 3D convolutional method, which requires a huge video dataset for network training, our model can be learned on a small target dataset because we can leverage an off-the-shelf image-level CNN for model parameter initialization. Experiments on two challenging datasets, Hollywood2 and HMDB51, demonstrate that our method achieves superior performance over state-of-the-art methods while requiring much less training data.
[ { "version": "v1", "created": "Wed, 4 Mar 2015 05:18:18 GMT" }, { "version": "v2", "created": "Thu, 16 Apr 2015 12:18:46 GMT" } ]
2015-04-17T00:00:00
[ [ "Wang", "Peng", "" ], [ "Cao", "Yuanzhouhan", "" ], [ "Shen", "Chunhua", "" ], [ "Liu", "Lingqiao", "" ], [ "Shen", "Heng Tao", "" ] ]
TITLE: Temporal Pyramid Pooling Based Convolutional Neural Networks for Action Recognition ABSTRACT: Encouraged by the success of Convolutional Neural Networks (CNNs) in image classification, much effort has recently been spent on applying CNNs to video-based action recognition problems. One challenge is that a video contains a varying number of frames, which is incompatible with the standard input format of CNNs. Existing methods handle this issue either by directly sampling a fixed number of frames or by introducing a 3D convolutional layer which conducts convolution in the spatial-temporal domain. To solve this issue, we propose a novel network structure which allows an arbitrary number of frames as the network input. The key to our solution is to introduce a module consisting of an encoding layer and a temporal pyramid pooling layer. The encoding layer maps the activations from previous layers to a feature vector suitable for pooling, while the temporal pyramid pooling layer converts multiple frame-level activations into a fixed-length video-level representation. In addition, we adopt a feature concatenation layer which combines appearance information and motion information. Compared with the frame sampling strategy, our method avoids the risk of missing any important frames. Compared with the 3D convolutional method, which requires a huge video dataset for network training, our model can be learned on a small target dataset because we can leverage an off-the-shelf image-level CNN for model parameter initialization. Experiments on two challenging datasets, Hollywood2 and HMDB51, demonstrate that our method achieves superior performance over state-of-the-art methods while requiring much less training data.
no_new_dataset
0.951639
1504.03641
Sergey Zagoruyko
Sergey Zagoruyko and Nikos Komodakis
Learning to Compare Image Patches via Convolutional Neural Networks
CVPR 2015
null
null
null
cs.CV cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we show how to learn directly from image data (i.e., without resorting to manually-designed features) a general similarity function for comparing image patches, which is a task of fundamental importance for many computer vision problems. To encode such a function, we opt for a CNN-based model that is trained to account for a wide variety of changes in image appearance. To that end, we explore and study multiple neural network architectures, which are specifically adapted to this task. We show that such an approach can significantly outperform the state-of-the-art on several problems and benchmark datasets.
[ { "version": "v1", "created": "Tue, 14 Apr 2015 17:53:51 GMT" } ]
2015-04-17T00:00:00
[ [ "Zagoruyko", "Sergey", "" ], [ "Komodakis", "Nikos", "" ] ]
TITLE: Learning to Compare Image Patches via Convolutional Neural Networks ABSTRACT: In this paper we show how to learn directly from image data (i.e., without resorting to manually-designed features) a general similarity function for comparing image patches, which is a task of fundamental importance for many computer vision problems. To encode such a function, we opt for a CNN-based model that is trained to account for a wide variety of changes in image appearance. To that end, we explore and study multiple neural network architectures, which are specifically adapted to this task. We show that such an approach can significantly outperform the state-of-the-art on several problems and benchmark datasets.
no_new_dataset
0.949576
1504.04054
Xin Yuan
Yunchen Pu, Xin Yuan, Lawrence Carin
A Generative Model for Deep Convolutional Learning
3 pages, 1 figure, ICLR workshop
null
null
null
stat.ML cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A generative model is developed for deep (multi-layered) convolutional dictionary learning. A novel probabilistic pooling operation is integrated into the deep model, yielding efficient bottom-up (pretraining) and top-down (refinement) probabilistic learning. Experimental results demonstrate powerful capabilities of the model to learn multi-layer features from images, and excellent classification results are obtained on the MNIST and Caltech 101 datasets.
[ { "version": "v1", "created": "Wed, 15 Apr 2015 21:31:58 GMT" } ]
2015-04-17T00:00:00
[ [ "Pu", "Yunchen", "" ], [ "Yuan", "Xin", "" ], [ "Carin", "Lawrence", "" ] ]
TITLE: A Generative Model for Deep Convolutional Learning ABSTRACT: A generative model is developed for deep (multi-layered) convolutional dictionary learning. A novel probabilistic pooling operation is integrated into the deep model, yielding efficient bottom-up (pretraining) and top-down (refinement) probabilistic learning. Experimental results demonstrate powerful capabilities of the model to learn multi-layer features from images, and excellent classification results are obtained on the MNIST and Caltech 101 datasets.
no_new_dataset
0.946892
1504.04208
Andrea Scharnhorst
Rob Koopman, Shenghui Wang, Andrea Scharnhorst
Contextualization of topics - browsing through terms, authors, journals and cluster allocations
proceedings of the ISSI 2015 conference (accepted)
null
null
null
cs.DL cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper builds on an innovative Information Retrieval tool, Ariadne. The tool has been developed as an interactive network visualization and browsing tool for large-scale bibliographic databases. It allows one to gain insights into a topic by contextualizing a search query (Koopman et al., 2015). In this paper, we apply the Ariadne tool to a far smaller dataset of 111,616 documents in astronomy and astrophysics. Labeled the Berlin dataset, these data have been used by several research teams to apply and later compare different clustering algorithms. The goal of this team effort is to determine how to delineate topics. This paper contributes to this challenge in two different ways. First, we produce one of the different cluster solutions, and second, we use Ariadne (the method behind it, and the interface - called LittleAriadne) to display the cluster solutions of the different group members. By providing a tool that allows the visual inspection of the similarity of article clusters produced by different algorithms, we present a complementary approach to other possible means of comparison. More particularly, we discuss how we can - with LittleAriadne - browse through the network of topical terms, authors, journals and cluster solutions in the Berlin dataset, compare cluster solutions, and see their context.
[ { "version": "v1", "created": "Thu, 16 Apr 2015 12:38:10 GMT" } ]
2015-04-17T00:00:00
[ [ "Koopman", "Rob", "" ], [ "Wang", "Shenghui", "" ], [ "Scharnhorst", "Andrea", "" ] ]
TITLE: Contextualization of topics - browsing through terms, authors, journals and cluster allocations ABSTRACT: This paper builds on an innovative Information Retrieval tool, Ariadne. The tool has been developed as an interactive network visualization and browsing tool for large-scale bibliographic databases. It allows one to gain insights into a topic by contextualizing a search query (Koopman et al., 2015). In this paper, we apply the Ariadne tool to a far smaller dataset of 111,616 documents in astronomy and astrophysics. Labeled the Berlin dataset, these data have been used by several research teams to apply and later compare different clustering algorithms. The goal of this team effort is to determine how to delineate topics. This paper contributes to this challenge in two different ways. First, we produce one of the different cluster solutions, and second, we use Ariadne (the method behind it, and the interface - called LittleAriadne) to display the cluster solutions of the different group members. By providing a tool that allows the visual inspection of the similarity of article clusters produced by different algorithms, we present a complementary approach to other possible means of comparison. More particularly, we discuss how we can - with LittleAriadne - browse through the network of topical terms, authors, journals and cluster solutions in the Berlin dataset, compare cluster solutions, and see their context.
no_new_dataset
0.888227
1406.5670
Zhirong Wu
Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, Jianxiong Xiao
3D ShapeNets: A Deep Representation for Volumetric Shapes
to be appeared in CVPR 2015
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representations automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet -- a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvement over the state of the art in a variety of tasks.
[ { "version": "v1", "created": "Sun, 22 Jun 2014 03:31:52 GMT" }, { "version": "v2", "created": "Mon, 1 Sep 2014 04:59:49 GMT" }, { "version": "v3", "created": "Wed, 15 Apr 2015 16:46:05 GMT" } ]
2015-04-16T00:00:00
[ [ "Wu", "Zhirong", "" ], [ "Song", "Shuran", "" ], [ "Khosla", "Aditya", "" ], [ "Yu", "Fisher", "" ], [ "Zhang", "Linguang", "" ], [ "Tang", "Xiaoou", "" ], [ "Xiao", "Jianxiong", "" ] ]
TITLE: 3D ShapeNets: A Deep Representation for Volumetric Shapes ABSTRACT: 3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representations automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet -- a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvement over the state of the art in a variety of tasks.
new_dataset
0.902352
1410.6910
Thomas Kreuz
Thomas Kreuz and Mario Mulansky and Nebojsa Bozanic
SPIKY: A graphical user interface for monitoring spike train synchrony
15 pages, 7 figures
null
null
null
physics.data-an cs.MS cs.SE physics.bio-ph physics.med-ph q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Techniques for recording large-scale neuronal spiking activity are developing very fast. This leads to an increasing demand for algorithms capable of analyzing large amounts of experimental spike train data. One of the most crucial and demanding tasks is the identification of similarity patterns with a very high temporal resolution and across different spatial scales. To address this task, in recent years three time-resolved measures of spike train synchrony have been proposed, the ISI-distance, the SPIKE-distance, and event synchronization. The Matlab source codes for calculating and visualizing these measures have been made publicly available. However, due to the many different possible representations of the results, the use of these codes is rather complicated and their application requires some basic knowledge of Matlab. Thus it became desirable to provide a more user-friendly and interactive interface. Here we address this need and present SPIKY, a graphical user interface which facilitates the application of time-resolved measures of spike train synchrony to both simulated and real data. SPIKY includes implementations of the ISI-distance, the SPIKE-distance and SPIKE-synchronization (an improved and simplified extension of event synchronization) which have been optimized with respect to computation speed and memory demand. It also comprises a spike train generator and an event detector which makes it capable of analyzing continuous data. Finally, the SPIKY package includes additional complementary programs aimed at the analysis of large numbers of datasets and the estimation of significance levels.
[ { "version": "v1", "created": "Sat, 25 Oct 2014 11:02:26 GMT" }, { "version": "v2", "created": "Sat, 24 Jan 2015 22:29:28 GMT" }, { "version": "v3", "created": "Wed, 15 Apr 2015 16:40:11 GMT" } ]
2015-04-16T00:00:00
[ [ "Kreuz", "Thomas", "" ], [ "Mulansky", "Mario", "" ], [ "Bozanic", "Nebojsa", "" ] ]
TITLE: SPIKY: A graphical user interface for monitoring spike train synchrony ABSTRACT: Techniques for recording large-scale neuronal spiking activity are developing very fast. This leads to an increasing demand for algorithms capable of analyzing large amounts of experimental spike train data. One of the most crucial and demanding tasks is the identification of similarity patterns with a very high temporal resolution and across different spatial scales. To address this task, in recent years three time-resolved measures of spike train synchrony have been proposed, the ISI-distance, the SPIKE-distance, and event synchronization. The Matlab source codes for calculating and visualizing these measures have been made publicly available. However, due to the many different possible representations of the results, the use of these codes is rather complicated and their application requires some basic knowledge of Matlab. Thus it became desirable to provide a more user-friendly and interactive interface. Here we address this need and present SPIKY, a graphical user interface which facilitates the application of time-resolved measures of spike train synchrony to both simulated and real data. SPIKY includes implementations of the ISI-distance, the SPIKE-distance and SPIKE-synchronization (an improved and simplified extension of event synchronization) which have been optimized with respect to computation speed and memory demand. It also comprises a spike train generator and an event detector which makes it capable of analyzing continuous data. Finally, the SPIKY package includes additional complementary programs aimed at the analysis of large numbers of datasets and the estimation of significance levels.
no_new_dataset
0.942242
1412.6596
Scott Reed
Scott Reed, Honglak Lee, Dragomir Anguelov, Christian Szegedy, Dumitru Erhan, Andrew Rabinovich
Training Deep Neural Networks on Noisy Labels with Bootstrapping
null
null
null
null
cs.CV cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current state-of-the-art deep learning systems for visual object recognition and detection use purely supervised training with regularization such as dropout to avoid overfitting. The performance depends critically on the amount of labeled examples, and in current practice the labels are assumed to be unambiguous and accurate. However, this assumption often does not hold; e.g. in recognition, class labels may be missing; in detection, objects in the image may not be localized; and in general, the labeling may be subjective. In this work we propose a generic way to handle noisy and incomplete labeling by augmenting the prediction objective with a notion of consistency. We consider a prediction consistent if the same prediction is made given similar percepts, where the notion of similarity is between deep network features computed from the input data. In experiments we demonstrate that our approach yields substantial robustness to label noise on several datasets. On MNIST handwritten digits, we show that our model is robust to label corruption. On the Toronto Face Database, we show that our model handles well the case of subjective labels in emotion recognition, achieving state-of-the-art results, and can also benefit from unlabeled face images with no modification to our method. On the ILSVRC2014 detection challenge data, we show that our approach extends to very deep networks, high resolution images and structured outputs, and results in improved scalable detection.
[ { "version": "v1", "created": "Sat, 20 Dec 2014 04:11:33 GMT" }, { "version": "v2", "created": "Sat, 7 Feb 2015 22:30:39 GMT" }, { "version": "v3", "created": "Wed, 15 Apr 2015 19:48:37 GMT" } ]
2015-04-16T00:00:00
[ [ "Reed", "Scott", "" ], [ "Lee", "Honglak", "" ], [ "Anguelov", "Dragomir", "" ], [ "Szegedy", "Christian", "" ], [ "Erhan", "Dumitru", "" ], [ "Rabinovich", "Andrew", "" ] ]
TITLE: Training Deep Neural Networks on Noisy Labels with Bootstrapping ABSTRACT: Current state-of-the-art deep learning systems for visual object recognition and detection use purely supervised training with regularization such as dropout to avoid overfitting. The performance depends critically on the amount of labeled examples, and in current practice the labels are assumed to be unambiguous and accurate. However, this assumption often does not hold; e.g. in recognition, class labels may be missing; in detection, objects in the image may not be localized; and in general, the labeling may be subjective. In this work we propose a generic way to handle noisy and incomplete labeling by augmenting the prediction objective with a notion of consistency. We consider a prediction consistent if the same prediction is made given similar percepts, where the notion of similarity is between deep network features computed from the input data. In experiments we demonstrate that our approach yields substantial robustness to label noise on several datasets. On MNIST handwritten digits, we show that our model is robust to label corruption. On the Toronto Face Database, we show that our model handles well the case of subjective labels in emotion recognition, achieving state-of-the-art results, and can also benefit from unlabeled face images with no modification to our method. On the ILSVRC2014 detection challenge data, we show that our approach extends to very deep networks, high resolution images and structured outputs, and results in improved scalable detection.
no_new_dataset
0.944382
1404.3708
Yuxiao Dong
Yuxiao Dong, Jie Tang, Nitesh Chawla, Tiancheng Lou, Yang Yang, Bai Wang
Inferring Social Status and Rich Club Effects in Enterprise Communication Networks
13 pages, 4 figures
PLoS ONE 10(3): e0119446. 2015
10.1371/journal.pone.0119446
null
cs.SI cs.AI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Social status, defined as the relative rank or position that an individual holds in a social hierarchy, is known to be among the most important motivating forces in social behaviors. In this paper, we consider the notion of status from the perspective of a position or title held by a person in an enterprise. We study the intersection of social status and social networks in an enterprise. We study whether enterprise communication logs can help reveal how social interactions and individual status manifest themselves in social networks. To that end, we use two enterprise datasets with three communication channels --- voice call, short message, and email --- to demonstrate the social-behavioral differences among individuals with different status. We have several interesting findings, and based on these findings we also develop a model to predict social status. On the individual level, high-status individuals are more likely to span structural holes by linking to people in parts of the enterprise networks that are otherwise not well connected to one another. On the community level, the principle of homophily, social balance and clique theory generally indicate a "rich club" maintained by high-status individuals, in the sense that this community is much more connected, balanced and dense. Our model can predict the social status of individuals with 93% accuracy.
[ { "version": "v1", "created": "Mon, 14 Apr 2014 19:32:08 GMT" }, { "version": "v2", "created": "Tue, 15 Apr 2014 01:25:26 GMT" }, { "version": "v3", "created": "Wed, 16 Apr 2014 20:12:21 GMT" }, { "version": "v4", "created": "Tue, 14 Apr 2015 19:44:47 GMT" } ]
2015-04-15T00:00:00
[ [ "Dong", "Yuxiao", "" ], [ "Tang", "Jie", "" ], [ "Chawla", "Nitesh", "" ], [ "Lou", "Tiancheng", "" ], [ "Yang", "Yang", "" ], [ "Wang", "Bai", "" ] ]
TITLE: Inferring Social Status and Rich Club Effects in Enterprise Communication Networks ABSTRACT: Social status, defined as the relative rank or position that an individual holds in a social hierarchy, is known to be among the most important motivating forces in social behaviors. In this paper, we consider the notion of status from the perspective of a position or title held by a person in an enterprise. We study the intersection of social status and social networks in an enterprise. We study whether enterprise communication logs can help reveal how social interactions and individual status manifest themselves in social networks. To that end, we use two enterprise datasets with three communication channels --- voice call, short message, and email --- to demonstrate the social-behavioral differences among individuals with different status. We have several interesting findings, and based on these findings we also develop a model to predict social status. On the individual level, high-status individuals are more likely to span structural holes by linking to people in parts of the enterprise networks that are otherwise not well connected to one another. On the community level, the principle of homophily, social balance and clique theory generally indicate a "rich club" maintained by high-status individuals, in the sense that this community is much more connected, balanced and dense. Our model can predict the social status of individuals with 93% accuracy.
no_new_dataset
0.945701
1412.0623
Sean Bell
Sean Bell and Paul Upchurch and Noah Snavely and Kavita Bala
Material Recognition in the Wild with the Materials in Context Database
CVPR 2015. Sean Bell and Paul Upchurch contributed equally
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recognizing materials in real-world images is a challenging task. Real-world materials have rich surface texture, geometry, lighting conditions, and clutter, which combine to make the problem particularly difficult. In this paper, we introduce a new, large-scale, open dataset of materials in the wild, the Materials in Context Database (MINC), and combine this dataset with deep learning to achieve material recognition and segmentation of images in the wild. MINC is an order of magnitude larger than previous material databases, while being more diverse and well-sampled across its 23 categories. Using MINC, we train convolutional neural networks (CNNs) for two tasks: classifying materials from patches, and simultaneous material recognition and segmentation in full images. For patch-based classification on MINC we found that the best performing CNN architectures can achieve 85.2% mean class accuracy. We convert these trained CNN classifiers into an efficient fully convolutional framework combined with a fully connected conditional random field (CRF) to predict the material at every pixel in an image, achieving 73.1% mean class accuracy. Our experiments demonstrate that having a large, well-sampled dataset such as MINC is crucial for real-world material recognition and segmentation.
[ { "version": "v1", "created": "Mon, 1 Dec 2014 20:11:44 GMT" }, { "version": "v2", "created": "Tue, 14 Apr 2015 05:29:32 GMT" } ]
2015-04-15T00:00:00
[ [ "Bell", "Sean", "" ], [ "Upchurch", "Paul", "" ], [ "Snavely", "Noah", "" ], [ "Bala", "Kavita", "" ] ]
TITLE: Material Recognition in the Wild with the Materials in Context Database ABSTRACT: Recognizing materials in real-world images is a challenging task. Real-world materials have rich surface texture, geometry, lighting conditions, and clutter, which combine to make the problem particularly difficult. In this paper, we introduce a new, large-scale, open dataset of materials in the wild, the Materials in Context Database (MINC), and combine this dataset with deep learning to achieve material recognition and segmentation of images in the wild. MINC is an order of magnitude larger than previous material databases, while being more diverse and well-sampled across its 23 categories. Using MINC, we train convolutional neural networks (CNNs) for two tasks: classifying materials from patches, and simultaneous material recognition and segmentation in full images. For patch-based classification on MINC we found that the best performing CNN architectures can achieve 85.2% mean class accuracy. We convert these trained CNN classifiers into an efficient fully convolutional framework combined with a fully connected conditional random field (CRF) to predict the material at every pixel in an image, achieving 73.1% mean class accuracy. Our experiments demonstrate that having a large, well-sampled dataset such as MINC is crucial for real-world material recognition and segmentation.
new_dataset
0.962883
1412.2306
Andrej Karpathy
Andrej Karpathy, Li Fei-Fei
Deep Visual-Semantic Alignments for Generating Image Descriptions
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.
[ { "version": "v1", "created": "Sun, 7 Dec 2014 02:36:07 GMT" }, { "version": "v2", "created": "Tue, 14 Apr 2015 05:02:53 GMT" } ]
2015-04-15T00:00:00
[ [ "Karpathy", "Andrej", "" ], [ "Fei-Fei", "Li", "" ] ]
TITLE: Deep Visual-Semantic Alignments for Generating Image Descriptions ABSTRACT: We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.
new_dataset
0.951323
1412.3709
Abel Gonzalez-Garcia
Abel Gonzalez-Garcia, Alexander Vezhnevets, Vittorio Ferrari
An active search strategy for efficient object class detection
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Object class detectors typically apply a window classifier to all the windows in a large set, either in a sliding window manner or using object proposals. In this paper, we develop an active search strategy that sequentially chooses the next window to evaluate based on all the information gathered before. This results in a substantial reduction in the number of classifier evaluations and in a more elegant approach in general. Our search strategy is guided by two forces. First, we exploit context as the statistical relation between the appearance of a window and its location relative to the object, as observed in the training set. This enables the search to jump across distant regions in the image (e.g. observing a sky region suggests that cars might be far below) and is done efficiently in a Random Forest framework. Second, we exploit the score of the classifier to attract the search to promising areas surrounding a highly scored window, and to keep away from areas near low scored ones. Our search strategy can be applied on top of any classifier as it treats the classifier as a black box. In experiments with R-CNN on the challenging SUN2012 dataset, our method matches the detection accuracy of evaluating all windows independently, while evaluating 9x fewer windows.
[ { "version": "v1", "created": "Thu, 11 Dec 2014 16:23:38 GMT" }, { "version": "v2", "created": "Tue, 14 Apr 2015 11:29:51 GMT" } ]
2015-04-15T00:00:00
[ [ "Gonzalez-Garcia", "Abel", "" ], [ "Vezhnevets", "Alexander", "" ], [ "Ferrari", "Vittorio", "" ] ]
TITLE: An active search strategy for efficient object class detection ABSTRACT: Object class detectors typically apply a window classifier to all the windows in a large set, either in a sliding window manner or using object proposals. In this paper, we develop an active search strategy that sequentially chooses the next window to evaluate based on all the information gathered before. This results in a substantial reduction in the number of classifier evaluations and in a more elegant approach in general. Our search strategy is guided by two forces. First, we exploit context as the statistical relation between the appearance of a window and its location relative to the object, as observed in the training set. This enables the search to jump across distant regions in the image (e.g. observing a sky region suggests that cars might be far below) and is done efficiently in a Random Forest framework. Second, we exploit the score of the classifier to attract the search to promising areas surrounding a highly scored window, and to keep away from areas near low scored ones. Our search strategy can be applied on top of any classifier as it treats the classifier as a black box. In experiments with R-CNN on the challenging SUN2012 dataset, our method matches the detection accuracy of evaluating all windows independently, while evaluating 9x fewer windows.
no_new_dataset
0.951684
1501.00100
Marco Gramaglia
Marco Gramaglia and Marco Fiore
On the anonymizability of mobile traffic datasets
null
null
null
null
cs.CY cs.CR
http://creativecommons.org/licenses/by-nc-sa/3.0/
Preserving user privacy is paramount when it comes to publicly disclosed datasets that contain fine-grained data about large populations. The problem is especially critical in the case of mobile traffic datasets collected by cellular operators, as they feature elevated subscriber trajectory uniqueness and are resistant to anonymization through spatiotemporal generalization. In this work, we investigate the $k$-anonymizability of trajectories in two large-scale mobile traffic datasets, by means of a novel dedicated measure. Our results are in agreement with those of previous analyses; however, they also provide additional insights on the reasons behind the poor anonymizability of mobile traffic datasets. As such, our study is a step forward in the direction of a more robust dataset anonymization.
[ { "version": "v1", "created": "Wed, 31 Dec 2014 09:53:31 GMT" }, { "version": "v2", "created": "Tue, 14 Apr 2015 09:35:41 GMT" } ]
2015-04-15T00:00:00
[ [ "Gramaglia", "Marco", "" ], [ "Fiore", "Marco", "" ] ]
TITLE: On the anonymizability of mobile traffic datasets ABSTRACT: Preserving user privacy is paramount when it comes to publicly disclosed datasets that contain fine-grained data about large populations. The problem is especially critical in the case of mobile traffic datasets collected by cellular operators, as they feature elevated subscriber trajectory uniqueness and are resistant to anonymization through spatiotemporal generalization. In this work, we investigate the $k$-anonymizability of trajectories in two large-scale mobile traffic datasets, by means of a novel dedicated measure. Our results are in agreement with those of previous analyses; however, they also provide additional insights on the reasons behind the poor anonymizability of mobile traffic datasets. As such, our study is a step forward in the direction of a more robust dataset anonymization.
no_new_dataset
0.952264
1502.01916
Daniel Lemire
Wayne Xin Zhao, Xudong Zhang, Daniel Lemire, Dongdong Shan, Jian-Yun Nie, Hongfei Yan, Ji-Rong Wen
A General SIMD-based Approach to Accelerating Compression Algorithms
null
ACM Trans. Inf. Syst. 33, 3, Article 15 (March 2015)
10.1145/2735629
null
cs.IR
http://creativecommons.org/licenses/by/3.0/
Compression algorithms are important for data-oriented tasks, especially in the era of Big Data. Modern processors equipped with powerful SIMD instruction sets provide us with an opportunity to achieve better compression performance. Previous research has shown that SIMD-based optimizations can multiply decoding speeds. Following these pioneering studies, we propose a general approach to accelerate compression algorithms. By instantiating the approach, we have developed several novel integer compression algorithms, called Group-Simple, Group-Scheme, Group-AFOR, and Group-PFD, and implemented their corresponding vectorized versions. We evaluate the proposed algorithms on two public TREC datasets, a Wikipedia dataset and a Twitter dataset. With competitive compression ratios and encoding speeds, our SIMD-based algorithms outperform state-of-the-art non-vectorized algorithms with respect to decoding speeds.
[ { "version": "v1", "created": "Fri, 6 Feb 2015 15:11:47 GMT" } ]
2015-04-15T00:00:00
[ [ "Zhao", "Wayne Xin", "" ], [ "Zhang", "Xudong", "" ], [ "Lemire", "Daniel", "" ], [ "Shan", "Dongdong", "" ], [ "Nie", "Jian-Yun", "" ], [ "Yan", "Hongfei", "" ], [ "Wen", "Ji-Rong", "" ] ]
TITLE: A General SIMD-based Approach to Accelerating Compression Algorithms ABSTRACT: Compression algorithms are important for data-oriented tasks, especially in the era of Big Data. Modern processors equipped with powerful SIMD instruction sets provide us with an opportunity to achieve better compression performance. Previous research has shown that SIMD-based optimizations can multiply decoding speeds. Following these pioneering studies, we propose a general approach to accelerate compression algorithms. By instantiating the approach, we have developed several novel integer compression algorithms, called Group-Simple, Group-Scheme, Group-AFOR, and Group-PFD, and implemented their corresponding vectorized versions. We evaluate the proposed algorithms on two public TREC datasets, a Wikipedia dataset and a Twitter dataset. With competitive compression ratios and encoding speeds, our SIMD-based algorithms outperform state-of-the-art non-vectorized algorithms with respect to decoding speeds.
no_new_dataset
0.934275
1504.03154
Carlo Ciliberto
Giulia Pasquale, Carlo Ciliberto, Francesca Odone, Lorenzo Rosasco and Lorenzo Natale
Real-world Object Recognition with Off-the-shelf Deep Conv Nets: How Many Objects can iCub Learn?
18 pages, 9 figures, 3 tables
null
null
null
cs.RO cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ability to visually recognize objects is a fundamental skill for robotics systems. Indeed, a large variety of tasks involving manipulation, navigation or interaction with other agents deeply depends on an accurate understanding of the visual scene. Yet, at present, robots lack good visual perceptual systems, which often become the main bottleneck preventing the use of autonomous agents in real-world applications. Lately in computer vision, systems that learn suitable visual representations based on multi-layer deep convolutional networks have shown remarkable performance in tasks such as large-scale visual recognition and image retrieval. In this regard, it is natural to ask whether such remarkable performance would also generalize to the robotic setting. In this paper we investigate this possibility, while taking further steps in developing a computational vision system to be embedded on a robotic platform, the iCub humanoid robot. In particular, we release a new dataset ({\sc iCubWorld28}) that we use as a benchmark to address the question: {\it how many objects can iCub recognize?} Our study is developed in a learning framework which reflects the typical visual experience of a humanoid robot like the iCub. Experiments offer interesting insights into the strengths and weaknesses of current computer vision approaches applied in real robotic settings.
[ { "version": "v1", "created": "Mon, 13 Apr 2015 12:45:09 GMT" }, { "version": "v2", "created": "Tue, 14 Apr 2015 05:56:01 GMT" } ]
2015-04-15T00:00:00
[ [ "Pasquale", "Giulia", "" ], [ "Ciliberto", "Carlo", "" ], [ "Odone", "Francesca", "" ], [ "Rosasco", "Lorenzo", "" ], [ "Natale", "Lorenzo", "" ] ]
TITLE: Real-world Object Recognition with Off-the-shelf Deep Conv Nets: How Many Objects can iCub Learn? ABSTRACT: The ability to visually recognize objects is a fundamental skill for robotics systems. Indeed, a large variety of tasks involving manipulation, navigation or interaction with other agents deeply depends on an accurate understanding of the visual scene. Yet, at present, robots lack good visual perceptual systems, which often become the main bottleneck preventing the use of autonomous agents in real-world applications. Lately in computer vision, systems that learn suitable visual representations based on multi-layer deep convolutional networks have shown remarkable performance in tasks such as large-scale visual recognition and image retrieval. In this regard, it is natural to ask whether such remarkable performance would also generalize to the robotic setting. In this paper we investigate this possibility, while taking further steps in developing a computational vision system to be embedded on a robotic platform, the iCub humanoid robot. In particular, we release a new dataset ({\sc iCubWorld28}) that we use as a benchmark to address the question: {\it how many objects can iCub recognize?} Our study is developed in a learning framework which reflects the typical visual experience of a humanoid robot like the iCub. Experiments offer interesting insights into the strengths and weaknesses of current computer vision approaches applied in real robotic settings.
new_dataset
0.960249
1504.03425
Iftekhar Naim
Iftekhar Naim, M. Iftekhar Tanveer, Daniel Gildea, Mohammed (Ehsan) Hoque
Automated Analysis and Prediction of Job Interview Performance
14 pages, 8 figures, 6 tables
null
null
null
cs.HC cs.AI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a computational framework for automatically quantifying verbal and nonverbal behaviors in the context of job interviews. The proposed framework is trained by analyzing the videos of 138 interview sessions with 69 internship-seeking undergraduates at the Massachusetts Institute of Technology (MIT). Our automated analysis includes facial expressions (e.g., smiles, head gestures, facial tracking points), language (e.g., word counts, topic modeling), and prosodic information (e.g., pitch, intonation, and pauses) of the interviewees. The ground truth labels are derived by taking a weighted average over the ratings of 9 independent judges. Our framework can automatically predict the ratings for interview traits such as excitement, friendliness, and engagement with correlation coefficients of 0.75 or higher, and can quantify the relative importance of prosody, language, and facial expressions. By analyzing the relative feature weights learned by the regression models, our framework recommends speaking more fluently, using fewer filler words, speaking as "we" (vs. "I"), using more unique words, and smiling more. We also find that the students who were rated highly while answering the first interview question were also rated highly overall (i.e., first impression matters). Finally, our MIT Interview dataset will be made available to other researchers to further validate and expand our findings.
[ { "version": "v1", "created": "Tue, 14 Apr 2015 05:49:26 GMT" } ]
2015-04-15T00:00:00
[ [ "Naim", "Iftekhar", "", "Ehsan" ], [ "Tanveer", "M. Iftekhar", "", "Ehsan" ], [ "Gildea", "Daniel", "", "Ehsan" ], [ "Mohammed", "", "", "Ehsan" ], [ "Hoque", "", "" ] ]
TITLE: Automated Analysis and Prediction of Job Interview Performance ABSTRACT: We present a computational framework for automatically quantifying verbal and nonverbal behaviors in the context of job interviews. The proposed framework is trained by analyzing the videos of 138 interview sessions with 69 internship-seeking undergraduates at the Massachusetts Institute of Technology (MIT). Our automated analysis includes facial expressions (e.g., smiles, head gestures, facial tracking points), language (e.g., word counts, topic modeling), and prosodic information (e.g., pitch, intonation, and pauses) of the interviewees. The ground truth labels are derived by taking a weighted average over the ratings of 9 independent judges. Our framework can automatically predict the ratings for interview traits such as excitement, friendliness, and engagement with correlation coefficients of 0.75 or higher, and can quantify the relative importance of prosody, language, and facial expressions. By analyzing the relative feature weights learned by the regression models, our framework recommends speaking more fluently, using fewer filler words, speaking as "we" (vs. "I"), using more unique words, and smiling more. We also find that the students who were rated highly while answering the first interview question were also rated highly overall (i.e., first impression matters). Finally, our MIT Interview dataset will be made available to other researchers to further validate and expand our findings.
new_dataset
0.953622
1504.03486
Mark Leake
Mark Leake
Analytical tools for single-molecule fluorescence imaging in cellulo
null
null
null
null
q-bio.BM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent technological advances in cutting-edge ultrasensitive fluorescence microscopy have allowed single-molecule imaging experiments in living cells across all three domains of life to become commonplace. Single-molecule live-cell data is typically obtained in a low signal-to-noise ratio (SNR) regime, sometimes only marginally in excess of 1, in which a combination of detector shot noise, sub-optimal probe photophysics, native cell autofluorescence and the intrinsically stochastic behaviour of molecules results in highly noisy datasets for which the underlying true molecular behaviour is non-trivial to discern. The ability to elucidate real molecular phenomena is essential in relating experimental single-molecule observations to the biological system under study, as well as offering insight into the fine details of the physical and chemical environments of the living cell. To confront this problem of faithful signal extraction and analysis in a noise-dominated regime (the needle-in-a-haystack challenge), such experiments benefit enormously from a suite of objective, automated, high-throughput analysis tools that can home in on the underlying molecular signature and generate meaningful statistics across a large population of individual cells and molecules. Here, I discuss the development and application of several analytical methods applied to real case studies, including objective methods of segmenting cellular images from light microscopy data, tools to robustly localize and track single fluorescently-labelled molecules, algorithms to objectively interpret molecular mobility, analysis protocols to reliably estimate molecular stoichiometry and turnover, and methods to objectively render distributions of molecular parameters.
[ { "version": "v1", "created": "Tue, 14 Apr 2015 10:35:06 GMT" } ]
2015-04-15T00:00:00
[ [ "Leake", "Mark", "" ] ]
TITLE: Analytical tools for single-molecule fluorescence imaging in cellulo ABSTRACT: Recent technological advances in cutting-edge ultrasensitive fluorescence microscopy have allowed single-molecule imaging experiments in living cells across all three domains of life to become commonplace. Single-molecule live-cell data is typically obtained in a low signal-to-noise ratio (SNR) regime, sometimes only marginally in excess of 1, in which a combination of detector shot noise, sub-optimal probe photophysics, native cell autofluorescence and the intrinsically stochastic behaviour of molecules results in highly noisy datasets for which the underlying true molecular behaviour is non-trivial to discern. The ability to elucidate real molecular phenomena is essential in relating experimental single-molecule observations to the biological system under study, as well as offering insight into the fine details of the physical and chemical environments of the living cell. To confront this problem of faithful signal extraction and analysis in a noise-dominated regime (the needle-in-a-haystack challenge), such experiments benefit enormously from a suite of objective, automated, high-throughput analysis tools that can home in on the underlying molecular signature and generate meaningful statistics across a large population of individual cells and molecules. Here, I discuss the development and application of several analytical methods applied to real case studies, including objective methods of segmenting cellular images from light microscopy data, tools to robustly localize and track single fluorescently-labelled molecules, algorithms to objectively interpret molecular mobility, analysis protocols to reliably estimate molecular stoichiometry and turnover, and methods to objectively render distributions of molecular parameters.
no_new_dataset
0.948058
1504.03504
Fang Wang
Fang Wang, Le Kang, Yi Li
Sketch-based 3D Shape Retrieval using Convolutional Neural Networks
CVPR 2015
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Retrieving 3D models from 2D human sketches has received considerable attention in the areas of graphics, image retrieval, and computer vision. Almost always, state-of-the-art approaches compute a large number of "best views" for 3D models, with the hope that the query sketch matches one of these 2D projections of 3D models using predefined features. We argue that this two-stage approach (view selection -- matching) is pragmatic but also problematic because the "best views" are subjective and ambiguous, which makes the matching inputs obscure. This imprecise nature of matching makes it further challenging to choose features manually. Instead of relying on the elusive concept of "best views" and on hand-crafted features, we propose to define our views using a minimalist approach and to learn features for both sketches and views. Specifically, we drastically reduce the number of views to only two predefined directions for the whole dataset. Then, we learn two Siamese Convolutional Neural Networks (CNNs), one for the views and one for the sketches. The loss function is defined on the within-domain as well as the cross-domain similarities. Our experiments on three benchmark datasets demonstrate that our method is significantly better than state-of-the-art approaches, and outperforms them in all conventional metrics.
[ { "version": "v1", "created": "Tue, 14 Apr 2015 11:55:45 GMT" } ]
2015-04-15T00:00:00
[ [ "Wang", "Fang", "" ], [ "Kang", "Le", "" ], [ "Li", "Yi", "" ] ]
TITLE: Sketch-based 3D Shape Retrieval using Convolutional Neural Networks ABSTRACT: Retrieving 3D models from 2D human sketches has received considerable attention in the areas of graphics, image retrieval, and computer vision. Almost always, state-of-the-art approaches compute a large number of "best views" for 3D models, with the hope that the query sketch matches one of these 2D projections of 3D models using predefined features. We argue that this two-stage approach (view selection -- matching) is pragmatic but also problematic because the "best views" are subjective and ambiguous, which makes the matching inputs obscure. This imprecise nature of matching makes it further challenging to choose features manually. Instead of relying on the elusive concept of "best views" and on hand-crafted features, we propose to define our views using a minimalist approach and to learn features for both sketches and views. Specifically, we drastically reduce the number of views to only two predefined directions for the whole dataset. Then, we learn two Siamese Convolutional Neural Networks (CNNs), one for the views and one for the sketches. The loss function is defined on the within-domain as well as the cross-domain similarities. Our experiments on three benchmark datasets demonstrate that our method is significantly better than state-of-the-art approaches, and outperforms them in all conventional metrics.
no_new_dataset
0.949012
1504.03522
Lukas Neumann
Luk\'a\v{s} Neumann, Ji\v{r}\'i Matas
Efficient Scene Text Localization and Recognition with Local Character Refinement
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An unconstrained end-to-end text localization and recognition method is presented. The method detects initial text hypotheses in a single pass by an efficient region-based method and subsequently refines them using a more robust local text model, which deviates from the common assumption of region-based methods that all characters are detected as connected components. Additionally, a novel feature based on character stroke area estimation is introduced. The feature is efficiently computed from a region distance map; it is invariant to scaling and rotations and allows text regions to be detected efficiently regardless of what portion of the text they capture. The method runs in real time and achieves state-of-the-art text localization and recognition results on the ICDAR 2013 Robust Reading dataset.
[ { "version": "v1", "created": "Tue, 14 Apr 2015 12:42:56 GMT" } ]
2015-04-15T00:00:00
[ [ "Neumann", "Lukáš", "" ], [ "Matas", "Jiří", "" ] ]
TITLE: Efficient Scene Text Localization and Recognition with Local Character Refinement ABSTRACT: An unconstrained end-to-end text localization and recognition method is presented. The method detects initial text hypotheses in a single pass by an efficient region-based method and subsequently refines them using a more robust local text model, which deviates from the common assumption of region-based methods that all characters are detected as connected components. Additionally, a novel feature based on character stroke area estimation is introduced. The feature is efficiently computed from a region distance map; it is invariant to scaling and rotations and allows text regions to be detected efficiently regardless of what portion of the text they capture. The method runs in real time and achieves state-of-the-art text localization and recognition results on the ICDAR 2013 Robust Reading dataset.
no_new_dataset
0.94868
1504.03659
Azad Dehghan Mr
Azad Dehghan
Temporal ordering of clinical events
null
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This report describes a minimalistic set of methods engineered to anchor clinical events onto a temporal space. Specifically, we describe methods to extract clinical events (e.g., Problems, Treatments and Tests), temporal expressions (i.e., time, date, duration, and frequency), and temporal links (e.g., Before, After, Overlap) between events and temporal entities. These methods are developed and validated using high quality datasets.
[ { "version": "v1", "created": "Tue, 14 Apr 2015 18:48:58 GMT" } ]
2015-04-15T00:00:00
[ [ "Dehghan", "Azad", "" ] ]
TITLE: Temporal ordering of clinical events ABSTRACT: This report describes a minimalistic set of methods engineered to anchor clinical events onto a temporal space. Specifically, we describe methods to extract clinical events (e.g., Problems, Treatments and Tests), temporal expressions (i.e., time, date, duration, and frequency), and temporal links (e.g., Before, After, Overlap) between events and temporal entities. These methods are developed and validated using high quality datasets.
no_new_dataset
0.946448
1412.6597
Tom Paine
Tom Le Paine, Pooya Khorrami, Wei Han, Thomas S. Huang
An Analysis of Unsupervised Pre-training in Light of Recent Advances
Accepted as a workshop contribution to ICLR 2015
null
null
null
cs.CV cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Convolutional neural networks perform well on object recognition because of a number of recent advances: rectified linear units (ReLUs), data augmentation, dropout, and large labelled datasets. Unsupervised data has been proposed as another way to improve performance. Unfortunately, unsupervised pre-training is not used by state-of-the-art methods, leading to the following question: Is unsupervised pre-training still useful given recent advances? If so, when? We answer this in three parts: we 1) develop an unsupervised method that incorporates ReLUs and recent unsupervised regularization techniques, 2) analyze the benefits of unsupervised pre-training compared to data augmentation and dropout on CIFAR-10 while varying the ratio of unsupervised to supervised samples, 3) verify our findings on STL-10. We discover unsupervised pre-training, as expected, helps when the ratio of unsupervised to supervised samples is high, and surprisingly, hurts when the ratio is low. We also use unsupervised pre-training with additional color augmentation to achieve near state-of-the-art performance on STL-10.
[ { "version": "v1", "created": "Sat, 20 Dec 2014 04:20:55 GMT" }, { "version": "v2", "created": "Tue, 27 Jan 2015 22:03:40 GMT" }, { "version": "v3", "created": "Mon, 2 Mar 2015 21:05:34 GMT" }, { "version": "v4", "created": "Fri, 10 Apr 2015 21:26:31 GMT" } ]
2015-04-14T00:00:00
[ [ "Paine", "Tom Le", "" ], [ "Khorrami", "Pooya", "" ], [ "Han", "Wei", "" ], [ "Huang", "Thomas S.", "" ] ]
TITLE: An Analysis of Unsupervised Pre-training in Light of Recent Advances ABSTRACT: Convolutional neural networks perform well on object recognition because of a number of recent advances: rectified linear units (ReLUs), data augmentation, dropout, and large labelled datasets. Unsupervised data has been proposed as another way to improve performance. Unfortunately, unsupervised pre-training is not used by state-of-the-art methods, leading to the following question: Is unsupervised pre-training still useful given recent advances? If so, when? We answer this in three parts: we 1) develop an unsupervised method that incorporates ReLUs and recent unsupervised regularization techniques, 2) analyze the benefits of unsupervised pre-training compared to data augmentation and dropout on CIFAR-10 while varying the ratio of unsupervised to supervised samples, 3) verify our findings on STL-10. We discover unsupervised pre-training, as expected, helps when the ratio of unsupervised to supervised samples is high, and surprisingly, hurts when the ratio is low. We also use unsupervised pre-training with additional color augmentation to achieve near state-of-the-art performance on STL-10.
no_new_dataset
0.945399
1412.6806
Alexey Dosovitskiy
Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, Martin Riedmiller
Striving for Simplicity: The All Convolutional Net
accepted to ICLR-2015 workshop track; no changes other than style
null
null
null
cs.LG cs.CV cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most modern convolutional neural networks (CNNs) used for object recognition are built using the same principles: Alternating convolution and max-pooling layers followed by a small number of fully connected layers. We re-evaluate the state of the art for object recognition from small images with convolutional networks, questioning the necessity of different components in the pipeline. We find that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks. Following this finding -- and building on other recent work for finding simple network structures -- we propose a new architecture that consists solely of convolutional layers and yields competitive or state of the art performance on several object recognition datasets (CIFAR-10, CIFAR-100, ImageNet). To analyze the network we introduce a new variant of the "deconvolution approach" for visualizing features learned by CNNs, which can be applied to a broader range of network structures than existing approaches.
[ { "version": "v1", "created": "Sun, 21 Dec 2014 16:16:37 GMT" }, { "version": "v2", "created": "Mon, 2 Mar 2015 21:44:06 GMT" }, { "version": "v3", "created": "Mon, 13 Apr 2015 07:58:17 GMT" } ]
2015-04-14T00:00:00
[ [ "Springenberg", "Jost Tobias", "" ], [ "Dosovitskiy", "Alexey", "" ], [ "Brox", "Thomas", "" ], [ "Riedmiller", "Martin", "" ] ]
TITLE: Striving for Simplicity: The All Convolutional Net ABSTRACT: Most modern convolutional neural networks (CNNs) used for object recognition are built using the same principles: Alternating convolution and max-pooling layers followed by a small number of fully connected layers. We re-evaluate the state of the art for object recognition from small images with convolutional networks, questioning the necessity of different components in the pipeline. We find that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks. Following this finding -- and building on other recent work for finding simple network structures -- we propose a new architecture that consists solely of convolutional layers and yields competitive or state of the art performance on several object recognition datasets (CIFAR-10, CIFAR-100, ImageNet). To analyze the network we introduce a new variant of the "deconvolution approach" for visualizing features learned by CNNs, which can be applied to a broader range of network structures than existing approaches.
no_new_dataset
0.949529
1412.8419
Pedro O. Pinheiro
Remi Lebret and Pedro O. Pinheiro and Ronan Collobert
Simple Image Description Generator via a Linear Phrase-Based Approach
Accepted as a workshop paper at ICLR 2015
null
null
null
cs.CL cs.CV cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generating a novel textual description of an image is an interesting problem that connects computer vision and natural language processing. In this paper, we present a simple model that is able to generate descriptive sentences given a sample image. This model has a strong focus on the syntax of the descriptions. We train a purely bilinear model that learns a metric between an image representation (generated from a previously trained Convolutional Neural Network) and phrases that are used to describe them. The system is then able to infer phrases from a given image sample. Based on caption syntax statistics, we propose a simple language model that can produce relevant descriptions for a given test image using the phrases inferred. Our approach, which is considerably simpler than state-of-the-art models, achieves comparable results on the recently released Microsoft COCO dataset.
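A minimal numpy sketch of the bilinear scoring step described above, assuming pre-computed CNN image features and phrase embeddings (all dimensions and the random matrices are placeholders, not the trained model):

```python
import numpy as np

rng = np.random.default_rng(0)
d_img, d_phr = 4096, 400             # illustrative dimensions
U = rng.normal(size=(d_img, d_phr))  # stand-in for the learned bilinear metric

def phrase_scores(image_feat, phrase_mat):
    """Bilinear score between one image feature x and each phrase
    embedding p_j (one phrase per row): s_j = x^T U p_j."""
    return image_feat @ U @ phrase_mat.T

x = rng.normal(size=d_img)              # stand-in CNN image feature
phrases = rng.normal(size=(10, d_phr))  # 10 candidate phrase vectors
top3 = np.argsort(phrase_scores(x, phrases))[::-1][:3]
print("top phrase indices:", top3)      # phrases the language model would consume
```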
[ { "version": "v1", "created": "Mon, 29 Dec 2014 18:43:10 GMT" }, { "version": "v2", "created": "Wed, 18 Mar 2015 05:09:13 GMT" }, { "version": "v3", "created": "Sat, 11 Apr 2015 03:53:26 GMT" } ]
2015-04-14T00:00:00
[ [ "Lebret", "Remi", "" ], [ "Pinheiro", "Pedro O.", "" ], [ "Collobert", "Ronan", "" ] ]
TITLE: Simple Image Description Generator via a Linear Phrase-Based Approach ABSTRACT: Generating a novel textual description of an image is an interesting problem that connects computer vision and natural language processing. In this paper, we present a simple model that is able to generate descriptive sentences given a sample image. This model has a strong focus on the syntax of the descriptions. We train a purely bilinear model that learns a metric between an image representation (generated from a previously trained Convolutional Neural Network) and phrases that are used to describe them. The system is then able to infer phrases from a given image sample. Based on caption syntax statistics, we propose a simple language model that can produce relevant descriptions for a given test image using the phrases inferred. Our approach, which is considerably simpler than state-of-the-art models, achieves comparable results on the recently released Microsoft COCO dataset.
no_new_dataset
0.944587
1502.07645
Yu-Xiang Wang
Yu-Xiang Wang, Stephen E. Fienberg, Alex Smola
Privacy for Free: Posterior Sampling and Stochastic Gradient Monte Carlo
null
null
null
null
stat.ML cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of Bayesian learning on sensitive datasets and present two simple but somewhat surprising results that connect Bayesian learning to "differential privacy", a cryptographic approach to protect individual-level privacy while permitting database-level utility. Specifically, we show that under standard assumptions, getting one single sample from a posterior distribution is differentially private "for free". We will see that this estimator is statistically consistent, near optimal and computationally tractable whenever the Bayesian model of interest is consistent, optimal and tractable. Similarly but separately, we show that a recent line of work that uses stochastic gradients for Hybrid Monte Carlo (HMC) sampling also preserves differential privacy with minor or no modifications of the algorithmic procedure at all. These observations lead to an "anytime" algorithm for Bayesian learning under privacy constraints. We demonstrate that it performs much better than the state-of-the-art differentially private methods on synthetic and real datasets.
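The stochastic gradient samplers referred to above belong to the stochastic gradient Langevin dynamics (SGLD) family; the following is a hedged sketch of one SGLD update for Bayesian logistic regression with a standard normal prior (the step size, prior, and minibatch handling are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

def sgld_step(theta, X, y, step, n_total):
    """One stochastic gradient Langevin dynamics update on a minibatch
    (X, y) of a logistic regression with a N(0, I) prior. The injected
    Gaussian noise turns the iterates into approximate posterior samples
    rather than a point estimate."""
    p = 1.0 / (1.0 + np.exp(-X @ theta))             # predicted probabilities
    grad_lik = (n_total / len(y)) * (X.T @ (y - p))  # rescaled minibatch gradient
    grad_prior = -theta                              # gradient of the log prior
    noise = rng.normal(scale=np.sqrt(step), size=theta.shape)
    return theta + 0.5 * step * (grad_lik + grad_prior) + noise

# Toy usage: 200 points, 3 features, a few hundred noisy updates.
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=200) > 0).astype(float)
theta = np.zeros(3)
for _ in range(500):
    idx = rng.choice(200, size=20, replace=False)
    theta = sgld_step(theta, X[idx], y[idx], step=1e-3, n_total=200)
print(theta)  # one (approximate) posterior sample
```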
[ { "version": "v1", "created": "Thu, 26 Feb 2015 17:38:47 GMT" }, { "version": "v2", "created": "Sun, 12 Apr 2015 02:53:05 GMT" } ]
2015-04-14T00:00:00
[ [ "Wang", "Yu-Xiang", "" ], [ "Fienberg", "Stephen E.", "" ], [ "Smola", "Alex", "" ] ]
TITLE: Privacy for Free: Posterior Sampling and Stochastic Gradient Monte Carlo ABSTRACT: We consider the problem of Bayesian learning on sensitive datasets and present two simple but somewhat surprising results that connect Bayesian learning to "differential privacy", a cryptographic approach to protect individual-level privacy while permitting database-level utility. Specifically, we show that under standard assumptions, getting one single sample from a posterior distribution is differentially private "for free". We will see that this estimator is statistically consistent, near optimal and computationally tractable whenever the Bayesian model of interest is consistent, optimal and tractable. Similarly but separately, we show that a recent line of work that uses stochastic gradients for Hybrid Monte Carlo (HMC) sampling also preserves differential privacy with minor or no modifications of the algorithmic procedure at all. These observations lead to an "anytime" algorithm for Bayesian learning under privacy constraints. We demonstrate that it performs much better than the state-of-the-art differentially private methods on synthetic and real datasets.
no_new_dataset
0.947914
1503.02801
Jiaming Xu
Jiaming Xu, Bo Xu, Guanhua Tian, Jun Zhao, Fangyuan Wang, Hongwei Hao
Short Text Hashing Improved by Integrating Multi-Granularity Topics and Tags
12 pages, accepted at CICLing 2015
null
10.1007/978-3-319-18111-0_33
null
cs.IR cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Due to the computational and storage efficiencies of compact binary codes, hashing has been widely used for large-scale similarity search. Unfortunately, many existing hashing methods based on observed keyword features are not effective for short texts due to their sparseness and shortness. Recently, some researchers have tried to utilize latent topics of a certain granularity to preserve semantic similarity in hash codes beyond keyword matching. However, topics of a certain granularity are not adequate to represent the intrinsic semantic information. In this paper, we present a novel unified approach for short text Hashing using Multi-granularity Topics and Tags, dubbed HMTT. In particular, we propose a selection method to choose the optimal multi-granularity topics depending on the type of dataset, and design two distinct hashing strategies to incorporate multi-granularity topics. We also propose a simple and effective method to exploit tags to enhance the similarity of related texts. We carry out extensive experiments on one short text dataset as well as on one normal text dataset. The results demonstrate that our approach is effective and significantly outperforms baselines on several evaluation metrics.
[ { "version": "v1", "created": "Tue, 10 Mar 2015 07:51:59 GMT" } ]
2015-04-14T00:00:00
[ [ "Xu", "Jiaming", "" ], [ "Xu", "Bo", "" ], [ "Tian", "Guanhua", "" ], [ "Zhao", "Jun", "" ], [ "Wang", "Fangyuan", "" ], [ "Hao", "Hongwei", "" ] ]
TITLE: Short Text Hashing Improved by Integrating Multi-Granularity Topics and Tags ABSTRACT: Due to the computational and storage efficiencies of compact binary codes, hashing has been widely used for large-scale similarity search. Unfortunately, many existing hashing methods based on observed keyword features are not effective for short texts due to their sparseness and shortness. Recently, some researchers have tried to utilize latent topics of a certain granularity to preserve semantic similarity in hash codes beyond keyword matching. However, topics of a certain granularity are not adequate to represent the intrinsic semantic information. In this paper, we present a novel unified approach for short text Hashing using Multi-granularity Topics and Tags, dubbed HMTT. In particular, we propose a selection method to choose the optimal multi-granularity topics depending on the type of dataset, and design two distinct hashing strategies to incorporate multi-granularity topics. We also propose a simple and effective method to exploit tags to enhance the similarity of related texts. We carry out extensive experiments on one short text dataset as well as on one normal text dataset. The results demonstrate that our approach is effective and significantly outperforms baselines on several evaluation metrics.
no_new_dataset
0.944893
1503.06813
Tarek El-Gaaly
Haopeng Zhang, Tarek El-Gaaly, Ahmed Elgammal, Zhiguo Jiang
Factorization of View-Object Manifolds for Joint Object Recognition and Pose Estimation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Due to large variations in shape, appearance, and viewing conditions, object recognition is a key precursory challenge in the fields of object manipulation and robotic/AI visual reasoning in general. Recognizing object categories, particular instances of objects and viewpoints/poses of objects are three critical subproblems robots must solve in order to accurately grasp/manipulate objects and reason about their environments. Multi-view images of the same object lie on intrinsic low-dimensional manifolds in descriptor spaces (e.g. visual/depth descriptor spaces). These object manifolds share the same topology despite being geometrically different. Each object manifold can be represented as a deformed version of a unified manifold. The object manifolds can thus be parameterized by its homeomorphic mapping/reconstruction from the unified manifold. In this work, we develop a novel framework to jointly solve the three challenging recognition sub-problems, by explicitly modeling the deformations of object manifolds and factorizing it in a view-invariant space for recognition. We perform extensive experiments on several challenging datasets and achieve state-of-the-art results.
[ { "version": "v1", "created": "Mon, 23 Mar 2015 20:05:36 GMT" }, { "version": "v2", "created": "Mon, 13 Apr 2015 02:59:41 GMT" } ]
2015-04-14T00:00:00
[ [ "Zhang", "Haopeng", "" ], [ "El-Gaaly", "Tarek", "" ], [ "Elgammal", "Ahmed", "" ], [ "Jiang", "Zhiguo", "" ] ]
TITLE: Factorization of View-Object Manifolds for Joint Object Recognition and Pose Estimation ABSTRACT: Due to large variations in shape, appearance, and viewing conditions, object recognition is a key precursory challenge in the fields of object manipulation and robotic/AI visual reasoning in general. Recognizing object categories, particular instances of objects and viewpoints/poses of objects are three critical subproblems robots must solve in order to accurately grasp/manipulate objects and reason about their environments. Multi-view images of the same object lie on intrinsic low-dimensional manifolds in descriptor spaces (e.g. visual/depth descriptor spaces). These object manifolds share the same topology despite being geometrically different. Each object manifold can be represented as a deformed version of a unified manifold. The object manifolds can thus be parameterized by its homeomorphic mapping/reconstruction from the unified manifold. In this work, we develop a novel framework to jointly solve the three challenging recognition sub-problems, by explicitly modeling the deformations of object manifolds and factorizing it in a view-invariant space for recognition. We perform extensive experiments on several challenging datasets and achieve state-of-the-art results.
no_new_dataset
0.948917
1503.08909
George Toderici
Joe Yue-Hei Ng and Matthew Hausknecht and Sudheendra Vijayanarasimhan and Oriol Vinyals and Rajat Monga and George Toderici
Beyond Short Snippets: Deep Networks for Video Classification
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Convolutional neural networks (CNNs) have been extensively applied for image recognition problems giving state-of-the-art results on recognition, detection, segmentation and retrieval. In this work we propose and evaluate several deep neural network architectures to combine image information across a video over longer time periods than previously attempted. We propose two methods capable of handling full length videos. The first method explores various convolutional temporal feature pooling architectures, examining the various design choices which need to be made when adapting a CNN for this task. The second proposed method explicitly models the video as an ordered sequence of frames. For this purpose we employ a recurrent neural network that uses Long Short-Term Memory (LSTM) cells which are connected to the output of the underlying CNN. Our best networks exhibit significant performance improvements over previously published results on the Sports 1 million dataset (73.1% vs. 60.9%) and the UCF-101 datasets with (88.6% vs. 88.0%) and without additional optical flow information (82.6% vs. 72.8%).
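A minimal PyTorch sketch contrasting the two families of models described above: temporal feature pooling versus an LSTM over per-frame CNN features (the feature dimension, hidden size, and UCF-101-style class count are placeholder assumptions):

```python
import torch
import torch.nn as nn

class FrameLSTM(nn.Module):
    """Order-aware video model: per-frame CNN features go through an
    LSTM and the final hidden state is classified."""
    def __init__(self, feat_dim=2048, hidden=512, n_classes=101):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, frames):        # frames: (batch, time, feat_dim)
        _, (h, _) = self.lstm(frames)
        return self.head(h[-1])       # classify the last hidden state

feats = torch.randn(4, 30, 2048)      # 4 clips, 30 frames of CNN features each

# Temporal pooling variant: collapse the time axis with a max, then classify.
pool_head = nn.Linear(2048, 101)
pooled_logits = pool_head(feats.max(dim=1).values)

lstm_logits = FrameLSTM()(feats)
print(pooled_logits.shape, lstm_logits.shape)  # both: (4, 101)
```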
[ { "version": "v1", "created": "Tue, 31 Mar 2015 04:34:12 GMT" }, { "version": "v2", "created": "Mon, 13 Apr 2015 19:44:25 GMT" } ]
2015-04-14T00:00:00
[ [ "Ng", "Joe Yue-Hei", "" ], [ "Hausknecht", "Matthew", "" ], [ "Vijayanarasimhan", "Sudheendra", "" ], [ "Vinyals", "Oriol", "" ], [ "Monga", "Rajat", "" ], [ "Toderici", "George", "" ] ]
TITLE: Beyond Short Snippets: Deep Networks for Video Classification ABSTRACT: Convolutional neural networks (CNNs) have been extensively applied for image recognition problems giving state-of-the-art results on recognition, detection, segmentation and retrieval. In this work we propose and evaluate several deep neural network architectures to combine image information across a video over longer time periods than previously attempted. We propose two methods capable of handling full length videos. The first method explores various convolutional temporal feature pooling architectures, examining the various design choices which need to be made when adapting a CNN for this task. The second proposed method explicitly models the video as an ordered sequence of frames. For this purpose we employ a recurrent neural network that uses Long Short-Term Memory (LSTM) cells which are connected to the output of the underlying CNN. Our best networks exhibit significant performance improvements over previously published results on the Sports 1 million dataset (73.1% vs. 60.9%) and the UCF-101 datasets with (88.6% vs. 88.0%) and without additional optical flow information (82.6% vs. 72.8%).
no_new_dataset
0.953622
1504.02902
Alexander Kalmanovich
Alexander Kalmanovich and Gal Chechik
Gradual Training Method for Denoising Auto Encoders
arXiv admin note: substantial text overlap with arXiv:1412.6257
null
null
null
cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stacked denoising auto encoders (DAEs) are well known to learn useful deep representations, which can be used to improve supervised training by initializing a deep network. We investigate a training scheme for a deep DAE, where DAE layers are gradually added and keep adapting as additional layers are added. We show that in the regime of mid-sized datasets, this gradual training provides a small but consistent improvement over stacked training in both reconstruction quality and classification error on the MNIST and CIFAR datasets.
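A rough sketch of the gradual scheme under illustrative assumptions (tied decoder weights, sigmoid units, Gaussian corruption; layer sizes are placeholders, and this is not the authors' exact training code). The key point is that when a new DAE layer is appended, all previously added layers remain in the optimizer and keep adapting:

```python
import torch
import torch.nn as nn

def gradual_train(layers, data_loader, epochs_per_stage=5, noise=0.3):
    """Gradually grow a deep DAE: after each new layer is appended, the
    *whole* current stack keeps training jointly, instead of freezing
    earlier layers as in plain stacked pre-training."""
    stack = nn.ModuleList()
    for layer in layers:
        stack.append(layer)
        opt = torch.optim.SGD(stack.parameters(), lr=0.01)  # all layers stay trainable
        for _ in range(epochs_per_stage):
            for x, _ in data_loader:
                x = x.view(len(x), -1)
                h = x + noise * torch.randn_like(x)          # corrupt the input
                for enc in stack:                            # encode through the stack
                    h = torch.sigmoid(enc(h))
                for enc in reversed(stack):                  # tied-weight decode
                    h = torch.sigmoid(nn.functional.linear(h, enc.weight.t()))
                loss = nn.functional.mse_loss(h, x)
                opt.zero_grad(); loss.backward(); opt.step()
    return stack

# Example stack for MNIST-sized inputs:
# gradual_train([nn.Linear(784, 500), nn.Linear(500, 250)], loader)
```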
[ { "version": "v1", "created": "Sat, 11 Apr 2015 17:51:41 GMT" } ]
2015-04-14T00:00:00
[ [ "Kalmanovich", "Alexander", "" ], [ "Chechik", "Gal", "" ] ]
TITLE: Gradual Training Method for Denoising Auto Encoders ABSTRACT: Stacked denoising auto encoders (DAEs) are well known to learn useful deep representations, which can be used to improve supervised training by initializing a deep network. We investigate a training scheme for a deep DAE, where DAE layers are gradually added and keep adapting as additional layers are added. We show that in the regime of mid-sized datasets, this gradual training provides a small but consistent improvement over stacked training in both reconstruction quality and classification error on the MNIST and CIFAR datasets.
no_new_dataset
0.949482
1504.02975
F. Ozgur Catak
Ferhat \"Ozg\"ur \c{C}atak
Classification with Extreme Learning Machine and Ensemble Algorithms Over Randomly Partitioned Data
In Turkish, SIU
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this age of Big Data, machine learning based data mining methods are extensively used to inspect large-scale data sets. Deriving applicable predictive models from these types of data sets is a challenging obstacle because of their high complexity. With high data availability levels, automated classification of data sets has become a critical and complicated task. In this paper, the power of applying MapReduce-based distributed AdaBoosting of Extreme Learning Machine (ELM) is explored to build a reliable predictive bag of classification models. Thus, (i) dataset ensembles are built; (ii) the ELM algorithm is used to build weak classification models; and (iii) a strong classification model is built from the set of weak classification models. This training model is applied to publicly available knowledge discovery and data mining datasets.
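A minimal numpy sketch of a single ELM weak learner as referenced above (the hidden width and activation are illustrative choices): the hidden weights are random and never trained, and only the output weights are solved in closed form by least squares. A boosting wrapper would fit many such learners, one per randomly partitioned data chunk:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y_onehot, n_hidden=200):
    """Fit one Extreme Learning Machine: random hidden projection,
    then a Moore-Penrose least-squares solve for the output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)               # random hidden representation
    beta = np.linalg.pinv(H) @ y_onehot  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Toy usage on random data with 3 classes.
X = rng.normal(size=(300, 10))
y = rng.integers(0, 3, size=300)
W, b, beta = elm_fit(X, np.eye(3)[y])
print((elm_predict(X, W, b, beta) == y).mean())  # training accuracy
```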
[ { "version": "v1", "created": "Sun, 12 Apr 2015 14:03:25 GMT" } ]
2015-04-14T00:00:00
[ [ "Çatak", "Ferhat Özgür", "" ] ]
TITLE: Classification with Extreme Learning Machine and Ensemble Algorithms Over Randomly Partitioned Data ABSTRACT: In this age of Big Data, machine learning based data mining methods are extensively used to inspect large-scale data sets. Deriving applicable predictive models from these types of data sets is a challenging obstacle because of their high complexity. With high data availability levels, automated classification of data sets has become a critical and complicated task. In this paper, the power of applying MapReduce-based distributed AdaBoosting of Extreme Learning Machine (ELM) is explored to build a reliable predictive bag of classification models. Thus, (i) dataset ensembles are built; (ii) the ELM algorithm is used to build weak classification models; and (iii) a strong classification model is built from the set of weak classification models. This training model is applied to publicly available knowledge discovery and data mining datasets.
no_new_dataset
0.94545
1301.1275
Luca Allodi
Luca Allodi, Fabio Massacci
My Software has a Vulnerability, should I worry?
12 pages, 4 figures
ACM TISSEC Vol 17 Issue 1, 2014
10.1145/2630069
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
(U.S.) Rule-based policies to mitigate software risk suggest using the CVSS score to measure individual vulnerability risk and act accordingly: a HIGH CVSS score according to the NVD (the U.S. National Vulnerability Database) is therefore translated into a "Yes". A key issue is whether such a rule is economically sensible, in particular whether reported vulnerabilities have actually been exploited in the wild, and whether the risk score actually matches the risk of actual exploitation. We compare the NVD dataset with two additional datasets: EDB for the white market of vulnerabilities (such as those present in Metasploit), and EKITS for the exploits traded on the black market. We benchmark them against Symantec's threat explorer dataset (SYM) of actual exploits in the wild. We analyze the whole spectrum of CVSS submetrics and use these characteristics to perform a case-controlled analysis of CVSS scores (similar to those used to link lung cancer and smoking) to test their reliability as a risk factor for actual exploitation. We conclude that (a) fixing just because of a high CVSS score in NVD yields only negligible risk reduction, (b) the additional existence of proof-of-concept exploits (e.g. in EDB) may yield some additional but not large risk reduction, (c) fixing in response to presence in black markets yields the equivalent risk reduction of wearing a safety belt in cars (you might also die, but still...). On the negative side, our study shows that as an industry we miss a metric with high specificity (ruling out vulns for which we shouldn't worry). In order to address the feedback from BlackHat 2013's audience, the final revision (V3) provides additional data in Appendix A detailing how the control variables in the study affect the results.
[ { "version": "v1", "created": "Mon, 7 Jan 2013 17:32:36 GMT" }, { "version": "v2", "created": "Mon, 5 Aug 2013 21:09:41 GMT" }, { "version": "v3", "created": "Tue, 24 Sep 2013 14:05:27 GMT" } ]
2015-04-13T00:00:00
[ [ "Allodi", "Luca", "" ], [ "Massacci", "Fabio", "" ] ]
TITLE: My Software has a Vulnerability, should I worry? ABSTRACT: (U.S.) Rule-based policies to mitigate software risk suggest using the CVSS score to measure individual vulnerability risk and act accordingly: a HIGH CVSS score according to the NVD (the U.S. National Vulnerability Database) is therefore translated into a "Yes". A key issue is whether such a rule is economically sensible, in particular whether reported vulnerabilities have actually been exploited in the wild, and whether the risk score actually matches the risk of actual exploitation. We compare the NVD dataset with two additional datasets: EDB for the white market of vulnerabilities (such as those present in Metasploit), and EKITS for the exploits traded on the black market. We benchmark them against Symantec's threat explorer dataset (SYM) of actual exploits in the wild. We analyze the whole spectrum of CVSS submetrics and use these characteristics to perform a case-controlled analysis of CVSS scores (similar to those used to link lung cancer and smoking) to test their reliability as a risk factor for actual exploitation. We conclude that (a) fixing just because of a high CVSS score in NVD yields only negligible risk reduction, (b) the additional existence of proof-of-concept exploits (e.g. in EDB) may yield some additional but not large risk reduction, (c) fixing in response to presence in black markets yields the equivalent risk reduction of wearing a safety belt in cars (you might also die, but still...). On the negative side, our study shows that as an industry we miss a metric with high specificity (ruling out vulns for which we shouldn't worry). In order to address the feedback from BlackHat 2013's audience, the final revision (V3) provides additional data in Appendix A detailing how the control variables in the study affect the results.
no_new_dataset
0.902995
1406.2080
Sainbayar Sukhbaatar
Sainbayar Sukhbaatar, Joan Bruna, Manohar Paluri, Lubomir Bourdev and Rob Fergus
Training Convolutional Networks with Noisy Labels
Accepted as a workshop contribution at ICLR 2015
null
null
null
cs.CV cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The availability of large labeled datasets has allowed Convolutional Network models to achieve impressive recognition results. However, in many settings manual annotation of the data is impractical; instead our data has noisy labels, i.e. there is some freely available label for each image which may or may not be accurate. In this paper, we explore the performance of discriminatively-trained Convnets when trained on such noisy data. We introduce an extra noise layer into the network which adapts the network outputs to match the noisy label distribution. The parameters of this noise layer can be estimated as part of the training process and involve simple modifications to current training infrastructures for deep networks. We demonstrate the approaches on several datasets, including large scale experiments on the ImageNet classification benchmark.
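A hedged PyTorch sketch of the extra noise layer described above (the initialization scale and class count are assumptions): the base network's class posterior is multiplied by a learned row-stochastic matrix Q, where Q[i, j] plays the role of P(noisy label j | true label i), and Q trains jointly with the network:

```python
import torch
import torch.nn as nn

class NoiseAdaptation(nn.Module):
    """Adapts clean class probabilities to the noisy label distribution
    via a learned confusion matrix Q. Initialized near the identity so
    training starts from the 'no noise' assumption."""
    def __init__(self, n_classes):
        super().__init__()
        self.q_logits = nn.Parameter(5.0 * torch.eye(n_classes))

    def forward(self, clean_probs):              # (batch, n_classes)
        Q = torch.softmax(self.q_logits, dim=1)  # each row sums to one
        return clean_probs @ Q                   # distribution over noisy labels

probs = torch.softmax(torch.randn(8, 10), dim=1)  # stand-in network outputs
noisy = NoiseAdaptation(10)(probs)
print(noisy.sum(dim=1))  # rows still sum to one: valid distributions
```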
[ { "version": "v1", "created": "Mon, 9 Jun 2014 05:45:12 GMT" }, { "version": "v2", "created": "Sat, 20 Dec 2014 21:10:03 GMT" }, { "version": "v3", "created": "Tue, 3 Mar 2015 21:13:47 GMT" }, { "version": "v4", "created": "Fri, 10 Apr 2015 16:44:00 GMT" } ]
2015-04-13T00:00:00
[ [ "Sukhbaatar", "Sainbayar", "" ], [ "Bruna", "Joan", "" ], [ "Paluri", "Manohar", "" ], [ "Bourdev", "Lubomir", "" ], [ "Fergus", "Rob", "" ] ]
TITLE: Training Convolutional Networks with Noisy Labels ABSTRACT: The availability of large labeled datasets has allowed Convolutional Network models to achieve impressive recognition results. However, in many settings manual annotation of the data is impractical; instead our data has noisy labels, i.e. there is some freely available label for each image which may or may not be accurate. In this paper, we explore the performance of discriminatively-trained Convnets when trained on such noisy data. We introduce an extra noise layer into the network which adapts the network outputs to match the noisy label distribution. The parameters of this noise layer can be estimated as part of the training process and involve simple modifications to current training infrastructures for deep networks. We demonstrate the approaches on several datasets, including large scale experiments on the ImageNet classification benchmark.
no_new_dataset
0.948822
1409.1556
Karen Simonyan
Karen Simonyan, Andrew Zisserman
Very Deep Convolutional Networks for Large-Scale Image Recognition
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
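A small worked computation behind the 3x3 design choice above: stacking three 3x3 convolutions covers the same 7x7 receptive field as one large filter while using roughly 45% fewer parameters (the channel count C is a placeholder):

```python
# Parameter counts for C-channel input and output, biases ignored.
C = 256
params_three_3x3 = 3 * (3 * 3 * C * C)   # three stacked 3x3 layers: 27 C^2
params_one_7x7 = 7 * 7 * C * C           # a single 7x7 layer:       49 C^2
print(params_three_3x3, params_one_7x7)  # 1769472 vs 3211264
```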
[ { "version": "v1", "created": "Thu, 4 Sep 2014 19:48:04 GMT" }, { "version": "v2", "created": "Mon, 15 Sep 2014 19:58:29 GMT" }, { "version": "v3", "created": "Tue, 18 Nov 2014 20:43:11 GMT" }, { "version": "v4", "created": "Fri, 19 Dec 2014 20:01:21 GMT" }, { "version": "v5", "created": "Tue, 23 Dec 2014 20:05:00 GMT" }, { "version": "v6", "created": "Fri, 10 Apr 2015 16:25:04 GMT" } ]
2015-04-13T00:00:00
[ [ "Simonyan", "Karen", "" ], [ "Zisserman", "Andrew", "" ] ]
TITLE: Very Deep Convolutional Networks for Large-Scale Image Recognition ABSTRACT: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
no_new_dataset
0.951639
1410.1165
Rupesh Kumar Srivastava
Rupesh Kumar Srivastava, Jonathan Masci, Faustino Gomez, J\"urgen Schmidhuber
Understanding Locally Competitive Networks
9 pages + 2 supplementary, Accepted to ICLR 2015 Conference track
null
null
null
cs.NE cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently proposed neural network activation functions such as rectified linear, maxout, and local winner-take-all have allowed for faster and more effective training of deep neural architectures on large and complex datasets. The common trait among these functions is that they implement local competition between small groups of computational units within a layer, so that only part of the network is activated for any given input pattern. In this paper, we attempt to visualize and understand this self-modularization, and suggest a unified explanation for the beneficial properties of such networks. We also show how our insights can be directly useful for efficiently performing retrieval over large datasets using neural networks.
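A numpy sketch of two of the locally competitive activations mentioned above (the group size is an illustrative choice); in both, only the winner of each small group responds to a given input, which is the self-modularization the paper analyzes:

```python
import numpy as np

def maxout(z, group_size=4):
    """Maxout: each group of linear units is collapsed to its maximum,
    so the output is 1/group_size as wide as the input."""
    b, d = z.shape
    return z.reshape(b, d // group_size, group_size).max(axis=2)

def lwta(z, group_size=4):
    """Local winner-take-all: the winner keeps its value and the losers
    are zeroed, so the layer width is preserved."""
    b, d = z.shape
    g = z.reshape(b, d // group_size, group_size)
    mask = (g == g.max(axis=2, keepdims=True))
    return (g * mask).reshape(b, d)

z = np.random.randn(2, 8)
print(maxout(z).shape, lwta(z).shape)  # (2, 2) and (2, 8)
```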
[ { "version": "v1", "created": "Sun, 5 Oct 2014 14:46:47 GMT" }, { "version": "v2", "created": "Mon, 22 Dec 2014 20:07:17 GMT" }, { "version": "v3", "created": "Thu, 9 Apr 2015 01:22:49 GMT" } ]
2015-04-13T00:00:00
[ [ "Srivastava", "Rupesh Kumar", "" ], [ "Masci", "Jonathan", "" ], [ "Gomez", "Faustino", "" ], [ "Schmidhuber", "Jürgen", "" ] ]
TITLE: Understanding Locally Competitive Networks ABSTRACT: Recently proposed neural network activation functions such as rectified linear, maxout, and local winner-take-all have allowed for faster and more effective training of deep neural architectures on large and complex datasets. The common trait among these functions is that they implement local competition between small groups of computational units within a layer, so that only part of the network is activated for any given input pattern. In this paper, we attempt to visualize and understand this self-modularization, and suggest a unified explanation for the beneficial properties of such networks. We also show how our insights can be directly useful for efficiently performing retrieval over large datasets using neural networks.
no_new_dataset
0.94801
1410.8516
Laurent Dinh
Laurent Dinh, David Krueger and Yoshua Bengio
NICE: Non-linear Independent Components Estimation
11 pages and 2 pages Appendix, workshop paper at ICLR 2015
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a deep learning framework for modeling complex high-dimensional densities called Non-linear Independent Component Estimation (NICE). It is based on the idea that a good representation is one in which the data has a distribution that is easy to model. For this purpose, a non-linear deterministic transformation of the data is learned that maps it to a latent space so as to make the transformed data conform to a factorized distribution, i.e., resulting in independent latent variables. We parametrize this transformation so that computing the Jacobian determinant and inverse transform is trivial, yet we maintain the ability to learn complex non-linear transformations, via a composition of simple building blocks, each based on a deep neural network. The training criterion is simply the exact log-likelihood, which is tractable. Unbiased ancestral sampling is also easy. We show that this approach yields good generative models on four image datasets and can be used for inpainting.
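The simple building block at the heart of NICE is an additive coupling layer; a minimal numpy sketch (the inner network m is an arbitrary stand-in here) shows why the inverse is trivial and why this layer's Jacobian log-determinant is exactly zero:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 2))

def m(x1):
    """Stand-in coupling network; it can be arbitrarily complex because
    it never needs to be inverted."""
    return np.tanh(x1 @ W1) @ W2

def coupling_forward(x):
    """Additive coupling: y1 = x1, y2 = x2 + m(x1). The Jacobian is
    unit triangular, so log|det J| = 0 and the exact log-likelihood
    stays tractable."""
    x1, x2 = x[:, :2], x[:, 2:]
    return np.concatenate([x1, x2 + m(x1)], axis=1)

def coupling_inverse(y):
    y1, y2 = y[:, :2], y[:, 2:]
    return np.concatenate([y1, y2 - m(y1)], axis=1)

x = rng.normal(size=(5, 4))
assert np.allclose(coupling_inverse(coupling_forward(x)), x)
```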
[ { "version": "v1", "created": "Thu, 30 Oct 2014 19:44:20 GMT" }, { "version": "v2", "created": "Fri, 19 Dec 2014 22:40:18 GMT" }, { "version": "v3", "created": "Tue, 6 Jan 2015 18:10:44 GMT" }, { "version": "v4", "created": "Mon, 9 Mar 2015 18:06:58 GMT" }, { "version": "v5", "created": "Thu, 12 Mar 2015 06:25:20 GMT" }, { "version": "v6", "created": "Fri, 10 Apr 2015 12:27:56 GMT" } ]
2015-04-13T00:00:00
[ [ "Dinh", "Laurent", "" ], [ "Krueger", "David", "" ], [ "Bengio", "Yoshua", "" ] ]
TITLE: NICE: Non-linear Independent Components Estimation ABSTRACT: We propose a deep learning framework for modeling complex high-dimensional densities called Non-linear Independent Component Estimation (NICE). It is based on the idea that a good representation is one in which the data has a distribution that is easy to model. For this purpose, a non-linear deterministic transformation of the data is learned that maps it to a latent space so as to make the transformed data conform to a factorized distribution, i.e., resulting in independent latent variables. We parametrize this transformation so that computing the Jacobian determinant and inverse transform is trivial, yet we maintain the ability to learn complex non-linear transformations, via a composition of simple building blocks, each based on a deep neural network. The training criterion is simply the exact log-likelihood, which is tractable. Unbiased ancestral sampling is also easy. We show that this approach yields good generative models on four image datasets and can be used for inpainting.
no_new_dataset
0.944638
1503.08663
Guanbin Li
Guanbin Li and Yizhou Yu
Visual Saliency Based on Multiscale Deep Features
To appear in CVPR 2015
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visual saliency is a fundamental problem in both cognitive and computational sciences, including computer vision. In this CVPR 2015 paper, we discover that a high-quality visual saliency model can be trained with multiscale features extracted using a popular deep learning architecture, convolutional neural networks (CNNs), which have had many successes in visual recognition tasks. For learning such saliency models, we introduce a neural network architecture, which has fully connected layers on top of CNNs responsible for extracting features at three different scales. We then propose a refinement method to enhance the spatial coherence of our saliency results. Finally, aggregating multiple saliency maps computed for different levels of image segmentation can further boost the performance, yielding saliency maps better than those generated from a single segmentation. To promote further research and evaluation of visual saliency models, we also construct a new large database of 4447 challenging images and their pixelwise saliency annotation. Experimental results demonstrate that our proposed method is capable of achieving state-of-the-art performance on all public benchmarks, improving the F-Measure by 5.0% and 13.2% respectively on the MSRA-B dataset and our new dataset (HKU-IS), and lowering the mean absolute error by 5.7% and 35.1% respectively on these two datasets.
[ { "version": "v1", "created": "Mon, 30 Mar 2015 13:21:09 GMT" }, { "version": "v2", "created": "Tue, 31 Mar 2015 05:21:02 GMT" }, { "version": "v3", "created": "Fri, 10 Apr 2015 06:40:46 GMT" } ]
2015-04-13T00:00:00
[ [ "Li", "Guanbin", "" ], [ "Yu", "Yizhou", "" ] ]
TITLE: Visual Saliency Based on Multiscale Deep Features ABSTRACT: Visual saliency is a fundamental problem in both cognitive and computational sciences, including computer vision. In this CVPR 2015 paper, we discover that a high-quality visual saliency model can be trained with multiscale features extracted using a popular deep learning architecture, convolutional neural networks (CNNs), which have had many successes in visual recognition tasks. For learning such saliency models, we introduce a neural network architecture, which has fully connected layers on top of CNNs responsible for extracting features at three different scales. We then propose a refinement method to enhance the spatial coherence of our saliency results. Finally, aggregating multiple saliency maps computed for different levels of image segmentation can further boost the performance, yielding saliency maps better than those generated from a single segmentation. To promote further research and evaluation of visual saliency models, we also construct a new large database of 4447 challenging images and their pixelwise saliency annotation. Experimental results demonstrate that our proposed method is capable of achieving state-of-the-art performance on all public benchmarks, improving the F-Measure by 5.0% and 13.2% respectively on the MSRA-B dataset and our new dataset (HKU-IS), and lowering the mean absolute error by 5.7% and 35.1% respectively on these two datasets.
new_dataset
0.961893
1504.02564
Ragesh Jaiswal
Anup Bhattacharya, Ragesh Jaiswal, Amit Kumar
Faster Algorithms for the Constrained k-means Problem
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The classical center based clustering problems such as $k$-means/median/center assume that the optimal clusters satisfy the locality property that the points in the same cluster are close to each other. A number of clustering problems arise in machine learning where the optimal clusters do not follow such a locality property. Consider a variant of the $k$-means problem that may be regarded as a general version of such problems. Here, the optimal clusters $O_1, ..., O_k$ are an arbitrary partition of the dataset and the goal is to output $k$-centers $c_1, ..., c_k$ such that the objective function $\sum_{i=1}^{k} \sum_{x \in O_{i}} ||x - c_{i}||^2$ is minimized. It is not difficult to argue that any algorithm (without knowing the optimal clusters) that outputs a single set of $k$ centers, will not behave well as far as optimizing the above objective function is concerned. However, this does not rule out the existence of algorithms that output a list of such $k$ centers such that at least one of these $k$ centers behaves well. Given an error parameter $\varepsilon > 0$, let $\ell$ denote the size of the smallest list of $k$-centers such that at least one of the $k$-centers gives a $(1+\varepsilon)$ approximation w.r.t. the objective function above. In this paper, we show an upper bound on $\ell$ by giving a randomized algorithm that outputs a list of $2^{\tilde{O}(k/\varepsilon)}$ $k$-centers. We also give a closely matching lower bound of $2^{\tilde{\Omega}(k/\sqrt{\varepsilon})}$. Moreover, our algorithm runs in time $O \left(n d \cdot 2^{\tilde{O}(k/\varepsilon)} \right)$. This is a significant improvement over the previous result of Ding and Xu who gave an algorithm with running time $O \left(n d \cdot (\log{n})^{k} \cdot 2^{poly(k/\varepsilon)} \right)$ and output a list of size $O \left((\log{n})^k \cdot 2^{poly(k/\varepsilon)} \right)$.
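A short numpy illustration of the objective above: it is well defined for an arbitrary partition, not just a locality-respecting one, and for any fixed partition the optimal center of each cluster is still its mean (the data and partition below are synthetic placeholders):

```python
import numpy as np

def kmeans_objective(X, labels, centers):
    """sum_i sum_{x in O_i} ||x - c_i||^2 for an arbitrary partition."""
    return float(((X - centers[labels]) ** 2).sum())

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
labels = rng.integers(0, 3, size=100)  # any partition, not a Voronoi one
centers = np.stack([X[labels == i].mean(axis=0) for i in range(3)])
print(kmeans_objective(X, labels, centers))
```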
[ { "version": "v1", "created": "Fri, 10 Apr 2015 07:03:58 GMT" } ]
2015-04-13T00:00:00
[ [ "Bhattacharya", "Anup", "" ], [ "Jaiswal", "Ragesh", "" ], [ "Kumar", "Amit", "" ] ]
TITLE: Faster Algorithms for the Constrained k-means Problem ABSTRACT: The classical center based clustering problems such as $k$-means/median/center assume that the optimal clusters satisfy the locality property that the points in the same cluster are close to each other. A number of clustering problems arise in machine learning where the optimal clusters do not follow such a locality property. Consider a variant of the $k$-means problem that may be regarded as a general version of such problems. Here, the optimal clusters $O_1, ..., O_k$ are an arbitrary partition of the dataset and the goal is to output $k$-centers $c_1, ..., c_k$ such that the objective function $\sum_{i=1}^{k} \sum_{x \in O_{i}} ||x - c_{i}||^2$ is minimized. It is not difficult to argue that any algorithm (without knowing the optimal clusters) that outputs a single set of $k$ centers, will not behave well as far as optimizing the above objective function is concerned. However, this does not rule out the existence of algorithms that output a list of such $k$ centers such that at least one of these $k$ centers behaves well. Given an error parameter $\varepsilon > 0$, let $\ell$ denote the size of the smallest list of $k$-centers such that at least one of the $k$-centers gives a $(1+\varepsilon)$ approximation w.r.t. the objective function above. In this paper, we show an upper bound on $\ell$ by giving a randomized algorithm that outputs a list of $2^{\tilde{O}(k/\varepsilon)}$ $k$-centers. We also give a closely matching lower bound of $2^{\tilde{\Omega}(k/\sqrt{\varepsilon})}$. Moreover, our algorithm runs in time $O \left(n d \cdot 2^{\tilde{O}(k/\varepsilon)} \right)$. This is a significant improvement over the previous result of Ding and Xu who gave an algorithm with running time $O \left(n d \cdot (\log{n})^{k} \cdot 2^{poly(k/\varepsilon)} \right)$ and output a list of size $O \left((\log{n})^k \cdot 2^{poly(k/\varepsilon)} \right)$.
no_new_dataset
0.946101
1504.02696
Xinyan Yan
Xinyan Yan, Vadim Indelman, Byron Boots
Incremental Sparse GP Regression for Continuous-time Trajectory Estimation & Mapping
10 pages, 10 figures
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent work on simultaneous trajectory estimation and mapping (STEAM) for mobile robots has found success by representing the trajectory as a Gaussian process. Gaussian processes can represent a continuous-time trajectory, elegantly handle asynchronous and sparse measurements, and allow the robot to query the trajectory to recover its estimated position at any time of interest. A major drawback of this approach is that STEAM is formulated as a batch estimation problem. In this paper we provide the critical extensions necessary to transform the existing batch algorithm into an extremely efficient incremental algorithm. In particular, we are able to vastly speed up the solution time through efficient variable reordering and incremental sparse updates, which we believe will greatly increase the practicality of Gaussian process methods for robot mapping and localization. Finally, we demonstrate the approach and its advantages on both synthetic and real datasets.
[ { "version": "v1", "created": "Fri, 10 Apr 2015 14:47:25 GMT" } ]
2015-04-13T00:00:00
[ [ "Yan", "Xinyan", "" ], [ "Indelman", "Vadim", "" ], [ "Boots", "Byron", "" ] ]
TITLE: Incremental Sparse GP Regression for Continuous-time Trajectory Estimation & Mapping ABSTRACT: Recent work on simultaneous trajectory estimation and mapping (STEAM) for mobile robots has found success by representing the trajectory as a Gaussian process. Gaussian processes can represent a continuous-time trajectory, elegantly handle asynchronous and sparse measurements, and allow the robot to query the trajectory to recover its estimated position at any time of interest. A major drawback of this approach is that STEAM is formulated as a batch estimation problem. In this paper we provide the critical extensions necessary to transform the existing batch algorithm into an extremely efficient incremental algorithm. In particular, we are able to vastly speed up the solution time through efficient variable reordering and incremental sparse updates, which we believe will greatly increase the practicality of Gaussian process methods for robot mapping and localization. Finally, we demonstrate the approach and its advantages on both synthetic and real datasets.
no_new_dataset
0.946646
1504.02764
Roozbeh Mottaghi
Roozbeh Mottaghi, Yu Xiang, Silvio Savarese
A Coarse-to-Fine Model for 3D Pose Estimation and Sub-category Recognition
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite the fact that object detection, 3D pose estimation, and sub-category recognition are highly correlated tasks, they are usually addressed independently from each other because of the huge space of parameters. To jointly model all of these tasks, we propose a coarse-to-fine hierarchical representation, where each level of the hierarchy represents objects at a different level of granularity. The hierarchical representation prevents performance loss, which is often caused by the increase in the number of parameters (as we consider more tasks to model), and the joint modelling enables resolving ambiguities that exist in independent modelling of these tasks. We augment PASCAL3D+ dataset with annotations for these tasks and show that our hierarchical model is effective in joint modelling of object detection, 3D pose estimation, and sub-category recognition.
[ { "version": "v1", "created": "Fri, 10 Apr 2015 19:18:59 GMT" } ]
2015-04-13T00:00:00
[ [ "Mottaghi", "Roozbeh", "" ], [ "Xiang", "Yu", "" ], [ "Savarese", "Silvio", "" ] ]
TITLE: A Coarse-to-Fine Model for 3D Pose Estimation and Sub-category Recognition ABSTRACT: Despite the fact that object detection, 3D pose estimation, and sub-category recognition are highly correlated tasks, they are usually addressed independently from each other because of the huge space of parameters. To jointly model all of these tasks, we propose a coarse-to-fine hierarchical representation, where each level of the hierarchy represents objects at a different level of granularity. The hierarchical representation prevents performance loss, which is often caused by the increase in the number of parameters (as we consider more tasks to model), and the joint modelling enables resolving ambiguities that exist in independent modelling of these tasks. We augment PASCAL3D+ dataset with annotations for these tasks and show that our hierarchical model is effective in joint modelling of object detection, 3D pose estimation, and sub-category recognition.
no_new_dataset
0.947088
1408.1054
Wojciech Czarnecki
Wojciech Marian Czarnecki, Jacek Tabor
Multithreshold Entropy Linear Classifier
null
null
10.1016/j.eswa.2015.03.007
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Linear classifiers separate the data with a hyperplane. In this paper we focus on a novel method for constructing a multithreshold linear classifier, which separates the data with multiple parallel hyperplanes. The proposed model is based on information theory concepts -- namely Renyi's quadratic entropy and the Cauchy-Schwarz divergence. We begin with some general properties, including data scale invariance. Then we prove that our method is a multithreshold large margin classifier, which shows the analogy to the SVM, while at the same time working with a much broader class of hypotheses. What is also interesting, the proposed method is aimed at the maximization of a balanced quality measure (such as the Matthews Correlation Coefficient) as opposed to the very common maximization of accuracy. This feature comes directly from the optimization problem statement and is further confirmed by experiments on the UCI datasets. It appears that our Multithreshold Entropy Linear Classifier (MELC) obtains similar or higher scores than those given by SVM on both synthetic and real data. We show how the proposed approach can be beneficial for cheminformatics in the task of ligand activity prediction, where despite better classification results, MELC gives some additional insight into the data structure (classes of underrepresented chemical compounds).
[ { "version": "v1", "created": "Mon, 4 Aug 2014 18:01:29 GMT" } ]
2015-04-10T00:00:00
[ [ "Czarnecki", "Wojciech Marian", "" ], [ "Tabor", "Jacek", "" ] ]
TITLE: Multithreshold Entropy Linear Classifier ABSTRACT: Linear classifiers separate the data with a hyperplane. In this paper we focus on a novel method for constructing a multithreshold linear classifier, which separates the data with multiple parallel hyperplanes. The proposed model is based on information theory concepts -- namely Renyi's quadratic entropy and the Cauchy-Schwarz divergence. We begin with some general properties, including data scale invariance. Then we prove that our method is a multithreshold large margin classifier, which shows the analogy to the SVM, while at the same time working with a much broader class of hypotheses. What is also interesting, the proposed method is aimed at the maximization of a balanced quality measure (such as the Matthews Correlation Coefficient) as opposed to the very common maximization of accuracy. This feature comes directly from the optimization problem statement and is further confirmed by experiments on the UCI datasets. It appears that our Multithreshold Entropy Linear Classifier (MELC) obtains similar or higher scores than those given by SVM on both synthetic and real data. We show how the proposed approach can be beneficial for cheminformatics in the task of ligand activity prediction, where despite better classification results, MELC gives some additional insight into the data structure (classes of underrepresented chemical compounds).
no_new_dataset
0.948965
1412.7009
Jan Rudy
Jan Rudy, Graham Taylor
Generative Class-conditional Autoencoders
null
null
null
null
cs.NE cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent work by Bengio et al. (2013) proposes a sampling procedure for denoising autoencoders which involves learning the transition operator of a Markov chain. The transition operator is typically unimodal, which limits its capacity to model complex data. In order to perform efficient sampling from conditional distributions, we extend this work, both theoretically and algorithmically, to gated autoencoders (Memisevic, 2013). The proposed model is able to generate convincing class-conditional samples when trained on both the MNIST and TFD datasets.
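A hedged sketch of the kind of Markov chain sampler being extended (the corruption and reconstruction functions below are trivial stand-ins for a trained autoencoder, not the gated model itself): each transition corrupts the current sample and then applies the learned reconstruction, and the chain's stationary distribution approximates the data distribution:

```python
import numpy as np

def dae_chain(x0, corrupt, reconstruct, n_steps=100):
    """Run the DAE sampling chain: x -> corrupt(x) -> reconstruct(.)."""
    x = x0
    samples = []
    for _ in range(n_steps):
        x = reconstruct(corrupt(x))  # one application of the transition operator
        samples.append(x.copy())
    return samples

rng = np.random.default_rng(0)
corrupt = lambda x: x + 0.5 * rng.normal(size=x.shape)  # Gaussian corruption
reconstruct = lambda x: 0.8 * x                         # stand-in trained decoder
print(dae_chain(np.ones(3), corrupt, reconstruct, n_steps=3)[-1])
```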
[ { "version": "v1", "created": "Mon, 22 Dec 2014 14:57:05 GMT" }, { "version": "v2", "created": "Sat, 28 Feb 2015 00:16:55 GMT" }, { "version": "v3", "created": "Thu, 9 Apr 2015 01:54:33 GMT" } ]
2015-04-10T00:00:00
[ [ "Rudy", "Jan", "" ], [ "Taylor", "Graham", "" ] ]
TITLE: Generative Class-conditional Autoencoders ABSTRACT: Recent work by Bengio et al. (2013) proposes a sampling procedure for denoising autoencoders which involves learning the transition operator of a Markov chain. The transition operator is typically unimodal, which limits its capacity to model complex data. In order to perform efficient sampling from conditional distributions, we extend this work, both theoretically and algorithmically, to gated autoencoders (Memisevic, 2013). The proposed model is able to generate convincing class-conditional samples when trained on both the MNIST and TFD datasets.
no_new_dataset
0.949809
1502.01423
Chen Fang
Chen Fang, Hailin Jin, Jianchao Yang, Zhe Lin
Collaborative Feature Learning from Social Media
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image feature representation plays an essential role in image recognition and related tasks. The current state-of-the-art feature learning paradigm is supervised learning from labeled data. However, this paradigm requires large-scale category labels, which limits its applicability to domains where labels are hard to obtain. In this paper, we propose a new data-driven feature learning paradigm which does not rely on category labels. Instead, we learn from user behavior data collected on social media. Concretely, we use the image relationship discovered in the latent space from the user behavior data to guide the image feature learning. We collect a large-scale image and user behavior dataset from Behance.net. The dataset consists of 1.9 million images and over 300 million view records from 1.9 million users. We validate our feature learning paradigm on this dataset and find that the learned feature significantly outperforms the state-of-the-art image features in learning better image similarities. We also show that the learned feature performs competitively on various recognition benchmarks.
[ { "version": "v1", "created": "Thu, 5 Feb 2015 03:32:19 GMT" }, { "version": "v2", "created": "Sat, 4 Apr 2015 19:27:33 GMT" }, { "version": "v3", "created": "Thu, 9 Apr 2015 18:36:54 GMT" } ]
2015-04-10T00:00:00
[ [ "Fang", "Chen", "" ], [ "Jin", "Hailin", "" ], [ "Yang", "Jianchao", "" ], [ "Lin", "Zhe", "" ] ]
TITLE: Collaborative Feature Learning from Social Media ABSTRACT: Image feature representation plays an essential role in image recognition and related tasks. The current state-of-the-art feature learning paradigm is supervised learning from labeled data. However, this paradigm requires large-scale category labels, which limits its applicability to domains where labels are hard to obtain. In this paper, we propose a new data-driven feature learning paradigm which does not rely on category labels. Instead, we learn from user behavior data collected on social media. Concretely, we use the image relationship discovered in the latent space from the user behavior data to guide the image feature learning. We collect a large-scale image and user behavior dataset from Behance.net. The dataset consists of 1.9 million images and over 300 million view records from 1.9 million users. We validate our feature learning paradigm on this dataset and find that the learned feature significantly outperforms the state-of-the-art image features in learning better image similarities. We also show that the learned feature performs competitively on various recognition benchmarks.
new_dataset
0.91181
1502.03671
R\'emi Lebret
R\'emi Lebret, Pedro O. Pinheiro, Ronan Collobert
Phrase-based Image Captioning
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generating a novel textual description of an image is an interesting problem that connects computer vision and natural language processing. In this paper, we present a simple model that is able to generate descriptive sentences given a sample image. This model has a strong focus on the syntax of the descriptions. We train a purely bilinear model that learns a metric between an image representation (generated from a previously trained Convolutional Neural Network) and phrases that are used to describe them. The system is then able to infer phrases from a given image sample. Based on caption syntax statistics, we propose a simple language model that can produce relevant descriptions for a given test image using the phrases inferred. Our approach, which is considerably simpler than state-of-the-art models, achieves comparable results in two popular datasets for the task: Flickr30k and the recently proposed Microsoft COCO.
[ { "version": "v1", "created": "Thu, 12 Feb 2015 14:17:15 GMT" }, { "version": "v2", "created": "Thu, 9 Apr 2015 09:48:52 GMT" } ]
2015-04-10T00:00:00
[ [ "Lebret", "Rémi", "" ], [ "Pinheiro", "Pedro O.", "" ], [ "Collobert", "Ronan", "" ] ]
TITLE: Phrase-based Image Captioning ABSTRACT: Generating a novel textual description of an image is an interesting problem that connects computer vision and natural language processing. In this paper, we present a simple model that is able to generate descriptive sentences given a sample image. This model has a strong focus on the syntax of the descriptions. We train a purely bilinear model that learns a metric between an image representation (generated from a previously trained Convolutional Neural Network) and phrases that are used to describe them. The system is then able to infer phrases from a given image sample. Based on caption syntax statistics, we propose a simple language model that can produce relevant descriptions for a given test image using the phrases inferred. Our approach, which is considerably simpler than state-of-the-art models, achieves comparable results in two popular datasets for the task: Flickr30k and the recently proposed Microsoft COCO.
no_new_dataset
0.949201
1504.02147
Thomas Goldstein
Tom Goldstein, Gavin Taylor, Kawika Barabin, Kent Sayre
Unwrapping ADMM: Efficient Distributed Computing via Transpose Reduction
null
null
null
null
cs.DC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent approaches to distributed model fitting rely heavily on consensus ADMM, where each node solves small sub-problems using only local data. We propose iterative methods that solve {\em global} sub-problems over an entire distributed dataset. This is possible using transpose reduction strategies that allow a single node to solve least-squares over massive datasets without putting all the data in one place. This results in simple iterative methods that avoid the expensive inner loops required for consensus methods. To demonstrate the efficiency of this approach, we fit linear classifiers and sparse linear models to datasets over 5 Tb in size using a distributed implementation with over 7000 cores in far less time than previous approaches.
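A numpy sketch of the transpose reduction idea for distributed least squares (the block sizes are illustrative, and the loop stands in for a map step followed by a sum-reduce): each node ships only its small d x d Gram matrix and d-vector, never its raw rows, and a single node solves the global problem:

```python
import numpy as np

def transpose_reduce_lsq(blocks):
    """Solve min ||Ax - b||^2 where A, b are split row-wise across
    nodes; only A_i^T A_i and A_i^T b_i are communicated."""
    d = blocks[0][0].shape[1]
    G = np.zeros((d, d))
    g = np.zeros(d)
    for A_i, b_i in blocks:   # in a cluster: local compute, then a reduce
        G += A_i.T @ A_i
        g += A_i.T @ b_i
    return np.linalg.solve(G, g)

rng = np.random.default_rng(0)
A = rng.normal(size=(1000, 5))
b = rng.normal(size=1000)
blocks = [(A[i:i + 250], b[i:i + 250]) for i in range(0, 1000, 250)]
x_dist = transpose_reduce_lsq(blocks)
x_full = np.linalg.lstsq(A, b, rcond=None)[0]
assert np.allclose(x_dist, x_full)  # matches the centralized solution
```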
[ { "version": "v1", "created": "Wed, 8 Apr 2015 22:35:18 GMT" } ]
2015-04-10T00:00:00
[ [ "Goldstein", "Tom", "" ], [ "Taylor", "Gavin", "" ], [ "Barabin", "Kawika", "" ], [ "Sayre", "Kent", "" ] ]
TITLE: Unwrapping ADMM: Efficient Distributed Computing via Transpose Reduction ABSTRACT: Recent approaches to distributed model fitting rely heavily on consensus ADMM, where each node solves small sub-problems using only local data. We propose iterative methods that solve {\em global} sub-problems over an entire distributed dataset. This is possible using transpose reduction strategies that allow a single node to solve least-squares over massive datasets without putting all the data in one place. This results in simple iterative methods that avoid the expensive inner loops required for consensus methods. To demonstrate the efficiency of this approach, we fit linear classifiers and sparse linear models to datasets over 5 Tb in size using a distributed implementation with over 7000 cores in far less time than previous approaches.
no_new_dataset
0.946498
1504.02305
Haewoon Kwak
Haewoon Kwak and Jeremy Blackburn and Seungyeop Han
Exploring Cyberbullying and Other Toxic Behavior in Team Competition Online Games
CHI'15
null
null
null
cs.CY cs.HC cs.MM cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we explore cyberbullying and other toxic behavior in team competition online games. Using a dataset of over 10 million player reports on 1.46 million toxic players along with corresponding crowdsourced decisions, we test several hypotheses drawn from theories explaining toxic behavior. Besides providing a large-scale, empirically based understanding of toxic behavior, our work can be used as a basis for building systems to detect, prevent, and counteract toxic behavior.
[ { "version": "v1", "created": "Thu, 9 Apr 2015 13:33:58 GMT" } ]
2015-04-10T00:00:00
[ [ "Kwak", "Haewoon", "" ], [ "Blackburn", "Jeremy", "" ], [ "Han", "Seungyeop", "" ] ]
TITLE: Exploring Cyberbullying and Other Toxic Behavior in Team Competition Online Games ABSTRACT: In this work we explore cyberbullying and other toxic behavior in team competition online games. Using a dataset of over 10 million player reports on 1.46 million toxic players along with corresponding crowdsourced decisions, we test several hypotheses drawn from theories explaining toxic behavior. Besides providing a large-scale, empirically based understanding of toxic behavior, our work can be used as a basis for building systems to detect, prevent, and counteract toxic behavior.
no_new_dataset
0.835819
1504.02340
Wongun Choi
Wongun Choi
Near-Online Multi-target Tracking with Aggregated Local Flow Descriptor
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we focus on the two key aspects of the multiple target tracking problem: 1) designing an accurate affinity measure to associate detections and 2) implementing an efficient and accurate (near) online multiple target tracking algorithm. As the first contribution, we introduce a novel Aggregated Local Flow Descriptor (ALFD) that encodes the relative motion pattern between a pair of temporally distant detections using long term interest point trajectories (IPTs). Leveraging the IPTs, the ALFD provides a robust affinity measure for estimating the likelihood of matching detections regardless of the application scenario. As another contribution, we present a Near-Online Multi-target Tracking (NOMT) algorithm. The tracking problem is formulated as a data association between targets and detections in a temporal window, which is performed repeatedly at every frame. While being efficient, NOMT achieves robustness by integrating multiple cues including the ALFD metric, target dynamics, appearance similarity, and long term trajectory regularization into the model. Our ablative analysis verifies the superiority of the ALFD metric over other conventional affinity metrics. We run a comprehensive experimental evaluation on two challenging tracking datasets, the KITTI and MOT datasets. The NOMT method combined with the ALFD metric achieves the best accuracy on both datasets with significant margins (about 10% higher MOTA) over the state of the art.
[ { "version": "v1", "created": "Thu, 9 Apr 2015 14:57:32 GMT" } ]
2015-04-10T00:00:00
[ [ "Choi", "Wongun", "" ] ]
TITLE: Near-Online Multi-target Tracking with Aggregated Local Flow Descriptor ABSTRACT: In this paper, we focus on the two key aspects of the multiple target tracking problem: 1) designing an accurate affinity measure to associate detections and 2) implementing an efficient and accurate (near) online multiple target tracking algorithm. As the first contribution, we introduce a novel Aggregated Local Flow Descriptor (ALFD) that encodes the relative motion pattern between a pair of temporally distant detections using long term interest point trajectories (IPTs). Leveraging the IPTs, the ALFD provides a robust affinity measure for estimating the likelihood of matching detections regardless of the application scenario. As another contribution, we present a Near-Online Multi-target Tracking (NOMT) algorithm. The tracking problem is formulated as a data association between targets and detections in a temporal window, which is performed repeatedly at every frame. While being efficient, NOMT achieves robustness by integrating multiple cues including the ALFD metric, target dynamics, appearance similarity, and long term trajectory regularization into the model. Our ablative analysis verifies the superiority of the ALFD metric over other conventional affinity metrics. We run a comprehensive experimental evaluation on two challenging tracking datasets, the KITTI and MOT datasets. The NOMT method combined with the ALFD metric achieves the best accuracy on both datasets with significant margins (about 10% higher MOTA) over the state of the art.
no_new_dataset
0.945399
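The frame-level data-association step this record describes can be illustrated with a standard assignment solver; the random matrix below stands in for ALFD-based affinities, and the gating threshold is an assumption:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
affinity = rng.random((3, 5))                  # 3 targets x 5 detections
rows, cols = linear_sum_assignment(-affinity)  # maximize total affinity
for t, d in zip(rows, cols):
    if affinity[t, d] > 0.5:                   # hypothetical gate
        print(f"target {t} <- detection {d} (affinity {affinity[t, d]:.2f})")
```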
1504.02356
Xavier Gir\'o-i-Nieto
Eva Mohedano, Amaia Salvador, Sergi Porta, Xavier Gir\'o-i-Nieto, Graham Healy, Kevin McGuinness, Noel O'Connor and Alan F. Smeaton
Exploring EEG for Object Detection and Retrieval
This preprint is the full version of a short paper accepted in the ACM International Conference on Multimedia Retrieval (ICMR) 2015 (Shanghai, China)
null
null
null
cs.HC cs.CV cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper explores the potential for using Brain Computer Interfaces (BCI) as a relevance feedback mechanism in content-based image retrieval. We investigate if it is possible to capture useful EEG signals to detect if relevant objects are present in a dataset of realistic and complex images. We perform several experiments using a rapid serial visual presentation (RSVP) of images at different rates (5Hz and 10Hz) on 8 users with different degrees of familiarization with BCI and the dataset. We then use the feedback from the BCI and mouse-based interfaces to retrieve localized objects in a subset of TRECVid images. We show that it is indeed possible to detect such objects in complex images and, also, that users with previous knowledge on the dataset or experience with the RSVP outperform others. When the users have limited time to annotate the images (100 seconds in our experiments) both interfaces are comparable in performance. Comparing our best users in a retrieval task, we found that EEG-based relevance feedback outperforms mouse-based feedback. The realistic and complex image dataset differentiates our work from previous studies on EEG for image retrieval.
[ { "version": "v1", "created": "Thu, 9 Apr 2015 15:43:52 GMT" } ]
2015-04-10T00:00:00
[ [ "Mohedano", "Eva", "" ], [ "Salvador", "Amaia", "" ], [ "Porta", "Sergi", "" ], [ "Giró-i-Nieto", "Xavier", "" ], [ "Healy", "Graham", "" ], [ "McGuinness", "Kevin", "" ], [ "O'Connor", "Noel", "" ], [ "Smeaton", "Alan F.", "" ] ]
TITLE: Exploring EEG for Object Detection and Retrieval ABSTRACT: This paper explores the potential for using Brain Computer Interfaces (BCI) as a relevance feedback mechanism in content-based image retrieval. We investigate if it is possible to capture useful EEG signals to detect if relevant objects are present in a dataset of realistic and complex images. We perform several experiments using a rapid serial visual presentation (RSVP) of images at different rates (5Hz and 10Hz) on 8 users with different degrees of familiarization with BCI and the dataset. We then use the feedback from the BCI and mouse-based interfaces to retrieve localized objects in a subset of TRECVid images. We show that it is indeed possible to detect such objects in complex images and, also, that users with previous knowledge on the dataset or experience with the RSVP outperform others. When the users have limited time to annotate the images (100 seconds in our experiments) both interfaces are comparable in performance. Comparing our best users in a retrieval task, we found that EEG-based relevance feedback outperforms mouse-based feedback. The realistic and complex image dataset differentiates our work from previous studies on EEG for image retrieval.
no_new_dataset
0.940353
1504.02363
Ke Zhang
Ke Zhang, Konstantinos Pelechrinis, Theodoros Lappas
Analyzing and Modeling Special Offer Campaigns in Location-based Social Networks
in The 9th International AAAI Conference on Web and Social Media (ICWSM 2015)
null
null
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The proliferation of mobile handheld devices, in combination with technological advancements in mobile computing, has led to a number of innovative services that make use of the location information available on such devices. Traditional yellow pages websites have now moved to mobile platforms, giving local businesses and potential nearby customers the opportunity to connect. These platforms can offer an affordable advertising channel to local businesses. One of the mechanisms offered by location-based social networks (LBSNs) allows businesses to provide special offers to their customers that connect through the platform. We collect a large time-series dataset from approximately 14 million venues on Foursquare and analyze the performance of such campaigns using randomization techniques and (non-parametric) hypothesis testing with statistical bootstrapping. Our main finding indicates that this type of promotion is not as effective as anecdotal success stories might suggest. Finally, we design classifiers by extracting three different types of features that are able to provide an educated decision on whether a special offer campaign for a local business will succeed or not, both in the short and the long term.
[ { "version": "v1", "created": "Thu, 9 Apr 2015 16:26:40 GMT" } ]
2015-04-10T00:00:00
[ [ "Zhang", "Ke", "" ], [ "Pelechrinis", "Konstantinos", "" ], [ "Lappas", "Theodoros", "" ] ]
TITLE: Analyzing and Modeling Special Offer Campaigns in Location-based Social Networks ABSTRACT: The proliferation of mobile handheld devices, in combination with technological advancements in mobile computing, has led to a number of innovative services that make use of the location information available on such devices. Traditional yellow pages websites have now moved to mobile platforms, giving local businesses and potential nearby customers the opportunity to connect. These platforms can offer an affordable advertising channel to local businesses. One of the mechanisms offered by location-based social networks (LBSNs) allows businesses to provide special offers to their customers that connect through the platform. We collect a large time-series dataset from approximately 14 million venues on Foursquare and analyze the performance of such campaigns using randomization techniques and (non-parametric) hypothesis testing with statistical bootstrapping. Our main finding indicates that this type of promotion is not as effective as anecdotal success stories might suggest. Finally, we design classifiers by extracting three different types of features that are able to provide an educated decision on whether a special offer campaign for a local business will succeed or not, both in the short and the long term.
no_new_dataset
0.933613
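A minimal randomization test in the spirit of the methodology this record mentions: did a venue's daily check-ins change after a campaign started? The counts and window lengths below are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
before = rng.poisson(10, size=60)   # daily check-ins before the offer
after = rng.poisson(11, size=60)    # daily check-ins after the offer
observed = after.mean() - before.mean()

pooled = np.concatenate([before, after])
exceed, n_perm = 0, 10_000
for _ in range(n_perm):
    shuffled = rng.permutation(pooled)          # break the before/after split
    diff = shuffled[60:].mean() - shuffled[:60].mean()
    exceed += abs(diff) >= abs(observed)
print(f"observed diff {observed:.2f}, p ~ {exceed / n_perm:.3f}")
```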
1504.02380
Spencer Wheatley Mr.
Spencer Wheatley, Benjamin Sovacool, and Didier Sornette
Of Disasters and Dragon Kings: A Statistical Analysis of Nuclear Power Incidents & Accidents
null
null
null
null
physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We provide, and perform a risk theoretic statistical analysis of, a dataset that is 75 percent larger than the previous best dataset on nuclear incidents and accidents, comparing three measures of severity: INES (International Nuclear Event Scale), radiation released, and damage dollar losses. The annual rate of nuclear accidents, with size above 20 Million US$, per plant, decreased from the 1950s until dropping significantly after Chernobyl (April, 1986). The rate is now roughly stable at 0.002 to 0.003, i.e., around 1 event per year across the current fleet. The distribution of damage values changed after Three Mile Island (TMI; March, 1979), where moderate damages were suppressed but the tail became very heavy, being described by a Pareto distribution with tail index 0.55. Further, there is a runaway disaster regime, associated with the "dragon-king" phenomenon, amplifying the risk of extreme damage. In fact, the damage of the largest event (Fukushima; March, 2011) is equal to 60 percent of the total damage of all 174 accidents in our database since 1946. In dollar losses we compute a 50% chance that (i) a Fukushima event (or larger) occurs in the next 50 years, (ii) a Chernobyl event (or larger) occurs in the next 27 years and (iii) a TMI event (or larger) occurs in the next 10 years. Finally, we find that the INES scale is inconsistent. To be consistent with damage, the Fukushima disaster would need to have an INES level of 11, rather than the maximum of 7.
[ { "version": "v1", "created": "Tue, 7 Apr 2015 18:50:33 GMT" } ]
2015-04-10T00:00:00
[ [ "Wheatley", "Spencer", "" ], [ "Sovacool", "Benjamin", "" ], [ "Sornette", "Didier", "" ] ]
TITLE: Of Disasters and Dragon Kings: A Statistical Analysis of Nuclear Power Incidents & Accidents ABSTRACT: We provide, and perform a risk theoretic statistical analysis of, a dataset that is 75 percent larger than the previous best dataset on nuclear incidents and accidents, comparing three measures of severity: INES (International Nuclear Event Scale), radiation released, and damage dollar losses. The annual rate of nuclear accidents, with size above 20 Million US$, per plant, decreased from the 1950s until dropping significantly after Chernobyl (April, 1986). The rate is now roughly stable at 0.002 to 0.003, i.e., around 1 event per year across the current fleet. The distribution of damage values changed after Three Mile Island (TMI; March, 1979), where moderate damages were suppressed but the tail became very heavy, being described by a Pareto distribution with tail index 0.55. Further, there is a runaway disaster regime, associated with the "dragon-king" phenomenon, amplifying the risk of extreme damage. In fact, the damage of the largest event (Fukushima; March, 2011) is equal to 60 percent of the total damage of all 174 accidents in our database since 1946. In dollar losses we compute a 50% chance that (i) a Fukushima event (or larger) occurs in the next 50 years, (ii) a Chernobyl event (or larger) occurs in the next 27 years and (iii) a TMI event (or larger) occurs in the next 10 years. Finally, we find that the INES scale is inconsistent. To be consistent with damage, the Fukushima disaster would need to have an INES level of 11, rather than the maximum of 7.
no_new_dataset
0.929376
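A sketch of estimating the Pareto tail index (the quantity reported as 0.55 above) with the Hill estimator; the synthetic sample and the choice of k are illustrative assumptions:

```python
import numpy as np

def hill_tail_index(values, k):
    # Hill estimator from the k largest observations
    x = np.sort(np.asarray(values, dtype=float))[::-1]
    return 1.0 / np.mean(np.log(x[:k]) - np.log(x[k]))

rng = np.random.default_rng(0)
alpha = 0.55
sample = rng.random(174) ** (-1.0 / alpha)   # Pareto(alpha) draws
print(f"estimated tail index: {hill_tail_index(sample, k=40):.2f}")
```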
1504.01781
Tiep Mai
Tiep Mai, Deepak Ajwani and Alessandra Sala
Profiling user activities with minimal traffic traces
null
null
null
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding user behavior is essential to personalize and enrich a user's online experience. While there are significant benefits to be accrued from the pursuit of personalized services based on a fine-grained behavioral analysis, care must be taken to address user privacy concerns. In this paper, we consider the use of web traces with truncated URLs - each URL is trimmed to only contain the web domain - for this purpose. While such truncation removes the fine-grained sensitive information, it also strips the data of many features that are crucial to the profiling of user activity. We show how to overcome the severe handicap of lacking crucial features for the purpose of filtering out the URLs representing a user activity from the noisy network traffic trace (including advertisement, spam, analytics, and web scripts) with high accuracy. This activity profiling with truncated URLs enables network operators to provide personalized services while mitigating privacy concerns by storing and sharing only truncated traffic traces. In order to offset the accuracy loss due to truncation, our statistical methodology leverages specialized features extracted from a group of consecutive URLs that represent a micro user action, such as a web click or a chat reply, which we call bursts. These bursts, in turn, are detected by a novel algorithm which is based on our observed characteristics of the inter-arrival time of HTTP records. We present an extensive experimental evaluation on a real dataset of mobile web traces, consisting of more than 130 million records, representing the browsing activities of 10,000 users over a period of 30 days. Our results show that the proposed methodology achieves around 90% accuracy in segregating URLs representing user activities from non-representative URLs.
[ { "version": "v1", "created": "Tue, 7 Apr 2015 23:29:23 GMT" } ]
2015-04-09T00:00:00
[ [ "Mai", "Tiep", "" ], [ "Ajwani", "Deepak", "" ], [ "Sala", "Alessandra", "" ] ]
TITLE: Profiling user activities with minimal traffic traces ABSTRACT: Understanding user behavior is essential to personalize and enrich a user's online experience. While there are significant benefits to be accrued from the pursuit of personalized services based on a fine-grained behavioral analysis, care must be taken to address user privacy concerns. In this paper, we consider the use of web traces with truncated URLs - each URL is trimmed to only contain the web domain - for this purpose. While such truncation removes the fine-grained sensitive information, it also strips the data of many features that are crucial to the profiling of user activity. We show how to overcome the severe handicap of lacking crucial features for the purpose of filtering out the URLs representing a user activity from the noisy network traffic trace (including advertisement, spam, analytics, and web scripts) with high accuracy. This activity profiling with truncated URLs enables network operators to provide personalized services while mitigating privacy concerns by storing and sharing only truncated traffic traces. In order to offset the accuracy loss due to truncation, our statistical methodology leverages specialized features extracted from a group of consecutive URLs that represent a micro user action, such as a web click or a chat reply, which we call bursts. These bursts, in turn, are detected by a novel algorithm which is based on our observed characteristics of the inter-arrival time of HTTP records. We present an extensive experimental evaluation on a real dataset of mobile web traces, consisting of more than 130 million records, representing the browsing activities of 10,000 users over a period of 30 days. Our results show that the proposed methodology achieves around 90% accuracy in segregating URLs representing user activities from non-representative URLs.
new_dataset
0.958148
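A toy version of the inter-arrival-time burst detection this record describes: consecutive HTTP records closer than a gap threshold form one burst. The threshold value is an illustrative assumption, not the paper's fitted one:

```python
def detect_bursts(timestamps, max_gap=2.0):
    # timestamps: sorted epoch seconds of HTTP records for one user
    bursts, current = [], [timestamps[0]]
    for prev, t in zip(timestamps, timestamps[1:]):
        if t - prev <= max_gap:
            current.append(t)        # still within the same micro action
        else:
            bursts.append(current)   # gap too large: start a new burst
            current = [t]
    bursts.append(current)
    return bursts

print([len(b) for b in detect_bursts([0.0, 0.3, 0.8, 5.0, 5.1, 20.0])])
# -> [3, 2, 1]
```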
1504.01807
Junbin Gao Professor
Boyue Wang and Yongli Hu and Junbin Gao and Yanfeng Sun and Baocai Yin
Low Rank Representation on Grassmann Manifolds: An Extrinsic Perspective
9 pages
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many computer vision algorithms employ subspace models to represent data. Low-rank representation (LRR) has been successfully applied in subspace clustering, in which data are clustered according to their subspace structures. The possibility of extending LRR to the Grassmann manifold is explored in this paper. Rather than directly embedding the Grassmann manifold into a symmetric matrix space, an extrinsic view is taken by building the self-representation of LRR over the tangent space of each Grassmannian point. A new algorithm for solving the proposed Grassmannian LRR model is designed and implemented. Several clustering experiments are conducted on a handwritten digits dataset, dynamic texture video clips and YouTube celebrity face video data. The experimental results show that our method outperforms a number of existing methods.
[ { "version": "v1", "created": "Wed, 8 Apr 2015 02:38:04 GMT" } ]
2015-04-09T00:00:00
[ [ "Wang", "Boyue", "" ], [ "Hu", "Yongli", "" ], [ "Gao", "Junbin", "" ], [ "Sun", "Yanfeng", "" ], [ "Yin", "Baocai", "" ] ]
TITLE: Low Rank Representation on Grassmann Manifolds: An Extrinsic Perspective ABSTRACT: Many computer vision algorithms employ subspace models to represent data. Low-rank representation (LRR) has been successfully applied in subspace clustering, in which data are clustered according to their subspace structures. The possibility of extending LRR to the Grassmann manifold is explored in this paper. Rather than directly embedding the Grassmann manifold into a symmetric matrix space, an extrinsic view is taken by building the self-representation of LRR over the tangent space of each Grassmannian point. A new algorithm for solving the proposed Grassmannian LRR model is designed and implemented. Several clustering experiments are conducted on a handwritten digits dataset, dynamic texture video clips and YouTube celebrity face video data. The experimental results show that our method outperforms a number of existing methods.
no_new_dataset
0.947478
1504.01840
Yegor Tkachenko
Yegor Tkachenko
Autonomous CRM Control via CLV Approximation with Deep Reinforcement Learning in Discrete and Continuous Action Space
13 pages, 8 figures
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper outlines a framework for autonomous control of a CRM (customer relationship management) system. First, it explores how a modified version of the widely accepted Recency-Frequency-Monetary Value system of metrics can be used to define the state space of clients or donors. Second, it describes a procedure to determine the optimal direct marketing action in discrete and continuous action space for the given individual, based on his position in the state space. The procedure involves the use of model-free Q-learning to train a deep neural network that relates a client's position in the state space to rewards associated with possible marketing actions. The estimated value function over the client state space can be interpreted as customer lifetime value, and thus allows for a quick plug-in estimation of CLV for a given client. Experimental results are presented, based on KDD Cup 1998 mailing dataset of donation solicitations.
[ { "version": "v1", "created": "Wed, 8 Apr 2015 06:22:44 GMT" } ]
2015-04-09T00:00:00
[ [ "Tkachenko", "Yegor", "" ] ]
TITLE: Autonomous CRM Control via CLV Approximation with Deep Reinforcement Learning in Discrete and Continuous Action Space ABSTRACT: The paper outlines a framework for autonomous control of a CRM (customer relationship management) system. First, it explores how a modified version of the widely accepted Recency-Frequency-Monetary Value system of metrics can be used to define the state space of clients or donors. Second, it describes a procedure to determine the optimal direct marketing action in discrete and continuous action space for the given individual, based on his position in the state space. The procedure involves the use of model-free Q-learning to train a deep neural network that relates a client's position in the state space to rewards associated with possible marketing actions. The estimated value function over the client state space can be interpreted as customer lifetime value, and thus allows for a quick plug-in estimation of CLV for a given client. Experimental results are presented, based on KDD Cup 1998 mailing dataset of donation solicitations.
no_new_dataset
0.947575
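A toy tabular stand-in for the Q-learning setup this record describes (the paper trains a deep network over an RFM state space; the states, action effects and reward probabilities here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 10, 2          # discretized RFM bins; {no-mail, mail}
Q = np.zeros((n_states, n_actions))
gamma, lr, eps = 0.95, 0.1, 0.1

def step(s, a):
    # hypothetical environment: mailing occasionally triggers a donation
    reward = 10.0 if (a == 1 and rng.random() < 0.1) else 0.0
    return min(n_states - 1, s + a), reward

for _ in range(500):                 # episodes
    s = int(rng.integers(n_states))
    for _ in range(20):
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
        s_next, r = step(s, a)
        Q[s, a] += lr * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(np.round(Q.max(axis=1), 1))    # per-state value, readable as a CLV proxy
```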
1504.01942
Laura Leal Taix\'e
Laura Leal-Taix\'e and Anton Milan and Ian Reid and Stefan Roth and Konrad Schindler
MOTChallenge 2015: Towards a Benchmark for Multi-Target Tracking
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the recent past, the computer vision community has developed centralized benchmarks for the performance evaluation of a variety of tasks, including generic object and pedestrian detection, 3D reconstruction, optical flow, single-object short-term tracking, and stereo estimation. Despite potential pitfalls of such benchmarks, they have proved to be extremely helpful to advance the state of the art in the respective area. Interestingly, there has been rather limited work on the standardization of quantitative benchmarks for multiple target tracking. One of the few exceptions is the well-known PETS dataset, targeted primarily at surveillance applications. Despite being widely used, it is often applied inconsistently, for example involving using different subsets of the available data, different ways of training the models, or differing evaluation scripts. This paper describes our work toward a novel multiple object tracking benchmark aimed to address such issues. We discuss the challenges of creating such a framework, collecting existing and new data, gathering state-of-the-art methods to be tested on the datasets, and finally creating a unified evaluation system. With MOTChallenge we aim to pave the way toward a unified evaluation framework for a more meaningful quantification of multi-target tracking.
[ { "version": "v1", "created": "Wed, 8 Apr 2015 12:56:38 GMT" } ]
2015-04-09T00:00:00
[ [ "Leal-Taixé", "Laura", "" ], [ "Milan", "Anton", "" ], [ "Reid", "Ian", "" ], [ "Roth", "Stefan", "" ], [ "Schindler", "Konrad", "" ] ]
TITLE: MOTChallenge 2015: Towards a Benchmark for Multi-Target Tracking ABSTRACT: In the recent past, the computer vision community has developed centralized benchmarks for the performance evaluation of a variety of tasks, including generic object and pedestrian detection, 3D reconstruction, optical flow, single-object short-term tracking, and stereo estimation. Despite potential pitfalls of such benchmarks, they have proved to be extremely helpful to advance the state of the art in the respective area. Interestingly, there has been rather limited work on the standardization of quantitative benchmarks for multiple target tracking. One of the few exceptions is the well-known PETS dataset, targeted primarily at surveillance applications. Despite being widely used, it is often applied inconsistently, for example involving using different subsets of the available data, different ways of training the models, or differing evaluation scripts. This paper describes our work toward a novel multiple object tracking benchmark aimed to address such issues. We discuss the challenges of creating such a framework, collecting existing and new data, gathering state-of-the-art methods to be tested on the datasets, and finally creating a unified evaluation system. With MOTChallenge we aim to pave the way toward a unified evaluation framework for a more meaningful quantification of multi-target tracking.
no_new_dataset
0.882276
1504.01957
Tony Tsang
Tony Tsang
Video Contents Prior Storing Server for Optical Access Network
10 pages. in March 2015, Volume 7. Number 1 International Journal of Computer Networks & Communications (IJCNC) March 2015, Volume 7
null
null
null
cs.NI cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the most important multimedia applications is Internet protocol TV (IPTV) for next-generation networks. IPTV provides triple-play services that require high-speed access networks with the functions of multicasting and quality of service (QoS) guarantees. Among optical access networks, Ethernet passive optical networks (EPONs) are regarded as among the best solutions to meet higher bandwidth demands. In this paper, we propose a new architecture for multicasting live IPTV traffic in optical access network. The proposed mechanism involves assigning a unique logical link identifier to each IPTV channel. To manage multicasting, a prior storing server in the optical line terminal (OLT) and in each optical network unit (ONU) is constructed. In this work, we propose a partial prior storing strategy that considers the changes in the popularity of the video content segments over time and the access patterns of the users to compute the utility of the objects in the prior storage. We also propose to partition the prior storage to avoid the eviction of the popular objects (those not accessed frequently) by the unpopular ones which are accessed with higher frequency. The popularity distribution and ageing of popularity are measured from two online datasets and use the parameters in simulations. Simulation results show that our proposed architecture can improve the system performance and QoS parameters in terms of packet delay, jitter and packet loss.
[ { "version": "v1", "created": "Wed, 8 Apr 2015 13:29:12 GMT" } ]
2015-04-09T00:00:00
[ [ "Tsang", "Tony", "" ] ]
TITLE: Video Contents Prior Storing Server for Optical Access Network ABSTRACT: One of the most important multimedia applications is Internet protocol TV (IPTV) for next-generation networks. IPTV provides triple-play services that require high-speed access networks with the functions of multicasting and quality of service (QoS) guarantees. Among optical access networks, Ethernet passive optical networks (EPONs) are regarded as among the best solutions to meet higher bandwidth demands. In this paper, we propose a new architecture for multicasting live IPTV traffic in optical access network. The proposed mechanism involves assigning a unique logical link identifier to each IPTV channel. To manage multicasting, a prior storing server in the optical line terminal (OLT) and in each optical network unit (ONU) is constructed. In this work, we propose a partial prior storing strategy that considers the changes in the popularity of the video content segments over time and the access patterns of the users to compute the utility of the objects in the prior storage. We also propose to partition the prior storage to avoid the eviction of the popular objects (those not accessed frequently) by the unpopular ones which are accessed with higher frequency. The popularity distribution and ageing of popularity are measured from two online datasets and use the parameters in simulations. Simulation results show that our proposed architecture can improve the system performance and QoS parameters in terms of packet delay, jitter and packet loss.
no_new_dataset
0.953449
1411.3715
Daniele Barchiesi
Daniele Barchiesi, Dimitrios Giannoulis, Dan Stowell, Mark D. Plumbley
Acoustic Scene Classification
null
IEEE Signal Processing Magazine 32(3) (May 2015) 16-34
10.1109/MSP.2014.2326181
null
cs.SD cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this article we present an account of the state-of-the-art in acoustic scene classification (ASC), the task of classifying environments from the sounds they produce. Starting from a historical review of previous research in this area, we define a general framework for ASC and present different implementations of its components. We then describe a range of different algorithms submitted for a data challenge that was held to provide a general and fair benchmark for ASC techniques. The dataset recorded for this purpose is presented, along with the performance metrics that are used to evaluate the algorithms and statistical significance tests to compare the submitted methods. We use a baseline method that employs MFCCs, GMMs and a maximum likelihood criterion as a benchmark, and only find sufficient evidence to conclude that three algorithms significantly outperform it. We also evaluate the human classification accuracy in performing a similar classification task. The best performing algorithm achieves a mean accuracy that matches the median accuracy obtained by humans, and common pairs of classes are misclassified by both computers and humans. However, all acoustic scenes are correctly classified by at least some individuals, while there are scenes that are misclassified by all algorithms.
[ { "version": "v1", "created": "Thu, 13 Nov 2014 16:03:09 GMT" } ]
2015-04-08T00:00:00
[ [ "Barchiesi", "Daniele", "" ], [ "Giannoulis", "Dimitrios", "" ], [ "Stowell", "Dan", "" ], [ "Plumbley", "Mark D.", "" ] ]
TITLE: Acoustic Scene Classification ABSTRACT: In this article we present an account of the state-of-the-art in acoustic scene classification (ASC), the task of classifying environments from the sounds they produce. Starting from a historical review of previous research in this area, we define a general framework for ASC and present different implementations of its components. We then describe a range of different algorithms submitted for a data challenge that was held to provide a general and fair benchmark for ASC techniques. The dataset recorded for this purpose is presented, along with the performance metrics that are used to evaluate the algorithms and statistical significance tests to compare the submitted methods. We use a baseline method that employs MFCCs, GMMs and a maximum likelihood criterion as a benchmark, and only find sufficient evidence to conclude that three algorithms significantly outperform it. We also evaluate the human classification accuracy in performing a similar classification task. The best performing algorithm achieves a mean accuracy that matches the median accuracy obtained by humans, and common pairs of classes are misclassified by both computers and humans. However, all acoustic scenes are correctly classified by at least some individuals, while there are scenes that are misclassified by all algorithms.
new_dataset
0.973062
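A sketch of the MFCC/GMM maximum-likelihood baseline the record refers to, assuming librosa and scikit-learn are available; file lists and hyperparameters are placeholders, not the challenge's settings:

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_frames(path, n_mfcc=20):
    y, sr = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T  # (frames, n_mfcc)

def train_baseline(files_by_class, n_components=8):
    # one GMM per acoustic scene class, fit on pooled MFCC frames
    return {label: GaussianMixture(n_components).fit(
                np.vstack([mfcc_frames(p) for p in paths]))
            for label, paths in files_by_class.items()}

def classify(models, path):
    # pick the class whose GMM gives the highest mean log-likelihood
    frames = mfcc_frames(path)
    return max(models, key=lambda lbl: models[lbl].score(frames))
```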
1412.6615
Levent Sagun
Levent Sagun, V. Ugur Guney, Gerard Ben Arous, Yann LeCun
Explorations on high dimensional landscapes
11 pages, 8 figures, workshop contribution at ICLR 2015
null
null
null
stat.ML cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Finding minima of a real valued non-convex function over a high dimensional space is a major challenge in science. We provide evidence that some such functions that are defined on high dimensional domains have a narrow band of values whose pre-image contains the bulk of its critical points. This is in contrast with the low dimensional picture in which this band is wide. Our simulations agree with the previous theoretical work on spin glasses that proves the existence of such a band when the dimension of the domain tends to infinity. Furthermore our experiments on teacher-student networks with the MNIST dataset establish a similar phenomenon in deep networks. We finally observe that both the gradient descent and the stochastic gradient descent methods can reach this level within the same number of steps.
[ { "version": "v1", "created": "Sat, 20 Dec 2014 06:57:12 GMT" }, { "version": "v2", "created": "Thu, 25 Dec 2014 01:29:56 GMT" }, { "version": "v3", "created": "Mon, 2 Mar 2015 10:08:16 GMT" }, { "version": "v4", "created": "Mon, 6 Apr 2015 21:47:50 GMT" } ]
2015-04-08T00:00:00
[ [ "Sagun", "Levent", "" ], [ "Guney", "V. Ugur", "" ], [ "Arous", "Gerard Ben", "" ], [ "LeCun", "Yann", "" ] ]
TITLE: Explorations on high dimensional landscapes ABSTRACT: Finding minima of a real valued non-convex function over a high dimensional space is a major challenge in science. We provide evidence that some such functions that are defined on high dimensional domains have a narrow band of values whose pre-image contains the bulk of its critical points. This is in contrast with the low dimensional picture in which this band is wide. Our simulations agree with the previous theoretical work on spin glasses that proves the existence of such a band when the dimension of the domain tends to infinity. Furthermore our experiments on teacher-student networks with the MNIST dataset establish a similar phenomenon in deep networks. We finally observe that both the gradient descent and the stochastic gradient descent methods can reach this level within the same number of steps.
no_new_dataset
0.951414
1504.01383
Cristian Danescu-Niculescu-Mizil
Vlad Niculae, Caroline Suen, Justine Zhang, Cristian Danescu-Niculescu-Mizil, Jure Leskovec
QUOTUS: The Structure of Political Media Coverage as Revealed by Quoting Patterns
To appear in the Proceedings of WWW 2015. 11pp, 10 fig. Interactive visualization, data, and other info available at http://snap.stanford.edu/quotus/
null
null
null
cs.CL cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given the extremely large pool of events and stories available, media outlets need to focus on a subset of issues and aspects to convey to their audience. Outlets are often accused of exhibiting a systematic bias in this selection process, with different outlets portraying different versions of reality. However, in the absence of objective measures and empirical evidence, the direction and extent of systematicity remains widely disputed. In this paper we propose a framework based on quoting patterns for quantifying and characterizing the degree to which media outlets exhibit systematic bias. We apply this framework to a massive dataset of news articles spanning the six years of Obama's presidency and all of his speeches, and reveal that a systematic pattern does indeed emerge from the outlet's quoting behavior. Moreover, we show that this pattern can be successfully exploited in an unsupervised prediction setting, to determine which new quotes an outlet will select to broadcast. By encoding bias patterns in a low-rank space we provide an analysis of the structure of political media coverage. This reveals a latent media bias space that aligns surprisingly well with political ideology and outlet type. A linguistic analysis exposes striking differences across these latent dimensions, showing how the different types of media outlets portray different realities even when reporting on the same events. For example, outlets mapped to the mainstream conservative side of the latent space focus on quotes that portray a presidential persona disproportionately characterized by negativity.
[ { "version": "v1", "created": "Mon, 6 Apr 2015 20:00:28 GMT" } ]
2015-04-08T00:00:00
[ [ "Niculae", "Vlad", "" ], [ "Suen", "Caroline", "" ], [ "Zhang", "Justine", "" ], [ "Danescu-Niculescu-Mizil", "Cristian", "" ], [ "Leskovec", "Jure", "" ] ]
TITLE: QUOTUS: The Structure of Political Media Coverage as Revealed by Quoting Patterns ABSTRACT: Given the extremely large pool of events and stories available, media outlets need to focus on a subset of issues and aspects to convey to their audience. Outlets are often accused of exhibiting a systematic bias in this selection process, with different outlets portraying different versions of reality. However, in the absence of objective measures and empirical evidence, the direction and extent of systematicity remains widely disputed. In this paper we propose a framework based on quoting patterns for quantifying and characterizing the degree to which media outlets exhibit systematic bias. We apply this framework to a massive dataset of news articles spanning the six years of Obama's presidency and all of his speeches, and reveal that a systematic pattern does indeed emerge from the outlet's quoting behavior. Moreover, we show that this pattern can be successfully exploited in an unsupervised prediction setting, to determine which new quotes an outlet will select to broadcast. By encoding bias patterns in a low-rank space we provide an analysis of the structure of political media coverage. This reveals a latent media bias space that aligns surprisingly well with political ideology and outlet type. A linguistic analysis exposes striking differences across these latent dimensions, showing how the different types of media outlets portray different realities even when reporting on the same events. For example, outlets mapped to the mainstream conservative side of the latent space focus on quotes that portray a presidential persona disproportionately characterized by negativity.
no_new_dataset
0.945551
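The low-rank encoding of quoting patterns can be illustrated with a truncated SVD of an outlet-by-quote indicator matrix; the tiny synthetic matrix below only mimics the shape of the problem, not the paper's encoding:

```python
import numpy as np

rng = np.random.default_rng(0)
M = (rng.random((6, 40)) < 0.3).astype(float)  # 6 outlets x 40 quotes
M -= M.mean(axis=0)                            # center each quote column
U, S, Vt = np.linalg.svd(M, full_matrices=False)
outlet_coords = U[:, :2] * S[:2]               # 2-d latent "bias space"
print(np.round(outlet_coords, 2))              # one row per outlet
```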
1504.01684
Miao Fan
Miao Fan, Qiang Zhou, Thomas Fang Zheng and Ralph Grishman
Large Margin Nearest Neighbor Embedding for Knowledge Representation
arXiv admin note: text overlap with arXiv:1503.08155
null
null
null
cs.AI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The traditional way of storing facts in triplets ({\it head\_entity, relation, tail\_entity}), abbreviated as ({\it h, r, t}), makes the knowledge intuitively displayed and easily acquired by mankind, but hardly computed or even reasoned about by AI machines. Inspired by the success in applying {\it Distributed Representations} to AI-related fields, recent studies expect to represent each entity and relation with a unique low-dimensional embedding, which is different from the symbolic and atomic framework of displaying knowledge in triplets. In this way, knowledge computing and reasoning can be essentially facilitated by means of a simple {\it vector calculation}, i.e. ${\bf h} + {\bf r} \approx {\bf t}$. We thus contribute an effective model to learn better embeddings satisfying the formula by pulling the positive tail entities ${\bf t^{+}}$ together and close to {\bf h} + {\bf r} ({\it Nearest Neighbor}), and simultaneously pushing the negatives ${\bf t^{-}}$ away from the positives ${\bf t^{+}}$ by keeping a {\it Large Margin}. We also design a corresponding learning algorithm to efficiently find the optimal solution based on {\it Stochastic Gradient Descent} in an iterative fashion. Quantitative experiments illustrate that our approach achieves state-of-the-art performance compared with several recent methods on benchmark datasets for two classical applications, i.e. {\it Link prediction} and {\it Triplet classification}. Moreover, we analyze the parameter complexities of all the evaluated models, and the analytical results indicate that our model needs fewer computational resources while outperforming the other methods.
[ { "version": "v1", "created": "Tue, 7 Apr 2015 17:50:31 GMT" } ]
2015-04-08T00:00:00
[ [ "Fan", "Miao", "" ], [ "Zhou", "Qiang", "" ], [ "Zheng", "Thomas Fang", "" ], [ "Grishman", "Ralph", "" ] ]
TITLE: Large Margin Nearest Neighbor Embedding for Knowledge Representation ABSTRACT: The traditional way of storing facts in triplets ({\it head\_entity, relation, tail\_entity}), abbreviated as ({\it h, r, t}), makes the knowledge intuitively displayed and easily acquired by mankind, but hardly computed or even reasoned about by AI machines. Inspired by the success in applying {\it Distributed Representations} to AI-related fields, recent studies expect to represent each entity and relation with a unique low-dimensional embedding, which is different from the symbolic and atomic framework of displaying knowledge in triplets. In this way, knowledge computing and reasoning can be essentially facilitated by means of a simple {\it vector calculation}, i.e. ${\bf h} + {\bf r} \approx {\bf t}$. We thus contribute an effective model to learn better embeddings satisfying the formula by pulling the positive tail entities ${\bf t^{+}}$ together and close to {\bf h} + {\bf r} ({\it Nearest Neighbor}), and simultaneously pushing the negatives ${\bf t^{-}}$ away from the positives ${\bf t^{+}}$ by keeping a {\it Large Margin}. We also design a corresponding learning algorithm to efficiently find the optimal solution based on {\it Stochastic Gradient Descent} in an iterative fashion. Quantitative experiments illustrate that our approach achieves state-of-the-art performance compared with several recent methods on benchmark datasets for two classical applications, i.e. {\it Link prediction} and {\it Triplet classification}. Moreover, we analyze the parameter complexities of all the evaluated models, and the analytical results indicate that our model needs fewer computational resources while outperforming the other methods.
no_new_dataset
0.949623
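A minimal sketch of the h + r ≈ t margin objective with SGD; the entity counts, dimensions, margin and learning rate are illustrative assumptions, and the nearest-neighbor pulling of positive tails is reduced to a single sampled positive/negative pair per update:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ent, n_rel, dim = 50, 5, 16
E = rng.normal(scale=0.1, size=(n_ent, dim))   # entity embeddings
R = rng.normal(scale=0.1, size=(n_rel, dim))   # relation embeddings

def dist(h, r, t):
    return np.sum((E[h] + R[r] - E[t]) ** 2)   # how far h + r is from t

def sgd_step(h, r, t_pos, t_neg, lr=0.05, margin=1.0):
    # hinge: keep the positive tail closer than the negative by a margin
    if margin + dist(h, r, t_pos) - dist(h, r, t_neg) > 0:
        g_pos = 2 * (E[h] + R[r] - E[t_pos])
        g_neg = 2 * (E[h] + R[r] - E[t_neg])
        E[h] -= lr * (g_pos - g_neg)
        R[r] -= lr * (g_pos - g_neg)
        E[t_pos] += lr * g_pos                 # pull the positive tail in
        E[t_neg] -= lr * g_neg                 # push the negative tail away
```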
1504.01697
Alex Gittens
Jiyan Yang and Alex Gittens
Tensor machines for learning target-specific polynomial features
19 pages, 4 color figures, 2 tables. Submitted to ECML 2015
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent years have demonstrated that using random feature maps can significantly decrease the training and testing times of kernel-based algorithms without significantly lowering their accuracy. Regrettably, because random features are target-agnostic, typically thousands of such features are necessary to achieve acceptable accuracies. In this work, we consider the problem of learning a small number of explicit polynomial features. Our approach, named Tensor Machines, finds a parsimonious set of features by optimizing over the hypothesis class introduced by Kar and Karnick for random feature maps in a target-specific manner. Exploiting a natural connection between polynomials and tensors, we provide bounds on the generalization error of Tensor Machines. Empirically, Tensor Machines behave favorably on several real-world datasets compared to other state-of-the-art techniques for learning polynomial features, and deliver significantly more parsimonious models.
[ { "version": "v1", "created": "Tue, 7 Apr 2015 18:21:37 GMT" } ]
2015-04-08T00:00:00
[ [ "Yang", "Jiyan", "" ], [ "Gittens", "Alex", "" ] ]
TITLE: Tensor machines for learning target-specific polynomial features ABSTRACT: Recent years have demonstrated that using random feature maps can significantly decrease the training and testing times of kernel-based algorithms without significantly lowering their accuracy. Regrettably, because random features are target-agnostic, typically thousands of such features are necessary to achieve acceptable accuracies. In this work, we consider the problem of learning a small number of explicit polynomial features. Our approach, named Tensor Machines, finds a parsimonious set of features by optimizing over the hypothesis class introduced by Kar and Karnick for random feature maps in a target-specific manner. Exploiting a natural connection between polynomials and tensors, we provide bounds on the generalization error of Tensor Machines. Empirically, Tensor Machines behave favorably on several real-world datasets compared to other state-of-the-art techniques for learning polynomial features, and deliver significantly more parsimonious models.
no_new_dataset
0.947381
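A degree-2, rank-r sketch in the spirit of the model class this record describes: f(x) = sum_k (u_k·x)(v_k·x), fit by plain gradient descent on a synthetic quadratic target. The rank, step size and target are assumptions, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n = 10, 3, 500
U = 0.1 * rng.standard_normal((r, d))          # factors of the rank-1 terms
V = 0.1 * rng.standard_normal((r, d))
X = rng.standard_normal((n, d))
y = X[:, 0] * X[:, 1] + 0.5 * X[:, 2] ** 2     # polynomial target

for _ in range(3000):
    a, b = X @ U.T, X @ V.T                    # (n, r) factor responses
    err = (a * b).sum(axis=1) - y              # residuals of f(x)
    U -= 0.05 * (err[:, None] * b).T @ X / n   # gradient steps (up to a
    V -= 0.05 * (err[:, None] * a).T @ X / n   # constant factor on the MSE)

print(f"final MSE: {np.mean(err ** 2):.4f}")
```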
1411.1455
Saravanan Thirumuruganathan
Md Farhadur Rahman, Weimo Liu, Saravanan Thirumuruganathan, Nan Zhang, Gautam Das
Rank-Based Inference over Web Databases
null
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, there has been much research on the ranked retrieval model in structured databases, especially web databases. With this model, a search query returns the top-k tuples according to not just exact matches of selection conditions, but a suitable ranking function. This paper studies a novel problem concerning the privacy implications of database ranking. The motivation is a novel yet serious privacy leakage we found on real-world web databases which is caused by the ranking function design. Many such databases feature private attributes - e.g., a social network allows users to specify certain attributes as visible only to themselves, but not to others. While these websites generally respect the privacy settings by not directly displaying private attribute values in search query answers, many of them nevertheless take such private attributes into account in the ranking function design. The conventional belief might be that tuple ranks alone are not enough to reveal the private attribute values. Our investigation, however, shows that this is not the case in reality. To address the problem, we introduce a taxonomy of the problem space with two dimensions, (1) the type of query interface and (2) the capability of adversaries. For each subspace, we develop a novel technique which either guarantees the successful inference of private attributes, or does so for a significant portion of real-world tuples. We demonstrate the effectiveness and efficiency of our techniques through theoretical analysis, extensive experiments over real-world datasets, as well as successful online attacks against websites with tens to hundreds of millions of users - e.g., Amazon Goodreads and Renren.com.
[ { "version": "v1", "created": "Thu, 6 Nov 2014 00:06:44 GMT" }, { "version": "v2", "created": "Tue, 11 Nov 2014 00:36:10 GMT" }, { "version": "v3", "created": "Mon, 2 Feb 2015 07:37:56 GMT" }, { "version": "v4", "created": "Mon, 6 Apr 2015 02:03:27 GMT" } ]
2015-04-07T00:00:00
[ [ "Rahman", "Md Farhadur", "" ], [ "Liu", "Weimo", "" ], [ "Thirumuruganathan", "Saravanan", "" ], [ "Zhang", "Nan", "" ], [ "Das", "Gautam", "" ] ]
TITLE: Rank-Based Inference over Web Databases ABSTRACT: In recent years, there has been much research on the ranked retrieval model in structured databases, especially web databases. With this model, a search query returns the top-k tuples according to not just exact matches of selection conditions, but a suitable ranking function. This paper studies a novel problem concerning the privacy implications of database ranking. The motivation is a novel yet serious privacy leakage we found on real-world web databases which is caused by the ranking function design. Many such databases feature private attributes - e.g., a social network allows users to specify certain attributes as visible only to themselves, but not to others. While these websites generally respect the privacy settings by not directly displaying private attribute values in search query answers, many of them nevertheless take such private attributes into account in the ranking function design. The conventional belief might be that tuple ranks alone are not enough to reveal the private attribute values. Our investigation, however, shows that this is not the case in reality. To address the problem, we introduce a taxonomy of the problem space with two dimensions, (1) the type of query interface and (2) the capability of adversaries. For each subspace, we develop a novel technique which either guarantees the successful inference of private attributes, or does so for a significant portion of real-world tuples. We demonstrate the effectiveness and efficiency of our techniques through theoretical analysis, extensive experiments over real-world datasets, as well as successful online attacks against websites with tens to hundreds of millions of users - e.g., Amazon Goodreads and Renren.com.
no_new_dataset
0.946101
1504.00325
C. Lawrence Zitnick
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, C. Lawrence Zitnick
Microsoft COCO Captions: Data Collection and Evaluation Server
arXiv admin note: text overlap with arXiv:1411.4952
null
null
null
cs.CV cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.
[ { "version": "v1", "created": "Wed, 1 Apr 2015 18:13:43 GMT" }, { "version": "v2", "created": "Fri, 3 Apr 2015 20:21:16 GMT" } ]
2015-04-07T00:00:00
[ [ "Chen", "Xinlei", "" ], [ "Fang", "Hao", "" ], [ "Lin", "Tsung-Yi", "" ], [ "Vedantam", "Ramakrishna", "" ], [ "Gupta", "Saurabh", "" ], [ "Dollar", "Piotr", "" ], [ "Zitnick", "C. Lawrence", "" ] ]
TITLE: Microsoft COCO Captions: Data Collection and Evaluation Server ABSTRACT: In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.
no_new_dataset
0.939582
1504.01142
Nam-phuong Nguyen
Nam-phuong Nguyen, Siavash Mirarab, Keerthana Kumar, Tandy Warnow
Ultra-large alignments using Phylogeny-aware Profiles
Online supplemental materials and data are available at http://www.cs.utexas.edu/users/phylo/software/upp/
null
null
null
q-bio.GN cs.CE cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many biological questions, including the estimation of deep evolutionary histories and the detection of remote homology between protein sequences, rely upon multiple sequence alignments (MSAs) and phylogenetic trees of large datasets. However, accurate large-scale multiple sequence alignment is very difficult, especially when the dataset contains fragmentary sequences. We present UPP, an MSA method that uses a new machine learning technique - the Ensemble of Hidden Markov Models - that we propose here. UPP produces highly accurate alignments for both nucleotide and amino acid sequences, even on ultra-large datasets or datasets containing fragmentary sequences. UPP is available at https://github.com/smirarab/sepp.
[ { "version": "v1", "created": "Sun, 5 Apr 2015 17:15:38 GMT" } ]
2015-04-07T00:00:00
[ [ "Nguyen", "Nam-phuong", "" ], [ "Mirarab", "Siavash", "" ], [ "Kumar", "Keerthana", "" ], [ "Warnow", "Tandy", "" ] ]
TITLE: Ultra-large alignments using Phylogeny-aware Profiles ABSTRACT: Many biological questions, including the estimation of deep evolutionary histories and the detection of remote homology between protein sequences, rely upon multiple sequence alignments (MSAs) and phylogenetic trees of large datasets. However, accurate large-scale multiple sequence alignment is very difficult, especially when the dataset contains fragmentary sequences. We present UPP, an MSA method that uses a new machine learning technique - the Ensemble of Hidden Markov Models - that we propose here. UPP produces highly accurate alignments for both nucleotide and amino acid sequences, even on ultra-large datasets or datasets containing fragmentary sequences. UPP is available at https://github.com/smirarab/sepp.
no_new_dataset
0.951997
1504.01220
Xiaodan Liang
Si Liu and Xiaodan Liang and Luoqi Liu and Xiaohui Shen and Jianchao Yang and Changsheng Xu and Liang Lin and Xiaochun Cao and Shuicheng Yan
Matching-CNN Meets KNN: Quasi-Parametric Human Parsing
This manuscript is the accepted version for CVPR 2015
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Both parametric and non-parametric approaches have demonstrated encouraging performance in the human parsing task, namely segmenting a human image into several semantic regions (e.g., hat, bag, left arm, face). In this work, we aim to develop a new solution with the advantages of both methodologies, namely supervision from annotated data and the flexibility to use newly annotated (possibly uncommon) images, and present a quasi-parametric human parsing model. Under the classic K Nearest Neighbor (KNN)-based nonparametric framework, the parametric Matching Convolutional Neural Network (M-CNN) is proposed to predict the matching confidence and displacements of the best matched region in the testing image for a particular semantic region in one KNN image. Given a testing image, we first retrieve its KNN images from the annotated/manually-parsed human image corpus. Then each semantic region in each KNN image is matched with confidence to the testing image using M-CNN, and the matched regions from all KNN images are further fused, followed by a superpixel smoothing procedure to obtain the ultimate human parsing result. The M-CNN differs from the classic CNN in that tailored cross-image matching filters are introduced to characterize the matching between the testing image and the semantic region of a KNN image. The cross-image matching filters are defined at different convolutional layers, each aiming to capture a particular range of displacements. Comprehensive evaluations over a large dataset with 7,700 annotated human images demonstrate the significant performance gain of the quasi-parametric model over the state of the art for the human parsing task.
[ { "version": "v1", "created": "Mon, 6 Apr 2015 07:20:02 GMT" } ]
2015-04-07T00:00:00
[ [ "Liu", "Si", "" ], [ "Liang", "Xiaodan", "" ], [ "Liu", "Luoqi", "" ], [ "Shen", "Xiaohui", "" ], [ "Yang", "Jianchao", "" ], [ "Xu", "Changsheng", "" ], [ "Lin", "Liang", "" ], [ "Cao", "Xiaochun", "" ], [ "Yan", "Shuicheng", "" ] ]
TITLE: Matching-CNN Meets KNN: Quasi-Parametric Human Parsing ABSTRACT: Both parametric and non-parametric approaches have demonstrated encouraging performance in the human parsing task, namely segmenting a human image into several semantic regions (e.g., hat, bag, left arm, face). In this work, we aim to develop a new solution with the advantages of both methodologies, namely supervision from annotated data and the flexibility to use newly annotated (possibly uncommon) images, and present a quasi-parametric human parsing model. Under the classic K Nearest Neighbor (KNN)-based nonparametric framework, the parametric Matching Convolutional Neural Network (M-CNN) is proposed to predict the matching confidence and displacements of the best matched region in the testing image for a particular semantic region in one KNN image. Given a testing image, we first retrieve its KNN images from the annotated/manually-parsed human image corpus. Then each semantic region in each KNN image is matched with confidence to the testing image using M-CNN, and the matched regions from all KNN images are further fused, followed by a superpixel smoothing procedure to obtain the ultimate human parsing result. The M-CNN differs from the classic CNN in that tailored cross-image matching filters are introduced to characterize the matching between the testing image and the semantic region of a KNN image. The cross-image matching filters are defined at different convolutional layers, each aiming to capture a particular range of displacements. Comprehensive evaluations over a large dataset with 7,700 annotated human images demonstrate the significant performance gain of the quasi-parametric model over the state of the art for the human parsing task.
no_new_dataset
0.951097
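The M-CNN pipeline in the record above begins with plain KNN retrieval over a parsed corpus. As a minimal sketch of just that retrieval stage (the M-CNN matching and fusion steps are not shown), the snippet below finds a testing image's nearest annotated images; the descriptor dimensionality, K = 25, and the random features are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical stand-ins: one global descriptor per annotated corpus image.
corpus_feats = np.random.rand(7700, 512)   # e.g., pooled CNN activations
test_feat = np.random.rand(1, 512)         # descriptor of the testing image

knn = NearestNeighbors(n_neighbors=25).fit(corpus_feats)
dists, idx = knn.kneighbors(test_feat)     # the testing image's KNN images
print(idx[0][:5])                          # indices into the parsed corpus
```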
1504.01304
Yuzhen Ye
Yuzhen Ye and Haixu Tang
Utilizing de Bruijn graph of metagenome assembly for metatranscriptome analysis
8 pages, 4 figures, accepted in RECOMB-Seq 2015, under consideration in Bioinformatics (a special issue for RECOMB-Seq/CBB)
null
null
null
q-bio.GN cs.CE cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Metagenomics research has accelerated the studies of microbial organisms, providing insights into the composition and potential functionality of various microbial communities. Metatranscriptomics (studies of the transcripts from a mixture of microbial species) and other meta-omics approaches hold even greater promise for providing additional insights into functional and regulatory characteristics of the microbial communities. Current metatranscriptomics projects are often carried out without matched metagenomic datasets (of the same microbial communities). For the projects that produce both metatranscriptomic and metagenomic datasets, their analyses are often not integrated. Metagenome assemblies are far from perfect, partially explaining why metagenome assemblies are not used for the analysis of metatranscriptomic datasets. Here we report a read-mapping algorithm for mapping short reads onto a de Bruijn graph of assemblies. A hash table of junction k-mers (k-mers spanning branching structures in the de Bruijn graph) is used to facilitate fast mapping of reads to the graph. We developed an application of this mapping algorithm: a reference-based approach to metatranscriptome assembly using graphs of metagenome assembly as the reference. Our results show that this new approach (called TAG) helps to assemble substantially more transcripts that otherwise would have been missed or truncated because of the fragmented nature of the reference metagenome. TAG was implemented in C++ and has been tested extensively on the Linux platform. It is available for download as open source at http://omics.informatics.indiana.edu/TAG.
[ { "version": "v1", "created": "Mon, 6 Apr 2015 15:17:29 GMT" } ]
2015-04-07T00:00:00
[ [ "Ye", "Yuzhen", "" ], [ "Tang", "Haixu", "" ] ]
TITLE: Utilizing de Bruijn graph of metagenome assembly for metatranscriptome analysis ABSTRACT: Metagenomics research has accelerated the studies of microbial organisms, providing insights into the composition and potential functionality of various microbial communities. Metatranscriptomics (studies of the transcripts from a mixture of microbial species) and other meta-omics approaches hold even greater promise for providing additional insights into functional and regulatory characteristics of the microbial communities. Current metatranscriptomics projects are often carried out without matched metagenomic datasets (of the same microbial communities). For the projects that produce both metatranscriptomic and metagenomic datasets, their analyses are often not integrated. Metagenome assemblies are far from perfect, partially explaining why metagenome assemblies are not used for the analysis of metatranscriptomic datasets. Here we report a read-mapping algorithm for mapping short reads onto a de Bruijn graph of assemblies. A hash table of junction k-mers (k-mers spanning branching structures in the de Bruijn graph) is used to facilitate fast mapping of reads to the graph. We developed an application of this mapping algorithm: a reference-based approach to metatranscriptome assembly using graphs of metagenome assembly as the reference. Our results show that this new approach (called TAG) helps to assemble substantially more transcripts that otherwise would have been missed or truncated because of the fragmented nature of the reference metagenome. TAG was implemented in C++ and has been tested extensively on the Linux platform. It is available for download as open source at http://omics.informatics.indiana.edu/TAG.
no_new_dataset
0.944382
1412.1897
Anh Nguyen
Anh Nguyen, Jason Yosinski, Jeff Clune
Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images
To appear at CVPR 2015
null
null
null
cs.CV cs.AI cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neural networks (DNNs) have recently been achieving state-of-the-art performance on a variety of pattern-recognition tasks, most notably visual classification problems. Given that DNNs are now able to classify objects in images with near-human-level performance, questions naturally arise as to what differences remain between computer and human vision. A recent study revealed that changing an image (e.g. of a lion) in a way imperceptible to humans can cause a DNN to label the image as something else entirely (e.g. mislabeling a lion a library). Here we show a related result: it is easy to produce images that are completely unrecognizable to humans, but that state-of-the-art DNNs believe to be recognizable objects with 99.99% confidence (e.g. labeling with certainty that white noise static is a lion). Specifically, we take convolutional neural networks trained to perform well on either the ImageNet or MNIST datasets and then find images with evolutionary algorithms or gradient ascent that DNNs label with high confidence as belonging to each dataset class. It is possible to produce images totally unrecognizable to human eyes that DNNs believe with near certainty are familiar objects, which we call "fooling images" (more generally, fooling examples). Our results shed light on interesting differences between human vision and current DNNs, and raise questions about the generality of DNN computer vision.
[ { "version": "v1", "created": "Fri, 5 Dec 2014 05:29:43 GMT" }, { "version": "v2", "created": "Thu, 18 Dec 2014 22:27:00 GMT" }, { "version": "v3", "created": "Tue, 3 Mar 2015 16:41:04 GMT" }, { "version": "v4", "created": "Thu, 2 Apr 2015 23:12:56 GMT" } ]
2015-04-06T00:00:00
[ [ "Nguyen", "Anh", "" ], [ "Yosinski", "Jason", "" ], [ "Clune", "Jeff", "" ] ]
TITLE: Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images ABSTRACT: Deep neural networks (DNNs) have recently been achieving state-of-the-art performance on a variety of pattern-recognition tasks, most notably visual classification problems. Given that DNNs are now able to classify objects in images with near-human-level performance, questions naturally arise as to what differences remain between computer and human vision. A recent study revealed that changing an image (e.g. of a lion) in a way imperceptible to humans can cause a DNN to label the image as something else entirely (e.g. mislabeling a lion a library). Here we show a related result: it is easy to produce images that are completely unrecognizable to humans, but that state-of-the-art DNNs believe to be recognizable objects with 99.99% confidence (e.g. labeling with certainty that white noise static is a lion). Specifically, we take convolutional neural networks trained to perform well on either the ImageNet or MNIST datasets and then find images with evolutionary algorithms or gradient ascent that DNNs label with high confidence as belonging to each dataset class. It is possible to produce images totally unrecognizable to human eyes that DNNs believe with near certainty are familiar objects, which we call "fooling images" (more generally, fooling examples). Our results shed light on interesting differences between human vision and current DNNs, and raise questions about the generality of DNN computer vision.
no_new_dataset
0.939692
1412.5758
Zhangyang Wang
Zhangyang Wang, Jianchao Yang, Hailin Jin, Eli Shechtman, Aseem Agarwala, Jonathan Brandt, Thomas S. Huang
Decomposition-Based Domain Adaptation for Real-World Font Recognition
This paper has been withdrawn by the author due to project concerns
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a domain adaptation framework to address a domain mismatch between synthetic training and real-world testing data. We demonstrate our method on a challenging fine-grained classification problem: recognizing a font style from an image of text. In this task, it is very easy to generate lots of rendered font examples but very hard to obtain real-world labeled images. This real-to-synthetic domain gap caused poor generalization to new real data in previous font recognition methods (Chen et al., 2014). In this paper, we introduce a Convolutional Neural Network decomposition approach, leveraging a large training corpus of synthetic data to obtain effective features for classification. This is done using an adaptation technique based on a Stacked Convolutional Auto-Encoder that exploits a large collection of unlabeled real-world text images combined with synthetic data preprocessed in a specific way. The proposed DeepFont method achieves an accuracy higher than 80% (top-5) on a new large labeled real-world dataset we collected.
[ { "version": "v1", "created": "Thu, 18 Dec 2014 08:51:15 GMT" }, { "version": "v2", "created": "Fri, 19 Dec 2014 02:45:35 GMT" }, { "version": "v3", "created": "Sun, 25 Jan 2015 22:41:40 GMT" }, { "version": "v4", "created": "Wed, 1 Apr 2015 22:02:59 GMT" } ]
2015-04-03T00:00:00
[ [ "Wang", "Zhangyang", "" ], [ "Yang", "Jianchao", "" ], [ "Jin", "Hailin", "" ], [ "Shechtman", "Eli", "" ], [ "Agarwala", "Aseem", "" ], [ "Brandt", "Jonathan", "" ], [ "Huang", "Thomas S.", "" ] ]
TITLE: Decomposition-Based Domain Adaptation for Real-World Font Recognition ABSTRACT: We present a domain adaptation framework to address a domain mismatch between synthetic training and real-world testing data. We demonstrate our method on a challenging fine-grained classification problem: recognizing a font style from an image of text. In this task, it is very easy to generate lots of rendered font examples but very hard to obtain real-world labeled images. This real-to-synthetic domain gap caused poor generalization to new real data in previous font recognition methods (Chen et al., 2014). In this paper, we introduce a Convolutional Neural Network decomposition approach, leveraging a large training corpus of synthetic data to obtain effective features for classification. This is done using an adaptation technique based on a Stacked Convolutional Auto-Encoder that exploits a large collection of unlabeled real-world text images combined with synthetic data preprocessed in a specific way. The proposed DeepFont method achieves an accuracy higher than 80% (top-5) on a new large labeled real-world dataset we collected.
new_dataset
0.950686
1502.06306
Jinseok Kim
Jinseok Kim and Jana Diesner
Distortive Effects of Initial-Based Name Disambiguation on Measurements of Large-Scale Coauthorship Networks
This is a preprint of an article accepted for publication in Journal of the Association for Information Science and Technology
null
10.1002/asi.23489
null
cs.DL cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Scholars have often relied on name initials to resolve name ambiguities in large-scale coauthorship network research. This approach bears the risk of incorrectly merging or splitting author identities. The use of initial-based disambiguation has been justified by the assumption that such errors would not affect research findings too much. This paper tests this assumption by analyzing coauthorship networks from five academic fields - biology, computer science, nanoscience, neuroscience, and physics - and an interdisciplinary journal, PNAS. Name instances in datasets of this study were disambiguated based on heuristics gained from previous algorithmic disambiguation solutions. We use disambiguated data as a proxy for ground truth to test the performance of three types of initial-based disambiguation. Our results show that initial-based disambiguation can misrepresent statistical properties of coauthorship networks: it deflates the number of unique authors, the number of components, average shortest paths, clustering coefficient, and assortativity, while it inflates average productivity, density, average coauthor number per author, and largest component size. Also, on average, more than half of the top 10 productive or collaborative authors drop off the lists. Asian names were found to account for the majority of misidentification by initial-based disambiguation due to their common surname and given name initials.
[ { "version": "v1", "created": "Mon, 23 Feb 2015 02:56:05 GMT" } ]
2015-04-03T00:00:00
[ [ "Kim", "Jinseok", "" ], [ "Diesner", "Jana", "" ] ]
TITLE: Distortive Effects of Initial-Based Name Disambiguation on Measurements of Large-Scale Coauthorship Networks ABSTRACT: Scholars have often relied on name initials to resolve name ambiguities in large-scale coauthorship network research. This approach bears the risk of incorrectly merging or splitting author identities. The use of initial-based disambiguation has been justified by the assumption that such errors would not affect research findings too much. This paper tests this assumption by analyzing coauthorship networks from five academic fields - biology, computer science, nanoscience, neuroscience, and physics - and an interdisciplinary journal, PNAS. Name instances in datasets of this study were disambiguated based on heuristics gained from previous algorithmic disambiguation solutions. We use disambiguated data as a proxy for ground truth to test the performance of three types of initial-based disambiguation. Our results show that initial-based disambiguation can misrepresent statistical properties of coauthorship networks: it deflates the number of unique authors, the number of components, average shortest paths, clustering coefficient, and assortativity, while it inflates average productivity, density, average coauthor number per author, and largest component size. Also, on average, more than half of the top 10 productive or collaborative authors drop off the lists. Asian names were found to account for the majority of misidentification by initial-based disambiguation due to their common surname and given name initials.
no_new_dataset
0.950227
1504.00430
Hanyang Peng
Hanyang Peng, Yong Fan
Direct l_(2,p)-Norm Learning for Feature Selection
null
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a novel sparse-learning-based feature selection method that directly optimizes the sparsity of a large-margin linear classification model with the l_(2,p)-norm (0 < p < 1), subject to data-fitting constraints, rather than using the sparsity as a regularization term. To solve the direct sparsity optimization problem, which is non-smooth and non-convex when 0 < p < 1, we provide an efficient iterative algorithm with proved convergence by converting it to a convex and smooth optimization problem at every iteration step. The proposed algorithm has been evaluated on publicly available datasets, and extensive comparison experiments have demonstrated that it achieves feature selection performance competitive with state-of-the-art algorithms.
[ { "version": "v1", "created": "Thu, 2 Apr 2015 02:16:39 GMT" } ]
2015-04-03T00:00:00
[ [ "Peng", "Hanyang", "" ], [ "Fan", "Yong", "" ] ]
TITLE: Direct l_(2,p)-Norm Learning for Feature Selection ABSTRACT: In this paper, we propose a novel sparse-learning-based feature selection method that directly optimizes the sparsity of a large-margin linear classification model with the l_(2,p)-norm (0 < p < 1), subject to data-fitting constraints, rather than using the sparsity as a regularization term. To solve the direct sparsity optimization problem, which is non-smooth and non-convex when 0 < p < 1, we provide an efficient iterative algorithm with proved convergence by converting it to a convex and smooth optimization problem at every iteration step. The proposed algorithm has been evaluated on publicly available datasets, and extensive comparison experiments have demonstrated that it achieves feature selection performance competitive with state-of-the-art algorithms.
no_new_dataset
0.949809
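The record above optimizes sparsity through the l_(2,p)-norm with 0 < p < 1. As a small illustration of the quantity itself (not the paper's optimization algorithm), the sketch below computes the mixed norm ||W||_{2,p} = (sum_i ||w_i||_2^p)^{1/p} over the rows of a weight matrix; minimizing it for small p drives whole rows, i.e. features, toward zero.

```python
import numpy as np

def l2p_norm(W: np.ndarray, p: float) -> float:
    """Mixed l_{2,p} norm: (sum_i ||w_i||_2^p)^(1/p) over the rows of W."""
    row_norms = np.linalg.norm(W, axis=1)   # one l2 norm per feature row
    return float(np.sum(row_norms ** p) ** (1.0 / p))

# With 0 < p < 1, minimizing this norm favors zeroing out entire rows.
W = np.array([[0.9, -0.4], [0.0, 0.0], [0.1, 0.05]])
print(l2p_norm(W, p=0.5))
```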
1404.0284
Jack Kelly
Jack Kelly and William Knottenbelt
The UK-DALE dataset, domestic appliance-level electricity demand and whole-house demand from five UK homes
null
Scientific Data 2 (2015) Article number: 150007 (2015)
10.1038/sdata.2015.7
null
cs.OH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many countries are rolling out smart electricity meters. These measure a home's total power demand. However, research into consumer behaviour suggests that consumers are best able to improve their energy efficiency when provided with itemised, appliance-by-appliance consumption information. Energy disaggregation is a computational technique for estimating appliance-by-appliance energy consumption from a whole-house meter signal. To conduct research on disaggregation algorithms, researchers require data describing not just the aggregate demand per building but also the `ground truth' demand of individual appliances. In this context, we present UK-DALE: an open-access dataset from the UK recording Domestic Appliance-Level Electricity at a sample rate of 16 kHz for the whole house and at 1/6 Hz for individual appliances. This is the first open-access UK dataset at this temporal resolution. We recorded from five houses, one of which was recorded for 655 days, the longest duration we are aware of for any energy dataset at this sample rate. We also describe the low-cost, open-source, wireless system we built for collecting our dataset.
[ { "version": "v1", "created": "Tue, 1 Apr 2014 15:49:00 GMT" }, { "version": "v2", "created": "Mon, 12 Jan 2015 14:29:00 GMT" }, { "version": "v3", "created": "Thu, 19 Mar 2015 10:45:57 GMT" } ]
2015-04-02T00:00:00
[ [ "Kelly", "Jack", "" ], [ "Knottenbelt", "William", "" ] ]
TITLE: The UK-DALE dataset, domestic appliance-level electricity demand and whole-house demand from five UK homes ABSTRACT: Many countries are rolling out smart electricity meters. These measure a home's total power demand. However, research into consumer behaviour suggests that consumers are best able to improve their energy efficiency when provided with itemised, appliance-by-appliance consumption information. Energy disaggregation is a computational technique for estimating appliance-by-appliance energy consumption from a whole-house meter signal. To conduct research on disaggregation algorithms, researchers require data describing not just the aggregate demand per building but also the `ground truth' demand of individual appliances. In this context, we present UK-DALE: an open-access dataset from the UK recording Domestic Appliance-Level Electricity at a sample rate of 16 kHz for the whole house and at 1/6 Hz for individual appliances. This is the first open-access UK dataset at this temporal resolution. We recorded from five houses, one of which was recorded for 655 days, the longest duration we are aware of for any energy dataset at this sample rate. We also describe the low-cost, open-source, wireless system we built for collecting our dataset.
new_dataset
0.963643
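UK-DALE mixes a 16 kHz whole-house stream with 1/6 Hz appliance channels, so any disaggregation experiment starts by aligning streams on a common grid. The sketch below shows one plausible way to do that with pandas; the file paths and the space-separated "timestamp watts" layout are assumptions based on typical NILM channel dumps, not a documented UK-DALE API.

```python
import pandas as pd

def load_channel(path: str) -> pd.Series:
    # Assumed layout: space-separated "unix_timestamp watts" rows per channel.
    df = pd.read_csv(path, sep=" ", names=["ts", "watts"])
    return pd.Series(df["watts"].values,
                     index=pd.to_datetime(df["ts"], unit="s"))

mains = load_channel("house_1/channel_1.dat")    # aggregate (hypothetical path)
fridge = load_channel("house_1/channel_12.dat")  # one appliance at ~1/6 Hz

# Align both streams on a common 6-second grid before disaggregation work.
mains_6s = mains.resample("6S").mean().interpolate(limit=2)
fridge_6s = fridge.resample("6S").mean().reindex(mains_6s.index).fillna(0.0)
```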
1503.02445
Muhammad Uzair
Muhammad Uzair, Faisal Shafait, Bernard Ghanem, Ajmal Mian
Representation Learning with Deep Extreme Learning Machines for Efficient Image Set Classification
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Efficient and accurate joint representation of a collection of images that belong to the same class is a major research challenge for practical image set classification. Existing methods either make prior assumptions about the data structure or perform heavy computations to learn structure from the data itself. In this paper, we propose an efficient image set representation that does not make any prior assumptions about the structure of the underlying data. We learn the non-linear structure of image sets with Deep Extreme Learning Machines (DELM) that are very efficient and generalize well even on a limited number of training samples. Extensive experiments on a broad range of public datasets for image set classification (Honda/UCSD, CMU Mobo, YouTube Celebrities, Celebrity-1000, ETH-80) show that the proposed algorithm consistently outperforms state-of-the-art image set classification methods both in terms of speed and accuracy.
[ { "version": "v1", "created": "Mon, 9 Mar 2015 12:14:42 GMT" }, { "version": "v2", "created": "Mon, 16 Mar 2015 05:29:31 GMT" }, { "version": "v3", "created": "Wed, 1 Apr 2015 10:29:09 GMT" } ]
2015-04-02T00:00:00
[ [ "Uzair", "Muhammad", "" ], [ "Shafait", "Faisal", "" ], [ "Ghanem", "Bernard", "" ], [ "Mian", "Ajmal", "" ] ]
TITLE: Representation Learning with Deep Extreme Learning Machines for Efficient Image Set Classification ABSTRACT: Efficient and accurate joint representation of a collection of images that belong to the same class is a major research challenge for practical image set classification. Existing methods either make prior assumptions about the data structure or perform heavy computations to learn structure from the data itself. In this paper, we propose an efficient image set representation that does not make any prior assumptions about the structure of the underlying data. We learn the non-linear structure of image sets with Deep Extreme Learning Machines (DELM) that are very efficient and generalize well even on a limited number of training samples. Extensive experiments on a broad range of public datasets for image set classification (Honda/UCSD, CMU Mobo, YouTube Celebrities, Celebrity-1000, ETH-80) show that the proposed algorithm consistently outperforms state-of-the-art image set classification methods both in terms of speed and accuracy.
no_new_dataset
0.950915
1504.00028
Zhangyang Wang
Zhangyang Wang, Jianchao Yang, Hailin Jin, Eli Shechtman, Aseem Agarwala, Jonathan Brandt, Thomas S. Huang
Real-World Font Recognition Using Deep Network and Domain Adaptation
null
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We address a challenging fine-grained classification problem: recognizing a font style from an image of text. In this task, it is very easy to generate lots of rendered font examples but very hard to obtain real-world labeled images. This real-to-synthetic domain gap caused poor generalization to new real data in previous methods (Chen et al., 2014). In this paper, we turn to Convolutional Neural Networks and use an adaptation technique based on a Stacked Convolutional Auto-Encoder that exploits unlabeled real-world images combined with synthetic data. The proposed method achieves an accuracy higher than 80% (top-5) on a real-world dataset.
[ { "version": "v1", "created": "Tue, 31 Mar 2015 20:30:00 GMT" } ]
2015-04-02T00:00:00
[ [ "Wang", "Zhangyang", "" ], [ "Yang", "Jianchao", "" ], [ "Jin", "Hailin", "" ], [ "Shechtman", "Eli", "" ], [ "Agarwala", "Aseem", "" ], [ "Brandt", "Jonathan", "" ], [ "Huang", "Thomas S.", "" ] ]
TITLE: Real-World Font Recognition Using Deep Network and Domain Adaptation ABSTRACT: We address a challenging fine-grained classification problem: recognizing a font style from an image of text. In this task, it is very easy to generate lots of rendered font examples but very hard to obtain real-world labeled images. This real-to-synthetic domain gap caused poor generalization to new real data in previous methods (Chen et al., 2014). In this paper, we turn to Convolutional Neural Networks and use an adaptation technique based on a Stacked Convolutional Auto-Encoder that exploits unlabeled real-world images combined with synthetic data. The proposed method achieves an accuracy higher than 80% (top-5) on a real-world dataset.
no_new_dataset
0.954774
1504.00045
Yongxin Yang
Zhiyuan Shi, Yongxin Yang, Timothy M. Hospedales and Tao Xiang
Weakly Supervised Learning of Objects, Attributes and their Associations
14 pages, Accepted to ECCV 2014
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When humans describe images, they tend to use combinations of nouns and adjectives, corresponding to objects and their associated attributes respectively. To generate such a description automatically, one needs to model objects, attributes and their associations. Conventional methods require strong annotation of object and attribute locations, making them less scalable. In this paper, we model object-attribute associations from weakly labelled images, such as those widely available on media sharing sites (e.g. Flickr), where only image-level labels (either objects or attributes) are given, without their locations and associations. This is achieved by introducing a novel weakly supervised non-parametric Bayesian model. Once learned, given a new image, our model can describe the image, including objects, attributes and their associations, as well as their locations and segmentation. Extensive experiments on benchmark datasets demonstrate that our weakly supervised model performs on par with strongly supervised models on tasks such as image description and retrieval based on object-attribute associations.
[ { "version": "v1", "created": "Tue, 31 Mar 2015 21:18:18 GMT" } ]
2015-04-02T00:00:00
[ [ "Shi", "Zhiyuan", "" ], [ "Yang", "Yongxin", "" ], [ "Hospedales", "Timothy M.", "" ], [ "Xiang", "Tao", "" ] ]
TITLE: Weakly Supervised Learning of Objects, Attributes and their Associations ABSTRACT: When humans describe images, they tend to use combinations of nouns and adjectives, corresponding to objects and their associated attributes respectively. To generate such a description automatically, one needs to model objects, attributes and their associations. Conventional methods require strong annotation of object and attribute locations, making them less scalable. In this paper, we model object-attribute associations from weakly labelled images, such as those widely available on media sharing sites (e.g. Flickr), where only image-level labels (either objects or attributes) are given, without their locations and associations. This is achieved by introducing a novel weakly supervised non-parametric Bayesian model. Once learned, given a new image, our model can describe the image, including objects, attributes and their associations, as well as their locations and segmentation. Extensive experiments on benchmark datasets demonstrate that our weakly supervised model performs on par with strongly supervised models on tasks such as image description and retrieval based on object-attribute associations.
no_new_dataset
0.953188
1504.00191
Sanjay Sahay
Rajendra Kumar Roul, Shubham Rohan Asthana, Sanjay Kumar Sahay
Automated Document Indexing via Intelligent Hierarchical Clustering: A Novel Approach
6 Pages, 3 Figures. IEEE Xplore, ICHPCA-2014
null
10.1109/ICHPCA.2014.7045347
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the rising quantity of textual data available in electronic format, organizing it has become a highly challenging task. In the present paper, we explore a document organization framework that exploits an intelligent hierarchical clustering algorithm to generate an index over a set of documents. The framework has been designed to be scalable and accurate even with large corpora. The advantage of the proposed algorithm lies in the need for minimal inputs, with much of the hierarchy attributes being decided in an automated manner using statistical methods. The use of topic modeling in a pre-processing stage ensures robustness to a range of variations in the input data. For the experimental work, the 20-Newsgroups dataset has been used. The F-measure of the proposed approach has been compared with the traditional K-Means and K-Medoids clustering algorithms. Test results demonstrate the applicability, efficiency and effectiveness of our proposed approach. After extensive experimentation, we conclude that the framework shows promise for further research and specialized commercial applications.
[ { "version": "v1", "created": "Wed, 1 Apr 2015 12:08:36 GMT" } ]
2015-04-02T00:00:00
[ [ "Roul", "Rajendra Kumar", "" ], [ "Asthana", "Shubham Rohan", "" ], [ "Sahay", "Sanjay Kumar", "" ] ]
TITLE: Automated Document Indexing via Intelligent Hierarchical Clustering: A Novel Approach ABSTRACT: With the rising quantity of textual data available in electronic format, organizing it has become a highly challenging task. In the present paper, we explore a document organization framework that exploits an intelligent hierarchical clustering algorithm to generate an index over a set of documents. The framework has been designed to be scalable and accurate even with large corpora. The advantage of the proposed algorithm lies in the need for minimal inputs, with much of the hierarchy attributes being decided in an automated manner using statistical methods. The use of topic modeling in a pre-processing stage ensures robustness to a range of variations in the input data. For the experimental work, the 20-Newsgroups dataset has been used. The F-measure of the proposed approach has been compared with the traditional K-Means and K-Medoids clustering algorithms. Test results demonstrate the applicability, efficiency and effectiveness of our proposed approach. After extensive experimentation, we conclude that the framework shows promise for further research and specialized commercial applications.
no_new_dataset
0.949435
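The indexing record above benchmarks against flat K-Means on 20 Newsgroups. A minimal version of that baseline, assuming scikit-learn and illustrative hyperparameters (TF-IDF vocabulary size, n_init), looks like the sketch below; it reports V-measure rather than the paper's F-measure for brevity.

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn import metrics

data = fetch_20newsgroups(subset="train", remove=("headers", "footers", "quotes"))
X = TfidfVectorizer(max_features=20000, stop_words="english").fit_transform(data.data)

km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X)  # one cluster per group
print("V-measure vs. true groups:", metrics.v_measure_score(data.target, km.labels_))
```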
1504.00331
Eldon Carman Jr
E. Preston Carman Jr. (1), Till Westmann (2), Vinayak R. Borkar (3), Michael J. Carey (3) and Vassilis J. Tsotras (1) ((1) UC Riverside, (2) Oracle Labs, (3) UC Irvine)
Apache VXQuery: A Scalable XQuery Implementation
null
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The wide use of XML for document management and data exchange has created the need to query large repositories of XML data. To efficiently query such large data collections and take advantage of parallelism, we have implemented Apache VXQuery, an open-source scalable XQuery processor. The system builds upon two other open-source frameworks -- Hyracks, a parallel execution engine, and Algebricks, a language-agnostic compiler toolbox. Apache VXQuery extends these two frameworks and provides an implementation of the XQuery specifics (data model, data-model-dependent functions and optimizations, and a parser). We describe the architecture of Apache VXQuery, its integration with Hyracks and Algebricks, and the XQuery optimization rules applied to the query plan to improve path expression efficiency and to enable query parallelism. An experimental evaluation using a real 500GB dataset with various selection, aggregation and join XML queries shows that Apache VXQuery performs well both in terms of scale-up and speed-up. Our experiments show that it is about 3x faster than Saxon (an open-source and commercial XQuery processor) on a 4-core, single-node implementation, and around 2.5x faster than Apache MRQL (a MapReduce-based parallel query processor) on an eight-node cluster of 4-core machines.
[ { "version": "v1", "created": "Wed, 1 Apr 2015 18:27:23 GMT" } ]
2015-04-02T00:00:00
[ [ "Carman", "E. Preston", "Jr." ], [ "Westmann", "Till", "" ], [ "Borkar", "Vinayak R.", "" ], [ "Carey", "Michael J.", "" ], [ "Tsotras", "Vassilis J.", "" ] ]
TITLE: Apache VXQuery: A Scalable XQuery Implementation ABSTRACT: The wide use of XML for document management and data exchange has created the need to query large repositories of XML data. To efficiently query such large data collections and take advantage of parallelism, we have implemented Apache VXQuery, an open-source scalable XQuery processor. The system builds upon two other open-source frameworks -- Hyracks, a parallel execution engine, and Algebricks, a language-agnostic compiler toolbox. Apache VXQuery extends these two frameworks and provides an implementation of the XQuery specifics (data model, data-model-dependent functions and optimizations, and a parser). We describe the architecture of Apache VXQuery, its integration with Hyracks and Algebricks, and the XQuery optimization rules applied to the query plan to improve path expression efficiency and to enable query parallelism. An experimental evaluation using a real 500GB dataset with various selection, aggregation and join XML queries shows that Apache VXQuery performs well both in terms of scale-up and speed-up. Our experiments show that it is about 3x faster than Saxon (an open-source and commercial XQuery processor) on a 4-core, single-node implementation, and around 2.5x faster than Apache MRQL (a MapReduce-based parallel query processor) on an eight-node cluster of 4-core machines.
no_new_dataset
0.941761
1306.6802
Aris Kosmopoulos
Aris Kosmopoulos, Ioannis Partalas, Eric Gaussier, Georgios Paliouras, Ion Androutsopoulos
Evaluation Measures for Hierarchical Classification: a unified view and novel approaches
Submitted to journal
null
10.1007/s10618-014-0382-x
null
cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hierarchical classification addresses the problem of classifying items into a hierarchy of classes. An important issue in hierarchical classification is the evaluation of different classification algorithms, which is complicated by the hierarchical relations among the classes. Several evaluation measures have been proposed for hierarchical classification using the hierarchy in different ways. This paper studies the problem of evaluation in hierarchical classification by analyzing and abstracting the key components of the existing performance measures. It also proposes two alternative generic views of hierarchical evaluation and introduces two corresponding novel measures. The proposed measures, along with the state-of-the-art ones, are empirically tested on three large datasets from the domain of text classification. The empirical results illustrate the undesirable behavior of existing approaches and how the proposed measures overcome most of these issues across a range of cases.
[ { "version": "v1", "created": "Fri, 28 Jun 2013 11:49:53 GMT" }, { "version": "v2", "created": "Mon, 1 Jul 2013 17:33:58 GMT" } ]
2015-04-01T00:00:00
[ [ "Kosmopoulos", "Aris", "" ], [ "Partalas", "Ioannis", "" ], [ "Gaussier", "Eric", "" ], [ "Paliouras", "Georgios", "" ], [ "Androutsopoulos", "Ion", "" ] ]
TITLE: Evaluation Measures for Hierarchical Classification: a unified view and novel approaches ABSTRACT: Hierarchical classification addresses the problem of classifying items into a hierarchy of classes. An important issue in hierarchical classification is the evaluation of different classification algorithms, which is complicated by the hierarchical relations among the classes. Several evaluation measures have been proposed for hierarchical classification using the hierarchy in different ways. This paper studies the problem of evaluation in hierarchical classification by analyzing and abstracting the key components of the existing performance measures. It also proposes two alternative generic views of hierarchical evaluation and introduces two corresponding novel measures. The proposed measures, along with the state-of-the-art ones, are empirically tested on three large datasets from the domain of text classification. The empirical results illustrate the undesirable behavior of existing approaches and how the proposed measures overcome most of these issues across a range of cases.
no_new_dataset
0.947624
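One widely used family of hierarchical measures covered by surveys like the record above is set-based hierarchical precision/recall, where predicted and true labels are augmented with their ancestors before computing F1. The sketch below is a generic formulation of that idea on a toy two-level hierarchy, not necessarily the paper's exact definitions.

```python
def ancestors(node, parent):
    """All ancestors of `node` (excluding the implicit root None)."""
    out = set()
    while parent.get(node) is not None:
        node = parent[node]
        out.add(node)
    return out

def hierarchical_f1(pred, true, parent):
    """Set-based hierarchical F1: augment both label sets with ancestors."""
    P = set(pred) | set().union(*(ancestors(c, parent) for c in pred))
    T = set(true) | set().union(*(ancestors(c, parent) for c in true))
    hp = len(P & T) / len(P)
    hr = len(P & T) / len(T)
    return 2 * hp * hr / (hp + hr) if hp + hr else 0.0

# Toy hierarchy: root -> {animal, vehicle}; animal -> {cat, dog}
parent = {"animal": None, "vehicle": None, "cat": "animal", "dog": "animal"}
print(hierarchical_f1(pred={"dog"}, true={"cat"}, parent=parent))  # 0.5, shared "animal"
```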
1503.02725
Abhishek Sharma
Abhishek Sharma and Oncel Tuzel and David W. Jacobs
Deep Hierarchical Parsing for Semantic Segmentation
IEEE CVPR 2015
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes a learning-based approach to scene parsing inspired by the deep Recursive Context Propagation Network (RCPN). RCPN is a deep feed-forward neural network that utilizes the contextual information from the entire image, through bottom-up followed by top-down context propagation via random binary parse trees. This improves the feature representation of every super-pixel in the image for better classification into semantic categories. We analyze RCPN and propose two novel contributions to further improve the model. We first analyze the learning of RCPN parameters and discover the presence of bypass error paths in the computation graph of RCPN that can hinder contextual propagation. We propose to tackle this problem by including the classification loss of the internal nodes of the random parse trees in the original RCPN loss function. Secondly, we use an MRF on the parse tree nodes to model the hierarchical dependency present in the output. Both modifications provide performance boosts over the original RCPN and the new system achieves state-of-the-art performance on Stanford Background, SIFT-Flow and Daimler urban datasets.
[ { "version": "v1", "created": "Mon, 9 Mar 2015 23:05:26 GMT" }, { "version": "v2", "created": "Mon, 30 Mar 2015 20:03:01 GMT" } ]
2015-04-01T00:00:00
[ [ "Sharma", "Abhishek", "" ], [ "Tuzel", "Oncel", "" ], [ "Jacobs", "David W.", "" ] ]
TITLE: Deep Hierarchical Parsing for Semantic Segmentation ABSTRACT: This paper proposes a learning-based approach to scene parsing inspired by the deep Recursive Context Propagation Network (RCPN). RCPN is a deep feed-forward neural network that utilizes the contextual information from the entire image, through bottom-up followed by top-down context propagation via random binary parse trees. This improves the feature representation of every super-pixel in the image for better classification into semantic categories. We analyze RCPN and propose two novel contributions to further improve the model. We first analyze the learning of RCPN parameters and discover the presence of bypass error paths in the computation graph of RCPN that can hinder contextual propagation. We propose to tackle this problem by including the classification loss of the internal nodes of the random parse trees in the original RCPN loss function. Secondly, we use an MRF on the parse tree nodes to model the hierarchical dependency present in the output. Both modifications provide performance boosts over the original RCPN and the new system achieves state-of-the-art performance on Stanford Background, SIFT-Flow and Daimler urban datasets.
no_new_dataset
0.948489
1503.08843
Peng Sun
Peng Sun, James K. Min, Guanglei Xiong
Globally Tuned Cascade Pose Regression via Back Propagation with Application in 2D Face Pose Estimation and Heart Segmentation in 3D CT Images
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, a successful pose estimation algorithm, called Cascade Pose Regression (CPR), was proposed in the literature. Trained over Pose Index Features, CPR is a regressor ensemble that is similar to Boosting. In this paper we show how CPR can be represented as a Neural Network. Specifically, we adopt a Graph Transformer Network (GTN) representation and accordingly train CPR with Back Propagation (BP), which permits global tuning. In contrast, the previous CPR literature only performed layer-wise training without any subsequent fine-tuning. We empirically show that global training with BP outperforms layer-wise (pre-)training. Our CPR-GTN adopts a Multi-Layer Perceptron as the regressor, which utilizes sparse connections to learn local image feature representations. We tested the proposed CPR-GTN on the 2D face pose estimation problem as in the previous CPR literature. We also investigated the possibility of extending CPR-GTN to 3D pose estimation through experiments on a 3D Computed Tomography dataset for heart segmentation.
[ { "version": "v1", "created": "Mon, 30 Mar 2015 20:17:23 GMT" } ]
2015-04-01T00:00:00
[ [ "Sun", "Peng", "" ], [ "Min", "James K.", "" ], [ "Xiong", "Guanglei", "" ] ]
TITLE: Globally Tuned Cascade Pose Regression via Back Propagation with Application in 2D Face Pose Estimation and Heart Segmentation in 3D CT Images ABSTRACT: Recently, a successful pose estimation algorithm, called Cascade Pose Regression (CPR), was proposed in the literature. Trained over Pose Index Features, CPR is a regressor ensemble that is similar to Boosting. In this paper we show how CPR can be represented as a Neural Network. Specifically, we adopt a Graph Transformer Network (GTN) representation and accordingly train CPR with Back Propagation (BP), which permits global tuning. In contrast, the previous CPR literature only performed layer-wise training without any subsequent fine-tuning. We empirically show that global training with BP outperforms layer-wise (pre-)training. Our CPR-GTN adopts a Multi-Layer Perceptron as the regressor, which utilizes sparse connections to learn local image feature representations. We tested the proposed CPR-GTN on the 2D face pose estimation problem as in the previous CPR literature. We also investigated the possibility of extending CPR-GTN to 3D pose estimation through experiments on a 3D Computed Tomography dataset for heart segmentation.
no_new_dataset
0.947624
1503.08853
Ali Borji
Ali Borji and James Tanner
Reconciling saliency and object center-bias hypotheses in explaining free-viewing fixations
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Predicting where people look in natural scenes has attracted a lot of interest in computer vision and computational neuroscience over the past two decades. Two seemingly contrasting categories of cues have been proposed to influence where people look: low-level image saliency and high-level semantic information. Our first contribution is to take a detailed look at these cues to confirm the hypothesis proposed by Henderson (1993) and Nuthmann & Henderson (2010) that observers tend to look at the center of objects. We analyzed fixation data for scene free-viewing over 17 observers on 60 fully annotated images with various types of objects. Images contained different types of scenes, such as natural scenes, line drawings, and 3D rendered scenes. Our second contribution is to propose a simple combined model of low-level saliency and object center-bias that outperforms each individual component significantly over our data, as well as on the OSIE dataset by Xu et al. (2014). The results reconcile saliency with object center-bias hypotheses and highlight that both types of cues are important in guiding fixations. Our work opens new directions to understand strategies that humans use in observing scenes and objects, and demonstrates the construction of combined models of low-level saliency and high-level object-based information.
[ { "version": "v1", "created": "Mon, 30 Mar 2015 21:07:53 GMT" } ]
2015-04-01T00:00:00
[ [ "Borji", "Ali", "" ], [ "Tanner", "James", "" ] ]
TITLE: Reconciling saliency and object center-bias hypotheses in explaining free-viewing fixations ABSTRACT: Predicting where people look in natural scenes has attracted a lot of interest in computer vision and computational neuroscience over the past two decades. Two seemingly contrasting categories of cues have been proposed to influence where people look: low-level image saliency and high-level semantic information. Our first contribution is to take a detailed look at these cues to confirm the hypothesis proposed by Henderson (1993) and Nuthmann & Henderson (2010) that observers tend to look at the center of objects. We analyzed fixation data for scene free-viewing over 17 observers on 60 fully annotated images with various types of objects. Images contained different types of scenes, such as natural scenes, line drawings, and 3D rendered scenes. Our second contribution is to propose a simple combined model of low-level saliency and object center-bias that outperforms each individual component significantly over our data, as well as on the OSIE dataset by Xu et al. (2014). The results reconcile saliency with object center-bias hypotheses and highlight that both types of cues are important in guiding fixations. Our work opens new directions to understand strategies that humans use in observing scenes and objects, and demonstrates the construction of combined models of low-level saliency and high-level object-based information.
no_new_dataset
0.953579
1503.08873
Paul Mineiro
Paul Mineiro and Nikos Karampatziakis
Fast Label Embeddings for Extremely Large Output Spaces
Accepted as a workshop contribution at ICLR 2015
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many modern multiclass and multilabel problems are characterized by increasingly large output spaces. For these problems, label embeddings have been shown to be a useful primitive that can improve computational and statistical efficiency. In this work we utilize a correspondence between rank constrained estimation and low dimensional label embeddings that uncovers a fast label embedding algorithm which works in both the multiclass and multilabel settings. The result is a randomized algorithm for partial least squares, whose running time is exponentially faster than naive algorithms. We demonstrate our techniques on two large-scale public datasets, from the Large Scale Hierarchical Text Challenge and the Open Directory Project, where we obtain state of the art results.
[ { "version": "v1", "created": "Mon, 30 Mar 2015 23:29:46 GMT" } ]
2015-04-01T00:00:00
[ [ "Mineiro", "Paul", "" ], [ "Karampatziakis", "Nikos", "" ] ]
TITLE: Fast Label Embeddings for Extremely Large Output Spaces ABSTRACT: Many modern multiclass and multilabel problems are characterized by increasingly large output spaces. For these problems, label embeddings have been shown to be a useful primitive that can improve computational and statistical efficiency. In this work we utilize a correspondence between rank constrained estimation and low dimensional label embeddings that uncovers a fast label embedding algorithm which works in both the multiclass and multilabel settings. The result is a randomized algorithm for partial least squares, whose running time is exponentially faster than naive algorithms. We demonstrate our techniques on two large-scale public datasets, from the Large Scale Hierarchical Text Challenge and the Open Directory Project, where we obtain state of the art results.
no_new_dataset
0.948537
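The label-embedding record above rests on a correspondence between rank-constrained estimation and low-dimensional label embeddings, computed with randomized linear algebra. The sketch below applies a generic randomized range finder to the cross-covariance X^T Y to get a rank-k label embedding; it conveys the flavor of the approach but is not the paper's exact algorithm, and all sizes are illustrative.

```python
import numpy as np

def fast_label_embedding(X, Y, k, seed=0):
    """Rank-k label embedding from the cross-covariance X^T Y via a
    randomized range finder -- a sketch of the idea, not the paper's
    exact algorithm."""
    rng = np.random.default_rng(seed)
    C = X.T @ Y                               # d x L cross-covariance
    Omega = rng.standard_normal((C.shape[0], k + 5))
    Q, _ = np.linalg.qr(C.T @ Omega)          # basis for the label-side range
    _, _, Wt = np.linalg.svd(C @ Q, full_matrices=False)
    return Q @ Wt[:k].T                       # L x k embedding matrix

rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 50))
Y = np.eye(20)[rng.integers(0, 20, size=1000)]   # one-hot labels, 20 classes
V = fast_label_embedding(X, Y, k=8)
Z = Y @ V                                        # low-dimensional regression targets
```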
1407.7556
Lorenzo Livi
Lorenzo Livi, Alireza Sadeghian, Witold Pedrycz
Entropic one-class classifiers
To appear in IEEE-TNNLS
null
10.1109/TNNLS.2015.2418332
null
cs.CV cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The one-class classification problem is a well-known research endeavor in pattern recognition. The problem is also known under different names, such as outlier and novelty/anomaly detection. The core of the problem consists in modeling and recognizing patterns belonging only to a so-called target class. All other patterns are termed non-target, and therefore they should be recognized as such. In this paper, we propose a novel one-class classification system that is based on an interplay of different techniques. Primarily, we follow a dissimilarity-representation-based approach; we embed the input data into the dissimilarity space by means of an appropriate parametric dissimilarity measure. This step allows us to process virtually any type of data. The dissimilarity vectors are then represented through weighted Euclidean graphs, which we use to (i) determine the entropy of the data distribution in the dissimilarity space, and at the same time (ii) derive effective decision regions that are modeled as clusters of vertices. Since the dissimilarity measure for the input data is parametric, we optimize its parameters by means of a global optimization scheme, which considers both mesoscopic and structural characteristics of the data represented through the graphs. The proposed one-class classifier is designed to provide both hard (Boolean) and soft decisions about the recognition of test patterns, allowing an accurate description of the classification process. We evaluate the performance of the system on different benchmarking datasets, containing either feature-based or structured patterns. Experimental results demonstrate the effectiveness of the proposed technique.
[ { "version": "v1", "created": "Mon, 28 Jul 2014 20:26:24 GMT" }, { "version": "v2", "created": "Tue, 16 Dec 2014 20:46:21 GMT" }, { "version": "v3", "created": "Sun, 11 Jan 2015 16:27:23 GMT" } ]
2015-03-31T00:00:00
[ [ "Livi", "Lorenzo", "" ], [ "Sadeghian", "Alireza", "" ], [ "Pedrycz", "Witold", "" ] ]
TITLE: Entropic one-class classifiers ABSTRACT: The one-class classification problem is a well-known research endeavor in pattern recognition. The problem is also known under different names, such as outlier and novelty/anomaly detection. The core of the problem consists in modeling and recognizing patterns belonging only to a so-called target class. All other patterns are termed non-target, and therefore they should be recognized as such. In this paper, we propose a novel one-class classification system that is based on an interplay of different techniques. Primarily, we follow a dissimilarity-representation-based approach; we embed the input data into the dissimilarity space by means of an appropriate parametric dissimilarity measure. This step allows us to process virtually any type of data. The dissimilarity vectors are then represented through weighted Euclidean graphs, which we use to (i) determine the entropy of the data distribution in the dissimilarity space, and at the same time (ii) derive effective decision regions that are modeled as clusters of vertices. Since the dissimilarity measure for the input data is parametric, we optimize its parameters by means of a global optimization scheme, which considers both mesoscopic and structural characteristics of the data represented through the graphs. The proposed one-class classifier is designed to provide both hard (Boolean) and soft decisions about the recognition of test patterns, allowing an accurate description of the classification process. We evaluate the performance of the system on different benchmarking datasets, containing either feature-based or structured patterns. Experimental results demonstrate the effectiveness of the proposed technique.
no_new_dataset
0.946498
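The entropic one-class classifier above starts by embedding inputs into a dissimilarity space, one coordinate per prototype. The sketch below shows that embedding plus a deliberately simple hard (Boolean) decision rule; the prototype count, the 95th-percentile radius, and the Euclidean metric are assumptions for illustration, standing in for the paper's graph-entropy machinery.

```python
import numpy as np

def dissimilarity_embed(X, prototypes, metric=lambda a, b: np.linalg.norm(a - b)):
    """Embed samples into the dissimilarity space: one coordinate per prototype."""
    return np.array([[metric(x, p) for p in prototypes] for x in X])

rng = np.random.default_rng(0)
target = rng.normal(0, 1, size=(200, 5))          # target-class training data
prototypes = target[rng.choice(200, 10, replace=False)]

D_train = dissimilarity_embed(target, prototypes)
radius = np.quantile(D_train.min(axis=1), 0.95)   # simple decision region

def is_target(x):
    d = dissimilarity_embed(x[None, :], prototypes)
    return d.min() <= radius                      # hard (Boolean) decision

print(is_target(np.zeros(5)), is_target(np.full(5, 6.0)))  # True, False
```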
1410.0760
Chih-Hang Wang
Chih-Hang Wang, De-Nian Yang, Wen-Tsuen Chen
Scheduling for Multi-Camera Surveillance in LTE Networks
9 pages, 10 figures
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Wireless surveillance in cellular networks has become increasingly important, and commercial LTE surveillance cameras are now available. Nevertheless, most scheduling algorithms in the literature are throughput-, fairness-, or profit-based approaches, which are not suitable for wireless surveillance. In this paper, therefore, we explore the resource allocation problem for a multi-camera surveillance system in 3GPP Long Term Evolution (LTE) uplink (UL) networks. We minimize the number of allocated resource blocks (RBs) while guaranteeing the coverage requirement for surveillance systems in LTE UL networks. Specifically, we formulate the Camera Set Resource Allocation Problem (CSRAP) and prove that the problem is NP-Hard. We then propose an Integer Linear Programming formulation for general cases to find the optimal solution. Moreover, we present a baseline algorithm and devise an approximation algorithm to solve the problem. Simulation results based on a real surveillance map and synthetic datasets demonstrate that the number of allocated RBs can be effectively reduced compared to the existing approach for LTE networks.
[ { "version": "v1", "created": "Fri, 3 Oct 2014 06:17:20 GMT" }, { "version": "v2", "created": "Mon, 6 Oct 2014 08:33:46 GMT" }, { "version": "v3", "created": "Mon, 27 Oct 2014 04:24:11 GMT" }, { "version": "v4", "created": "Sat, 28 Mar 2015 13:58:08 GMT" } ]
2015-03-31T00:00:00
[ [ "Wang", "Chih-Hang", "" ], [ "Yang", "De-Nian", "" ], [ "Chen", "Wen-Tsuen", "" ] ]
TITLE: Scheduling for Multi-Camera Surveillance in LTE Networks ABSTRACT: Wireless surveillance in cellular networks has become increasingly important, and commercial LTE surveillance cameras are now available. Nevertheless, most scheduling algorithms in the literature are throughput-, fairness-, or profit-based approaches, which are not suitable for wireless surveillance. In this paper, therefore, we explore the resource allocation problem for a multi-camera surveillance system in 3GPP Long Term Evolution (LTE) uplink (UL) networks. We minimize the number of allocated resource blocks (RBs) while guaranteeing the coverage requirement for surveillance systems in LTE UL networks. Specifically, we formulate the Camera Set Resource Allocation Problem (CSRAP) and prove that the problem is NP-Hard. We then propose an Integer Linear Programming formulation for general cases to find the optimal solution. Moreover, we present a baseline algorithm and devise an approximation algorithm to solve the problem. Simulation results based on a real surveillance map and synthetic datasets demonstrate that the number of allocated RBs can be effectively reduced compared to the existing approach for LTE networks.
no_new_dataset
0.952926
1411.6400
Min Wei
Min Wei, Tommy W. S. Chow, Rosa H. M. Chan
Mutual Information-Based Unsupervised Feature Transformation for Heterogeneous Feature Subset Selection
This paper has been withdrawn by the author because the number of datasets and classifiers is not sufficient to support the claims; more simulation work is needed
null
null
null
stat.ML cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conventional mutual information (MI) based feature selection (FS) methods are unable to handle heterogeneous feature subset selection properly because of data format differences or the methods used to estimate the MI between a feature subset and the class label. A way to solve this problem is feature transformation (FT). In this study, a novel unsupervised feature transformation (UFT) which can transform non-numerical features into numerical features is developed and tested. The UFT process is MI-based and independent of the class label. MI-based FS algorithms, such as Parzen window feature selector (PWFS), minimum redundancy maximum relevance feature selection (mRMR), and normalized MI feature selection (NMIFS), can all adopt UFT for pre-processing of non-numerical features. Unlike traditional FT methods, the proposed UFT is unbiased while PWFS is utilized to its full advantage. Simulations and analyses of large-scale datasets showed that the feature subsets selected by the integrated method, UFT-PWFS, outperformed other FT-FS integrated methods in classification accuracy.
[ { "version": "v1", "created": "Mon, 24 Nov 2014 10:15:17 GMT" }, { "version": "v2", "created": "Sun, 29 Mar 2015 05:32:50 GMT" } ]
2015-03-31T00:00:00
[ [ "Wei", "Min", "" ], [ "Chow", "Tommy W. S.", "" ], [ "Chan", "Rosa H. M.", "" ] ]
TITLE: Mutual Information-Based Unsupervised Feature Transformation for Heterogeneous Feature Subset Selection ABSTRACT: Conventional mutual information (MI) based feature selection (FS) methods are unable to handle heterogeneous feature subset selection properly because of data format differences or the methods used to estimate the MI between a feature subset and the class label. A way to solve this problem is feature transformation (FT). In this study, a novel unsupervised feature transformation (UFT) which can transform non-numerical features into numerical features is developed and tested. The UFT process is MI-based and independent of the class label. MI-based FS algorithms, such as Parzen window feature selector (PWFS), minimum redundancy maximum relevance feature selection (mRMR), and normalized MI feature selection (NMIFS), can all adopt UFT for pre-processing of non-numerical features. Unlike traditional FT methods, the proposed UFT is unbiased while PWFS is utilized to its full advantage. Simulations and analyses of large-scale datasets showed that the feature subsets selected by the integrated method, UFT-PWFS, outperformed other FT-FS integrated methods in classification accuracy.
no_new_dataset
0.948775
1503.01800
Samira Ebrahimi Kahou
Samira Ebrahimi Kahou, Xavier Bouthillier, Pascal Lamblin, Caglar Gulcehre, Vincent Michalski, Kishore Konda, S\'ebastien Jean, Pierre Froumenty, Yann Dauphin, Nicolas Boulanger-Lewandowski, Raul Chandias Ferrari, Mehdi Mirza, David Warde-Farley, Aaron Courville, Pascal Vincent, Roland Memisevic, Christopher Pal, Yoshua Bengio
EmoNets: Multimodal deep learning approaches for emotion recognition in video
null
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The task of the emotion recognition in the wild (EmotiW) Challenge is to assign one of seven emotions to short video clips extracted from Hollywood style movies. The videos depict acted-out emotions under realistic conditions with a large degree of variation in attributes such as pose and illumination, making it worthwhile to explore approaches which consider combinations of features from multiple modalities for label assignment. In this paper we present our approach to learning several specialist models using deep learning techniques, each focusing on one modality. Among these are a convolutional neural network, focusing on capturing visual information in detected faces, a deep belief net focusing on the representation of the audio stream, a K-Means based "bag-of-mouths" model, which extracts visual features around the mouth region and a relational autoencoder, which addresses spatio-temporal aspects of videos. We explore multiple methods for the combination of cues from these modalities into one common classifier. This achieves a considerably greater accuracy than predictions from our strongest single-modality classifier. Our method was the winning submission in the 2013 EmotiW challenge and achieved a test set accuracy of 47.67% on the 2014 dataset.
[ { "version": "v1", "created": "Thu, 5 Mar 2015 22:03:26 GMT" }, { "version": "v2", "created": "Mon, 30 Mar 2015 00:55:02 GMT" } ]
2015-03-31T00:00:00
[ [ "Kahou", "Samira Ebrahimi", "" ], [ "Bouthillier", "Xavier", "" ], [ "Lamblin", "Pascal", "" ], [ "Gulcehre", "Caglar", "" ], [ "Michalski", "Vincent", "" ], [ "Konda", "Kishore", "" ], [ "Jean", "Sébastien", "" ], [ "Froumenty", "Pierre", "" ], [ "Dauphin", "Yann", "" ], [ "Boulanger-Lewandowski", "Nicolas", "" ], [ "Ferrari", "Raul Chandias", "" ], [ "Mirza", "Mehdi", "" ], [ "Warde-Farley", "David", "" ], [ "Courville", "Aaron", "" ], [ "Vincent", "Pascal", "" ], [ "Memisevic", "Roland", "" ], [ "Pal", "Christopher", "" ], [ "Bengio", "Yoshua", "" ] ]
TITLE: EmoNets: Multimodal deep learning approaches for emotion recognition in video ABSTRACT: The task of the Emotion Recognition in the Wild (EmotiW) Challenge is to assign one of seven emotions to short video clips extracted from Hollywood-style movies. The videos depict acted-out emotions under realistic conditions with a large degree of variation in attributes such as pose and illumination, making it worthwhile to explore approaches that consider combinations of features from multiple modalities for label assignment. In this paper we present our approach to learning several specialist models using deep learning techniques, each focusing on one modality. Among these are a convolutional neural network, focusing on capturing visual information in detected faces; a deep belief net, focusing on the representation of the audio stream; a K-Means-based "bag-of-mouths" model, which extracts visual features around the mouth region; and a relational autoencoder, which addresses spatio-temporal aspects of videos. We explore multiple methods for combining the cues from these modalities into one common classifier, which achieves considerably greater accuracy than predictions from our strongest single-modality classifier. Our method was the winning submission in the 2013 EmotiW challenge and achieved a test-set accuracy of 47.67% on the 2014 dataset.
no_new_dataset
0.948585
1503.08263
Chunhua Shen
Fayao Liu, Guosheng Lin, Chunhua Shen
CRF Learning with CNN Features for Image Segmentation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conditional Random Fields (CRFs) have been widely applied in image segmentation. While most studies rely on hand-crafted features, we propose here to exploit a large pre-trained convolutional neural network (CNN) to generate deep features for CRF learning. The deep CNN is trained on the ImageNet dataset and transferred here to image segmentation for constructing the potentials of superpixels. The CRF parameters are then learnt using a structured support vector machine (SSVM). To fully exploit context information in inference, we construct spatially related co-occurrence pairwise potentials and incorporate them into the energy function. This prefers labellings of object pairs that frequently co-occur in a certain spatial layout and at the same time avoids implausible labellings during inference. Extensive experiments on binary and multi-class segmentation benchmarks demonstrate the promise of the proposed method. We thus provide new baselines for segmentation performance on the Weizmann horse, Graz-02, MSRC-21, Stanford Background and PASCAL VOC 2011 datasets.
[ { "version": "v1", "created": "Sat, 28 Mar 2015 04:05:09 GMT" } ]
2015-03-31T00:00:00
[ [ "Liu", "Fayao", "" ], [ "Lin", "Guosheng", "" ], [ "Shen", "Chunhua", "" ] ]
TITLE: CRF Learning with CNN Features for Image Segmentation ABSTRACT: Conditional Random Fields (CRFs) have been widely applied in image segmentation. While most studies rely on hand-crafted features, we propose here to exploit a large pre-trained convolutional neural network (CNN) to generate deep features for CRF learning. The deep CNN is trained on the ImageNet dataset and transferred here to image segmentation for constructing the potentials of superpixels. The CRF parameters are then learnt using a structured support vector machine (SSVM). To fully exploit context information in inference, we construct spatially related co-occurrence pairwise potentials and incorporate them into the energy function. This prefers labellings of object pairs that frequently co-occur in a certain spatial layout and at the same time avoids implausible labellings during inference. Extensive experiments on binary and multi-class segmentation benchmarks demonstrate the promise of the proposed method. We thus provide new baselines for segmentation performance on the Weizmann horse, Graz-02, MSRC-21, Stanford Background and PASCAL VOC 2011 datasets.
no_new_dataset
0.949059