Dataset schema (one record per arXiv paper; observed value ranges in parentheses):
  id              string (9 to 16 chars)
  submitter       string (3 to 64 chars)
  authors         string (5 to 6.63k chars)
  title           string (7 to 245 chars)
  comments        string (1 to 482 chars)
  journal-ref     string (4 to 382 chars)
  doi             string (9 to 151 chars)
  report-no       categorical string (984 distinct values)
  categories      string (5 to 108 chars)
  license         categorical string (9 distinct values)
  abstract        string (83 to 3.41k chars)
  versions        list (1 to 20 items)
  update_date     timestamp[s] (2007-05-23 to 2025-04-11)
  authors_parsed  sequence (1 to 427 items)
  prompt          string (166 to 3.49k chars)
  label           categorical string (2 values: new_dataset / no_new_dataset)
  prob            float64 (0.5 to 0.98)

Records follow as flat field lists in the order above.
1204.5507
Ketan Rajawat
Ketan Rajawat, Emiliano Dall'Anese, and Georgios B. Giannakis
Dynamic Network Delay Cartography
Part of this paper has been published in the \emph{IEEE Statistical Signal Processing Workshop}, Ann Arbor, MI, Aug. 2012
null
10.1109/TIT.2014.2311802
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Path delays in IP networks are important metrics, required by network operators for assessment, planning, and fault diagnosis. Monitoring delays of all source-destination pairs in a large network is however challenging and wasteful of resources. The present paper advocates a spatio-temporal Kalman filtering approach to construct network-wide delay maps using measurements on only a few paths. The proposed network cartography framework allows efficient tracking and prediction of delays by relying on both topological as well as historical data. Optimal paths for delay measurement are selected in an online fashion by leveraging the notion of submodularity. The resulting predictor is optimal in the class of linear predictors, and outperforms competing alternatives on real-world datasets.
[ { "version": "v1", "created": "Tue, 24 Apr 2012 22:31:47 GMT" }, { "version": "v2", "created": "Sun, 11 Nov 2012 10:55:56 GMT" } ]
2016-11-17T00:00:00
[ [ "Rajawat", "Ketan", "" ], [ "Dall'Anese", "Emiliano", "" ], [ "Giannakis", "Georgios B.", "" ] ]
TITLE: Dynamic Network Delay Cartography ABSTRACT: Path delays in IP networks are important metrics, required by network operators for assessment, planning, and fault diagnosis. Monitoring delays of all source-destination pairs in a large network is however challenging and wasteful of resources. The present paper advocates a spatio-temporal Kalman filtering approach to construct network-wide delay maps using measurements on only a few paths. The proposed network cartography framework allows efficient tracking and prediction of delays by relying on both topological as well as historical data. Optimal paths for delay measurement are selected in an online fashion by leveraging the notion of submodularity. The resulting predictor is optimal in the class of linear predictors, and outperforms competing alternatives on real-world datasets.
no_new_dataset
0.954137
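The record above (arXiv:1204.5507) tracks all path delays from measurements on only a few paths via spatio-temporal Kalman filtering. Below is a minimal sketch of the generic linear Kalman predict/update step that such a scheme builds on; every matrix name is a generic placeholder rather than the paper's notation, and the paper's submodular online path selection is not shown.

```python
import numpy as np

def kalman_step(x, P, z, A, Q, H, R):
    """One predict/update cycle of a linear Kalman filter.

    x, P : prior state estimate (network-wide path delays) and covariance
    z    : delay measurements on the few monitored paths
    A, Q : state-transition matrix and process-noise covariance
    H, R : path-selection (observation) matrix and measurement-noise covariance
    """
    # Predict: propagate the delay state through the temporal model.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update: correct the prediction with the measured subset of paths.
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

Choosing which rows of H to activate at each step is where the paper's submodularity argument enters.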
1212.2262
Jin Wang
Jin Wang, Ping Liu, Mary F. H. She, Saeid Nahavandi and Abbas Kouzani
Bag-of-Words Representation for Biomedical Time Series Classification
10 pages, 7 figures. Submitted to IEEE Transactions on Biomedical Engineering
null
10.1016/j.bspc.2013.06.004
null
cs.LG cs.AI
http://creativecommons.org/licenses/by-nc-sa/3.0/
Automatic analysis of biomedical time series such as electroencephalogram (EEG) and electrocardiographic (ECG) signals has attracted great interest in the biomedical engineering community due to its important applications in medicine. In this work, a simple yet effective bag-of-words representation that is able to capture both local and global structural similarity information is proposed for biomedical time series representation. In particular, similar to the bag-of-words model used in the text-document domain, the proposed method treats a time series as a text document and extracts local segments from the time series as words. The biomedical time series is then represented as a histogram of codewords, each entry of which counts how often the corresponding codeword appears in the time series. Although the temporal order of the local segments is ignored, the bag-of-words representation is able to capture high-level structural information because both local and global structural information are well utilized. The performance of the bag-of-words model is validated on three datasets extracted from real EEG and ECG signals. The experimental results demonstrate that the proposed method is not only insensitive to parameters of the bag-of-words model such as local segment length and codebook size, but also robust to noise.
[ { "version": "v1", "created": "Tue, 11 Dec 2012 00:49:27 GMT" } ]
2016-11-17T00:00:00
[ [ "Wang", "Jin", "" ], [ "Liu", "Ping", "" ], [ "She", "Mary F. H.", "" ], [ "Nahavandi", "Saeid", "" ], [ "Kouzani", "and Abbas", "" ] ]
TITLE: Bag-of-Words Representation for Biomedical Time Series Classification ABSTRACT: Automatic analysis of biomedical time series such as electroencephalogram (EEG) and electrocardiographic (ECG) signals has attracted great interest in the biomedical engineering community due to its important applications in medicine. In this work, a simple yet effective bag-of-words representation that is able to capture both local and global structural similarity information is proposed for biomedical time series representation. In particular, similar to the bag-of-words model used in the text-document domain, the proposed method treats a time series as a text document and extracts local segments from the time series as words. The biomedical time series is then represented as a histogram of codewords, each entry of which counts how often the corresponding codeword appears in the time series. Although the temporal order of the local segments is ignored, the bag-of-words representation is able to capture high-level structural information because both local and global structural information are well utilized. The performance of the bag-of-words model is validated on three datasets extracted from real EEG and ECG signals. The experimental results demonstrate that the proposed method is not only insensitive to parameters of the bag-of-words model such as local segment length and codebook size, but also robust to noise.
no_new_dataset
0.952309
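A minimal sketch of the bag-of-words time-series representation described in the record above (arXiv:1212.2262), assuming a codebook has already been learned (for instance, as k-means centroids of training segments); segment normalization and codebook learning are omitted.

```python
import numpy as np

def bow_histogram(series, codebook, seg_len, step=1):
    """Represent a 1-D time series as a normalized histogram of codewords.

    codebook : (k, seg_len) array of codewords, e.g. k-means centroids
               learned from local segments of training series.
    """
    counts = np.zeros(len(codebook))
    for start in range(0, len(series) - seg_len + 1, step):
        seg = series[start:start + seg_len]
        # Treat the local segment as a "word": assign it to the
        # nearest codeword and count the occurrence.
        counts[np.argmin(np.linalg.norm(codebook - seg, axis=1))] += 1
    return counts / counts.sum()
```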
1302.6615
Ratnadip Adhikari
Ratnadip Adhikari, R. K. Agrawal, Laxmi Kant
PSO based Neural Networks vs. Traditional Statistical Models for Seasonal Time Series Forecasting
4 figures, 4 tables, 31 references, conference proceedings
IEEE International Advanced Computing Conference (IACC), 2013
10.1109/IAdCC.2013.6514315
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Seasonality is a distinctive characteristic which is often observed in many practical time series. Artificial Neural Networks (ANNs) are a class of promising models for efficiently recognizing and forecasting seasonal patterns. In this paper, the Particle Swarm Optimization (PSO) approach is used to enhance the forecasting strengths of feedforward ANN (FANN) as well as Elman ANN (EANN) models for seasonal data. Three widely popular versions of the basic PSO algorithm, viz. Trelea-I, Trelea-II and Clerc-Type1 are considered here. The empirical analysis is conducted on three real-world seasonal time series. Results clearly show that each version of the PSO algorithm achieves notably better forecasting accuracies than the standard Backpropagation (BP) training method for both FANN and EANN models. The neural network forecasting results are also compared with those from the three traditional statistical models, viz. Seasonal Autoregressive Integrated Moving Average (SARIMA), Holt-Winters (HW) and Support Vector Machine (SVM). The comparison demonstrates that both PSO and BP based neural networks outperform SARIMA, HW and SVM models for all three time series datasets. The forecasting performances of ANNs are further improved through combining the outputs from the three PSO based models.
[ { "version": "v1", "created": "Tue, 26 Feb 2013 22:25:16 GMT" } ]
2016-11-17T00:00:00
[ [ "Adhikari", "Ratnadip", "" ], [ "Agrawal", "R. K.", "" ], [ "Kant", "Laxmi", "" ] ]
TITLE: PSO based Neural Networks vs. Traditional Statistical Models for Seasonal Time Series Forecasting ABSTRACT: Seasonality is a distinctive characteristic which is often observed in many practical time series. Artificial Neural Networks (ANNs) are a class of promising models for efficiently recognizing and forecasting seasonal patterns. In this paper, the Particle Swarm Optimization (PSO) approach is used to enhance the forecasting strengths of feedforward ANN (FANN) as well as Elman ANN (EANN) models for seasonal data. Three widely popular versions of the basic PSO algorithm, viz. Trelea-I, Trelea-II and Clerc-Type1 are considered here. The empirical analysis is conducted on three real-world seasonal time series. Results clearly show that each version of the PSO algorithm achieves notably better forecasting accuracies than the standard Backpropagation (BP) training method for both FANN and EANN models. The neural network forecasting results are also compared with those from the three traditional statistical models, viz. Seasonal Autoregressive Integrated Moving Average (SARIMA), Holt-Winters (HW) and Support Vector Machine (SVM). The comparison demonstrates that both PSO and BP based neural networks outperform SARIMA, HW and SVM models for all three time series datasets. The forecasting performances of ANNs are further improved through combining the outputs from the three PSO based models.
no_new_dataset
0.950778
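The record above (arXiv:1302.6615) trains FANN and EANN forecasters with PSO variants. Below is a generic constriction-type PSO minimizer for illustration; the coefficients are the common Clerc settings, not necessarily those of the Trelea-I, Trelea-II or Clerc-Type1 variants the paper evaluates, and the objective f would be the network's training error as a function of its weight vector.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200,
                 w=0.729, c1=1.49445, c2=1.49445, seed=0):
    """Basic constriction-factor PSO: minimize f over R^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dim))        # positions
    v = np.zeros_like(x)                              # velocities
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()              # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()
```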
1306.1350
Tuomo Sipola
Tuomo Sipola, Fengyu Cong, Tapani Ristaniemi, Vinoo Alluri, Petri Toiviainen, Elvira Brattico, Asoke K. Nandi
Diffusion map for clustering fMRI spatial maps extracted by independent component analysis
6 pages. 8 figures. Copyright (c) 2013 IEEE. Published at 2013 IEEE International Workshop on Machine Learning for Signal Processing
null
10.1109/MLSP.2013.6661923
null
cs.CE cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Functional magnetic resonance imaging (fMRI) produces data about activity inside the brain, from which spatial maps can be extracted by independent component analysis (ICA). In datasets, there are n spatial maps that contain p voxels. The number of voxels is very high compared to the number of analyzed spatial maps. Clustering of the spatial maps is usually based on correlation matrices. This usually works well, although such a similarity matrix inherently can explain only a certain amount of the total variance contained in the high-dimensional data where n is relatively small but p is large. For high-dimensional space, it is reasonable to perform dimensionality reduction before clustering. In this research, we used the recently developed diffusion map for dimensionality reduction in conjunction with spectral clustering. This research revealed that the diffusion map based clustering worked as well as the more traditional methods, and produced more compact clusters when needed.
[ { "version": "v1", "created": "Thu, 6 Jun 2013 09:29:25 GMT" }, { "version": "v2", "created": "Fri, 14 Jun 2013 06:44:37 GMT" }, { "version": "v3", "created": "Sun, 14 Jul 2013 16:03:54 GMT" }, { "version": "v4", "created": "Fri, 27 Sep 2013 08:58:30 GMT" } ]
2016-11-17T00:00:00
[ [ "Sipola", "Tuomo", "" ], [ "Cong", "Fengyu", "" ], [ "Ristaniemi", "Tapani", "" ], [ "Alluri", "Vinoo", "" ], [ "Toiviainen", "Petri", "" ], [ "Brattico", "Elvira", "" ], [ "Nandi", "Asoke K.", "" ] ]
TITLE: Diffusion map for clustering fMRI spatial maps extracted by independent component analysis ABSTRACT: Functional magnetic resonance imaging (fMRI) produces data about activity inside the brain, from which spatial maps can be extracted by independent component analysis (ICA). In datasets, there are n spatial maps that contain p voxels. The number of voxels is very high compared to the number of analyzed spatial maps. Clustering of the spatial maps is usually based on correlation matrices. This usually works well, although such a similarity matrix inherently can explain only a certain amount of the total variance contained in the high-dimensional data where n is relatively small but p is large. For high-dimensional space, it is reasonable to perform dimensionality reduction before clustering. In this research, we used the recently developed diffusion map for dimensionality reduction in conjunction with spectral clustering. This research revealed that the diffusion map based clustering worked as well as the more traditional methods, and produced more compact clusters when needed.
no_new_dataset
0.954605
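A rough sketch of the diffusion-map embedding used in the record above (arXiv:1306.1350): a Gaussian kernel over the spatial maps is row-normalized into a Markov matrix whose leading non-trivial eigenvectors give low-dimensional coordinates, on which spectral clustering (e.g., k-means) can then operate. The kernel scale eps and the diffusion time t are illustrative parameters.

```python
import numpy as np

def diffusion_map(X, eps, n_components=2, t=1):
    """Embed the rows of X (n samples, p features) with a diffusion map."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    K = np.exp(-d2 / eps)                                # Gaussian affinities
    P = K / K.sum(axis=1, keepdims=True)                 # Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # Drop the trivial constant eigenvector (eigenvalue 1); scale the
    # remaining coordinates by the eigenvalues raised to the time t.
    return (vals[1:n_components + 1] ** t) * vecs[:, 1:n_components + 1]
```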
1307.1599
Uwe Aickelin
Chris Roadknight, Uwe Aickelin, Guoping Qiu, John Scholefield, Lindy Durrant
Supervised Learning and Anti-learning of Colorectal Cancer Classes and Survival Rates from Cellular Biology Parameters
IEEE International Conference on Systems, Man, and Cybernetics, pp 797-802, 2012
null
10.1109/ICSMC.2012.6377825
null
cs.LG cs.CE stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we describe a dataset relating to cellular and physical conditions of patients who are operated upon to remove colorectal tumours. This data provides a unique insight into immunological status at the point of tumour removal, tumour classification and post-operative survival. Attempts are made to learn relationships between attributes (physical and immunological) and the resulting tumour stage and survival. Results for conventional machine learning approaches can be considered poor, especially for predicting tumour stages for the most important types of cancer. This poor performance is further investigated and compared with a synthetic dataset based on the logical exclusive-OR function, and it is shown that there is a significant level of 'anti-learning' present in all supervised methods used, which can be explained by the high-dimensional, complex and sparsely representative dataset. For predicting the stage of cancer from the immunological attributes, anti-learning approaches outperform a range of popular algorithms.
[ { "version": "v1", "created": "Fri, 5 Jul 2013 12:53:28 GMT" } ]
2016-11-17T00:00:00
[ [ "Roadknight", "Chris", "" ], [ "Aickelin", "Uwe", "" ], [ "Qiu", "Guoping", "" ], [ "Scholefield", "John", "" ], [ "Durrant", "Lindy", "" ] ]
TITLE: Supervised Learning and Anti-learning of Colorectal Cancer Classes and Survival Rates from Cellular Biology Parameters ABSTRACT: In this paper, we describe a dataset relating to cellular and physical conditions of patients who are operated upon to remove colorectal tumours. This data provides a unique insight into immunological status at the point of tumour removal, tumour classification and post-operative survival. Attempts are made to learn relationships between attributes (physical and immunological) and the resulting tumour stage and survival. Results for conventional machine learning approaches can be considered poor, especially for predicting tumour stages for the most important types of cancer. This poor performance is further investigated and compared with a synthetic dataset based on the logical exclusive-OR function, and it is shown that there is a significant level of 'anti-learning' present in all supervised methods used, which can be explained by the high-dimensional, complex and sparsely representative dataset. For predicting the stage of cancer from the immunological attributes, anti-learning approaches outperform a range of popular algorithms.
new_dataset
0.969091
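The 'anti-learning' effect in the record above (arXiv:1307.1599) can be reproduced on the exclusive-OR toy problem the authors use for comparison: with the four XOR points, every point's nearest neighbours all carry the opposite label, so leave-one-out 1-NN scores 0%, and inverting the classifier's output beats chance. A self-contained demonstration:

```python
import numpy as np

# The four XOR points and their labels.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0, 1, 1, 0])

correct = 0
for i in range(4):
    others = np.delete(np.arange(4), i)
    d = np.linalg.norm(X[others] - X[i], axis=1)
    pred = y[others[np.argmin(d)]]   # label of the nearest neighbour
    correct += int(pred == y[i])
print(f"leave-one-out 1-NN accuracy: {correct}/4")   # prints 0/4
# Flipping every prediction would score 4/4: worse-than-chance
# learners carry usable signal, which is the anti-learning idea.
```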
1309.3323
Ted Underwood
Ted Underwood, Michael L. Black, Loretta Auvil, Boris Capitanu
Mapping Mutable Genres in Structurally Complex Volumes
Preprint accepted for the 2013 IEEE International Conference on Big Data. Revised to include corroborating evidence from a smaller workset
null
10.1109/BigData.2013.6691676
null
cs.CL cs.DL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To mine large digital libraries in humanistically meaningful ways, scholars need to divide them by genre. This is a task that classification algorithms are well suited to assist, but they need adjustment to address the specific challenges of this domain. Digital libraries pose two problems of scale not usually found in the article datasets used to test these algorithms. 1) Because libraries span several centuries, the genres being identified may change gradually across the time axis. 2) Because volumes are much longer than articles, they tend to be internally heterogeneous, and the classification task needs to begin with segmentation. We describe a multi-layered solution that trains hidden Markov models to segment volumes, and uses ensembles of overlapping classifiers to address historical change. We test this approach on a collection of 469,200 volumes drawn from HathiTrust Digital Library. To demonstrate the humanistic value of these methods, we extract 32,209 volumes of fiction from the digital library, and trace the changing proportions of first- and third-person narration in the corpus. We note that narrative points of view seem to have strong associations with particular themes and genres.
[ { "version": "v1", "created": "Thu, 12 Sep 2013 22:27:59 GMT" }, { "version": "v2", "created": "Wed, 18 Sep 2013 17:37:27 GMT" } ]
2016-11-17T00:00:00
[ [ "Underwood", "Ted", "" ], [ "Black", "Michael L.", "" ], [ "Auvil", "Loretta", "" ], [ "Capitanu", "Boris", "" ] ]
TITLE: Mapping Mutable Genres in Structurally Complex Volumes ABSTRACT: To mine large digital libraries in humanistically meaningful ways, scholars need to divide them by genre. This is a task that classification algorithms are well suited to assist, but they need adjustment to address the specific challenges of this domain. Digital libraries pose two problems of scale not usually found in the article datasets used to test these algorithms. 1) Because libraries span several centuries, the genres being identified may change gradually across the time axis. 2) Because volumes are much longer than articles, they tend to be internally heterogeneous, and the classification task needs to begin with segmentation. We describe a multi-layered solution that trains hidden Markov models to segment volumes, and uses ensembles of overlapping classifiers to address historical change. We test this approach on a collection of 469,200 volumes drawn from HathiTrust Digital Library. To demonstrate the humanistic value of these methods, we extract 32,209 volumes of fiction from the digital library, and trace the changing proportions of first- and third-person narration in the corpus. We note that narrative points of view seem to have strong associations with particular themes and genres.
no_new_dataset
0.945651
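The record above (arXiv:1309.3323) segments volumes with hidden Markov models. The standard inference step behind such segmentation is Viterbi decoding, sketched below with generic variable names; it assumes strictly positive probabilities so the logarithms stay finite.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state (e.g., genre) sequence for obs.

    pi : (S,) initial state probabilities
    A  : (S, S) transition matrix, A[i, j] = P(j | i)
    B  : (S, O) emission matrix, B[s, o] = P(o | s)
    obs: sequence of observation indices (e.g., per-page feature codes)
    """
    T = len(obs)
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, len(pi)), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)     # (from-state, to-state)
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(np.argmax(logd))]
    for t in range(T - 1, 0, -1):              # backtrack
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```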
1310.3101
Eric Strobl
Eric Strobl, Shyam Visweswaran
Deep Multiple Kernel Learning
4 pages, 1 figure, 1 table, conference paper
IEEE 12th International Conference on Machine Learning and Applications (ICMLA 2013)
10.1109/ICMLA.2013.84
null
stat.ML cs.LG
http://creativecommons.org/licenses/publicdomain/
Deep learning methods have predominantly been applied to large artificial neural networks. Despite their state-of-the-art performance, these large networks typically do not generalize well to datasets with limited sample sizes. In this paper, we take a different approach by learning multiple layers of kernels. We combine kernels at each layer and then optimize over an estimate of the support vector machine leave-one-out error rather than the dual objective function. Our experiments on a variety of datasets show that each layer successively increases performance with only a few base kernels.
[ { "version": "v1", "created": "Fri, 11 Oct 2013 12:14:00 GMT" } ]
2016-11-17T00:00:00
[ [ "Strobl", "Eric", "" ], [ "Visweswaran", "Shyam", "" ] ]
TITLE: Deep Multiple Kernel Learning ABSTRACT: Deep learning methods have predominantly been applied to large artificial neural networks. Despite their state-of-the-art performance, these large networks typically do not generalize well to datasets with limited sample sizes. In this paper, we take a different approach by learning multiple layers of kernels. We combine kernels at each layer and then optimize over an estimate of the support vector machine leave-one-out error rather than the dual objective function. Our experiments on a variety of datasets show that each layer successively increases performance with only a few base kernels.
no_new_dataset
0.949995
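One plausible reading of the layered-kernel idea in the record above (arXiv:1310.3101), sketched with RBF base kernels: layer one is a weighted combination of base kernels, and layer two applies an outer kernel to the layer-one feature-space distance. The paper's actual training objective (an estimate of the SVM leave-one-out error) and the weight optimization are omitted, and all parameter names are assumptions.

```python
import numpy as np

def rbf(X, gamma):
    """RBF Gram matrix over the rows of X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def two_layer_kernel(X, weights, gammas, gamma_out):
    """Layer 1: convex combination of base RBF kernels.
    Layer 2: RBF of the layer-1 feature-space distance
             d(x,y)^2 = K1(x,x) + K1(y,y) - 2*K1(x,y)."""
    K1 = sum(w * rbf(X, g) for w, g in zip(weights, gammas))
    diag = np.diag(K1)
    d2 = diag[:, None] + diag[None, :] - 2 * K1
    return np.exp(-gamma_out * np.maximum(d2, 0.0))
```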
1312.6808
Feng Xia
Feng Xia, Nana Yaw Asabere, Joel J.P.C. Rodrigues, Filippo Basso, Nakema Deonauth, Wei Wang
Socially-Aware Venue Recommendation for Conference Participants
null
The 10th IEEE International Conference on Ubiquitous Intelligence and Computing (UIC), Vietri sul Mare, Italy, December 2013
10.1109/UIC-ATC.2013.81
null
cs.IR cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current research environments are witnessing an enormous number of presentations occurring in different sessions at academic conferences. This situation makes it difficult for researchers (especially junior researchers) to attend the right presentation session(s) for effective collaboration. In this paper, we propose an innovative venue recommendation algorithm to enhance smart conference participation. Our proposed algorithm, Social Aware Recommendation of Venues and Environments (SARVE), computes the Pearson correlation and social characteristic information of conference participants. SARVE further incorporates the current context of both the smart conference community and its participants in order to model a recommendation process using distributed community detection. Through the integration of the above computations and techniques, we are able to recommend presentation sessions of active participant presenters that may be of high interest to a particular participant. We evaluate SARVE using a real-world dataset. Our experimental results demonstrate that SARVE outperforms other state-of-the-art methods.
[ { "version": "v1", "created": "Tue, 24 Dec 2013 12:33:30 GMT" } ]
2016-11-17T00:00:00
[ [ "Xia", "Feng", "" ], [ "Asabere", "Nana Yaw", "" ], [ "Rodrigues", "Joel J. P. C.", "" ], [ "Basso", "Filippo", "" ], [ "Deonauth", "Nakema", "" ], [ "Wang", "Wei", "" ] ]
TITLE: Socially-Aware Venue Recommendation for Conference Participants ABSTRACT: Current research environments are witnessing an enormous number of presentations occurring in different sessions at academic conferences. This situation makes it difficult for researchers (especially junior researchers) to attend the right presentation session(s) for effective collaboration. In this paper, we propose an innovative venue recommendation algorithm to enhance smart conference participation. Our proposed algorithm, Social Aware Recommendation of Venues and Environments (SARVE), computes the Pearson correlation and social characteristic information of conference participants. SARVE further incorporates the current context of both the smart conference community and its participants in order to model a recommendation process using distributed community detection. Through the integration of the above computations and techniques, we are able to recommend presentation sessions of active participant presenters that may be of high interest to a particular participant. We evaluate SARVE using a real-world dataset. Our experimental results demonstrate that SARVE outperforms other state-of-the-art methods.
no_new_dataset
0.952574
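A small sketch of the Pearson-correlation building block that SARVE (arXiv:1312.6808, above) computes over conference participants. The participant vectors (e.g., interest or rating profiles) are assumed inputs, and the distributed community detection is not shown.

```python
import numpy as np

def pearson(u, v):
    """Pearson correlation between two participants' profile vectors."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    du, dv = u - u.mean(), v - v.mean()
    return (du @ dv) / np.sqrt((du @ du) * (dv @ dv))

def rank_presenters(target, presenters):
    """Rank presenters by similarity to a participant, a plausible first
    step toward recommending the sessions they present in."""
    scores = {name: pearson(target, vec) for name, vec in presenters.items()}
    return sorted(scores, key=scores.get, reverse=True)
```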
1401.3201
Chongjing Sun
Chongjing Sun, Philip S. Yu, Xiangnan Kong and Yan Fu
Privacy Preserving Social Network Publication Against Mutual Friend Attacks
10 pages, 11 figures, extended version of a paper in the 4th IEEE International Workshop on Privacy Aspects of Data Mining (PADM 2013)
null
10.1109/ICDMW.2013.71
null
cs.DB cs.CR cs.SI
http://creativecommons.org/licenses/by/3.0/
Publishing social network data for research purposes has raised serious concerns for individual privacy. There exist many privacy-preserving works that can deal with different attack models. In this paper, we introduce a novel privacy attack model and refer to it as a mutual friend attack. In this model, the adversary can re-identify a pair of friends by using their number of mutual friends. To address this issue, we propose a new anonymity concept, called k-NMF anonymity, i.e., k-anonymity on the number of mutual friends, which ensures that there exist at least k-1 other friend pairs in the graph that share the same number of mutual friends. We devise algorithms to achieve k-NMF anonymity while preserving the original vertex set in the sense that we allow the occasional addition but no deletion of vertices. Further, we give an algorithm to ensure k-degree anonymity in addition to k-NMF anonymity. The experimental results on real-world datasets demonstrate that our approach can effectively preserve the privacy and utility of social networks against mutual friend attacks.
[ { "version": "v1", "created": "Fri, 11 Oct 2013 03:52:41 GMT" } ]
2016-11-17T00:00:00
[ [ "Sun", "Chongjing", "" ], [ "Yu", "Philip S.", "" ], [ "Kong", "Xiangnan", "" ], [ "Fu", "Yan", "" ] ]
TITLE: Privacy Preserving Social Network Publication Against Mutual Friend Attacks ABSTRACT: Publishing social network data for research purposes has raised serious concerns for individual privacy. There exist many privacy-preserving works that can deal with different attack models. In this paper, we introduce a novel privacy attack model and refer to it as a mutual friend attack. In this model, the adversary can re-identify a pair of friends by using their number of mutual friends. To address this issue, we propose a new anonymity concept, called k-NMF anonymity, i.e., k-anonymity on the number of mutual friends, which ensures that there exist at least k-1 other friend pairs in the graph that share the same number of mutual friends. We devise algorithms to achieve k-NMF anonymity while preserving the original vertex set in the sense that we allow the occasional addition but no deletion of vertices. Further, we give an algorithm to ensure k-degree anonymity in addition to k-NMF anonymity. The experimental results on real-world datasets demonstrate that our approach can effectively preserve the privacy and utility of social networks against mutual friend attacks.
no_new_dataset
0.94743
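The k-NMF property defined in the record above (arXiv:1401.3201) is easy to state in code: a graph is k-NMF anonymous if every friend pair shares its mutual-friend count with at least k-1 other pairs. Below is a checker for that property, not the paper's anonymization algorithm itself.

```python
from collections import Counter

def mutual_friend_counts(adj):
    """adj: dict vertex -> set of neighbours (undirected, symmetric).
    Returns the number of mutual friends for every friend pair."""
    edges = {tuple(sorted((u, v))) for u in adj for v in adj[u]}
    return {(u, v): len(adj[u] & adj[v]) for u, v in edges}

def is_k_nmf_anonymous(adj, k):
    """True iff each edge's mutual-friend count is shared by >= k edges."""
    hist = Counter(mutual_friend_counts(adj).values())
    return all(n >= k for n in hist.values())
```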
1404.6560
Jad Hachem
Jad Hachem, Nikhil Karamchandani, Suhas Diggavi
Content Caching and Delivery over Heterogeneous Wireless Networks
A shorter version is to appear in IEEE INFOCOM 2015
null
10.1109/INFOCOM.2015.7218445
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Emerging heterogeneous wireless architectures consist of a dense deployment of local-coverage wireless access points (APs) with high data rates, along with sparsely-distributed, large-coverage macro-cell base stations (BS). We design a coded caching-and-delivery scheme for such architectures that equips APs with storage, enabling content pre-fetching prior to knowing user demands. Users requesting content are served by connecting to local APs with cached content, as well as by listening to a BS broadcast transmission. For any given content popularity profile, the goal is to design the caching-and-delivery scheme so as to optimally trade off the transmission cost at the BS against the storage cost at the APs and the user cost of connecting to multiple APs. We design a coded caching scheme for non-uniform content popularity that dynamically allocates user access to APs based on requested content. We demonstrate the approximate optimality of our scheme with respect to information-theoretic bounds. We numerically evaluate it on a YouTube dataset and quantify the trade-off between transmission rate, storage, and access cost. Our numerical results also suggest the intriguing possibility that, to gain most of the benefits of coded caching, it suffices to divide the content into a small number of popularity classes.
[ { "version": "v1", "created": "Fri, 25 Apr 2014 21:22:55 GMT" }, { "version": "v2", "created": "Thu, 12 Mar 2015 08:48:08 GMT" } ]
2016-11-17T00:00:00
[ [ "Hachem", "Jad", "" ], [ "Karamchandani", "Nikhil", "" ], [ "Diggavi", "Suhas", "" ] ]
TITLE: Content Caching and Delivery over Heterogeneous Wireless Networks ABSTRACT: Emerging heterogeneous wireless architectures consist of a dense deployment of local-coverage wireless access points (APs) with high data rates, along with sparsely-distributed, large-coverage macro-cell base stations (BS). We design a coded caching-and-delivery scheme for such architectures that equips APs with storage, enabling content pre-fetching prior to knowing user demands. Users requesting content are served by connecting to local APs with cached content, as well as by listening to a BS broadcast transmission. For any given content popularity profile, the goal is to design the caching-and-delivery scheme so as to optimally trade off the transmission cost at the BS against the storage cost at the APs and the user cost of connecting to multiple APs. We design a coded caching scheme for non-uniform content popularity that dynamically allocates user access to APs based on requested content. We demonstrate the approximate optimality of our scheme with respect to information-theoretic bounds. We numerically evaluate it on a YouTube dataset and quantify the trade-off between transmission rate, storage, and access cost. Our numerical results also suggest the intriguing possibility that, to gain most of the benefits of coded caching, it suffices to divide the content into a small number of popularity classes.
no_new_dataset
0.943295
1408.4792
Adnan Anwar
Adnan Anwar, Abdun Naser Mahmood
Enhanced Estimation of Autoregressive Wind Power Prediction Model Using Constriction Factor Particle Swarm Optimization
The 9th IEEE Conference on Industrial Electronics and Applications (ICIEA) 2014
null
10.1109/ICIEA.2014.6931336
null
cs.CE cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate forecasting is important for cost-effective and efficient monitoring and control of renewable energy based power generation. Wind-based power is one of the most difficult energy sources to predict accurately, due to the widely varying and unpredictable nature of wind energy. Although Autoregressive (AR) techniques have been widely used to create wind power models, they have shown limited accuracy in forecasting, as well as difficulty in determining the correct parameters for an optimized AR model. In this paper, Constriction Factor Particle Swarm Optimization (CF-PSO) is employed to optimally determine the parameters of an Autoregressive (AR) model for accurate prediction of the wind power output behaviour. The appropriate lag order of the proposed model is selected based on the Akaike information criterion. The performance of the proposed PSO based AR model is compared with four well-established approaches: the Forward-backward approach, the Geometric lattice approach, the Least-squares approach and the Yule-Walker approach, which are widely used for error minimization of the AR model. To validate the proposed approach, real-life wind power data of \textit{Capital Wind Farm} was obtained from the Australian Energy Market Operator. Experimental evaluation based on a number of different datasets demonstrates that the performance of the AR model is significantly improved compared with benchmark methods.
[ { "version": "v1", "created": "Thu, 21 Aug 2014 00:46:51 GMT" } ]
2016-11-17T00:00:00
[ [ "Anwar", "Adnan", "" ], [ "Mahmood", "Abdun Naser", "" ] ]
TITLE: Enhanced Estimation of Autoregressive Wind Power Prediction Model Using Constriction Factor Particle Swarm Optimization ABSTRACT: Accurate forecasting is important for cost-effective and efficient monitoring and control of renewable energy based power generation. Wind-based power is one of the most difficult energy sources to predict accurately, due to the widely varying and unpredictable nature of wind energy. Although Autoregressive (AR) techniques have been widely used to create wind power models, they have shown limited accuracy in forecasting, as well as difficulty in determining the correct parameters for an optimized AR model. In this paper, Constriction Factor Particle Swarm Optimization (CF-PSO) is employed to optimally determine the parameters of an Autoregressive (AR) model for accurate prediction of the wind power output behaviour. The appropriate lag order of the proposed model is selected based on the Akaike information criterion. The performance of the proposed PSO based AR model is compared with four well-established approaches: the Forward-backward approach, the Geometric lattice approach, the Least-squares approach and the Yule-Walker approach, which are widely used for error minimization of the AR model. To validate the proposed approach, real-life wind power data of \textit{Capital Wind Farm} was obtained from the Australian Energy Market Operator. Experimental evaluation based on a number of different datasets demonstrates that the performance of the AR model is significantly improved compared with benchmark methods.
no_new_dataset
0.952838
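Of the four benchmark AR estimators named in the record above (arXiv:1408.4792), the Yule-Walker approach is the simplest to sketch: solve the Toeplitz system built from sample autocovariances. A minimal version, assuming a 1-D series and biased autocovariance estimates:

```python
import numpy as np

def yule_walker(x, p):
    """Estimate AR(p) coefficients a from the Yule-Walker equations R a = r."""
    x = np.asarray(x, float) - np.mean(x)
    # Biased sample autocovariances at lags 0..p.
    acov = np.array([x[:len(x) - k] @ x[k:] for k in range(p + 1)]) / len(x)
    # Toeplitz autocovariance matrix R and right-hand side r = acov[1:].
    R = np.array([[acov[abs(i - j)] for j in range(p)] for i in range(p)])
    a = np.linalg.solve(R, acov[1:])
    # Model: x[t] ~ a[0]*x[t-1] + ... + a[p-1]*x[t-p] + noise
    return a
```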
1410.1606
Xiang Xiang
Xiang Xiang, Minh Dao, Gregory D. Hager, Trac D. Tran
Hierarchical Sparse and Collaborative Low-Rank Representation for Emotion Recognition
5 pages, 5 figures; accepted to IEEE ICASSP 2015; programs available at https://github.com/eglxiang/icassp15_emotion/
null
10.1109/ICASSP.2015.7178684
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we design a Collaborative-Hierarchical Sparse and Low-Rank (C-HiSLR) model that is natural for recognizing human emotion in visual data. Previous attempts require explicit expression components, which are often unavailable and difficult to recover. Instead, our model exploits the low-rank property over expressive facial frames and rescues inexact sparse representations by incorporating group sparsity. For the CK+ dataset, C-HiSLR on raw expressive faces performs as competitively as the Sparse Representation based Classification (SRC) applied on manually prepared emotions. C-HiSLR performs even better than SRC in terms of true positive rate.
[ { "version": "v1", "created": "Tue, 7 Oct 2014 03:45:20 GMT" }, { "version": "v2", "created": "Wed, 1 Apr 2015 18:02:33 GMT" } ]
2016-11-17T00:00:00
[ [ "Xiang", "Xiang", "" ], [ "Dao", "Minh", "" ], [ "Hager", "Gregory D.", "" ], [ "Tran", "Trac D.", "" ] ]
TITLE: Hierarchical Sparse and Collaborative Low-Rank Representation for Emotion Recognition ABSTRACT: In this paper, we design a Collaborative-Hierarchical Sparse and Low-Rank (C-HiSLR) model that is natural for recognizing human emotion in visual data. Previous attempts require explicit expression components, which are often unavailable and difficult to recover. Instead, our model exploits the low-rank property over expressive facial frames and rescues inexact sparse representations by incorporating group sparsity. For the CK+ dataset, C-HiSLR on raw expressive faces performs as competitively as the Sparse Representation based Classification (SRC) applied on manually prepared emotions. C-HiSLR performs even better than SRC in terms of true positive rate.
no_new_dataset
0.955981
1410.5358
Claudio Cusano
Claudio Cusano, Paolo Napoletano, Raimondo Schettini
Remote sensing image classification exploiting multiple kernel learning
Accepted for publication on the IEEE Geoscience and Remote Sensing letters
null
10.1109/LGRS.2015.2476365
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a strategy for land use classification which exploits Multiple Kernel Learning (MKL) to automatically determine a suitable combination of a set of features without requiring any heuristic knowledge about the classification task. We present a novel procedure that allows MKL to achieve good performance in the case of small training sets. Experimental results on publicly available datasets demonstrate the feasibility of the proposed approach.
[ { "version": "v1", "created": "Mon, 20 Oct 2014 17:15:50 GMT" }, { "version": "v2", "created": "Fri, 19 Dec 2014 13:17:27 GMT" }, { "version": "v3", "created": "Tue, 1 Sep 2015 09:25:50 GMT" } ]
2016-11-17T00:00:00
[ [ "Cusano", "Claudio", "" ], [ "Napoletano", "Paolo", "" ], [ "Schettini", "Raimondo", "" ] ]
TITLE: Remote sensing image classification exploiting multiple kernel learning ABSTRACT: We propose a strategy for land use classification which exploits Multiple Kernel Learning (MKL) to automatically determine a suitable combination of a set of features without requiring any heuristic knowledge about the classification task. We present a novel procedure that allows MKL to achieve good performance in the case of small training sets. Experimental results on publicly available datasets demonstrate the feasibility of the proposed approach.
no_new_dataset
0.946646
1410.5605
Paolo Napoletano
Paolo Napoletano, Giuseppe Boccignone, Francesco Tisato
Attentive monitoring of multiple video streams driven by a Bayesian foraging strategy
Accepted to IEEE Transactions on Image Processing
null
10.1109/TIP.2015.2431438
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we shall consider the problem of deploying attention to subsets of the video streams for collating the most relevant data and information of interest related to a given task. We formalize this monitoring problem as a foraging problem. We propose a probabilistic framework to model observer's attentive behavior as the behavior of a forager. The forager, moment to moment, focuses its attention on the most informative stream/camera, detects interesting objects or activities, or switches to a more profitable stream. The approach proposed here is suitable to be exploited for multi-stream video summarization. Meanwhile, it can serve as a preliminary step for more sophisticated video surveillance, e.g. activity and behavior analysis. Experimental results achieved on the UCR Videoweb Activities Dataset, a publicly available dataset, are presented to illustrate the utility of the proposed technique.
[ { "version": "v1", "created": "Tue, 21 Oct 2014 10:13:51 GMT" }, { "version": "v2", "created": "Thu, 22 Jan 2015 11:21:47 GMT" }, { "version": "v3", "created": "Mon, 27 Apr 2015 13:02:21 GMT" } ]
2016-11-17T00:00:00
[ [ "Napoletano", "Paolo", "" ], [ "Boccignone", "Giuseppe", "" ], [ "Tisato", "Francesco", "" ] ]
TITLE: Attentive monitoring of multiple video streams driven by a Bayesian foraging strategy ABSTRACT: In this paper we shall consider the problem of deploying attention to subsets of the video streams for collating the most relevant data and information of interest related to a given task. We formalize this monitoring problem as a foraging problem. We propose a probabilistic framework to model observer's attentive behavior as the behavior of a forager. The forager, moment to moment, focuses its attention on the most informative stream/camera, detects interesting objects or activities, or switches to a more profitable stream. The approach proposed here is suitable to be exploited for multi-stream video summarization. Meanwhile, it can serve as a preliminary step for more sophisticated video surveillance, e.g. activity and behavior analysis. Experimental results achieved on the UCR Videoweb Activities Dataset, a publicly available dataset, are presented to illustrate the utility of the proposed technique.
new_dataset
0.868994
1411.1215
Blesson Varghese
Muhammed Asif Saleem, Blesson Varghese and Adam Barker
BigExcel: A Web-Based Framework for Exploring Big Data in Social Sciences
8 pages
Workshop of Big Humanities Data at the IEEE International Conference on Big Data (IEEE BigData) 2014, Washington D. C., USA
10.1109/BigData.2014.7004458
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper argues that there are three fundamental challenges that need to be overcome in order to foster the adoption of big data technologies in non-computer science related disciplines: addressing issues of accessibility of such technologies for non-computer scientists, supporting the ad hoc exploration of large data sets with minimal effort, and providing lightweight web-based frameworks for quick and easy analytics. In this paper, we address the above three challenges through the development of 'BigExcel', a three-tier web-based framework for exploring big data that facilitates the management of user interactions with large data sets, the construction of queries to explore the data set and the management of the infrastructure. The feasibility of BigExcel is demonstrated through two Yahoo Sandbox datasets. The first dataset is the Yahoo Buzz Score data set we use for quantitatively predicting trending technologies, and the second is the Yahoo n-gram corpus we use for qualitatively inferring the coverage of important events. A demonstration of the BigExcel framework and source code is available at http://bigdata.cs.st-andrews.ac.uk/projects/bigexcel-exploring-big-data-for-social-sciences/.
[ { "version": "v1", "created": "Wed, 5 Nov 2014 10:22:27 GMT" } ]
2016-11-17T00:00:00
[ [ "Saleem", "Muhammed Asif", "" ], [ "Varghese", "Blesson", "" ], [ "Barker", "Adam", "" ] ]
TITLE: BigExcel: A Web-Based Framework for Exploring Big Data in Social Sciences ABSTRACT: This paper argues that there are three fundamental challenges that need to be overcome in order to foster the adoption of big data technologies in non-computer science related disciplines: addressing issues of accessibility of such technologies for non-computer scientists, supporting the ad hoc exploration of large data sets with minimal effort, and providing lightweight web-based frameworks for quick and easy analytics. In this paper, we address the above three challenges through the development of 'BigExcel', a three-tier web-based framework for exploring big data that facilitates the management of user interactions with large data sets, the construction of queries to explore the data set and the management of the infrastructure. The feasibility of BigExcel is demonstrated through two Yahoo Sandbox datasets. The first dataset is the Yahoo Buzz Score data set we use for quantitatively predicting trending technologies, and the second is the Yahoo n-gram corpus we use for qualitatively inferring the coverage of important events. A demonstration of the BigExcel framework and source code is available at http://bigdata.cs.st-andrews.ac.uk/projects/bigexcel-exploring-big-data-for-social-sciences/.
no_new_dataset
0.945801
1501.06180
Shanshan Zhang
Shanshan Zhang, Christian Bauckhage, Dominik A. Klein, Armin B. Cremers
Exploring Human Vision Driven Features for Pedestrian Detection
Accepted for publication in IEEE Transactions on Circuits and Systems for Video Technology (TCSVT)
null
10.1109/TCSVT.2015.2397199
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivated by the center-surround mechanism in the human visual attention system, we propose to use average contrast maps for the challenge of pedestrian detection in street scenes, based on the observation that pedestrians indeed exhibit discriminative contrast texture. Our main contributions are, first, to design a local, statistical multi-channel descriptor in order to incorporate both color and gradient information. Second, we introduce a multi-direction and multi-scale contrast scheme based on grid cells in order to integrate expressive local variations. Contributing to the issue of selecting the most discriminative features for assessment and classification, we perform extensive comparisons w.r.t. statistical descriptors, contrast measurements, and scale structures. This way, we obtain reasonable results under various configurations. Empirical findings from applying our optimized detector on the INRIA and Caltech pedestrian datasets show that our features yield state-of-the-art performance in pedestrian detection.
[ { "version": "v1", "created": "Sun, 25 Jan 2015 16:52:41 GMT" } ]
2016-11-17T00:00:00
[ [ "Zhang", "Shanshan", "" ], [ "Bauckhage", "Christian", "" ], [ "Klein", "Dominik A.", "" ], [ "Cremers", "Armin B.", "" ] ]
TITLE: Exploring Human Vision Driven Features for Pedestrian Detection ABSTRACT: Motivated by the center-surround mechanism in the human visual attention system, we propose to use average contrast maps for the challenge of pedestrian detection in street scenes, based on the observation that pedestrians indeed exhibit discriminative contrast texture. Our main contributions are, first, to design a local, statistical multi-channel descriptor in order to incorporate both color and gradient information. Second, we introduce a multi-direction and multi-scale contrast scheme based on grid cells in order to integrate expressive local variations. Contributing to the issue of selecting the most discriminative features for assessment and classification, we perform extensive comparisons w.r.t. statistical descriptors, contrast measurements, and scale structures. This way, we obtain reasonable results under various configurations. Empirical findings from applying our optimized detector on the INRIA and Caltech pedestrian datasets show that our features yield state-of-the-art performance in pedestrian detection.
no_new_dataset
0.94474
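A rough, hypothetical sketch of a centre-surround average-contrast feature in the spirit of the record above (arXiv:1501.06180): each grid cell's mean intensity is compared with the mean of its 8-neighbourhood. The paper's statistical multi-channel descriptor and its multi-direction, multi-scale design are not reproduced here.

```python
import numpy as np

def grid_contrast(channel, cell=8):
    """Centre-surround contrast over grid cells of a 2-D image channel."""
    h, w = channel.shape
    gh, gw = h // cell, w // cell
    # Per-cell means on the largest grid-aligned crop.
    means = channel[:gh * cell, :gw * cell].reshape(gh, cell, gw, cell).mean((1, 3))
    pad = np.pad(means, 1, mode='edge')
    # Mean of the 8 surrounding cells for every cell.
    surround = sum(pad[1 + dy:1 + dy + gh, 1 + dx:1 + dx + gw]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                   if (dy, dx) != (0, 0)) / 8.0
    return means - surround
```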
1502.06084
Jieming Zhu
Jieming Zhu, Pinjia He, Zibin Zheng, Michael R. Lyu
A Privacy-Preserving QoS Prediction Framework for Web Service Recommendation
This paper is published in IEEE International Conference on Web Services (ICWS'15)
null
10.1109/ICWS.2015.41
null
cs.CR cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
QoS-based Web service recommendation has recently gained much attention for providing a promising way to help users find high-quality services. To facilitate such recommendations, existing studies suggest the use of collaborative filtering techniques for personalized QoS prediction. These approaches, by leveraging partially observed QoS values from users, can achieve high accuracy of QoS predictions on the unobserved ones. However, the requirement to collect users' QoS data likely puts user privacy at risk, thus making them unwilling to contribute their usage data to a Web service recommender system. As a result, privacy becomes a critical challenge in developing practical Web service recommender systems. In this paper, we make the first attempt to cope with the privacy concerns for Web service recommendation. Specifically, we propose a simple yet effective privacy-preserving framework by applying data obfuscation techniques, and further develop two representative privacy-preserving QoS prediction approaches under this framework. Evaluation results from a publicly-available QoS dataset of real-world Web services demonstrate the feasibility and effectiveness of our privacy-preserving QoS prediction approaches. We believe our work can serve as a good starting point to inspire more research efforts on privacy-preserving Web service recommendation.
[ { "version": "v1", "created": "Sat, 21 Feb 2015 08:14:39 GMT" }, { "version": "v2", "created": "Sat, 2 May 2015 04:43:00 GMT" } ]
2016-11-17T00:00:00
[ [ "Zhu", "Jieming", "" ], [ "He", "Pinjia", "" ], [ "Zheng", "Zibin", "" ], [ "Lyu", "Michael R.", "" ] ]
TITLE: A Privacy-Preserving QoS Prediction Framework for Web Service Recommendation ABSTRACT: QoS-based Web service recommendation has recently gained much attention for providing a promising way to help users find high-quality services. To facilitate such recommendations, existing studies suggest the use of collaborative filtering techniques for personalized QoS prediction. These approaches, by leveraging partially observed QoS values from users, can achieve high accuracy of QoS predictions on the unobserved ones. However, the requirement to collect users' QoS data likely puts user privacy at risk, thus making them unwilling to contribute their usage data to a Web service recommender system. As a result, privacy becomes a critical challenge in developing practical Web service recommender systems. In this paper, we make the first attempt to cope with the privacy concerns for Web service recommendation. Specifically, we propose a simple yet effective privacy-preserving framework by applying data obfuscation techniques, and further develop two representative privacy-preserving QoS prediction approaches under this framework. Evaluation results from a publicly-available QoS dataset of real-world Web services demonstrate the feasibility and effectiveness of our privacy-preserving QoS prediction approaches. We believe our work can serve as a good starting point to inspire more research efforts on privacy-preserving Web service recommendation.
no_new_dataset
0.945601
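The framework in the record above (arXiv:1502.06084) rests on users obfuscating their observed QoS values before contributing them. One simple illustrative stand-in (not necessarily the paper's scheme) is zero-mean noise perturbation, which masks individual records while roughly preserving aggregate statistics:

```python
import numpy as np

def obfuscate_qos(values, scale, seed=None):
    """Perturb a user's QoS observations with zero-mean Gaussian noise."""
    rng = np.random.default_rng(seed)
    return np.asarray(values, float) + rng.normal(0.0, scale, np.shape(values))
```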
1502.07411
Chunhua Shen
Fayao Liu, Chunhua Shen, Guosheng Lin, Ian Reid
Learning Depth from Single Monocular Images Using Deep Convolutional Neural Fields
Appearing in IEEE T. Pattern Analysis and Machine Intelligence. Journal version of arXiv:1411.6387 . Test code is available at https://bitbucket.org/fayao/dcnf-fcsp
null
10.1109/TPAMI.2015.2505283
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this article, we tackle the problem of depth estimation from single monocular images. Compared with depth estimation using multiple images such as stereo depth perception, depth from monocular images is much more challenging. Prior work typically focuses on exploiting geometric priors or additional sources of information, most using hand-crafted features. Recently, there is mounting evidence that features from deep convolutional neural networks (CNN) set new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimation can be naturally formulated as a continuous conditional random field (CRF) learning problem. Therefore, here we present a deep convolutional neural field model for estimating depths from single monocular images, aiming to jointly explore the capacity of deep CNN and continuous CRF. In particular, we propose a deep structured learning scheme which learns the unary and pairwise potentials of continuous CRF in a unified deep CNN framework. We then further propose an equally effective model based on fully convolutional networks and a novel superpixel pooling method, which is $\sim 10$ times faster, to speed up the patch-wise convolutions in the deep model. With this more efficient model, we are able to design deeper networks to pursue better performance. Experiments on both indoor and outdoor scene datasets demonstrate that the proposed method outperforms state-of-the-art depth estimation approaches.
[ { "version": "v1", "created": "Thu, 26 Feb 2015 01:26:22 GMT" }, { "version": "v2", "created": "Thu, 19 Mar 2015 03:31:44 GMT" }, { "version": "v3", "created": "Sat, 18 Apr 2015 10:13:39 GMT" }, { "version": "v4", "created": "Wed, 30 Sep 2015 14:19:19 GMT" }, { "version": "v5", "created": "Thu, 8 Oct 2015 06:02:00 GMT" }, { "version": "v6", "created": "Wed, 25 Nov 2015 00:03:31 GMT" } ]
2016-11-17T00:00:00
[ [ "Liu", "Fayao", "" ], [ "Shen", "Chunhua", "" ], [ "Lin", "Guosheng", "" ], [ "Reid", "Ian", "" ] ]
TITLE: Learning Depth from Single Monocular Images Using Deep Convolutional Neural Fields ABSTRACT: In this article, we tackle the problem of depth estimation from single monocular images. Compared with depth estimation using multiple images such as stereo depth perception, depth from monocular images is much more challenging. Prior work typically focuses on exploiting geometric priors or additional sources of information, most using hand-crafted features. Recently, there is mounting evidence that features from deep convolutional neural networks (CNN) set new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimation can be naturally formulated as a continuous conditional random field (CRF) learning problem. Therefore, here we present a deep convolutional neural field model for estimating depths from single monocular images, aiming to jointly explore the capacity of deep CNN and continuous CRF. In particular, we propose a deep structured learning scheme which learns the unary and pairwise potentials of continuous CRF in a unified deep CNN framework. We then further propose an equally effective model based on fully convolutional networks and a novel superpixel pooling method, which is $\sim 10$ times faster, to speed up the patch-wise convolutions in the deep model. With this more efficient model, we are able to design deeper networks to pursue better performance. Experiments on both indoor and outdoor scene datasets demonstrate that the proposed method outperforms state-of-the-art depth estimation approaches.
no_new_dataset
0.945751
1503.02318
Arturo Deza
Arturo Deza, Devi Parikh
Understanding Image Virality
Pre-print, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015
null
10.1109/CVPR.2015.7298791
null
cs.SI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Virality of online content on social networking websites is an important but esoteric phenomenon often studied in fields like marketing, psychology and data mining. In this paper we study viral images from a computer vision perspective. We introduce three new image datasets from Reddit, and define a virality score using Reddit metadata. We train classifiers with state-of-the-art image features to predict virality of individual images, relative virality in pairs of images, and the dominant topic of a viral image. We also compare machine performance to human performance on these tasks. We find that computers perform poorly with low-level features, and that high-level information is critical for predicting virality. We encode semantic information through relative attributes. We identify the 5 key visual attributes that correlate with virality. We create an attribute-based characterization of images that can predict relative virality with 68.10% accuracy (SVM+Deep Relative Attributes) -- better than humans at 60.12%. Finally, we study how human prediction of image virality varies with different `contexts' in which the images are viewed, such as the influence of neighbouring images, images recently viewed, as well as the image title or caption. This work is a first step in understanding the complex but important phenomenon of image virality. Our datasets and annotations will be made publicly available.
[ { "version": "v1", "created": "Sun, 8 Mar 2015 20:29:28 GMT" }, { "version": "v2", "created": "Tue, 14 Apr 2015 18:04:29 GMT" }, { "version": "v3", "created": "Tue, 26 May 2015 16:57:18 GMT" } ]
2016-11-17T00:00:00
[ [ "Deza", "Arturo", "" ], [ "Parikh", "Devi", "" ] ]
TITLE: Understanding Image Virality ABSTRACT: Virality of online content on social networking websites is an important but esoteric phenomenon often studied in fields like marketing, psychology and data mining. In this paper we study viral images from a computer vision perspective. We introduce three new image datasets from Reddit, and define a virality score using Reddit metadata. We train classifiers with state-of-the-art image features to predict virality of individual images, relative virality in pairs of images, and the dominant topic of a viral image. We also compare machine performance to human performance on these tasks. We find that computers perform poorly with low-level features, and that high-level information is critical for predicting virality. We encode semantic information through relative attributes. We identify the 5 key visual attributes that correlate with virality. We create an attribute-based characterization of images that can predict relative virality with 68.10% accuracy (SVM+Deep Relative Attributes) -- better than humans at 60.12%. Finally, we study how human prediction of image virality varies with different `contexts' in which the images are viewed, such as the influence of neighbouring images, images recently viewed, as well as the image title or caption. This work is a first step in understanding the complex but important phenomenon of image virality. Our datasets and annotations will be made publicly available.
new_dataset
0.966505
1504.00788
Gianmarco De Francisci Morales
Muhammad Anis Uddin Nasir, Gianmarco De Francisci Morales, David Garc\'ia-Soriano, Nicolas Kourtellis, Marco Serafini
The Power of Both Choices: Practical Load Balancing for Distributed Stream Processing Engines
31st IEEE International Conference on Data Engineering (ICDE), 2015
null
10.1109/ICDE.2015.7113279
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of load balancing in distributed stream processing engines, which is exacerbated in the presence of skew. We introduce Partial Key Grouping (PKG), a new stream partitioning scheme that adapts the classical "power of two choices" to a distributed streaming setting by leveraging two novel techniques: key splitting and local load estimation. In so doing, it achieves better load balancing than key grouping while being more scalable than shuffle grouping. We test PKG on several large datasets, both real-world and synthetic. Compared to standard hashing, PKG reduces the load imbalance by up to several orders of magnitude, and often achieves nearly-perfect load balance. This result translates into an improvement of up to 60% in throughput and up to 45% in latency when deployed on a real Storm cluster.
[ { "version": "v1", "created": "Fri, 3 Apr 2015 09:24:22 GMT" } ]
2016-11-17T00:00:00
[ [ "Nasir", "Muhammad Anis Uddin", "" ], [ "Morales", "Gianmarco De Francisci", "" ], [ "García-Soriano", "David", "" ], [ "Kourtellis", "Nicolas", "" ], [ "Serafini", "Marco", "" ] ]
TITLE: The Power of Both Choices: Practical Load Balancing for Distributed Stream Processing Engines ABSTRACT: We study the problem of load balancing in distributed stream processing engines, which is exacerbated in the presence of skew. We introduce Partial Key Grouping (PKG), a new stream partitioning scheme that adapts the classical "power of two choices" to a distributed streaming setting by leveraging two novel techniques: key splitting and local load estimation. In so doing, it achieves better load balancing than key grouping while being more scalable than shuffle grouping. We test PKG on several large datasets, both real-world and synthetic. Compared to standard hashing, PKG reduces the load imbalance by up to several orders of magnitude, and often achieves nearly-perfect load balance. This result translates into an improvement of up to 60% in throughput and up to 45% in latency when deployed on a real Storm cluster.
no_new_dataset
0.951369
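As a hedged illustration of the "power of two choices" idea that PKG adapts, the sketch below routes each key to the less loaded of two hash-chosen workers using purely local send counts. The hash scheme, seeds, and worker count are illustrative assumptions, not the authors' implementation.

```python
import hashlib

def _hash(key: str, seed: int, n_workers: int) -> int:
    """Deterministic seeded hash of a key, mapped to a worker index."""
    digest = hashlib.md5(f"{seed}:{key}".encode()).hexdigest()
    return int(digest, 16) % n_workers

class PartialKeyGrouper:
    """Route each tuple to the less loaded of two candidate workers per key.

    Key splitting: every key may go to either of its two candidates, so a
    hot key's load is spread over two workers instead of one.
    Local load estimation: the source tracks how many tuples it has sent
    to each worker and uses those counts as its load signal.
    """

    def __init__(self, n_workers: int):
        self.n_workers = n_workers
        self.sent = [0] * n_workers  # local load estimate per worker

    def route(self, key: str) -> int:
        a = _hash(key, seed=1, n_workers=self.n_workers)
        b = _hash(key, seed=2, n_workers=self.n_workers)
        target = a if self.sent[a] <= self.sent[b] else b
        self.sent[target] += 1
        return target

grouper = PartialKeyGrouper(n_workers=8)
for word in ["the", "the", "the", "a", "stream", "the"]:
    print(word, "->", grouper.route(word))
```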
1504.01000
Alejandro Frery
Avik Bhattacharya, Arnab Muhuri, Shaunak De, Surendar Manickam, Alejandro C. Frery
Modifying the Yamaguchi Four-Component Decomposition Scattering Powers Using a Stochastic Distance
Accepted for publication in IEEE J-STARS (IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing)
null
10.1109/JSTARS.2015.2420683
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Model-based decompositions have gained considerable attention after the initial work of Freeman and Durden. This decomposition, which assumes the target to be reflection symmetric, was later relaxed in the Yamaguchi et al. decomposition with the addition of the helix parameter. Since then, many decompositions have been proposed in which either the scattering model is modified to fit the data or the coherency matrix representing the second-order statistics of the full polarimetric data is rotated to fit the scattering model. In this paper, we propose to modify the Yamaguchi four-component decomposition (Y4O) scattering powers using the concept of statistical information theory for matrices. In order to achieve this modification, we propose a method to estimate the polarization orientation angle (OA) from full-polarimetric SAR images using the Hellinger distance. In this method, the OA is estimated by maximizing the Hellinger distance between the un-rotated and the rotated $T_{33}$ and the $T_{22}$ components of the coherency matrix $\mathbf{[T]}$. Then, the powers of the Yamaguchi four-component model-based decomposition (Y4O) are modified using the maximum relative stochastic distance between the $T_{33}$ and the $T_{22}$ components of the coherency matrix at the estimated OA. The results show that the overall double-bounce powers over rotated urban areas have significantly improved with the reduction of volume powers. The percentage of pixels with negative powers has also decreased from the Y4O decomposition. The proposed method is both qualitatively and quantitatively compared with the results obtained from the Y4O and the Y4R decompositions for a Radarsat-2 C-band San-Francisco dataset and a UAVSAR L-band Hayward dataset.
[ { "version": "v1", "created": "Sat, 4 Apr 2015 10:05:41 GMT" } ]
2016-11-17T00:00:00
[ [ "Bhattacharya", "Avik", "" ], [ "Muhuri", "Arnab", "" ], [ "De", "Shaunak", "" ], [ "Manickam", "Surendar", "" ], [ "Frery", "Alejandro C.", "" ] ]
TITLE: Modifying the Yamaguchi Four-Component Decomposition Scattering Powers Using a Stochastic Distance ABSTRACT: Model-based decompositions have gained considerable attention after the initial work of Freeman and Durden. This decomposition, which assumes the target to be reflection symmetric, was later relaxed in the Yamaguchi et al. decomposition with the addition of the helix parameter. Since then, many decompositions have been proposed in which either the scattering model is modified to fit the data or the coherency matrix representing the second-order statistics of the full polarimetric data is rotated to fit the scattering model. In this paper, we propose to modify the Yamaguchi four-component decomposition (Y4O) scattering powers using the concept of statistical information theory for matrices. In order to achieve this modification, we propose a method to estimate the polarization orientation angle (OA) from full-polarimetric SAR images using the Hellinger distance. In this method, the OA is estimated by maximizing the Hellinger distance between the un-rotated and the rotated $T_{33}$ and the $T_{22}$ components of the coherency matrix $\mathbf{[T]}$. Then, the powers of the Yamaguchi four-component model-based decomposition (Y4O) are modified using the maximum relative stochastic distance between the $T_{33}$ and the $T_{22}$ components of the coherency matrix at the estimated OA. The results show that the overall double-bounce powers over rotated urban areas have significantly improved with the reduction of volume powers. The percentage of pixels with negative powers has also decreased from the Y4O decomposition. The proposed method is both qualitatively and quantitatively compared with the results obtained from the Y4O and the Y4R decompositions for a Radarsat-2 C-band San-Francisco dataset and a UAVSAR L-band Hayward dataset.
no_new_dataset
0.952353
1504.03810
Smitha M.L.
B.H. Shekar, Smitha M.L.
Text Localization in Video Using Multiscale Weber's Local Descriptor
IEEE SPICES, 2015
null
10.1109/SPICES.2015.7091559
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a novel approach for detecting the text present in videos and scene images based on the Multiscale Weber's Local Descriptor (MWLD). Given an input video, the shots are identified and the key frames are extracted based on their spatio-temporal relationship. From each key frame, we detect the local region information using WLD with different radii and neighborhood relationships of pixel values, and hence obtain intensity-enhanced key frames at multiple scales. These multiscale WLD key frames are merged together and then the horizontal gradients are computed using morphological operations. The obtained results are then binarized and the false positives are eliminated based on geometrical properties. Finally, we employ connected component analysis and morphological dilation operation to determine the text regions that aid in text localization. The experimental results obtained on the publicly available standard Hua, Horizontal-1 and Horizontal-2 video datasets illustrate that the proposed method can accurately detect and localize texts of various sizes, fonts and colors in videos.
[ { "version": "v1", "created": "Wed, 15 Apr 2015 07:56:05 GMT" } ]
2016-11-17T00:00:00
[ [ "Shekar", "B. H.", "" ], [ "L.", "Smitha M.", "" ] ]
TITLE: Text Localization in Video Using Multiscale Weber's Local Descriptor ABSTRACT: In this paper, we propose a novel approach for detecting the text present in videos and scene images based on the Multiscale Weber's Local Descriptor (MWLD). Given an input video, the shots are identified and the key frames are extracted based on their spatio-temporal relationship. From each key frame, we detect the local region information using WLD with different radii and neighborhood relationships of pixel values, and hence obtain intensity-enhanced key frames at multiple scales. These multiscale WLD key frames are merged together and then the horizontal gradients are computed using morphological operations. The obtained results are then binarized and the false positives are eliminated based on geometrical properties. Finally, we employ connected component analysis and morphological dilation operation to determine the text regions that aid in text localization. The experimental results obtained on the publicly available standard Hua, Horizontal-1 and Horizontal-2 video datasets illustrate that the proposed method can accurately detect and localize texts of various sizes, fonts and colors in videos.
no_new_dataset
0.957198
1504.04663
Jinxue Zhang
Jinxue Zhang, Rui Zhang, Jingchao Sun, Yanchao Zhang, Chi Zhang
TrueTop: A Sybil-Resilient System for User Influence Measurement on Twitter
Accepted by IEEE/ACM Transactions on Networking. This is the Final version
null
10.1109/TNET.2015.2494059
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Influential users have great potential for accelerating information dissemination and acquisition on Twitter. How to measure the influence of Twitter users has attracted significant academic and industrial attention. Existing influence measurement techniques, however, are vulnerable to sybil users that are thriving on Twitter. Although sybil defenses for online social networks have been extensively investigated, they commonly assume unique mappings from human-established trust relationships to online social associations and thus do not apply to Twitter, where users can freely follow each other. This paper presents TrueTop, the first sybil-resilient system to measure the influence of Twitter users. TrueTop is firmly rooted in two observations from real Twitter datasets. First, although non-sybil users may incautiously follow strangers, they tend to be more careful and selective in retweeting, replying to, and mentioning other Twitter users. Second, influential users usually get many more retweets, replies, and mentions than non-influential users. Detailed theoretical studies and synthetic simulations show that TrueTop can generate very accurate influence measurement results and also have strong resilience to sybil attacks.
[ { "version": "v1", "created": "Sat, 18 Apr 2015 00:07:10 GMT" }, { "version": "v2", "created": "Thu, 18 Jun 2015 00:35:04 GMT" }, { "version": "v3", "created": "Tue, 20 Oct 2015 22:49:34 GMT" } ]
2016-11-17T00:00:00
[ [ "Zhang", "Jinxue", "" ], [ "Zhang", "Rui", "" ], [ "Sun", "Jingchao", "" ], [ "Zhang", "Yanchao", "" ], [ "Zhang", "Chi", "" ] ]
TITLE: TrueTop: A Sybil-Resilient System for User Influence Measurement on Twitter ABSTRACT: Influential users have great potential for accelerating information dissemination and acquisition on Twitter. How to measure the influence of Twitter users has attracted significant academic and industrial attention. Existing influence measurement techniques, however, are vulnerable to sybil users that are thriving on Twitter. Although sybil defenses for online social networks have been extensively investigated, they commonly assume unique mappings from human-established trust relationships to online social associations and thus do not apply to Twitter, where users can freely follow each other. This paper presents TrueTop, the first sybil-resilient system to measure the influence of Twitter users. TrueTop is firmly rooted in two observations from real Twitter datasets. First, although non-sybil users may incautiously follow strangers, they tend to be more careful and selective in retweeting, replying to, and mentioning other Twitter users. Second, influential users usually get many more retweets, replies, and mentions than non-influential users. Detailed theoretical studies and synthetic simulations show that TrueTop can generate very accurate influence measurement results and also have strong resilience to sybil attacks.
no_new_dataset
0.943191
1505.04868
Limin Wang
Limin Wang, Yu Qiao, Xiaoou Tang
Action Recognition with Trajectory-Pooled Deep-Convolutional Descriptors
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015
null
10.1109/CVPR.2015.7299059
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visual features are of vital importance for human action understanding in videos. This paper presents a new video representation, called trajectory-pooled deep-convolutional descriptor (TDD), which shares the merits of both hand-crafted features and deep-learned features. Specifically, we utilize deep architectures to learn discriminative convolutional feature maps, and conduct trajectory-constrained pooling to aggregate these convolutional features into effective descriptors. To enhance the robustness of TDDs, we design two normalization methods to transform convolutional feature maps, namely spatiotemporal normalization and channel normalization. The advantages of our features come from (i) TDDs are automatically learned and contain high discriminative capacity compared with those hand-crafted features; (ii) TDDs take account of the intrinsic characteristics of temporal dimension and introduce the strategies of trajectory-constrained sampling and pooling for aggregating deep-learned features. We conduct experiments on two challenging datasets: HMDB51 and UCF101. Experimental results show that TDDs outperform previous hand-crafted features and deep-learned features. Our method also achieves superior performance to the state of the art on these datasets (HMDB51 65.9%, UCF101 91.5%).
[ { "version": "v1", "created": "Tue, 19 May 2015 04:36:42 GMT" } ]
2016-11-17T00:00:00
[ [ "Wang", "Limin", "" ], [ "Qiao", "Yu", "" ], [ "Tang", "Xiaoou", "" ] ]
TITLE: Action Recognition with Trajectory-Pooled Deep-Convolutional Descriptors ABSTRACT: Visual features are of vital importance for human action understanding in videos. This paper presents a new video representation, called trajectory-pooled deep-convolutional descriptor (TDD), which shares the merits of both hand-crafted features and deep-learned features. Specifically, we utilize deep architectures to learn discriminative convolutional feature maps, and conduct trajectory-constrained pooling to aggregate these convolutional features into effective descriptors. To enhance the robustness of TDDs, we design two normalization methods to transform convolutional feature maps, namely spatiotemporal normalization and channel normalization. The advantages of our features come from (i) TDDs are automatically learned and contain high discriminative capacity compared with those hand-crafted features; (ii) TDDs take account of the intrinsic characteristics of temporal dimension and introduce the strategies of trajectory-constrained sampling and pooling for aggregating deep-learned features. We conduct experiments on two challenging datasets: HMDB51 and UCF101. Experimental results show that TDDs outperform previous hand-crafted features and deep-learned features. Our method also achieves superior performance to the state of the art on these datasets (HMDB51 65.9%, UCF101 91.5%).
no_new_dataset
0.946745
1506.08754
Andrew Moran
Andrew Moran, Vijay Gadepally, Matthew Hubbell, Jeremy Kepner
Improving Big Data Visual Analytics with Interactive Virtual Reality
6 pages, 8 figures, 2015 IEEE High Performance Extreme Computing Conference (HPEC '15); corrected typos
null
10.1109/HPEC.2015.7322473
null
cs.HC cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For decades, the growth and volume of digital data collection have made it challenging to digest large volumes of information and extract underlying structure. Coined 'Big Data', massive amounts of information have quite often been gathered inconsistently (e.g. from many sources, of various forms, at different rates, etc.). These factors impede the practices of not only processing data, but also analyzing and displaying it in an efficient manner to the user. Many efforts have been made in the data mining and visual analytics community to create effective ways to further improve analysis and achieve the knowledge desired for better understanding. Our approach for improved big data visual analytics is two-fold, focusing on both visualization and interaction. Given geo-tagged information, we are exploring the benefits of visualizing datasets in the original geospatial domain by utilizing a virtual reality platform. After running proven analytics on the data, we intend to represent the information in a more realistic 3D setting, where analysts can achieve an enhanced situational awareness and rely on familiar perceptions to draw in-depth conclusions on the dataset. In addition, developing a human-computer interface that responds to natural user actions and inputs creates a more intuitive environment. Tasks can be performed to manipulate the dataset and allow users to dive deeper upon request, adhering to desired demands and intentions. Due to the volume and popularity of social media, we developed a 3D tool visualizing Twitter on MIT's campus for analysis. Utilizing emerging technologies of today to create a fully immersive tool that promotes visualization and interaction can help ease the process of understanding and representing big data.
[ { "version": "v1", "created": "Mon, 29 Jun 2015 17:50:20 GMT" }, { "version": "v2", "created": "Tue, 6 Oct 2015 18:19:42 GMT" } ]
2016-11-17T00:00:00
[ [ "Moran", "Andrew", "" ], [ "Gadepally", "Vijay", "" ], [ "Hubbell", "Matthew", "" ], [ "Kepner", "Jeremy", "" ] ]
TITLE: Improving Big Data Visual Analytics with Interactive Virtual Reality ABSTRACT: For decades, the growth and volume of digital data collection have made it challenging to digest large volumes of information and extract underlying structure. Coined 'Big Data', massive amounts of information have quite often been gathered inconsistently (e.g. from many sources, of various forms, at different rates, etc.). These factors impede the practices of not only processing data, but also analyzing and displaying it in an efficient manner to the user. Many efforts have been made in the data mining and visual analytics community to create effective ways to further improve analysis and achieve the knowledge desired for better understanding. Our approach for improved big data visual analytics is two-fold, focusing on both visualization and interaction. Given geo-tagged information, we are exploring the benefits of visualizing datasets in the original geospatial domain by utilizing a virtual reality platform. After running proven analytics on the data, we intend to represent the information in a more realistic 3D setting, where analysts can achieve an enhanced situational awareness and rely on familiar perceptions to draw in-depth conclusions on the dataset. In addition, developing a human-computer interface that responds to natural user actions and inputs creates a more intuitive environment. Tasks can be performed to manipulate the dataset and allow users to dive deeper upon request, adhering to desired demands and intentions. Due to the volume and popularity of social media, we developed a 3D tool visualizing Twitter on MIT's campus for analysis. Utilizing emerging technologies of today to create a fully immersive tool that promotes visualization and interaction can help ease the process of understanding and representing big data.
no_new_dataset
0.942876
1507.00389
Nasir Ahmad
Nasir Ahmad, Sybil Derrible, Tarsha Eason and Heriberto Cabezas
Using Fisher Information In Big Data
null
null
10.1098/rsos.160582
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this era of Big Data, proficient use of data mining is the key to capturing useful information from any dataset. As numerous data mining techniques make use of information theory concepts, in this paper, we discuss how Fisher information (FI) can be applied to analyze patterns in Big Data. The main advantage of FI is its ability to combine multiple variables together to inform us on the overall trends and stability of a system. It can therefore detect whether a system is losing dynamic order and stability, which may serve as a signal of an impending regime shift. In this work, we first provide a brief overview of Fisher information theory, followed by a simple step-by-step numerical example on how to compute FI. Finally, as a numerical demonstration, we calculate the evolution of FI for GDP per capita (current US Dollar) and total population of the USA from 1960 to 2013.
[ { "version": "v1", "created": "Wed, 1 Jul 2015 23:23:18 GMT" }, { "version": "v2", "created": "Mon, 20 Jul 2015 16:19:49 GMT" } ]
2016-11-17T00:00:00
[ [ "Ahmad", "Nasir", "" ], [ "Derrible", "Sybil", "" ], [ "Eason", "Tarsha", "" ], [ "Cabezas", "Heriberto", "" ] ]
TITLE: Using Fisher Information In Big Data ABSTRACT: In this era of Big Data, proficient use of data mining is the key to capturing useful information from any dataset. As numerous data mining techniques make use of information theory concepts, in this paper, we discuss how Fisher information (FI) can be applied to analyze patterns in Big Data. The main advantage of FI is its ability to combine multiple variables together to inform us on the overall trends and stability of a system. It can therefore detect whether a system is losing dynamic order and stability, which may serve as a signal of an impending regime shift. In this work, we first provide a brief overview of Fisher information theory, followed by a simple step-by-step numerical example on how to compute FI. Finally, as a numerical demonstration, we calculate the evolution of FI for GDP per capita (current US Dollar) and total population of the USA from 1960 to 2013.
no_new_dataset
0.948202
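A hedged sketch of one discrete Fisher information estimate often used for time-series windows in this line of work: bin the windowed states into probabilities p_s and take FI ≈ 4 Σ_s (√p_{s+1} − √p_s)². The bin count, window size, and the reduction of multivariate observations to a scalar state are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def fisher_information(window: np.ndarray, n_bins: int = 10) -> float:
    """Discrete FI of a window of (possibly multivariate) observations.

    Bins the observations into states, estimates state probabilities p_s,
    and applies the amplitude form FI = 4 * sum_s (q_{s+1} - q_s)^2 with
    q_s = sqrt(p_s). Sharply peaked (orderly) windows give high FI; flat
    (disorderly) windows give FI near zero.
    """
    # Reduce each observation to a scalar "state" coordinate, then bin.
    states = window if window.ndim == 1 else np.linalg.norm(window, axis=1)
    counts, _ = np.histogram(states, bins=n_bins)
    p = counts / counts.sum()
    q = np.sqrt(p)
    return 4.0 * np.sum(np.diff(q) ** 2)

# Sliding-window FI over a toy two-variable series (e.g., GDP and population).
rng = np.random.default_rng(0)
series = np.column_stack([np.linspace(0, 1, 54), rng.normal(0, 0.1, 54)])
window_size = 8
fi = [fisher_information(series[i:i + window_size])
      for i in range(len(series) - window_size + 1)]
print(np.round(fi, 3))
```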
1507.01269
Tianpei Xie
Tianpei Xie, Nasser M. Nasrabadi and Alfred O. Hero III
Semi-supervised Multi-sensor Classification via Consensus-based Multi-View Maximum Entropy Discrimination
5 pages, 4 figures, Accepted in 40th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 15)
null
10.1109/ICASSP.2015.7178308
null
cs.IT cs.AI cs.LG math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we consider multi-sensor classification when there is a large number of unlabeled samples. The problem is formulated under the multi-view learning framework and a Consensus-based Multi-View Maximum Entropy Discrimination (CMV-MED) algorithm is proposed. By iteratively maximizing the stochastic agreement between multiple classifiers on the unlabeled dataset, the algorithm simultaneously learns multiple high accuracy classifiers. We demonstrate that our proposed method can yield improved performance over previous multi-view learning approaches by comparing performance on three real multi-sensor data sets.
[ { "version": "v1", "created": "Sun, 5 Jul 2015 20:23:22 GMT" } ]
2016-11-17T00:00:00
[ [ "Xie", "Tianpei", "" ], [ "Nasrabadi", "Nasser M.", "" ], [ "Hero", "Alfred O.", "III" ] ]
TITLE: Semi-supervised Multi-sensor Classification via Consensus-based Multi-View Maximum Entropy Discrimination ABSTRACT: In this paper, we consider multi-sensor classification when there is a large number of unlabeled samples. The problem is formulated under the multi-view learning framework and a Consensus-based Multi-View Maximum Entropy Discrimination (CMV-MED) algorithm is proposed. By iteratively maximizing the stochastic agreement between multiple classifiers on the unlabeled dataset, the algorithm simultaneously learns multiple high accuracy classifiers. We demonstrate that our proposed method can yield improved performance over previous multi-view learning approaches by comparing performance on three real multi-sensor data sets.
no_new_dataset
0.94743
1508.01055
Roman Fedorov
Roman Fedorov, Alessandro Camerada, Piero Fraternali, Marco Tagliasacchi
Estimating snow cover from publicly available images
submitted to IEEE Transactions on Multimedia
null
10.1109/TMM.2016.2535356
null
cs.MM cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we study the problem of estimating snow cover in mountainous regions, that is, the spatial extent of the earth surface covered by snow. We argue that publicly available visual content, in the form of user generated photographs and image feeds from outdoor webcams, can both be leveraged as additional measurement sources, complementing existing ground, satellite and airborne sensor data. To this end, we describe two content acquisition and processing pipelines that are tailored to such sources, addressing the specific challenges posed by each of them, e.g., identifying the mountain peaks, filtering out images taken in bad weather conditions, handling varying illumination conditions. The final outcome is summarized in a snow cover index, which indicates for a specific mountain and day of the year, the fraction of visible area covered by snow, possibly at different elevations. We created a manually labelled dataset to assess the accuracy of the image snow covered area estimation, achieving 90.0% precision at 91.1% recall. In addition, we show that seasonal trends related to air temperature are captured by the snow cover index.
[ { "version": "v1", "created": "Wed, 5 Aug 2015 12:46:26 GMT" } ]
2016-11-17T00:00:00
[ [ "Fedorov", "Roman", "" ], [ "Camerada", "Alessandro", "" ], [ "Fraternali", "Piero", "" ], [ "Tagliasacchi", "Marco", "" ] ]
TITLE: Estimating snow cover from publicly available images ABSTRACT: In this paper we study the problem of estimating snow cover in mountainous regions, that is, the spatial extent of the earth surface covered by snow. We argue that publicly available visual content, in the form of user generated photographs and image feeds from outdoor webcams, can both be leveraged as additional measurement sources, complementing existing ground, satellite and airborne sensor data. To this end, we describe two content acquisition and processing pipelines that are tailored to such sources, addressing the specific challenges posed by each of them, e.g., identifying the mountain peaks, filtering out images taken in bad weather conditions, handling varying illumination conditions. The final outcome is summarized in a snow cover index, which indicates for a specific mountain and day of the year, the fraction of visible area covered by snow, possibly at different elevations. We created a manually labelled dataset to assess the accuracy of the image snow covered area estimation, achieving 90.0% precision at 91.1% recall. In addition, we show that seasonal trends related to air temperature are captured by the snow cover index.
new_dataset
0.95995
1509.04186
Gaurav Sharma
Gaurav Sharma, Frederic Jurie and Cordelia Schmid
Expanded Parts Model for Semantic Description of Humans in Still Images
Accepted for publication in IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)
null
10.1109/TPAMI.2016.2537325
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce an Expanded Parts Model (EPM) for recognizing human attributes (e.g. young, short hair, wearing suit) and actions (e.g. running, jumping) in still images. An EPM is a collection of part templates which are learnt discriminatively to explain specific scale-space regions in the images (in human centric coordinates). This is in contrast to current models which consist of a relatively few (i.e. a mixture of) 'average' templates. EPM uses only a subset of the parts to score an image and scores the image sparsely in space, i.e. it ignores redundant and random background in an image. To learn our model, we propose an algorithm which automatically mines parts and learns corresponding discriminative templates together with their respective locations from a large number of candidate parts. We validate our method on three recent challenging datasets of human attributes and actions. We obtain convincing qualitative and state-of-the-art quantitative results on the three datasets.
[ { "version": "v1", "created": "Mon, 14 Sep 2015 16:33:04 GMT" }, { "version": "v2", "created": "Thu, 25 Feb 2016 12:14:05 GMT" } ]
2016-11-17T00:00:00
[ [ "Sharma", "Gaurav", "" ], [ "Jurie", "Frederic", "" ], [ "Schmid", "Cordelia", "" ] ]
TITLE: Expanded Parts Model for Semantic Description of Humans in Still Images ABSTRACT: We introduce an Expanded Parts Model (EPM) for recognizing human attributes (e.g. young, short hair, wearing suit) and actions (e.g. running, jumping) in still images. An EPM is a collection of part templates which are learnt discriminatively to explain specific scale-space regions in the images (in human centric coordinates). This is in contrast to current models which consist of a relatively few (i.e. a mixture of) 'average' templates. EPM uses only a subset of the parts to score an image and scores the image sparsely in space, i.e. it ignores redundant and random background in an image. To learn our model, we propose an algorithm which automatically mines parts and learns corresponding discriminative templates together with their respective locations from a large number of candidate parts. We validate our method on three recent challenging datasets of human attributes and actions. We obtain convincing qualitative and state-of-the-art quantitative results on the three datasets.
no_new_dataset
0.950549
1509.04619
Hamid Tizhoosh
Zehra Camlica, H.R. Tizhoosh, Farzad Khalvati
Medical Image Classification via SVM using LBP Features from Saliency-Based Folded Data
To appear in proceedings of The 14th International Conference on Machine Learning and Applications (IEEE ICMLA 2015), Miami, Florida, USA, 2015
null
10.1109/ICMLA.2015.131
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Good results on image classification and retrieval using support vector machines (SVM) with local binary patterns (LBPs) as features have been extensively reported in the literature, where an entire image is retrieved or classified. In contrast, in medical imaging, not all parts of the image may be equally significant or relevant to the image retrieval application at hand. For instance, in a lung x-ray image, the lung region may contain a tumour, hence being highly significant, whereas the surrounding area does not contain significant information from a medical diagnosis perspective. In this paper, we propose to detect salient regions of images during training and fold the data to reduce the effect of irrelevant regions. As a result, smaller image areas will be used for LBP features calculation and consequently classification by SVM. We use the IRMA 2009 dataset with 14,410 x-ray images to verify the performance of the proposed approach. The results demonstrate the benefits of the saliency-based folding approach, which delivers classification accuracies comparable to the state of the art but exhibits lower computational cost and storage requirements, factors highly important for big data analytics.
[ { "version": "v1", "created": "Tue, 15 Sep 2015 16:08:08 GMT" } ]
2016-11-17T00:00:00
[ [ "Camlica", "Zehra", "" ], [ "Tizhoosh", "H. R.", "" ], [ "Khalvati", "Farzad", "" ] ]
TITLE: Medical Image Classification via SVM using LBP Features from Saliency-Based Folded Data ABSTRACT: Good results on image classification and retrieval using support vector machines (SVM) with local binary patterns (LBPs) as features have been extensively reported in the literature, where an entire image is retrieved or classified. In contrast, in medical imaging, not all parts of the image may be equally significant or relevant to the image retrieval application at hand. For instance, in a lung x-ray image, the lung region may contain a tumour, hence being highly significant, whereas the surrounding area does not contain significant information from a medical diagnosis perspective. In this paper, we propose to detect salient regions of images during training and fold the data to reduce the effect of irrelevant regions. As a result, smaller image areas will be used for LBP features calculation and consequently classification by SVM. We use the IRMA 2009 dataset with 14,410 x-ray images to verify the performance of the proposed approach. The results demonstrate the benefits of the saliency-based folding approach, which delivers classification accuracies comparable to the state of the art but exhibits lower computational cost and storage requirements, factors highly important for big data analytics.
no_new_dataset
0.952662
1510.03125
Chunhua Shen
Qichang Hu, Sakrapee Paisitkriangkrai, Chunhua Shen, Anton van den Hengel, Fatih Porikli
Fast detection of multiple objects in traffic scenes with a common detection framework
Appearing in IEEE Transactions on Intelligent Transportation Systems
null
10.1109/TITS.2015.2496795
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traffic scene perception (TSP) aims to extract accurate on-road environment information in real time, which involves three phases: detection of objects of interest, recognition of detected objects, and tracking of objects in motion. Since recognition and tracking often rely on the results from detection, the ability to detect objects of interest effectively plays a crucial role in TSP. In this paper, we focus on three important classes of objects: traffic signs, cars, and cyclists. We propose to detect all three important objects in a single learning-based detection framework. The proposed framework consists of a dense feature extractor and detectors of three important classes. Once the dense features have been extracted, these features are shared with all detectors. The advantage of using one common framework is that the detection speed is much faster, since all dense features need only to be evaluated once in the testing phase. In contrast, most previous works have designed specific detectors using different features for each of these objects. To enhance the feature robustness to noises and image deformations, we introduce spatially pooled features as a part of aggregated channel features. In order to further improve the generalization performance, we propose an object subcategorization method as a means of capturing intra-class variation of objects. We experimentally demonstrate the effectiveness and efficiency of the proposed framework in three detection applications: traffic sign detection, car detection, and cyclist detection. The proposed framework achieves competitive performance with state-of-the-art approaches on several benchmark datasets.
[ { "version": "v1", "created": "Mon, 12 Oct 2015 02:30:22 GMT" } ]
2016-11-17T00:00:00
[ [ "Hu", "Qichang", "" ], [ "Paisitkriangkrai", "Sakrapee", "" ], [ "Shen", "Chunhua", "" ], [ "Hengel", "Anton van den", "" ], [ "Porikli", "Fatih", "" ] ]
TITLE: Fast detection of multiple objects in traffic scenes with a common detection framework ABSTRACT: Traffic scene perception (TSP) aims to extract accurate on-road environment information in real time, which involves three phases: detection of objects of interest, recognition of detected objects, and tracking of objects in motion. Since recognition and tracking often rely on the results from detection, the ability to detect objects of interest effectively plays a crucial role in TSP. In this paper, we focus on three important classes of objects: traffic signs, cars, and cyclists. We propose to detect all three important objects in a single learning-based detection framework. The proposed framework consists of a dense feature extractor and detectors of three important classes. Once the dense features have been extracted, these features are shared with all detectors. The advantage of using one common framework is that the detection speed is much faster, since all dense features need only to be evaluated once in the testing phase. In contrast, most previous works have designed specific detectors using different features for each of these objects. To enhance the feature robustness to noises and image deformations, we introduce spatially pooled features as a part of aggregated channel features. In order to further improve the generalization performance, we propose an object subcategorization method as a means of capturing intra-class variation of objects. We experimentally demonstrate the effectiveness and efficiency of the proposed framework in three detection applications: traffic sign detection, car detection, and cyclist detection. The proposed framework achieves competitive performance with state-of-the-art approaches on several benchmark datasets.
no_new_dataset
0.946051
1510.03336
Subutai Ahmad
Alexander Lavin, Subutai Ahmad
Evaluating Real-time Anomaly Detection Algorithms - the Numenta Anomaly Benchmark
14th International Conference on Machine Learning and Applications (IEEE ICMLA), 2015. Fixed typo in equation and formatting
null
10.1109/ICMLA.2015.141
null
cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Much of the world's data is streaming, time-series data, where anomalies give significant information in critical situations; examples abound in domains such as finance, IT, security, medical, and energy. Yet detecting anomalies in streaming data is a difficult task, requiring detectors to process data in real-time, not batches, and learn while simultaneously making predictions. There are no benchmarks to adequately test and score the efficacy of real-time anomaly detectors. Here we propose the Numenta Anomaly Benchmark (NAB), which attempts to provide a controlled and repeatable environment of open-source tools to test and measure anomaly detection algorithms on streaming data. The perfect detector would detect all anomalies as soon as possible, trigger no false alarms, work with real-world time-series data across a variety of domains, and automatically adapt to changing statistics. Rewarding these characteristics is formalized in NAB, using a scoring algorithm designed for streaming data. NAB evaluates detectors on a benchmark dataset with labeled, real-world time-series data. We present these components, and give results and analyses for several open source, commercially-used algorithms. The goal for NAB is to provide a standard, open source framework with which the research community can compare and evaluate different algorithms for detecting anomalies in streaming data.
[ { "version": "v1", "created": "Mon, 12 Oct 2015 15:30:34 GMT" }, { "version": "v2", "created": "Tue, 13 Oct 2015 23:09:58 GMT" }, { "version": "v3", "created": "Mon, 16 Nov 2015 20:52:44 GMT" }, { "version": "v4", "created": "Tue, 17 Nov 2015 17:17:06 GMT" } ]
2016-11-17T00:00:00
[ [ "Lavin", "Alexander", "" ], [ "Ahmad", "Subutai", "" ] ]
TITLE: Evaluating Real-time Anomaly Detection Algorithms - the Numenta Anomaly Benchmark ABSTRACT: Much of the world's data is streaming, time-series data, where anomalies give significant information in critical situations; examples abound in domains such as finance, IT, security, medical, and energy. Yet detecting anomalies in streaming data is a difficult task, requiring detectors to process data in real-time, not batches, and learn while simultaneously making predictions. There are no benchmarks to adequately test and score the efficacy of real-time anomaly detectors. Here we propose the Numenta Anomaly Benchmark (NAB), which attempts to provide a controlled and repeatable environment of open-source tools to test and measure anomaly detection algorithms on streaming data. The perfect detector would detect all anomalies as soon as possible, trigger no false alarms, work with real-world time-series data across a variety of domains, and automatically adapt to changing statistics. Rewarding these characteristics is formalized in NAB, using a scoring algorithm designed for streaming data. NAB evaluates detectors on a benchmark dataset with labeled, real-world time-series data. We present these components, and give results and analyses for several open source, commercially-used algorithms. The goal for NAB is to provide a standard, open source framework with which the research community can compare and evaluate different algorithms for detecting anomalies in streaming data.
no_new_dataset
0.913599
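As a hedged example of a detector operating under NAB's streaming constraints (one value at a time, learning while predicting), here is a rolling z-score baseline. It is not NAB's scoring code nor one of its reference detectors, and the window size and score squashing are illustrative choices.

```python
from collections import deque
import math

class RollingZScoreDetector:
    """Streaming baseline: anomaly score from a rolling mean/std.

    Processes one value at a time (no batches) and keeps updating its
    statistics, so it adapts to slow changes in the stream.
    """

    def __init__(self, window: int = 100):
        self.values = deque(maxlen=window)

    def score(self, x: float) -> float:
        """Return a score in [0, 1); higher means more anomalous."""
        if len(self.values) < 10:          # not enough history yet
            self.values.append(x)
            return 0.0
        mean = sum(self.values) / len(self.values)
        var = sum((v - mean) ** 2 for v in self.values) / len(self.values)
        z = abs(x - mean) / math.sqrt(var + 1e-9)
        self.values.append(x)              # learn while predicting
        return 1.0 - math.exp(-0.5 * z)    # squash z into [0, 1)

detector = RollingZScoreDetector(window=50)
stream = [10.0] * 200 + [25.0] + [10.0] * 50
scores = [detector.score(x) for x in stream]
print(max(scores), scores[200])  # the spike at index 200 scores highest
```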
1510.04039
Nadine Kroher
Nadine Kroher, Emilia G\'omez
Automatic Transcription of Flamenco Singing from Polyphonic Music Recordings
Submitted to the IEEE Transactions on Audio, Speech and Language Processing
null
10.1109/TASLP.2016.2531284
null
cs.SD cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic note-level transcription is considered one of the most challenging tasks in music information retrieval. The specific case of flamenco singing transcription poses a particular challenge due to its complex melodic progressions, intonation inaccuracies, the use of a high degree of ornamentation and the presence of guitar accompaniment. In this study, we explore the limitations of existing state of the art transcription systems for the case of flamenco singing and propose a specific solution for this genre: We first extract the predominant melody and apply a novel contour filtering process to eliminate segments of the pitch contour which originate from the guitar accompaniment. We formulate a set of onset detection functions based on volume and pitch characteristics to segment the resulting vocal pitch contour into discrete note events. A quantised pitch label is assigned to each note event by combining global pitch class probabilities with local pitch contour statistics. The proposed system outperforms state of the art singing transcription systems with respect to voicing accuracy, onset detection and overall performance when evaluated on flamenco singing datasets.
[ { "version": "v1", "created": "Wed, 14 Oct 2015 10:53:00 GMT" } ]
2016-11-17T00:00:00
[ [ "Kroher", "Nadine", "" ], [ "Gómez", "Emilia", "" ] ]
TITLE: Automatic Transcription of Flamenco Singing from Polyphonic Music Recordings ABSTRACT: Automatic note-level transcription is considered one of the most challenging tasks in music information retrieval. The specific case of flamenco singing transcription poses a particular challenge due to its complex melodic progressions, intonation inaccuracies, the use of a high degree of ornamentation and the presence of guitar accompaniment. In this study, we explore the limitations of existing state of the art transcription systems for the case of flamenco singing and propose a specific solution for this genre: We first extract the predominant melody and apply a novel contour filtering process to eliminate segments of the pitch contour which originate from the guitar accompaniment. We formulate a set of onset detection functions based on volume and pitch characteristics to segment the resulting vocal pitch contour into discrete note events. A quantised pitch label is assigned to each note event by combining global pitch class probabilities with local pitch contour statistics. The proposed system outperforms state of the art singing transcription systems with respect to voicing accuracy, onset detection and overall performance when evaluated on flamenco singing datasets.
no_new_dataset
0.946399
1510.07320
S. Hussain Raza
S. Hussain Raza, Matthias Grundmann, Irfan Essa
Geometric Context from Videos
Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on
null
10.1109/CVPR.2013.396
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a novel algorithm for estimating the broad 3D geometric structure of outdoor video scenes. Leveraging spatio-temporal video segmentation, we decompose a dynamic scene captured by a video into geometric classes, based on predictions made by region-classifiers that are trained on appearance and motion features. By examining the homogeneity of the prediction, we combine predictions across multiple segmentation hierarchy levels alleviating the need to determine the granularity a priori. We built a novel, extensive dataset on geometric context of video to evaluate our method, consisting of over 100 ground-truth annotated outdoor videos with over 20,000 frames. To further scale beyond this dataset, we propose a semi-supervised learning framework to expand the pool of labeled data with high confidence predictions obtained from unlabeled data. Our system produces an accurate prediction of geometric context of video achieving 96% accuracy across main geometric classes.
[ { "version": "v1", "created": "Sun, 25 Oct 2015 22:58:30 GMT" } ]
2016-11-17T00:00:00
[ [ "Raza", "S. Hussain", "" ], [ "Grundmann", "Matthias", "" ], [ "Essa", "Irfan", "" ] ]
TITLE: Geometric Context from Videos ABSTRACT: We present a novel algorithm for estimating the broad 3D geometric structure of outdoor video scenes. Leveraging spatio-temporal video segmentation, we decompose a dynamic scene captured by a video into geometric classes, based on predictions made by region-classifiers that are trained on appearance and motion features. By examining the homogeneity of the prediction, we combine predictions across multiple segmentation hierarchy levels alleviating the need to determine the granularity a priori. We built a novel, extensive dataset on geometric context of video to evaluate our method, consisting of over 100 ground-truth annotated outdoor videos with over 20,000 frames. To further scale beyond this dataset, we propose a semi-supervised learning framework to expand the pool of labeled data with high confidence predictions obtained from unlabeled data. Our system produces an accurate prediction of geometric context of video achieving 96% accuracy across main geometric classes.
new_dataset
0.954223
1510.07323
S. Hussain Raza
S. Hussain Raza, Ahmad Humayun, Matthias Grundmann, David Anderson, Irfan Essa
Finding Temporally Consistent Occlusion Boundaries in Videos using Geometric Context
Applications of Computer Vision (WACV), 2015 IEEE Winter Conference on
null
10.1109/WACV.2015.141
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an algorithm for finding temporally consistent occlusion boundaries in videos to support segmentation of dynamic scenes. We learn occlusion boundaries in a pairwise Markov random field (MRF) framework. We first estimate the probability of a spatio-temporal edge being an occlusion boundary by using appearance, flow, and geometric features. Next, we enforce occlusion boundary continuity in an MRF model by learning pairwise occlusion probabilities using a random forest. Then, we temporally smooth boundaries to remove temporal inconsistencies in occlusion boundary estimation. Our proposed framework provides an efficient approach for finding temporally consistent occlusion boundaries in video by utilizing causality, redundancy in videos, and semantic layout of the scene. We have developed a dataset with fully annotated ground-truth occlusion boundaries of over 30 videos (5000 frames). This dataset is used to evaluate temporal occlusion boundaries and provides a much needed baseline for future studies. We perform experiments to demonstrate the role of scene layout and temporal information for occlusion reasoning in dynamic scenes.
[ { "version": "v1", "created": "Sun, 25 Oct 2015 23:20:38 GMT" } ]
2016-11-17T00:00:00
[ [ "Raza", "S. Hussain", "" ], [ "Humayun", "Ahmad", "" ], [ "Grundmann", "Matthias", "" ], [ "Anderson", "David", "" ], [ "Essa", "Irfan", "" ] ]
TITLE: Finding Temporally Consistent Occlusion Boundaries in Videos using Geometric Context ABSTRACT: We present an algorithm for finding temporally consistent occlusion boundaries in videos to support segmentation of dynamic scenes. We learn occlusion boundaries in a pairwise Markov random field (MRF) framework. We first estimate the probability of a spatio-temporal edge being an occlusion boundary by using appearance, flow, and geometric features. Next, we enforce occlusion boundary continuity in an MRF model by learning pairwise occlusion probabilities using a random forest. Then, we temporally smooth boundaries to remove temporal inconsistencies in occlusion boundary estimation. Our proposed framework provides an efficient approach for finding temporally consistent occlusion boundaries in video by utilizing causality, redundancy in videos, and semantic layout of the scene. We have developed a dataset with fully annotated ground-truth occlusion boundaries of over 30 videos (5000 frames). This dataset is used to evaluate temporal occlusion boundaries and provides a much needed baseline for future studies. We perform experiments to demonstrate the role of scene layout and temporal information for occlusion reasoning in dynamic scenes.
new_dataset
0.955026
1511.04240
Dylan Campbell
Dylan Campbell, Lars Petersson
An Adaptive Data Representation for Robust Point-Set Registration and Merging
Manuscript in press 2015 IEEE International Conference on Computer Vision
null
10.1109/ICCV.2015.488
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a framework for rigid point-set registration and merging using a robust continuous data representation. Our point-set representation is constructed by training a one-class support vector machine with a Gaussian radial basis function kernel and subsequently approximating the output function with a Gaussian mixture model. We leverage the representation's sparse parametrisation and robustness to noise, outliers and occlusions in an efficient registration algorithm that minimises the L2 distance between our support vector--parametrised Gaussian mixtures. In contrast, existing techniques, such as Iterative Closest Point and Gaussian mixture approaches, manifest a narrower region of convergence and are less robust to occlusions and missing data, as demonstrated in the evaluation on a range of 2D and 3D datasets. Finally, we present a novel algorithm, GMMerge, that parsimoniously and equitably merges aligned mixture models, allowing the framework to be used for reconstruction and mapping.
[ { "version": "v1", "created": "Fri, 13 Nov 2015 11:23:40 GMT" } ]
2016-11-17T00:00:00
[ [ "Campbell", "Dylan", "" ], [ "Petersson", "Lars", "" ] ]
TITLE: An Adaptive Data Representation for Robust Point-Set Registration and Merging ABSTRACT: This paper presents a framework for rigid point-set registration and merging using a robust continuous data representation. Our point-set representation is constructed by training a one-class support vector machine with a Gaussian radial basis function kernel and subsequently approximating the output function with a Gaussian mixture model. We leverage the representation's sparse parametrisation and robustness to noise, outliers and occlusions in an efficient registration algorithm that minimises the L2 distance between our support vector--parametrised Gaussian mixtures. In contrast, existing techniques, such as Iterative Closest Point and Gaussian mixture approaches, manifest a narrower region of convergence and are less robust to occlusions and missing data, as demonstrated in the evaluation on a range of 2D and 3D datasets. Finally, we present a novel algorithm, GMMerge, that parsimoniously and equitably merges aligned mixture models, allowing the framework to be used for reconstruction and mapping.
no_new_dataset
0.949716
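A sketch of the closed-form squared L2 distance between Gaussian mixtures that such a registration objective relies on, using the identity ∫N(x;μ₁,Σ₁)N(x;μ₂,Σ₂)dx = N(μ₁;μ₂,Σ₁+Σ₂). The toy mixtures are illustrative, and the paper's support-vector parametrisation and optimisation loop are not shown.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gauss_inner(m1, c1, m2, c2):
    """Closed-form integral of N(x; m1, c1) * N(x; m2, c2) over all x,
    which equals the density N(m1; m2, c1 + c2)."""
    return multivariate_normal.pdf(m1, mean=m2, cov=c1 + c2)

def gmm_l2(w1, mu1, cov1, w2, mu2, cov2):
    """Squared L2 distance between two Gaussian mixtures.

    ||f - g||^2 = <f, f> - 2 <f, g> + <g, g>, where every cross term
    <N_i, N_j> has the closed form above, so no sampling is needed.
    """
    def cross(wa, ma, ca, wb, mb, cb):
        return sum(a * b * gauss_inner(mi, ci, mj, cj)
                   for a, mi, ci in zip(wa, ma, ca)
                   for b, mj, cj in zip(wb, mb, cb))
    return (cross(w1, mu1, cov1, w1, mu1, cov1)
            - 2 * cross(w1, mu1, cov1, w2, mu2, cov2)
            + cross(w2, mu2, cov2, w2, mu2, cov2))

# Two toy 2-D mixtures; the distance shrinks as g's mean approaches f's.
eye = np.eye(2)
f = ([0.5, 0.5], [np.zeros(2), np.ones(2)], [eye, eye])
for shift in (2.0, 1.0, 0.0):
    g = ([1.0], [np.array([shift, shift])], [eye])
    print(shift, gmm_l2(*f, *g))
```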
1512.04042
Shixia Liu
Shixia Liu, Jialun Yin, Xiting Wang, Weiwei Cui, Kelei Cao, Jian Pei
Online Visual Analytics of Text Streams
IEEE TVCG 2016
null
10.1109/TVCG.2015.2509990
null
cs.IR cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an online visual analytics approach to helping users explore and understand hierarchical topic evolution in high-volume text streams. The key idea behind this approach is to identify representative topics in incoming documents and align them with the existing representative topics that they immediately follow (in time). To this end, we learn a set of streaming tree cuts from topic trees based on user-selected focus nodes. A dynamic Bayesian network model has been developed to derive the tree cuts in the incoming topic trees to balance the fitness of each tree cut and the smoothness between adjacent tree cuts. By connecting the corresponding topics at different times, we are able to provide an overview of the evolving hierarchical topics. A sedimentation-based visualization has been designed to enable the interactive analysis of streaming text data from global patterns to local details. We evaluated our method on real-world datasets and the results are generally favorable.
[ { "version": "v1", "created": "Sun, 13 Dec 2015 12:22:21 GMT" } ]
2016-11-17T00:00:00
[ [ "Liu", "Shixia", "" ], [ "Yin", "Jialun", "" ], [ "Wang", "Xiting", "" ], [ "Cui", "Weiwei", "" ], [ "Cao", "Kelei", "" ], [ "Pei", "Jian", "" ] ]
TITLE: Online Visual Analytics of Text Streams ABSTRACT: We present an online visual analytics approach to helping users explore and understand hierarchical topic evolution in high-volume text streams. The key idea behind this approach is to identify representative topics in incoming documents and align them with the existing representative topics that they immediately follow (in time). To this end, we learn a set of streaming tree cuts from topic trees based on user-selected focus nodes. A dynamic Bayesian network model has been developed to derive the tree cuts in the incoming topic trees to balance the fitness of each tree cut and the smoothness between adjacent tree cuts. By connecting the corresponding topics at different times, we are able to provide an overview of the evolving hierarchical topics. A sedimentation-based visualization has been designed to enable the interactive analysis of streaming text data from global patterns to local details. We evaluated our method on real-world datasets and the results are generally favorable.
no_new_dataset
0.952574
1512.04133
George Cushen
George Cushen
A Person Re-Identification System For Mobile Devices
Appearing in Proceedings of the 11th IEEE/ACM International Conference on Signal Image Technology & Internet Systems (SITIS 2015)
null
10.1109/SITIS.2015.96
null
cs.CV cs.CR cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Person re-identification is a critical security task for recognizing a person across spatially disjoint sensors. Previous work can be computationally intensive and is mainly based on low-level cues extracted from RGB data and implemented on a PC for a fixed sensor network (such as traditional CCTV). We present a practical and efficient framework for mobile devices (such as smart phones and robots) where high-level semantic soft biometrics are extracted from RGB and depth data. By combining these cues, our approach attempts to provide robustness to noise, illumination, and minor variations in clothing. This mobile approach may be particularly useful for the identification of persons in areas ill-served by fixed sensors or for tasks where the sensor position and direction need to dynamically adapt to a target. Results on the BIWI dataset are preliminary but encouraging. Further evaluation and demonstration of the system will be available on our website.
[ { "version": "v1", "created": "Sun, 13 Dec 2015 22:33:17 GMT" } ]
2016-11-17T00:00:00
[ [ "Cushen", "George", "" ] ]
TITLE: A Person Re-Identification System For Mobile Devices ABSTRACT: Person re-identification is a critical security task for recognizing a person across spatially disjoint sensors. Previous work can be computationally intensive and is mainly based on low-level cues extracted from RGB data and implemented on a PC for a fixed sensor network (such as traditional CCTV). We present a practical and efficient framework for mobile devices (such as smart phones and robots) where high-level semantic soft biometrics are extracted from RGB and depth data. By combining these cues, our approach attempts to provide robustness to noise, illumination, and minor variations in clothing. This mobile approach may be particularly useful for the identification of persons in areas ill-served by fixed sensors or for tasks where the sensor position and direction need to dynamically adapt to a target. Results on the BIWI dataset are preliminary but encouraging. Further evaluation and demonstration of the system will be available on our website.
no_new_dataset
0.946448
1601.07471
Vinay Venkataraman
Vinay Venkataraman, Pavan Turaga
Shape Distributions of Nonlinear Dynamical Systems for Video-based Inference
IEEE Transactions on Pattern Analysis and Machine Intelligence
null
10.1109/TPAMI.2016.2533388
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a shape-theoretic framework for dynamical analysis of nonlinear dynamical systems which appear frequently in several video-based inference tasks. Traditional approaches to dynamical modeling have included linear and nonlinear methods with their respective drawbacks. A novel approach we propose is the use of descriptors of the shape of the dynamical attractor as a feature representation of the nature of the dynamics. The proposed framework has two main advantages over traditional approaches: a) representation of the dynamical system is derived directly from the observational data, without any inherent assumptions, and b) the proposed features show stability under different time-series lengths where traditional dynamical invariants fail. We illustrate our idea using nonlinear dynamical models such as Lorenz and Rossler systems, where our feature representations (shape distribution) support our hypothesis that the local shape of the reconstructed phase space can be used as a discriminative feature. Our experimental analyses on these models also indicate that the proposed framework shows stability for different time-series lengths, which is useful when the available number of samples is small/variable. The specific applications of interest in this paper are: 1) activity recognition using motion capture and RGBD sensors, 2) activity quality assessment for applications in stroke rehabilitation, and 3) dynamical scene classification. We provide experimental validation through action and gesture recognition experiments on motion capture and Kinect datasets. In all these scenarios, we show experimental evidence of the favorable properties of the proposed representation.
[ { "version": "v1", "created": "Wed, 27 Jan 2016 18:01:38 GMT" } ]
2016-11-17T00:00:00
[ [ "Venkataraman", "Vinay", "" ], [ "Turaga", "Pavan", "" ] ]
TITLE: Shape Distributions of Nonlinear Dynamical Systems for Video-based Inference ABSTRACT: This paper presents a shape-theoretic framework for dynamical analysis of nonlinear dynamical systems which appear frequently in several video-based inference tasks. Traditional approaches to dynamical modeling have included linear and nonlinear methods with their respective drawbacks. A novel approach we propose is the use of descriptors of the shape of the dynamical attractor as a feature representation of the nature of the dynamics. The proposed framework has two main advantages over traditional approaches: a) the representation of the dynamical system is derived directly from the observational data, without any inherent assumptions, and b) the proposed features show stability under different time-series lengths where traditional dynamical invariants fail. We illustrate our idea using nonlinear dynamical models such as the Lorenz and Rossler systems, where our feature representation (the shape distribution) supports our hypothesis that the local shape of the reconstructed phase space can be used as a discriminative feature. Our experimental analyses on these models also indicate that the proposed framework shows stability for different time-series lengths, which is useful when the available number of samples is small/variable. The specific applications of interest in this paper are: 1) activity recognition using motion capture and RGBD sensors, 2) activity quality assessment for applications in stroke rehabilitation, and 3) dynamical scene classification. We provide experimental validation through action and gesture recognition experiments on motion capture and Kinect datasets. In all these scenarios, we show experimental evidence of the favorable properties of the proposed representation.
no_new_dataset
0.950365
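As a rough illustration of the idea in the record above — characterizing attractor shape directly from observed data — the sketch below delay-embeds a scalar series (Takens reconstruction) and computes a D2-style shape distribution, i.e., a histogram of distances between randomly sampled point pairs. The embedding dimension, delay, and histogram settings are illustrative assumptions, not the paper's actual choices.

```python
import numpy as np

def delay_embed(x, dim=3, tau=8):
    """Reconstruct a phase-space trajectory from a scalar series via
    Takens delay embedding: X[t] = (x[t], x[t+tau], ..., x[t+(dim-1)tau])."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau : i * tau + n] for i in range(dim)], axis=1)

def shape_distribution(points, n_pairs=20000, bins=32, rng=None):
    """D2-style shape descriptor: histogram of distances between randomly
    sampled point pairs on the reconstructed attractor."""
    rng = np.random.default_rng(rng)
    i = rng.integers(0, len(points), size=n_pairs)
    j = rng.integers(0, len(points), size=n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(d, bins=bins, density=True)
    return hist

t = np.linspace(0, 60, 6000)
x = np.sin(t) + 0.5 * np.sin(2.2 * t)      # stand-in for an observed series
feat = shape_distribution(delay_embed(x))  # fixed-length feature vector
```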
1602.06401
Nikos Bikakis
Nikos Bikakis, John Liagouris, Maria Krommyda, George Papastefanatos, Timos Sellis
graphVizdb: A Scalable Platform for Interactive Large Graph Visualization
32nd IEEE International Conference on Data Engineering (ICDE '16)
null
10.1109/ICDE.2016.7498340
null
cs.HC cs.DB cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a novel platform for the interactive visualization of very large graphs. The platform enables the user to interact with the visualized graph in a way that is very similar to the exploration of maps at multiple levels. Our approach involves an offline preprocessing phase that builds the layout of the graph by assigning coordinates to its nodes with respect to a Euclidean plane. The respective points are indexed with a spatial data structure, i.e., an R-tree, and stored in a database. Multiple abstraction layers of the graph based on various criteria are also created offline, and they are indexed similarly so that the user can explore the dataset at different levels of granularity, depending on her particular needs. Then, our system translates user operations into simple and very efficient spatial operations (i.e., window queries) in the backend. This technique allows for fine-grained access to very large graphs with extremely low latency and memory requirements and without compromising the functionality of the tool. Our web-based prototype supports three main operations: (1) interactive navigation, (2) multi-level exploration, and (3) keyword search on the graph metadata.
[ { "version": "v1", "created": "Sat, 20 Feb 2016 12:49:09 GMT" } ]
2016-11-17T00:00:00
[ [ "Bikakis", "Nikos", "" ], [ "Liagouris", "John", "" ], [ "Krommyda", "Maria", "" ], [ "Papastefanatos", "George", "" ], [ "Sellis", "Timos", "" ] ]
TITLE: graphVizdb: A Scalable Platform for Interactive Large Graph Visualization ABSTRACT: We present a novel platform for the interactive visualization of very large graphs. The platform enables the user to interact with the visualized graph in a way that is very similar to the exploration of maps at multiple levels. Our approach involves an offline preprocessing phase that builds the layout of the graph by assigning coordinates to its nodes with respect to a Euclidean plane. The respective points are indexed with a spatial data structure, i.e., an R-tree, and stored in a database. Multiple abstraction layers of the graph based on various criteria are also created offline, and they are indexed similarly so that the user can explore the dataset at different levels of granularity, depending on her particular needs. Then, our system translates user operations into simple and very efficient spatial operations (i.e., window queries) in the backend. This technique allows for fine-grained access to very large graphs with extremely low latency and memory requirements and without compromising the functionality of the tool. Our web-based prototype supports three main operations: (1) interactive navigation, (2) multi-level exploration, and (3) keyword search on the graph metadata.
no_new_dataset
0.947672
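The window-query pattern described in the graphVizdb record above can be sketched with an off-the-shelf R-tree. The node ids, coordinates, and viewport below are toy values; this is a minimal sketch of the indexing idea, not the system's actual backend.

```python
from rtree import index

# Offline: index precomputed node layout coordinates (one (x, y) per node).
idx = index.Index()
layout = {0: (12.0, 7.5), 1: (13.1, 8.0), 2: (40.2, 3.3)}  # toy layout
for node_id, (x, y) in layout.items():
    idx.insert(node_id, (x, y, x, y))       # points as degenerate boxes

# Online: a viewport pan/zoom becomes a single window (intersection) query.
viewport = (10.0, 5.0, 20.0, 10.0)          # (xmin, ymin, xmax, ymax)
visible_nodes = list(idx.intersection(viewport))
print(visible_nodes)                        # -> [0, 1]
```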
1603.07936
Subhajit Sidhanta Subhajit Sidhanta
Subhajit Sidhanta, Wojciech Golab, and Supratik Mukhopadhyay
OptEx: A Deadline-Aware Cost Optimization Model for Spark
10 pages, IEEE CCGrid 2016
null
10.1109/CCGrid.2016.10
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present OptEx, a closed-form model of job execution on Apache Spark, a popular parallel processing engine. To the best of our knowledge, OptEx is the first work that analytically models job completion time on Spark. The model can be used to estimate the completion time of a given Spark job on a cloud, with respect to the size of the input dataset, the number of iterations, and the number of nodes comprising the underlying cluster. Experimental results demonstrate that OptEx yields a mean relative error of 6% in estimating the job completion time. Furthermore, the model can be applied for estimating the cost-optimal cluster composition for running a given Spark job on a cloud under a completion deadline specified in the SLO (i.e., Service Level Objective). We show experimentally that OptEx is able to correctly estimate the cost-optimal cluster composition for running a given Spark job under an SLO deadline with an accuracy of 98%.
[ { "version": "v1", "created": "Fri, 25 Mar 2016 15:28:56 GMT" } ]
2016-11-17T00:00:00
[ [ "Sidhanta", "Subhajit", "" ], [ "Golab", "Wojciech", "" ], [ "Mukhopadhyay", "Supratik", "" ] ]
TITLE: OptEx: A Deadline-Aware Cost Optimization Model for Spark ABSTRACT: We present OptEx, a closed-form model of job execution on Apache Spark, a popular parallel processing engine. To the best of our knowledge, OptEx is the first work that analytically models job completion time on Spark. The model can be used to estimate the completion time of a given Spark job on a cloud, with respect to the size of the input dataset, the number of iterations, and the number of nodes comprising the underlying cluster. Experimental results demonstrate that OptEx yields a mean relative error of 6% in estimating the job completion time. Furthermore, the model can be applied for estimating the cost-optimal cluster composition for running a given Spark job on a cloud under a completion deadline specified in the SLO (i.e., Service Level Objective). We show experimentally that OptEx is able to correctly estimate the cost-optimal cluster composition for running a given Spark job under an SLO deadline with an accuracy of 98%.
no_new_dataset
0.943034
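The deadline-aware sizing idea in the OptEx record can be illustrated with a deliberately simplified stand-in model. The functional form and every coefficient below are assumptions for illustration only — the paper derives its own closed-form model — but the search for the cheapest cluster meeting an SLO deadline follows the same shape.

```python
# Hypothetical completion-time model: setup + compute + coordination terms.
# Coefficients a, b, c would be fit from profiling runs; they are NOT the
# paper's actual parameters.
def est_time(nodes, input_gb, iters, a=30.0, b=8.0, c=0.4):
    return a + b * input_gb * iters / nodes + c * nodes

def cheapest_cluster(deadline_s, input_gb, iters,
                     price_per_node_s=0.0001, max_nodes=256):
    """Scan cluster sizes; keep the lowest-cost one meeting the deadline."""
    best = None
    for n in range(1, max_nodes + 1):
        t = est_time(n, input_gb, iters)
        if t <= deadline_s:
            cost = n * t * price_per_node_s
            if best is None or cost < best[1]:
                best = (n, cost)
    return best  # (nodes, dollar cost), or None if the SLO is infeasible

print(cheapest_cluster(deadline_s=600, input_gb=100, iters=3))
```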
1604.04906
Johannes Stegmaier
Johannes Stegmaier, Julian Arz, Benjamin Schott, Jens C. Otte, Andrei Kobitski, G. Ulrich Nienhaus, Uwe Str\"ahle, Peter Sanders, Ralf Mikut
Generating Semi-Synthetic Validation Benchmarks for Embryomics
Accepted publication at IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI), 2016
null
10.1109/ISBI.2016.7493359
null
cs.CV q-bio.CB q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Systematic validation is an essential part of algorithm development. The enormous dataset sizes and the complexity observed in many recent time-resolved 3D fluorescence microscopy imaging experiments, however, prohibit comprehensive manual ground truth generation. Moreover, existing simulated benchmarks in this field are often too simple or too specialized to sufficiently validate the observed image analysis problems. We present a new semi-synthetic approach to generate realistic 3D+t benchmarks that combines challenging cellular movement dynamics of real embryos with simulated fluorescent nuclei and artificial image distortions, including various parametrizable options such as cell numbers, acquisition deficiencies, or multiview simulations. We successfully applied the approach to simulate the development of a zebrafish embryo with thousands of cells over 14 hours of its early existence.
[ { "version": "v1", "created": "Sun, 17 Apr 2016 18:29:48 GMT" } ]
2016-11-17T00:00:00
[ [ "Stegmaier", "Johannes", "" ], [ "Arz", "Julian", "" ], [ "Schott", "Benjamin", "" ], [ "Otte", "Jens C.", "" ], [ "Kobitski", "Andrei", "" ], [ "Nienhaus", "G. Ulrich", "" ], [ "Strähle", "Uwe", "" ], [ "Sanders", "Peter", "" ], [ "Mikut", "Ralf", "" ] ]
TITLE: Generating Semi-Synthetic Validation Benchmarks for Embryomics ABSTRACT: Systematic validation is an essential part of algorithm development. The enormous dataset sizes and the complexity observed in many recent time-resolved 3D fluorescence microscopy imaging experiments, however, prohibit comprehensive manual ground truth generation. Moreover, existing simulated benchmarks in this field are often too simple or too specialized to sufficiently validate the observed image analysis problems. We present a new semi-synthetic approach to generate realistic 3D+t benchmarks that combines challenging cellular movement dynamics of real embryos with simulated fluorescent nuclei and artificial image distortions, including various parametrizable options such as cell numbers, acquisition deficiencies, or multiview simulations. We successfully applied the approach to simulate the development of a zebrafish embryo with thousands of cells over 14 hours of its early existence.
no_new_dataset
0.944944
1605.03498
Micael Carvalho
Micael Carvalho, Matthieu Cord, Sandra Avila, Nicolas Thome, Eduardo Valle
Deep Neural Networks Under Stress
This article corresponds to the accepted version at IEEE ICIP 2016. We will link the DOI as soon as it is available
null
10.1109/ICIP.2016.7533200
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, deep architectures have been used for transfer learning with state-of-the-art performance in many datasets. The properties of their features remain, however, largely unstudied under the transfer perspective. In this work, we present an extensive analysis of the resiliency of feature vectors extracted from deep models, with a special focus on the trade-off between performance and compression rate. By introducing perturbations to image descriptions extracted from a deep convolutional neural network, we change their precision and number of dimensions and measure how this affects the final score. We show that deep features are more robust to these disturbances when compared to classical approaches, achieving a compression rate of 98.4%, while losing only 0.88% of their original score for Pascal VOC 2007.
[ { "version": "v1", "created": "Wed, 11 May 2016 16:22:23 GMT" }, { "version": "v2", "created": "Mon, 23 May 2016 08:34:50 GMT" } ]
2016-11-17T00:00:00
[ [ "Carvalho", "Micael", "" ], [ "Cord", "Matthieu", "" ], [ "Avila", "Sandra", "" ], [ "Thome", "Nicolas", "" ], [ "Valle", "Eduardo", "" ] ]
TITLE: Deep Neural Networks Under Stress ABSTRACT: In recent years, deep architectures have been used for transfer learning with state-of-the-art performance in many datasets. The properties of their features remain, however, largely unstudied under the transfer perspective. In this work, we present an extensive analysis of the resiliency of feature vectors extracted from deep models, with a special focus on the trade-off between performance and compression rate. By introducing perturbations to image descriptions extracted from a deep convolutional neural network, we change their precision and number of dimensions and measure how this affects the final score. We show that deep features are more robust to these disturbances when compared to classical approaches, achieving a compression rate of 98.4%, while losing only 0.88% of their original score for Pascal VOC 2007.
no_new_dataset
0.944638
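A minimal sketch of the kind of perturbation study described in the record above: quantize a deep feature vector to fewer bits and drop dimensions, then check how similar the perturbed descriptor stays. The random 4096-d vector and the cosine-similarity readout are stand-ins; the paper measures the effect on the final classification score instead.

```python
import numpy as np

def quantize(feat, n_bits):
    """Uniformly quantize a feature vector to n_bits per dimension."""
    lo, hi = feat.min(), feat.max()
    levels = 2 ** n_bits - 1
    q = np.round((feat - lo) / (hi - lo) * levels)
    return q / levels * (hi - lo) + lo

def drop_dims(feat, keep_ratio, rng=None):
    """Zero out a random subset of dimensions (a crude compression proxy)."""
    rng = np.random.default_rng(rng)
    return feat * (rng.random(feat.shape[-1]) < keep_ratio)

feat = np.random.randn(4096).astype(np.float32)  # stand-in deep descriptor
for bits in (8, 4, 2, 1):
    f = quantize(feat, bits)
    cos = f @ feat / (np.linalg.norm(f) * np.linalg.norm(feat))
    print(bits, round(float(cos), 4))
```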
1605.04227
Madhav Nimishakavi Mr
Madhav Nimishakavi, Uday Singh Saini and Partha Talukdar
Relation Schema Induction using Tensor Factorization with Side Information
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, November 2016. Austin, TX
null
null
null
cs.IR cs.CL cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given a set of documents from a specific domain (e.g., medical research journals), how do we automatically build a Knowledge Graph (KG) for that domain? Automatic identification of relations and their schemas, i.e., type signature of arguments of relations (e.g., undergo(Patient, Surgery)), is an important first step towards this goal. We refer to this problem as Relation Schema Induction (RSI). In this paper, we propose Schema Induction using Coupled Tensor Factorization (SICTF), a novel tensor factorization method for relation schema induction. SICTF factorizes Open Information Extraction (OpenIE) triples extracted from a domain corpus along with additional side information in a principled way to induce relation schemas. To the best of our knowledge, this is the first application of tensor factorization for the RSI problem. Through extensive experiments on multiple real-world datasets, we find that SICTF is not only more accurate than state-of-the-art baselines, but also significantly faster (about 14x faster).
[ { "version": "v1", "created": "Thu, 12 May 2016 19:44:04 GMT" }, { "version": "v2", "created": "Tue, 17 May 2016 04:57:09 GMT" }, { "version": "v3", "created": "Wed, 16 Nov 2016 04:53:42 GMT" } ]
2016-11-17T00:00:00
[ [ "Nimishakavi", "Madhav", "" ], [ "Saini", "Uday Singh", "" ], [ "Talukdar", "Partha", "" ] ]
TITLE: Relation Schema Induction using Tensor Factorization with Side Information ABSTRACT: Given a set of documents from a specific domain (e.g., medical research journals), how do we automatically build a Knowledge Graph (KG) for that domain? Automatic identification of relations and their schemas, i.e., type signature of arguments of relations (e.g., undergo(Patient, Surgery)), is an important first step towards this goal. We refer to this problem as Relation Schema Induction (RSI). In this paper, we propose Schema Induction using Coupled Tensor Factorization (SICTF), a novel tensor factorization method for relation schema induction. SICTF factorizes Open Information Extraction (OpenIE) triples extracted from a domain corpus along with additional side information in a principled way to induce relation schemas. To the best of our knowledge, this is the first application of tensor factorization for the RSI problem. Through extensive experiments on multiple real-world datasets, we find that SICTF is not only more accurate than state-of-the-art baselines, but also significantly faster (about 14x faster).
no_new_dataset
0.950824
1606.06900
Panupong Pasupat
Panupong Pasupat and Percy Liang
Inferring Logical Forms From Denotations
Published at the Association for Computational Linguistics (ACL) conference, 2016
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A core problem in learning semantic parsers from denotations is picking out consistent logical forms--those that yield the correct denotation--from a combinatorially large space. To control the search space, previous work relied on a restricted set of rules, which limits expressivity. In this paper, we consider a much more expressive class of logical forms, and show how to use dynamic programming to efficiently represent the complete set of consistent logical forms. Expressivity also introduces many more spurious logical forms which are consistent with the correct denotation but do not represent the meaning of the utterance. To address this, we generate fictitious worlds and use crowdsourced denotations on these worlds to filter out spurious logical forms. On the WikiTableQuestions dataset, we increase the coverage of answerable questions from 53.5% to 76%, and the additional crowdsourced supervision lets us rule out 92.1% of spurious logical forms.
[ { "version": "v1", "created": "Wed, 22 Jun 2016 11:07:43 GMT" }, { "version": "v2", "created": "Tue, 15 Nov 2016 21:24:08 GMT" } ]
2016-11-17T00:00:00
[ [ "Pasupat", "Panupong", "" ], [ "Liang", "Percy", "" ] ]
TITLE: Inferring Logical Forms From Denotations ABSTRACT: A core problem in learning semantic parsers from denotations is picking out consistent logical forms--those that yield the correct denotation--from a combinatorially large space. To control the search space, previous work relied on a restricted set of rules, which limits expressivity. In this paper, we consider a much more expressive class of logical forms, and show how to use dynamic programming to efficiently represent the complete set of consistent logical forms. Expressivity also introduces many more spurious logical forms which are consistent with the correct denotation but do not represent the meaning of the utterance. To address this, we generate fictitious worlds and use crowdsourced denotations on these worlds to filter out spurious logical forms. On the WikiTableQuestions dataset, we increase the coverage of answerable questions from 53.5% to 76%, and the additional crowdsourced supervision lets us rule out 92.1% of spurious logical forms.
no_new_dataset
0.947478
1607.06190
Uwe Aickelin
Christopher Roadknight, Durga Suryanarayanan, Uwe Aickelin, John Scholefield, Lindy Durrant
An ensemble of machine learning and anti-learning methods for predicting tumour patient survival rates
IEEE International Conference on Data Science and Advanced Analytics (IEEE DSAA'2015), pp. 1-8, 2015. arXiv admin note: text overlap with arXiv:1307.1599, arXiv:1409.0788
null
10.1109/DSAA.2015.7344863
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper primarily addresses a dataset relating to the cellular, chemical and physical conditions of patients gathered at the time they are operated upon to remove colorectal tumours. This data provides a unique insight into the biochemical and immunological status of patients at the point of tumour removal along with information about tumour classification and post-operative survival. The relationship between severity of tumour, based on TNM staging, and survival is still unclear for patients with TNM stage 2 and 3 tumours. We ask whether it is possible to predict survival rate more accurately using a selection of machine learning techniques applied to subsets of data to gain a deeper understanding of the relationships between a patient's biochemical markers and survival. We use a range of feature selection and single classification techniques to predict the 5-year survival rate of TNM stage 2 and 3 patients, which initially produces less than ideal results. The performance of each model individually is then compared with subsets of the data where agreement is reached for multiple models. This novel method of selective ensembling demonstrates that significant improvements in model accuracy on an unseen test set can be achieved for patients where agreement between models is achieved. Finally, we point to a possible method to identify whether a patient's prognosis can be accurately predicted or not.
[ { "version": "v1", "created": "Thu, 21 Jul 2016 04:57:16 GMT" } ]
2016-11-17T00:00:00
[ [ "Roadknight", "Christopher", "" ], [ "Suryanarayanan", "Durga", "" ], [ "Aickelin", "Uwe", "" ], [ "Scholefield", "John", "" ], [ "Durrant", "Lindy", "" ] ]
TITLE: An ensemble of machine learning and anti-learning methods for predicting tumour patient survival rates ABSTRACT: This paper primarily addresses a dataset relating to the cellular, chemical and physical conditions of patients gathered at the time they are operated upon to remove colorectal tumours. This data provides a unique insight into the biochemical and immunological status of patients at the point of tumour removal along with information about tumour classification and post-operative survival. The relationship between severity of tumour, based on TNM staging, and survival is still unclear for patients with TNM stage 2 and 3 tumours. We ask whether it is possible to predict survival rate more accurately using a selection of machine learning techniques applied to subsets of data to gain a deeper understanding of the relationships between a patient's biochemical markers and survival. We use a range of feature selection and single classification techniques to predict the 5-year survival rate of TNM stage 2 and 3 patients, which initially produces less than ideal results. The performance of each model individually is then compared with subsets of the data where agreement is reached for multiple models. This novel method of selective ensembling demonstrates that significant improvements in model accuracy on an unseen test set can be achieved for patients where agreement between models is achieved. Finally, we point to a possible method to identify whether a patient's prognosis can be accurately predicted or not.
no_new_dataset
0.943764
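The selective-ensembling idea in the record above — trusting predictions only where multiple models agree — can be sketched in a few lines. The toy labels below are hypothetical; the point is that accuracy is then reported on the agreed subset only.

```python
import numpy as np

def selective_ensemble(preds):
    """preds: (n_models, n_patients) hard labels. Returns the consensus
    label and a boolean mask of cases where all models agree."""
    preds = np.asarray(preds)
    agree = np.all(preds == preds[0], axis=0)
    return preds[0], agree

preds = np.array([[1, 0, 1, 1],   # model A: 5-year survival yes/no
                  [1, 1, 1, 0],   # model B
                  [1, 0, 1, 0]])  # model C
labels, mask = selective_ensemble(preds)
y_true = np.array([1, 0, 1, 0])
print("accuracy on agreed cases:", (labels[mask] == y_true[mask]).mean())
```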
1607.08220
Mostofa Patwary
Md. Mostofa Ali Patwary, Nadathur Rajagopalan Satish, Narayanan Sundaram, Jialin Liu, Peter Sadowski, Evan Racah, Suren Byna, Craig Tull, Wahid Bhimji, Prabhat, Pradeep Dubey
PANDA: Extreme Scale Parallel K-Nearest Neighbor on Distributed Architectures
11 pages in PANDA: Extreme Scale Parallel K-Nearest Neighbor on Distributed Architectures, Md. Mostofa Ali Patwary et.al., IEEE International Parallel and Distributed Processing Symposium (IPDPS), 2016
null
10.1109/IPDPS.2016.57
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computing $k$-Nearest Neighbors (KNN) is one of the core kernels used in many machine learning, data mining and scientific computing applications. Although kd-tree based $O(\log n)$ algorithms have been proposed for computing KNN, due to their inherent sequentiality, linear algorithms are being used in practice. This limits the applicability of such methods to millions of data points, with limited scalability for Big Data analytics challenges in the scientific domain. In this paper, we present parallel and highly optimized kd-tree based KNN algorithms (both construction and querying) suitable for distributed architectures. Our algorithm includes novel approaches for pruning search space and improving load balancing and partitioning among nodes and threads. Using TB-sized datasets from three science applications: astrophysics, plasma physics, and particle physics, we show that our implementation can construct a kd-tree of 189 billion particles in 48 seconds utilizing $\sim$50,000 cores. We also demonstrate computation of KNN of 19 billion queries in 12 seconds. We demonstrate almost linear speedup for both shared and distributed memory computers. Our algorithms outperform earlier implementations by more than an order of magnitude, thereby radically improving the applicability of our implementation to state-of-the-art Big Data analytics problems. In addition, we showcase performance and scalability on the recently released Intel Xeon Phi processor, showing that our algorithm scales well even on massively parallel architectures.
[ { "version": "v1", "created": "Wed, 27 Jul 2016 19:13:07 GMT" } ]
2016-11-17T00:00:00
[ [ "Patwary", "Md. Mostofa Ali", "" ], [ "Satish", "Nadathur Rajagopalan", "" ], [ "Sundaram", "Narayanan", "" ], [ "Liu", "Jialin", "" ], [ "Sadowski", "Peter", "" ], [ "Racah", "Evan", "" ], [ "Byna", "Suren", "" ], [ "Tull", "Craig", "" ], [ "Bhimji", "Wahid", "" ], [ "Prabhat", "", "" ], [ "Dubey", "Pradeep", "" ] ]
TITLE: PANDA: Extreme Scale Parallel K-Nearest Neighbor on Distributed Architectures ABSTRACT: Computing $k$-Nearest Neighbors (KNN) is one of the core kernels used in many machine learning, data mining and scientific computing applications. Although kd-tree based $O(\log n)$ algorithms have been proposed for computing KNN, due to their inherent sequentiality, linear algorithms are being used in practice. This limits the applicability of such methods to millions of data points, with limited scalability for Big Data analytics challenges in the scientific domain. In this paper, we present parallel and highly optimized kd-tree based KNN algorithms (both construction and querying) suitable for distributed architectures. Our algorithm includes novel approaches for pruning search space and improving load balancing and partitioning among nodes and threads. Using TB-sized datasets from three science applications: astrophysics, plasma physics, and particle physics, we show that our implementation can construct a kd-tree of 189 billion particles in 48 seconds utilizing $\sim$50,000 cores. We also demonstrate computation of KNN of 19 billion queries in 12 seconds. We demonstrate almost linear speedup for both shared and distributed memory computers. Our algorithms outperform earlier implementations by more than an order of magnitude, thereby radically improving the applicability of our implementation to state-of-the-art Big Data analytics problems. In addition, we showcase performance and scalability on the recently released Intel Xeon Phi processor, showing that our algorithm scales well even on massively parallel architectures.
no_new_dataset
0.945349
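At scales far below the paper's distributed setting, the same construct-then-query kd-tree pattern is available off the shelf in SciPy; a minimal single-node sketch with toy data sizes follows.

```python
import numpy as np
from scipy.spatial import cKDTree

points = np.random.rand(1_000_000, 3)   # toy particle coordinates
queries = np.random.rand(10_000, 3)

tree = cKDTree(points)                   # O(n log n) construction
dists, idxs = tree.query(queries, k=5)   # 5 nearest neighbors per query
print(dists.shape, idxs.shape)           # (10000, 5) (10000, 5)
```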
1608.00990
Sebastian Liem
Sebastian Liem
Barrett: out-of-core processing of MultiNest output
5 pages, 1 figure
null
null
null
stat.CO physics.data-an
http://creativecommons.org/licenses/by/4.0/
Barrett is a Python package for processing and visualising statistical inferences made using the nested sampling algorithm MultiNest. The main feature differentiating it from competitors is full out-of-core processing, allowing Barrett to handle arbitrarily large datasets. This is achieved by using the HDF5 data format.
[ { "version": "v1", "created": "Tue, 2 Aug 2016 20:22:25 GMT" } ]
2016-11-17T00:00:00
[ [ "Liem", "Sebastian", "" ] ]
TITLE: Barrett: out-of-core processing of MultiNest output ABSTRACT: Barrett is a Python package for processing and visualising statistical inferences made using the nested sampling algorithm MultiNest. The main feature differentiating it from competitors is full out-of-core processing, allowing Barrett to handle arbitrarily large datasets. This is achieved by using the HDF5 data format.
no_new_dataset
0.940188
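The out-of-core pattern described in the Barrett record can be sketched with h5py: stream the HDF5 dataset in chunks and accumulate statistics so memory use stays bounded regardless of file size. The file name, dataset name, and chunk size below are illustrative assumptions, not Barrett's actual schema.

```python
import h5py

# Streaming (out-of-core) mean of posterior samples.
total, count = None, 0
with h5py.File("multinest_output.h5", "r") as f:
    dset = f["samples"]                      # assumed shape (n_samples, n_params)
    for start in range(0, dset.shape[0], 100_000):
        block = dset[start:start + 100_000]  # only this chunk is in memory
        s = block.sum(axis=0)
        total = s if total is None else total + s
        count += block.shape[0]
print(total / count)                         # per-parameter posterior mean
```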
1608.02755
Jordi Pont-Tuset
Kevis-Kokitsi Maninis and Jordi Pont-Tuset and Pablo Arbel\'aez and Luc Van Gool
Convolutional Oriented Boundaries
ECCV 2016 Camera Ready
null
10.1007/978-3-319-46448-0_35
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present Convolutional Oriented Boundaries (COB), which produces multiscale oriented contours and region hierarchies starting from generic image classification Convolutional Neural Networks (CNNs). COB is computationally efficient, because it requires a single CNN forward pass for contour detection and it uses a novel sparse boundary representation for hierarchical segmentation; it gives a significant leap in performance over the state-of-the-art, and it generalizes very well to unseen categories and datasets. Particularly, we show that learning to estimate not only contour strength but also orientation provides more accurate results. We perform extensive experiments on BSDS, PASCAL Context, PASCAL Segmentation, and MS-COCO, showing that COB provides state-of-the-art contours, region hierarchies, and object proposals in all datasets.
[ { "version": "v1", "created": "Tue, 9 Aug 2016 10:37:52 GMT" } ]
2016-11-17T00:00:00
[ [ "Maninis", "Kevis-Kokitsi", "" ], [ "Pont-Tuset", "Jordi", "" ], [ "Arbeláez", "Pablo", "" ], [ "Van Gool", "Luc", "" ] ]
TITLE: Convolutional Oriented Boundaries ABSTRACT: We present Convolutional Oriented Boundaries (COB), which produces multiscale oriented contours and region hierarchies starting from generic image classification Convolutional Neural Networks (CNNs). COB is computationally efficient, because it requires a single CNN forward pass for contour detection and it uses a novel sparse boundary representation for hierarchical segmentation; it gives a significant leap in performance over the state-of-the-art, and it generalizes very well to unseen categories and datasets. Particularly, we show that learning to estimate not only contour strength but also orientation provides more accurate results. We perform extensive experiments on BSDS, PASCAL Context, PASCAL Segmentation, and MS-COCO, showing that COB provides state-of-the-art contours, region hierarchies, and object proposals in all datasets.
no_new_dataset
0.954052
1609.01103
Kevis-Kokitsi Maninis
Kevis-Kokitsi Maninis and Jordi Pont-Tuset and Pablo Arbel\'aez and Luc Van Gool
Deep Retinal Image Understanding
MICCAI 2016 Camera Ready
null
10.1007/978-3-319-46723-8_17
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents Deep Retinal Image Understanding (DRIU), a unified framework of retinal image analysis that provides both retinal vessel and optic disc segmentation. We make use of deep Convolutional Neural Networks (CNNs), which have proven revolutionary in other fields of computer vision such as object detection and image classification, and we bring their power to the study of eye fundus images. DRIU uses a base network architecture on which two sets of specialized layers are trained to solve both retinal vessel and optic disc segmentation. We present experimental validation, both qualitative and quantitative, in four public datasets for these tasks. In all of them, DRIU presents super-human performance, that is, it shows results more consistent with a gold standard than a second human annotator used as a control.
[ { "version": "v1", "created": "Mon, 5 Sep 2016 11:20:30 GMT" } ]
2016-11-17T00:00:00
[ [ "Maninis", "Kevis-Kokitsi", "" ], [ "Pont-Tuset", "Jordi", "" ], [ "Arbeláez", "Pablo", "" ], [ "Van Gool", "Luc", "" ] ]
TITLE: Deep Retinal Image Understanding ABSTRACT: This paper presents Deep Retinal Image Understanding (DRIU), a unified framework of retinal image analysis that provides both retinal vessel and optic disc segmentation. We make use of deep Convolutional Neural Networks (CNNs), which have proven revolutionary in other fields of computer vision such as object detection and image classification, and we bring their power to the study of eye fundus images. DRIU uses a base network architecture on which two sets of specialized layers are trained to solve both retinal vessel and optic disc segmentation. We present experimental validation, both qualitative and quantitative, in four public datasets for these tasks. In all of them, DRIU presents super-human performance, that is, it shows results more consistent with a gold standard than a second human annotator used as a control.
no_new_dataset
0.952926
1609.05871
Avisek Lahiri
Avisek Lahiri, Abhijit Guha Roy, Debdoot Sheet, Prabir Kumar Biswas
Deep Neural Ensemble for Retinal Vessel Segmentation in Fundus Images towards Achieving Label-free Angiography
Accepted as a conference paper at IEEE EMBC, 2016
null
10.1109/EMBC.2016.7590955
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automated segmentation of retinal blood vessels in label-free fundus images plays a pivotal role in computer-aided diagnosis of ophthalmic pathologies, viz., diabetic retinopathy, hypertensive disorders and cardiovascular diseases. The challenge remains active in medical image analysis research due to the varied distribution of blood vessels, which manifest variations in the dimensions of their physical appearance against a noisy background. In this paper we formulate the segmentation challenge as a classification task. Specifically, we employ unsupervised hierarchical feature learning using an ensemble of two levels of sparsely trained denoising stacked autoencoders. First-level training with bootstrap samples ensures decoupling, and the second-level ensemble, formed by different network architectures, ensures architectural revision. We show that ensemble training of autoencoders fosters diversity in learning a dictionary of visual kernels for vessel segmentation. A SoftMax classifier is used for fine-tuning each member autoencoder, and multiple strategies are explored for two-level fusion of ensemble members. On the DRIVE dataset, we achieve a maximum average accuracy of 95.33\% with an impressively low standard deviation of 0.003 and a Kappa agreement coefficient of 0.708. Comparison with other major algorithms substantiates the high efficacy of our model.
[ { "version": "v1", "created": "Mon, 19 Sep 2016 19:11:05 GMT" } ]
2016-11-17T00:00:00
[ [ "Lahiri", "Avisek", "" ], [ "Roy", "Abhijit Guha", "" ], [ "Sheet", "Debdoot", "" ], [ "Biswas", "Prabir Kumar", "" ] ]
TITLE: Deep Neural Ensemble for Retinal Vessel Segmentation in Fundus Images towards Achieving Label-free Angiography ABSTRACT: Automated segmentation of retinal blood vessels in label-free fundus images plays a pivotal role in computer-aided diagnosis of ophthalmic pathologies, viz., diabetic retinopathy, hypertensive disorders and cardiovascular diseases. The challenge remains active in medical image analysis research due to the varied distribution of blood vessels, which manifest variations in the dimensions of their physical appearance against a noisy background. In this paper we formulate the segmentation challenge as a classification task. Specifically, we employ unsupervised hierarchical feature learning using an ensemble of two levels of sparsely trained denoising stacked autoencoders. First-level training with bootstrap samples ensures decoupling, and the second-level ensemble, formed by different network architectures, ensures architectural revision. We show that ensemble training of autoencoders fosters diversity in learning a dictionary of visual kernels for vessel segmentation. A SoftMax classifier is used for fine-tuning each member autoencoder, and multiple strategies are explored for two-level fusion of ensemble members. On the DRIVE dataset, we achieve a maximum average accuracy of 95.33\% with an impressively low standard deviation of 0.003 and a Kappa agreement coefficient of 0.708. Comparison with other major algorithms substantiates the high efficacy of our model.
no_new_dataset
0.949669
1609.09471
Lovedeep Gondara
Lovedeep Gondara
Classifier comparison using precision
Extended version
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Newly proposed models are often compared to the state of the art using statistical significance testing. The literature is scarce on classifier comparison using metrics other than accuracy. We present a survey of statistical methods that can be used for classifier comparison using precision, accounting for inter-precision correlation arising from the use of the same dataset. Comparisons are made using per-class precision, and methods are presented to test the global null hypothesis of an overall model comparison. Comparisons are extended to multiple multi-class classifiers and to models using cross validation or its variants. A partial Bayesian update to precision is introduced when the population prevalence of a class is known. Applications to compare deep architectures are studied.
[ { "version": "v1", "created": "Thu, 29 Sep 2016 19:19:29 GMT" }, { "version": "v2", "created": "Wed, 16 Nov 2016 01:43:21 GMT" } ]
2016-11-17T00:00:00
[ [ "Gondara", "Lovedeep", "" ] ]
TITLE: Classifier comparison using precision ABSTRACT: Newly proposed models are often compared to the state of the art using statistical significance testing. The literature is scarce on classifier comparison using metrics other than accuracy. We present a survey of statistical methods that can be used for classifier comparison using precision, accounting for inter-precision correlation arising from the use of the same dataset. Comparisons are made using per-class precision, and methods are presented to test the global null hypothesis of an overall model comparison. Comparisons are extended to multiple multi-class classifiers and to models using cross validation or its variants. A partial Bayesian update to precision is introduced when the population prevalence of a class is known. Applications to compare deep architectures are studied.
no_new_dataset
0.943191
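The record above surveys analytic tests; as a hedged illustration of why the shared-dataset correlation matters, the sketch below uses a paired bootstrap — resampling the same test items jointly for both models — to compare per-class precision. This is a simple resampling alternative for intuition, not the paper's method.

```python
import numpy as np

def precision(y_true, y_pred, cls):
    pred_c = y_pred == cls
    return (y_true[pred_c] == cls).mean() if pred_c.any() else 0.0

def paired_bootstrap(y_true, pred_a, pred_b, cls, n_boot=10_000, seed=0):
    """Resample test items jointly for both models, so the correlation
    induced by the shared dataset is preserved in the null distribution."""
    rng = np.random.default_rng(seed)
    n, diffs = len(y_true), []
    for _ in range(n_boot):
        i = rng.integers(0, n, n)
        diffs.append(precision(y_true[i], pred_a[i], cls)
                     - precision(y_true[i], pred_b[i], cls))
    diffs = np.asarray(diffs)
    # two-sided p-value for H0: equal per-class precision
    return 2 * min((diffs <= 0).mean(), (diffs >= 0).mean())
```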
1610.01706
Chunhua Shen
Yuanzhouhan Cao, Chunhua Shen, Heng Tao Shen
Exploiting Depth from Single Monocular Images for Object Detection and Semantic Segmentation
14 pages. Accepted to IEEE T. Image Processing
null
10.1109/TIP.2016.2621673
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Augmenting RGB data with measured depth has been shown to improve the performance of a range of tasks in computer vision including object detection and semantic segmentation. Although depth sensors such as the Microsoft Kinect have facilitated easy acquisition of such depth information, the vast majority of images used in vision tasks do not contain depth information. In this paper, we show that augmenting RGB images with estimated depth can also improve the accuracy of both object detection and semantic segmentation. Specifically, we first exploit the recent success of depth estimation from monocular images and learn a deep depth estimation model. Then we learn deep depth features from the estimated depth and combine with RGB features for object detection and semantic segmentation. Additionally, we propose an RGB-D semantic segmentation method which applies a multi-task training scheme: semantic label prediction and depth value regression. We test our methods on several datasets and demonstrate that incorporating information from estimated depth improves the performance of object detection and semantic segmentation remarkably.
[ { "version": "v1", "created": "Thu, 6 Oct 2016 01:30:46 GMT" } ]
2016-11-17T00:00:00
[ [ "Cao", "Yuanzhouhan", "" ], [ "Shen", "Chunhua", "" ], [ "Shen", "Heng Tao", "" ] ]
TITLE: Exploiting Depth from Single Monocular Images for Object Detection and Semantic Segmentation ABSTRACT: Augmenting RGB data with measured depth has been shown to improve the performance of a range of tasks in computer vision including object detection and semantic segmentation. Although depth sensors such as the Microsoft Kinect have facilitated easy acquisition of such depth information, the vast majority of images used in vision tasks do not contain depth information. In this paper, we show that augmenting RGB images with estimated depth can also improve the accuracy of both object detection and semantic segmentation. Specifically, we first exploit the recent success of depth estimation from monocular images and learn a deep depth estimation model. Then we learn deep depth features from the estimated depth and combine with RGB features for object detection and semantic segmentation. Additionally, we propose an RGB-D semantic segmentation method which applies a multi-task training scheme: semantic label prediction and depth value regression. We test our methods on several datasets and demonstrate that incorporating information from estimated depth improves the performance of object detection and semantic segmentation remarkably.
no_new_dataset
0.952794
1611.02064
Avijit Dasgupta
Avijit Dasgupta and Sonam Singh
A Fully Convolutional Neural Network based Structured Prediction Approach Towards the Retinal Vessel Segmentation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic segmentation of retinal blood vessels from fundus images plays an important role in the computer aided diagnosis of retinal diseases. The task of blood vessel segmentation is challenging due to the extreme variations in morphology of the vessels against noisy background. In this paper, we formulate the segmentation task as a multi-label inference task and utilize the implicit advantages of the combination of convolutional neural networks and structured prediction. Our proposed convolutional neural network based model achieves strong performance and significantly outperforms the state-of-the-art for automatic retinal blood vessel segmentation on DRIVE dataset with 95.33% accuracy and 0.974 AUC score.
[ { "version": "v1", "created": "Mon, 7 Nov 2016 14:16:18 GMT" }, { "version": "v2", "created": "Wed, 16 Nov 2016 09:21:40 GMT" } ]
2016-11-17T00:00:00
[ [ "Dasgupta", "Avijit", "" ], [ "Singh", "Sonam", "" ] ]
TITLE: A Fully Convolutional Neural Network based Structured Prediction Approach Towards the Retinal Vessel Segmentation ABSTRACT: Automatic segmentation of retinal blood vessels from fundus images plays an important role in the computer aided diagnosis of retinal diseases. The task of blood vessel segmentation is challenging due to the extreme variations in morphology of the vessels against noisy background. In this paper, we formulate the segmentation task as a multi-label inference task and utilize the implicit advantages of the combination of convolutional neural networks and structured prediction. Our proposed convolutional neural network based model achieves strong performance and significantly outperforms the state-of-the-art for automatic retinal blood vessel segmentation on DRIVE dataset with 95.33% accuracy and 0.974 AUC score.
no_new_dataset
0.951863
1611.05132
Cheng Tang
Cheng Tang, Claire Monteleoni
Convergence rate of stochastic k-means
arXiv admin note: substantial text overlap with arXiv:1610.04900
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We analyze online \cite{BottouBengio} and mini-batch \cite{Sculley} $k$-means variants. Both scale up the widely used $k$-means algorithm via stochastic approximation, and have become popular for large-scale clustering and unsupervised feature learning. We show, for the first time, that starting with any initial solution, they converge to a "local optimum" at rate $O(\frac{1}{t})$ (in terms of the $k$-means objective) under general conditions. In addition, we show that if the dataset is clusterable, when initialized with a simple and scalable seeding algorithm, mini-batch $k$-means converges to an optimal $k$-means solution at rate $O(\frac{1}{t})$ with high probability. The $k$-means objective is non-convex and non-differentiable: we exploit ideas from recent work on stochastic gradient descent for non-convex problems \cite{ge:sgd_tensor, balsubramani13} by providing a novel characterization of the trajectory of the $k$-means algorithm on its solution space, and circumvent the non-differentiability problem via geometric insights about the $k$-means update.
[ { "version": "v1", "created": "Wed, 16 Nov 2016 03:28:08 GMT" } ]
2016-11-17T00:00:00
[ [ "Tang", "Cheng", "" ], [ "Monteleoni", "Claire", "" ] ]
TITLE: Convergence rate of stochastic k-means ABSTRACT: We analyze online \cite{BottouBengio} and mini-batch \cite{Sculley} $k$-means variants. Both scale up the widely used $k$-means algorithm via stochastic approximation, and have become popular for large-scale clustering and unsupervised feature learning. We show, for the first time, that starting with any initial solution, they converge to a "local optimum" at rate $O(\frac{1}{t})$ (in terms of the $k$-means objective) under general conditions. In addition, we show that if the dataset is clusterable, when initialized with a simple and scalable seeding algorithm, mini-batch $k$-means converges to an optimal $k$-means solution at rate $O(\frac{1}{t})$ with high probability. The $k$-means objective is non-convex and non-differentiable: we exploit ideas from recent work on stochastic gradient descent for non-convex problems \cite{ge:sgd_tensor, balsubramani13} by providing a novel characterization of the trajectory of the $k$-means algorithm on its solution space, and circumvent the non-differentiability problem via geometric insights about the $k$-means update.
no_new_dataset
0.941547
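The mini-batch variant analyzed in the record above (Sculley's update, with a per-center O(1/t) step size) can be sketched directly; the random initialization and batch size below are simple placeholders rather than the clusterable-seeding scheme the paper assumes for its optimality result.

```python
import numpy as np

def minibatch_kmeans(X, k, batch=100, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)].copy()  # seed centers
    counts = np.zeros(k)
    for _ in range(iters):
        B = X[rng.integers(0, len(X), batch)]
        # assign each batch point to its nearest current center
        assign = np.argmin(((B[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for j, x in zip(assign, B):
            counts[j] += 1
            eta = 1.0 / counts[j]          # per-center O(1/t) step size
            C[j] = (1 - eta) * C[j] + eta * x
    return C
```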
1611.05138
Shuangfei Zhai
Shuangfei Zhai, Hui Wu, Abhishek Kumar, Yu Cheng, Yongxi Lu, Zhongfei Zhang, Rogerio Feris
S3Pool: Pooling with Stochastic Spatial Sampling
null
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Feature pooling layers (e.g., max pooling) in convolutional neural networks (CNNs) serve the dual purpose of providing increasingly abstract representations as well as yielding computational savings in subsequent convolutional layers. We view the pooling operation in CNNs as a two-step procedure: first, a pooling window (e.g., $2\times 2$) slides over the feature map with stride one which leaves the spatial resolution intact, and second, downsampling is performed by selecting one pixel from each non-overlapping pooling window in an often uniform and deterministic (e.g., top-left) manner. Our starting point in this work is the observation that this regularly spaced downsampling arising from non-overlapping windows, although intuitive from a signal processing perspective (which has the goal of signal reconstruction), is not necessarily optimal for \emph{learning} (where the goal is to generalize). We study this aspect and propose a novel pooling strategy with stochastic spatial sampling (S3Pool), where the regular downsampling is replaced by a more general stochastic version. We observe that this general stochasticity acts as a strong regularizer, and can also be seen as doing implicit data augmentation by introducing distortions in the feature maps. We further introduce a mechanism to control the amount of distortion to suit different datasets and architectures. To demonstrate the effectiveness of the proposed approach, we perform extensive experiments on several popular image classification benchmarks, observing excellent improvements over baseline models. Experimental code is available at https://github.com/Shuangfei/s3pool.
[ { "version": "v1", "created": "Wed, 16 Nov 2016 04:17:52 GMT" } ]
2016-11-17T00:00:00
[ [ "Zhai", "Shuangfei", "" ], [ "Wu", "Hui", "" ], [ "Kumar", "Abhishek", "" ], [ "Cheng", "Yu", "" ], [ "Lu", "Yongxi", "" ], [ "Zhang", "Zhongfei", "" ], [ "Feris", "Rogerio", "" ] ]
TITLE: S3Pool: Pooling with Stochastic Spatial Sampling ABSTRACT: Feature pooling layers (e.g., max pooling) in convolutional neural networks (CNNs) serve the dual purpose of providing increasingly abstract representations as well as yielding computational savings in subsequent convolutional layers. We view the pooling operation in CNNs as a two-step procedure: first, a pooling window (e.g., $2\times 2$) slides over the feature map with stride one which leaves the spatial resolution intact, and second, downsampling is performed by selecting one pixel from each non-overlapping pooling window in an often uniform and deterministic (e.g., top-left) manner. Our starting point in this work is the observation that this regularly spaced downsampling arising from non-overlapping windows, although intuitive from a signal processing perspective (which has the goal of signal reconstruction), is not necessarily optimal for \emph{learning} (where the goal is to generalize). We study this aspect and propose a novel pooling strategy with stochastic spatial sampling (S3Pool), where the regular downsampling is replaced by a more general stochastic version. We observe that this general stochasticity acts as a strong regularizer, and can also be seen as doing implicit data augmentation by introducing distortions in the feature maps. We further introduce a mechanism to control the amount of distortion to suit different datasets and architectures. To demonstrate the effectiveness of the proposed approach, we perform extensive experiments on several popular image classification benchmarks, observing excellent improvements over baseline models. Experimental code is available at https://github.com/Shuangfei/s3pool.
no_new_dataset
0.951549
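A simplified single-channel sketch of the two-step view described in the S3Pool record: stride-1 max pooling, then stochastic downsampling that picks one random pixel per non-overlapping g x g cell (the deterministic baseline would always pick a fixed corner). The real layer operates on batched multi-channel feature maps and controls the amount of distortion through a grid-size hyperparameter.

```python
import numpy as np

def s3pool(x, g=2, seed=None):
    """Stochastic spatial pooling on a 2D map x."""
    rng = np.random.default_rng(seed)
    h, w = x.shape
    # step 1: stride-1 max pooling with a g x g window (valid region)
    m = np.full((h - g + 1, w - g + 1), -np.inf)
    for dy in range(g):
        for dx in range(g):
            m = np.maximum(m, x[dy:dy + h - g + 1, dx:dx + w - g + 1])
    # step 2: stochastic downsampling, one random pixel per g x g cell
    H, W = m.shape[0] // g, m.shape[1] // g
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = m[i * g + rng.integers(g), j * g + rng.integers(g)]
    return out

print(s3pool(np.arange(36.0).reshape(6, 6), g=2, seed=0))
```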
1611.05141
Eric Hunsberger
Eric Hunsberger, Chris Eliasmith
Training Spiking Deep Networks for Neuromorphic Hardware
10 pages, 3 figures, 4 tables; the "methods" section of this article draws heavily on arXiv:1510.08829
null
10.13140/RG.2.2.10967.06566
null
cs.NE cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe a method to train spiking deep networks that can be run using leaky integrate-and-fire (LIF) neurons, achieving state-of-the-art results for spiking LIF networks on five datasets, including the large ImageNet ILSVRC-2012 benchmark. Our method for transforming deep artificial neural networks into spiking networks is scalable and works with a wide range of neural nonlinearities. We achieve these results by softening the neural response function, such that its derivative remains bounded, and by training the network with noise to provide robustness against the variability introduced by spikes. Our analysis shows that implementations of these networks on neuromorphic hardware will be many times more power-efficient than the equivalent non-spiking networks on traditional hardware.
[ { "version": "v1", "created": "Wed, 16 Nov 2016 04:32:22 GMT" } ]
2016-11-17T00:00:00
[ [ "Hunsberger", "Eric", "" ], [ "Eliasmith", "Chris", "" ] ]
TITLE: Training Spiking Deep Networks for Neuromorphic Hardware ABSTRACT: We describe a method to train spiking deep networks that can be run using leaky integrate-and-fire (LIF) neurons, achieving state-of-the-art results for spiking LIF networks on five datasets, including the large ImageNet ILSVRC-2012 benchmark. Our method for transforming deep artificial neural networks into spiking networks is scalable and works with a wide range of neural nonlinearities. We achieve these results by softening the neural response function, such that its derivative remains bounded, and by training the network with noise to provide robustness against the variability introduced by spikes. Our analysis shows that implementations of these networks on neuromorphic hardware will be many times more power-efficient than the equivalent non-spiking networks on traditional hardware.
no_new_dataset
0.949012
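One common way to implement the softening described in the record above is to replace the hard LIF threshold with a softplus, so the rate function's derivative stays bounded and the unit can be trained with backprop. A sketch follows; the time constants and smoothing parameter are illustrative values, not the paper's tuned settings.

```python
import numpy as np

def soft_lif_rate(x, tau_rc=0.02, tau_ref=0.002, gamma=0.03):
    """Smoothed LIF firing rate. The hard rate 1/(tau_ref + tau_rc*ln(1+1/j))
    with j = max(x - 1, 0) has an unbounded derivative at threshold; replacing
    the max with a softplus j = gamma*ln(1+exp((x-1)/gamma)) bounds it."""
    j = gamma * np.logaddexp(0.0, (x - 1.0) / gamma)   # stable softplus
    return 1.0 / (tau_ref + tau_rc * np.log1p(1.0 / np.maximum(j, 1e-12)))

print(soft_lif_rate(np.array([0.5, 1.0, 1.5, 3.0])))   # rates in Hz
```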
1611.05215
Yemin Shi Shi
Yemin Shi and Yonghong Tian and Yaowei Wang and Tiejun Huang
Joint Network based Attention for Action Recognition
8 pages, 5 figures, JNA
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
By extracting spatial and temporal characteristics in one network, the two-stream ConvNets can achieve the state-of-the-art performance in action recognition. However, such a framework typically suffers from the separate processing of spatial and temporal information in the two standalone streams and finds it hard to capture the long-term temporal dependence of an action. More importantly, it is incapable of finding the salient portions of an action, say, the frames that are the most discriminative for identifying the action. To address these problems, a \textbf{j}oint \textbf{n}etwork based \textbf{a}ttention (JNA) is proposed in this study. We find that the fully-connected fusion, branch selection and spatial attention mechanism are totally infeasible for action recognition. Thus in our joint network, the spatial and temporal branches share some information during the training stage. We also introduce an attention mechanism on the temporal domain to capture the long-term dependence while finding the salient portions. Extensive experiments are conducted on two benchmark datasets, UCF101 and HMDB51. Experimental results show that our method can improve the action recognition performance significantly and achieves the state-of-the-art results on both datasets.
[ { "version": "v1", "created": "Wed, 16 Nov 2016 10:40:30 GMT" } ]
2016-11-17T00:00:00
[ [ "Shi", "Yemin", "" ], [ "Tian", "Yonghong", "" ], [ "Wang", "Yaowei", "" ], [ "Huang", "Tiejun", "" ] ]
TITLE: Joint Network based Attention for Action Recognition ABSTRACT: By extracting spatial and temporal characteristics in one network, the two-stream ConvNets can achieve the state-of-the-art performance in action recognition. However, such a framework typically suffers from the separate processing of spatial and temporal information in the two standalone streams and finds it hard to capture the long-term temporal dependence of an action. More importantly, it is incapable of finding the salient portions of an action, say, the frames that are the most discriminative for identifying the action. To address these problems, a \textbf{j}oint \textbf{n}etwork based \textbf{a}ttention (JNA) is proposed in this study. We find that the fully-connected fusion, branch selection and spatial attention mechanism are totally infeasible for action recognition. Thus in our joint network, the spatial and temporal branches share some information during the training stage. We also introduce an attention mechanism on the temporal domain to capture the long-term dependence while finding the salient portions. Extensive experiments are conducted on two benchmark datasets, UCF101 and HMDB51. Experimental results show that our method can improve the action recognition performance significantly and achieves the state-of-the-art results on both datasets.
no_new_dataset
0.946794
1611.05267
Colin Lea
Colin Lea, Michael D. Flynn, Rene Vidal, Austin Reiter, Gregory D. Hager
Temporal Convolutional Networks for Action Segmentation and Detection
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ability to identify and temporally segment fine-grained human actions throughout a video is crucial for robotics, surveillance, education, and beyond. Typical approaches decouple this problem by first extracting local spatiotemporal features from video frames and then feeding them into a temporal classifier that captures high-level temporal patterns. We introduce a new class of temporal models, which we call Temporal Convolutional Networks (TCNs), that use a hierarchy of temporal convolutions to perform fine-grained action segmentation or detection. Our Encoder-Decoder TCN uses pooling and upsampling to efficiently capture long-range temporal patterns whereas our Dilated TCN uses dilated convolutions. We show that TCNs are capable of capturing action compositions, segment durations, and long-range dependencies, and are over an order of magnitude faster to train than competing LSTM-based Recurrent Neural Networks. We apply these models to three challenging fine-grained datasets and show large improvements over the state of the art.
[ { "version": "v1", "created": "Wed, 16 Nov 2016 13:19:19 GMT" } ]
2016-11-17T00:00:00
[ [ "Lea", "Colin", "" ], [ "Flynn", "Michael D.", "" ], [ "Vidal", "Rene", "" ], [ "Reiter", "Austin", "" ], [ "Hager", "Gregory D.", "" ] ]
TITLE: Temporal Convolutional Networks for Action Segmentation and Detection ABSTRACT: The ability to identify and temporally segment fine-grained human actions throughout a video is crucial for robotics, surveillance, education, and beyond. Typical approaches decouple this problem by first extracting local spatiotemporal features from video frames and then feeding them into a temporal classifier that captures high-level temporal patterns. We introduce a new class of temporal models, which we call Temporal Convolutional Networks (TCNs), that use a hierarchy of temporal convolutions to perform fine-grained action segmentation or detection. Our Encoder-Decoder TCN uses pooling and upsampling to efficiently capture long-range temporal patterns whereas our Dilated TCN uses dilated convolutions. We show that TCNs are capable of capturing action compositions, segment durations, and long-range dependencies, and are over an order of magnitude faster to train than competing LSTM-based Recurrent Neural Networks. We apply these models to three challenging fine-grained datasets and show large improvements over the state of the art.
no_new_dataset
0.947769
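The dilated variant described in the record above builds on causal dilated temporal convolutions. A minimal one-channel sketch follows (illustrative weights, no nonlinearity, normalization, or encoder-decoder structure), showing how stacking dilations 1, 2, 4 grows the receptive field exponentially.

```python
import numpy as np

def causal_dilated_conv(x, w, dilation):
    """y[t] = sum_k w[k] * x[t - k*dilation], zero-padded on the left,
    so each output frame depends only on past frames."""
    T, K = len(x), len(w)
    y = np.zeros(T)
    for t in range(T):
        for k in range(K):
            s = t - k * dilation
            if s >= 0:
                y[t] += w[k] * x[s]
    return y

x = np.random.randn(100)               # one feature channel over 100 frames
w = np.array([0.5, 0.3, 0.2])          # illustrative filter
h = causal_dilated_conv(x, w, dilation=1)
h = causal_dilated_conv(h, w, dilation=2)
h = causal_dilated_conv(h, w, dilation=4)
```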
1611.05271
Shu Zhang
Shu Zhang, Ran He, and Tieniu Tan
DeMeshNet: Blind Face Inpainting for Deep MeshFace Verification
10pages, submitted to CVPR 17
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
MeshFace photos have been widely used in many Chinese business organizations to protect ID face photos from being misused. The occlusions incurred by random meshes severely degrade the performance of face verification systems, which raises the MeshFace verification problem between MeshFace and daily photos. Previous methods cast this problem as a typical low-level vision problem, i.e. blind inpainting. They recover perceptually pleasing clear ID photos from MeshFaces by enforcing pixel level similarity between the recovered ID images and the ground-truth clear ID images and then perform face verification on them. Essentially, face verification is conducted on a compact feature space rather than the image pixel space. Therefore, this paper argues that pixel level similarity and feature level similarity jointly offer the key to improving the verification performance. Based on this insight, we offer a novel feature oriented blind face inpainting framework. Specifically, we implement this by establishing a novel DeMeshNet, which consists of three parts. The first part addresses blind inpainting of the MeshFaces by implicitly exploiting extra supervision from the occlusion position to enforce pixel level similarity. The second part explicitly enforces a feature level similarity in the compact feature space, which can explore informative supervision from the feature space to produce better inpainting results for verification. The last part copes with face alignment within the net via a customized spatial transformer module when extracting deep facial features. All the three parts are implemented within an end-to-end network that facilitates efficient optimization. Extensive experiments on two MeshFace datasets demonstrate the effectiveness of the proposed DeMeshNet as well as the insight of this paper.
[ { "version": "v1", "created": "Wed, 16 Nov 2016 13:36:45 GMT" } ]
2016-11-17T00:00:00
[ [ "Zhang", "Shu", "" ], [ "He", "Ran", "" ], [ "Tan", "Tieniu", "" ] ]
TITLE: DeMeshNet: Blind Face Inpainting for Deep MeshFace Verification ABSTRACT: MeshFace photos have been widely used in many Chinese business organizations to protect ID face photos from being misused. The occlusions incurred by random meshes severely degenerate the performance of face verification systems, which raises the MeshFace verification problem between MeshFace and daily photos. Previous methods cast this problem as a typical low-level vision problem, i.e. blind inpainting. They recover perceptually pleasing clear ID photos from MeshFaces by enforcing pixel level similarity between the recovered ID images and the ground-truth clear ID images and then perform face verification on them. Essentially, face verification is conducted on a compact feature space rather than the image pixel space. Therefore, this paper argues that pixel level similarity and feature level similarity jointly offer the key to improve the verification performance. Based on this insight, we offer a novel feature oriented blind face inpainting framework. Specifically, we implement this by establishing a novel DeMeshNet, which consists of three parts. The first part addresses blind inpainting of the MeshFaces by implicitly exploiting extra supervision from the occlusion position to enforce pixel level similarity. The second part explicitly enforces a feature level similarity in the compact feature space, which can explore informative supervision from the feature space to produce better inpainting results for verification. The last part copes with face alignment within the net via a customized spatial transformer module when extracting deep facial features. All the three parts are implemented within an end-to-end network that facilitates efficient optimization. Extensive experiments on two MeshFace datasets demonstrate the effectiveness of the proposed DeMeshNet as well as the insight of this paper.
no_new_dataset
0.951729
1611.05328
Zhiwei Jin
Zhiwei Jin, Juan Cao, Jiebo Luo, and Yongdong Zhang
Image Credibility Analysis with Effective Domain Transferred Deep Networks
null
null
null
null
cs.MM cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Numerous fake images spread on social media today and can severely jeopardize the credibility of online content to the public. In this paper, we employ deep networks to learn distinct fake image related features. In contrast to authentic images, fake images tend to be eye-catching and visually striking. Compared with traditional visual recognition tasks, it is extremely challenging to understand these psychologically triggered visual patterns in fake images. Traditional general image classification datasets, such as the ImageNet set, are designed for feature learning at the object level but are not suitable for learning the hyper-features that would be required by image credibility analysis. In order to overcome the scarcity of training samples of fake images, we first construct a large-scale auxiliary dataset indirectly related to this task. This auxiliary dataset contains 0.6 million weakly-labeled fake and real images collected automatically from social media. Through an AdaBoost-like transfer learning algorithm, we train a CNN model with a few instances in the target training set and 0.6 million images in the collected auxiliary set. This learning algorithm is able to leverage knowledge from the auxiliary set and gradually transfer it to the target task. Experiments on a real-world testing set show that our proposed domain transferred CNN model outperforms several competing baselines. It obtains superior results over transfer learning methods based on the general ImageNet set. Moreover, case studies show that our proposed method reveals some interesting patterns for distinguishing fake and authentic images.
[ { "version": "v1", "created": "Wed, 16 Nov 2016 15:45:19 GMT" } ]
2016-11-17T00:00:00
[ [ "Jin", "Zhiwei", "" ], [ "Cao", "Juan", "" ], [ "Luo", "Jiebo", "" ], [ "Zhang", "Yongdong", "" ] ]
TITLE: Image Credibility Analysis with Effective Domain Transferred Deep Networks ABSTRACT: Numerous fake images spread on social media today and can severely jeopardize the credibility of online content to the public. In this paper, we employ deep networks to learn distinct fake image related features. In contrast to authentic images, fake images tend to be eye-catching and visually striking. Compared with traditional visual recognition tasks, it is extremely challenging to understand these psychologically triggered visual patterns in fake images. Traditional general image classification datasets, such as the ImageNet set, are designed for feature learning at the object level but are not suitable for learning the hyper-features that would be required by image credibility analysis. In order to overcome the scarcity of training samples of fake images, we first construct a large-scale auxiliary dataset indirectly related to this task. This auxiliary dataset contains 0.6 million weakly-labeled fake and real images collected automatically from social media. Through an AdaBoost-like transfer learning algorithm, we train a CNN model with a few instances in the target training set and 0.6 million images in the collected auxiliary set. This learning algorithm is able to leverage knowledge from the auxiliary set and gradually transfer it to the target task. Experiments on a real-world testing set show that our proposed domain transferred CNN model outperforms several competing baselines. It obtains superior results over transfer learning methods based on the general ImageNet set. Moreover, case studies show that our proposed method reveals some interesting patterns for distinguishing fake and authentic images.
new_dataset
0.971266
1611.05369
Melanie Mitchell
Anthony D. Rhodes, Max H. Quinn, and Melanie Mitchell
Fast On-Line Kernel Density Estimation for Active Object Localization
arXiv admin note: text overlap with arXiv:1607.00548
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A major goal of computer vision is to enable computers to interpret visual situations---abstract concepts (e.g., "a person walking a dog," "a crowd waiting for a bus," "a picnic") whose image instantiations are linked more by their common spatial and semantic structure than by low-level visual similarity. In this paper, we propose a novel method for prior learning and active object localization for this kind of knowledge-driven search in static images. In our system, prior situation knowledge is captured by a set of flexible, kernel-based density estimations---a situation model---that represent the expected spatial structure of the given situation. These estimations are efficiently updated by information gained as the system searches for relevant objects, allowing the system to use context as it is discovered to narrow the search. More specifically, at any given time in a run on a test image, our system uses image features plus contextual information it has discovered to identify a small subset of training images---an importance cluster---that is deemed most similar to the given test image, given the context. This subset is used to generate an updated situation model in an on-line fashion, using an efficient multipole expansion technique. As a proof of concept, we apply our algorithm to a highly varied and challenging dataset consisting of instances of a "dog-walking" situation. Our results support the hypothesis that dynamically-rendered, context-based probability models can support efficient object localization in visual situations. Moreover, our approach is general enough to be applied to diverse machine learning paradigms requiring interpretable, probabilistic representations generated from partially observed data.
[ { "version": "v1", "created": "Wed, 16 Nov 2016 17:04:35 GMT" } ]
2016-11-17T00:00:00
[ [ "Rhodes", "Anthony D.", "" ], [ "Quinn", "Max H.", "" ], [ "Mitchell", "Melanie", "" ] ]
TITLE: Fast On-Line Kernel Density Estimation for Active Object Localization ABSTRACT: A major goal of computer vision is to enable computers to interpret visual situations---abstract concepts (e.g., "a person walking a dog," "a crowd waiting for a bus," "a picnic") whose image instantiations are linked more by their common spatial and semantic structure than by low-level visual similarity. In this paper, we propose a novel method for prior learning and active object localization for this kind of knowledge-driven search in static images. In our system, prior situation knowledge is captured by a set of flexible, kernel-based density estimations---a situation model---that represent the expected spatial structure of the given situation. These estimations are efficiently updated by information gained as the system searches for relevant objects, allowing the system to use context as it is discovered to narrow the search. More specifically, at any given time in a run on a test image, our system uses image features plus contextual information it has discovered to identify a small subset of training images---an importance cluster---that is deemed most similar to the given test image, given the context. This subset is used to generate an updated situation model in an on-line fashion, using an efficient multipole expansion technique. As a proof of concept, we apply our algorithm to a highly varied and challenging dataset consisting of instances of a "dog-walking" situation. Our results support the hypothesis that dynamically-rendered, context-based probability models can support efficient object localization in visual situations. Moreover, our approach is general enough to be applied to diverse machine learning paradigms requiring interpretable, probabilistic representations generated from partially observed data.
no_new_dataset
0.508758
1611.05425
Tim Weninger PhD
Baoxu Shi and Tim Weninger
ProjE: Embedding Projection for Knowledge Graph Completion
14 pages, Accepted to AAAI 2017
null
null
null
cs.AI stat.ML
http://creativecommons.org/licenses/by/4.0/
With the large volume of new information created every day, determining the validity of information in a knowledge graph and filling in its missing parts are crucial tasks for many researchers and practitioners. To address this challenge, a number of knowledge graph completion methods have been developed using low-dimensional graph embeddings. Although researchers continue to improve these models using an increasingly complex feature space, we show that simple changes in the architecture of the underlying model can outperform state-of-the-art models without the need for complex feature engineering. In this work, we present a shared variable neural network model called ProjE that fills in missing information in a knowledge graph by learning joint embeddings of the knowledge graph's entities and edges, and through subtle, but important, changes to the standard loss function. In doing so, ProjE has a parameter size that is smaller than 11 out of 15 existing methods while performing $37\%$ better than the current-best method on standard datasets. We also show, via a new fact checking task, that ProjE is capable of accurately determining the veracity of many declarative statements.
[ { "version": "v1", "created": "Wed, 16 Nov 2016 20:09:08 GMT" } ]
2016-11-17T00:00:00
[ [ "Shi", "Baoxu", "" ], [ "Weninger", "Tim", "" ] ]
TITLE: ProjE: Embedding Projection for Knowledge Graph Completion ABSTRACT: With the large volume of new information created every day, determining the validity of information in a knowledge graph and filling in its missing parts are crucial tasks for many researchers and practitioners. To address this challenge, a number of knowledge graph completion methods have been developed using low-dimensional graph embeddings. Although researchers continue to improve these models using an increasingly complex feature space, we show that simple changes in the architecture of the underlying model can outperform state-of-the-art models without the need for complex feature engineering. In this work, we present a shared variable neural network model called ProjE that fills in missing information in a knowledge graph by learning joint embeddings of the knowledge graph's entities and edges, and through subtle, but important, changes to the standard loss function. In doing so, ProjE has a parameter size that is smaller than 11 out of 15 existing methods while performing $37\%$ better than the current-best method on standard datasets. We also show, via a new fact checking task, that ProjE is capable of accurately determining the veracity of many declarative statements.
no_new_dataset
0.941922
cs/0703125
Vladimir Pestov
Vladimir Pestov
Intrinsic dimension of a dataset: what properties does one expect?
6 pages, 6 figures, 1 table, latex with IEEE macros, final submission to Proceedings of the 22nd IJCNN (Orlando, FL, August 12-17, 2007)
Proceedings of the 20th International Joint Conference on Neural Networks (IJCNN'2007), Orlando, Florida (Aug. 12--17, 2007), pp. 1775--1780.
10.1109/IJCNN.2007.4371431
null
cs.LG
null
We propose an axiomatic approach to the concept of an intrinsic dimension of a dataset, based on a viewpoint of geometry of high-dimensional structures. Our first axiom postulates that high values of dimension be indicative of the presence of the curse of dimensionality (in a certain precise mathematical sense). The second axiom requires the dimension to depend smoothly on a distance between datasets (so that the dimension of a dataset and that of an approximating principal manifold would be close to each other). The third axiom is a normalization condition: the dimension of the Euclidean $n$-sphere $\mathbb{S}^n$ is $\Theta(n)$. We give an example of a dimension function satisfying our axioms, even though it is in general computationally unfeasible, and discuss a computationally cheap function satisfying most but not all of our axioms (the ``intrinsic dimensionality'' of Ch\'avez et al.)
[ { "version": "v1", "created": "Sun, 25 Mar 2007 01:19:14 GMT" } ]
2016-11-17T00:00:00
[ [ "Pestov", "Vladimir", "" ] ]
TITLE: Intrinsic dimension of a dataset: what properties does one expect? ABSTRACT: We propose an axiomatic approach to the concept of an intrinsic dimension of a dataset, based on a viewpoint of geometry of high-dimensional structures. Our first axiom postulates that high values of dimension be indicative of the presence of the curse of dimensionality (in a certain precise mathematical sense). The second axiom requires the dimension to depend smoothly on a distance between datasets (so that the dimension of a dataset and that of an approximating principal manifold would be close to each other). The third axiom is a normalization condition: the dimension of the Euclidean $n$-sphere $\mathbb{S}^n$ is $\Theta(n)$. We give an example of a dimension function satisfying our axioms, even though it is in general computationally unfeasible, and discuss a computationally cheap function satisfying most but not all of our axioms (the ``intrinsic dimensionality'' of Ch\'avez et al.)
no_new_dataset
0.944638
cs/9904002
Vladimir Pestov
Vladimir Pestov
A geometric framework for modelling similarity search
11 pages, LaTeX 2.e
Proc. 10-th Int. Workshop on Database and Expert Systems Applications (DEXA'99), Sept. 1-3, 1999, Florence, Italy, IEEE Comp. Soc., pp. 150-154.
10.1109/DEXA.1999.795158
RP-99-12, School of Math and Comp Sci, Victoria University of Wellington, New Zealand
cs.IR cs.DB cs.DS
null
The aim of this paper is to propose a geometric framework for modelling similarity search in large and multidimensional data spaces of general nature, which seems to be flexible enough to address such issues as analysis of complexity, indexability, and the `curse of dimensionality.' Such a framework is provided by the concept of the so-called similarity workload, which is a probability metric space $\Omega$ (query domain) with a distinguished finite subspace $X$ (dataset), together with an assembly of concepts, techniques, and results from metric geometry. They include such notions as metric transform, $\epsilon$-entropy, and the phenomenon of concentration of measure on high-dimensional structures. In particular, we discuss the relevance of the latter to understanding the curse of dimensionality. As some of those concepts and techniques are being currently reinvented by the database community, it seems desirable to try and bridge the gap between database research and the relevant work already done in geometry and analysis.
[ { "version": "v1", "created": "Wed, 7 Apr 1999 04:16:02 GMT" }, { "version": "v2", "created": "Mon, 21 Jun 1999 03:45:13 GMT" } ]
2016-11-17T00:00:00
[ [ "Pestov", "Vladimir", "" ] ]
TITLE: A geometric framework for modelling similarity search ABSTRACT: The aim of this paper is to propose a geometric framework for modelling similarity search in large and multidimensional data spaces of general nature, which seems to be flexible enough to address such issues as analysis of complexity, indexability, and the `curse of dimensionality.' Such a framework is provided by the concept of the so-called similarity workload, which is a probability metric space $\Omega$ (query domain) with a distinguished finite subspace $X$ (dataset), together with an assembly of concepts, techniques, and results from metric geometry. They include such notions as metric transform, $\epsilon$-entropy, and the phenomenon of concentration of measure on high-dimensional structures. In particular, we discuss the relevance of the latter to understanding the curse of dimensionality. As some of those concepts and techniques are being currently reinvented by the database community, it seems desirable to try and bridge the gap between database research and the relevant work already done in geometry and analysis.
no_new_dataset
0.943452
1607.02810
Mahnoosh Kholghi
Mahnoosh Kholghi, Lance De Vine, Laurianne Sitbon, Guido Zuccon, Anthony Nguyen
The Benefits of Word Embeddings Features for Active Learning in Clinical Information Extraction
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This study investigates the use of unsupervised word embeddings and sequence features for sample representation in an active learning framework built to extract clinical concepts from clinical free text. The objective is to further reduce the manual annotation effort while achieving higher effectiveness compared to a set of baseline features. Unsupervised features are derived from skip-gram word embeddings and a sequence representation approach. The comparative performance of unsupervised features and baseline hand-crafted features in an active learning framework is investigated using a wide range of selection criteria including least confidence, information diversity, information density and diversity, and domain knowledge informativeness. Two clinical datasets are used for evaluation: the i2b2/VA 2010 NLP challenge and the ShARe/CLEF 2013 eHealth Evaluation Lab. Our results demonstrate significant improvements in terms of effectiveness as well as annotation effort savings across both datasets. Using unsupervised features along with baseline features for sample representation leads to further savings of up to 9% and 10% of the token and concept annotation rates, respectively.
[ { "version": "v1", "created": "Mon, 11 Jul 2016 02:46:48 GMT" }, { "version": "v2", "created": "Mon, 18 Jul 2016 00:25:18 GMT" }, { "version": "v3", "created": "Wed, 9 Nov 2016 00:16:30 GMT" }, { "version": "v4", "created": "Tue, 15 Nov 2016 05:06:01 GMT" } ]
2016-11-16T00:00:00
[ [ "Kholghi", "Mahnoosh", "" ], [ "De Vine", "Lance", "" ], [ "Sitbon", "Laurianne", "" ], [ "Zuccon", "Guido", "" ], [ "Nguyen", "Anthony", "" ] ]
TITLE: The Benefits of Word Embeddings Features for Active Learning in Clinical Information Extraction ABSTRACT: This study investigates the use of unsupervised word embeddings and sequence features for sample representation in an active learning framework built to extract clinical concepts from clinical free text. The objective is to further reduce the manual annotation effort while achieving higher effectiveness compared to a set of baseline features. Unsupervised features are derived from skip-gram word embeddings and a sequence representation approach. The comparative performance of unsupervised features and baseline hand-crafted features in an active learning framework is investigated using a wide range of selection criteria including least confidence, information diversity, information density and diversity, and domain knowledge informativeness. Two clinical datasets are used for evaluation: the i2b2/VA 2010 NLP challenge and the ShARe/CLEF 2013 eHealth Evaluation Lab. Our results demonstrate significant improvements in terms of effectiveness as well as annotation effort savings across both datasets. Using unsupervised features along with baseline features for sample representation leads to further savings of up to 9% and 10% of the token and concept annotation rates, respectively.
no_new_dataset
0.949389
1607.03547
Ron Appel
Ron Appel, Xavier Burgos-Artizzu, Pietro Perona
Improved Multi-Class Cost-Sensitive Boosting via Estimation of the Minimum-Risk Class
Project website: http://www.vision.caltech.edu/~appel/projects/REBEL/
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a simple unified framework for multi-class cost-sensitive boosting. The minimum-risk class is estimated directly, rather than via an approximation of the posterior distribution. Our method jointly optimizes binary weak learners and their corresponding output vectors, requiring classes to share features at each iteration. By training in a cost-sensitive manner, weak learners are invested in separating classes whose discrimination is important, at the expense of less relevant classification boundaries. Additional contributions are a family of loss functions along with proof that our algorithm is Boostable in the theoretical sense, as well as an efficient procedure for growing decision trees for use as weak learners. We evaluate our method on a variety of datasets: a collection of synthetic planar data, common UCI datasets, MNIST digits, SUN scenes, and CUB-200 birds. Results show state-of-the-art performance across all datasets against several strong baselines, including non-boosting multi-class approaches.
[ { "version": "v1", "created": "Tue, 12 Jul 2016 23:56:33 GMT" }, { "version": "v2", "created": "Tue, 15 Nov 2016 19:29:30 GMT" } ]
2016-11-16T00:00:00
[ [ "Appel", "Ron", "" ], [ "Burgos-Artizzu", "Xavier", "" ], [ "Perona", "Pietro", "" ] ]
TITLE: Improved Multi-Class Cost-Sensitive Boosting via Estimation of the Minimum-Risk Class ABSTRACT: We present a simple unified framework for multi-class cost-sensitive boosting. The minimum-risk class is estimated directly, rather than via an approximation of the posterior distribution. Our method jointly optimizes binary weak learners and their corresponding output vectors, requiring classes to share features at each iteration. By training in a cost-sensitive manner, weak learners are invested in separating classes whose discrimination is important, at the expense of less relevant classification boundaries. Additional contributions are a family of loss functions along with proof that our algorithm is Boostable in the theoretical sense, as well as an efficient procedure for growing decision trees for use as weak learners. We evaluate our method on a variety of datasets: a collection of synthetic planar data, common UCI datasets, MNIST digits, SUN scenes, and CUB-200 birds. Results show state-of-the-art performance across all datasets against several strong baselines, including non-boosting multi-class approaches.
no_new_dataset
0.945751
1611.00456
Bo Wang
Bo Wang, Yanshu Yu, Yuan Wang
Measuring Asymmetric Opinions on Online Social Interrelationship with Language and Network Features
null
null
null
null
cs.SI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Instead of studying the properties of social relationship from an objective view, in this paper, we focus on individuals' subjective and asymmetric opinions on their interrelationships. Inspired by the theories from sociolinguistics, we investigate two individuals' opinions on their interrelationship with their interactive language features. Eliminating the difference of personal language style, we clarify that the asymmetry of interactive language feature values can indicate individuals' asymmetric opinions on their interrelationship. We also discuss how the degree of opinions' asymmetry is related to the individuals' personality traits. Furthermore, to measure the individuals' asymmetric opinions on interrelationship concretely, we develop a novel model synthesizing interactive language and social network features. The experimental results with the Enron email dataset provide multiple pieces of evidence of the asymmetric opinions on interrelationship, and also verify the effectiveness of the proposed model in measuring the degree of opinions' asymmetry.
[ { "version": "v1", "created": "Wed, 2 Nov 2016 03:04:42 GMT" }, { "version": "v2", "created": "Tue, 15 Nov 2016 04:18:51 GMT" } ]
2016-11-16T00:00:00
[ [ "Wang", "Bo", "" ], [ "Yu", "Yanshu", "" ], [ "Wang", "Yuan", "" ] ]
TITLE: Measuring Asymmetric Opinions on Online Social Interrelationship with Language and Network Features ABSTRACT: Instead of studying the properties of social relationship from an objective view, in this paper, we focus on individuals' subjective and asymmetric opinions on their interrelationships. Inspired by the theories from sociolinguistics, we investigate two individuals' opinions on their interrelationship with their interactive language features. Eliminating the difference of personal language style, we clarify that the asymmetry of interactive language feature values can indicate individuals' asymmetric opinions on their interrelationship. We also discuss how the degree of opinions' asymmetry is related to the individuals' personality traits. Furthermore, to measure the individuals' asymmetric opinions on interrelationship concretely, we develop a novel model synthesizing interactive language and social network features. The experimental results with the Enron email dataset provide multiple pieces of evidence of the asymmetric opinions on interrelationship, and also verify the effectiveness of the proposed model in measuring the degree of opinions' asymmetry.
no_new_dataset
0.948537
1611.02639
Ankur Taly
Mukund Sundararajan, Ankur Taly, Qiqi Yan
Gradients of Counterfactuals
null
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gradients have been used to quantify feature importance in machine learning models. Unfortunately, in nonlinear deep networks, not only individual neurons but also the whole network can saturate, and as a result an important input feature can have a tiny gradient. We study various networks, and observe that this phenomenon is indeed widespread, across many inputs. We propose to examine interior gradients, which are gradients of counterfactual inputs constructed by scaling down the original input. We apply our method to the GoogleNet architecture for object recognition in images, as well as a ligand-based virtual screening network with categorical features and an LSTM based language model for the Penn Treebank dataset. We visualize how interior gradients better capture feature importance. Furthermore, interior gradients are applicable to a wide variety of deep networks, and have the attribution property that the feature importance scores sum to the prediction score. Best of all, interior gradients can be computed just as easily as gradients. In contrast, previous methods are complex to implement, which hinders practical adoption.
[ { "version": "v1", "created": "Tue, 8 Nov 2016 18:10:44 GMT" }, { "version": "v2", "created": "Tue, 15 Nov 2016 19:55:26 GMT" } ]
2016-11-16T00:00:00
[ [ "Sundararajan", "Mukund", "" ], [ "Taly", "Ankur", "" ], [ "Yan", "Qiqi", "" ] ]
TITLE: Gradients of Counterfactuals ABSTRACT: Gradients have been used to quantify feature importance in machine learning models. Unfortunately, in nonlinear deep networks, not only individual neurons but also the whole network can saturate, and as a result an important input feature can have a tiny gradient. We study various networks, and observe that this phenomenon is indeed widespread, across many inputs. We propose to examine interior gradients, which are gradients of counterfactual inputs constructed by scaling down the original input. We apply our method to the GoogleNet architecture for object recognition in images, as well as a ligand-based virtual screening network with categorical features and an LSTM based language model for the Penn Treebank dataset. We visualize how interior gradients better capture feature importance. Furthermore, interior gradients are applicable to a wide variety of deep networks, and have the attribution property that the feature importance scores sum to the prediction score. Best of all, interior gradients can be computed just as easily as gradients. In contrast, previous methods are complex to implement, which hinders practical adoption.
no_new_dataset
0.947235
1611.04035
Murat Kocaoglu
Murat Kocaoglu, Alexandros G. Dimakis, Sriram Vishwanath, Babak Hassibi
Entropic Causal Inference
To appear in AAAI 2017
null
null
null
cs.AI cs.IT math.IT stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of identifying the causal direction between two discrete random variables using observational data. Unlike previous work, we keep the most general functional model but make an assumption on the unobserved exogenous variable: Inspired by Occam's razor, we assume that the exogenous variable is simple in the true causal direction. We quantify simplicity using R\'enyi entropy. Our main result is that, under natural assumptions, if the exogenous variable has low $H_0$ entropy (cardinality) in the true direction, it must have high $H_0$ entropy in the wrong direction. We establish several algorithmic hardness results about estimating the minimum entropy exogenous variable. We show that the problem of finding the exogenous variable with minimum entropy is equivalent to the problem of finding the minimum joint entropy given $n$ marginal distributions, also known as the minimum entropy coupling problem. We propose an efficient greedy algorithm for the minimum entropy coupling problem that, for $n=2$, provably finds a local optimum. This gives a greedy algorithm for finding the exogenous variable with minimum $H_1$ (Shannon Entropy). Our greedy entropy-based causal inference algorithm has similar performance to state-of-the-art additive noise models on real datasets. One advantage of our approach is that we make no use of the values of random variables but only their distributions. Our method can therefore be used for causal inference for both ordinal and categorical data, unlike additive noise models.
[ { "version": "v1", "created": "Sat, 12 Nov 2016 18:56:34 GMT" }, { "version": "v2", "created": "Tue, 15 Nov 2016 03:09:53 GMT" } ]
2016-11-16T00:00:00
[ [ "Kocaoglu", "Murat", "" ], [ "Dimakis", "Alexandros G.", "" ], [ "Vishwanath", "Sriram", "" ], [ "Hassibi", "Babak", "" ] ]
TITLE: Entropic Causal Inference ABSTRACT: We consider the problem of identifying the causal direction between two discrete random variables using observational data. Unlike previous work, we keep the most general functional model but make an assumption on the unobserved exogenous variable: Inspired by Occam's razor, we assume that the exogenous variable is simple in the true causal direction. We quantify simplicity using R\'enyi entropy. Our main result is that, under natural assumptions, if the exogenous variable has low $H_0$ entropy (cardinality) in the true direction, it must have high $H_0$ entropy in the wrong direction. We establish several algorithmic hardness results about estimating the minimum entropy exogenous variable. We show that the problem of finding the exogenous variable with minimum entropy is equivalent to the problem of finding the minimum joint entropy given $n$ marginal distributions, also known as the minimum entropy coupling problem. We propose an efficient greedy algorithm for the minimum entropy coupling problem that, for $n=2$, provably finds a local optimum. This gives a greedy algorithm for finding the exogenous variable with minimum $H_1$ (Shannon Entropy). Our greedy entropy-based causal inference algorithm has similar performance to state-of-the-art additive noise models on real datasets. One advantage of our approach is that we make no use of the values of random variables but only their distributions. Our method can therefore be used for causal inference for both ordinal and categorical data, unlike additive noise models.
no_new_dataset
0.947624
1611.04149
Zebang Shen
Zebang Shen, Hui Qian, Chao Zhang, and Tengfei Zhou
Accelerated Variance Reduced Block Coordinate Descent
null
null
null
null
stat.ML cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Algorithms with fast convergence, a small number of data accesses, and low per-iteration complexity are particularly favorable in the big data era, due to the demand for obtaining \emph{highly accurate solutions} to problems with \emph{a large number of samples} in \emph{ultra-high} dimensional space. Existing algorithms lack at least one of these qualities, and thus are inefficient in handling such big data challenges. In this paper, we propose a method enjoying all these merits with an accelerated convergence rate $O(\frac{1}{k^2})$. Empirical studies on large-scale datasets with more than one million features are conducted to show the effectiveness of our method in practice.
[ { "version": "v1", "created": "Sun, 13 Nov 2016 16:01:10 GMT" } ]
2016-11-16T00:00:00
[ [ "Shen", "Zebang", "" ], [ "Qian", "Hui", "" ], [ "Zhang", "Chao", "" ], [ "Zhou", "Tengfei", "" ] ]
TITLE: Accelerated Variance Reduced Block Coordinate Descent ABSTRACT: Algorithms with fast convergence, a small number of data accesses, and low per-iteration complexity are particularly favorable in the big data era, due to the demand for obtaining \emph{highly accurate solutions} to problems with \emph{a large number of samples} in \emph{ultra-high} dimensional space. Existing algorithms lack at least one of these qualities, and thus are inefficient in handling such big data challenges. In this paper, we propose a method enjoying all these merits with an accelerated convergence rate $O(\frac{1}{k^2})$. Empirical studies on large-scale datasets with more than one million features are conducted to show the effectiveness of our method in practice.
no_new_dataset
0.94625
1611.04358
Weijie Huang
Weijie Huang, Jun Wang
Character-level Convolutional Network for Text Classification Applied to Chinese Corpus
MSc Thesis, 44 pages
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article provides an interesting exploration of character-level convolutional neural networks for solving the Chinese corpus text classification problem. We constructed a large-scale Chinese language dataset, and the results show that the character-level convolutional neural network works better on the Chinese corpus than on its corresponding pinyin-format dataset. This is the first time that a character-level convolutional neural network has been applied to the Chinese text classification problem.
[ { "version": "v1", "created": "Mon, 14 Nov 2016 12:24:27 GMT" }, { "version": "v2", "created": "Tue, 15 Nov 2016 14:41:23 GMT" } ]
2016-11-16T00:00:00
[ [ "Huang", "Weijie", "" ], [ "Wang", "Jun", "" ] ]
TITLE: Character-level Convolutional Network for Text Classification Applied to Chinese Corpus ABSTRACT: This article provides an interesting exploration of character-level convolutional neural networks for solving the Chinese corpus text classification problem. We constructed a large-scale Chinese language dataset, and the results show that the character-level convolutional neural network works better on the Chinese corpus than on its corresponding pinyin-format dataset. This is the first time that a character-level convolutional neural network has been applied to the Chinese text classification problem.
new_dataset
0.954984
1611.04374
Pascale Bayle-Guillemaud
Maxime Boniface (MEM), Lucille Quazuguel (IMN), Julien Danet (MEM), Dominique Guyomard (IMN), Philippe Moreau (IMN), Pascale Bayle-Guillemaud (MEM)
Nanoscale Chemical Evolution of Silicon Negative Electrodes Characterized by Low-Loss STEM-EELS
Nano Letters, American Chemical Society, 2016
null
10.1021/acs.nanolett.6b02883
null
physics.chem-ph cond-mat.mes-hall
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Continuous solid electrolyte interface (SEI) formation remains the limiting factor for the lifetime of silicon nanoparticle (SiNP) based negative electrodes. Methods that could provide a clear diagnosis of the electrode degradation are of utmost necessity to streamline further developments. We demonstrate that electron energy-loss spectroscopy (EELS) in a scanning transmission electron microscope (STEM) can be used to quickly map SEI components and quantify LixSi alloys from single experiments, with resolutions down to 5 nm. Exploiting the low-loss part of the EEL spectrum allowed us to circumvent the degradation phenomena that have so far crippled the application of this technique on such beam-sensitive compounds. Our results provide unprecedented insight into silicon aging mechanisms in full cell configuration. We observe the morphology of the SEI to be extremely heterogeneous at the particle scale but with clear chemical evolutions with extended cycling coming from both SEI accumulation and a transition from lithium-rich carbonate-like compounds to lithium-poor ones. Thanks to the retrieval of several results from a single dataset, we were able to correlate local discrepancies in lithiation to the initial crystallinity of silicon as well as to the local SEI chemistry and morphology. This study emphasizes how initial heterogeneities in the percolating electronic network and the porosity affect SiNP aggregates during cycling. These findings pinpoint the crucial role of an optimized formulation in silicon-based thick electrodes.
[ { "version": "v1", "created": "Mon, 14 Nov 2016 13:07:22 GMT" } ]
2016-11-16T00:00:00
[ [ "Boniface", "Maxime", "", "MEM" ], [ "Quazuguel", "Lucille", "", "IMN" ], [ "Danet", "Julien", "", "MEM" ], [ "Guyomard", "Dominique", "", "IMN" ], [ "Moreau", "Philippe", "", "IMN" ], [ "Bayle-Guillemaud", "Pascale", "", "MEM" ] ]
TITLE: Nanoscale Chemical Evolution of Silicon Negative Electrodes Characterized by Low-Loss STEM-EELS ABSTRACT: Continuous solid electrolyte interface (SEI) formation remains the limiting factor for the lifetime of silicon nanoparticle (SiNP) based negative electrodes. Methods that could provide a clear diagnosis of the electrode degradation are of utmost necessity to streamline further developments. We demonstrate that electron energy-loss spectroscopy (EELS) in a scanning transmission electron microscope (STEM) can be used to quickly map SEI components and quantify LixSi alloys from single experiments, with resolutions down to 5 nm. Exploiting the low-loss part of the EEL spectrum allowed us to circumvent the degradation phenomena that have so far crippled the application of this technique on such beam-sensitive compounds. Our results provide unprecedented insight into silicon aging mechanisms in full cell configuration. We observe the morphology of the SEI to be extremely heterogeneous at the particle scale but with clear chemical evolutions with extended cycling coming from both SEI accumulation and a transition from lithium-rich carbonate-like compounds to lithium-poor ones. Thanks to the retrieval of several results from a single dataset, we were able to correlate local discrepancies in lithiation to the initial crystallinity of silicon as well as to the local SEI chemistry and morphology. This study emphasizes how initial heterogeneities in the percolating electronic network and the porosity affect SiNP aggregates during cycling. These findings pinpoint the crucial role of an optimized formulation in silicon-based thick electrodes.
no_new_dataset
0.946051
1611.04636
Honglin Zheng
Honglin Zheng, Tianlang Chen, Jiebo Luo
When Saliency Meets Sentiment: Understanding How Image Content Invokes Emotion and Sentiment
7 pages, 5 figures, submitted to AAAI-17
null
null
null
cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sentiment analysis is crucial for extracting social signals from social media content. Due to the prevalence of images in social media, image sentiment analysis is receiving increasing attention in recent years. However, most existing systems are black-boxes that do not provide insight on how image content invokes sentiment and emotion in the viewers. Psychological studies have confirmed that salient objects in an image often invoke emotions. In this work, we investigate a more fine-grained and more comprehensive interaction between visual saliency and visual sentiment. In particular, we partition images along several primary scene-type dimensions, including: open-closed, natural-manmade, indoor-outdoor, and face-noface. Using a state-of-the-art saliency detection algorithm and a sentiment classification algorithm, we examine how the sentiment of the salient region(s) in an image relates to the overall sentiment of the image. The experiments on a representative image emotion dataset have shown an interesting correlation between saliency and sentiment in different scene types and in turn shed light on the mechanism of visual sentiment evocation.
[ { "version": "v1", "created": "Mon, 14 Nov 2016 22:02:09 GMT" } ]
2016-11-16T00:00:00
[ [ "Zheng", "Honglin", "" ], [ "Chen", "Tianlang", "" ], [ "Luo", "Jiebo", "" ] ]
TITLE: When Saliency Meets Sentiment: Understanding How Image Content Invokes Emotion and Sentiment ABSTRACT: Sentiment analysis is crucial for extracting social signals from social media content. Due to the prevalence of images in social media, image sentiment analysis is receiving increasing attention in recent years. However, most existing systems are black-boxes that do not provide insight on how image content invokes sentiment and emotion in the viewers. Psychological studies have confirmed that salient objects in an image often invoke emotions. In this work, we investigate a more fine-grained and more comprehensive interaction between visual saliency and visual sentiment. In particular, we partition images along several primary scene-type dimensions, including: open-closed, natural-manmade, indoor-outdoor, and face-noface. Using a state-of-the-art saliency detection algorithm and a sentiment classification algorithm, we examine how the sentiment of the salient region(s) in an image relates to the overall sentiment of the image. The experiments on a representative image emotion dataset have shown an interesting correlation between saliency and sentiment in different scene types and in turn shed light on the mechanism of visual sentiment evocation.
no_new_dataset
0.948775
1611.04686
Hang Zhang
Hang Zhang, Fengyuan Zhu and Shixin Li
Robust Matrix Regression
8 pages, 4 tables
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern technologies are producing datasets with complex intrinsic structures, and they can be naturally represented as matrices instead of vectors. To preserve the latent data structures during processing, modern regression approaches incorporate the low-rank property into the model and achieve satisfactory performance for certain applications. These approaches all assume that both predictors and labels for each pair of data within the training set are accurate. However, in real-world applications, it is common to see the training data contaminated by noise, which can affect the robustness of these matrix regression methods. In this paper, we address this issue by introducing a novel robust matrix regression method. We also derive efficient proximal algorithms for model training. To evaluate the performance of our method, we apply it to real-world applications with comparative studies. Our method achieves state-of-the-art performance, which shows the effectiveness and the practical value of our method.
[ { "version": "v1", "created": "Tue, 15 Nov 2016 03:15:46 GMT" } ]
2016-11-16T00:00:00
[ [ "Zhang", "Hang", "" ], [ "Zhu", "Fengyuan", "" ], [ "Li", "Shixin", "" ] ]
TITLE: Robust Matrix Regression ABSTRACT: Modern technologies are producing datasets with complex intrinsic structures, and they can be naturally represented as matrices instead of vectors. To preserve the latent data structures during processing, modern regression approaches incorporate the low-rank property into the model and achieve satisfactory performance for certain applications. These approaches all assume that both predictors and labels for each pair of data within the training set are accurate. However, in real-world applications, it is common to see the training data contaminated by noise, which can affect the robustness of these matrix regression methods. In this paper, we address this issue by introducing a novel robust matrix regression method. We also derive efficient proximal algorithms for model training. To evaluate the performance of our method, we apply it to real-world applications with comparative studies. Our method achieves state-of-the-art performance, which shows the effectiveness and the practical value of our method.
no_new_dataset
0.944485
1611.04782
Salvatore Rampone
Gianni D'Angelo, Salvatore Rampone
Feature Extraction and Soft Computing Methods for Aerospace Structure Defect Classification
null
Measurement Volume 85, May 2016, Pages 192-209
10.1016/j.measurement.2016.02.027
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This study concerns the effectiveness of several techniques and methods of signal processing and data interpretation for the diagnosis of aerospace structure defects. This is done by applying different known feature extraction methods, in addition to a new CBIR-based one, and some soft computing techniques including a recent HPC parallel implementation of the U-BRAIN learning algorithm on Non Destructive Testing data. The performance of the resulting detection systems is measured in terms of Accuracy, Sensitivity, Specificity, and Precision. Their effectiveness is evaluated by the Matthews correlation, the Area Under Curve (AUC), and the F-Measure. Several experiments are performed on a standard dataset of eddy current signal samples for aircraft structures. Our experimental results show that the key to a successful defect classifier is the feature extraction method - namely the novel CBIR-based one outperforms all the competitors - and they illustrate the greater effectiveness of the U-BRAIN algorithm and the MLP neural network among the soft computing methods in this kind of application. Keywords- Non-destructive testing (NDT); Soft Computing; Feature Extraction; Classification Algorithms; Content-Based Image Retrieval (CBIR); Eddy Currents (EC).
[ { "version": "v1", "created": "Tue, 15 Nov 2016 10:47:12 GMT" } ]
2016-11-16T00:00:00
[ [ "D'Angelo", "Gianni", "" ], [ "Rampone", "Salvatore", "" ] ]
TITLE: Feature Extraction and Soft Computing Methods for Aerospace Structure Defect Classification ABSTRACT: This study concerns the effectiveness of several techniques and methods of signal processing and data interpretation for the diagnosis of aerospace structure defects. This is done by applying different known feature extraction methods, in addition to a new CBIR-based one, and some soft computing techniques including a recent HPC parallel implementation of the U-BRAIN learning algorithm on Non Destructive Testing data. The performance of the resulting detection systems is measured in terms of Accuracy, Sensitivity, Specificity, and Precision. Their effectiveness is evaluated by the Matthews correlation, the Area Under Curve (AUC), and the F-Measure. Several experiments are performed on a standard dataset of eddy current signal samples for aircraft structures. Our experimental results show that the key to a successful defect classifier is the feature extraction method - namely the novel CBIR-based one outperforms all the competitors - and they illustrate the greater effectiveness of the U-BRAIN algorithm and the MLP neural network among the soft computing methods in this kind of application. Keywords- Non-destructive testing (NDT); Soft Computing; Feature Extraction; Classification Algorithms; Content-Based Image Retrieval (CBIR); Eddy Currents (EC).
no_new_dataset
0.945601
1611.04835
Nauman Shahid
Nauman Shahid, Francesco Grassi, Pierre Vandergheynst
Multilinear Low-Rank Tensors on Graphs & Applications
null
null
null
null
cs.CV cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a new framework for the analysis of low-rank tensors which lies at the intersection of spectral graph theory and signal processing. As a first step, we present a new graph based low-rank decomposition which approximates the classical low-rank SVD for matrices and multi-linear SVD for tensors. Then, building on this novel decomposition we construct a general class of convex optimization problems for approximately solving low-rank tensor inverse problems, such as tensor Robust PCA. The whole framework is named 'Multilinear Low-rank tensors on Graphs (MLRTG)'. Our theoretical analysis shows: 1) MLRTG stands on the notion of approximate stationarity of multi-dimensional signals on graphs and 2) the approximation error depends on the eigen gaps of the graphs. We demonstrate applications for a wide variety of 4 artificial and 12 real tensor datasets, such as EEG, FMRI, BCI, surveillance videos and hyperspectral images. Generalization of the tensor concepts to the non-Euclidean domain, orders of magnitude speed-up, low memory requirements and significantly enhanced performance at low SNR are the key aspects of our framework.
[ { "version": "v1", "created": "Tue, 15 Nov 2016 14:05:43 GMT" } ]
2016-11-16T00:00:00
[ [ "Shahid", "Nauman", "" ], [ "Grassi", "Francesco", "" ], [ "Vandergheynst", "Pierre", "" ] ]
TITLE: Multilinear Low-Rank Tensors on Graphs & Applications ABSTRACT: We propose a new framework for the analysis of low-rank tensors which lies at the intersection of spectral graph theory and signal processing. As a first step, we present a new graph based low-rank decomposition which approximates the classical low-rank SVD for matrices and multi-linear SVD for tensors. Then, building on this novel decomposition we construct a general class of convex optimization problems for approximately solving low-rank tensor inverse problems, such as tensor Robust PCA. The whole framework is named 'Multilinear Low-rank tensors on Graphs (MLRTG)'. Our theoretical analysis shows: 1) MLRTG stands on the notion of approximate stationarity of multi-dimensional signals on graphs and 2) the approximation error depends on the eigen gaps of the graphs. We demonstrate applications for a wide variety of 4 artificial and 12 real tensor datasets, such as EEG, FMRI, BCI, surveillance videos and hyperspectral images. Generalization of the tensor concepts to the non-Euclidean domain, orders of magnitude speed-up, low memory requirements and significantly enhanced performance at low SNR are the key aspects of our framework.
no_new_dataset
0.944995
1611.04905
Yehya Abouelnaga
Yehya Abouelnaga, Ola S. Ali, Hager Rady, and Mohamed Moustafa
CIFAR-10: KNN-based Ensemble of Classifiers
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study the performance of different classifiers on the CIFAR-10 dataset, and build an ensemble of classifiers to reach a better performance. We show that, on CIFAR-10, K-Nearest Neighbors (KNN) and Convolutional Neural Network (CNN), on some classes, are mutually exclusive, and thus yield higher accuracy when combined. We reduce KNN overfitting using Principal Component Analysis (PCA), and ensemble it with a CNN to increase its accuracy. Our approach improves our best CNN model from 93.33% to 94.03%.
[ { "version": "v1", "created": "Tue, 15 Nov 2016 16:02:58 GMT" } ]
2016-11-16T00:00:00
[ [ "Abouelnaga", "Yehya", "" ], [ "Ali", "Ola S.", "" ], [ "Rady", "Hager", "" ], [ "Moustafa", "Mohamed", "" ] ]
TITLE: CIFAR-10: KNN-based Ensemble of Classifiers ABSTRACT: In this paper, we study the performance of different classifiers on the CIFAR-10 dataset, and build an ensemble of classifiers to reach a better performance. We show that, on CIFAR-10, K-Nearest Neighbors (KNN) and Convolutional Neural Network (CNN), on some classes, are mutually exclusive, and thus yield higher accuracy when combined. We reduce KNN overfitting using Principal Component Analysis (PCA), and ensemble it with a CNN to increase its accuracy. Our approach improves our best CNN model from 93.33% to 94.03%.
no_new_dataset
0.954265
1611.04999
Cyrus Rashtchian
Paul Beame and Cyrus Rashtchian
Massively-Parallel Similarity Join, Edge-Isoperimetry, and Distance Correlations on the Hypercube
23 pages, plus references and appendix. To appear in SODA 2017
null
null
null
cs.DS cs.CC cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study distributed protocols for finding all pairs of similar vectors in a large dataset. Our results pertain to a variety of discrete metrics, and we give concrete instantiations for Hamming distance. In particular, we give improved upper bounds on the overhead required for similarity defined by Hamming distance $r>1$ and prove a lower bound showing qualitative optimality of the overhead required for similarity over any Hamming distance $r$. Our main conceptual contribution is a connection between similarity search algorithms and certain graph-theoretic quantities. For our upper bounds, we exhibit a general method for designing one-round protocols using edge-isoperimetric shapes in similarity graphs. For our lower bounds, we define a new combinatorial optimization problem, which can be stated in purely graph-theoretic terms yet also captures the core of the analysis in previous theoretical work on distributed similarity joins. As one of our main technical results, we prove new bounds on distance correlations in subsets of the Hamming cube.
[ { "version": "v1", "created": "Tue, 15 Nov 2016 19:36:28 GMT" } ]
2016-11-16T00:00:00
[ [ "Beame", "Paul", "" ], [ "Rashtchian", "Cyrus", "" ] ]
TITLE: Massively-Parallel Similarity Join, Edge-Isoperimetry, and Distance Correlations on the Hypercube ABSTRACT: We study distributed protocols for finding all pairs of similar vectors in a large dataset. Our results pertain to a variety of discrete metrics, and we give concrete instantiations for Hamming distance. In particular, we give improved upper bounds on the overhead required for similarity defined by Hamming distance $r>1$ and prove a lower bound showing qualitative optimality of the overhead required for similarity over any Hamming distance $r$. Our main conceptual contribution is a connection between similarity search algorithms and certain graph-theoretic quantities. For our upper bounds, we exhibit a general method for designing one-round protocols using edge-isoperimetric shapes in similarity graphs. For our lower bounds, we define a new combinatorial optimization problem, which can be stated in purely graph-theoretic terms yet also captures the core of the analysis in previous theoretical work on distributed similarity joins. As one of our main technical results, we prove new bounds on distance correlations in subsets of the Hamming cube.
no_new_dataset
0.946349
1611.05013
Ishaan Gulrajani
Ishaan Gulrajani, Kundan Kumar, Faruk Ahmed, Adrien Ali Taiga, Francesco Visin, David Vazquez, Aaron Courville
PixelVAE: A Latent Variable Model for Natural Images
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Natural image modeling is a landmark challenge of unsupervised learning. Variational Autoencoders (VAEs) learn a useful latent representation and model global structure well but have difficulty capturing small details. PixelCNN models details very well, but lacks a latent code and is difficult to scale for capturing large structures. We present PixelVAE, a VAE model with an autoregressive decoder based on PixelCNN. Our model requires very few expensive autoregressive layers compared to PixelCNN and learns latent codes that are more compressed than a standard VAE while still capturing most non-trivial structure. Finally, we extend our model to a hierarchy of latent variables at different scales. Our model achieves state-of-the-art performance on binarized MNIST, competitive performance on 64x64 ImageNet, and high-quality samples on the LSUN bedrooms dataset.
[ { "version": "v1", "created": "Tue, 15 Nov 2016 20:16:27 GMT" } ]
2016-11-16T00:00:00
[ [ "Gulrajani", "Ishaan", "" ], [ "Kumar", "Kundan", "" ], [ "Ahmed", "Faruk", "" ], [ "Taiga", "Adrien Ali", "" ], [ "Visin", "Francesco", "" ], [ "Vazquez", "David", "" ], [ "Courville", "Aaron", "" ] ]
TITLE: PixelVAE: A Latent Variable Model for Natural Images ABSTRACT: Natural image modeling is a landmark challenge of unsupervised learning. Variational Autoencoders (VAEs) learn a useful latent representation and model global structure well but have difficulty capturing small details. PixelCNN models details very well, but lacks a latent code and is difficult to scale for capturing large structures. We present PixelVAE, a VAE model with an autoregressive decoder based on PixelCNN. Our model requires very few expensive autoregressive layers compared to PixelCNN and learns latent codes that are more compressed than a standard VAE while still capturing most non-trivial structure. Finally, we extend our model to a hierarchy of latent variables at different scales. Our model achieves state-of-the-art performance on binarized MNIST, competitive performance on 64x64 ImageNet, and high-quality samples on the LSUN bedrooms dataset.
no_new_dataset
0.949201
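The ingredient that makes a PixelCNN-style decoder autoregressive is the masked convolution; the sketch below builds such a mask in plain numpy. This is a generic illustration of the masking scheme, not PixelVAE's actual decoder.

import numpy as np

def masked_conv_kernel(size=5, mask_type="A"):
    # Autoregressive mask: an output pixel may depend only on pixels
    # above it and to its left; type "A" (used in the first layer)
    # also hides the centre pixel itself, type "B" keeps it.
    m = np.zeros((size, size))
    c = size // 2
    m[:c, :] = 1.0        # all rows strictly above the centre
    m[c, :c] = 1.0        # pixels left of the centre in the same row
    if mask_type == "B":
        m[c, c] = 1.0
    return m

print(masked_conv_kernel(3, "A"))   # [[1,1,1],[1,0,0],[0,0,0]]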
0803.3363
Yoshiharu Maeno
Yoshiharu Maeno
Node discovery in a networked organization
null
Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, San Antonio, October 2009
10.1109/ICSMC.2009.5346826
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, I present a method to solve a node discovery problem in a networked organization. Covert nodes refer to the nodes which are not observable directly. They affect social interactions, but do not appear in the surveillance logs which record the participants of the social interactions. Discovering the covert nodes is defined as identifying the suspicious logs where the covert nodes would appear if the covert nodes became overt. A mathematical model is developed for the maximum likelihood estimation of the network behind the social interactions and for the identification of the suspicious logs. Precision, recall, and F-measure characteristics are demonstrated with the dataset generated from a real organization and the computationally synthesized datasets. The performance is close to the theoretical limit for any covert nodes in networks of any topology and size if the ratio of the number of observations to the number of possible communication patterns is large.
[ { "version": "v1", "created": "Mon, 24 Mar 2008 05:53:39 GMT" }, { "version": "v2", "created": "Fri, 26 Jun 2009 05:13:58 GMT" } ]
2016-11-15T00:00:00
[ [ "Maeno", "Yoshiharu", "" ] ]
TITLE: Node discovery in a networked organization ABSTRACT: In this paper, I present a method to solve a node discovery problem in a networked organization. Covert nodes refer to the nodes which are not observable directly. They affect social interactions, but do not appear in the surveillance logs which record the participants of the social interactions. Discovering the covert nodes is defined as identifying the suspicious logs where the covert nodes would appear if the covert nodes became overt. A mathematical model is developed for the maximum likelihood estimation of the network behind the social interactions and for the identification of the suspicious logs. Precision, recall, and F-measure characteristics are demonstrated with the dataset generated from a real organization and the computationally synthesized datasets. The performance is close to the theoretical limit for any covert nodes in networks of any topology and size if the ratio of the number of observations to the number of possible communication patterns is large.
no_new_dataset
0.949435
1007.5459
John Whitbeck
John Whitbeck, Yoann Lopez, J\'er\'emie Leguay, Vania Conan, Marcelo Dias de Amorim
Relieving the Wireless Infrastructure: When Opportunistic Networks Meet Guaranteed Delays
Accepted at IEEE WoWMoM 2011 conference
null
10.1109/WoWMoM.2011.5986466
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Major wireless operators are nowadays facing network capacity issues in striving to meet the growing demands of mobile users. At the same time, 3G-enabled devices increasingly benefit from ad hoc radio connectivity (e.g., Wi-Fi). In this context of hybrid connectivity, we propose Push-and-Track, a content dissemination framework that harnesses ad hoc communication opportunities to minimize the load on the wireless infrastructure while guaranteeing tight delivery delays. It achieves this through a control loop that collects user-sent acknowledgements to determine if new copies need to be reinjected into the network through the 3G interface. Push-and-Track includes multiple strategies to determine how many copies of the content should be injected, when, and to whom. The short delay tolerance of common content, such as news or road traffic updates, makes it well suited to such a system. Based on a realistic large-scale vehicular dataset from the city of Bologna, composed of more than 10,000 vehicles, we demonstrate that Push-and-Track consistently meets its delivery objectives while reducing the use of the 3G network by over 90%.
[ { "version": "v1", "created": "Fri, 30 Jul 2010 14:26:53 GMT" }, { "version": "v2", "created": "Sat, 4 Dec 2010 21:44:42 GMT" }, { "version": "v3", "created": "Mon, 30 May 2011 20:17:57 GMT" } ]
2016-11-15T00:00:00
[ [ "Whitbeck", "John", "" ], [ "Lopez", "Yoann", "" ], [ "Leguay", "Jérémie", "" ], [ "Conan", "Vania", "" ], [ "de Amorim", "Marcelo Dias", "" ] ]
TITLE: Relieving the Wireless Infrastructure: When Opportunistic Networks Meet Guaranteed Delays ABSTRACT: Major wireless operators are nowadays facing network capacity issues in striving to meet the growing demands of mobile users. At the same time, 3G-enabled devices increasingly benefit from ad hoc radio connectivity (e.g., Wi-Fi). In this context of hybrid connectivity, we propose Push-and-Track, a content dissemination framework that harnesses ad hoc communication opportunities to minimize the load on the wireless infrastructure while guaranteeing tight delivery delays. It achieves this through a control loop that collects user-sent acknowledgements to determine if new copies need to be reinjected into the network through the 3G interface. Push-and-Track includes multiple strategies to determine how many copies of the content should be injected, when, and to whom. The short delay tolerance of common content, such as news or road traffic updates, makes it well suited to such a system. Based on a realistic large-scale vehicular dataset from the city of Bologna, composed of more than 10,000 vehicles, we demonstrate that Push-and-Track consistently meets its delivery objectives while reducing the use of the 3G network by over 90%.
no_new_dataset
0.784443
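A toy simulation of the control loop described above: opportunistic spreading plus 3G reinjection whenever the acknowledged delivery ratio falls behind a target curve, with a final push at the deadline. The contact model, the linear target curve, and all parameters are invented for illustration; the paper's injection strategies are more refined.

import random

random.seed(0)
N, DEADLINE = 100, 60                    # nodes and delivery deadline (slots)

def target(t):                           # assumed linear target delivery curve
    return t / DEADLINE

have = {0}                               # initial copy seeded over 3G
cost_3g = 1
for t in range(1, DEADLINE + 1):
    for _ in range(N // 2):              # random ad hoc contacts this slot
        a, b = random.sample(range(N), 2)
        if (a in have) != (b in have):   # opportunistic epidemic spreading
            have.update((a, b))
    if len(have) / N < target(t):        # acks show we lag behind the target:
        missing = [n for n in range(N) if n not in have]
        have.add(random.choice(missing)) # reinject one copy over 3G
        cost_3g += 1
    if t == DEADLINE and len(have) < N:  # "panic" push to the remaining nodes
        cost_3g += N - len(have)
        have = set(range(N))

print(f"3G messages: {cost_3g}/{N} ({100 * (1 - cost_3g / N):.0f}% offloaded)")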
1010.5669
Pascal Pernot
Pascal Pernot and Fabien Cailliez
Semi-empirical correction of ab initio harmonic properties by scaling factors: a validated uncertainty model for calibration and prediction
null
null
null
null
physics.chem-ph physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bayesian Model Calibration is used to revisit the problem of scaling factor calibration for semi-empirical correction of ab initio harmonic properties (e.g. vibrational frequencies and zero-point energies). Particular attention is devoted to the evaluation of scaling factor uncertainty and to its effect on the accuracy of scaled properties. We argue that in most cases of interest the standard calibration model is not statistically valid, in the sense that it is not able to fit experimental calibration data within their uncertainty limits. This impairs any attempt to use the results of the standard model for uncertainty analysis and/or uncertainty propagation. We propose to include a stochastic term in the calibration model to account for model inadequacy. This new model is validated in the Bayesian Model Calibration framework. We provide explicit formulae for prediction uncertainty in typical limit cases: large and small calibration sets of data with negligible measurement uncertainty, and datasets with large measurement uncertainties.
[ { "version": "v1", "created": "Wed, 27 Oct 2010 12:27:03 GMT" } ]
2016-11-15T00:00:00
[ [ "Pernot", "Pascal", "" ], [ "Cailliez", "Fabien", "" ] ]
TITLE: Semi-empirical correction of ab initio harmonic properties by scaling factors: a validated uncertainty model for calibration and prediction ABSTRACT: Bayesian Model Calibration is used to revisit the problem of scaling factor calibration for semi-empirical correction of ab initio harmonic properties (e.g. vibrational frequencies and zero-point energies). Particular attention is devoted to the evaluation of scaling factor uncertainty and to its effect on the accuracy of scaled properties. We argue that in most cases of interest the standard calibration model is not statistically valid, in the sense that it is not able to fit experimental calibration data within their uncertainty limits. This impairs any attempt to use the results of the standard model for uncertainty analysis and/or uncertainty propagation. We propose to include a stochastic term in the calibration model to account for model inadequacy. This new model is validated in the Bayesian Model Calibration framework. We provide explicit formulae for prediction uncertainty in typical limit cases: large and small calibration sets of data with negligible measurement uncertainty, and datasets with large measurement uncertainties.
no_new_dataset
0.951414
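A numerical sketch of the central point in the record above: once a stochastic model-inadequacy term eps ~ N(0, sigma^2) is included, sigma, not the scaling-factor uncertainty, dominates the prediction uncertainty in the large-calibration-set, negligible-measurement-error limit. All data and numbers below are synthetic, and the flat-prior Gaussian treatment is a simplification of the full Bayesian calibration.

import numpy as np

# synthetic calibration set: "experimental" = s_true * "ab initio" + scatter
rng = np.random.default_rng(1)
calc = rng.uniform(500.0, 3500.0, size=40)            # frequencies, cm^-1
expt = 0.96 * calc + rng.normal(0.0, 15.0, size=40)   # model-inadequacy scatter

# MLE under expt_i = s * calc_i + eps_i, eps_i ~ N(0, sigma^2)
s = np.sum(calc * expt) / np.sum(calc ** 2)
resid = expt - s * calc
sigma = np.sqrt(np.mean(resid ** 2))                  # inadequacy scale
s_var = sigma ** 2 / np.sum(calc ** 2)                # variance of s (flat prior)

# prediction uncertainty: parameter term + stochastic term (sigma dominates)
nu_new = 1200.0
pred_sd = np.sqrt(s_var * nu_new ** 2 + sigma ** 2)
print(f"scaled: {s * nu_new:.1f} +/- {pred_sd:.1f} cm^-1 (s = {s:.4f})")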
1012.0726
John Tang
John Tang, Cecilia Mascolo, Mirco Musolesi, Vito Latora
Exploiting Temporal Complex Network Metrics in Mobile Malware Containment
9 Pages, 13 Figures, In Proceedings of IEEE 12th International Symposium on a World of Wireless, Mobile and Multimedia Networks (WOWMOM '11)
null
10.1109/WoWMoM.2011.5986463
null
cs.NI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Malicious mobile phone worms spread between devices via short-range Bluetooth contacts, similar to the propagation of human and other biological viruses. Recent work has employed models from epidemiology and complex networks to analyse the spread of malware and the effect of patching specific nodes. These approaches have adopted a static view of the mobile networks, i.e., by aggregating all the edges that appear over time, which leads to an approximate representation of the real interactions: instead, these networks are inherently dynamic and the edge appearance and disappearance is highly influenced by the ordering of the human contacts, something which is not captured at all by existing complex network measures. In this paper we first study how the blocking of malware propagation through immunisation of key nodes (even if carefully chosen through static or temporal betweenness centrality metrics) is ineffective: this is due to the richness of alternative paths in these networks. Then we introduce a time-aware containment strategy that spreads a patch message starting from nodes with high temporal closeness centrality and show its effectiveness using three real-world datasets. Temporal closeness allows the identification of nodes able to reach most nodes quickly: we show that this scheme can reduce the cellular network resource consumption and associated costs, achieving, at the same time, a complete containment of the malware in a limited amount of time.
[ { "version": "v1", "created": "Fri, 3 Dec 2010 13:03:58 GMT" }, { "version": "v2", "created": "Tue, 10 May 2011 19:32:14 GMT" } ]
2016-11-15T00:00:00
[ [ "Tang", "John", "" ], [ "Mascolo", "Cecilia", "" ], [ "Musolesi", "Mirco", "" ], [ "Latora", "Vito", "" ] ]
TITLE: Exploiting Temporal Complex Network Metrics in Mobile Malware Containment ABSTRACT: Malicious mobile phone worms spread between devices via short-range Bluetooth contacts, similar to the propagation of human and other biological viruses. Recent work has employed models from epidemiology and complex networks to analyse the spread of malware and the effect of patching specific nodes. These approaches have adopted a static view of the mobile networks, i.e., by aggregating all the edges that appear over time, which leads to an approximate representation of the real interactions: instead, these networks are inherently dynamic and the edge appearance and disappearance is highly influenced by the ordering of the human contacts, something which is not captured at all by existing complex network measures. In this paper we first study how the blocking of malware propagation through immunisation of key nodes (even if carefully chosen through static or temporal betweenness centrality metrics) is ineffective: this is due to the richness of alternative paths in these networks. Then we introduce a time-aware containment strategy that spreads a patch message starting from nodes with high temporal closeness centrality and show its effectiveness using three real-world datasets. Temporal closeness allows the identification of nodes able to reach most nodes quickly: we show that this scheme can reduce the cellular network resource consumption and associated costs, achieving, at the same time, a complete containment of the malware in a limited amount of time.
no_new_dataset
0.949576
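One simple variant of temporal closeness on a time-stamped contact trace: a single chronological sweep per source yields earliest-arrival times, and closeness rewards nodes that reach the others quickly. The reciprocal-arrival scoring below is an assumed simplification of the paper's temporal-distance definition.

def temporal_closeness(contacts, nodes, t0=0):
    # contacts: time-sorted (time, u, v) undirected meetings. One
    # chronological sweep per source gives earliest-arrival times;
    # closeness sums reciprocal arrival times over reached nodes.
    scores = {}
    for src in nodes:
        arrival = {src: t0}
        for t, u, v in contacts:
            if t < t0:
                continue
            if u in arrival and v not in arrival:
                arrival[v] = t
            elif v in arrival and u not in arrival:
                arrival[u] = t
        scores[src] = sum(1.0 / (t - t0 + 1)
                          for w, t in arrival.items() if w != src)
    return scores

trace = [(1, "a", "b"), (2, "b", "c"), (3, "c", "d"), (4, "a", "d")]
print(temporal_closeness(trace, "abcd"))   # "a" and "b" score highest here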
1103.2635
Lawrence Cayton
Lawrence Cayton
Accelerating Nearest Neighbor Search on Manycore Systems
null
In Proceedings of the 2012 IEEE 26th International Parallel and Distributed Processing Symposium (IPDPS '12). IEEE Computer Society, Washington, DC, USA, 402-413
10.1109/IPDPS.2012.45
null
cs.DB cs.CG cs.DC cs.DS cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop methods for accelerating metric similarity search that are effective on modern hardware. Our algorithms factor into easily parallelizable components, making them simple to deploy and efficient on multicore CPUs and GPUs. Despite the simple structure of our algorithms, their search performance is provably sublinear in the size of the database, with a factor dependent only on its intrinsic dimensionality. We demonstrate that our methods provide substantial speedups on a range of datasets and hardware platforms. In particular, we present results on a 48-core server machine, on graphics hardware, and on a multicore desktop.
[ { "version": "v1", "created": "Mon, 14 Mar 2011 11:39:23 GMT" }, { "version": "v2", "created": "Wed, 30 Mar 2011 18:26:44 GMT" } ]
2016-11-15T00:00:00
[ [ "Cayton", "Lawrence", "" ] ]
TITLE: Accelerating Nearest Neighbor Search on Manycore Systems ABSTRACT: We develop methods for accelerating metric similarity search that are effective on modern hardware. Our algorithms factor into easily parallelizable components, making them simple to deploy and efficient on multicore CPUs and GPUs. Despite the simple structure of our algorithms, their search performance is provably sublinear in the size of the database, with a factor dependent only on its intrinsic dimensionality. We demonstrate that our methods provide substantial speedups on a range of datasets and hardware platforms. In particular, we present results on a 48-core server machine, on graphics hardware, and on a multicore desktop.
no_new_dataset
0.952442
1201.1174
Yongjun Liao
Yongjun Liao, Wei Du, Pierre Geurts and Guy Leduc
DMFSGD: A Decentralized Matrix Factorization Algorithm for Network Distance Prediction
submitted to IEEE/ACM Transactions on Networking on Nov. 2011
null
10.1109/TNET.2012.2228881
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The knowledge of end-to-end network distances is essential to many Internet applications. As active probing of all pairwise distances is infeasible in large-scale networks, a natural idea is to measure a few pairs and to predict the other ones without actually measuring them. This paper formulates the distance prediction problem as matrix completion where unknown entries of an incomplete matrix of pairwise distances are to be predicted. The problem is solvable because strong correlations among network distances exist and cause the constructed distance matrix to be low rank. The new formulation circumvents the well-known drawbacks of existing approaches based on Euclidean embedding. A new algorithm, called Decentralized Matrix Factorization by Stochastic Gradient Descent (DMFSGD), is proposed to solve the network distance prediction problem. By letting network nodes exchange messages with each other, the algorithm is fully decentralized and only requires each node to collect and to process local measurements, with neither explicit matrix constructions nor special nodes such as landmarks and central servers. In addition, we comprehensively compare matrix factorization and Euclidean embedding to demonstrate the suitability of the former for network distance prediction. We further study the incorporation of a robust loss function and of non-negativity constraints. Extensive experiments on various publicly-available datasets of network delays show not only the scalability and the accuracy of our approach but also its usability in real Internet applications.
[ { "version": "v1", "created": "Thu, 5 Jan 2012 14:02:16 GMT" } ]
2016-11-15T00:00:00
[ [ "Liao", "Yongjun", "" ], [ "Du", "Wei", "" ], [ "Geurts", "Pierre", "" ], [ "Leduc", "Guy", "" ] ]
TITLE: DMFSGD: A Decentralized Matrix Factorization Algorithm for Network Distance Prediction ABSTRACT: The knowledge of end-to-end network distances is essential to many Internet applications. As active probing of all pairwise distances is infeasible in large-scale networks, a natural idea is to measure a few pairs and to predict the other ones without actually measuring them. This paper formulates the distance prediction problem as matrix completion where unknown entries of an incomplete matrix of pairwise distances are to be predicted. The problem is solvable because strong correlations among network distances exist and cause the constructed distance matrix to be low rank. The new formulation circumvents the well-known drawbacks of existing approaches based on Euclidean embedding. A new algorithm, called Decentralized Matrix Factorization by Stochastic Gradient Descent (DMFSGD), is proposed to solve the network distance prediction problem. By letting network nodes exchange messages with each other, the algorithm is fully decentralized and only requires each node to collect and to process local measurements, with neither explicit matrix constructions nor special nodes such as landmarks and central servers. In addition, we comprehensively compare matrix factorization and Euclidean embedding to demonstrate the suitability of the former for network distance prediction. We further study the incorporation of a robust loss function and of non-negativity constraints. Extensive experiments on various publicly-available datasets of network delays show not only the scalability and the accuracy of our approach but also its usability in real Internet applications.
no_new_dataset
0.948298
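A centralized simulation of the decentralized SGD updates behind DMFSGD-style matrix factorization: when nodes i and j exchange a probe d(i,j), only their own factor rows are touched, which is what makes the scheme decentralizable. The rank, learning rate, and synthetic low-rank "distance" matrix below are illustrative choices, not the paper's settings.

import numpy as np

rng = np.random.default_rng(0)
n, rank, lr, reg = 50, 5, 0.005, 0.1

# ground-truth low-rank distance-like matrix (symmetric, nonnegative)
F = rng.uniform(0.0, 2.0, (n, rank))
D = F @ F.T

# node i keeps only its own factor rows x[i] and y[i]; D is approximated by x @ y.T
x = rng.normal(0.0, 0.1, (n, rank))
y = rng.normal(0.0, 0.1, (n, rank))

for _ in range(200_000):                 # one random probe per iteration
    i, j = rng.integers(n, size=2)
    e = x[i] @ y[j] - D[i, j]            # local prediction error
    gi = e * y[j] + reg * x[i]           # gradients touch only rows i and j,
    gj = e * x[i] + reg * y[j]           # so the updates stay fully local
    x[i] -= lr * gi
    y[j] -= lr * gj

print(f"median abs error: {np.median(np.abs(x @ y.T - D)):.3f}")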
1401.0764
Chunhua Shen
Xi Li, Weiming Hu, Chunhua Shen, Anthony Dick, Zhongfei Zhang
Context-Aware Hypergraph Construction for Robust Spectral Clustering
10 pages. Appearing in IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING: http://doi.ieeecomputersociety.org/10.1109/TKDE.2013.126
null
10.1109/TKDE.2013.126
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spectral clustering is a powerful tool for unsupervised data analysis. In this paper, we propose a context-aware hypergraph similarity measure (CAHSM), which leads to robust spectral clustering in the case of noisy data. We construct three types of hypergraph---the pairwise hypergraph, the k-nearest-neighbor (kNN) hypergraph, and the high-order over-clustering hypergraph. The pairwise hypergraph captures the pairwise similarity of data points; the kNN hypergraph captures the neighborhood of each point; and the clustering hypergraph encodes high-order contexts within the dataset. By combining the affinity information from these three hypergraphs, the CAHSM algorithm is able to explore the intrinsic topological information of the dataset. Therefore, data clustering using CAHSM tends to be more robust. Considering the intra-cluster compactness and the inter-cluster separability of vertices, we further design a discriminative hypergraph partitioning criterion (DHPC). Using both CAHSM and DHPC, a robust spectral clustering algorithm is developed. Theoretical analysis and experimental evaluation demonstrate the effectiveness and robustness of the proposed algorithm.
[ { "version": "v1", "created": "Sat, 4 Jan 2014 02:05:35 GMT" } ]
2016-11-15T00:00:00
[ [ "Li", "Xi", "" ], [ "Hu", "Weiming", "" ], [ "Shen", "Chunhua", "" ], [ "Dick", "Anthony", "" ], [ "Zhang", "Zhongfei", "" ] ]
TITLE: Context-Aware Hypergraph Construction for Robust Spectral Clustering ABSTRACT: Spectral clustering is a powerful tool for unsupervised data analysis. In this paper, we propose a context-aware hypergraph similarity measure (CAHSM), which leads to robust spectral clustering in the case of noisy data. We construct three types of hypergraph---the pairwise hypergraph, the k-nearest-neighbor (kNN) hypergraph, and the high-order over-clustering hypergraph. The pairwise hypergraph captures the pairwise similarity of data points; the kNN hypergraph captures the neighborhood of each point; and the clustering hypergraph encodes high-order contexts within the dataset. By combining the affinity information from these three hypergraphs, the CAHSM algorithm is able to explore the intrinsic topological information of the dataset. Therefore, data clustering using CAHSM tends to be more robust. Considering the intra-cluster compactness and the inter-cluster separability of vertices, we further design a discriminative hypergraph partitioning criterion (DHPC). Using both CAHSM and DHPC, a robust spectral clustering algorithm is developed. Theoretical analysis and experimental evaluation demonstrate the effectiveness and robustness of the proposed algorithm.
no_new_dataset
0.951997
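A deliberately simplified analogue of the three-affinity construction above: a pairwise RBF affinity, a symmetrized kNN affinity, and a co-association affinity from repeated over-clustering, fused by a plain average and fed to spectral clustering. The paper's hypergraph formulation, learned combination, and DHPC criterion are replaced here by off-the-shelf pieces.

import numpy as np
from sklearn.cluster import KMeans, SpectralClustering
from sklearn.datasets import make_moons
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.neighbors import kneighbors_graph

X, _ = make_moons(n_samples=300, noise=0.08, random_state=0)

A_pair = rbf_kernel(X, gamma=8.0)                       # (1) pairwise affinity
knn = kneighbors_graph(X, n_neighbors=10, mode="connectivity").toarray()
A_knn = np.maximum(knn, knn.T)                          # (2) symmetrized kNN affinity
A_clu = np.zeros((len(X), len(X)))                      # (3) over-clustering co-association
for seed in range(10):
    lab = KMeans(n_clusters=20, n_init=1, random_state=seed).fit_predict(X)
    A_clu += (lab[:, None] == lab[None, :])
A_clu /= 10.0

A = (A_pair + A_knn + A_clu) / 3.0                      # naive uniform fusion
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(A)
print(np.bincount(labels))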
1404.2999
Tianlin Shi
Tianlin Shi, Liang Ming, Xiaolin Hu
A Reverse Hierarchy Model for Predicting Eye Fixations
CVPR 2014, 27th IEEE Conference on Computer Vision and Pattern Recognition (CVPR). CVPR 2014
null
10.1109/CVPR.2014.361
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A number of psychological and physiological findings suggest that early visual attention works in a coarse-to-fine way, which lays a basis for the reverse hierarchy theory (RHT). This theory states that attention propagates from the top level of the visual hierarchy that processes gist and abstract information of input, to the bottom level that processes local details. Inspired by the theory, we develop a computational model for saliency detection in images. First, the original image is downsampled to different scales to constitute a pyramid. Then, saliency on each layer is obtained by image super-resolution reconstruction from the layer above, which is defined as unpredictability from this coarse-to-fine reconstruction. Finally, saliency on each layer of the pyramid is fused into stochastic fixations through a probabilistic model, where attention initiates from the top layer and propagates downward through the pyramid. Extensive experiments on two standard eye-tracking datasets show that the proposed method can achieve competitive results with state-of-the-art models.
[ { "version": "v1", "created": "Fri, 11 Apr 2014 04:39:21 GMT" } ]
2016-11-15T00:00:00
[ [ "Shi", "Tianlin", "" ], [ "Ming", "Liang", "" ], [ "Hu", "Xiaolin", "" ] ]
TITLE: A Reverse Hierarchy Model for Predicting Eye Fixations ABSTRACT: A number of psychological and physiological findings suggest that early visual attention works in a coarse-to-fine way, which lays a basis for the reverse hierarchy theory (RHT). This theory states that attention propagates from the top level of the visual hierarchy that processes gist and abstract information of input, to the bottom level that processes local details. Inspired by the theory, we develop a computational model for saliency detection in images. First, the original image is downsampled to different scales to constitute a pyramid. Then, saliency on each layer is obtained by image super-resolution reconstruction from the layer above, which is defined as unpredictability from this coarse-to-fine reconstruction. Finally, saliency on each layer of the pyramid is fused into stochastic fixations through a probabilistic model, where attention initiates from the top layer and propagates downward through the pyramid. Extensive experiments on two standard eye-tracking datasets show that the proposed method can achieve competitive results with state-of-the-art models.
no_new_dataset
0.950134
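A stripped-down version of the coarse-to-fine idea above: reconstruct each pyramid level by upsampling the next-coarser one and treat the reconstruction residual as saliency. The paper's super-resolution reconstruction and probabilistic fixation model are omitted; this sketch only shows the pyramid-residual mechanism.

import numpy as np
from scipy import ndimage

def saliency_pyramid(img, levels=4):
    # coarse-to-fine pyramid: each level is half the previous resolution
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        pyr.append(ndimage.zoom(ndimage.gaussian_filter(pyr[-1], 1.0), 0.5))
    sal = np.zeros_like(pyr[0])
    for lvl, (fine, coarse) in enumerate(zip(pyr[:-1], pyr[1:])):
        recon = ndimage.zoom(coarse, 2.0)        # reconstruct from the layer above
        resid = np.abs(fine - recon)             # unpredictability = residual
        sal += ndimage.zoom(resid, 2.0 ** lvl)   # back to full resolution
    return sal / (levels - 1)

img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0                          # small bright square as the "object"
s = saliency_pyramid(img)
print(np.unravel_index(np.argmax(s), s.shape))   # peaks around the square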
1407.7330
Arnold Wiliem
Arnold Wiliem, Peter Hobson, Brian C. Lovell
Discovering Discriminative Cell Attributes for HEp-2 Specimen Image Classification
WACV 2014: IEEE Winter Conference on Applications of Computer Vision
null
10.1109/WACV.2014.6836071
null
cs.CV cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, there has been a growing interest in developing Computer Aided Diagnostic (CAD) systems for improving the reliability and consistency of pathology test results. This paper describes a novel CAD system for the Anti-Nuclear Antibody (ANA) test via Indirect Immunofluorescence protocol on Human Epithelial Type 2 (HEp-2) cells. While prior works have primarily focused on classifying cell images extracted from ANA specimen images, this work takes a further step by focussing on the specimen image classification problem itself. Our system is able to efficiently classify specimen images as well as producing meaningful descriptions of ANA pattern classes, which help physicians to understand the differences between various ANA patterns. We achieve this goal by designing a specimen-level image descriptor that: (1) is highly discriminative; (2) has small descriptor length and (3) is semantically meaningful at the cell level. In our work, a specimen image descriptor is represented by its overall cell attribute descriptors. As such, we propose two max-margin based learning schemes to discover cell attributes whilst still maintaining the discrimination of the specimen image descriptor. Our learning schemes differ from the existing discriminative attribute learning approaches as they primarily focus on discovering image-level attributes. Comparative evaluations were undertaken to contrast the proposed approach to various state-of-the-art approaches on a novel HEp-2 cell dataset which was specifically proposed for the specimen-level classification. Finally, we showcase the ability of the proposed approach to provide textual descriptions to explain ANA patterns.
[ { "version": "v1", "created": "Mon, 28 Jul 2014 06:03:03 GMT" } ]
2016-11-15T00:00:00
[ [ "Wiliem", "Arnold", "" ], [ "Hobson", "Peter", "" ], [ "Lovell", "Brian C.", "" ] ]
TITLE: Discovering Discriminative Cell Attributes for HEp-2 Specimen Image Classification ABSTRACT: Recently, there has been a growing interest in developing Computer Aided Diagnostic (CAD) systems for improving the reliability and consistency of pathology test results. This paper describes a novel CAD system for the Anti-Nuclear Antibody (ANA) test via Indirect Immunofluorescence protocol on Human Epithelial Type 2 (HEp-2) cells. While prior works have primarily focused on classifying cell images extracted from ANA specimen images, this work takes a further step by focussing on the specimen image classification problem itself. Our system is able to efficiently classify specimen images as well as producing meaningful descriptions of ANA pattern classes, which help physicians to understand the differences between various ANA patterns. We achieve this goal by designing a specimen-level image descriptor that: (1) is highly discriminative; (2) has small descriptor length and (3) is semantically meaningful at the cell level. In our work, a specimen image descriptor is represented by its overall cell attribute descriptors. As such, we propose two max-margin based learning schemes to discover cell attributes whilst still maintaining the discrimination of the specimen image descriptor. Our learning schemes differ from the existing discriminative attribute learning approaches as they primarily focus on discovering image-level attributes. Comparative evaluations were undertaken to contrast the proposed approach to various state-of-the-art approaches on a novel HEp-2 cell dataset which was specifically proposed for the specimen-level classification. Finally, we showcase the ability of the proposed approach to provide textual descriptions to explain ANA patterns.
no_new_dataset
0.94256
1409.0788
Uwe Aickelin
Chris Roadknight, Uwe Aickelin, John Scholefield, Lindy Durrant
Ensemble Learning of Colorectal Cancer Survival Rates
IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA) 2013, pp. 82 - 86, 2013
null
10.1109/CIVEMSA.2013.6617400
null
cs.LG cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we describe a dataset relating to cellular and physical conditions of patients who are operated upon to remove colorectal tumours. This data provides a unique insight into immunological status at the point of tumour removal, tumour classification and post-operative survival. We build on existing research on clustering and machine learning facets of this data to demonstrate a role for an ensemble approach to highlighting patients with clearer prognosis parameters. Results for survival prediction using 3 different approaches are shown for a subset of the data which is most difficult to model. The performance of each model individually is compared with subsets of the data where some agreement is reached for multiple models. Significant improvements in model accuracy on an unseen test set can be achieved for patients where agreement between models is achieved.
[ { "version": "v1", "created": "Tue, 2 Sep 2014 16:52:16 GMT" } ]
2016-11-15T00:00:00
[ [ "Roadknight", "Chris", "" ], [ "Aickelin", "Uwe", "" ], [ "Scholefield", "John", "" ], [ "Durrant", "Lindy", "" ] ]
TITLE: Ensemble Learning of Colorectal Cancer Survival Rates ABSTRACT: In this paper, we describe a dataset relating to cellular and physical conditions of patients who are operated upon to remove colorectal tumours. This data provides a unique insight into immunological status at the point of tumour removal, tumour classification and post-operative survival. We build on existing research on clustering and machine learning facets of this data to demonstrate a role for an ensemble approach to highlighting patients with clearer prognosis parameters. Results for survival prediction using 3 different approaches are shown for a subset of the data which is most difficult to model. The performance of each model individually is compared with subsets of the data where some agreement is reached for multiple models. Significant improvements in model accuracy on an unseen test set can be achieved for patients where agreement between models is achieved.
new_dataset
0.971102
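A generic sketch of the agreement idea in the record above, on synthetic data: train several heterogeneous classifiers and report accuracy on the subset of cases where all of them concur, which is typically higher than the overall accuracy. Models and data here are placeholders, not the clinical dataset.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=400, n_features=12, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

models = [LogisticRegression(max_iter=1000),
          RandomForestClassifier(random_state=0),
          KNeighborsClassifier()]
preds = np.array([m.fit(Xtr, ytr).predict(Xte) for m in models])

agree = (preds == preds[0]).all(axis=0)             # all three models concur
acc_all = (preds[0] == yte).mean()
acc_agree = (preds[0][agree] == yte[agree]).mean()  # accuracy where they agree
print(f"all: {acc_all:.2f}  agreement subset ({agree.mean():.0%}): {acc_agree:.2f}")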
1409.2195
Daniel Fried
Daniel Fried, Mihai Surdeanu, Stephen Kobourov, Melanie Hingle, Dane Bell
Analyzing the Language of Food on Social Media
An extended abstract of this paper will appear in IEEE Big Data 2014
null
10.1109/BigData.2014.7004305
null
cs.CL cs.CY cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the predictive power behind the language of food on social media. We collect a corpus of over three million food-related posts from Twitter and demonstrate that many latent population characteristics can be directly predicted from this data: overweight rate, diabetes rate, political leaning, and home geographical location of authors. For all tasks, our language-based models significantly outperform the majority-class baselines. Performance is further improved with more complex natural language processing, such as topic modeling. We analyze which textual features have most predictive power for these datasets, providing insight into the connections between the language of food, geographic locale, and community characteristics. Lastly, we design and implement an online system for real-time query and visualization of the dataset. Visualization tools, such as geo-referenced heatmaps, semantics-preserving wordclouds and temporal histograms, allow us to discover more complex, global patterns mirrored in the language of food.
[ { "version": "v1", "created": "Mon, 8 Sep 2014 03:07:54 GMT" }, { "version": "v2", "created": "Thu, 11 Sep 2014 17:35:02 GMT" } ]
2016-11-15T00:00:00
[ [ "Fried", "Daniel", "" ], [ "Surdeanu", "Mihai", "" ], [ "Kobourov", "Stephen", "" ], [ "Hingle", "Melanie", "" ], [ "Bell", "Dane", "" ] ]
TITLE: Analyzing the Language of Food on Social Media ABSTRACT: We investigate the predictive power behind the language of food on social media. We collect a corpus of over three million food-related posts from Twitter and demonstrate that many latent population characteristics can be directly predicted from this data: overweight rate, diabetes rate, political leaning, and home geographical location of authors. For all tasks, our language-based models significantly outperform the majority-class baselines. Performance is further improved with more complex natural language processing, such as topic modeling. We analyze which textual features have most predictive power for these datasets, providing insight into the connections between the language of food, geographic locale, and community characteristics. Lastly, we design and implement an online system for real-time query and visualization of the dataset. Visualization tools, such as geo-referenced heatmaps, semantics-preserving wordclouds and temporal histograms, allow us to discover more complex, global patterns mirrored in the language of food.
no_new_dataset
0.943138
1502.07019
Shreyansh Daftry
Shreyansh Daftry, Christof Hoppe and Horst Bischof
Building with Drones: Accurate 3D Facade Reconstruction using MAVs
8 Pages, 2015 IEEE International Conference on Robotics and Automation (ICRA '15), Seattle, WA, USA
null
10.1109/ICRA.2015.7139681
null
cs.RO cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic reconstruction of 3D models from images using multi-view Structure-from-Motion methods has been one of the most fruitful outcomes of computer vision. These advances, combined with the growing popularity of Micro Aerial Vehicles as an autonomous imaging platform, have made 3D vision tools ubiquitous for a large number of Architecture, Engineering and Construction applications, among audiences mostly unskilled in computer vision. However, to obtain high-resolution and accurate reconstructions of a large-scale object using SfM, there are many critical constraints on the quality of image data, which often become sources of inaccuracy as the current 3D reconstruction pipelines do not allow users to determine the fidelity of input data during image acquisition. In this paper, we present and advocate a closed-loop interactive approach that performs incremental reconstruction in real-time and gives users online feedback about quality parameters like Ground Sampling Distance (GSD), image redundancy, etc., on a surface mesh. We also propose a novel multi-scale camera network design to prevent scene drift caused by incremental map building, and release the first multi-scale image sequence dataset as a benchmark. Further, we evaluate our system on real outdoor scenes, and show that our interactive pipeline combined with a multi-scale camera network approach provides compelling accuracy in multi-view reconstruction tasks when compared against the state-of-the-art methods.
[ { "version": "v1", "created": "Wed, 25 Feb 2015 00:52:11 GMT" } ]
2016-11-15T00:00:00
[ [ "Daftry", "Shreyansh", "" ], [ "Hoppe", "Christof", "" ], [ "Bischof", "Horst", "" ] ]
TITLE: Building with Drones: Accurate 3D Facade Reconstruction using MAVs ABSTRACT: Automatic reconstruction of 3D models from images using multi-view Structure-from-Motion methods has been one of the most fruitful outcomes of computer vision. These advances, combined with the growing popularity of Micro Aerial Vehicles as an autonomous imaging platform, have made 3D vision tools ubiquitous for a large number of Architecture, Engineering and Construction applications, among audiences mostly unskilled in computer vision. However, to obtain high-resolution and accurate reconstructions of a large-scale object using SfM, there are many critical constraints on the quality of image data, which often become sources of inaccuracy as the current 3D reconstruction pipelines do not allow users to determine the fidelity of input data during image acquisition. In this paper, we present and advocate a closed-loop interactive approach that performs incremental reconstruction in real-time and gives users online feedback about quality parameters like Ground Sampling Distance (GSD), image redundancy, etc., on a surface mesh. We also propose a novel multi-scale camera network design to prevent scene drift caused by incremental map building, and release the first multi-scale image sequence dataset as a benchmark. Further, we evaluate our system on real outdoor scenes, and show that our interactive pipeline combined with a multi-scale camera network approach provides compelling accuracy in multi-view reconstruction tasks when compared against the state-of-the-art methods.
no_new_dataset
0.937268
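Ground Sampling Distance, one of the feedback quantities named in the record above, follows from the pinhole model: one pixel of size p at focal length f covers p * d / f on a surface at distance d. A tiny helper with invented camera numbers:

def ground_sampling_distance(distance_m, focal_mm, pixel_pitch_um):
    # GSD in cm/pixel for a camera at distance_m from the facade
    return (pixel_pitch_um * 1e-6) * distance_m / (focal_mm * 1e-3) * 100.0

# e.g. a 4.7 um pixel, 15 mm lens, 10 m from the facade (illustrative values)
print(f"{ground_sampling_distance(10.0, 15.0, 4.7):.2f} cm/px")  # ~0.31 cm/px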
1503.02391
Xiaodan Liang
Xiaodan Liang, Si Liu, Xiaohui Shen, Jianchao Yang, Luoqi Liu, Jian Dong, Liang Lin, Shuicheng Yan
Deep Human Parsing with Active Template Regression
This manuscript is the accepted version for IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) 2015
null
10.1109/TPAMI.2015.2408360
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, the human parsing task, namely decomposing a human image into semantic fashion/body regions, is formulated as an Active Template Regression (ATR) problem, where the normalized mask of each fashion/body item is expressed as the linear combination of the learned mask templates, and then morphed to a more precise mask with the active shape parameters, including position, scale and visibility of each semantic region. The mask template coefficients and the active shape parameters together can generate the human parsing results, and are thus called the structure outputs for human parsing. The deep Convolutional Neural Network (CNN) is utilized to build the end-to-end relation between the input human image and the structure outputs for human parsing. More specifically, the structure outputs are predicted by two separate networks. The first CNN network is with max-pooling, and designed to predict the template coefficients for each label mask, while the second CNN network is without max-pooling to preserve sensitivity to label mask position and accurately predict the active shape parameters. For a new image, the structure outputs of the two networks are fused to generate the probability of each label for each pixel, and super-pixel smoothing is finally used to refine the human parsing result. Comprehensive evaluations on a large dataset well demonstrate the significant superiority of the ATR framework over other state-of-the-art methods for human parsing. In particular, the F1-score reaches $64.38\%$ by our ATR framework, significantly higher than $44.76\%$ based on the state-of-the-art algorithm.
[ { "version": "v1", "created": "Mon, 9 Mar 2015 08:14:12 GMT" } ]
2016-11-15T00:00:00
[ [ "Liang", "Xiaodan", "" ], [ "Liu", "Si", "" ], [ "Shen", "Xiaohui", "" ], [ "Yang", "Jianchao", "" ], [ "Liu", "Luoqi", "" ], [ "Dong", "Jian", "" ], [ "Lin", "Liang", "" ], [ "Yan", "Shuicheng", "" ] ]
TITLE: Deep Human Parsing with Active Template Regression ABSTRACT: In this work, the human parsing task, namely decomposing a human image into semantic fashion/body regions, is formulated as an Active Template Regression (ATR) problem, where the normalized mask of each fashion/body item is expressed as the linear combination of the learned mask templates, and then morphed to a more precise mask with the active shape parameters, including position, scale and visibility of each semantic region. The mask template coefficients and the active shape parameters together can generate the human parsing results, and are thus called the structure outputs for human parsing. The deep Convolutional Neural Network (CNN) is utilized to build the end-to-end relation between the input human image and the structure outputs for human parsing. More specifically, the structure outputs are predicted by two separate networks. The first CNN network is with max-pooling, and designed to predict the template coefficients for each label mask, while the second CNN network is without max-pooling to preserve sensitivity to label mask position and accurately predict the active shape parameters. For a new image, the structure outputs of the two networks are fused to generate the probability of each label for each pixel, and super-pixel smoothing is finally used to refine the human parsing result. Comprehensive evaluations on a large dataset well demonstrate the significant superiority of the ATR framework over other state-of-the-art methods for human parsing. In particular, the F1-score reaches $64.38\%$ by our ATR framework, significantly higher than $44.76\%$ based on the state-of-the-art algorithm.
no_new_dataset
0.954942
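A toy rendering of the structure outputs described above: a normalized mask built as a linear combination of templates, then morphed by position/scale/visibility shape parameters. The templates, coefficients, and the crude zoom-and-crop scaling are all illustrative assumptions, not the learned quantities of the paper.

import numpy as np
from scipy import ndimage

templates = np.zeros((3, 32, 32))
templates[0, 8:24, 8:24] = 1.0                     # three stand-in mask templates
templates[1, 4:28, 12:20] = 1.0                    # (hand-made shapes here;
templates[2, 12:20, 4:28] = 1.0                    #  learned from data in ATR)

coeffs = np.array([0.6, 0.3, 0.1])                 # predicted template coefficients
mask = np.tensordot(coeffs, templates, axes=1)     # linear combination of templates

dy, dx, scale, visible = 3.0, -2.0, 1.2, 1.0       # active shape parameters
mask = ndimage.zoom(mask, scale)[:32, :32]         # crude scale-then-crop morph
mask = ndimage.shift(mask, (dy, dx)) * visible     # translate; gate by visibility
print(int((mask > 0.5).sum()), "foreground pixels in the morphed mask")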
1503.08479
Lex Fridman
Lex Fridman, Steven Weber, Rachel Greenstadt, Moshe Kam
Active Authentication on Mobile Devices via Stylometry, Application Usage, Web Browsing, and GPS Location
Accepted for Publication in the IEEE Systems Journal
null
10.1109/JSYST.2015.2472579
null
cs.CR stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Active authentication is the problem of continuously verifying the identity of a person based on behavioral aspects of their interaction with a computing device. In this study, we collect and analyze behavioral biometrics data from 200 subjects, each using their personal Android mobile device for a period of at least 30 days. This dataset is novel in the context of active authentication due to its size, duration, number of modalities, and absence of restrictions on tracked activity. The geographical colocation of the subjects in the study is representative of a large closed-world environment such as an organization where the unauthorized user of a device is likely to be an insider threat: coming from within the organization. We consider four biometric modalities: (1) text entered via soft keyboard, (2) applications used, (3) websites visited, and (4) physical location of the device as determined from GPS (when outdoors) or WiFi (when indoors). We implement and test a classifier for each modality and organize the classifiers as a parallel binary decision fusion architecture. We are able to characterize the performance of the system with respect to intruder detection time and to quantify the contribution of each modality to the overall performance.
[ { "version": "v1", "created": "Sun, 29 Mar 2015 18:59:23 GMT" } ]
2016-11-15T00:00:00
[ [ "Fridman", "Lex", "" ], [ "Weber", "Steven", "" ], [ "Greenstadt", "Rachel", "" ], [ "Kam", "Moshe", "" ] ]
TITLE: Active Authentication on Mobile Devices via Stylometry, Application Usage, Web Browsing, and GPS Location ABSTRACT: Active authentication is the problem of continuously verifying the identity of a person based on behavioral aspects of their interaction with a computing device. In this study, we collect and analyze behavioral biometrics data from 200 subjects, each using their personal Android mobile device for a period of at least 30 days. This dataset is novel in the context of active authentication due to its size, duration, number of modalities, and absence of restrictions on tracked activity. The geographical colocation of the subjects in the study is representative of a large closed-world environment such as an organization where the unauthorized user of a device is likely to be an insider threat: coming from within the organization. We consider four biometric modalities: (1) text entered via soft keyboard, (2) applications used, (3) websites visited, and (4) physical location of the device as determined from GPS (when outdoors) or WiFi (when indoors). We implement and test a classifier for each modality and organize the classifiers as a parallel binary decision fusion architecture. We are able to characterize the performance of the system with respect to intruder detection time and to quantify the contribution of each modality to the overall performance.
no_new_dataset
0.765681
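A standard parallel binary decision-fusion rule (Chair-Varshney style) that could sit at the output of the four modality classifiers above: each binary vote is weighted by the modality's estimated false-accept and miss rates. The rates below are made-up placeholders, and the paper's exact fusion rule may differ.

import math

def fuse(decisions, rates, prior_intruder=0.5):
    # decisions: per-modality binary votes, 1 = "intruder", 0 = "owner".
    # rates: per-modality (p_fa, p_miss) estimated on held-out data.
    # Returns the fused log-likelihood ratio; positive favours "intruder".
    llr = math.log(prior_intruder / (1.0 - prior_intruder))
    for d, (p_fa, p_miss) in zip(decisions, rates):
        if d == 1:
            llr += math.log((1.0 - p_miss) / p_fa)
        else:
            llr += math.log(p_miss / (1.0 - p_fa))
    return llr

# text, app, web, GPS classifiers (error rates are placeholders)
rates = [(0.10, 0.20), (0.15, 0.25), (0.20, 0.30), (0.05, 0.10)]
print(fuse([1, 0, 1, 1], rates))  # > 0: flag as intruder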
1505.05212
Hamid Tizhoosh
Hamid R. Tizhoosh
Barcode Annotations for Medical Image Retrieval: A Preliminary Investigation
To be published in proceedings of The IEEE International Conference on Image Processing (ICIP 2015), September 27-30, 2015, Quebec City, Canada
null
10.1109/ICIP.2015.7350913
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes to generate and to use barcodes to annotate medical images and/or their regions of interest such as organs, tumors and tissue types. A multitude of efficient feature-based image retrieval methods already exist that can assign a query image to a certain image class. Visual annotations may help to increase the retrieval accuracy if combined with existing feature-based classification paradigms. Whereas with annotations we usually mean textual descriptions, in this paper barcode annotations are proposed. In particular, Radon barcodes (RBC) are introduced. In addition, local binary patterns (LBP) and local Radon binary patterns (LRBP) are implemented as barcodes. The IRMA x-ray dataset with 12,677 training images and 1,733 test images is used to verify how barcodes could facilitate image retrieval.
[ { "version": "v1", "created": "Tue, 19 May 2015 23:48:24 GMT" } ]
2016-11-15T00:00:00
[ [ "Tizhoosh", "Hamid R.", "" ] ]
TITLE: Barcode Annotations for Medical Image Retrieval: A Preliminary Investigation ABSTRACT: This paper proposes to generate and to use barcodes to annotate medical images and/or their regions of interest such as organs, tumors and tissue types. A multitude of efficient feature-based image retrieval methods already exist that can assign a query image to a certain image class. Visual annotations may help to increase the retrieval accuracy if combined with existing feature-based classification paradigms. Whereas with annotations we usually mean textual descriptions, in this paper barcode annotations are proposed. In particular, Radon barcodes (RBC) are introduced. In addition, local binary patterns (LBP) and local Radon binary patterns (LRBP) are implemented as barcodes. The IRMA x-ray dataset with 12,677 training images and 1,733 test images is used to verify how barcodes could facilitate image retrieval.
new_dataset
0.961061
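A minimal Radon-barcode (RBC) sketch for the record above: project a size-normalized image at a few angles and binarize each projection at its median; the concatenated bits form the barcode. Angle count, bin count, and thresholding details are assumptions for illustration.

import numpy as np
from skimage.transform import radon, resize

def radon_barcode(image, n_angles=8, bins=32):
    # Normalize size, project at n_angles, resample each projection to
    # a fixed length, and binarize it at that projection's median.
    img = resize(image.astype(float), (64, 64), anti_aliasing=True)
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sino = radon(img, theta=theta)            # one column per angle
    code = []
    for k in range(n_angles):
        proj = np.interp(np.linspace(0, len(sino) - 1, bins),
                         np.arange(len(sino)), sino[:, k])
        code.extend((proj > np.median(proj)).astype(int))
    return np.array(code)

img = np.zeros((80, 80))
img[20:60, 30:50] = 1.0
print("".join(map(str, radon_barcode(img)[:32])))   # first projection's bits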
1505.05233
Lei Zhang
Lei Zhang and David Zhang
Visual Understanding via Multi-Feature Shared Learning with Global Consistency
13 pages,6 figures, this paper is accepted for publication in IEEE Transactions on Multimedia
null
10.1109/TMM.2015.2510509
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image/video data is usually represented with multiple visual features. Fusion of multi-source information for establishing the attributes has been widely recognized. Multi-feature visual recognition has recently received much attention in multimedia applications. This paper studies visual understanding via a newly proposed l_2-norm based multi-feature shared learning framework, which can simultaneously learn a global label matrix and multiple sub-classifiers with the labeled multi-feature data. Additionally, a group graph manifold regularizer composed of the Laplacian and Hessian graph is proposed for better preserving the manifold structure of each feature, such that the label prediction power is much improved through the semi-supervised learning with global label consistency. For convenience, we call the proposed approach Global-Label-Consistent Classifier (GLCC). The merits of the proposed method include: 1) the manifold structure information of each feature is exploited in learning, resulting in a more faithful classification owing to the global label consistency; 2) a group graph manifold regularizer based on the Laplacian and Hessian regularization is constructed; 3) an efficient alternating optimization method is introduced as a fast solver owing to the convex sub-problems. Experiments on several benchmark visual datasets for multimedia understanding, such as the 17-category Oxford Flower dataset, the challenging 101-category Caltech dataset, the YouTube & Consumer Videos dataset and the large-scale NUS-WIDE dataset, demonstrate that the proposed approach compares favorably with the state-of-the-art algorithms. An extensive experiment on the deep convolutional activation features also shows the effectiveness of the proposed approach. The code is available on http://www.escience.cn/people/lei/index.html
[ { "version": "v1", "created": "Wed, 20 May 2015 03:01:08 GMT" }, { "version": "v2", "created": "Wed, 9 Sep 2015 10:07:11 GMT" } ]
2016-11-15T00:00:00
[ [ "Zhang", "Lei", "" ], [ "Zhang", "David", "" ] ]
TITLE: Visual Understanding via Multi-Feature Shared Learning with Global Consistency ABSTRACT: Image/video data is usually represented with multiple visual features. Fusion of multi-source information for establishing the attributes has been widely recognized. Multi-feature visual recognition has recently received much attention in multimedia applications. This paper studies visual understanding via a newly proposed l_2-norm based multi-feature shared learning framework, which can simultaneously learn a global label matrix and multiple sub-classifiers with the labeled multi-feature data. Additionally, a group graph manifold regularizer composed of the Laplacian and Hessian graph is proposed for better preserving the manifold structure of each feature, such that the label prediction power is much improved through the semi-supervised learning with global label consistency. For convenience, we call the proposed approach Global-Label-Consistent Classifier (GLCC). The merits of the proposed method include: 1) the manifold structure information of each feature is exploited in learning, resulting in a more faithful classification owing to the global label consistency; 2) a group graph manifold regularizer based on the Laplacian and Hessian regularization is constructed; 3) an efficient alternating optimization method is introduced as a fast solver owing to the convex sub-problems. Experiments on several benchmark visual datasets for multimedia understanding, such as the 17-category Oxford Flower dataset, the challenging 101-category Caltech dataset, the YouTube & Consumer Videos dataset and the large-scale NUS-WIDE dataset, demonstrate that the proposed approach compares favorably with the state-of-the-art algorithms. An extensive experiment on the deep convolutional activation features also shows the effectiveness of the proposed approach. The code is available on http://www.escience.cn/people/lei/index.html
no_new_dataset
0.950134
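The semi-supervised backbone of such frameworks can be illustrated with plain Laplacian-regularized label propagation on a single feature: solve (I + alpha*L) F = Y and read labels off F. The Hessian term, multi-feature sharing, and l_2-norm sub-classifiers of GLCC are omitted; data and parameters are synthetic.

import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (30, 2)),       # two well-separated clusters
               rng.normal(+2, 0.5, (30, 2))])
Y = np.zeros((60, 2))
Y[0, 0] = 1.0                                      # one labelled point per class
Y[30, 1] = 1.0

W = rbf_kernel(X, gamma=1.0)                       # graph affinity of one feature
L = np.diag(W.sum(axis=1)) - W                     # unnormalized graph Laplacian
F = np.linalg.solve(np.eye(60) + 10.0 * L, Y)      # min ||F-Y||^2 + 10 tr(F^T L F)
pred = F.argmax(axis=1)
print((pred[:30] == 0).mean(), (pred[30:] == 1).mean())   # both should be 1.0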
1508.00299
Pin-Yu Chen
Pin-Yu Chen, Shin-Ming Cheng, Pai-Shun Ting, Chia-Wei Lien, Fu-Jen Chu
When Crowdsourcing Meets Mobile Sensing: A Social Network Perspective
To appear in Oct. IEEE Communications Magazine, feature topic on "Social Networks Meet Next Generation Mobile Multimedia Internet"
null
10.1109/MCOM.2015.7295478
null
cs.SI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mobile sensing is an emerging technology that utilizes agent-participatory data for decision making or state estimation, including multimedia applications. This article investigates the structure of mobile sensing schemes and introduces crowdsourcing methods for mobile sensing. Inspired by social networks, one can establish trust among participatory agents to leverage the wisdom of crowds for mobile sensing. A prototype of a social-network-inspired mobile multimedia and sensing application is presented for illustrative purposes. Numerical experiments on real-world datasets show improved performance of mobile sensing via crowdsourcing. Challenges for mobile sensing with respect to Internet layers are discussed.
[ { "version": "v1", "created": "Mon, 3 Aug 2015 02:01:06 GMT" } ]
2016-11-15T00:00:00
[ [ "Chen", "Pin-Yu", "" ], [ "Cheng", "Shin-Ming", "" ], [ "Ting", "Pai-Shun", "" ], [ "Lien", "Chia-Wei", "" ], [ "Chu", "Fu-Jen", "" ] ]
TITLE: When Crowdsourcing Meets Mobile Sensing: A Social Network Perspective ABSTRACT: Mobile sensing is an emerging technology that utilizes agent-participatory data for decision making or state estimation, including multimedia applications. This article investigates the structure of mobile sensing schemes and introduces crowdsourcing methods for mobile sensing. Inspired by social networks, one can establish trust among participatory agents to leverage the wisdom of crowds for mobile sensing. A prototype of a social-network-inspired mobile multimedia and sensing application is presented for illustrative purposes. Numerical experiments on real-world datasets show improved performance of mobile sensing via crowdsourcing. Challenges for mobile sensing with respect to Internet layers are discussed.
no_new_dataset
0.95388
1508.06483
Naoki Hamada
Naoki Hamada, Katsumi Homma, Hiroyuki Higuchi and Hideyuki Kikuchi
Population Synthesis via k-Nearest Neighbor Crossover Kernel
10 pages, 4 figures, IEEE International Conference on Data Mining (ICDM) 2015
null
10.1109/ICDM.2015.65
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The recent development of multi-agent simulations brings about a need for population synthesis. It is the task of reconstructing the entire population from a sampling survey of limited size (1% or so), supplying the initial conditions from which simulations begin. This paper presents a new kernel density estimator for this task. Our method is an analogue of the classical Breiman-Meisel-Purcell estimator, but employs novel techniques that harness the huge degree of freedom which is required to model high-dimensional nonlinearly correlated datasets: the crossover kernel, the k-nearest neighbor restriction of the kernel construction set and the bagging of kernels. The performance as a statistical estimator is examined through real and synthetic datasets. We provide an "optimization-free" parameter selection rule for our method, a theory of how our method works and a computational cost analysis. To demonstrate the usefulness as a population synthesizer, our method is applied to a household synthesis task for an urban micro-simulator.
[ { "version": "v1", "created": "Wed, 26 Aug 2015 13:22:37 GMT" } ]
2016-11-15T00:00:00
[ [ "Hamada", "Naoki", "" ], [ "Homma", "Katsumi", "" ], [ "Higuchi", "Hiroyuki", "" ], [ "Kikuchi", "Hideyuki", "" ] ]
TITLE: Population Synthesis via k-Nearest Neighbor Crossover Kernel ABSTRACT: The recent development of multi-agent simulations brings about a need for population synthesis. It is the task of reconstructing the entire population from a sampling survey of limited size (1% or so), supplying the initial conditions from which simulations begin. This paper presents a new kernel density estimator for this task. Our method is an analogue of the classical Breiman-Meisel-Purcell estimator, but employs novel techniques that harness the huge degree of freedom which is required to model high-dimensional nonlinearly correlated datasets: the crossover kernel, the k-nearest neighbor restriction of the kernel construction set and the bagging of kernels. The performance as a statistical estimator is examined through real and synthetic datasets. We provide an "optimization-free" parameter selection rule for our method, a theory of how our method works and a computational cost analysis. To demonstrate the usefulness as a population synthesizer, our method is applied to a household synthesis task for an urban micro-simulator.
no_new_dataset
0.951369
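A bare-bones version of the sampling step in the record above: pick a seed record, restrict to its k nearest neighbours, and synthesize a record by uniform crossover of attributes among those neighbours. The bagging of kernels and the parameter-selection rule are omitted, and the data are synthetic.

import numpy as np

def synthesize(sample, n_out, k=10, rng=None):
    # Draw a seed record, restrict the kernel to its k nearest neighbours
    # (the seed included), and build a synthetic record by uniform
    # crossover: each attribute is copied from a random parent among them.
    rng = rng or np.random.default_rng(0)
    n, d = sample.shape
    out = np.empty((n_out, d))
    for t in range(n_out):
        i = rng.integers(n)
        nbrs = np.argsort(np.linalg.norm(sample - sample[i], axis=1))[:k]
        parents = rng.choice(nbrs, size=d)
        out[t] = sample[parents, np.arange(d)]
    return out

rng = np.random.default_rng(1)
survey = rng.multivariate_normal([0, 0], [[1.0, 0.9], [0.9, 1.0]], size=200)
pop = synthesize(survey, n_out=5000, rng=rng)
print(round(np.corrcoef(pop.T)[0, 1], 2))   # the 0.9 correlation is roughly kept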
1510.02071
Vinay Bettadapura
Vinay Bettadapura, Grant Schindler, Thomaz Plotz, Irfan Essa
Augmenting Bag-of-Words: Data-Driven Discovery of Temporal and Structural Information for Activity Recognition
8 pages
Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2013) -- Pages 2619 - 2626
10.1109/CVPR.2013.338
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present data-driven techniques to augment Bag of Words (BoW) models, which allow for more robust modeling and recognition of complex long-term activities, especially when the structure and topology of the activities are not known a priori. Our approach specifically addresses the limitations of standard BoW approaches, which fail to represent the underlying temporal and causal information that is inherent in activity streams. In addition, we also propose the use of randomly sampled regular expressions to discover and encode patterns in activities. We demonstrate the effectiveness of our approach in experimental evaluations where we successfully recognize activities and detect anomalies in four complex datasets.
[ { "version": "v1", "created": "Wed, 7 Oct 2015 19:37:11 GMT" } ]
2016-11-15T00:00:00
[ [ "Bettadapura", "Vinay", "" ], [ "Schindler", "Grant", "" ], [ "Plotz", "Thomaz", "" ], [ "Essa", "Irfan", "" ] ]
TITLE: Augmenting Bag-of-Words: Data-Driven Discovery of Temporal and Structural Information for Activity Recognition ABSTRACT: We present data-driven techniques to augment Bag of Words (BoW) models, which allow for more robust modeling and recognition of complex long-term activities, especially when the structure and topology of the activities are not known a priori. Our approach specifically addresses the limitations of standard BoW approaches, which fail to represent the underlying temporal and causal information that is inherent in activity streams. In addition, we also propose the use of randomly sampled regular expressions to discover and encode patterns in activities. We demonstrate the effectiveness of our approach in experimental evaluations where we successfully recognize activities and detect anomalies in four complex datasets.
no_new_dataset
0.948394
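The paper's augmentation is data-driven (discovered temporal structure plus randomly sampled regular expressions); as a much simpler hedged stand-in that shows why plain BoW discards temporal order, the sketch below appends ordered codeword-transition counts to the usual histogram (the bigram rule and the function name are this sketch's assumptions, not the authors' encoding):

import numpy as np

def bow_with_transitions(codeword_seq, vocab_size):
    # codeword_seq: 1-D integer sequence of quantized local features.
    seq = np.asarray(codeword_seq, dtype=int)
    unigrams = np.bincount(seq, minlength=vocab_size)  # plain BoW counts
    bigrams = np.zeros((vocab_size, vocab_size), dtype=int)
    for a, b in zip(seq[:-1], seq[1:]):
        bigrams[a, b] += 1  # ordered transitions encode coarse temporal order
    return np.concatenate([unigrams, bigrams.ravel()])

Two activity streams with identical codeword counts but different orderings now map to different feature vectors, which is exactly the property the plain histogram lacks.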
1511.06855
Jianyu Wang
Jianyu Wang, Zhishuai Zhang, Cihang Xie, Vittal Premachandran, Alan Yuille
Unsupervised learning of object semantic parts from internal states of CNNs by population encoding
null
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We address the key question of how object part representations can be found from the internal states of CNNs that are trained for high-level tasks, such as object classification. This work provides a new unsupervised method to learn semantic parts and gives new understanding of the internal representations of CNNs. Our technique is based on the hypothesis that semantic parts are represented by populations of neurons rather than by single filters. We propose a clustering technique to extract part representations, which we call Visual Concepts. We show that visual concepts are semantically coherent, in that they represent semantic parts, and visually coherent, in that corresponding image patches appear very similar. Moreover, visual concepts provide full spatial coverage of the parts of an object, rather than the few sparse parts typically found in keypoint annotations. Furthermore, we treat each visual concept as a part detector and evaluate it for keypoint detection using the PASCAL3D+ dataset and for part detection using our newly annotated ImageNetPart dataset. The experiments demonstrate that visual concepts can be used to detect parts. We also show that some visual concepts respond to several semantic parts, provided these parts are visually similar. Thus visual concepts have the essential properties of semantic meaning and detection capability. Note that our ImageNetPart dataset gives rich part annotations that cover the whole object, making it useful for other part-related applications.
[ { "version": "v1", "created": "Sat, 21 Nov 2015 09:02:21 GMT" }, { "version": "v2", "created": "Thu, 7 Jan 2016 22:10:52 GMT" }, { "version": "v3", "created": "Sat, 12 Nov 2016 13:37:07 GMT" } ]
2016-11-15T00:00:00
[ [ "Wang", "Jianyu", "" ], [ "Zhang", "Zhishuai", "" ], [ "Xie", "Cihang", "" ], [ "Premachandran", "Vittal", "" ], [ "Yuille", "Alan", "" ] ]
TITLE: Unsupervised learning of object semantic parts from internal states of CNNs by population encoding ABSTRACT: We address the key question of how object part representations can be found from the internal states of CNNs that are trained for high-level tasks, such as object classification. This work provides a new unsupervised method to learn semantic parts and gives new understanding of the internal representations of CNNs. Our technique is based on the hypothesis that semantic parts are represented by populations of neurons rather than by single filters. We propose a clustering technique to extract part representations, which we call Visual Concepts. We show that visual concepts are semantically coherent, in that they represent semantic parts, and visually coherent, in that corresponding image patches appear very similar. Moreover, visual concepts provide full spatial coverage of the parts of an object, rather than the few sparse parts typically found in keypoint annotations. Furthermore, we treat each visual concept as a part detector and evaluate it for keypoint detection using the PASCAL3D+ dataset and for part detection using our newly annotated ImageNetPart dataset. The experiments demonstrate that visual concepts can be used to detect parts. We also show that some visual concepts respond to several semantic parts, provided these parts are visually similar. Thus visual concepts have the essential properties of semantic meaning and detection capability. Note that our ImageNetPart dataset gives rich part annotations that cover the whole object, making it useful for other part-related applications.
new_dataset
0.968797
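The core clustering step behind "visual concepts" can be pictured with a minimal sketch (the layer choice, the L2 normalization, and the use of scikit-learn's KMeans are assumptions of this illustration; the paper's own clustering technique may differ):

import numpy as np
from sklearn.cluster import KMeans

def extract_visual_concepts(feature_maps, n_concepts=64, seed=0):
    # feature_maps: (N, H, W, C) intermediate-layer CNN activations.
    # Each spatial position contributes a C-dimensional population response.
    N, H, W, C = feature_maps.shape
    vecs = feature_maps.reshape(-1, C)
    # Normalize so clusters reflect response patterns, not magnitudes.
    vecs = vecs / (np.linalg.norm(vecs, axis=1, keepdims=True) + 1e-8)
    km = KMeans(n_clusters=n_concepts, n_init=10, random_state=seed).fit(vecs)
    return km.cluster_centers_

Each returned center is a population code over the C filters; image patches whose activation vectors fall near the same center are the "visually coherent" patches the abstract describes.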